| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
1442106 | pes2o/s2orc | v3-fos-license | Health Questions Posed by Amerindians in Guyana’s Deep Interior
Background: The forest-dwelling Amerindian peoples of Guyana are among that nation’s most impoverished, vulnerable and least served. Health promotion messaging has been informed in large part by nation-level health indicators that may not be well targeted to this group. Our study sought to identify local health education needs, and to identify factors preventing proper uptake of health messaging. Methods: As part of medical missions to the interior, we asked patients waiting for care to anonymously submit their health questions in writing. Conventional content analysis was employed to identify prevalent themes in their responses. Findings: Sexual health (63.6%) and nutrition (17.4%) were the themes asked about most frequently. Within the former, the science of sexual maturation and reproduction (31.4%) and HIV/AIDS (28.8%) were the most common sub-themes, with the pathophysiology and etiology of HIV/AIDS being the most common sub-category within the latter. Interpretation: Within Guyana’s Amerindian community, there exists a prevalent curiosity about the basic science of both sexual reproduction and the transmission of sexual disease.
Introduction
In many countries, aboriginal populations are vulnerable to environmental and societal changes, leading to health crises within their communities. An excellent example of this is seen in the marginalized Amerindian communities in the Mazaruni region of Guyana, South America, and its immediate neighbours (Mantini et al., 2008;Peplow et al., 2012). Guyana is a poor, English-speaking country on the South American continent, but with economic and cultural affiliations with the nations of the Caribbean. It is underpopulated, with fewer than 800,000 residents, though almost all are clustered along the Caribbean coast.
The interior of the country is largely undeveloped rainforest, sparsely populated by Amerindian tribes, and of interest to international mining and timber concerns. These communities, made up in large part of Awarak tribes, consist of villages of 200-300 people situated on the banks of rivers within the South American rainforest. Their economies have been largely re-tasked to service the mining industry, which, anecdotal and some empirical evidence suggests, has resulted in an increase in migrant labour, environmental toxins, diabetes, obesity and sexually transmitted diseases (Frery et al., 2001;Seguy et al., 2008). In particular, HIV/AIDS is thought to be a serious concern in these populations (Palmer et al., 2002), with a national prevalence rate of 2.4% (Government of Guyana, 2012). This has come in partnership with a decrease in local food production and an ongoing crisis in health literacy and health care resourcing.
Direct health care interventions in such remote communities are unavoidably costly, requiring the transportation of people and equipment hundreds of miles through dense rainforest. Funding priorities, therefore, have focused on the needs of the more accessible, coastal-dwelling populations. Priorities have traditionally included HIV/AIDS, maternal and child health, diabetes and nutrition (Pan American Health Organization, 2001). The interior-dwelling Amerindian population has not received a great deal of funding attention nor has it been a consistent target for epidemiological investigation of health need, despite being the most impoverished sector of Guyanese society (Pan American Health Organization, 2001).
Given the lack of targeted population health information (Ministry of Health of Guyana, 2008), the needs assessments defining extant limited investments in Amerindian health have likely been informed by country-level indicators, which may or may not be reflective of the actual local needs of the forest-dwelling population. The Toronto-based NGO Ve'ahavta has sent regular medical missions into Amerindian communities in Guyana's Mazaruni region for several years. During two such missions, residents in the villages of Kamarang and Waramadong were given the opportunity to anonymously ask questions about personal and community health issues. These questions constitute a qualitative community-based health needs assessment, separate from the standard population-level indicators that typically inform health policy for these populations.
This paper describes a summary, by theme, of the health questions posed by the villagers, organized to suggest an alternative, ground-up health needs assessment for this and similar aboriginal populations.
Method
During two visits by Ve'ahavta's medical volunteers, once in March of 2008 and again in March of 2010, Amerindians who had gathered to receive direct clinical care from Canadian doctors were given a standard group health education presentation, focusing on sexual health and personal hygiene. In the course of the presentation, a bag was passed around the group along with pieces of paper and pens, and the attendees were instructed to anonymously write down any questions about any aspects of their personal or community health that they wished the health educator to address to the group, and to place their questions in the bag.
In 2008, both Kamarang and Waramadong were visited. In 2010, only Waramadong was visited, due to scheduling limitations. The presentation and instructions were given in the local vernacular. Selected responses were taken from the bag and informed the content of the remaining presentation time. In Waramadong, the medical teams' visits to the local residential public school contributed significantly to the complement of responses. All of the written questions were later qualitatively analyzed using conventional content analysis, as described by Hsieh and Shannon (2005), to determine prevalent themes and sub-themes.
Ethics approval for this study was granted by the University of Ottawa research ethics office.
Results
In this paper, we refer to the questions offered by subjects as "responses". Typically, 100-300 patients would gather at any of our clinics. In 2008, 33 responses were gathered from the Kamarang visit, while 66 were received in Waramadong. In 2010, 98 responses were collected in Waramadong. Several responses featured more than one question, or were coded as being relevant to more than one content theme. Respondents were a combination of village residents, residential teenaged schoolchildren and residents of surrounding villages who had journeyed to both locations specifically to access medical care.
Six major themes arose from our content analysis: Diabetes, Alcohol/drugs/smoking, Heart disease/hypertension/atherosclerosis, Malaria, Nutrition, and Sexual Health. A Miscellaneous category was created for 16 responses, which included, in addition to other issues, questions relating to hair loss, stress, and cancer. The distribution of responses across the major thematic areas is summarized in Table 1. Given that Sexual Health garnered much interest, its responses were given a deeper content analysis. The distribution of Sexual Health responses across sub-themes is presented in Table 2. Responses to the HIV/AIDS sub-theme were further broken down into three categories: the pathophysiology or etiology of the disease, prevention or treatment of HIV/AIDS, and signs/symptoms of infection. The distribution of those responses is presented in Table 3.
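Because individual responses could carry more than one question or theme code, the frequencies in Tables 1-3 are tallies of theme assignments rather than of respondents. A minimal sketch of that tallying step is shown below; the coded responses and theme labels here are purely illustrative and are not the study data.

```python
from collections import Counter

# Hypothetical coded responses: each response may carry several theme codes.
coded_responses = [
    {"Sexual Health"},
    {"Sexual Health", "Nutrition"},
    {"Diabetes"},
    {"Miscellaneous"},
]

theme_counts = Counter(code for codes in coded_responses for code in codes)
total = sum(theme_counts.values())
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} ({100 * n / total:.1f}%)")
```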
Discussion
At face value, the predominance of Sexual Health, in particular HIV/AIDS, in our content analysis suggests that villagers' concerns are recapitulating the national funding trends, which also focus on reproductive and sexual health issues, lending validity to the official health funding policies. However, one must consider three sources of bias implicit in our approach. The first is the oversampling of teenaged residential students in Waramadong, for whom sexual issues are a natural point of interest, given their age.
The second is the unavoidable ubiquitous messaging around HIV/AIDS that percolates all about Guyana, due to persistent government literature, NGO missions, and health promotion efforts in the form of traveling "street" theatre, posters in the health stations and advertisements on radio. This continuously reinforces the perceived importance of HIV/AIDS to population and individual health.
The third source of bias is the exclusive sampling of individuals sufficiently literate in English to write questions legibly. There was a sense that those most likely to be insufficiently English literate were the extremely elderly, whose first language was more likely to be a tribal dialect. However, during visits to the schools of Waramadong, all of the respondents were highly literate. How this bias affects our insights is uncertain, except to the extent that it is unlikely that the extremely elderly would be as concerned with Sexual Health.
Despite these biases, it is telling that most of the curiosity concerning HIV/AIDS is not about its prevention or treatment, which is where most interventionist programs tend to focus, but on its pathophysiological essence. Sample responses in this domain include, "What is AIDS?" (a common question) and "What is the difference between HIV and AIDS?" This suggests a fundamental educational gap about the nature of the disease that must be filled before prevention campaigns can achieve full potency.
The other popular sub-theme of Sexual Health was the basic science surrounding reproductive physiology and maturation. Questions such as, "Why do I sometimes get my period twice a month?" and "Can a girl get pregnant if she has unprotected sex during menstruation?" suggest, again, that a gap in basic science education is a barrier to proper uptake of public health messaging.
Given that this population, like much of the global Aboriginal population, suffers disproportionately from Diabetes, questions like, "What is a carbohydrate?" suggest that previous public health messaging has employed a largely didactic approach, without sufficiently accounting for the appropriateness of language and the need to provide basic nutritional science overviews.
It is a surprising result that there was little curiosity expressed about either Tuberculosis or Malaria. Both are serious health issues in Guyana (Alladin et al., 2011;Rawlins et al., 2008), though Malaria is on the decline. Considerations of our aforementioned biases aside, this suggests either that long term multi-generational experience with these diseases is sufficient to quell much curiosity, or that these diseases are not considered to be as important or as interesting subjects as are sexual issues. If indeed this is the result of lack of appreciation of the prevalence of TB, in particular, then it is concerning that health promotion efforts have perhaps oversold the importance of Sexual Health at the expense of self-protection from other infectious diseases.
Our qualitative approach leads to insights that are, of course, purely speculative and suggestive, not conclusive.
Our results suggest that a more structured, quantitative study of the health literacy and personal health curiosities of Amerindian peoples would help to better inform the plethora of health promotion projects currently besetting these populations. | 2018-04-03T01:51:14.124Z | 2000-07-01T00:00:00.000 | {
"year": 2012,
"sha1": "dab278347955ea00fd28558839c16419c1a0ad0f",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/gjhs/article/download/19423/13174",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dab278347955ea00fd28558839c16419c1a0ad0f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Sociology",
"Medicine",
"Geography"
]
} |
254707130 | pes2o/s2orc | v3-fos-license | Basic income in Australia: an exploration
Purpose – Basic income (BI) is predicted to be the major economic intervention in response to rising income inequality and accelerating technological progress. Financing is often the first question that arises when discussing a BI. A thorough answer to this question will determine the sustainability of any BI program. However, BI experiments implemented worldwide have not answered this question. This paper explores two options for a BI program in Australia: (1) BI and (2) top-up basic income (TBI). Design/methodology/approach – The authors employ "back-of-the-envelope" calculations with the latest publicly available data on income distribution, the poverty line and the share of income tax in the government revenue to estimate the costs of implementing BI in Australia. Findings – Even without any change in the current tax regulations, the TBI option, which requires a contribution of 2–3% of disposable income from net contributors, will guarantee that no Australian family lives under the current national poverty line. The BI-for-all option is not financially feasible under the current tax and transfer regulations because it requires an additional tax rate of at least 42% of disposable income from net contributors. Practical implications – The results of this study can serve as inputs for the design and implementation of BI options in Australia and similar countries. Originality/value – This is the first paper that examines the macroeconomic effects of BI options in Australia.
Introduction
The rationale for a basic income (BI) lies in rapid technological progress, especially the development of robotics and artificial intelligence, which may lead to a large-scale replacement of workers (Straubhaar, 2017). Many jobs currently undertaken by people may be taken over by robots (Hoynes and Rothstein, 2019). Brynjolfsson and McAfee (2014) also argued that human work in the "second age of the machine" would be taken over by robots with artificial intelligence. In addition, the current tax systems relying on labor income might be under pressure as robots neither are taxed nor contribute to social security systems. Furthermore, technological change may further increase inequality and polarization between capital owners and workers, especially lower-skilled workers (Straubhaar, 2017). Therefore, policymakers and the general public are paying attention to the future of employment, the feasibility of social welfare and stable social security systems (Straubhaar, 2017).
BI is the provision of income to all adult individuals without any tests, whether means-tested [1] or conditional, to meet their basic needs (Francese and Prady, 2018;Arthur, 2016;Colombino, 2019;Ghatak and Maniquet, 2019a). It is generally accepted that BI is universal, adequate and unconditional. The provision of BI will require a significant source of revenue and affect an economy substantially in many aspects (Colombino, 2019).
BI has been trialled in Finland, the Netherlands and the Canadian province of Ontario (Ghatak and Maniquet, 2019a). According to Hale (2019), the Greens party of Australia initiated the first ever universal BI experiment with a $55 m package funded by the New South Wales (NSW) government and undertaken on the NSW South Coast. This experiment aims to reduce inequality, provide economic security and share Australia's wealth fairly.
Debates have revolved around the potential effects of BI across European countries, the United States of America (USA), Canada and Australia (Colombino, 2019;Arthur, 2016;Hale, 2019;Hoynes and Rothstein, 2019;Ghatak and Maniquet, 2019a). Colombino (2019) suggested that BI can be an efficient approach to redistributing the benefits from automation and globalization and does not create "welfare traps" or "poverty traps." BI is simple and transparent, with low administrative costs (Francese and Prady, 2018;Colombino, 2019). In addition, BI may have positive impacts on labor supply, responsibility and human capital investment (Francese and Prady, 2018;Colombino, 2019). BI can reach the poor more effectively than means-tested programs (Francese and Prady, 2018). Nikiforos et al. (2017) indicated that BI could be a tax-financed or debt-financed program. If BI is financed by increasing taxes on households, Levy's Keynesian model forecasts no impact on the economy. This is because BI provides households with cash assistance which is taken away from high-income households (i.e. households pay higher taxes due to BI policy). When distributional effects are included in the model, the economy grows. This is because households paying more in taxes than receiving in cash assistance have a low propensity to consume, while households receiving more in cash assistance than paying in taxes have a high propensity to consume. Therefore, even if the BI is tax-financed rather than debt-financed, output, employment, prices and wages will increase (Nikiforos et al., 2017). In contrast, Francese and Prady (2018) and Colombino (2019) indicated some shortcomings of BI. For example, the BI may result in higher taxes or lower government expenditures in other sectors such as health, education and investment with efficiency and equality losses or high fiscal costs; decreased effort, motivation and autonomy and benefit to the "undeserving." However, such effects have rarely been estimated in our region. Despite some shortcomings, Yamamori (2016) argued that a BI can be a solution to cover the minimum subsistence level. Further, Straubhaar (2017) suggested that the BI is necessary to change the social system. The minimum subsistence level should be guaranteed to everybody, and people with no income receive net transfers. He argued that the BI is economically efficient, socially fair and financially viable. The BI offers the best social-political prerequisite for "prosperity for all" in the 21st century.
This paper makes two contributions to the existing literature.First, to our knowledge, this is one of the first examinations of possible BI options fully tax-funded and applied across Australia.Second, we estimate the BI's effects on key macroeconomic indicators, including labor supply, capital, investment and wages.
Literature review
The idea of a BI was first introduced by Rhys-Williams (1943). Due to the unfair distribution of wealth and the need to address chronic unemployment issues, she proposed a social security subsidy that could cover the minimum basic needs of all citizens. Friedman (1962, 1968) then developed the concept of a negative income tax as a coupling of income tax and social transfers. Tobin (1966) developed the "case for an income guarantee" based on the negative income tax concept. He suggested both structural and distributive strategies. This is because the former helps build up the capacities of the poorest fifth of the population to earn decent incomes, while the latter helps assure every family a decent standard of living regardless of its earning capacity. Brown (1995) further developed the concept of a BI, which provided a social minimum for economic activity, and founded the European Basic Income Earth Network in 1986. Colombino (2019) recommended that BI might be a viable alternative or complementary to selective and conditional social assistance policies. BI redistributes the gains from automation and globalization by building an efficient and transparent buffer against global volatility and systemic risks, generating positive incentives and avoiding recurrent risks of falling into poverty. Colombino (2019) pointed out that the experiments' findings show that many BI recipients use the BI transfers to redesign their careers and occupational choices. They use unconditional cash transfers to cover their training in new skills and related costs of changing jobs (Standing, 2011). The administrative cost of a non-means-tested transfer is approximately 1-2% of the total costs of BI in the USA, whereas means-testing boosts the administrative cost to four or five times that amount (Colombino, 2019). In addition, in 2010, the rate of overpayment because of fraud and error in the United Kingdom was at about 1% for non-means-tested benefits and 4% for means-tested ones. Colombino (2019) indicated that the BI experiments with non-means-tested transfers in developing countries show positive results on labor supply and human capital investments such as education, occupation and health.
The literature on the macroeconomic effects of BI is scant, with only a few studies included in the review. Ghatak and Maniquet (2019a), Banerjee et al. (2019) and Hoynes and Rothstein (2019) argued that BI may be likely to decrease labor supply in developed countries, at least in the short run, while there is no evidence that cash transfer programs in developing countries negatively affect labor supply. Ghatak and Maniquet (2019a) indicated that BI might be more appropriate in developing countries to help the poor, but it is not a long-term solution to poverty alleviation. In the USA, Nikiforos et al. (2017) proposed three packages of unconditional income transfers: $US500 or $US1,000 per adult per month and $US250 per child under 16 per month; however, there is no evidence showing that the amount of transfers is enough to cover basic needs. Luduvice (2021) applied an overlapping generations model to the US economy and found a moderate impact of BI on labor supply. However, the impact on the consumption tax rate was substantial, with a proposed BI of $1,000. Steenkamp et al. (2022) applied a general equilibrium model to the South African economy and found that BI was associated with increased tax and crowding-out effects on consumption and investment.
Although there have been discussions on BI across countries, analyses of possible BI options and their impacts in Australia are scant. This current study will fill the gap in the literature by proposing BI options and exploring their potential macroeconomic impacts.
Methods
The cost of implementing BI in Australia is estimated using "back-of-the-envelope" calculations with the latest publicly available data on income distribution, the poverty line and the share of income tax in government revenue. Although Australia is a wealthy country, 3.2 m people, or 13.6% of its population, live below the national poverty line (Davidson et al., 2020). Our estimates reveal that this level of BI can be funded by additional tax revenue from the top 10% high-income group of the population and benefit the remaining group (90% of the population). An alternative approach is to provide BI as additional income to the 13% lowest income group of the population using tax revenue evenly applied to the remaining population at the rate of 1% per dollar of equivalised weekly income above $474.
Initial results reveal that the proposed BI program creates a sharp decline in the labor supply. A top-up BI (TBI) positively affects consumption, investment and capital in the short and long term. Labor supply declines and does not return to the base level 10 years after launching the program. More positive, long-term effects are achievable if other sectors' productivity growth rate and tax share increase.
Assumptions
The estimation of the costs and effects of the BI program in this paper is conducted using the following assumptions.
By definition, a BI should be enough to cover basic necessities. It is assumed that this corresponds to income at the Australian poverty line of $474 [2] equivalised disposable income per week (Melbourne Institute, 2019). For an Australian representative family of two adults and two children under the age of 16 years, which has a total equivalised weight of 2.7 (1 for the first adult, 0.7 for the subsequent adult and 0.5 for each dependent child), this income is $1,280 per week.
(1) The Australian population structure is represented by a family of two adults and two children. This assumption is conservative given the current Australian population structure, with a quarter of the population being children and young people under 19 years of age (ABS, 2022).

(2) The BI is assumed to be funded by a tax increase at the current share of income tax and other sources of government revenue. This assumption is conservative as technological progress is expected to accelerate in the future; tax regulation may change to increase the share of capital and decrease the share of labor in the tax revenue (Straubhaar, 2017).
(3) The BI level and funding options are estimated at an aggregate level using data from the Australian Bureau of Statistics on income distribution in 2019. The midpoint in each income bracket is selected to represent their income level. The only exception is the last group, which has a weekly income of $2,000 and above. It is assumed arbitrarily that the average disposable income of this most affluent group is $2,500 per week. The share of the population in each income group is used to estimate the weighted average income for the whole population.
(4) The economy will be able to generate more goods and services as demand increases.
This assumption is based on the reasoning that technological progress will continue and will allow more goods and services to be produced with the same or fewer requirements on labor and materials. The belief in rapid technological progress in the future is also the main reason for the increased discussion on BI.
(5) The BI will not replace the existing welfare programs. Although this assumption will make the cost of funding a BI larger, it achieves the underlying objectives of protecting vulnerable people alongside a BI. Some segments of the population (e.g. people with disability and single-parent households) may receive a level of welfare support higher than the poverty line income. Thus, by replacing the existing welfare support with a poverty line income, the BI would make them worse off. When existing welfare benefits are maintained, the BI also encourages the potential long-term unemployed to get jobs without reducing their allowance.
(6) The BI is assumed to be funded by a budget-neutral tax policy, aiming to leave the government budget balance unaffected in the short run. This assumption is selected to test the economy-wide impacts of the proposed scheme. While a budget-deficit or borrowed funding approach could be used to fund a BI program, these options are difficult to maintain on a long-term basis. Also, the estimated effects of deficit-funding a BI are not clear-cut. Nikiforos et al. (2017) found positive short-term effects, while Paulson (2018) predicted an opposite long-term outcome for the same BI program.
(7) The current welfare administrative budget is sufficient to manage a BI scheme.We assumed this because most BI activities electronically redistribute income in the current welfare system.
(8) Lower-income households have a higher marginal propensity to consume (MPC); hence, BI will lead to more demand for goods and services, leading to the growth of outputs (i.e. goods and services). Although we do not explicitly model the household sector with different income brackets like Nikiforos et al. (2017), our analysis considers the differences in the MPC values for different income groups based on the MPC values of Nikiforos et al. (2017). For example, the MPC is 0.3 for the highest income group and 0.9 for the lowest income group.
(9) Multifactor productivity growth will be maintained at the 2016-2017 rate of 0.6% per year (ABS, 2018a). One of the main reasons for the increasing discussion of BI is that the global financial crisis caused a recession, job losses, unemployment and a slowdown in income growth in many developed countries (Arthur, 2016). The second main reason is that the rapid development of new digital technologies may permanently reduce the demand for labor, including both low-skilled and high-skilled workers (Arthur, 2016). Thus, the assumption that productivity growth remains at the same rate as the current period is modest.
Financing a basic income program
The cost of a BI program depends on the level of benefits it provides. A high-benefit BI will be too costly, while a low-benefit BI may not be enough to provide essential support for its recipients. We choose the level of support at the current poverty line, which is assumed to be enough to cover the costs of basic needs. The average income at the poverty line in Australia in the June Quarter of 2019 was $995.14 per week for a representative family of two adults and two children (Melbourne Institute, 2019). According to the Organisation for Economic Co-operation and Development equivalence scale, the full scale of 1 is given to the first adult, a half scale of 0.5 for an additional adult and a fractional scale of 0.3 for each child under 16 years of age. Thus, the equivalised scale for a representative family of two adults and two children is 2.1, and an equivalised disposable income at the current Australian poverty line is approximately $474 per week (i.e. $995.14/2.1).
In the budget period 2017-2018, Australia's total income tax revenue was $312.5 bn, accounting for 59.1% of the total tax revenue of $528.6 bn (ABS, 2019). Although the total Australian government revenue in 2019-2020 was $669 bn, we focus on tax revenue as the source of finance for BI because other sources of government revenue, such as sales of goods and services or investment dividends, are less stable. Assuming the same share of tax sources will be maintained, income tax raises $280 (i.e. $474 × 59.1%) of equivalised income per week. We propose two options to implement a BI at the current Australian poverty line income level. The first option provides an unconditional BI at the poverty line level to every citizen, while the second option only provides top-up income for those below the poverty line.
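The arithmetic above can be reproduced with a short back-of-the-envelope sketch. The figures (the $995.14 family poverty line, the 2.1 equivalence scale and the 59.1% income-tax share) are taken directly from the text; the code itself is only an illustration, not part of the authors' toolkit.

```python
# Modified OECD equivalence scale: 1 for the first adult, 0.5 for each
# additional adult, 0.3 for each child under 16.
def equivalised_scale(adults: int, children: int) -> float:
    return 1.0 + 0.5 * (adults - 1) + 0.3 * children

family_poverty_line = 995.14                 # $/week, two adults + two children
scale = equivalised_scale(2, 2)              # 2.1
poverty_line = family_poverty_line / scale   # ~$474/week equivalised

income_tax_share = 0.591                     # income tax share of total tax revenue
tax_funded_portion = poverty_line * income_tax_share   # ~$280/week

print(round(poverty_line), round(tax_funded_portion))  # 474 280
```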
Estimating macroeconomic effects
The effects of BI on the Australian economy were estimated based on a dynamic stochastic general equilibrium (DSGE) model by Smets and Wouters (2003), using parameters collected from the Australian Bureau of Statistics and the recent Australian model by Rees et al. (2016). The estimation was conducted in the gEcon package, which provides comprehensive and convenient tools to construct macroeconomic models (Klima et al., 2015), for the R programming language (R Core Team, 2020).
The model depicts the economy through interactions between three representative agents: the household, the firm and the government. The household aims to maximize the expected lifetime utility with a time discount rate of β. The instantaneous utility in each period is obtained from consumption (C) and labor/leisure (L), and the balance is subject to a budget constraint with wage income and rental return from capital. The household balances its wealth between holding cash for consumption and government bonds for capital investment. The firm hires labor and capital from households through the bond market to produce goods and services (Y) to serve the household and the government. The government collects taxes (T) from the household and the firm to provide public services and cash transfers, like the BI. The economy is in equilibrium when the supply of goods and services by the firm meets the demand for goods and services from the household. Effects of supply shocks (productivity and labor supply) and demand shocks (changes in consumer preferences, business investment costs and government spending) on the economy are modeled using structural equations. In this paper, the effects of BI are modeled by changes in government spending (i.e. BI increases the cash transfer to the household from the government) and the consumption of the household. We assume that multifactor productivity growth is maintained at the 2016-2017 level of 0.6% per year, which is a conservative rate because the long-term trend for multifactor productivity in the past 30 years was 1% per year (Parliament of the Commonwealth of Australia, 2010). We also use calibrated parameters from Table 1 of a multi-sector model by Rees et al. (2016), such as the discount factor β of 0.9996, a capital depreciation rate of 0.0175 and a labor elasticity (with respect to wage) of 1.
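For readers unfamiliar with this class of model, the household block described above can be summarized in a generic textbook form. This is a simplified sketch, not the exact Smets and Wouters (2003) or Rees et al. (2016) specification, which includes additional shocks and frictions; the tax rate τ_t and the functional form of u are generic placeholders:

$$\max_{\{C_t,\,L_t,\,K_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, u(C_t, L_t) \quad \text{s.t.} \quad C_t + I_t = (1-\tau_t)\, w_t L_t + r_t K_t + TR_t, \qquad K_{t+1} = (1-\delta) K_t + I_t,$$

where β = 0.9996 is the discount factor, δ = 0.0175 the capital depreciation rate, w_t the wage, r_t the rental rate of capital and TR_t government transfers, the channel through which the BI or TBI enters the model; utility rises with consumption C_t and falls with labor L_t.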
Results and discussions
4.1 Financing effects
4.1.1 Option 1: Basic income
BI is provided at $474 equivalised disposable income per week. Using the current share of income tax (59.1%) and the weighted average of equivalised weekly disposable income of $995.14, a gross contribution is required from every citizen, resulting in a flat (gross) tax rate of 28.1% of disposable income (i.e. $280/$995.14). After adjusting for clawbacks (i.e. a $474 transfer from the government to every citizen), the net contributors are only the two wealthiest income brackets: those earning $1,700 per week and above (see Table 1). The effective marginal tax rate (EMTR) is 5% for disposable income from $1,700 to $1,999. For those earning $2,000 and above, the EMTR is 27%. The wealthiest group also pays only 5% on the portion of income from $1,700 to $1,999. No BI tax is required for a weekly disposable income lower than $1,700. The weighted average of BI-adjusted income is $1,214, which is considerably higher than the original $1,020. The average income increases after redistribution because other sources of government revenue contribute 42.4% of the fund required for the BI.
In the unlikely scenario that income tax is the only source of funding for a BI, the gross tax rate is 47.6% (i.e. $474/$995.14) (detailed calculations of this unlikely scenario are not presented for brevity). Net contributors will start from those earning a weekly equivalised income of $1,000. The EMTR is 42.7% for the equivalised income bracket $1,000-$1,049 per week and 47.3% for any weekly disposable income from $1,050 and above. Contributing almost half of one's income when earning just over $1,000 per week, after fulfilling all existing tax obligations, is a challenging policy option. It may create a disincentive to work for middle-class and high-income earners. However, as argued previously, we will not consider this scenario because technological acceleration leads to the increased discussion of BI, which will lead to changing tax regulations toward a higher contribution from capital income accordingly.
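A minimal sketch of the clawback logic for Option 1 is given below. The $474 transfer and the 28.1% flat gross rate are the figures from the text; this captures only the flat-rate-plus-transfer structure, not the bracket-specific EMTRs reported in Table 1, and the incomes used are illustrative.

```python
BI = 474.0          # weekly equivalised transfer paid to every citizen
GROSS_RATE = 0.281  # flat gross BI tax rate on disposable income

def net_position(weekly_disposable_income: float) -> float:
    """Transfer received minus BI tax paid; negative means a net contributor."""
    return BI - GROSS_RATE * weekly_disposable_income

for income in (500, 1000, 1600, 1700, 2500):
    print(income, round(net_position(income), 1))
# Net positions turn negative around $1,687/week, consistent with net
# contributors being the brackets earning $1,700 and above.
```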
4.1.2 Option 2: Top-up basic income
This option identifies up front a guaranteed, tax-free income threshold of $474 per week. Under the assumption that the current income tax share of 59.1% in total tax revenue is maintained, an average of $10 per week is required to finance what we call a TBI. The TBI will provide additional income to those earning a disposable income, including income from current welfare programs, below the current poverty line level. A flat tax rate of 1.9% (i.e. $10/$532) on the fraction of equivalised weekly disposable income above $474 per week is sufficient to finance the TBI (see Table 1). In the unlikely scenario that income tax is wholly responsible for the TBI, a flat tax of 3.3% (i.e. 1.9%/0.576) is required for every dollar of equivalised disposable income above $474 per week.
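The TBI option can be sketched in the same spirit; the $474 threshold and the 1.9% rate come from the text, while the incomes below are illustrative.

```python
THRESHOLD = 474.0  # guaranteed weekly equivalised income
TBI_RATE = 0.019   # flat rate on equivalised income above the threshold

def tbi_adjusted_income(income: float) -> float:
    """Top up incomes below the threshold; tax only the excess above it."""
    if income < THRESHOLD:
        return THRESHOLD
    return income - TBI_RATE * (income - THRESHOLD)

for income in (300, 474, 1000, 2500):
    print(income, round(tbi_adjusted_income(income), 1))
```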
Implementing BI or TBI will lift approximately three million people (or 13% of the Australian population) with a weekly equivalised disposable income of less than $450 (the first ten rows of Table 1) out of poverty. In the case of the BI, the income redistribution is larger, resulting in improved living standards (proxied by income) for 90% of the population. The order of income brackets did not change after redistribution by BI (see the original and BI-adjusted income columns in Table 1). The TBI also maintains the order of income brackets after redistribution except for the poorest 10 income brackets (those earning a disposable income less than $450 per adult equivalent per week), which have the same income at the poverty line level after redistribution. We believe that BI/TBI recipients, especially the TBI that targets people living below the poverty line, will spend most of their adjusted income on necessities. Thus, the BI/TBI will have expansionary effects (i.e. increased demand for goods and services) on the Australian economy. Since the BI injects more money into 90% of the population, we expect its expansionary effects to be larger than those of the TBI. However,
the BI requires a much more significant increase in tax and government transfer; it may create unexpected consequences for the economy.
Macroeconomic effects
Implementing a BI will increase tax and government spending on transfers by the same amount (budget neutral). We expect an overall increase in household consumption because net recipients (the poor) will spend a higher fraction of their income (i.e. have a higher propensity to consume) than net contributors (the rich). Labor supply, especially among net recipients of BI, may increase because their benefits will not be phased out until they reach the top 10% of the income brackets, when they become net contributors. However, if increasing automation becomes a reality, implementing a BI may not lead to increased labor supply, at least not in the traditional sense of labor supply (Hahn, 2015). For example, voluntary, domestic or hobby work may increase, while demand for wage-earning workers may decline due to automation.
Although most of the funds for a BI program are redistribution, the government still needs to collect tax at the level of $474 per adult equivalent per week for the BI and $17 for the TBI (i.e. the average TBI tax of $10 is shared by the 57.6% income tax share, so the total fund needed for the TBI is $10/0.576 = $17). In 2016-2017, tax revenue as a proportion of the gross domestic product was 27.8% (ABS, 2018b). Thus, the average gross weekly income is $980/(1 − 0.278) = $1,357 per adult equivalent. The BI tax rate on gross income is 34.9% (i.e. $474/$1,357), and the TBI tax rate is 1.3% (i.e. $17/$1,357). Thus, the amount of tax must increase by 125% from the 2016-2017 level if a BI is implemented (i.e. 34.9/27.8). If the TBI is implemented, the tax increase is only 4.6% (i.e. 1.3/27.8).
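The chain of calculations in this paragraph can be checked with a few lines; every input is a figure quoted in the text, and the code adds nothing beyond the stated arithmetic.

```python
poverty_line = 474        # BI per adult equivalent per week
tbi_total = 10 / 0.576    # ~$17: TBI tax grossed up by the income-tax share
avg_disposable = 980      # average weekly disposable income per adult equivalent
tax_to_gdp = 0.278        # 2016-17 tax revenue as a share of GDP

avg_gross = avg_disposable / (1 - tax_to_gdp)   # ~$1,357
bi_rate = poverty_line / avg_gross              # ~34.9% of gross income
tbi_rate = tbi_total / avg_gross                # ~1.3% of gross income

print(round(avg_gross), round(100 * bi_rate, 1), round(100 * tbi_rate, 1))
# Required increases over the current tax take: bi_rate / tax_to_gdp is roughly
# 125% for the BI, and tbi_rate / tax_to_gdp roughly 4.6% for the TBI.
```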
Because the BI results in such a significant increase in the tax level, we propose implementing it over five years. Thus, the BI will be rolled out with an incremental tax increase of one-fifth of the required amount (125%). The increment of one-fifth of the required BI tax does not mean we recommend providing BI at a lower level in the first four years. Instead, we recommend gradually rolling out the full BI incrementally for a randomly selected 20% of the population. One advantage of the incremental implementation is that the percentage of tax increase in the following years may be less than planned due to the multiplier effects of BI spending in the previous year. The multiplier effect is the cumulative effect whereby the expenditure of one person becomes the income of the next person. For example, if a person earns one dollar and spends 50 cents (i.e. assuming that the MPC is 0.5), this 50 cents will become the income of providers of goods and services, who will, in turn, spend 25 cents to buy goods and services. The cumulative effect is calculated as 1/(1 − 0.5) = 200%. For convenience, we assume that the multiplier effects of BI fade out in five years but spread evenly through the years.
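The 200% figure is the usual geometric-series multiplier 1/(1 − MPC): with a marginal propensity to consume m, one dollar of transfer generates 1/(1 − m) dollars of cumulative spending. A one-line check of the 0.5 example used in the text:

```python
def spending_multiplier(mpc: float) -> float:
    # Sum of the geometric series 1 + mpc + mpc**2 + ... = 1 / (1 - mpc)
    return 1.0 / (1.0 - mpc)

print(spending_multiplier(0.5))  # 2.0, i.e. the 200% cumulative effect above
```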
The multiplier effect of a BI program will depend on the marginal propensity to consume of its beneficiaries. Based on the distribution of MPC among income deciles in the USA reported by Nikiforos et al. (2017), we assume the following: (1) The beneficiaries of the proposed BI, consisting of 90% of the population, have an average MPC of 0.75, while net contributors, the 10% most affluent population, have an MPC of 0.35. Thus, the income redistribution under BI will create a 0.4 change in the MPC.
(2) The beneficiaries of the TBI, who consist of the poorest 13% of the population, have an MPC of 0.9, while the net contributors have an MPC of 0.6. Thus, the income redistribution under TBI will create a 0.3 change in the MPC.
The difference in the MPC of net beneficiaries and net contributors is used to estimate the potential expansionary effects of the 57.6% of the BI/TBI transfer funded by income tax. The effects of the remaining 42.4% of the transfer are estimated using only the MPC of the net beneficiary, which is 0.8 for BI and 0.9 for TBI. The TBI requires only a 4.6% tax increase compared with the current period, and thus there is no need for gradual implementation. The effects of BI/TBI on the economy will be evaluated five years after the program is fully rolled out.
The simulation results show that a BI significantly increases government transfers and shrinks the labor supply substantially. We acknowledge that the model only includes waged labor, while BI may change the nature of work and increase nonwage labor (e.g. domestic duties, volunteer and hobby work). Most BI experiment programs (Ghatak and Maniquet, 2019a) show that the labor supply did not fall, but none of the experiments included a tax increase to fund the BI. In our standard model, households' utility increases with consumption and decreases with work. A sharp rise in cash transfers like the proposed BI collapses the labor supply and makes the economy unstable. Thus, we will not pursue further analysis of the BI and will focus on discussing a more affordable alternative - the TBI.
The TBI only requires a 4.6% tax increase, and we propose full implementation within one year. The results show that consumption (C) and output (Y) grew rapidly in the first few years and slowed down after Year 5, by which point consumption had increased by 7.2% and output by 7.4% (see Table 2). This positive result is substantially lower than that of Nikiforos et al. (2017) for the USA, which predicted 13% output growth four years after completing the implementation of a BI of $1,000 per adult per month in the USA. Investment (I) follows a similar pattern but at a smaller scale. By the end of Year 5, investment had only grown at a rate of 2.8%. One possible factor leading to slow growth in investment is the reduction of savings from high-income earners, who have a lower MPC and hence a higher propensity to save. Capital (K) looks almost flat in Figure 1, but it grows at a minuscule rate to reach 0.16% by Year 5. The slow growth of capital and investment will gradually slow down output and hence consumption in the long run. The "biggest loser" is labor (L), which declines sharply over the first three years and then gradually recovers; by Year 10, it is only 0.96% lower than in Year 1 (see Figure 1). The finding of labor supply reduction is in line with recent findings for the USA by Luduvice (2021) and Scotland by Connolly et al. (2022). The rising trend of labor and capital and the slowing of consumption and investment suggest that the economy is moving toward a higher equilibrium level.
To test the robustness of the results, we estimate Model 2, where multifactor productivity is assumed to grow at the average long-term rate of 1% per year (The Parliament of the Commonwealth of Australia, 2010). The higher growth of multifactor productivity indeed creates even more positive effects, but only in the long term (e.g. 10 years). Consumption and output increased by respective rates of 6.2 and 7.3% by Year 10. Investment and capital improve slightly to 3.1 and 0.4% growth rates, respectively. However, higher productivity worsens labor outcomes, with labor declining by 1.8% by Year 10.
Changing the tax share toward a higher capital contribution is one way to cope with the expected increase in automation in the future. Thus, we also explore a scenario in which non-income tax is wholly responsible for financing the TBI. This scenario is expected to create higher expansionary effects because it will result in higher overall consumption. Indeed, this scenario leads to an increase in consumption and output by the same growth rate of 7.7% in Year 5. Investment and capital also improve slightly, while the labor supply worsens by 0.7 percentage points compared with the main model.
Conclusions
This paper has explored options for BI and its potential effects on the Australian economy. A BI at the level of the current poverty line will require a contribution from the two highest income brackets at the rate of 5% for the equivalised disposable income of $1,700-$1,999 per week and 27% for the fraction of income of $2,000 and above. This BI will improve living standards (proxied by income) for 90% of the population. A more affordable option is a top-up basic income, which provides additional support for the 13% of the population living below the poverty line. This option requires contributions from middle-class and high-income earners at an average tax rate of about two cents for every dollar of equivalised disposable income above $474 per week.
With the assumption that people gain higher utility from more consumption or less work, the substantial increase in government transfers under BI creates a massive reduction in the labor supply, to an unstable level by only the third year. A modification of the model to assume that labor supply will not be reduced by the BI transfer is the subject of future analysis. The main limitation of our estimation is that nonwage labor supply, such as voluntary and domestic work, which could become popular with the rise of automation, is not accounted for.
An alternative form of BI, a top-up for low-income earners (TBI), creates expansionary effects. Key macroeconomic indicators, including consumption, output, capital and investment, increase compared with the base period. However, the labor supply declines slightly. In the optimistic scenario that multifactor productivity grows at 1% per year, the TBI's effects are higher in the long run, but the labor supply worsens. Long-term effects are also improved if other tax sources (e.g. capital) are wholly responsible for funding the TBI. Overall, the positive long-term effects of a modest BI are feasible if robots take our jobs and the tax burden.
Figure 1. Effects of a TBI over time
Table 1. Equivalised disposable weekly income and BI
Table 2. Summary effects of a TBI (%) | 2022-12-16T16:13:11.013Z | 2022-12-15T00:00:00.000 | {
"year": 2022,
"sha1": "39df6211e0dc0285fa16d424f617679babc7b694",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JED-07-2022-0119/full/pdf?title=basic-income-in-australia-an-exploration",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "41fd11f3c5952aaa18ed8f27327c3de85283723d",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
34678515 | pes2o/s2orc | v3-fos-license | Investigation on the durability of man-made vitreous fibers in rat lungs.
Two types of sized stonewool with median lengths of 6.7 and 10.1 microns and median diameters of 0.63 and 0.85 microns, and crocidolite with fibers of median length of 4.8 microns and median diameter of 0.18 microns were instilled intratracheally into female Wistar rats. A single dose of 2 mg in 0.3 ml saline was used for the stonewool samples and 0.1 mg in 0.3 ml saline for crocidolite. The evenness of distribution of fibers in the lung was checked by scanning electron microscopy (SEM). Five animals per group were sacrificed after 2 days, 1, 3, 6, and 12 months. After low-temperature ashing of the lungs about 200 fibers per animal were analyzed by SEM for length and diameter. The number and mass of fibers in the total lung were calculated. For the stonewool samples the decrease in the number of fibers in the lung ash followed approximately first order kinetics resulting in half-times of 90 and 120 days. The analysis of fiber number and diameter of different length fractions was used to estimate the contribution of three processes of fiber elimination: transport by macrophages for short fibers, breakage of fibers, and dissolution of fibers. (The process of transport by macrophages was found fastest for fibers with length < 2.5 microns). For the elimination of critical fibers with length > 5 microns, the breakage and dissolution were the most important processes. The breakage of fibers was predominant for one of the stonewool samples. The preferential type of the mechanism of fiber elimination is dependent on chemical composition and size distribution.
Introduction
The durability of fibers in the lung is one important criterion of carcinogenic potential. A parallel study of in vivo durability (1) and carcinogenicity investigated by the intraperitoneal test (2) did not show a significant tumor rate by this method for fibers with retention half-times of approximately 40 days.
Biodurability studies with stonewool fiber have been published only for samples with relatively thick fibers. For example, SG stonewool with a median diameter of about 2.0 µm was used in a 12-month inhalation study with rats (3). From the fiber retention data up to 16 months after termination of exposure, half-times of approximately 200 days can be calculated.
A half-time of about 280 days was reported for fiber retention data up to 24 months after intratracheal instillation of stonewool with a median diameter of 1.8 µm (4). A similar half-time was found for a glasswool in the same study.
In a study of the solubility of stonewool fibers with a median diameter of 1.1 µm and a median length of 28 µm, fibers with length >20 µm, analyzed by light microscopy up to 18 months after intratracheal instillation, had an unchanged median diameter; but the fibers had become thinner at their ends, indicating a low solubility (5,6).
In this study the biodurabilities of sized samples of a commercial stonewool composition (MMVF21) and of a modified stonewool with increased alumina content (stonewool HT) were analyzed and compared with a crocidolite sample.
Materials and Methods
The test substances were a basalt-based stonewool (MMVF21) and stonewool HT fiber, both of known chemical composition (7). A special preparation of UICC crocidolite with an increased fraction of long fibers was used as positive control with an expected high durability.
A small sample of each test material was suspended in doubly-distilled water, sonicated, and filtered onto a Nuclepore filter (pore size 0.2 or 0.4 µm). Part of the filter was mounted on an aluminum stub and sputtered with approximately 30 nm of gold, then analyzed by a Cambridge Stereoscan 360 scanning electron microscope (SEM). Two or three magnifications were used to enable the measurement of both the longest and the thinnest fibers with sufficient precision; at each magnification, fiber length limits were set to avoid double counting. The length and diameter of about 400 fibers of the stock samples were measured. The calculated number of critical fibers (L > 5 µm, D < 2 µm, L/D > 5/1) and their percentiles of length and diameter are given in Table 1.
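As an illustration of how the critical-fiber count and the size percentiles follow from the measured length/diameter pairs, a short sketch is given below. The criteria (L > 5 µm, D < 2 µm, L/D > 5/1) are those stated above; the five fibers in the array are invented for the example and are not the stock-sample data.

```python
import numpy as np

# Hypothetical SEM sizing results: (length_um, diameter_um) per fiber.
fibers = np.array([[6.7, 0.63], [10.1, 0.85], [4.8, 0.18], [25.0, 1.2], [2.0, 0.4]])
lengths, diameters = fibers[:, 0], fibers[:, 1]

critical = (lengths > 5) & (diameters < 2) & (lengths / diameters > 5)
print("critical fibers:", int(critical.sum()))
print("median length (um):", float(np.median(lengths)))
print("median diameter (um):", float(np.median(diameters)))
```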
Two milligrams of fibers per rat for the stonewool samples and 0.1 mg of crocidolite were each suspended in 0.3 ml of 0.9% NaCl solution and instilled intratracheally in a single dose into the lungs of female Wistar rats, body weight approximately 200 g. Five animals per group were sacrificed after 2 days, 2 weeks, 1, 3, 6, and 12 months for the stonewool groups, and after 2 days, 6 and 12 months for crocidolite. After sacrifice, the lungs were isolated, oven-dried at 105°C and ashed at low temperature. That this procedure did not alter the size distribution of the test materials was shown by comparing lung ash samples from rats sacrificed 2 days after intratracheal instillation with the corresponding initial test materials (Table 2). A fraction of the ashed lung was suspended in filtered water, filtered on a Nuclepore filter (pore size 0.2 or 0.4 µm) within 15 min, and prepared for analysis by SEM. For each sample, 200 fibers were measured on SEM video prints or photos, the size distribution of the fibers was analyzed, and the total number of fibers per lung was calculated for each animal. The volume of the particles was estimated assuming cylindrical geometry. Clearance kinetics were calculated using a regression analysis of the logarithm of the number or mass of fibers versus time after instillation for individual animals. The resulting clearance rate constants k with their 95% confidence limits were transformed to the corresponding half-times t1/2 by: t1/2 = ln 2 / k.
Results
SEM examination of the distribution of fibers in the lung two days after intratracheal instillation of MMVF21 showed fibers in the main bronchi, on the epithelium of the distal segments of the bronchioli and in the alveoli. No agglomerations of fibers were found. Table 3 presents the analysis of fibers in the ashed lungs for sacrifice dates 2 days, 1, 3, 6, and 12 months after intratracheal instillation. A logarithmic plot of the number of fibers versus time (Figure 1) indicated that the elimination of fibers can be described approximately by first order kinetics, defined by only one parameter, the half-time (Table 4). No significant change was observed in the size distribution of fibers in the lung ash up to 12 months, with the exception of the diameter distribution for the stonewool HT fiber, which shifted to thicker fibers with time (Table 2).
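The clearance-kinetics step described in the Methods (a regression of the logarithm of the fiber number against time, with t1/2 = ln 2 / k) can be sketched as follows. The fiber counts below are invented for illustration only and are not the study data.

```python
import numpy as np

# Hypothetical total fiber numbers per lung (millions) at each sacrifice time (days).
days = np.array([2, 30, 90, 180, 365])
fibers = np.array([20.0, 16.5, 10.0, 5.5, 1.6])

# First-order clearance: ln(N) = ln(N0) - k * t, fitted by least squares.
slope, _ = np.polyfit(days, np.log(fibers), 1)
k = -slope
half_time = np.log(2) / k
print(f"k = {k:.4f} per day, half-time = {half_time:.0f} days")
```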
Discussion
Decrease in the number of fibers is influenced by three processes: mechanical clearance of short fibers, breakage of longer fibers, and dissolution of fibers. Only fibers with length up to about 10 µm can be engulfed completely by macrophages. For anthophyllite and crocidolite the fastest clearance in rats was found for fibers below 5 µm in length (4,8). Anthophyllite fibers >17 µm were not cleared from the lung in humans (9). These results suggest that fibers >20 µm in length will disappear only by breakage or dissolution. To estimate the contribution of each of these processes, an analysis of the number of fibers of different fiber length fractions (Figure 2; Tables 5, 6) and of different diameter fractions (Figure 3) was performed. Fiber breakage causes a shift to a shorter length fraction within the same diameter fraction without changing the cumulative length (Figure 3). Dissolution of fibers will result in a shift to thinner diameter fractions without changing the length distribution. Mechanical clearance of fibers should remove fractions of the same length regardless of the diameter. For crocidolite, data from the sacrifice dates 2 days, 6 and 12 months after instillation are available. From 2 days to 6 months, a relatively fast clearance was found for the length fraction > 40 µm, indicating that long crocidolite fibers were broken in the lung (Figure 2). The relatively fast clearance of the fibers < 5 µm in length is due to the mechanical clearance of these fibers. For MMVF21, fibers in the length fractions >20 µm were eliminated the fastest as a result of the breakage of these long fibers.
From 3 to 12 months, the elimination of thicker fibers, >1.25 µm in diameter, was significantly faster than that of thinner fibers, resulting in a shift to fractions of thinner diameter, probably due to reduction by dissolution. The elimination of fibers <5 µm in length is predominantly the result of mechanical clearance. For the stonewool HT fiber, the elimination of all length fractions >10 µm was relatively fast, but the highest elimination rate was found for the thinner diameter fractions.
SEM photos showed some fibers whose diameter was not constant all along the fiber due to corrosion. In in vitro tests with this stonewool sample (7), the dissolution rate at pH 4.8 was much faster than at pH 7.7, and pH 4.8 corresponds to the pH in the phagolysosomes of the macrophages (10). The parts of the fibers with thinner diameter were approximately 5 to 10 µm in length, so that part could have been within a macrophage, leaving the thicker parts outside. The thinner sections would be the likely sites for fiber breakage, so that first the thin fibers and later the thicker ones might break into smaller fragments. This may also explain the shift to thicker diameter fractions. In in vitro studies for MMVF21, the dissolution rate at pH 4.8 was also higher than at pH 7.7, but only by a factor of about 2 (2), which accords with the observation that the shape of fibers in the lung ash was relatively regular for all MMVF21 fibers up to 12 months after sacrifice. It was observed, however, that some long fibers were thinner at the ends than in the middle (6). The higher dissolution rate of stonewool HT at acid pH compared with MMVF21 could be due to the substitution of magnesium and some silica by aluminum. For the glasswool fiber (MMVF11), the in vitro dissolution rate was lower at acid pH than at pH 7.7 (7,11). In another study, glasswool fibers with high solubility at pH 7.7 showed a rapid decrease in fibers longer than 10 µm, while the shorter fibers were much more durable (12), due to the slower dissolution of the phagocytized short fibers in the acid pH of the macrophages.
Conclusions
The breakage of fibers with length >40 μm was a common phenomenon for MMVF21, stonewool HT, and crocidolite. For stonewool HT, breakage of fibers was detected for all fractions >10 μm in length. Both breakage and the dissolution of fibers are important reasons for the decrease in the number of critical fibers with length >5 μm. The overall elimination rate of fibers increased in the order crocidolite < MMVF21 < stonewool HT, although the mean diameter of samples increased in the same order. | 2014-10-01T00:00:00.000Z | 1994-10-01T00:00:00.000 | {
"year": 1994,
"sha1": "1220a435f11cc0e1d00a54111a603fdf52d4c923",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.94102s5185",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1220a435f11cc0e1d00a54111a603fdf52d4c923",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
219480475 | pes2o/s2orc | v3-fos-license | Evaluation of chemical literacy assessment instruments in solution materials
The purpose of this study was to evaluate the quality of assessment instruments developed by the researchers to assess the chemical literacy of high school students in solution material. The research method used was descriptive, with the quality parameters tested being empirical validity, reliability, readability, distinguishing power, difficulty level and distractor function. The research participants were 26 high school students of class XI in Bandung. The chemical literacy assessment instrument that was tested consisted of 49 multiple choice items, 12 essay items and 26 attitude scale statements. The results showed that the empirical validity values of the multiple choice items, essays and attitude scale were in the ranges 0.11-0.87, 0.4-0.8 and 0.1-0.74, respectively. The reliability values for the multiple choice items, essays and attitude scale were 0.93, 0.87 and 0.94, respectively. The readability test scores for the multiple choice items, essays and attitude scale were 97%, 99% and 100%, respectively. The distinguishing power test results for the multiple choice and essay items were in the ranges 0.07-0.85 and 0.3-0.8. The difficulty level test results for the multiple choice items were 16% difficult category, 61% moderate category and 23% easy category, while for the essay items they were 9% difficult category, 75% moderate category and 16% easy category. The distractor function test results showed that for 37 multiple choice items the distractors still required improvement.
Introduction
The world today is in the 21st century, which has brought an expansion of knowledge that impacts daily life. As a result, people must understand science and technology in living their lives so that they are not left behind [1]. The impact of science on society can be seen in various aspects of life, such as the social, political, educational, technological and economic aspects [2]. However, the development of science and technology also raises issues that threaten the survival of organisms in the world, such as global warming, the use of harmful additives in food, water pollution and air pollution. To address this, it is necessary to prepare a community that has an understanding of science and technology and is more environmentally responsible. This can be done through science learning that helps students acquire 21st century skills. This is in line with the 2013 curriculum currently applied in Indonesia. In the 2013 curriculum, students must have the skills needed in the 21st century, so as to form human resources with competitiveness, competence and skill. If the 2013 curriculum is applied in science learning, it can train students to use scientific knowledge and abilities to solve problems in the world [3]. This requires students to have scientific literacy [4].
Scientific literacy is currently the main goal in science learning and, more broadly, in science education [4,5,6]. Scientific literacy relates to the ability to use conceptual knowledge of science and the ability to distinguish between scientific data and data from other disciplines [7]. Scientific literacy is the gateway to achieving scientific and technological progress and economic survival, and it can be achieved through science education [2].
Chemistry is part of science and one of its important branches. Chemistry generally studies matter and material properties, which are important in many disciplines such as the health sciences, geography, physics, environmental science and economics [4,8]. Understanding chemistry is very important, because nature is greatly influenced by chemistry and is filled with chemical products [9]. Therefore, the goals of learning chemistry must consider the problems that exist in life, so that students can use their conceptual understanding of chemistry to solve those problems [10]. This ability is called chemical literacy. Thus, current chemistry learning must aim to encourage the development of students' chemical literacy effectively [11,12].
Several studies on chemical literacy have been conducted, including efforts to improve students' chemical literacy through learning [5,13] and to develop chemical literacy assessment instruments [4,22]. In addition, there are also studies that not only develop chemical literacy assessment instruments, but also assessment instruments for chemical literacy and generic science skills [23].
To identify chemical literacy, a chemical literacy assessment instrument is needed. The importance of such instruments rests on the fact that the achievement of chemistry learning requires assessment instruments that not only assess understanding and memorization, but also assess students' ability to apply the concepts they have learned when facing problems [23]. Today it is difficult to find suitable instruments to assess students' chemical literacy [4]. Research on chemical literacy is very important so that chemistry learning can effectively improve students' chemical literacy. Based on this, the researchers were interested in developing assessment instruments for chemical literacy, because these instruments are especially needed for high school students. The quality of the developed instruments then needs to be analyzed with certain parameters.
Method
The research method used in this study was descriptive with quality parameters tested were empirical validity, reliability, readability, distinguishing power, difficulty level and distractors function. Participants in this study were 26 high school students of class XI in one of the high schools in the city of Bandung who had studied solution material which included the concept of acid base, buffer solution, hydrolysis of water by salt, solubility and solubility product constants.
The chemical literacy assessment instruments that were analyzed were developed by the researchers themselves. The instruments developed consisted of 55 multiple choice items to assess knowledge and understanding of chemical content; 12 essay items to assess knowledge and understanding of the relationship between chemistry, technology and society, the application of analytical thinking, and the application of reasoning; and 28 statements to assess aspects of attitude. The content validity of the developed items and attitude scale statements was then tested with the CVR (Content Validity Ratio) method. Items and attitude scale statements that were declared valid were then corrected (if needed) based on the advice given by the validators.
Based on the results of the content validity test, there were 6 multiple choice items with a CVR value of 0.6, while the others had a value of 1. For a panel of five validators, the minimum CVR value for each item is 0.99 [24]. Thus, an item was declared valid, i.e., appropriate with the criteria for content validity, if its CVR value was ≥ 0.99, and was declared invalid if its CVR value was < 0.99. For the essay items, there were 12 valid items with a CVR value of 1. Thus, there were 49 valid multiple choice items and 12 valid essay items. For the attitude scale, the content validity test showed 28 statements with a CVR value of 1. In addition, the CVI value for the multiple choice items was 0.96, for the essay items it was 1, and for the attitude scale it was 1.
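To make the content-validity screening concrete, the following sketch computes Lawshe's CVR for each item and a CVI; the panel ratings are invented, the 0.99 cut-off for five validators follows the criterion cited above, and the CVI is taken here as the mean CVR of the retained items (conventions for the CVI differ).

```python
def cvr(n_essential, n_validators):
    """Lawshe's Content Validity Ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_validators / 2) / (n_validators / 2)

# Hypothetical panel of 5 validators; each entry is the number of validators
# who rated the item "essential" (values are illustrative only).
essential_votes = {"item1": 5, "item2": 4, "item3": 5, "item4": 5}

N = 5
CUTOFF = 0.99  # minimum CVR for a 5-member panel, as cited in the text

results = {item: cvr(votes, N) for item, votes in essential_votes.items()}
valid_items = [item for item, value in results.items() if value >= CUTOFF]
cvi = sum(results[item] for item in valid_items) / len(valid_items)

for item, value in results.items():
    status = "valid" if value >= CUTOFF else "needs revision"
    print(f"{item}: CVR = {value:.2f} ({status})")
print(f"CVI over retained items = {cvi:.2f}")
```

With five validators, an item rated "essential" by only four of them gets CVR = 0.6, which is the situation described above for the six rejected multiple choice items.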
Result and Discussion
The researcher analyzed the quality of the chemical literacy assessment instruments of high school students in solution material.
Empirical Validity, Reliability and Readability Test Results
The chemical literacy assessment instruments were tested for empirical validity by correlation statistics (Tables 1 and 2). The validity of the multiple choice items was calculated with the point-biserial correlation formula, while the validity of the essay items and attitude scale was calculated with the product-moment correlation formula. An instrument can be said to have empirical validity once it has been tested in the school and the correlations have been calculated. In general, if the correlation value is > 0.3, the item is valid [25]. Based on the results of the empirical validity test, there were 6 invalid multiple choice items, namely items number 11, 19, 20, 22, 29 and 43, while for the essays all items were valid (Table 1). The results of the empirical validity test for the attitude scale showed 2 invalid statements, with correlation values of 0.27 (statement 2) and 0.1 (statement 25) (Table 2). If an instrument is valid, it can be said that the instrument can assess students' chemical literacy. Thus, there were 43 valid multiple choice items, 12 valid essay items and 26 valid attitude scale statements. Based on the results of the multiple choice reliability test using the KR-20 formula, the correlation value was 0.93, which is included in the high category [26], while for the essays, using the Cronbach's Alpha formula, the Alpha value was 0.87, which also belongs to the high category. Furthermore, for the attitude scale, using the Cronbach's Alpha formula, the Alpha value was 0.94, which is included in a very high category. The readability test results for the multiple choice items, essays and attitude scale were 97%, 99% and 100%, respectively. This shows that almost all students could understand the assessment instruments given to them.
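The validity and reliability statistics reported above can be reproduced with the standard formulas; the sketch below computes the point-biserial correlation for one dichotomous item, KR-20 for a multiple choice set, and Cronbach's alpha. The score matrix is invented, and the textbook formulas used here may differ in small details from the exact computations carried out by the authors.

```python
import statistics as st

# Rows = students, columns = items; 1 = correct, 0 = incorrect (illustrative data).
mc_scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

def point_biserial(item_col, totals):
    """r_pb = (Mp - Mq) / s_t * sqrt(p * q) for a dichotomous item."""
    p = sum(item_col) / len(item_col)
    q = 1 - p
    mp = st.mean(t for c, t in zip(item_col, totals) if c == 1)
    mq = st.mean(t for c, t in zip(item_col, totals) if c == 0)
    return (mp - mq) / st.pstdev(totals) * (p * q) ** 0.5

def kr20(matrix):
    """Kuder-Richardson 20 reliability for dichotomous items."""
    k = len(matrix[0])
    totals = [sum(row) for row in matrix]
    pq = sum((sum(col) / len(col)) * (1 - sum(col) / len(col))
             for col in zip(*matrix))
    return k / (k - 1) * (1 - pq / st.pvariance(totals))

def cronbach_alpha(matrix):
    """Cronbach's alpha for item scores (also usable for essays / attitude scales)."""
    k = len(matrix[0])
    totals = [sum(row) for row in matrix]
    item_vars = sum(st.pvariance(col) for col in zip(*matrix))
    return k / (k - 1) * (1 - item_vars / st.pvariance(totals))

totals = [sum(row) for row in mc_scores]
print("r_pb item 1:", round(point_biserial([row[0] for row in mc_scores], totals), 2))
print("KR-20:", round(kr20(mc_scores), 2))
print("alpha:", round(cronbach_alpha(mc_scores), 2))
```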
Distinguishing Power, Difficulty Level and Distractor Function Test Results
The difficulty level test was conducted to classify the developed items into difficult, moderate or easy categories, while the distinguishing power test was conducted to find out the ability of an item to distinguish between high-ability students and low-ability students (Table 3). Based on the results of the distinguishing power test, there were 20 multiple choice items with a very good category, 22 items with a good category and 1 item with a sufficient category. The one item with a sufficient category could be corrected so as to distinguish students in the upper and lower groups, or it could be omitted. In this study, the item was omitted because there were other items with the same indicator that had better D values, and those items were sufficient to measure the indicators that had been set. The twelve essay items all had good categories. Items with high distinguishing power correlate positively with the overall test results. In other words, such items are answered correctly by most students who score high on the test and answered incorrectly by most students who score low on the test, so they can distinguish students in the upper and lower groups. Furthermore, the results of the difficulty level test for the multiple choice items were 26 items in the medium category, 10 items in the easy category and 7 items in the difficult category, while for the essays, 9 items were in the medium category, 2 items in the easy category and 1 item in the difficult category. The proportion of these items was quite good because it was close to 3:5:2 (30% easy category items, 50% moderate category items and 20% difficult category items). For the multiple choice items, 23% were in the easy category, 61% in the moderate category and 16% in the difficult category. For the essays, 16% were in the easy category, 75% in the moderate category and 9% in the difficult category.
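The difficulty index and discriminating power can be computed as sketched below; the split into upper and lower halves and the category cut-offs (easy > 0.7, difficult < 0.3) are common conventions assumed here, not necessarily the exact ones used in this study, and the response data are invented.

```python
# Each row is (student_total_score, item_correct); illustrative data only.
responses = [(18, 1), (17, 1), (15, 1), (14, 0), (12, 1),
             (11, 0), (9, 1), (8, 0), (6, 0), (4, 0)]

responses.sort(key=lambda r: r[0], reverse=True)
half = len(responses) // 2
upper, lower = responses[:half], responses[half:]

p = sum(correct for _, correct in responses) / len(responses)    # difficulty index
d = (sum(c for _, c in upper) - sum(c for _, c in lower)) / half  # discriminating power

category = "easy" if p > 0.7 else "difficult" if p < 0.3 else "moderate"
print(f"P = {p:.2f} ({category}), D = {d:.2f}")
```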
A distractor can be said to function well if it is chosen by at least 5% of all students, is chosen especially by students from the lower group, is chosen more by the lower group than by the upper group, and the number of upper-group students who choose the distractor is less than the number of upper-group students who choose the answer key. Based on the results of the distractor function test, there were 12 items with distractors that were chosen by fewer than 5% of all students (Table 4). Thus, there were 5 items that did not need to be corrected, namely items number 1, 2, 3, 5 and 21, while the distractors of the other 37 items needed to be corrected. The chemical literacy assessment instrument developed consists of 42 multiple choice items to assess the aspect of knowledge and understanding of chemical content; 12 essay items to assess the aspects of knowledge and understanding of the relationship between chemistry, technology and society, the application of analytical thinking, and the application of reasoning; and 26 statements to assess the attitude aspect.
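The distractor criteria listed above can be checked mechanically, as in the following sketch; the option counts and group split are invented, and only the two criteria that are simplest to automate (the 5% threshold and the lower-group preference) are shown.

```python
# Option counts for one item, split by upper/lower ability group (illustrative).
key = "B"
option_counts = {               # option: (upper_group, lower_group)
    "A": (1, 4),
    "B": (9, 3),                # answer key
    "C": (2, 5),
    "D": (0, 0),
}

n_students = sum(u + l for u, l in option_counts.values())
for option, (upper, lower) in option_counts.items():
    if option == key:
        continue
    chosen_enough = (upper + lower) / n_students >= 0.05
    favours_lower = lower > upper
    ok = chosen_enough and favours_lower
    print(f"distractor {option}: {'functions well' if ok else 'needs improvement'}")
```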
Based on the development test results, the chemical literacy assessment instruments developed were valid and reliable, so they are suitable for assessing students' chemical literacy, specifically in solution materials. This is in line with the results of other research [4], in which the developed chemical literacy assessment instrument also consisted of a multiple choice test to assess the aspect of knowledge and understanding of chemical content. To examine knowledge and understanding of chemical content, it is suitable to use multiple choice tests, which are flexible in measuring various cognitive levels, give a smaller chance of a correct answer by guessing than true-false and open-stem forms, and can include distractors in the form of misconceptions about certain concepts [27]. An essay test was used to assess the aspects of knowledge and understanding of the relationship between chemistry, technology and society, the application of analytical thinking and the application of reasoning. Essays are suitable for assessing these aspects because the items involve higher-level thinking skills, and essay tests give students the opportunity to freely show the breadth of their knowledge and the depth of their understanding of certain concepts. For the attitude aspect of chemical literacy, the researchers adopted the attitude aspects from PISA, so that three aspects of attitude were evaluated, namely interest in chemistry, support for scientific inquiry, and responsibility for resources and the environment, using attitude scales [28]. The construction of the chemical literacy assessment instrument developed here is similar to the instruments developed by Thummathong and Thathong, Shwartz, and Celik, in which the instrument has to assess five aspects of chemical literacy.
Conclusion
The | 2020-05-28T09:13:16.114Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "85299db0772883500016e56aff64265bfe23ab3e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1521/4/042061",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "147970d3f2ae92ab9a05fd418c649fb17e05008b",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
119644879 | pes2o/s2orc | v3-fos-license | Epic substructures and primitive positive functions
For $\mathbf{A}\leq\mathbf{B}$ first order structures in a class $\mathcal{K}$, say that $\mathbf{A}$ is an epic substructure of $\mathbf{B}$ in $\mathcal{K}$ if for every $\mathbf{C}\in\mathcal{K}$ and all homomorphisms $g,g^{\prime}:\mathbf{B}\rightarrow\mathbf{C}$, if $g$ and $g'$ agree on $A$, then $g=g'$. We prove that $\mathbf{A}$ is an epic substructure of $\mathbf{B}$ in a class $\mathcal{K}$ closed under ultraproducts if and only if $A$ generates $\mathbf{B}$ via operations definable in $\mathcal{K}$ with primitive positive formulas. Applying this result we show that a quasivariety of algebras $\mathcal{Q}$ with an $n$-ary near-unanimity term has surjective epimorphisms if and only if $\mathbb{SP}_{n}\mathbb{P}_{u}(\mathcal{Q}_{RSI})$ has surjective epimorphisms. It follows that if $\mathcal{F}$ is a finite set of finite algebras with a common near-unanimity term, then it is decidable whether the (quasi)variety generated by $\mathcal{F}$ has surjective epimorphisms.
Introduction
Let K be a class of first order structures in the same signature, and let A, B ∈ K. We say that A is an epic substructure of B in K provided that A is a substructure of B, and for every C ∈ K and all homomorphisms g, g′ : B → C such that g|A = g′|A, we have g = g′. That is, if g and g′ agree on A, then they must agree on all of B. At first glance the definition may suggest that A generates B, but on closer inspection this does not make sense. As A is a substructure of B, generating with A will yield exactly A. However, as the main result of this article shows, the intuition that A acts as a set of generators of B is not far off. In fact, if K is closed under ultraproducts, we prove that A actually "generates" B, only that the generation is not through the fundamental operations but rather through primitive positive definable functions. Let's take a look at an example. Write D for the class of bounded distributive lattices. There are several ways to show that both of the three-element chains contained in the bounded distributive lattice B := 2 × 2 are epic substructures of B in D. One way to do this is via definable functions. Note that the formula ϕ(x, y) := x ∧ y = 0 & x ∨ y = 1 defines the complement (partial) operation in every member of D. Let A be the sublattice of B with universe {⟨0, 0⟩, ⟨0, 1⟩, ⟨1, 1⟩}, and suppose there are C ∈ D and g, g′ : B → C such that g|A = g′|A. Clearly B ⊨ ϕ(⟨0, 1⟩, ⟨1, 0⟩), and since ϕ is open and positive, it follows that C ⊨ ϕ(g⟨0, 1⟩, g⟨1, 0⟩) and C ⊨ ϕ(g′⟨0, 1⟩, g′⟨1, 0⟩). Now ϕ(x, y) defines a function in C, and g⟨0, 1⟩ = g′⟨0, 1⟩, so g⟨1, 0⟩ = g′⟨1, 0⟩. Theorem 5 below says that every epic substructure in a class closed under ultraproducts is of this nature (although the formulas defining the generating operations may be primitive positive).
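The argument in this example can also be verified mechanically. The sketch below (not from the paper) enumerates all {∧, ∨, 0, 1}-homomorphisms from B = 2 × 2 into one fixed bounded distributive lattice C and checks that any two of them agreeing on the three-element chain A coincide; C is taken to be the eight-element Boolean lattice 2³ purely as a convenient test target, since epicness in D of course quantifies over all members of D.

```python
from itertools import product

# B = 2 x 2 with componentwise meet/join; A is the chain {00, 01, 11}.
B = [(0, 0), (0, 1), (1, 0), (1, 1)]
A = [(0, 0), (0, 1), (1, 1)]
meet_B = lambda x, y: (min(x[0], y[0]), min(x[1], y[1]))
join_B = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))

# C = 2^3, also with componentwise operations (a bounded distributive lattice).
C = list(product([0, 1], repeat=3))
meet_C = lambda x, y: tuple(map(min, x, y))
join_C = lambda x, y: tuple(map(max, x, y))
zero_C, one_C = (0, 0, 0), (1, 1, 1)

def homomorphisms():
    """All maps B -> C preserving meet, join and the bounds 0, 1."""
    for images in product(C, repeat=len(B)):
        g = dict(zip(B, images))
        if g[(0, 0)] != zero_C or g[(1, 1)] != one_C:
            continue
        if all(g[meet_B(x, y)] == meet_C(g[x], g[y]) and
               g[join_B(x, y)] == join_C(g[x], g[y])
               for x in B for y in B):
            yield g

homs = list(homomorphisms())
pairs = [(g, h) for g in homs for h in homs if all(g[a] == h[a] for a in A)]
assert all(g == h for g, h in pairs), "A would not be epic in B"
print(f"{len(homs)} homomorphisms; any two agreeing on A coincide")
```

The check succeeds because, exactly as in the argument above, the image of ⟨1, 0⟩ must be the complement of the image of ⟨0, 1⟩, and complements are unique in a bounded distributive lattice.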
The notion of epic substructure is closely connected to that of epimorphism. Recall that a homomorphism h : A → B is a K-epimorphism if for every C ∈ K and homomorphisms g, g′ : B → C, if gh = g′h then g = g′. That is, h is right-cancellable in compositions with K-morphisms. Of course every surjective homomorphism is an epimorphism, but the converse is not true. Revisiting the example above, the inclusion of the three-element chain A into 2 × 2 is a D-epimorphism. This also illustrates the connection between epic substructures and epimorphisms. It is easily checked that A is an epic substructure of B in K if and only if the inclusion ι : A → B is a K-epimorphism. A class K is said to have surjective epimorphisms if every K-epimorphism is surjective. Although this property is of an algebraic (or categorical) nature, it has an interesting connection with logic. When K is the algebraic counterpart of an algebraizable logic, then K has surjective epimorphisms if and only if the logic has the (infinite) Beth property ([2, Thm. 3.17]). For a thorough account of the Beth property in algebraic logic see [2]. We do not go into further details on this topic, as the focus of the present article is on the algebraic and model theoretic side.
The paper is organized as follows. In the next section we establish our notation and the preliminary results used throughout. Section 3 contains our characterization of epic substructures (Theorem 5), the main result of this article. We also take a look here at the case where K is a finite set of finite structures. In Section 4 we show that checking for the presence of proper epic subalgebras (or, equivalently, surjective epimorphisms) in certain quasivarieties can be reduced to checking in a subclass of the quasivariety. An interesting application of these results is that if F is a finite set of finite algebras with a common near-unanimity term, then it is decidable whether the quasivariety generated by F has surjective epimorphisms (see Corollary 12).
Preliminaries and Notation
Let L be a first order language and K a class of L-structures. We write I, S, H, P and P u to denote the class operators for isomorphisms, substructures, homomorphic images, products and ultraproducts, respectively. We write V(K) for the variety generated by K, that is HSP(K); and with Q(K) we denote the quasivariety generated by K, i.e., ISPP u (K).
• A is an epic substructure of B in K (written A ≤ e B in K) if A ≤ B, and for every C ∈ K and all homomorphisms g, g′ : B → C, if g|A = g′|A then g = g′. We say that A is a proper epic substructure of B in K (and write A < e B in K) if A ≤ e B in K and A ≠ B.
The next lemma explains the connection between epic substructures and epimorphisms. (1) h is a K-epimorphism.
Proof. Immediate from the definitions.
Here are some straightforward facts used in the sequel.
Main Theorem
Recall that a primitive positive (p.p. for brevity) formula is one of the form ∃ȳ α(x̄, ȳ) with α(x̄, ȳ) a finite conjunction of atomic formulas. We shall need the following fact (Lemma 4): for L-structures A and B, the following are equivalent. (1) Every primitive positive L-sentence that holds in A holds in B.
(2) There is a homomorphism from A into an ultrapower of B.
Let K be a class of L-structures. We say that the L-formula ϕ(x₁, . . . , xₙ, y₁, . . . , yₘ) defines a function in K if K ⊨ ∀x̄ ȳ z̄ (ϕ(x̄, ȳ) ∧ ϕ(x̄, z̄) → ȳ = z̄). In that case, for each A ∈ K we write [ϕ]^A to denote the n-ary partial function defined by ϕ in A.
If X is a set disjoint with L, we write L X to denote the language obtained by adding the elements in X as new constant symbols to L. If B is an L-structure and A is a subset of B, let B A be the expansion of B to L A where each new constant names itself. If L ⊆ L + and A is an L + -model, let A| L denote the reduct of A to L.
Next we present the main result of this article.
Theorem 5. Let K be a class closed under ultraproducts and A ≤ B structures. T.f.a.e.: (1) A is an epic subalgebra of B in K.
(2) For every b ∈ B there are a primitive positive formula ϕ(x̄, y) defining a function in K and a tuple ā from A such that [ϕ]^B(ā) = b.

(1)⇒(2). Let c, d be two new constant symbols. Let C be a model of K* such that C ⊨ Σ(c) ∪ Σ(d). By Lemma 4, there are elementary extensions E, E′ of C and homomorphisms from B into them. The elementary amalgamation theorem [6, Thm. 6.4.1] provides us with an algebra D and elementary embeddings g : E → D, g′ : E′ → D such that g and g′ agree on C. Next, observe that the resulting maps B → D|L are homomorphisms that agree on A, and since D|L ∈ K they must be equal. So, as g is 1-1, and g and g′ are the same on C, we have c^C = d^C.
By compactness (and using that the conjunction of p.p. formulas is equivalent to a p.p. formula), there is a single p.p. L-formula ϕ(x̄, y) such that B ⊨ ϕ(ā, b), and hence K ⊨ ∀x̄, y, z (ϕ(x̄, y) ∧ ϕ(x̄, z) → y = z). This completes the proof of (1)⇒(2).
(2)⇒(1). Suppose (2) holds for A, B and K. Let C ∈ K and let h, h′ : B → C be homomorphisms agreeing on A. Fix b ∈ B. There are a p.p. formula ϕ(x̄, y) defining a function in K and elements ā from A such that [ϕ]^B(ā) = b; since ϕ is preserved by homomorphisms and h and h′ agree on ā, it follows that h(b) = h′(b). It is worth noting that (2)⇒(1) in Theorem 5 always holds, i.e., it does not require K to be closed under ultraproducts. On the other hand, as the upcoming example shows, the implication (1)⇒(2) may fail if K is not closed under ultraproducts.
Example 6. Let L = {s, 0} where s is a binary function symbol and 0 a constant. Let B be the L-structure with universe ω ∪ {ω} such that 0 B = 0 and Take A the subalgebra of B with universe ω. It is easy to see that the identity is the only endomorphism of B. Thus, in particular, we have that A ≤ e B in {B}.
We prove next that there is no p.p. formula with parameters from A defining ω in where ω is a new constant, and let Γ be the L + -theory obtained by adding to the elementary diagram of B the following sentences: Again, it is easy to see that h and h are homomorphisms from B to C| L . Since they agree on A and h(ω) = h (ω), we conclude that there is no p.p. formula with parameters from A defining ω in B.
3.1. The finite case. When K is (up to isomorphisms) a finite set of finite structures, we can sharpen Theorem 5. In this case it is possible to avoid the existential quantifiers in the definable functions at the cost of adding parameters from B.
Theorem 7. Let K be (up to isomorphisms) a finite set of finite structures, and let A ≤ B be finite. T.f.a.e.: (1) A is an epic substructure of B in K.
(2) For every b 1 ∈ B there are a finite conjunction of atomic formulas α(x,ȳ), a 1 , . . . , a n ∈ A and b 2 , Proof. Since K is a finite set of finite structures, there are finitely many formulas in ∆(x,ȳ) up to logical equivalence in K. Thus, there is a finite conjunction of atomic formulas α(x,ȳ) such that K α(x,ȳ) ↔ ∆(x,ȳ).
Take C ∈ K and suppose C ⊨ α(c̄, d̄) ∧ α(c̄, ē). Then the maps h, h′ : B → C given by h : ā, b̄ ↦ c̄, d̄ and h′ : ā, b̄ ↦ c̄, ē are homomorphisms. Since h and h′ agree on A, it follows that h = h′. Hence d̄ = ē, and we have shown that α(x̄, ȳ) defines a function in K.
The example below shows that, in the general case, the existential quantifiers in (2) of Theorem 5 are necessary.
Example 8. Let B be the Browerian algebra whose lattice reduct is depicted in there are a conjunction of equations α(x 1 , . . . , x n , y 1 , . . . , y m ), c 1 , . . . , c n ∈ A and d 2 , . . . , d m ∈ B such that Let C and D be the subalgebras of B generated byc andc,d respectively. Note that D is finite and C < D. Also note that α(x,ȳ) defines a function in V(D), and D α(c,d), because α is quantifier-free. So we have C < e D in V(D); but this is not possible, as Corollary 5.5 in [1] implies that there are no proper epic subalgebras in finitely generated varieties of Browerian algebras.
Checking for epic subalgebras in a subclass
In the current section all languages considered are algebraic, i.e., without relation symbols. Given a quasivariety Q it can be a daunting task to determine whether Q has surjective epimorphisms, or equivalently, no proper epic subalgebras. In this section we prove two results that, under certain assumptions on Q, provide a (hopefully) more manageable class S ⊆ Q such that Q has no proper epic subalgebras iff S has no proper epic subalgebras.
Our first result provides such a class S for quasivarieties with a near-unanimity term. The second one for arithmetical varieties whose class of finitely subdirectly irreducible members is universal.
When n = 3 the term t is called a majority term for K. In every structure with a lattice reduct the term (x ∨ y) ∧ (x ∨ z) ∧ (y ∨ z) is a majority term. This example is specially relevant since many classes of structures arising from logic algebrizations have lattice reducts.
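As a quick sanity check (not taken from the paper), the sketch below verifies on a small lattice that the median term above satisfies the majority identities t(x, x, y) = t(x, y, x) = t(y, x, x) = x; the divisor lattice of 30 under gcd and lcm is used only as a convenient example.

```python
from math import gcd
from itertools import product

DIVISORS_OF_30 = [1, 2, 3, 5, 6, 10, 15, 30]
meet = gcd                                   # lattice meet = gcd
join = lambda a, b: a * b // gcd(a, b)       # lattice join = lcm

def median(x, y, z):
    """The lattice median term (x v y) ^ (x v z) ^ (y v z)."""
    return meet(meet(join(x, y), join(x, z)), join(y, z))

# Majority (near-unanimity, n = 3) identities: any two equal arguments win.
for x, y in product(DIVISORS_OF_30, repeat=2):
    assert median(x, x, y) == median(x, y, x) == median(y, x, x) == x

print("median term is a majority term on the divisor lattice of 30")
```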
For the sake of the exposition the results are presented for quasivarieties with a majority term. They are easily generalized to quasivarieties with an arbitrary near-unanimity term.
For functions f : A → A and g : g(b)).
Theorem 9 ([8]). Let K be a class of structures with a majority term and suppose ϕ(x, y) defines a function in K. T.f.a.e.: (1) There is a term t(x) such that K ∀x, y ϕ(x, y) → y = t(x).
(2) For all A, B ∈ P u (K), all S ≤ A × B and all s 1 , . . . , s n ∈ S such that An algebra A in the quasivariety Q is relatively subdirectly irreducible provided its diagonal congruence is completely meet irreducible in the lattice of Qcongruences of A. We write Q RSI to denote the class of relatively subdirectly irreducible members of Q. For a class K let K × K := {A × B | A, B ∈ K}.
Theorem 10. Let Q be a quasivariety with a majority term and let S = P u (Q RSI ). T.f.a.e.: (1) Q has surjective epimorphisms. Let ψ(y) := ϕ(ā, y), and note that ψ(y) defines a nullary function in K. Note as well that ∃y ψ(y) ∈ Σ, and hence [ψ] K is defined for every K ∈ K . We aim to apply Theorem 9 to K and ψ(y). To this end fix C, D ∈ P u (K) = K and let S ≤ C × D. Note that as Σ is a set of p.p. formulas we have C × D Σ, and thus by Lemma 4 there is an ultrapower E of C × D and a homomorphism h : B A → E. We have that E ∈ P u (K × K) ⊆ P u (K) × P u (K) = K × K, and so E| L ∈ K| L × K| L ⊆ S × S.
Next observe that since h(A) ≤ e h(B) in Q, and h(A), h(B) ≤ E| L , by (3) it follows that h(A) = h(B). Also, as S is an L
The fact that B ψ(b) implies E ψ(hb), and so [ψ] E = hb ∈ S. We know that {C, D, C × D} ∃y ψ(y); furthermore, since ψ is p.p., we have Putting all this together Thus, Theorem 9 produces an L A -term t such that (4.1) K ∀y ψ(y) → y = t.
In particular, for all C ∈ Q RSI and all c 1 , . . . , c n ∈ C such that Σ and thus B A i ∈ K for all i ∈ I. Since ∀y ψ(y) → y = t is (equivalent to) a quasi-identity, from (4.1) and (4.2) we have B A ∀y ψ(y) → y = t. Hence b = t B A ∈ A, and the proof is finished.
Observe that Theorem 10 holds for any S ⊆ Q closed under ultraproducts and containing Q RSI .
Proof. For any class K we have Q(K) RSI ⊆ ISP u (K). Thus if Q is finitely generated, then Q RSI is (up to isomorphic copies) a finite set of finite algebras, and the corollary follows at once from Theorem 10.
Recall that an algebra A is finitely subdirectly irreducible if its diagonal congruence is meet irreducible in the congruence lattice of A. It is subdirectly irreducible if the diagonal is completely meet irreducible. For a variety V we write (V F SI ) V SI to denote its class of (finitely) subdirectly irreducible members.
An interesting consequence of Corollary 11 is the following.
Corollary 12. Let F be a finite set of finite algebras with a common majority term. It is decidable whether the (quasi )variety generated by F has surjective epimorphisms.
Proof. Let V be the variety generated by F. By Jónsson's lemma [7] V SI ⊆ HSP u (F) = HS(F) is a finite set of finite structures, and by Corollary 11 it suffices to decide whether S(V SI × V SI ) has surjective epimorphisms, and this is clearly a decidable problem. If Q is the quasivariety generated by F, then Q RSI ⊆ ISP u (F) = IS(F), and the same reasoning applies.
4.2.
Arithmetical varieties whose FSI members form a universal class. A variety V is arithmetical if for every A ∈ V the congruence lattice of A is distributive and the join of any two congruences is their composition. For example, the variety of boolean algebras is arithmetical.
Lemma 13. Let V be an arithmetical variety such that V F SI is a universal class, and let ϕ(x, y) be a p.p. formula defining a function in V. Suppose that for all A ∈ V F SI , all S ≤ A and all s 1 , . . . , s n ∈ S such that A ∃y ϕ(s, y), we have S ∃y ϕ(s, y). Then there is a term t(x) such that V ∀x, y ϕ(x, y) → y = t(x).
Proof. Add new constants c 1 , . . . , c n to the language of V and let K := {(A,ā) | A ∃y ϕ(c, y) and A ∈ V F SI }. Note that ψ(y) := ϕ(c, y) defines a nullary function in K, and this function is defined for every member of K. Also note that by our assumptions K is a universal class. Using Jónsson's lemma [7] it is not hard to show that V(K) F SI = K. Since K| L is contained in an arithmetical variety it has a Pixley Term [3,Thm. 12.5], which also serves as a Pixley Term for K, and thus V(K) is arithmetical. Next we show that ψ(y) is equivalent to a positive open formula in K. By [4,Thm. 3.1] it suffices to show that • For all A, B ∈ K, all S ≤ A, all h : S → B and every a ∈ A we have that A ψ(a) implies B ψ(ha). So suppose A ψ(a). From our hypothesis and the fact that ψ(y) defines a function we have S ψ(a), and as ψ(y) is p.p. we obtain B ψ(ha). Hence there is a positive open formula β(y) equivalent to ψ(y) in K. Now, [5,Thm. 2.3] implies that there is a conjunction of equations α(y) equivalent to β(y) (and thus to ψ(y)) in K. We have K ∃!y α(y), and by [4,Lemma 7.8] there is an L ∪ {c 1 , . . . , c n }-term t such that V(K) α(t ). Let t(x 1 , . . . , x n ) be an L-term such that t = t(c). So, if Γ is a set of axioms for V F SI , we have Γ ∪ {∃y ϕ(c, y)} ϕ(c, t(c)), and this implies Γ ∃y ϕ(c, y) → ϕ(c, t(c)), or equivalently V F SI ∀y(ϕ(c, y) → ϕ(c, t(c))). This and the fact that that ϕ(x, y) defines a function in V yields V F SI ∀x, y ϕ(x, y) → y = t(x).
To conclude, note that ∀x, y ϕ(x, y) → y = t(x) is logically equivalent to a quasiidentity, and since it holds in V F SI it must hold in V.
Theorem 14. Let V be an arithmetical variety such that V F SI is a universal class T.f.a.e.: (1) V has surjective epimorphisms.
Proof. We prove (3)⇒(2) which is the only nontrivial implication. Suppose A ≤ e B in V and let b ∈ B. We shall see that b ∈ A. By Theorem 5 there is a p.p. L-formula ϕ (x, y) defining a function in V, and such that [ϕ] B (ā) = b for someā ∈ A n . Let Σ := {ε | ε is a p.p. sentence of L A and B A ε}, Claim. K is a universal class.
Since K is axiomatizable we only need to check that K is closed under substructures. Let C ≤ D ∈ K; clearly C| L ∈ V F SI , so it remains to see that C Σ. As To show that V(K) is arithmetical we can proceed as in the proof of Lemma 13. We prove V(K) F SI = K. Note that for C ∈ K we have that C and C| L have the same congruences; hence every algebra in K is FSI. For the other inclusion, Jónsson's lemma [7] produces V(K) F SI ⊆ HSP u (K), and by the first claim HSP u (K) = H(K). So, as H(K) Σ, we have that V(K) F SI Σ and thus V(K) F SI ⊆ K.
Next we want to apply Lemma 13 to V(K) and ϕ(ā, y), so we need to check that the hypothesis hold. Take C ∈ K and S ≤ C. Since K is universal we have S ∈ K, and thus S ∃y ϕ(ā, y). Let t be a term such that V(K) ∀y ϕ(ā, y) → y = t. Then b = t B A ∈ A, and we are done.
Every discriminator variety (see [3,Def. 9.3] for the definition) satisfies the hypothesis in Theorem 14. Furthermore, in such a variety every FSI member is simple (i.e., has exactly two congruences). Writing V S for the class of simple members in V we have the following immediate consequence of Theorem 14.
Corollary 15. For a discriminator variety V the following are equivalent.
(1) V has surjective epimorphisms.
(2) For all A, B ∈ V we have that A ≤ e B in V implies A = B.
(3) For all A, B ∈ V S we have that A ≤ e B in V S implies A = B.
It is not uncommon for a variety arising as the algebrization of a logic to be a discriminator variety; thus the above corollary could prove helpful in establishing the Beth definability property for such a logic.
Another special case relevant to algebraic logic to which Theorem 14 applies is given by the class of Heyting algebras and its subvarieties (none of these are discriminator varieties with the exception of the class of boolean algebras). Heyting algebras constitute the algebraic counterpart to intuitionistic logic, and have proven to be a fertile ground to investigate definability and interpolation properties of intuitionistic logic and its axiomatic extensions by algebraic means (see [1] and its references).
I would like to thank Diego Castaño and Tommaso Moraschini for their insightful discussions during the preparation of this paper. | 2019-04-12T03:55:17.707Z | 2016-07-11T00:00:00.000 | {
"year": 2016,
"sha1": "cf80da9300d1d4624b4b70d622db121240642e6b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cf80da9300d1d4624b4b70d622db121240642e6b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
265032026 | pes2o/s2orc | v3-fos-license | Overlapping Atypical Hemolytic Uremic Syndrome and C3 Glomerulopathy with Mutation in CFI in a Japanese Patient: A Case Report
A 34-year-old Japanese man presented with blurred vision, headache, nausea, anemia, thrombocytopenia, and severe renal dysfunction. Thrombotic microangiopathy was initially suspected to have been caused by malignant hypertension. Antihypertensive medications did not improve his thrombocytopenia or renal dysfunction, and other diseases causing thrombotic microangiopathy were ruled out. Therefore, the patient was diagnosed with atypical hemolytic uremic syndrome. A renal biopsy revealed an overlap of thrombotic microangiopathy and C3 glomerulopathy. Genetic testing revealed c.848A>G (p.Asp283Gly), a missense heterozygous variant in the gene encoding complement factor I. Overlapping atypical hemolytic uremic syndrome and C3 glomerulopathy with complement factor I mutation is very rare, especially in Japan.
Introduction
Dysregulation of the complement pathway has been reported to be associated with aggravation of renal damage (1). Atypical hemolytic uremic syndrome (aHUS) and C3 glomerulopathy are representative complement-mediated kidney diseases. Although genetic and acquired factors related to the complement pathway reportedly play a role in the etiology and disease susceptibility of both disorders, aHUS and C3 glomerulopathy generally differ in terms of renal pathology and clinical findings.
aHUS clinically shows thrombotic microangiopathy (TMA), characterized by the triad of acute kidney injury, microangiopathic hemolytic anemia, and thrombocytopenia. The diagnosis of aHUS requires the exclusion of other diseases that cause TMA, such as thrombotic thrombocytopenic purpura (TTP) and Shiga toxin-producing Escherichia coli hemolytic uremic syndrome. In renal pathology, aHUS exhibits TMA findings, such as endothelial swelling, subendothelial accumulation of proteins, and thrombosis in glomerular capillaries on light microscopy, with no deposition of immune complexes or complements on immunofluorescence staining.
aHUS is caused by overactivation of the alternative pathway of the complement system at the endothelial cell surface (2). It is triggered by congenital or acquired abnormalities in the complement activation regulators. Genetic abnormalities are found in approximately 46% of patients with aHUS (3). Genetic cases of aHUS have been caused by pathological mutations in genes encoding complement factor H (CFH), complement factor I (CFI), complement factor B (CFB), complement C3 (C3), CD46 (CD46), thrombomodulin (THBD), diacylglycerol kinase ε (DGKE), plasminogen (PLG), and inverted formin 2 (INF2), whereas reported acquired cases have included those with anti-factor H antibody positivity (4). There are extensive reports on anti-factor H antibodies and mutations in CFH or C3 in Japanese patients with aHUS (3). However, there have been no reported Japanese cases of aHUS due to mutations in CFI.
C3 glomerulopathy often presents with membranoproliferative glomerulonephritis patterns and glomerular accumulation of complement proteins, characterized by bright C3 staining on immunofluorescence microscopy with minimal or no staining for immunoglobulins. C3 glomerulopathy often progresses to end-stage kidney disease and recurs following renal transplantation. Pathological observations in patients with C3 glomerulopathy indicate selective alternative pathway overactivation and C3 consumption during the fluid phase (1). In addition, C3 glomerulopathy is subclassified according to electron microscopy findings as either C3 glomerulonephritis or dense deposit disease. C3 glomerulonephritis is characterized by mesangial, subendothelial, intramembranous, and sometimes subepithelial capillary wall deposits, while in dense deposit disease the deposits are dense, osmiophilic, sausage-shaped, intramembranous, and mesangial (5,6).
We herein report a rare case of a Japanese patient with overlapping aHUS and C3 glomerulopathy with the p.Asp283Gly mutation in CFI.
Case Report
A 34-year-old Japanese man with no remarkable medical history presented to our hospital with a 1-week history of blurred vision in both eyes, headaches, and nausea.On the day of the presentation, the patient was alert and oriented.There were no episodes of diarrhea or medication use in the previous few months.His body temperature was normal (36.2°C).His pulse rate was 90 beats/min with a regular rhythm; however, his blood pressure was 230/130 mmHg.He had a family history of hypertension, including his mother and brother; however, his personal blood pressure history was unknown because he had not undergone any medical examinations.None of his close relatives had a history of kidney disease or urinary abnormalities.
A physical examination revealed no noticeable neurological abnormalities, mucosal ulceration, lymphadenopathy, or skin rashes. A fundus examination revealed discoid edema, flaming hemorrhaging, hard exudate, and arteriolar narrowing, consistent with grade-4 hypertensive retinopathy (Fig. 1). The laboratory data at the time of the presentation are presented in the Table. Blood tests revealed anemia (hemoglobin, 8.1 g/dL), thrombocytopenia (platelet count, 7.6×10⁴/μL), renal dysfunction [serum creatinine, 8.16 mg/dL; estimated glomerular filtration rate (eGFR), 7.1 mL/min/1.73 m²], elevated lactate dehydrogenase levels (928 U/L), and decreased haptoglobin levels (3 mg/dL). The white blood cell count and C-reactive protein level were within normal ranges. Tests for antinuclear antibodies, myeloperoxidase-anti-neutrophil cytoplasmic antibody, proteinase 3-anti-neutrophil cytoplasmic antibody, hepatitis B virus, and hepatitis C virus were negative. A urinalysis showed proteinuria with 3.5 g/gCr and microhematuria with 30-49 glomerular red blood cells/high-power field. Computed tomography (CT) findings of the brain, chest, and abdomen were all unremarkable, and an electrocardiogram showed a sinus rhythm.
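For orientation, the reported eGFR of 7.1 mL/min/1.73 m² is consistent with the creatinine-based equation of the Japanese Society of Nephrology, reproduced in the sketch below; the case report does not state which formula was used, so the choice of equation here is our assumption.

```python
def egfr_japanese(serum_cr_mg_dl, age_years, female=False):
    """Japanese Society of Nephrology eGFR (mL/min/1.73 m^2):
    194 * Cr^-1.094 * Age^-0.287, multiplied by 0.739 for women."""
    egfr = 194 * serum_cr_mg_dl ** -1.094 * age_years ** -0.287
    return egfr * 0.739 if female else egfr

# Values reported for this patient: serum creatinine 8.16 mg/dL, age 34, male.
print(round(egfr_japanese(8.16, 34), 1))  # ~7.1, matching the reported eGFR
```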
TMA due to malignant hypertension was initially suspected based on these clinical findings. The patient was immediately admitted to our hospital and started on antihypertensive treatment with nicardipine (50 mg/day). Simultaneously, we carried out several tests to rule out other conditions known to cause TMA. The a disintegrin and metalloproteinase with thrombospondin type 1 motif, member 13 (ADAMTS13) activity was normal, and its inhibitor was negative, excluding the possibility of TTP. The stool culture was negative for STEC, which ruled out Shiga toxin-producing E. coli hemolytic uremic syndrome. No hypocomplementemia was observed. In addition, there was no evidence of collagen disease, cancer, drug reactions, or infectious disease, which could be causes of secondary TMA, as assessed by laboratory and imaging examinations.
After antihypertensive medication was started, the headache and nausea improved, and urinary protein excretion gradually decreased to approximately 2 g/gCr. The platelet count increased to 160,000/μL but soon decreased again to 50,000/μL, and the improvement in the renal function was poor. Drug-induced thrombocytopenia, infection, and disseminated intravascular coagulation were excluded as causes of the decrease in the platelet count. In addition, there was no bleeding tendency during clinical observation at this time point. Therefore, we considered that pathological information from a renal biopsy was necessary for a definitive diagnosis, treatment decision, and prognostic estimation. A percutaneous renal biopsy was performed on day 13 of hospitalization to determine the cause of the severe renal insufficiency. Renal biopsy specimens included eight glomeruli, four of which were globally sclerosed. Periodic acid-Schiff staining of the glomeruli revealed diffuse global endocapillary proliferative changes with endothelial swelling (Fig. 2a). Periodic acid-silver methenamine staining showed double contours of partial capillary walls (Fig. 2b), and Masson's trichrome staining showed fibrin thrombi (Fig. 2c). There were no adhesions or crescents in Bowman's space. Focal cellular infiltration, tubular atrophy, and interstitial fibrosis were observed in the tubulointerstitium. There was also intimal fibrosis with narrowed lumina and concentric lamination of intimal fibrosis, a so-called "onion skin" appearance (Fig. 2d). Immunofluorescence staining predominantly showed C3 deposition along the basement membrane and mesangial area (Fig. 2e), whereas IgA, IgG, IgM, C1q, and C4d staining were all negative. Electron microscopy revealed widespread foot process effacement and electron-dense deposits in the subendothelial spaces (Fig. 2f). Therefore, the diagnosis of the renal pathology was considered to be overlapping TMA and C3 glomerulopathy.
The clinical course of the patient is shown in Fig. 3. Thrombocytopenia and renal dysfunction did not improve with antihypertensive treatment; therefore, the differential diagnosis of TMA was re-assessed, and aHUS was diagnosed. Three sessions of plasma exchange were performed, followed by eculizumab with meningococcal vaccine administration. After plasma exchange and subsequent initiation of eculizumab, the platelet counts increased and remained stable. In addition, the serum creatinine levels gradually decreased. Hypertension was controlled at approximately 130/80 mmHg with the oral antihypertensive medications nifedipine and olmesartan medoxomil. The patient was discharged from the hospital on day 40 and switched from eculizumab to ravulizumab after discharge, eventually continuing treatment with ravulizumab every eight weeks.
To confirm activation of the complement cascade, we sent the patient's plasma to the Department of Nephrology at Nagoya University, Japan for a quantitative hemolytic assay using sheep blood cells and human citrated plasma (7). Only 11-15% hemolysis was observed in the patient's plasma compared to healthy human plasma supplemented with the monoclonal antibody that inhibits CFH activity. This result suggests that this patient may not have any CFH mutations or anti-factor H antibodies. Furthermore, no anti-factor H autoantibodies were detected in the patient's plasma by an enzyme-linked immunosorbent assay. In addition, genetic testing was performed for the significant genes in the complement pathway related to aHUS (CFH, CFI, CFB, C3, CD46, THBD, DGKE) at the Kazusa DNA Research Institute in Chiba, Japan. One missense variant was found in the compound heterozygous form, c.848A>G (p.Asp283Gly), in CFI. Franklin by Genoox (Palo Alto, CA, USA) (8) and VarSome (Saphetor, Lausanne, Switzerland) (9) were used to classify variants. These classifications are based on population, predictive, computational, functional, segregation, de novo, allelic, and other types of data. This gene candidate was classified as a likely pathogenic variant by Genoox and as a pathogenic variant by VarSome.
Discussion
We encountered a rare Japanese case of overlapping aHUS and C3 glomerulopathy with the p.Asp283Gly mutation in CFI. The clinical diagnosis of aHUS is TMA with the triad of microangiopathic hemolytic anemia, thrombocytopenia, and acute kidney injury, after excluding Shiga toxin-producing E. coli hemolytic uremic syndrome, TTP, and secondary TMA due to, for example, metabolic diseases, autoimmune diseases, gestational hypertension, HELLP syndrome, malignancy, infection, drug-induced causes, and malignant hypertension. However, differentiating aHUS from other TMAs is often difficult, particularly in cases with malignant hypertension. The practical guidelines for aHUS in 2023 (Japan) also recommend reassessing the cause of TMA in cases of poor remission with secondary TMA (10). In our case, TMA due to malignant hypertension was initially suspected, but the TMA findings did not improve with antihypertensive drugs; therefore, the patient was re-evaluated and finally diagnosed with aHUS.
Interestingly, the renal pathology in this case showed an overlap between TMA and C3 glomerulopathy.Light microscopy revealed characteristic findings of TMA, such as glomerular endothelial swelling, double contours, and fibrin thrombi, as well as segmental lobulation and membranoproliferative glomerulonephritis-like findings.Immunofluorescence staining also revealed bright glomerular staining for C3 and no staining for other immunoglobulins or complement factors.Furthermore, electron microscopy revealed electron-dense deposits in subendothelial spaces.These findings are incompatible with those of TMA alone.Therefore, the diagnosis of renal pathology in our case was considered to be overlapping TMA and C3 glomerulopathy.
Both aHUS and C3 glomerulopathy are representative complement-mediated kidney diseases caused by genetic or acquired dysregulation of the complement pathway (11). In the present case, a heterozygous missense variant, c.848A>G (p.Asp283Gly), in CFI was identified by screening for mutations in genes related to the complement pathway. The variant in this patient (c.848A>G) resulted in the substitution of the Asp residue with Gly at position 283 in the low-density lipoprotein receptor domain class A of the CFI protein, which has been identified as a crucial calcium-binding site in this domain (12). Some variants in genes related to the complement pathway, which result in dysregulation of the complement pathway, have been implicated in the pathogenesis of both aHUS and C3 glomerulopathy (1). To date, various pathological mutations in CFH, CFI, CFB, and C3 have been reported as congenital causes of aHUS and C3 glomerulopathy. In addition, both diseases share the same genetic variations (12,13). Ravindran et al. recently reported a series of five patients with overlapping C3 glomerulopathy and TMA among 114 patients with C3 glomerulopathy in native kidney biopsies. Among them, three cases underwent complement evaluations, of which two were abnormal; one case showed a pathogenic mutation of CFH, and the other showed multiple variants of unknown significance along with an anti-factor H autoantibody and C4 nephritic factor (14). Interestingly, a previously reported case of C3 glomerulopathy and TMA in a transplanted kidney after pulmonary infection in a young man had two CFI mutations, one of which was c.848A>G (p.Asp283Gly), similar to that in our patient (15). This report suggests that patients with the missense variant c.848A>G (p.Asp283Gly) in CFI may be genetically susceptible to C3 glomerulopathy and TMA.
Furthermore, although this missense variant in CFI was classified as a likely pathogenic variant by Genoox (8) and as a pathogenic variant by VarSome (9), its allele frequency is 0.1% according to the Japanese genome database, which is notably higher than the global database frequency of 0.0007% (16). The penetrance of this variant in Japanese populations, even if it is pathogenic, is therefore considered low.
To date, differences in the frequency of genetic variations in patients with aHUS have been reported between Japan and other countries, such as those of Europe and the USA. In a nationwide epidemiological survey of 118 Japanese aHUS patients enrolled between 1998 and 2016, C3 mutations were the most common (25.3%), and there were no reports of CFI mutations (3). In Italy, France, and the USA, C3 mutations are rare, and CFH mutations account for more than 20%; CFI mutations were reported at rates of 3.7% in Italy, 8.4% in France, and 8.3% in the USA (17)(18)(19). Therefore, our case of a Japanese patient with a CFI mutation in aHUS is considered rare.
In conclusion, we encountered a rare case of a Japanese individual with overlapping aHUS and C3 glomerulopathy and a heterozygous p.Asp283Gly mutation in CFI. Complement-mediated kidney diseases, such as aHUS and C3 glomerulopathy, are rare. Data on Japanese patients remain insufficient; therefore, further case series and findings from both experimental and clinical studies of complement-mediated kidney disease are needed to elucidate the pathology.
The patient described in this case report provided his informed consent for publication of the details of his case.
Figure 2. Kidney biopsy findings.Periodic acid-Schiff staining revealed diffuse global endocapillary proliferative changes with endothelial swelling (a).Periodic acid-silver-methenamine staining showed double contours of the partial capillary walls (b).Masson's trichrome staining revealed fibrin thrombi (c).There was intimal fibrosis with narrowed lumina and concentric lamination of intimal fibrosis, which causes an "onion skin" appearance (d).Immunofluorescence microscopy showed prominent C3 positivity (+) along the basement membrane and mesangium (e), whereas IgA, IgG, IgM, C1q, and C4d staining were all negative.Electron microscopy of one glomerulus showed widespread foot process effacement and electron-dense deposits in the subendothelial spaces (f) (Original magnification, a-e, 400×; f, 5,000×). | 2023-11-07T06:18:18.066Z | 2023-11-06T00:00:00.000 | {
"year": 2023,
"sha1": "e038f89f2b504cae34f31a01ef148c8305e2b5d3",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/advpub/0/advpub_2713-23/_pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "77bc0c5747e0692270462371a2dd7ce3089b423a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
182293073 | pes2o/s2orc | v3-fos-license | A new method for congestion avoidance in wireless mesh networks
Wireless Mesh Networks (WMNs) will play an important role in next-generation wireless communications involving wireless networks. The traffic in this network (WMN) often saturates on certain paths, causing congestion problems to occur. Currently, many proposed protocols have been created based on Ant Colony Optimization (ACO) to contribute towards solving this particular problem. Unfortunately, most of these methods disregard the congestion problem after an optimal path is found. In this paper, a New Congestion Avoidance Method (NCAM) is proposed. NCAM is designed to improve load balancing by solving congestion problems after the optimal path is found. There are three mechanisms proposed in NCAM: detection of congestion in each optimal node to prepare a suboptimal path, updating of suboptimal pheromone value, and transferring data packets to the suboptimal path. We implemented our method in Network Simulator Version 2 and measured its effective performance compared to a family of existing ACO approaches in terms of packet throughput, end-to-end delay, and packet loss. The result demonstrates that NCAM provided better throughput, decreased end-to-end delay, and less packet loss compared to AntNet and CACO.
Introduction
A Wireless Mesh Network (WMN) is a network in which the nodes in the network use a mesh topology. Each node in WMN acts as a part of the whole network; they are able to perform selfconfiguration and self-healing. Mesh routers connect mesh nodes to the Internet, and are therefore called gateway nodes [27]. Traffic flow in the mesh network is transmitted through these gateways. There are two kinds of nodes that are used in WMN: access points (packet forwarding/serving) and router (packet forwarding). The architecture of WMN is divided into three groups: infrastructure, client WMNs, and hybrid WMNs. In the infrastructure, the network consists of mesh routers/access points with a mesh topology that allows client nodes to transmit packets through the network. Meanwhile, the client WMNs provide a direct network connection between the client devices. Finally, the hybrid WMN is a combination of backbones and client networks. The hybrid WMNs are
Related works
In this section, we provide an overview of some Ant Colony Optimization (ACO) routing algorithms and congestion avoidance methods in wireless networks. Each implementation in the ACO family has its own strengths and weaknesses, which depend on the characteristics and nature of each network.
In 2007, one study [13] proposed POSANT, a routing algorithm based on the ant colony algorithm for mobile ad-hoc network infrastructures (MANETs). The idea of the study centered on making use of position information obtained from electronic instruments such as GPS receivers, combined with the ACO technique. The proposed method is a reactive routing algorithm, and the authors argue that the use of position information can reduce the number of ants generated and at the same time decrease route-setup time. Position information played the main role in the work [13] and was used in the heuristics maintained at each MANET node to assist ants in choosing the next neighbor to move to during path routing. The drawback of POSANT is its treatment of the transmission time of an ant moving to the next node: the study assumed all transmission times to be the same, meaning that the algorithm ignored problems common in WMNs, namely intra-flow/inter-flow interference and packet loss.
Another distributed routing algorithm for MANETs, named SARA, was proposed in a previous study [7]. SARA was designed in 2010 with the objective of lowering computational complexity. The particular purpose of this algorithm was to create routes on demand, establishing it as a reactive protocol. A characteristic of the agents (ants) in SARA is that they store only node identity information, to avoid redundant overhead. Ants make decisions based on pheromone values (probabilities) only, meaning that there is no mechanism to capture network dynamics or network characteristics such as interference, saturated paths, etc., which often occur in networks with high nodal densities such as the Wireless Mesh Network (WMN).
Ant Mesh routing for Interference Avoidance (AMIRA) [1] was introduced as an interference-aware routing protocol designed to improve load balancing in WMNs. AMIRA combines a local heuristic technique with a meta-heuristic approach to prevent interference within and among network flows. While performing path routing, AMIRA's agents select paths with less interference; to do so, each node uses MAC-level information gathered from link-quality measurements, as explained in detail in the study. However, AMIRA was designed only for single-radio, single-channel WMNs.
Bokhari [3] and Zaruba [4] designed AntMesh and SmartAnt, respectively. Both studies used the ACO algorithm and implemented a load-balancing improvement similar to AMIRA [1], while also targeting a multi-radio infrastructure. Both implemented two modules: link estimation (LE) and path estimation (PE). LE estimates delay and load while measuring the cost of local links, whereas PE captures interference and uses this information to select a low-interference path with high channel diversity. All nodes in both studies maintain two kinds of tables: the pheromone table (fitness of choosing the next node) and the local estimation table (quality/strength of the outgoing links). Nevertheless, both studies lack mechanisms to avoid congestion when all packets flow along the optimal paths. Grover [16] proposed an idea to reduce congestion in a network: the buffer at each intermediate node is checked twice, first for its overall occupancy and, if that check succeeds, for the occupancy of the packets queued towards the available destinations. When both conditions hold, packets are forwarded through alternate paths. However, this method forwards the data packets of a chosen source-destination pair to an alternate path without checking whether that path is optimal.
In another study, a mechanism called the Congestion Aware Ant Colony Optimization (CACO) approach [17] was proposed to minimize congestion. CACO covers the design of a routing framework and a congestion-control algorithm for Wireless Mesh Networks (WMNs), and builds on the AntMesh techniques presented in [3]. The main idea of CACO is to keep packets from flowing into congested areas once congestion is identified. Its disadvantage, however, concerns the packets that have to be rerouted when the optimal path is congested: while packets are routed away from congested areas, there is no guarantee that the new paths are optimal.
To extend the ACO algorithm, AntNet was proposed [19]. Its idea is to find the best path in the network using two types of ants: forward ants (FAs) and backward ants (BAs). FAs store useful information such as paths, traffic conditions, and neighbor-node status. In AntNet, an FA dies after it reaches the destination node and is replaced by a BA. The BA updates the pheromone value, i.e., the probability that each next node will be selected in a routing operation. Each node has its own pheromone table storing the list of reachable nodes and their pheromone values. Details on how to compute the pheromone value in the AntNet framework are given in [19]. Table 1 outlines the strengths and weaknesses of the studies discussed above that implement the Ant Colony Optimization algorithm.
SmartAnt (2012)
Strengths: an efficient data-forwarding scheme; designed for load balancing in multi-radio WMNs; able to exploit space/channel diversity in multi-radio WMNs; can discover high-throughput paths; less inter-/intra-flow interference.
Weaknesses: results demonstrated only under high-load network conditions; no mechanism to reduce congestion after the optimal path is found.
CACO
Strengths: provides high throughput and low end-to-end delay; works well under network congestion; avoids congested areas; keeps forwarding packets until they reach the destination nodes.
Weaknesses: no security for packets; does not check whether the new path is optimal.
AMIRA
Weaknesses: designed for a single radio and a single channel; no mechanism to estimate the path quality of neighbor nodes.
AntHocNet (2001)
Strengths: a hybrid multi-path algorithm designed for wireless mobile ad-hoc networks; consists of reactive and proactive methods; can explore new paths and obtain up-to-date link-quality information.
Weaknesses: the number of ants that must be sent to find the destination.
CCS (2015)
Strengths: a congestion-reducing protocol for MANETs, based on the AODV routing protocol; has a mechanism to check the buffers at intermediate nodes to avoid congestion.
Weaknesses: outside the scope of our work; reroutes particular source-destination pairs without checking whether the alternate path is optimal.
New Congestion Avoidance Method
The New Congestion Avoidance Method (NCAM) is designed to improve load balancing by resolving congestion after the optimal path has been found, at which point part of the data traffic is split onto a suboptimal path. The basic operation of NCAM follows the routing protocol described in a previous study [4]. Four types of ants are used in NCAM: forward ants (FAs), which travel from source to destination to discover paths; backward ants (BAs), which travel from destination to source to update the routing tables; and local forward ants (LFAs) and local backward ants (LBAs), which are used to route suboptimal paths around a pair of optimal nodes whenever that node pair exceeds its data-packet limit.
How NCAM works
The concept of NCAM is based on SmartAnt [4]; after the optimal path is found, NCAM adds three mechanisms for congestion control, as illustrated in figure 1. Step 1: When an agent finds an optimal path, most data packets follow that path. At the same time, the congestion level is monitored in order to prepare a suboptimal path.
Step 2: When the buffer of an optimal nodal pair goes over the threshold, local forward ants (LFAs) and local backward ants (LBAs) are used to find the suboptimal path, following the rules of the algorithm described in a previous work [4].
Step 3: When the suboptimal path is found, newly arrived data packets will follow the new path to the destination node.
Step 4: In case the suboptimal path also goes over the threshold, Steps 2 and 3 are repeated to explore further alternative paths.
Detect congestion
In a WMN, congestion can occur at any intermediate node, deteriorating the performance of the network. Congestion is the situation in which a link or node carries an excessive load, degrading the quality of service. Moreover, when the routing protocol keeps selecting the same best path, traffic concentrates on that path while other paths are seldom used, leading to packet loss and degraded performance [15]. In this research, while all packets are forwarded along the optimal paths explored by the agent, the congestion level at each intermediate node is measured from its queue occupancy. If the number of packets in the buffer reaches or exceeds the threshold of 75% of the queue space, LFAs and LBAs are used to explore a suboptimal path around that node pair. After the suboptimal path is found, the data traffic is split and part of it is forwarded along the new path, as explained in figure 2.
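The check described above amounts to comparing buffer occupancy against a 75% threshold. The sketch below illustrates that rule only; the NodeQueue class, the launch_local_ants callback and the other names are illustrative placeholders rather than the paper's implementation.

# Minimal sketch of NCAM-style congestion detection at an intermediate node.
# The 75% occupancy threshold follows the text; the class and hook names are
# illustrative, not the paper's API.

CONGESTION_THRESHOLD = 0.75   # fraction of buffer capacity considered congested

class NodeQueue:
    def __init__(self, capacity):
        self.capacity = capacity      # maximum number of packets the buffer holds
        self.packets = []             # packets currently waiting for transmission

    def occupancy(self):
        # Fraction of the buffer currently in use.
        return len(self.packets) / self.capacity

    def is_congested(self):
        # Congestion is declared when occupancy reaches the threshold.
        return self.occupancy() >= CONGESTION_THRESHOLD


def monitor_optimal_pair(queue, launch_local_ants):
    """Check one optimal node pair; when its buffer crosses the threshold,
    trigger LFA/LBA exploration of a suboptimal path (Step 2 above)."""
    if queue.is_congested():
        launch_local_ants()           # placeholder for dispatching LFAs/LBAs
        return True
    return False


if __name__ == "__main__":
    q = NodeQueue(capacity=100)
    q.packets = ["pkt"] * 80          # 80% full -> over the 75% threshold
    print(monitor_optimal_pair(q, lambda: print("exploring suboptimal path")))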
Transfer data to suboptimal path
Drawing on the behavior of real ants [4] [18] [20], these characteristics are useful for designing a mechanism that reduces the data load on a pair of optimal nodes when the offered load exceeds the capacity of the network. In NCAM, the strategy for shifting data packets from a busy optimal node pair to a suboptimal path assumes that, while all packets are generated on the optimal path, a few of the nodal pairs have spilled over the threshold, as per figure 3:
First step: LFAs and LBAs are used to find the suboptimal nodal path.
Second step: When the suboptimal path is found, the probability (pheromone value) of each congested optimal node is decreased by 30%.
Third step: All suboptimal intermediate nodes update their pheromone tables by copying the value of the corresponding optimal intermediate node, so that the optimal and suboptimal paths have the same pheromone value (a probability of 50% each).
Fourth step: Newly arrived data packets randomly select between the previous optimal path and the new suboptimal path; the choice depends on the capacity of the suboptimal path and on how much lighter its load is, and since data packets move faster on the suboptimal path than on the optimal one, the probability of choosing the suboptimal path increases while the probability of choosing the optimal path decreases.
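The four steps above can be read as a small pheromone-rebalancing routine. The following sketch illustrates the 30% decrease, the 50/50 copy and the weighted random next-hop choice; the table layout and function names are assumptions introduced only for illustration, not the authors' code.

import random

# Sketch of the pheromone rebalancing described above.  The 30% decrease and
# the 50/50 split follow the text; the table structure is illustrative.

def rebalance_pheromone(table, destination, optimal_hop, suboptimal_hop):
    """Steps 2-3: reduce the congested optimal entry by 30%, then give the new
    suboptimal entry the same value so both paths are equally likely (50/50)."""
    table[(optimal_hop, destination)] *= 0.7                      # decrease by 30%
    table[(suboptimal_hop, destination)] = table[(optimal_hop, destination)]
    return table

def choose_next_hop(table, destination, candidates):
    """Step 4: a newly arrived packet picks a next hop at random, weighted by
    the current pheromone values of the candidate hops."""
    weights = [table[(hop, destination)] for hop in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Pheromone table keyed by (next_hop, destination); node 11 is the sink and
    # node 7 the congested optimal hop, as in the simulation scenario.
    pheromone = {(7, 11): 0.9, (6, 11): 0.0}
    rebalance_pheromone(pheromone, destination=11, optimal_hop=7, suboptimal_hop=6)
    picks = [choose_next_hop(pheromone, 11, [7, 6]) for _ in range(10)]
    print(pheromone, picks)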
Update pheromone table on suboptimal node
In NCAM, after the suboptimal path is found, each suboptimal intermediate node updates its pheromone table by copying the value of the corresponding optimal node that is stuck in congestion. The pheromone entry is then reinforced by ∆p, the reinforcement value added to the pheromone table, which depends on how good the trip Trip_{i,d} is.
When the pheromone value at the suboptimal intermediate nodes equals that at the optimal intermediate nodes, the optimal and suboptimal paths have the same probability (50%), so newly arrived packets choose between the two paths at random. Because the suboptimal path has spare capacity, its selection probability then increases while the probability of the optimal path decreases.
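The exact reinforcement formula is not reproduced in the text above; the sketch below therefore assumes a standard AntNet-style update in which the reinforced next hop gains ∆p(1 − p) while the other entries shrink proportionally, keeping each row of the pheromone table a probability distribution. In practice ∆p would be derived from the trip quality Trip_{i,d}.

# Assumed AntNet-style reinforcement (the paper's exact formula for delta_p as
# a function of Trip_{i,d} is not reproduced above).

def reinforce(row, chosen, delta_p):
    """Reinforce the entry of the chosen next hop and decay the alternatives so
    the row remains a probability distribution over candidate next hops."""
    updated = {}
    for hop, p in row.items():
        if hop == chosen:
            updated[hop] = p + delta_p * (1.0 - p)   # reward the chosen hop
        else:
            updated[hop] = p - delta_p * p           # decay the alternatives
    return updated

if __name__ == "__main__":
    row = {7: 0.5, 6: 0.3, 2: 0.2}       # probabilities of next hops towards node 11
    print(reinforce(row, chosen=6, delta_p=0.2))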
Performance evaluation
In this research, Network Simulator Version 2 (NS-2) was selected for the implementation of our scenarios. The evaluation focuses on throughput, end-to-end delay, and packet loss. The topology is an incomplete mesh of 12 nodes, with node 3 as the source and node 11 as the destination. Table 2 details our simulation setup, and Table 3 shows the results of the NCAM method compared with AntNet and CACO. Figure 5 illustrates the throughput of AntNet, CACO, and NCAM, represented by black, brown, and blue lines respectively. The X-axis represents the simulation time in minutes (m) and the Y-axis the number of packets successfully delivered to node 11. When packets start flowing from node 3 to node 11 at t = 2 s, AntNet [19], CACO [17] and NCAM yield the same throughput. At t = 1.29 m, when path 7-11 becomes congested, AntNet has no mechanism to handle this situation [19], while in CACO [17] the packets are routed away from path 7-11; NCAM, in contrast, splits some packets onto path 7-6-10-11. Because NCAM could select a path with less heavily loaded nodes than AntNet [19] and packets were transmitted over both the optimal and suboptimal paths, NCAM achieved higher packet throughput than AntNet and CACO, as shown by the blue line in figure 5. Figure 6 shows the end-to-end delay of AntNet, CACO, and NCAM; the X-axis represents the simulation time in minutes (m) and the Y-axis the end-to-end delay in seconds (s). The average end-to-end delay of AntNet [19], CACO [17], and NCAM is 0.11 s, 0.099 s, and 0.065 s, respectively, so NCAM gives a lower end-to-end delay than both, as shown by the blue line in figure 6. When path 7-11 became congested, all packets in AntNet [19] kept waiting on path 3-7, while in CACO the packets were routed to neighbor node 7; NCAM instead measured the congestion level on path 7-11 before exploring a suboptimal path and then split newly arrived packets onto that path. Figure 7 shows the percentage of packet loss in each scenario; packet loss was calculated from the statistical data in the 'out.tr' trace file. The percentage of packet loss of NCAM was lower than that of AntNet and CACO. In this comparison CACO, like NCAM, yielded no packet drops, while 10 packets were dropped in AntNet; overall, the number of packets lost in NCAM was lower than in AntNet and CACO by 4 and 5 packets, respectively. Furthermore, the end-to-end delay of NCAM was only 0.0653 s. The increase in packet throughput in NCAM is due to capturing the level of packet data inside and at the front of the queue. Although the routing algorithms proposed in previous works [1], [2], [3], [4], [8], [17], [19] can find the optimal path using specific strategies, congestion avoidance after a suboptimal path is found should be added to these algorithms to guarantee high-efficiency packet transmission.
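Packet loss and throughput are read from the NS-2 trace, as mentioned above. A minimal parsing sketch follows; it assumes the classic space-separated wired trace layout (event, time, from node, to node, ...), so the field indices and event letters would need adjusting for a wireless trace, and node IDs 3 and 11 simply follow the simulation scenario.

# Sketch of summarising an NS-2 'out.tr' trace: count packets enqueued at the
# source, received at the destination, and dropped anywhere in the network.
# Field positions assume the classic wired trace format.

def summarise_trace(path, src="3", dst="11"):
    sent = received = dropped = 0
    for line in open(path):
        fields = line.split()
        if len(fields) < 4:
            continue
        event, from_node, to_node = fields[0], fields[2], fields[3]
        if event == "+" and from_node == src:
            sent += 1          # packet enqueued on the source's outgoing link
        elif event == "r" and to_node == dst:
            received += 1      # packet received at the destination
        elif event == "d":
            dropped += 1       # packet dropped at some queue along the path
    loss_pct = 100.0 * dropped / sent if sent else 0.0
    return {"sent": sent, "received": received, "dropped": dropped, "loss_pct": loss_pct}

if __name__ == "__main__":
    print(summarise_trace("out.tr"))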
Conclusion
In this paper, a method for avoiding congestion after an optimal path has been found in a Wireless Mesh Network (WMN) was proposed. The proposed method, called the New Congestion Avoidance Method (NCAM), contains three mechanisms: detection of the congestion level, transfer of packets to a new path, and updating of the pheromone value. The results show that NCAM outperforms AntNet and CACO. Packet loss in NCAM was 5.99% lower than in AntNet and 0.82% lower than in CACO. Furthermore, the end-to-end delay of NCAM was only 0.0653 s, whereas AntNet took 0.111 s and CACO 0.098 s. The higher packet throughput, lower end-to-end delay, and smaller packet loss of NCAM are due to its capture of the packet level inside the queue, its mechanism for transferring packets to a suboptimal path, and its mechanism for updating the pheromone value at each suboptimal intermediate node.
NCAM was based on the SmartAnt protocol and performed better than the other protocols, yielding higher throughput, less packet loss, and lower end-to-end delay. However, several aspects could not be handled in this study: input data traffic rates that exceed the capacity of the output links, the limited capabilities of the mesh nodes, and security against network attacks. If the input traffic rate could be controlled relative to the capacity of the output links, congestion on certain paths would decrease. If the mesh nodes had greater capacity for book-keeping tasks (queuing buffers, updating tables, etc.), packets would reach the destination nodes more effectively. And if the data packets were protected from different types of network attacks, further improvements in throughput, packet loss, and end-to-end delay could be achieved. | 2019-06-07T20:44:01.270Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "568dd05f73b861e07c512e63513c10eb6dff8b35",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1192/1/012062",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5b8871dba3c7c33478ea0fe137df0f20ff25e35e",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
235346400 | pes2o/s2orc | v3-fos-license | Meta-QTL analysis and identification of candidate genes for quality, abiotic and biotic stress in durum wheat
The genetic improvement of durum wheat and enhancement of plant performance often depend on the identification of stable quantitative trait loci (QTL) and closely linked molecular markers. This is essential for better understanding the genetic basis of important agronomic traits and identifying an effective method for improving selection efficiency in breeding programmes. Meta-QTL analysis is a useful approach for dissecting the genetic basis of complex traits, providing broader allelic coverage and higher mapping resolution for the identification of putative molecular markers to be used in marker-assisted selection. In the present study, extensive QTL meta-analysis was conducted on 45 traits of durum wheat, including quality and biotic and abiotic stress-related traits. A total of 368 QTL distributed on all 14 chromosomes of genomes A and B were projected: 171 corresponded to quality-related traits, 127 to abiotic stress and 71 to biotic stress, of which 318 were grouped in 85 meta-QTL (MQTL), 24 remained as single QTL and 26 were not assigned to any MQTL. The number of MQTL per chromosome ranged from 4 in chromosomes 1A and 6A to 9 in chromosome 7B; chromosomes 3A and 7A showed the highest number of individual QTL (4), and chromosome 7B the highest number of undefined QTL (4). The recently published genome sequence of durum wheat was used to search for candidate genes within the MQTL peaks. This work will facilitate cloning and pyramiding of QTL to develop new cultivars with specific quantitative traits and speed up breeding programs.
pigments in the kernel. Combining the highest number of genes involved in carotenoid trait expression is therefore a tool for both improving the nutritional value of wheat and satisfying consumers 8 .
In 2019 nearly 16 million tons of pasta were produced worldwide. Italy is the greatest consumer, with nearly 24 kg of pasta consumed per person each year (https://internationalpasta.org/). There is increasing awareness of the importance of wheat-based products in a healthy diet, and producers are identifying and exploiting natural variations in bioactive compounds. However, in some cases natural variation in a trait may be limited in extent or difficult to exploit, so that other approaches may be required, as in this case. The most important targets of this type of approach are currently minerals, resistant starch, antioxidant compounds, carotenoids, protein content and dietary fibre. As mentioned earlier, quality is directly linked to biotic and abiotic stress. In recent years many quantitative trait loci (QTL) studies have focused on these traits, such as fiber content QTL in Marcotuli et al. 9 , root and shoot morphological traits in Iannucci et al. 10 , and many others reviewed in Colasuonno et al. 11 . These studies identified hundreds of QTL in different mapping populations with different types of markers. To identify the genome regions most involved in trait variation and the major, stable QTL affecting these traits, the QTL meta-analysis approach developed by Goffinet and Gerber 12 can help narrow down QTL regions, identify candidate genes and tackle map-based cloning strategies.
This approach allows the integration of independent QTL studies on a consensus map or reference genome of the species. QTL meta-analysis is a powerful tool for discovering the genome regions most frequently implicated in trait variation and for reducing the QTL confidence intervals, thereby enhancing the detection of candidate genes for positional cloning 13 . To identify meta-QTL (MQTL) for use in marker-assisted breeding, Loffler et al. 14 defined three criteria: (1) the MQTL must have a small supporting interval, (2) include a high number of original QTL, and (3) those QTL must explain a large proportion of the phenotypic variance.
Many of the traits mentioned above and analysed in the present paper are polygenic traits, and associated QTL have been located on all the tetraploid wheat chromosomes.
Meta-QTL (MQTL) analysis is a good instrument for studying many traits at once and finding consensus, robust QTL regions, using data reported in multiple studies to confirm the reliability of their location and effect across different genetic backgrounds and environments, as well as to refine QTL positions on a consensus map 12 . The recent sequencing of the 'Svevo' durum wheat genome has enabled the identification of consensus genomic regions, the study of relationships among candidate genes within QTL, and the identification of pleiotropic effects among them 15 .
There are many examples in which MQTL analysis has been successfully used to detect consensus QTL regions in wheat: root-related traits 13,16 , pre-harvest sprouting tolerance 17 , ear emergence 18,19 , resistance against Fusarium head blight [20][21][22] , plant height 23 , grain dietary fiber content 24 , seed size and shape 25 , yield-contributing traits 24,[26][27][28] , resistance to leaf rust 29 , pasta-making quality 30 , potassium use efficiency 31 , drought tolerance 32 , and tan spot resistance 33 . The objective of the present study was to perform an MQTL analysis of durum wheat progenies using the highly saturated consensus map of Maccaferri et al. 15 , taking into account a high number of traits in order to identify major regions and possible pleiotropic gene effects.
Results
QTL distribution and projection. A total of 41 QTL studies for quality, abiotic and biotic stress reported in Colasuonno et al. 11 were analysed, including 36 different traits (Table 1). The studies involved 34 different mapping populations, including 53 different parental accessions (Table 2). QTL projection was carried out using only QTL having the same flanking markers in the consensus map. A total of 368 QTL distributed on all 14 chromosomes (genomes A and B) were projected: 171 corresponded to quality-related traits, 127 to abiotic stress, and 71 to biotic stress.
Differences in the number of projected QTL were observed not only among the seven homoeologous groups, but also among individual chromosomes within a homoeologous group (Fig. 1). The number of projected QTL per genome was 144 (39%) and 224 (61%) for genomes A and B, respectively. The number of QTL per chromosome ranged from 11 in chromosome 1A to 40 in chromosomes 2B and 7B, with an average of 26 QTL per chromosome.
The means of the proportion of phenotypic variance explained (PVE) by the original QTL showed a similar pattern among the traits, with 63%, 53% and 48% of the QTL showing a PVE < 0.10, for abiotic stress, biotic stress and quality respectively (Fig. 2).
When the confidence interval (CI) was not reported in the original studies, it was calculated as the distance between the flanking markers. The CIs of the projected QTL were estimated at 95% using the empirical formula proposed by Guo et al. (2006). Comparison between CIs of original and projected QTL (Fig. 3) revealed clear differences for abiotic stress and quality traits. Most of the projected QTL for these traits showed lower CIs, with mean values of 35 cM and 18 cM for original and projected abiotic stress CIs, and of 28 cM and 14 cM for original and projected quality-trait CIs. For biotic stress traits, instead, the original QTL showed lower CIs (mean 13 cM) than the projected QTL (mean 17 cM). For abiotic stress, 69% of the original QTL had CIs greater than 20 cM, whereas 73% of the projected QTL had CIs lower than 20 cM. For biotic stress traits, 79% and 65% of the original and projected QTL, respectively, yielded CI values lower than 20 cM. Lastly, for quality traits, 54% of the original QTL had CIs greater than 20 cM, whereas 85% of the projected QTL yielded CIs lower than 20 cM. QTL meta-analysis. Of the 368 projected QTL, 318 were grouped into 85 MQTL, 24 remained as single QTL, and 26 were not assigned to any MQTL because the predicted QTL peaks were not included within any MQTL; these were not considered single QTL, as their CIs overlapped with MQTL. The number of MQTL per chromosome ranged from four in chromosomes 1A and 6A to nine in chromosome 7B. Chromosomes 3A and 7A showed the highest number of individual QTL (4), and chromosome 7B the highest number of undefined QTL (4). The number of QTL per MQTL ranged from 2 in 26 MQTL to 11 in durumMQTL2B.7. As 41 MQTL (47%) derived from the clustering of QTL from three or more different studies on different parental lines, they can be considered more stable across environments. The number of traits involved in each MQTL ranged from 1 in twelve MQTL to 7 in durumMQTL1B.3. Six MQTL involved 5 or more different traits (Table 3). The CI of the MQTL ranged from 0.1 to 14 cM, with an average of 4.9 cM, a significant reduction from the original QTL, which ranged from 0.4 to 108.1 cM with an average of 25.5 cM.
The three criteria proposed by Löffler et al. 14 were used to identify the most promising MQTL for marker-assisted selection and candidate gene analysis: (1) small MQTL support intervals, (2) a large number of initial QTL, and (3) high PVE values of the original QTL. A total of 17 MQTL were selected using the following thresholds: a number of QTL per MQTL equal to or greater than 5, a CI equal to or lower than the average (4.9 cM), and a mean PVE value of the original QTL in the MQTL equal to or greater than 0.10 (Table 4). Only MQTL with a physical distance of less than 5 Mb were subsequently selected for candidate gene (CG) identification. Expression of the quality CGs in the grain tissues was subsequently analysed using the RNAseq data available at http://www.wheat-expression.com/ 35 . The bread wheat gene models were analysed using the RNAseq experiments available at www.wheat-expression.com 35,36 . In particular, the study focused on identifying genes expressed in response to biotic and abiotic stress, in different tissues and developmental phases (Fig. 4).
A total of 36 CGs upregulated under biotic and abiotic stress were found in seven MQTL. MQTL3B.1 and MQTL7B.9 in 'Svevo' and 'Chinese spring' did not yield homologous gene models, and no upregulated gene models were found for MQTL6A.4 (Fig. 4).
Gene expression in grains was analysed not only under biotic or abiotic stress conditions but also to detect candidate genes of importance in grain quality.
When grain tissues of the endosperm, embryo, aleurone layer, seed coat and transfer cells were dissected, all the genes described above for the whole grain were strongly expressed in at least one of the different tissues. Other gene models expressed at over 2 tpm were: glycerol-3-phosphate dehydrogenase [NAD( +)] in the aleurone layer and seed coat, a 28S ribosomal S34 protein in the embryo, S-acyltransferase in the aleurone layer, a pimeloyl-[acyl-carrier protein] methyl ester esterase in the aleurone layer, glycosyltransferase in the endosperm, hydroxyproline-rich glycoprotein-like G in the aleurone layer and seed coat, histidine-containing phosphotransfer protein in the embryo, a general regulatory factor 1G in the embryo, aleurone layer and seed coat, S-adenosyl-L-methionine-dependent methyltransferase superfamily protein in the seed coat, an F-box in the aleurone layer, and phosphatidylinositol N-acetylglucosaminyl transferase subunit Y in the endosperm, embryo and seed coat.
Discussion
One of the main challenges of breeding programs is to increase crop yield. Crop productivity is highly affected by environmental constraints and diseases, so that new cultivars must incorporate new loci to cope with the different stresses affecting plant growth and yield. Breeders face another important challenge in the development of new cultivars: improving grain quality for end products that meet industrial and consumer requirements.
In recent years numerous studies have been carried out to identify new loci controlling traits for abiotic and biotic stress tolerance and grain quality in bread and durum wheat. QTL meta-analysis has been carried out on most of the QTL identified in durum wheat for disease resistance, environmental tolerance and grain quality. This approach has been used extensively in plants since its development in 2004 37 . It is especially useful in detecting major loci for quantitative traits and, by increasing map resolution, in identifying candidate genes controlling polygenic traits 12 . This is the first study that provides an overview and comparison of genetic loci controlling multiple traits in durum wheat, including quality traits and biotic and abiotic stress traits. It adds new MQTL for durum grain traits: some of the MQTL were mapped with high precision and are relatively more robust and stable, with major effects.
We report a total of 368 QTL distributed on all 14 chromosomes, of which 171 are related to quality traits, 127 to abiotic stress, and 71 to biotic stress, over a total of 34 mapping populations. A total of 85 meta-QTL were identified, of which 15 were selected as the most promising for candidate gene selection.
The meta-analysis conducted in this study accurately compared genomic positions of individual QTL identified in different studies and refined the confidence intervals of the main genomic regions associated with different traits. The durum wheat consensus map 15 preserved the marker order of individual maps, and confidence intervals were calculated to highlight differences between the original map position and its projection. For abiotic stress and quality traits, there was a reduction in the CI, whereas biotic stress traits showed an increase in the confidence interval. This may be due to the quantitative nature of the different traits; individual QTL for abiotic stress and quality showed lower PVE values, whereas those related to disease resistance yielded higher values (means of 0.11, 0.12 and 0.20 respectively). Biotic stress traits were controlled by a lower number of genes than traits related to abiotic stress or quality. Results reveal that the number of QTL per study was 25 for abiotic stress traits, 12 for quality related traits and 3 for biotic stress traits. Comparison of the reduction of CIs and number of genome regions involved in trait variation between this study and other studies carried out in durum wheat (quality) 30 , bread wheat (abiotic and biotic traits) 13,29 and maize (yield) 38 is reported in Additional file 3. Reduction of the CI and number of QTL after meta-analysis was 80% and 77% respectively, which is within the range among the different studies (from 60 to 88% for CI and from 65 to 90% for number of QTL).
The MQTL identified provide more closely linked markers thanks to the availability of a durum wheat consensus map 15 . Some of them are also linked to known major genes for other agronomically important traits, thereby adding value to these MQTL as targets for marker-assisted selection using the SNP markers flanking them; however, an initial validation of the alleles reported to have favourable effects should first be carried out. According to the genome positions of important agronomic genes reported in Liu et al. 39 , eleven MQTL were found to include 12 genes enhancing grain yield, quality, or plant development. DurumMQTL5A.5 and durumMQTL7B.9 included the vernalization genes Vrn-A1 and Vrn-B3, respectively. The incorporation of favourable alleles of these genes during breeding helps develop a spring habit without cold requirements for flowering 40 , and can thus be used as a strategy for introgressing important target traits from non-adapted pre-breeding materials combining the most favourable vernalization alleles. DurumMQTL4B.4 carries the dwarfing gene Rht-B1. Dwarfing genes were the basis of the green revolution, allowing an up to 35% increase in the yield of durum wheat 41 . Five durumMQTL, 2B.7, 4A.1, 7A.1, 7A.2 and 7A.3, included genes involved in grain weight and size: TaGS2-B1, TaCwi-A1, TaTEF-7A, TaGASR7-A1 and TaTGW-7A. Other genes affecting grain yield and quality were TaSdr-A1 and TaALP-4A, involved in preharvest sprouting tolerance and located in durumMQTL2A.4 and durumMQTL4A.5, respectively. Preharvest sprouting is an important limiting factor for grain yield in the major wheat production areas, especially when frequent rainfall occurs during harvest. Lastly, two genes involved in grain quality were found in durumMQTL1A.1 (Glu-A3) and durumMQTL7B.9 (Psy-B1). According to Subirà et al. 42 , the introgression of favourable alleles for HMW and LMW glutenin subunits led to the improvement of pasta-making quality in modern durum wheat cultivars. The phytoene synthase gene Psy-B1 is involved in the biosynthesis of carotenoid pigments.
An interesting case is durumMQTL2B.1, where QTL for RRT (abiotic stress) and SBCMV (biotic stress) are co-located. Among the candidate genes reported in Fig. 4, NBS-LRR-like resistance genes were highly expressed in both the abiotic and biotic stress experiments, which may indicate a link between the two traits and a pleiotropic effect on root development and pathogen growth. This interpretation is supported by Kochetov et al. 43 , who reported differential expression of NBS-LRR-encoding genes in the root transcriptomes of two Solanum phureja genotypes.
To relate the MQTL to QTL previously identified by GWAS, MQTL positions were compared with the marker-trait associations (MTA) reviewed by Colasuonno et al. 11 for abiotic and biotic stress and quality traits. Of the 352 MTA, 58 were located within 33 durum MQTL. Of these, 37 MTA in 26 MQTL reported associations with one of the traits included in the MQTL (Additional file 2). The highest numbers of MTA per trait category corresponded to LR for biotic stress, NDVI for abiotic stress and YPC for grain quality. These MTA were distributed across 11 chromosomes. These results suggest that new bioinformatic tools are required to integrate association studies with QTL meta-analysis for a better understanding of the molecular bases of trait variation in crop species.
Conclusions
QTL meta-analysis can help validate QTL previously detected in different populations and unravel the most stable QTL for the most important wheat traits. This study used QTL meta-analysis to acquire a comprehensive picture of the main regions of the durum wheat genome involved in the control of multiple traits, so as to identify QTL-enriched regions and candidate genes with possible pleiotropic effects. The numerous markers within stable QTL and candidate-gene-rich regions can help elucidate the mechanisms regulating many traits and speed up breeding programs for the production of top-quality cultivars.
Collection of QTL database and projection on a consensus map. A thorough bibliographic review
was carried out on the literature reported in Colasuonno et al. 11 . QTL information on biparental durum wheat populations was retrieved from 41 independent studies, including a total of 36 different traits (Table 1) relating to quality (14), biotic stress (22) and abiotic stress (5).
Information on chromosome location, the most closely flanking markers, QTL position, logarithm of odds (LOD) values, confidence intervals (CIs) and phenotypic variance explained (PVE or r 2 ) values are summarized in the review by Colasuonno et al. 11 .
To represent all the QTL on one linkage map, the durum wheat consensus map developed by Maccaferri et al. 15 was used for QTL projection, following the homothetic approach described by Chardon et al. 37 , as described in Colasuonno et al. 11 . The CIs of the projected QTL were estimated at a confidence level of 95% using the empirical formula proposed by Guo et al. 47 .
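The homothetic projection essentially rescales a QTL peak linearly between two flanking markers shared by the individual map and the consensus map. The sketch below illustrates that rescaling only; the full procedure in BioMercator also projects the confidence interval, which is omitted here.

# Minimal sketch of homothetic projection of a QTL peak onto a consensus map:
# the peak position is rescaled linearly between two flanking markers present
# on both the individual map and the consensus map.

def project_position(pos, left_ind, right_ind, left_cons, right_cons):
    """pos, left_ind, right_ind: cM positions on the individual map.
    left_cons, right_cons: positions of the same flanking markers on the
    consensus map.  Returns the projected cM position."""
    if right_ind == left_ind:
        return left_cons                          # degenerate marker interval
    ratio = (pos - left_ind) / (right_ind - left_ind)
    return left_cons + ratio * (right_cons - left_cons)

if __name__ == "__main__":
    # QTL peak at 42 cM, flanked by markers at 35 and 55 cM on the individual
    # map that sit at 60 and 90 cM on the consensus map.
    print(project_position(42.0, 35.0, 55.0, 60.0, 90.0))   # -> 70.5 cM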
QTL meta-analysis. QTL meta-analysis was conducted using BioMercator v.4.2 48 , available at https://urgi.versailles.inra.fr/Tools/BioMercator-V4, adopting the approach developed by Veyrieras et al. 49 . The meta-analysis determines the best QTL model on the basis of model-choice criteria: the Akaike information criterion (AIC), a corrected AIC, the Bayesian information criterion (BIC) and the average weight of evidence (AWE). The best QTL model was selected when it achieved the lowest value for at least three of the model-choice criteria. Consensus QTL from the optimal model were regarded as MQTL.
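As an illustration of the model-choice idea only (not BioMercator's exact procedure, which also weights each QTL by its confidence interval and uses additional criteria such as AICc and AWE), projected QTL peak positions on one chromosome can be clustered with one-dimensional Gaussian mixtures of increasing order and the order chosen by an information criterion:

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy sketch: cluster projected QTL peak positions (cM) on one chromosome with
# Gaussian mixtures of increasing order and compare information criteria to
# pick the number of meta-QTL.

def choose_n_mqtl(peak_positions_cM, max_clusters=5, seed=0):
    X = np.asarray(peak_positions_cM, dtype=float).reshape(-1, 1)
    scores = {}
    for k in range(1, min(max_clusters, len(X)) + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        scores[k] = {"aic": gm.aic(X), "bic": gm.bic(X)}
    best_k = min(scores, key=lambda k: scores[k]["bic"])   # lowest BIC wins here
    return best_k, scores

if __name__ == "__main__":
    peaks = [12.1, 14.0, 13.5, 48.2, 50.7, 49.9, 51.3]     # toy cM positions
    print(choose_n_mqtl(peaks))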
Identification of candidate genes underlying the MQTL region and expression analysis. Gene models within MQTL were identified using the high-confidence genes reported for the durum wheat reference sequence 34 , available at https://wheat.pw.usda.gov/GG3/jbrowse_Durum_Svevo, based on the positions of the markers flanking the CI of the MQTL.
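Once an MQTL interval is expressed in physical coordinates, candidate gene retrieval reduces to an interval-overlap query against the gene models. The sketch below uses toy gene records and a hypothetical MQTL interval purely for illustration; the real analysis used the high-confidence gene models of the 'Svevo' assembly.

# Illustrative sketch of selecting gene models that fall inside an MQTL
# interval defined by its flanking markers (chromosome, start, end in bp).

def genes_in_mqtl(genes, chrom, left_bp, right_bp):
    """genes: iterable of (gene_id, chromosome, start_bp, end_bp).
    Returns the gene models overlapping the [left_bp, right_bp] interval."""
    lo, hi = min(left_bp, right_bp), max(left_bp, right_bp)
    return [g for g in genes
            if g[1] == chrom and g[3] >= lo and g[2] <= hi]

if __name__ == "__main__":
    toy_genes = [
        ("geneA", "2B", 1_200_000, 1_205_000),   # placeholder identifiers
        ("geneB", "2B", 4_800_000, 4_810_000),
        ("geneC", "3A", 2_000_000, 2_004_000),
    ]
    # Flanking-marker positions of a hypothetical MQTL on chromosome 2B.
    print(genes_in_mqtl(toy_genes, "2B", 1_000_000, 5_000_000))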
In silico expression analysis and the identification of upregulated gene models were carried out using the RNAseq data available at http://www.wheat-expression.com/ 35 , using gene models from 'Chinese spring' located within the markers flanking the MQTL (https://iwgs.org/). Homologous genes from 'Svevo' were subsequently identified in durum wheat.
Data availability
All data generated or analysed during this study are included in this published article [and its supplementary information files]. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-06-06T06:16:37.773Z | 2021-06-04T00:00:00.000 | {
"year": 2021,
"sha1": "4ba333034b9928660aec5ce36ca8d583ec0b4b64",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-91446-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e1e84a902d0754fe90f3520be802bb9df20b105",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26953578 | pes2o/s2orc | v3-fos-license | Effect of Fruit Juice on Glucose Control and Insulin Sensitivity in Adults: A Meta-Analysis of 12 Randomized Controlled Trials
Background Diabetes mellitus has become a worldwide health problem. Whether fruit juice is beneficial in glycemic control is still inconclusive. This study aimed to synthesize evidence from randomized controlled trials on fruit juice in relationship to glucose control and insulin sensitivity. Methods A strategic literature search of PubMed, EMBASE, and the Cochrane Library (updated to March, 2014) was performed to retrieve the randomized controlled trials that evaluated the effects of fruit juice on glucose control and insulin sensitivity. Study quality was assessed using the Jadad scale. Weighted mean differences were calculated for net changes in the levels of fasting glucose, fasting insulin, hemoglobin A1c (HbA1c), and homeostatic model assessment of insulin resistance (HOMA-IR) using fixed- or random-effects model. Prespecified subgroup and sensitivity analyses were performed to explore the potential heterogeneity. Results Twelve trials comprising a total of 412 subjects were included in the current meta-analysis. The numbers of these studies that reported the data on fasting glucose, fasting insulin, HbA1c and HOMA-IR were 12, 5, 3 and 3, respectively. Fruit juice consumption did not show a significant effect on fasting glucose and insulin concentrations. The net change was 0.79 mg/dL (95% CI: −1.44, 3.02 mg/dL; P = 0.49) for fasting glucose concentrations and −0.74 µIU/ml (95% CI: −2.62, 1.14 µIU/ml; P = 0.44) for fasting insulin concentrations in the fixed-effects model. Subgroup analyses further suggested that the effect of fruit juice on fasting glucose concentrations was not influenced by population region, baseline glucose concentration, duration, type of fruit juice, glycemic index of fruit juice, fruit juice nutrient constitution, total polyphenols dose and Jadad score. Conclusion This meta-analysis showed that fruit juice may have no overall effect on fasting glucose and insulin concentrations. More RCTs are warranted to further clarify the association between fruit juice and glycemic control.
Introduction
Diabetes mellitus is now one of the most challenging health problems globally. As reported by the International Diabetes Federation (IDF), more than 371 million people worldwide had diabetes in 2012, and this number is projected to increase to 552 million people by 2030 if no urgent action is taken [1,2]. In addition, numerous people with impaired glucose tolerance (IGT) or impaired fasting glycaemia (IFG) are at high risk of progressing to type 2 diabetes mellitus (T2DM) [3,4]. It has been proven that T2DM and its complications are a major cause of disability, reduced quality of life and premature death, imposing a heavy burden on patients and society [5]. Therefore, the importance of efforts to reduce the incidence of diabetes has never been greater.
Accumulating evidence suggests that lifestyle changes, including eating healthy foods, can help prevent or delay the development of T2DM [6][7][8]. Fruits are rich in fiber, antioxidants, and phytochemicals that may have beneficial effects on health, and are thus recommended for the primary prevention of T2DM [9]. A recent study also suggested that the consumption of specific whole fruits is related to a significant reduction of T2DM risk [10]. In contrast, evidence on whether fruit juices possess protective effects similar to those attributed to whole fruits is still inconclusive [11][12][13]. According to the recommendation of the 2010 Dietary Guidelines for Americans, fruit juice is considered less desirable because it has less dietary fiber than whole fruit [14]. Fruit juice is also criticized for its concentrated or added sugars and for contributing extra calories when consumed in excess [15]. However, Schulze et al. [16] found that fruit fiber was not significantly related to a lower risk of diabetes based on the data of previous prospective studies [17][18][19][20][21][22][23].
Additionally, it has been demonstrated that although fruit juice is deficient in fiber, other important preventive nutritional components, such as antioxidants and phytochemicals (e.g. polyphenols), are present in fruit juice [24]. In view of these dual properties of fruit juice, great interest has arisen in identifying the effect of fruit juice on T2DM risk. To date, several RCTs have been conducted to evaluate the association between fruit juice consumption and glycemic control, but the results have been conflicting. Therefore, we conducted this meta-analysis to synthesize evidence from previous RCTs and provide a more precise estimate of the effect of fruit juice on glucose control and insulin sensitivity, following the PRISMA guidelines.
Study Selection
Studies were selected for this analysis if they 1) were RCTs conducted in human subjects; 2) used a concurrent control group (such as placebo beverage, water, or controlled drink) for the fruit juice treatment group, and the difference between the control and treatment group was fruit juice; 3) included subjects ingesting fruit juice for ≥2 wk (to remove acute or very short-term studies); 4) provided the information of baseline and endpoint values or the difference of fasting glucose and insulin concentrations with SD or SEM or 95% CI (when necessary, the authors were contacted to obtain the unavailable data); 5) did not give fruit juice as part of a multi-component supplement.
Quality Assessment
The methodological quality of all studies was evaluated using the following criteria: 1) randomization; 2) double blinding; 3) withdrawals (number and reasons); 4) allocation concealment; and 5) generation of random numbers. One point was given for each area addressed in the study design and the total Jadad score ranges from a minimum of 0 to a maximum of 5 points [25]. The trials with a score of ≥4 were classified as high quality, whereas those receiving a score of <4 were considered as lower quality.
Data Extraction
All data were screened by two investigators (BW and KL) independently, with any disagreement resolved by consensus, and then collected onto a pre-designed template that included the following items: 1) study characteristics including authors, publication year, sample size, study design, population information, study duration, total polyphenols dose, type of intervention and type of diet; 2) net changes in fasting glucose and insulin concentrations, hemoglobin A1c (HbA1c) and the homeostatic model assessment of insulin resistance (HOMA-IR). All values were converted to mmol/L for glucose and µIU/ml for insulin using the conversion factors 1 mg/dL = 0.0556 mmol/L for glucose and 1 µIU/ml = 6.945 pmol/L for insulin concentrations. If primary and secondary outcome concentrations were reported several times at different stages of a trial, only the values representing the final outcomes at the end of the trial were extracted for our meta-analysis.
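The SD recovery and the change-score imputation with r = 0.5 mentioned here and in the statistical analysis below follow standard Cochrane-style formulas; the sketch below illustrates them with toy numbers rather than data from the included trials.

import math

# Standard conversions used when SDs are not reported directly, and the
# change-score SD under an assumed baseline/final correlation (Follmann, r = 0.5).

def sd_from_se(se, n):
    # SD = SE * sqrt(n)
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    # For a 95% CI around a mean: SD = sqrt(n) * (upper - lower) / (2 * z)
    return math.sqrt(n) * (upper - lower) / (2 * z)

def sd_of_change(sd_baseline, sd_final, r=0.5):
    # SD of the within-group change score given correlation r between
    # baseline and final measurements.
    return math.sqrt(sd_baseline**2 + sd_final**2 - 2 * r * sd_baseline * sd_final)

if __name__ == "__main__":
    print(sd_from_se(0.8, n=25))                  # 4.0
    print(round(sd_from_ci(88.0, 96.0, n=25), 2))
    print(round(sd_of_change(10.0, 11.0), 2))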
Statistical Analysis
All statistical analysis in our meta-analysis was performed using STATA, version 11 (StataCorp, College Station, TX, USA). Treatment effects were defined as weighted mean differences (WMD) and 95% CIs in concentrations of fasting glucose and insulin, and values of HbA1c and HOMA-IR. The statistical heterogeneity was examined using Cochran's test (a P value < 0.1 was considered statistically significant) and I 2 tests (I 2 > 50%, significant heterogeneity) [26]. A random or fixed effects model was used for heterogeneous or non-heterogeneous data, respectively. Nonetheless, a fixed effects model was used when fewer than 5 trials were included in the analysis due to uncertainty in the prediction of heterogeneity [27]. Funnel plots and Egger's tests were used to assess potential publication bias when 10 or more studies were included in the meta-analysis. When not directly available, SD values were calculated from standard errors, 95% CIs, P-values, or t-values. In addition, we assumed a correlation coefficient of 0.5 between baseline and final values, as suggested by Follmann et al [28]. To assess the possible sources of heterogeneity between the studies, subgroup analyses were conducted by comparing the study results by population region, baseline glucose concentration, study design, duration, type of fruit juice, glycemic index (GI) of fruit juice, fruit juice nutrient constitution, total polyphenols dose and Jadad score. GI values were obtained from the international GI database [29]. Additional sensitivity analyses were also performed in accordance with the Handbook for Systematic Review of Interventions of Cochrane software (Version 5.0.2; The Cochrane Collaboration, Oxford, UK).
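A minimal illustration of the inverse-variance fixed-effect pooling and the heterogeneity statistics described above (a generic sketch, not the STATA routine actually used) is:

import math

# Inverse-variance fixed-effect pooling of per-study mean differences, with
# Cochran's Q and the I^2 statistic used to judge heterogeneity.

def pool_fixed(mean_diffs, std_errs):
    weights = [1.0 / se**2 for se in std_errs]
    pooled = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, mean_diffs))  # Cochran's Q
    df = len(mean_diffs) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0                # I^2 in %
    return pooled, ci, q, i2

if __name__ == "__main__":
    # Toy per-study net changes in fasting glucose (mg/dL) and their SEs.
    diffs = [1.2, -0.5, 2.0, 0.3]
    ses = [1.0, 1.5, 2.0, 0.8]
    print(pool_fixed(diffs, ses))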
Results of Literature Search
Detailed steps of the literature search are shown in Figure 1.
Of the 2579 initially identified reports, 2528 articles were excluded either because they were duplicates or because they were not relevant to the current meta-analysis. Therefore, 51 potentially relevant articles were further examined. Of these, an additional 39 articles were excluded for the following reasons: 22 articles did not have data on the outcome measures, 9 articles treated the subjects with a multi-component supplement, and 8 articles did not report enough detail on the SD, baseline, endpoint or mean difference for the primary or secondary outcome measures [30][31][32][33][34][35][36][37]. We contacted the main authors of these 8 studies by email, but only one replied and the requested data was no longer available [31]. Thus, 12 articles were finally selected for inclusion in the meta-analysis [38][39][40][41][42][43][44][45][46][47][48][49].
Study Characteristics
The characteristics of the trials included in the meta-analysis are shown in Table 1 and Table S1. The 12 trials involved a total of 412 subjects and varied in size from 12 to 63 subjects. The total polyphenols content of the fruit juices ranged from 341.9 to 2660 mg/d (median: 933.6 mg/d). The study duration varied from 4 wk to 3 mo (median: 7 wk). Seven of the 12 RCTs included subjects with hyperglycemia (>110 mg/dL), while the remaining 5 studies selected participants with normal fasting glucose concentrations. Most of the studies (8 of 12) used a parallel design. Of the 10 studies in which a usual diet was maintained, 7 suggested that the participants avoid the intake of other dietary confounding factors such as wine, green tea or soy products (Table 1). In addition, 10 trials used fruit juice with a low GI value (≤55) and the remaining 2 studies used medium GI fruit juice (56 to 69) [29]. Eight of the 12 included studies reported no significant changes in body weight, and the remaining 4 studies did not report information on body weight. The participants in all included studies consumed the fruit juices and placebo drinks in a free-living situation. Only 2 studies used ellagic acid, or β-cryptoxanthin plus vitamin C, as biomarkers to evaluate intervention compliance [45,48], while 9 studies reported that they had assessed compliance by periodic in-person interviews, questionnaire surveys, counting the unconsumed drinks to record consumption, etc. (Table S1).
Data Quality
Study qualities of the selected trials were assessed by the Jadad scale [25], and the results were diverse. Six trials were classified as high quality (Jadad score ≥4) [39,41,42,44,48,49], and the remaining 6 trials were classified as low quality (Jadad score <4). All of the 6 high-quality trials had clearly adequate allocation concealment (ie, allocated by a third party or used opaque envelopes), and 2 high-quality trials reported the
Effect of Fruit Juice on Glucose Control and Insulin Sensitivity
As shown in Table 2, fruit juice did not significantly affect the concentrations of fasting glucose, fasting insulin and HbA1c, while it significantly increased the HOMA-IR values. No significant heterogeneity was found for the concentrations of fasting glucose, fasting insulin and HbA1c; significant heterogeneity was noted in the results for HOMA-IR (P = 0.01). For the 12 trials that reported data on fasting glucose concentration, no significant mean difference was found between subjects supplemented with fruit juice (0.79 mg/dL; 95% CI: −1.44, 3.02 mg/dL; P = 0.49; Figure 2) and control subjects. The mean difference in fasting insulin concentrations, reported in 5 trials, was also not significant (−0.74 µIU/ml; 95% CI: −2.62, 1.14 µIU/ml; P = 0.44; Figure 3). In addition, the effects of fruit juice on HbA1c concentrations and HOMA-IR values were −0.03% (95% CI: −0.28, 0.23%; P = 0.84) and 0.59 (95% CI: 0.20, 0.97; P < 0.01), respectively (Figure 4 and Figure 5).
Subgroup and Sensitivity Analysis
Pre-specified subgroup analyses showed that the pooled effects of fruit juice on fasting glucose concentrations were not influenced by population region, baseline glucose concentration, duration, type of fruit juice, GI values of fruit juice, fruit juice nutrient constitution, total polyphenols dose and Jadad score. The consumption of fruit juice significantly increased fasting glucose concentrations in parallel-design groups. Results are presented in Table 3. Sensitivity analysis showed that the pooled effects of fruit juice on fasting glucose were not altered when analyses were limited to high-quality studies and were not changed after imputation using a correlation coefficient of 0.5. In addition, we found no significant change of outcome measures through systematic removal of each trial during sensitivity analysis.
Publication Bias
The shape of funnel plot for the studies on fruit juice and fasting glucose did not show obvious publication bias ( Figure 6). Similarly, no evidence of publication bias was observed by Egger's test (P = 0.50). Publication bias of the studies on fasting insulin, HbA1c and HOMA-IR was not assessed owing to the limited numbers of studies currently available (n = 5, 3 and 3, respectively).
Discussion
Our meta-analysis showed that fruit juice consumption did not significantly affect fasting glucose and insulin concentrations. Subgroup analyses further suggested that the pooled mean difference changes of fasting glucose concentrations were not significantly influenced by the population region, baseline glucose concentration, duration, type of fruit juice, GI values of fruit juice, fruit juice nutrient constitution and total polyphenols dose, and the outcome remained non-significant when analyses were limited to high quality studies. Our assessment of the effects of fruit juice on HbA1c and HOMA-IR values was limited by the small number of studies currently available.
As one of the most popular beverages, the global consumption of fruit juice has been steadily increased in recent years, probably due to the public perception of fruit juice as a natural source of nutrients [50]. It has been demonstrated that the polyphenols contained in fruit juice can improve the antioxidant status and immune function of the participants, thus may have beneficial effect in reducing the risk of cancer and cardiovascular disease [51,52]. However, the role of fruit juice consumption on diabetes control has not been well studied and the result remains inconclusive. In this meta-analysis study, we found that fruit juice had no significant effects on fasting glucose and insulin concentrations. One possibility is that fruit juice has less fiber than whole fruit, and a previous meta-analysis indicated that increasing consumption of dietary fibers can reduce fasting glucose concentration and HbA1C [53]. Another possibility is that fruit juice intervention might modestly increase the participants' dietary consumption of sugars and energy, which may influence the total effects of fruit juice on glucose control since most of the trials suggested that the participants maintained their usual diet during the intervention duration. In addition, previous studies suggested that fruit juice consumption had no significant favorable effect on lipid abnormalities, which often clusters with insulin resistance [54,55]. Therefore, the effects of fruit juice on glycemic control in this study might be mildly underestimated, since the participants in the majority of the selected studies (10/12) had abnormal lipid profiles [38,[40][41][42][43][44][45][46]48,49].
We found that fruit juice intake significantly increased fasting glucose concentrations when we pooled the data of the parallel-design RCTs. However, this increasing effect might not be clinically significant, since the participants in most of the selected parallel-design studies (6 of 8) had normal baseline glucose concentrations, which can lead to a certain glucose fluctuation within the normal regulation of glucose homeostasis [56]. Therefore, we conducted an additional subgroup analysis using parallel-design trials that included participants with abnormal baseline glucose concentrations, and the result suggested that fruit juice does not increase glucose concentration (P = 0.90). This result partly supported our inference, although it had less statistical power since only 2 trials were included in this subgroup analysis. In addition, we consistently found that fruit juice consumption had no significant glucose-increasing effect when we conducted subgroup meta-analyses by the other variables, such as population region, baseline glucose concentration, duration, type of fruit juice, fruit juice nutrient constitution, total polyphenols dose or Jadad score. On the other hand, we also found that fruit juice could significantly increase HOMA-IR values using the fixed effects model. However, this result is limited by the significant heterogeneity (P = 0.01) and the small number of available trials (n = 3). In addition, we found that fruit juice had no significant effect on HOMA-IR values when we used the random effects model (P = 0.87). For these reasons, whether fruit juice can affect insulin sensitivity should be further evaluated. It is suggested that GI values indicate the extent of the glycemic response to carbohydrate ingestion [57] and vary among different fruit juices [29]. To evaluate whether the GI value affects the association between fruit juice and glycemic control, we further investigated the effects of the GI levels of fruit juices on fasting glucose concentrations. Our study indicated that neither low GI fruit juices nor medium GI fruit juices had a significant effect on fasting glucose concentrations. Since no selected RCTs used fruit juice with a high GI as the supplement, we could not evaluate the effect of high GI fruit juice on fasting glucose concentrations. In addition, this result was further limited because none of the selected 12 trials reported the GI values of the fruit juices they used. To minimize the difference between estimated and actual GI values, we indirectly obtained the most proximate GI values from the online GI database according to the types of the respective fruit juices [29], and subsequently classified them into ranges (low GI fruit juice, ≤55; medium GI fruit juice, 56-69) in the subgroup analysis. Consequently, more high-quality RCTs focused on the association between GI values of fruit juice and glycemic control are needed to further assess these causal conclusions.
To our knowledge, this is the first study to systematically review the potential effects of fruit juice on glycemic control. The relatively large number of pooled participants provides greater statistical power than the small numbers of subjects in any single RCT. However, some limitations should be considered when interpreting the findings of this study.
First, explicit doses of sugars, artificial flavoring agents, and vitamins were unavailable in most of the selected studies, and the total polyphenol doses of the fruit juices ranged from 341.9 to 2660 mg/d (median: 933.6 mg/d). Although this wide range of total polyphenol doses did not produce significant heterogeneity in our study, it might affect the overall outcomes of the meta-analysis.
Second, only 1 study excluded subjects on a weight-reducing dietary regimen [41], and the remaining 11 did not report information on advice about losing weight. Ten of the 12 studies stipulated that fruit juice was added to the usual diet, which might increase the energy content of the diet and the body weight of the participants. Nonetheless, 8 of the included studies reported nonsignificant changes in body weight, and the remaining 4 studies did not report the related information. This potential discrepancy may be partly due to the controlled amount of fruit juice specified by the investigators (120-500 mL, or 50 g of freeze-dried powder), which limited the extra energy intake (150-350 kcal/d; information provided by 6 studies [42-45,48,49]) of the participants. However, we could not further evaluate the association between fruit juice intake and body weight change owing to the limited available information on dietary structure, total energy intake, daily physical activity, etc. In addition, we could not conduct further subgroup analyses on fasting insulin concentration, HbA1c, or HOMA-IR because of the small number of available RCTs.
Third, measures of glucose control or insulin sensitivity were not primary outcomes in some of the RCTs included in this meta-analysis, and null findings for secondary outcomes may not always have been published. In addition, although only randomized controlled trials, which are inherently less susceptible to bias, were selected for our analysis, the results synthesized from the 12 RCTs are still limited by their varied study quality (from low to high), relatively small sample sizes (from 12 to 63), and short follow-up periods (from 4 wk to 3 mo). Moreover, only 2 studies used objective measures, such as ellagic acid or β-cryptoxanthin plus vitamin C, to evaluate intervention compliance [45,48].
In conclusion, our study showed that the consumption of fruit juice may have no significant effect on fasting glucose and insulin concentrations. To further advance this area, future high-quality RCTs with adequate sample sizes and long follow-up periods are needed to evaluate and confirm the effect of fruit juice on glucose control and insulin sensitivity, particularly in patients with hyperglycemia. To improve study quality, investigators should ensure and objectively evaluate compliance, and use appropriate blinding methods, control groups, and matched placebos. In addition, the results will be more convincing if the confounding effect of the extra energy provided by fruit juice on body weight and other related measures can be effectively ruled out.
Supporting Information
Table S1 Other characteristics of the 12 randomized controlled trials included in the analysis.
Tacrolimus monitoring in hair samples of kidney transplant recipients
Background Calcineurin inhibitors, including tacrolimus, remain a cornerstone of immunosuppressive therapy after kidney transplantation. However, the therapeutic window is narrow, and nephrotoxic side effects occur with overdose, while the risk of alloimmunization and graft rejection increases with underdose. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) allows quantification of tacrolimus in biological samples from patients. This study investigates the feasibility of quantifying tacrolimus in scalp hair from kidney transplant (KT) recipients and correlates hair tacrolimus concentrations with tacrolimus dosage and blood trough levels. The aim was to provide proof-of-principle for hair tacrolimus drug monitoring in KT recipients. Method Single-center prospective study between September 9, 2021 and December 4, 2021, including KT recipients under tacrolimus. Minors, patients with active skin or hair diseases, and patients with scalp hair shorter than 4 cm were excluded from participation. Scalp hair was collected from the posterior vertex of patients, cut into segments, and analyzed for tacrolimus by LC-MS/MS. Patients filled out a questionnaire on hair treatments and washing habits. In parallel, tacrolimus trough levels were measured in whole blood and correlated with hair tacrolimus concentrations. Results In total, 39 consenting KT recipients were included, and hair samples were collected at 53 visits. Tacrolimus was detected in 98% of hair samples from patients exposed to the drug. Tacrolimus hair levels and whole blood trough levels were correlated with a beta coefficient of 0.42 (95% CI: −0.22–1.1, p = n.s.). Age and dark hair affected hair tacrolimus measurements, while different tacrolimus formulations (immediate release vs. extended release), hair washes, and permanent coloring did not. Longitudinal measurements in a subgroup of patients indicate that long-term measurement of hair tacrolimus levels is feasible. Conclusion Measuring tacrolimus in hair is a potentially reliable method to monitor drug exposure in KT patients. Rapid wash-in effects and consistent concentrations over time indicate that tacrolimus is incorporated into the hair matrix, allowing temporal resolution in the analysis of recent exposure and exposure history. This method provides a simple and low-risk alternative to regular blood sampling, sparing patients from frequent hospital visits through the self-collection of hair samples.
Introduction
Kidney transplantation (KT) is an effective treatment for advanced and end-stage kidney disease (1, 2). Although short-term outcomes have improved substantially over the last decades, long-term results are still unsatisfactory (3). The primary causes of allograft failure remain chronic antibody-mediated rejection due to relative under-immunosuppression and calcineurin inhibitor (CNI) toxicity. The latter reflects a common nephrotoxic side effect of CNIs, namely, cyclosporine A (CsA) and tacrolimus (Tac) (4-6). While these agents represent a cornerstone in the treatment of solid organ transplant recipients, they have a narrow therapeutic range and pose a substantial toxicity risk if overdosed. In particular, the nephrotoxic effects of these drugs may lead to progressive allograft disease and premature graft failure (7). Furthermore, CNIs elicit extra-renal side effects, including progressive cardiovascular disease (8), vulnerability to infections, and risk of cancer (9), all of which contribute to increased morbidity and mortality in KT recipients (10).
In the past decades, much effort has been made to measure CNI exposure and to adjust treatment doses to pre-specified target CNI blood levels for individual patients (11, 12). Indeed, tailoring immunosuppressive therapy to each individual KT recipient is a good example of precision and patient-centered medicine (13). Unfortunately, these efforts have not yet led to substantial breakthroughs, since CNI blood levels correlate only poorly with toxicity and we cannot predict whether CNI toxicity will progress or not (14).
Drugs and metabolites can be analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) from biological matrices, including blood (15), hair (16), and nails (17). In forensic toxicology, retrospective quantification of chemicals in hair samples has gained widespread acceptance. Chemicals such as cocaine (18), ethyl glucuronide (19), and delta-9-THC (20) are quantified to confirm abstinence in patients who are recovering from addiction (21, 22). Furthermore, long-term medication monitoring in hair is feasible, accurate, and predictive in specific clinical settings. For instance, tenofovir concentrations measured in hair samples can be readily used to monitor treatment adherence in HIV patients (23). However, although LC-MS/MS analysis of substances in hair is specific and sensitive, certain factors, such as hair products, hair washing routines, hair color, and artificial coloring, are known to significantly affect the results of hair analysis (20, 24, 25).
The aim of this trial was to quantify tacrolimus in the scalp hair of KT recipients and correlate concentrations with tacrolimus dosing and blood C 0 levels.
Study design and population
This study evaluates a subgroup of the Bernese transplant cohort. KT recipients on maintenance therapy with tacrolimus (Prograf®, Advagraf®, or Envarsus®) were screened and enrolled in the study during routine outpatient follow-up at the Nephrology Department of the University Hospital Insel in Bern between September 9, 2021 and December 4, 2021. Minors, patients without at least 4 cm of hair at the vertex, and patients with active skin or hair diseases were ineligible to participate. The study was approved by the Local Ethics Committee (2020-00953). All patients provided oral and written consent.
Clinical and laboratory parameters
Baseline characteristics and treatments were extracted from the electronic patient documentation. Information on hair color, care, and utilized hair treatment products was collected with a questionnaire. Tacrolimus concentration was determined 12 h after the last dose of immediate-release tacrolimus (Prograf®) and 24 h after the last dose of extended-release tacrolimus (Advagraf® or Envarsus®). The daily tacrolimus dose was recorded as a cumulative dose in mg per day. Serum creatinine was measured from plasma samples; eGFR was estimated according to the CKD-EPI equation (26) and expressed in mL/min/1.73 m².
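The paper states only that eGFR was estimated with the CKD-EPI equation (26); the exact version used is not specified. The sketch below is a minimal illustration of how such an estimate is computed, assuming the 2009 creatinine-based CKD-EPI equation with serum creatinine in mg/dL; it is not code from the study.

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """eGFR in mL/min/1.73 m^2, assuming the 2009 creatinine-based CKD-EPI equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Hypothetical example: a 53-year-old female KT recipient with creatinine 1.2 mg/dL
print(round(ckd_epi_2009(1.2, 53, female=True), 1))
```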
Hair sampling and processing
Patients were allowed to provide hair specimens at multiple study visits. Specifically, a strand of hair with a diameter of 2-4 mm was cut at the base from the posterior scalp of participants. The end of the hair tuft adjacent to the scalp was marked. The proximal 2 cm segment (S1) and the adjacent 2 cm segment (S2) of the specimens were separated and used for further analysis. Hair specimens were cleaned, chopped into snippets, ground into a powder, and then used for mass spectrometry analysis. First, hair samples were cut into segments of exact length, and each segment was decontaminated with the following standard protocol for forensic hair analysis. The hair was washed once with 5 mL of deionized water and twice with 5 mL of acetone for 3 min each. After drying at room temperature, hair segments were chopped into snippets using scissors. For extraction, between 5 and 25 mg of snippets were exactly weighed into an Eppendorf vial, and the snippets were pulverized for 15 min at 30 Hz. Then, 100 µL of IS solution and 1,400 µL of methanol were added, and the samples were sonicated for 2 h at 40 °C. After centrifugation for 10 min at 9,000 g, the clear supernatant was transferred to a vial for evaporation under a stream of nitrogen at 40 °C. For injection into the LC-MS/MS system, the residue was reconstituted in 30 µL of methanol and 70 µL of 5 mM ammonium formate (pH 3) with 10% (v/v) methanol.
Preparation of working solutions
Spiking solutions for calibrators and quality control were prepared in methanol to obtain concentrations comparable to those found in hair. As the internal standard (IS), a solution was prepared in methanol containing 13CD4-tacrolimus at a concentration of 800 pg/mg at a sample weight of 1 mg.
LC-MS/MS parameters
The LC-MS/MS system consisted of a Shimadzu Prominence high-performance liquid chromatography system (Shimadzu, Duisburg, Germany) and a QTrap 6500 mass spectrometer (Sciex, Darmstadt, Germany) using electrospray ionization (ESI) operating in positive mode. Separation was achieved using a Kinetex® F5 column (100 × 2.1 mm, 100 Å, 2.6 µm, Phenomenex) coupled with SecurityGuard™ ULTRA ultra-high performance liquid chromatography (UHPLC) F5 cartridges (2.1 mm ID). Mobile phase A [water containing ammonium formate (1 mM) and formic acid (0.1%)] and mobile phase B [acetonitrile containing ammonium formate (1 mM) and formic acid (1 mM)] were used. A post-column spray of methanol was applied at a flow rate of 0.04 mL/min to support the ionization process. The flow rate was set at 0.6 mL/min, and the gradient was programmed as follows: 0.01-1.5 min, 10% eluent B; 1.5-9 min, increasing to 95% eluent B; 9-11 min, 95% eluent B; 11-11.1 min, decreasing to 10% eluent B; and 11.1-12 min, starting conditions (10% eluent B). The column oven was set at 40 °C. The dead time (t0) was about 0.3 min (0.19-mL void volume of the column). The autosampler was operated at 15 °C, and the autosampler needle was rinsed before and after aspiration of the sample using methanol. The mass spectrometry (MS) instrument was operated in the "Scheduled MRM™ Algorithm Pro" mode. Quantification was achieved by calculating the mean concentration of both transitions. MRM transitions and retention times of tacrolimus and 13CD4-tacrolimus (IS) are given in Supplementary Table 1. The following identification criteria were used: (1) the retention time (RT) between the analyte and the IS and (2) deviations ≤20% for the relative area ratios of the three transitions (MRM1 to MRM2 and MRM1 to MRM3, respectively).
Calibration curve and method validation
Three calibration concentrations (C1-C3) and a blank hair sample were prepared to establish the linearity of the calibration. Approximately 20 mg of tacrolimus-free hair was analyzed either unspiked or spiked at concentrations C1-C3. The regression was calculated using a linear model (Supplementary Table 2). The method was partially validated for selected parameters, namely, selectivity, the lower limit of detection (LLOD), the lower limit of quantification (LLOQ), and linearity. The tacrolimus hair concentration (hC0) was measured in picograms per sample (pg/sample) and normalized to the input weight, resulting in a hair tacrolimus concentration in pg/mg.
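As a numerical illustration of the last two steps, the sketch below fits a straight line through three spiked calibrators and converts a raw per-sample result into a weight-normalized hair concentration in pg/mg. It is not code from the study; the calibrator amounts, area ratios, and sample weight are invented placeholders.

```python
import numpy as np

# Hypothetical calibrators: spiked amount (pg per sample) vs. analyte/IS peak-area ratio
cal_amount = np.array([200.0, 800.0, 3200.0])   # C1-C3
cal_ratio = np.array([0.11, 0.42, 1.70])        # measured area ratios

slope, intercept = np.polyfit(cal_ratio, cal_amount, 1)   # linear calibration model

def hair_concentration(area_ratio: float, sample_weight_mg: float) -> float:
    """Back-calculate pg/sample from the calibration line and normalize to pg/mg."""
    pg_per_sample = slope * area_ratio + intercept
    return pg_per_sample / sample_weight_mg

# e.g., an 18.4 mg hair sample with a measured area ratio of 0.55
print(round(hair_concentration(0.55, 18.4), 1), "pg/mg")
```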
Longitudinal sampling
Drugs and metabolites are transported to the hair follicle via the bloodstream and permanently incorporated into the matrix. Over time and along with hair growth, the matrix moves away from the follicle and remains relatively inert in terms of component incorporation and washout. To test this notion, we analyzed hC0 levels in the S1 segment of visit 1 (representing recent tacrolimus exposure) and the S2 segment of visit 2 (representing tacrolimus exposure 2-4 months earlier). Furthermore, we compared hC0 in the S1 and S2 segments in two patients, one with recent tacrolimus withdrawal due to belatacept conversion and one with recent tacrolimus exposure after de novo KT.
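The arithmetic behind this comparison is the assumed scalp-hair growth rate of roughly 1 cm per month, so each 2 cm segment covers about a two-month window counted back from the sampling date. The helper below is only a sketch of that bookkeeping; the growth rate and segment length are the assumptions stated in the text, and the visit dates are hypothetical.

```python
from datetime import date, timedelta

GROWTH_CM_PER_MONTH = 1.0   # assumed scalp-hair growth rate
SEGMENT_CM = 2.0            # length of each analyzed segment (S1, S2, ...)

def segment_window(sampling_date: date, segment_index: int):
    """Approximate exposure window covered by segment S<segment_index> (1 = proximal)."""
    months_per_segment = SEGMENT_CM / GROWTH_CM_PER_MONTH
    end_months_ago = (segment_index - 1) * months_per_segment
    start_months_ago = segment_index * months_per_segment
    to_days = lambda months: timedelta(days=round(30.4 * months))
    return sampling_date - to_days(start_months_ago), sampling_date - to_days(end_months_ago)

visit1, visit2 = date(2021, 10, 1), date(2021, 12, 1)
print("S1 of visit 1:", segment_window(visit1, 1))   # roughly Aug-Oct 2021
print("S2 of visit 2:", segment_window(visit2, 2))   # roughly Aug-Oct 2021, the same window
```

This is the rationale for comparing S1 of visit 1 with S2 of visit 2 in patients sampled about two months apart.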
Statistical analysis
Results were reported as the number of participants (percentage) for categorical data and the median (interquartile range) for continuous data. To assess correlations between hC0 and drug exposure, we employed a linear regression model with hC0 as the dependent variable and daily dose (mg/day) as the independent parameter, without (crude model) or with potentially interfering patient-related (partial model) or cosmetic treatment-related cofactors (full model). Data were presented using histograms and xy-plots. The Pearson correlation coefficients between hC0 and bC0 (blood tacrolimus concentration) and the daily tacrolimus dose were calculated. A two-tailed p-value below 0.05 was considered statistically significant. Statistical analyses were performed using R (version 4.0.3) and RStudio (version 1.3.1093).
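A minimal sketch of the crude/partial/full model structure is shown below. The analysis in the study was done in R; this Python version is only illustrative, the data are synthetic, and the column names are placeholders rather than the study's actual variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # synthetic stand-in for the collected S1 samples
df = pd.DataFrame({
    "daily_dose": rng.uniform(2, 8, n),
    "extended_release": rng.integers(0, 2, n),
    "sex": rng.integers(0, 2, n),
    "age": rng.uniform(25, 75, n),
    "washes_per_week": rng.integers(1, 8, n),
    "dark_hair": rng.integers(0, 2, n),
    "permanent_treatment": rng.integers(0, 2, n),
})
df["hC0"] = 5 + 2 * df["daily_dose"] - 0.1 * df["age"] + rng.normal(0, 3, n)
df["bC0"] = 4 + 0.4 * df["daily_dose"] + rng.normal(0, 1.5, n)

crude = smf.ols("hC0 ~ daily_dose", data=df).fit()
partial = smf.ols("hC0 ~ daily_dose + extended_release + sex + age", data=df).fit()
full = smf.ols("hC0 ~ daily_dose + extended_release + sex + age"
               " + washes_per_week + dark_hair + permanent_treatment", data=df).fit()

# crude beta for daily dose and Pearson correlation between hair and blood levels
print(crude.params["daily_dose"], df["hC0"].corr(df["bC0"]))
```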
Overall characteristics of participants and hair samples
The study cohort includes 39 KT recipients of the Bernese Transplant project. Baseline characteristics are given in Table 1. In total, 62% of patients were female, with a median age of 53.1 years (IQR: 42.0-63.4) and a median transplant history of 2.8 years (IQR: 0.4-6.9) at study inclusion. In total, 19% of patients had glomerulonephritis as the underlying disease. A total of 24 patients (61%) were under immediate-release tacrolimus (Prograf®) and the remainder were under extended-release tacrolimus (Advagraf®, Envarsus®). In total, 74% were under low-dose prednisolone, and 83% were under antimetabolites (azathioprine, mycophenolate mofetil, or acetate). The average daily tacrolimus dose was 4.5 mg (IQR: 3-6) and the average trough (C0) level was 6.2 ng/mL (IQR: 4.6-8.0). A total of 37 patients had been taking tacrolimus for at least 6 months prior to study entry; one patient started within 2 months before entry (recent KT); and one patient was switched to belatacept between two samplings. Characteristics of hair color and treatment are given in Table 2.
Longitudinal hair tacrolimus concentrations
For five subjects in the study cohort with no change in tacrolimus medication, a hair sample was collected at visits 1 and 2, approximately 2 months apart. These hair samples were analyzed in segments. Assuming a hair growth rate of 1 cm/month, the proximal segment S1 of the visit 1 sample and the distal segment S2 of the later visit 2 sample represent approximately the same time period. The corresponding hC0 values are shown in Figure 2A.
One patient with a recent KT was started on tacrolimus at the time of the first visit. The proximal S1 segments of visits 1 and 2 were compared, and tacrolimus was detected only at the second visit (Figure 2B). Conversely, one patient changed immunosuppressive treatment from tacrolimus to belatacept between the two visits. Comparison of hC0 in the S1 segment at the two visits showed a decrease of 40% from the first measurement (Figure 2C).
Discussion
To the best of our knowledge, this is the first study assessing tacrolimus hair concentration in KT recipients and correlating the results with patient-related and hair treatment-related cofactors. In the vast majority of patients, tacrolimus was detectable in the hair specimen collected from the vertex. The correlation of matrix levels (hair and blood) with daily tacrolimus exposure was rather low, yet higher in hair samples compared to blood. The correlation between hC0 and bC0 was not significant. The continuous deposition of tacrolimus in growing hair is supported by the analysis of patients with recent tacrolimus withdrawal and exposure. Together, these findings strongly support the assumption that tacrolimus is incorporated into the hair matrix via the bloodstream and thereafter remains detectable weeks to months after exposure, with only limited washout effects from hair washing and hair product applications. Patient age significantly influenced the results, while results were reliable and comparable among all tacrolimus formulations (Prograf®, Advagraf®, and Envarsus®).
FIGURE Correlation between daily dose of tacrolimus and measured drug levels in hair and blood. (A) Non-significant positive correlation between the daily tacrolimus dose and the measured hC0 levels (Pearson correlation). (B) Non-significant correlation between bC0 and the daily tacrolimus dose (Pearson correlation). In this sample, hC0 showed a stronger correlation with the daily dose than bC0 did.
The main differences were found between patients with different hair colors: hC0 was higher in patients with darker (brown and black) hair. Differing levels of metabolites depending on hair pigmentation have been described in hair analyses of a variety of different drugs (27, 28). Gray hair naturally correlates with increased age; we interpret the lower hC0 levels in older patients as a consequence of a higher fraction of gray hair.
Prednisolone has been described to induce CYP3A and/or P-glycoprotein, thereby increasing the tacrolimus dose needed to reach the target bC0, especially after transplantation (29). In our population, the majority of patients were on low-dose prednisolone. Prednisolone maintenance therapy had no impact on tacrolimus concentrations in hair.
This study highlights new opportunities for therapeutic drug monitoring. First, our approach enables therapeutic drug monitoring from biological samples, independent of blood collection. Hair specimens are easily accessible and may even be collected by patients themselves or their relatives. Furthermore, sampling is independent of healthcare facilities, does not require pre-analytical processing (centrifugation and cooling), and poses a negligible risk of transmission of infectious diseases. Since hC0 concentrations appear to be relatively stable during the course of hair growth, this method could even be used to quantify tacrolimus exposure for weeks to months in the past.
Our study has several limitations. First, the cohort is small and comprises mainly single-time-point evaluations. Second, patient- and hair treatment-associated confounders were associated with hC0 levels. The sample size was too small and the sampling procedures too limited to test whether these confounders remain stable over time at the patient level and whether natural or hair treatment-related changes (graying of hair in aging patients, new hair products, or permanent coloring) affect longitudinal hC0 values. Likely, further confounders have not been captured in detail, notably ethnic differences, given the predominance of Caucasian patients in this study. Although there is a wide distribution of pigmentation in hair, it is controversial whether ethnicity affects hair analysis (28). Finally, tacrolimus and chronic kidney disease are known causes of alopecia (30, 31). Hence, not all patients were eligible for participation; notably, bald patients (predominantly elderly men) had to be excluded.
Conclusion
Tacrolimus detection in patient hair offers a reliable method to quantify drug exposure, including longitudinal measurements. Further studies are needed to determine therapeutic target levels for tacrolimus hair measurements and to quantify the effects of age, hair color, and different hair treatments on hC0 and washout effects.
FIGURE Stability over time, wash-in and washout. (A) Washout effect of hC0 in patients with constant bC0 concentrations. The S1 segment of the first visit was compared to the S2 segment of the second visit, representing roughly the same time frame; each segment represents about two months of tacrolimus ingestion, with the S1 segment cut just above the skin and therefore roughly representing the months immediately prior to analysis. The mean concentration was constant between the two samples; the rather wide distribution of values between the samples indicates the presence of confounders affecting the stability of tacrolimus in hair. (B) Positive quantification in one patient newly taking tacrolimus, comparing the S1 segments of both visits and showing a wash-in effect. (C) Washout effect after changing from tacrolimus to belatacept between the two visits, comparing the S1 segments of both visits. Prompt wash-in and washout effects suggest dose-related incorporation into the hair; with measurable wash-in and washout effects over a short period of time, temporal resolution in hair measurements is likely possible.
TABLE Characteristics of the acquired S1 samples comparing different tacrolimus formulations.
TABLE Linear regression models for the hC0 level in segment S1 with independent parameters. Crude model: hC0 in relation to daily tacrolimus dose. Partial model: hC0 in relation to daily tacrolimus dose, tacrolimus formulation (extended vs. immediate release), sex, and patient age. Full model: partial model plus the parameters reported washes per week, hair color (dark hair vs. fair hair), and permanent treatment (yes). Beta coefficients, 95% confidence intervals (CI), and p-values are given. hC0, hair tacrolimus level. Bold values are significant p-values (<0.05).
Pretreatment with low-dose gadolinium chloride attenuates myocardial ischemia/reperfusion injury in rats.
AIM
We have shown that low-dose gadolinium chloride (GdCl3) abolishes arachidonic acid (AA)-induced increase of cytoplasmic Ca(2+), which is known to play a crucial role in myocardial ischemia/reperfusion (I/R) injury. The present study sought to determine whether low-dose GdCl3 pretreatment protected rat myocardium against I/R injury in vitro and in vivo.
METHODS
Cultured neonatal rat ventricular myocytes (NRVMs) were treated with GdCl3 or nifedipine, followed by exposure to anoxia/reoxygenation (A/R). Cell apoptosis was detected; the levels of related signaling molecules were assessed. SD rats were intravenously injected with GdCl3 or nifedipine. Thirty min after the administration the rats were subjected to LAD coronary artery ligation followed by reperfusion. Infarction size, the release of serum myocardial injury markers and AA were measured; cell apoptosis and related molecules were assessed.
RESULTS
In A/R-treated NRVMs, pretreatment with GdCl3 (2.5, 5, 10 μmol/L) dose-dependently inhibited caspase-3 activation, death receptor-related molecules DR5/Fas/FADD/caspase-8 expression, cytochrome c release, AA release and sustained cytoplasmic Ca(2+) increases induced by exogenous AA. In I/R-treated rats, pre-administration of GdCl3 (10 mg/kg) significantly reduced the infarct size, and the serum levels of CK-MB, cardiac troponin-I, LDH and AA. Pre-administration of GdCl3 also significantly decreased the number of apoptotic cells, caspase-3 activity, death receptor-related molecules (DR5/Fas/FADD) expression and cytochrome c release in heart tissues. The positive control drug nifedipine produced comparable cardioprotective effects in vitro and in vivo.
CONCLUSION
Pretreatment with low-dose GdCl3 significantly attenuates I/R-induced myocardial apoptosis in rats by suppressing activation of both death receptor and mitochondria-mediated pathways.
Introduction
Myocardial infarction (MI) is a leading cause of death and a major health problem worldwide. The current most effective therapy after acute MI is the restoration of blood flow through the occluded coronary artery to limit infarct size and preserve cardiac function. However, this treatment causes additional injury during ischemia/reperfusion (I/R), which is a major risk factor for MI-induced arrhythmia, contractile dysfunction, and heart failure [1-3]. Mitochondrial oxidative phosphorylation returns to pre-ischemic levels within seconds of reperfusion, but contractile power lags behind, which is termed 'myocardial stunning' [4]. Stunned myocardium exhibits excess oxygen consumption for a given rate of contractile work, and it has reduced mechanical efficiency [5]. Abundant evidence has suggested that myocardial I/R is a main cause of apoptotic and necrotic cell death because of oxidative stress, inflammation, Ca2+ overload, and ATP depletion [6-9]. Several observations in human hearts have indicated that apoptotic cardiomyocytes contribute dramatically to overall cell loss during MI [10,11]. Notably, cardiomyocyte apoptosis contributes to left ventricular dysfunction following cardiac surgery [12]. Previous studies have demonstrated that mitochondria-related pathways and death receptor-mediated pathways are involved in I/R-induced cardiomyocyte apoptosis [13-15]. Therefore, the exploration of the mechanisms of myocardial I/R-induced apoptosis and the identification of potential target(s) to prevent or reverse cell apoptosis are important for the prevention and treatment of myocardial I/R injury.
Previous studies have suggested that the effect of GdCl 3 is dose dependent because high doses of GdCl 3 (≥300 µmol/L) activate the calcium-sensing receptor, which may induce apoptosis in cardiomyocytes [21] . However, GdCl 3 (≤10 µmol/L) completely blocks AA-mediated Ca 2+ increase in HEK293 cells [19,20] . GdCl 3 (10 mg/kg) has been demonstrated to exert a protective potential in I/R-induced brain injury and hepatic injury and to protect the myocardium against I/R-induced inflammation via the reduction of circulating monocytes and neutrophils and the infiltration of leukocytes. This dose also attenuated myocardial stunning when administered prior to the onset of ischemia or during ischemia, but it did not enhance the contractile function of normal myocardium [21][22][23][24] . However, the precise mechanism(s) underlying the effect of GdCl 3 are not known. The present study used low-dose GdCl 3 (10 mg/kg, iv), which is safe and effective in the treatment of hepatic I/R in rats [25] , to detect the protective effect and the mechanism against myocardial I/R-induced cell apoptosis and myocardial infarction in rats and cultured ventricular cardiomyocytes.
Materials and methods
Materials and reagents
GdCl3 and nifedipine were purchased from Sigma-Aldrich (St Louis, MO, USA). Fluo-4/AM was obtained from Molecular Probes (Invitrogen Inc, CA, USA). Antibodies against Fas, DR5, FADD, cytochrome c, caspase-8 and GAPDH were purchased from Santa Cruz Biotechnology, Inc (CA, USA). The COX IV antibody was purchased from Cell Signaling Technology (CST, MA, USA).
Ethics statement
The Capital Medical University Animal Care and Use Committee approved this study, and all studies were conducted in accordance with the 'Guide for the Care and Use of Laboratory Animals' adopted by the Beijing government and the 'Guide for the Care and Use of Laboratory Animals' published by the US National Institutes of Health (publication No 85-23, revised 1996).
Confocal Ca 2+ transients
Myocytes were loaded with Fluo-4/AM and measurements of intracellular Ca 2+ concentration ([Ca 2+ ] i ) were performed as previously described [27] . Experiments were performed at room temperature (22-24°C). All NRVMs at 70%-80% confluence were used after 48 h in culture. Cardiomyocytes were loaded with 4 μmol/L Fluo-4/AM for 30 min at 37°C followed by three washes with HEPES-buffered physiological saline solution (HBSS) (mmol/L: NaCl 135, KCl 5, MgCl 2 1, CaCl 2 1.8, HEPES 10, glucose 11, pH 7.4) to remove extracellular Fluo-4/ AM for 20 min. These cells were visualized with a laser confocal microscope (Leica SP5) equipped with a 40× oil immersion objective (NA 1.35). The fluorescence of interested regions, which generally contained approximately 30 cells, was recorded at an excitation of 488 nm and emission detection at 515 nm. Cardiomyocytes for the AA stimulation protocol were pretreated with vehicle, GdCl 3 (5 μmol/L) or nifedipine (1 μmol/L) for 2 min, which were then followed by AA (10 μmol/L) treatment for 2 min. Fluo-4 loaded cardiomyocytes for the A/R cells were recorded for 100 s and stimulated with 50 μmol/L phenylephrine (PE) to detect cellular responses to agonist [27] . Changes in the fluorescence intensity over time were collected using series image scanning, and the [Ca 2+ ] i was expressed as F/F 0 , where F 0 stands for the mean basal fluorescence obtained from 4 images with cell at resting state. The relative basal [Ca 2+ ] i in A/R cells were calculated as F 0(A/R) normalized with the F 0(con) in normal cells from the same experiment.
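For readers unfamiliar with the F/F0 convention used above, the short sketch below normalizes a fluorescence time series to the mean of a few baseline frames, which is how relative [Ca2+]i changes are usually expressed. It is illustrative only; the trace is synthetic, not recorded data from this study.

```python
import numpy as np

def f_over_f0(trace, n_baseline_frames=4):
    """Normalize a Fluo-4 fluorescence trace to the mean of the first baseline frames."""
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:n_baseline_frames].mean()
    return trace / f0

# Synthetic trace: resting fluorescence of ~100 a.u. followed by an agonist-evoked rise
trace = np.array([99, 101, 100, 100, 140, 210, 260, 230, 180, 130, 105])
print(np.round(f_over_f0(trace), 2))   # peak of ~2.6 times baseline
```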
Ischemia and reperfusion model
Sprague-Dawley rats (males, 10 weeks old) were obtained from the experimental animal center of Capital Medical University (Beijing, China). Myocardial ischemia and reperfusion was conducted as previously described [28]. Briefly, rats were anesthetized with urethane (5 mg/kg, ip). Rats were intubated, and mechanical ventilation was achieved by connecting the endotracheal tube to the ventilator. The left chest was opened to expose the heart. An 8-0 silk suture was passed underneath the left anterior descending (LAD) coronary artery, and a slipknot was tied. Sham-operated rats underwent the same surgical procedures except that the suture was placed beneath the LAD without ligation. I/R was induced by 30 min of ischemia followed by 2 h of reperfusion. Significant elevation of the ST segment was confirmed using electrocardiography. Rats were treated with GdCl3 (10 mg/kg) via tail vein injection 30 min before LAD ischemia (Supplementary Figure 1B). Experimental groups included sham (no treatment, no ischemia), I/R (no treatment but subjected to ischemia and reperfusion), GdCl3 (treated and subjected to I/R) and nifedipine 10 mg/kg (treated and subjected to ischemia and reperfusion).
Assessment of the area at risk and infarct size
The LAD was immediately religated at the end of the 2-h reperfusion, and 2 mL of 2% Evans blue dye (Sigma, USA) was injected into the left ventricular cavity. The heart was quickly removed, frozen at -20°C, and sliced horizontally to yield five slices. Slices were incubated in 1% triphenyl tetrazolium chloride (TTC) (Amresco, USA) prepared with phosphate buffer (pH=7.8) for 15 min at 37°C and photographed using a digital camera. The areas stained with Evans blue (blue staining, area not at risk) and TTC (red staining, ischemic but viable myocardium), the TTC-negative area (white area, infarct size) and the LV area were measured digitally using Image J.
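The planimetric readout reduces to simple area ratios. The following sketch uses hypothetical areas (not the measured values) to show how the area at risk and infarct size are typically expressed once the Evans blue, TTC-positive, and TTC-negative areas have been measured in ImageJ; whether infarct size is ultimately reported relative to the area at risk or to the whole left ventricle should follow the convention used in the Results.

```python
def infarct_metrics(blue_area, red_area, white_area):
    """Compute area-at-risk and infarct fractions from planimetric measurements.

    blue_area  : Evans blue-stained area (perfused, not at risk)
    red_area   : TTC-positive area (at risk but viable)
    white_area : TTC-negative area (infarcted)
    """
    lv_area = blue_area + red_area + white_area
    aar = red_area + white_area                   # area at risk
    return {
        "AAR_percent_of_LV": 100.0 * aar / lv_area,
        "infarct_percent_of_AAR": 100.0 * white_area / aar,
        "infarct_percent_of_LV": 100.0 * white_area / lv_area,
    }

# Hypothetical slice measurements in mm^2
print(infarct_metrics(blue_area=38.0, red_area=22.0, white_area=14.0))
```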
Assessment of apoptosis
Apoptotic death of cultured cardiomyocytes was detected by a caspase-3 activity ELISA kit (Applygen, Beijing, China) and an FITC annexin V apoptosis detection kit (BD Pharmingen, USA) for flow cytometry. Myocardial apoptosis in hearts was detected by the terminal deoxyribonucleotide transferasemediated dUTP nick end labeling (TUNEL) detection kit (Roche, Switzerland) and caspase-3 activity ELISA kit (Applygen, Beijing, China) according to the manufacturers' protocols.
Enzyme-linked immunosorbent assay (ELISA)
Blood was collected from the heart immediately after 2 h of reperfusion. Creatine kinase MB (CK-MB), cardiac troponin I (cTn-I), lactate dehydrogenase (LDH) and AA assay kits (CUSABIO, Shanghai, China) were used to detect the activities of these myocardial markers. Heart tissues were rinsed in ice-cold PBS to remove excess blood and minced into small pieces. Tissues (100 mg) were homogenized in 1000 µL of ice-cold lysis buffer and centrifuged for 10 min at 12 000×g at 4 °C. Supernatants were assayed immediately with a caspase-3 activity ELISA kit (Applygen, Beijing, China), a caspase-8 activity ELISA kit (Applygen, Beijing, China), and a Fas ELISA kit (CUSABIO, Shanghai, China). All procedures were performed according to the manufacturers' protocols.
Histological examination
Hearts were fixed in 10% formalin, embedded in paraffin, and cut into 6-μm sections. Sections were stained using hematoxylin and eosin (HE) for histochemical examination. An observer who was blinded to the experimental conditions of animals recorded the data. Two investigators evaluated all histopathological changes in a blinded fashion, and the main observation indexes, including intercellular space, heart tissue edema, and inflammatory cell infiltration, were assessed under a microscope.
Subcellular cytoplasmic and mitochondrial fractionation
Subcellular cytoplasmic and mitochondrial fractionations were obtained as previously described [29] . Isolation of mitochondrial and cytosolic proteins was performed using a mitochondria/cytosol fractionation kit (Beyotime Inst Biotech, Peking, China). Briefly, cells or tissue were collected and washed in PBS followed by the addition of mitochondrial isolation buffer. Lysates were centrifuged at 3500×g for 10 min at 4°C. The resulting pellets were used as the mitochondrial fraction. Supernatants were centrifuged further at 11000×g for 10 min at 4°C and used for the analysis of the cytosolic fraction.
Western blots
Samples were homogenized in RIPA lysis buffer and centrifuged at 15 000 r/min for 15 min at 4 °C. Protein concentrations were measured using a bicinchoninic acid (BCA) protein assay kit (Thermo, Rockford, IL, USA). Equal amounts of protein lysates were separated using 12% or 15% sodium dodecyl sulfate-polyacrylamide (SDS-PAGE) gel electrophoresis. Gels were blotted onto nitrocellulose membranes and incubated with the indicated antibodies. Blots were developed using ECL according to the manufacturer's instructions.
Statistical analysis
All data are presented as the mean±SD. Statistical analysis was performed using one-way ANOVA and Student's t-test, as appropriate. A value of P<0.05 was considered statistically significant.
GdCl 3 inhibited A/R-induced cardiomyocyte apoptosis in vitro
We investigated whether GdCl3 inhibited caspase-3 activity, which is a final common pathway in caspase-dependent apoptosis, to characterize the inhibitory effect of GdCl3 on myocardial cell apoptosis. The effects of GdCl3 were compared with the effect of the Ca2+ channel blocker nifedipine, which reduces intracellular Ca2+ overload in cardiomyocytes and dilates coronary arteries for the treatment of variant angina. Cells were treated with different concentrations of GdCl3 (2.5-10 μmol/L) and subjected to apoptosis analysis on the basis of caspase-3 activity and annexin-V-FITC/PI staining. We found that caspase-3 activity was triggered by A/R and inhibited by GdCl3 treatment in a dose-dependent manner (Figure 1A). GdCl3 at the concentration of 2.5 μmol/L did not affect caspase-3 activity, but 5 and 10 μmol/L exerted significant changes compared to the A/R group. The 5 μmol/L concentration was selected for the following experiments. Flow cytometric analysis of myocardial cells also demonstrated that A/R induced early-stage cell apoptosis, and GdCl3 significantly decreased the extent of cardiomyocyte apoptosis with a similar potency as nifedipine (Figure 1B, 1C). These results indicated that GdCl3 inhibited A/R-induced apoptosis in NRVMs.
GdCl 3 inhibited A/R-induced cardiomyocyte apoptosis via death receptor and mitochondrial signaling pathways
The expression levels of cleaved caspase-8, DR5, Fas, FADD and cytochrome c were evaluated by Western blotting to identify the molecular mechanisms underlying the protective effect of GdCl3 treatment against apoptosis. A/R treatment potently promoted the expression of these proteins compared to the control group, and GdCl3 (5 μmol/L) reversed all these activations (Figure 2). These results suggest that GdCl3 prevents cell apoptosis likely via the inhibition of the A/R-induced death receptor signaling pathway. We also evaluated the level of cytochrome c in the mitochondrial and cytosolic fractions and found that cytochrome c was normally localized in the mitochondria, as shown in the control group. However, the ratio of mitochondrial cytochrome c to cytosolic cytochrome c was reduced in A/R-injured NRVMs compared to the control group. Notably, treatment with GdCl3 significantly increased the ratio of mitochondrial cytochrome c to cytosolic cytochrome c compared to the A/R group (Figure 3A). These results indicated that GdCl3 reversed A/R-induced cell apoptosis via inhibition of the mitochondrial-related signaling pathway.
Accumulating evidence has indicated that AA is involved in cell apoptosis via the induction of Ca 2+ overload, increase in mitochondrial membrane permeability, release of cytochrome c and activation of caspase-3, which eventually leads to cell apoptosis and death [16,17] . We monitored Ca 2+ signals in NRVMs in response to AA to investigate the effect of GdCl 3 on AA-induced Ca 2+ overload and found that AA caused a marked increase in [Ca 2+ ] i . GdCl 3 pretreatment significantly inhibited the AA-induced increase in [Ca 2+ ] i , but nifedipine did not exhibit this effect ( Figure 3B and C). These results suggest that the AA-induced the Ca 2+ signal is independent of potential-dependent Ca 2+ channels. We also assessed internal Ca 2+ activities in NRVMs treated with A/R or GdCl 3 plus the A/R procedure and compared the results with NRVMs cultured in normal O 2 as a control. Figure 3D and 3E shows that spontaneous oscillations were observed only in normal cells, and PE treatment significantly potentiated these oscillations, as reported previously [26] . In contrast, A/R cells lost their spontaneous oscillations and response to PE. Basal Ca 2+ levels were also different between groups. A/R NRVMs exhibited much higher resting [Ca 2+ ] i than normal cells. These abnormal changes in Ca 2+ signaling reflect damaged cellular function because of A/R. GdCl 3 treatment partially prevented these cells from A/R injury via lowering basal Ca 2+ levels and responding to PE, but the spontaneous Ca 2+ transients also disappeared in GdCl 3 +A/R cells ( Figure 3D) and GdCl 3 -treated normal cells (data not shown). These data suggest that GdCl 3 reversed A/R-induced cell apoptosis partially through its inhibition of exaggerated Ca 2+ activities.
GdCl 3 protected against myocardial I/R injury in rats
The induction and evaluation of cardiac I/R injury in rats were performed as described in the Methods (Supplementary Figure 1), and nifedipine was used as the positive control drug. Figure 4A and B shows that the infarct areas were smaller in the GdCl3 and nifedipine groups compared to the I/R group (15.0%±4.6% and 17.1%±5.5% in GdCl3 and nifedipine, respectively, versus 23.2%±5.8% in I/R), but the area at risk (AAR) was similar between these three groups. HE staining in the sham group revealed the normal architecture of the myocardium, with cardiomyocytes of normal size, clear boundaries and regular arrangement. However, cardiomyocytes in the I/R group were arranged irregularly, and intercellular spaces were enlarged, indicating serious damage. GdCl3 and nifedipine pretreatment significantly alleviated the I/R-induced injury of the myocardium (Figure 4C). The release of CK-MB, cTn-I and LDH, which are markers of cardiomyocyte I/R injury, was higher in the I/R group than in the sham group, and GdCl3 or nifedipine pretreatment significantly reduced the release of these factors compared with the I/R group (Figure 4D-F). These results demonstrated that GdCl3 effectively protected the heart against I/R injury.
GdCl 3 inhibited I/R-induced myocardial myocyte apoptosis in rats
Substantial evidence has suggested that apoptosis plays a critical role in I/R injury [30,31] . Therefore, the effect of GdCl 3 on I/Rinduced cardiomyocytes apoptosis in vivo was examined. A significant increase in the number of TUNEL-positive cells was detected in cardiac tissues in the I/R group after 2 h of reperfusion compared with that of the sham group. GdCl 3 treatment exerted a remarkable anti-apoptotic effect, which was evidenced by reduced TUNEL-positive staining ( Figure 5A and B). Similarly, I/R increased caspase-3 activity, and GdCl 3 treatment significantly reduced caspase-3 activity compared with that in the I/R group ( Figure 5C). These results provide direct evidence for GdCl 3 -mediated alleviation of I/R-induced cell apoptosis in vivo.
GdCl 3 alleviated I/R-induced cardiomyocyte apoptosis via the death receptor and mitochondrial signaling pathways
The activity of caspase-3 and caspase-8, the levels of Fas and the expression levels of DR5, FADD and cytochrome c were evaluated using ELISA and Western blotting to examine the molecular mechanisms underlying the protective effect of GdCl3 against I/R-induced apoptosis. Rats subjected to I/R injury exhibited significantly increased caspase-8 activity and Fas levels (Figure 5D and E). Rats in the GdCl3- or nifedipine-treated groups exhibited decreased caspase-8 activity and Fas levels (Figure 5D and E) compared to the I/R group. We evaluated the expression levels of DR5 and FADD in total tissues and cytochrome c in the mitochondrial fraction. I/R treatment potently promoted the expression of DR5 and FADD in whole cells and reduced cytochrome c in mitochondria, and GdCl3 reversed the I/R-induced upregulation of these apoptosis-related signaling proteins (Figure 5F-H). GdCl3 inhibited cell apoptosis via death receptor-related and mitochondrial-related signaling pathways, a result consistent with the in vitro data (Figures 2 and 3).
Pretreatment with GdCl3 reduced AA levels in in vitro and in vivo cardiomyocyte injury
Figure 6 shows that A/R- and I/R-treated cardiomyocytes and rats exhibited a dramatic upregulation of AA release compared to the control and sham groups, respectively. In contrast, GdCl3 pretreatment significantly reduced A/R- and I/R-induced AA levels compared to the vehicle-treated A/R and I/R groups, respectively. However, nifedipine did not exert a significant inhibitory effect on AA release. These results suggest that GdCl3 abolished the AA augmentation induced by the A/R or I/R process, which is a crucial factor in the induction of myocardial injury during I/R in vitro and in vivo [32,33].
Discussion
Recent studies have indicated that low-dose GdCl3 minimizes hepatic I/R injury and prevents primary graft dysfunction after liver transplantation. The present myocardial I/R rat model demonstrated a lower extent of myocardial injury, with decreased infarction size and reduced levels of myocardial injury markers (eg, CK-MB, cTnI, and LDH), in the GdCl3 (10 mg/kg) and nifedipine groups compared with the I/R group (Figure 4). The dose of GdCl3 used in the current study was the same as that used in other studies [20,34]. The increased infarction area of the heart after reperfusion is most likely the result of cell apoptosis [35,36]. Our study demonstrated that GdCl3 administration reduced the number of TUNEL-positive cells, death receptor expression, cytochrome c release and caspase-3 activation, suggesting improved cell viability (Figure 5). Previous studies have demonstrated the involvement of the mitochondrial and death receptor-mediated pathways in cardiomyocyte apoptosis [13,14]. We investigated the regulators of apoptosis in the intrinsic (mitochondrial) and extrinsic (death receptor) pathways in isolated NRVMs to further elucidate the precise mechanism of GdCl3 against myocardial injury. Mitochondria in a pro-apoptotic state release pro-apoptotic triggers, such as cytochrome c and apoptosis-inducing factor, from the intermembrane space [37]. Ca2+ is one of the common secondary messengers that is likely involved in mitochondria-mediated apoptosis pathways, directly or indirectly. [Ca2+]i and [Ca2+]m overload directly causes post-I/R oxidative stress and myocardial apoptosis. In this study, we found that low-dose GdCl3 dramatically inhibited AA-induced Ca2+ overload and the elevated resting [Ca2+]i in A/R NRVMs (Figure 3B-E). We also demonstrated that low-dose GdCl3 significantly ameliorates I/R-induced cytochrome c release in vivo (Figure 5H) and in vitro (Figure 3A) and inhibits cell apoptosis via regulation of the mitochondria-mediated pathway.
Previous studies have demonstrated that pathophysiological responses triggered after reperfusion include the release of activation factors and free radicals, which activate phospholipase A2 and increase AA release [24]. The accumulated AA in the myocardium may play an important role in post-I/R injury because a time-dependent degradation of membrane phospholipids associated with an increase in membrane permeability was observed in the ischemic myocardium [17,38]. The [Ca2+]m overload may be an upstream signal for AA-induced mitochondrial-mediated apoptosis [39]. Our previous studies have suggested that GdCl3 at a molar ratio of 1/3 AA, which is different from other Ca2+ antagonists, almost completely inhibits AA-induced intracellular Ca2+ release and extracellular Ca2+ inflow [18,19]. Another study of ours [20] found that a fixed ratio of GdCl3/AA (1:3) is required to satisfactorily inhibit Ca2+ signal responses in NRVMs. GdCl3 eliminates AA-induced cardiomyocyte apoptosis, probably through a direct chemical interaction with AA, because mass and UV-Vis spectra measurements suggested that a new complex formed when GdCl3 was proportionally mixed with AA (GdCl3:AA = 1:3). Therefore, GdCl3 may act as a scavenger of AA and block the properties and effects of AA. In the present study, we observed that low-dose GdCl3 pretreatment inhibited cell apoptosis in the A/R and I/R models, and it dramatically decreased the level of AA in both the in vitro and in vivo measurements (Figure 6).
Therefore, this study provides further evidence for the association of enhanced AA accumulation following ischemic damage in vitro in cardiomyocytes and an in vivo animal model heart. This study also suggests a possible mechanism in which GdCl 3 acts as an AA scavenger to protect against cell apoptosis during I/R.
Death receptor-related signaling is another important pathway associated with cardiomyocyte apoptosis during I/R. Fas-mediated apoptosis is an important effector process in the progressive loss of cardiomyocytes [40]. The binding of Fas to its ligand (FasL) results in receptor cross-linking and apoptosis via receptor oligomerization and recruitment of the Fas-associated death domain protein (FADD), which regulates the proteolytic activity of caspase-8 and caspase-3 activation [41]. Our data indicated that A/R and I/R treatment activated death receptor pathways, and GdCl3 pretreatment clearly decreased the levels of death receptors and related downstream signal molecules, such as Fas/DR5/FADD (Figures 2 and 5), and caspase-8 and caspase-3 activity in both models (Figures 1, 2 and 5).
Notably, GdCl 3 was less potent than nifedipine in the inhibition of KCl-induced Ca 2+ increases (an indication of L-type Ca 2+ channel activation), but GdCl 3 was much more potent in suppressing AA-induced Ca 2+ signaling: 5 μmol/L GdCl 3 caused a complete abolishment of such signaling in NRVMs (Figure 3 and Supplementary Figure S2). These data suggest that GdCl 3 exerts a weaker inhibitory effect on myocardial contraction and rhythm than nifedipine, which is a common and dangerous side effect of Ca 2+ channel blockers in clinical practice, especially in ischemic hearts. This difference in their effects may represent an advantage of GdCl 3 over Ca 2+ channel blockers for the clinical application of inhibiting cell apoptosis without affecting myocardial contraction and rhythm. In addition, a previous study has suggested that stretch-activated ion channels also play a key role in cardiac pathophysiology [42] , and GdCl 3 is the most widely used antagonist of this channel. Abnormal tissue stretch is a classical feature of myocardial I/R [21,43] . Therefore, the antagonism effect of GdCl 3 on stretchactivated ion channels may also reflect another advantage of GdCl 3 over other drugs.
In summary, the current data demonstrated for the first time that low-dose GdCl3 significantly ameliorates I/R-induced myocardial infarction via the reduction of cardiomyocyte apoptosis through the inhibition of death receptor- and mitochondria-mediated apoptosis pathway activation, thus providing a potential candidate for therapies of acute coronary syndrome, thrombolysis, or extracorporeal circulation-induced myocardial injury.
Automation in the Teaching of Descriptive Geometry and CAD. High-Level CAD Templates Using Script Languages
The main purpose of this work is to study improvements to the learning method of technical drawing and descriptive geometry through exercises with traditional techniques that are usually solved manually, by applying automated processes assisted by high-level CAD templates (HLCts). Given that an exercise based on traditional procedures can be solved step by step, as detailed in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and then generalize it by incorporating references. Traditional teaching methods have become outdated and have been relegated in current curricula; however, they can still be applied in certain automation processes. The use of geometric references (using variables in script languages) and their incorporation into HLCts allows the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users should be able to use HLCts to generate future modifications of these exercises. This paper introduces the automation process for generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the statements and reduces the possibility of human error.
Introduction
By definition, descriptive geometry is a method of studying 3D geometry through 2D images. It provides insight into the structure and metric properties of spatial objects, processes, and principles. Descriptive geometry courses cover not only projection theory but also modelling techniques for curves, surfaces, and solids, thus offering insight into a broad variety of geometric shapes [1]. 'Learning by doing' is an important methodological principle in this subject, and one traditional goal is to develop and refine the students' problem-solving skills. As the drawing tools have drastically changed in the last years, this has had consequences for descriptive geometry education. CAD packages replace manual drawings. This has made the subject more interesting and attractive for pupils and students because they can now produce high-quality rendered graphics results. Of course, this development takes place at the cost of training in geometric reasoning. The increasing importance of information technologies in the everyday world and in education makes the question of teaching descriptive geometry with the use of computer software an urgent one [2].
Computer Aided Design (CAD) is the use of computer systems to help in the creation, optimization, modification, or analysis of a design. CAD software is used to increase designer productivity, improve design quality and communications through documentation, and create databases for manufacturing. As the CAD modelling techniques become more and more advanced, it is necessary to complete product modelling and design changes faster than ever [4]. Updating assemblies that have hundreds of sub-assemblies and parts manually in 3D modelling software is very complicated and time consuming. Undoubtedly, once a task is fully defined, computers and machines are unparalleled in executing it repeatedly with great speed and sustained accuracy. To this end, Hopgood [3] states that "computers have therefore been able to remove the tedium from many tasks that were previously performed manually". The process referred to is also called Design Automation (DA) by various researchers. The key phrase here is that many manual tasks have been removed through DA and a natural question would be: why not remove the tedium from all manual tasks? [4].
One of the great benefits of using CAD to create our technical drawings is the ability to adapt it to suit our company's processes. If we establish a technical drawing process that we perform frequently, it can be automated. If we have ever had to do the same thing with CAD twice, we should think about how to automate it so we never have to do it again.
One of the easiest ways to automate a CAD process is to write a script [5]. In computer programming terms, a script is a program that will run with no interaction from the user. In AutoCAD [6], a script file is an ASCII text file that contains a set of command line instructions to follow, just like an actor reading from a script. AutoCAD script files always have the file extension '.scr'. AutoLISP [7] is the original and most popular programming language for AutoCAD. The reason for its popularity is that it is a natural extension of the program. No additional software needs to be run, and AutoLISP can run commands that Autodesk and other developers offer in the command window.
The LISP code can be entered directly into the command window or loaded using '.lsp' or '.scr' files. Once a LISP program is loaded, the built-in functions can be executed from the command window. These functions can be executed similarly to CAD commands, but it is the programmer who decides which messages to display. It is possible to use LISP code with a command macro that is activated from the CAD user interface or from a tool on a palette.
Visual languages can be very useful for helping architecture students understand general programming concepts, but scripting languages are fundamental for implementing generative design systems [8].
It is possible to learn to draw with AutoCAD and to program with AutoLISP for AutoCAD using the manuals and online aids offered by Autodesk (knowledge.autodesk.com) and by independent developer websites (lee-mac.com, afralisp.net, or cadtutor.com). Self-learning through tutorials and videos is widespread, and numerous websites can answer any question that arises; they are easily found with search engines using the terms 'AutoCAD' or 'AutoLISP' as appropriate. In the design of complex engineering products, it is essential to handle cross-couplings and synergies between subsystems. An emerging technique that has the potential to considerably improve the design process is multidisciplinary design optimization (MDO) [10].
MDO requires a concurrent and parametric design framework. Powerful tools in the quest for such frameworks are DA and knowledge-based engineering. The required knowledge is captured and stored as rules and facts that are finally triggered upon request. A crucial challenge is what type of knowledge should be stored in order to realize generic DA frameworks, and how it should be stored [9]. The aim is to shift from manual modelling of disposable geometries to CAD automation by introducing high-level generic geometry templates. Instead of repeatedly modelling similar instances of objects, engineers should be able to create more general models that can represent entire classes of objects.
According to Asperl [11], in CAD learning, students achieve a level of skills and knowledge appropriate to their motivation, with two basic objectives: to achieve good marks in examinations and to be well prepared for further tasks. As a consequence, the position and role of teachers have to change too. Teaching in front of the audience and explaining constructions step by step should be only a small part of education in CAD and other subjects. The teacher should not be the centre of activities; it is more efficient to put the individual student at the centre. By analogy with team sports, a good teacher does not have to be the most valuable CAD player but has to be the best coach. According to Bokan [12], "rapid development of CAD/CAM software has made classical methods of Descriptive Geometry entirely obsolete. However, this discipline is still very important for strengthening one's spatial intuition". The introduction of modern CAD software packages in descriptive geometry improves the quality of studies; the students become more involved and interested. They also gain some practical skills that are useful in the job market. It is encouraging to learn that students can produce high-quality and very useful teaching accessories.
The idea [12] is not to teach students particular methods of descriptive geometry but to solve the same spatial problems using 3D features of AutoCAD. In such a way, the geometrical solution of a problem stays the same, but the technique is no longer classical. The solution is immediately a 3D object that can be easily projected in many ways, each of which would require a separate drawing in classical descriptive geometry. This approach is also useful for strengthening a student's spatial intuition. It is a kind of geometrical modelling but is applied to classical problems of descriptive geometry.
According to Dosen [13], "generally, online students require additional support as they need to adjust their approaches to learning. However, teaching CAD online is more challenging than teaching other subjects. On-campus students attend face-to-face tutorials and interact with their tutor who is able to interactively and visually demonstrate aspects of the CAD graphical user interface, while distance learning students rely on the communication with the lecturer as well as the available teaching materials".
Oraifige [14] states that "the overall results support the argument that such online systems for students' teaching, learning and evaluation can be reliably implemented; however careful planning and analysis are necessary to gain the potential benefits".
In an effort to address the above challenges, this paper proposes the creation of high-level CAD templates (HLCts) for the manipulation of geometry and high-level analysis templates (HLAts) for concept evaluations.
Descriptive geometry and CAD templates
AutoCAD and other compatible applications automatically create object identifiers, hidden during the drawing process, that the average user is usually unaware of. These identifiers are necessary for the internal manipulation of objects, but because of their length and the difficulty of assimilating them, they are hard to incorporate into descriptive geometry procedures. Through LISP variables it is possible to create geometric and object references similar to those traditionally used in descriptive geometry. These references are very useful in the detailed description of graphic procedures, although with limitations: a) they do not distinguish between uppercase "A" and lowercase "a"; b) they do not allow "a'", although "a_", "a$", "a%", "a#", "a*", or "a&" are possible.
Figure 1. File P5a.scr -parametric data
In the command window, in macros, and in script files, CAD drawing commands can be combined with LISP commands, functions, and variables (used as geometric and object references). The following two lines are equivalent:
_polygon !N1 _e !A !B ; in script files and in the command window
(command "_polygon" N1 "_e" A B) ; in LISP files and in script files
Notes: Comment lines are preceded by semicolons and serve to facilitate the understanding of the code. All AutoCAD commands preceded by the underscore will be executed even if the program is installed in another language. The references used in script files or in the command window are preceded by an exclamation mark. The keyword "_edge", or its abbreviation "_e", is written with quotation marks inside the command function but without them in plain commands. LISP functions (in parentheses) can be used as arguments in AutoCAD commands, and AutoCAD commands can be used in AutoLISP through the command function. Solving an exercise in descriptive geometry using 3D modelling with CAD applications does not require advanced knowledge and, with effort, it is possible to undertake it in a self-taught way. However, certain exercises require additional knowledge of descriptive geometry. Learning an exercise typology also requires studying a topic and all of its possible exercises, and solving them efficiently requires the study and application of advanced procedures. By creating a set of descriptive geometry and CAD templates that incorporate these advanced procedures, the most frequent geometry exercises can be solved, and students can learn how to solve them.
The procedure for the creation of a generic descriptive geometry and CAD template is as follows. We select examples from the statements of current exercises. We solve them first as paper-and-pencil sketches and later with AutoCAD. We extract the command history and summarize it in script files. To avoid unexpected errors when executing commands from scripts, it is essential to disable certain drawing aids visible in the status line and to re-activate them when finished. We create reference variables with the parametric data and draw the objects. We calculate the derived references that can be reused and redraw. A later analysis allows us to process the information in a global way using variables, functions, and commands created with LISP. Distinguishing between direct and computer processes can be of great interest. A detailed analysis and design allows us to anticipate existing relationships between surfaces through shadows or intersections. We create the necessary analysis functions. At this point we have defined a class of exercises from an existing exercise. Finally, we carry out debugging to remove possible errors.
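As an illustration of this kind of parametric script generation (written here in Python rather than in the paper's own AutoLISP/.scr toolchain), the following minimal sketch writes a script file in the spirit of P5a.scr: it stores the reference variables as LISP assignments and then draws a polygon through them. The variable names N1, A, and B follow the example above; the numeric values are placeholders, not the data used in the paper.

def write_parametric_script(path, n_sides, a, b):
    # Write a minimal parametric AutoCAD script (.scr).
    # n_sides: number of polygon sides, stored in the LISP variable N1.
    # a, b: (x, y) endpoints of one edge, stored in the LISP variables A and B.
    lines = [
        "; parametric data (illustrative sketch, values are placeholders)",
        "(setq N1 {})".format(n_sides),
        "(setq A (list {} {}))".format(a[0], a[1]),
        "(setq B (list {} {}))".format(b[0], b[1]),
        "_polygon !N1 _e !A !B  ; draw the polygon through the references",
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_parametric_script("P5a_demo.scr", n_sides=6, a=(0, 0), b=(20, 0))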
We present and discuss the results of two practical examples of templates: Shadowed Developable Surfaces and Intersection of Planes. The first is a shadows exercise of distant light on four surfaces: a pyramid, a prism, a cone, and a cylinder. The surfaces are supported on a horizontal plane. The shadows are solved by the separatrix contours, their projection on the horizontal plane, and their projection on the surfaces. The second exercise is the typical intersection of planes in multiview orthographic projection by means of auxiliary projections of one plane on another.
Results and discussion
The first template example, "Practice 5 - Shadowed Developable Surfaces", consists of several types of files: a) an empty drawing template (A3.dwt); b) six scripts (P5.scr, P5a.scr, P5b.scr, P5c.scr, P5d.scr, and P5e.scr); and c) an AutoLISP file (dgfun.lsp). The 3D modelling procedure is divided into several script files, organized by layers (episodes): a) P5a.scr -a_data (understanding); b) P5b.scr -b_auxiliary (analysis and design); c) P5c.scr -c_process (planning); d) P5d.scr -d_process (scan); and e) P5e.scr -e_results (verification). Modifying the parametric data of both templates allows us to obtain different results. We plan to provide the ability to change the coordinates of reference points, the number of edges of polygons, the cone or cylinder radius, and the height of any surface or the distant light vector. Descriptive geometry and CAD templates, based on ontologies, represent a great advance for the automatic generation of educational resources. These templates can be used as custom practice monitors. The student is simultaneously introduced to descriptive geometry and to the complex world of CAD with its derivatives. An exercise can be repeated with multiple parametric data until the concepts are consolidated. To avoid the exchange of results between students, it is possible to change the starting data and automatically obtain multiple results. Although development platforms more advanced than AutoLISP and script files exist, they do not work in applications compatible with AutoCAD or in other operating systems (macOS and Linux). They also have the drawback of requiring programming skills. Other current advanced CAD programs have powerful development environments with scripting languages, similar to AutoLISP, that enable their use as a platform for the generation of geometry and CAD templates.
Descriptive geometry and CAD templates can be useful for: explaining theoretical concepts using practical examples; allowing a detailed reading of the text of the script files, where the procedure followed is explained by comments; allowing a detailed reading of the script files where the mathematical functions created are explained; solving the exercises provided by the teacher in a flexible way; testing the level of theoretical knowledge of the student through self-assessment; serving as a test platform for students who wish to practice successfully before being tested; and serving as a development platform for the creation of new templates. Only the synergy of three complementary core subjects (mathematics, descriptive geometry, and CAD) will enable students to access this knowledge in less time and with less effort. In this sense, Stachel [1] writes that "only people with a deep knowledge of Descriptive Geometry will be able to make extensive use of CAD programs" and that "the importance of mathematics continues to increase even though computers take charge of the computational work".
More can be done in the field of multidisciplinary design optimization, incorporating programming as a fourth core subject involved in the management of descriptive geometry and CAD templates.
Conclusions
This document proposes a method for automating existing procedures to generate exercises in descriptive geometry and CAD. The proposed method is analysed with several sets of parametric data in the two examples presented, and the following conclusions can be drawn: 1) By using automation processes, it is possible to design exercises without user intervention. 2) Automation in the generation of exercises not only saves time but also increases the quality of the exercise statements and reduces the possibility of human error. 3) Multidisciplinary design optimization in exercises reduces learning effort and speeds up the acquisition of graphic skills. Expert authors create the first descriptive geometry and CAD templates, but their use and modification do not require advanced knowledge. | 2019-06-13T13:10:49.749Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "27a04406ec4e92b67d9318244ad9703e92e326cb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/245/6/062040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b15161ceb0e90a74e74c0fbd23e2ba1b9abd61c9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
258781836 | pes2o/s2orc | v3-fos-license | Model-Based Reinforcement Learning via Stochastic Hybrid Models
Optimal control of general nonlinear systems is a central challenge in automation. Enabled by powerful function approximators, data-driven approaches to control have recently successfully tackled challenging applications. However, such methods often obscure the structure of dynamics and control behind black-box over-parameterized representations, thus limiting our ability to understand closed-loop behavior. This paper adopts a hybrid-system view of nonlinear modeling and control that lends an explicit hierarchical structure to the problem and breaks down complex dynamics into simpler localized units. We consider a sequence modeling paradigm that captures the temporal structure of the data and derive an expectation-maximization (EM) algorithm that automatically decomposes nonlinear dynamics into stochastic piecewise affine models with nonlinear transition boundaries. Furthermore, we show that these time-series models naturally admit a closed-loop extension that we use to extract local polynomial feedback controllers from nonlinear experts via behavioral cloning. Finally, we introduce a novel hybrid relative entropy policy search (Hb-REPS) technique that incorporates the hierarchical nature of hybrid models and optimizes a set of time-invariant piecewise feedback controllers derived from a piecewise polynomial approximation of a global state-value function.
I. Introduction
The class of nonlinear dynamical systems governs a vast range of real-world applications and underpins the most challenging problems in classical control and reinforcement learning (RL) [1], [2]. Recent developments in learning-for-control have pushed towards deploying more complex and highly sophisticated representations, e.g., (deep) neural networks and Gaussian processes, to capture the structure of both dynamics and controllers. This trend has led to unprecedented success in the domain of RL [3] and can be observed in both approximate optimal control [4]- [6] and approximate value and policy iteration algorithms [7]- [9].
However, before the latest revival of neural networks, research has focused on different paradigms for solving complex control tasks. One interesting concept relied on decomposing nonlinear structures of dynamics and control into simpler piecewise (affine) components, each responsible for an area of the state-action space. Instances of this abstraction can be found in the control literature under the labels of hybrid systems or switched models [10]- [13], while in the machine and reinforcement learning communities, the terminology of switching dynamical systems and hybrid state-space models is more widely used [14]- [17].
While the hybrid-state paradigm is a natural choice for studying jump processes, it also provides a surrogate piecewise approximation of general nonlinear dynamical behavior. Despite being less flexible than generic black-box approximators, hybrid models can regularize functional complexity and contribute to improved interpretability by imposing a structured representation.
Adopting this perspective in this paper, we present techniques for data-driven automatic system identification and closed-loop control of general nonlinear systems using piecewise polynomial hybrid surrogate models. More concretely, we focus on dynamic Bayesian graphical models as hybrid representations due to their favorable properties. These models have an inherent time-recurrent structure that captures correlations over extended horizons and carry over the advantages of well-established recursive Bayesian inference techniques for dynamical time series data.
In prior work [18], we presented a maximum likelihood approach for hierarchical piecewise system identification and behavioral cloning. Here, we robustify that approach by introducing suitable priors over all parameters. However, the central contribution of this paper is the introduction of an infinite horizon reinforcement learning framework that integrates the structured representation of stochastic hybrid models. The resulting algorithm interactively synthesizes nonlinear feedback controllers and value functions via a hierarchical piecewise polynomial architecture.
This paper is structured as follows. In Section II, we start by reviewing and comparing prominent paradigms of system modeling and optimal control of hybrid systems. Using that context, in Section III we highlight the advantages of our contributions in comparison with the literature. In Section IV, we cast the control problem as an infinite horizon Markov decision process and extend it to accommodate a hybrid structure. Next, in Section V, we introduce our notion of stochastic switching models in the form of hybrid dynamic Bayesian networks, as previously established in [18]. In Section VI, we recap our approach from [18] and improve it to derive a maximum a posteriori expectation-maximization (EM) algorithm for inferring the parameters of probabilistic hybrid models from data. This inference method is helpful for automatically decomposing nonlinear open-loop dynamics into switching affine regimes with arbitrary boundaries and for deconstructing state-of-the-art nonlinear expert controllers into piecewise polynomial policies. Furthermore, in Section VII, we formulate hybrid optimal control as a stochastic optimization problem and derive a trust-region reinforcement learning algorithm that incorporates an explicit hierarchical model of the nonlinear dynamics. We use this approach to iteratively learn piecewise approximations of the global nonlinear value function and a stationary feedback controller. Finally, in Section VIII, we empirically evaluate our approaches on examples of stochastic nonlinear systems, including results from [18] that contribute to the overall picture.
Our empirical evaluation indicates that hybrid models can provide an alternative to generic black-box representations for system identification, behavioral cloning, and learning-based control. Hybrid models are able to reach comparable performance and deliver simpler, easily identifiable switching patterns of dynamics and control while requiring a fraction of the number of parameters of other functional forms. However, the results also reveal certain drawbacks, mainly poor scalability and increased algorithmic complexity. We address these issues in a final outlook in Section IX.
II. Related Work
This section reviews work related to the modeling and control of hybrid systems and highlights connections and parallels between approaches stemming from the control and machine and reinforcement learning literature.
Hybrid systems have been extensively studied in the control community and are widely used in real-world applications [19], [20]. For research on hybrid system identification, we refer to survey work in [21] and [22]. There, the authors focus on piecewise affine (PWA) systems and introduce taxonomies of different representations and procedures commonly used for identifying sub-regimes of dynamics, ranging from algebraic approaches [23] to mixed-integer optimization [24] and Bayesian methods [25]. Furthermore, identification techniques for piecewise nonlinear systems have been developed based on sparse optimization [26] and kernel methods [27]. Finally, it is worth noting that the majority of the literature considers deterministic regime-switching events, with exceptions in [28], [29].
FIGURE 1: A hybrid system with K = 3 piecewise affine regimes. The top row depicts the mean unforced continuous transition dynamics in the phase space. The bottom row shows the distinct activation regions of the three dynamics regimes across the phase space. We illustrate examples of affine (left), quadratic (middle), and third-order polynomial (right) switching boundaries. Figure reproduced from [18].
Research in the area of optimal control for hybrid systems stretches back to the seminal work in [30], which highlights the possibility of general nonlinear control by considering piecewise affine systems. In [31], an overview of control approaches for piecewise affine switching dynamics is presented. The authors categorize the literature by distinguishing between externally and internally forced switching mechanisms. The bulk of optimal control approaches in this area focuses on (nonlinear) model predictive control (MPC) [32]. Here we highlight the influential work in [33], which formulates the optimal control problem as a mixed-integer quadratic program (MIQP). This approach was later extended in [34] and [35] to solve a multi-parametric MIQP and arrive at time-variant piecewise affine state-feedback controllers and piecewise quadratic value functions with polyhedral partitions. Recently, more efficient formulations of hybrid control have been proposed [36], which leverage modern techniques from mixed-integer and disjunctive programming to tackle large-scale problems.
Hybrid representations also play a central role in data-driven, general-purpose process modeling and state estimation [37], [38], where different classes of stochastic hybrid systems serve as powerful generative models for complex dynamical behaviors [39]- [41]. The dominant paradigm in this domain has been that of probabilistic graphical models (PGM), more specifically, hybrid dynamic Bayesian networks (HDBN) for temporal modeling [42], [43]. One crucial contribution of recent Bayesian interpretations of switching systems is rooted in the Bayesian nonparametric (BNP) view [44]- [47]. This perspective theoretically allows for an infinite number of components, thus dramatically increasing the expressiveness of such models. Given the limited scope of this review section, we highlight only recent contributions with high impacts, such as [48] and [17], which successfully develop Markov chain Monte Carlo (MCMC) and stochastic variational inference (SVI) techniques for system identification. More recently, the rise of variational auto-encoders [49] has enabled a new and powerful view of inference techniques [50] for hybrid systems. A distinct drawback of such approaches is their reliance on end-to-end differentiability and the need to relax discrete variables in order to perform inference.
In the domain of learning-for-control, the notion of switching systems is directly related to the paradigm of model-free hierarchical reinforcement learning (HRL) [51], [52], which combines simple representations to build complex policies.
Here it is useful to differentiate between two concepts of hierarchical learning, namely temporal [53], and state abstractions [54]. In their seminal work [55], [56], the authors build on the framework of semi-Markov decision processes (SMDP) [57] to learn activation/termination conditions of temporally extended actions (options) for solving discrete environments. Additionally, pioneering work in optimizing hierarchical control structures with temporally extended actions is developed in [58] and [59]. Recent work has focused on formulations of the SMDP framework that facilitate simultaneous discovery and learning of options [60]- [64].
However, the concept of state abstraction -partitioning state-action spaces into sub-regions, each governed by local dynamics and control -carries the most apparent parallels to the classical view of hybrid systems. In [65], a proof of convergence for RL in tabular environments with state abstraction is presented, while [66] does a comprehensive study of different abstraction schemes and gives a formal definition of the problem. Furthermore, recent work has shown promising results in solving complex tasks by combining local policies, albeit while leveraging a complex neural network architecture as an upper-level policy [67].
Switching systems serve as a powerful tool in behavioral cloning. For example, [68] combines hidden Markov models (HMMs) with Gaussian mixture regression to represent trajectory distributions. In contrast, [62] uses a hidden semi-Markov model (HSMM) to learn hierarchical policies, and [69] introduces switching density networks for system identification and behavioral cloning. Finally, a Bayesian framework for hierarchical policy decomposition is presented in [70], albeit while considering known transition dynamics.
III. Contribution
In light of the motivation and the literature reviewed in Sections I and II, we establish here the overall contribution of our methodology and highlight the main differences that distinguish it from related approaches.
As previously stated, this work strives to cast the problem of nonlinear optimal control into a data-driven hierarchical learning framework. Our aim is to introduce explicit structure and adopt hybrid surrogate models to avoid the opaqueness of recently popularized black-box representations. While this paradigm has been established before, our realization differs from previous attempts in two central aspects:
• System Modeling: This work leverages probabilistic hybrid dynamic networks as hierarchical representations of nonlinear dynamics. Contrary to piecewise autoregressive exogenous (PWARX) systems, HDBNs straightforwardly account for noise in both discrete and continuous dynamics. They also incorporate nonlinear transition boundaries, thus minimizing partitioning redundancy. Furthermore, HDBNs admit efficient inference methods in data-driven applications. Finally, by pursuing an abstraction over states instead of time, we circumvent the need to infer termination policies of the SMDP framework.
• Control Synthesis: We propose a hybrid policy search approach that formulates a non-convex infinite horizon objective and optimizes a piecewise polynomial approximation of the value function with nonlinear partitioning. This approximation is used to derive stationary switching feedback controllers. In contrast, trajectory optimization and model predictive control techniques for hybrid models are often cast as sequential convex programs that assume polyhedral partitions and optimize a fixed horizon objective, yielding time-variant value functions and controls.
IV. Problem Statement
Consider the discrete-time optimal control problem of a stochastic nonlinear dynamical system, defined as an infinite horizon Markov decision process (MDP). An MDP is defined over a state space X ⊆ R^d and an action space U ⊆ R^m. The probability of a transition from state x to state x' by applying action u is governed by the Markovian time-independent density function p(x'|x, u). The reward r(x, u) is a function of the state x and action u. The state-dependent policy π(u|x), from which the actions are drawn, is a density determining the probability of an action u given a state x. The general objective in an average-reward infinite horizon optimal control problem is to maximize the average reward V^π(x) = lim_{T→∞} (1/T) E[ Σ_{t=1}^{T} r(x_t, u_t) ], where V^π denotes the state-value function under the policy π, starting from an initial state distribution µ_1(x).
Given the context of this work and our choice to model the system with hybrid models, we introduce to the MDP formulation a new hidden discrete variable z, an indicator of the currently active local regime. The resulting transition dynamics can then be expressed by the factorized density function p(x', z'|x, u, z) = p(z'|z, x, u) p(x'|x, u, z'), which we depict as a graphical model in Figure 2 and discuss in further detail in the upcoming section. In the same spirit of simplification through hierarchical modeling, we employ a mixture of switching polynomial controllers π(u|x, z), associated with a piecewise polynomial value function V^π(x, z).
FIGURE 2: Graphical model of recurrent autoregressive hidden Markov models (rARHMMs) extended to support hybrid controls. In rARHMMs, the discrete state z explicitly depends on the continuous state x and action u, as highlighted in red. Figure reproduced from [18].
V. Hybrid Dynamic Bayesian Networks
In this section, we focus on the modeling assumptions for the stochastic switching transition dynamics p(x', z'|x, u, z) introduced in Section IV. We choose recurrent autoregressive hidden Markov models (rARHMMs) as a representation, which is a special case of recurrent switching linear dynamical systems (rSLDS) [17], also known as augmented SLDS [71]. In contrast to rSLDS, an rARHMM lacks an observation model and directly describes the internal state up to an additive noise process. We extend rARHMMs to support exogenous and endogenous inputs in order to simulate the open- and closed-loop behaviors of driven dynamics. Figure 2 depicts the corresponding graphical model, which closely resembles the graph of a PWARX. An rARHMM with K regions models the trajectory of a dynamical system as follows. The initial continuous state x_1 ∈ R^d and continuous action u_1 ∈ R^m are drawn from a Gaussian and a conditional Gaussian distribution, respectively. The initial discrete state z_1 is a random vector modeled by a categorical density parameterized by φ. The transitions of the continuous state x_{t+1} and of the actions u_t are modeled by affine-Gaussian dynamics. The discrete transition probability p(z_{t+1}|z_t, x_t, u_t) is governed by K categorical distributions parameterized by a state-action-dependent multi-class logit link function f [72], where f may have any type of features in (x, u). The vectors ω_ij parameterize the discrete transition probabilities for all transition combinations i → j, ∀ i, j ∈ [1, K]. Figure 1 depicts realizations of different logit link functions leading to various state space partitionings.
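To make the generative process concrete, the following minimal NumPy sketch samples a trajectory from an rARHMM with K regions under random exogenous actions. All parameter arrays (W for the logit link weights; A, B, c, Sigma for the affine-Gaussian regimes) are assumed to be given, and the logit link is restricted to linear features purely for illustration.

import numpy as np

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

def sample_rarhmm(T, params, pi0, rng):
    # params: dict with 'W' (K, K, d+m+1) logit weights and per-regime
    # dynamics 'A' (K, d, d), 'B' (K, d, m), 'c' (K, d), 'Sigma' (K, d, d).
    d = params["A"].shape[1]
    m = params["B"].shape[2]
    K = len(pi0)
    x = rng.normal(size=d)                      # initial continuous state
    z = rng.choice(K, p=pi0)                    # initial discrete state
    xs, zs = [x], [z]
    for _ in range(T - 1):
        u = rng.normal(size=m)                  # exogenous (random) action
        feat = np.concatenate([x, u, [1.0]])    # linear features with a bias term
        z = rng.choice(K, p=softmax(params["W"][z] @ feat))   # recurrent switch
        mean = params["A"][z] @ x + params["B"][z] @ u + params["c"][z]
        x = rng.multivariate_normal(mean, params["Sigma"][z])
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)

# usage sketch: xs, zs = sample_rarhmm(100, params, pi0, np.random.default_rng(0))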
The remainder of this paper focuses on using these hybrid models in three scenarios:
• An open-loop setting that treats the control u as an exogenous input and is used for automatically identifying nonlinear systems via decomposition into continuous and discrete switching dynamics.
• A closed-loop setting that assumes the control u to originate from a nonlinear controller. We show that this setting can simultaneously decompose dynamics and control in a behavioral cloning scenario.
• A reinforcement learning setting in which we develop a model-based hybrid policy search algorithm to learn switching controllers for general nonlinear systems.
VI. Bayesian Inference of Hybrid Models
In this section, we sketch the outline of an expectation-maximization/Baum-Welch algorithm [73]- [75] for inferring the parameters of an rARHMM given time-series observations. The resulting algorithm can be used in two ways. First, it can be applied to automatically identify hybrid models and approximate the open-loop dynamics of nonlinear systems given state-action observations. Second, it can clone the closed-loop behavior of a nonlinear controller and decompose it into a set of local experts. Our approach is related in some aspects to the Baum-Welch algorithms proposed in [76] and [62]. However, we introduce suitable priors over all parameters and derive a maximum a posteriori (MAP) technique with a stochastic maximization step and hyperparameter optimization. In our experience, the priors significantly reduce the sensitivity of EM with respect to the initial point, making it less prone to getting stuck in bad local minima.
Moreover, a good prior specification is crucial in small-data regimes, since a vague prior may dominate the predictive posterior and effectively cause under-fitting. We implement a hyperparameter optimization scheme that alleviates this concern by optimizing the prior parameters via empirical Bayes [77], thus attenuating the prior influence and improving the predictive performance significantly.
A. Maximum A Posteriori Optimization
Consider again the rARHMM in Figure 2, where the continuous state x and action u are observed variables, while the K-region indicators z are hidden (we abuse notation slightly by sometimes using z to refer to the discrete state index instead of treating it as a one-hot vector). To infer the model parameters, we assume a dataset D of N state-action trajectories, where (X^n, U^n, Z^n) represent the time concatenation of an entire trajectory (x^n_{1:T}, u^n_{1:T}, z^n_{1:T}). The objective corresponding to system identification and behavioral cloning can be cast as a maximization problem of the log-posterior probability of the observations, where p(D^n, Z^n|θ) is the complete-data likelihood of a single trajectory, which factorizes according to the graphical model in Figure 2, and p(θ|h) is the factorized parameter prior. We choose all priors to be conjugate or semi-conjugate with respect to their likelihoods. Therefore, we place a normal-Wishart (NW) prior on the initial state distribution, (µ_k, Ω_k) ∼ NW(0, κ_0, Ψ_0, ν_0), and a matrix-normal-Wishart (MNW) prior on the affine transition dynamics. The initial discrete state takes a Dirichlet prior φ ∼ Dir(τ_0), while the logit link function parameters are governed by a non-conjugate zero-mean Gaussian prior with diagonal precision, ω_ik ∼ N(0, αI). Finally, we place a separate matrix-normal-Wishart prior on the conditional action likelihood. The choice of priors is not restricted to these distributions. Depending on the modeling assumptions, one can assume dynamics with diagonal noise matrices and pair them with gamma priors. Moreover, if the system is known to have a state-independent noise process, the K Wishart and gamma priors can be tied across components, leading to a more structured representation.
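The display equation for this objective did not survive extraction; a plausible form, consistent with the description above (complete-data likelihood summed over the hidden regime labels, plus the factorized prior), is

\theta_{\mathrm{MAP}} = \arg\max_{\theta} \; \sum_{n=1}^{N} \log \sum_{Z^n} p(\mathcal{D}^n, Z^n \mid \theta) \; + \; \log p(\theta \mid h).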
B. Baum-Welch Expectation-Maximization
On closer examination of Equations (2) and (3), we observe that the optimization problem is non-convex with multiple local optima, since the complete-data likelihood ∏_{n=1}^{N} p(D^n, Z^n|θ) can follow complex multi-modal densities. Another technical difficulty is the summation over all possible trajectories of the hidden variables Z^n, which is of computational complexity O(N K^T) and is intractable in most cases. Expectation-maximization algorithms overcome the latter problem by introducing a variational posterior distribution over the hidden variables, q(Z^n), and deriving a lower bound on the complete log-probability function. We find a point estimate θ_MAP by following a modified scheme of EM, alternating between an expectation step (E-step), in which the lower bound in Equation (4) is maximized with respect to the variational distributions q(Z^n) given a parameter estimate θ̂; a maximization step (M-step), which updates θ given (q(Z^n), ĥ); and, finally, an empirical Bayes step (EB-step), which updates h given (q(Z^n), θ̂). A sketch of the overall iterative procedure is presented in Algorithm 1.
1) Exact Expectation Step
Maximizing the lower bound with respect to q(Z^n) amounts to reformulating Equation (4). This form of the lower bound implies that the optimal variational distribution q̂(Z^n) minimizes the Kullback-Leibler (KL) divergence [78], meaning q̂(Z^n) = p(Z^n|D^n, θ) = p(z^n_{1:T}|x^n_{1:T}, u^n_{1:T}, θ) (Equation (5)). This update tightens the bound if the posterior model q(Z^n) belongs to the same family as the true posterior [15]. Notice that the E-step is independent of the prior p(θ). Moreover, Equation (5) indicates that the E-step reduces to the computation of the smoothed marginals p(z^n_t|x^n_{1:T}, u^n_{1:T}, θ̂) under the current parameter estimate θ̂. Following [73] and [72], we derive a two-filter algorithm that enables closed-form and exact inference by splitting the smoothed marginals into a forward and a backward message (for uncluttered notation, we briefly drop the dependency on θ̂ while deriving the forward-backward recursions):
γ^n_t(k) = p(z^n_t = k | x^n_{1:T}, u^n_{1:T}) ∝ p(z^n_t = k | x^n_{1:t}, u^n_{1:t}) p(x^n_{t+1:T}, u^n_{t+1:T} | z^n_t = k, x^n_t, u^n_t) = α^n_t(k) β^n_t(k),
where α^n_t(k) = p(z^n_t = k | x^n_{1:t}, u^n_{1:t}) is the forward message, which computes the filtered marginals via a forward recursion, and β^n_t(k) = p(x^n_{t+1:T}, u^n_{t+1:T} | z^n_t = k, x^n_t, u^n_t) is the backward message, which performs smoothing by computing the conditional likelihood of future evidence via the recursion
β^n_t(k) = Σ_j p(z^n_{t+1} = j | z^n_t = k, x^n_t, u^n_t) β^n_{t+1}(j) p(x^n_{t+1} | x^n_t, u^n_t, z^n_{t+1} = j) p(u^n_{t+1} | x^n_{t+1}, z^n_{t+1} = j).
Additionally, by combining both forward and backward messages, we can compute the two-slice smoothed marginals p(z^n_t, z^n_{t+1} | x^n_{1:T}, u^n_{1:T}), which will be useful during the maximization and empirical Bayes steps.
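As an illustration of this E-step machinery, the following NumPy sketch computes smoothed marginals γ and two-slice marginals ξ for a single trajectory. It assumes precomputed per-step observation likelihoods lik[t, k] under regime k and state-action-dependent transition matrices trans[t, i, j]; it uses the standard scaled alpha-beta recursion rather than the paper's exact two-filter derivation, so it is only a sketch of the idea.

import numpy as np

def forward_backward(pi0, trans, lik):
    # pi0: (K,) initial discrete distribution
    # trans: (T-1, K, K) state-action-dependent transition matrices
    # lik: (T-1, K) conditional likelihood of the next observation under regime k
    T_minus_1, K = lik.shape
    T = T_minus_1 + 1
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi0 / pi0.sum()
    for t in range(T - 1):                        # filtering (forward pass)
        a = (alpha[t] @ trans[t]) * lik[t]
        alpha[t + 1] = a / a.sum()
    for t in reversed(range(T - 1)):              # smoothing (backward pass)
        b = trans[t] @ (lik[t] * beta[t + 1])
        beta[t] = b / b.sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)     # smoothed marginals
    xi = alpha[:-1, :, None] * trans * (lik * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)      # two-slice marginals
    return gamma, xi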
2) Stochastic Maximization Step
After performing the E-step and computing the smoothed posteriors, we are able to evaluate the lower bound and maximize it with respect to θ given (q(Z^n), ĥ). By plugging Equations (3) and (5) into (4), leveraging conditional independence, and disregarding terms independent of θ, we arrive at the expected complete log-probability function Q(θ, γ, ξ, ĥ). The function Q is non-convex in ω when a nonlinear logit link function f(·, ω) is chosen as an embedding for the transition probability χ (see Equation (1)). In that case, stochastic optimization is recommended [79], as batched noisy gradient estimates allow the algorithm to escape shallow local minima and reduce the computational cost that comes with evaluating the gradients for all data instances.
Algorithm 1: Expectation-Maximization for System Identification and Behavioral Cloning
Consequently, when implementing the M-step, we apply stochastic optimization to the transition parameters ω, using a stochastic gradient ascent direction with an adaptive learning rate ε and batch size M [79]. For the parameters with conjugate priors, we derive closed-form optimality conditions. Effectively, we derive the posterior distribution via Bayes' rule and take the mode of each posterior density for a MAP estimate update.
By considering only the relevant terms, we obtain the MAP estimate of the initial gating parameter φ, while the estimates of the initial state parameters (µ_k, Ω_k) can be decoupled for each k. Analogously, we obtain the MAP estimates of the dynamics parameters and, finally, to learn closed-loop behavior, we can infer the controller parameters. Due to space constraints, we refrain from stating the explicit solutions of these optimization problems. Instead, we provide a general outline of how to compute these posteriors and their modes, based on the unified notation for exponential family distributions, in Appendices A and B.
3) Approximate Empirical Bayes
Inference techniques that leverage data-independent assumptions run the risk of prior mis-specification. In our MAP approach, the priors are weakly informative and carry little information. Their main purpose is to regularize greedy updates that might lead to premature convergence. However, when there is little data, the priors, especially those on the precision matrices, may dominate the posterior probability, leading to over-regularization and under-fitting of the objective. Empirical Bayes approaches remedy this issue by integrating out the parameters θ and optimizing the marginal likelihood with respect to the hyperparameters h [77]. In our setting, marginalizing all hidden quantities does not admit a closed-form formula. An approximate approach to empirical Bayes is therefore to interleave the E- and M-steps with hyperparameter updates that follow the gradient of the lower bound Q with respect to h, given an estimate of the parameters θ̂ and a step size ϱ.
VII. Reinforcement Learning via Hybrid Models
The previous sections focused on the system modeling aspect and on how to use hybrid surrogate models to approximate nonlinear dynamics. We now turn our attention to the problem of using these models to synthesize structured controllers for general nonlinear dynamical systems. One possible approach is to take the learned hybrid models and apply the classical hybrid control methods reviewed in Section II. However, as discussed earlier, these methods suffer from several drawbacks. On the one hand, they rely on a polyhedral partitioning of the space. This limitation is severe because it often leads to an explosion in the number of partitions.
On the other hand, these methods are often focused on computationally expensive trajectory-centric model predictive control. This class of controllers is disadvantageous in applications that require fast reactive feedback signals with broad coverage over the state-action space.
In this section, we address these points and present an infinite horizon stochastic optimization technique that incorporates the structure of hybrid models. This approach can deal with rARHMMs with arbitrary non-polyhedral partitioning and synthesizes stationary piecewise polynomial controllers. Our algorithm extends the step-based formulation of relative entropy policy search (REPS) [80]- [82] by explicitly accounting for the discrete-continuous mixture state variables (x, z). Our approach, hybrid REPS (Hb-REPS), leverages the state-action-dependent nonlinear switches p(z'|z, x, u) as a task-independent upper-level coordinator to a mixture of K lower-level policies π(u|x, z). While the proposed approach shares many features with [62], our formulation relies on a state-abstraction representation of hybrid models and embeds the hierarchical model structure into the optimization problem in order to learn a hierarchy over the global value function. In contrast, [62] operates in the framework of semi-Markov decision processes and optimizes a mixture over termination and feedback policies without considering the existence of a hierarchical structure in the space of dynamics and value functions. For more details on differences between state-and time-abstractions, refer to Section II.
A. Infinite-Horizon Stochastic Optimal Control
In the REPS framework, an optimal control problem is cast as an iterative trust-region optimization of a discounted average-reward objective under a stationary state-action distribution π(u|x, z)µ(x, z), Equation (6a). The trust region is formulated as a KL divergence constraint [78], Equation (6c). Its purpose is to regularize the search direction and limit information loss between iterations. The REPS formulation explicitly incorporates a dynamics consistency constraint, Equation (6b), that describes the evolution of the stochastic state of the system. The optimization problem in Equations (6a)-(6d) is solved during a single iteration of hybrid REPS, where µ(x, z) is the stationary mixture distribution, q(x, u, z) is the trust-region reference distribution, and the constraint in Equation (6d) guarantees the normalization of the state-action distribution. The factor 1 − ϑ, ϑ ∈ [0, 1), represents the probability of an infinite process resetting to an initial distribution µ_1(x, z). The notion of resetting is necessary to ensure ergodicity of the closed-loop Markov process and allows the interpretation of ϑ as a discount factor and a regularization of the MDP [82], [83].
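The display of Equations (6a)-(6d) is not reproduced in this copy; a plausible reconstruction from the constraint descriptions above (with ε denoting the trust-region bound, a symbol not stated explicitly in the surviving text) is

\max_{\mu, \pi} \;\; \sum_{z} \iint \mu(x, z)\, \pi(u \mid x, z)\, r(x, u)\, \mathrm{d}x\, \mathrm{d}u \quad \text{(6a)}
\text{s.t.} \;\; \mu(x', z') = (1 - \vartheta)\, \mu_1(x', z') + \vartheta \sum_{z} \iint p(x', z' \mid x, u, z)\, \pi(u \mid x, z)\, \mu(x, z)\, \mathrm{d}x\, \mathrm{d}u, \quad \text{(6b)}
\mathrm{KL}\big(\mu(x, z)\, \pi(u \mid x, z) \,\big\|\, q(x, u, z)\big) \le \varepsilon, \quad \text{(6c)}
\sum_{z} \iint \mu(x, z)\, \pi(u \mid x, z)\, \mathrm{d}x\, \mathrm{d}u = 1. \quad \text{(6d)}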
B. Optimality Conditions and Dual Optimization
To solve the trust-region optimization in Equations (6a)-(6d), we start by constructing the Lagrangian of the primal [84], where we use p(x, u, z) = µ(x, z)π(u|x, z) for convenience and leverage two further identities; the second implies that the resetting depends only on the parameter ϑ and is independent of the states and actions (x, u, z), so as to satisfy the ergodicity property. The parameters η and λ are the Lagrangian variables associated with Equations (6c) and (6d), while V(x, z) is the state-value function, which appears naturally in REPS as the Lagrangian function associated with Equation (6b). Next, we take the partial derivative of L with respect to p(x, u, z) and set it to zero to obtain the optimal point in Equation (7), where A(x, u, z, V) denotes the advantage function. The optimal point p*(x, u, z) = µ*(x, z)π*(u|x, z) has to satisfy the constraint in Equation (6d), which in turn enables us to find the Lagrangian variable λ*. By substituting λ* back into p*(x, u, z) in Equation (7), we retrieve the normalized softmax form of the density. Now, by plugging the solutions p* and λ* back into the Lagrangian, we arrive at the dual function G as a function of the remaining Lagrangian variables η and V, where q(x, u, z) = q(x, u)q(z|x, u) and q(z|x, u) is the posterior over z given x and u. In Section VI, we derived a forward-backward algorithm for inferring this density, allowing us to compute the expectation over z. The expectations over x and u are analytically intractable. Therefore, we approximate them given samples from the reference distribution q(x, u). The multipliers η and V are then obtained by numerically minimizing the dual G(η, V), which acts as an upper bound on the primal objective.
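Once the expectations are replaced by sample averages, the dual is a smooth unconstrained function of (η, τ). The following sketch assumes samples (x_i, u_i) drawn from q, posterior weights qz[i, k] = q(z = k | x_i, u_i) from the forward-backward pass, per-regime polynomial features feats[i, k, :] of the current state, exp_next_feats[i, k, :] for the expected next-state features under the hybrid dynamics, and an initial-state feature array feats0; the exact form of the advantage and of the reset term is a simplification, not a verbatim transcription of the derivation above.

import numpy as np
from scipy.optimize import minimize

def make_dual(rewards, feats, exp_next_feats, feats0, qz, eps, vartheta):
    # rewards: (N,) sampled rewards r(x_i, u_i)
    # feats, exp_next_feats: (N, K, F) per-regime polynomial features
    # feats0: (K, F) expected initial-state features, already weighted by the initial regime probabilities
    # qz: (N, K) posterior over the regime given (x_i, u_i)
    N, K, F = feats.shape

    def dual(params):
        eta = max(params[0], 1e-6)
        tau = params[1:].reshape(K, F)
        v = np.einsum('nkf,kf->nk', feats, tau)                 # V(x_i, z = k)
        v_next = np.einsum('nkf,kf->nk', exp_next_feats, tau)   # E[V(x', z') | x_i, u_i]
        v0 = np.sum(feats0 * tau)                                # initial-state value term
        adv = rewards[:, None] + vartheta * v_next - v           # sampled advantage estimate
        w = (qz * np.exp(adv / eta)).sum(axis=1)                 # expectation over z
        return eta * eps + (1.0 - vartheta) * v0 + eta * np.log(w.mean())

    return dual

# usage sketch:
# dual = make_dual(rewards, feats, exp_next_feats, feats0, qz, eps=0.1, vartheta=0.98)
# x0 = np.concatenate([[1.0], np.zeros(K * F)])
# res = minimize(dual, x0, method='L-BFGS-B')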
C. Modeling Dynamics and State-Value Function
Up to this point, the derivation of Hb-REPS has been generic.
We have made no assumptions on the initial distribution µ_1(x, z), the dynamics p(x', z'|x, u, z), or the value function V(x, z). Now, we introduce the piecewise affine-Gaussian dynamics and logistic switching described in Section V and assume these representations to be available in parametric form as the result of a separate learning process. Furthermore, we model the state-value function with a piecewise n-th degree polynomial, V(x, z) = τ_z^T φ(x), where φ(x) is the state-feature vector containing polynomial features of the state x, and τ_z is the parameter vector assigned to the different regions.
Under these assumptions, we can use the available joint density µ_1(x, z) and the dynamics p(x'|x, u, z) to compute the necessary expectations in Equation (8). This computation allows our approach to capture the stochasticity of the dynamics and delivers an estimate of the advantage function A(x, u, z, V) instead of the temporal difference (TD) error used in the general REPS framework [80]. Ultimately, this leads to better estimates of the expected discounted future returns captured by V.
Practically, these integrals can be either naively approximated by applying Monte Carlo integration [85] or, more efficiently, by recognizing the structure of the integrand V (x', z') and using Gauss-Hermite cubature rules for exact integration over polynomial functions [86].
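For a Gaussian next-state distribution and a polynomial value function, this expectation can be evaluated exactly with Gauss-Hermite quadrature. A small sketch for a univariate state follows (the multivariate case would use a cubature product rule or a change of variables with the Cholesky factor of the noise covariance; the coefficient vector tau below is purely illustrative):

import numpy as np

def expected_poly_value(mean, std, tau):
    # E[V(x')] with V(x') = sum_d tau[d] * x'**d (coefficients low-to-high)
    # and x' ~ N(mean, std**2). Gauss-Hermite with len(tau) nodes is exact
    # for a polynomial of this degree.
    nodes, weights = np.polynomial.hermite_e.hermegauss(len(tau))
    xs = mean + std * nodes
    vals = np.polyval(tau[::-1], xs)
    return (weights @ vals) / np.sqrt(2.0 * np.pi)

# example: third-order value function piece evaluated under a propagated Gaussian state
tau = np.array([0.1, -0.3, 0.05, 0.02])
print(expected_poly_value(mean=0.4, std=0.2, tau=tau))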
D. Maximum-A-Posteriori Policy Improvement
A significant advantage of our model-based reinforcement learning approach becomes evident when considering the policy improvement step in the REPS framework. The policy update is incorporated into the optimality condition of the stationary state-action distribution p(x, u, z) = π(u|x, z)µ(x, z) in Equation (7). As a consequence, updating the mixture policies π(u|x, z) requires the computation of state probabilities µ(x, z), which in turn require knowledge of the dynamics model. This issue is circumvented in other model-free realizations of REPS by introducing a crude approximation to enable a model-free policy update nonetheless. For example, in [87], the authors postulate that the new state distribution µ(x, z) is usually close enough to the old distribution q(x, z), thus allowing the ratio q(x, z)/µ(x, z) to be ignored when a weighted maximum-likelihood fit of the actions u is performed to update π.
While the assumption of closeness may be practical and empowers many successful variants of REPS, it is crucial to be aware of its technical ramifications, as it undermines the primary motivation of a relative entropy bound on the state-action distribution in Equation (6c). This aspect is unique to the REPS framework when compared to other state-of-the-art approximate policy iteration algorithms [7], [9], [88], which optimize a similar objective, albeit with a relaxed bound that only limits the change of the action distribution π.
In contrast, our algorithm uses the surrogate hybrid dynamics and updates the policy π(u|x, z) with the correct weighting. The optimality condition in Equation (7) is satisfied by computing a weighted maximum a posteriori estimate of the parameters θ of the state-action distribution p(x, u, z|θ), thus implicitly updating π(u|x, z). This procedure is equivalent to a modified Baum-Welch expectation-maximization algorithm that learns the parameters of a closed-loop rARHMM, as derived in Section VI. The difference is that the EM objective in Equation (2) has to be augmented with the importance weights from Equation (7), arg max_θ log ∏_{n=1}^{N} Σ_{Z^n} w^n p(X^n, U^n, Z^n|θ) p(θ), where (X^n, U^n) are state-action trajectories collected via interaction with the environment and w^n are the associated weights resulting from Equation (7), w^n = exp( A(X^n, U^n, Z^n, V)/η ). This augmentation leads to weighted M- and EB-steps, while the E-step is not altered.
Note that during the policy improvement step, we can either assume an a priori estimate of the open-loop dynamics p(x', z'|x, u, z) and only update the control parameters corresponding to the conditional π(u|x, z), or we can iteratively update p(x', z'|x, u, z) as more data becomes available. A compact sketch of the overall optimization process is available in Algorithm 2.
VIII. Empirical Evaluation
In this section, we benchmark different aspects of our approach to system modeling and control synthesis via hybrid models. In the following:
• we assess the predictive performance of rARHMMs at open-loop system identification of nonlinear systems and validate our choice of hybrid surrogate models as a suitable representation;
• we test the ability of rARHMMs to approximate and decompose expert nonlinear controllers in a closed-loop behavioral cloning scenario;
• we deploy rARHMMs in the proposed hierarchical RL algorithm Hb-REPS to solve the infinite horizon stochastic control objective and optimize piecewise polynomial controllers and value functions.
A. Piecewise Open-Loop System Identification
We start by empirically benchmarking the open-loop learned rARHMMs and their ability to approximate nonlinear dynamics. We compare to popular black-box models in a long-horizon and limited-data setting. This evaluation focuses on rARHMMs with exogenous inputs. We learn the dynamics of three simulated deterministic systems: a bouncing ball, an actuation-constrained pendulum, and a cart-pole system. We compare the predictive time forecasting accuracy of rARHMMs to classical non-recurrent autoregressive hidden Markov models (ARHMMs) [16], feed-forward neural nets (FNNs), Gaussian processes (GPs), long short-term memory networks (LSTMs) [89], and recurrent neural networks (RNNs). During the evaluation, we collected separate training and test datasets. The training dataset is randomly split into 24 groups, each used to train different instances of all models. These instances are then tested on the test dataset. During evaluation, we sweep the test trajectories stepwise and predict over the given horizon. All neural models have two hidden layers, which we test for different layer sizes: S ∈ {16, 32, 64, 128, 256, 512} for FNNs, S ∈ {16, 32, 64, 128, 256} for RNNs, and S ∈ {16, 32, 64, 128} for LSTMs. In the case of (r)ARHMMs, we vary the number of components K, depending on the task. As a metric, we evaluate the forecast normalized mean squared error (NMSE) for a range of horizons, averaged over the 24 data splits, and report the result corresponding to the best choice of S and K. Finally, in Table 1, we qualitatively compare the complexity of all representations in terms of their total number of parameters.
TABLE 1: The values reflect the total number of parameters of each model. The values in parentheses represent the hidden layer sizes S of the neural models and the number of discrete components K for the (r)ARHMMs, respectively.
1) Bouncing Ball
This example is a canonical instance of a dual-regime hybrid system due to the hard velocity switch at the moment of impact. We simulate the dynamics at a frequency of 20 Hz and collect 25 training trajectories with different initial heights and velocities, each 30 s long. This dataset is split into 24 folds with ten trajectories, 10 × 150 data points, in each subset. The test dataset consists of 5 trajectories, each 30 s long. We evaluate the NMSE for horizons h = {1, 20, 40, 60, 80} time steps. We did not evaluate a GP model in this setting due to the long prediction horizons, which led to a very high computational burden. The (r)ARHMMs are tested for K = 2. The logit link function of the rARHMM is parameterized by a neural net with one hidden layer containing 16 neurons. The results in Figure 3 show that the rARHMM approximates the dynamics well and outperforms both the ARHMM and the neural models.
2) Pendulum and Cart-Pole
These systems are classical benchmarks from the nonlinear control literature. Here we consider two different observation types, one in the wrapped polar space, where the angle space θ ∈ [−π, π] includes a sharp discontinuity, and a second model with smooth observations parameterized with the Cartesian trigonometric features {cos(θ), sin(θ)}. Both dynamics are simulated with a frequency of 100 Hz. We collect 25 training trajectories starting from different initial conditions and apply random uniform explorative actions. Each trajectory is 2.5 s long. The 24 splits consist of 10 trajectories each, 10 × 250 data points. The test dataset consists of 5 trajectories, each 2.5 s long. Forecasting accuracy is evaluated for horizons h = {1, 5, 10, 15, 20, 25}. The (r)ARHMMs are tested for K = {3, 5, 7, 9} on both tasks. The logit link function of the rARHMM is parameterized by a neural net with one hidden layer containing 24 neurons. As shown in Figure 3, the forecast evaluation provides empirical evidence for the representation power of rARHMMs in both smooth and discontinuous state spaces. FNNs and GPs perform well in the smooth Cartesian observation space and struggle in the discontinuous space, similar to RNNs and LSTMs. Moreover, in Table 1, it is clear that rARHMMs reach comparable predictive performance to state-of-the-art models with a fraction of the parametric complexity.
B. Piecewise Closed-Loop Behavioral Cloning
We want to analyze the closed-loop rARHMM with endogenous inputs as a behavioral cloning framework. The task is to reproduce the closed-loop behavior of expert policies on challenging nonlinear systems. For this purpose, we train two feedback experts on the pendulum and cart-pole. The two environments are simulated at 50 Hz and are influenced by static Gaussian noise with a standard deviation σ = 1 × 10^-2. The experts are two-layer neural network policies with 4545 parameters (pendulum) and 17537 parameters (cart-pole), optimized with the soft actor-critic (SAC) algorithm [9].
For cloning, we construct two 5-regime rARHMMs with piecewise polynomial policies of the third order. The hybrid controllers have a total number of parameters of 100 (pendulum) and 280 (cart-pole). Learning is realized on a dataset of 25 expert trajectories, each 5 s long, for each environment and using the EM technique from Section VI. The decomposed controllers complete the task of swinging up and stabilizing both systems with over 95% success rate. Figure 4 shows the phase portraits of the unforced dynamics and closed-loop control identified during cloning. Figure 5 depicts sampled trajectories of the hybrid policies highlighting the switching behavior.
C. Nonlinear Control Synthesis via Hybrid Models
Finally, we evaluate the performance of the hybrid policy search algorithm Hb-REPS on two nonlinear stochastic dynamical systems: an actuation-constrained pendulum swingup and a cart-pole stabilization task. We make no claim to the absolute sample efficiency of our RL approach when compared to state-of-the-art RL algorithms. Instead, we aim to provide empirical support for the premise that structured representations that rely on compact piecewise parametric forms can provide an alternative to black-box function approximators with comparable overall performance. Therefore, we compare the performance of Hb-REPS to two baselines. The first is a vanilla version of REPS that does not maintain any hierarchical structure and uses nonlinear function approximators with random Fourier features (RFFs) [90] to represent both policy and value function. The second baseline assumes a hierarchical policy structure and a nonlinear value function with Fourier features. This baseline is somewhat akin to what is implemented in [62], albeit with a hierarchy based on state abstraction rather than time. We will refer to this algorithm as hierarchical REPS (Hi-REPS). We assume an offline learning phase in which the hybrid models are learned from pre-collected data.
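The random-Fourier-feature parameterization used by the REPS baseline can be sketched as follows. The RBF-kernel assumption, the lengthscale, and the feature count (chosen here to match the 75 basis functions quoted below for the pendulum value function) are our own illustrative choices, not specifications taken from the paper.

```python
import numpy as np

def make_rff(dim, n_features, lengthscale=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel: the state is
    projected onto random frequencies and passed through a cosine."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    def features(x):
        return np.sqrt(2.0 / n_features) * np.cos(omega @ x + phase)
    return features

# A linear value function (or policy mean) is then built on top of the features.
phi = make_rff(dim=3, n_features=75)
theta = np.zeros(75)                 # weights that the RL algorithm would fit
value = lambda x: theta @ phi(x)
print(value(np.array([0.1, -0.2, 0.05])))
```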
1) Pendulum Swing-up
In this experiment, the actuation-constrained pendulum is simulated at 50 Hz and perturbed by Gaussian noise with a standard deviation σ = 1 × 10 −2 . The REPS agent relies on a policy and value function with 50 and 75 Fourier basis functions, respectively. Hi-REPS assumes a similar form of the value function but with a piecewise third-order polynomial policy over five partitions. Hb-REPS represents both policy and value function with piecewise third-order polynomials over five partitions. Empirical results in Figure 7 (left half) feature comparable learning performance of all algorithms over ten random seeds. Every iteration involves 5000 interactions with the environment. We provide a phase portrait of the closed-loop behavior for a qualitative assessment of the final stationary hybrid policy.
2) Cart-pole Stabilization
This evaluation features a cart-pole constrained by an elastic wall modeled by a spring. The dynamics are linearized around the upright position. The environment is simulated at 100 Hz and perturbed by Gaussian noise with a standard deviation σ = 1 × 10 −4 . The REPS policy and value function both use 25 random Fourier basis functions. Hi-REPS adopts the same value function structure with a two-partition piecewise affine policy. Hb-REPS also assumes a two-partition piecewise affine policy and a second-order value function. Figure 7 (right half) depicts comparable learning performance over ten random seeds. Every iteration involves 2500 interactions with the environment.
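To illustrate how the elastic wall induces the two-regime structure that the two-partition policies exploit, here is a toy piecewise affine simulation step. The matrices, wall position, and spring stiffness are made-up placeholders, not the paper's linearization or noise model.

```python
import numpy as np

# Placeholder linearized dynamics x_{t+1} = A x_t + B u_t (+ spring term),
# with state x = [cart position, cart velocity, pole angle, pole angular velocity].
A = np.eye(4) + 0.01 * np.diag(np.ones(3), k=1)     # toy discretization
B = 0.01 * np.array([[0.0], [1.0], [0.0], [1.0]])
WALL, K_SPRING = 0.5, 100.0                          # wall position, stiffness

def step(x, u):
    """Two-regime piecewise affine step: the elastic wall adds a restoring
    force only while the cart penetrates it."""
    x_next = A @ x + B @ np.atleast_1d(u)
    if x[0] > WALL:                                  # contact regime
        x_next[1] -= 0.01 * K_SPRING * (x[0] - WALL)
    return x_next

x = np.array([0.49, 1.0, 0.05, 0.0])
for _ in range(5):
    x = step(x, 0.0)
    print(np.round(x, 4))
```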
IX. Discussion
We presented a general framework for data-driven nonlinear system identification and stochastic control based on the structured representation of hybrid surrogate models. To introduce the hybrid structure, we proposed replacing commonly used piecewise affine auto-regressive models with probabilistic hybrid dynamic Bayesian networks, as they offer a range of advantages in data-driven scenarios. Furthermore, we presented a novel reinforcement learning algorithm that leverages the learned hybrid models to synthesize piecewise polynomial feedback controllers for nonlinear systems.
Our hybrid-model-infused reinforcement learning approach is able to reach comparable performance on control tasks with a significant reduction in the complexity of functional representation. Furthermore, in contrast to deterministic hybrid model predictive control, our approach solves the infinite-horizon stochastic optimal control problem by approximating the global value function and lifts the requirement for polyhedral partitioning.
While initial empirical results are encouraging, the application of this work is limited to low-dimensional dynamical systems. Although a viable alternative to expensive mixed-integer optimization, the inference techniques used in this paper still present a bottleneck in the face of scalability to higher dimensions. While our MAP approach significantly improves the quality of expectation-maximization solutions, it nevertheless struggles in more challenging environments.
A possible course of action is to investigate Bayesian nonparametric extensions of hybrid dynamic Bayesian networks based on non-conjugate variational inference. Fully Bayesian methods tend to improve learning in large structured models significantly. Another potential avenue of research is to improve the hybrid reinforcement learning framework by considering the control-as-inference paradigm. Such approaches may offer ways of integrating the Bayesian structure of the models into the control optimization and constructing an uncertainty-aware approach that is better equipped to deal with the exploration-exploitation dilemma.
Appendix A: Exponential Family
Our work focuses on random variables with probability density functions belonging to the exponential family. The unified minimal parameterization of this class of distributions lends itself to convenient and efficient posterior computation when paired with conjugate priors.
We assume the natural form for the probability density of a random variable x, f(x | η) = h(x) exp( η · t(x) − a(η) ), where h(x) is the base measure, η are the natural parameters, t(x) are the sufficient statistics and a(η) is the log-partition function, or log-normalizer. Following the same notation, a conjugate prior g(η|λ) to the likelihood f(x|η) has the form g(η | λ) = h(η) exp( λ · t(η) − a(λ) ), with prior sufficient statistics t(η) = [η, −a(η)]^⊤ and hyperparameters λ = [α, β]^⊤. By applying Bayes' rule, we can directly infer the posterior q(η|x), q(η | x) ∝ f(x | η) g(η | λ) ∝ exp( ρ(x, λ) · t(η) − a(ρ) ), where the posterior natural parameters ρ(x, λ) are a function of the likelihood sufficient statistics t(x) and prior hyperparameters λ. The structure of the resulting posterior reveals a simple recipe for data-driven inference. By moving into the natural space, the posterior parameters are computed by combining the prior hyperparameters with the likelihood sufficient statistics and log-partition function. By definition, every exponential family distribution has a minimal natural parameterization that leads to a unique decomposition of these quantities [91].
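A minimal numerical sketch of this recipe, instantiated for a Bernoulli likelihood whose sufficient statistic is t(x) = x, is given below. The function name is our own, and the hyperparameters are expressed in the natural form λ = [α, β] used above, which differs from the usual Beta(a, b) shape parameterization.

```python
import numpy as np

def conjugate_update(alpha, beta, data):
    """Generic natural-parameter update for a Bernoulli likelihood:
    add [sum of t(x_n), N] to the prior hyperparameters lambda = [alpha, beta],
    since each observation contributes t(x_n) = x_n and one log-partition term."""
    data = np.asarray(data, dtype=float)
    rho = np.array([alpha + data.sum(),    # alpha + sum_n t(x_n)
                    beta + len(data)])     # beta  + N
    return rho

# Posterior natural parameters after observing 7 successes out of 10 trials.
print(conjugate_update(1.0, 2.0, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))   # -> [ 8. 12.]
```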
Appendix B: Conjugate Posteriors
We present an outline of all M-step updates. We use an adapted form of the exponential natural parameterization, as it offers a clear methodology for deriving and implementing such updates for all relevant distributions.
A. Categorical with Dirichlet Prior
A weighted categorical likelihood over a one-hot random variable z with size K has the form p(Z | φ) = ∏_{n=1}^{N} ∏_{k=1}^{K} φ_k^{w_{nk}}, where w_{nk} are the importance weights for each category k. The conjugate prior is a Dirichlet distribution p(φ) = Dir(φ | τ_0). The posterior q(φ) is likewise a Dirichlet distribution, q(φ) = Dir(φ | τ). The maximization step requires computing the mode categorical weights. For a Dirichlet distribution the mode weights are φ̂_k = (τ_k − 1) / (∑_{k=1}^{K} τ_k − K), with τ_k > 1. The parameter vector τ is given by τ_k = τ_{0,k} + ∑_{n=1}^{N} w_{n,k}, ∀k ∈ [1, K].
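The corresponding M-step can be written in a few lines; the function name and the toy responsibilities are our own illustrative choices, while the update itself follows the formulas above.

```python
import numpy as np

def dirichlet_mode_weights(tau0, responsibilities):
    """M-step for a categorical with Dirichlet prior: add the expected
    (weighted) counts to the prior pseudo-counts and take the Dirichlet mode.
    responsibilities: array of shape (N, K) holding the weights w_{nk}."""
    tau = tau0 + responsibilities.sum(axis=0)          # tau_k = tau_{0,k} + sum_n w_{nk}
    assert np.all(tau > 1.0), "the Dirichlet mode requires tau_k > 1"
    return (tau - 1.0) / (tau.sum() - len(tau))        # (tau_k - 1) / (sum_k tau_k - K)

rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(3), size=100)                # toy E-step responsibilities
print(dirichlet_mode_weights(np.full(3, 2.0), w))
```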
B. Linear-Gaussian with Matrix-Normal-Wishart Prior
A weighted linear-Gaussian likelihood takes a random variable x ∈ R^d and returns a random variable y ∈ R^m according to a linear mapping A : R^d → R^m, p(Y | X, A, V) = ∏_{n=1}^{N} N(y_n | x_n, A, V)^{w_n}, where w_n are the weights and W = diag(w_n) is the diagonal weight matrix. The data matrices X and Y are of size d × N and m × N, respectively. The conjugate prior p(A, V) is a matrix-normal-Wishart with zero mean, p(A, V) = N(A | 0, V, K_0) W(V | Ψ_0, ν_0). The posterior q(A, V) is likewise matrix-normal-Wishart. The mode mapping and precision of a matrix-normal-Wishart are Â = M and V̂ = (ν − m)Ψ, respectively. The standard posterior parameters are | 2021-11-12T02:15:42.429Z | 2021-11-11T00:00:00.000 | {
"year": 2021,
"sha1": "bb8cd1067577533c0b32bd21e57a8f7fb0d0e656",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1109/ojcsys.2023.3277308",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "bb8cd1067577533c0b32bd21e57a8f7fb0d0e656",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
10954872 | pes2o/s2orc | v3-fos-license | From amino acids polymers, antimicrobial peptides, and histones, to their possible role in the pathogenesis of septic shock: a historical perspective
This paper describes the evolution of our understanding of the biological role played by synthetic and natural antimicrobial cationic peptides and by the highly basic nuclear histones as modulators of infection, postinfectious sequelae, trauma, and coagulation phenomena. The authors discuss the effects of the synthetic polymers of basic poly α amino acids, poly l-lysine, and poly l-arginine on blood coagulation, fibrinolysis, bacterial killing, and blood vessels; the properties of natural and synthetic antimicrobial cationic peptides as potential replacements or adjuncts to antibiotics; polycations as opsonizing agents promoting endocytosis/phagocytosis; polycations and muramidases as activators of autolytic wall enzymes in bacteria, causing bacteriolysis and tissue damage; and polycations and nuclear histones as potential virulence factors and as markers of sepsis, septic shock, disseminated intravascular coagulopathy, acute lung injury, pancreatitis, trauma, and additional clinical disorders.
Introduction
The pioneering work of Anton van Leeuwenhoek, Edward Jenner, Ignaz Semmelweis, Louis Pasteur, Robert Koch, Elie Metchnikoff, and many others linked microbes with infectious diseases and helped establish the germ theory of disease. 1 Robert Koch's postulates and Metchnikoff's phagocytosis theory described various functions of macrophages and their ability to kill microorganisms. This formed the basis for numerous studies on the biochemical properties and role in infection and immunity of phagocytic cells such as neutrophils and macrophages. These phagocytic cells are rich in antimicrobial peptides, lysosomal hydrolases, and oxidants. These early theories also led to the discovery of proinflammatory cytokines and their role in microbial infections. However, the early discovery of lysozyme in 1923, and the further discovery of penicillin by Fleming, Florey, and Chain in 1928, were major steps in attempting to control severe microbial infections that may result in septic shock and organ failure; these conditions have a very high mortality even today. The recent concern regarding acquisition of antibiotic resistance by the microbe and the ongoing risk to life of the postinfectious sequelae prompted a very intense search for alternative antimicrobial cationic peptides to hopefully cope with severe microbial infections and their aftermath.
The basic concepts of microbial virulence were skillfully reviewed by Casadevall and Pirofski. 2 They stated that "while the importance of a host's susceptibility for Dovepress Dovepress 8 Ginsburg et al a microbe's virulence was often recognized, the existing definitions did not account for the contributions of both pathogen and host." As we know today, sepsis is indeed the host response (organ dysfunction) to invasion by pathogenic microorganisms (infection), while septic shock is the more extreme form of this host reaction where vasodilation and translocation of fluid from the vascular space to the interstitial space causes hypotension and other cellular and metabolic abnormalities. Casadevall and Pirofski 2 reviewed historical concepts of microbial pathogenicity and virulence, proposed new definitions, and suggested a classification system for microbial pathogens based on their ability to cause damage as a function of the host's immune response. One typical example is septic shock (see section on "Can histone released from neutrophils" section).
This review will address the nature and role of histones as well as other modulators of infection and the host response to infection. Histones may be defined as highly basic proteins around which DNA coils to form chromatin. Their role in infection is described further.
The effect of the synthetic polymers of basic poly α amino acids poly l-lysine and poly l-arginine on blood coagulation, fibrinolysis, bacterial killing, and blood vessels
Even as early as 1952-1956, teams of investigators at the Weizmann Institute of Science in Rehovot, Israel, headed by Ephraim Katchalski, were the first to synthesize and investigate the role of the linear polymers of basic amino acids such as poly l-lysine, poly l-arginine, and poly l-ornithine in the retardation of blood coagulation and fibrinolysis, killing of bacteria and mammalian cells, promotion of phagocytosis, and toxicity to blood vessels. [3][4][5][6][7][8][9] It is of note that synthetic cationic polymers are actually histone mimics, sharing a high cationic charge capable of interaction with anionic agents. The researchers at the Weizmann Institute also successfully explored the use of such poly amino acids as protein models and studied many of their physical, chemical, and biological properties. In all cases, the effects of the cationic polymers were abrogated by poly anions such as poly l-aspartic acid, poly l-glutamic acid, and the highly sulfated compound heparin. [3][4][5][6][7][8][9] Unfortunately, these pioneering "ancient" studies are hardly ever cited in the modern literature and may be lost to the clinicians forever.
The retardation of clot lysis by basic poly cations was reconfirmed in 2015. 10,11 In sepsis, both coagulation (activation of the coagulation cascade) as well as fibrinolysis are enhanced. Therapeutic strategies (eg, activated protein C) have previously been directed at reducing clot formation in the microcirculation to reduce organ dysfunction in septic patients. The authors showed that histone mimics (poly l-lysine and poly l-arginine) and neutrophil extracellular traps exerted antifibrinolytic effects in a plasma environment and that the combination of histones and DNA also significantly prolonged clot lysis by forming thicker fibers accompanied by improved stability and rigidity. [10][11][12]
The properties of natural and synthetic antimicrobial cationic peptides designed to replace antibiotics
In 1956, and later on in 1958 and in 1960, poly l-lysine was shown to possess potent bactericidal effects against a variety of microorganisms and also against certain viruses, all abrogated by poly-anions such as heparin, poly-glutamic, and poly-aspartic acids. 9,13,14 However, being toxic to mammalian cells, their clinical use should be considered with caution. 15 Antimicrobial peptides (AMP) are mainly small peptides (12-50 amino acids) containing a positive charge and an amphipathic structure. The AMPs, which are rich in proline, tryptophan, arginine, lysine, or histidine, are actually mimics of nuclear cationic histones (see section on "Can histone released from neutrophils") and are able to interact with negatively charged microbial and mammalian membranes to disrupt the bilayer curvature, beyond a threshold concentration of membrane-bound peptide. In bacteria, AMPs rapidly interact with surface lipopolysaccharide (LPS) of Gram-negative organisms and with the membrane-associated lipoteichoic acid (LTA) in Gram-positive organisms, and they also demonstrate toxicity to a variety of mammalian cells. AMPs may also induce bacteriolysis.
Since 1956, an overabundance of publications focused on the chemistry, physics, biology, and bactericidal effects of a large variety of linear and nonlinear cationic AMPs. [16][17][18][19][20][21][22] These cationic agents may be considered evolutionarily ancient weapons against microbial infections. They may also play a pivotal role in innate immunity and as agents for specific uses because of their natural antimicrobial properties and a low propensity for the development of bacterial resistance. Hopefully, one day, AMPs may provide an alternative to conventional antibiotics (discussed later).
Readers who require further background on this topic would be well-served spending time reading the works of and paying tribute to the pioneers: M Zasloff, R Hancock, K Brogden, Y Shai, T Ganz, A Peschel, P Elsbach, Robert I
Polycations as opsonizing agents promoting endocytosis/phagocytosis
In general, polycationic agents can interact via electrostatic forces with negatively charged sites, mainly on the surfaces of microbial and mammalian cells. Such interactions may perturb the membrane, induce cell agglutination, and also cause permeability changes that may lead to cell lysis. 23 The attachment of cationic agents to surfaces of negatively charged particles is called opsonization and is similar to the effect of antibodies. Both facilitate the internalization (phagocytosis) of cationic particles by the professional phagocytic cells, neutrophils (PMNs), and macrophages, but surprisingly, also by nonprofessional phagocytes and by certain tumor cells. Although we will not fully cover the topic, it is worth mentioning that a variety of positively charged agents have also been shown to act as transfecting agents and as agents promoting the delivery of drugs as conjugates and as "decorators" of drug-loaded carriers. 24 It is important to note that nonspecific plasma globulins and IgGs are positively charged macromolecules. However, nonspecific cationic globulins, which can bind to cell surfaces by electrostatic forces, might interfere with the binding of specific antibodies. Indeed, a thermostable cytotoxic factor, globulin, in normal human plasma inhibited the action of heterologous antibodies on HeLa cells. 25 In 1986, it was shown that Entamoeba histolytica and Acanthamoeba palestinensis, two distinct classical phagocytic cells (possibly evolutional forefathers of neutrophils and macrophages), which stubbornly refused to internalize/engulf Candida albicans, nevertheless did so very avidly if precoated by arginine-rich polycations. 26 These studies resulted in later experiments that showed phagocytosis-endocytosis of Candida albicans and of Group A Streptococci by mouse fibroblasts and by epithelial cells in culture. 27 In this study, the most potent opsonins for Group A Streptococci were specific antibodies supplemented with complement, nuclear histone, poly lysine, poly arginine, ribonuclease, leukocyte lysates, leukocyte cationic proteins, and, to a lesser extent, cationic lysozyme and myeloperoxidase.
Highly cationic histone, RNAse, leukocyte extracts, and platelet extracts also functioned as opsonins for phagocytosis of streptococci in the peritoneal cavity. 27 However, the phagocytic capabilities of mouse fibroblast poly karyons (cells with multiple nuclei) were much higher than those of ordinary spindle-shaped fibroblasts, probably due to their very large cytoplasmic area. Calf thymus histone also functioned as a good opsonic agent for the uptake of Candida by human fibroblasts, HeLa cells, epithelial cells, monkey kidney cells, and rat heart cells in culture. 27 Phagocytosis of Streptococci and Candida by macrophages and the uptake of Candida by fibroblasts were both strongly inhibited by the polyanions hyaluronic acid, DNA, and dextran sulfate. The paucity of nonprofessional phagocytes of hydrolases capable of breaking down microbial cell wall components may contribute to the persistence of nonbiodegradable components of bacteria in tissues and lead to the perpetuation of chronic inflammatory sequelae such as granulomatosis. 28 Two excellent, but concerning, examples of phagocytosis of microbes in vivo showed that Staphylococcus aureus, by forming microcolonies, could survive unharmed within skin keratinocytes, waiting for the opportunity to attack patients with low immunity. The mechanism of cell uptake was not disclosed. 29,30 It was also demonstrated that nonbiodegradable cell wall components could persist for long periods within macrophages in arthritic granulomas. Another example is the chronicity of lesions in tuberculosis. [31][32][33] Furthermore, macrophages and neutrophils loaded with opsonized streptococcal cell walls can be translocated to remote sites to induce chronic inflammation. [34][35][36]
Polycations and lysozyme as activators of autolytic wall enzymes (muramidases) in bacteria, causing bacteriolysis and tissue damage
The discovery of the bacteriolysis phenomenon dates back to 1893 32,37 when Buchner 37 reported that fresh serum was able to kill certain bacteria, an effect which was lost upon heating to 55°C. He attributed the bactericidal action of serum to a heat-labile constituent that he called "alexine" (from Greek "to ward off "). One year later, Pfeiffer described the dissolution of Vibrio cholera by fresh serum of guinea pigs immunized with heated vaccine, which could be correlated with protection against infection in both passively and actively immunized animals. 32 However, the significance of the biochemical degradation of microbes as related to tissue injury in inflammation, infection, and postinfectious sequelae has emerged mainly from a large series of investigations 32,38,39 that focused on: 1. The structure and function of the bacterial cell walls 2. The role of muramidases (autolytic wall enzymes) in normal bacterial multiplication 3. The role played by lysozyme, leukocyte-derived polycations, cationic enzymes, and antibiotics (mostly β-lactams) in bacteriolysis 4. The role of muramidase-deficient strains in pathology 5. Antibiotic resistance 6. Microbial killing and degradation: The role of the cell wall components: LPS, LTA, and peptidoglycan (PPG) in the activation of leukocytes and in the generation of oxidants, proteinases, and cytotoxic cytokines. 7. The role of microbial cell wall components in the pathogenesis of granulomatous inflammation and in the potentiation of innate immunity to infections and of tumor-cell proliferation Morphologically, two main patterns of bacterial cell degradation under various physiological and pathological conditions have been defined: 1. The term plasmolysis was proposed when a significant degradation of cytoplasmic constituents occurred, leaving apparently intact cell walls 2. The term bacteriolysis was proposed when a significant breakdown and degradation of the rigid cell walls, presumably due to the uncontrolled activation of autolytic wall enzymes (muramidases), occurred.
Bacteriolysis can be defined as an event that may occur when normal microbial multiplication is altered due to an uncontrolled activation of a series of autolytic cell-wall breaking enzymes (muramidases). It may happen following treatment of bacteria by β-lactam antibiotics or also by a large variety of bacteriolysis-inducing cationic peptides such as histones, elastase and cathepsin G, lysozyme, and PLA 2 . When bacteriolysis occurs in vivo, cell wall-and membrane-associated LPS (endotoxin) from Gram-negative organisms and LTA and PPG from Gram-positive organisms are released. These highly phlogistic agents can act on macrophages to induce the generation and release of reactive oxygen and nitrogen species, cytotoxic cytokines, hydrolases, proteinases, and also activate the coagulation and complement cascades. 40 Peptidoglycan hydrolysis can result in the rupture of the murein sacculus due to its high osmotic pressure, leading to the release of cytoplasmic constituents and cell wall fragments. 32 A possible explanation for the long persistence of highly phlogistic nonbiodegradable microbial cell wall remnants within professional phagocytic cells was offered in 1989. 41 It was proposed that following phagocytosis either by PMNs or by macrophages, the engulfed microorganisms are exposed intraphagosomally to the respiratory burst generating oxidants, LL-37, lysosomal cationic proteinases, and also numerous hydrolases, which inactivate the autolytic wall enzymes thus allowing the survival of highly phlogistic microbial cell wall component.
It was also shown that neutrophil-mediated myeloperoxidase, H 2 O 2 , and HOCl production inactivated a class of cytoplasmic membrane enzymes (penicillin-binding proteins [PBPs]) in Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. These PBPs covalently bind β-lactam antibiotics to their active sites. This contributed to the persistence of nondegraded microbial components, leading to unbalanced bacterial growth and cell death. 42,43 Degradation of Staphylococcus aureus by β-lactams was markedly inhibited by the polyanions suramin and Evans blue, suggesting that accumulation of polyanions and sulfated polysaccharides in inflammatory sites might also interfere with bacteriolysis. 44,45 Clindamycin treatment of Staphylococcus aureus caused a remarkable thickening of the bacterial cell wall due to increased numbers of O-acetyl groups in the murein, which made the bacterial wall much more resistant to lytic enzymes within bone marrow-derived macrophages, as revealed by electron microscopy and radiolabeling experiments. This reduced wall degradation might increase the survival of highly phlogistic walls in inflammatory sites. Furthermore, such clindamycin-treated bacteria were ingested by adherent bone marrow-derived macrophages at a higher rate than untreated bacteria. 46 The involvement of bacteriolysis in sepsis was also reported. 47
The lysozyme riddle: is this enzyme a genuine and effective bacteriolytic enzyme?
In 1922, Alexander Fleming discovered the enzyme lysozyme (N-acetylmuramide glycanhydrolase). 48 Lysozyme is a 139-amino acid cationic protein found in neutrophils, macrophages, saliva, mucous, egg white, milk, and additional body fluids. Patients with myeloid leukemia can be diagnosed by measuring lysozyme in urine by a simple method using suspensions of Micrococcus lysodeikticus as a highly sensitive substrate (Ginsburg, unpublished data). Lysozyme was anticipated to kill, lyse, and biodegrade pathogenic microorganisms.
Lysozyme can very rapidly (within 1-2 minutes) lyse certain nonpathogenic Gram-positive cocci (eg, Micrococcus lysodeikticus and also spore-bearing aerobic bacilli) possessing a "simple" PPG. However, lysis of Staphylococcus aureus, which possesses a more complex PPG, can take up to 6-12 hours. It is of clinical significance that lysozyme rarely induces lysis either of hemolytic Streptococci, Streptococcus viridans, Listeriae, Mycobacteria, or Candida species, and it lyses enteric bacteria and Staphylococcus aureus only slowly. It could, however, be partially lysed by a synergism between lysozyme, lysolecithin and phospholipase C. 32 Also, hemolytic streptococci cultivated in the presence of subinhibitory concentrations of penicillin lost their membrane-associated phospholipids to a large extent following treatment with small concentrations of lysolecithin and lysozyme. 32 The main reason for the relative resistance to lysozyme action of Staphylococcus aureus, and perhaps also of the majority of pathogenic microorganisms, may be ascribed to the presence in their peptidoglycans of O-acetyl groups, which hinder the interaction of lysozyme with the N-acetylglucosamine-N-acetyl muramic acid linkages in the PPG. 32,49,50 However, mild alkaline solutions rendered such cell walls digestible by egg-white lysozyme. 51 The lack of deacetylating enzymes in phagocytes may explain why apparently intact Staphylococcal, Streptococcal, and Mycobacterial cell walls may persist for long periods either within phagolysosomes of macrophages in culture or also in vivo. However, lysozyme does not seem to function as a muramidase but rather as a cationic peptide that activates the microbial autolytic wall enzymes in certain bacteria. 32 Usually, this process takes hours and is therefore missed when the bactericidal effects of AMPs are tested for very short periods and expressed as colony forming units. [49][50][51][52] Lysis of bacteria by antibiotics in vivo may also be involved in sepsis and septic shock (see section on "Can histone released from neutrophils" 47 ).
Synergistic effects among cationic peptides
It was also demonstrated that mammalian cationic peptides from different structural classes (eg, α-helical cationic peptides such as lactoferrin and most amphipathic membrane-active AMPs) frequently show synergy with each other and also with lysozyme. It is assumed that this reflects the cooperative interactions of the peptides with the outer membranes of Gram-negative bacteria and/or cooperative interaction with lipid bilayers in general. It was concluded that, given the substantial diversity of peptides in any given location in the host, synergistic interactions are important determinants of the overall effectiveness of the peptides. 52 Cationic peptides and cationic proteins can also act in synergy with reactive oxygen species to injure mammalian cells. [53][54][55][56]
Can histone released from neutrophil nets function as a major virulence factor involved in the pathophysiology of septic shock, trauma, and also in many additional clinical disorders?
It is alarming that today clinicians are still limited when trying to treat the life-threatening sequelae of severe microbial infections, which very often lead to sepsis and septic shock, both of which have a high mortality. 57,58 The annual incidence of sepsis in the USA has been estimated to affect as many as 750,000 hospitalized patients with mortality reaching about 40%. 57,58 Worldwide, sepsis is one of the commonest, deadliest disease entities, and globally, 20 to 30 million patients are estimated to be afflicted every year with what is one of the least well-understood disorders.
Screening the voluminous literature on sepsis treatments reveals the repeated unsuccessful efforts to save patients' lives by administering antibiotics, sometimes combined with only singly-selected antagonists. The numbers of unsuccessful antisepsis agents that have been tested in clinical trials in the last 30 years is phenomenal, and today, even the most promising agent, activated protein C, has been recently removed from use. Today, there is no specific effective treatment for sepsis and septic shock. 57,58 The pioneering studies on poly alpha cationic amino acids [3][4][5][6][7][8][9] and their role as bactericidal agents, as opsonins, and as bacteriolysis-inducing agents, (see the earlier sections) raised interest regarding the possible role of histones and modified histones, actually lysine and arginine-rich peptides, in the pathophysiology of a variety of clinical disorders.
This "new field" of research emerged in 2009 from two "breakthrough" articles by Xu et al 59 and Chaput et al 60 in Nature Medicine. These authors had proposed that nuclear histones released from PMN nets may be the main cause of death in sepsis and that this is due to the toxicity of the highly cationic protein to endothelial cells (ECs).
Histones comprise five groups of nuclear proteins rich in the highly basic amino acids l-lysine and l-arginine, which are bound to chromatin in the cell nucleus. Extracellular histones are highly toxic to bacteria and to mammalian cells and can increase plasma thrombin generation by impairing endothelial thrombomodulin-dependent protein C activation, which is responsible for disseminated intravascular coagulopathy (DIC). Dysregulation of ECs by the released histones leads to a severe immune cytokine storm and coagulation cascade. The toxic effects of histones could be abrogated or slowed down by antibodies to histone, by activated protein C (a protease which cleaves histones), and also by heparin. Extracellular histones are also elevated in response to traumatic injury, and this elevation correlates with fibrinolysis 6,10-12 and activation of anticoagulants. In trauma patients, an increase in histone levels between the time of admission and 6 hours is predictive of mortality. This suggests a possible role for activated protein C in mitigating the sterile inflammatory response after trauma through the proteolysis of circulating histones. However, the question of whether histones alone are the real culprits or just markers of cell damage is still unsettled.
Septic shock was recently redefined as a multifactorial synergistic phenomenon/disorder. No distinct virulence factor (alarmin) has yet been identified which, if successfully neutralized, could slow down or even stop the devastating immune responses and hopefully reduce mortality. 61,62
The possible role of histones in additional clinical disorders
The interesting publications regarding the possible pathogenetic properties of histones in sepsis have resulted in a plethora of studies, also suggesting that these circulating polycations may be involved in the pathogenesis of DIC, acute lung injury, trauma, pancreatitis, liver, renal and myocardial disorders, heat stroke, and many other clinical disorders. [63][64][65][66][67][68][69][70][71][72][73][74][75][76][77][78][79] Analyzing these articles, one wonders if the presence of histones in the circulation indicates that these are innate virulence/toxic agents or just an additional marker of tissue damage.
Can histones function alone in vivo as absolute virulence factors?
It was previously demonstrated that the toxicity of histone to ECs and to epithelial cells in culture was markedly further enhanced (in a synergistic manner) in combination with oxidants, proteinases, and additional proinflammatory agents generated by activated neutrophils. 54,55,62,80,81 Such synergistic phenomenon might actually be a general mechanism of cell injury mediated by activated phagocytes recruited to infectious and inflammatory sites.
The following scenario might be depicted: following adherence to ECs, PMNs undergo NETosis, and the released DNA combined with histones is accompanied by activation of NADPH oxidase and generation of reactive oxygen species. This happens concomitantly with activation of a large array of proinflammatory agents.
Dysregulation of ECs leads to platelet activation and the generation of cytokine storms and coagulation cascades.
However, it is highly plausible that in vivo, histones and additional toxic polycations (eg, LL-37, elastase) most probably never act on their own but always in synergism with many additional proinflammatory agents. 54,55,81 Since histones' action could be abrogated either by antibodies to histone, activated protein C, anionic heparin, and also additional polysulfates, it is reasonable to assume that these inhibitors actually affect not only histones action alone but also the synergism among the various agents. Therefore, if these inhibitors are administered early enough, they could still manage to neutralize the toxic effects to prevent the ongoing deleterious immune and coagulation responses.
Perhaps we could also use antioxidants, 82 since NETosis and the release of histones (from activated PMNs adhering to endothelial cells) are accompanied by the activation of the respiratory burst in PMNs. 82 Years ago it was suggested that a multi-faceted approach to sepsis was required rather than treatment based on a single antagonist, but this paper was largely ignored. 83 Why is the mortality from sepsis still so high?
Current clinical management of sepsis
Currently, most efforts in the clinical management of septic patients are directed at early recognition and diagnosis, prompt commencement of treatment, source control of sepsis, and early antibiotic therapy. Following this initial therapy, the rest of the therapeutic armamentarium is based on supportive treatment such as optimal fluid therapy, vasopressor and inotropic therapy, and organ support (eg, mechanical ventilation and renal replacement therapy). In light of our growing understanding of the underlying mechanisms of sepsis and the host response to sepsis, these treatments are directed at "downstream" processes that occur long after the initial injury.
Usually, in clinical practice, sepsis patients showing the main symptoms of tachypnea, tachycardia, confusion, high lactate and procalcitonin levels, and leukocytosis or leucopenia arrive in the intensive care unit (ICU) many hours, or even days, after developing symptoms. Therefore, even the "miracle" novel nonanticoagulant heparin 81,84 combined with antibiotics might already be ineffective since by that time all the pathological biochemical processes are already well established. As for the nature of additional sepsis markers, which may be helpful in the early diagnosis of sepsis, the reader is directed to excellent review articles on the subject. 61,82,[85][86][87][88][89] Finally, taken together, it is not understood why the latest clinical consensus definition of sepsis in 2016 90 has not considered and discussed either the possible involvement in sepsis of histones and additional cationic peptides, the use of nonanticoagulant heparin, 84 or the role of bacteriolysis in pathogenicity.
It seems that antihistone measures as a plausible and accepted therapy (see section on "Can histone released from neutrophils") have not yet matured sufficiently to reach practicing clinicians.
It is hoped that further studies on the pathophysiology of sepsis might shed more light on the possible validity of combinations of nonanticoagulant heparin, anti-inflammatory cocktails, anti-cytokines, antioxidants, and anti-bacteriolysis agents in the treatment of sepsis. To do so effectively, we have to define and employ very early markers of sepsis. [81][82][83][84][85]
Looking to the future
Consider a still hypothetical idea: every office of a public/private practitioner (a house doctor) could have available a simple, inexpensive kit that detects early sepsis markers and identifies abnormal levels of biochemical, blood, and leukocyte parameters in urine, blood, or other biological fluids. These would replace the more cumbersome and expensive ELISA kits that, today, are usually available in research laboratories but not in the ICU. Thus, sepsis diagnosis would be much faster, allowing adequate and effective treatment to begin earlier.
Summary
Taken together, our understanding of the possible role of highly charged polymers of basic amino acids in the pathophysiology of infection, in postinfectious and inflammatory sequelae, and following trauma and inflammation has evolved over more than 60 years and is still evolving. Septic shock and posttrauma syndromes are considered synergistic multifactorial disorders in which no single virulence factor has been identified that, if successfully inhibited, might delay or stop the immunological and coagulation cascades leading to a patient's demise. Circulating histones are also linked to the pathogenesis of pulmonary, renal, cardiac, pancreatic, and liver disorders, as well as other disorders. Today, we still do not fully know whether histones and additional cationic peptides released into the circulation are major virulence factors or just biomarkers of tissue damage.
Whatever the reasons are for the pathogenicity of polycations, the newly described nonanticoagulant heparin, if combined in time with antibodies to histones, activated protein C (a protease, which cleaves histones), nonbacteriolytic antibiotics, antioxidants, steroids, and cocktails of additional antagonists, may be justified for use as a therapeutic regimen. This may finally bring to an end the numerous unsuccessful trials of sepsis conducted, by the administration of single antagonists, over so many years in an attempt to cope with the patient's morbidity and unfortunate mortality. 79 However, we should also consider the fact that since patients suspected of developing septic shock may usually arrive at the ICU hours or days after the appearance of symptoms, even antihistone strategies may not be fully protective by that time. Therefore, efforts should be made to identify novel, very early markers of tissue damage to allow early treatment.
The complexity of the sepsis syndrome, which involves multiple interactions among biochemical, immunological, and coagulation cascades, and the difficulty in identifying the disorders early enough are still the main stumbling blocks to achieve a consensus of how to prevent and treat the post infectious sequelae of sepsis and posttrauma syndromes.
Disclosure
Erez Koren is currently employed at Teva Pharmaceuticals Ltd, Israel. The authors report no other conflict of interests in this work. | 2018-04-03T00:22:35.373Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "4ecb490d06843eb1e68fa7c48525c90c1b367e78",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=34699",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f32064ba8aa8e208712ee0f1dfb59530bfe1632",
"s2fieldsofstudy": [
"Chemistry",
"History",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15437570 | pes2o/s2orc | v3-fos-license | Chiral MnIII (Salen) Covalently Bonded on Modified ZPS-PVPA and ZPS-IPPA as Efficient Catalysts for Enantioselective Epoxidation of Unfunctionalized Olefins
Chiral MnIII (salen) complexes supported on modified ZPS-PVPA (zirconium poly(styrene-phenylvinylphosphonate)) and ZPS-IPPA (zirconium poly(styrene-isopropenyl phosphonate)) were prepared using –CH2Cl as a reactive surface modifier by a covalent grafting method. The supported catalysts showed higher chiral induction (ee: 72%–83%) compared with the corresponding homogeneous catalyst (ee: 54%) for asymmetric epoxidation of α-methylstyrene in the presence of 4-phenylpyridine N-oxide (PPNO) as axial base using NaClO as an oxidant. ZPS-PVPA-based catalyst 1, with a larger pore diameter and surface area, was found to be more active than ZPS-IPPA-based catalyst 2. In addition, a bulkier alkene, indene, was efficiently epoxidized with these supported catalysts (ee: 96%–99%); these results were much higher than those for the homogeneous system (ee: 65%). Moreover, the prepared catalysts were relatively stable and could be recycled at least eight times without significant loss of activity and enantioselectivity.
Introduction
The asymmetric epoxidation of alkenes into unique labile three-membered ether rings-which are useful organic building blocks for the synthesis of pharmaceuticals, agrochemicals, and fine chemicals-is one of the fundamental organic transformations [1][2][3]. Chiral Mn III (salen) complexes, first reported by the Jacobsen [1,2] and Katsuki groups [4], have emerged as extremely efficient systems for the asymmetric epoxidation of unfunctionalized olefins. Although homogeneous catalysis is advantageous in terms of product yield and efficiency, it suffers from limitations of catalyst recovery and a problem with residual catalyst in the synthesized molecules. Thus, extension of these methods for large-scale synthesis-and for making pharmaceutically important molecules-becomes a matter of environmental and economic concern. Hence, it is of paramount importance that the utilization efficiency of these Mn III (salen) complexes be improved by immobilizing them onto various heterogeneous supports [5][6][7][8][9][10][11][12][13][14] such as MCM-41, LDHS, silica, polymers, dendrimers, graphene oxide, glass beads, etc. The use of supported catalysts offers an attractive solution owing to their easy separation, convenient handling, non-toxic nature, and reusability.
Our group has, in recent decades, been concerned with research on organic polymer-inorganic phosphate salt (Zr, Zn, Al, and Ca) hybrid materials [15][16][17][18][19][20]. The features of these organic polymer-inorganic phosphate salt (Zr, Zn, Al, and Ca) supports are different from either common polystyrene or pure phosphate salts (Zr, Zn, Al, and Ca). They consist of polystyrene, which is easily modified and hydrophobic, phosphate salt (Zr, Zn, Al, and Ca) parts that are hydrophilic, and a nanometer-scale self-assembled layered structure. Thus, a great number of polystyrene segments combined with layered phosphate salts (Zr, Zn, Al, and Ca) will lead to the formation of different cavities, holes, pores, micropores, channels, and secondary channels with various sizes and shapes, which makes these materials promising candidate supports.
Recently [21,22], we have devoted ourselves to axially immobilizing Jacobsen's catalyst on a series of organic polymer-inorganic phosphate salts (Zr, Zn, Al, and Ca) through different diamine, polyamine, diol, or diphenoxyl linkers; the prepared heterogeneous catalysts were evaluated for enantioselective epoxidation of unfunctionalized olefins. Representative works [17] indicated that ZPS-PVPA-based catalysts effectively catalyzed epoxidation of styrene and α-methylstyrene (ee: 50% to 78% and 86% to >99%) with m-CPBA or NaClO. These results are significantly better than those achieved with the homogeneous chiral catalysts under the same reaction conditions (ee: 47% and 65%). Moreover, the immobilized catalysts could be reused at least 10 times without significant loss of activity and enantioselectivity. Furthermore, a point worth emphasizing is that the ZPS-PVPA-based catalyst gives a remarkable increase in conversion and ee values in the absence of expensive O-coordinating axial additives for the asymmetric epoxidation of olefins [23,24], which is exactly opposite to the literature reported earlier for both homogeneous and heterogeneous systems [25,26]. This novel additive effect was mainly attributed to the support ZPS-PVPA and the axial phenoxyl linker group. Further research demonstrated that similar catalytic results were dependent not only on the axial phenoxyl or oxyalkyl linker group but also on other organic polymer-inorganic hybrid phosphate salts (Zn, Al, and Ca). In order to better understand this novel additive effect, and in the search for different stable, efficient, and reusable heterogeneous Mn III (salen) catalysts, we were encouraged to conduct further research. Herein, we describe the covalent bonding of an asymmetric chiral Mn III (salen) complex 3, which has rarely been reported by our group, on modified hybrid zirconium phosphonates ZPS-PVPA and ZPS-IPPA to give supported complexes 1 and 2 (Scheme 1). To allow the maximum conformational mobility in the complex needed to obtain a high level of asymmetric induction and to prevent leaching of the active complex, the grafting was done through one side of the 4-position of the chiral salen ligand. The enantioselective catalytic activities of catalysts 1 and 2 were examined for epoxidation of α-methylstyrene, styrene, and indene in the presence/absence of axial base. Disappointingly, good catalytic efficiency was obtained only with the help of an expensive axial base in the different oxidant systems. In turn, these results further indicate that the novel additive effects should be attributed to the axial phenoxyl or oxyalkyl linker groups rather than to the supports or the covalent attachment method.
Methods
FT-IR spectra were recorded from KBr pellets using a Bruker RFS100/S spectrophotometer (Bruker, Germany) and diffuse reflectance UV-vis spectra of the solid samples were recorded on a spectrophotometer (Bruker, Germany) with an integrating sphere using BaSO 4 as standard. X-ray photoelectron spectra were recorded on an ESCALab250 instrument (Thermo Fisher, Waltham, MA, USA). Elemental analysis was performed on a PE-2400 (C.H.N.) analyzer (PerkinElmer, Waltham, MA, USA). TG analyses were performed on an SBTQ600 Thermal Analyzer (LINSEIS, Robbinsville, NJ, USA) with a heating rate of 20 • C·min −1 from 25 to 1000 • C under flowing N 2 (100 mL·min −1 ). The Mn contents of the catalysts were determined by TAS-986G atomic absorption spectroscopy (Pgeneral, Beijing, China). SEM images were obtained on a KYKY-EM3200 microscope (KYKY, Beijing, China). TEM images were obtained on a TECNAI10 apparatus (PHILIPS, Amsterdam, Holland). Nitrogen adsorption isotherms were measured at 77 K with a 3H-2000I volumetric adsorption analyzer (Huihaihong, Beijing, China) using the BET method. The racemic epoxides were prepared by epoxidation of the corresponding olefins by 3-chloroperbenzoic acid (m-CPBA) in CH 2 Cl 2 and confirmed by NMR (BrukerAV-300, Bruker, Germany), and the gas chromatography (GC) was calibrated with samples of n-nonane, olefins, and the corresponding racemic epoxides. The yields (with n-nonane as internal standard) and the ee values were analyzed by gas chromatography (GC) with a Shimadzu GC2014 instrument (Shimadzu, Kyoto, Japan) equipped with a chiral column (HP19091G-B233, 30 m × 0.25 mm × 0.25 µm) and FID detector, injector 230 • C, detector 230 • C. The column temperature for α-methylstyrene, styrene, and indene was 80-180 • C. The retention times of the corresponding chiral epoxides are as follows: (a) α-methylstyrene epoxide: the column temperature is 80 • C, t S = 12.9 min, t R = 13.0 min; (b) styrene epoxide: the column temperature is 80 • C, t R = 14.7 min, t S = 14.9 min; (c) indene epoxide: the column temperature is programmed from 80 to 180 • C, t SR = 16.1 min, t RS = 17.1 min.
Synthesis of Asymmetric Chiral Mn III Salen Complex 3
Manganese insertion into the salen ligand was accomplished by adding a solution of Mn (OAc)2·4H2O (103.0 mg, 0.42 mmol) in 10 mL of ethanol to the salen ligand 2 (94.9 mg, 0.21 mmol) with stirring (Scheme 3). The mixture was refluxed for 3 h under the protection of Ar. Then air was bubbled through for an additional 2 h, and 26.7 mg of solid LiCl was added. After refluxing for 1 h, the solid was filtered and rinsed sequentially with CH2Cl2, ethanol, and H2O, and then finally dried in vacuum to yield brown powder 3 [27].
Immobilization of Asymmetric Chiral Mn III (Salen) Complex 3 on ZCMPS-PVPA and ZCMPS-IPPA
Catalyst 3 (543 mg, 1.0 mmol) and NaOH (120 mg, 3 mmol) were added to a suspension of ZCMPS-PVPA and ZCMPS-IPPA (0.50 mmol Cl) that had been pre-swelled in 10 mL of dry THF for 30 min (Scheme 4). The yellow suspension was refluxed for 24 h. The dark brown powder was collected by filtration, washed thoroughly and successively with ethanol, CH 2 Cl 2 , and deionized water, and dried in vacuo. The CH 2 Cl 2 filtrate was monitored by UV-vis until no peaks could be detected (with pure CH 2 Cl 2 solvent as reference). Yield: 88.0% and 86.1%, respectively. The loading of Mn III (salen) complex 3 in the heterogenized catalysts, based on the Mn element, was 0.43-0.52 mmol/g as determined by AAS.
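As a side note on how such a loading figure relates to the raw AAS measurement, the back-of-the-envelope sketch below converts a manganese mass fraction into mmol of Mn per gram of catalyst; the 2.5 wt % input is a hypothetical value chosen only because it lands within the reported 0.43-0.52 mmol/g range, not a number taken from the paper.

```python
# Catalyst loading (mmol Mn per gram of catalyst) back-calculated from the
# Mn mass fraction measured by AAS.
MN_MOLAR_MASS = 54.938          # g/mol
mn_wt_percent = 2.5             # hypothetical AAS result, not from the paper
loading_mmol_per_g = mn_wt_percent / 100.0 / MN_MOLAR_MASS * 1000.0
print(f"{loading_mmol_per_g:.2f} mmol Mn per g")   # ~0.46, within 0.43-0.52
```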
Asymmetric Epoxidation
Enantioselective epoxidation reactions were carried out using catalysts 1-2 (0.025 mmol)with α-methylstyrene, styrene, and indene (0.5 mmol) as substrates in 3 mL of dichloromethane under reaction conditions in the presence of PPNO (0.19 mmol) as an axial base with aqueous buffered 1.8 mL NaClO (0.55 mol/L pH = 11.3) as an oxidant, the aqueous buffer was prepared with commercially available sodium hypochlorite (10%) diluted by 0.05 mol/L sodium dihydrogen phosphate (V NaClO :V NaH2PO4 = 25:10), then, the solution was adjusted by 1.00 mol/L hydrochloric acid to pH = 11.5. When the conversion was steady, the mixture was diluted with CH 2 Cl 2 (3 mL). The phases were separated and the aqueous layer was extracted with CH 2 Cl 2 (3 mL × 2). The combined organic layer was washed with brine (3 mL × 2) and dried over anhydrous sodium sulfate. The concentrated filtrate was purified by chromatography on a silica gel column to afford the corresponding epoxide. For m-CPBA/NMO system, a solution of alkene (0.5 mmol), NMO (337.5 mg, 2.5 mmol), n-nonane (internal standard, 90.1 mL, 0.5 mmol), and immobilized Mn III (salen) complexes (0.025 mmol, 2.0 mol %) in CH 2 Cl 2 (3 mL) was cooled to the desired temperature. Solid m-CPBA (172.5 mg, 1.0 mmol) was added in four portions over 2 min. After completion of the reaction, the mixture was washed sequentially with saturated sodium hydroxide and brine-to remove any residual m-CPBA and the corresponding acid-and dried over anhydrous Na 2 SO 4 . The conversion and ee values were determined by GC using nonane as an internal standard.
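For readers unfamiliar with the GC work-up, the sketch below shows the standard way conversion (via the n-nonane internal standard) and enantiomeric excess (from the two enantiomer peak areas on the chiral column) could be calculated. The peak areas are made-up numbers and unit response factors are assumed; the actual calibration used in this work may differ.

```python
def conversion(area_substrate, area_istd, area_substrate_0, area_istd_0):
    """Fraction of substrate consumed, using the internal standard to
    normalize injection-to-injection variation (response factors taken as 1)."""
    ratio_0 = area_substrate_0 / area_istd_0        # before reaction
    ratio_t = area_substrate / area_istd            # after reaction
    return 1.0 - ratio_t / ratio_0

def enantiomeric_excess(area_major, area_minor):
    """ee (%) from the areas of the two enantiomer peaks on the chiral column."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

print(conversion(120.0, 1000.0, 980.0, 1000.0))     # ~0.88 conversion
print(enantiomeric_excess(91.5, 8.5))               # 83 % ee
```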
FT-IR Spectroscopy
FT-IR spectra (Figure 1) of the supported catalysts 1 and 2 showed bands at near 1620 cm −1 due to the C=N stretching vibration which was similar with homogeneous chiral Mn III (salen) catalysts.
Besides, the IR bands of the supported catalysts at near 1026 and 1473 cm −1 are attributed to Ph-O-C stretching vibrations, which appeared in the supports ZCMPS-PVPA and ZCMPS-IPPA. These results preliminarily showed that the chiral Mn III (salen) molecules are immobilized on these two kinds of carriers.
DR UV-Vis Spectroscopy
The UV-vis spectra (Figure 2) of the supported catalysts 1 and 2 showed the characteristic bands of homogeneous Mn III (Salen), indicating the presence of Mn III (Salen) in ZCMPS-PVPA and ZCMPS-IPPA. The characteristic bands of Mn III (Salen) at 334, 435 and 510 nm had been blue-shifted to near 330, 410 and 505 nm after immobilization, respectively. The blue-shifting was mainly due to the interaction between the carrier and the chiral Mn III (salen). Hence, the diffuse reflectance UV-vis spectra also gave further evidence for successful immobilization.
Microscopic Analysis
Scanning electron microscopy (SEM) of catalyst 1 (Figure 3a,b) shows the surface morphology of the catalyst. Figure 3a indicates that the amorphous catalyst has a particle diameter of one hundred to several hundred nanometers, and that each particle consists of several smaller particles with diameters of dozens of nanometers. These smaller particles of different shapes gather together irregularly, forming micropores, cavities, and secondary channels that increase the surface area of the catalyst and provide enough accessible space for substrate and oxidant molecules to reach the catalytic active sites; these features are clearly visible in Figure 3a. The catalyst treated in basic solution (Figure 3b) is relatively looser than that depicted in Figure 3a: its particles, with diameters of dozens of nanometers, are smaller, and the micropores, cavities, and secondary channels are larger than those in Figure 3a. Transmission electron microscopy (TEM) shows that the average diameter of these secondary channels among the layers of the catalyst is around 50 nm (Figure 4a,b). It is deduced that the inorganic zirconium (ZrHPO4) parts of the catalyst, which show a layered structure at the nanoscale, were enlarged and decomposed in basic solution (Figure 4b); this special configuration could help the substrates approach the internal catalytic active sites easily and offer enough space for the epoxidation of olefins.
The BET Surface Areas, Pore Volumes, and Average Pore Sizes of ZCMPS-PVPA, ZCMPS-IPPA, and 1
Data on BET surface area, average pore size, and pore volume are presented in Table 1. A large decrease in BET surface area was observed for catalysts 1 and 2, which were prepared by covalent attachment of the chiral salen ligand, at one side of its 4-position, to the -CH2Cl groups of the hybrid zirconium phosphonates (ZPS-PVPA and ZPS-IPPA); the pore volumes and average pore sizes were also reduced, suggesting that the MnIII(salen) was mainly located in the inner channels of the support material. Moreover, the surface areas (ZCMPS-PVPA vs. ZCMPS-IPPA: 120.3 vs. 100.3 m2/g) provide the substrates with sufficient opportunity to approach the catalytic active sites.
X-ray Photoelectron Spectroscopy
The XPS spectrum (Figure 5) gives further evidence of successful immobilization, based on the fact that the characteristic peak of Mn 2p3/2 at 642.1 eV is clearly visible, in agreement with the values previously reported for MnIII(salen) and porphyrinic ligands [28,29].
The Activities of the Catalysts
The enantioselective catalytic activities of the chiral MnIII(salen) complex and of catalysts 1 and 2 were examined in the epoxidation of α-methylstyrene, styrene, and indene at 0 °C, using aqueous NaOCl as the oxidant in the presence or absence of PPNO as an axial base. The data reported in Table 2 indicate that all reactions proceeded smoothly, and the ZPS-PVPA-based catalyst 1, with its larger pore diameter and surface area, was found to be more active than the ZPS-IPPA-based catalyst 2 (ee: 83% vs. 72%; conversion: 90% vs. 75%). Significantly, the enantio-induction with α-methylstyrene, styrene, and indene was high in this study, with ee values higher than those of the homogeneous catalyst under the same conditions. The increase in chiral recognition could arise from the unique spatial environment created by both the chiral salen complex and the surface of the supports used. Similar results were obtained by Kim [30] and Li [6]. Kim et al. reported that, for the asymmetric epoxidation of α-methylstyrene, the ee increased from 51% to 59% after immobilization of MnIII(salen) on siliceous MCM-41 by multi-step grafting. Hutchings [31] found that the confinement effect originating from the zeolite cage could improve the chiral induction in the asymmetric epoxidation of styrene. It is deduced that the increase in enantiomeric excess is mainly attributable to the microenvironment effects of the ZPS-PVPA- and ZPS-IPPA-immobilized MnIII(salen) [16,17,22,23], which result from the layered structure, micropores, and channels, the hydrophilic polystyrenylphosphonate parts, and the hydrophobic zirconium parts of the hybrid zirconium phosphonate. These features differ from those of either pure polystyrene or pure zirconium. In the absence of PPNO, catalyst 1 oxidized α-methylstyrene with only 33% ee and 25% conversion, suggesting that the presence of PPNO is essential for catalyst stability and enantioselectivity. PPNO, which is only weakly bound to the manganese center, has a remarkable effect on both the activity and the enantioselectivity of the epoxidation by activating and stabilizing the catalyst. We consider two possibilities for how PPNO acts on manganese [16,17]. The first is that the effect of PPNO is due to a set of equilibria (Scheme 5), wherein the active (salen)MnV=O complex undergoes reversible coupling with a MnIII complex to generate an inactive µ-oxo dimer. In the presence of PPNO, the equilibrium is shifted toward the MnV oxo intermediate as a result of the additive binding to the coordinatively unsaturated MnIII complex; an acceleration of the epoxidation rate is then expected from the increased concentration of the active MnV oxo species in solution. The second is that PPNO does not actually coordinate to Mn, remaining only weakly bound to the manganese center, and that it nonetheless has a remarkable effect on the enantioselectivity of the epoxidation by increasing the stability of the MnV oxo intermediate.
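A compact way to write the set of equilibria invoked in the first possibility above (a sketch of Scheme 5 as described in the text, not a reproduction of the original scheme; the oxidation states of the dimer are left unspecified) is:

\[
\text{(salen)Mn}^{V}\!=\!\text{O} \;+\; \text{(salen)Mn}^{III} \;\rightleftharpoons\; \text{(salen)Mn--O--Mn(salen)} \quad (\text{inactive } \mu\text{-oxo dimer})
\]
\[
\text{(salen)Mn}^{III} \;+\; \text{PPNO} \;\rightleftharpoons\; \text{(salen)Mn}^{III}(\text{PPNO})
\]

By sequestering the coordinatively unsaturated MnIII species, PPNO pulls the first equilibrium toward the active MnV oxo intermediate, which accounts for the observed rate acceleration.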
The role of PPNO as an axial base was established earlier by other authors [25,26,32,33]. ZCMPS-PVPA and ZCMPS-IPPA alone showed negligible catalytic activity toward the epoxidation of α-methylstyrene, taken as a representative substrate.
Furthermore, catalysts 1 and 2 were also found to be efficient in the epoxidation of indene (ee: 96%-99%), with results higher than those of the homogeneous reaction carried out with catalyst 4 under identical reaction conditions (ee: 65%). However, the catalytic reactions were slower (12 h) with catalysts 1 and 2; similar behavior was observed in the epoxidation of α-methylstyrene and styrene. This behavior is attributed to the diffusional constraints usually present when a catalyst is supported inside a material. Materials with larger pore sizes would be expected to impose less diffusional resistance; thus, the higher TOF values obtained with catalyst 1 supported on ZPS-PVPA, compared with catalyst 2 supported on ZPS-IPPA, are to be expected.
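As a rough illustration of how the TOF comparison follows from the quantities quoted above, the sketch below assumes 0.5 mmol of substrate, 0.025 mmol of catalyst, a 12 h reaction time, and the conversions quoted in the text (90% for catalyst 1, 75% for catalyst 2); the values reported in the original Table 2 may differ.

```python
# Hypothetical turnover-frequency (TOF) estimate; the quantities below are taken
# from the experimental description (0.5 mmol alkene, 0.025 mmol Mn, ~12 h) and
# the conversions quoted for catalysts 1 and 2, not from the original Table 2.

def tof(substrate_mmol: float, conversion: float, catalyst_mmol: float, time_h: float) -> float:
    """TOF = moles of substrate converted / (moles of catalyst * reaction time), in h^-1."""
    converted = substrate_mmol * conversion          # mmol of alkene consumed
    return converted / (catalyst_mmol * time_h)      # mmol product per mmol Mn per hour

for name, conv in [("catalyst 1 (ZPS-PVPA)", 0.90), ("catalyst 2 (ZPS-IPPA)", 0.75)]:
    print(f"{name}: TOF ~ {tof(0.5, conv, 0.025, 12.0):.2f} h^-1")
# catalyst 1 (ZPS-PVPA): TOF ~ 1.50 h^-1
# catalyst 2 (ZPS-IPPA): TOF ~ 1.25 h^-1
```

On this reading, the larger pores of ZPS-PVPA translate directly into a higher TOF at the same loading, consistent with the diffusion argument above.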
The conversions and ee values of the epoxides obtained with catalysts 1 and 2 were good when NaClO was used as the oxidant. When α-methylstyrene was used as the substrate, the ee values of the resulting epoxides were 72%-83%, with conversions of 75%-90%, whereas only a 43% ee value was obtained using m-CPBA as the oxidant (Table 3). A possible explanation for this phenomenon is that the layers of ZPS-PVPA or ZPS-IPPA are enlarged or even decomposed in basic solution (the pH value of the NaClO oxidation system is 11.35). The deduced decomposition process of catalyst 1 under basic conditions is shown in Figure 6: one amorphous particle of the immobilized catalyst 1 consists, irregularly, of hundreds or thousands of smaller microcrystalline catalyst particles with regular layers, and most of the catalytic sites are immobilized and embedded on the surface, in the interlayer or interlamellar region, or among the microcrystallites of ZPS-PVPA under ordinary conditions. In basic solution, the layers of ZPS-PVPA are expanded or even partly decomposed, more secondary channels are formed, and the original secondary channels are enlarged, so that some of the embedded catalytic active sites become exposed to the reaction solution and the substrates and reactants can easily diffuse to these catalytic sites through the secondary channels. In the m-CPBA oxidation system, by contrast, which does not involve a basic reaction solution, the nanoparticles of the immobilized catalyst gather together; the structure of the catalyst remains rigid and very stable, with relatively fewer secondary channels, so some of the embedded catalytic active sites cannot work effectively.
(Figure 6 legend: zirconium phosphates (phosphonates); chloromethyl poly(styrene-phenylvinylphosphonate); MnIII(salen).)
The Reusability of the Catalyst
The recyclability of catalyst 1 was examined in the epoxidation of α-methylstyrene as a representative substrate. After the first run of the epoxidation reaction, catalyst 1 was separated by centrifugation, washed thoroughly with dichloromethane, dried, and subjected to another cycle with fresh reactants under the same epoxidation conditions. Table 4 shows the results of the recovery and reuse of catalyst 1. To our delight, catalyst 1 could be reused eight times with no appreciable decrease in the yield or enantioselectivity of α-methylstyrene epoxide. Chemical analysis of the Mn content in the supernatant revealed no detectable leaching of Mn species during the reaction. These results indicate the excellent stability and reusability of catalyst 1 under the basic reaction conditions used in this work.
Conclusions
Catalysts 1 and 2 were prepared by heterogenizing a chiral MnIII(salen) complex onto the modified hybrid materials ZPS-PVPA and ZPS-IPPA using a covalent bonding method. The supported catalysts 1 and 2 effectively catalyzed the epoxidation of α-methylstyrene (ee: 72%-83%) with aqueous NaClO in the presence of 4-phenylpyridine N-oxide (PPNO) as an axial base. These results are significantly better than those achieved with catalyst 4 in the homogeneous system (ee: 54%). Remarkably, catalysts 1 and 2 also worked well for a relatively bulkier alkene such as indene (ee: 96%-99%), with results much higher than those of the homogeneous system (ee: 65%). Moreover, the prepared catalysts are relatively stable and can be recycled at least eight times without significant loss of activity or enantioselectivity. A drawback is that the good catalytic efficiency depends on the help of an expensive O-coordinating axial additive in the different oxidant systems. In turn, the results further indicate that the observed additive effects are attributable to the axial phenoxyl or oxyalkyl linker groups and not to the supports or the covalent attachment method.
"year": 2017,
"sha1": "117230f352975035cbad5a833ca23c9ef360088e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/9/3/108/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "117230f352975035cbad5a833ca23c9ef360088e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
MARKETING STRATEGY FOR TOBACCO AND ITS INDUSTRIAL PRODUCTS TO FACE GLOBAL MARKET AND ANTI-TOBACCO CAMPAIGN
During the last decade, tobacco has been sharply highlighted by the international community. In global trade, the tobacco business and its industrial products have come under pressure through the World Health Organization, which formulated the Framework Convention on Tobacco Control (FCTC). The FCTC conventions aim to control tobacco and its industrial products in the global market because they can be detrimental to health. This paper aims to review the pattern of tobacco trade in a global market that tends to be discriminated against. The method of analysis employed by the author is a review of existing journals together with observations and discussions related to the problem of tobacco in global trading. The facts of the tobacco trade show that the FCTC regulations, which have been widely adopted by countries around the world, often conflict with the global trade regulations enacted in almost all of the world. Tobacco and its industrial products are subject to discriminatory treatment in global trade. This condition has triggered many parties to trade tobacco illegally, given that world demand for tobacco, despite the pressure, still rises by about 2% every year along with the increase in world population. From a business administration perspective, an objective strategy is needed to address the challenges inherent in the tobacco business. The paradigms of the Resource-Based View (RBV) and the Market-Based View (MBV) need to be integrated with the internal and external factors of the tobacco business so that the business can be maintained. A tobacco-producing country (for example, Indonesia) must support its citizens so that their businesses can survive, because they benefit from the tobacco business. The country must safeguard and fight for tobacco in the global market without it having to experience product discrimination. Keywords: tobacco business, global market, business administration.
INTRODUCTION
Tobacco is one of the agricultural products that became an international trade commodity developed in Indonesia. According to Arnes (2009), tobacco was first brought to Indonesia by the Spaniards through the Philippines and was introduced in Java around 1601. Initially people smoked by chopping tobacco and wrapping it in corn or banana leaves, which became known as "Klobot" cigarettes. The first Western cigarettes were brought by BAT to Batavia (Jakarta) in 1825, during the Dutch colonial period. Around 1924, BAT established the first conventional cigarette factory in Cirebon and subsequently in Semarang. As early as the 17th century there had been attempts to mix cigarettes with clove oil, but these were not successfully marketed. Around 1870 a man from Kudus named Jamhari tried a tobacco formula mixed with clove oil and found it successful in relieving his asthma.
The history of tobacco is documented in detail by a world institution called ASH (Action on Smoking and Health) in a report released in 2015. In the 1st century, tobacco mash was believed by native Americans to be a healing remedy, and the use of tobacco as a cigarette can be seen in a portrait of tobacco in Guatemala estimated to date from 600-1000 AD. In 1492 Columbus discovered tobacco in the new continent (America). Subsequently, Rodrigo de Jerez of Spain observed the smoking habit in America and brought it back to Spain. In 1531, the first cultivation of tobacco by Europeans was carried out in Santo Domingo. In 1548 the Portuguese developed tobacco and used it for commercial exports in Brazil. In 1571, Monardes, a doctor from Sevilla, stated that tobacco had properties that could be used to treat more than 36 diseases.
Tobacco and its industrial products are still needed in the international market to this day. World tobacco demand, according to the European Commission (2003), continuously increases at about 2% per year. The increase in tobacco consumption follows the growth of the world population, so world tobacco consumption also increases. World tobacco consumption that tends to rise strongly affects the trade in tobacco and its industrial products. The trend of tobacco consumption increasing linearly with the world population makes tobacco and its industrial products highly attractive to traders, placing them in the category of products that remain very marketable.
Tobacco and tobacco industry products increase linearly with the world's population, which keeps tobacco as one of the trade commodities that survives in all corners of the world. China, as the world's largest tobacco-producing country, plays a huge role in the world tobacco trade. According to data released by Ericson et al. (2015), China is to date both the world's largest tobacco-producing country and the world's largest tobacco-consuming nation. Tobacco and its industrial products have provided many benefits to the people and companies engaged in the tobacco business. Based on data from 2012, the profits of the international tobacco companies were as follows: 1. China National Tobacco ($95 billion); 2. Philip Morris International ($80 billion); 3. British American Tobacco ($76.4 billion); 4. Imperial Tobacco ($45.8 billion); 5. Altria/Philip Morris America ($24.5 billion); 6. Japan International ($20.1 billion). The top 10 countries by tobacco consumption are: 1. China, 2. Russia, 3. USA, 4. Indonesia, 5. Japan, 6. Germany, 7. India, 8. Turkey, 9. Republic of Korea, 10. Vietnam. The increasing trade and consumption of tobacco in the world creates very complex problems. These are not just trade problems: increased tobacco consumption raises great concerns about its impact. The prevalence of diseases caused by tobacco consumption is increasing every day. The effects of tobacco consumption are found not only in adults but also in teenagers and even children. Tobacco and tobacco-product consumption among teenagers is increasing, which has led to public concerns about the impacts that tobacco consumption will have.
Surveys and research have been conducted to determine the impact and consequences of tobacco consumption among children and adolescents around the world. As an example, Corey et al. (2014), citing the National Youth Tobacco Survey and the NSDUH (National Survey on Drug Use and Health) for 2011 and 2012, reported that cigar smoking among students in the US had increased dramatically. Among the causes of the increasing number of cigar smokers are promotion and branding. Therefore, it should receive the attention of the government so that the circulation of cigars among students in the US can be controlled.
The pressure on tobacco business actors increased when the regulations formulated in the Framework Convention on Tobacco Control (FCTC) were launched. The FCTC was formulated in 1999. In 2003, the FCTC framework began to be adopted by 171 countries around the world, and in February 2005 the FCTC officially became the reference for controlling tobacco worldwide. Thus, efforts to reduce and limit tobacco consumption began to focus on incorporating the FCTC framework into the regulations of UN member countries and other countries around the world.
According to Ericson et al. (2015), controlling tobacco in the world requires making regulations for tobacco from on-farm to off-farm activities. The tobacco business must be burdened with additional costs so that tobacco consumption can be controlled. Increasing taxes on tobacco and all of its industrial products is one of the ways to be implemented. The increased cost of tobacco and its industrial products results in low competitiveness, because the price factor ultimately makes them very expensive for tobacco users. Thus, although tobacco and its industrial products command high prices at the consumer level, tobacco producers, in this case the farmers, are unlikely to benefit from the price increase, because the price of the raw material does not increase significantly while production costs continue to rise.
Tobacco is an important agricultural product for Indonesia. According to Santoso et al. (2009), more than 18 million people earn their living by working in the tobacco sector. Tobacco also drives other sectors of the economy in society. The tobacco industry is a very important sector for keeping the gears of the Indonesian economy moving, and it proved to be among the most resistant to the economic crisis that hit Indonesia. For the country, tobacco and tobacco products contribute foreign exchange and taxes that are of great value to national development.
State revenue from the tobacco excise sector has increased significantly from year to year. The development of state revenue from the tobacco excise sector for 2010-2016 is presented in Table 1 (Actual Excise on Tobacco Products, 2010-2016). As Table 1 indicates, the amount of state revenue from the excise sector will continue to increase in accordance with prevailing conditions and legislation in Indonesia. That is, the tobacco sector and its industrial output can sustain the government's funding needs for development. Thus, if the state still expects tobacco excise duties to be sustainable, tobacco and its industrial output should also receive serious attention and guidance from the government, from upstream to downstream.
The tobacco market is a type of oligopsony market, in which the number of buyers is very limited and the buyers play a major role in determining the price of the product. Generally, the existing marketing chain is quite complex and passes through many marketing channels. Santoso (2001) noted that the tobacco trade in Madura is quite complex: tobacco from farmers cannot be handed directly to warehouses or factories, but must first pass through the tongko, then be collected by the Bandol, who then deposits it with the Juragan. It is these merchants who can connect with the big traders or with agents from the warehouses or industry.
Besuki Na-Oogst tobacco marketing in the Jember region involves many actors in the marketing chain. Farmers can rarely gain direct marketing access to the warehouses that export Besuki Na-Oogst tobacco. Small to medium traders, known as "Belandang", are the marketing actors directly connected with farmers. These small traders deposit their goods with larger traders or collectors, and only these big collectors reach the warehouses or exporters to market the tobacco. Trust between farmers and the tobacco marketing actors is well preserved in this marketing pattern.
Indonesia, as one of the tobacco producers in the world, has experienced ups and downs in its tobacco business. In the heyday of tobacco, tobacco and its industrial products were categorized as products of distinguished value (fancy products). Fancy tobacco products commanded high value in trade: farmers, as tobacco producers, enjoyed high profits from planting tobacco, and everyone involved in the chain of trade received enormous economic benefits. However, over the course of time, tobacco has almost lost its fancy character, so its value is no longer fantastic in the world of commerce.
The pattern of tobacco consumption has gone through many changes. This condition affects all tobacco business actors from upstream to downstream. International pressure on tobacco because of its health impact also plays a role in changing market tastes. Cigarette smokers have begun to shift to "milder" tastes, and cigar smokers have started to abandon large cigars and switch to smaller ones. Changes in the pattern and style of consumption of tobacco and its industrial products affect the ability of tobacco business players in production, distribution, and marketing.
The phenomenon of tobacco products and their industrial output, seen through the lens of business administration, is very interesting. The business strategy paradigm is divided into two views, the Resource-Based View (RBV) and the Market-Based View (MBV). In reviewing tobacco and its industrial output, this article aims to explore whether a strategy for tobacco cultivation and its industrial output can be sustained through the RBV or MBV approach. Is it also possible to combine the RBV and MBV approaches to address the worldwide problem of tobacco concessions arising from international pressure by anti-tobacco communities?
METHODOLOGY
This study of the marketing strategy for tobacco and its industrial products in global markets and under anti-tobacco campaigns uses qualitative methods. According to Moleong (in Herdiansyah, 2010: 9), qualitative research is scientific research that aims to understand the social context naturally by focusing on the pattern of in-depth communication and interaction between researchers and what is researched. Moleong (2012: 14) adds that a qualitative approach is interpreted as the study of subjective experience from a person's principal perspective.
The tobacco business in Indonesia is deeply embedded in society and shows clear social symptoms. According to Creswell (in Hasbiansyah, 2005), qualitative studies are depictions of the meanings that various people attach to their life experiences of events or concepts; the structure of awareness of human life experiences is explored through several of the people involved. According to Husserl (in Creswell, 1998), the efforts made by researchers in studying events include the search for what is needed (the essential), the meaning of basic experience or its invariant structure (the essence), and efforts to bracket the intensity of consciousness, in which experience consists of several things that appear (both from outside and from within each awareness) according to meaning, image, and memory.

The core of this qualitative research is therefore the life experiences of the people involved in the management of tobacco policies, because the essence can only be obtained from individuals who are really involved in its implementation. In social theory, such experience cannot be articulated by anyone other than those involved; hence, in connection with the selection of informants using a purposive technique, the informants are those who are directly involved in the implementation of tobacco policy. Talking about the essence of one's life experience is also closely related to a high level of subjectivity. Therefore, to minimize subjectivity, triangulation across the various informants involved (the implementers) was carried out so as to achieve objectivity in the research results: statements from informant "a" are checked against statements from other informants so that the statements can be matched.

Based on this explanation, the methodological implication of the qualitative approach is to select informants (research subjects) who are truly involved in the implementation of tobacco policy. Hasbiansyah (2005: 171) states that such qualitative research uses data collection techniques based on in-depth interviews with research subjects. In addition, complete data can be traced through other techniques, including participation, document searches, observations, and so forth. The interview data are facts of the phenomenon obtained from the experience of the informants regarding the implementation of tobacco policy. The researcher allows the phenomenon to be revealed as it is by the informant, fully describing the phenomenon experienced by the informant. All records of the in-depth interviews with informants were transcribed into written language, and from the transcriptions the researcher inventoried the important statements relevant to the implementation of tobacco policy.
DISCUSSIONS
1. Tobacco Performance and Its Industrial Products in Indonesia
Indonesia has experienced various conditions with regard to tobacco, some high tides and some low. The long history of tobacco in Indonesia can be read in Arnes (2009), entitled "From Tobacco to Kretek: A Success Story about Cloves", which states that in Indonesia, especially before the 1900s, the habit of the inhabitants was to chew betel. The Netherlands, as the country that controlled Indonesia, disliked this custom, so between 1900 and 1950 the habit of chewing betel was completely replaced by smoking, which at that time was considered more modern.
The development of tobacco use for cigarettes cannot be separated from the role of a man named H. Jamhari. Cigarettes were originally made with formulations using a mixture of tobacco and cloves to treat asthma in 1870. Since then, the cigarette has grown quite rapidly in Kudus, Semarang, and Java in general. A mixture of tobacco and cloves, when burned, produces a "kretek-kretek" sound, so in later developments this cigarette was called the kretek (clove) cigarette.
In its early development, until 1968, kretek cigarettes were still made with hand-rolling technology. After that year, kretek cigarette production began to be mechanized. Cigarette filters were favored and accepted by the market, and mechanization promised more effective company performance in increasing production. The negative impact of mechanization was that small tobacco companies began to go bankrupt while large cigarette companies started to develop fast. Market shares for cigarettes around 1989 were: Djarum 31%, Gudang Garam 31%, Bentoel 12%, and Sampoerna 5.5%. Government policy at that time was also more profitable for the large companies. Government intervention in the cigarette industry can also be seen in the formation of BPPC, which controlled the national clove trade and was run by Hutomo Mandala Putra from 1990.
In the era of the Suharto presidency, kretek cigarettes received protection, so the middle and upper classes began to like kretek cigarettes, stemming the tide of white cigarettes, which were a foreign product. In the 1990s kretek cigarettes held 90% of total cigarette sales and began to symbolize Indonesian culture. Gudang Garam, which in 1997 held 47% of the national cigarette market share, eventually declined to 24% in 2007, followed by Sampoerna with 23% and then Djarum with 20%. The entry of Philip Morris, a multinational company, finally began to shake the existence of the tobacco industry in Indonesia, and Gudang Garam also began to be eyed by other multinational companies. Djarum started to develop other sectors by acquiring shares of BCA, the largest private bank in Indonesia; in addition, PT. Djarum also began to develop other business sectors such as property, hotels, palm oil, and shampoo products.
The rapid development of tobacco products has created problems for the public. This condition has raised the global issue of tobacco control. The impact of this global issue on tobacco control in Indonesia has led to many regulations relating to the cigarette industry that are linked to trade and health issues. Most of those rules restrict cigarettes in terms of trade and promotion and also restrict people from smoking in public places. In subsequent developments, several NGOs even began suing the cigarette industry over violations of the regulations made within the framework of cigarette control.
Indonesian tobacco over the past decades has experienced ups and downs in its exploitation, which has had an enormous impact on business actors. Safitri (2011) analyzed the export and import performance of tobacco in Indonesia. Based on the results of the analysis, the performance of Indonesian tobacco exports and imports is as follows: 1. In 2000-2009, based on the TSR analysis, the development of tobacco exports showed values close to one, which illustrates that Indonesian tobacco is still at the maturation stage of exports. 2. Exporters and importers in Indonesia still need policies and the role of government in maintaining and improving quality and maintaining stability in order to compete with other countries. 3. The high demand for Indonesian tobacco should receive the government's attention, so that this natural resource can be produced better in order to increase the income of the country. 4. The analysis of market concentration shows that Indonesian tobacco exports are spread over several countries and are not centered on one country only. The role of the government in maintaining the stability of Indonesia's exports and imports is still urgently needed, as is the role of the Indonesian people in maintaining and conserving natural resources and maintaining the stability of export and import activities. The role of the government is essential given the real problems faced in Indonesia's export and import activities in the effort to increase economic growth.
The implications for Indonesian tobacco exports and imports in the world market are as follows: 1. To improve the competitiveness of Indonesian tobacco in the world market, all parties involved, including the government and Indonesian exporters and importers, need to participate actively in improving the international competitiveness of Indonesian tobacco production. 2. The public and the government should not be lulled by the results of Indonesian tobacco exports; instead, the government should increase export results to increase Indonesia's income. 3. The government and the people of Indonesia should be able to control the export trade of Indonesian tobacco, because the high demand for Indonesian tobacco may open the way for more dishonest acts by society and government to take advantage. According to Prajoga and Friyatna (2008), tobacco exploitation in Indonesia is still a mainstay for its actors and stakeholders. The development of tobacco exploitation in Indonesia has never been separated from global influence, and today many tobacco products are opposed by parts of the world community because of health problems. The development of the tobacco sector in Indonesia cannot be separated from the global influence of tobacco and health issues, whose restriction has been agreed at the world level by the WHO in the regulations contained in the FCTC.
Still according to Prajoga and Friyatna (2008), the performance of the tobacco sector in Indonesia can be summarized as follows: a. tobacco production during the period 2000-2006 decreased by an average of 5.98 percent per year; b. per capita cigarette consumption tends to rise with rising per capita income; c. the tobacco sector and the tobacco industry sector contributed about 7 percent of the country's revenues, but drain more foreign exchange than they generate; d. the role of the tobacco sector and the cigarette industry sector in the creation of output value, added value, and the absorption of labor is less significant, but both have considerable output multipliers, especially the tobacco sector; and e. the tobacco sector is able to pull along its upstream sector and push its downstream sector to develop, while the cigarette industry sector is only able to push its downstream sector. Looking at the performance of tobacco exploitation in Indonesia, there are several notes that need to be considered in its development, among others: a. the future development of the tobacco sector and the cigarette industry sector needs to consider the balance between economic and health aspects; b. if the policies taken by the government ultimately control tobacco, then this step should be taken gradually, considering that the economic consequences are also very large; and c. the nicotine and tar content of cigarettes needs to be reduced, and economically feasible alternative uses of tobacco need to be found. Besuki Na-Oogst tobacco has its own uniqueness in terms of its exploitation: all Besuki Na-Oogst tobacco products are destined for the export market. Besuki Na-Oogst is a type of tobacco used as raw material for cigars. Cigar products are tobacco products widely consumed by the people of Europe and America, while in the local (Indonesian) market, cigar-material tobacco has only a small market share.
Besuki Na-Oogst tobacco, planted by farmers in the Besuki region, especially in Jember, provides a very promising return for farmers and business actors. Hartadi (2009), who studied Besuki Na-Oogst tobacco, stated that Besuki Na-Oogst tobacco is a Jember commodity that is a source of community pride. In terms of the profit from cultivation, tobacco and rice are both beneficial crops cultivated by Jember farmers, but calculations show that tobacco is more profitable than rice.
Tobacco crops have a positive social effect and require more labor than rice. Rice prices are highly dependent on the government, which has the authority to control market prices to a large extent, and they are also affected by imported rice. Tobacco prices, by contrast, are free prices that depend on the existing market.
Tobacco has a higher comparative advantage than rice and is produced more efficiently. Rice farmers are subsidized by the government because that is government policy, while tobacco is taxed, which is income for the government. Therefore, the government should not restrict tobacco crops and should allow farmers to plant crops according to their own will. The government in this case should help tobacco farmers to deal with partnership issues with the exporters and provide adequate facilities.
Tobacco and Global Trade
The application of free trade or global trade in the world has a distinct impact on tobacco commodities. The existence of such free trade among several countries makes tobacco a reliable commodity that can be traded between countries, because tobacco is needed by many industries around the world. Thindwa and Seshamani (2014) stated that the trade liberalization conducted in Malawi did not increase growth in the tobacco trade sector; the increase in the tobacco trade was due more to the availability of fertile land suitable for tobacco cultivation.
Meanwhile, Taylor (2000) argues that the liberalization of tobacco trade through bilateral, regional, and international trade agreements has significantly reduced both tariff and non-tariff trade barriers. Advertising and promotion have increased drastically, raising tobacco and cigarette consumption globally. Low- and middle-income countries will be the victims of increased tobacco consumption. An effective way to limit tobacco consumption rests on health grounds, so global health organizations have a necessary role in regulating restrictions on tobacco consumption for the sake of good health.
A case study conducted by Warsh (2006) states that, in Canadian history from 1943 to 1949, cigar smoking was a common practice among Canadians, both male and female, even though the smoking habit was described by doctors as detrimental to health. In the war situation, economic activity was driven by women while they waited for their husbands to come home from war. Jewish merchants were also instrumental in the retail tobacco trade, and this tobacco trade produced integration between minorities and the majority population. A striking figure is that in 1993 the number of cigarettes consumed in Canada reached 1.7 billion sticks per month.
The report from Way (2014) states that the Food and Drug Administration plans to regulate premium cigars, electronic cigarettes, smokeless tobacco, and pipe tobacco as part of the tobacco prevention and control regulations of 2009. Premium cigar companies, cigar shops, and small businesses selling cigars will be affected by this policy. However, the FDA policy and the position of anti-tobacco advocates are highly questioned on this point, because no relationship has so far been established between premium cigar smoking and health outcomes. Corey et al. (2014), in a survey conducted in the United States, stated that cigar smoking among students, according to the National Youth Tobacco Survey and the NSDUH (National Survey on Drug Use and Health) in 2011 and 2012, increased drastically. Among the many causes, the increase in the number of cigar smokers is driven by promotion and branding. It should therefore receive government attention so that the circulation of cigars among students in the US can be controlled.
Allen (2011) notes that global tobacco control affects illegal tobacco trade. Illegal tobacco trade typically manifests itself in three interrelated ways: smuggling, counterfeiting, and evasion of local taxes. Illegal tobacco trade occurs around the world, and it has had a substantial impact, especially on excise and tax revenues for the state. Illegal trade is driven by the law of supply and demand, where consumers want cheap tobacco products and producers want their sales to generate as much profit as possible.
The method used to reduce the threat of illicit trade is a comprehensive approach, accompanied by strong political will and adequate funding for supervision. A sudden increase in taxes would increase the potential for illegal trade in tobacco products. Measures for controlling illegal tobacco include audits and physical controls at customs, supply chain control, good regulation and law enforcement, and international cooperation through the FCTC. Reavers (1999) mentions that the tobacco sector in the US has a strategic role in the economy. Changes in the industrial sector oriented toward cheap tobacco products have hit tobacco farmers economically. Various efforts have been made to improve the welfare of tobacco-producing farmers through various programs of economic activity. In this case, tobacco-producing farmers are required to be wiser in carrying out production activities and in investing money, for example in the provision of production facilities, and in managing costs so that producing tobacco products remains profitable. The point is that every time there is a change in the tobacco industry sector, farmers are required to adapt their production patterns.
The European Commission (2003) mentions that, for tobacco-producing countries such as China, the US, Brazil, Turkey, Malawi, and Zimbabwe, tobacco makes a huge contribution to mobilizing the economy among farmers. On the other hand, the impacts of tobacco product use on public health are also quite large, so the cost of recovery is enormous.
World tobacco use is estimated to grow by 2 percent annually due to population growth. To control tobacco products, some countries impose high taxes on them, but this boosts the potential for illegal trade in tobacco products. A strong political will to control tobacco products in a balanced way is the best form of control, because tobacco producers and the industry will always adjust to any change. Chaloupka and Nair (2000) argue that trade globalization can significantly increase world tobacco consumption. The increase in tobacco consumption actually occurs in countries with weak economies, which affects the quality of health of their populations. The best approach to reducing tobacco consumption has been formulated in the FCTC (Framework Convention on Tobacco Control); high taxes on these products are expected to reduce tobacco consumption in general. Rweyemamu and Kimarso (2006) state in a paper that tobacco production is a source of activity that greatly affects the rural economy in Songe district, Tanzania. The liberalization of the tobacco market has not been found to be a factor that can increase farmers' incomes, because there is still inefficiency in production patterns; incentives from policy makers in farmers' production processes are therefore needed so that farmers can improve their welfare, for example in terms of taxation, funding, and infrastructure repairs.
Global trade in the international arena and the declaration on tobacco control formulated by the World Health Organization (WHO) within the Framework Convention on Tobacco Control (FCTC) have frequently come into conflict in implementation. On the one hand, in the presence of globalization, trade in any product in the world cannot be prevented or inhibited; on the other hand, specifically for tobacco, many countries treat it as a product whose trade must be limited. Thus there is often a conflict of interest. Lester (2005) discusses the conflict of interest between global trade and tobacco product restrictions within the FCTC framework very straightforwardly. On the issue of conflicts between trade and the FCTC, the reference used and expected to resolve them is the trade rules themselves, based on the structure of the treaties prepared by governments.
Trade rules have relatively strong enforcement mechanisms, while the FCTC does not. For practical purposes, this means that any dispute adjudication will be conducted in the manner and under the rules of trade obligations. Moreover, in this interpretation, the strong enforcement mechanisms for trade show that governments are expected to give priority to trade rules. However, this does not mean that the FCTC is irrelevant to trade disputes. The FCTC can be used as an element to help interpret trade agreements. For example, the FCTC may offer an explanation of the reasonableness of a measure in the context of specific provisions; if the measure is based on the FCTC, its purpose is valid. However, the FCTC cannot be a "defense" for a breach of trade rules.
Ultimately, rather than worrying about trade and tobacco conflicts, or trying to push tobacco rules to take priority over trade rules, public health groups should emphasize that trade rules do not prohibit the demand-reduction measures promoted by the FCTC, and should focus on developing tobacco regulations that do not violate trade rules. Both regimes can and must work together to achieve their own goals in a way that does not create tension. This also suggests that trade rules do not place many constraints on tobacco control.
Where there is no dispute between trade and the FCTC, governments can take steps to control tobacco that are consistent with applicable trade agreements. In general, the fragmentation of international law can be used as a reference for solutions. In some cases, governments continue to sign overlapping agreements, which may create conflicts. Interpretation of a trade agreement should be analyzed in a focused manner. On the issue of tobacco and trade agreements, there seems to be an assumption that global tobacco trade should refer to the regulations issued by the WHO, in this case as outlined in the FCTC. This is where international judicial review is needed to address the global trade rules applicable to tobacco.
Tobacco and its industrial products are to date still declared legal products, and no country in the world has yet declared tobacco and its industrial products illegal. The very strict regulations imposed by countries around the world with reference to the FCTC framework in fact give rise to illicit trade in world markets. Joossens (2012) notes that since the strict implementation of tobacco control, a great deal of tobacco and cigarette trade has moved to the black market. Major companies such as Philip Morris (PMI), BAT, and Japan Tobacco indicate that tobacco trade on the black market is triggered by regulations that tightly control tobacco and cigarette use. Even trademark counterfeiting on the black market is very widespread.
On the one hand, tobacco trade on the black or illegal market occurs because demand for tobacco is still quite high while prices have risen very steeply due to the imposition of very high taxes. The cost of producing tobacco for the black market is quite low, so this attracts parties looking for substantial profits on the black market.
Anti-tobacco activists allege that much of the illicit trade in tobacco products is actually carried out by the big companies or the industry itself, such as Philip Morris, BAT, RJ Reynolds, Japan Tobacco, and many other major industry players. Official cigarette packaging that must carry very graphic warning labels is hurting the cigarette industry; therefore, these companies look for ways to earn huge profits by producing cigarettes in plain, unlabeled packaging and marketing them on the international black market. This black market is found in almost all cigarette-consuming parts of the world, such as America, Canada, the UK, the Middle East, and many other places.
The problem of the tobacco and cigarette trade can only be resolved if countries firmly apply all the existing conventions under the FCTC framework, which in turn depends on how strongly each country depends on tobacco.
Business Strategy on Tobacco
Tobacco and its industrial products are products that attract both pros and cons. The conflict of views is influenced by the perspective and interests of the people or publics concerned. Some people are pro-tobacco, while those against it are mainly concerned with the relationship between tobacco consumption and health problems; this has prompted the world health agency to intervene in the international affairs of the tobacco business.
The history of tobacco is written in detail by a world institution called ASH (Action on Smoking and Health) in a paper released in 2015. The paper details the development of tobacco from its first discovery to the emergence of conflicts of interest over its use. In the 1st century BC, tobacco was believed by the native peoples of America to be a healing remedy. The use of tobacco as a smoke can be seen from a depiction of tobacco use in Guatemala estimated to date from 600-1000 AD. News about the benefits of tobacco as a medicine, and of tobacco that could be enjoyed as a smoke, spread rapidly to various parts of the world.
The rapid growth of tobacco consumption in the world led to an imbalance between the benefits and the consequences of excessive tobacco consumption. From the point of view of trade, tobacco also threatened other products, which triggered jealousy. Several community groups eventually conducted research on the consequences of the growing pattern of tobacco consumption around the world.
Whatever the other findings, researchers concerned primarily with human health, such as those involved in the Surgeon General's work, concluded that tobacco consumption has a detrimental effect on human health, so that tobacco consumption should be controlled. In the end, the Framework Convention on Tobacco Control (FCTC) initiated by the WHO began in May 1999. In 2003 the FCTC was adopted by 171 countries, and by 2005 it was officially in use as a reference for tobacco control worldwide.
Anti-tobacco communities continue to fight against the development of the tobacco and cigarette industries. Ericson et al. (2015), in the book Tobacco Atlas, have written about and highlighted the development of the tobacco industry and the impact of tobacco consumption on human health.
So that world tobacco consumption does not increase too quickly, and to prevent teenagers and children from consuming tobacco, a structured effort is required to control tobacco and cigarettes and the impacts that can result from tobacco consumption.
Tobacco operations are also associated with environmental damage. Tobacco control through the FCTC covers various activities as instruments: a. restrictions on tobacco product trade through regulation; b. health issues; c. pesticide input control (via the Coresta list); d. prerequisites for all tobacco business actors through the SRTP program; e. NTRM; and f. environmental issues. To make tobacco control effective, the work program should cover all aspects of tobacco exploitation in the form of regulation, from farming, industry, purchasing, and taxation to product use and waste control (illustrated in the cycle figure "The Stages of Tobacco Regulation"). In Indonesia, some people who believe that tobacco has a negative impact on health have begun to write extensively about the negative sides of tobacco, drawing largely on the structured global anti-tobacco community. One example is the work of Barber et al. (2008), published by the Demographic Institute of the University of Indonesia. The report states that applying tobacco taxes to the maximum extent allowed by law (57 percent) could prevent 1.7 million to 4 million tobacco-related deaths among smokers and provide additional state revenues of IDR 29.1 trillion to IDR 59.3 trillion. The written recommendation of Barber et al. (2008) is that the 2 percent excise allocation be directed effectively to assist those negatively affected by the decline in tobacco consumption and to implement a more comprehensive tobacco control program.
Rachmat (2010) states that the returns from tobacco management are not balanced against the impacts of tobacco consumption. The role of tobacco in the national economy can be seen from several indicators, such as its contribution to state revenues and GDP, employment, and community income. The tobacco industry broadly covers the primary raw-material sectors of tobacco leaf and cloves and the cigarette processing industry. Based on the results of an input-output analysis in 2005, the tobacco industry contributed 1.66 percent of total national GDP. The largest contribution came from the cigarette industry, at 1.56 percent, while the tobacco and clove raw-material sectors contributed only 0.036 percent and 0.067 percent, respectively. The cigarette industry is nevertheless one of the leading agricultural industries (agroindustries) in Indonesia; within agroindustry, its share reached 13.13 percent.
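As a quick consistency check on these input-output figures (the breakdown is the paper's; the arithmetic here is ours), the sector contributions add up, to rounding, to the headline number: 1.56 + 0.036 + 0.067 ≈ 1.66 percent of national GDP.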
The tobacco industry and smoking culture have long been part of the daily habits of Indonesian society. With respect to this industry, Indonesia faces a dilemma: on the one hand, it plays a role in the national economy, while on the other hand, it negatively affects public health and the environment. The role of tobacco in the national economy can be seen from several indicators, such as its contribution to state revenues, employment, and community income.
The heavy pressure on tobacco and its industrial output needs to be taken seriously by all of its business actors. The strategies adopted by the anti-tobacco community worldwide are highly structured and touch all aspects of tobacco exploitation. If tobacco is still wanted and its existence is to be preserved, the tobacco business must be defended. Savell, Gilmore and Fooks (2014) found that tobacco industry companies are largely disadvantaged by international policies referring to the FCTC framework, whose way of thinking severely inhibits the tobacco industry in developing and marketing its products. The industry's strategy for developing is systematic promotion and attempts to influence local government policy through legal channels.
Two business paradigms are prominent today: the Resource-Based View (RBV) and the Market-Based View (MBV). Tobacco operations initially relied on the RBV concept, in which a product has a distinctive feature with special appeal that cannot be replaced; this value, often called a fancy product, is what tobacco relied on. Over time, however, the internal factors, the resources that were the company's strength in the tobacco business, have been declining. Hence a newer strategy is needed to maintain tobacco as a reliable trading commodity. Given that the tobacco business today cannot survive on internal strength alone, external influences related to the market greatly affect the success of the tobacco business. At the least, tobacco can survive as a product that can be cultivated.
It is very important for companies operating in the tobacco business to understand the organization itself. Tichy (1982) argues that organizations today face situations that cannot be sustained, leading to more complex and difficult conditions. To get out of this kind of environment, a strategy of change in the technical system, culture, and politics of the organization is required. The tools that should be available in the management-change process include: a. the external interface; b. mission; c. strategy; d. the mission that sets the organization and the strategy process; e. tasks; f. setting up the network; g. human resources; and h. the emergency network. Tobacco companies need strategies and tactics to survive in the business world. Masanell and Ricart (2009) distinguish between strategy, business model, and tactics, and show that the three ideas can be integrated. Understanding strategy as the larger idea for achieving the goals of a business is important; realizing it requires various business models, and within every business model there are various tactics for running it. So there is a clear boundary between the strategy, the business model, and the tactics that move an organization. Sanchez and Mahoney (1996) suggest that the products of a business must be maintained so that they remain favored by customers. Product design becomes important in maintaining the strength of the organization in the business. Design and development are internal factors of the company or organization that need to be maintained; the marketing of tobacco industry products is also greatly concerned with design and development, and changes in the design and development of tobacco products are made very quickly in line with market demand. Kotler (1984) defines design as a powerful strategic tool for companies to obtain and sustain a lasting comparative advantage. Design can be used to improve the product, environment, communication, and corporate identity.
In the era of global competition, effort is needed to maintain the comparative advantage of a business. Large industries characterized by intense service levels and competitive pricing make it increasingly difficult for competitors to exploit the benefits and cost advantages of an industry involved in global competition.
Every company that runs a business is always trying to maintain it, and business strategies are undertaken to keep firms in global trade. Teece (2010) states that all businesses, either explicitly or implicitly, use certain models in their management. The essence of such modeling should be to analyze customer needs and ability to pay, to define well how the company should respond and deliver value to its customers, to persuade customers to pay for that value, and to convert it into profit for the company through precise design and good operation of the various linking elements of the chain. Fereira and Rezende (2007) state that the internal factors for corporate sustainability depend on the role of the manager. In the tobacco business, the manager (leader) of an organization must be able to create good relationships with colleagues outside the organization. Matzler et al. (2013), in a case study of Nespresso coffee, suggest that innovative business models play an important role in sustaining a company. In the Nespresso case, the CEO had a good strategy for maintaining the sustainability of the business and its coffee positioning through business-model innovation.
Their business-model innovation succeeded in aligning the logic of their products and services so as to add value in sales and marketing, thus increasing the company's revenue. The secret lies in the coherence and uniqueness of the product, which makes it difficult to imitate. The basic idea of Nespresso coffee is the customization of products to suit the tastes of customers. The business-model innovation consists of five components: a. innovative, unique positioning; b. a consistent product and service logic; c. an appropriate value-creation architecture; d. an effective sales and marketing logic; and e. a profit formula that works. Slater, Olson and Sørensen (2012) define the customer as the focus to be managed by the company or organization. This means that what the customer needs should be explored in detail so that the company can satisfy the customer. Knowledge of the market, which also starts from the customer, should be described clearly and in detail. Knowledge about the market will contribute greatly to successful product development. This view is more oriented to the MBV paradigm. Tobacco exploitation must also focus on the customer, meaning that what is desired by the customer must be met by the organization.
Whittington (2012) argues that sustainable leadership means managing the company so that it can better survive in a global situation, which in this case involves community participation through corporate social responsibility. Managers are often insufficiently concerned about this, so leaders and decision makers must be able to implement policies that take on social responsibility as part of a strategy for a "sustainable revolution".
The structured strategies of the anti-tobacco community must be balanced by adjustment strategies on the part of tobacco business actors. This adjustment must integrate the internal and external factors that may affect the business and all aspects related to the tobacco business.
CONCLUSION
Historically, tobacco is a product that has also been used as a treatment for several kinds of human diseases. In terms of trade and the profits this product has achieved, tobacco has developed into a very attractive commodity, which has in turn given rise to more complex problems. The emergence of restrictions cannot be separated from competition in the tobacco product trade.
The health reasons used to justify tobacco control are actually counterproductive, given that tobacco can also be used as a medicinal ingredient, although there is no doubt that tobacco consumption in certain cases also has a negative impact on those who consume it.
The regulation of free trade (globalization), which has been agreed by all countries in the world, also has consequences for tobacco commodities. Globalization does not recognize discrimination among traded products. The trade and consumption restrictions on tobacco products set forth in the FCTC framework are evidence that tobacco commodities are discriminated against in global trade, since no country in the world expressly states that tobacco is a commodity that may not be traded.
The current strategy of a tobacco enterprise cannot rely on maintaining only internal factors related to the product and the organization's resources.
External factors that determine business success must also be continuously developed and maintained. In the management paradigm there must therefore be a synthesis between the RBV and MBV paradigms in running the tobacco business, so that this business can survive in today's challenging global situation.
Tobacco exploitation in Indonesia should be managed wisely, because it involves the interests of the millions of people engaged in it and has a fairly strong economic dimension. State revenues from excise duties and large foreign-exchange earnings should be considered when formulating regulations concerning the sustainability of tobacco exploitation. On the other hand, an understanding of the health impacts that may be caused by tobacco consumption patterns should also be a concern throughout the tobacco production process and the tobacco industry, so that those impacts are minimized.
Strong currents from anti-tobacco communities that want restrictions on the circulation and consumption of tobacco should be responded to wisely by tobacco businesses and by the governments of tobacco-producing countries, including Indonesia. Tobacco enterprises and their products should be directed toward interests that are more favorable to the business actors without harming parties not involved in the tobacco business. The impact of tobacco must be reduced as much as possible so that this sector can remain a commodity with good competitiveness in trade. | 2019-09-19T09:04:15.081Z | 2019-08-16T00:00:00.000 | {
"year": 2019,
"sha1": "98d747077424348bbfba3efe3daa94042db243a3",
"oa_license": "CCBY",
"oa_url": "http://jurnal.unmuhjember.ac.id/index.php/POLITICO/article/download/2315/1856",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3faac3e5fc901a5caa9b4ddb469a759e6d3ed293",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
268875245 | pes2o/s2orc | v3-fos-license | Coexistent sarcoidosis mimics metastasis in a patient with early-stage non-small cell lung cancer: A case report
Background and aim: Sarcoidosis is a granulomatous disease. Malignant tumors are rarely accompanied by granulomatous reactions that mimic metastasis. Methods: We present the case of a patient with possible advanced lung cancer with metastases to the mediastinal lymph nodes and bilateral ilia. Results: Ilium biopsy revealed the presence of a sarcoid-like reaction. Bronchoscopy and endobronchial ultrasound revealed an adenocarcinoma in the right upper lung lobe, with negative mediastinal lymph nodes. The correct staging of the lung cancer was achieved through pathological examination of the surgically removed lung tissue. Two years later, the lung cancer metastasized, and the patient underwent systemic treatment. Conclusions: A coexistent sarcoid-like reaction may mimic metastatic lung cancer. A multidisciplinary approach and sequential diagnostic biopsies can prevent unnecessary surgery or inadequate treatment by distinguishing between coexistent sarcoidosis and metastatic lung cancer.
Introduction
Sarcoidosis is a common multisystem inflammatory disease of unknown etiology that involves the formation of granulomas. The lungs are the organ most frequently affected. Sarcoidosis that mimics widespread metastatic cancer is an exceptionally rare clinical manifestation involving the bone and has been documented in only a few instances (1). Here, we present a case of bone involvement with a sarcoid-like reaction in early-stage non-small cell lung cancer.
Case presentation
In August 2018, a 57-year-old woman visited our clinic with a lingering cough that had persisted for 4 months. The patient was a nonsmoker. She had undergone surgery for thyroid cancer in 2010 and denied any history of pulmonary disease. Laboratory tests indicated no inflammation (C-reactive protein, 1.83 mg/L; erythrocyte sedimentation rate, 13 mm/h). The carcinoembryonic antigen (CEA) level was 1.5 ng/ml. Serum angiotensin-converting enzyme (ACE) levels were within the reference range (25 U/L); serum creatinine, alkaline phosphatase, and calcium levels were normal. A chest computed tomography (CT) scan performed immediately revealed a solitary cavitary lesion in the upper right lung, along with diffuse small perilymphatic nodules in both lungs and enlarged lymph nodes in the hilar and mediastinal regions. Pulmonary function test results were normal. In October 2018, fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT revealed slight uptake of 18F-FDG in the upper right lesions and several lung nodules on both sides, with strong FDG uptake in the lymph nodes of the mediastinum and iliac regions (Figure 1). Magnetic resonance imaging revealed multiple bilateral osteolytic lesions that appeared to be bone metastases (Figure 2). Initially, the patient was diagnosed with advanced lung cancer and bone metastases (M1). However, biopsy of the ilium revealed epithelioid granulomas with small necrotic areas (Figure 3). Periodic acid-Schiff, acid-fast, and Grocott's methenamine silver staining yielded negative results. Therefore, the condition was diagnosed as possible sarcoidosis.
Bronchoscopy revealed multiple mucosal nodules in the right middle bronchus (Figure 4). The patient underwent radial endobronchial ultrasonography with a guide sheath and endobronchial ultrasound needle aspiration (EBUS). Pathological examination revealed adenocarcinoma in the upper right lung, non-necrotizing granulomas in the mucosa of the middle right bronchus, and no cancerous cells in the lymph nodes at stations 4R and 7. No evidence of bacteria, fungi, or tuberculosis was found in the bronchoalveolar lavage fluid. Bronchoalveolar lavage showed a cluster of differentiation (CD)4/CD8 ratio of 0.6 with 8% lymphocytes. Acid-fast bacillus staining and tuberculosis culture of the bronchoalveolar lavage fluid yielded negative results. The patient underwent resection of the right upper lobe of the lung and a part of the right middle lobe. Pathological analysis of the surgical specimen revealed adenocarcinoma with surrounding granulomas in the right upper lung lobe (Figure 5 A, B, and C), along with granulomatous nodules in the right middle lung lobe. The postoperative pathological stage was pT1N0M0, with an epidermal growth factor receptor (EGFR) mutation and a coexistent sarcoid-like reaction. Cardiac examination (electrocardiogram and cardiac ultrasound) revealed no involvement of the heart, whereas ophthalmological examination revealed no involvement of the eye.
In July 2020, chest CT displayed diffuse small perilymphatic nodules, and the hilar and mediastinal lymph nodes were larger than before (Figure S1). The ACE level was 46 U/L, and the CEA level was 1.78 ng/ml. As an experimental treatment, methylprednisolone was administered initially at a dose of 40 mg/day, which was subsequently reduced to 5 mg/day for maintenance therapy over 6 months. In March 2021, chest CT revealed small diffuse perilymphatic nodules, hilar and mediastinal lymphadenopathy, and an enlarging solid nodule in the dorsal segment of the right lower lobe of the lung (Figure S2). The CEA level had increased to 19.34 ng/ml. The patient underwent icotinib therapy. After 3 weeks, she began experiencing fever with accompanying signs of respiratory distress. Pathological tissue obtained at bronchoscopy revealed a granulomatous lesion but no tumor. Numerous pleural, pericardial, and diaphragmatic metastases were identified during thoracoscopy. Pleural biopsy revealed adenocarcinoma with epithelioid granulomas. She was administered osimertinib, and steroids were put on hold. A response evaluation conducted in August 2022 demonstrated that the lung cancer and sarcoidosis remained stable (Figure S3). In January 2023, metastases were observed in the thoracic spine, lumbar spine, and ribs. The patient was diagnosed with metastatic lung cancer and received three cycles of carboplatin, pemetrexed, and bevacizumab. The latest evaluation, in May 2023 (Figure S4), showed a partial response.
Discussion and conclusions
In this case, the patient was initially misdiagnosed with advanced lung cancer based on PET/CT findings. By contrast, the bone biopsy and EBUS results suggested an early-stage malignancy with a sarcoid-like reaction involving the ilium, which was eventually confirmed by pathological analysis of the surgically resected specimen. Sarcoidosis is a multi-organ disease that manifests as non-caseating granulomas of unknown etiology and can affect all organs to varying degrees. However, bone sarcoidosis is rare (3.4% of the studied population) (1), and such lesions are easily misdiagnosed as bone metastases. Bone biopsy is important for a precise diagnosis, considering the difficulty of distinguishing bone metastases from sarcoidosis using PET/CT and magnetic resonance imaging (2). During initial cancer diagnosis or suspected recurrence, histological evidence of non-caseating granuloma or sarcoidosis may be encouraging for patients, as it may lead to tumors being diagnosed at an earlier stage. The patient was initially diagnosed with adenocarcinoma accompanied by sarcoidosis; however, we observed enlarged pulmonary nodules and lymph nodes at follow-up, which improved after experimental hormone therapy, and multiple metastases were later observed. Researchers have reported on sarcoidosis or sarcoidosis-like reactions to malignancy (3). This clinical case involved a systemic sarcoid-like reaction associated with cancer. The granulomatous reaction was presumably initiated by cancer cells and perpetuated by persistent residual cancer cells, which led to metastatic disease. An increased granulomatous burden from a previous tumor warrants considering the possibility of recurrence. Experimental corticosteroid treatment may temporarily improve the condition and mask tumor recurrence. Therefore, a differential diagnosis, especially of tumor or tumor recurrence, should be ruled out before diagnosing sarcoidosis.
It is unclear whether sarcoidosis predisposes patients to malignancy or arises as an immune response to malignancy. Although the underlying mechanism is unclear, it may be related to long-term inflammatory reactions (4). Patients with sarcoidosis have a significantly increased risk of malignant tumors (5-7). Malignant tumors have been identified in patients previously diagnosed with sarcoidosis, whether because the diagnosis was made on the basis of a granulomatous response to a malignant tumor, because the treatment of sarcoidosis was associated with malignant tumors, or because patients with sarcoidosis are inherently at risk of malignant tumors. Hence, individuals diagnosed with sarcoidosis must undergo screening for malignant tumors as a component of their initial assessment and subsequent medical monitoring (5-7).
However, there is no distinct association between lung cancer and sarcoidosis. Sarcoidosis can be detected in lung cancer lesions and in the hilar and mediastinal lymph nodes (8-10). Yamasawa et al. proposed that sarcoidosis and lung cancer coexist by chance; that lung cancer is associated with the abnormal cell-mediated immunity induced by sarcoidosis; that sarcoidosis leads to the development of fibrous tissue, which is a source of lung cancer; and that its onset is caused by immunohistochemical reactions responsive to malignant tumors (10). In this case, lung adenocarcinoma and sarcoidosis were observed simultaneously. Moreover, we observed granulomas in the pleural metastasis after tumor spread; hence, our patient could be cited as the fourth reported instance. She presented with bone sarcoidosis in addition to sarcoid reactions associated with the lung cancer (8-10). Moreover, she harbored an EGFR mutation and responded to osimertinib treatment for 17 months. Kachalia et al. reported a case of lung adenocarcinoma with an EGFR mutation and sarcoidosis. That patient was initially diagnosed with sarcoidosis and responded favorably to steroid therapy. Six months later, the lung adenocarcinoma, harboring an EGFR mutation, had spread to the pleura, pericardium, and diaphragm. After 6 months of erlotinib treatment, the disease progressed and palliative care was administered (11). Despite limited evidence of the association between sarcoidosis and lung cancer, clinicians should exclude metastatic malignant tumors in patients with clinical and imaging manifestations consistent with sarcoidosis.
CD4+ T cells are the predominant cell type in sarcoid granulomas and are central to granuloma development, maintenance, and prognosis (12). Approximately two-thirds of patients with pulmonary sarcoidosis experience spontaneous remission. Patients in spontaneous clinical remission from sarcoidosis have fewer programmed cell death protein 1 (PD-1)+ CD4+ T cells, with normal proliferative capacity of the T cells. Patients with clinical progression have five to six times more PD-1+ CD4+ T cells than healthy controls, with reduced proliferative capacity (13,14). During PD-1 inhibition, the proliferative capacity of CD4+ T cells returns to reference levels (13). Downregulation of PD-1 expression in CD4+ T cells is associated with sarcoidosis regression, and PD-1 inhibitors are regarded as therapeutic targets for sarcoidosis. However, PD-1 inhibitors can trigger drug-induced sarcoidosis-like reactions (DISR). DISR is a systemic granulomatous reaction that is difficult to distinguish from sarcoidosis (15). A delicate balance exists between immunosuppression and recovery of T cell function in sarcoidosis involving the PD-1 pathway, warranting further research.
This case emphasizes the need for a multidisciplinary team and sequential diagnostic biopsies to arrive at the correct diagnosis and avoid unnecessary surgery or insufficient treatment. Solid or hematologic malignancies are frequently associated with sarcoidosis, occurring prior to, during, or following disease onset. There are many circumstances in which diagnosis can be challenging and will require a careful diagnostic evaluation.
Figure 4. Bronchoscopy examination: the bronchoscope depicts multiple mucosal nodules in the right middle bronchus (A. White-light imaging; B. Narrow-band imaging).
Figure S2. Chest computed tomography depicts shrinking diffuse small perilymphatic nodules, shrinking hilar and mediastinal lymph nodes, and an enlarging solid nodule in the dorsal segment of the right lower lobe of the lung.
Figure S3. A response evaluation conducted in August 2022 shows stable lung cancer and sarcoidosis.
Figure S4. The most recent chest computed tomography in May 2023. | 2024-04-04T06:18:57.616Z | 2024-03-26T00:00:00.000 | {
"year": 2024,
"sha1": "5ad9b743e22a8ed3247d591ae685c1fd0ce68532",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e979d3702907b08f077f2f0ffb8503258f860adb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230768152 | pes2o/s2orc | v3-fos-license | The Effect of an 8-week NASM Corrective Exercise Program on Upper Crossed Syndrome
* Corresponding Author: Mahsa Abdolahzade, MA. Address: Department of Sports Injuries and Corrective Exercise, Shafagh Institute of Higher Education, Tonekabon, Iran. Tel: +98 (911) 8837761 E-mail: mahsa.abdolahzadeh@gmail.com 1. Department of Sports Injuries and Corrective Exercise, Shafagh Institute of Higher Education, Tonekabon, Iran. 2. Department of Sports Injuries and Corrective Exercise, Faculty of Physical Education and Sport Sciences, University of Guilan, Rasht, Iran. *Mahsa Abdolahzade1 , Hassan Daneshmandi2
Introduction
Muscle imbalance can affect the body's natural alignment and cause a variety of postural abnormalities [1]. Improper posture and long-term work tasks can lead to musculoskeletal disorders [2]. Early and timely identification of these postural defects and their treatment can reduce complications and help save time and money [3]. Muscle imbalance can have serious and known consequences in the body [1]. Upper Crossed Syndrome (UCS) occurs in the neck and shoulder girdle [4]. This syndrome is a form of musculoskeletal involvement in which the mainly tonic muscles of the upper posterior neck and anterior chest (e.g. pectoralis major, upper trapezius, levator scapulae, sternocleidomastoid) become shortened, while the mainly phasic deep anterior neck muscles and posterior shoulder girdle muscles (e.g. rhomboid major, middle and lower trapezius, serratus anterior, and deep neck flexors) are inhibited and weakened. Postural changes seen in UCS include forward head, rounded shoulders, and thoracic kyphosis [5].
There have been several reports of osteoarthritis of the temporomandibular joint and of mechanical head pain due to forward head posture [4]. There are also reports of radicular pain in the arms and hands due to osteoarthritis of the neck resulting from UCS [2]. Such adverse secondary changes resulting from this syndrome are also present in the glenohumeral joint of people with thoracic kyphosis [4,11].
Methods
In this study, 30 female students [15] with forward head posture > 46 degrees [16], forward shoulder posture > 52 degrees [16], and thoracic kyphosis > 42 degrees [17] were selected using purposive sampling and randomly divided into control and intervention groups. Participants in the intervention group received 8 weeks of corrective exercise, 3 sessions per week, each lasting 30-70 min. National Academy of Sports Medicine (NASM) principles were used to develop the training program, which follows defined protocols in designing and implementing corrective exercises and consists of four stages: inhibition, lengthening (stretching), activation, and integration [9].
The forward head and forward shoulder angles were measured using lateral photography [19], and the kyphosis angle was measured using a flexible ruler (r=0.093) [20], before and after the intervention. The exercises were selected by consulting specialists and movement-therapy resources and were then finalized and implemented through a pilot study on some of the study samples [21]. The Shapiro-Wilk test was used to check the normal distribution of the data. Paired t-tests were used to analyze the data from the pre-test and post-test phases, and ANOVA was used to compare changes between groups. Table 1 presents the characteristics of the participants and the test results. Because of prolonged, incorrect sitting positions and repetitive use of the upper limbs among students, the muscle balance of the upper extremity may be disturbed. Since muscle imbalances in the upper quarter of the body increase the risk of UCS, and UCS is associated with the three postures of forward head, rounded shoulder, and thoracic kyphosis, the exercises in this study addressed these three abnormalities comprehensively and simultaneously. People with UCS need to pay special attention to muscle balance while sitting, in addition to correcting the posture of the head, neck, and back. The results showed a positive impact of exercise based on NASM principles on muscle balance and on correcting forward head, rounded shoulder, and thoracic kyphosis postures.
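As an illustration of the measurement and analysis steps described above, the sketch below is a minimal, hypothetical Python example: the angle values are invented, the kyphosis conversion uses the commonly cited flexible-ruler formula theta = 4·arctan(2h/l) (which the paper does not spell out), and only the Shapiro-Wilk and paired t-test steps are shown.

```python
# Hypothetical sketch of the pre/post analysis described above; the numbers are
# invented and the kyphosis conversion formula is an assumption, not taken from
# the paper.
import numpy as np
from scipy import stats

def kyphosis_angle(curve_length_cm, curve_depth_cm):
    """Flexible-ruler kyphosis angle (degrees) via theta = 4 * arctan(2h / l)."""
    return np.degrees(4.0 * np.arctan(2.0 * curve_depth_cm / curve_length_cm))

# Hypothetical pre/post forward-head angles (degrees) for the intervention group.
pre = np.array([48.2, 50.1, 47.5, 49.0, 51.3, 46.8, 48.9, 50.6])
post = np.array([44.1, 46.0, 44.8, 45.2, 47.9, 43.5, 45.0, 46.7])

# Normality of the paired differences (Shapiro-Wilk), as reported in the Methods.
w_stat, p_norm = stats.shapiro(post - pre)

# Within-group pre/post comparison (paired t-test).
t_stat, p_paired = stats.ttest_rel(pre, post)

# A between-group comparison of change scores would use one-way ANOVA, e.g.
# stats.f_oneway(post - pre, control_post - control_pre).
print(f"Shapiro-Wilk p = {p_norm:.3f}; paired t = {t_stat:.2f}, p = {p_paired:.4f}")
```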
Discussion
UCS is commonly seen in people who sit for long periods of time or who apply frequent overload patterns to the upper limbs [8,9]. Corrective exercises have been reported to be one of the most effective ways to restore performance [23]. Eight weeks of corrective exercises regulates muscle activity and reduces musculoskeletal disorders in the upper body [24]. In this study, the four-step NASM-based corrective protocol focused on all three abnormalities caused by UCS at the same time, and is consistent with the Janda approach and Bruegger's exercise [8]. Researchers have shown that strength training affects the length of the muscle-tendon unit, displacing different parts of the skeleton and stabilizing the ligaments, whereas stretching exercises act as a coordinator of agonist and antagonist muscles. Such exercises thus increase the length of the muscles on the concave side and the muscle power and strength on the convex side, thereby reducing the rate of postural abnormalities [37]. We attempted to apply the exercise program mostly in a closed kinetic chain and in weight-bearing positions to simulate real-life activities [36].
Conclusion
In general, it seems that the use of corrective exercises can lead to improvements in flexibility and strength following the intervention, and thereby to correction of forward head, rounded shoulder, and thoracic kyphosis postures.
Funding
This study was extracted from the master thesis of first author approved by Department of Sport Injuries and corrective exercises, Shafagh Institute of Higher Education, Tonekabon, Iran.
Conflicts of interest
The authors declared no conflict of interest.
"year": 2020,
"sha1": "ae87af1893ee41481d75b227800bccfc08369bda",
"oa_license": "CCBY",
"oa_url": "http://biomechanics.iauh.ac.ir/files/site1/user_files_a58f2a/mahsaabdolahzadeh-A-10-260-1-289ed74.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "25cd5575319400b7bdc3e26cc2f1cf5397946a32",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16380353 | pes2o/s2orc | v3-fos-license | Optical Spectroscopy of the unusual galaxy J2310-43
We present and discuss new spectroscopic observations of the unusual galaxy J2310-43. The observations cover a wide wavelength range, from 3700 Å to 9800 Å, allowing the study of both the regions where Hα and the Ca II "contrast" are expected. No evidence for Hα in emission is found and we thus confirm the absence of emission lines in the spectrum of J2310-43, ruling out the possibility that it may host a Seyfert nucleus. The Ca II break is clearly detected and the value of the contrast (38 +/- 4%) is intermediate between that of a typical elliptical galaxy (about 50%) and that of a BL Lac object (<25%). This result imposes limits on the intensity of a possible non-stellar continuum and, in the light of the radio and X-ray loudness of the source, draws further attention to the problem of the recognition of a BL Lac object. Objects like J2310-43 may be more common than previously recognized, and begin to emerge in surveys of radio-emitting X-ray sources.
Introduction
In a recent paper, Tananbaum et al. (1997) reported a detailed analysis of a ROSAT PSPC observation of J2310-43, a very peculiar and interesting galaxy first discovered as a luminous X-ray source (∼ 10^44 erg s^-1) in an Einstein IPC image (Tucker, Tananbaum & Remillard 1995). The IPC data showed that the X-ray source is spatially extended. This, combined with the fact that the galaxy is a cD in a cluster (Tucker, Tananbaum & Remillard 1995), supported the hypothesis that the X-ray emission is due to the cluster rather than to the activity of the galaxy itself. However, on the basis of the X-ray spatial and spectral analysis of the PSPC data, Tananbaum et al. (1997) suggested that only 20% of the total X-ray emission comes from the cluster, while the bulk is associated with nuclear activity of the galaxy. Optical spectroscopy of the galaxy (Tucker, Tananbaum & Remillard 1995), however, did not reveal the emission lines or the excess blue continuum expected if J2310-43 contains an active nucleus.
Another interesting possibility considered by Tananbaum et al. (1997) is that J2310-43 is related to the BL Lac phenomenon. The lack of emission lines in the optical spectrum would apparently support this view. Furthermore, the broad band energy distribution of J2310-43, i.e. the position of the object in the α_ro/α_ox plane (Tananbaum et al. 1997), falls at the edge of the region occupied by BL Lac objects (Stocke et al. 1991). On the other hand, the same authors show that the (B-V) color observed in J2310-43 is consistent with that of a normal elliptical galaxy and rather different from that of a typical BL Lac object. The resulting picture is rather intriguing, with contradictory or inconclusive evidence concerning the presence of "nuclear activity". Indeed, Tananbaum et al. (1997) leave open the question whether J2310-43 belongs to the tail of the BL Lac population or to a different class of objects, and point out similarities with the optically dull galaxies with strong nuclear X-ray emission discovered by Elvis et al. (1981).
The spectroscopic data on J2310-43, however, were limited to the 4700Å-6700Å interval thus excluding two regions critical for the understanding of the nature of the source: that of the Ca II break (expected at ∼4355Å) and of the Hα line (expected at ∼7145Å). In fact AGNs showing no Hβ and [OIII] lines but exhibiting a broad Hα line are known to exist (Stocke et al. 1991; see also Halpern, Eracleous & Forster 1997). For these reasons the possibility that J2310-43 is hosting a reddened Seyfert nucleus or a BL Lac object could not be completely ruled out.
With the aim of further studying J2310-43 and understanding its real nature we have therefore secured two optical spectra covering the wavelength range 3700Å − 9800Å.
Observation and analysis
Spectroscopy of J2310-43 was carried out with the ESO 3.6m telescope on 1996 December 10. Observations were made with EFOSC1 in longslit mode, using a 1.5 arcsec wide slit and two different grisms, b300 and r300, with wavelength coverage from 3700Å to 6800Å and from 6200Å to 9800Å, respectively. The dispersions achieved with the two grisms, scaled to the Tek512 CCD detector, were 6.3Å/pixel (b300) and 7.5Å/pixel (r300). The exposure time was of 600 sec with the grism b300 and of 300 sec with the r300.
The data were reduced using the IRAF-LONGSLIT package. The wavelength solution was obtained using a He-Ar reference spectrum while the correction for the instrument response was based on the observation of a photometric standard (LTT 377). We did not make an absolute flux calibration, thus the flux density scale of the spectra is in arbitrary units.
The calibrated spectra are presented in Figure 1.
Discussion and conclusions
The spectra presented in Figure 1 cover a wavelength range considerably larger than that of the spectrum discussed in Tucker, Tananbaum & Remillard (1995). In particular, the region of the [OII], CaII break (≈ 4355Å) and the region where Hα is expected (7145Å) are fully covered. No emission lines appear in the spectrum, which shows only the typical absorption features of a "normal" early type galaxy. From the main absorption features seen (Ca II H&K, G band, Hβ, MgI 5175Å, Na I D) we have computed a redshift of z = 0.0887 ± 0.0002, confirming the value found by Tananbaum et al. (1997).
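For readers who want to reproduce this kind of estimate, the snippet below is a minimal sketch rather than the authors' actual measurement: the "observed" line centroids are placeholders consistent with z ≈ 0.0887 (not measured values), and the rest wavelengths are standard values for the listed features.

```python
# Illustrative redshift estimate from absorption-line centroids; the "observed"
# values below are placeholders consistent with z ~ 0.0887, not measured data.
import numpy as np

rest = {"Ca II K": 3933.7, "Ca II H": 3968.5, "G band": 4304.4,
        "H beta": 4861.3, "Mg I b": 5175.4, "Na I D": 5892.9}      # Angstrom
observed = {"Ca II K": 4282.4, "Ca II H": 4320.9, "G band": 4686.0,
            "H beta": 5292.9, "Mg I b": 5634.1, "Na I D": 6415.9}  # Angstrom

z_lines = np.array([observed[k] / rest[k] - 1.0 for k in rest])
z, dz = z_lines.mean(), z_lines.std(ddof=1)
print(f"z = {z:.4f} +/- {dz:.4f}")
```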
The first result worth noting is the absence of Hα in emission. The fact that no emission lines are present from [OII] to [NII] definitively put to rest the possibility that a Seyfert nucleus is hiding in J2310-43.
Secondly, we note that a pronounced Ca II contrast is detected. We have computed its amplitude following Dressler & Shectman (1987), i.e. by estimating the average fluxes (expressed in units of frequency) between 3750Å-3950Å (f−) and between 4050Å-4250Å (f+) in the rest-frame of the source; the contrast is then defined by C = (f+ − f−)/f+. We have found, using the central part of the spectrum to minimize the stellar contribution, a Ca II contrast of 38% ± 4.0%, which is below the mean value found for a "normal" elliptical galaxy (≈ 50%, Dressler & Shectman 1987).
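A minimal sketch of this measurement is given below; it assumes a rest-frame one-dimensional spectrum is already in hand as wavelength and f_ν arrays (the variable names and the f_λ-to-f_ν conversion hint are ours, not the authors').

```python
# Ca II break "contrast" with the Dressler & Shectman (1987) windows, assuming
# wave_rest is in Angstrom and f_nu is the flux density per unit frequency.
import numpy as np

def ca_ii_contrast(wave_rest, f_nu):
    minus = (wave_rest >= 3750) & (wave_rest <= 3950)   # f- window
    plus = (wave_rest >= 4050) & (wave_rest <= 4250)    # f+ window
    f_minus, f_plus = f_nu[minus].mean(), f_nu[plus].mean()
    return (f_plus - f_minus) / f_plus

# If the spectrum is calibrated in f_lambda, convert first, e.g.
# f_nu = f_lambda * wave_obs**2 / c, and de-redshift with
# wave_rest = wave_obs / (1 + z).
```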
If one considers the "canonical" limit of 25% for the definition of a BL Lac object (Stocke et al. 1991) then J2310-43 cannot be considered a BL Lac. On the other end Marchã et al. (1996) have proposed that objects with a Ca II contrast below 40% are likely to have an extra source of continuum, besides stellar, and consequently they have to be considered as possible low-luminosity BL Lac candidates.
The observed contrast can be used to set limits on the presence of such a non-stellar continuum, at least in the wavelength range 3700Å − 4300Å (object rest frame). To this end, we have considered the spectrum of a "normal" galaxy, showing a Ca II contrast of about 60%. Then, we have "added" a non-stellar continuum to the spectrum, in the form of a power-law (f_ν ∝ ν^−α) with a spectral index ranging from 0 to 2, and we have computed the Ca II contrast as a function of the fraction of non-stellar over stellar continuum. Our results show that the Ca II contrast is about 40% if the non-stellar contribution is approximately equal to the stellar continuum (integrated between 3750Å and 4250Å). Values of the contrast of ≤ 25% (the limit used to define a BL Lac object) are obtained when the non-stellar contribution is about 3 times or more higher than the stellar continuum. Thus, in J2310-43, which has a Ca II "contrast" of ∼ 38%, the non-thermal component can still be present but at an intensity level comparable to or lower than the fraction of the stellar continuum falling in the extraction region within the slit aperture. We have also extracted the spectrum of J2310-43 considering only the outer region of the galaxy, thus minimizing the contribution of the nucleus, and we have found that the Ca II "contrast" increases to 47%±5%. We consider this as further evidence that J2310-43 harbors in its nucleus a weak source of non-thermal continuum that can be detected only if the stellar contribution falling in the aperture is kept to a minimum. These results are consistent with the observed color of J2310-43 (determined for the whole galaxy), which indicates a negligible non-stellar contribution in the optical band, as discussed by Tananbaum et al. (1997).
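The dilution experiment just described can be sketched as follows. This is our reconstruction of the procedure, not the authors' code: in particular, normalising the power law to a given non-stellar/stellar flux ratio inside the 3750-4250 Å window is an assumption, and the function reuses ca_ii_contrast defined above with any elliptical-galaxy template supplied as galaxy_f_nu.

```python
# Reconstructed dilution experiment: add a power law (f_nu ~ nu**-alpha) to an
# elliptical template with an intrinsic ~60% break and see how the measured
# contrast drops as the non-stellar fraction grows.
import numpy as np

C_AA_PER_S = 2.998e18  # speed of light in Angstrom / s

def diluted_contrast(wave_rest, galaxy_f_nu, nonstellar_fraction, alpha=1.0):
    nu = C_AA_PER_S / wave_rest
    powerlaw = nu ** (-alpha)
    # Scale the power law so that, summed over 3750-4250 A, it carries
    # `nonstellar_fraction` times the stellar flux in the same window.
    window = (wave_rest >= 3750) & (wave_rest <= 4250)
    powerlaw *= nonstellar_fraction * galaxy_f_nu[window].sum() / powerlaw[window].sum()
    return ca_ii_contrast(wave_rest, galaxy_f_nu + powerlaw)

# Per the text above, fractions of roughly 0, 1, and 3 should return contrasts
# near 60%, 40%, and <= 25%, respectively.
```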
In conclusion, the spectroscopic observations of the galaxy J2310-43 presented here support the interpretation that this object represents the faint tail of the BL Lac population, in which the extra source of continuum is present but does not contribute significantly to the optical spectrum.
Further observations (polarization, radio spectral index, high resolution X-ray spectroscopy, etc.) are obviously needed to characterize the presence of such non-thermal continuum emission in J2310-43.
It is worth noting that a similar object (E0336-248) has been recently discovered by Halpern et al. (1997). Also in this case, the computed Ca II "contrast" (33%) does not meet the nominal criterion of ≤ 25% for classification as a BL Lac object. Nevertheless, these authors have produced convincing evidence for the presence of a non-stellar continuum in the spectrum of E0336-248 and, consequently, for its classification as a weak BL Lac object. Clearly, a re-assessment of the Ca II "contrast" criterion for the definition of a BL Lac is needed. Tananbaum et al. (1997) ask "how common are sources such as J2310-43?" We have reason to believe that they may be more common than currently recognized. We have recently initiated a survey of Radio Emitting X-ray sources (the REX Survey, Caccianiga et al. 1997a, 1997b) with the aim of selecting a new large sample of BL Lac objects and radio loud quasars. To enter the sample, a source has to be detected in a pointed ROSAT PSPC observation and in the VLA NVSS survey, above well defined flux limits and thresholds. During the optical identification program of the REX sources we have already found six objects that are similar to J2310-43. The properties of these objects will be presented and discussed in detail elsewhere; here we recall that, like J2310-43, they have X-ray luminosity in the range 10^43 − 10^45 erg s^-1 (0.5 - 2.0 keV) and radio luminosity in the range 3 × 10^30 − 3 × 10^31 erg s^-1 Hz^-1 (1.4 GHz). They are all radio loud (α_ro > 0.35) and they all have a Ca II "contrast" between 25% and 40% and no emission lines in their spectrum. They do not seem to lie preferably in a cluster environment. We note that these six objects have been found out of ∼ 100 new spectroscopic identifications. It is unfortunate, however, that, given the very low identification rate so far obtained for the REX survey, we cannot, at present, make an estimate of the space density of these objects. | 2014-10-01T00:00:00.000Z | 1997-09-02T00:00:00.000 | {
"year": 1997,
"sha1": "a7dead46a12d9a2ca9cdbc604aa97472ce9b2225",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "16e3fad7f58177f0c7b26db748256a7a42724e59",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238244993 | pes2o/s2orc | v3-fos-license | Fine-Grained Large-Scale Vulnerable Communities Mapping via Satellite Imagery and Population Census Using Deep Learning
One of the challenges in the fight against poverty is the precise localization and assessment of vulnerable communities’ sprawl. The characterization of vulnerability is traditionally accomplished using nationwide census exercises, a burdensome process that requires field visits by trained personnel. Unfortunately, most countrywide census exercises are conducted only sporadically, making it difficult to track the short-term effect of policies to reduce poverty. This paper introduces a definition of vulnerability following UN-Habitat criteria, assesses different CNN machine learning architectures, and establishes a mapping between satellite images and survey data. Starting with the information corresponding to the 2,178,508 residential blocks recorded in the 2010 Mexican census and multispectral Landsat-7 images, multiple CNN architectures are explored. The best performance is obtained with EfficientNet-B3, achieving an area under the ROC and Precision-Recall curves of 0.9421 and 0.9457, respectively. This article shows that publicly available information, in the form of census data and satellite images, along with standard CNN architectures, may be employed as a stepping stone for the countrywide characterization of vulnerability at the residential block level.
Introduction
Historical statistics indicate that the poverty rate is steadily receding worldwide [1]. Take, for instance, extreme global poverty. In 1990, 1895 million people had an income of less than USD 1.90 at constant 2011 purchasing power parity prices (PPP). In 2015, this value was reduced to 736 million people [2], i.e., an estimated 122,100 individuals abandoned extreme poverty around the world each day during that period. Although the poverty rate is receding, global income inequality may be on the rise in some countries [3]. For instance, in the U.S., the top 1% holds 40% of the total net wealth, as opposed to 25% in the 1980s [4]. Furthermore, the poverty rate is unevenly distributed worldwide, with Sub-Saharan Africa affected at a much larger scale than the world in general. For example, in 2016, 95.5% of the U.S. population received as income USD 10 or more daily, versus 31.3% of Mexican people. This income level translates to 16.36 million individuals in the U.S. and 88.85 million in Mexico living below the international poverty line of USD 10 daily [5]. Poverty means families going hungry, children with little or no access to education, reduced services (e.g., electrical power, drinking water), and poor health [6]. Communities of people living in poverty are vulnerable to physical factors and social exploitation. Building on the principle of Leave No One Behind, the U.N. adopted the 2030 Agenda for Sustainable Development, which included eliminating extreme poverty as its primary goal.
• A nationwide assessment of settlements' vulnerability for Mexico is conducted at the residential block level. • An alternative vulnerability indicator is developed using the UN-Habitat factors related to settlements [27]. • Using data composed of hundreds of thousands of records, different convolutional neural network (CNN) architectures are assessed in the task of mapping satellite images to the vulnerability index. • The computer code for this project is made available to the research community. This should permit the evaluation of this work and serve as a stepping stone for further progress in the field.
The rest of the paper is organized as follows. The next section describes an approach to measuring vulnerability using the UN-Habitat characteristics and presents a strategy to assess vulnerability using satellite images and a CNN. Section 3 describes the results of testing the approach described in the previous section. In particular, it is shown that CNNs are useful for extracting meaningful features, and the performance of these architectures is assessed. Section 4 presents a discussion on the relevance of the results in the context of the related work. The paper concludes with a summary of the findings and provides recommendations for future research.
Figure 1. Detecting vulnerable communities via satellite imagery and population census. Our approach collects multispectral satellite images corresponding to the census information of residential blocks in Mexico. After training and assessing the performance of diverse convolutional neural networks, the results demonstrate that it is possible to establish a robust mapping between satellite images and vulnerability. This outcome should permit an accurate and up-to-date establishment of the spatial socioeconomic distribution of vulnerable communities in the country.
Materials and Methods
In this approach, learning involves the construction of an automatic inference mechanism mapping satellite images to reference values representing a vulnerability index. This section details how we address these issues.
Characterizing Vulnerability
Vulnerability is an elusive term, often bringing to mind crowded informal settlements without access to essential services. UN-Habitat suggests that an operational definition of a slum should include [27]: (a) inadequate access to safe water, (b) inadequate access to sanitation and other infrastructure, (c) poor structural quality of housing, (d) overcrowding, and (e) insecure residential status. Recently, Roy et al. [22] proposed the Slum Severity Index (SSI) to blend these aspects of vulnerability. Agglomerating per residential block, they defined the SSI employing the average number of persons per room (x_o), the proportion of houses without sewage (x_s) and toilets (x_t), the proportion of dwellings with a dirt floor and temporary structures (x_f), and the proportion of homes lacking piped and public water (x_w). They then obtained the SSI from the projection of the centered reference values x − µ = (x_o, x_s, x_t, x_f, x_w)^T − µ on the axis of maximum variability, i.e., the SSI corresponds to the projection of the reference values on the first principal component of the centered observation matrix.
PCA inherits the sensitivity of other optimization approaches in which the underlying assumption is that the residuals are normally distributed, and it is therefore affected by the presence of outliers. Another issue with PCA is that each new axis represents a certain amount of the information available, assuming a representation where the first component points in the direction of maximum variability and subsequent ones point in directions orthogonal to the previous ones. Under this interpretation, the singular values provide a proxy for the information expressed by the first l components. PCA works best when there is a linear relationship between the variables. However, our dataset (see Figure 2) may not hold up that assumption. Furthermore, reference values in this dataset, such as the occupancy per bedroom, may have a long tail, which in practice makes it hard for PCA techniques to provide consistent results, as PCA implicitly treats the differences between the dataset and its low-rank decomposition as Gaussian.
Avoiding the sensitivity of PCA and considering the distribution in this dataset (see Figure 2a-e), an alternative way to express vulnerability is developed as follows. First, in this dataset, the reference values range between zero and one, except for the occupancy x_o, which takes values larger than or equal to zero without a pre-defined upper bound. In this case, a possible solution is to threshold and normalize the occupancy distribution with a constant. The vectors x̃_i^T = (x_s, x_t, x_w, x_o, x_f)/√5 describe the content of the dataset for data sample i, and the weights vector w^T = (w_s, w_t, w_w, w_o, w_f) describes the relative importance of the reference values. Vulnerability is measured using population responses related to the UN-Habitat criteria [27] obtained from a census using the following procedure. First, from an extensive dataset collected from a nationwide census, the vectors x̃_i for i ∈ {1, . . . , n}, representing the reference values for the predictors and corresponding to the n data samples, are considered. The proposed vulnerability index (see Figure 2f) corresponds to the expression
v_i = w^T x̃_i / ||w||, (1)
where ||w|| represents the norm of w, the weights associated with each of the reference values. The reference classes can be obtained using
C(v_i) = vulnerable if v_i ≥ τ_v, and non-vulnerable otherwise, (2)
where τ_v expresses the threshold above which the residential block is considered vulnerable; assuming that the weights are equal, a block in which at least one of the reference values is one satisfies v_i ≥ 0.2, so τ_v is set to 0.2.
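As a concrete reading of Equations (1) and (2) as reconstructed above, the following Python sketch computes the index for a toy residential block under the uniform-weights assumption used in the paper; the variable names and example values are illustrative only.

import numpy as np

# Reference values for one residential block (all in [0, 1] after the occupancy
# x_o has been thresholded and normalized): proportion lacking sewage, toilets,
# piped water, the overcrowding indicator, and the dirt-floor proportion.
x = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # e.g., every dwelling lacks a toilet
x_tilde = x / np.sqrt(5.0)                 # normalization used in Eq. (1)

w = np.ones(5)                             # agnostic, uniform weights
v = w @ x_tilde / np.linalg.norm(w)        # vulnerability index, Eq. (1)

tau_v = 0.2
label = "vulnerable" if v >= tau_v else "non-vulnerable"   # Eq. (2)
print(v, label)                            # 0.2 -> vulnerable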
Setup
This dataset consists of 2,178,508 records, corresponding to the residential blocks for Mexico obtained during the 2010 census [30] (see Figure 3). The census contains an aggregate of the reference values of interest collected at each house in a residential block. Following a policy to protect citizens' privacy, the public census only contains residential blocks with more than five houses. The resulting distribution of values is nonetheless peculiar (see Figure 2). In about 60.23%, 6.75%, 59.81%, and 60.49% of the blocks every house has (i.e., the reference value is zero), and in 0.43%, 12.16%, 0.88%, and 0.13% every house lacks (the reference value is one), sewage, toilets, running water, and concrete floors, respectively. Additionally, for 99.54% of the records, the average occupancy is smaller than three, i.e., only 9834 residential blocks have an average occupancy larger than three. The threshold at τ_v = 0.2 distinguishes between 853,583 and 1,324,925 records corresponding to the vulnerable and non-vulnerable classes, respectively. For the experiments, a balanced classification problem was created by subsampling 853,583 non-vulnerable records at random. Under the assumption of uniform weights, this threshold corresponds to blocks in which every dwelling lacks at least one essential service or is overcrowded.
To run the algorithms, an Exxact server with one Titan-X Pascal GPU was employed. The computer programs extract the geo-location for each residential block using publicly available data [30]. The programs then downloaded the corresponding 600 m × 600 m image patches of the blue (450-515 nm), green (525-605 nm), red (630-690 nm), near-infrared (775-900 nm), SW1 (short wavelength infrared) (1550-1750 nm), and SW2 (2080-2350 nm) Landsat-7 bands for the same year from INEGI (Instituto Nacional de Estadística y Geografía) [31]. Note that the first three and second three bands correspond to the visible (V) and infrared parts of the electromagnetic spectrum. Landsat-7 is a helio-synchronous satellite with a repeat interval of 16 days that orbits the Earth at a nominal altitude of 705 km. INEGI generates these images with the geomedian [32] from those captured between 1 January and 31 December 2010. Although Landsat-7 has a resolution of 30 m/pixel, corresponding to image patches of 20 × 20 pixels, the images were resized as required by the architecture under consideration using bicubic interpolation.
Figure 2. Histograms of the reference values and of the vulnerability index v defined in (1). In 60.23%, 6.75%, 59.81%, 11.18%, and 60.49% of the blocks the reference value is zero (no dwelling lacks the item), and in 0.43%, 12.16%, 0.88%, 0.45%, and 0.13% it is one (every dwelling lacks it), for sewage, toilets, running water, an occupancy of more than three people per bedroom, and concrete floors, respectively; the average occupancy is smaller than three for 99.54% of the records (i.e., larger than three for only 9834 records).
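Since the architectures described below expect larger inputs than the native 20 × 20 pixel patches, the bicubic resizing step could look roughly as follows (a minimal TensorFlow sketch; the random patch and the target size are placeholders, not the authors' code).

import numpy as np
import tensorflow as tf

# One 20 x 20 patch with six Landsat-7 bands (blue, green, red, NIR, SW1, SW2),
# with values already re-scaled to [0, 1]; random data stands in for a real patch.
patch = np.random.rand(20, 20, 6).astype("float32")

# Bicubic upsampling to the input resolution expected by the CNN under study
# (e.g., 224 x 224 for EfficientNet-B3, 32 x 32 for the ResNet/ResNeXt models).
resized = tf.image.resize(patch[None, ...], size=(224, 224), method="bicubic")
print(resized.shape)   # (1, 224, 224, 6)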
Selecting a Learning Architecture
Given their capacity to extract high-level features, CNNs are employed to construct the map between satellite image patches and the vulnerability reference value. Functionally, a CNN [33] includes convolution operations that transform the input data with the application of data-driven operators. Researchers, such as Li et al. [34], have concluded that successive layers extract increasingly abstract features, starting from edges and corners and ending with representations that generalize the pattern sought. In each layer, a CNN applies a linear operator to the inputs. To express nonlinearities, one applies activation functions to generate each layer's outputs. In most cases, one feeds the features developed by the convolutional layers to a fully connected layer. Eventually, the aim is to represent (for regression) or discriminate (for classification) the resulting transformed data with hyperplanes in the last layer. The selected architectures were chosen based on the following criteria: performance on the ImageNet benchmark [35], availability of ImageNet pre-trained weights in the Tensorflow applications repository [36], and the capability of the target computing resources. Based on these criteria, the CNN architectures selected included ResNet [37], ResNeXt [21], and EfficientNet [38]. In addition, a baseline was established with LeNet-5 [39]. Other popular strategies may be pursued, including the employment of Inception-like architectures [40] or Generative Adversarial Networks (GAN) [41]. The former type of approach was left for future consideration, as ResNeXt, an architecture that incorporates the Inception capability to analyze features at variable scales, is already included. Furthermore, as Perez et al. [42] noted, GANs are most useful in the classification of vulnerable settlements when the references are sparse. In this study, the dense reference values available in the census are complemented with the employment of a high-performance classifier, which itself could serve as the discriminator in a GAN architecture.
LeNet [39]. LeNet is used to establish a baseline for comparison. Its solid performance on MNIST, a dataset of 28 × 28 pixel digit images, makes it a natural choice for the small image patches in this dataset. This CNN consists of a first part with convolutional layers and a second part with fully connected layers. The convolutional stage has three convolutional layers, where the first two are each followed by a subsampling (pooling) layer that halves the feature maps in each spatial dimension. Finally, the fully connected layers lead to the classification.
ResNet [37]. This CNN still shows a strong performance in a wide variety of computer vision and pattern recognition tasks. The ResNet paradigm implements in its architectures the concept of skip connections. That is, suppose that as an input x passes through a set of layers, it is transformed into F (x). In skip connections, x joins F (x) to compute H(x) = F (x) + x. In practice, it means that if the overall underlying mapping between input and output is H(x), the machine learns in F (x) the residual H(x) − x. This configuration addresses the degradation problem, i.e., the observation that a deeper network should not generate a higher learning error than a shallower architecture, when in fact, it does. He et al. [37] showed that residual networks can be constructed deeper, are easier to optimize, and gain accuracy from increasing depths.
ResNeXt [21]. ResNeXt implements topologies that split the input into C low dimensional embeddings, transform each path with a mapping CNN architecture T_i(x), and aggregate the results as F(x) = Σ_{i=1}^{C} T_i(x). The transformation functions implement ResNet's [37] bottleneck topology with stacks of 1 × 1 (reducing dimensions), 3 × 3, and 1 × 1 (restoring dimensions) convolutions. Thus, while splitting the input as Inception models do [43], ResNeXt implements in each branch the same topology in a number of paths C that is called the cardinality. From ResNet, ResNeXt also inherits the skip connections, resulting in the residual function y = x + Σ_{i=1}^{C} T_i(x), where y is the outcome. EfficientNet [38]. This architectural framework systematically addresses model scaling in terms of depth, d = α^φ, width, w = β^φ, and resolution, r = γ^φ, based on a compound coefficient, φ, which in turn depends on the computing resources available. Then, starting with a baseline architecture in which building blocks are mobile inverted-bottlenecks (MBConv) [44] with squeeze-and-excitation components [45], EfficientNet employs the compound coefficient to scale up and generate deeper, wider, and higher resolution architectures. The MBConv modules consist of depthwise convolutions, where each filter acts on a single input channel, and ResNet-like skip connections. Squeeze-and-excitation components summarize the feature maps produced by the depthwise convolution and learn their channel-wise importance, which is used to rescale them.
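To make the skip-connection idea concrete, the following is a minimal Keras sketch of a ResNet-style residual block of the kind described above; the layer sizes, channel counts, and input shape are illustrative and do not correspond to the exact blocks used in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # F(x): two stacked convolutions with batch normalization and ReLU
    f = layers.Conv2D(filters, 3, padding="same")(x)
    f = layers.BatchNormalization()(f)
    f = layers.ReLU()(f)
    f = layers.Conv2D(filters, 3, padding="same")(f)
    f = layers.BatchNormalization()(f)
    # Skip connection: H(x) = F(x) + x, so the block learns the residual
    shortcut = x
    if x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([f, shortcut]))

inputs = tf.keras.Input(shape=(32, 32, 6))   # e.g., V + IR patches
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)
model.summary()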
Detecting Vulnerability
Thus, the machine learning problem in this study is to map the multispectral 20 × 20 pixel satellite image patches to the class C(v_i) defined by the vulnerability index v_i by employing a CNN. The learning phase takes place as follows. To train a CNN, first an image dataset with the same number of positive and negative samples is selected. Then, the CNN is fed with the corresponding image patches, which are labeled according to the classification described in (2). The CNNs are trained by optimizing the parameters using backpropagation during a certain number of epochs, employing a loss function defined in terms of the cross-entropy and a regularization factor as
Loss = − Σ_{k=1}^{c} q_k log(p_k) + λ L_p,
where L_p is the p-norm of the network weights, λ is a constant, p^T = (p_1, . . . , p_c) and q^T = (q_1, . . . , q_c) correspond to the inferred and reference probability distributions, respectively, and c is the number of classes. During testing, given an image patch, I_j, corresponding to a residential block j, the CNN generates a probability distribution p_j for the sample. To define the sample as positive or negative, one could use a decision threshold τ_p to accept a certain inference probability. The performance of the classifiers is evaluated using the area under the ROC and Precision-Recall curves.
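A minimal sketch of such a regularized cross-entropy objective in TensorFlow/Keras is given below; the network shown is a placeholder, the λ value is the one reported later in the experiments, and this reflects the loss as reconstructed above rather than the authors' code.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

lam = 0.1   # weight of the L2 penalty, as reported for the experiments

# L2 (p = 2) weight regularization adds lam * ||w||_2^2 terms to the total loss
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 6)),
    layers.Conv2D(16, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(lam)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax",
                 kernel_regularizer=regularizers.l2(lam)),
])

# Cross-entropy between the reference distribution q and the inferred p;
# the regularization terms declared above are added automatically by Keras.
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.AUC()])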
Results
To test the algorithms, census data for 2010 were collected, corresponding Landsat-7 satellite images were retrieved, different CNN architectures were trained, and their performance was assessed.
Learning
The balanced dataset containing 1,707,166 records was split into 50% for training (853,583 records), 25% for validation (426,791 records), and 25% for testing (426,792 records due to rounding). The same random partition is used to train and test the different CNN models. The images' intensity values were re-scaled so that each band lies in the range between zero and one. To increase the expressiveness of the dataset, the training dataset was augmented with transformations including horizontal and vertical flips, with a 0.5 probability, and grid distortion and elastic deformations [46], with a 0.2 probability, as sketched after this paragraph. After every epoch, the training and validation datasets are randomly shuffled. Several L_p regularization schemes were evaluated, settling on L_2 with λ = 0.1. In the experiments, the CNN was trained either with the visible bands or with the visible and infrared bands. Furthermore, an agnostic position was assumed, and a uniform distribution was used for the reference value weights. Now, some implementation details are provided for the CNNs under consideration. The input layer of the CNN architectures was modified to accommodate three or six channels depending on whether training occurs with images corresponding to the bands in the visible (V) electromagnetic spectrum or a combination of bands in the visible and infrared (V + IR) portion of the electromagnetic spectrum.
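One way to assemble an augmentation pipeline with those transformations and probabilities is shown below, using the albumentations library as an example; the source does not state which implementation was actually used, so this is only an illustration.

import numpy as np
import albumentations as A

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.2),
    A.ElasticTransform(p=0.2),
])

# A visible-band (3-channel) patch is used here for simplicity.
patch = np.random.rand(20, 20, 3).astype("float32")
augmented = augment(image=patch)["image"]
print(augmented.shape)   # (20, 20, 3)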
LeNet [39]: Current LeNet-5 implementations adopt hyperbolic tangent activation functions in the inner layers and softmax in the last layer. Our best training results were obtained by employing Stochastic Gradient Descent (SGD) as the optimizer with a constant learning rate of 10^−3, a momentum equal to 0.9, a batch size of 128, and training during 100 epochs. For this CNN, the images were resized to 28 × 28 pixels.
ResNet [37]: Models based on the ResNet-50 v2 architecture were trained, replacing the top layer with a flattening layer and inserting a dropout layer with 20% probability, a dense layer of 256 units with the ReLU activation function, and a dense layer with the softmax activation function for two classes. For one model, the V bands of the images were used, and for the other, the V + IR bands were used. In both cases, the images were resized to 32 × 32 pixels. Transfer learning with ImageNet [35] weights was applied, and then the CNN was trained during 100 epochs with a batch size of 128. When using the V + IR bands, the input layer of the model was modified. Then, the ImageNet pre-trained weights were copied to the other layers before performing training, initializing the input layer with Xavier [47]. The best results for this CNN were obtained optimizing with SGD with a learning rate of 10^−5 and momentum 0.9.
ResNeXt [21]: The images were resized to 32 × 32 pixels and the models were based on the ResNeXt-50 architecture. As in the ResNet-based models, the top layer was replaced with a flattening layer, and a dropout layer with 20% probability, a dense layer of 256 units with the ReLU activation function, and a dense layer with the softmax activation function for two classes were inserted. The models were initialized using the weights of the ResNeXt network pre-trained with ImageNet [35] and then fine-tuned over 100 epochs using SGD with momentum 0.9 and a batch size of 128. For the model trained with the V + IR bands' images, the input layer was modified to accept those images, and the ImageNet weights were used only for the non-modified layers.
EfficientNet [38]: For EfficientNet, the images were resized to 224 × 224 pixels, applying transfer learning with ImageNet [35] weights. When the V bands were used, the transfer was immediate; when the V + IR bands were used, the input layer was modified to accommodate the extended number of channels and was initialized using Xavier [47]. The top layer of the EfficientNet architecture was removed and replaced with a global average pooling layer, a dropout layer with a 0.5 probability, and a dense layer with the softmax activation function for two classes. The best results for this CNN were obtained by training during 20 epochs using the Adam optimization method with β_1 = 0.9 and β_2 = 0.999. During the first 18 epochs, a learning rate of 0.001 was applied, and 0.0001 for the last two. For these experiments, the EfficientNet-B3 architecture was employed.
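The head replacement described for EfficientNet could be expressed roughly as follows (a Keras sketch for the three-band visible case; the optimizer settings are taken from the text, while everything else is illustrative rather than the authors' code).

import tensorflow as tf
from tensorflow.keras import layers

# EfficientNet-B3 backbone with ImageNet weights and no classification head
backbone = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(2, activation="softmax")(x)   # vulnerable / non-vulnerable
model = tf.keras.Model(backbone.input, outputs)

# Adam with beta_1 = 0.9, beta_2 = 0.999 and the initial learning rate of 0.001
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999),
    loss="categorical_crossentropy",
    metrics=[tf.keras.metrics.AUC()],
)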
Classification Performance
The performance of the CNNs of interest was assessed employing the test dataset, which is composed of 426,792 records (see Figure 4 and Supplementary Materials). The experiments included using the V bands and the V + IR bands available. The Receiver Operating Characteristic (ROC) curve illustrates the trade-off between the sensitivity or true positive rate (TPR = TP/(TP + FN)) and the complement of the specificity or false positive rate (FPR = FP/(FP + TN)), where TP, FN, FP, and TN correspond to the number of true positives, false negatives, false positives, and true negatives, respectively. Correspondingly, the Precision-Recall curve shows the trade-off between the precision (P = TP/(TP + FP)) and the recall (R = TP/(TP + FN)). Note that TPR and R are the same. Usually, practitioners use them separately when finding an optimal compromise between TPR and FPR or between P and R, which occurs at a specific decision threshold value τ_p. The area under the curve (AUC), for the ROC and Precision-Recall curves, is a useful metric to evaluate the performance of a classifier. High TPR and low FPR values, over a wide range of thresholds, result in a large ROC AUC, while high values of both P and R generate a large Precision-Recall AUC. Figure 5 shows the curves of TPR vs. FPR (ROC curve) and precision vs. recall for all the CNN architectures and the employment of the V bands and the V + IR bands. Table 1 shows the numerical values corresponding to the AUC for each case. Note that the use of the V + IR bands consistently outperforms the employment of the V bands for all the CNN architectures under consideration. Similarly, the EfficientNet architecture, with ROC AUC = 0.9421 and Precision-Recall AUC = 0.9457, outperforms the rest of the CNNs by a considerable margin.
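For reference, both AUC figures can be computed from the model scores with scikit-learn as sketched below; random scores stand in for the CNN outputs.

import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # reference classes (0 / 1)
y_score = rng.random(1000)                      # CNN probability for class 1

roc_auc = roc_auc_score(y_true, y_score)        # area under the ROC curve
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)                 # area under the Precision-Recall curve
print(roc_auc, pr_auc)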
Discussion
Vulnerability is a fluid concept that may include social, physical, health, educational, environmental, economic, and psychological aspects [48]. Nonetheless, its concrete definition is of paramount importance because its establishment will shape the subsequent approach to risk assessment and reduction efforts. Our approach to the definition of vulnerability aims to find a tradeoff between the terms proposed by UN-Habitat [27], the data publicly available for its countrywide detection [30], and an agnostic approach about the relative importance of the factors employed. Other sources of information, e.g., whether houses receive electricity, or prior knowledge of the importance of the different reference values, may prove useful for a more refined labeling of the images and may further improve the performance of the automatic inference mechanisms, as shown by Sharma et al. [24] and Ibrahim et al. [23]. Establishing a baseline of comparison with other classical [8,12,16,22] or modern approaches [13,14,21,23,24] is most challenging. For one, the data sources, dense [8] or sparse [13], may have local and unique components; the scope may be a citywide [18], countrywide [26], or regionwide [15,25] interest. At any rate, this research demonstrates a strong performance with publicly available satellite information and census data at the fine-grained resolution of the residential block and at a countrywide scale. There are some applications where the use of a CNN requires a control and experimental group [49][50][51]. That is the case for applications where the CNN is employed as an alternative procedure. In these circumstances, the control group is the gold standard to assess the effectiveness of a new technique. In the present research, there is no comparison with a previous method to determine vulnerability from satellite images because this study introduces both a source of information and a vulnerability index definition. However, in parallel to presenting a robust baseline by comparing different CNN architectures, this paper provides the means to benchmark new approaches within the same framework.
This study is based on the reformulation of the census reference values as a classification problem where the distinction between vulnerable and non-vulnerable results from the lack of at least one UN-Habitat reference item. These results extend Dorji et al. [16], who also framed their regression problem as a classification one employing nighttime-light satellite images and classical machine learning techniques, such as gradient boosting. These results show that this reformulation strategy can also be applied in the case of daytime multispectral satellite images and deep-learning-based models. In fact, this research distinguishes itself from others that use high-resolution imagery [11,12,14], potentially making it more affordable.
The image resolution employed in this study is relatively coarse, although not uncommon in this field of study [8,17,19]. This choice may permit a widespread adoption of the method by allowing its application with open datasets, contrary to the employment of private high-resolution satellite images such as QuickBird and DigitalGlobe images. Furthermore, there has been a sizable historical interest in the computer vision research community in understanding the tolerance of detection algorithms to degradation in image resolution [52]. Despite that, the level of performance achieved in establishing the map between multispectral satellite images and census data is remarkable. Further research may explore how the deep learning algorithm may provide explanations for its decisions [53]. This line of study will open rich avenues for investigation, but most importantly, it will produce less biased and more ethical applications of research results [54].
The availability of large-scale, long-term satellite image datasets makes it possible to employ data-eager methods, such as CNNs, whose performance increases logarithmically with the volume of training data [55]. Yet, models still play a role. For example, the comparison of diverse CNN architectures, varying in depth and complexity, highlights that while convolutions are important for feature extraction, one may find substantial differences in performance relative to their overall architecture, which may include the factors of width, depth, resolution, and cardinality employed by EfficientNet [38]. This result is still the subject of widespread interest in the deep learning community, as it remains debatable in which circumstances deeper networks are better than shallower ones [56].
An intriguing outcome in the results is the increase in performance achieved with the employment of the infrared bands. Recent studies on trees' urban ecosystem services [57] point out the relationship between socioeconomic indicators and tree canopy. Further studies may shed light on whether the infrared bands capture some of the associated temperature differences commonly found across vulnerable communities [58].
Conclusions
This paper introduces an approach to detect vulnerability at the residential block level. It consists of training CNNs to find a relationship between salient features extracted from satellite images and vulnerability indicators obtained from population census data. Our extensive experimental results, including hundreds of thousands of records, with state-of-the-art CNN architectures, show that it is possible to create a high-performance characterization of this relationship, offering ample opportunity for generalization. In particular, our experiments show that EfficientNet CNN architectures provide the best performance relative to other topologies.
In the effort to reduce vulnerability, it is important to provide decision-makers with affordable, reliable, and up-to-date information about its sprawl. Our method introduces a tool to rapidly assess the spatial distribution of poverty in Mexico in detail and with ample coverage. This methodology should be a helpful asset to decide expenditure and to evaluate the progress of remedy programs. Thus, focused attention to vulnerable communities should result in a more significant return on public and humanitarian funds.
In the future, the employment of satellite radar images will be explored. Although multispectral images from Landsat, in the visible and infrared bands, are readily available every 16 days, clouds or the time of day may alter their suitability. In contrast, Sentinel-1 provides Synthetic Aperture Radar (SAR) images, which have a higher resolution, are unaffected by clouds, and can be taken at any time of day, since they come from active sensors. Furthermore, although these results offer good average performance at a national level, it may be interesting to explore the opportunities left at coarser levels of analysis, such as the regional, state, and municipality-wide levels. Furthermore, these results make it possible to periodically evaluate vulnerable communities, even during years when censuses are not carried out, by employing a blend of satellite imagery and deep learning techniques.
Supplementary Materials: The following are available online at https://git.inegi.org.mx/laboratoriode-ciencia-de-datos/vulnerability/-/tree/master/maps. Figures S1 and S2 illustrate, respectively, the assessment of vulnerability for Oaxaca and Acapulco, cities located in the Mexican states of Oaxaca and Guerrero. The figures show the outcome of the vulnerability assessment using EfficientNet, the best performing CNN. Vulnerability is displayed from less vulnerable (yellow) to more vulnerable (red). The web application at https://tinyurl.com/vulnerable-app shows the results for the whole country of Mexico. There, the interested reader can select the desired level of detail in the visualization of the results. | 2021-10-02T13:08:24.975Z | 2021-09-10T00:00:00.000 | {
"year": 2021,
"sha1": "f61c5ed6e3159b2221395928446d7d5c3fb51b17",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/18/3603/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "625c4bab31c23439cd08723f4923054c9ef5fa30",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
224722255 | pes2o/s2orc | v3-fos-license | Complete Clinical Response in Stage IVB Endometrioid Endometrial Carcinoma after First-Line Pembrolizumab Therapy: Report of a Case with Isolated Loss of PMS2 Protein
Endometrial cancer is the only gynecological cancer that is rising in incidence and associated mortality worldwide. Although most cases are diagnosed as early stage disease, with chances of cure after primary surgical treatment, those with advanced or metastatic disease have a poor prognosis because of the limitations of the treatment options that are currently available. Mismatch repair (MMR)-deficient cancers are susceptible to programmed cell death protein 1 (PD-1)/programmed cell death ligand 1 inhibitors. The US Food and Drug Administration granted accelerated approval to pembrolizumab for MMR-deficient tumors, the first tumor-agnostic approval for a drug. We present a case of stage IV endometrioid endometrial carcinoma with isolated PMS2 protein loss, in which treatment with first-line pembrolizumab therapy achieved a complete clinical and pathological response of the tumor.
Introduction
Endometrial carcinomas of endometrioid type are more frequent than the non-endometrioid type of carcinoma, and extremely heterogeneous in terms of prognosis. Surgical staging is the most important prognostic factor and the guide for adjuvant treatment [1]. Stage IV
Case Report
A 45-year-old woman, gravida 2, presented in April 2018 with a diagnosis of endometrioid carcinoma of the endometrium, FIGO grade 3. The diagnosis had been based on the results of an endometrial biopsy indicated because of abnormal vaginal bleeding and suspicion of an endometrial polyp. On physical examination, the patient was in good condition, with a body mass index of 19.6. Her family history included a brother who developed colon cancer at age 32 and a maternal grandmother who had a gynecological cancer at the age of 70 years.
The review of the endometrial biopsy confirmed a diagnosis of endometrioid endometrial carcinoma, FIGO grade 3. The histological examination revealed a poorly differentiated adenocarcinoma, extensively necrotic, with 75% solid areas, 25% villoglandular architecture, and foci of squamous differentiation. Nuclear pleomorphism was high, with numerous mitoses, including atypical figures. Tumor-infiltrating lymphocytes accounted for 15% of the tumor stroma and were associated with an intraepithelial component.
Magnetic resonance imaging revealed a 4-cm tumor located in the uterine fundus, which had infiltrated the myometrium and extended to the serosa, beyond the left fallopian tube, and into adjacent peritoneal fat (Fig. 2a). There were enlarged iliac and retroperitoneal para-aortic lymph nodes suggestive of neoplastic involvement. Fluorodeoxyglucose (FDG)-18F positron emission tomography-computed tomography (PET-CT) showed anomalous uptake in iliac, para-aortic, retrocrural, and periesophageal lymph nodes. The patient was treated with intravenous pembrolizumab (Keytruda, Merck & Co) at a dose of 200 mg every 3 weeks.
After 3 cycles of pembrolizumab, the uterine lesion completely regressed (Fig. 2b). PET-CT no longer demonstrated anomalous uptake in the iliac, para-aortic, retrocrural, or periesophageal lymph nodes. To evaluate the pathological response, the patient underwent laparoscopic hysterectomy, pelvic and para-aortic lymph node dissection, omentectomy, and inventory of the peritoneal cavity. There was no macroscopic evidence of disease. The right retrocrural and thoracic periesophageal lymph nodes were not dissected. Pathological examination of the surgical specimens showed no residual neoplasia in the uterus. Histological signs of complete tumoral regression (fibrosis, edema, histiocytes, lymphoid aggregates, organizing hemorrhage, and endothelial hyperplasia) were seen in the endometrium, myometrium, cervix, uterine serosa, left ovary, and adipose tissue adhering to the uterine fundus. Forty-one lymph nodes (17 pelvic, 23 para-aortic, and 1 omental) were dissected. A micrometastasis of 3.0 mm was identified in one right pelvic lymph node, and an aggregate of isolated tumor cells (diameter 0.2 mm) with signs of partial regression was present in one para-aortic lymph node. Signs of complete tumoral regression were identified in another 3 lymph nodes (1 left pelvic and 2 para-aortic).
The patient has completed 24 months without evidence of progression of disease and without adverse effects.
Genetic Evaluation
The patient was referred to a genetic counselor for risk assessment. The patient reported a family history that included a brother with colon cancer at the age of 32 years and a maternal grandmother with gynecological cancer at the age of 70 years. It is important to note that the maternal family history was limited, as the patient's mother was an only child. After receiving pre-test genetic counseling, the patient was tested with a next-generation sequencing cancer panel that included all MMR genes. A genetic variant was identified in the MLH1 gene, c.193 G > A (p.Gly65Ser), and classified by a CLIA-certified lab as a variant of uncertain significance (VUS). This classification was based on American College of Medical Genetics and Genomics (ACMG) guidelines and the laboratory pipeline. The laboratory reported that the variant has 3.5 points for pathogenic classification (a minimum of 4 points is needed for classification as likely pathogenic). This variant has no entry in ClinVar. Post-test genetic counseling was provided, and the patient was informed about the possibility that this variant may be reclassified as likely pathogenic, which would confirm a diagnosis of Lynch syndrome.
Discussion
Cancer is among the leading causes of death worldwide. The number of new cases of corpus uterine cancer in 2018 was 382,069; the number of related deaths was 89,929 [6]. The highest rates occur in North America (20.5 per 100,000), while South America presented only 6.9 cases per 100,000 inhabitants [6]. While the incidence of cervical cancer is declining, particularly in most developed countries, endometrial cancer is the only gynecological malignancy with a rising incidence and associated mortality. Apart from a genetic predisposition, obesity is an important risk factor for endometrial carcinoma. The fraction of all corpus uterine cancers attributable to excess body mass index is 37.1% in Brazil and 48.3% in the US [6]. We must be prepared for the management of this growing number of endometrial carcinoma cases.
TCGA study of endometrial cancer identified four categories of tumors: POLE ultramutated, microsatellite instability hypermutated, copy number low/microsatellite stable, and copy number high [3]. These four categories are differentiated in the Proactive Molecular Risk Classifier for Endometrial Cancer (ProMisE), which uses the results of immunohistochemistry for MMR proteins and sequencing of the POLE exonuclease domains [7]. Tumors are classified as POLE mutant, MMR-deficient, p53 wild-type, or p53 abnormal, which are surrogates for the TCGA categories of POLE ultramutated, microsatellite instability hypermutated, copy number low/microsatellite stable, and copy number high, respectively.
The MMR-deficient group corresponds to tumors with loss of protein expression secondary to either germline or somatic mutations, generally in the MLH1, MSH2, MSH6, and PMS2 genes. The MMR system is an essential mechanism for maintaining genome integrity in organisms. MMR deficiency results in greatly increased rates of spontaneous mutation with consequent microsatellite instability and predisposition to cancer development. Hereditary nonpolyposis colorectal cancer, or Lynch syndrome, is an autosomal-dominant disease characterized by germline mutations in MMR genes. Endometrial cancer is the second most common manifestation of this disease. Apart from its role in classification of molecular subtype, MMR deficiency analysis is recommended for all endometrial cancers, because a significant percentage of women with Lynch syndrome will present with an endometrial cancer as their initial manifestation of cancer [8]. The ultimate diagnosis of Lynch syndrome requires documentation of a mutation within one of the four MMR genes (MLH1, PMS2, MSH2, and MSH6) or EPCAM, currently achieved with comprehensive sequencing analysis of germline DNA. Immunohistochemistry for MMR proteins can be performed to screen patients with endometrial cancer for Lynch syndrome [9].
MMR proteins are stable only as heterodimers: MLH1 pairs with PMS2; MSH2 pairs with MSH6. Considering that PMS2 and MSH6 only dimerize with MLH1 and MSH2, respectively, some recommend the immunohistochemistry analysis only for PMS2 and MSH6, which become unstable when protein expression of MLH1 or MSH2, respectively, is lost [8]. However, our case presented only the loss of PMS2 expression, a finding reported in 7% of endometrial carcinomas with MMR deficiency [10]. The management of patients with this finding, for purposes of screening for Lynch syndrome, must include MLH1 analysis if no mutations are detected through PMS2 testing, because approximately 24% of patients harbor germline MLH1 mutations not detected by immunohistochemistry [10].
Apart from its role in screening for Lynch syndrome, MMR protein testing is a prognostic classifier that may be used to guide adjuvant treatment [9].
MMR plays an important role in editing DNA mismatches that occur during replication and in recombination repair. MMR defects increase the rate of mismatch errors, resulting in microsatellite instability and, consequently, abnormal proteins that activate the immune system. Yamashita et al. [4] identified a loss of MMR proteins in 42/149 (28.2%) patients with endometrial cancer. The group of patients with MMR defects presented higher levels of CD8+ tumor-infiltrating lymphocytes and PD-L1/PD-1 expression, suggesting that MSI may be a biomarker for immune checkpoint inhibitors. In our case, besides alterations in MLH1 (splice site 117-11_153del48), Foundation One detected high MSI.
Previous work has shown that pembrolizumab has durable antitumor activity in patients with locally advanced or metastatic PD-L1-positive endometrial cancer that has not responded to treatment [11]. However, responses to immune checkpoint inhibitors are observed regardless of the status of PD-L1, as in the recent case reported by Takeda et al. [12] and in their literature review, which included 7 cases of endometrial carcinoma successfully managed by pembrolizumab. The approval of immune checkpoint inhibitors for the treatment of all solid tumors with defective DNA MMR could benefit a significant portion of patients with advanced endometrial cancer. However, patients respond to treatment in different ways. For example, in a study of endometrial carcinoma, Sloan et al. [13] found that PD-L1 expression was more common in tumors from patients with MMR deficiency associated with Lynch syndrome than in those with MLH1 promoter hypermethylation or MMR-intact tumors, suggesting differences in the benefits afforded by PD-1/PD-L1 drugs. These findings indicate that not all tumor neoantigens generated by MSI are antigenic [14], and additional information on the pathological and genomic features of MMR-deficient tumors may facilitate the search for an effective immunotherapy. Our case presented an extraordinary response and several characteristics aside from MSI-H that suggest immune activation. The tumor presented lymphocyte infiltration of the stroma and neoplasia, suggesting lymphocyte mobilization. The tumor was poorly differentiated, high-grade, with a high mitotic index and atypical mitoses, carrying a high neoantigen load that elicited the recruitment and activity of cytotoxic lymphocytes, and, consequently, the expression of PD-L1/PD-1. The tumor also presented a loss of ARID1A expression. ARID1A encodes a member of the SWI/SNF (switch/sucrose non-fermentable) chromatin remodeling complex, and there is an association between ARID1A loss and sporadic MSI [15]. Although the loss of ARID1A is associated with MLH1 silencing, which was not demonstrated in our case, it interacts with several other proteins, some of which are involved in DNA repair and genomic stability [15]. Foundation One testing revealed alterations in 19 genes, most of which act as tumor suppressors. | 2020-09-10T10:03:15.737Z | 2020-09-07T00:00:00.000 | {
"year": 2020,
"sha1": "fafc90e1b3801fd819dcc45b86962dffb040ce79",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/510000",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9d11be182024e80deac6e3937d647f44429043ea",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222219961 | pes2o/s2orc | v3-fos-license | Prior Attention Enhanced Convolutional Neural Network Based Automatic Segmentation of Organs at Risk for Head and Neck Cancer Radiotherapy
Aiming to automate the segmentation of organs at risk (OARs) in head and neck (H&N) cancer radiotherapy, we develop a novel Prior Attention enhanced convolutional neural Network (PANet) based Stepwise Refinement Segmentation Framework (SRSF) on full-size computed tomography (CT) images. The SRSF is built with a multiscale segmentation concept, in which OARs are segmented from coarse to fine. PANet is a pyramidal architecture with elements of an inception block and prior attention. In this study, the developed PANet based SRSF is applied for OARs segmentation in H&N radiotherapy. 139 CT series and contours of twenty-two OARs manually delineated by experienced oncologists are collected from 139 H&N patients for training and evaluating the proposed PANet based SRSF. The mean testing Dice similarity coefficients (DSC) on 39 CT series range from 76.1 ± 8.3% (left middle ear) to 91.9 ± 1.4% (right mandible) for large volume OARs (mean volume >1 cc), while the corresponding ranges are 63.4 ± 12.3% (chiasm) to 81.0 ± 14.1% (right lens) for small and challenging OARs (mean volume ≤1 cc). Furthermore, the proposed method also achieved superior segmentations over reference methods on the MICCAI 2015 H&N dataset, with mean DSC of 95.6 ± 0.7%, 81.3 ± 4.0%, 77.6 ± 4.5%, 77.5 ± 4.6%, and 69.2 ± 7.6% on the mandible, left submandibular, left and right optical nerve, and chiasm, respectively. Accurate segmentation of OARs is obtained on both the self-collected testing data and the public testing dataset, which implies that the proposed method can be used as a practicable and efficient tool for automated OARs contouring in H&N cancer radiotherapy.
I. INTRODUCTION
Organs at risk (OARs) delineation in computed tomography (CT) is a critical step in radiotherapy planning to achieve organ dose sparing for minimizing radiation-induced toxicity [1], [2]. Manual delineation is usually adopted in current clinical practice, which is time-consuming and subject to large inter- and intra-operator variability [3]. On the other hand, the quality of OARs delineation directly influences the dose distribution in OARs, especially for head and neck (H&N) cancer radiotherapy, which involves many important OARs, such as the brain stem, optical nerves, and pituitary. A more robust and accurate automatic OARs segmentation method is clinically desirable for H&N cancer radiotherapy [1].
In the past several decades, many automatic OARs segmentation methods, such as the watershed segmentation algorithm [4], [5], active contour model-based algorithms [6], [7], and region-growing based segmentation algorithms [8], [9], were developed for H&N cancer radiotherapy. The most widely studied and used traditional method is atlas-based automatic segmentation (ABAS), which is extensively adopted in commercial treatment planning systems for assisting contour delineation. ABAS methods can be divided into two categories: single-atlas [10], [11] and multiple-atlas based methods [12]- [15]. The single-atlas-based method is sensitive to the selected atlas and may fail if there are great anatomical differences between the target image and the atlas [16], [17]. In contrast, the multiple-atlas based method has lower sensitivity to the atlases, but also lower efficiency because it involves more registration procedures, which may introduce more registration errors [15]. Moreover, due to the low soft-tissue contrast and inter-patient variance in CT images, the ABAS method tends to produce low segmentation accuracy. Thus, more manual modification is usually required to satisfy the clinical requirements in radiotherapy planning.
Recently, convolutional neural network (CNN) based deep learning methods have been considered the state-of-the-art approaches for medical image segmentation tasks. Much deep learning research has been conducted on H&N OARs segmentation for radiotherapy [18]- [23]. Ibragimov and Xing [19] applied a CNN to the segmentation of thirteen OARs in CT images for H&N cancer radiotherapy, and achieved higher accuracies on most OARs than conventional ABAS methods, while reporting poor segmentations in low-contrast and small organs such as the optical nerves (ONs) and chiasm, with Dice similarity coefficients (DSC) of 63.9% and 37.4%, respectively. Liang et al. [20] proposed a two-stage (detection and segmentation) method for eighteen H&N OARs with DSC from 68.9% (ONs) to 93.4% (eyes), which is superior to the results of a fully convolutional neural network (FCN). However, the segmentation accuracies were limited by only using 2D image information, with mean DSC <70% for ONs. Tong et al. [21] developed a shape representation model to constrain a 3D FCN for nine H&N OARs, which achieved mean DSC from 58.5% (chiasm) to 93.7% (mandible). However, that study was conducted on downsampled CT images with a voxel size of 2 mm × 2 mm × 2 mm, which is not suitable for clinical usage [21]. Chen et al. [22] developed an ensemble UNet [24] based recursive segmentation framework for brain stem, eyes, ONs, and chiasm segmentation on magnetic resonance images, which performs superior to UNet even for small OARs, with mean DSC of 80.1% and 71.1% for ONs and chiasm. Yet, the delineations on MRI still need to be extrapolated to CT via image registration for radiotherapy treatment planning, which will introduce registration uncertainties. Zhu et al. [23] constructed a squeeze-and-excitation residual block based AnatomyNet for nine OARs on whole volume CT images. The mean segmentation DSC achieved by AnatomyNet ranges from 53.5% (chiasm) to 91.3% (mandible). However, the above methods still perform poorly on low-contrast and small OARs because of the blurred boundaries and limited image information. Furthermore, the compatibility of the proposed methods with both very large and small OARs, such as the temporal lobe, pituitary, and chiasm, was not considered. Gao et al. [25] proposed FocusNet to balance large and small OARs segmentation. It achieved more accurate segmentation on small OARs by training OAR-specific models, which is time-consuming. Besides, FocusNet used the prior information by simply concatenating feature maps from OARs localization, which may weaken the model stability.
In this study, twenty-two OARs for H&N cancer radiotherapy are involved in the segmentation task, including four single organs: brainstem, spinal cord, chiasm, and pituitary, and nine paired organs: temporal lobes (TLs), eyes, optical nerves (ONs), lens, middle ears (MEs), mastoids, mandibles, temporal mandibular joints (TMJs), and parotids. In the following, the left and right parts of a paired OAR are denoted OAR_l and OAR_r, respectively. To achieve fully automatic, accurate segmentation of large and small OARs in full volume CT images for H&N cancer radiotherapy, we developed and evaluated a novel Stepwise Refined Segmentation Framework (SRSF), whose core model is a novel Prior Attention enhanced Convolutional Neural Network (PANet). The PANet based SRSF (SRSF_PANet) explores and takes advantage of the inherently stable relative positions among OARs, and achieves OARs segmentation from coarse to fine via three sequential segmentation steps: OAR-groups segmentation (OGS), large/easy OARs segmentation (LOS), and small/difficult OARs segmentation (SOS). To improve the segmentation accuracy in each step, a novel combination of prior attention and learnable spatial attention is applied to a justified inception block for more accurate and effective feature extraction in PANet.
II. METHODS AND MATERIALS
A. METHODS
In this study, a novel PANet based SRSF, SRSF_PANet, is developed for the segmentation of a large number of OARs in common large-volume CT images. The twenty-two OARs to be delineated are divided into three groups: Group A: brainstem and spinal cord; Group B: mastoids, mandibles, temporal mandibular joints, parotids, and middle ears; Group C: temporal lobes, eyes, and adjacent small OARs (mean volume ≤1 cc): lens, optical nerves, chiasm, and pituitary. As illustrated in Figure 1, the SRSF includes three sequential segmentation steps: OGS, LOS, and SOS. Firstly, the OGS model is trained on downsampled CT images at half resolution for OARs group segmentation. The label of each OARs group is obtained via Equation 1. That means the OARs in each group are regarded as one individual target. Then, the corresponding regions of interest (ROIs) are localized and prior probability maps are predicted based on the rough OGS for LOS, respectively. Similarly, the small OARs (if any) in each group are also segmented as a whole structure for ROI localization and prior probability estimation. For LOS and SOS, each OAR except for the small OARs group in LOS C is treated as an individual target.
For OGS, a justified inception pyramidal network (IPNet) is constructed based on a classic pyramidal network: UNet [24]. Compared with UNet, the core feature extractor of IPNet is a justified inception block without the pooling path. As shown in Fig. 2, the justified inception block improves the receptive field and feature variety by using multiple convolution paths with different kernel sizes. The pooling path was removed to avoid losing image features. Then, a convolution block follows to combine the feature maps extracted by the multiple kernels. The convolution block sequentially includes a convolution layer with a kernel size of 3×3×3, a batch normalization (BN) layer [26], and a ReLU activation layer. With the justified inception block, the depth of the pyramidal network can also be reduced to avoid losing image features, especially for small targets.
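A minimal PyTorch sketch of an inception-style block without a pooling path, followed by a 3×3×3 convolution-BN-ReLU fusion block, is given below; the kernel sizes and channel counts are illustrative and may differ from the exact IPNet/PANet configuration.

import torch
import torch.nn as nn

class InceptionBlock3D(nn.Module):
    # Multi-kernel convolution paths (no pooling path) followed by a fusion block
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.path1 = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.path3 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.path5 = nn.Conv3d(in_ch, out_ch, kernel_size=5, padding=2)
        self.fuse = nn.Sequential(
            nn.Conv3d(3 * out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = torch.cat([self.path1(x), self.path3(x), self.path5(x)], dim=1)
        return self.fuse(feats)

block = InceptionBlock3D(in_ch=1, out_ch=16)
print(block(torch.randn(1, 1, 16, 64, 64)).shape)   # torch.Size([1, 16, 16, 64, 64])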
To constrain the network to pay more attention to effective and informative spatial regions, a Prior Attention enhanced Inception (PAI) block is designed in PANet. As shown in Fig. 3, the learnable PAI first adjusts the feature maps spatially via prior attention (PA) and convolutional spatial attention. Then, all the attention-refined feature maps are element-wise added to the unrefined feature maps to avoid the vanishing gradient problem. The prior attention map P_i for depth i in PANet is generated by average pooling. In this study, the probability maps predicted by IPNet and PANet in OGS and in LOS for group C are regarded as the prior information for the subsequent OARs segmentation, respectively.
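The combination of a fixed prior map with a learned spatial attention map could be sketched as follows. This PyTorch illustration is one possible reading of the PAI block, not the exact published design; it assumes the prior is a probability map downsampled by average pooling to the feature-map resolution, combined multiplicatively with a convolutional spatial attention, and added back residually.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorAttention3D(nn.Module):
    # Learned spatial attention modulated by a prior probability map, with a
    # residual (element-wise add) connection to the unrefined features.
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv3d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x, prior):
        # prior: (N, 1, D0, H0, W0) probability map from the previous, coarser step
        prior = F.adaptive_avg_pool3d(prior, x.shape[2:])    # match feature size
        attn = torch.sigmoid(self.spatial(x)) * prior        # combined attention
        return x + x * attn                                  # residual refinement

pa = PriorAttention3D(channels=16)
feats = torch.randn(1, 16, 8, 32, 32)
prior = torch.rand(1, 1, 16, 64, 64)
print(pa(feats, prior).shape)   # torch.Size([1, 16, 8, 32, 32])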
Considering that the surface distance (SD) is more sensitive to shape changes than the Dice coefficient, a combined loss of the Dice loss [27] and an SD loss [28] is employed for segmentation model training in this study,
Loss = Loss_Dice + α · Loss_SD, with Loss_Dice = 1 − (2 Σ_{c,i} P_i^c G_i^c) / (Σ_{c,i} P_i^c + Σ_{c,i} G_i^c),
and with Loss_SD weighting the voxel-wise prediction error between P_i^c and G_i^c by (D_i^c)^γ, where P_i^c and G_i^c represent the predicted SoftMax probability and the gold-standard label at voxel i of channel c, respectively, and D_i^c is the corresponding normalized distance to the surface of the gold standard. γ and α, the parameters that adjust the penalty on large surface errors and the weight of Loss_SD, are both set to 1, following [28]. The Adam optimizer [29] is chosen for minimizing the loss function.
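A sketch of such a combined objective in PyTorch is given below. The Dice term is standard; the surface-distance term is written here as the squared voxel-wise error weighted by D^γ, which is one common formulation and only an assumption about the exact form used in [28].

import torch

def dice_sd_loss(prob, target, dist, alpha=1.0, gamma=1.0, eps=1e-6):
    # prob, target, dist: tensors of shape (N, C, D, H, W); "dist" holds the
    # normalized distance of each voxel to the gold-standard surface.
    dims = (0, 2, 3, 4)
    dice = (2.0 * (prob * target).sum(dims) + eps) / (prob.sum(dims) + target.sum(dims) + eps)
    loss_dice = 1.0 - dice.mean()
    # Surface-distance term: errors far from the gold-standard surface cost more
    loss_sd = ((prob - target) ** 2 * dist.clamp(min=0) ** gamma).mean()
    return loss_dice + alpha * loss_sd

prob = torch.rand(2, 3, 8, 32, 32)                       # SoftMax probabilities
target = (torch.rand(2, 3, 8, 32, 32) > 0.5).float()     # one-hot labels
dist = torch.rand(2, 3, 8, 32, 32)                       # normalized distance maps
print(dice_sd_loss(prob, target, dist))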
The proposed SRSF PANet is implemented with the deep learning library of Pytorch in Python 3.5. The model training and validation are completed on two GPU cards (NVIDIA GeForce GTX 1080) with 12GB memory. The hyperparameters of models are illustrated in Table 1. The maximum training epoch is set as 50 with an early stop strategy (10 epochs without validation loss decrease) to avoid overfitting.
B. MATERIALS
A total of 139 independent CT series from 139 nasopharyngeal cancer patients with manual contours were collected at the Sun Yat-Sen Cancer Center, China. All contours were manually delineated by an experienced oncologist and then reviewed and adjusted by another experienced oncologist; these contours are regarded as the gold standard in this study. The in-plane resolution of the CT images varies between 0.7 mm and 1.2 mm, and the slice thickness is 3 mm for all cases. The number of slices ranges from 90 to 172, with an average of 111, giving 15,443 slices in total in this self-collected dataset.
All CT images are clipped to the range [WL − WW/2, WL + WW/2] and then normalized to [−1, 1], where WW and WL denote the window width and window level, respectively. Of the 139 patients, 100 are randomly selected for training and the rest for testing. During training, 10% of the training set is randomly split off as inner-validation data to avoid overfitting. Translation, rotation, and noise addition are applied to augment the training data; with augmentation, 360 three-dimensional images are used for model training. To obtain rough prediction probabilities on the training data, five-fold cross-validation is employed in OGS and in LOS of group C.
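A short NumPy sketch of this intensity preprocessing is shown below; the actual window level and window width values used by the authors are not given in the text and are left as arguments.

```python
import numpy as np

def preprocess_ct(hu, window_level, window_width):
    """Clip a CT volume to [WL - WW/2, WL + WW/2] and rescale it to [-1, 1]."""
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    clipped = np.clip(hu, lo, hi)
    return 2.0 * (clipped - lo) / (hi - lo) - 1.0
```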
Segmentation accuracy is evaluated with the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), the latter being based on the directed surface distance h(G, P) = max_{g∈G} min_{p∈P} ‖g − p‖. DSC ranges from 0 to 1, corresponding to the worst and best possible segmentation, respectively; HD95 ranges from 0 to positive infinity. A higher DSC and a lower HD95 indicate more accurate segmentation. In this study, DSC and HD95 are calculated on a three-dimensional basis for each patient. In addition, the volume cover ratio (VCR), defined as VCR = V_in / V_gt, where V_in and V_gt are the target-OAR volume covered by the extracted ROI and the gold-standard volume of the target OAR, respectively, is used to evaluate ROI localization accuracy; a VCR of 100% means the localized ROI covers the whole target OAR. Because of the large Graphics Processing Unit (GPU) memory occupation, general hardware can hardly support segmenting twenty-two organs on the original CT image directly. The proposed SRSF PANet is therefore compared with UNet- and IPNet-based SRSF variants (SRSF UNet and SRSF IPNet) in the evaluation study; the core models used in the three methods are listed in Table 2. The Kolmogorov-Smirnov test is employed to test for normality (p > 0.05), after which the Wilcoxon rank-sum test and the paired t-test are used for statistical significance analysis on non-normally and normally distributed data, respectively. The statistical analysis is performed in SPSS 19.0, and a significant difference is defined as p < 0.05.
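The three metrics can be computed, for example, as in the NumPy/SciPy sketch below. Whether the authors use a directed or symmetric form of HD95, and whether distances are measured from surface voxels only, is not fully specified; the symmetric surface-based form shown here, and the default voxel spacing, are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask, spacing):
    """Physical coordinates of the voxels on the surface of a binary mask."""
    mask = mask.astype(bool)
    surf = mask & ~binary_erosion(mask)
    return np.argwhere(surf) * np.asarray(spacing)

def dsc(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(3.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (mm) between two binary masks."""
    p, g = surface_points(pred, spacing), surface_points(gt, spacing)
    d_pg = cKDTree(g).query(p)[0]   # predicted surface -> nearest gold-standard point
    d_gp = cKDTree(p).query(g)[0]   # gold-standard surface -> nearest predicted point
    return np.percentile(np.hstack([d_pg, d_gp]), 95)

def vcr(roi_mask, gt):
    """Volume cover ratio: fraction of the gold-standard OAR volume inside the ROI."""
    return np.logical_and(roi_mask, gt).sum() / gt.sum()
```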
To compare the proposed method with other state-of-the-art methods, SRSF PANet is also compared with five published methods [23], [25], [32]-[34] on the MICCAI 2015 H&N OARs segmentation dataset [32], denoted as the MICCAI'15 dataset (http://www.imagenglab.com/newsite/pddca/). The MICCAI'15 dataset consists of 38 samples for training and 10 samples for testing. The five reference methods comprise the winning method of the MICCAI'15 challenge [32] and four other state-of-the-art deep learning based methods [23], [25], [33], [34]. To transfer the proposed SRSF PANet to this MICCAI'15 segmentation task, the original output channels for the TMJs are replaced with channels for submandibular segmentation in LOS of group B; all other settings of SRSF PANet are the same as those used in the experiments on the self-collected dataset.

Table 3 reports the OAR group segmentation accuracy in OGS and LOS of group C. The mean DSCs of the large OAR groups in OGS are >83%, and the corresponding HD95 values are <4.8 mm with small variations. The mean DSC and HD95 of the small OARs in LOS C are 68.3±4.7% and 6.3±2.6 mm, respectively. Furthermore, the VCR of 100% shows that all OARs are covered by the corresponding ROIs under the chosen size settings. These results indicate that the group segmentation accuracy and the ROI size settings are sufficient for the SRSF in this study. In the segmentation example shown in Fig. 4, under- and over-segmentation are observed in the results of SRSF UNet, especially for the mandible and TLs. Benefiting from the larger receptive field of the inception module, SRSF IPNet performs better than SRSF UNet but still cannot accurately segment very large organs and low-contrast organs, such as the TLs, mastoids, and ONs. In comparison, SRSF PANet outperforms SRSF UNet and SRSF IPNet, showing the best agreement with the physician-delineated gold standard; on small OARs such as the ONs and chiasm, SRSF PANet also achieves the best segmentation results. Table 4 lists the testing results of SRSF UNet, SRSF IPNet, and SRSF PANet. For SRSF UNet, the mean DSCs of the large-volume OARs are above 70.0%, ranging from 72.8% (ME_r) to 89.1% (mandible_l), while those of the small OARs range from 51.2% (chiasm) to 75.2% (ON_r). SRSF UNet thus achieves good results for almost all OARs except the very large-volume ones, such as the TLs and mandibles. With the receptive field enlarged, SRSF IPNet achieves significantly better segmentation results on TL_l/r, parotid_l/r, ME_r, mastoid_l, TNJ_l/r, lens_l/r, ON_l/r, chiasm, and pituitary, results that are not significantly different on eye_l/r, ME_l, and mastoid_r, but significantly worse results on the spinal cord and TMJ_l/r. Compared with SRSF UNet, SRSF PANet achieves significantly better segmentation results on nineteen OARs and results that are not significantly different on ME_l and TMJ_l/r. Compared with SRSF IPNet, SRSF PANet also achieves significantly better segmentation results on seventeen OARs and results that are not significantly different on five OARs (mastoid_l, mandible_l, lens_l, ON_l, and pituitary). For HD95, SRSF IPNet is significantly superior to SRSF UNet on ten of the twenty-two OARs, and SRSF PANet is significantly superior on thirteen and seven OARs compared with SRSF UNet and SRSF IPNet, respectively. Overall, SRSF IPNet brings a performance improvement, while SRSF PANet achieves the best results among the three methods. Fig. 5 depicts the boxplots of the DSC and HD95 comparisons on the testing data.
From these we can observe that: 1) among the three methods, SRSF PANet achieves the best overall results on DSC and HD95; 2) compared to SRSF UNet, SRSF IPNet performs significantly better on eleven OARs but significantly worse on two OARs (ME_l and mandible_r); and 3) the worst results achieved by SRSF PANet on the chiasm and pituitary are worse than those of SRSF UNet and SRSF IPNet. In general, SRSF PANet segments the H&N OARs more accurately than SRSF UNet and SRSF IPNet.

Table 5 compares the segmentation performance of the proposed method and five state-of-the-art methods on the MICCAI'15 dataset. The proposed method achieves comparable, and for the mandible slightly superior, segmentation accuracy on the large-volume OARs. For the small OARs, the most accurate segmentations are achieved by the proposed method, with mean DSCs of 77.6±4.5%, 77.5±4.6%, and 69.2±7.6% on ON_l, ON_r, and chiasm, respectively. It should be noted that Zhu et al. [23] used an additional training dataset in their study. The results on the public dataset demonstrate that: (1) the segmentation accuracy of SRSF PANet is superior to that of these reference methods; and (2) both the SRSF and the PANet can easily be transferred to different OAR segmentation scenarios.
IV. DISCUSSIONS
This study developed and validated a novel PANet-based SRSF, SRSF PANet, for the automatic segmentation of OARs in CT images for H&N cancer radiotherapy. The SRSF is proposed to alleviate the volume imbalance in multi-target segmentation, especially for tiny targets such as the optic nerves, chiasm, and pituitary in this study. Excluding more background regions is a direct and efficient way to address this issue, so the multi-OAR segmentation is performed stepwise via the SRSF. The primary step provides rough segmentation, OAR localization, and prior attention, which support the subsequent segmentation refinement in different ways; the proposed SRSF is therefore compatible with different base networks, such as PANet, IPNet, and UNet, for multi-target segmentation. Furthermore, we propose prior attention in PANet to exploit the confidence probability map predicted in the previous segmentation step. To improve the segmentation accuracy on small organs, a justified inception block with reduced pooling is employed in both IPNet and PANet for feature extraction at a larger scale.
The quantitative and qualitative evaluation results (Table 4, Fig. 5, Table 5) obtained on the 39 testing cases and on the MICCAI'15 public dataset demonstrate the effectiveness of the proposed method. Compared with SRSF UNet and SRSF IPNet, the proposed SRSF PANet achieves significantly better performance on most of the OARs. In addition, the mean time needed to segment all twenty-two OARs for a new case is about 30 s, which can effectively support clinical delineation work.
As illustrated by the quantitative and qualitative evaluation results in Table 5 and Fig. 5, we can observe the following. (1) The segmentation accuracies achieved by SRSF UNet are inferior to those of SRSF IPNet and SRSF PANet, especially for very large OARs (TLs) and small OARs (ONs and chiasm). There are two reasons: the shallower network and the larger receptive fields of IPNet and PANet. To avoid losing features needed for small-OAR segmentation, we reduce the pooling operations; the resulting shallower network weakens architectures like UNet, but does not affect IPNet and PANet, whose receptive fields are enlarged by the inception block, helping them extract more useful global features for segmentation. In this way, IPNet and PANet achieve a balance between pooling and global feature extraction. (2) Even with the same receptive field, SRSF PANet still achieves superior segmentation performance over SRSF IPNet, which benefits from the combination of prior attention and convolutional spatial attention. First, the proposed SRSF provides a practicable way to use the information obtained in previous segmentation steps: it serves not only for ROI localization but also as prior attention in PANet, so the prior attention from OGS, and from LOS for group C, provides additional global information that encodes the relationships among the OARs. Second, the learnable convolutional spatial attention achieves case-specific spatial adjustment of the feature maps; being soft, it can moderate the hard attention derived from the prior. The proposed SRSF PANet is therefore sound in both theory and practice.

FIGURE 5. Quantitative comparisons of DSC and HD95 among SRSF UNet, SRSF IPNet, and SRSF PANet. The boxes run from the 25th to the 75th percentile; the two ends of the whiskers mark the 10th and 90th percentiles; the horizontal line and the cross symbol in each box denote the median and the mean, respectively. The '*' and '-' symbols above each group indicate whether a statistically significant difference exists between the two approaches.
Moreover, as shown in Table 5, Wang's method [34] achieved the best performance on the brainstem and mandible but performed poorly on the parotid. The reason is that it uses a shape regression model constructed from shape correspondences detected across all atlases; because the shape variability of the parotid is much larger than that of the brainstem and mandible, the larger error in shape-correspondence detection reduces the segmentation accuracy for the parotid. Zhu's model [23] achieved the best performance on the left parotid and right submandibular gland but performed particularly poorly on the brainstem and chiasm. When trained with an additional dataset, its segmentation accuracy improved on the brainstem and chiasm but decreased on the mandible and optic nerves. These results imply an instability of Zhu's method, which segments multiple OARs on whole CT images with a single model. In comparison, the proposed SRSF PANet is more accurate and stable, which benefits from its three innovations: the SRSF, the justified inception block with a larger receptive field, and the prior attention mechanism.
However, this study also has several limitations. 1) As shown in Fig. 5, the worst results achieved by SRSF PANet on the chiasm and pituitary are worse than those of SRSF UNet and SRSF IPNet, even though SRSF PANet performs better in most cases. Because the optic chiasm and pituitary are very small and usually appear in only one or two CT slices, the prior information tends to misguide the subsequent segmentation; the proposed model is therefore still somewhat sensitive to the prior for small-target segmentation. 2) PANet training relies on the prior probability map produced by the previous segmentation step. To improve model stability, the prior probability maps of all training data were obtained via inner five-fold cross-validation, which makes the training procedure more complex than that of a conventional model; in this study, model training took about 70 hours. In future work, other fast conventional approaches may be employed to obtain the prior and avoid this disadvantage. 3) The inception block is not the only way to obtain a larger receptive field. For example, dilated convolution can achieve a similar multi-scale effect with fewer parameters; however, the gridding problem of dilated convolution is detrimental to small-target segmentation. More refined approaches, such as the receptive field block, which combines the ideas of the inception block and dilated convolution, are therefore also worth applying to similar segmentation tasks in future work. 4) This study only considers the segmentation of H&N OARs in non-contrast CT images; we plan to apply the proposed method to more segmentation applications to assist radiotherapy treatment planning in future work. 5) The size of the evaluation dataset is limited; we plan to evaluate the proposed method on more clinical data from different anatomic sites to provide further clinical support.
In conclusion, an SRSF framework is developed for the automatic sequential segmentation of H&N OARs. Based on the SRSF, a novel PANet is proposed for more accurate segmentation by balancing the receptive field against the pooling operations and by combining soft spatial attention with hard prior attention. The good evaluation results achieved by SRSF PANet on both the independent and the public testing datasets demonstrate that the proposed SRSF PANet could be a potential tool for automatic OAR contouring in H&N cancer radiotherapy.

DONGYUN LIN CHANG received the master's degree from the Medical School, Southeast University. She is currently the Chief Technician of the Children's Hospital of Nanjing Medical University. Her research interests include tumor immunity and medical laboratory diagnosis.
YING SUN received the Ph.D. degree in imaging and nuclear medicine from Sun Yat-sen University, Guangzhou, China, in 2002. She is currently a Professor of radiation oncology and the Vice President of the Sun Yat-sen University Cancer Center, Guangzhou. Her main research interests include the individualized and precise treatment of nasopharyngeal carcinoma (NPC), artificial intelligence-assisted delineation of tumor targets and organs at risk for radiotherapy of NPC, big-data-driven risk stratification and individualized treatment of non-metastatic NPC, and translational research focused on developing prognostic and predictive markers in patients with NPC.
DONGMEI WU was a Postdoctoral Fellow with the Kimmel Cancer Center, Thomas Jefferson University Hospital. She is currently an Associate Professor with the Department of Radiation Oncology and the Department of Cancer Biology, Nanxishan Hospital of Guangxi Zhuang Autonomous Region. Her research interests include nasopharyngeal carcinoma radiotherapy and artificial intelligence application in radiotherapy.
YAO LU was a Postdoctoral Research Fellow and a Research Investigator with the Medical School, University of Michigan. He is currently a Professor with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China. His research interests include inverse problems, medical image processing, and computer-aided diagnosis. | 2020-10-09T17:43:33.425Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "fdd6288a4b174dc396613939da40272037d17182",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09209968.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "fdd6288a4b174dc396613939da40272037d17182",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259010708 | pes2o/s2orc | v3-fos-license | HALAL LIFESTYLE INDONESIA: REVIEW OF HALAL PRODUCT DEVELOPMENT IN SHARIA ENTERPRISE THEORY (SET) PERSPECTIVE
The research aims to analyze the development of halal products in fulfilling the needs of halal lifestyles in Indonesia. A review from the perspective of Sharia enterprise theory highlights important considerations for all companies that produce goods or services. This research is qualitative research using descriptive analysis, which provides a description of the research subject based on variable data obtained from certain subject groups. The results of this study explain that Allah SWT is the main stakeholder, the owner of all the resources on earth, so it is important for business actors to be accountable to Allah SWT by managing products in accordance with Islamic law. The responsibility to humans is a form of increasing concern for others and of developing products through the efforts of workers. The responsibility for the environment is an important concern for business actors, so that they can continue to develop products with due regard for environmental preservation.
Introduction
In living their lives, Muslims must be guided by the basic principles of Islam. In general, consumption is an activity carried out by humans to use goods or services to meet their needs and satisfaction directly. 1 When consuming a product, Muslims must pay attention to several provisions related to this activity, one of which concerns food as a basic support of human life. Islam has prohibited Muslims from consuming several foodstuffs, as in the following verse: Meaning: It is forbidden for you (to eat) carrion, blood, pork, (animal meat) slaughtered in the name of other than Allah. (QS. Al Maidah: 3) 2 Based on the verse above, certain foods are forbidden to eat, so consuming permissible, or halal, food is an obligation for Muslims. Halal itself is mentioned many times in the Al-Quran, for example in the verse containing the phrase halalan tayyiban, which indicates the command to consume what is good in Islam.
Etymologically, halal means things that are permissible and may be done because they are free from, or not bound by, provisions that prohibit them. 4 In general, all consumption activities of Muslims, for any product, must fall into the halal category, so that "halal" becomes a necessity for Muslims, or what may be called the lifestyle of a Muslim. Lifestyle is a way of living identified by how people spend their time, what they think about themselves, and the world around them. 5 A term that has recently become a trend is therefore the halal lifestyle, which is currently sweeping the world, not only in countries with a predominantly Muslim population but also in countries with a non-Muslim majority. 6 Indonesia, with the largest Muslim population, is a potential forum for developing a halal lifestyle; based on the data the author has explored, the Muslim population in Indonesia has continued to develop over the last five years. According to Najiatun and Maulayati, the development of halal products can be pursued in three ways: ensuring that products are not contaminated with non-halal products, obtaining halal certification, and educating consumers on the importance of halal products. 9 Halal certification is the most important requirement in the production of goods or services in Indonesia, and the number of products that have received a halal label from the MUI has also developed in the last five years. According to Fauziah, the increasing presence of halal products will create competitiveness, and a product with halal certification can gain high public trust. 12 Halal product development is not limited to obtaining a halal label; behind it, there is a process to be passed, including product management as part of the certification process. From the production stage to distribution, attention must be paid to Sharia guidelines on halal products. One form of responsibility for these activities is enterprise theory, or corporate CSR among businesses.
Islam has also regulated the accountability of a company through Sharia principles. Sharia enterprise theory is an enterprise theory that has been integrated with the values of God. The most important point in Sharia enterprise theory is that Allah is the creator and sole owner of all the resources in the world, while the assets held by stakeholders are a mandate from Allah SWT, which carries the responsibility to use them according to the methods and goals set by Allah SWT. 13 Halal product development from the perspective of Sharia enterprise theory can thus be viewed from the main principle of Allah SWT's ownership of the entire natural world, so that any human activity on earth must be accountable to God. The management of halal products must be able to show that all processes follow Allah's commands; for example, in the management of food products, the raw materials used must not be foods that are unclean for consumption. Beyond that, there is the responsibility to other stakeholders, such as fellow human beings and the environment, as a form of corporate responsibility for the activities carried out.
The author is therefore interested in reviewing the halal lifestyle trend that is currently developing well in Indonesia. The focus of the discussion is the present-day development of halal products to improve the economy through the business sector. Through an analysis of Sharia enterprise theory, the author describes the responsible development of halal products toward three stakeholders: Allah, humans, and the environment. When a business follows Sharia principles, it will be easy to obtain halal certification; halal certification will in turn create consumer confidence, and the business can then develop well in its economic activities.
Literature Review
Halal Lifestyle
Indonesia has the potential for a halal lifestyle with enormous opportunities to develop.
The world's halal lifestyle trend is also a form of righteousness because it shows how people live, work, behave, choose food for consumption, channel their interests, spend money, and allocate their time according to Sharia principles. 14 The concept of halal benefit is universal for Muslims and non-Muslims because halal covers Sharia needs and is also a concept of sustainability through cleanliness, sanitation, and safety, making halal products acceptable to consumers who care about food safety and a healthy and halal lifestyle. This proves that Muslims and non-Muslims have accepted the halal concept, which is gradually becoming a way of life. 15 The halal lifestyle is needed by all human beings, not only Muslims, because the concept of halal applies universally and, philosophically and practically, is an innovation from standard operating procedures (SOP) that existed fourteen centuries ago in Islamic Sharia. The halal lifestyle contains elements of health, safety and security, wealth, and human dignity. The term halal lifestyle is not meant to renew or to impose but rather to reintroduce rahmatan lil'alamin, the teachings of Allah SWT from a Sharia perspective as stated in the Al-Qur'an. Most people who hear the term halal will think of food and beverages, such as meat and non-alcoholic drinks; this in particular has been widely reviewed by scholars. The meaning of the word halal, in the aggregate, includes everything related to human life and lifestyle. 14
Indonesian Halal Product
In general, the halal industry is a production activity that produces goods or services in accordance with the provisions of the Islamic religion, and several processes in this industry are adjusted to a Sharia basis. 19 For example, in making goods to facilitate people's activities, Islamic values must be taken into account in the raw materials, the manufacturing process, and the benefits of the goods. As a production activity, the halal industry also has several aspects that must be considered, including:
Foundation Aspect
As one of the activities carried out by Muslims, the basis of activities in the halal industry is the value of monotheism or belief in God. Allah SWT has shown various directions of behavior for Muslims. Therefore, these guidelines must be considered in every activity of the people, including economic activities.
Aspects of Purpose
The benefit that must be achieved from an activity based on Sharia principles is the benefit of the general public. The industry should be able to meet consumer needs, increase employment opportunities, and improve the welfare of society at large. In addition, the production process must pay attention to the surrounding environment so as not to cause harm to nature.
Remuneration Aspect
This aspect fulfills the obligation of a company to provide the rights of the workers who have assisted the production process in realizing its goals. Islam has also explained the concept of wages, and it is important to pay attention to the concept of fair wages for workers. 20 Indonesia has great potential in the halal industry, especially in the following sectors:
Food and Beverages
In 2021, this sector was ranked 2nd in the global economy, which indicates the concern of the Indonesian people for halal food. Food and drink are important for sustaining human survival and are, in general, basic needs that must be met.
Fashion
This sector is a need that is no less important for the people of Indonesia, as shown by its 3rd-place ranking in the global economy for meeting the needs of the halal industry. Clothing is one of the three basic needs humans must meet, alongside food and shelter; for Muslims, clothing supports worship and daily activities.
Travel
This sector provides tourism facilities used for community recreation. The need for Sharia tourism is growing rapidly, considering that the development of local potential based on local wisdom is being intensively promoted by the government.
Pharmaceuticals and Cosmetics
This sector is no less important in supporting the lives of Muslims; Indonesia occupies the 9th position in the global economy in fulfilling the need for medicines and cosmetics.
Pharmaceutical products are used to maintain people's health, especially now that a pandemic requires vaccines, which must be made from substances that are good for Muslims. Besides that, cosmetics support the daily activities of Muslims as a form of art and beauty that can be developed to improve people's self-confidence. 21
Sharia Enterprise Theory (SET)
Sharia enterprise theory is an enterprise theory that has been internalized with Islamic values to produce a metaphysical and more humanist theory. 22 The first concept encourages the understanding that tangible assets carry within them the rights of others. This understanding brings an important change to the premise of Sharia enterprise theory, which is to distribute wealth based on the participants' contributions, namely, those who contribute finances or skills. This thinking rests on the premise that humans are khalifatullah fil ardh, whose mission is to create and distribute prosperity for all humans and nature, and it encourages Sharia enterprise theory to realize the value of justice for humans and the natural environment.
Therefore, Sharia enterprise theory will benefit stakeholders, society, and the environment. 24 In principle, Sharia enterprise theory provides a form of accountability primarily to Allah (vertical accountability), which then extends to accountability to humans and to nature (horizontal accountability). The final premise is falah, true success in business in the form of achieving prosperity, which includes (spiritual) happiness and (material) prosperity at the individual and societal levels. 25
Method
This research is qualitative research using descriptive analysis, which provides an overview or description of the research subject based on variable data from certain subject groups. 26 The data used are secondary data obtained through intermediaries and usually presented without having to be collected directly from the source. 27 This study uses a literature study, which obtains data from various sources such as books, scientific articles, official websites, and reports on the object under study. 28 The data are recorded, read, and processed to help answer the research problems and are analyzed against the existing literature in books and scientific articles.
Result and discussion
Indonesia has various industrial sectors that are useful for meeting the need for halal products for the Muslim community. Four industrial sectors need to be developed to increase the halal lifestyle in Indonesia.
Food and Beverages
These products are the basic needs of the people that must be met. Islam has clearly explained the obligation to consume what is halal, so a Muslim producer must pay attention to processing in accordance with Islamic law. In the Sharia enterprise theory concept, companies are accountable for realizing Sharia values, so a producer bears responsibility for every product consumed by the public; specifically for the Muslim community, the product must be halal in its raw materials and its processing.
Fashion
Besides food, clothing is also among the people's basic needs. Muslims must always worship Allah, and one expression of this obedience is wearing good clothes when facing Him.
Many Muslim clothing manufacturers meet the needs of community worship. In relation to the concept of Sharia enterprise theory, however, accountability in this industry can be realized through proper waste management: the clothing industry involves the processing of raw materials, which creates residual waste, and this waste must be managed properly so as not to pollute the environment or disturb the human ecosystem.
Travel
The need for recreation is part of fulfilling the secondary needs of the people. Primary needs must be met first, then secondary needs; nevertheless, recreation is also important for entertainment and peace of mind after one's activities. The concept of Sharia enterprise theory in the travel industry can be realized through Islamic business travel and visits to Islamic tourism destinations, which therefore need to be optimized. The goal is that recreation is not a means toward immoral ends but a fulfillment of religious tourism that increases the community's religiosity.
Pharmaceuticals and Cosmetics
This industry is very important for the wider community. Medicines eliminate the harm of the diseases suffered by the people, while cosmetics are used to beautify oneself and therefore should not be used excessively. Drug and cosmetic processing in Indonesia must comply with BPOM and MUI halal permits, which relates to fulfilling the halal aspect of the medicinal and cosmetic products sold. Sharia enterprise theory in the pharmaceutical and cosmetic industry concerns responsibility in production: all processes for making drugs and cosmetics must comply with Sharia.
The development of halal products in Indonesia is a form of fulfilling the increasing needs of the Muslim community, for whom the consumption of halal products has become a lifestyle. What sometimes becomes a problem, however, is gaining public trust in the halal labeling of a product, so halal certification is important for convincing consumers.
According to Alfian and Marpaung, the halal label influences the purchasing decisions of the Muslim community in Medan City. 29 This is in line with Widyaningrum's research, which also states that the halal label significantly influences purchases of Wardah cosmetics in Ponorogo. 30 The importance of halal labeling on a product in Indonesia must, of course, be noticed by all business owners. However, obtaining this halal label requires a process and verification that all production activities comply with Sharia principles.
Product management adapted to Sharia principles will make obtaining halal certification easier for an entrepreneur, since certification examines how Sharia principles are applied to a product, including accountability for the production process. Usually, corporate responsibility ends in social activities; Sharia enterprise theory, however, focuses primarily on accountability to several stakeholders. In line with the research of Rinovian and Suarsa, the stakeholders in Sharia enterprise theory include Allah SWT, humans, and the environment. 31 Halal product entrepreneurs in Indonesia must pay attention to the responsibilities toward these three stakeholders, which can be described as follows: 1. Responsibility to Allah SWT. As the basis of human activities on earth, economic activities must be adjusted to the commands of Allah SWT, including the management of products that come from natural resources belonging to Allah SWT. For example, in the management of halal food, Surah Al Maidah verse 3 prohibits the consumption of several food ingredients; when halal food entrepreneurs pay close attention to this aspect of the raw materials, it will be easier to certify the halal label.
This responsibility to Allah SWT is a manifestation of a person's awareness of who owns the entire universe: Muslims must realize that the entire universe belongs to Allah SWT, so every human activity must be accounted for. A study has explained that the management of natural resources is God's commandment, and that every person must work hard to seek wealth to meet their needs; the universe belongs to Allah SWT, but humans are commanded to manage it. 33 Entrepreneurs in the halal industry in Indonesia embody the people's efforts to seek sustenance in meeting their daily needs. However, business managers must pay attention to the provisions of Islamic law.
According to Athiroh, several aspects of the halal industry must be considered, including the foundation aspect. This aspect explains that in every activity carried out by Muslims, the basis of activity in the halal industry is the value of Tawhid, or belief in God. Allah SWT has shown various directions of behavior for Muslims; therefore, these guidelines must be considered in every activity of the people, including economic activities. 34 Developing halal products with responsibility to Allah reminds all entrepreneurs to manage their products well: all human activities will be accounted for before God in the hereafter. In addition, Islamic law is a form of benefit for the people, and all forms of command in Islamic teachings contain falah for mankind in general.
In the halal certification process, an analysis will be carried out of the product's raw materials, so taking Sharia-compliant production into account will facilitate this process. The hope is that, through special attention to religious values in production, halal products will be created, which are what today's Muslim society seeks. Fulfilling the needs of Muslim consumers well will create business development opportunities and thereby improve the economy of the owner of halal products.
Responsibility to Humans
Attention to this aspect is a form of a business actor's concern for fellow human beings.
According to Athiroh, there is an aspect of purpose in the existence of Indonesia's halal industry. In this aspect, the goal is the benefit that must be achieved from an economic activity based on Sharia principles for the benefit of the general public: the industry should be able to meet consumer needs, increase employment opportunities, and improve the welfare of society at large. 35 Halal product development involves several stakeholders, including, within a company, the workforce or employees needed to carry out the production process through to distribution. According to Irawati, the workforce is the main resource a company needs to develop. 36 Halal product development from the perspective of Sharia enterprise theory must therefore also pay attention to the aspect of responsibility to fellow human beings. Providing work opportunities for other people can, of course, improve the community's economy, and the role of the productive industrial sector in the people's economy is very large. In this regard, the Sharia economy has regulated how to manage workers according to Sharia in order to create a buoyant economy and mutual benefit among people.
Halal products can develop with the help of a workforce that persistently helps meet the needs of Muslim consumers, so a worker's rights must be fulfilled as a form of responsibility to humans, for example by paying wages on time, as taught in the hadith.
Responsibility to the Environment
Allah has bestowed grace in the form of all the abundant natural resources in the world. Humans only manage them for their benefit; nature, however, needs to be preserved, not destroyed by overexploitation.
According to Pratiwi and Wuryani, good production management must be a concern for companies whose waste can pollute the environment. 38 The latest development concept is a solution for overcoming the consequences of environmental damage, since a lack of proper planning of economic activities can cause damage to nature. 39 Halal product development with this concern is a form of responsibility to the environment.
This is related to the task of humans on earth as managers and, at the same time, guardians of nature, in the hope that the benefits of nature will continue to be felt by mankind. Islam has ordered humans to take care of the environment, as in the following verse: Meaning: And do not make mischief on the earth after (Allah) has repaired it, and pray to Him with fear (that it will not be accepted) and hope (that it will be granted). Indeed, Allah's mercy is very close to those who do good (QS. Al-Araf: 56) 40 Attention to this aspect will realize the management of halal products that cares about environmental sustainability; when the environment is damaged, it will certainly cause harm.
Natural resources are a source of benefit for humans when managed properly. The development of halal products requires the contribution of the surrounding natural environment, for example for raw materials or for product management processes.
Every production activity will certainly have an impact on the surrounding environment. This is in line with research by Siregar and Nasution, which states that the adverse effects of economic activities on the environment include pollution, a reduction in the species of living things, and the disruption of ecosystems. It is very important for business actors to preserve nature as a form of responsibility to the environment. Halal product development will not be achieved if practices that damage the environment persist, because the resources for a product come from the environment.
Conclusions
The development of halal products to fulfill the needs of a halal lifestyle in Indonesia needs to pay attention to several aspects of Sharia enterprise theory. Attention to Sharia enterprise theory means managing a product according to Sharia principles while being responsible to several stakeholders. Allah SWT is the main stakeholder and the owner of all resources on earth, so business actors need to be accountable to Allah SWT by managing products in accordance with Islamic law. The responsibility to humans is a form of increasing concern for others and of developing products through the efforts of workers. The responsibility for the environment is an important concern for business actors, so that they can continue to develop products with due regard for environmental preservation. | 2023-06-02T15:17:33.424Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "ebe3d4355f43d173b569b4c11331f469c371d094",
"oa_license": "CCBY",
"oa_url": "https://ejournal.uinsatu.ac.id/index.php/nisbah/article/view/7466/2260",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5dc8b657c336b4e5ad03155a012bc8df5ef25411",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
900024 | pes2o/s2orc | v3-fos-license | High-Resolution MRI Evaluation of Neonatal Brachial Plexus Palsy: A Promising Alternative to Traditional CT Myelography
BACKGROUND AND PURPOSE: Despite recent improvements in perinatal care, the incidence of neonatal brachial plexus palsy remains relatively common. CT myelography is currently considered to be the optimal imaging technique for evaluating nerve root integrity. Recent improvements in MR imaging techniques have made it an attractive alternative to evaluate nerve root avulsions (preganglionic injuries). We aim to demonstrate utility of MR imaging in the evaluation of normal and avulsed spinal nerve roots. MATERIALS AND METHODS: All study patients with clinically diagnosed neonatal brachial plexus palsy underwent MR imaging by use of a high-resolution, heavily T2-weighted (driven equilibrium) sequence. MR imaging findings were reviewed for presence of nerve root avulsion from C5–T1 and for presence of pseudomeningocele. The intraoperative findings were reviewed and compared with the preoperative MR imaging findings. RESULTS: Thirteen patients (9 male, 4 female) underwent MR imaging; 6 patients underwent nerve reconstruction surgery, during which a total of 19 nerve roots were evaluated. Eight avulsions were noted at surgery and in the remainder, the nerve injury was more distal (rupture/postganglionic injury). Six of the 8 nerve root avulsions identified at surgery were at C5–6 level, whereas 1 nerve root avulsion was identified at C7 and C8 levels, respectively. The overall sensitivity and specificity of MR imaging for nerve root avulsions was 75% and 82%, respectively. CONCLUSIONS: Our preliminary results demonstrate that high-resolution MR imaging offers an excellent alternative to CT myelography for the evaluation of neonatal brachial plexus palsy with similar sensitivity compared with CT myelography.
Neonatal brachial plexus palsy (NBPP) results from insult to the brachial plexus during the perinatal period. 1 NBPP can result when the upper shoulder of the infant becomes blocked by the pubic symphysis of the mother. 2,3 Nerve injury can occur anywhere along the brachial plexus but generally occurs in the supraclavicular brachial plexus at the nerve root/trunk levels, resulting in varied neurologic deficits. Damage to the nerve roots arising from the ventral aspect of the spinal cord results in motor function disability. The most common lesions occur within the C5 and C6 spinal nerves (80% of patients), with a smaller group of patients having more extensive lesions ranging from C5-C7 and from C5-T1 (pan-plexopathy). 1,3 Collectively, the clinical presentation resulting from these lesions is referred to as NBPP.
NBPP occurs with an incidence of up to 3 per 1000 live births. 1 The most severe forms of injury result from complete axonal disruption (neurotmesis or severe axonotmesis), either at the level of the proximal nerve roots or trunks of the brachial plexus (ruptures, postganglionic injury), or when 1 or more of the spinal nerves of the brachial plexus are torn out of the spinal cord (root avulsion, preganglionic injury). 4 In these cases, the likelihood of spontaneous recovery is low and surgical intervention is generally thought to be reasonable. 5 Less severe injuries such as simple stretching of the nerves (neurapraxia) or rupture of a few axons (mild axonotmesis) can result in spontaneous functional recovery 3 (Fig 1). The clinical treatment of patients with NBPP can be difficult and depends on the specific type of lesion involved. Early on, it is often difficult to characterize the lesion type because patients may clinically present with similar apparent deficiencies regardless of the levels involved. This presents a diagnostic and management dilemma because patients with neurapraxia/mild axonotmesis will demonstrate spontaneous recovery over time, whereas the effectiveness of surgical intervention for neurotmesis or nerve root avulsion decreases with time. The typical practice at this time is to allow the patient a 3-month period in which to exhibit spontaneous recovery. 4,5 If recovery does not occur or is incomplete, further evaluation is recommended to determine the extent of injury. Perhaps there is a role for imaging earlier after birth because patients with minor injuries could be given a more favorable prognosis without the waiting period, at the admitted increased medical costs. However, it is possible that early imaging can save medical costs downstream by identifying patients who do not need more extensive follow-up and evaluation in the future.
Although direct surgical exploration may be considered the reference standard for lesion characterization, it carries significant morbidity and would require laminectomy to observe the intradural nerve roots. For this reason, CT myelography (CTM) and electrodiagnostic studies have been used as less invasive techniques and comprise the standard preoperative assessment for establishing preganglionic nerve root avulsion and postganglionic nerve ruptures in neonatal, pediatric, and adult populations. 4,6 CTM is most useful for detection of root avulsions (72% sensitivity), whereas electrodiagnostic studies are best at detecting postganglionic nerve ruptures, especially in the upper plexus (93% sensitivity). 6 These 2 tests are generally used in combination with one another to provide the neurosurgeon with supplemental preoperative information. 6 Although CTM is currently the most widely used imaging method for evaluating nerve root avulsion, there are drawbacks. It requires an invasive lumbar puncture, instillation of iodinated contrast into the thecal sac, and the use of radiation, all of which carry unfavorable risks, particularly within the infant and pediatric populations. Nevertheless, CTM continues to be recommended in every preoperative assessment for NBPP at many specialty centers. 4,6 Recent improvements in MR imaging techniques have made MR imaging an attractive alternative to conventional CT. MR imaging is noninvasive, does not require the use of intrathecal contrast, and does not use ionizing radiation. This tech-nique, if effective at diagnosing nerve root avulsion, can emerge as an alternative technique to CTM in the pediatric population. To date, however, there are only a few reports contained in the literature examining the utility of MR imaging for nerve root avulsions and none looking specifically at NBPP. [7][8][9][10][11] The reports contain scant imaging examples of nerve root avulsion, and many of the images are not convincingly diagnostic. 12 Most of the reports focus on the use of a heavily T2-weighted 3D sequence, referred to under various names on the basis of the specific manufacturer, such as 3D CISS (constructive interference in steady state), 3D True-FISP (fast imaging with steady-state precession), FIESTA (fast imaging employing steadystate acquisition), and DRIVE (driven equilibrium) sequences. 7 The end goal of these sequences is the same: to create a sequence with very high CSF-to-tissue contrast with elimination of pulsation artifact, to optimally visualize the exiting cervicothoracic nerve roots. 13 Until now, however, there are no studies that unequivocally and consistently demonstrate high-quality images of nerve root avulsion. Some propose that it lacks the requisite spatial resolution to provide the neurosurgeon with necessary diagnostic information, 14 though more recent advances in high-resolution 3T MR challenge this proposition. 15 Our aim was to use high-resolution MR imaging in evaluation of ventral nerve root avulsions in NBPP and to demonstrate that it is an excellent noninvasive and nonirradiating alternative to CTM.
MATERIALS AND METHODS
Institutional review board approval was obtained, and patient consent was waived for this Health Insurance Portability and Accountability Act-compliant prospective study. Patients were referred to our institution for evaluation of NBPP. All patients were given an obligatory observation period of 3-4 months to assess for spontaneous recovery; if clinical improvement was not forthcoming, they were referred for additional evaluation including MR imaging. MR imaging examination was performed on a 3T magnet (Ingenia; Philips Healthcare, Best, the Netherlands). We used a high-resolution 3D T2 DRIVE sequence with TR/TE of 1500/100 ms, a TSE factor of 40, a uniform voxel size of 0.6 mm with a field of view of 80 mm, and a reconstruction matrix of 320 × 320. A sensitivity encoding factor (parallel imaging) of 1.6 was used. Total scan time for this sequence was 8 minutes, 43 seconds. Sagittal and coronal reformatted images on both right and left sides were obtained in all patients and reviewed. All imaging studies were independently reviewed by 2 board-certified and pediatric neuroradiology-trained radiologists. Findings of the presence or absence of ventral nerve root avulsions were recorded by consensus. At the time of writing, 6 of the 13 patients had proceeded to surgery. The findings at the time of surgery were recorded, and the initial radiologic diagnoses were then compared with the surgical findings for any discrepancy (Table).
RESULTS
Thirteen patients (9 male, 4 female) with clinically diagnosed NBPP underwent MR imaging evaluation. Average age at the time of imaging was 6 months. MR imaging was successful and able to visualize the individual ventral and dorsal nerve roots in all patients (Fig 2A-C). We used axial images as primary images for our analysis; sagittal and coronal reformatted images complemented the axial images by showing multiple nerve roots at the same time. Of the 13 patients, 6 underwent brachial plexus exploration and nerve reconstruction, from which a total of 19 nerve roots were evaluated. At surgery, 8 ventral nerve root avulsions were noted. Overall, MR imaging was 75% sensitive and 83% specific in the preoperative detection of these ventral nerve root avulsions. The positive and negative predictive values were also 75% and 83%, respectively. A more in-depth, level-by-level analysis demonstrated that 6 of 8 surgically confirmed avulsions occurred at the C5 and C6 levels (Fig 3A-F). At C5, there was 1 avulsion, which was correctly identified by means of MR imaging (100% sensitive and specific). At C6, there were 5 surgically proved avulsions. Of these, MR imaging was successful in detecting 3 of the lesions (60% sensitive, 100% specific). Interestingly, the remaining 2 avulsions that were not detected did not have any evidence of associated pseudomeningocele, either on imaging or at surgery. The C6 level nerve roots were reanalyzed again in light of surgical data, but no obvious technical issues on MR images were noticed to explain this discordance. At C7, there was 1 surgically confirmed avulsion that was detected by MR imaging; however, there were 2 additional false-positive C7 avulsions (100% sensitive, 60% specific, 33% positive predictive value). MR imaging correctly identified 1 avulsion at C8. An additional C8 avulsion was detected, but this level was not explored surgically and thus there was no confirmation. At the T1 level, MR imaging did not detect any abnormalities. Only 1 of these T1 levels was surgically confirmed to be normal, whereas none of the other T1 roots were explored, and thus confirmation of whether these roots were actually intact was not possible. MR imaging detected 5 pseudomeningoceles (Fig 4) occurring in 3 patients, all of which were associated with nerve root avulsions. Of note, no MR imaging avulsions were identified in the absence of a pseudomeningocele. Interestingly, however, there were 2 surgically confirmed C6 avulsions. In both of these cases, there was no evidence of pseudomeningocele.
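For readers who want to trace the headline figures back to the 19 surgically evaluated roots, one confusion matrix consistent with the reported values is sketched below. The exact per-root split between true and false calls is our reconstruction (the paper reports only the derived percentages), so treat the counts as assumptions.

```python
# Assumed split of the 19 surgically evaluated roots (8 avulsed, 11 intact) that
# reproduces the reported ~75% sensitivity/PPV and ~82-83% specificity/NPV.
tp, fn = 6, 2   # avulsions detected / missed by MR imaging
tn, fp = 9, 2   # intact roots correctly identified / falsely called avulsed

sensitivity = tp / (tp + fn)   # 0.75
specificity = tn / (tn + fp)   # ~0.82
ppv = tp / (tp + fp)           # 0.75
npv = tn / (tn + fn)           # ~0.82
print(sensitivity, specificity, ppv, npv)
```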
DISCUSSION
NBPP includes a wide array of injuries, primarily to the exiting nerve roots and trunks of the brachial plexus. Identification of avulsion (preganglionic) injuries is critical for maximizing outcomes by early surgical intervention and for operative planning: the nerve reconstruction strategy for avulsion injuries significantly differs from that for ruptures (postganglionic injuries). In the case of ruptured nerve roots or trunks, autologous nerve grafting is usually used. A harvested nerve, often the sural nerve, is used to bridge the gap between the disrupted elements of the brachial plexus. 3 This provides a physical pathway as well as neurotrophic factors to stimulate axonal outgrowth. This surgery is often accomplished by means of a supraclavicular approach. 5 For avulsion injuries, by contrast, there is currently no feasible way to reattach the avulsed root to the spinal cord. The solution is a nerve transfer, in which an extra-plexus exiting nerve is cut and coapted to the denervated brachial plexus terminal nerve. 3 In the present study, we evaluated a total of 13 patients by use of high-resolution MR imaging, of whom 6 went on to surgery for confirmation and repair. In terms of overall performance, MR imaging demonstrated a sensitivity of 75% and a specificity of 83%, both of which are comparable to the results published on CTM. Vanderhave et al 6 found the sensitivity of CTM as compared with surgical exploration to be 50% at levels C5 and C6, 83.3% at C7, and 75% at C8 and T1. There were 2 instances in which MR imaging failed to detect nerve root avulsion, both at the C6 level in different patients. These avulsions were detected at surgery, and these patients had no other roots that were avulsed. Curiously, no secondary findings such as pseudomeningocele were identified, either on imaging or at surgery. Isolated root avulsion without such secondary findings of trauma is quite unusual, and no satisfactory anatomic explanation exists; perhaps there was scarring of the pseudomeningocele in the interval from the time of injury to the time of imaging. Nonetheless, these 2 cases provide excellent examples in which nerve root avulsions can exist at surgery without the presence of pseudomeningocele formation. Therefore, it is imperative that MR and CT emphasize the imaging of the actual nerve roots and that the neuroradiologist not rely solely on the detection of pseudomeningoceles to confirm or discount the existence of a root avulsion. Our results demonstrate that MR imaging offers an excellent alternative to CT myelography in the evaluation of complete brachial plexus nerve root avulsion(s). The high-resolution MR imaging technique provides unambiguous visualization of intact nerve roots and accurate assessment of nerve root avulsions. MR imaging is also able to clearly show the dorsal and ventral nerve roots in all 3 planes. It is important to differentiate the dorsal and ventral nerve roots and to assess whether they are intact or avulsed, because an intact dorsal nerve root can be used by the neurosurgeon as a donor for an avulsed ventral nerve root.
There are limitations to this study. First and foremost, the sample size is small, with 13 patients included in the study, only 6 of whom went on to surgery. Nevertheless, this is the first study of its kind to compare MR imaging findings with the reference standard of surgical exploration in the setting of NBPP. Another limitation stems from the lack of comparison CTM within this study group. Ideally, CTM and MR imaging would have been obtained in all patients to allow a one-to-one comparison of the accuracy of the tests, but, given the ethical considerations, this was not deemed possible; thus, the sensitivities and specificities for MR imaging were compared with those of CTM in the already published literature. The third limitation is that, with the exception of 1 patient, the remaining C8 and T1 levels were never surgically observed. This in part had to do with the alternate, more in-depth surgical approach necessary to access these levels. As a result, 1 C8 avulsion was detected on MR but never underwent surgical confirmation; this was an isolated, non-surgically confirmed avulsion. All other avulsions were surgically confirmed or refuted.

Figure: Axial high-resolution MR imaging in a 4-month-old boy with clinically suspected right-sided brachial plexus palsy shows a pseudomeningocele at the right C5-6 level (arrow). Note the absent nerve roots on the right side, suggestive of nerve root avulsion injury. Compare with the normal ventral and dorsal nerve roots on the left side.
CONCLUSIONS
By prospectively examining 13 patients with clinically diagnosed NBPP, we have demonstrated the potential utility of MR imaging for providing reliable preoperative diagnoses of the type and extent of injury. It has proven value and has supplanted the use of CTM at our institution. Given that it is both noninvasive and nonirradiating while still providing all of the diagnostic information necessary to aid our neurosurgical colleagues, MR imaging should be the recommended technique in evaluating nerve root avulsion injuries in patients with NBPP.
"year": 2014,
"sha1": "3a9bf7e0ccb2db07f130dcde0dcf1503db0ef632",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/ajnr/35/6/1209.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "0ba756504c2dfcce9b88e3bdc2b49314de9a8cf4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Influence of Ion Diffusion on the Lithium–Oxygen Electrochemical Process and Battery Application Using Carbon Nanotubes–Graphene Substrate
Lithium–oxygen (Li–O2) batteries are nowadays among the most appealing next-generation energy storage systems in view of a high theoretical capacity and the use of transition-metal-free cathodes. Nevertheless, the practical application of these batteries is still hindered by limited understanding of the relationships between cell components and performances. In this work, we investigate a Li–O2 battery by originally screening different gas diffusion layers (GDLs) characterized by low specific surface area (<40 m2 g–1) with relatively large pores (absence of micropores), graphitic character, and the presence of a fraction of the hydrophobic PTFE polymer on their surface (<20 wt %). The electrochemical characterization of Li–O2 cells using bare GDLs as the support indicates that the oxygen reduction reaction (ORR) occurs at potentials below 2.8 V vs Li+/Li, while the oxygen evolution reaction (OER) takes place at potentials higher than 3.6 V vs Li+/Li. Furthermore, the relatively high impedance of the Li–O2 cells at the pristine state remarkably decreases upon electrochemical activation achieved by voltammetry. The Li–O2 cells deliver high reversible capacities, ranging from ∼6 to ∼8 mA h cm–2 (referred to the geometric area of the GDLs). The Li–O2 battery performances are rationalized by the investigation of a practical Li+ diffusion coefficient (D) within the cell configuration adopted herein. The study reveals that D is higher during ORR than during OER, with values depending on the characteristics of the GDL and on the cell state of charge. Overall, D values range from ∼10–10 to ∼10–8 cm2 s–1 during the ORR and ∼10–17 to ∼10–11 cm2 s–1 during the OER. The most performing GDL is used as the support for the deposition of a substrate formed by few-layer graphene and multiwalled carbon nanotubes to improve the reaction in a Li–O2 cell operating with a maximum specific capacity of 1250 mA h g–1 (1 mA h cm–2) at a current density of 0.33 mA cm–2. XPS on the electrode tested in our Li–O2 cell setup suggests the formation of a stable solid electrolyte interphase at the surface which extends the cycle life.
■ INTRODUCTION
The pressing need for efficient energy storage to stabilize the renewable power grids and provide satisfactory autonomy to electronic devices, including electric vehicles, has driven remarkable progress in the field of rechargeable batteries. 1,2 Moreover, excessive ambient pollution and anomalously fast climate change during recent years have focused research efforts on developing sustainable technologies that can effectively replace Li-ion batteries based on critical and expensive raw materials, e.g., Co, Ni, and Mn. 3 Among the various electrochemical energy storage systems, lithium−sulfur (Li−S) and Li−O 2 batteries rely on abundant cathode materials, limiting their environmental and economic impact compared to Li-ion batteries. 4−6 Furthermore, Li can electrochemically react with either S or O 2 according to conversion processes involving the exchange of multiple electrons and ions, leading to practical energy densities above 500 W h kg −1 , outperforming the state-of-the-art Li-ion batteries based on Li + -insertion-type electrodes. 7,8 Particular interest has been devoted to rechargeable Li−O 2 batteries operating in organic solvents because of their notable energy density (i.e., ∼3400 W h kg −1 for the schematic reaction Li 2 O 2 ⇄ 2Li + O 2 ) and potentially low life cycle environmental burdens. 5,9 A relevant boost to these intriguing systems has been achieved by the use of ad hoc-designed electrolytes, including those based on glymes with the general formula CH 3 O(CH 2 CH 2 O) n CH 3 , characterized by chemical and electrochemical stability, as well as by limited cost and low toxicity. 10,11 In particular, glymes with sufficiently long chains and low volatility can form in Li−O 2 batteries stable coordination complexes with the reactive peroxide and superoxide radicals during ORR, 12,13 and can withstand oxidation at potentials as high as 4.8 V vs Li + /Li upon OER. 6 The effect of the Li salt nature and concentration on the operation of the Li−O 2 cell has been investigated by several studies, reporting promising results for cells using lithium trifluoromethanesulfonate (LiCF 3 SO 3 ) and lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in glyme-based electrolytes characterized by high Li + transference number and ionic conductivity, e.g., with tetraethylene glycol dimethyl ether (TEGDME) as the solvent. 6,14,15 Although the role of Li + diffusion to the electrode−electrolyte interphase in determining cell performance has been widely investigated for Li-ion 16−19 and Li−S batteries, 20,21 only a limited number of studies have correlated the kinetics of Li + diffusion with the performances of Li−O 2 batteries. 22 Efficient ORR/OER processes have been suggested for Li−O 2 cells using GDLs to facilitate the diffusion of the involved species, combined with various substrates which promote the reaction kinetics, e.g., nanosized carbon, 14,23,24 metals, 25−28 metal oxides, 29−31 and conductive polymers. 32 Based on these premises, herein we report a detailed study of various commercially available GDLs used as the support for the cathode material. We investigated in depth the effects of Li + diffusion on the electrochemical process of Li−O 2 batteries using these GDLs, which have different morphological and structural characteristics, as determined through scanning electron microscopy (SEM), X-ray diffraction (XRD), N 2 physisorption measurements, and thermogravimetric analysis (TGA).
The ORR and OER were examined through cyclic voltammetry (CV) measurements, while the evolution of the electrode/electrolyte interphase was monitored through electrochemical impedance spectroscopy (EIS) measurements. The diffusion kinetics were studied with galvanostatic intermittent titration technique (GITT), identifying the most suitable GDL to be combined with few-layer graphene (FLG) flakes and multiwalled carbon nanotubes (MWCNTs) for further improving the process in Li−O 2 cells. MWCNTs have been chosen due to their optimal morphology that triggers an extremely reversible electrochemical process, 14 while FLG flakes have been selected since they strongly enhance the stability of the MWCNT film on the GDL, improve the surface characteristics, and avoid cracks, thus increasing the cycle life of the cell. The identification of the correlation between electrode properties, Li + diffusion kinetics, and cell performances is here proposed as an effective approach to design efficient and high-energy density Li−O 2 batteries for practical applications. ■ EXPERIMENTAL SECTION Material Characterization. Gas diffusion layers (GDL Sigracet Ion Power), referred to as 22BB, 28BC, 36BB, and 39BB, bare MWCNTs (>90% carbon basis, D × L: 110−170 nm × 5−9 μm, Sigma-Aldrich), and FLG produced by wet-jet mill (WJM) method (BeDimensional S.p.A.) 33 were characterized by SEM, XRD, and TGA measurements. SEM images were acquired with a Zeiss EVO 40 microscope using back-scattered electrons and secondary electrons modes, while the corresponding EDS elemental mapping was recorded with a X-ACT Cambridge Instruments analyzer coupled to the SEM equipment. The XRD patterns of the GDLs were collected through a Bruker D8 Advance using a Cu Kα source (8.05 keV) by performing scans over the 2θ range between 10 and 60°with a step size of 0.02°and a rate of 10 s per step. The TGA measurements of the GDLs were carried out in the 25−1000°C temperature range under N 2 flow with a rate of 5°C min −1 , using a TGA 2 Mettler-Toledo instrument. The specific surface area and the porosity of the GDLs were determined by N 2 adsorption at 77 K with an automated gas sorption analyzer (AutoSorb iQ, Quantachrome Instruments, USA). The samples were degassed under vacuum conditions at 150°C overnight before each measurement. Specific surface area was calculated using the multi-point Brunauer−Emmett− Teller (BET) method, 34 considering equally spaced points in a relative pressure range P/P 0 from 0.05 to 0.30 with a correlation coefficient of above 0.999. The total pore volume was directly calculated from the volume of N 2 held at the highest relative pressure (P/P 0 = 0.99). The non-local density functional theory (NLDFT, implemented into Quantachrome's data reduction software) 35 was applied to the gas adsorption data using a slit-shape model to describe the pore-size distributions (PSDs) of the samples.
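As a rough illustration of the multi-point BET evaluation described above, the following Python sketch fits the linearized BET equation, 1/[v((P 0 /P) − 1)] = (C − 1)/(v m C)·(P/P 0 ) + 1/(v m C), over the 0.05−0.30 relative-pressure window and converts the monolayer capacity v m into a specific surface area. The isotherm points are invented placeholders; the instrument software applies the same regression to the measured data.

```python
import numpy as np

N_A   = 6.022e23     # molecules per mole
SIGMA = 0.162e-18    # m^2, cross-sectional area of an adsorbed N2 molecule
V_STP = 22414.0      # cm^3 (STP) per mole of gas

def bet_surface_area(p_rel, v_ads):
    """Multi-point BET fit.
    p_rel: relative pressures P/P0 (taken in the 0.05-0.30 window);
    v_ads: adsorbed N2 volume in cm^3(STP) per gram of sample.
    Returns the specific surface area in m^2 per gram."""
    y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))     # linearized BET ordinate
    slope, intercept = np.polyfit(p_rel, y, 1)  # least-squares straight line
    v_m = 1.0 / (slope + intercept)             # monolayer capacity, cm^3(STP)/g
    return v_m / V_STP * N_A * SIGMA            # m^2/g

# Placeholder isotherm points for a low-surface-area carbon paper
p = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([2.9, 3.3, 3.6, 3.9, 4.2, 4.5])    # cm^3(STP)/g, hypothetical
print(f"BET surface area ~ {bet_surface_area(p, v):.0f} m2/g")
```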
Assembly were assembled under an Ar atmosphere by stacking a GDL disc, a glass fiber Whatman GF/B separator with a diameter of 18 mm soaked with an excess (ca. 200 μL) of the electrolyte solution, and a Li disc with a diameter of 14 mm as the counter electrode. This twoelectrode setup may have additional polarization compared to possible three-electrode configuration, in particular in view of Li reactivity. However, the above cell (i.e., top-meshed CR2032 coin cell) represents the most diffused system for practical Li−O 2 battery characterization. 36 Subsequently, the cells were inserted in sealed glass chambers and filled with pure oxygen to achieve the Li−O 2 system. The electrolyte solution consisted of TEGDME (≥99%, Sigma-Aldrich) dissolving LiCF 3 SO 3 (99.995% trace metals basis, Sigma-Aldrich) conductive salt with a concentration of 1 mol kg solvent −1 . Before electrolyte preparation, TEGDME was kept in Ar-filled glovebox under molecular sieves (3 Å, rod, size 1/16 in., Honeywell Fluka) previously dried under vacuum at 280°C for 5 days, until a water content lower than 10 ppm was verified by a 899 Karl Fischer Coulometer (Metrohm), while LiCF 3 SO 3 salt was dried under vacuum for 2 days at 110°C. The electrochemical characterization of Li−O 2 cells was carried out by means of CV and EIS measurements using a VersaSTAT MC Princeton Applied Research (PAR) potentiostat/galvanostat. The CV measurements consisted of three subsequent potential scans between 2.5 and 4.2 V vs Li + /Li at 0.05 mV s −1 , while EIS spectra of the cells were recorded at the opencircuit voltage (OCV) condition and after each voltammetry cycle. Additional CV−EIS measurements were run on Li−O 2 cells using a CV potential range of 1.5−4.3 V vs Li + /Li with a scan rate of 0.05 mV s −1 and performing EIS at the OCV condition and after each voltammetry cycle. All EIS spectra were recorded through an AC voltage signal with an amplitude of 10 mV in the 500 kHz to 100 mHz frequency range. The spectra were subsequently fitted by an equivalent electrical circuit model using the non-linear least squares (NLLS) method through Boukamp software. 37,38 Only fits with a chisquare (χ 2 ) value of the order of 10 −4 or lower were considered. EIS measurements were also conducted on symmetrical Li−Li and GDL(39BB)-GDL(39BB) cells in an O 2 atmosphere at the OCV condition in the 500 kHz to 100 mHz frequency range with AC voltage signal with an amplitude of 10 mV. Polarization curves were recorded through galvanodynamic reduction scans between 0 and −20 mA on either a Li−Li and Li-GDL(39BB) cells in an O 2 atmosphere using a step height of 0.1 mA and a step time of 10 s. Galvanostatic charge/discharge cycling measurements were carried out on Li−O 2 cells using the various GDLs by applying a current of 0.2 mA (0.1 mA cm −2 considering the geometric area of the GDL discs of 2.0 cm 2 ) and limiting the cell capacity to 2 mA h, or by setting the cell voltage between 1.5 and 4.5 V (without any capacity limitation). The GITT measurements were performed to record the potential of Li−O 2 cells with the various GDLs over the exchanged lithium equivalents (x) in the 1.5−4.5 V vs Li + /Li range, using square current pulses of 0.4 mA for 1 h followed by potential relaxation steps of 1 h at the reached state of charge (SOC). An additional Li−O 2 cell was assembled using the GDL 39BB coated with MWCNTs and FLG. The latter were deposited onto the GDL by Doctor Blade (MTI Corp.) 
casting of a slurry composed by 80 wt % of MWCNTs, 10 wt % of FLG, and 10 wt % of polyvinylidene fluoride (PVDF 6020 Solef) dispersed in N-methyl-2-pyrrolidone (NMP, Sigma-Aldrich). The electrode tape was dried at 70°C, cut into 16 mm-diameter discs (geometric area: 2.0 cm 2 ), and dried at 110°C under vacuum for 3 h before transfer in Ar-filled glovebox. The final mass loading of MWCNTs/FLG on the GDL support ranged from 0.8 to 1.0 mg cm −2 . Galvanostatic charge/discharge measurements were carried out on this Li−O 2 cell by applying a current rate of 0.66 mA (0.33 mA cm −2 ) and limiting the cell capacity to 2 mA h (1 mA h cm −2 ) and 1 mA h (0.5 mA h cm −2 ) in the 1.5−4.8 V voltage range. The charge/ discharge galvanostatic tests and GITT were performed using a MACCOR series 4000 battery test system, and all the electrochemical tests were performed at 25°C.
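The areal and gravimetric figures quoted here and in the following (e.g., 0.66 mA ↔ 0.33 mA cm −2 , 2 mA h ↔ 1 mA h cm −2 ) are simple normalizations by the 2.0 cm 2 geometric area of the disc and by the MWCNTs/FLG mass loading. A minimal sketch, assuming the lower bound of the quoted loading (0.8 mg cm −2 ):

```python
GEOM_AREA_CM2 = 2.0    # geometric area of the coated GDL disc
LOADING_MG_CM2 = 0.8   # assumed MWCNTs/FLG loading (0.8-1.0 mg cm^-2 reported)

def normalize(cell_current_ma, cell_capacity_mah):
    """Convert cell-level current and capacity to areal and gravimetric values."""
    active_mass_g = LOADING_MG_CM2 * GEOM_AREA_CM2 / 1000.0
    return {
        "current density / mA cm-2":   cell_current_ma / GEOM_AREA_CM2,
        "areal capacity / mAh cm-2":   cell_capacity_mah / GEOM_AREA_CM2,
        "specific current / mA g-1":   cell_current_ma / active_mass_g,
        "specific capacity / mAh g-1": cell_capacity_mah / active_mass_g,
    }

# 0.66 mA and 2 mA h -> 0.33 mA cm-2, 1 mAh cm-2, 412.5 mA g-1, 1250 mAh g-1
print(normalize(cell_current_ma=0.66, cell_capacity_mah=2.0))
```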
Galvanostatic and CV tests were carried out on cells using lithium discs with thickness of 250 μm and mass of about 20 mg, while Li−O 2 cells for GITT measurements employed lithium anodes with thickness and mass limited to 70 μm and 7 mg, respectively. In addition, a 39BB GDL coated with MWCNTs/FLG (composite loading: 0.8 mg cm −2 ) was galvanostatically discharged and charged for three cycles in a Li− O 2 cell at 0.66 mA with a capacity limited to 2 mA h between 1.5 and 4.8 V, and subsequently retrieved for XPS analysis. The XPS measurements were performed on the cycled electrode and on a pristine one for comparison with a Kratos Axis UltraDLD spectrometer, equipped with a monochromatic Al Kα source, operating at 20 mA and 15 kV. To prevent air contamination, the samples were moved from an Ar-filled glovebox to the XPS system using a hermetically sealed transfer chamber. Wide scans were carried out with an analysis area of 300 × 700 μm and a pass energy of 160 eV. High-resolution spectra were collected over the same analysis area at a pass energy of 20 eV. Spectra were charge-corrected to the C 1s peak at 284.5 eV for sp 2 carbon (C�C) and were analyzed using CasaXPS software (version 2. (Figure 1g), leading to a different surface morphology. The latter can be qualitatively evaluated from the secondary electron SEM images (Figure 1b,d,f,h, and images with higher magnification are reported in Figure S1 in Supporting Information). Accordingly, the 22BB and 28BC GDLs reveal smaller aggregates compared to 36BB and 39BB samples, in agreement with the experimental surface area discussed afterward. The EDS elemental mapping recorded on secondary electron SEM images (insets of Figure 1b,d,f,h) shows the presence of F in addition to that of C. The F signal is associated to the polytetrafluoroethylene (PTFE) binder, which is typically applied to the GDLs to improve their mechanical stability and hydrophobicity, however with an insulating character that may affect the reaction kinetics. Figure 1i shows the XRD patterns of the GDLs, which exhibit a main sharp peak at 2θ = 26.6°and a secondary signal at 2θ = 54.7°ascribed to the graphite, 39 broad shoulders in the 20−30 and 40−45°2θ ranges indicating the co-presence of amorphous carbon, 40 and a peak at 2θ = 18°associated to the PTFE. 41 It is worth mentioning that the difference between EDS and XRD responses is related with the nature of the two techniques. Indeed, EDS focuses mainly on the electrode surface and can detect species without any crystallinity, while XRD detects only crystalline species located into the whole electrode structure. Overall, SEM-EDS and XRD analyses reveal that all the GDLs are formed by both graphitic and amorphous carbons, linked with PTFE binder, and exhibit different surface morphologies which may therefore influence the electrochemical processes occurring in the Li−O 2 battery.
The GDLs are further evaluated through TGA performed under N 2 to determine the binder content (Figure 2a), while N 2 -physisorption measurements (Table 1) are carried out to assess their surface area and PSD. 34 The thermogravimetric curves (Figure 2a) and the corresponding differential thermogravimetry (DTG) curves (Figure S2, Supporting Information) show that the GDLs undergo a weight loss between 25 and 100°C ascribed to the removal of absorbed water. The weight loss between 500 and 550°C is associated with the PTFE decomposition, 42 while the weight loss starting at 950°C is attributed to the degradation of the carbonaceous structure of the GDLs. Importantly, the TGA data reveal that the GDLs have different contents of PTFE, i.e., 17 wt % for 22BB, 13 wt % for both 28BC and 39BB, and 12 wt % for 36BB. Moreover, 22BB exhibits the most pronounced weight loss below 200°C, indicating a superior ability to adsorb water, consistent with its higher surface area (see Table 1).
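The PTFE contents listed above are obtained from the magnitude of the mass step in the PTFE decomposition window of the thermogravimetric curve. A minimal Python sketch of that estimate is given below; the temperature window and the TGA trace used here are illustrative assumptions, not the measured data.

```python
import numpy as np

def ptfe_content_wt_percent(temps_c, mass_mg):
    """Estimate binder content from the TGA mass step between 500 and 550 C,
    referred to the dry mass at 200 C so that the low-temperature loss of
    adsorbed water is excluded."""
    dry_mass, m_before, m_after = np.interp([200.0, 500.0, 550.0], temps_c, mass_mg)
    return 100.0 * (m_before - m_after) / dry_mass

# Hypothetical TGA trace (N2 atmosphere) for a PTFE-bonded carbon paper
T = np.array([25, 100, 200, 400, 500, 525, 550, 700, 900])              # deg C
m = np.array([10.00, 9.90, 9.88, 9.86, 9.85, 9.20, 8.57, 8.55, 8.50])   # mg
print(f"PTFE content ~ {ptfe_content_wt_percent(T, m):.0f} wt %")  # ~13 wt %
```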
Table 1. Data Derived from N 2 -Sorption Isotherms in Figure 2 for the GDLs
sample | BET surface area [m 2 g −1 ] | total pore volume (P/P 0 = 0.99) [cm 3 g −1 ]
22BB | 39 | 0.14
28BC | 38 | 0.14
36BB | 31 | 0.10
39BB | 13 | 0.10
All the isotherms can be classified as type II isotherms with a H3 hysteresis loop, 43 indicating the presence of relatively large pores. Table 1 reports the compilation of textural parameters obtained after application of the BET equation and NLDFT method to the N 2 adsorption data of the GDLs. The highest surface area of 39 m 2 g −1 is found for 22BB, and the lowest one of 13 m 2 g −1 for 39BB. 28BC and 36BB show intermediate BET surface areas of 38 and 31 m 2 g −1 , respectively. The pore volumes are 0.14 cm 3 g −1 for both 22BB and 28BC and 0.10 cm 3 g −1 for 36BB and 39BB. The PSD analysis derived from the adsorption branch of the isotherms in Supporting Information (Figure S3) indicates two main populations of mesopores at ∼3 and 4.5 nm with intensities decreasing from 22BB to 28BC, 36BB, and 39BB. The minor peak centered at ∼30 nm shows similar intensity for all the GDLs. It is worth mentioning that the BET surface area detected herein may differ from the one fully accessible to the electroactive species, which represents the electrochemically active surface. On the other hand, the BET surface area observed for the 22BB, 28BC, and 36BB GDLs is higher than that of the GDL 39BB. Therefore, the difference in BET surface area observed herein between the GDLs may play a role in enhancing the cell performances of the materials in Li−O 2 cells. Nevertheless, a further contribution of the inter-fiber pores, which are more readily accessible for Li 2 O 2 formation compared to the mesopores of 3−4 nm diameter, cannot be excluded, as suggested by literature studies. 44,45 The 22BB and 28BC GDLs have similar surface area, while the TGA in Figure 2a shows that 22BB has a higher quantity of the PTFE binder (17%) compared to 28BC (13%). Hence, the higher ratio of the insulating polymer in 22BB compared to 28BC may actually affect the CV curves, as demonstrated hereafter.
Characteristics of the Li−O 2 Electrochemical Process. The electrochemical behavior of the bare GDLs as cathodes in Li−O 2 cells is studied through CV measurements, performed between 2.5 and 4.2 V vs Li + /Li (Figure 3a,c,e,g), and EIS measurements, carried out at the OCV condition and after each CV scan (Figure 3b,d,f,h). The potential window used for the CV favors the reversible redox process Li + 1/2O 2 ⇄ 1/2Li 2 O 2 , which typically involves multiple steps and intermediates such as the lithium superoxide radical (LiO • 2 ). 13 The first CV curves measured for the cell using 22BB (Figure 3a) reveal cathodic currents at potentials lower than 2.8 V vs Li + /Li, which are attributed to the ORR, i.e., Li + 1/2O 2 → 1/2Li 2 O 2 . 13 The reverse oxidation steps, associated with the OER, i.e., Li 2 O 2 → 2Li + O 2 , are instead revealed by the anodic currents at potentials exceeding 3.6 V vs Li + /Li. 13 Interestingly, during the first CV cycle (black curves), the shape and intensity of the cathodic and anodic currents associated with the ORR and OER, respectively, appear to be influenced by the GDL characteristics. Indeed, the cells using 22BB (Figure 3a) show intense and narrow ORR and OER current slopes rather than defined peaks. Instead, the cells using 28BC (Figure 3c), 36BB (Figure 3e), and 39BB (Figure 3g) reveal a similar ORR current slope but with a lower intensity than 22BB, and an OER response reflected by broad peaks centered at ∼4.0 V vs Li + /Li. The higher ORR intensity of the cell using the 22BB support with respect to the other GDLs may indicate a Li 2 O 2 deposition initially triggered by its higher surface area (see Table 1). On the other hand, the formation of a defined OER peak in the cells using 28BC, 36BB, and 39BB may account for the OER process promoted by a favorable morphology of the reaction products (Li 2 O 2 ) due to the considerably lower binder content in these GDLs compared to 22BB (see discussion of Figure 2). 14 Although the intensity of the CV peak does not directly account for the kinetics of the charge transfer, it may be associated with the various processes, including diffusion in the cell and reaction at the electrode/electrolyte interphase. Hence, the kinetics may be ascribed to the whole process, including ions and electrochemical species diffusion as well as charge transfer at the electrode/electrolyte interphase, in particular considering the geometry of the cell used herein to achieve the Li−O 2 battery, that is, a top-meshed CR2032 coin cell. 36 Furthermore, the use of a suitable three-electrode geometry in the Li−O 2 cell may be hindered by the reactivity of the additional Li-reference electrode, and by possible leakage of the liquid electrolyte. Instead, the coin cell allows the study of the electrochemical reaction without the abovementioned issues, although additional polarization due to the two-electrode configuration cannot be excluded. During the subsequent CV cycles, the cathodic current of the ORR increases for all GDLs, less markedly for the cell using 22BB (Figure 3a) and more appreciably for the cells using 36BB (Figure 3e) and 39BB (Figure 3g), while the anodic current of the OER increases for all GDLs, except for 22BB. Furthermore, the OER CV shapes change for the cells using 36BB and 39BB from a broad but defined peak to a sloped profile.
The increase of the cathodic currents during repeated CV cycles indicates an activation of the GDLs toward the ORR, whereas the behavior of the anodic currents and related CV shapes during the OER appears more complex. The GDL activation toward ORR may be ascribed to the stabilization of the electrode/electrolyte region and the formation of a favorable SEI layer. 6 Notably, the activation process is particularly pronounced for the 36BB and 39BB GDLs, which are characterized by the lowest surface area and lowest porosity among the investigated samples (see Table 1). To elucidate the electrode/electrolyte interphase properties, EIS spectra of the Li−O 2 cells are recorded before and after each CV cycle, as shown in Figure 3b,d,f,h for 22BB, 28BC, 36BB, and 39BB, respectively. The resulting Nyquist plots are fitted through the NLLS method, modeling the Li−O 2 systems with a R e (R 1 Q 1 )Q g equivalent circuit including resistive elements (R) and constant phase elements (Q), accounting for the electrolyte and the electrode/electrolyte interphase (see the top-side scheme in Figure S4 in Supporting Information). 37,38 More in detail, R e is the electrolyte resistance measured by the high-frequency intercept of the Nyquist plot; R 1 and Q 1 , arranged in parallel in the (R 1 Q 1 ) element, describe the processes related to the Li + transfer and/or the SEI layer formation; 37,38 the R 1 resistance corresponds to the width of the semicircle in the high−medium frequency range; 37,38 and lastly, Q g is a constant phase element used to represent the low-frequency region of the Nyquist plot, identifying the cell geometric capacitance and the diffusion-limited mass transport. 37,38 Table 2 shows the estimated parameters for the equivalent circuits of the investigated Li−O 2 systems, as determined by the NLLS fitting. At OCV, the Li−O 2 cells show high R 1 with values ranging from 530 to ∼1520 Ω. After the first CV cycle, R 1 significantly decreases to 135 Ω for 22BB (Figure 3b), 78 Ω for 28BC (Figure 3d), 49 Ω for 36BB (Figure 3f), and 69 Ω for 39BB (Figure 3h). After three CV cycles, R 1 further decreases to 70 Ω for 22BB and to 55 Ω for 39BB, almost stabilizes at 83 Ω for 28BC, and increases to 82 Ω for 36BB (see Table 2).
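A constant phase element has impedance Z Q = 1/[Q(jω) n ], so the R e (R 1 Q 1 )Q g model described above can be written down and evaluated directly. The Python sketch below computes the model impedance over the measured frequency window; the parameter values are illustrative only, whereas the actual analysis adjusts them by NLLS fitting against the measured spectrum.

```python
import numpy as np

def z_cpe(omega, Q, n):
    """Constant phase element: Z = 1 / (Q * (j*omega)^n)."""
    return 1.0 / (Q * (1j * omega) ** n)

def z_model(freq_hz, Re, R1, Q1, n1, Qg, ng):
    """Impedance of the Re(R1Q1)Qg circuit: electrolyte resistance in series
    with a parallel R1 || CPE1 arc (interphase) and a series CPE describing
    the low-frequency geometric/diffusive tail."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    z1 = 1.0 / (1.0 / R1 + 1.0 / z_cpe(w, Q1, n1))  # (R1 Q1) parallel element
    return Re + z1 + z_cpe(w, Qg, ng)

# Illustrative parameters; frequency window as in the EIS tests (500 kHz - 100 mHz)
f = np.logspace(np.log10(500e3), np.log10(0.1), 200)
Z = z_model(f, Re=35.0, R1=70.0, Q1=2e-5, n1=0.85, Qg=5e-3, ng=0.7)
print(f"|Z| at 100 mHz ~ {abs(Z[-1]):.0f} ohm; semicircle width ~ R1 = 70 ohm")
```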
In general, these EIS data confirm the cycling-induced activation of the electrode/electrolyte interphase for the ORR observed during CV, showing significant differences depending on the morphological and structural characteristics of the investigated GDLs. In particular, after three CV cycles, the lowest R e value is observed for 39BB, which has the lowest surface area and porosity among the GDLs. On the other hand, R e remains almost constant after subsequent CV runs for all the GDLs, with values ranging between 30 and 45 Ω (Table 2). The trend observed for R e indicates only minor electrolyte decomposition during cell operation. 46 Additional EIS measurements are carried out on symmetric Li−Li and GDL(39BB)-GDL(39BB) cells, both assembled in an O 2 atmosphere at the OCV condition (Figure S5 in Supporting Information). Figure S5a shows for the symmetric Li−Li cell the typical Nyquist plot including a semicircle at medium−high frequency ascribed to the electrode/electrolyte interphase, and a low-frequency contribution related to the semi-infinite Warburg-type Li + diffusion. The cell shows a resistance around 100 Ω, which is much lower than that of the Li−O 2 cell using the same GDL displayed in Figure 3h. The symmetric GDL(39BB)/GDL(39BB) cell (Figure S5b) shows a wide and noisy semicircle likely ascribed to possible side reaction of the electrolyte or ion diffusion, with a very large resistance value, i.e., exceeding 10,000 Ω, suggesting the almost blocking character of this configuration due to the absence of a Li + source in the electrodes. Figure 4 shows the galvanostatic charge/discharge curves measured for the Li−O 2 cells using 22BB (Figure 4a), 28BC (Figure 4b), 36BB (Figure 4c), and 39BB (Figure 4d), cycled at a current of 0.2 mA with the capacity limited to 2 mA h. In addition, minimum and maximum voltage cutoffs of 1.5 and 4.8 V, respectively, are used. This galvanostatic charge/discharge cycling procedure avoids excessive deposition of Li 2 O 2 on the GDL surface and ensures reversible cell operation. 47 The cell voltage profiles reveal the occurrence of the ORR and OER between 2.5 and 2.7 V and between 3.6 and 4.5 V, respectively. At the end of the first discharge/charge cycle, the Li−O 2 cells exhibit similar polarizations (i.e., difference between the voltages achieved by the cell at the end of charge and at the end of discharge) of ∼1.8 V, except for the one using 39BB, which shows a polarization of ∼2.0 V likely due to the growth of larger insulating Li 2 O 2 agglomerates. 15 During subsequent charge/discharge cycles, all the investigated Li−O 2 cells exhibit an activation for the ORR that occurs at slightly higher voltage, due to the abovementioned stabilization of the SEI upon the first charge/discharge cycle. After 10 cycles, the cells display different polarization values, i.e., 2.1 V for 22BB, 1.9 V for 28BC, 2.2 V for 36BB, and below 2.0 V for 39BB.
The difference between the voltages achieved by the cell at the end of charge and at the end of discharge is reported as a function of the cycle number in Figure S6 in Supporting Information, which shows the initial decrease of the polarization upon the above discussed GDL activation. After 2−3 cycles, the cell polarization increases for all the cells except that based on 39BB, for which the polarization starts to increase only after the 4th cycle and stabilizes at a final value (10th cycle) slightly lower than the initial one. 15 Overall, these cell polarization trends indicate that 39BB is a particularly suitable GDL to ensure the formation of stable and effective electrode/ electrolyte interphase for the realization of performant Li−O 2 systems.
The GDLs are subsequently investigated by CV, EIS, and galvanostatic charge/discharge measurements using a wide potential range and without any capacity limitation. A previous paper suggested for the TEGDME-LiCF 3 SO 3 solution and the PVDF binder an anodic stability approaching 4.8 V, 36 although partial electrolyte oxidation during the OER at lower potentials 48 and side reactions due to the PVDF binder 49 cannot be completely excluded. On the other hand, the reductive decomposition of the electrolyte typically occurs at low cathodic potentials; hence, the restricted potential window adopted for the CV tests in Figure 3 can allow the limitation of the undesired process, hold the high electrode conductivity, and increase the reversibility of the Li−O 2 redox process, in particular during ORR, whereas the excessive Li 2 O 2 electrodeposition achieved by voltammetry lowering the cathodic limit to 1.5 V vs Li + /Li can lead to a partial insulation of the electrode surface, which is reflected in a decrease of the reversibility. 14,36,47 The curves in Figure 5 suggest a limited effect of the bare GDLs on the oxidation kinetics when insulating Li 2 O 2 is massively formed during the ORR within the full potential range. The Nyquist plots after each CV cycle (Figure 5e−h) are fitted with the R e (R 1 Q 1 )-(R 2 Q 2 )Q g equivalent circuit (Table 3, and bottom-side scheme in Figure S4 in Supporting Information), whereas those at the OCV are the same reported in the insets of Figure 3b,d,f,h and Table 2 (see the top-side scheme in Figure S4 in Supporting Information). Compared to the circuit used to fit the Nyquist plots reported in Figure 3, an additional (R 2 Q 2 ) element is included to discriminate the Li + transfer and the SEI formation at the electrode/electrolyte interphase. 52
Figure 6 legend (fragment): see also Table S1 in Supporting Information and the potential vs time GITT curves in Figure S7. Square current pulse: 0.4 mA; time of pulse: 1 h; potential relaxation step time: 1 h; and potential range: 1.5−4.5 V vs Li + /Li.
The fitting of the Nyquist plots after the voltammetry cycle indicates an interphase resistance (R 1 + R 2 in Table 3) of about 86 Ω for 22BB (Figure 5e), 115 Ω for 28BC (Figure 5f), 96 Ω for 36BB (Figure 5g), and 93 Ω for 39BB (Figure 5h). These low impedance values suggest a limited electrolyte decomposition during the ORR and OER, thus indicating the suitability of the GDLs for promoting efficient electrochemical reactions in the Li−O 2 systems. 52 Further proof of the efficiency of the electrochemical processes is given by the charge/discharge galvanostatic profiles of the Li−O 2 cells recorded with no capacity limitation (Figure 5i−l). The cells using 22BB (Figure 5i), 28BC (Figure 5j), 36BB (Figure 5k), and 39BB (Figure 5l) achieve notable discharge areal capacities of 6.8, 7.4, 6.4, and 7.8 mA h cm −2 , respectively, corresponding to cell capacities of 13.6, 14.8, 12.8, and 15.6 mA h, with a high Coulombic efficiency. It is worth noting that the different reversibility of the CV tests in Figure 5a−d compared to the galvanostatic tests in Figure 5i−l may be attributed to the higher current values reached in the former compared to the latter. Indeed, the galvanostatic test is performed at a constant current of 0.2 mA, while in the CV the currents reach maximum values ranging from about 3 mA in discharge to about 1 mA in charge. Overall, the Li−O 2 cell using 39BB as the cathodic support shows the best performance in terms of delivered capacity and Coulombic efficiency, indicating that the characteristics of this GDL, including low surface area and low porosity (see Table 1), are beneficial to attain the reversible Li + 1/2O 2 ⇄ 1/2Li 2 O 2 reaction. 53 The electrochemical performances of the investigated GDLs in Li−O 2 cells are further rationalized by determining the Li + diffusion coefficient (D) at various SOCs using GITT (Figure 6). 54 Typically, this technique evaluates the effect on D promoted by the exchange of a Li-equivalent fraction (x) within active materials designed for Li-ion batteries, such as Li 1−x FePO 4 . 18,55 More recent work reported the use of GITT for the evaluation of the diffusional features of Li−S batteries, considering the exchange of x in the Li 2x S reaction products. 20 In our case, Li−O 2 cells represent three-phase (solid/liquid/gas) systems, which hinder the proper determination of the x value at the cathode side. 56 Indeed, the exact mass of the electroactive species at the cathode, i.e., the oxygen on the GDL which is used only as the support for the electrochemical reaction, is practically complex to determine, in particular in the cell setup used herein (i.e., a CR2032 top-meshed coin cell in an excess of static O 2 gas). Therefore, we refer herein to the x equivalents exchanged within the Li metal anode, the mass of which can be easily determined, for the evaluation of the D values calculated through the GITT equation (eq 1), 54 where I 0 (A) is the applied current, V M is the Li molar volume (13.02 cm 3 mol −1 ), A is the Li geometric area (1.54 cm 2 ), F is the Faraday constant (96,485 C mol −1 ), τ is the diffusion time employed in the tests, dE/dx is obtained by derivation of the titration plots in Figure 6a,c,e,g, and dE/dt 1/2 is determined by linear fitting of the relaxation potential vs t 1/2 related to each current pulse (with t ≪ τ). 20
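Equation 1 itself is not reproduced in the text extracted here; given the variables listed above, it is presumably the standard Weppner−Huggins GITT expression, D = (4/π)·(I 0 V M /(A F)) 2 ·[(dE/dx)/(dE/dt 1/2 )] 2 , valid for t ≪ τ. The Python sketch below evaluates this assumed form with the constants quoted in the text; the two potential slopes are hypothetical placeholders standing in for the values extracted from the titration and relaxation curves.

```python
import math

V_M = 13.02     # cm^3 mol^-1, molar volume of Li metal (as quoted in the text)
A_GEO = 1.54    # cm^2, geometric area of the Li anode (as quoted in the text)
F = 96485.0     # C mol^-1, Faraday constant

def gitt_diffusion_coefficient(i0_amps, dE_dx, dE_dsqrt_t):
    """Practical Li+ diffusion coefficient from one GITT pulse, assuming the
    Weppner-Huggins form with t << tau.
    dE_dx: slope of the quasi-equilibrium potential vs Li equivalents x (V);
    dE_dsqrt_t: slope of the pulse potential vs sqrt(time) (V s^-0.5)."""
    return (4.0 / math.pi) * (i0_amps * V_M / (A_GEO * F)) ** 2 \
           * (dE_dx / dE_dsqrt_t) ** 2   # cm^2 s^-1

# Hypothetical slopes for a single 0.4 mA discharge pulse
D = gitt_diffusion_coefficient(i0_amps=0.4e-3, dE_dx=0.5, dE_dsqrt_t=2e-3)
print(f"D ~ {D:.1e} cm^2 s^-1")   # ~1e-10 cm^2 s^-1 with these placeholder slopes
```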
Although the above technique can help to rationalize the Li−O 2 battery behavior, the diffusion in the cell configuration adopted in this work prevents the actual deconvolution of the various factors, including Li + and O 2 transport, ORR/OER kinetics, nucleation and growth of Li 2 O 2 , and formation/decomposition of parasitic products, which are instead taken as a whole by the "practical version" of the diffusion coefficient determined hereafter. Indeed, the complex nature of the battery hinders the full discerning of the various processes. In particular, the ion as well as the oxygen diffusion at the cathode/electrolyte interphase may represent the rate-determining step of the cell, although the contribution of the electrolyte and anode may not be completely excluded. Figure 6a,c,e,g shows the potential profiles recorded at quasi-equilibrium condition as a function of x, as achieved by the elaboration of the corresponding GITT potential vs time curves (Figure S7). 18,20 Importantly, these data are consistent with the cell voltage profiles recorded during the galvanostatic charge/discharge cycling (see Figure 5). The corresponding D values are reported in Figure 6b, d, f, and h for the cells using 22BB, 28BC, 36BB, and 39BB, respectively. For all the cells, the data show higher D values during discharge than during charge, thus accounting for faster kinetics during the ORR than during the OER. This behavior is consistent with the differences in the reactants involved in the two processes, i.e., Li and O 2 in the former and insulating Li 2 O 2 in the latter. 14,57,58 The data also reveal a decrease of D during the initial stages of the cell discharge and charge, where Li 2 O 2 begins to deposit on the GDLs or undergoes oxidation, respectively, due to the notable activation energy of the ORR and OER. 13 Subsequently, D increases most likely due to the stabilization and consolidation of the electrode/electrolyte interphase, as already supported by EIS analyses (see Figure 5). Table S1 in Supporting Information displays the maximum and minimum D calculated using GITT, indicating that 28BC leads to both the highest D value of 2.8 × 10 −8 cm 2 s −1 and the lowest one of 4.4 × 10 −17 cm 2 s −1 . The other GDLs show intermediate D, ranging from 10 −8 to 10 −16 cm 2 s −1 , while the sample 39BB reveals the most suitable D values up to the highest x of 0.55. Hence, GITT indicates the interplay between the GDL properties, including their surface characteristics, and the SOC of the Li−O 2 cell in determining both the diffusional properties and the electrochemical performances. This behavior is associated with redox processes that involve multiple phases (i.e., solid, liquid, and gas) and the formation of insulating species (Li 2 O 2 ) and reaction intermediates including radicals and nucleophiles. 13 Despite the complex response, the GITT analysis suggests the use of 39BB to ensure the most effective Li-equivalent exchange in Li−O 2 cells, aiming at maximizing the discharge capacities of the latter. Indeed, previous work demonstrated that the growth of Li 2 O 2 crystals follows a surface mechanism in our cell setup. 14 According to the above mechanism, the nucleation in the system leads to the formation of Li 2 O 2 microparticles by direct electrodeposition over the surface of the support, the size and distribution of which depend on the local current density.
Hence, GDLs with lower porosity and surface, thus with the higher local current, can lead to the better performance due to the deposition of bigger Li 2 O 2 micrometric particles distributed into the conductive framework, rather than small particles covering and possibly insulating the support.
With the aim of further understanding the nature of the D coefficient determined herein, we have performed polarization tests through galvanodynamic reduction scans on Li−Li and Li-GDL(39BB) cells in an O 2 atmosphere. The data reported in Figure S8 in Supporting Information suggest a limiting current exceeding the value of 5 mA cm −2 for Li + diffusion in the Li−Li symmetrical system and a complex trend for the Li-GDL(39BB) cell evolving with a double slope, suggesting a concomitant role of the O 2 diffusion at lower currents in the Li−O 2 cells.
Use of the GDL Coated with MWCNTs/FLG in the Li− O 2 Cell with Prolonged Cycling. According to the above GDL characterization, 39BB is subsequently selected as a suitable cathodic support for the realization of a practical Li− O 2 battery based on a MWCNTs/FLG electrode. Figure 7 reports the SEM images at various magnifications of the electrode, alongside with the voltage profiles and corresponding specific capacity and Coulombic efficiency trends as a function of galvanostatic charge/discharge cycles of the corresponding Li−O 2 cell. The SEM images show an electrode surface mainly formed by MWCNTs (Figure 7a) with a characteristic morphology including secondary particles with sizes ranging from 10 to 30 μm (Figure 7b) intimately curling up primary nanotubes. 14 The SEM imaging also evidences the presence of FLG flakes, with sizes ranging from 1 to 10 μm and nanometric thickness, dispersed into the MWCNT framework (Figure 7b,c). 21 The cell using the 39BB GDL coated with MWCNTs/FLG as the electrode is cycled at a constant current of 0.66 mA (geometrical areal value: 0.33 mA cm −2 ) by limiting the capacity to 2 mA h (geometrical areal value: 1 mA h cm −2 ) that corresponds to charge and discharge processes of 3 h each. The cell shows shapes of voltage profiles (Figure 7d) similar to those collected for the corresponding cell using the bare GDL (Figure 4d), although a remarkable three times higher current is reached upon the incorporation of MWCNTs and FLG. The cell reveals a Coulombic efficiency approaching 100%, which is actually achieved by the capacity limit, and a relevant specific capacity of 1250 mA h g −1 (as referred to the weight of the MWCNTs/FLG mixture) over 40 charge/ discharge cycles (Figure 7e). Prospectively, a further increase of the cycle life of the cell may be achieved by tuning the MWCNTs/FLG weight ratio, as well as by activating the MWCNTs using thermal treatments under an N 2 atmosphere as reported in our previous work. 14 Literature papers suggest various additional strategies to limit the overvoltage and increase the cycle life of the Li−O 2 cell. 14,36,59 The first and simplest one consists on the decrease of the cell capacity limit to achieve the extended cycle life. 36 We have adopted this strategy in Figure 7f,g by lowering the capacity limit from 2 mA h (geometrical areal value: 1 mA h cm −2 ) to 1 mA h (geometrical areal value: 0.5 mA h cm −2 ) in the Li−O 2 cell using the 39BB coated with MWCNTs/FLG cycled at 0.66 mA. The new capacity limit, which corresponds to a gravimetric value of 500 mA h g −1 , leads to the extension to the cell lifespan from 40 to 100 cycles, in agreement with literature work. 14 Furthermore, the use of catalysts and redox mediators can actually lower the charge polarization and thus extend the cycle life due to the limited side reactions, such as the electrolyte degradation occurring in the Li−O 2 cell. 27 In addition, the use of a different electrolyte, such as ionic liquids, can change the reaction mechanism, lower the polarization, and extend the cycle life of the cell. 59 To examine the chemical composition of the SEI layer formed at the electrode/electrolyte interphase, a 39BB coated with the MWCNTs/FLG electrode is cycled in the Li−O 2 cell with the same conditions of Figure 7d,e and subsequently retrieved from the cell. XPS measurements are performed on the cycled electrode and on a pristine one for comparison, and the results are reported in Figure 8. 
The survey spectra of pristine and cycled electrodes are reported in Figure 8a. The spectrum of the pristine electrode is dominated by the characteristic peaks related to C 1s and F 1s, likely related with the GDL substrate, FLG, and MWCNTs, and to the PVDF binder, respectively. A low amount of adsorbed oxygen at the sample surface is detected, possibly due to partial oxidation of one of the electrode components. After the third charge/discharge cycle, the survey spectrum of the electrode exhibits the expected C 1s, O 1s, and F 1s signals, along with additional peaks related to Li 1s and S 2p derived from the contact of the MWCNTs/FLG-coated 39BB electrode with the electrolyte solution. The presence of the Si peaks is originated from the glass fiber used as a separator in the cell. The relative atomic concentrations of C, O, F, S, and Li are quantified and reported in Table S2 in Supporting Information. Increase of O and F contents is observed at the surface of the cycled electrode compared to the pristine one, together with the decreased C atomic concentration. High-resolution C 1s, O 1s, F 1s, S 2p, and Li 1s XPS spectra are acquired and reported in Figure 8b−f. In the pristine electrode, the C 1s spectrum is deconvolved into seven peaks, ascribed to MWCNTs/FLG mixture compounds, at 283.7 ± 0.2, 284.5 ± 0.2, 285.0 ± 0.2, 286.5 ± 0.2, 287.9 ± 0.2, 288.9 ± 0.2, and 290.9 ± 0.2 eV (Figure 8b). They correspond to C vacancies, C�C (sp 2 -hybridized carbon), C−C (sp 3 -hybridized carbon), C−O (hydroxyl), C�O (carbonyl), O�C−O (carboxyl), and π−π* satellite peak, respectively. 60 The presence of PVDF is associated with the appearance of five additional peaks centered at 286.1 ± 0.2 eV (attributed to the CH 2 group), 290.6 ± 0.2 eV (CF 2 −CH 2 ), 291.7 ± 0.2 eV (CF 2 −CF 2 ), 292.4 ± 0.2 eV (O�C−CF 3 ), and 293.5 ± 0.2 eV (CF 3 ). 61 The two components in the F 1s spectrum (Figure 8d), located at 687.8 ± 0.2 and 689.7 ± 0.2 eV, correspond to −F−C−H− and −F−C−F− groups, respectively, related to the PVDF binder. 62 The additional components at higher binding energy (691.0 ± 0.2 and 692.2 ± 0.2 eV in Figure 8d) may be attributed to O bonded to a highly electronegative element such as F to form O−F bonds. 63,64 Other authors 65,66 suggested that the formation of the bump visible at > 692 eV caused by local charging effects of the PVDF binder during the analysis, related to the "negative charge trapping" within the PVDF. The O 1s spectrum (Figure 8c) can be deconvoluted into four peaks centered at 531.7 ± 0.2, 532.7 ± 0.2, 533.4, and 535.7 ± 0.2 eV, assigned to the C�O, C−O, O−C�O, and O−F groups. 67 In the cycled electrode, the C 1s spectrum resembles to the one of the pristine electrode (Figure 8b). The most notable distinctions from the pristine sample include the appearance of two new components at 287.5 ± 0.2 and 290.0 ± 0.2 eV, identified as C−SO x and CO 3 2− . 61,68 The pronounced C−SO x and the slightly noticeable CO 3 2− signals, alongside with the increased −CF 3 one in the C 1s spectrum, suggest the presence and possible decomposition of LiCF 3 SO 3 conductive salt strongly adsorbed to the carbon electrode. The degradation of the salt with the formation of kinetically stable products at the SEI layer is confirmed by two distinct contributions in the O 1s spectrum (Figure 8c) at 532.0 ± 0.2 and 534.8 ± 0.2 eV attributed to CO 3 2− in Li 2 CO 3 and S−O groups, respectively. 
65,69 Additionally, the S 2p spectrum (Figure 8e) comprising the double split peaks at 168.9 ± 0.2 and 166.5 ± 0.2 eV validates the presence of −SO 3 CF 3 and the formation of Li 2 SO 3 as electrolyte degradation product. 70 The additional peak at 170.6 eV is probably due to the chemisorption of oxygen, with the formation of SO 4 2− species. 71 The strong contribution of the Li 2 CO 3 component at 55.7 ± 0.2 eV 72 (532 ± 0.2 eV in the O 1s spectrum, Figure 8c) to the global Li 1s signal ( Figure 8f) hinders the possibility of precisely evaluating the nature of the low intensity species at lower binding energy ∼53 ± 0.2 eV, precluding the distinction between Li 2 O and Li 2 O 2 compounds. Although the expected LiF decomposition product of fluorinated salt during the discharge can dissolve during the charge, 65 the F 1s spectrum (Figure 8d) reveals discernible LiF peak at 684.8 ± 0.2 eV (in the Li 1s spectrum at 56.8 ± 0.2 eV, Figure 8f), along with the components at 688.6 ± 0.2 eV and 690.4 ± 0.2 eV related to the PVDF binder. 73 The slight shift toward higher binding energy of the latter two components compared to those for the pristine electrode can be ascribed to local charging effects. 66 Overall, the XPS indicates that the SEI formed at the electrode surface in the Li−O 2 cell under the setup adopted in this work is mainly formed by decomposition products of the LiCF 3 SO 3 conducting salt and the TEGDME solvent (e.g., Li 2 SO 4 , LiF, and RCF 3 SO 3 ), which are strongly adsorbed into a protective layer increasing the cycle life of the battery.
■ CONCLUSIONS
Various GDLs indicated as 22BB, 28BC, 36BB, and 39BB have been characterized in terms of physical−chemical features, which were correlated to the performances of Li−O 2 batteries using the GDLs as the cathode. The SEM-EDS analyses of the GDLs revealed different surface morphology and a composition based on carbon and PTFE binder. The XRD patterns of the GDLs indicated the presence of carbon with either graphitic or amorphous characters. The contents of the PTFE in the GDLs, determined through TGA, were found to be 17% for 22BB, 13% for both 28BC and 39BB, and 12% for 36BB. The BET analysis of N 2 physisorption measurements indicated specific surface area of 39, 38, 31, and 13 m 2 g −1 for 22BB, 28BC, 36BB, and 39BB, respectively, and total pore volumes between 0.10 and 0.14 cm 3 g −1 . The average pore diameter of the GDLs was found to be less than 3 nm. The electrochemical behavior of the GDLs as cathodic supports in Li−O 2 cells was assessed through CV measurements performed in the potential range of 2.5−4.2 V vs Li + /Li, showing reversible ORR and OER occurring below 2.8 and above 3.6 V vs Li + /Li, respectively. After the first CV cycles, the currents associated to the ORR increased, suggesting an activation process associated to the stabilization of the electrode/electrolyte interphase and the formation of a suitable SEI at the electrode surface. On the other hand, the OER evidenced a more complex dependence between the CV profiles and the GDL nature due to the insulating character of the Li 2 O 2 formed during the reaction in the absence of a specific catalyst. The EIS spectra recorded at OCV condition and after each CV cycles revealed initial resistances between 500 and 1500 Ω, which decreased to less than 100 Ω after CV, supporting the activation process that was particularly pronounced for 39BB (resistance after CV scan as low as 55 Ω). Galvanostatic charge/discharge cycling of the Li−O 2 cells using the investigated GDLs were carried out by limiting the capacity to 2 mA h. The cells displayed promising performance, with reversible redox processes and a decrease of polarization after the first galvanostatic cycle. Additional CV tests using a wide potential range from 1.5 to 4.3 V vs Li + /Li showed resolved cathodic current peak, associated to the ORR and centered at 2.2 V vs Li + /Li. The ORR process was then reversed into a multi-step OER occurring at potentials between 3.5 and 4.3 V vs Li + /Li, with electrode/electrolyte interphase resistance limited to ∼100 Ω. The reversibility of the Li−O 2 cells was further demonstrated by galvanostatic charge/discharge cycling without any capacity limitation, demonstrating geometrical areal capacities as high as 6.8, 7.4, 6.4, and 7.8 mA h cm −2 for cells using 22BB, 28BC, 36BB, and 39BB, respectively. Also, GITT measurements were performed to determine the practical Li + diffusion coefficients (D) in the Li−O 2 cells within the configuration adopted in this work using the various bare GDLs. The GITT data indicated that D is driven by both GDL properties and the SOC of the cell, with values in a vast range from 10 −8 to 10 −17 cm 2 s −1 . Importantly, the GITT analyses indicated that 39BB ensures the highest Li-equivalents (x) exchange, which, in turn, results in the highest cell discharge capacity among the investigated Li−O 2 systems. 
In summary, the results reported in this work indicated that the least porous GDL (i.e., 39BB) represents the most suitable cathodic support for the realization of practical high-performance Li−O 2 batteries. These characteristics have been attributed to the growth pathway of Li 2 O 2 crystallites, which proceeds in our system according to the surface mechanism over the sites of the carbon support. This direct-electrodeposition process forms bigger microparticles distributed into the conductive GDL in the case of relatively high local current, low porosity, and low surface area, whereas smaller particles cover and possibly insulate the material in the case of low local current, high porosity, and high surface area. Accordingly, 39BB was coated with a MWCNTs/FLG mixture to further promote the electrochemical process, resulting in a Li−O 2 battery with a specific capacity as high as 1250 mA h g −1 (1 mA h cm −2 ) at ∼2.7 V discharge voltage with a high Coulombic efficiency over 40 cycles, achieved at a current density of 0.33 mA cm −2 (specific current: 412.5 mA g −1 ). Further limitation of the capacity to 500 mA h g −1 (0.5 mA h cm −2 ) has led to the extension of the cell lifespan over 100 cycles. In addition, XPS on the cycled electrode suggested a cell stability promoted by the formation of a suitable SEI layer at the surface.
"year": 2023,
"sha1": "cb82cad320d54a309fd4ee3af9e1b07f7340f089",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acsami.3c05240",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b5e8ef38325fe436c0bf1008099106654c5fc56",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Draft Genome Sequence of Lactobacillus rhamnosus Strain CBC-LR1, Isolated from Homemade Dairy Foods in Bulgaria
Here, we report the draft genome sequence of Lactobacillus rhamnosus strain CBC-LR1, which was isolated from naturally processed, homemade dairy foods in Bulgaria. The genome was assembled in 29 contigs with a total length of 2,892,155 bp and a GC content of 46.7%. Genome annotation predicted 2,638 coding genes and 49 tRNA genes.
Lactobacillus rhamnosus is a species of lactic acid bacteria that is considered the most studied probiotic species for human use (1). This species can be found in fermented foods and in the intestinal and vaginal tracts, and strains belonging to this species have been reported to have human health benefits. For example, strain L. rhamnosus GG, the most widely studied probiotic strain, was found to have antimicrobial activity against Salmonella enterica serovar Typhimurium (2), Shigella sonnei (3), Clostridium spp., Pseudomonas spp., Staphylococcus spp., and Streptococcus spp. (4), as well as a role in controlling antibiotic-associated diarrhea (5, 6) and a role in alleviating nasal blockage in allergic rhinitis (7). Other L. rhamnosus strains were found to have antibacterial and antifungal activities in the urogenital tract (8) and anti-inflammatory effects in patients with inflammatory bowel disease (9).
Lactobacillus rhamnosus strain CBC-LR1 was isolated from naturally processed, homemade dairy foods from an ecologically pure geographical area in Bulgaria in August 2001. L. rhamnosus strain CBC-LR1 was isolated on MRS agar plates that were incubated anaerobically at 37°C for 48 h.
Here, we report the genome sequence of Lactobacillus rhamnosus strain CBC-LR1. The genome was sequenced to obtain better insight into the probiotic properties and probiotic mechanisms of this strain and to evaluate its safety for human use.
Chandler Biopharmaceutical Corp. provided strain CBC-LR1 as a lyophilized powder. The NucleoSpin food kit (product number 740945.50; Macherey-Nagel, Germany) was used for genomic DNA extraction from 50 mg of the lyophilized powder, following the manufacturer's instructions. A Qubit v4.0 fluorometer was used to quantify the extracted DNA using a Qubit double-stranded DNA (dsDNA) high-sensitivity (HS) assay kit. Genomic DNA was submitted to the Genomics Facility at the University of Guelph (Guelph, ON, Canada) for library preparation and sequencing on an Illumina MiSeq system using an Illumina Nextera XT kit and Illumina MiSeq reagent v3 kit (600 cycles; 2 × 300-bp reads).
CLC Genomics Workbench v20.0.3 (Qiagen Bioinformatics) was used to analyze the sequencing data. Default parameters were used except where otherwise noted. A quality-trimming step was performed to remove low-quality sequences (limit of 0.05 base-calling error probability) and to allow a maximum of 2 ambiguous nucleotides. The total numbers of paired reads before and after quality trimming were 1,684,162 and 1,683,896, respectively. High-quality reads with a total of 457,109,690 bases were used for de novo assembly, yielding ∼150× coverage. The genome was assembled in 29 contigs, with a total length of 2,892,155 bp, a GC content of 46.7%, and an N 50 value of 179,274 bp. The full-length 16S rRNA gene sequence was extracted from the genome sequence using the ContEst16S tool (10) and was used for a BLAST search with GenBank to confirm species identity (11). The NCBI Prokaryotic Genome Annotation Pipeline (PGAP) v4.11 (https://www.ncbi.nlm.nih.gov/genome/annotation_prok) was used for genome annotation (12), which predicted 2,638 coding genes, 49 tRNA genes, and 6 rRNA genes.
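The coverage and N50 quoted above follow directly from the trimmed-read yield and the assembled contig lengths. A minimal Python sketch of both calculations is shown below; the individual contig lengths used in the example are placeholders, since only the assembly totals are reported.

```python
def mean_coverage(total_bases, genome_length):
    """Average sequencing depth = total high-quality bases / assembly length."""
    return total_bases / genome_length

def n50(contig_lengths):
    """Length of the shortest contig at which the cumulative length
    (counting contigs from longest to shortest) reaches half the assembly."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total, running = sum(lengths) / 2.0, 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

print(round(mean_coverage(457_109_690, 2_892_155)))        # ~158, i.e., roughly 150x coverage
print(n50([500_000, 400_000, 179_274, 150_000, 100_000]))  # 400000 for these placeholder contigs
```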
Data availability. This whole-genome shotgun project has been deposited in DDBJ/ENA/GenBank under the accession number JACKWP000000000. The version described in this paper is version JACKWP000000000.1. The raw files were deposited in the SRA database under the accession number SRR12412204.
ACKNOWLEDGMENT
The Natural Health Product Research Alliance, University of Guelph, supported this study. | 2020-09-17T23:10:32.392Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "7d84396d86b9070b3039f958f276f187340eb307",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/mra.00961-20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07e830510c228bfad2824ce79d7602b2485f40c4",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Excavated DNA from Two Khazar Burials
To understand the biological tribal affiliation (in terms of Y-chromosomal haplogroups, subclades, and haplotypes) of two excavated Khazar bone remains in the lower Don region in the south of Russia, we have extracted and analyzed their DNA and showed that both belonged to haplogroup R1a and its subclade Z93. The pattern could be considered typically "Turkic", and not a Jewish DNA lineage. Their haplotypes were also identified and reported here. The haplotypes indicate that both Khazars were unrelated to each other in the sense that their common ancestor lived as long as 1500–2500 years earlier than them, in the middle of the II millennium BC to the beginning of the I millennium BC, during typically Scythian times or somewhat earlier. Their haplotypes are unrelated to well-known Jewish haplotypes of haplogroup R1a.
Introduction
We report here what seems to be the first case of an ancient DNA study from Khazar burial mounds. Traditionally, Khazars are associated with ancient Jewish people; however, the extent of that association remains unknown. Overall, archeology of a few hundred Khazar burials dated from the VI through X centuries CE, assumed to be the timing for the political history of the Khazars, has not revealed any distinct indications of Jewish artifacts or anything related to ancient Jewish culture. It was certainly of interest to observe what the ancient Khazarian DNA would reveal in terms of Y-chromosome haplogroups and haplotypes, which can be rather distinctly assigned to particular tribal affiliations.
Results and Discussion
The two human skeletons considered in this work were obtained from two Khazar burial mounds in the southern Russian steppes. The mounds, or kurgans, were typical Khazarian mounds surrounded by shallow square ritual ditches. Both burials are described in the literature (Ilyin, 1995; Parusimov, 1998; Glebov & Ivanov, 2007; Batieva, 2007). The burials, named Kuteiniki II (mound 2, burial 1) and Talov II (mound 2, burial 1), are located in the south-east of the Rostov region on the left bank of the Don river, about 70 kilometers from each other. The first was excavated in 1994, the second in 2004. The first burial had been robbed in the past. The human skeleton belonged to a male of 40+ years of age; the bones had been moved by the robbers, and the original burial position was uncertain. The burial was dated to the end of the VII to the beginning of the VIII century CE. The DNA sample obtained from this burial was assigned the index 1251.
The second burial was not robbed and was completely preserved. The human skeleton belonged to a male of 35-45 years of age, positioned stretched on its back with the skull to the west. The burial was dated to the second half of the VIII to the beginning of the IX century CE. The DNA sample obtained from this burial was assigned the index 1986.
In the first half of the IX century, kurgans with square ditches ceased to appear, and the archaeological culture vanished. It seems that the Khazars left the lower Don steppes during that time period; thus, the Kuteiniki and Talovo burials mark the early Khazar and late Khazar periods, respectively, of their presence in the Don steppes.
In both cases the DNA was extracted from teeth of the ancient skeletons. The teeth were cleaned and ground in a vibration mill, the DNA was isolated by phenol extraction, and other routine procedures, such as the polymerase chain reaction, were employed for quantitation of the isolated DNA. In both cases the Y-chromosomal haplogroup of the ancient Khazars was identified as R1a, and primers specific to the SNP mutations R1a-Z280 and R1a-Z93 revealed that both samples were negative for Z280 and positive for Z93. Thus, both ancient Khazars' DNA was interpreted to carry the R1a-Z93 "signature". This is a very rare SNP in present-day ethnic Russians, Ukrainians, Poles and other Slavic male populations, approximately 50% of whom are estimated to carry the R1a haplogroup (www.eupedia; Rozhanskii & Klyosov, 2012). On the other hand, R1a-Z93 is very common in present-day Turkic-speaking peoples such as the Caucasian Karachaevo-Balkars, as well as Tatars, Bashkirs, Kirgiz, and other populations who apparently descended from Scythians and have their common ancestors in the R1a-Z93 subclade dated to 1500-2500 years ago (Klyosov & Rozhanskii, 2012; Klyosov & Saidov, 2015).
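The subclade call described above follows a simple decision rule: a sample positive for the Z93 SNP and negative for Z280 is assigned to R1a-Z93. A minimal sketch of that logic is shown below; the marker names are taken from the text, but the function and the data structure are illustrative only and do not reproduce the authors' laboratory workflow, which in practice involves many more markers.

```python
# Illustrative only: assign an R1a subclade from presence/absence calls of the two
# SNP markers named in the text (Z280, Z93). Real typing uses many more markers.

def assign_r1a_subclade(snp_calls: dict[str, bool]) -> str:
    if snp_calls.get("Z93") and not snp_calls.get("Z280"):
        return "R1a-Z93"
    if snp_calls.get("Z280") and not snp_calls.get("Z93"):
        return "R1a-Z280"
    return "R1a (subclade undetermined)"

# Both Khazar samples were Z280-negative and Z93-positive:
print(assign_r1a_subclade({"Z280": False, "Z93": True}))  # R1a-Z93
```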
Conclusion
The discovered subclades (R1a-Z93) and haplotypes from the two Khazar burials, one of early Khazar and the other of late Khazar times, are likely to be assigned to Turkic nomadic tribes, which migrated between Central Asia (and the Altai region in particular) and the Black Sea area from the middle of the II millennium BC through the I millennium CE and later. They apparently belonged to different tribes and different haplogroups (among them haplogroups C, G, Q, R1a, R1b); however, thus far only haplogroup R1a has been found among ancient excavated DNA of the Scythians and related tribes (Haak et al., 2015; Allentoft et al., 2015). This study describes the ancient R1a haplogroup in two Khazar skeletons, dated about 1200 and 1300 years before present (earlier and later Khazars), though the two belonged to rather distant DNA lineages, with a common ancestor who lived some 1500-2000 years before them. Both Khazars (R1a-Z93) were unrelated to ancestors of present-day ethnic Russians, Ukrainians, Belarusians, Poles, and other Slavic peoples of haplogroup R1a (whose predominant subclades are R1a-Z280 and R1a-M458; Rozhanskii & Klyosov, 2012), as well as to Scandinavians of haplogroup R1a (the predominant subclade being R1a-Z284; ibid.). There are, however, many peoples with a rather large share of R1a-Z93 who speak Turkic languages and who seem rather closely related to the DNA lineages of the excavated Khazars (some of them live in the Caucasus, some on the former Scythian and Khazar lands, and some in the Volga river area, such as Tatars and Bashkirs). It should be noted that, according to DNA genealogy data, neither of the two ancient Khazars belonged to a Jewish Y-DNA (Y-chromosomal DNA) lineage. | 2019-04-02T13:03:48.097Z | 2017-01-18T00:00:00.000 | {
"year": 2017,
"sha1": "853023956fd38979a21a468dd3d93e82077a62a7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=73563",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "853023956fd38979a21a468dd3d93e82077a62a7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
4436451 | pes2o/s2orc | v3-fos-license | MOBILIZATION IN EARLY REHABILITATION IN INTENSIVE CARE UNIT PATIENTS WITH SEVERE ACQUIRED BRAIN INJURY : AN OBSERVATIONAL STUDY
Michelangelo BARTOLO, MD, PhD1, Stefano BARGELLESI, MD2, Carlo Alberto CASTIONI, MD3, Domenico INTISO, MD4, Andrea FONTANA, MSc5, Massimiliano COPETTI, PhD5, Federico SCARPONI, MD6, Donatella BONAIUTI, MD7, and the Intensive Care and Neurorehabilitation Italian Study Group* From the 1Department of Rehabilitation, Neurorehabilitation Unit, HABILITA Care and Research Rehabilitation Hospitals, Zingonia di Ciserano, Bergamo, 2Department of Rehabilitation Medicine, Severe Acquired Brain and Spinal Cord Injuries Rehabilitation Unit, Ulss 9 Ca’ Foncello Hospital,Treviso, 3Unit of Anesthesia and Intensive Care B-DEA, S. G. Bosco Hospital, Turin, 4Physical Medicine and Rehabilitation, Neurorehabilitation Unit, IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (FG), 5Unit of Biostatistics IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (FG), 6SSD Severe Acquired Brain Injury Unit – S. Giovanni Battista Hospital, Foligno (PG), 7Department of Rehabilitation Medicine, S. Gerardo Hospital, Monza, Italy and Intensive Care and Neurorehabilitation Italian Study Group authors (see Appendix SI1)
Intensive care unit (ICU) patients may develop complications due to prolonged immobilization, such as cardiovascular system damage and critical illness neuromuscular syndromes (1), which are associated with poor short-term outcomes, including a delay in ventilator weaning and ICU/hospital discharge (2,3).
Early mobilization might counterbalance these effects, by maintaining muscle strength, improving functional outcome, sedation levels and patients' quality of life in the ICU and beyond (4-6). Although early physical rehabilitation, including mobilization of critically ill patients, was considered unsafe a few years ago, in the last decade a growing body of literature has shown the safety and feasibility of mobilizing ICU patients to prevent impairments and functional limitations (5,7,8). A number of studies have shown that early rehabilitation is effective, especially if mobilization is implemented within a structured protocol (9) and is based on procedures with proven feasibility and safety. Therefore, early mobilization has been included as a component of the ABCDE bundle (Awaken from sedation, Breathe independently of the ventilator, Choice of sedation, Delirium management, Early mobilization) (8,10) and recent studies have confirmed its important role/effect (11).
However, evidence supporting early mobilization is based mainly on trials performed in general medical and surgical ICUs, while studies conducted in neurological ICU (NICU) settings are sparse and show conflicting results. Indeed, a bidirectional case-control study showed that early mobilization and sitting upright could be favourable for patients admitted to NICUs (12), whereas a prospective intervention trial and a comparative study revealed that early rehabilitation in patients with severe acquired brain injury (sABI) might lead to a shorter length of hospital stay (LOS), fewer restraint days, and fewer hospital-acquired infections (13,14). On the other hand, a recent retrospective chart review conducted during a 6-month pre-mobilization and 6-month post-mobilization period concluded that, despite an increase in the amount of physical therapy and occupational therapy, no change in hospital and ICU LOS or duration of mechanical ventilation was observed (15).
Overall, the lack of available evidence underlines that there is still much research to be done into early rehabilitation for sABI patients in the ICU, and there are specific questions to be answered regarding the timing of intervention, the intensity and type of exercises, and which professionals should be involved (e.g. physiotherapist, occupational therapist, nurse) (16).
The aim of the current study was to evaluate whether early mobilization influences the functional outcome of patients with sABI, through further analysis of data collected during a previous multicentre observational study (17).
Study sites and participants
Fourteen centres in Italy with neurorehabilitation units and an ICU/NICU participated in the study (7 in the north of Italy, 3 in the south, 2 in the centre, and 2 in the islands).
Patients admitted to the ICU from 1 January to 31 December 2014, with a diagnosis of sABI, were enrolled in the study. Each participating centre was asked to enrol at least 10 patients.
sABI was defined as central nervous system (CNS) damage due to acute traumatic or non-traumatic (vascular, anoxic, neoplastic or infectious) causes that led to a variably prolonged state of coma (Glasgow Coma Scale ≤8), producing a potentially wide range of impairments affecting physical, cognitive and/or psychological functioning (18-22).
Subjects with premorbid CNS-related disability, neurological diseases, or neoplastic disease with metastatic involvement of the CNS were excluded.
Immediate relatives or legal guardians of the patients provided informed consent to participate in the study. The study was conducted in accordance with the revised version of the Declaration of Helsinki and was approved by the local ethics committee of the coordinator centre, and approval was extended to all centres taking part in the study.
Study design and procedure
Data in the present study were collected as part of routine care during a previous prospective, observational, multicentre study by our working group (17). As the study was observational, no criteria to decide readiness to mobilize were provided: it was only recorded when mobilization was performed. At the end of the study, the baseline clinical features and the outcomes of the patients who received mobilization (MOB) and who were not mobilized (NoMOB) were compared in order to understand which criteria are used by clinicians to mobilize (or not mobilize) the patients. The methodology has been described previously in detail (17).
On admission all the enrolled patients underwent a complete clinical, neurological and functional examination. All patients were re-evaluated every 3-5 days (at least twice per week) until discharge from the ICU. Clinical and rehabilitative data were collected.
The following rehabilitative data were collected: duration, type and timing of rehabilitative sessions, postural changes (performed at least 6-8 times/day), early passive/active-assisted mobilization, respiratory rehabilitation, bronchial drainage, removal of tracheostomy tube, sitting posture and orthostatic reconditioning, gait rehabilitation, swallowing evaluation, speech therapy, responsiveness, multisensory stimulation, caregiver education and psychological support, and team meetings with caregivers. The researcher who collected the data specified whether each procedure was performed.
For the aims of this work, early passive/active-assisted mobilization was defined as movement against gravity involving axial loading of the spine and/or long bones, including the following activities: (i) sitting over the edge of the bed, (ii) sitting on a chair, (iii) use of a tilt bed/table to ≥40°. Physical and/or mechanical assistance was permitted in order to complete these activities. A session of mobilization was defined as a single continuous period of mobilization with a period of bed-rest on either side of that session.
In order to provide a multidimensional assessment of the patients' clinical and functional status the following measures were used: Glasgow Coma Scale (GCS), Disability Rating Scale (DRS), the Rancho Los Amigos Levels of Cognitive Functioning Scale (LCF), Early Rehabilitation Barthel Index (ERBI), Glasgow Outcome scale (GOS) and Functional Independence Measure (FIM).
All measurements were administered at each visit, except for ERBI, which was administered at admission and at discharge, and GOS and FIM, which were administered only at discharge. All adverse events were recorded.
Statistical analysis
Patient characteristics were reported as means ± standard deviations (SD) and medians (together with first and third quartiles) or frequencies and percentages, for continuous and categorical variables, respectively. For each continuous variable, the assumption of normal distribution was checked by means of the Shapiro-Wilk test along with quantile-quantile (Q-Q) plots. Comparisons between the MOB and NoMOB groups were assessed using the Mann-Whitney U test (because of deviation from normal distribution) or Fisher's exact test, as appropriate for continuous and categorical variables, respectively.
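A minimal sketch of the two between-group tests named above is given below, using SciPy; the example vectors stand in for any continuous measure (Mann-Whitney U) and any 2×2 cross-tabulation (Fisher's exact test), and their values are invented. The published analysis was performed in SAS and R, so this is an illustration rather than the original code.

```python
from scipy import stats

# Hypothetical continuous measure (e.g. a functional score) in the two groups
mob = [8, 9, 7, 10, 11, 9, 8, 12]
nomob = [6, 7, 5, 8, 6, 7, 9, 5]
u_stat, p_cont = stats.mannwhitneyu(mob, nomob, alternative="two-sided")

# Hypothetical 2x2 table: rows = MOB/NoMOB, columns = with/without some characteristic
table = [[20, 48], [18, 17]]
odds_ratio, p_cat = stats.fisher_exact(table)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_cont:.3f}")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_cat:.3f}")
```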
To test changes in GCS, DRS and LCF scores at different (and unequally spaced) follow-up times from the first evaluation to discharge, hierarchical generalized linear models (HGLMs) for longitudinal data were fitted for each outcome. Within this framework, the Poisson distribution for the error term was assumed to model each functional score. A random intercept was included in each model to account for clustering due to centres (i.e. multicentre design effect). As measurements were collected at different times, a spatial-power covariance structure (which handles individuals' unequally spaced follow-up times) was assumed within each longitudinal model (23). To test changes in ERBI measurements at the first and last evaluations, a HGLM was performed on ERBI rank values, assuming normal distribution for the error term. Such models included an indicator variable, which specifies whether patients received mobilization (i.e. MOB group), a time variable, which specifies each measurement (within each patient), and a group-by-time interaction variable. Specifically, to test whether the outcome means were different (during the follow-up) in all patients, we examined the statistical significance of the time variable, whereas to test whether such means were different within each group, we examined the statistical significance of suitable statistical contrasts defined within the models. Moreover, pairwise comparisons between means were also estimated and p-values were adjusted for multiple comparisons following the Benjamini-Hochberg procedure. Another statistical contrast was assessed to evaluate whether there was a difference between group means at the first evaluation (i.e. baseline) only. The significance of the group-by-time interaction variable suggested whether the outcome mean profiles (i.e. means collected over time) were different between the 2 MOB groups. The time variable was included in the HGLMs both as categorical and as continuous. In the first case, a test for overall difference between means over time was assessed by examining the significance of the Type III test, whereas in the second case, a test for linear trend was assessed by examining the significance of the slope of the time variable. To make valid inference for each statistical test derived from HGLMs, degrees of freedom were corrected following the Kenward-Roger approximation. Estimated means were derived from the HGLMs and were reported along with their 95% confidence interval (95% CI). For the ERBI outcome only, observed medians (along with first-third quartiles) were reported instead. Furthermore, longitudinal plots of the estimated functional outcomes over time were reported separately for each MOB group, along with error bars representing the 95% CI. Two-sided p-values < 0.05 were considered statistically significant. All analyses were performed using SAS Software, Release 9.4 (SAS Institute, Cary, NC, USA) and R (package: ggplot2).
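Of the procedures listed above, the Benjamini-Hochberg adjustment is easy to show compactly. The sketch below applies the step-up procedure to a hypothetical set of pairwise-comparison p-values; it illustrates the adjustment only and does not re-implement the SAS mixed-model analysis.

```python
def benjamini_hochberg(p_values: list[float]) -> list[float]:
    """Step-up Benjamini-Hochberg adjustment; returns adjusted p-values in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of adjusted values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        value = min(prev, p_values[i] * m / rank)
        adjusted[i] = value
        prev = value
    return adjusted

# Hypothetical p-values from pairwise contrasts
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))  # [0.004, 0.04, 0.0533..., 0.30]
```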
Patient demographic and clinical characteristics
The study enrolled 109 consecutive patients diagnosed with sABI. Six patients were excluded due to missing data; therefore the study sample comprised 103 subjects. Participant enrolment and flow through the study are summarized in Fig. 1.
Among the total sample, 68 patients (66%) received the intervention (MOB group). Baseline demographic and clinical characteristics, aetiology, side and extension of cerebral lesions, and comorbidities of the total sample, as well as of the MOB and NoMOB groups, are summarized in Table I.
For most patients mobilization was performed by a physiotherapist (98%), while in one case a nurse performed the mobilization.
The mean LOS in the ICU was 24 days (standard deviation (SD) 14.1 days) for the total sample, with a significant difference (p = 0.01) between the MOB group (26.2 (SD 13.7) days) and the NoMOB group (19.5 (SD 14.2) days).
Functional measures at admission and discharge
Data for patients' functional status at admission and discharge are shown in Table II.
At admission, the between-group analysis showed that the NoMOB group had a significantly more severe clinical and functional profile than the MOB group with regard to all measurements, except for the ERBI scale. Longitudinal analysis revealed a statistically significant improvement in patients' clinical and functional conditions in both groups when comparing admission-discharge values of GCS and LCF scores. Moreover, within-group comparison revealed that the MOB group showed significant improvement in DRS and ERBI scores, while the NoMOB group showed a trend towards improvement for these measures without reaching statistical significance (Table II).
Discharge clinical features and settings
Patients' clinical characteristics, including limitations and need for supportive devices at discharge, were not statistically different between the 2 groups, except for the presence of pressure sores, which were significantly more frequent in the NoMOB group (Table III).
For both groups, patients were discharged mainly to sABI rehabilitation units (30.9% and 48.6% of patients in the MOB and NoMOB groups, respectively), which, in Italy, define care settings for patients with disability due to neurological disease. At least 180 min of treatment per day is provided and patients receive care from an interdisciplinary team, often in technologically supported contexts. The percentage of subjects who were discharged to rehabilitation units was significantly higher for the MOB group (27.9%) than the NoMOB group (0%); p < 0.001. Rehabilitation units in Italy are devoted not only to neurological patients, but must ensure at least 120 min of rehabilitation treatment. Treatment by some professionals in the multidisciplinary team (e.g. psychologist, occupational therapist) is recommended but not mandatory. The other patients were discharged to acute wards, particularly to neurosurgery units (17.6% for the MOB group and 22.9% for the NoMOB group), without significant differences. All the discharge destinations are shown in Table IV. No adverse events were reported in either group.
Rehabilitation treatments
The first rehabilitative clinical evaluation was performed after a mean of 7.7 (SD 6.9) days for patients in the MOB group, while patients in the NoMOB group underwent the first rehabilitative evaluation after a mean of 15.5 (SD 21.3) days from ICU admission. No statistical differences were found. Data on the rehabilitative treatments performed in both groups are summarized in Table V. At discharge, the mean number of sessions of mobilization, speech therapy and psychology was 10 (SD 7.7), 0.8 (SD 2.5) and 0.4 (SD 1.3), respectively, for the MOB group. The mean number of missed sessions did not reach 1%, considering both clinical and organizational causes. No sessions were recorded for the NoMOB group.
DISCUSSION
Data from this study show that early mobilization seems to favour clinical and functional recovery in ICU patients with sABI; however, only two-thirds of patients received mobilization and this started one week after ICU admission. These results are consistent with data from non-neurological populations, revealing that early mobilization of patients receiving mechanical ventilation is still uncommon, despite the recent publication of consensus recommendations regarding safety criteria for mobilization of adult, mechanically ventilated ICU patients (24). Baseline comparison between the MOB and NoMOB groups in our sample provided guidance regarding the criteria commonly used by clinicians to decide patients' readiness to be mobilized. Approximately one-third of patients were considered unsuitable to start mobilization, perhaps because they were deemed clinically "too serious" by physicians. This raises the question of whether this attitude is supported by evidence or driven by fear among healthcare providers. The literature indicates that the main obstacles to early rehabilitation are: (i) the clinical severity of patients, considered "too sick" to engage in physical activities; (ii) the presence of indwelling lines and tubes (endotracheal tubes, central venous catheters, arterial lines, bladder catheters) that restricted movement; (iii) sedation that made patients too sleepy to be involved in treatment (25); and (iv) the presence of femoral vascular access and mechanical ventilation (26,27). Moreover, although less represented, the lack of professional resources and poor experience in delivering rehabilitative care to ICU patients have also been described (27,28). On this issue, it has been shown recently that staff education alone was ineffective at improving mobility outcomes for ICU patients, suggesting that educative approaches should be integrated with other factors, such as a change in sedation practice and an increase in staffing (28). In our sample, the main factors associated with early mobilization were: (i) patients' level of consciousness and cognitive functioning; and (ii) comorbidities. In fact, only patients who exhibited at least minimal reactions to the environment and presented with fewer comorbidities underwent mobilization. In our opinion, the reasons to perform early mobilization are not clear to all operators: in general medical and surgical ICUs it is easier to understand that mobilization can favour a faster motor recovery that can be observed during the ICU stay, while for patients with sABI, who are often unconscious or sedated, the benefits of the intervention may be less evident. However, the prejudicial exclusion of comatose patients cannot be justified. Indeed, comatose subjects might also benefit from rehabilitation and obtain functional improvement. The achievement of this objective is essential, since ICU patients who improve their functional status during the ICU stay have a reduced risk of 90-day mortality following hospital discharge (29).
When considering neurological patients, most clinicians have concerns in relation to the early mobilization of severe stroke patients, especially after a haemorrhagic event (30). With regard to aneurysmal subarachnoid haemorrhage (SAH), some observational studies have found that the highest-risk period for re-bleeding is between 2 and 4 weeks after the initial aneurysmal SAH. Consequently, in order to avoid re-bleeding, especially for patients who have not had, or could not have, surgical or endovascular treatment for the aneurysm, bed-rest for 4-6 weeks is often included as a component of the treatment strategy (31,32). Conversely, in patients with SAH, the feasibility and safety (in terms of arterial and intracranial pressure) of an early rehabilitation programme focused on functional training and therapeutic exercise in progressively more upright positions have been reported (33,34). However, a recent Cochrane systematic review concluded that no randomized controlled trials or controlled trials were available to provide evidence for or against staying in bed for at least 4 weeks after symptom onset, and suggested further research to clarify optimal periods of bed-rest for these patients (32). A recent retrospective study that analysed the outcome of 143 ICU-dependent, tracheotomized, and mechanically ventilated patients with both ischaemic and haemorrhagic cerebrovascular disease (CVD) concluded that, as mortality rates of early rehabilitation in CVD are low, in-patient rehabilitation should be undertaken even in severe CVD patients to improve outcome and to prevent accommodation in long-term care facilities (35). Our findings showed that, even though more than half of our patients were affected by sABI due to a haemorrhagic stroke, mobilization was probably a safe procedure; no adverse events were recorded in the MOB group and the rates of death were comparable between the 2 groups. However, these data need to be confirmed in larger samples.
Our results highlight the problem of a lack of homogeneity in the content of rehabilitation in the ICU at this time. The literature suggests that rehabilitation in the ICU for sABI patients is primarily focused on respiratory therapy, passive-assistive movement for contracture prophylaxis, stimulation therapy, low-dose strength and endurance training, and stretching (36). The therapeutic goal is usually focused on the prevention of secondary damage (i.e. pneumonia or contractures), promotion of consciousness and sensory perception, and strengthening of muscles (36). Recently, an Italian Consensus Conference recommended that rehabilitation for patients with sABI in the intensive hospital phase should be more comprehensive and encompass management of respiratory problems, dysphagia, tracheostomy tube removal, cognitive disorders and language (37). However, most centres in Italy limit their rehabilitative approach to the physiotherapists' intervention. Our data partially confirm this observation: even though patients in the MOB group also performed significantly more respiratory rehabilitation and multisensory stimulation than patients in the NoMOB group, the interventions were always performed by physiotherapists. The presence of the speech therapist was minimal, and psychologists' intervention, besides being limited in time, was devoted mainly to providing educational support to the caregivers.
Regarding the effectiveness of early mobilization, our data showed that the improvement in clinical and functional conditions in the MOB group was slightly greater than in the NoMOB group, reaching statistical significance only for ERBI values. Similar to what has been observed in medical and surgical ICUs (38), our findings suggest a methodological reflection on the available validated scales evaluating functional outcome in patients with ABI. It is likely that the measures commonly used in rehabilitation are not sensitive enough to capture the mild improvement occurring in the early phase of the ICU stay. The ERBI, not commonly used in Italy, could be a reliable and valid scale to assess early neurological rehabilitation patients, as it contains items highly relevant for this population, such as mechanical ventilation, tracheostomy or dysphagia, compared with other validated clinical scales more widely used in rehabilitation. Our data showed a longer LOS in the ICU for patients in the MOB group, contrary to data in the literature suggesting a shorter ICU stay for mobilized patients (13,14). In our opinion, these results should be considered in relation to the setting to which patients were sent after ICU discharge. In fact, patients with a better prognosis who received early mobilization in the ICU were discharged in a significantly higher percentage to rehabilitation units, skipping admission to sABI units where more intensive treatments based on a multidisciplinary approach are guaranteed. Unfortunately, the lack of dedicated pathways for sABI patients may induce delay in discharge from the ICU. In fact, even if clinical factors should affect rehabilitation use, several non-clinical variables play a major role in rehabilitation provision and use. In particular, the availability of rehabilitation services seems to be the major determinant of whether patients use such care and which type of rehabilitation facility they use. Moreover, across Italian regions, the criteria used to rule out access to rehabilitation units are different and often unclear, lacking clinical criteria that would identify the best setting for maximizing outcomes.
This study has some limitations. The MOB and NoMOB groups were not homogeneous, since patients were not randomly allocated to the 2 groups. On the other hand, the aim of the study was to provide a description of the procedures that were spontaneously adopted across different centres in Italy. Subsequent studies, performed as randomized trials, will provide more rigorous and controlled data to analyse the effects of mobilization in the ICU, identifying patients who would benefit more. Two additional limitations can be reported: the narrow time window (ICU LOS was a mean of 24 days (SD 14.1)) and the lack of follow-up data. Evidence shows that most of the functional recovery after TBI occurs in the first 6 months after the injury, a period too long to verify the decisive results of a rehabilitation intervention occurring in the first weeks of hospitalization (39). Studies on patients with cerebral haemorrhage have shown that significant improvements (measured by means of FIM and CRS) could be detected, on average, after 11 and 9 weeks from admission to neurorehabilitation, for patients diagnosed with a vegetative state or minimally conscious state, respectively (40). This suggests that neurological outcomes should be measured over a longer time-frame than that of our study, and may explain why studies on patients with sABI are performed mainly in a subsequent phase after the ICU stay, during patients' admission to rehabilitation. Therefore, it is necessary to plan further studies including follow-up evaluations, in order to track the recovery pathway of patients with ABI from the acute phase to hospital discharge.
In conclusion, although technical difficulties and questions remain, this study provides further support for the early and progressive implementation of mobilization in the ICU for patients with sABI, and suggests that, in order to achieve widespread implementation of early rehabilitation, a shift in focus is necessary from survival to functional outcome among ICU clinicians. Finally, the study highlights several areas for future research: studies are needed addressing the timing and dosage of mobilization, as well as the association between early mobilization and patient-centred outcomes. SIAARTI (Italian Society of Anesthesia, Analgesia, Resuscitation Intensive Care), and SIMFER (Italian Society of Physical Medicine and Rehabilitation), which aims at improving and promoting the culture of early rehabilitation in the ICU.
Table I.
Demographic and clinical characteristics of patients with severe acquired brain injury (sABI) at hospital admission (overall and by mobilization status). a p-value from Mann-Whitney U test; b p-value from χ2 test; c p-value from Fisher's exact test. MOB: receiving passive/active-assisted mobilization during hospitalization; SD: standard deviation; q1-q3: first and third quartiles.
Table II.
Functional measures at admission and discharge in patients with severe acquired brain injury (sABI) (overall and by mobilization status). a Medians and lower-upper quartiles; b determines whether functional outcome means evaluated at baseline were different between MOB groups; c determines whether absolute changes in functional outcome means during the follow-up differed between MOB groups; d determines whether absolute changes in functional outcome means during the follow-up were significantly different within all patients and within each MOB group; e determines the presence of a linear trend in functional outcome means over follow-up within all patients and within each MOB group. Superscript letters denote statistically significant (i.e. p-value < 0.05) pairwise comparisons adjusted for multiple comparisons: a 1st vs. 2nd, b 1st vs. 3rd.
Table III.
Clinical characteristics of patients with severe acquired brain injury (sABI) at hospital discharge (overall and by mobilization status). a p-value from χ2 test; b p-value from Fisher's exact test.
Table IV.
Discharge destinations. a p-value from χ2 test; b p-value from Fisher's exact test.
Table V.
Rehabilitation treatments: intergroup comparison. | 2017-10-19T16:04:12.823Z | 2017-11-21T00:00:00.000 | {
"year": 2017,
"sha1": "6067c998de8b5a6a47ad49b649507f7da3301e4b",
"oa_license": "CCBYNC",
"oa_url": "https://www.medicaljournals.se/jrm/content_files/download.php?doi=10.2340/16501977-2269",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6067c998de8b5a6a47ad49b649507f7da3301e4b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246103067 | pes2o/s2orc | v3-fos-license | Association of Metabolic Syndrome with Migraine: A Case-Control Study
Background: Migraine is a disabling primary headache disorder and metabolic syndrome is a major escalating public-health challenge worldwide. They share some common pathophysiology, but to date their relationship remains obscure. Methods: This study was conducted in the headache clinic, the inpatient-outpatient department of Neurology, and the Biochemistry laboratory of BSMMU, from June 2017 to February 2019. In this age- and sex-matched case-control study, 30 migraine patients and an equal number of non-migraine volunteers were enrolled according to inclusion and exclusion criteria. Waist circumference (WC), blood pressure (BP), fasting plasma glucose (FPG), serum triglyceride (TG) and high-density lipoprotein cholesterol (HDL-C) were measured in all participants. Results: In this case-control study, 24 women and 6 men were included in both the case and control groups, with mean ages (±SD) of 32 (±7.77) and 30 (±8.46) years, respectively. Metabolic syndrome was significantly more common among migraineurs (36.7% in the case group and 13.3% in the control group, p=0.037). Persons with metabolic syndrome had 3.763 times higher odds of having migraine than persons without metabolic syndrome [p=0.037, OR=3.763, 95% C.I. (1.038-13.646)]. Conclusion: There is an association between metabolic syndrome and migraine.
headache. Among the different types of aura, visual aura is the most common (90%) type 1. Several theories have been put forward regarding the complex pathophysiology of migraine: the vascular theory, the migraine generator theory, the cortical spreading depression theory and the trigeminovascular theory 2. Activation of the trigeminovascular system plays a central role in the pathophysiology of migraine and is linked to the pain of migraine 3. The activated trigeminovascular system leads to release of inflammatory vasoactive neuropeptides (CGRP, substance P, NO) from sensory afferents that innervate the major intracranial arteries, resulting in vasodilation, plasma protein extravasation and inflammation (termed "neurogenic inflammation") 3. More and more evidence indicates a primary role for CGRP as a mediator of migraine 2. Cortical spreading depression (CSD) is hypothesized to cause the aura of migraine, activate trigeminal nerve afferents and alter blood-brain barrier permeability 2. In migraine, a special pattern of inflammatory and oxidative stress markers has been observed in the systemic circulation, including increased levels of C-reactive protein (CRP), interleukins (IL-1, IL-6) and TNF-α 4,5. Increased levels of leptin (which activates IL and TNF-α and increases pain sensitivity) 5 and homocysteine (which induces neurogenic inflammation, oxidative stress and inhibition of the GABA-A receptor in migraine attacks) 6, and decreased magnesium levels in serum and brain (low brain Mg triggers release of 5-HT, a vasoconstrictor) 2, have been observed among migraineurs. Migraine is associated with some major vascular diseases, including stroke, subclinical brain matter lesions, coronary artery disease and HTN 7. The metabolic syndrome is a major escalating public-health problem and clinical challenge worldwide in the wake of urbanization, surplus energy intake, increasing obesity and sedentary life habits 8. Metabolic syndrome is present if ≥3 of the following five criteria are met: waist circumference >90 cm (men) or >80 cm (women) (adjusted for Asian populations); blood pressure systolic >130 or diastolic >85 mmHg, or drug treatment for HTN as an alternate indicator; fasting triglyceride (TG) level >150 mg/dl or drug treatment for elevated TG; fasting high-density lipoprotein (HDL) cholesterol level <40 mg/dl (men) or <50 mg/dl (women) or drug treatment for reduced HDL-C; and fasting blood sugar >100 mg/dl or drug treatment for elevated glucose 8. Metabolic syndrome confers a 5-fold increased risk of type 2 DM and 2-4 fold the risk of developing cardiovascular disease and stroke 9.
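The ≥3-of-5 rule quoted above, with the Asian-adjusted waist cut-offs, lends itself to a simple worked illustration. The function below is a sketch of that rule as stated in the text; the parameter names and the example subject are invented, and drug-treatment criteria are folded into simple boolean flags.

```python
# Sketch of the AHA/NHLBI (2005) rule as quoted in the text, with Asian-adjusted
# waist cut-offs: metabolic syndrome = at least 3 of the 5 criteria.

def has_metabolic_syndrome(sex, waist_cm, sbp, dbp, on_bp_drug,
                           tg_mg_dl, on_tg_drug, hdl_mg_dl, on_hdl_drug,
                           fpg_mg_dl, on_glucose_drug):
    criteria = [
        waist_cm > (90 if sex == "M" else 80),
        sbp > 130 or dbp > 85 or on_bp_drug,
        tg_mg_dl > 150 or on_tg_drug,
        hdl_mg_dl < (40 if sex == "M" else 50) or on_hdl_drug,
        fpg_mg_dl > 100 or on_glucose_drug,
    ]
    return sum(criteria) >= 3

# Hypothetical subject: female with central obesity, raised BP and raised TG -> 3 criteria met
print(has_metabolic_syndrome("F", waist_cm=86, sbp=136, dbp=82, on_bp_drug=False,
                             tg_mg_dl=165, on_tg_drug=False, hdl_mg_dl=52,
                             on_hdl_drug=False, fpg_mg_dl=92, on_glucose_drug=False))  # True
```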
Though the exact pathogenesis of metabolic syndrome is not clear, abdominal adiposity and insulin resistance are thought to be at the core of the pathophysiology of the metabolic syndrome and its individual components 10,11. Free fatty acids (FFA) are released in abundance from an expanded adipose tissue mass, which results in increased hepatic production of glucose and triglycerides, leading to lipid/lipoprotein abnormalities including reductions in HDL-C, reduced insulin sensitivity in muscle and increased pancreatic insulin secretion, resulting in hyperinsulinemia 10,11. Insulin resistance in the liver, muscle and adipose tissue is also associated with an abundance of proinflammatory cytokines 10,11. In obese persons, elevated calcitonin gene-related peptide (CGRP) and leptin and decreased adiponectin (an anti-inflammatory substance) have been observed in different studies 12. Metabolic syndrome is found to be associated with hyperhomocysteinemia 13 and low serum magnesium levels 14.
The relationship between metabolic syndrome and migraine is still obscure and only a few studies have addressed this topic 15, which have found positive [15][16][17][18][19][20] and negative associations 21. Previous studies conducted in BSMMU found that migraine is associated with dyslipidemia 22, decreased serum magnesium 23 and hyperhomocysteinemia 24, all of which are also associated with metabolic syndrome. Another study in Bangladesh found migraine to be more severe in patients with comorbidities such as DM, HTN and obesity 25.
Materials and methods:
This age- and sex-matched case-control study was conducted in the headache clinic, the inpatient-outpatient department of Neurology, and the Biochemistry laboratory of BSMMU. Patients with migraine headache (according to ICHD-3 beta criteria) 1, aged more than 18 years, who were willing to participate in this study and who had given informed written consent, were enrolled as the case group. Age- and sex-matched non-migraineur volunteers, aged more than 18 years, were selected as the control group. Both cases and controls were enrolled by purposive consecutive sampling. Participants who were pregnant or lactating, smokers or alcoholics, in active pain (during examination or sample collection), or with acute illness (e.g. fever, acute myocardial infarction, acute stroke); who had the following conditions or diseases: diabetes mellitus, hypothyroidism, Cushing's syndrome, acromegaly, polycystic ovarian syndrome, chronic kidney disease, nephrotic syndrome, chronic liver disease; or who took the following drugs: glucocorticoids, oral contraceptives, amitriptyline, valproic acid, pizotifen, beta blockers, thiazide diuretics, mirtazapine, quetiapine, olanzapine, retinoids, were excluded from this study. Secondary causes of metabolic syndrome, along with its individual components, were excluded among persons suffering from metabolic syndrome. Anthropometric measurements including height, weight and waist circumference (WC) were taken following the standard protocol (participants wearing light clothes and without shoes). Body mass index (BMI) was calculated as the weight (kg) divided by the square of the height (m²). To measure waist circumference (WC), the top of the right iliac crest was located (WC-IC method). A measuring tape (standard metered flexible measuring tape) was placed in a horizontal plane around the abdomen (the tape was snug but non-compressive). Measurement (cm) was made at the end of a normal expiration. Blood pressure was measured in both arms by the auscultatory method using a standard metered mercury sphygmomanometer (Model: ALPK-2), following the AHA (2005) 26 guideline for blood pressure measurement. Participants fasted for at least 12 hours overnight before blood sample collection. With all aseptic precautions, 10 ml of venous blood was collected from each participant (using sterile 10 cc disposable plastic syringes). 2 ml of blood was collected in a gray-cap test tube (containing EDTA) for measuring fasting plasma glucose and 5 ml of blood was collected in a red-cap test tube for measuring the fasting lipid profile. A Coulter auto-analyzer (Model AU680, USA) was used with proper reagents to measure fasting plasma glucose and the serum lipid profile. Metabolic syndrome was diagnosed according to the AHA/NHLBI (2005) criteria 8.
Statistical analysis:
Demographic, anthropometric, clinical and laboratory characteristics were expressed as mean ± SD (standard deviation) for continuous variables or as percentages for categorical variables. Continuous variables were compared using Student's t-test. For categorical variables, differences were assessed by the Chi-square test. To assess the relative significance of etiological variables, binary logistic regression was used. Results of the binary logistic regression were presented as odds ratios (OR) with 95% confidence intervals (CI). Data were analyzed using the statistical software SPSS v25. In all cases, p-values <0.05 were considered statistically significant.
Results:
A total of 60 participants were recruited for this study, of which 30 migraineurs were in the case group and 30 respondents were in the control group, after fulfilling the inclusion and exclusion criteria. Previous work 15 has found that migraineurs have significantly higher BMI and waist circumference than non-migraineurs. Measures commonly used for assessing obesity are BMI and waist circumference (WC). Unfortunately, BMI is not considered a good estimate of obesity in Asian Indians, as they have a characteristic obesity phenotype with relatively lower BMI but with central obesity 27. It has been suggested that fat distributed in the abdominal region, particularly visceral fat, is more metabolically important than other fat depots 27. This study did not find any significant difference in fasting blood glucose (mmol/L) between the case and control groups [5.44±0.66 and 5.20±0.52 (mean ± SD) in the case and control groups respectively, p=0.117]. Insulin resistance (IR) (a prior stage through which a person may go for years before developing pre-diabetes followed by type 2 DM) is related to both metabolic syndrome and migraine. Fava et al. (2013) 28 found that migraine is associated with insulin resistance, so direct measurement of IR could have given a better picture. This is included in the WHO diagnostic criteria for metabolic syndrome (1998). However, in this study the AHA/NHLBI (2005) criteria were used, for which measurement of insulin resistance is not required.
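The reported odds ratio can be reproduced from the group sizes given above (11/30 cases and 4/30 controls with metabolic syndrome, implied by 36.7% and 13.3%). The sketch below recomputes the OR and a Woolf (log-based) 95% confidence interval; whether the published interval was obtained exactly this way or from the logistic model is not stated, so the method is an assumption.

```python
import math

# 2x2 table implied by the percentages in the text:
#            MetS+   MetS-
# migraine     11      19
# control       4      26
a, b, c, d = 11, 19, 4, 26

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")  # ~3.763 (1.038, 13.65), matching the paper
```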
Conclusion:
The study suggests that metabolic syndrome is associated with migraine. Though body mass index is higher among migraineurs than non-migraineurs, the difference is not statistically significant. Waist circumference (a core component of metabolic syndrome), however, is significantly higher among migraineurs. Among the other components of metabolic syndrome, systolic blood pressure and triglycerides are significantly higher, and HDL-C significantly lower, among migraineurs than among non-migraineurs.
Recommendation:
Though the study was conducted on a small sample size, it may be recommended that metabolic syndrome, along with its components, should be looked for in migraineurs, as some commonly prescribed anti-migraine drugs are associated with weight gain, dyslipidemia and insulin resistance. | 2022-01-22T16:03:51.160Z | 2018-01-31T00:00:00.000 | {
"year": 2018,
"sha1": "a397363fcef6ab03e0607d32a2891ff8c4fd6e63",
"oa_license": null,
"oa_url": "https://www.banglajol.info/index.php/BJN/article/download/57531/40063",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b74973b5a0c4dadb9cc04d8a4f5b4a9171d406af",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
1292121 | pes2o/s2orc | v3-fos-license | Strand bias in complementary single-nucleotide polymorphisms of transcribed human sequences: evidence for functional effects of synonymous polymorphisms
Background Complementary single-nucleotide polymorphisms (SNPs) may not be distributed equally between two DNA strands if the strands are functionally distinct, such as in transcribed genes. In introns, an excess of A↔G over the complementary C↔T substitutions had previously been found and attributed to transcription-coupled repair (TCR), demonstrating the valuable functional clues that can be obtained by studying such asymmetry. Here we studied asymmetry of human synonymous SNPs (sSNPs) in the fourfold degenerate (FFD) sites as compared to intronic SNPs (iSNPs). Results The identities of the ancestral bases and the direction of mutations were inferred from human-chimpanzee genomic alignment. After correction for background nucleotide composition, excess of A→G over the complementary T→C polymorphisms, which was observed previously and can be explained by TCR, was confirmed in FFD SNPs and iSNPs. However, when SNPs were separately examined according to whether they mapped to a CpG dinucleotide or not, an excess of C→T over G→A polymorphisms was found in non-CpG site FFD SNPs but was absent from iSNPs and CpG site FFD SNPs. Conclusion The genome-wide discrepancy of human FFD SNPs provides novel evidence for widespread selective pressure due to functional effects of sSNPs. The similar asymmetry pattern of FFD SNPs and iSNPs that map to a CpG can be explained by transcription-coupled mechanisms, including TCR and transcription-coupled mutation. Because of the hypermutability of CpG sites, more CpG site FFD SNPs are relatively younger and have confronted less selection effect than non-CpG FFD SNPs, which can explain the asymmetric discrepancy of CpG site FFD SNPs vs. non-CpG site FFD SNPs.
Background
Single-nucleotide polymorphisms (SNPs) involve two complementary base substitutions, one on each DNA strand. Where the two DNA strands are functionally distinct (such as in transcribed sequences), the two complementary substitutions may not occur with equal frequency on each strand [1], due to transcription-related mutation/repair mechanisms or selective pressure from functional effects on mRNA. A↔G vs. C↔T asymmetry in the two DNA strands is well known to exist in prokaryotes [2]. In the human, there is an excess of C↔T over G↔A in mutations causing Mendelian disorders [3] while excess of A→G substitutions in the sense strand of transcribed intronic sequences was found when comparing a ~1.5 Mb region of human chromosome 7 to its chimpanzee orthologue [4]. Both reports attributed the bias to transcriptioncoupled repair (TCR), and further support for transcription-coupled effect has been provided by the correlation between strand bias in nucleotide composition of transcribed sequences with transcription levels [5]. However, the conflicting results observed within coding and intronic sequences have not been explored further. It is highly unlikely that TCR distinguishes between exons and introns. Furthermore, our current knowledge of TCR [6,7] suggest that its action would affect the proportion of A→G vs. T→C mutations, but should not affect other mutations. An alternative explanation for the observed discrepancy between exons and introns is that synonymous exonic substitutions in mammals may be under non-trivial selective pressures, as has been suggested by some recent studies [8,9]. An important effect of synonymous coding mutations is the association with gene splicing [10,11]. In humans, evidence of selection on synonymous variations may have a profound effect on how we view the role of synonymous variations in genetic disease and phenotypic variability. Further research is needed besides these studies: the analysis of disease-causing mutations [3] required assumptions about likelihood of coming to clinical attention based on chemical differences between substituted amino acids, while the work on intronic sequences [4] was confined to a single ~1.5 Mb region and the genome-wide applicability of the results remains to be proven. Neither study explored differences between introns and exons to distinguish mutation/repair effects from alterations in RNA function. To our knowledge, strand asymmetry in human SNPs has not been fully examined for possible clues about the mutational mechanisms that created them and/or their potential functional significance. We therefore undertook a systematic examination of human coding SNPs in the fourfold degenerate (FFD) codon site and a random sample of intronic SNPs (iSNPs) for strand asymmetry between A↔G and C↔T polymorphisms.
Results
The identities of the ancestral bases and the direction of mutations were inferred from human-chimpanzee genomic alignment. To avoid bias from amino acid composition in the third codon position, only FFD SNPs were included in the analysis. On this basis, from the full list of Perlegen validated SNPs, 2,374 FFD SNPs involving A↔G or C↔T polymorphisms were identified for further investigation (Table 1). To increase the statistical power of this study, a larger number of iSNPs were included in the analysis. As edges of introns are known to be under selective constraint [8,12,13], all iSNPs investigated were chosen to be more than 200 bp from each intronic end. In addition, first introns have specific substitution patterns because they are enriched for CpG islands [8] which, being unmethylated [14], are not hypermutable. Also, iSNPs in first introns may be under purifying selection [8,12,13]. Therefore, iSNPs in first introns were not included in the subset.
To control the observed substitution rates for background nucleotide composition (see Methods), the nucleotide content was determined for all known human intronic sites and FFD sites of coding regions (Table 2). After the background correction, a large excess of A→G polymorphisms over the complementary T→C was found in both FFD SNPs and iSNPs (Fig 1a). An excess of G→A changes over C→T was also observed in iSNPs (χ2 = 5.0, v = 1, p = 0.025, ratio (95%CI) = 1.08 (1.01, 1.15)) and, more dramatically, at FFD SNPs (χ2 = 27.2, v = 1, p < 0.001, ratio (95%CI) = 1.30 (1.18, 1.43)). We thus confirm, genome-wide and within Homo sapiens, the strand bias in substitution rates that had been found in a human chr. 7 region when compared to chimpanzee [4]. The excess of A→G polymorphisms among iSNPs is concordant with the finding by Green et al. [4]. This result can be explained by the differential effect of TCR on the transcribed and untranscribed DNA strands of genes.
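A hedged sketch of the background correction and asymmetry ratio used here (described more fully in the Methods): each observed count is divided by the genome-wide frequency of its ancestral base, and the asymmetry ratio is then the ratio of the two corrected counts. The example counts and base frequencies below are invented, and the exact normalization and the logistic-regression confidence interval used in the paper may differ in detail.

```python
# Sketch of the background correction: divide each observed polymorphism count by
# the frequency of its ancestral base, then compare the corrected counts of a
# complementary pair (e.g. A->G vs. T->C). All numbers below are illustrative.

base_freq = {"A": 0.29, "C": 0.21, "G": 0.21, "T": 0.29}    # hypothetical background composition
observed = {"A>G": 620, "T>C": 510, "G>A": 480, "C>T": 450}  # hypothetical SNP counts

def corrected(count: int, ancestral_base: str) -> float:
    return count / base_freq[ancestral_base]

ag = corrected(observed["A>G"], "A")
tc = corrected(observed["T>C"], "T")
print(f"A>G / T>C asymmetry ratio after correction: {ag / tc:.2f}")

ga = corrected(observed["G>A"], "G")
ct = corrected(observed["C>T"], "C")
print(f"G>A / C>T asymmetry ratio after correction: {ga / ct:.2f}")
```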
In order to investigate the effect of hypermutable CpG dinucleotides, and to correct for their excess in exons over introns (Table 1), SNPs were next analyzed separately according to whether or not the polymorphism occurred within a CpG site. The hypermutability of CpG dinucleotides is well documented and results from methylation-induced deamination of 5-methyl cytosine [15]. If the deamination occurs on the sense strand, it results in [C→T]pG; if the cytosine deamination takes place on the antisense strand, it produces a Cp[G→A] on the sense strand. Thus, a SNP at a CpG site has the pattern YpG or CpR (Y represents C or T, and R represents A or G). In introns, the mutational asymmetry does not differ between CpG and non-CpG sites (Table 1). Unlike iSNPs, a dramatic difference between CpG and non-CpG sites was noted in FFD SNPs. After correction for codon composition, a different asymmetry pattern of G→A vs. C→T between non-CpG sites and CpG sites was noticed (Table 3, Fig 1b). Excess C→T over G→A can be seen in non-CpG FFD SNPs, but not CpG FFD SNPs. Because this finding is present in exons but absent in introns, it is very unlikely that it can be explained by any transcription-related mutational and/or repair mechanism, but rather suggests selective pressure due to effects on the function of the mature transcript.
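The YpG/CpR pattern described above gives a direct rule for deciding whether a SNP lies in a CpG dinucleotide: a C/T polymorphism followed by a G, or an A/G polymorphism preceded by a C. The sketch below applies that rule to flanking-sequence strings; the function and examples are illustrative and are not the pipeline used in the study.

```python
# Decide whether a SNP site is part of a CpG dinucleotide on the sense strand,
# using the YpG / CpR patterns described in the text (Y = C/T, R = A/G).

def is_cpg_site(flank_5p: str, alleles: set[str], flank_3p: str) -> bool:
    if alleles <= {"C", "T"}:            # C<->T polymorphism: CpG if followed by G
        return flank_3p.upper().startswith("G")
    if alleles <= {"A", "G"}:            # A<->G polymorphism: CpG if preceded by C
        return flank_5p.upper().endswith("C")
    return False

print(is_cpg_site("AAC", {"C", "T"}, "GTT"))  # True  (YpG)
print(is_cpg_site("AAC", {"A", "G"}, "TTT"))  # True  (CpR)
print(is_cpg_site("AAT", {"C", "T"}, "ATT"))  # False
```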
An obvious example of such an effect is the creation of an AT dinucleotide by a G→A mutation in a FFD site when a T is the first nucleotide of the next codon. AU dinucleotides are known to be targets of RNaseL endonucleolytic cleavage [16]. A|U dinucleotides at synonymous dicodon boundaries could allow more efficient 3'-5' degradation by endonucleolytic cleavage [17] and, consequently, drive purifying selection. Thus, our interpretation makes the prediction of fewer than expected G→A polymorphisms at FFD sites preceding a codon with a T in the first position. Our analysis indeed shows a dramatic deficit of G→A polymorphisms that occur before a codon that starts with a T (Table 4).
Discussions and conclusion
It is of great interest that the C→T excess over the complementary G→A in non-CpG FFD SNPs is not seen in iSNPs or FFD SNPs that are part of a CpG. As iSNPs and FFD SNPs should confront the same transcription-coupled mechanisms, including TCR and transcription-coupled mutation (TCM) [18], the C→T excess of FFD SNPs must be driven by mechanisms other than mutational/repair factors. Alternatively, biologically significant effects of synonymous SNPs (sSNPs) on aspects of RNA function other than protein coding may exist and be subject to selective pressures. Unlike lower organisms, it is still contentious whether selection for translational efficiency does [19,20] or does not [21][22][23][24] play a major role in shaping codon usage (and therefore sSNP frequencies) in mammals. There is little variation in iso-acceptor tRNA gene numbers and the population sizes are likely too low to reflect very weak selective pressures [23]. On the other hand, translation may be affected by RNA secondary structure which, like splicing, mRNA stability, or other less well understood RNA functions, may be significantly altered by single-nucleotide changes. Such mechanisms have recently been suggested in a few studies [8,9,25]. If sSNPs do have such biological effects, there is evidence to suggest that changes in mRNA secondary structure are likely to play an important role in mediating them [25,26]. Given the evidence of compromised mRNA stability in the presence of A|T dinucleotides at dicodon boundaries [16,17], G→A polymorphisms at FFD sites may have deleterious effects that C→T does not, thus creating selection pressure that favors C→T if the next codon begins with a T. In this report we show that this is indeed the case.
The different asymmetry pattern between non-CpG and CpG sites can be attributed to the hypermutability of the latter [27]. The effects of selection on the observed mutation patterns are most pronounced in relatively slowly mutating, non-CpG sites. Because of the hypermutability of CpG sites, more CpG site FFD SNPs are younger and have confronted less selective pressure than non-CpG FFD SNPs. For the same selection effect on A|T dinucleotides, A→G polymorphism may also confront more selection pressure than T→C, which can also explain why the A→G excess is not significantly different in FFD non-CpG and intronic CpG sites.
In conclusion, we confirm the genome-wide excess of A→G over T→C mutations previously reported in a small region of chr. 7 [4], a finding that points to TCR as an important factor in human mutagenesis. More importantly, our analysis of FFD SNPs clearly suggests a mechanism that operates differentially in intronic vs. exonic sequences. We propose that selective pressure related to changes in mRNA stability is the most likely explanation.
In view of the balance between selective and mutational pressures, we provide a satisfactory explanation for previously contradictory findings on mutation rates in humans [3,4,28]. Our finding further highlights the importance of not overlooking the potential function of sSNPs, which may not be as selectively neutral as is generally thought [29], an important consideration given the expected wealth of complex-disease association data to come out of the new genotyping technologies.
SNP information collection
Considering the possibility that some SNPs recorded in the NCBI dbSNP database may be unreliable and result from DNA sequencing errors, we performed the investigation using the Perlegen dataset of DNA variation genotyping [30,31]. The SNPs were all identically ascertained by microarray resequencing of the genome, and verified in multiple populations. Only single-nucleotide polymorphisms with two alleles were included. SNPs on sex chromosomes were excluded from this study. Reference sequences of the SNPs in the 22 pairs of human autosomes were bulk-downloaded from the NCBI dbSNP database, build 124 [32].
The orientation of SNP reference sequence
The dbSNP reference sequences of iSNPs cannot be aligned with the mRNA sequence directly. Some FFD SNP reference sequences have intronic sequence included, and some genes have different mRNA transcripts from alternative splicing. Therefore, instead of aligning a SNP sequence with the mRNA sequence, we wrote Java scripts to determine the orientation of a dbSNP reference sequence relative to the DNA coding strand. The corresponding NCBI genomic DNA contig sequence was first downloaded from the NCBI reference sequences [33]. Then, a SNP reference sequence was aligned with the contig sequence around the SNP contig position, and its orientation in the contig sequence was determined. The orientation of the mRNA sequence in the same contig sequence was acquired from the annotation of dbSNP. Based on these two orientations, the orientation of the SNP reference sequence relative to the mRNA sequence was known. The corresponding nucleotide polymorphisms on the DNA coding strand were then determined.
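A simplified sketch of this two-step orientation logic is given below; the original analysis used Java scripts against NCBI contig sequences and dbSNP annotation, whereas here the alignment step is reduced to an exact substring match, and all names are illustrative.

```python
# Simplified sketch of the orientation logic described above: (1) orient the
# SNP flanking sequence against the contig, (2) combine with the annotated
# mRNA orientation on that contig. Exact matching stands in for alignment.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def flank_orientation(snp_flank, contig_window):
    """+1 if the flank matches the contig forward strand, -1 if its reverse
    complement matches, None if neither (alignment failure)."""
    if snp_flank in contig_window:
        return +1
    if revcomp(snp_flank) in contig_window:
        return -1
    return None

def snp_vs_coding_strand(snp_flank, contig_window, mrna_orientation):
    """Combine the flank-vs-contig and mRNA-vs-contig orientations (+1/-1)
    to get the SNP reference orientation relative to the coding strand."""
    flank = flank_orientation(snp_flank, contig_window)
    return None if flank is None else flank * mrna_orientation

if __name__ == "__main__":
    contig = "GGATTACACGTAGCTAGCTTTT"
    # Flank stored in reverse complement relative to the contig; gene on + strand.
    print(snp_vs_coding_strand(revcomp("TACACGTAG"), contig, +1))  # -1
```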
Correction for nucleotide or codon compositions
In order to determine the relative rates of each substitution, the observed counts were corrected for the background frequencies of nucleotides or codons. Both the intronic and FFD nucleotide compositions were acquired from the 14,029 genes annotated in the CCDS database [34,35]. For the background intronic nucleotide compositions, the first introns as well as the first and last 200 bp of each intron were excluded. As an example of the correction, for an A→G polymorphism the observed number N_A→G was divided by the background frequency of adenine P_A, i.e., the corrected count is N_A→G/P_A. The corrected proportions of each type of polymorphism within the A↔G-C↔T pair were calculated in the same way. For the asymmetry ratio of complementary polymorphisms, such as A→G vs. T→C, the 95% CI was computed by logistic regression analysis. | 2017-06-20T07:23:53.114Z | 2006-08-17T00:00:00.000 | {
"year": 2006,
"sha1": "9669b695a2be978162ea1f40e39225499da1edbd",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-7-213",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fbe0e7f20bbcd4e32d9bd34acc2cbc20866271f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
227297523 | pes2o/s2orc | v3-fos-license | Frequency of Positive Aspergillus Tests in COVID-19 Patients in Comparison to Other Patients with Pulmonary Infections Admitted to the Intensive Care Unit
The aim of this study was to describe the frequency of positive Aspergillus tests in COVID-19 patients and investigate the association between COVID-19 and a positive Aspergillus test result. We compared the proportion of positive Aspergillus tests in COVID-19 patients admitted to the intensive care unit (ICU) for >24 h with two control groups: patients with community-acquired pneumonia with (i) a PCR-confirmed influenza infection (considered a positive control since the link between influenza and invasive aspergillosis has been established) and (ii) Streptococcus pneumoniae pneumonia (in whom positive Aspergillus tests are mostly considered to indicate colonization).
Invasive pulmonary aspergillosis (IPA) is a life-threatening disease that typically occurs in severely immunocompromised patients (1). The mortality of IPA in patients with hematological disease is estimated to be 30% (2) but is substantially higher in critically ill patients (3). More recently, intensive care unit (ICU) admission for severe influenza has been shown to be a risk factor for IPA, with an incidence varying between 7 and 18% and overall mortality around 50% (4-6).
Like influenza, COVID-19 is predominantly a pulmonary disease, and COVID-19 has been linked with Aspergillus detection in respiratory samples by several authors, who have coined the term COVID-19-associated pulmonary aspergillosis (7,8). Assessing clinical significance of a positive Aspergillus test is difficult in general (9) and is even more difficult in ICU patients. Clinically, it is impossible to differentiate pulmonary aspergillosis from COVID-19 based on clinical signs and symptoms, especially in patients admitted to the ICU. All of them have pulmonary infiltrates. Moreover, chest computed tomography (CT) scans of COVID-19 patients that show bilateral widespread ill-defined and ground-glass opacification (10) may also obscure any eventual findings of IPA.
Studies showing prevalent positive Aspergillus tests in COVID-19 so far are mostly case reports or observational studies without control groups (8, 11-18). While small case series are prone to publication bias, the lack of any control group precludes a valid conclusion (19). The link between Aspergillus and COVID-19 might, for example, also be explained by the high frequency of bronchoalveolar lavage (BAL) performed in COVID-19 patients admitted to the ICU to exclude bacterial superinfection. The more diagnostic procedures are performed, the higher the chances of detecting microorganisms that may be colonizers rather than true pathogens, which also applies to the ubiquitous mold Aspergillus. Detection of Aspergillus in respiratory samples of nonimmunocompromised patients is often considered to indicate colonization and does not require antifungal therapy (20). By considering any positive Aspergillus test in COVID-19 patients as clinically significant, antifungal treatment will be initiated in these patients, leading to higher costs, possible adverse events, and toxicity (21).
Based on these observations, we hypothesized that the number of positive Aspergillus tests in COVID-19 patients would not differ from that in a control group but would be lower than in a group of ICU patients that is now broadly considered to be at increased risk for IPA.
Therefore, the aim of this study was to describe the frequency of positive Aspergillus tests in COVID-19 patients and to investigate the association between COVID-19 status and positive Aspergillus tests in a case-control study using two different control groups, one with an established link with IPA (influenza) and another without (pneumococcal pneumonia).
MATERIALS AND METHODS
Study design. This was a case-control study performed in Erasmus University Medical Centre (Erasmus MC), Rotterdam, The Netherlands. Erasmus MC is a large tertiary hospital. It hosted the national coordination center for COVID-19 patient distribution during the first wave of the COVID-19 outbreak in The Netherlands (between March and June 2020). During the first wave, it had 102 ICU beds allocated for COVID-19 (7% of all Dutch ICU beds for COVID-19).
Included were adult patients (>18 years old) admitted to the ICU for >24 h. COVID-19 cases were those with confirmed COVID-19 based on a positive PCR from respiratory samples between 1 March and 21 April 2020. We used two control groups selected in the period between January 2010 and April 2020 admitted to the same ICU: (i) patients with community-acquired pneumonia due to influenza confirmed by a positive PCR on respiratory samples and (ii) patients with community-acquired pneumococcal pneumonia based on a positive respiratory culture with Streptococcus pneumoniae or a positive urine antigen test and a negative influenza test. All included patients had infiltrates on the radiograph or CT scan of the chest.
Demographic (age and gender) and microbiology data were obtained from the hospital and laboratory information systems. This study was a noninterventional observational study using only limited demographic data and was performed under institutional review board approval (METC-2015-306).
Test algorithm. In COVID-19 cases and controls, BAL or other respiratory samples were collected within the first week of ICU admission. BAL fluid was collected when patients showed clinical deterioration leading to a differential diagnosis of secondary infection based on clinicians' judgment. At Erasmus MC, it is standard procedure to perform a fungal culture on all BAL samples from ICU patients. The Aspergillus antigen test (galactomannan) and Aspergillus DNA detection by PCR are occasionally performed as well. Due to the invasive nature of BAL, and because BAL fluid is used to make a diagnosis and not for follow-up of the pulmonary infections, BAL was rarely done more than once.
Microbiological tests. Galactomannan tests were performed on the BAL fluid and other types of respiratory samples (when BAL was not performed) or serum of the patients using an immunoenzymatic sandwich microplate assay (Platelia Aspergillus Ag; Bio-Rad Laboratories B.V., Veenendaal, The Netherlands). A galactomannan index of >0.6 was considered positive for serum as well as for BAL fluid. The performance of this test depends on the study population, and studies in nonneutropenic patients reported sensitivities of 22% in serum (22) and 76% in BAL fluid to diagnose invasive pulmonary aspergillosis (IPA) at a cutoff galactomannan index of ≥0.5 (23). PCR for Aspergillus was performed using the commercial kit AsperGenius (PathoNostics, Maastricht, The Netherlands), which detects and identifies Aspergillus fumigatus, Aspergillus terreus, Aspergillus spp., and azole resistance markers TR34 and TR46, or an in-house real-time PCR assay as described before (24). The sensitivity of the in-house real-time PCR was 80% in nonneutropenic patients in diagnosing IPA (24). Aspergillus cultures were performed on BBL Sabouraud dextrose agar with chloramphenicol (BD Diagnostics, Erembodegem, Belgium) and incubated at 26°C and 35°C for 21 days. The sensitivity of culture in detecting invasive pulmonary aspergillosis is between 20% and 50% (25,26). These tests were performed in an accredited ISO 15189 microbiology lab. PCR for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) targeting E and RNA-dependent RNA polymerase (RdRp) was performed in the Department of Viroscience of Erasmus MC, which is one of the WHO referral laboratories.
Statistical analysis. Data were analyzed using SPSS Statistics 26 (SPSS Inc., Chicago, IL). Continuous variables are presented as means with standard deviations (SD). Means among groups were compared using analysis of variance (ANOVA). Numbers and proportions of gender and positive tests were calculated, and the chi-squared test was used to compare proportions between groups.
We defined several proxies for the clinical significance of pulmonary aspergillosis based on microbiology results (culture, PCR, and Aspergillus antigen in respiratory samples), ordered from low to higher probability: (i) any positive Aspergillus test performed on any type of respiratory or serum sample, (ii) any positive Aspergillus test from BAL samples only, (iii) positive PCR and positive Aspergillus antigen in BAL samples, (iv) positive culture and positive Aspergillus antigen in BAL samples, (v) positive culture and positive PCR from BAL samples, and (vi) positive PCR, positive Aspergillus antigen, and positive culture.
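For illustration only (this is not the study's code), these proxies can be expressed as simple Boolean combinations of a patient's test results; the field names below are hypothetical.

```python
# Illustrative sketch of the positivity proxies (i)-(vi) described above,
# evaluated for one patient from hypothetical test-result fields.

def aspergillus_proxies(r):
    """r: dict with booleans 'any_pos' (any positive Aspergillus test in any
    respiratory/serum sample) and 'bal_culture', 'bal_pcr', 'bal_antigen'
    (positive results from BAL fluid only)."""
    bal_any = r["bal_culture"] or r["bal_pcr"] or r["bal_antigen"]
    return {
        "i_any_sample": r["any_pos"],
        "ii_any_bal": bal_any,
        "iii_pcr_and_antigen_bal": r["bal_pcr"] and r["bal_antigen"],
        "iv_culture_and_antigen_bal": r["bal_culture"] and r["bal_antigen"],
        "v_culture_and_pcr_bal": r["bal_culture"] and r["bal_pcr"],
        "vi_all_three_bal": r["bal_culture"] and r["bal_pcr"] and r["bal_antigen"],
    }

if __name__ == "__main__":
    patient = {"any_pos": True, "bal_culture": False, "bal_pcr": True, "bal_antigen": True}
    print(aspergillus_proxies(patient))
```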
Further, we used logistic regression analysis to calculate the odds ratio (OR) with 95% confidence interval (CI) to determine whether COVID-19 pneumonia was independently associated with positive Aspergillus tests. We performed two analyses: first, considering positive Aspergillus tests in all types of respiratory samples (BAL fluid, sputum aspirates, and lung tissue), and second, considering positive Aspergillus tests in BAL fluid only. We adjusted both associations for age and sex.
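As an illustration of this type of analysis (the study itself used SPSS), the sketch below fits a logistic regression on synthetic data and reports an age- and sex-adjusted odds ratio with its 95% CI; none of the numbers relate to the actual cohort.

```python
# Minimal sketch, on synthetic data, of an age- and sex-adjusted odds ratio
# with 95% CI from logistic regression; nothing here reproduces the study's
# data or results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
covid = rng.integers(0, 2, n)            # 1 = COVID-19, 0 = control group
age = rng.normal(60, 12, n)
male = rng.integers(0, 2, n)
# Synthetic outcome: positive Aspergillus test (purely illustrative).
logit = -3.0 + 0.4 * covid + 0.02 * (age - 60) + 0.3 * male
pos_test = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([covid, age, male]))
fit = sm.Logit(pos_test, X).fit(disp=False)
or_covid = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"adjusted OR for COVID-19: {or_covid:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```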
RESULTS
Demographic characteristics and microbiological tests. During the study period, 92 patients were admitted to the ICU with COVID-19, 48 with influenza, and 65 with pneumococcal pneumonia (Table 1). COVID-19 patients tended to be older (mean age [SD], 62 [14] years) than patients with influenza or pneumococcal pneumonia (P = 0.1) and were more often male (P = 0.04).
Respiratory samples of approximately one-third of the COVID-19 patients were obtained. BAL sampling was more often performed in COVID-19 patients (29.3%) than in patients with pneumococcal pneumonia (18.5%) but less frequently than in patients with influenza (45.8%). The numbers of performed Aspergillus PCRs and Aspergillus antigen tests were significantly different between the COVID-19 and the control groups, but no statistical difference was found between the groups for numbers of performed fungal cultures.
Positive Aspergillus tests. Any positive Aspergillus test from any respiratory sample was found in 10.9% of the COVID-19 patients, a proportion higher than in patients with pneumococcal pneumonia (6.2%) but lower than in the influenza patients (22.9%; P = 0.02) (Table 2). When only the Aspergillus tests performed on BAL fluid were considered, the proportion of positive tests in COVID-19 patients was reduced to 5.4%, comparable to that of the pneumococcal pneumonia group (4.6%) but much lower than what was observed in patients with influenza (18.8%; P = 0.01). Stricter criteria for positivity (e.g., positive PCR and positive Aspergillus antigen in BAL samples, or positive culture and positive Aspergillus antigen in BAL samples) yielded too few cases to allow statistical comparisons.
Association between detection of Aspergillus and COVID-19. The OR (95% CI) of having any positive Aspergillus test in any respiratory sample for COVID-19 patients compared with pneumococcal pneumonia patients was 1.9 (0.6 to 6.2; P = 0.3). Taking age and gender into account did not change the estimate. Comparing the COVID-19 group with the influenza group, the OR (95% CI) was 0.4 (0.2 to 1.1; P = 0.06) without, and 0.4 (0.1 to 0.9; P = 0.04) with, adjustment for age and sex.
When only a positive Aspergillus test from BAL fluid was taken into account, the OR was 1.2 (0.3 to 5.1; P = 0.8) for the comparison of COVID-19 patients with pneumococcal pneumonia, while it was 0.2 (0.1 to 0.8; P = 0.02) compared with the influenza group, despite the fact that a fungal culture had been performed less often on BAL samples from patients with influenza than from COVID-19 patients. This difference remained significant when corrected for age and sex (OR, 0.2; 95% CI, 0.01 to 0.7; P = 0.01).
DISCUSSION
In this study, which included 92 patients admitted to the ICU with COVID-19, the proportion of patients in whom Aspergillus was detected was comparable to that of patients admitted with pneumococcal pneumonia, while significantly lower than in patients with influenza, a patient population in which the link with IPA is well established (4).
In critically ill patients admitted for respiratory failure, such as COVID-19 patients, the suspicion of a hospital-acquired superinfection typically arises when a clinical deterioration is observed. In COVID-19 patients, clinicians are often reluctant to perform BAL (27) and rely on upper respiratory tract specimens to diagnose ventilator-associated pneumonia. Yet the detection of Aspergillus in the upper respiratory tract more often represents colonization than infection (27). The majority of publications on Aspergillus in COVID-19 are case series (11-16, 18) and often included the detection of Aspergillus in the upper airway (6) or included the detection of β-D-glucan as a mycological criterion, a test that is not specific for Aspergillus spp. and has never been shown to be useful for diagnosing IPA in ICU patients (9). Bartoletti and coworkers did perform a study with systematic sampling of BAL fluid in COVID-19 patients admitted to the ICU. In their study, 27.7% of the patients had a serum galactomannan index of >0.5, a BAL galactomannan index of >1.0, growth of Aspergillus spp. in BAL fluid, or a cavitating infiltrate on CT scan of the chest (17). We agree that the 27.7% incidence is surprisingly high, and it is higher than the number in our study. It is very likely that the high incidence is at least partially explained by the very invasive BAL sampling protocol in that study. Indeed, a BAL was performed on admission, on day 7, and at the time of clinical deterioration. Our observations illustrate that a positive test for Aspergillus in COVID-19 should not automatically lead to the initiation of antifungal therapy, a treatment that does not come without risk (21). It is precisely in the ICU patient population that diagnosing IPA is most challenging. Indeed, the typical radiological findings are often absent, and in ICU patients the specificity of any mycological test is lower than it is in well-defined patient populations such as those with longstanding neutropenia. In this setting, nonspecific tests such as panfungal β-D-glucan should not be used.
Due to nonspecific clinical symptoms and radiological findings, we considered a comparison of objective microbiological test results between COVID-19 patients and two other ICU populations with different respiratory infections to be a reasonable first approach for estimating the clinical significance of the different Aspergillus tests available. We chose pneumococcal pneumonia as a control group since a comparable group in which invasive respiratory samples were obtained was needed. Patients admitted to the ICU with influenza were chosen as a patient group because ICU admission for respiratory insufficiency due to influenza has repeatedly been shown to increase the risk for IPA (4). Healthy patients as a control group would be ideal, but this was not feasible.
The most reliable data will come from studies in which pre- or postmortem lung biopsies or full autopsies are performed and correlated with diagnostic test results that preceded the findings on biopsy or autopsy. Few published studies of this kind are available. In one such study, postmortem lung biopsies could not confirm the premortem IPA diagnosis (28), and autopsy studies did not report the presence of possible Aspergillus (29-31).
In this study, we also observed several clinical practices in diagnosing and treating COVID-19 patients, such as the increased tendency to perform BAL in COVID-19 patients in comparison to pneumococcal pneumonia patients, despite initial fear of SARS-CoV-2 infection of medical personnel during bronchoscopy (27). For the majority of the BAL fluid samples, an Aspergillus test was requested and performed. Furthermore, we noticed that a positive Aspergillus test in our setting was common when samples were obtained from the upper respiratory tract, which suggests isolation of Aspergillus that may represent colonization.
The strength of the present study is the use of a control group and the fact that all patients originated from the same hospital, which should minimize selection bias. It can be assumed, for example, that the reasons to perform BAL are comparable among the groups. Yet we cannot control for all possible confounders and there is still some bias, since cases and controls needed to be selected from different calendar years. Another limitation is that only one-third of the COVID-19 patients' respiratory samples were tested for the presence of Aspergillus.
In conclusion, the frequency of positive Aspergillus tests from BAL fluid of COVID-19 patients was comparable to that of pneumococcal pneumonia patients. Since in this control group positive Aspergillus tests are often considered to indicate colonization, a positive test for Aspergillus in COVID-19 should be interpreted with caution and not automatically lead to the start of an antifungal, especially considering its possible side effects. To get a more reliable estimate of the incidence of IPA in patients with COVID-19, studies are needed in which systematic (minimally invasive) autopsies or directed postmortem lung biopsies are performed. There are no conflicts of interest to report. There are no financial disclosures to report. | 2020-12-06T14:06:28.862Z | 2020-12-04T00:00:00.000 | {
"year": 2020,
"sha1": "cb1c86296dc92e184f064d5577d888572d4ee5df",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8106735",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a37868130585c52c82b4e8cdde8c9d67c94a58aa",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118689646 | pes2o/s2orc | v3-fos-license | First-order quantum phase transitions: test ground for emergent chaoticity, regularity and persisting symmetries
We present a comprehensive analysis of the emerging order and chaos and enduring symmetries, accompanying a generic (high-barrier) first-order quantum phase transition (QPT). The interacting boson model Hamiltonian employed, describes a QPT between spherical and deformed shapes, associated with its U(5) and SU(3) dynamical symmetry limits. A classical analysis of the intrinsic dynamics reveals a rich but simply-divided phase space structure with a Hénon-Heiles type of chaotic dynamics ascribed to the spherical minimum and a robustly regular dynamics ascribed to the deformed minimum. The simple pattern of mixed but well-separated dynamics persists in the coexistence region and traces the crossing of the two minima in the Landau potential. A quantum analysis discloses a number of regular low-energy U(5)-like multiplets in the spherical region, and regular SU(3)-like rotational bands extending to high energies and angular momenta, in the deformed region. These two kinds of regular subsets of states retain their identity amidst a complicated environment of other states and both occur in the coexistence region. A symmetry analysis of their wave functions shows that they are associated with partial U(5) dynamical symmetry (PDS) and SU(3) quasi-dynamical symmetry (QDS), respectively. The pattern of mixed but well-separated dynamics and the PDS or QDS characterization of the remaining regularity, appear to be robust throughout the QPT. Effects of kinetic collective rotational terms, which may disrupt this simple pattern, are considered.
Introduction
Quantum phase transitions (QPTs) are qualitative changes in the properties of a physical system induced by a variation of parameters λ in the quantum Hamiltonian Ĥ(λ) [1,2,3]. Such ground-state transformations have received considerable attention in recent years and have found a variety of applications in many areas of physics and chemistry [4]. These structural modifications occur at zero temperature in diverse dynamical systems including spin lattices [5], ensembles of ultracold atoms [6,7] and atomic nuclei [8].
The particular type of QPT is reflected in the topology of the underlying mean-field (Landau) potential V (λ). Most studies have focused on second-order (continuous) QPTs, where V (λ) has a single minimum which evolves continuously into another minimum. The situation is more complex for discontinuous (first-order) QPTs, where V (λ) develops multiple minima that coexist in a range of λ values and cross at the critical point, λ = λ c . The competing interactions in the Hamiltonian that drive these ground-state phase transitions can affect dramatically the nature of the dynamics and, in some cases, lead to the emergence of quantum chaos [9,10,11]. This effect has been observed in quantum optics models of N two-level atoms interacting with a single-mode radiation field [12,13], where the onset of chaos is triggered by continuous QPTs. In the present article, we examine similar effects for the less-studied discontinuous QPTs, and explore the nature of the underlying classical and quantum dynamics in such circumstances.
The interest in first-order quantum phase transitions stems from their key role in phasecoexistence phenomena at zero temperature. In condensed matter physics, it has been recently recognized that, for clean samples, the nature of the QPT becomes discontinuous as the critical-point is approached. Examples are offered by the metal-insulator Mott transition [14], itinerant magnets [15], heavy-fermion superconductors [16], quantum Hall bilayers [17], Bose-Einstein condensates [18] and Bose-Fermi mixture [19]. First-order QPTs are relevant to shape-coexistence in mesoscopic systems, such as atomic nuclei [8], and to optimization problems in quantum computing [20].
Hamiltonians describing first-order QPTs are often non-integrable, hence their dynamics is mixed. They form a subclass among the family of generic Hamiltonians with a mixed phase space, in which regular and chaotic motion coexist. Mixed phase spaces are often encountered in billiard systems [9,10,11], which are generated by the free motion of a point particle inside a closed domain whose geometry governs the amount of chaoticity. Here, in contrast, we consider many-body interacting systems undergoing a first-order QPT, where the onset of chaos is governed by a change of coupling constants in the Hamiltonian. The amount of order and disorder in the system is affected by the relative strengths of different terms in the Hamiltonian which have incompatible symmetries. Order, chaos and symmetries are thus intertwined, and their study can shed light on the structure evolution. In conjunction with first-order QPTs, this raises a number of key questions. (i) How does the interplay of order and chaos reflect the first-order QPT, in particular, the changing topology of the Landau potential in the coexistence region? (ii) What is the symmetry character (if any) of the remaining regularity in the system, amidst a complicated environment? (iii) What is the effect of kinetic terms, which do not affect the potential, on the onset of chaos across the QPT?
To address these questions in a transparent manner, we employ an interacting boson model (IBM) [21], which describes quantum phase transitions between spherical and deformed nuclei. The model is amenable to both classical and quantum treatments, has a rich algebraic structure and inherent geometry. The phases are associated with different nuclear shapes and correspond to solvable dynamical symmetry limits of the model. The Hamiltonian accommodates QPTs of first-and second order between these shapes, by breaking and mixing the relevant limiting symmetries. These attributes make the IBM an ideal framework for studying the intricate interplay of order and chaos and the role of symmetries in such quantum shape-phase transitions. It is a representative of a wide class of algebraic models used for describing many-body systems, e.g., nuclei [21], molecules [22] and hadrons [23].
Chaotic properties of the IBM have been thoroughly investigated both classically and quantum mechanically [44,45,46,47,48,49,50,51]. All such treatments involved a simplified Hamiltonian giving rise to integrable second order QPTs and to non-integrable first order QPTs with an extremely low barrier and narrow coexistence region. A new element in the present treatment, compared to previous works, is the employment of IBM Hamiltonians without such restrictions [37] and their resolution into intrinsic and collective parts [52,53]. This enables a comprehensive analysis of the vibrational and rotational dynamics across a generic (high-barrier) first-order QPT, both inside and outside the coexistence region. Brief accounts of some aspects of this analysis were reported in [54,55]. Section 2 reviews the algebraic, geometric and symmetry content of the IBM. An intrinsic Hamiltonian for a first-order QPT between spherical and deformed shapes, with an adjustable barrier height, is introduced in Section 3, and its symmetry properties are discussed. The classical limit of the QPT Hamiltonian is derived in Section 4. The topology of the classical potential is studied in great detail, identifying the control and order parameters in various structural regions of the QPT. A comprehensive classical analysis is performed in Section 5, focusing on regular and chaotic features of the intrinsic vibrational dynamics across the QPT. Special attention is paid to the dynamics in the vicinity of minima in the Landau potential and to resonance effects. An elaborate quantum analysis is conducted in Section 6 with emphasis on quantum manifestations of classical chaos and remaining regular features in the spectrum. A symmetry analysis is performed in Section 7, examining the symmetry content of eigenstates and the evolution of purity and coherence throughout the QPT. The impact of different collective rotational terms on the classical and quantum dynamics is considered in Section 8. The implications of modifying the barrier height are examined in Section 9. The final Section is devoted to a summary and conclusions. Specific details on the IBM potential surface and on linear correlation coefficients are collected in Appendices A and B, respectively.
The interacting boson model: algebras, geometry, and symmetries
The interacting boson model (IBM) [21] describes low-lying quadrupole collective states in nuclei in terms of N interacting monopole (s) and quadrupole (d) bosons representing valence nucleon pairs. The bilinear combinations G_ij = b_i†b_j of these bosons span a U(6) algebra, which serves as the spectrum generating algebra. The IBM Hamiltonian is expanded in terms of these generators, Ĥ = Σ_ij ε_ij G_ij + Σ_ijkl u_ijkl G_ij G_kl + ..., and consists of Hermitian, rotational-invariant interactions which conserve the total number of s- and d-bosons, N̂ = n̂_s + n̂_d = s†s + Σ_m d_m†d_m. A dynamical symmetry (DS) occurs if the Hamiltonian can be written in terms of the Casimir operators of a chain of nested sub-algebras of U(6). The Hamiltonian is then completely solvable in the basis associated with each chain. The three dynamical symmetries of the IBM [56,57,58] are given in Eq. (1). The geometry of the model is encoded in the potential surface V(β, γ), Eq. (2), defined by the expectation value of the Hamiltonian in the intrinsic condensate state |β, γ; N⟩ of Eq. (3) [60,61]. Here (β, γ) are quadrupole shape parameters analogous to the variables of the collective model of nuclei [59]. Their values (β_eq, γ_eq) at the global minimum of V(β, γ) define the equilibrium shape for a given Hamiltonian. For one- and two-body interactions, the shape
can be spherical (β_eq = 0) or deformed (β_eq > 0) with γ_eq = 0 (prolate), γ_eq = π/3 (oblate) or γ-independent. The parameterization adopted in Eq. (3) is particularly suitable for a classical analysis of the model. An alternative parameterization for the shape parameters and further properties of the potential surface are discussed in Appendix A. Table 1: sub-algebras of U(6) appearing in the dynamical-symmetry chains of the IBM, with their generators and Casimir operators; for example, O(6) has generators U^(1), U^(3), Π^(2) and Casimir Ĉ_O(5) + Π^(2)·Π^(2) with eigenvalue σ(σ + 4), and SU(3) has generators U^(1), Q^(2) with a Casimir containing 2Q^(2)·Q^(2).
In discussing the dynamics of the IBM Hamiltonian, it is convenient to resolve it into intrinsic and collective parts [52,53], The intrinsic part (Ĥ int ) determines the potential surface V (β, γ), Eq. (2), and is defined to yield zero when acting on the equilibrium condensatê H int |β = β eq , γ = γ eq ; N = 0 .
For β eq = 0, the condensate is spherical, and consists of a single state with angular momentum L = 0 built from N s-bosons. For (β eq > 0, γ eq = 0) the condensate is deformed, and has angular projection K = 0 along the symmetry z-axis. States of good L projected from it span the K = 0 ground band, and other eigenstates ofĤ int are arranged in excited K-bands. The collective part (Ĥ col ) has a flat potential surface and involves collective rotations linked with the groups in the chain O(6) ⊃ O(5) ⊃ O(3). These orthogonal groups correspond to "generalized" rotations associated with the β-, γ-and Euler angles degrees of freedom, respectively. Apart from constant terms of no significance to the excitation spectrum, the collective Hamiltonian is composed of the two-body parts of the respective Casimir operatorŝ HereĈ G are defined in Table 1 and a per-boson scaling is invokedc i = c i /N (N − 1) to ensure that the bounds of the energy spectrum do not change for large N . In general, the intrinsic and collective Hamiltonians do not commute andĤ col splits and mixes the bands generated byĤ int .
First order quantum phase transitions in the IBM
The dynamical symmetries of the IBM, Eq. (1), correspond to phases of the system, and provide analytic benchmarks for the dynamics of stable nuclear shapes. Quantum phase transitions (QPTs) between different shapes are studied [61] by considering Hamiltonians Ĥ(λ) that mix interaction terms from different DS chains, e.g., Ĥ(λ) = λĤ_1 + (1 − λ)Ĥ_2. The coupling coefficient (λ) responsible for the mixing serves as the control parameter which, upon variation, induces qualitative changes in the properties of the system. The kind of QPT is dictated by the potential surface V(λ) ≡ V(λ; β, γ), Eq. (2), which serves as a mean-field Landau potential with the equilibrium deformations (β_eq, γ_eq) as order parameters. The order of the phase transition and the critical point, λ = λ_c, are determined by the order of the derivative with respect to λ of V(λ; β_eq, γ_eq) at which discontinuities first occur.
The IBM phase diagram [63] consists of spherical and deformed phases separated by a line of first-order transitions ending in a point of second-order transitions in-between the spherical [U(5)] and deformed γ-unstable [O(6)] phases. The spherical [U(5)] to axially-deformed [SU(3)] transition is of first order, and the O(6)-SU(3) transition exhibits a crossover. In what follows, we examine the nature of the classical and quantum dynamics across a generic first-order QPT, with a high barrier separating the two phases.
The intrinsic Hamiltonian in the spherical phase, Ĥ_1(ρ), describes the dynamics of a spherical shape and satisfies Eq. (5) with β_eq = 0. For large N, its normal modes involve five-dimensional quadrupole vibrations about the spherical global minimum of its potential surface, with frequency 2h_2 N β_0².
The intrinsic Hamiltonian in the deformed phase, Ĥ_2(ξ), describes the dynamics of an axially-deformed shape and satisfies Eq. (5) with (β_eq = √2 β_0 (1 + β_0²)^{-1/2}, γ_eq = 0). For large N, its normal modes involve a one-dimensional β vibration and two-dimensional γ vibrations about the prolate-deformed global minimum of its potential surface, with frequencies given in Eq. (10). The two intrinsic Hamiltonians coincide at the critical point, ρ_c = β_0^{-1} and ξ_c = 0, Ĥ_1(ρ_c) = Ĥ_2(ξ_c) ≡ Ĥ_int^cri, where Ĥ_int^cri is the critical-point intrinsic Hamiltonian considered in [37]. The collective Hamiltonian, Eq. (6), does not affect the shape of the potential surface but can contribute a shift to the normal-mode frequencies, Eqs. (9)-(10), by an amount determined by the collective couplings c̃_i. In general, given a Hamiltonian Ĥ, the intrinsic and collective parts, Eq. (4), are fixed by the condition of Eq. (5) and by requiring Ĥ_int and Ĥ to have the same shape for the potential surface. For example, a Hamiltonian [64] frequently used in the study of QPTs is that of Eq. (13), Ĥ(ε, κ, χ) = ε n̂_d − κ Q̂(χ)·Q̂(χ), where Q̂(χ) = d†s + s†d̃ + χ(d†d̃)^(2) is the quadrupole operator and ε ≥ 0, κ ≥ 0, −√7/2 ≤ χ < 0. The critical-point Hamiltonian, Eq. (14), is obtained for a specific relation among ε, κ and χ, and the parameters of the intrinsic and collective Hamiltonians are then readily determined. In the present study, we adopt a different strategy. We fix the value of the parameter β_0 in the intrinsic Hamiltonian, Eq. (7), so as to ensure a high barrier at the critical point. We then vary, independently, the control parameters (ρ, ξ) in the intrinsic Hamiltonian and the parameters c̃_3, c̃_5, c̃_6 of the collective Hamiltonian, Eq. (6). This allows us to examine, separately, the influence on the dynamics of the terms affecting the Landau potential and of individual rotational kinetic terms, in a generic (high-barrier) first-order QPT.
Symmetry properties and integrability
The symmetry properties of the intrinsic Hamiltonian (7) depend on the choice of control parameters (ρ, ξ) and of β 0 . In general, the dynamical symmetries are completely broken in the Hamiltonian and hence the underlying dynamics is non-integrable. However, for particular values of these parameters, exact dynamical symmetries (DS) or partial dynamical symmetries (PDS) can occur and their presence affects the integrability of the system.
The explicit breaking of O(5) symmetry leads to non-integrability and, as will be shown in subsequent discussions, is the main cause for the onset of chaos in the spherical region. Although Ĥ_1(ρ), Eq. (19), is not diagonal in the U(5) chain, it retains a subset of selected solvable U(5) basis states [62], while other eigenstates are mixed. As such, it exhibits U(5) partial dynamical symmetry [U(5)-PDS].
The collective Hamiltonian of Eq. (6) preserves the O(5) symmetry for any choice of couplings c̃_i, hence its dynamics is integrable with τ and L as good quantum numbers. The Ĉ_O(3) and Ĉ_O(5) terms lead to an L(L + 1) and τ(τ + 3) type of splitting. In general, integrability is lost when the collective Hamiltonian is added to the intrinsic Hamiltonian (7), since the latter breaks the O(5) symmetry, and only L remains a good quantum number for the full Hamiltonian. A notable exception is when ρ = 0, since then all terms in Ĥ_1(ρ = 0) + Ĥ_col respect the O(5) symmetry.
Classical limit
The classical limit of the IBM is obtained through the use of Glauber coherent states [67]. This amounts to replacing (s†, d†_µ) by six c-numbers (α*_s, α*_µ) rescaled by √N and taking N → ∞, with 1/N playing the role of ħ. Number conservation ensures that phase space is 10-dimensional and can be phrased in terms of two shape (deformation) variables, three orientation (Euler) angles and their conjugate momenta. The shape variables can be identified with the β, γ variables introduced through Eq. (3). Setting all momenta to zero yields the classical potential, which is identical to V(β, γ) of Eq. (2). In the classical analysis presented below we consider, for simplicity, the dynamics of L = 0 vibrations, for which only two degrees of freedom are active. The rotational dynamics with L > 0 is examined in the subsequent quantum analysis.
Classical limit of the QPT Hamiltonian
For the intrinsic Hamiltonian of Eq. (7), constrained to L = 0, the above procedure yields the classical Hamiltonian of Eq. (26). Here the coordinates β ∈ [0, √2], γ ∈ [0, 2π) and their canonically conjugate momenta p_β ∈ [0, √2] and p_γ ∈ [0, 1] span a compact classical phase space. The term corresponding to the classical limit of n̂_d (restricted to L = 0) forms an isotropic harmonic oscillator Hamiltonian in the β and γ variables. Notice that the classical Hamiltonian of Eq. (26) contains complicated momentum-dependent terms originating from the two-body interactions in the Hamiltonian (7), not just the usual quadratic kinetic energy T. Setting p_β = p_γ = 0 in Eq. (26) leads to the classical potential of Eq. (28). The same expressions can be obtained from Eq. (2) using the static intrinsic coherent state (3). Notice that the potential of Eq. (28) is independent of N due to the per-boson scaling used in Eq. (7). The variables β and γ can be interpreted as polar coordinates in an abstract plane parameterized by Cartesian coordinates (x, y). The transformation between these two sets of coordinates and conjugate momenta is the standard point transformation x = β cos γ, y = β sin γ, with p_x = p_β cos γ − (p_γ/β) sin γ and p_y = p_β sin γ + (p_γ/β) cos γ. Using these relations one can express the classical Hamiltonians of Eq. (26) in terms of (x, y, p_x, p_y). Setting p_x = p_y = 0 in the resulting expressions, we obtain the classical potential of Eq. (28) in Cartesian form, Eq. (31). Note that the potentials V(β, γ) = V(x, y) depend on the combinations β² = x² + y², β⁴ = (x² + y²)² and β³ cos 3γ = x³ − 3xy².
The classical limit of the collective Hamiltonian, Eq. (6), constrained to L = 0, is obtained in a similar manner and is given in Eq. (32), where T = p_β² + p_γ²/β² = p_x² + p_y². The O(3)-rotational c_3-term is absent from Eq. (32), since the classical Hamiltonian is constrained to angular momentum zero. The purely kinetic character of the collective terms is evident from the fact that H_col vanishes for p_β = p_γ = 0, thus not contributing to the potential V(β, γ).
Topology of the classical potentials
The values of the control parameters ρ and ξ determine the landscape and extremal points of the potentials V_1(ρ; β, γ) and V_2(ξ; β, γ), Eq. (28). Important values of these parameters, at which a pronounced change in structure is observed, are the spinodal point, where a second (deformed) minimum occurs, an anti-spinodal point, where the first (spherical) minimum disappears, and a critical point in-between, where the two minima are degenerate. For the potentials under discussion, the critical point, given by ρ_c = β_0^{-1} and ξ_c = 0, separates the spherical and deformed phases. The spinodal point (ρ*) and the anti-spinodal point (ξ**) embrace the critical point and mark the boundary of the phase-coexistence region. The derivation of these expressions is explained in Appendix A.
The spherical phase (0 ≤ ρ ≤ ρ_c = β_0^{-1}). The relevant potential in the spherical phase is V_1(ρ; β, γ), Eq. (28a), with 0 ≤ ρ ≤ ρ_c. In this case, β = 0 is a global minimum of the potential at an energy V_sph, representing the equilibrium spherical shape, and the limiting value of the potential at the domain boundary is denoted V_lim. For ρ = 0, the potential is independent of γ and has β_eq = 0 as a single minimum.
The deformed phase (ξ ≥ ξ_c). The relevant potential in the deformed phase is V_2(ξ; β, γ), Eq. (28b), with ξ ≥ ξ_c. In this case, (β_eq > 0, γ_eq = 0) is a global minimum of the potential at an energy V_def, representing the equilibrium deformed shape, and the limiting value of the potential at the domain boundary is again V_lim. The spherical configuration, β = 0, remains an extremal point and occurs at an energy V_sph: it is a global minimum for 0 ≤ ρ < ρ_c and a local minimum for ξ_c ≤ ξ < ξ**. For ρ > 0, a deformed maximum occurs at an energy V_max. Beyond the spinodal point, ρ > ρ*, a local deformed minimum at an energy V_def develops, along with a saddle point which creates a barrier of height V_bar separating the two minima. The latter cross and become degenerate (V_sph = V_def) at the critical point (ρ_c, ξ_c). At the anti-spinodal point, ξ**, the spherical configuration changes from a minimum to a maximum, the deformed configuration remains a single minimum (energy V_def), and the saddle point (energy V_sad) now separates pairs of equivalent deformed minima (see Fig. 2). Note that V_bar = V_def at ρ* and V_bar = V_sph at ξ**. The relations V_sph = V_max and V_sad = V_lim at ξ_SU(3) = 1 are a specific property of the SU(3) surface.
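The way these stationary points organize the phase diagram can also be illustrated numerically; the sketch below (not part of the original analysis) scans a control parameter of a generic quartic Landau-type potential, V(β) = aβ² − β³ + β⁴, and locates the spinodal, critical and anti-spinodal values. The actual IBM surfaces of Eq. (28) are more involved, and all numbers here are purely illustrative.

```python
# Illustrative sketch only: a generic Landau-type potential with the quadratic
# coefficient a as control parameter shows the same spinodal / critical /
# anti-spinodal structure discussed in the text (the true IBM surfaces of
# Eq. (28) are more involved).
import numpy as np

def V(beta, a):
    return a * beta**2 - beta**3 + beta**4

def deformed_minimum(a, grid=np.linspace(1e-3, 1.5, 4000)):
    """Return (beta, V) of the lowest local minimum at beta > 0, or None."""
    v = V(grid, a)
    interior = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
    if not interior.any():
        return None
    idx = np.where(interior)[0] + 1
    best = idx[np.argmin(v[idx])]
    return grid[best], v[best]

spinodal = critical = None
for a in np.linspace(0.40, 0.0, 2001):      # scan the control parameter downwards
    dm = deformed_minimum(a)
    if dm is not None and spinodal is None:
        spinodal = a                        # a second (deformed) minimum appears
    if dm is not None and critical is None and dm[1] <= V(0.0, a):
        critical = a                        # spherical and deformed minima degenerate
anti_spinodal = 0.0                         # beta = 0 stops being a minimum once a <= 0
print(f"spinodal ~ {spinodal:.3f}, critical ~ {critical:.3f}, "
      f"anti-spinodal = {anti_spinodal:.3f}")  # analytic values: 9/32, 1/4, 0
```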
The evolution of the various stationary and asymptotic values of the Landau potentials (V_sph, V_max, V_def, V_bar, V_sad, V_lim) as a function of the control parameters ρ and ξ is depicted in Fig. 3. Most of these quantities depend also on the parameter β_0 of the Hamiltonian (7). In particular, β_0 determines the equilibrium deformation in the deformed phase, β_eq > 0, Eq. (43), the height of the barrier at the critical point, V_bar, Eq. (52), and the width of the coexistence region through the values of the spinodal point ρ*, Eq. (34), and the anti-spinodal point ξ**, Eq. (35). In the present work, we choose β_0 = √2, for which the intrinsic Hamiltonian interpolates between the U(5) and SU(3) dynamical symmetries and various expressions simplify, since η = 0 in Eq. (40). For convenience, Table 2 lists the values of the relevant control and order parameters when β_0 = √2.
Figure 4: Behavior of the order parameter, β_eq, as a function of the control parameters (ρ, ξ) of the intrinsic Hamiltonian (7), with β_0 = √2. Here ρ*, (ρ_c, ξ_c), ξ** are the spinodal, critical and anti-spinodal points, respectively, with values given in Table 2. The deformation at the global (local) minimum of the Landau potential (28) is marked by solid (dashed) lines. β_eq = 0 (β_eq = 2/√3) on the spherical (deformed) side of the QPT. Region I (III) involves a single spherical (deformed) shape, while region II involves shape-coexistence.
Structural regions of the QPT and order parameters
The preceding classical analysis of the potential surfaces has identified three regions with distinct structure.
I. The region of a stable spherical phase, ρ ∈ [0, ρ*], where the potential has a single spherical minimum.
II. The region of phase coexistence, ρ ∈ (ρ*, ρ_c] and ξ ∈ [ξ_c, ξ**), where the potential has both spherical and deformed minima, which cross and become degenerate at the critical point.
III. The region of a stable deformed phase, ξ ≥ ξ**, where the potential has a single deformed minimum.
The potential surface in each region serves as the Landau potential of the QPT, with the equilibrium deformations as order parameters. The latter evolve as a function of the control parameters (ρ, ξ) and exhibit a discontinuity typical of a first-order transition. As depicted in Fig. 4, the order parameter β_eq is a double-valued function in the coexistence region (in-between ρ* and ξ**) and a step-function outside it. In what follows we examine the nature of the classical dynamics in each region.
Regularity and chaos: classical analysis
Hamiltonians with dynamical symmetry are always completely integrable [68]. The Casimir invariants of the algebras in the chain provide a complete set of constants of the motion in involution. The classical motion is purely regular. A dynamical symmetry-breaking is usually connected to non-integrability and may give rise to chaotic motion [68,69,70]. This is the situation encountered in a QPT, which occurs as a result of a competition between terms in the Hamiltonian with incompatible symmetries.
Regular and chaotic properties of the IBM have been studied extensively, employing various measures of classical and quantum chaos [44,45,46,47,48,49,50,51]. All such treatments involved the simplified Hamiltonian of Eq. (13), giving rise to an extremely low barrier and narrow coexistence region. For that reason, the majority of studies focused on the regions I and III of stable phases, while far less effort was devoted to the dynamics inside the region II of phase-coexistence. Considerable attention has been paid to integrable paths (the U(5)-O(6) transition for χ = 0 in Eq. (13) [49,50]) and to specific sets of parameters leading to an enhanced regularity ("arc of regularity" [46,51]) within these regions. Similar type of analysis was performed in the framework of the geometric collective model of nuclei [71,72,73,74,75].
In the present work, we consider the evolution of order and chaos across a generic first-order quantum phase transition, with particular emphasis on the role of a high barrier separating the two phases. For that purpose, we employ the intrinsic Hamiltonian of Eq. (7) with β_0 = √2. In this case, the height of the barrier at the critical point, Eq. (52), is V_bar/h_2 = 0.268, substantially higher than the barrier heights encountered in previous works. In comparison, for the Hamiltonian of Eq. (14) with χ = −√7/2, the corresponding quantities are β_0 = (1/2)√2 and V_bar/h_2 = 0.0018. A high barrier will allow us to uncover a rich pattern of regularity and chaos in region II of shape-coexistence.
The classical dynamics of L = 0 vibrations, associated with the Hamiltonian (7), can be depicted conveniently via Poincaré surfaces of section [9,10,11]. The latter are chosen in the plane y = 0, which passes through all the various types of stationary points (minimum, maximum, saddle) in the Landau potential (28). The values of x and p_x are plotted each time a trajectory intersects the plane. The method of Poincaré sections provides a snapshot of the dynamics at a given energy. Regular trajectories are bound to toroidal manifolds within the phase space and their intersections with the plane of section lie on one-dimensional curves (ovals). In contrast, chaotic trajectories diverge exponentially and randomly cover kinematically accessible areas of the section. Although restricted to L = 0, the method is particularly valuable to the present study, due to its ability to identify different forms of dynamics occurring at the same energy in separate regions of phase space. Standard global classical measures of chaos, such as the fraction of chaotic volume and the average largest Lyapunov exponent, are insensitive to such local variations. We first discuss distinctive features of the dynamics in each region and relate them to the morphology of the Landau potential. This will provide the necessary background for understanding the complete evolution of the dynamics across the QPT.
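To illustrate how such sections can be generated numerically, the sketch below integrates Hamilton's equations and records upward crossings of the y = 0 plane. The standard Hénon-Heiles Hamiltonian with a quadratic kinetic term is used as a stand-in, since the classical IBM Hamiltonian of Eq. (26) contains additional momentum-dependent terms; the energy, initial conditions and tolerances are illustrative.

```python
# Sketch of a Poincare surface of section at fixed energy. The standard
# Henon-Heiles Hamiltonian H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3
# is used as a stand-in for the IBM classical Hamiltonian of Eq. (26).
import numpy as np
from scipy.integrate import solve_ivp

def eom(t, z):                       # Hamilton's equations of motion
    x, y, px, py = z
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def y_crossing(t, z):                # section plane y = 0
    return z[1]
y_crossing.direction = 1             # record upward crossings only

def section_points(E, x0, px0, t_max=500.0):
    """Launch a trajectory on the y = 0 plane with energy E and return the
    (x, px) coordinates of its section crossings."""
    py2 = 2.0 * E - px0**2 - x0**2   # solve H = E for py, with y = 0
    if py2 < 0.0:
        return np.empty((0, 2))
    z0 = [x0, 0.0, px0, np.sqrt(py2)]
    sol = solve_ivp(eom, (0.0, t_max), z0, events=y_crossing,
                    rtol=1e-8, atol=1e-10)
    hits = sol.y_events[0]
    return hits[:, [0, 2]] if hits.size else np.empty((0, 2))

if __name__ == "__main__":
    E = 0.125                        # energy of the section
    pts = np.vstack([section_points(E, x0, 0.0)
                     for x0 in np.linspace(-0.3, 0.3, 7)])
    print(pts.shape)                 # several hundred (x, px) points
```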
Characteristic features of the dynamics in the vicinity of minima
Considerable insight into the nature of the classical dynamics at low energy can be gained by examining the topology of the Landau potential in the vicinity of its minima. A sample of representative Poincaré sections for the classical Hamiltonian constrained to L = 0, Eq. (26), is depicted in Figs. 5-6, along with selected trajectories.
The spherical configuration (β = 0) is a global minimum of the potential V_1(ρ; β, γ), Eq. (28a), on the spherical side of the QPT (0 ≤ ρ ≤ ρ_c). For ρ = 0, the system has U(5) DS and hence is integrable. The potential V_1(ρ = 0), Eq. (38), is γ-independent and exhibits β² and β⁴ dependence. As shown in Fig. 5(a), the sections for small ρ show the phase-space portrait typical of a weakly perturbed anharmonic (quartic) oscillator, with two major regular islands and quasi-periodic trajectories. The effect of increasing ρ on the dynamics in the vicinity of the spherical (s) minimum (β ≈ 0) can be inferred from a small-β expansion of the potential, Eq. (56). To order β³, V_{1,s}(ρ) has the same functional form as the well-known Hénon-Heiles (HH) potential [76], which in polar coordinates (r, φ) reads V_HH = (1/2) r² + α r³ cos 3φ, with α > 0 (Eq. (57)). The latter potential serves as a paradigm of a system that exhibits a transition from regular to chaotic dynamics as the energy increases [9,10,11]. As shown for ρ = 0.2 in Fig. 5, the motion near the spherical minimum displays this transition, being regular at low energy and increasingly chaotic at higher energies. This typical HH-type of behavior persists in the vicinity of the spherical minimum throughout the coexistence region, including the critical point (ρ_c, ξ_c) and beyond, where the spherical minimum is only local (0 ≤ ξ ≤ ξ**). This can be inferred from a similar small-β expansion of the relevant potential V_2(ξ; β, γ), Eq. (28b), given in Eq. (58). It should be noted that although the expansions in Eqs. (56) and (58) are similar in form to the Hénon-Heiles potential (57), the full potentials, Eq. (28), have a finite domain and include a β⁴ term, thus ensuring that the motion is bounded at all energies. The deformed configuration (β_eq > 0, γ_eq = 0), Eq. (43), is a global minimum of the potential V_2(ξ; β, γ), Eq. (28b), on the deformed side of the QPT (ξ ≥ ξ_c). The classical dynamics in its vicinity (x ≈ 1) has a very different character, being robustly regular. At low energy, the motion reflects the β and γ normal-mode oscillations about the deformed minimum. As shown in Fig. 6(a), the family of regular trajectories has a particularly simple structure. It forms a single set of concentric loops around a single stable (elliptic) fixed point. They portray γ-vibrations at the center of the surface (p_x ≈ 0) and β-vibrations at the perimeter (large |p_x|). This regular pattern of the dynamics is found for most values of ξ ≥ 0, both inside and outside the phase-coexistence region. The dynamics remains regular but its pattern changes in the presence of resonances. The latter appear when the ratio of normal-mode frequencies, Eq. (10), is a rational number, R (Eq. (59)). Fig. 6 shows examples of such a scenario for β_0 = √2 and ξ-values corresponding to R = 1/2, 2/3, 1. The corresponding surfaces exhibit four, three and two islands, respectively. The phase-space portrait for (ξ = 1, R = 1), shown in Fig. 6(d), corresponds to the integrable SU(3) DS limit. These additional chains of regular islands will be considered in more detail in Section 5.3.
Similar trends are observed in the region (ρ* < ρ ≤ ρ_c), where the deformed minimum is only local. Regular dynamics is thus an inherent feature of a deformed minimum and, at low energy, reflects the behavior of the Landau potential in its vicinity. The structure of the latter is revealed by an expansion of the potential in local coordinates. Consider a deformed minimum (global or local) of the Landau potential characterized by the deformation (β* > 0, γ* = 0). The local coordinates (δ, θ) about it, shown in Fig. 7, are defined by the relations β cos γ = β* + δ cos θ (60a) and β sin γ = δ sin θ (60b). A small-δ expansion of the potential about this minimum (to order δ³) is given in Eq. (61). Here V(β, γ) stands for V_1(ρ; β, γ) in the spherical phase and V_2(ξ; β, γ) in the deformed phase. In general, the coefficients K_i depend on β* and the control parameters, e.g., K_1 = V(β*, γ = 0). In the deformed phase, where the deformed minimum is global, β* = β_eq = √2 β_0 (1 + β_0²)^{-1/2}, Eq. (43), and the K_i coefficients are given accordingly; for β_0 = √2, these expressions simplify, Eq. (63). The expansions in Eqs. (61) and (63) contain terms with cos θ, cos 2θ and cos 3θ dependence. The presence of lower harmonics destroys, locally, the three-fold symmetry encountered near the spherical minimum, Eqs. (56)-(58), due to the cos 3γ term. This asymmetry is clearly seen in the contour plots of Figs. 1-2. Both spherical and deformed minima of the Landau potentials V_1(ρ; β, γ) and V_2(ξ; β, γ) are present in the coexistence region, ρ* < ρ ≤ ρ_c and ξ_c ≤ ξ < ξ**. In this case, each minimum preserves its own characteristic dynamics, resulting in a marked separation between a Hénon-Heiles type of chaotic motion in the vicinity of the spherical minimum and a regular motion in the vicinity of the deformed minimum. Such a mixed form of dynamics, occurring at the same energy in different regions of phase space, is demonstrated in Fig. 8. The latter depicts the potential landscape at the critical point (ρ_c, ξ_c), along with the Poincaré section and selected trajectories at the barrier energy. In this case, the spherical (s) and deformed (d) minima are degenerate, and for β_0 = √2, the expansions of the corresponding Landau potential in their vicinity exhibit a different morphology. The critical-point potential near the spherical minimum (V_{s,cri}) has a 3-fold symmetry and its contours are either concave or convex towards the origin (see Fig. 1). The former contours lead to divergence of trajectories, a characteristic property of chaotic motion. In contrast, the critical-point potential near the deformed minimum (V_{d,cri}) has an egg-shape, without a local 3-fold symmetry. The potential contours are convex and tend to focus the trajectories towards the minimum, resulting in a confined regular motion.
Evolution of the classical dynamics across the QPT
We turn now to a comprehensive analysis of the classical dynamics, constrained to L = 0, evolving across the first-order QPT. The evolution is accompanied by an intricate interplay of order and chaos, reflecting the change in structure. The shape-phase transition is induced by the intrinsic Hamiltonian of Eq. (7) with β_0 = √2. The Poincaré surfaces of section are shown in Figs. 9, 10 and 11 for representative energies below the domain boundary (V_lim) and for control parameters (ρ, ξ) in regions I, II and III, respectively. The surfaces record a total of 40,000 passages through the y = 0 plane by 120 trajectories with randomly generated initial conditions, in order to scan the whole accessible phase space at a given energy. The bottom row in each figure displays the corresponding classical potential V(x, y = 0), Eq. (31).
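As a generic illustration of how such surfaces of section are accumulated (a sketch only, using a Hénon-Heiles-like stand-in potential rather than the IBM potential V(x, y) of Eq. (31)), one integrates Hamilton's equations for initial conditions chosen on the energy shell and records the (x, p_x) values at each upward crossing of the y = 0 plane:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a Poincare-section scan. The potential below is a generic
# Henon-Heiles-like stand-in, NOT the IBM potential V(x, y) of Eq. (31).
def grad_V(x, y):
    # V = (x^2 + y^2)/2 + x^2*y - y^3/3
    return np.array([x + 2 * x * y, y + x**2 - y**2])

def eom(t, z):
    x, y, px, py = z
    gx, gy = grad_V(x, y)
    return [px, py, -gx, -gy]

def crossing(t, z):            # event: y = 0, triggered with p_y > 0
    return z[1]
crossing.direction = 1

def section(z0, t_max=2000.0):
    """Return (x, p_x) at each y = 0 crossing of the trajectory started at z0
    (z0 = [x, y, px, py] must be chosen on the desired energy shell)."""
    sol = solve_ivp(eom, (0.0, t_max), z0, events=crossing, rtol=1e-9, atol=1e-9)
    pts = sol.y_events[0]
    return pts[:, 0], pts[:, 2]
```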
As seen in Fig. 9, the dynamics in the stable spherical phase (region I) is similar to that of the Hénon-Heiles system (HH) [76], with regularity at low energy and a marked onset of chaos at higher energies. The chaotic component of the dynamics increases with ρ and is maximal at the spinodal point ρ* = 0.5. The chaotic orbits densely fill two-dimensional regions of the surface of section.
The dynamics changes profoundly in the coexistence region (region II). Here the relevant classical Hamiltonians are H_1(ρ), Eq. (26a), with ρ* < ρ ≤ ρ_c, and H_2(ξ), Eq. (26b), with ξ_c ≤ ξ < ξ**. The corresponding potentials V_1(ρ), Eq. (31a), and V_2(ξ), Eq. (31b), have both spherical and deformed minima, which become degenerate and cross at the critical point (ρ_c = 1/√2, ξ_c = 0). The Poincaré sections before, at and after the critical point (ρ = 0.6, ξ_c = 0, ξ = 0.1) are shown in Fig. 10. In general, the motion is predominantly regular at low energies and gradually turns chaotic as the energy increases. However, the classical dynamics evolves differently in the vicinity of the two wells. As the local deformed minimum develops, robustly regular dynamics attached to it appears. The trajectories form a single island and remain regular even at energies far exceeding the barrier height V_bar. This behavior is in marked contrast to the HH-type of dynamics in the vicinity of the spherical minimum, where a change with energy from regularity to chaos is observed, until complete chaoticity is reached near the barrier top. The clear separation between the regular and chaotic dynamics associated with the two minima persists all the way up to the barrier energy, E = V_bar, where the two regions just touch. At E > V_bar, the chaotic trajectories from the spherical region can penetrate into the deformed region; a layer of chaos develops and gradually dominates the surviving regular island at energies well above V_bar. As ξ increases, the spherical minimum becomes shallower and the HH-like dynamics diminishes.
As seen in Fig. 11, the dynamics is robustly regular in the stable deformed phase (region III), where the relevant classical Hamiltonian is H 2 (ξ), Eq. (26b), with ξ ≥ ξ * * . The spherical minimum disappears at the anti-spinodal point ξ * * = 1/3 and the relevant potential V 2 (ξ), Eq. (31b), remains with a single deformed minimum. Regular motion prevails for ξ ≥ ξ * * where a single stable fixed point, surrounded by a family of elliptic orbits, continues to dominate the Poincaré section. In certain regions of the control parameter ξ and energy, the section landscape changes from a single to several regular islands, reflecting the sensitivity of the dynamics to local degeneracies of normal modes. Such resonance effects will be elaborated in more detail in Section 5.3. A notable exception to such variation is the SU(3) DS limit (ξ = 1), for which the system is integrable and the phase space portrait is the same for any energy.
Resonance effects
The preceding discussion has shown that even away from the integrable SU(3) limit, the classical intrinsic dynamics associated with the deformed well, remains robustly regular. In most segments of regions II and III, the Poincaré sections exhibit a single island, originating from simple β (x) and γ (y) orbits, imprinting the small amplitude vibrations of normal modes about the deformed minimum. As noted, occasionally, resonances in these oscillations give rise to additional chains of regular islands. In the present section we examine in more detail this sensitivity of the classical motion and attempt to demarcate the ranges of energy and control parameters where these resonance effects occur.
The bottom row of Fig. 12 illustrates the Poincaré-Birkhoff scenario for the breakdown of resonant tori: sequences of alternating stable (dots) and unstable (crosses) fixed points appear in the Poincaré sections, arising from the stable and unstable orbits of each particular resonance.
The dynamical consequences of perturbing a classical integrable system, are governed by the celebrated Kolmogorov-Arnold-Moser (KAM) and Poincaré-Birkhoff (PB) theorems [9,10,11]. According to the KAM theorem, most tori of the integrable system which are sufficiently irrational, get slightly deformed in the perturbed system but are not destroyed. On the other hand, the resonant tori (the tori characterized by a rational ratio of winding frequencies) of the integrable system, disintegrate when the system gets perturbed and consequently, according to the PB theorem, a chain of islands is formed on the surface of section. The resonant tori decay into sets of stable and unstable orbits, giving rise to sequences of alternating elliptic and hyperbolic fixed points. The elliptic points lead to the emergence of regular islands, inside which the trajectories are phase-locked and the ratio of the corresponding frequencies remains equal to the rational number of the corresponding initial resonant torus. The hyperbolic points lie on separatrix intersections between the islands, about which chaotic layers can develop.
For the considered classical intrinsic Hamiltonian H_2(ξ), Eq. (26b), in the deformed region (ξ ≥ ξ_c = 0), the resonances are reached when the ratio R = ε_β/ε_γ = m/n of Eq. (59) is a rational number. The shape of the resonant orbits resembles Lissajous figures with the same ratio of frequencies. For more details on the topology of such orbits, the reader is referred to [77]. The most pronounced resonances (thicker PB islands) correspond to small co-prime (m, n) integers, and the number of islands in a given chain is 2/R. These features were observed in Fig. 6 and are shown schematically in Fig. 12 for R = 1, 2/3, 1/2, 2/5, 1/3, corresponding to 2, 3, 4, 5, 6 islands, respectively. At low energy (E → 0), where the harmonic approximation is valid, one expects the resonances to occur at discrete values of the control parameter, ξ ≈ ξ_R, in a narrow interval around ξ_R, where the latter is obtained by inverting Eq. (59). At finite energy (E > 0), anharmonic effects in H_2(ξ) come into play and, consequently, a PB chain of islands associated with a given rational ratio R can occur in a wider range of ξ values. The sensitivity of the classical dynamics to resonance effects is demonstrated in Fig. 13 near R = 2/3, where the PB chain consists of three regular islands. The different columns show the Poincaré sections for ξ = 0.475, 0.5, 0.55, at energies E_1 < E_2 < E_3. At the resonance point, ξ_R = 0.5 (middle column), one observes at all chosen energies the expected three regular islands near the perimeter of the Poincaré sections, indicating an instability with respect to the β-motion. Their size relative to the total area of the section increases with energy. In contrast, the PB islands are not seen at low E, neither at ξ = 0.475 (see panels for E = E_1, E_2) nor at ξ = 0.55 (panel for E = E_1), where the Poincaré sections display the usual pattern of a single island. These islands, however, do appear at higher energies, E = E_3 for ξ = 0.475 and E = E_2 for ξ = 0.55. In the latter case, the PB island chain occurs near the center of the Poincaré section, signaling an instability with respect to the γ-motion.
Fig. 14 presents a detailed map of the color-coded regions in the (ξ, E) plane, on the deformed side of the QPT (ξ ≥ 0), where PB chains with 2, 3, 4, 5, 6 islands occur. The latter are associated with the most pronounced resonances, having normal-mode frequency ratios R = 1, 2/3, 1/2, 2/5, 1/3, respectively. White areas denote (ξ, E) domains with a single island. In Fig. 14, black bullets (red stars) mark the values corresponding to individual panels in Figs. 10-11 (Fig. 13), stationary and boundary values of the potential surface V_2(ξ), Eq. (28b), are marked by thin black lines (compare with Fig. 3), and the gray area at high energies is inaccessible due to numerical instability. For E → 0, all the resonance regions end in a sharp tip at ξ_R = 0, 0.1, 0.25, 0.5, 1, in agreement with Eq. (65). As the energy E > 0 increases, the resonance regions are either tilted away from ξ_R (as for R = 1/3, 2/5, 1/2) or fan out and embrace ξ_R (as for R = 2/3, 1).
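These tips follow from the harmonic normal-mode energies quoted later in the level-evolution analysis (ε_β = 4h_2N(2ξ + 1) and ε_γ = 12h_2N for β_0 = √2), which give R(ξ) = (2ξ + 1)/3; a small sketch under that assumption reproduces the quoted ξ_R values and island counts.

```python
from fractions import Fraction

# Sketch of the harmonic resonance condition for beta_0 = sqrt(2): assuming
# eps_beta ~ (2*xi + 1) and eps_gamma ~ 3, the ratio is R(xi) = (2*xi + 1)/3,
# so the resonant control-parameter value is xi_R = (3*R - 1)/2.
def xi_R(R):
    return (3 * R - 1) / 2

for R in [Fraction(1, 3), Fraction(2, 5), Fraction(1, 2), Fraction(2, 3), Fraction(1, 1)]:
    print(f"R = {R}:  xi_R = {float(xi_R(R)):.2f},  islands = {int(2 / R)}")
# -> xi_R = 0.00, 0.10, 0.25, 0.50, 1.00 with 6, 5, 4, 3, 2 islands, as in Fig. 14
```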
At higher energies, pairs of regions [(R = 1/3, 2/5), (R = 1/2, 2/3), (R = 2/3, 1)], can overlap, indicating that for a given Hamiltonian H 2 (ξ), Eq. (26b), two distinct PB island chains can occur simultaneously in the Poincaré surface. The white areas outside the color-coded resonance regions, identify the (ξ, E) domains where the Poincaré surfaces exhibit a single island, without additional PB island chains. The dominance of these areas for E ≤ 1 explains why this simple pattern prevails in most Poincaré sections at low energies. Fig. 14 is very instrumental for understanding the rich regular structure arising from the classical intrinsic dynamics in regions II and III of the QPT, for ξ ≥ 0. For orientation, a few black bullets are marked in some of the color-coded resonance regions, corresponding to particular Poincaré sections in Figs. 10-11. For the critical point, the line ξ c = 0 is completely inside a white area in Fig. 14 and no resonance regions are seen along it, consistent with the single island observed in the panels of the ξ c = 0 column in Fig. 10. At the anti-spinodal point (ξ * * = 1/3), the lowest two bullets marked in Fig. 14, are located in white areas and the remaining bullets at higher energies reside inside the R = 1/2, 2/5, 1/3 resonance regions, consecutively. This is consistent with the observed surfaces of the ξ * * = 1/3 column in Fig. 11, where the lowest two panels display a single island and the remaining panels in consecutive order, show PB chains with 4, 5, 6 islands. The Poincaré sections of the ξ = 2/3 column in Fig. 11, show a single island (lowest three panels) and a PB chain of three islands in the remaining panels at higher energies. This is again in line with the location of the bullets for ξ = 2/3 in Fig. 14. For the SU(3)-DS limit (ξ = 1), the Poincaré sections in Fig. 11 display two islands at all energies, consistent with the sole R = 1 resonance region embracing the ξ = 1 line in Fig. 14.
Near the boundaries of each resonance region, the PB islands are tiny in size. Upon varying ξ and/or E towards the center of a given region, the islands migrate to the interior of the main regular island in the respective Poincaré sections and grow in relative size. Such a scenario is seen clearly in the panels of Fig. 13. The latter correspond to the red starred points near/inside the R = 2/3 resonance region in Fig. 14. The dashed lines marking the high-E boundaries of the R = 2/3 and R = 1 resonance regions indicate the locations where the respective PB chains disappear into the surrounding chaotic sea. Thus, for ξ = 0.5 in Fig. 14, the fourth black bullet lies on the dashed line marking the boundary of the R = 2/3 resonance region, where the three islands of the PB chain just disappear into an emerging chaotic layer. Notice that the same black bullet lies simultaneously inside the R = 1/2 resonance region and, indeed, we observe four additional pronounced islands in the fourth panel from the bottom of the ξ = 0.5 column in Fig. 11. In contrast, the fifth black bullet at higher energy for ξ = 0.5 lies inside a white area in Fig. 14, and in the corresponding fifth panel in Fig. 11 we observe just a single regular island, without any PB island chains, embedded in a significant chaotic environment.
Quantum analysis
The analysis of the classical dynamics, constrained to L = 0, has revealed a rich inhomogeneous phase space structure with a pattern of mixed regular and chaotic dynamics, reflecting the changing topology of the Landau potential across the QPT. It is clearly of interest to examine the implications of this behavior for the quantum treatment of the system. In what follows, we consider the evolution of levels in the corresponding quantum spectrum and examine the regular and irregular features of these quantum states. Fig. 15 shows the correlation diagrams for energies of (N = 80, L = 0) eigenstates of the intrinsic Hamiltonian, Eq. (7), with β_0 = √2, as a function of the control parameters, 0 ≤ ρ ≤ ρ_c (upper portion) and ξ_c ≤ ξ ≤ 1 (lower portion). The positions of the spinodal point (ρ* = 1/2) and the anti-spinodal point (ξ** = 1/3) are indicated by vertical lines. In between these points, inside the coexistence region, solid lines mark the energies of the barrier (V_bar) at the saddle point and of the local minima (V_def for ρ* < ρ ≤ ρ_c and V_sph for ξ_c ≤ ξ < ξ**) in the relevant Landau potential (compare with Fig. 3).
Level evolution
On the spherical side of the QPT, outside of the coexistence region (0 ≤ ρ ≤ ρ*), the low-energy spectrum of Ĥ_1(ρ), Eq. (7a), resembles the normal-mode expression of Eq. (9), E = ε n_d (n_d = 0, 2, 3, . . .), with ε = 4h_2 N independent of ρ (the missing n_d = 1 state has L = 2). As seen in the upper portion of Fig. 15, this low-energy behavior is observed also inside the coexistence region (ρ* < ρ ≤ ρ_c) at energies E < V_def, below the local deformed minimum. Anharmonicities are suppressed by 1/N, as can be verified by comparing the spectrum at ρ = 0 with the U(5)-DS expression, Eq. (17). At higher energies and ρ > 0, noticeable level repulsion and (avoided) level crossings occur in the classical chaotic regime. These effects become more pronounced as ρ increases and approaches the spinodal point ρ*, and are due to the U(5)-breaking ρ-term in Eq. (19).
On the deformed side of the QPT, outside of the coexistence region (ξ** ≤ ξ ≤ 1), the levels with L = 0 serve as bandheads of rotational K = 0 bands, associated with the ground band g(K = 0) and multiple β^n γ^{2k} (K = 0) excitations of the prolate-deformed shape. The low-energy spectrum of Ĥ_2(ξ), Eq. (7b), resembles the normal-mode expression of Eq. (10), E = ε_β n_β + ε_γ n_γ (n_β = 0, 1, 2, . . . and n_γ = 0, 2, 4, 6, . . .), with ε_β = 4h_2 N(2ξ + 1) and ε_γ = 12h_2 N (only bands with n_γ even support L = 0 states). In particular, bandhead energies involving pure γ excitations are independent of ξ, while bandhead energies involving β excitations are linear in ξ, a trend seen in the lower portion of Fig. 15. Local degeneracies of normal modes lead to bunching of energy levels and noticeable voids in the level density, in the same regions of (ξ, E) shown in the classical resonance map of Fig. 14. For ξ = 1, one has ε_β = ε_γ and the spectrum follows the SU(3)-DS expression, Eq. (22), with anharmonicities of order 1/N. This ordered pattern of levels is observed also inside the coexistence region (ξ_c ≤ ξ < ξ**) at energies E < V_sph, below the local spherical minimum. Dramatic structural changes in the level dynamics take place in the coexistence region (ρ* < ρ ≤ ρ_c and ξ_c ≤ ξ < ξ**). As shown in Fig. 15, at energies above the respective local minima (E > V_sph or E > V_def), the spherical-type and deformed-type levels approach each other and their encounter results in marked modifications of the local level density. In particular, there is an accumulation of levels near the top of the barrier (V_bar). Such singularities in the evolution of the spectrum, referred to as excited-state quantum phase transitions [42], have been encountered in integrable models involving QPTs [78]. In what follows, we examine the regular and irregular features of these quantum states and explore how their properties echo the mixed regular and chaotic dynamics observed in the classical analysis of the first-order QPT.
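As a small worked illustration of these normal-mode expressions (a sketch using only the quoted ε_β and ε_γ values), the bandhead energies and the SU(3)-limit degeneracy can be evaluated directly:

```python
# Sketch of the harmonic bandhead energies quoted above (beta_0 = sqrt(2)):
# eps_beta = 4*h2*N*(2*xi + 1), eps_gamma = 12*h2*N, E = eps_beta*n_beta + eps_gamma*n_gamma.
def normal_mode_energies(xi, h2=1.0, N=1.0):
    return 4 * h2 * N * (2 * xi + 1), 12 * h2 * N

def bandhead_energy(n_beta, n_gamma, xi, h2=1.0, N=1.0):
    eps_b, eps_g = normal_mode_energies(xi, h2, N)
    return eps_b * n_beta + eps_g * n_gamma   # only even n_gamma supports L = 0 states

# At the SU(3) limit (xi = 1) the beta and gamma modes become degenerate:
print(normal_mode_energies(1.0))              # (12.0, 12.0)
print(bandhead_energy(2, 0, 0.25))            # a beta^2 bandhead, linear in xi
```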
Peres lattices
Quantum manifestations of classical chaos are often detected by statistical analyses of energy spectra [9,10,11]. In a quantum system with mixed regular and irregular states, the statistical properties of the spectrum are usually intermediate between the Poisson and the Gaussian orthogonal ensemble (GOE) statistics. Such global measures of quantum chaos are, however, insufficient to reflect the rich dynamics of an inhomogeneous phase space structure, as encountered in Figs. 9-11, with mixed but well-separated regular and chaotic regions. To do so, one needs to distinguish between regular and irregular subsets of eigenstates in the same energy intervals. For that purpose, we employ the spectral lattice method of Peres [79], which provides additional properties of individual energy eigenstates. The Peres lattices are constructed by plotting the expectation values O_i = ⟨i|Ô|i⟩ of an arbitrary operator Ô (not necessarily commuting with Ĥ) versus the energies E_i = ⟨i|Ĥ|i⟩ of the Hamiltonian eigenstates |i⟩. The lattices {O_i, E_i} corresponding to regular dynamics can be shown to display a regular pattern, while chaotic dynamics leads to disordered meshes of points. The method has recently been applied to the collective model of nuclei [74,75] and to the IBM [80,81]. The ability of the method to distinguish between regular and irregular states does not rely on the particular Peres operators Ô used, and their choice can be made on physical grounds.
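A generic sketch of this construction, using an arbitrary stand-in Hamiltonian matrix and operator (not the IBM matrices employed here), is:

```python
import numpy as np

# Sketch of a Peres lattice: for eigenstates |i> of H, plot {<i|O|i>, E_i}.
# Ordered point patterns signal regular states; disordered meshes signal chaotic ones.
def peres_lattice(H, O):
    E, V = np.linalg.eigh(H)                                 # eigenpairs of H
    expval = np.einsum('ki,kl,li->i', V.conj(), O, V).real   # <i|O|i> for every i
    return E, expval

# Toy example with a random symmetric H and a diagonal "counting" operator
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
O = np.diag(np.arange(200, dtype=float))
E, x = peres_lattice(H, O)      # a scatter plot of (x, E) is the Peres lattice
```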
In the present analysis, in order to highlight the classical-quantum correspondence, we choose Ô = n̂_d and define the Peres lattices as the set of points {x_i, E_i}, with x_i obtained from the expectation value of n̂_d in the eigenstate |i⟩ of the IBM Hamiltonian, Eq. (66). The expectation value of n̂_d in the condensate |β; N⟩ ≡ |β, γ = 0; N⟩ of Eq. (3) is related to the deformation β (whose equilibrium value is the order parameter of the QPT) and to the coordinate x in the classical potential, V(x, y = 0) = V(β, γ = 0), Eqs. (28) and (31). The spherical ground state is the s-boson condensate, which has n_d = x_i = 0. Excited spherical states are obtained, to a good approximation, by replacing s-bosons in |β = 0; N⟩ with d-bosons, hence x_i ∼ n_d/N is small for n_d/N ≪ 1. Rotational members of the deformed ground band are obtained by L-projection from |β; N⟩ and have x_i ≈ β to leading order in N. This relation is still valid, to a good approximation, for states in excited deformed bands, whose intrinsic states are obtained by replacing condensate bosons in |β; N⟩ with the orthogonal bosons representing β and γ excitations [53,82]. These attributes have the virtue that the chosen lattices {x_i, E_i} of Eq. (66) can identify the regular/irregular quantum states and associate them with a given region in the classical phase space. At low energy, the ordered lattice points follow the classical potential curve V_1(ρ), Eq. (38), a trend seen for ρ = 0 (full regularity) and ρ = 0.03 (almost full regularity) in the top row of Fig. 16. For ρ = 0.2, at low energy a few lattice points still follow the potential curve V_1(ρ), but at higher energies one observes sizeable deviations and disordered meshes of lattice points, in accord with the onset of chaos in the classical Hénon-Heiles system considered in Fig. 9. The disorder in the Peres lattice is enhanced at the spinodal point ρ* = 0.5, where the chaotic component of the classical dynamics is maximal.
The center row of Fig. 16 displays the evolution of the quantum Peres lattices in the region of phase coexistence (region II), for ρ ∈ (ρ*, ρ_c] in Ĥ_1(ρ), Eq. (7a), and ξ ∈ [ξ_c, ξ**) in Ĥ_2(ξ), Eq. (7b). The calculations shown are for the same values of the control parameters used in the classical analysis in Fig. 10. The occurrence of a second, deformed minimum in the potential is signaled by the appearance of regular sequences of states localized within and above the deformed well. They form several chains of lattice points close in energy, with the lowest chain originating at the deformed ground state. A close inspection reveals that the x_i-values of these regular states lie in the intervals of x-values occupied by the regular tori in the Poincaré sections of Fig. 10. Similarly to the classical tori, these regular sequences persist to energies well above the barrier V_bar. The lowest sequence consists of L = 0 bandhead states of the ground g(K = 0) and β^n(K = 0) bands. Regular sequences at higher energy correspond to multi-phonon β^n γ^{2m}(K = 0) bands. In contrast, the remaining states, including those residing in the spherical minimum, do not show any obvious patterns and lead to disordered (chaotic) meshes of points at high energy, E > V_bar.
The bottom row of Fig. 16 displays the Peres lattices in the stable deformed phase (region III) for ξ ∈ [ξ c , 1], and is the quantum counterpart of Fig. 11. No lattice points are seen at small values of x < 0.5, beyond the anti-spinodal point ξ * * , where the spherical minimum disappears. On the other hand, more and longer regular sequences of K = 0 bandhead states are observed in the vicinity of the single deformed minimum (x ≈ 1) as its depth increases. These sequences tend to be more aligned above the center of the potential well, as ξ progresses from ξ * * towards the SU(3) limit (ξ = 1). A close inspection reveals slight dislocations in the ordered pattern of lattice points for those values of (ξ, E), mentioned in Section 5, corresponding to a resonance.
Unlike the Poincaré sections of the classical analysis, the Peres spectral method can also be used to visualize the dynamics of quantum states with non-zero angular momenta. Examples of such Peres lattices of states with L = 0, 2, 3, 4 are shown in Fig. 17 for representative values of the control parameters in region I (ρ = 0.2), region II (ξ_c = 0) and region III (ξ = 0.8). The right column in the figure combines the separate-L lattices and overlays them on the relevant classical potential. For ρ = 0.2, at low energies typical of the regular Hénon-Heiles (HH) regime, one can identify multiplets of states with L = 0, L = 2, L = 0, 2, 4, similar to the lowest U(5) multiplets of Eq. (18). As will be discussed in Section 7, their wave functions show the dominance of a single n_d component (n_d = 0, 1, 2, respectively), characteristic of a spherical vibrator. No such multiplet structure can be detected at higher energy in the chaotic HH regime. Interestingly, a small number of low-energy U(5)-like multiplets persists in the coexistence region, to the left of the barrier towards the spherical minimum, as seen in the Peres lattice for the critical point, ξ_c = 0, in Fig. 17. The combined lattices in Fig. 17 also demonstrate the occurrence of regular K = 0, 2, 4 rotational bands inside the coexistence region (region II), alongside other irregular states represented by the disordered meshes of points in the Peres lattice. The panels for ξ = 0.8 in Fig. 17 indicate that in region III, as the single deformed minimum becomes deeper, the regular K-bands exhaust larger portions of the Peres lattice. Generally, the states in each regular band share a common intrinsic structure, as indicated by their nearly equal values of n_d and a similar coherent decomposition of their wave functions in the SU(3) basis, to be discussed in Section 7. The regular bands extend to high angular momenta, as demonstrated for the critical point in Fig. 18.
While it is natural to find regular rotational bands in a region with a single well-developed deformed minimum, their occurrence in the coexistence region, including the critical point, is somewhat unexpected, in view of the strong mixing and abrupt structural changes taking place. Their persistence in the spectrum to energies well above the barrier and to high values of angular momenta, amidst a complicated environment, validates the relevance of an adiabatic separation of intrinsic and collective modes [83], for a subset of states.
To conclude, the classical and quantum analyses presented so far, indicate that the variation of the control parameters (ρ, ξ) in the intrinsic Hamiltonian, induces a change in the topology of the Landau potential across the QPT which, in turn, is correlated with an intricate interplay of order and chaos. For the considered Hamiltonian, whenever a spherical minimum occurs in the potential, the system exhibits an anharmonic oscillator (AO) type of dynamics for small ρ, and a Hénon Heiles (HH) type of dynamics at larger values of ρ. While the AO dynamics is regular, the HH dynamics shows a variation with energy from regular to chaotic character, which is reflected in the Peres lattices by a change from ordered to disordered patterns. Whenever a deformed minimum occurs in the potential, the Peres lattices display regular rotational bands localized in the region of the deformed well and corresponding to the regular islands in the classical Poincaré sections. In the coexistence region, these regular bands persist to energies well above the barrier and are well separated from the remaining states, which form disordered meshes of lattice points in the classical chaotic regime. The system in the domain of phase coexistence, thus provides a clear cut demonstration of the classical-quantum correspondence of regular and chaotic behavior, illustrating Percival's conjecture concerning the distinct properties of regular and irregular quantum spectra [84].
Symmetry aspects
The intrinsic Hamiltonian, Eq. (7), with β 0 = √ 2, interpolates between the U(5)-DS limit (ρ = 0) and the SU(3)-DS limit (ξ = 1). Away from these limits, (ρ > 0 and ξ < 1), both dynamical symmetries are broken and the competition between terms in the Hamiltonian with different symmetry character, drives the system through a first-order QPT. It is of great interest to study the symmetry properties of the Hamiltonian eigenstates and explore how they echo the observed interplay of order and chaos accompanying the QPT.
The preceding quantum analysis has revealed regular SU(3)-like sequences of states which persist in the deformed region and, possibly, U(5)-like multiplets which persist at low energy in the spherical region. It is natural to seek a symmetry-based explanation for the survival of such regular subsets of states in the presence of more complicated types of states. In what follows, we show that partial dynamical symmetry (PDS) and quasi-dynamical symmetry (QDS) can play a clarifying role. They reflect, respectively, the enhanced purity and coherence observed in the wave functions of these selected states.
A number of works [85,86] have shown that PDSs can cause suppression of chaos even when the fraction of states which has the symmetry vanishes in the classical limit. SU(3) QDS has been proposed [87] to underly the "arc of regularity" [46], a narrow zone of enhanced regularity in the parameter-space of the IBM Hamiltonian, Eq. (13). In conjunction with first-order QPTs, both U(5) and SU(3) PDSs were shown to occur at the critical point [33]. The QDS notion was originally applied to properties of selected low-lying states outside the coexistence region [32]. Later works [80,81] have demonstrated the relevance of SU(3) QDS not only to the ground band, but also to high-lying bands in the stable deformed phase, with a single deformed minimum. In what follows, we show that the PDS and QDS notions can be used also inside the coexistence region of the QPT and serve as fingerprints for structural changes throughout this region. Their measures can uncover the survival of order in the face of a chaotic environment.
Decomposition of wave functions in the dynamical symmetry bases
Consider an eigenfunction of the IBM Hamiltonian, |L_i⟩, with angular momentum L and ordinal number i (enumerating the occurrences of states with the same L, with increasing energy). Its expansion in the U(5) and SU(3) DS bases is given in Eq. (68), where, for simplicity, the dependence of |L_i⟩ and of the expansion coefficients on N is suppressed. The U(5) (n_d) probability distribution, P^{(L_i)}_{n_d} of Eq. (69a), provides considerable insight into the nature of the states. This follows from the observation that "spherical" types of states show a narrow distribution, with a characteristic dominance of single n_d components, as one would expect for a spherical vibrator. In contrast, "deformed" types of states show a broad n_d-distribution, typical of a deformed rotor structure. This ability to distinguish different types of states is illustrated for eigenstates of the critical-point Hamiltonian in Fig. 19. The states shown in the left column of Fig. 19 were selected on the basis of having the largest components with n_d = 0, 1, 2, 3, 4 within the given L spectra. States with different L values are arranged into panels labeled by 'n_d' to conform with the structure of the n_d-multiplets of the U(5) DS limit, Eq. (18). Each panel depicts the n_d-probability, P^{(L_i)}_{n_d}, of the corresponding states. The states shown in the right column of Fig. 19 have a different character. They belong to the five lowest regular sequences seen in the combined Peres lattices for ξ_c = 0 in Fig. 17. The association of a set of L_i-states to a given sequence is based on the close proximity of their lattice points {x_i, E_i} and on their having a similar decomposition in the SU(3) DS basis, to be discussed below. The states shown exhibit a broad n_d-distribution, hence qualify as 'deformed' types of states, forming rotational bands: g(K = 0), β(K = 0), β²(K = 0), β³(K = 0) and γ(K = 2). The bandhead energy of each K-band is listed in each panel. Note that the zero-energy deformed ground state, L = 0_1, is degenerate with the (n_d = 0, L = 0_2) spherical state. The P^{(L_i)}_{n_d} probabilities for the K = 0 bands in Fig. 19 display an oscillatory behavior, reflecting the expected nodal structure of these ground and multi-β-phonon bands. The SU(3) (λ, µ) decomposition of the same states is shown in Fig. 20. The spherical types of states, shown in the left column, involve considerable mixing with respect to SU(3), without any obvious common pattern among states in the same 'n_d' multiplet, and in marked contrast to their n_d-distribution shown in Fig. 19. The states in the 'n_d ≤ 2' multiplets involve higher SU(3) irreps, while those in the fragmented 'n_d ≥ 3' multiplets are more uniformly spread among all (λ, µ)-components. The 'rotational' types of states, shown in the right column of Fig. 20, again show a very different behavior. First, the ground g(K = 0) and the γ(K = 2) bands are pure, with (λ, µ) = (2N, 0) and (2N − 4, 2) SU(3) character, respectively. These are the solvable bands of Eq. (25) with SU(3) PDS. Second, the non-solvable K-bands are mixed with respect to SU(3), but the mixing is similar for the different L-states in the same band. Such strong but coherent (L-independent) mixing is the hallmark of SU(3) QDS. It results from the existence of a single intrinsic state for each such band and imprints an adiabatic motion and increased regularity [83].
By comparing the right hand side panels in Fig. 20, with the left hand side panels in Fig. 19, we find that the SU(3) QDS property of the 'deformed' states persists, while the U(5) PDS property of the spherical states dissolves at higher energy. This observation is in accord with the classical and quantum analyses. As portrayed in the Poincaré sections (Fig. 10) and Peres lattices (Figs. 17-18) at the critical point (ξ c = 0), the dynamics ascribed to the deformed well is regular and persists to energies higher than the barrier. In contrast, the dynamics ascribed to the spherical well, shows a Hénon-Heiles (HH) type of transition from regular to chaotic motion as the energy increases. A narrow chaotic layer in the classical phase space starts to occur at E ≈ 0.1, while fully chaotic dynamics develops at E ≈ 0.24, below the top of the barrier at V bar /h 2 = 0.268. For the boson number N = 50 considered, the 'n d = 0, 1' states in Fig. 19, lie in the energy domain of the regular HH dynamics, the 'n d = 2' triplet resides in the relatively-regular domain just above the appearance of the chaotic layer, while the 'n d = 3, 4' multiplets lie already near the barrier top, in the highly chaotic domain. Thus, the observed breakdown of the U(5)-character of the multiplets, can be attributed to the onset of chaos at higher energy in the region of the spherical well.
Measures of purity (PDS) and coherence (QDS)
The preceding discussion highlights the importance of U(5)-PDS and SU(3)-QDS in identifying and characterizing the persisting regular states. These symmetry notions rely on the purity and coherence of the states with respect to a DS basis. It is therefore of interest to have at hand quantitative measures for these properties.
The Shannon state entropy is a convenient tool to evaluate the purity of eigenstates with respect to a DS basis. Given a state |L_i⟩, with U(5) and SU(3) decompositions as in Eq. (68), its U(5) and SU(3) entropies, S_U5(L_i) and S_SU3(L_i), are defined in Eq. (70). To quantify the coherence of the SU(3) mixing, we employ the Pearson correlation coefficient π(0_i, L_j), whose values lie in the range [−1, 1]. Specifically, π(0_i, L_j) = 1, −1, 0 indicate a perfect correlation, a perfect anti-correlation, and no linear correlation, respectively, among the SU(3) components of the 0_i and L_j states. More details on these coefficients, in conjunction with the present study, are discussed in Appendix B. To quantify the amount of coherence (hence of SU(3)-QDS) in a chosen set of states, we adapt the procedure proposed in [81] and consider the product of the maximum correlation coefficients, C_SU3(0_i−6), Eq. (71). We consider the set of states {0_i, 2_j, 4_k, 6_l} as comprising a K = 0 band with SU(3)-QDS if C_SU3(0_i−6) ≈ 1.
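A minimal sketch of these two measures, acting on probability distributions of states over a DS basis and assuming the normalization and maximum-correlation conventions described above, is:

```python
import numpy as np

# (i) Normalized Shannon entropy of a probability distribution p over a DS basis
#     (S = 0 for a pure state); the normalization by log(len(p)) is an assumed convention.
def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)) / np.log(len(p))) if len(p) > 1 else 0.0

# (ii) Coherence of a candidate K = 0 band: product over L = 2, 4, 6 of the maximal
#      Pearson correlation between the SU(3) distribution of the bandhead 0_i and
#      the distributions of the candidate L-states (mirroring C_SU3(0_i-6)).
def band_coherence(p0, candidates_by_L):
    coherence = 1.0
    for plist in candidates_by_L:                  # one list of distributions per L
        coherence *= max(np.corrcoef(p0, pj)[0, 1] for pj in plist)
    return coherence
```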
The values of C_SU3(0_i−6) for selected sets of states are shown in Fig. 20. As expected, C_SU3(0_i−6) ≈ 1 for all the 'deformed' K-bands. On the other hand, this quantity is much smaller (but still non-zero) for the spherical types of states. Band structure based on SU(3) QDS thus necessitates a value of C_SU3(0_i−6) in very close proximity to 1. It should be noted that the coherence property of a band of states, as measured by C_SU3(0_i−6), is independent of its purity, as measured by S_SU3(L_i). Thus, in Fig. 20, the pure g(K = 0) and γ(K = 2) bands with SU(3) PDS have C_SU3(0_i−6) = 1 or C_SU3(2_i−8) = 1 and S_SU3 = 0, while the mixed β³(K = 0) band has C_SU3(0_i−6) = 0.9996 and S_SU3 = 0.406.
Starting on the spherical side of the QPT, at the U(5) DS limit (ρ U(5) = 0), the U(5) entropy, S U5 (L) = 0, vanishes for all states. The spherical L = 0 1 ground state ofĤ 1 (ρ) maintains S U5 (L = 0 1 ) = 0 throughout region I (0 ≤ ρ ≤ ρ * ) and in part of region II (ρ * < ρ ≤ ρ c ), in accord with its U(5)-PDS property, Eq. (20a). As seen in Fig. 21, for ρ > 0, all other L = 0, 2 eigenstates ofĤ 1 (ρ) have positive S U5 (L) > 0, reflecting their U(5) mixing. S U5 (L) attains small positive values at low energy, corresponding to spherical-type of states, and changes to moderate and high values as the energy increases, indicating stronger U(5) mixing. The departures of S U5 (L) from zero value start to occur at lower energy, as ρ approaches ρ * . This behavior is consistent with the Hénon Heiles type of dynamics and the onset of chaos in this region. In region III (ξ * * ≤ ξ ≤ 1) all states, including the ground state ofĤ 2 (ξ), have S U5 (L) ∈ [0.7, 0.9], exhibiting weaker variation with energy. These large values reflect the deformed nature of the underlying eigenstates, which are arranged in rotational bands. In region II of phase coexistence (ρ * < ρ ≤ ρ c and ξ c ≤ ξ < ξ * * ), S U5 (L) attains both low and high positive values, reflecting the presence of both spherical-and deformed-type of states. This creates a zig-zag pattern, especially visible in the triangular region bordered by the energies of the barrier (V bar ) and of the (deformed or spherical) local minima (V def or V sph ).
In spite of the U(5) mixing present in the overwhelming majority of eigenstates, a subset of low-lying states in regions I and II exhibit pronounced low-values of S U5 (L), indicating an enhanced purity with respect to U(5). Such states are members of U(5)-like multiplets, of the form discussed in Fig. 19. Their wave functions are dominated by a single n d component, which has the largest (maximal) n d -probability, P (L i ) n d , for a given (n d , L). These spherical type of states thus exemplify the persistence of an (approximate) U(5) PDS.
The top panel of Fig. 22 displays the values of the SU(3) Shannon entropy, S_SU3(L = 0), Eq. (70b), for the entire energy spectrum of L = 0 states. The notation of the lines is the same as in Fig. 21. Starting on the deformed side of the QPT, at the SU(3) DS limit (ξ_SU(3) = 1), the SU(3) entropy vanishes for all states, S_SU3(L) = 0. In this case, the L-states in a given K-band belong to a single SU(3) irrep, hence necessarily C_SU3(0−6) = 1. As one departs from the symmetry limit (ξ < 1), S_SU3(L = 0) acquires positive values, reflecting an SU(3) mixing. The SU(3) breaking becomes stronger at higher energies and as ξ approaches ξ_c = 0 from above, resulting in higher values of S_SU3(L = 0). A notable exception to this behavior is the deformed ground state (L = 0_1) of Ĥ_2(ξ), which maintains S_SU3(L = 0_1) = 0 throughout region III (ξ** ≤ ξ ≤ 1) and in part of region II (ξ_c ≤ ξ < ξ**), in accord with its SU(3)-PDS property, Eq. (25a). In contrast to the lack of SU(3) purity in all excited L = 0 states, the SU(3) correlation function maintains a value close to unity, C_SU3(0−6) ≈ 1. This indicates that the SU(3) mixing is coherent and that these L = 0 states serve as bandhead states of K = 0 bands with a pronounced SU(3) QDS. This band structure is observed throughout region III in extended energy domains. In particular, all such K = 0 bands show strong coherence up to the energy of the saddle point, V_sad, Eq. (54b), for ξ > ξ**, or of the spherical local minimum, V_sph, Eq. (45), for ξ < ξ**. This observation is consistent with the classical analysis, which revealed a robustly regular dynamics in this region. Coherent K = 0 bands can also be seen at high energy, above V_sph, in regions III and II of Fig. 22. One observes here numerous sequences of points with C_SU3(0−6) = 1, alternating with other points for which C_SU3(0−6) < 0.7. The former correspond to the regular states identified by the Peres lattices in Fig. 16, while the latter correspond to irregular (chaotic) states. In particular, at energies below V_lim, Eq. (44), there is a very sharp distinction between the two families, corresponding to the sharp distinction between the regular and chaotic states (dynamics) observed in the Peres lattices (Poincaré surfaces). At very high energies, above V_lim, some incoherence appears, consistent with the onset of chaos in region III.
In region I (0 ≤ ρ ≤ ρ*), all states exhibit high values of S_SU3(L = 0) ≈ 1 and C_SU3(0−6) < 1, indicating considerable SU(3) mixing and a lack of SU(3) coherence. This is in line with the presence of spherical states at low energy, of more complex types of states at higher energy, and the absence of rotational bands in this region. In region II of phase coexistence (ρ* < ρ ≤ ρ_c and ξ_c ≤ ξ < ξ**), one encounters both points with C_SU3(0−6) ≈ 1 and points with C_SU3(0−6) < 1. This reflects the presence of deformed states arranged into regular bands, exemplifying SU(3) QDS, and, at the same time, the presence of spherical states and other, more complicated types of states.
Global features of U(5) PDS and SU(3) QDS as fingerprints of the QPT
The preceding discussion has demonstrated the relevance of U(5) PDS and SU(3) QDS in characterizing the symmetry properties of individual quantum eigenstates of the intrinsic Hamiltonian across the QPT. The related measures of these quasi-symmetries, the U(5) Shannon entropy, S_U5(L), Eq. (70a), and the SU(3) correlation coefficient, C_SU3(0−6), Eq. (71), quantify the U(5) purity and the SU(3) coherence of these states, respectively. Considerable interest is drawn to subsets of regular states which maintain a high degree of purity or coherence amidst a complicated environment of other states. In particular, a small value of S_U5(L) signals an approximate U(5) PDS and identifies subsets of spherical-type states which reflect a surviving regular dynamics in the vicinity of the spherical minimum. On the other hand, a large value of C_SU3(0−6) ≈ 1 signals an SU(3) QDS and identifies rotational K-bands, which reflect a persisting regular dynamics in the vicinity of the deformed minimum. In the present Section, we wish to consider global features of these measures which can shed light on the PDS and QDS content of the system as a whole, and monitor its evolution across the QPT.
The presence or absence in the spectrum of spherical or deformed type of regular states, is intimately tied with the existence and depth of the corresponding spherical or deformed wells in the classical potential. As seen in Figs. 21-22, the number of such regular states is maximal at the DS limits and it reduces as the control parameters approach the values of the anti-spinodal or spinodal points, where the respective local minimum disappears. The evolution with (ρ, ξ) of the number of states having an approximate U(5) PDS or SU(3) QDS reflects the change in the morphology of the underlying Landau potential and can, therefore, serve as fingerprints of the QPT.
As a global measure of an approximate U(5) PDS, we consider the quantity ν_U5, which denotes the number of L = 0 states satisfying S_U5(L = 0) < 0.25. This quantity is an indicator of the amount of enhanced U(5) purity in the system. The choice of 0.25 as an upper limit is somewhat arbitrary; it is close to the value S_U5(L = 0) = 0.242, obtained at ξ = 0.17, for which the maximal U(5) probability is P^{(L=0)}_{n_d=0} = 0.8. Analogous quantities, ν_U5(L), can be calculated for states with other angular momentum L. Henceforth, we continue to use the shorthand notation ν_U5 ≡ ν_U5(L = 0). In a similar spirit, as a global measure of SU(3) QDS, we consider the quantity ν_SU3, which denotes the number of K = 0 bands whose L = 0, 2, 4, 6 members satisfy C_SU3(0−6) > 0.995. This quantity is an indicator of the amount of SU(3) coherence in the system. The choice of 0.995 as a lower limit is again somewhat arbitrary. It is based on a detailed study of the SU(3) correlator for the regular K = 0 bands in Fig. 22, which revealed a well-separated peak in the range C_SU3(0−6) ∈ [0.995, 1]. It should be pointed out that the chosen cutoff values for ν_U5 and ν_SU3 apply to eigenstates of the intrinsic Hamiltonian (7) with β_0 = √2 and N = 50, and that in general these thresholds vary with N. Fig. 23 displays the quantities ν_U5 and ν_SU3 as a function of the control parameters (ρ, ξ) across the QPT. At the U(5) DS limit (ρ_U(5) = 0), all states are pure with respect to U(5) and hence, as seen in panel (a) of Fig. 23, ν_U5 = 234 equals the total number of L = 0 states for N = 50. For ρ > 0, the quantity ν_U5 decreases, indicating a reduction in the U(5) PDS of the system in region I. This reduction in the U(5) purity is accelerated for larger values of ρ (stronger U(5) mixing), as the system enters region II of phase coexistence (ρ* < ρ ≤ ρ_c and ξ_c ≤ ξ < ξ**). Inside region II, ν_U5 attains smaller values; it vanishes as ξ approaches the anti-spinodal point ξ**, where the spherical minimum disappears, and remains ν_U5 = 0 in region III (ξ ≥ ξ**). Similar trends are seen for ν_U5(L) involving states with different angular momentum, when scaled by the number of states for each L.
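Given precomputed entropies and band correlators, the two counters reduce to simple threshold counts; a short sketch with the quoted cutoffs (the array inputs are hypothetical) is:

```python
import numpy as np

# Sketch of the global measures as threshold counts over precomputed quantities.
def nu_U5(S_u5_L0, cut=0.25):
    """Number of L = 0 states whose U(5) entropy lies below the cut."""
    return int(np.sum(np.asarray(S_u5_L0) < cut))

def nu_SU3(C_su3_bands, cut=0.995):
    """Number of candidate K = 0 bands whose correlator C_SU3(0-6) exceeds the cut."""
    return int(np.sum(np.asarray(C_su3_bands) > cut))

# Example: at a DS limit all states/bands pass the cut
print(nu_U5(np.zeros(234)), nu_SU3(np.ones(234)))   # 234 234
```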
At the SU(3) DS limit (ξ_SU(3) = 1), all states are pure and coherent with respect to SU(3); in particular, all L = 0 states serve as bandheads of K = 0 bands and hence ν_SU3 = 234 in panel (a) of Fig. 23. For ξ < 1, the quantity ν_SU3 decreases, indicating a reduction in the number of regular K = 0 bands with SU(3) QDS, as the deformed well becomes less deep in region III. This reduction in SU(3) coherence continues inside region II of phase coexistence, where 'spherical' states and chaotic types of states come into play. The quantity ν_SU3 vanishes as ρ approaches the spinodal point ρ*, where the deformed minimum disappears, and remains ν_SU3 = 0 in region I (0 ≤ ρ ≤ ρ*).
Panel (b) of Fig. 23 zooms in and provides more details on the evolution of ν_U5 and ν_SU3. As shown, both quantities are non-zero throughout region II, indicating the presence of both (approximate) U(5) PDS and SU(3) QDS inside the coexistence region. These global measures of purity and coherence in selected eigenstates thus trace the crossing of the spherical and deformed minima in the Landau potential, by monitoring the remaining regular dynamics associated with each of them. The vanishing of ν_SU3 near ρ* appears to be sharper and less gradual than the vanishing of ν_U5, which occurs even before ξ reaches ξ**. This reflects the more abrupt disappearance of the deformed minimum at ρ*, compared to the disappearance of the spherical minimum at ξ** [compare the behavior of (V_bar − V_def) near ρ* with that of (V_bar − V_sph) at ξ ≤ ξ**, in Fig. 3].
It is worthwhile emphasizing that both (approximate) U(5) PDS and SU(3) QDS are present in region II of phase coexistence, including the critical point, imprinting, in a transparent manner, the evolution of the first-order QPT. A number of factors have facilitated the exposure of such a simple pattern in the present study: (i) a high barrier, (ii) a wide coexistence region, and (iii) invoking the resolution of the Hamiltonian, Eq. (4), and performing the analysis on its intrinsic part. The latter does not contain rotation-vibration terms that can spoil the simple patterns observed. The effect of such collective kinetic terms will be considered in Section 8. The rich symmetry structure uncovered in region II, and the coexistence of PDS and QDS inside it, were not noticed in previous works because the Hamiltonians employed did not meet requirements (i)-(iii).
Collective effects
The analysis presented so far considered the evolution of the dynamics associated with the intrinsic part of the Hamiltonian across the QPT. The intrinsic Hamiltonian determines the Landau potential, and the variation of its control parameters (ρ, ξ) induces the shape-phase transition. In the present Section, we address the impact of the remaining collective part of the Hamiltonian on the order and chaos accompanying the QPT. For that purpose, we examine the classical and quantum dynamics of the combined Hamiltonian Ĥ = Ĥ_int(ρ, ξ) + Ĥ_col(c_3, c_5, c_6), Eq. (72). The intrinsic Hamiltonian considered, Ĥ_int(ρ, ξ), is that of Eq. (7) with h_2 = 1 and β_0 = √2, interpolating between the U(5) (ρ = 0) and SU(3) (ξ = 1) DS limits. The collective Hamiltonian considered, Ĥ_col(c_3, c_5, c_6), is that of Eq. (6), composed of kinetic terms with couplings c_3, c_5 and c_6, associated with collective O(3), O(5) and O(6) rotations in the Euler angles, γ and β degrees of freedom, respectively. By construction, Ĥ and Ĥ_int in Eq. (72) have the same Landau potential, which is not influenced by Ĥ_col. The observed modifications in the dynamics due to Ĥ_col are thus kinetic in nature, arising from momentum-dependent terms which vanish in the static limit. For simplicity, the impact of these rotational c_i-terms is studied individually, by adding them one at a time to Ĥ_int(ρ, ξ), the latter taken at representative values of (ρ, ξ) in regions I-III of the QPT. The results obtained indicate that, although the collective Hamiltonian does not affect the Landau potential, it can have dramatic effects on the onset of classical chaos, on the resonance structure and on the regular features of the quantum spectrum.
Classical analysis in the presence of collective terms
As previously done, we constrain the classical dynamics to zero angular momentum and visualize it by means of Poincaré sections. In such circumstances, the classical limit of the quantum Hamiltonian of Eq. (72) is given by Eq. (73), where the first term is the classical intrinsic Hamiltonian of Eq. (26) and the second term is the classical collective Hamiltonian of Eq. (32). The O(3) c_3-term is absent from the latter, since the classical dynamics is constrained to L = 0. The O(5) c_5-term depends on p_γ² and hence affects the γ motion, while the O(6) c_6-term depends on T = p_β² + p_γ²/β², on T², and on β² p_β², hence it is the only collective term affecting the β motion. The plane of the Poincaré section is chosen, as before, at y = 0, and its envelope at a given energy E is defined by H(x, y = 0, p_x, p_y = 0) = E, resulting in the condition of Eq. (74). As seen there, the envelope of the full classical Hamiltonian H is modified with respect to that of H_int solely due to the O(6) c_6-term.
Considering the classical dynamics of L = 0 vibrations in the stable spherical phase (region I), the relevant classical intrinsic Hamiltonian in Eq. (73) is H_1(ρ) of Eq. (26a), with 0 ≤ ρ ≤ ρ*. Fig. 24 shows, for ρ = 0, the Poincaré sections at E = 1 of H_1(ρ = 0) and with the added c_5 and c_6 collective terms. The potential surface is V_1(ρ = 0), Eq. (38), the same in all panels. The c_5-term turns the exact U(5) symmetry of the intrinsic Hamiltonian into a U(5) dynamical symmetry of the combined Hamiltonian. The c_6-term breaks the U(5) symmetry but maintains the reduced symmetry of the O(5) subgroup. As a result, the system for ρ = 0 remains integrable in the presence of both terms. The main effect is that the trajectories are no longer periodic but rather become quasi-periodic, start to precess and densely cover the surfaces of the invariant tori. In the Poincaré sections, instead of a finite collection of points, we now see smooth curves organized into two regular islands, forming a pattern typical of an anharmonic (quartic) oscillator. Fig. 25 shows similar sections, at E = 0.2 (bottom) and E = 0.5 (top), for the spinodal point ρ = ρ*. The relevant Landau potential is that depicted in the bottom panel of the ρ* = 0.5 column in Fig. 9. In general, the added collective terms maintain the characteristic features of the intrinsic classical dynamics, namely, a Hénon-Heiles type of transition from regular dynamics at low energy to chaotic dynamics at higher energy. The classical dynamics in the spherical region is, to a large extent, determined by the intrinsic part of the Hamiltonian. The classical intrinsic Hamiltonian in Eq. (73), appropriate to the coexistence region (region II), is H_1(ρ) of Eq. (26a), with ρ* < ρ ≤ ρ_c, and H_2(ξ) of Eq. (26b), with ξ_c ≤ ξ < ξ**. Fig. 26 displays the Poincaré surfaces for the critical point (ρ_c, ξ_c), with energies below, at, and above the barrier, arising from H_1(ρ_c) ≡ H_2(ξ_c) and from the added c_5 and c_6 rotational terms. The potential surface in all panels is that of Eq. (48), exhibiting a barrier separating the degenerate spherical and deformed minima. The c_5-term is seen to have very little effect on the Poincaré sections of the intrinsic Hamiltonian. On the other hand, the sections with the c_6-term have a smaller size and are compressed for large |p_x|, in accord with the properties of the envelope mentioned in Eq. (74). In addition, the regular island in the region of the deformed minimum appears to be more elongated in the x-direction (see the middle panel of the c_6 = 1 column in Fig. 26). This distortion can affect the regular bands built on the deformed minimum, as will be discussed in the subsequent quantum analysis.
The different impact of the two collective terms on the classical dynamics can be attributed to the fact that in the coexistence region the barrier at the saddle point is in the β-direction, and hence is more sensitive to the β motion. As mentioned, the latter motion is affected by the O(6) term but not by the O(5) term. In general, throughout region II, the presence of the collective terms in the Hamiltonian does not destroy the simple pattern of robustly regular dynamics confined to the deformed region and well-separated from the chaotic dynamics ascribed to the spherical region.
The classical intrinsic Hamiltonian in Eq. (73), relevant to the stable deformed phase (region III), is H 2 (ξ) of Eq. (26b), with ξ ≥ ξ * * . For ξ = 1, the intrinsic Hamiltonian has SU(3) symmetry and the system is completely integrable. As seen in Fig. 27, the inclusion of the c 5 -and c 6 -rotational terms leads to substantial modifications in the phase space portrait, showing chaotic layers and additional islands.
Both the O(5) and O(6) symmetries are incompatible with the SU(3) symmetry, hence the corresponding added rotational terms break the integrability of the intrinsic Hamiltonian at ξ = 1. This can lead to the occurrence of chaotic regions. The latter are more pronounced for the O(5) term (see the panels for c_5 = ±1 in Fig. 27), which can be attributed to the fact that in region III the saddle point accommodates a barrier in the γ-direction (see the contour plot in Fig. 2). It should be stressed that, in this case, the onset of chaos is entirely due to the kinetic terms of the collective Hamiltonian, since the intrinsic Hamiltonian is integrable for ξ = 1 and its Landau potential, which has a single deformed minimum, is kept intact in all panels of Fig. 27. This is a clear-cut demonstration that, on the deformed side of the QPT, chaos can develop from purely kinetic perturbations, without a change in the morphology of the Landau potential.
The phase space portrait of the integrable intrinsic Hamiltonian at the SU(3) limit shows the same pattern of two regular islands at any energy (see the ξ = 1 column in Figs. 27 and 11). The inclusion of the collective terms modifies this pattern. As discussed in Section 5.3, the pattern of islands is affected by the presence of resonances which, in turn, occur at low energy when the ratio R of normal-mode frequencies is a rational number. These resonance effects are influenced by the presence of the collective Hamiltonian. As seen in Eq. (12), both the c_5- and c_6-terms contribute to the normal-mode frequencies and hence change the ratio R of Eq. (59) to a modified, c_i-dependent value. In the current study, we adopt the values h_2 = 1 and β_0 = √2, for which this expression simplifies to R = [2(2ξ + 1) + c_6]/(6 + 2c_5/3 + c_6).
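A small consistency check of this expression against the limits discussed below, assuming only the quoted formula, is:

```python
from math import isclose

# Modified normal-mode frequency ratio for h2 = 1, beta_0 = sqrt(2):
#   R(xi, c5, c6) = [2*(2*xi + 1) + c6] / (6 + 2*c5/3 + c6)
def R_mod(xi, c5=0.0, c6=0.0):
    return (2 * (2 * xi + 1) + c6) / (6 + 2 * c5 / 3 + c6)

assert isclose(R_mod(1.0), 1.0)                    # intrinsic SU(3) limit: R = 1
assert isclose(R_mod(1.0, c6=1.0), 1.0)            # the O(6) term leaves R = 1 unchanged
assert isclose(R_mod(1.0, c5=1.0), 9 / (9 + 1))    # the O(5) term gives R = 9/(9 + c5)
print("all checks passed")
```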
For ξ = 1, the intrinsic Hamiltonian (with β 0 = √ 2) has R = 1. The inclusion of the O(6) rotational term does not alter this value, but it stabilizes an additional family of orbits circulating around, instead of passing through, the deformed minimum (see the unstable orbit of H 2 (ξ) for R = 1 in Fig. 12). As a result, two additional regular islands develop in the Poincaré sections shown in the c 6 = 1 column Fig. 27, compared to the SU(3) limit. The inclusion of the O(5) term changes the ratio to R = 9/(9 + c 5 ), leading to R < 1 (R > 1) for c 5 > 0 (c 5 < 0). The γ-motion, with p x ≈ 0, is stable for R < 1 and is unstable for R > 1. In the latter case, the center of the Poincaré section exhibits a hyperbolic fixed point and chaos develops in its vicinity as the energy increases (see the column with c 5 = −1 in Fig. 27). On the other hand, the β-motion, with large |p x |, is stable for R > 1 and is unstable for R < 1. In the latter case, chaos develops at the perimeter of the Poincaré section (see the column with c 5 = 1 in Fig. 27).
For ξ < 1, the integrability associated with the SU(3) limit is broken due to the presence of the (ξ − 1) P_0† P_0 term in the intrinsic Hamiltonian, Eq. (24). The effect of adding the rotational c_5-term on the classical dynamics is similar to that of varying the control parameter ξ in the intrinsic Hamiltonian. This is illustrated in Fig. 28, where different combinations of ξ and c_5 which yield the same ratio R give rise to similar Poincaré surfaces. Specifically, the surfaces shown in Fig. 28 resemble the corresponding ones in Fig. 27; the slight differences are due to anharmonic effects beyond the normal-mode approximation.
Quantum analysis in the presence of collective terms
The collective Hamiltonian Ĥ_col(c_3, c_5, c_6) of Eq. (6) is now added to the intrinsic Hamiltonian, and its impact on the quantum dynamics is examined in the three regions of the QPT. The intrinsic Hamiltonian in region I of the QPT is Ĥ_1(ρ), Eq. (7a), with 0 ≤ ρ ≤ ρ*. For ρ = 0, it has U(5) DS and a solvable spectrum, Eq. (17). The added collective c_5-term conforms with the dynamical symmetry; the eigenstates remain the U(5) basis states |N, n_d, τ, n_Δ, L⟩ of Eq. (1a) and hence satisfy S_U5(L_i) = 0. The combined spectrum acquires an additional τ-dependent contribution from the c_5-term, which explains the observed spreading in the Peres lattice with c_5 = 1 in Fig. 29. The energies of the lowest L = τ = 0 states still follow the potential curve V_1(ρ = 0), Eq. (38). The c_6-term breaks the U(5) symmetry, inducing considerable Δn_d = ±2 mixing, but retains the O(5) symmetry and the quantum number τ. Accordingly, the U(5) Shannon entropies are non-zero, as seen for c_6 = 1 in Fig. 29. Nevertheless, a few low-lying (as well as high-lying) states exhibit a low value of S_U5(L), indicating the persistence of an (approximate) U(5)-PDS in the presence of the O(6) term. The energies of the lowest L = τ = 0 states in the Peres lattice now deviate from V_1(ρ = 0). In all cases considered in Fig. 29, with and without the collective terms, the eigenstates in question are spherical in nature, hence exhibit considerable SU(3) mixing (S_SU3(L = 0) ≈ 1) and a lack of SU(3) coherence (C_SU3(0−6) < 1).
For ρ > 0, the intrinsic Hamiltonian Ĥ 1 (ρ) itself breaks the U(5) symmetry. Most of its eigenstates are mixed with respect to U(5) except for the U(5)-PDS states of Eq. (20), with n d = τ = L = 0, 3. The U(5)-PDS property still holds when the c 5 -term is included, but is violated by the c 6 -term. This can be seen in Fig. 30 for the spherical ground state, L = 0 1 , which has S U5 = 0 (S U5 > 0) for the c 5 (c 6 ) term. In general, the added collective terms maintain the characteristic features of the intrinsic quantum dynamics in region I, namely, the presence of spherical-type of states at low energy, with an approximate U(5)-PDS, of more complex types of states at higher energy, and the absence of rotational bands, hence S SU3 (L = 0) ≈ 1 and C SU3 (0−6) < 1 for all states. The quantum dynamics in the spherical region is, to a large extent, determined by the O(5)-breaking ρ-term in the intrinsic Hamiltonian.

The intrinsic Hamiltonian in region II is Ĥ 1 (ρ), Eq. (7a), with ρ * < ρ ≤ ρ c , and Ĥ 2 (ξ) of Eq. (7b), with ξ c ≤ ξ < ξ * * . As discussed in Section 7, the new element entering the intrinsic quantum dynamics in the shape-coexistence region is the occurrence of deformed-type of states forming rotational K-bands, associated with the deformed minimum, coexisting with low-energy spherical-type of states, associated with the spherical minimum, in the background of more complicated types of states at higher energies. The regular rotational K-bands exhibit coherent SU(3) mixing, and for K = 0 bands are signaled by C SU3 (0−6) ≈ 1. As shown for the critical point (ξ c = 0) in Fig. 31, the inclusion of the collective c 5 -term maintains these features. In contrast, the regular band-structure is disrupted by the inclusion of the c 6 -term. The number of quasi-SU(3) bands for which C SU3 (0−6) > 0.995 is now reduced from 12 to 6. Thus, most of the reduction of SU(3)-QDS is due to the collective O(6) rotations which couple the deformed and spherical configurations and mix strongly the regular and irregular states. This disruption of band-structure is consistent with the β-distortion of the regular island, observed in the classical analysis of Fig. 26. It highlights the importance for QPTs of the coupling of the order parameter fluctuations with soft modes [90].
The intrinsic Hamiltonian in region III is Ĥ 2 (ξ) of Eq. (7b), with ξ ≥ ξ * * . For ξ = 1, it has SU(3) DS and a solvable spectrum, Eq. (22). The added collective c 5 - and c 6 -terms both break the SU(3) symmetry and consequently, as seen in Fig. 32, the SU(3) Shannon entropy in both cases is positive, S SU3 (L) > 0. At low and medium energies (E ≤ 3 for the c 5 -term and E ≤ 4.5 for the c 6 -term), the SU(3) mixing is coherent and the L-states are still arranged in rotational bands. The number of such regular K = 0 bands is smaller for the c 5 -term, consistent with the classical analysis of Fig. 27, showing a more pronounced onset of chaos in the γ-motion due to the O(5) rotational term. At higher energies, the SU(3)-QDS property is dissolved due to mixing with other types of states. In general, there are no spherical-type of states in region III, and the U(5) entropy is positive, S U5 (L) > 0, in all panels of Fig. 32. This is in line with the fact that the classical Landau potential has a single deformed minimum in this region.
Height of the barrier
All calculations presented so far were performed at a fixed value of β 0 = √ 2, ensuring a high barrier V bar /h 2 = 0.268, Eq. (52), at the critical point (ξ c = 0). For this choice, the intrinsic Hamiltonian Ĥ 2 (ξ; β 0 = √ 2), Eq. (24), attains the SU(3) limit for ξ = 1 and exhibits SU(3)-PDS for states in the ground and selected gamma bands of Eq. (25), throughout the deformed region, ξ ≥ ξ c . A variation of the parameter β 0 in the intrinsic Hamiltonian, Eq. (7), affects the symmetry properties of quantum states and the morphology of the classical potential, in particular, the height of the barrier. In the present section we examine the implied changes in the dynamics in the coexistence region of the QPT, reflecting the impact of different barrier heights.
Focusing the discussion on the intrinsic dynamics at the critical point (ρ c , ξ c ), the Poincaré sections for the classical Hamiltonian H 1 (ρ c ) = H 2 (ξ c ), Eq. (26), with β 0 = 0.35, 1.5, 1.87, are displayed in the left, center and right columns of Fig. 33, respectively. The three cases correspond to potential barriers V bar /h 2 = 0.0018, 0.322, 1.257, compared to the value at the domain boundary, V lim /h 2 = 2. The bottom row depicts the corresponding classical potentials V cri (β, γ = 0) = V cri (x, y = 0), Eq. (48). Apart from an energy scale, the three cases display similar trends, namely, a Hénon-Heiles type of transition, with increasing energy, from regular to chaotic motion in the vicinity of the spherical well, and regular dynamics in the vicinity of the deformed well. The extremely low-barrier case displayed in the left column of Fig. 33 is obtained for β 0 = 1/(2√2) ≈ 0.35. The small energy scale explains why the simple pattern of coexisting but well-separated regular and chaotic dynamics in the coexistence region has escaped attention in all previous works which employed the Hamiltonian of Eq. (13). This highlights the benefits gained by using the intrinsic-collective resolution of the Hamiltonian, Eq. (4), and the ability to construct Hamiltonians accommodating a high barrier, in order to uncover in a transparent manner the rich dynamics in the coexistence region of the QPT.
In spite of the overall similarity, some differences can be detected between the classical dynamics with β 0 < 1 and β 0 > 1. In the former case, the onset of chaos occurs at a lower energy, as demonstrated in Fig. 33. This can be attributed to the different relative weights of the harmonic β 0 ² β² term and the chaos-driving β³ cos 3γ term in the Landau potential, V cri (β, γ), Eq. (48). The value of β 0 also affects the ratio R of normal-mode frequencies of oscillations about the deformed minimum, Eq. (59). As noted in Section 5.3, the number of islands in a Poincaré-Birkhoff chain is 2/R, hence decreases with R ∝ (1 + β 0 ²). Accordingly, the island chains are more visible and the resonance structure is more pronounced for larger values of β 0 (see the column for β 0 = 1.87 in Fig. 33).

In each case, one can clearly identify regular sequences of K = 0, 2, 4 bands localized within and above the respective deformed wells, and persisting to energies well above the barriers. The number of such K-bands is larger when the potential is deeper (larger β 0 values). To the left of the barrier towards the spherical minimum, one observes a number of low-energy U(5)-like multiplets, Eq. (18). This spherical multiplet-structure is very pronounced for β 0 = 1.5, 1.87 (high barriers) and only part of it survives for β 0 = 0.35 (extremely low barrier). For β 0 ≠ √ 2, the intrinsic Hamiltonian Ĥ 2 (ξ; β 0 ), Eq. (7b), no longer possesses the SU(3) PDS property, Eq. (25). All eigenstates are mixed with respect to SU(3), including member states of the ground and gamma bands. Nevertheless, by construction, Ĥ 2 (ξ; β 0 ) still satisfies Eq. (5), and hence the states with L = 0, 2, 4, . . . , 2N, projected from the condensate, Eq. (3), with [β eq = √2 β 0 (1 + β 0 ²)^(−1/2), γ eq = 0], span a solvable (but SU(3)-mixed) ground band. In general, the SU(3) mixing is stronger for larger deviations |β 0 − √2|, and the mixing is coherent for the L-states in the same K-band. This is illustrated in Fig. 35, which shows the SU(3) decomposition in the solvable ground band of the critical-point intrinsic Hamiltonian Ĥ 2 (ξ c ; β 0 ), for two values of β 0 . For β 0 = 1, the L = 0 bandhead state of the ground band has a high value of the SU(3) Shannon entropy, S SU3 (L = 0) = 0.33, hence is less pure compared to its counterpart with β 0 = 1.5, for which S SU3 (L = 0) = 0.03. In both cases, the ground bands exhibit SU(3) coherence (L-independent mixing), with SU(3) correlation coefficients C SU3 (0−6) = 1, exemplifying SU(3)-QDS.
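To make the quoted (im)purity values concrete, the toy snippet below evaluates a normalized Shannon entropy for the decomposition of a state over symmetry irreps. The normalization by the logarithm of the number of components is an assumption made here for illustration and need not coincide with the precise definition of Eq. (70a); the probability vectors are invented.

```python
# Illustrative normalized Shannon "symmetry entropy" for the decomposition of an
# eigenstate in a given basis (e.g. SU(3) irreps). Assumption: S is normalized by
# ln(K), with K the number of components, so S = 0 for a pure state and S -> 1
# for a maximally mixed one; this is not necessarily identical to Eq. (70a).

import numpy as np

def shannon_entropy(probabilities):
    p = np.asarray(probabilities, dtype=float)
    p = p / p.sum()                     # normalize the decomposition weights
    if len(p) < 2:
        return 0.0
    nonzero = p[p > 0.0]
    return float(-(nonzero * np.log(nonzero)).sum() / np.log(len(p)))

# A state dominated by a single irrep has low entropy (high purity) ...
print(shannon_entropy([0.97, 0.02, 0.01, 0.0]))   # ~0.11, small
# ... while a broadly mixed state has entropy close to 1.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.0
```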
Summary and conclusions
We have presented a comprehensive analysis of the dynamics evolving across a generic (high-barrier) first-order QPT, with particular emphasis on aspects of chaos, regularity, and symmetry. The study was conducted in the framework of the IBM, a prototype of an algebraic model, whose phases are associated with dynamical symmetries (DSs) and the transitions between them exemplify QPTs in an interacting many-body system. The specific model Hamiltonian employed describes a shape-phase transition between spherical [U(5) DS] and deformed [SU(3) DS] quadrupole shapes, a situation encountered in nuclei. The resolution of the Hamiltonian into intrinsic (vibrational) and collective (rotational) parts has allowed us to disentangle the effects due to terms affecting the Landau potential from effects due to kinetic terms. The separate treatment of the intrinsic dynamics highlights simple features by avoiding distortions that may arise in the presence of large rotation-vibration coupling. The availability of IBM Hamiltonians accommodating a high barrier in a wide range of control parameters made it possible to uncover a previously unrecognized pattern of competing order and chaos, which echoes the QPT in the coexistence region.
A classical analysis of the intrinsic part of the Hamiltonian revealed a rich mixed dynamics with distinct features in each structural region of the QPT. On the spherical side of the QPT, the system is integrable at the U(5) DS limit. Near it, the phase space portrait resembles that of a weakly perturbed anharmonic (quartic) oscillator. In other parts of region I, where the Landau potential has a single spherical minimum, the phase space portrait is similar to the Hénon-Heiles system, with regular dynamics at low energy and chaos at higher energy. The non-integrability here is due to the O(5)-breaking term in the Hamiltonian. On the deformed side of the QPT, the system is integrable at the SU(3) DS limit. Away from it, the integrability is lost by a different mechanism of breaking the SU(3) symmetry. The dynamics, however, remains robustly regular throughout region III, where the Landau potential supports a single deformed minimum. The Poincaré sections in this region are dominated by regular trajectories forming a single island. Additional chains of regular islands show up, occasionally, due to resonances in the normal-mode oscillations. The fact that the classical dynamics evolves differently is attributed to the different topology of the Landau potential in the vicinity of the two minima. In spite of the abrupt structural changes taking place, the dynamics in the phase coexistence region (region II) exhibits a very simple pattern where each minimum preserves, to a large extent, its own characteristic dynamics. The robustly ordered motion is still confined to the deformed minimum, in marked separation from the chaotic behavior ascribed to the spherical minimum. The coexistence of well-separated order and chaos persists in a broad energy range, even above the barrier, throughout region II, and is absent outside it. The simple pattern of mixed dynamics thus traces the crossing of the two minima, a defining feature of a first-order QPT. Simply divided phase spaces are known to occur in billiard systems [91,92], where the amount of chaoticity in the motion of a free particle is governed by the geometry of the cavity. Here, however, they show up in a many-body interacting system undergoing a QPT, where the onset of chaos is governed by a variation of coupling constants in the Hamiltonian.
The quantum manifestations of the classical inhomogeneous phase space structure have been analyzed via Peres lattices. The latter distinguish regular from irregular quantum states by means of ordered and disordered meshes of points. A choice of Peres operator whose classical limit corresponds to the deformation allowed us to overlay the lattices on the classical potentials and thus associate the indicated states with a given region in phase space. The results obtained reflect adequately the mixed nature of the classical dynamics. The distribution of lattice points agrees with the location of regular and chaotic domains in the classical Poincaré sections. The quantum analysis has disclosed a number of regular low-energy spherical-vibrator U(5)-like multiplets, associated with the spherical minimum, and regular SU(3)-like rotational K-bands in the vicinity of the deformed minimum. The latter bands persist to energies well above the barrier, extend to high values of angular momenta, and their number is larger for deeper deformed wells. These two kinds of regular subsets of states retain their identity amidst a complicated environment of other states, and both are present in the coexistence region. An important clue on the nature of the surviving regular sequences of selected states comes from a symmetry analysis of their wave functions. A U(5) decomposition has shown that the above-mentioned regular U(5)-like multiplets consist of spherical type of states, with wave functions dominated by a single n d component. As such, they exhibit U(5) partial dynamical symmetry [U(5)-PDS], either exactly or to a good approximation. This enhanced U(5) purity is signaled by a low value of the U(5) Shannon entropy, Eq. (70a). In contrast, the deformed type of states exhibit a broad n d -distribution. An SU(3) decomposition has shown that the regular K-bands exhibit coherent, L-independent SU(3) mixing, signaling an SU(3) quasi-dynamical symmetry [SU(3)-QDS]. The collective part of the Hamiltonian consists of kinetic terms associated with O(5) and O(6) rotations. When added to the intrinsic part of the Hamiltonian, they lead to rotational splitting and mixing. Although these kinetic terms do not affect the Landau potential, the mixing induced by the O(5) and O(6) terms can affect the onset of classical chaos and the regular features of quantum states. An analysis of the classical and quantum dynamics has shown that in region I, the added collective terms, being O(5)-invariant, maintain the Hénon-Heiles type of dynamics; the onset of chaos being largely determined by the intrinsic Hamiltonian. The O(6) rotational term, being associated with the β degree of freedom, was found to be significant in the coexistence region. Its presence disrupts the regular K-bands built on the deformed minimum and reduces their coherence property related to SU(3)-QDS. The simple pattern of well-separated regular and chaotic dynamics ascribed to each minimum, however, is not destroyed. The O(5) rotational term, being associated with the γ degree of freedom, was found to be significant in region III. Its presence modifies the regular intrinsic dynamics associated with the single deformed minimum and leads to significant chaoticity. The SU(3)-QDS property here is completely dissolved at higher energies. It is important to note that chaos can develop from purely kinetic perturbations, without a change in the Landau potential, as vividly demonstrated in Fig. 27. This illustrates that a criterion for the onset of chaos cannot be based solely on the geometry of the potential.
The study of first-order QPTs conducted in this work considered a finite system, whose mean-field potential involved two asymmetric wells, one dominated by chaotic dynamics and the other by regular dynamics. A parameter β 0 in the Hamiltonian governed the height of the barrier between them. The ramifications of divided phase space and Hilbert space structure, e.g., simple patterns of dynamics and intermediate symmetries (PDS and QDS), are observed at any β 0 > 0, but are more pronounced for higher barriers (larger β 0 ). It will be interesting to see in future studies of first-order QPTs whether simply divided spaces occur also when both wells accommodate regular or chaotic motion but with distinct characteristic features, e.g., different phase space portraits and dissimilar symmetry structure. Other issues which deserve further attention are finite-size effects and scaling behavior. Although there are initial indications that the simple pattern of mixed dynamics characterizing the QPT occurs also at moderate values of N, a systematic study is called for. The large-N scaling behavior should be considered, in analogy to what has been done in second-order (continuous) QPTs. An interesting question to address is whether the global measures of U(5)-PDS and SU(3)-QDS, ν U5 and ν SU3 , shown in Fig. 23, converge to a particular curve for large N.
Returning to the key questions posed in the Introduction, we end with some pertinent remarks. Based on the results obtained in the present paper, we conclude that the interplay of order and chaos accompanying the first-order QPT can reflect its evolution, provided the underlying phase-space is simply divided and each minimum maintains its own characteristic dynamics. If these conditions are met, then the resulting simple pattern of mixed dynamics can trace the modifications in the topology of the Landau potential inside the coexistence region. The pattern of mixed but well-separated dynamics is particularly transparent when considering the intrinsic dynamics, and appears to be robust. Deviations are largely due to kinetic collective rotational terms, which may lead to strong rotation-vibration coupling, breakdown of adiabaticity and an onset of chaos due to purely kinetic perturbations. The present work suggests that the remaining regularity in the system, associated with different minima at the classical level, and with different regular subsets of eigenstates at the quantum level, amidst a complicated environment, can be assigned particular intermediate symmetries, PDS or QDS. Both the classical and quantum analysis indicate a tendency of a system undergoing a QPT to retain some "local" regularity far away from integrable limits and some partial or quasi form of symmetries far away from symmetry limits. Is this linkage between persisting regularities and persisting symmetries a general result or an observation valid for specific algebraic models? What are the general conditions for a dynamical system to have these local regions of regularities and effective symmetries for subsets of states? Can one incorporate the notions of quasi- and partial dynamical symmetries in attempts [93,94,95] to formulate quantum analogs of the KAM and Poincaré-Birkhoff theorems? Quantum phase transitions in many-body systems and their algebraic modeling provide a fertile ground for addressing these deep questions. The present work is only a first step towards accomplishing this goal.
Given a Hamiltonian Ĥ(λ) describing a QPT, its potential surface coefficients, a(λ), b(λ) and c(λ), Eq. (78), depend on the control parameter λ. In the case of a first-order QPT between a spherical and prolate-deformed shape, the value of the control parameter at the critical point (λ c ), which defines the critical-point Hamiltonian Ĥ(λ c ), is determined by the condition that the spherical and deformed minima of the potential surface are degenerate. The corresponding potential surface has degenerate spherical and deformed minima at β = 0 and (β = β̄, γ = 0), where β̄ = 2a/b. The value of the control parameter at the spinodal point (λ * ), where the deformed minimum disappears, is obtained by requiring D = 0, with D given in Eq. (82). The value of the control parameter at the anti-spinodal point (λ * * ), where the spherical minimum disappears, is obtained by requiring a = 0. The potential surface coefficients for the first-order intrinsic Hamiltonian of Eq. (7) are obtained in the same manner. The values of the control parameters at the critical (ρ c , ξ c ), spinodal (ρ * ), and anti-spinodal (ξ * * ) points, given in Eqs. (33)-(35), were obtained from the conditions mentioned above. The energy surface coefficients of the collective Hamiltonian, Eq. (6), all vanish.
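For completeness, a minimal worked derivation of the degeneracy condition is sketched below. It assumes the standard quartic form V(β) = aβ² − bβ³ + cβ⁴ with a, b, c > 0; this sign convention is an assumption made here, since only the result β̄ = 2a/b is quoted above.

```latex
% Assuming the quartic form V(\beta) = a\beta^{2} - b\beta^{3} + c\beta^{4}
% (a, b, c > 0), degeneracy of the spherical and deformed minima requires
% V(\bar\beta) = 0 and V'(\bar\beta) = 0 for some \bar\beta > 0:
\begin{align*}
 a - b\bar\beta + c\bar\beta^{2} &= 0, &
 2a - 3b\bar\beta + 4c\bar\beta^{2} &= 0 .
\end{align*}
% Eliminating c between the two equations gives \bar\beta = 2a/b, and
% substituting back yields the critical-point condition
\begin{equation*}
 b^{2}(\lambda_{c}) = 4\, a(\lambda_{c})\, c(\lambda_{c}),
 \qquad \bar\beta = \frac{2a}{b} .
\end{equation*}
```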
Here X̄, Ȳ and s X , s Y are the mean values and standard deviations of the vector components, respectively. The values of the Pearson coefficient lie in the range −1 ≤ π(X, Y) ≤ 1, with π(X, Y) = 1, π(X, Y) = −1, and π(X, Y) = 0 indicating perfect correlation, perfect anticorrelation and no linear correlation, respectively. In the present work, we apply the Pearson indicator to estimate the amount of correlation between two eigenstates of an IBM Hamiltonian, |L i ⟩ and |L j ⟩. For that purpose, we expand both states in the SU(3) basis and treat the resulting sets of amplitudes as vectors, setting a component to 0 if the angular momentum L is not contained in the particular SU(3) irrep (λ, µ). To associate a band of states with a given state |L i ⟩, we scan the entire spectrum of states |L′ j ⟩ with a given angular momentum L′ and choose the state that maximizes the Pearson correlation coefficient max j {π(L i , L′ j )}, Eq. (85). This identifies, among the ensemble of states with angular momentum L′, the state most correlated with |L i ⟩, which is the favored candidate to be its member in the same band. This procedure, adapted from [81], was used in Eq. (71) to identify K = 0 bands, composed of sequences of rotational states with L = 2, 4, 6, built on a given |L = 0 i ⟩ bandhead state. | 2014-09-22T19:00:45.000Z | 2014-04-02T00:00:00.000 | {
"year": 2014,
"sha1": "0288562f45cdc9a2eb1a2494322c92562392ff13",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "http://manuscript.elsevier.com/S0003491614002553/pdf/S0003491614002553.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0288562f45cdc9a2eb1a2494322c92562392ff13",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238254756 | pes2o/s2orc | v3-fos-license | Risk Factors for Non-arteritic Anterior Ischemic Optic Neuropathy: A Large Scale Meta-Analysis
Objective: We conducted a meta-analysis to explore all the potential risk factors for non-arteritic anterior ischemic optic neuropathy (NAION) based on the published literature. Methods: A comprehensive literature search through the online databases was performed to obtain studies concerning the risk factors of NAION up to June 2020. Pooled unadjusted odds ratios (ORs) or rate ratios (RRs) were calculated to evaluate the weight of risk factors. This study was registered in PROSPERO under the number CRD42018084960. Results: Our meta-analysis included 49 original studies comprising more than 10 million patients. The following risk factors were proved to be significantly associated with NAION: male gender (OR = 1.67, 95% CI: 1.50–1.85, P < 0.00001), hypertension (RR = 1.28, 95% CI: 1.20–1.37, P < 0.00001), hyperlipidemia (RR = 1.43, 95% CI: 1.26–1.62, P < 0.00001), diabetes mellitus (DM) (RR = 1.53, 95% CI: 1.36–1.73, P < 0.00001), coronary heart disease (CHD) (RR = 1.68, 95% CI: 1.24–2.27, P = 0.0008), sleep apnea (RR = 3.28, 95% CI: 2.08–5.17, P < 0.00001), factor V Leiden heterozygous (RR = 2.21, 95% CI: 1.19–4.09, P = 0.01), and medication history of cardiovascular drugs. Conclusion: We concluded that the above risk factors were significantly related to NAION. A better understanding of these risk factors in NAION can help direct therapeutic approaches.
INTRODUCTION
Non-arteritic anterior ischemic optic neuropathy (NAION) is the most common ischemic optic neuropathy. The incidence rate is 2.5-11.8 per 100,000 in men older than 50 (1). Characterized by optic nerve ischemia mostly due to hypoperfusion of short posterior ciliary arteries (SPCAs) (2), NAION can lead to unilateral, sudden, and painless loss of vision among awake patients. Segmental or diffuse optic disc edema can be observed without evidence of arteritis (3). Although the detailed pathogenesis is unclear, NAION is probably related to systemic hypoperfusion, nocturnal hypotension, local autoregulation failure, and hypercoagulation (2). NAION is a naturally progressive disease, and the contralateral eye involvement rate is 15-20% in the following 5 years (4). Available medications, including corticosteroids, aspirin, and neurotrophic drugs, have shown limited and controversial efficacy (5,6). Risk factors should therefore be taken into thorough consideration when planning prevention and treatment of NAION.
Two previously published meta-analyses reported the influence of DM and sleep apnea on NAION, respectively (7,14). However, the reported impact of other factors differed across the literature and no cumulative conclusions were reached. Therefore, we decided to perform a large-scale systematic review and meta-analysis on all the possible NAION risk factors identified in the published studies. To the best of our knowledge, this is the first meta-analysis concentrating on multiple factors of NAION. We expect this to help clinicians comprehensively understand the risk factors of NAION and to provide more evidence for prevention and treatment.
METHODS
We conducted this systematic review and meta-analysis in accordance with the meta-analysis of observational studies in epidemiology (MOOSE) guidelines. The protocol registration number of PROSPERO (https://www.crd.york.ac.uk/prospero/) was CRD42018084960.
Search Strategy and Study Selection
Original literature was searched comprehensively through the electronic PubMed, Medline, Embase, and Cochrane Library databases. Related references were also screened, including gray literature (via the website http://graylit.osti.gov/). The language of included studies was restricted to English. The last search was on June 6, 2020. The search terms were applied as follows: "non-arteritic anterior ischemic optic neuropathy" OR "non-arteritic anterior ischaemic optic neuropathy" OR "NAION" OR "NA-AION" in combination with "risk" OR "factor" OR "risk factor." These terms were searched in all the fields of articles, not restricted to their abstracts.
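Purely as an illustration of how such a boolean query can be executed programmatically, the sketch below sends the stated terms to PubMed through the NCBI E-utilities (Biopython). The e-mail address, date window and result cap are placeholders, and the original search additionally covered Medline, Embase, the Cochrane Library and grey literature, which this snippet does not reach.

```python
# Hypothetical sketch of running the stated boolean query against PubMed via
# the NCBI E-utilities (Biopython's Entrez module). The e-mail and limits are
# placeholders; this is not the authors' actual retrieval pipeline.

from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder, required by NCBI

query = ('("non-arteritic anterior ischemic optic neuropathy" OR '
         '"non-arteritic anterior ischaemic optic neuropathy" OR '
         '"NAION" OR "NA-AION") AND (risk OR factor OR "risk factor")')

handle = Entrez.esearch(db="pubmed", term=query, retmax=500,
                        datetype="pdat", mindate="1900/01/01",
                        maxdate="2020/06/06")
record = Entrez.read(handle)
handle.close()

print(record["Count"])         # number of matching PubMed records
print(record["IdList"][:10])   # first few PMIDs for screening
```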
The inclusion criteria were listed as follows. First, the clinical studies concerning the comparisons of risk factors between NAION and controls were taken into further consideration. Second, the risk factors should exist before the diagnosis of NAION, which was judged by carefully screening the abstracts and/or full texts. Third, the samples of case and control groups were provided directly, or odds ratios (ORs) or rate ratios (RRs) of risk factors were reported with 95% CIs. Accordingly, we excluded the animal experiments, case reports/series, abstracts, conference proceedings, repeated publications, non-published materials, reviews, and editorials.
Two investigators (B. L. and Y. Y.) did the literature search, study screening, data extraction, and eligible study quality assessment independently. The inconsistency was resolved by a third reviewer or via an open discussion.
Data Extraction and Study Quality Assessment
We collected the following data in a prepared standard form: first author, year of publication, country, ethnicity, study design, study duration, sample size, baseline patient information, and the number of patients (ORs or RRs with 95% CIs) in both the NAION and the control groups. The Newcastle-Ottawa Scale (NOS) (15) was applied for quality assessment of the non-randomized studies, as no randomized controlled trials (RCTs) were available. Studies achieving seven or more stars were regarded as high quality.
Statistics Analysis
We calculated pooled unadjusted ORs or RRs for dichotomous variables to identify the association between the risk factors and NAION. Accordingly, mean differences (MDs) were used for continuous variables. Heterogeneity was assessed with the chi-square test based on Q and I 2 values (16). Heterogeneity was considered non-significant if the p-value was >0.10, in which case the fixed-effect model was used; otherwise, the random-effects model was applied. We also conducted subgroup analyses according to different populations. Results were considered significant if a two-sided p-value was <0.05. Visual inspection of inverted funnel plots was used to assess publication bias for all comparisons, and Egger's test was added when the number of studies was more than 10. Data analyses were performed in RevMan (version 5.3; Cochrane Collaboration, Oxford, UK) and STATA (version 13.0; StataCorp, College Station, TX, USA).
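For readers who wish to reproduce the pooling logic outside RevMan/STATA, the sketch below illustrates inverse-variance pooling of study-level ORs with a DerSimonian-Laird random-effects fallback when the heterogeneity p-value is ≤ 0.10. The 2 × 2 counts are invented for illustration; this is not the code actually used in this meta-analysis.

```python
# Minimal sketch of inverse-variance pooling of study-level odds ratios, with a
# DerSimonian-Laird random-effects estimate used when the Q test suggests
# heterogeneity (p <= 0.10). Counts below are made-up illustrative numbers.

import numpy as np
from scipy import stats

# (cases_exposed, cases_unexposed, controls_exposed, controls_unexposed) per study
studies = [(40, 60, 55, 145), (25, 75, 30, 170), (60, 90, 80, 220)]

log_or, var = [], []
for a, b, c, d in studies:
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
log_or, var = np.array(log_or), np.array(var)

w = 1.0 / var                                   # fixed-effect weights
pooled_fe = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - pooled_fe) ** 2)
df = len(studies) - 1
p_het = 1.0 - stats.chi2.cdf(Q, df)
i2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0

if p_het > 0.10:                                # no significant heterogeneity
    pooled, se = pooled_fe, np.sqrt(1.0 / np.sum(w))
else:                                           # DerSimonian-Laird random effects
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)
    pooled, se = np.sum(w_re * log_or) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 = {i2:.0f}%")
```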
DISCUSSION
In our systematic review and meta-analysis, we included articles studying a variety of risk factors: age, gender, ethnicity, systemic diseases, ocular factors, genotypes, cardiovascular drugs, and so on. We finally concluded the following risk factors to be significantly associated with NAION: male gender, hypertension, hyperlipidemia, DM, CHD, sleep apnea, medication history of cardiovascular drugs, and factor V Leiden heterozygous. Some other systemic and ocular diseases were researched in <3 studies and did not seem to be significant risk factors. In the subgroup analyses based on ethnicities, we found that the influences of gender in Asians and of CHD and sleep apnea in Europeans were not significant. Therefore, caution is needed when applying our results to different populations.
In our meta-analysis, the cardiovascular factors, including hypertension, hyperlipidemia, and DM, were highly associated with NAION and were the most frequently studied in the literature. NAION probably results from local and/or systemic hypoperfusion (2). Although this is not a thrombotic event, many predisposing factors related to thrombogenesis or a hypercoagulable state can disturb the systemic blood circulation via different pathways (41). For example, hyperlipidemia is harmful to endothelial cells and accelerates the formation of atherosclerotic plaques, further leading to hypertension and CHD (27). Some biochemical markers (such as hyperhomocysteinemia) and genetic polymorphisms (such as factor V Leiden heterozygous) indicating a hypercoagulable state were also significant risk factors in our meta-analysis. Therefore, the above factors are cofactors of NAION with similar mechanisms. Drugs used to treat cardiovascular diseases, such as antithrombotics, β-blockers, and statins, were also significantly associated with NAION. These drugs cannot be considered independent risk factors because they were applied only to treat the underlying diseases.
Although the above major factors had previously been reported to induce NAION, we summarized and reconfirmed these conclusions. In addition, we included hypercoagulability biomarkers and potentially risky genetic polymorphisms from the published literature. We first performed meta-analyses on these factors, demonstrating that elevated homocysteine, fibrinogen, lipoprotein(a), and factor V Leiden heterozygous were risk factors of NAION. Several diseases and biomarkers were identified in <3 studies, which are listed in Table 3. They might prove to be potential risk factors if more original studies were carried out. Specifically, we conducted subgroup analyses based on ethnicity for seven risk factors. The cases were divided into Asians, Europeans, and mixed ethnicities based on the original publications. Ethnicity did not affect the associations of smoking, hypertension, hyperlipidemia, and DM with NAION. Nevertheless, the incidence of NAION showed no gender disparity among Asians, and CHD and sleep apnea were not significant risk factors in Europeans. This subgroup analysis expanded the applicability of our results to different populations.
Diabetes and sleep apnea were proved to be important risk factors in the previously published meta-analyses (7,14). We re-conducted meta-analyses on both factors, including 11 new studies on diabetes and three on sleep apnea. Apart from verifying their conclusions with larger sample sizes, we also performed the subgroup analyses according to ethnicity as stated above. For DM, its influence on NAION was not related to ethnicity in our pooled results, similar to the conclusions of Chen et al. (14). Furthermore, our conclusions are more robust, since we included three additional cohort studies, which provide a higher level of evidence, whereas the studies included in the previous meta-analysis (14) were all case-control. Sleep apnea was not a significant risk factor of NAION in Europeans in our subgroup analysis, although a published study found that Europeans were more likely to have NAION (14). However, only three studies were included in this subgroup, and the result was marginal (P = 0.06). We await more valid original studies on different ethnicities so that a high-quality meta-analysis can be conducted.
We found no apparent association between the occurrence of NAION and 1-month use of PDE5-Is, which was published by us in 2018 (8). Although PDE5-Is mainly cause vasodilation and systematic hypotension (32), their influences on the NAION pathogenesis remain controversial. Because of the relatively low incidence of NAION and difficulties in diagnosis, it was hard to include adequate samples in the published literature, and confounders were not adjusted in several case-control studies. More clinical studies are necessarily needed to provide strong evidence on this point.
A crowded optic disc was often observed in NAION because hypoperfusion or ischemia of the optic nerve head is apparent in a tight optic disc structure (26). Optical coherence tomography (OCT) is a useful tool to measure the CDR, and several studies showed an association between the CDR and NAION. For example, González Martín-Moro et al. reported a smaller CDR to be a risk factor and a poor prognostic marker (50). Both pathogens disturb the function of endothelial cells, activate the secretion of inflammatory cytokines, and thus induce or promote atherosclerosis. In the cohort study by Chang et al., end-stage renal disease was proved to increase the risk of NAION (31), probably because these patients had received repeated treatments. For several other proposed factors (64), no consistent conclusions were reached due to the lack of well-planned studies with large sample sizes. All the above findings were based on a small number of studies, so studies with larger sample sizes are necessary to confirm them. Although our meta-analysis was conducted comprehensively and included multiple potential risk factors of NAION, it still has several limitations. First, all the included original articles were case-control or retrospective cohort studies; no RCTs or prospective cohort studies have been published yet. Recalled data might be incomplete and inaccurate, introducing bias into these studies. Second, most possible NAION factors have similar mechanisms and act as confounders. The independence of these factors could in principle be examined by RCTs; however, RCTs cannot be conducted to study risk factors. Third, the time span of the included studies extends from 1991 to 2019, during which changes in lifestyles and medical techniques may have influenced the spectrum of risk factors. Finally, some risk factors were examined in <3 studies, and it was hard to confirm their association with NAION. The above limitations partly reduce the quality of our meta-analysis and restrict the applicability of our results.
Consequently, our study concluded that the following risk factors were associated with NAION: male gender, hypertension, hyperlipidemia, DM, CHD, sleep apnea, medication history of cardiovascular drugs, and factor V Leiden heterozygous. Better understanding of these risk factors in NAION can direct future research and therapeutic approaches.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. | 2021-10-04T13:16:32.253Z | 2021-10-04T00:00:00.000 | {
"year": 2021,
"sha1": "891e3036fc7819370f1b4cfcf918b722c9b0d0c1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.618353/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "891e3036fc7819370f1b4cfcf918b722c9b0d0c1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
48358795 | pes2o/s2orc | v3-fos-license | Property income from-whom-to-whom matrices: A dataset based on financial assets–liabilities stocks of financial instrument for Spain
A common problem in compiling and updating Social Accounting Matrices (SAM) or Input-Output tables is that of incomplete information. In the case of the submatrix 'Property Income of the Account Allocation of Primary Income', the information published by the National Bureau of Statistics of Spain (INE) is limited because it is not possible to build the set of from-whom-to-whom sub-matrices on interest income, dividends, securities and rents with only the subtotals presented in the Integrated Economic Accounts (IEA). This is because the income distribution received and paid by each institutional sector, required for a financial SAM, is not available, i.e. the INE does not break down the data by institutional destination and source. In this sense, our contribution relies on estimating a complete series of from-whom-to-whom matrices of Property Income for the Spanish economy between 1999 and 2016, in which we have devoted special attention to staying in line with the Data Gaps Initiative (DGI-2) recommendations released by the Financial Stability Board (FSB) and the International Monetary Fund (IMF), which claim that more focus is needed on data sets that support the monitoring of risks in the financial sector in response to emerging regulatory and macro-financial policy needs.
Subject area: Economics
More specific subject area: Financial macroeconomics, Flow of Funds, System of National Accounts
Type of data
Value of the data
The dataset provides accurate estimates of the allocation of primary income account (property income) in a from-whom-to-whom matrix scheme. This representation allows the question of 'who' is financing 'whom' to be answered, enabling a more detailed and complex analysis of the financial flows between sectors and their role in the economy.
The novel approach of providing the stocks of assets and liabilities matrices in a from-whom-to-whom framework by financial instrument turns out to be very useful for analyzing the real-financial interconnectedness of the Spanish economy.
The dataset provides the necessary elements to estimate the breakdown of the total income return, resulting in an outstanding source of information for investment analysis and for impact analysis of public policies.
The set of submatrices results in a consistent accounting framework useful for improving and extending Social Accounting Matrices, sectoral-financial linkage analysis and macroeconomic forecasting, and for enriching the scope of real-financial computable general equilibrium (CGE) models.
Data
The real-side information concerning the allocation of primary income account (property income) was obtained from the statistics of the Integrated Economic Accounts (IEA) provided by the National Bureau of Statistics of Spain (INE). The financial-side information was retrieved from the financial statistics of the Flow of Funds (FoF) provided by the Bank of Spain (BdE). Both INE and BdE data sets correspond to the yearly series 1999 to 2016, owing to the constraints on using more recent official data sets to build a Property Income matrix for Spain. Given that both the real statistics of the INE and the financial statistics of the BdE shape the entire System of National Accounts (SNA93) [1], the estimation procedure proposed in this data research maintains and respects the statistical data provided by both agencies. In this sense, the statistical compilation procedures follow the UN Manual of SNA93 for the construction of the matrices of Property Income, while following the recommendations made by Shrestha et al. [2] to expand the statistical information within an integrated framework for financial stock positions and flows on a from-whom-to-whom basis, and the compilation guides suggested by Tsujimura and Mizoshita [3] and Jellema et al. [4] to integrate the financial accounting matrices used as baseline.
The statistical information from the INE is available in integrated, structured tables separated by year, while the database from the BdE is expressed as quarterly time series. Since the data from the BdE are quarterly, figures expressed as flows had to be treated differently from those expressed as balances: the former were summed to obtain the flows for the year, while the figures of the last quarter were taken as the closing balances of the respective year.
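The following pandas sketch illustrates this quarterly-to-annual treatment on invented figures; the column names and values are hypothetical and do not correspond to actual BdE series identifiers.

```python
# Sketch of the quarterly-to-annual treatment described above, using pandas.
# Flows (transactions) are summed over the four quarters of each year, while
# stocks (balances) take the value of the last quarter as the closing balance.

import pandas as pd

quarterly = pd.DataFrame(
    {"flow_loans": [10, 12, 9, 11, 8, 7, 10, 9],
     "stock_loans": [100, 105, 110, 115, 118, 120, 123, 125]},
    index=pd.period_range("2015Q1", periods=8, freq="Q"),
)

annual_flows = quarterly["flow_loans"].groupby(quarterly.index.year).sum()
annual_stocks = quarterly["stock_loans"].groupby(quarterly.index.year).last()

annual = pd.DataFrame({"flow_loans": annual_flows, "stock_loans": annual_stocks})
print(annual)   # 2015: flows 42, closing stock 115; 2016: flows 34, closing stock 125
```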
Experimental design, materials and methods
In the wake of the 2008 financial and economic crisis, the Group of Twenty economies (G-20) asked the Financial Stability Board (FSB) and the International Monetary Fund (IMF) "to explore gaps and provide appropriate proposals for strengthening data collection before the next meeting of the G-20 Finance Ministers and Central Bank Governors." In its Spring Meeting in April 2009, the FSB-IMF came up with 20 recommendations [5], known now as the Data Gaps Initiative (DGI-1), to address information gaps revealed by the global financial crisis. Recently, the FSB-IMF concluded the Second Phase of the G-20 Data Gaps Initiative (DGI-2) in September 2017 [6]. The DGI-2 recommendations maintain the continuity of DGI-1 but claim that more focus is needed on data sets that support the monitoring of risks in the financial sector and the analysis of the interlinkages across economic and financial systems. This data article focuses on two of these recommendations, both of which state that G-20 member economies should extend their national accounts by compiling financial and nonfinancial stocks and flows in the economic sector.
Integrated approach for property income and financial instruments on a from-whom-to-whom basis
The integrated system of sector accounts in a from-whom-to-whom (or debtor/creditor) framework corresponds to the matrix-form representation that allows the analysis of the financial connections among institutional sectors in a national economy and abroad. As has been pointed out by Shrestha et al. [2], the integrated from-whom-to-whom representation of statistical information allows answering questions like "Who is financing whom, in what amount, and with which type of financial instrument?". As regards property income, it also permits tracing who is paying/receiving income (e.g., interest) to/from whom. The from-whom-to-whom compilation approach also enhances the quality and consistency of data by providing more cross-checking and balancing opportunities.
The System of National Accounts 2008 (SNA2008) [7] presents from-whom-to-whom matrices as three dimensional tables where the flows from one sector to another sector for each type of financial instruments are showed. In this regard, to estimate the matrix Property Income, we based our approach on the definition provided by the SNA2008 manual, which states: "7.107 Property income accrues when the owners of financial assets and natural resources put them at the disposal of other institutional units. The income payable for the use of financial assets is called investment income while that payable for the use of a natural resource is called rent. Property income is the sum of investment income and rent.
7.108 Investment income is the income receivable by the owner of a financial asset in return for providing funds to another institutional unit…".
Hence, we can use the balance accounts of the financial account relating to financial assets and liabilities across the institutional sectors as estimators of the shares of income received and paid by each institutional sector. Intuition suggests that the property income received and/or paid by each institutional sector should be directly proportional to its levels of assets and/or liabilities. Thus, we used the balance accounts of the financial account relating to financial assets and liabilities across the institutional sectors as estimators of the shares of income received and paid by each of them.
Property income from-whom-to-whom matrix
The property income from-whom-to-whom matrix shows how income is received by the owner of a financial asset in return for providing funds to another institutional sector. The income payable for the use of financial assets is called investment income, while that payable for the use of a natural resource is called rent [7]. Formally, property income, as the total sum of investment income and rent, can be expressed in matrix form, expression (1), where PI m×m×p corresponds to the property income matrix in a double- and quadruple-entry matrix form, the subscript m denotes the institutional sectors in the economy, and the subscript p denotes the type of income transaction. In this data article, we consider m equal to 5 institutional sectors and the income categories p detailed in Table 1. From expression (1), summing over the counterpart sector index yields the total vector v_jp of property income paid by each institutional sector,

v_jp = Σ_i PI_ijp, (2)

and, similarly, the total vector u_ip of property income received by each institutional sector,

u_ip = Σ_j PI_ijp. (3)

In this sense, the integrated framework on a from-whom-to-whom scheme allows answering questions like "Who is paying/receiving income (e.g., interest) to/from whom, in what amount, and with which type of transaction?". Also, as has been pointed out by Shrestha et al. [2], this matrix representation approach enhances the quality and consistency of the data by providing more cross-checking and balancing requirements, given that the following condition should hold, where the total paid must be equal to the total received by the economy:

Σ_j v_jp = Σ_i u_ip for each payment type p. (4)

Table 1. Property income and type of payments. Source: System of National Accounts 2008 and own elaboration.

Code | Property income (description) | Variable | Type of payment (description)
D41 | Interest payable on loans and deposits; interest payable on debt securities. | Interest | Income receivable by the owners of certain kinds of financial assets put at the disposal of another institutional unit.
D42 | Distributed income of corporations; withdrawals from income of quasi-corporations; investment fund shareholders. | Dividends | Investment income to which shareholders become entitled as a result of placing funds at the disposal of corporations.
D43 | Reinvested earnings of foreign direct investment. | |
D44 | Income payable on pension entitlements, insurance policyholders. | Securities | Other investment income and rents.
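As a purely illustrative aid (not part of the dataset construction), the toy example below verifies these accounting identities on an invented 3 × 3 matrix for a single payment type, following the row = payer, column = receiver convention described above; the actual matrices use m = 5 institutional sectors.

```python
# Toy illustration of the from-whom-to-whom accounting identities: for a given
# payment type p, row/column sums give total property income paid and received
# by each sector, and total paid equals total received economy-wide.

import numpy as np

# PI[i, j] = income paid by sector i to sector j, for one payment type (invented)
PI = np.array([[0.0, 2.0, 1.0],
               [3.0, 0.0, 4.0],
               [1.5, 0.5, 0.0]])

paid_by_sector = PI.sum(axis=1)       # total paid by each sector
received_by_sector = PI.sum(axis=0)   # total received by each sector

# Consistency requirement: total paid = total received for the whole economy
assert np.isclose(paid_by_sector.sum(), received_by_sector.sum())
print(paid_by_sector, received_by_sector)
```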
Financial Instruments from-whom-to-whom matrices
Financial instruments include the full range of financial contracts made between institutional sectors. These contracts are the basis of creditor/debtor relationships through which asset owners acquire unconditional claims on economic resources of other institutional sectors [7]. In this sense, the financial instrument matrix, defined as A m×m×q , denotes a from-whom-to-whom representation of the net worth of this economy in terms of stocks, expression (5), where A m×m×q corresponds to the assets-liabilities matrix of stocks of financial instruments in a double- and quadruple-entry matrix form. The financial instrument from-whom-to-whom matrices in expression (5) comprise the financial acquisitions in both claims (described as assets) and obligations (described as liabilities) by institutional sector. As before, the subscript m denotes institutional sectors in the economy, and the subscript q denotes financial instruments. The availability of information provided by the Bank of Spain allows us to consider seven (q = 7) financial instruments: AF.1 Monetary gold and Special Drawing Rights, AF.2 Currency and deposits, AF.3 Debt securities, AF.4 Loans, AF.5 Equity and investment fund shares, AF.6 Insurance, pension, and standardized guarantee schemes, and AF.7/8 Other Assets (see Table 2 for more details).
GRAS estimation approach
Like Leung and Secrieru [8] and Aray, Pedauga and Velázquez [9], we estimated the property income matrix (PI m×m×p ) breakdown by institutional sector and type of transaction, defined in expression (1), by using the information embedded in the assets and liabilities of each institutional sector compiled in the financial instruments matrix (A m×m×q ) of expression (5). In this sense, let A and PI be, respectively, the observed (prior) and the estimated (target) matrices, with typical elements a ijq (each financial instrument) and x ijp (each property income component).
Under the GRAS algorithm [10], the prior matrix A is used as a baseline to estimate the target matrix PI, satisfying simultaneously the row sums v_jp defined in expression (2) and the column sums u_ip expressed in expression (3). Thus, the programming model for the information loss problem minimizes the objective function of Eq. (6) subject to

Σ_j x_ijp = u_ip for all i, and (7)
Σ_i x_ijp = v_jp for all j. (8)

Constraints (7) and (8) imply that the adjusted matrix PI should be consistent with exogenously specified row and column totals. Moreover, constraint (9) introduces parameters α_i and β_j which set all cells to 0 in a row i or a column j of matrix PI when the corresponding cell in u_ip or v_jp is 0. This new constraint is set to remove financial assets/liabilities in matrix A which do not produce payments in matrix PI.
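The following is a minimal illustrative sketch of the biproportional (RAS-type) scaling that underlies this balancing step, assuming a non-negative prior and invented numbers. The full GRAS algorithm additionally handles negative prior entries and the information-loss objective of Eq. (6), which are omitted here; note that zero cells in the prior stay zero, mimicking constraint (9).

```python
# Simplified biproportional (RAS-type) scaling sketch: starting from a
# non-negative prior A, rows and columns are rescaled iteratively until they
# match the target totals (paid v and received u). Illustration only.

import numpy as np

def ras(prior, row_totals, col_totals, iters=500, tol=1e-10):
    x = prior.astype(float).copy()
    for _ in range(iters):
        r = np.divide(row_totals, x.sum(axis=1),
                      out=np.zeros_like(row_totals, dtype=float),
                      where=x.sum(axis=1) > 0)
        x *= r[:, None]                      # scale rows to match row totals
        s = np.divide(col_totals, x.sum(axis=0),
                      out=np.zeros_like(col_totals, dtype=float),
                      where=x.sum(axis=0) > 0)
        x *= s[None, :]                      # scale columns to match column totals
        if (np.abs(x.sum(axis=1) - row_totals).max() < tol and
                np.abs(x.sum(axis=0) - col_totals).max() < tol):
            break
    return x

# Prior: asset/liability stocks; targets: property income paid (rows) and
# received (columns) by each sector. All numbers are invented.
A = np.array([[0.0, 4.0, 2.0], [5.0, 0.0, 3.0], [1.0, 2.0, 0.0]])
v = np.array([3.0, 4.0, 1.5])    # totals paid by each sector
u = np.array([2.5, 3.0, 3.0])    # totals received by each sector
print(ras(A, v, u).round(3))
```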
In this way, we are able to derive the breakdown of property income by type of financial transaction for the Spanish economy, devoting special attention to staying in line with DGI-2 recommendation II.8, referring to the compilation of sectoral account flows and balance sheet data based on from-whom-to-whom matrices expressed in transactions and stocks to support balance sheet analysis, and recommendation II.9, which encourages the development and dissemination of distributional information on income allocation.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Transparency document. Supporting information
Transparency data associated with this article can be found in the online version at https://doi.org/10.1016/j.dib.2018.05.018.

Table 2. Financial instruments and type of assets. Source: System of National Accounts 2008 and own elaboration.

Code | Financial instrument (description) | Variable | Type of asset/liability (description)
AF.1 | Gold and reserves; Special Drawing Rights. | Monetary gold and Special Drawing Rights | Titles held as reserve assets, comprising gold bullion and supplementary reserves, produced only by the IMF.
AF.2 | Currency: notes and coins issued or authorized by the central bank or government; saving deposits, fixed-term deposits and non-negotiable certificates of deposit. | Currency and deposits | Amount of money in national or foreign currency that economic agents own as assets and liabilities.
AF.3 | | Debt securities | Negotiable instruments that can serve as evidence of a debt.
AF.4 | Short-term and long-term loans. | Loans | Financial assets created when a creditor lends funds directly to a debtor, evidenced by a document and not negotiable.
AF.5 | Listed and investment fund shares; unlisted and other equity shares. | Equity and investment fund shares | Assets with the particular feature that their holders obtain a residual claim on the institutional unit that issued the instrument.
AF.6 | Life and non-life insurance. | Insurance, pensions and standardized guarantees | All function as a form of redistribution of income or wealth mediated by financial institutions.
AF.7/8 | Trade credits, other accounts receivable and advances. | Other assets | All other kinds of financial assets linked to a specific financial instrument or destined to goods and services.
"year": 2018,
"sha1": "306d92189b3f70c1c49ee1dcbded8edb18b2e212",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2018.05.018",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f0c17a7d83f0f1dc6ecd47c51474ba641e4a01e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
6071102 | pes2o/s2orc | v3-fos-license | The calcium-sensing receptor suppresses epithelial-to-mesenchymal transition and stem cell- like phenotype in the colon
Background The calcium-sensing receptor (CaSR), a calcium-binding G protein-coupled receptor, is also expressed in tissues not directly involved in calcium homeostasis, such as the colon. We have previously reported that CaSR expression is down-regulated in colorectal cancer (CRC) and that loss of CaSR provides a growth advantage to transformed cells. However, detailed mechanisms underlying these processes are largely unknown. Methods and results In a cohort of 111 CRC patients, we found a significant inverse correlation between CaSR expression and markers of epithelial-to-mesenchymal transition (EMT), a process involved in tumor development in CRC. The colons of CaSR/PTH double-knockout mice, as well as of intestine-specific CaSR knockout mice, showed significantly increased expression of markers involved in the EMT process. In vitro, stable expression of the CaSR (HT29CaSR) gave a more epithelial-like morphology to HT29 colon cancer cells with increased levels of E-Cadherin compared with control cells (HT29EMP). The HT29CaSR cells had reduced invasive potential, which was attributed to the inhibition of the Wnt/β-catenin pathway as measured by a decrease in nuclear translocation of β-catenin and transcriptional regulation of genes like GSK-3β and Cyclin D1. Expression of a spectrum of different mesenchymal markers was significantly down-regulated in HT29CaSR cells. The CaSR was able to block upregulation of mesenchymal markers even in an EMT-inducing environment. Moreover, overexpression of the CaSR led to down-regulation of the stem cell-like phenotype. Conclusions The results from this study demonstrate that the CaSR inhibits epithelial-to-mesenchymal transition and the acquisition of a stem cell-like phenotype in the colon of mice lacking the CaSR as well as in colorectal cancer cells, identifying the CaSR as a key molecule in preventing tumor progression. Our results support the rationale to develop new strategies either preventing CaSR loss or reversing its silencing. Electronic supplementary material The online version of this article (doi:10.1186/s12943-015-0330-4) contains supplementary material, which is available to authorized users.
Background
Colorectal cancer (CRC) is the second most frequently diagnosed malignant tumor in females, third most in males, and ranks second in cancer-related deaths worldwide [1,2]. Chronic inflammation is one of the major risk factors for developing colorectal tumors [3]. Patients with inflammatory bowel disease have an increased risk of developing cancer [4,5]. Although therapies for managing locally advanced or metastatic CRC have evolved, the 5-year survival rate remains poor (www.cancer.org).
Epithelial-to-Mesenchymal Transition (EMT) is a reversible, cellular trans-differentiation process by virtue of which epithelial cells acquire mesenchymal traits of plasticity, mobility, stem cell-like and invasive properties [6]. The EMT process, which is normally active during embryogenesis and wound healing (Type 1 EMT), can be activated also during tissue regeneration and fibrosis upon initiation by inflammation (Type 2 EMT) [7]. Type 3 EMT occurs during tumor progression when cancer cells exploit this mechanism to gain invasive and metastatic potential [7,8]. The reversal of this process, called Mesenchymal-to-Epithelial Transition (MET), is implicated in later stages of metastasis by which metastatic cancer cells reacquire epithelial characteristics in order to form secondary tumors [9].
Activation of the EMT program orchestrates complex transformations in cellular architecture and behavior. A spectrum of EMT regulators, the majority of which belong to the zinc-finger family of transcription factors (including, but not restricted to, Snai1 (Snail), Snai2 (Slug), Twist, and Zeb), can bind to the promoter of E-Cadherin and repress its transcription [6,10]. Downregulation of E-Cadherin results in loss of the epithelial phenotype (through deregulated inter-cellular junction complexes), accompanied by the induction of genes specific for the mesenchymal phenotype (e.g. αSMA, FSP1, FOXC2 and Vimentin). Furthermore, reduction of E-Cadherin-dependent sequestering of cytoplasmic β-catenin results in free β-catenin that is able to translocate to the nucleus and activate the Wnt/β-catenin signaling pathway [11]. These cells also undergo a 'cadherin switch' leading to a shift from E-Cadherin to N-Cadherin expression, which is operated by the aforementioned epithelial transcriptional repressors [10,12].
In solid tumors, including CRC, recent reports have shown that a small sub-population of cells within the heterogeneous tumor acquires features of stem cells [13,14]. These cancer stem cells have the ability of self-renewal and usually increase in number following conventional therapy, as these treatments target only the rapidly dividing cells.
The calcium-sensing receptor (CaSR) is a G protein-coupled receptor that regulates systemic calcium homeostasis. In the colon, the CaSR has been shown to play important roles in nutrient sensing and fluid secretion/absorption [15]. CaSR expression is significantly reduced in colorectal tumors [16,17]. Since the chemopreventive properties of Ca 2+ are partially mediated by the CaSR [18,19], these effects may be limited in colonic tumors lacking the CaSR. Indeed, the colons of mice lacking the CaSR exhibit aberrant crypt foci, the earliest identifiable pre-neoplastic lesions [20], as well as enhanced intestinal inflammation [20,21], suggesting that the CaSR is important for maintaining normal colonic epithelium.
In this study we aimed to decipher the mechanisms that contribute to the tumor suppressive functions of the CaSR during colorectal tumorigenesis. We demonstrate that the CaSR suppresses EMT and the stem cell-like phenotype both, in vivo in the colon of mice and in vitro, in colon cancer cells, providing rationale for developing pharmacological agents to modulate CaSR sensitivity in colorectal cancer to prevent tumor progression.
Ablation of CaSR leads to induction of EMT-associated markers in the colon of global CaSR/PTH double-knockout mice
We investigated the consequences of CaSR ablation on markers of EMT in the colon of two murine models of CaSR gene knockout.
In the first model, global ablation of exon-5 of the CaSR on a PTH-null background (CaSR −/− /PTH −/− , DKO) led to a significant upregulation in mRNA expression of the mesenchymal markers αSma, Fsp1, Snai2, Twist2, Vimentin and Zeb1 in the colon compared with control mice (CaSR +/+ /PTH −/− ) ( Figure 1). Expression of the pluripotency markers, Nanog and Stella, was also significantly elevated in the colon of DKO mice compared with controls ( Figure 1).
Ablation of CaSR leads to induction of EMT-associated markers in the colon of intestine-specific CaSR knockout mice
We further evaluated protein expression of EMT markers in the colon of mice lacking exon-7 of the CaSR specifically in the intestinal epithelium under the control of the Villin promoter (CaSR int-KO ). Colonocytes lacking the CaSR in the intestinal epithelium showed a clear upregulation in expression of N-Cadherin (in the epithelial cells of the crypt) and of αSma and Vimentin (in the stromal region) compared with wild-type mice (CaSR WT , Figure 2). Although N-Cadherin expression was upregulated in CaSR int-KO mice, expression of E-Cadherin was unaltered compared with CaSR WT mice (data not shown). Interestingly, we also saw populations of Snai1- and Snai2-expressing cells limited to the stroma only in the CaSR int-KO mice (Figure 2).
Overexpression of the CaSR enhances the epithelial phenotype of HT29 colon cancer cells
Although of epithelial origin, HT29 colon cancer cells grow as spindle-shaped, elongated cells that contact neighboring cells only focally when cultured under their standard growth conditions. While HT29 EMP cells retained this mesenchymal-like phenotype, stable transfection with the CaSR promoted a morphological change in these cells to a more cobblestone-like, well adherent phenotype that displayed complete cell-cell adhesion, with the cells growing in densely packed colonies (Figure 3A). Furthermore, mRNA expression of the epithelial marker E-Cadherin was significantly upregulated in HT29 CaSR compared with HT29 EMP cells (Figure 3B).
Overexpression of the CaSR impairs migration and invasion of colon cancer cells
Since tumor cell spheroids are considered more representative of in vivo conditions, we evaluated the role of the CaSR in regulating migration and invasion of CRC cells in a 3D spheroid cell invasion assay. After spheroid formation for 7 days, the migration and invasion potential of 3D cellular aggregates into the surrounding matrix was evaluated.
HT29 CaSR cells had a significantly lower invasive index (area of the invading spheroids) compared with cells transfected with the empty vector (Figure 3C). To distinguish between effects on migration and invasion, we additionally quantified the number of daughter spheroids that had migrated away from the primary spheroid. Overexpression of the CaSR significantly reduced the number of invading daughter spheroids compared with control cells (Figure 3D).
Overexpression of the CaSR attenuates nuclear translocation of β-catenin in HT29 colon cancer cells
Previous studies have shown that loss of CaSR promotes migration and invasion of CRC cells by regulating the Wnt/β-catenin pathway [20,22,23]. Since ectopic CaSR enhanced the epithelial phenotype whilst inhibiting the invasiveness of HT29 cells, we examined whether restoration of CaSR expression was indeed able to regulate Wnt/β-catenin activity. We measured β-catenin expression in protein lysates from nuclear and cytosolic fractions of HT29 EMP and HT29 CaSR cells. Cells overexpressing the CaSR had a marked decrease in the amount of nuclear β-catenin (Figure 4A). The ratio of nuclear to cytosolic β-catenin in HT29 CaSR cells was significantly decreased, by 43%, compared with HT29 EMP cells (Figure 4B). Concomitantly, we found significantly higher GSK-3β mRNA expression in these cells (Figure 4C).
We showed that overexpression of the CaSR increased expression of the differentiation markers CDX2 and Villin (Figure 4D and E) and downregulated expression of the proliferation marker Cyclin D1 (Figure 4F).
CaSR suppresses EMT in HT29 colon cancer cells
NPS R-568, a positive allosteric modulator of the CaSR, increases the sensitivity of the receptor to its ligands, including Ca2+ [24]. Interestingly, treatment with NPS R-568 upregulated the endogenous expression of the CaSR in HT29 EMP cells (Figure 5A). Both the ectopic (HT29 CaSR ) and the endogenous CaSR (HT29 EMP treated with NPS R-568) were able to induce expression of E-Cadherin (distinctly at the cell membrane) (Figure 5B) and to down-regulate the expression of mesenchymal markers such as αSMA and Vimentin (Figure 5C and D).
We next evaluated whether the presence of the CaSR would further prevent induction of EMT in HT29 cells. Stably transfected HT29 cells were treated with a commercially available EMT-inducing cocktail. Upon treatment, HT29 EMP cells were robustly induced towards the mesenchymal phenotype, as assessed by significant upregulation in mRNA expression of the mesenchymal markers αSMA, FOXC2, SNAI1, TWIST2, Vimentin and Zeb1 (Figure 6). Interestingly, in HT29 CaSR cells, ectopic reintroduction of the CaSR was able to block EMT induction (Figure 6).
These results were confirmed at the protein level by immunofluorescence staining (Figure 7). Treatment with the EMT-promoting cocktail induced protein expression of the mesenchymal markers αSMA and Vimentin only in HT29 EMP cells, and this induction was blocked by ectopic expression of the CaSR. In HT29 CaSR cells, the upregulated E-Cadherin expression was reduced upon treatment with the EMT-promoting cocktail but remained higher than in HT29 EMP cells (Figure 7).
CaSR suppresses stem cell-like phenotype in HT29 colon cancer cells
Since HT29 cells display expression of pluripotency-related genes, we evaluated whether expression of the CaSR could decrease the cancer stem cell-like properties of these cells. We cultured the stably transfected HT29 CaSR and HT29 EMP cells in a commercially available stem cell medium and assessed expression of the pluripotency-related genes Nanog, Oct3/4, Stella and FOXC2. mRNA expression levels of these markers were significantly lower in HT29 CaSR cells compared with HT29 EMP cells (Figure 8A).
Increasing CaSR levels, either by transfection (HT29 CaSR ) or by treatment with NPS R-568, reduced the stem-like phenotype of these cells by downregulating expression of the pluripotency-associated genes SOX2, Nanog and Oct4, and of the colon cancer stem cell marker CD44 (Figure 8B).
Functionally, cancer stem cells are defined by their ability to self-renew and to form spheres in vitro under extreme limiting dilutions [25]. Therefore, we examined whether the presence of the CaSR could block the ability of the cells to form colonospheres (in vitro spheroidal aggregates). When performing a limiting-dilution assay in hanging drop cultures, we found a significant difference in the spheroid-forming ability of HT29 CaSR and HT29 EMP cells. While HT29 EMP cells needed 5 cells/drop to form spheroids in 50% of the drops (Figure 8C, open squares), HT29 CaSR cells required 8-fold more cells (40 cells/drop) to form spheroids in 50% of drops (Figure 8C, filled circles). Extreme Limiting Dilution Analysis (ELDA) revealed a 4-fold lower sphere-forming frequency (p < 0.001) in HT29 CaSR cells (frequency: 1/60) compared with the vector-transfected control HT29 EMP cells (frequency: 1/15), indicating that the enrichment in these stem cell-like cells was inversely proportional to CaSR expression.
CaSR expression positively correlates with the epithelial marker E-Cadherin, and negatively with markers of the mesenchymal lineage, in human CRC samples
We evaluated in silico the correlation between CaSR expression and a signature of EMT markers using data available from the GEO database. The study by Ryan and colleagues [26] deposited microarray data from 111 tumor and adjacent mucosa samples from CRC patients.
Discussion
The calcium-sensing receptor (CaSR) has gained importance outside its physiological role as a regulator of calcium homeostasis. Several model systems have provided convincing evidence for the role of the CaSR as a tumor suppressor in the colon [18]. However, evidence describing the molecular mechanisms behind the tumor suppressive functions of the colonic CaSR is limited. In this study we demonstrate a causal relation between the CaSR and the regulation of Epithelial-to-Mesenchymal Transition (EMT), as well as the acquisition of stem cell-like properties, in vitro and in vivo. We show that the CaSR is able to inhibit the transition of colonic epithelial cells into the mesenchymal phenotype and to prevent acquisition of the stem cell-like phenotype. All preliminary experiments were conducted in two colon cancer cell lines, HT29 and Caco2-15, stably overexpressing the CaSR (or carrying the empty control vector). The results obtained from both cell lines were comparable; we therefore focused on the HT29 cell line, which has negligible endogenous expression of the CaSR.
In light of recent advances in developmental biology and cancer biology, EMT has emerged as a critical process in the pathophysiology of inflammation and cancer [7,27,28]. Chronic inflammation in patients with inflammatory bowel disease (IBD) increases the risk of CRC development. Chemically induced, as well as genetically engineered, mouse models of IBD have an increased susceptibility to develop colorectal tumors [28]. In this study, we evaluated the expression of EMT-associated markers in the colon of the exon 5-less global CaSR/PTH double-knockout mouse as well as in the colon of the exon 7-less intestine-specific CaSR knockout mice. The colons of both the global CaSR/PTH double-knockout and the intestine-specific CaSR knockout mice show signs of inflammation [20,21] and immune cell activation [21], as seen also in patients with IBD [28]. We demonstrate for the first time in vivo that, in both models, expression of EMT-associated mesenchymal markers was significantly upregulated in the colon of mice lacking the CaSR, suggesting a critical role of the receptor in suppressing EMT.
Type 2 EMT is associated with tissue regeneration and fibrosis after inflammation-associated injury. As a result, macrophages and activated fibroblasts (myofibroblasts) expressing high levels of mesenchymal markers like FSP1 and αSMA accumulate [7,29]. In some cases, epithelial cells retain a normal epithelial morphology, expressing epithelial markers like E-Cadherin, but expression of mesenchymal markers is also induced [29] and the cells develop an intermediate EMT phenotype. The extent of the EMT process depends on the intensity and duration of the inflammatory process, as has been reported in organs like the kidney, lung and intestine [7]. Eventually, these cells acquire mesenchymal characteristics, lose cell-cell contact, migrate out of the epithelial layer and enter the interstitium, where they forgo their epithelial phenotype and attain a mesenchymal phenotype [30]. It is therefore not surprising that we observe enrichment in populations of stromal cells staining positive for mesenchymal markers in the colon of the CaSR int-KO mice. In the colon of these mice, the cadherin switch leads to a significant upregulation of N-Cadherin expression without significant alterations in E-Cadherin levels. Such a cadherin switch has been previously reported in other cancers [12,31].
The enhanced inflammatory/immune cell microenvironment can eventually lead to cancer through the inflammation-dysplasia-carcinoma sequence. Extracellular calcium is known to have tumor preventing effects in colorectal cancer [32] and these effects are mediated by the CaSR [18,22,23]. CaSR-null cells, which represent a subpopulation of colon cancer cells, indeed show an enhanced malignant phenotype with increased migration potential and enhanced expression of EMT markers [33].
Several colon cancer cell lines of epithelial origin, including the HT29 cells used in this study, are reprogrammed to acquire an EMT-like phenotype (mesenchymal-like morphology, loss of cell-cell adherence and a highly metastatic and invasive phenotype). These cell lines have low endogenous CaSR expression compared with more differentiated colon cancer cells having a more epithelial phenotype (e.g. Caco-2 cells) [34]. In normally functioning colon cells, β-catenin (a member of the canonical Wnt pathway) binds to E-cadherin and is involved in cell-cell adhesion. Free cytosolic β-catenin is marked for degradation in the proteasome by the APC/Axin/GSK-3β destruction complex. However, when the Wnt pathway is activated, cytosolic β-catenin is translocated to the nucleus where it binds to the Transcription Factor (Tcf)-4 and regulates transcription of genes involved in proliferation, differentiation and apoptosis [11,35-39]. Active Wnt signaling can prevent GSK-3β induced degradation of Snail, and thereby promote EMT [40].
In the present study we show that reintroduction of the CaSR induced a transformation in the morphological architecture of HT29 cells from a more spindle-like, fibroblast-like phenotype to a well adherent, epithelial-like phenotype. These cells also showed reduced cellular invasiveness and migration, two critical steps for initiation of metastasis. Furthermore, HT29 cells overexpressing the CaSR had increased GSK-3β and E-Cadherin expression, in parallel with reduced nuclear β-catenin and Cyclin D1 levels. These data support previous findings that the CaSR suppresses the malignant behavior of CRC cells by modulating the Wnt signaling pathway [20], which is often deregulated in CRC. Rey and colleagues have previously shown the CaSR-dependent regulation of Wnt/β-catenin pathway in a normal colonic epithelial cell line with a functional APC gene [41]. It is interesting that although HT29 cells harbor a mutation in the APC gene, overexpression of the CaSR is able to counteract defective β-catenin activity in these cells.
Singh and colleagues have recently observed an EMT-driven, highly malignant phenotype in CaSR-null cells [33]. However, it remained elusive to what extent the CaSR is able to regulate or interfere with the EMT process. By treating stably transfected HT29 cells with an EMT-inducing supplement, we are the first to report that the CaSR is effective in reversing the EMT phenotype and that the presence of the CaSR effectively blocks the transition from an epithelial to a mesenchymal state even under EMT-promoting conditions. We further show that treatment with the calcimimetic NPS R-568, previously described as a pharmacochaperone of the CaSR [42], was able to induce endogenous CaSR expression. Both ectopic overexpression and enhancement of endogenous CaSR expression and activity (treatment with NPS R-568) were able to reverse the EMT phenotype by upregulating expression of E-Cadherin, which becomes localized to the cell membrane, and by downregulating expression of EMT-associated mesenchymal markers.
In several cancers, acquisition of EMT characteristics leads to a parallel increase in pluripotency-associated markers like Nanog, Stella and Oct3/4 [43,44]. In solid tumors (like the colon), these subpopulations of cells, which express cell surface markers like CD44, are termed cancer stem-like cells. These cancer stem cells, although controversial, have attained a lot of research interest recently for their contribution to recurrence/relapse of chemoresistant tumors [45]. HT29 cells express cancer stem cell markers abundantly. Since reintroduction of the CaSR induces a more differentiated phenotype, we predicted a reduction in their stem-like phenotype as well. Culturing colonospheres (in vitro spheroids) of stably transfected HT29 EMP and HT29 CaSR cells in chemically defined media supporting growth of stem cells [46], we were able to show that the CaSR was not only able to downregulate expression of markers associated with stemness, but was also able to reduce the ability to form tumor-associated spheroids.
Conclusions
Our data demonstrate that reintroduction of the CaSR prevents the development of a highly malignant, stem cell-like phenotype in colon cancer cells. The mechanisms involved in this process advance our understanding of the molecular changes that accompany the loss of CaSR in colon cancer cells. We show that the CaSR is necessary for calcium-mediated growth control in the colon. These data support the rationale to develop pharmaceutical agents that restore the expression and function of the colonic CaSR during colonic inflammation and cancer.
Methods
CaSR/PTH double-knockout mouse (CaSR −/− /PTH −/− )
Mice heterozygous for CaSR ΔExon5 and PTH were bred to generate CaSR +/+ /PTH −/− and CaSR −/− /PTH −/− mice as previously described [47]. All mice were maintained under standard conditions as approved by the Institutional Animal Care and Use Committee at Harvard Medical School. Age- and sex-matched animals (n = 9/genotype) were sacrificed; colons were washed in ice-cold PBS and stored in RNAlater (Life Technologies, Austria). For mRNA analysis, 1-2 cm of colonic tissue, 0.5 cm distal from cecum, was used.
Intestine specific CaSR knockout mouse (CaSR int-KO )
CaSR flox/flox mice [48] were bred with mice genetically engineered to express Cre-recombinase under the control of the villin 1 promoter to produce vil Cre/CaSR flox/flox and CaSR flox/flox mice, as previously described [41]. All mice were maintained under standard conditions as approved by the Animal Care Subcommittee at San Francisco Department of Veterans Affairs Medical Center. Age- and sex-matched animals (n = 5/genotype) were sacrificed, whole colons were washed in ice-cold PBS, fixed and embedded in paraffin until further analysis.
Cell culture, cloning and stable transfection
The human colon cancer cell line HT29 was obtained from American Type Culture Collection (ATCC, USA) and was routinely maintained in Dulbecco's Modified Eagle Medium (DMEM) containing 10% FBS, 1.8 mM Ca2+, 2 mM L-glutamine, 100 U/ml penicillin, and 100 μg/ml streptomycin (all from Life Technologies) in a 5% CO2/humidified air incubator maintained at 37°C. Cells were periodically tested for mycoplasma contamination and authenticated by STR DNA profiling (DNA Diagnostic Center, UK).
HT29 cells were transfected with pcDNA3.1/Zeo (+) (EMP) or an expression vector encoding the full length CaSR cDNA (constructs kindly provided by Prof. Romuald Mentaverri, University of Picardie Jules Verne, France) using Lipofectamine LTX reagent (Life Technologies) as previously described [18]. Stable transfectants were selected by culturing the cells in the presence of Zeocin (150 μg/ml) for over 6 months.
Invasion assay
In vitro cellular invasion was determined using the Cultrex ® 3D Spheroid Cell Invasion Assay (Trevigen, USA) according to the manufacturer's instructions. Briefly, cells were seeded at a density of 3000 cells/cm 2 (in triplicates) in 96-well spheroid forming plates along with an extracellular matrix to drive aggregation of spheroids. After 72 hours, the spheroids were allowed to invade into a matrix composed of basement membrane proteins. Cellular invasion was visualized with a bright field microscope and quantified using Adobe Illustrator (Adobe Systems, USA) to calculate the area of the invading spheroids (invasive index) and the number of invading daughter spheroids. The assay was performed in three independent experiments.
Nuclear and cytosolic protein extraction and western blot analysis
In order to retrieve nuclear and cytosolic protein fractions, cells were cultured to 60-70% confluency, collected in ice-cold PBS and centrifuged. The pellet was first resuspended for 15 minutes in a hypotonic buffer (10 mM Hepes pH 7.5, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, DTT, PMSF, protease and phosphatase inhibitors and 10% NP40; all from Sigma Aldrich, Germany) and centrifuged to obtain the soluble cytosolic fraction in the supernatant. Next, the pellet was resuspended in a high salt buffer (20 mM Hepes pH 7.5, 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA, DTT, PMSF, protease and phosphatase inhibitors) for 15 minutes to obtain the soluble nuclear fraction by centrifugation. Protein concentration was measured using Protein Assay Dye (Bio-Rad, USA).
Equal amounts of protein lysate were separated using sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to a nitrocellulose membrane. Membranes were blocked in 5% milk (in 10 mM Tris, pH 7.5, 150 mM NaCl, 0.1% Tween-20 (TBST)) for 1 h at room temperature (rt) and subsequently incubated with the primary antibodies (in TBST) for 1 h at rt. After washing, the membrane was incubated with respective secondary antibody for 1 h at rt. The membrane was subjected to ECL reagent (Bio-Rad), and protein bands were detected using a digital imaging system (VersaDoc) and quantified using Image Lab software (Bio-Rad).
Induction of EMT
Stably transfected cells were treated with a commercially available EMT-inducing cocktail for HT29 cells according to the manufacturer's instructions (R&D Systems, USA) [49]. Cells were cultured in DMEM media containing 5% FCS and 1.8 mM Ca 2+ in the presence or absence of 1X EMT inducing supplement for 6 days and data analyzed for mRNA/protein expression.
Induction of stem cell-like phenotype
Stably transfected cells were cultured in the commercially available Essential 8 stem-cell media [46] according to the manufacturer's instructions (Life Technologies). Cells were cultured for 6 days and analyzed for mRNA/protein expression and colonosphere-forming ability. For protein expression studies, stably transfected cells were also treated with 1 μM NPS R-568 (in DMSO). Vehicle-treated cells were used as controls.
For the colonosphere-forming assay we modified the extreme limiting dilution analysis (ELDA) by Yu et al. [50] originally described by Hu and Smyth [51]. Cells were trypsinized to obtain single-cell suspension and were then seeded at the concentrations of 100, 40, 20, 10, 5, and 1 cell(s) per 50 μl Essential 8 media (6 replicates for each dilution/run) as hanging drop cultures. One week post seeding, the percentage of wells positive for formation of colonospheres was determined and plotted against the number of cells seeded per drop. Sphere forming frequency was determined using the ELDA analysis tool at http://bioinf.wehi.edu.au/software/elda.
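To make the sphere-forming frequency concrete, below is a minimal Python sketch of the single-hit Poisson model that underlies limiting dilution analysis, where the fraction of negative drops is modeled as exp(-f * cells per drop). This is not the ELDA tool cited above (which fits a complementary log-log model by maximum likelihood); the simple least-squares fit, the function name sphere_forming_frequency, and the positive-drop counts are illustrative assumptions only.

import numpy as np

def sphere_forming_frequency(cells_per_drop, n_drops, n_positive):
    """Estimate sphere-initiating cell frequency under the single-hit Poisson
    model used by limiting dilution analysis: P(no sphere) = exp(-f * cells).
    Fits ln(fraction of negative drops) = -f * cells by least squares through
    the origin (a simplified stand-in for the maximum-likelihood ELDA fit)."""
    cells = np.asarray(cells_per_drop, dtype=float)
    neg_frac = 1.0 - np.asarray(n_positive) / np.asarray(n_drops)
    neg_frac = np.clip(neg_frac, 1e-6, 1.0)  # guard against log(0) if every drop formed a sphere
    f = -np.sum(cells * np.log(neg_frac)) / np.sum(cells ** 2)
    return f  # sphere-initiating cells per cell seeded

# Hypothetical counts of sphere-positive drops (out of 6 replicates per dilution):
doses = [100, 40, 20, 10, 5, 1]
positive = [5, 4, 3, 2, 1, 0]
f = sphere_forming_frequency(doses, [6] * 6, positive)
print(f"~1 sphere-initiating cell per {1 / f:.0f} cells seeded")  # ~50 with these numbers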
RNA isolation, reverse transcription and quantitative RT-PCR
Total RNA was isolated using Trizol reagent (Life Technologies) according to the manufacturer's instructions. Integrity of RNA was checked by agarose gel electrophoresis, and RNA was reverse transcribed as previously described [52]. qRT-PCR was performed on the Step One Plus qRT-PCR system using Power SYBR Green master mix (Life Technologies). Where possible, primers were designed to bridge an exon-exon junction to prevent genomic DNA from being amplified. The ΔΔCt method was used to calculate fold changes in gene expression, relative to housekeeping genes and normalized to a commercially available total RNA calibrator (Clontech, USA), according to Livak et al. [53].
Human Beta-actin (hβ-ACTIN), human Large ribosomal protein (hRPLPO) and/or human Beta-2-microglobulin (hβ2M) were used as housekeeping genes for samples of human origin; mouse β-actin and mouse Eukaryotic translation elongation factor 1 beta 2 (mEef1B2) were used as housekeeping genes for mouse colon samples. Primer sequences are shown in Additional file 1: Table S1.
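As a worked illustration of the ΔΔCt calculation referred to above, the short Python sketch below computes a relative fold change as 2^-ΔΔCt. The Ct values and the helper name fold_change_ddct are hypothetical and are not taken from the study.

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the 2^-ddCt method: dCt = Ct(target) - Ct(housekeeping);
    ddCt = dCt(sample) - dCt(calibrator); fold change = 2**(-ddCt)."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** (-(dct_sample - dct_calibrator))

# Illustrative Ct values only: target gene 24.1 vs. beta-actin 18.0 in the sample,
# and 26.3 vs. 18.2 in the commercial calibrator RNA.
print(fold_change_ddct(24.1, 18.0, 26.3, 18.2))  # 4.0-fold higher than the calibrator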
Immunostaining
Cells grown on glass cover slips under experimental conditions as well as paraffin-embedded 5-μm tissue sections were stained as previously described [17]. Samples were incubated with primary antibodies for 1 h at rt. Isotype-specific IgG antibodies were used as negative controls. Samples were subsequently incubated with corresponding secondary antibodies for a further 1 h at rt. Nuclei were stained with DAPI (1:3000, Roche, Switzerland) and images acquired with the TissueFAXS system (TissueGnostics, Austria).
Acquisition and processing of public microarray data
Raw microarray data were obtained from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo) database. The dataset GSE44861, deposited by Ryan and colleagues, contains log2-transformed expression data from 111 tumor and adjacent non-tumor samples from CRC patients [26] and was used to compute the correlation between the CaSR and the EMT markers studied.
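The correlation analysis could in principle be reproduced along the lines of the Python sketch below. It assumes the GSE44861 series matrix has already been parsed into a genes-by-samples data frame; the gene symbols used for indexing are placeholders rather than the actual probe identifiers used in the study.

import pandas as pd
from scipy.stats import spearmanr

def casr_marker_correlations(expr: pd.DataFrame,
                             markers=("CDH1", "VIM", "ZEB1", "SNAI2")):
    """expr: log2 expression matrix with gene symbols as rows and the 111
    tumor/mucosa samples as columns. Returns Spearman's rho and p value for
    the correlation of each marker with CASR across samples."""
    casr = expr.loc["CASR"]
    rows = []
    for gene in markers:
        rho, p = spearmanr(casr, expr.loc[gene])
        rows.append({"marker": gene, "spearman_rho": rho, "p_value": p})
    return pd.DataFrame(rows)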
Statistical analysis
All assays were performed in at least three independent experiments. For comparisons between two groups, a t-test was used. For comparisons among more than two groups, Analysis of Variance (ANOVA) was performed, followed by Tukey's post-test. Non-normally distributed data were log-transformed to achieve normal distribution. Correlation coefficients were calculated using the nonparametric Spearman's correlation. P values <0.05 were considered statistically significant. SPSS (IBM, USA) was used to perform all statistical calculations and graphs were plotted using GraphPad Prism (GraphPad Software Inc., USA).
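For readers who want the test choices spelled out, the following Python sketch mirrors the described workflow (t-test for two groups, one-way ANOVA with Tukey's post-test for more than two, log-transformation of non-normally distributed values) rather than using SPSS/Prism; the group values are invented for illustration.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical relative-expression values (e.g. 2^-ddCt) for three groups.
groups = {
    "HT29_EMP":      np.array([1.00, 1.20, 0.90]),
    "HT29_CaSR":     np.array([0.40, 0.50, 0.30]),
    "HT29_CaSR_NPS": np.array([0.20, 0.30, 0.25]),
}

# Two-group comparison: t test on log-transformed values.
t, p = stats.ttest_ind(np.log(groups["HT29_EMP"]), np.log(groups["HT29_CaSR"]))
print(f"t test p = {p:.4f}")

# Multi-group comparison: one-way ANOVA followed by Tukey's post-test.
f, p_anova = stats.f_oneway(*(np.log(v) for v in groups.values()))
values = np.concatenate([np.log(v) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))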
Additional file
Additional file 1: Table S1. Details of primers used in the study. | 2017-06-26T03:55:07.541Z | 2015-03-18T00:00:00.000 | {
"year": 2015,
"sha1": "598cada56288eab0aedf730f5595859ddc83f674",
"oa_license": "CCBY",
"oa_url": "https://molecular-cancer.biomedcentral.com/track/pdf/10.1186/s12943-015-0330-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b7cd231ce97a94c0904765857158e384ce36ef1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55883864 | pes2o/s2orc | v3-fos-license | Experimental investigation on streamlines in a 180° sharp bend
One of the most important concerns of hydraulics engineers is predicting erosion at the outer banks of rivers by studying the flow pattern along the bend. Not only are the streamlines in meanders non-parallel curved lines, but they are also twisted. To study the streamlines along a sharp bend, a 180° sharp bend was constructed at Persian Gulf University in Iran. Three-dimensional flow components at different locations in the bend were measured using a Vectrino velocimeter. In this paper, streamlines were drawn and investigated in different longitudinal profiles, cross sections, and plan views. The results indicated that the secondary flow strength and the size of the vortex increase over the distance from the bend entrance to the bend apex. The core of the central vortex moved away from the inner bank towards the channel centerline by 22%, and towards the water surface by 20%. In contrast, the size of the secondary vortex increased by 15%. In addition, the average horizontal angle of the streamlines, the velocity vectors, and the locus of maximum velocity were determined at different levels in the present investigation.
Introduction
The secondary flow is formed as a result of centrifugal force and its interaction with lateral pressure gradients due to the lateral slope of the water surface. In this flow, water moves away at the upper part of the river, and at the lower part, it moves towards the inner bank. In open-channel bends, the curvature of the flow gives rise to secondary flow, resulting in helical motion. This helical motion is of high importance in meandering rivers, where it plays a key role in erosion and sedimentation patterns in the river's bed (Van Balen, Uijttewaal, & Blanckaert, 2009). Therefore, it is vital to know about and study the flow pattern in river bends in order to predict and prevent outer wall erosion in rivers. Moreover, a proper understanding of flow characteristics in curved open channels is vital in predicting the spreading of pollutants and thus for the water quality of natural river systems (Van Balen et al., 2009).
Since scour and flow patterns are of high importance in river and hydraulics engineering, a great number of researchers have conducted studies on flow structure and sediment transport through bends and straight reaches. Kra and Merkley (2004) developed a computational method based on mathematical modeling of both two-dimensional and three-dimensional velocity distributions for steady-state uniform flow in open channels of rectangular cross-section. It is evident that the two-dimensional version of the model is not appropriate for calculating surface velocity coefficients. Sui, Fang, and Karney (2006) carried out an experimental study on local scour in a flume with a 90º bend and analyzed the effect of some parameters, including the Froude number, the slope and width of the protective wall, and the size of bed particles, on scour at the bed level. Huang, Jia, Hsun-Chuan, and Sam (2009) applied the NCCHE3D three-dimensional free-surface model to study secondary flows in an experimental channel. The agreement between the simulated results, obtained using different turbulence models with different pressure solution techniques, and the measured vertically-averaged velocities was satisfactory. Wang, Zhou, and Shao (2010) used a computational fluid dynamics model for simulation of two-dimensional water flow, sediment transport, bank failure processes, and the subsequent channel pattern changes. They considered the effects of secondary currents in bend channels and validated the water flow model using experimental data. Experimental and numerical studies of the flow pattern in a 90º bend by Abhari, Ghodsian, Vaghefi, and Panahpur (2010) indicated that streamlines at the level close to the bed orient to the inner wall and at levels near the water surface decline to the outer wall. Chan, Zhang, Leu, and Chen (2010) studied the turbulent flow in a channel with periodic porous ribs on one wall. They used the Reynolds-averaged Navier-Stokes (RANS) equations with a k-ε turbulence model for turbulence closure. Barbhuiya and Talukdar (2010) carried out an experimental study of the three-dimensional flow and scour pattern in a 90º bend, and measured the time-averaged velocity components, turbulent intensity components and Reynolds stresses in different vertical sections by using ADV. The results showed that the maximum measured velocity is 1.61 times the mean velocity. Stoesser, Ruether, and Olsen (2010) solved the Navier-Stokes equations on a fine three-dimensional grid by using a Large Eddy Simulation approach and a method based on the RANS equations, for which there are two different isotropic turbulence closures. The results provided clear evidence that the RANS code was able to predict the time-averaged primary velocities with good agreement regardless of the turbulence model used. Bonakdari, Baghalian, Nazari, and Fazli (2011) predicted the flow field in a mild 90º bend using Artificial Neural Networks (ANN) and a Genetic Algorithm (GA). They studied the variations of velocities in both experimental and numerical (CFD) models. Moreover, they compared the results of the ANN and CFX methods in sections where experimental data were not available. Constantinescu, Koken, and Zeng (2011) considered the flow in an open channel bend of strong curvature (the ratio between the radius of curvature of the curved reach and the channel width is close to 1.3) over realistic topography corresponding to equilibrium scour conditions. Results demonstrated that, compared to RANS, DES (detached eddy simulation) is able to better capture the redistribution of the mean flow streamwise velocity. Baghalian, Bonakdari, Nazari, and Fazli (2012) investigated the velocity field in a 90 degree open channel bend using artificial intelligence, analytical, experimental, and numerical methods. They indicated that the numerical, ANN and experimental results could show that the maximum velocity occurs under the free surface, but the analytical solution could not. Blanckaert et al. (2013) studied three distinct processes of flow separation near the banks in sharply-curved open-channel bends. The experiments were performed with both a flat immobile gravel bed and a mobile sand bed with dominant bed-load sediment transport. Gholami, Akhtari, Minatour, Bonakdari, and Javadi (2014) carried out experimental and numerical modelling of the flow pattern at a strongly-curved 90 degree bend and reported that, in both models, the maximum velocity along the bend always occurs near the inner wall while the minimum occurs near the outer wall. Celik, Diplas, and Dancey (2014) measured the pressure fluctuations on the surface of a coarse, fully exposed, spherical grain resting upon a bed of identical grains in an open channel turbulent flow. They concluded that the streamwise velocity near the bed is most directly related to those force events crucial to particle entrainment. Huang, Li, Huang, and Liou (2014) acquired temperature profiles in a PDMS micro channel with a 90º sharp bend using a molecule-based temperature sensor. These temperature evolutions agree with secondary flow patterns identified from the velocity measurement. Vaghefi, Akbari, and Fiouz (2014) used the Depth-Averaged method to study and analyze the shear stress distribution near the bed in a 180º sharp bend flume. The results suggested that the maximum dimensionless shear stress occurs near the inner wall and at the 40º cross section. Vaghefi, Akbari, and Fiouz (2015) measured three-dimensional flow velocity components in a 180 degree sharp bend. The comparison between the longitudinal velocity values at distances of 5 and 95% from the bed showed a 60% increase in flow velocity from near the bed to the water surface. Horvat, Isic, and Spasojevic (2015)
Results and discussion
[The remainder of the Results and discussion section is unrecoverable from the source extraction; surviving fragments indicate subsections on streamlines in cross sections and longitudinal profiles, together with captions for Figure 2 (locations along the bend used for data collection), Figure 4 (streamlines at cross sections including 80, 90 and 100 degrees), and Figure 6 (streamlines at relative distances from the bed, beginning at 5%).]
"year": 2017,
"sha1": "4be62c0bf24b3ba7a361d8201a4e37738029428d",
"oa_license": "CCBY",
"oa_url": "http://periodicos.uem.br/ojs/index.php/ActaSciTechnol/article/download/29032/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4be62c0bf24b3ba7a361d8201a4e37738029428d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Geology"
]
} |
259910750 | pes2o/s2orc | v3-fos-license | Single versus Dual-Operator Approaches for Percutaneous Coronary Interventions within Chronic Total Occlusion—An Analysis of 27,788 Patients
(1) Background: Since the treatment of chronic total occlusion (CTO) with percutaneous coronary intervention (PCI) is associated with high procedural complexity, it has been suggested to use a multi-operator approach. This study was aimed at evaluating the procedural outcomes of single (SO) versus dual-operator (DO) CTO-PCI approaches. (2) Methods: This retrospective analysis included data from the Polish Registry of Invasive Cardiology Procedures (ORPKI), collected between January 2014 and December 2020. To compare the DO and SO approaches, propensity score matching was introduced with equalized baseline features. (3) Results: The DO approach was applied in 3604 (13%) out of 27,788 CTO-PCI cases. Patients undergoing DO CTO-PCI experienced puncture-site bleeding less often than the SO group (0.1% vs. 0.3%, p = 0.03). No differences were found in the technical success rate (successful revascularization with thrombolysis in myocardial infarction flow grade 2/3) of the SO (72.4%) versus the DO approach (71.2%). Moreover, the presence of either multi-vessel (MVD) or left main coronary artery disease (LMCA) (odds ratio (OR), 1.67 (95% confidence interval (CI), 1.20–2.32); p = 0.002), as well as lower annual and total operator volumes of PCI and CTO-PCI, could be noted as factors linked with the DO approach. (4) Conclusions: Due to the retrospective character, the findings of this study have to be considered only as hypothesis-generating. DO CTO-PCI was infrequent and was performed on patients who were more likely to have LMCA lesions or MVD. Operators collaboratively performing CTO-PCIs were more likely to have less experience. Puncture-site bleeding occurred less often in the dual-operator group; however, second-operator involvement had no impact on the technical success of the intervention.
Introduction
At present, percutaneous coronary interventions (PCIs) are becoming more challenging. Such a situation is primarily due to the patient's advanced age, complex anatomic 2 of 12 lesions, and greater burden of concomitant diseases [1][2][3][4]. In turn, in selected groups of patients, this accounts for the higher complexity of the intervention, greater amount of contrast, and radiation dose used during the procedure. This further reflects longer procedural time, which is followed by an increased risk of adverse outcomes [3,4]. This is extremely relevant in PCIs performed within chronic total occlusions (CTOs), which are linked with more challenging procedures. Compared to regular PCIs, they are also connected with a higher risk of periprocedural complications, such as coronary artery perforation and loss of collateral circulation [5][6][7][8][9][10]. Higher complexity can also be caused by the poor condition of patients. Seeing as CTO may be present in many patients with acute coronary syndromes, patient prognosis becomes further exacerbated [11]. To address this issue, it has been recommended to perform high-risk PCIs. These include CTO-PCIs with a multiple-operator approach. This is performed in order to improve procedural success and safety [9,[12][13][14]. The involvement of a second operator, depending on his/her experience, may provide support for the leading operator. This allows shared intra-procedural decision-making while simultaneously yielding educational benefits for the assistant operator [9]. However, in recently published studies on both high-risk PCIs and CTO-PCIs, no improvement has been reported in terms of procedural outcomes or major adverse cardiac event (MACE) rates among patients treated via the multiple-operator approach [4,15]. Nonetheless, since the latter remains poorly studied, we aimed to identify factors associated with single-(SO) and dual-operator (DO) approaches for CTO-PCIs and their impact on procedural outcomes.
Materials
This retrospective analysis is based on data from the Polish national registry of percutaneous coronary interventions (ORPKI), collected between 2014 and 2021. The registry is maintained in cooperation with the Association of Cardiovascular Interventions (AISN) of the Polish Cardiac Society. It covers almost all catheterization laboratories (CathLabs) performing PCIs in Poland and has been characterized in previously published papers [8]. We extracted 27,788 patients who had undergone either SO (n = 24,184) or DO (n = 3604) CTO-PCI between January 2014 and December 2020. The exact percentage share of CTO-PCI in the entire population of patients undergoing PCI procedures in subsequent years has been presented in previous publications [6,8]. Due to the retrospective nature and anonymization of the data collected in the registry, the requirement to obtain Bioethics Committee consent was waived.
Definitions
CTO was defined by coronary angiography as a coronary occlusion without antegrade filling of the distal vessel other than via collaterals, assessed as thrombolysis in myocardial infarction (TIMI) flow grade 0. The duration of the occlusion had to be more than 3 months, as estimated from the onset of clinical events, including myocardial infarction (MI), sudden onset or worsening of chest symptoms, and angiography, and had to be confirmed by an experienced operator. According to the CTO-ARC consensus, technical success of the performed CTO-PCI was defined as restoration of coronary artery patency, assessed as TIMI flow grade 2/3, with <30% residual stenosis of the target CTO lesion [16].
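The technical-success definition above reduces to a simple rule; a minimal Python sketch follows, with the helper name technical_success being our own illustrative choice rather than anything defined by the registry.

def technical_success(timi_flow_grade: int, residual_stenosis_pct: float) -> bool:
    """CTO-ARC definition used in this study: restored antegrade flow of TIMI
    grade 2 or 3 with <30% residual stenosis of the target CTO lesion."""
    return timi_flow_grade in (2, 3) and residual_stenosis_pct < 30.0

print(technical_success(3, 10.0))  # True
print(technical_success(1, 10.0))  # False: flow not restored
print(technical_success(3, 45.0))  # False: residual stenosis too high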
Study Endpoints
The primary endpoint of this study was the technical success of the performed CTO-PCI. Secondary endpoints included periprocedural complications, i.e., coronary artery perforation, puncture-site bleeding, MI, cardiac arrest, no-reflow phenomenon, or death. With regard to the aforementioned endpoints, the procedural outcomes were compared in the SO vs. DO study groups.
Clinical Characteristics at Baseline
The clinical characteristics at baseline are shown in Tables 1 and 2. Propensity score matching yielded matched sets of patients undergoing DO CTO-PCI and SO CTO-PCI at a 1:4 ratio. The baseline characteristics were balanced, with no significant differences.
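The registry analysis itself is not published as code; the Python sketch below shows one plausible way to obtain 1:4 nearest-neighbour propensity score matching. The covariate list, greedy matching with replacement, and absence of a caliper are illustrative assumptions and may differ from the actual ORPKI analysis.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1_to_4(df: pd.DataFrame, treatment_col: str, covariates: list) -> pd.DataFrame:
    """Match each dual-operator (treated) case to its 4 nearest single-operator
    controls on the estimated propensity score (greedy, with replacement)."""
    X, y = df[covariates].to_numpy(), df[treatment_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treatment_col] == 1]
    controls = df[df[treatment_col] == 0]
    nn = NearestNeighbors(n_neighbors=4).fit(controls[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    return pd.concat([treated, controls.iloc[idx.ravel()]])

# e.g. matched = match_1_to_4(registry, "dual_operator", ["age", "weight", "diabetes"])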
Procedural Indices, Pharmacotherapy, and Procedural Outcomes
Most importantly, the technical success rate was similar in the DO and SO groups (71.2% and 72.4%, respectively, Table 3). Furthermore, CTO-PCIs in the DO cases were characterized by significantly higher usage of contrast volume and radiation dose (p < 0.0001) (Table 3). Furthermore, in these cases, access site crossover occurred more often (p = 0.02) and overall, femoral access was used more frequently (p < 0.0001). In comparison to the DO procedures, SO CTO-PCIs were performed at sites of greater annual and total volumes and by more experienced operators. Their values were expressed as the total and annual volume of the performed PCIs, including CTO cases (Table 3). However, as shown in Table 4, in the propensity score-matched population, only differences in contrast, radiation dose, and UFH reached statistical significance.
Procedure-Related Complications
Although no differences were found in the unmatched population, after propensity score matching it was revealed that more patients experienced puncture-site bleeding in the SO group (Tables 5 and 6). Furthermore, multivariable analysis revealed no statistically significant differences in the associations between the DO vs. SO approach and procedural outcomes in both the unmatched and the propensity score-matched populations (Figure 1).
Figure 1. The dual-operator approach and procedure-related complications: multivariable analysis by outcomes for the unmatched (a) and propensity score-matched population (b). The association between the DO vs. SO approaches and outcomes was adjusted for gender, diabetes, previous stroke, previous myocardial infarction, previous PCI, previous coronary artery bypass graft, smoking status, psoriasis, hypertension, kidney disease, chronic obstructive pulmonary disease, Killip class IV, age, weight, contrast used in PCI, radiation dose used in PCI, annual site volume, annual operator volume, and annual operator CTO volume. CI, confidence interval; DO, dual operator; PCI, percutaneous coronary intervention; SO, single operator.
Factors Associated with Dual-Operator CTO-PCI
The univariable analyses revealed that older age was not linked with CTO-PCIs performed by two operators. However, males (odds ratio (OR), 1.14 (95% confidence interval (CI), 1.05-1.24); p = 0.001) and current smokers (OR, 1.22 (95% CI, 1.11-1.34); p < 0.001), had higher odds of being treated by two CTO-PCI operators. This was also true for patients burdened with arterial hypertension (OR, 1.36 (95% CI, 1.26-1.47); p < 0.001) ( Figure 2). Multi-vessel disease (MVD), in comparison to single-vessel disease (SVD) (OR, 1.30 (95% CI, 1.08-1.55); p = 0.005), and having either MVD or left main coronary artery (LMCA) involvement compared to the absence of such angiographic findings (OR, 1.67 (95% CI, 1.20-2.32); p = 0.002), were also associated with the DO approach. Furthermore, greater contrast amount and radiation dose used during the procedure itself were linked with DO CTO-PCI. Considering operators' experience, those with lower annual and total volumes of both CTO-PCIs and overall PCIs had higher odds of performing CTO-PCI with a second operator. In contrast, patients administrated with UFH and LMWH had lower odds of being treated by two operators (Figure 2).
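The reported odds ratios are standard univariable logistic-regression estimates; the brief Python sketch below shows how such an estimate with its 95% confidence interval is obtained. The data frame and column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariable_or(df: pd.DataFrame, outcome: str, predictor: str) -> pd.Series:
    """Fit outcome ~ predictor by logistic regression and return the odds ratio
    with its 95% confidence interval and p value."""
    X = sm.add_constant(df[[predictor]])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    lo, hi = np.exp(fit.conf_int().loc[predictor])
    return pd.Series({"OR": np.exp(fit.params[predictor]),
                      "CI_low": lo, "CI_high": hi, "p": fit.pvalues[predictor]})

# e.g. univariable_or(registry, outcome="dual_operator", predictor="hypertension")
# would yield an estimate of the form OR 1.36 (95% CI 1.26-1.47).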
Discussion
To summarize the results of this study, the frequency of DO CTO-PCIs was 13%. This approach, however, was not associated with the improvement of procedural outcomes, i.e., technical success and periprocedural complication occurrence. Thirdly, CTO-PCIs in the DO group may have been more complex. This was due to the higher contrast and radiation doses used during the procedure. Such a situation concerned patients with greater MVD lesion rates as well as more frequent involvement of LMCA. Lastly, DO interventions were performed by operators who were more likely to be less experienced.
The revascularization of CTO lesions has been recognized as a great challenge for the operator. This is because it is linked with complex anatomic lesions demanding scrupulous vessel preparation, followed by longer procedural time, higher radiation dose, and contrast amount used during the intervention [3,6,17,18]. Apart from this, high procedural difficulty is attributed to a higher rate of additional device exploitation (e.g., intravascular imaging, rotablation, intravascular lithotripsy) compared to regular PCI. Moreover, challenging maneuvers, involving antegrade or retrograde dissection and re-entry approaches, may strain the mental and skillset capacity of the interventionalist [17,19]. Furthermore, the increasing clinical severity of patients eligible for PCI and its low predictability but high risk makes it more difficult to single-handedly manage these cases at the Cathlab [20]. Despite the aforementioned challenges, due to novel interventional therapies, the international procedural success of CTO-PCI has been steadily increasing over time, from less than 75% in the past to a current total of 90% at the leading highly experienced centers [9,17,[20][21][22]. However, this outcome not only remains significantly worse in comparison to regular PCI, but also exhibits great variability with regard to the site where CTO treatment is attempted. In fact, in other studies based on inexperienced institutions, it has been reported that achieved procedural success is unsatisfactory, far below 70% [10,17]. Thus, in an effort to improve the modest outcomes of such revascularizations, a renewed interest has arisen in establishing international guidelines for CTO-PCIs and adopting widely convenient training programs. This involves, among other aspects, a multi-operator approach to CTO cases as well as other high-risk PCIs [9,13,14]. Indeed, the growing usage of the DO approach has been observed in the past years. It primarily concerns institutions with high annual PCI volumes, where LMCA disease, calcific stenosis, and CTO account for a great proportion of attempted cases. Also, imaging techniques, rotablation, and mechanical support devices are vividly exploited at these centers [4].
In our study, DO CTO-PCI was performed on patients with greater disease severity, as they were more likely to have LMCA lesions or MVD. Also, DO intervention was linked with a higher amount of contrast as well as radiation dose used during the procedure. Such findings are comparable to other reports on high-risk PCIs. They can further be explained by the preselection of complex cases during collaborative preplanning of the procedure, since in other studies, an association has been reported between contrast used during CTO-PCI and a greater rate of periprocedural complications and utilization of advanced devices and complex lesions, such as restenosis, i.e., all factors being commonly attributed to higher procedural difficulty [4,6]. In a different study on CTO-PCIs, Karacsonyi et al. showed that multi-operator procedures were also characterized by greater procedural and fluoroscopy time, as well as higher air kerma radiation dose and contrast volume, compared to the SO group [15]. However, in contrast to our results, patients treated by multi-operator CTO-PCI had lower lesion complexity. This was confirmed by a lower Japan-CTO (J-CTO) score (2.28 ± 1.20 vs. 2.38 ± 1.29; p = 0.005; respectively) and Prospective Global Registry for the Study of Chronic Total Occlusion Intervention (PROGRESS-CTO) (0.97 ± 0.93 vs. 1.13 ± 1.01; p < 0.001; respectively) [15]. It should also be noted that the increased amount of contrast and radiation dose may be related not only to the complexity of the lesions undergoing PCI but also the lack of experience of the operators performing such a procedure.
In our study, operator annual and total volume of all performed PCIs and CTO cases were identified as factors linked with the SO approach in the multivariable analysis. These findings seem to be similar to those obtained in other studies. In them, it has been reported that operators performing multi-operator high-risk procedures, including CTO cases, were less often highly experienced (>60 CTO cases per year), had fewer years of experience in PCIs, and had lower annual PCI volumes, including high-risk interventions [4,15]. One could hypothesize that the central tendency of operators' experience in DO procedures could have been lowered by the participation of inexperienced juniors. However, the aforementioned studies differentiated the experience of leading interventionalists performing DO CTO-PCIs and in them, their superiority over SO operators was still not noted [4,15].
Most importantly, no improvement was demonstrated in the technical success or periprocedural complication rates in the DO group compared to SO. Coherently, in the study on high-risk PCIs, Kovach et al. reported similar outcomes between such two groups regarding the MACE rate (32% vs. 30%, p = 0.44) and its components at the 12-month follow-up. Acute kidney injury incidence, hospital length, and 30-day re-admission were also considered [4]. The paucity of anticipated outcome improvement may have been due to the biased group selection. The potential benefits of a second operator in terms of procedural success could have been confounded by the too-broad group selection, since they may appear only in the most complex cases, such as among patients with J-CTO scores >3. However, in the study based on the PROGRESS-CTO registry, no differences were noted in any J-CTO groups regarding the MACE rate (2.2% vs. 2.4%; p = 0.6), technical (86% vs. 86%; p = 0.9), or procedural success (84% vs. 85%; p = 0.6). Moreover, patients with PROGRESS CTO scores of 3 and 4, treated by multiple operators, had a significantly lower technical success rate compared to the SO group [15].
In part, the lack of procedural improvement could have also been driven by the fact that in some cases, PCIs performed by two operators were already linked with worse outcomes. That would appear as a consequence of the scenario in which an additional operator was called to aid the ongoing procedure following the occurrence of a sudden complication; thus, falsely accounting for a certain percentage of the DO group. Moreover, since the average technical success achieved in this registry is relatively low, it may be that the DO approach is a result of cooperation between two, non-dedicated CTO operators. In such a case, the potential benefits of an additional operator were countered by the greater experience of interventionalists single-handedly performing CTO-PCIs at centers with higher volumes of CTO interventions. In multiple studies, it has been shown that experience, both at the level of the operator and site volumes, is associated with procedural success and lower adverse events, e.g., in-hospital death. Such events are more pronounced in high-risk, complex procedures [18,[23][24][25][26]. Hence, the current European Consensus and National Societies guidelines regarding CTO-PCIs established requirements regarding certification for a well-experienced specialist to perform 300 CTO-PCIs in total and maintain more than 50 CTO-PCIs per year [27,28].
In general, despite our findings not underpinning the benefits of second operator involvement, it is worth mentioning that several potential benefits of such an approach were not examined in-depth and are yet to be investigated in the future.
Conclusions
Due to the retrospective character, the findings of this study have to be considered only as hypothesis-generating. DO CTO-PCI was infrequent and was performed on patients who were more likely to have LMCA lesions or MVD. Operators collaboratively performing CTO-PCIs were more likely to have less experience. Puncture-site bleeding occurred less often in the dual-operator group; however, second-operator involvement had no impact on the technical success of the intervention.
Study Limitations
This study is limited by several factors. Most importantly, it lacks a randomized design due to its retrospective nature; hence, the results have to be considered solely as hypothesis-generating. Moreover, certain data were not available, including, for instance, information about the percentage of junior operators involved in DO procedures. There was also a lack of objective measures of CTO complexity, such as the J-CTO score, and of detailed data regarding the location of the treated CTO. In addition, the collection of data from multiple centers introduces bias related to heterogeneity among first operators, since the recognition of periprocedural complications and of the ongoing PCI scenario ultimately depends on operator experience, habits, and inclinations. Although our study, on the basis of a large patient cohort, does not support second-operator involvement in CTO cases, more research is necessary to establish an unequivocal consensus on this subject.
Institutional Review Board Statement: This study was conducted according to the guidelines of the Declaration of Helsinki, and due to its retrospective nature, it did not require the approval of the local Bioethics Committee.
Informed Consent Statement: All included patients provided informed consent for the procedure. No personal data were gathered in this registry.
Data Availability Statement: Data are available upon special request.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-07-16T15:06:49.362Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "b6eceea97df50478c6d67d7d6478e251da01e14e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/14/4684/pdf?version=1689319671",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0db55c7010af7af8c67a73c2913428307ce291e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
4856483 | pes2o/s2orc | v3-fos-license | Bacterial meningitis epidemiology and return of Neisseria meningitidis serogroup A cases in Burkina Faso in the five years following MenAfriVac mass vaccination campaign
Background Historically, Neisseria meningitidis serogroup A (NmA) caused large meningitis epidemics in sub-Saharan Africa. In 2010, Burkina Faso became the first country to implement a national meningococcal serogroup A conjugate vaccine (MACV) campaign. We analyzed nationwide meningitis surveillance data from Burkina Faso for the 5 years following MACV introduction. Methods We examined Burkina Faso’s aggregate reporting and national laboratory-confirmed case-based meningitis surveillance data from 2011–2015. We calculated incidence (cases per 100,000 persons), and described reported NmA cases. Results In 2011–2015, Burkina Faso reported 20,389 cases of suspected meningitis. A quarter (4,503) of suspected meningitis cases with cerebrospinal fluid specimens were laboratory-confirmed as either S. pneumoniae (57%), N. meningitidis (40%), or H. influenzae (2%). Average adjusted annual national incidence of meningococcal meningitis was 3.8 (range: 2.0–10.2 annually) and was highest among infants aged <1 year (8.4). N. meningitidis serogroup W caused the majority (64%) of meningococcal meningitis among all age groups. Only six confirmed NmA cases were reported in 2011–2015. Five cases were in children who were too young (n = 2) or otherwise not vaccinated (n = 3) during the 2010 MACV mass vaccination campaign; one case had documented MACV receipt, representing the first documented MACV failure. Conclusions Meningococcal meningitis incidence in Burkina Faso remains relatively low following MACV introduction. However, a substantial burden remains and NmA transmission has persisted. MACV integration into routine childhood immunization programs is essential to ensure continued protection.
Introduction
For over 100 years, the meningitis belt of sub-Saharan Africa, stretching from Senegal to Ethiopia and including 450 million people in 26 countries, experienced high endemic rates of meningitis, annual seasonal outbreaks, and explosive epidemics occurring every 5-12 years [1,2]. Prior to the introduction of the meningococcal serogroup A conjugate vaccine (MACV, MenAfriVac™), approximately 90% of meningitis cases during epidemics in the region were attributable to Neisseria meningitidis serogroup A (NmA) [3]. From 2010-2016, MACV was aggressively rolled out using national vaccination campaigns in 19 at-risk countries within or bordering the meningitis belt, representing a new approach to controlling epidemic-prone diseases (Fig 1) [4].
Burkina Faso, a landlocked West African country with a population of approximately 19 million, is one of the few countries entirely located within the meningitis belt and experiences hyper-endemic rates of meningitis [3]. In December 2010, Burkina Faso was the first country to complete national introduction of MACV through a 10-day mass vaccination campaign reaching 11 million people (approx. 70% of the population), achieving 96% coverage among the target population of persons aged 1-29 years [5]. An evaluation of the impact of MACV one-year following the campaign demonstrated a substantial reduction in meningitis incidence among both the target population and the general population, due to high coverage and herd protection [3].
A growing cohort of unvaccinated children (currently over 4 million) born since 2010 puts the population at risk of NmA epidemic resurgence. The World Health Organization (WHO) Strategic Advisory Group of Experts on Immunization's 2014 recommendation of one dose of MACV at ≥9 months of age provides an opportunity to sustain the immunity already achieved [6]. Burkina Faso conducted a catch-up campaign among children aged 1-6 years in November 2016 and introduced a single dose of MACV into their routine immunization program in March 2017 at age 15-18 months, along with the second dose of measles-containing vaccine. We analyzed national meningitis surveillance data and described the epidemiology of reported NmA cases in the five years (2011-2015) since MACV introduction in Burkina Faso.
Meningitis surveillance
In Burkina Faso, two complementary systems of nationwide population-based meningitis surveillance exist [3]. Aggregate surveillance for reportable diseases is conducted via the Télégramme Lettre Official Hebdomadaire, which collects weekly reports of clinically-defined meningitis cases from both inpatient and outpatient facilities and meningitis-related deaths aggregated at the district level. Functional since 1997, this system contains no identifying information or laboratory data, and only limited demographic information on age and sex. A second system, nationwide case-based meningitis surveillance, was implemented in 2010 prior to MACV introduction. This passive surveillance system collects case-level demographic and clinical information as well as results of cerebrospinal fluid (CSF) examination and laboratory testing using Integrated Disease Surveillance and Response tools [7]. A paper case notification form is completed for each suspected case, and district surveillance officers enter the data into a surveillance database. If a CSF specimen is collected, a copy of the case notification form travels with the specimen to district laboratories, regional laboratories, and national reference laboratories. At each level, laboratory results are entered into a laboratory database. Data are reported weekly to a central meningitis national reference laboratory and the Ministry of Health surveillance office, including zero reporting. The surveillance office merges the surveillance database with the laboratory database to create a master database, which includes cases both with and without CSF specimens and laboratory results. Burkina Faso, as a member of the MenAfriNet Consortium (www.menafrinet.org), makes substantial efforts to maintain high-quality case-based surveillance with quarterly assessments of performance indicators covering critical surveillance domains, including specimen collection and transport to a national reference laboratory, pathogen confirmation, linkage of laboratory and epidemiologic data, and data management.
Case definitions
Cases are classified according to WHO case definitions [8]. A case of suspected meningitis is defined as sudden onset of fever ≥38.5 °C with one of the following signs: neck stiffness, altered consciousness, or other meningeal signs (including flaccid neck, bulging fontanel, or convulsions in young children). Probable bacterial meningitis is a suspected case with turbid, cloudy, purulent, or xanthochromic CSF; presence of Gram-negative diplococci, Gram-positive diplococci, or Gram-negative bacilli on microscopic examination of CSF; or a CSF white cell count >10/mm³. A confirmed case of meningitis is a suspected or probable case with N. meningitidis, Streptococcus pneumoniae, or Haemophilus influenzae isolated from CSF by culture or detected in CSF by real-time polymerase chain reaction (rt-PCR) or latex agglutination [9].
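These case definitions amount to a simple decision rule. The following sketch (in Python, with hypothetical field names that are not part of the actual surveillance forms) illustrates how a single case record could be classified according to the definitions above:

```python
def classify_case(fever_c, neck_stiffness, altered_consciousness, other_meningeal_signs,
                  csf_appearance=None, csf_gram_stain=None, csf_wbc_per_mm3=None,
                  pathogen_confirmed=False):
    """Classify a record as 'confirmed', 'probable', 'suspected' or 'not a suspected case'
    following the WHO definitions summarized in the text (field names are hypothetical)."""
    suspected = fever_c >= 38.5 and (neck_stiffness or altered_consciousness or other_meningeal_signs)
    if not suspected:
        return "not a suspected case"
    # Confirmed: N. meningitidis, S. pneumoniae or H. influenzae detected in CSF
    # by culture, rt-PCR or latex agglutination.
    if pathogen_confirmed:
        return "confirmed"
    # Probable: abnormal CSF appearance, suggestive Gram stain, or pleocytosis.
    abnormal_appearance = csf_appearance in {"turbid", "cloudy", "purulent", "xanthochromic"}
    suggestive_gram = csf_gram_stain in {"gram-negative diplococci",
                                         "gram-positive diplococci",
                                         "gram-negative bacilli"}
    pleocytosis = csf_wbc_per_mm3 is not None and csf_wbc_per_mm3 > 10
    if abnormal_appearance or suggestive_gram or pleocytosis:
        return "probable"
    return "suspected"

# Example: a febrile case with neck stiffness and turbid CSF, no pathogen identified yet.
print(classify_case(39.0, True, False, False, csf_appearance="turbid"))  # -> probable
```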
Laboratory methods
CSF specimens are transported from local healthcare facilities to district laboratories, which conduct preliminary lab testing such as cytology, Gram staining, and latex agglutination (Pastorex, Bio-Rad). CSF specimens are also sent to a national reference laboratory for culture and rt-PCR targeting the sodC gene for N. meningitidis, lytA for S. pneumoniae, and hpd for H. influenzae [10,11]. Meningococcal serogroups are determined using latex agglutination or rt-PCR, with rt-PCR considered definitive [12]; sequence type and clonal complex were determined using whole genome sequencing.
NmA case investigations
Each reported NmA case in Burkina Faso triggers a full case investigation, including an initial investigation by district health officers and a follow-up investigation by a national team of epidemiologists and laboratory technicians. The investigations confirm the patient's age, sex, vaccination status, travel history, epidemiologic links, and the causative agents confirmed by laboratory tests.
Statistical analysis
To assess the impact of MACV on meningitis epidemiology in the 5 years following MACV introduction in 2010, we examined both aggregate and case-based meningitis surveillance data from January 1, 2011, to December 31, 2015. Cases among non-residents of Burkina Faso were excluded. Incidence rates (cases per 100,000 persons) were calculated using national census estimates. District-level epidemics were defined by an aggregate suspected meningitis incidence exceeding 100 per 100,000 population per week. The sensitivity of the case-based surveillance system to detect suspected meningitis cases was calculated using aggregate surveillance case counts as the denominator.
For laboratory-confirmed cases, annual incidence was adjusted for the age-stratified proportion of cases with CSF tested at a national laboratory, where culture and rt-PCR were performed. Within each age stratum (<1 year, 1-4 years, 5-9 years, 10-14 years, 15-29 years and ≥30 years), the number of cases confirmed by culture or rt-PCR for a specific pathogen was divided by the number of cases with CSF tested via culture or rt-PCR at a national laboratory; this proportion was then applied to cases lacking any test results within that age stratum.
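As an illustration of this age-stratified adjustment, the minimal sketch below uses made-up numbers (not the study data): the positivity proportion among nationally tested cases in each stratum is extrapolated to the untested cases of the same stratum before computing incidence per 100,000 persons.

```python
# Hypothetical counts for one pathogen in two age strata; figures are illustrative only.
strata = {
    "<1 year":   {"confirmed": 40, "tested": 200, "untested": 100, "population": 600_000},
    "1-4 years": {"confirmed": 55, "tested": 400, "untested": 250, "population": 2_300_000},
}

def adjusted_incidence(s):
    positivity = s["confirmed"] / s["tested"]          # proportion positive among cases tested nationally
    extrapolated = positivity * s["untested"]          # applied to cases lacking any test result
    adjusted_cases = s["confirmed"] + extrapolated
    return 100_000 * adjusted_cases / s["population"]  # cases per 100,000 persons

for name, s in strata.items():
    print(f"{name}: adjusted incidence = {adjusted_incidence(s):.1f} per 100,000")
```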
Data were analyzed using SAS v9.3. This evaluation was determined by the Centers for Disease Control and Prevention's Human Research Protection Office to be public health nonresearch, and Institutional Review Board review was not required.
Aggregate meningitis surveillance
From 2011-2015, 20,389 cases of suspected meningitis and 2,333 (12%) deaths were reported via aggregate surveillance in Burkina Faso, corresponding to an annual median of 3,486 cases (range: 2,919-7,022) and average annual incidence of 24.5 cases per 100,000 population (Tables 1 and 2). Overall, 15,629 (77%) of cases occurred during the meningitis season (epidemiologic weeks 1-24). A total of 7 district-level epidemics occurred, all in 2012.
Case-based meningitis surveillance
During 2011-2015, 18,538 individual suspected meningitis cases and 1,996 (11%) deaths were reported through case-based meningitis surveillance (Tables 1 and 3). The numbers of cases reported per week by both surveillance systems were similar, except in 2011 and 2015, when some reporting lags in the aggregate surveillance system were apparent (Fig 2). The annual sensitivity of the case-based surveillance system to detect suspected meningitis cases, using aggregate surveillance for comparison, improved from 73% (2,842/3,878) in 2011 to 96% (2,970/3,084) in 2015.
Suspected and probable meningitis cases
Nearly a fifth (19%) of suspected meningitis cases occurred among children aged <1 year and 47% occurred in children aged <5 years (Table 3). Fifty-three percent (n = 9,639) of all
Overall, 1,811 meningococcal meningitis cases were reported in 2011-2015 (Table 2); 153 (8%) were fatal. Apart from the elevated incidence of N. meningitidis in 2012 (10.6 per 100,000), the annual incidence remained low (range: 1.7-2.1). NmW accounted for the highest incidence among meningococcal serogroups (Table 2, Fig 3). Only six confirmed NmA cases were reported (Table 4), resulting in an average annual adjusted incidence of 0.01 per 100,000. All NmA cases were confirmed using rt-PCR; the 2011 case was additionally confirmed using latex agglutination and culture. Five of the six cases occurred between November 2014 and May 2015 in three adjacent districts in northern Burkina Faso: Ouahigouya, Titao and Tougan. The NmA patients ranged in age from 5-19 years; half were female (Table 4). None traveled outside their community in the month before disease onset. No epidemiologic links between cases were identified. One case was fatal and one resulted in hearing loss.
Two of the six NmA cases occurred in 5-year-old children who were too young to be vaccinated during the 2010 MACV mass campaign, and three occurred in children who were age-eligible for MACV in 2010 but were not vaccinated. The remaining case occurred in a 9-year-old female who, based on her vaccination card, received MACV 5 years earlier (during the mass vaccination campaign); this represents the first documented case of MACV failure. Serum specimens were not available for complement component deficiency or immunologic testing.
Discussion
This analysis of bacterial meningitis epidemiology was based on aggregate and case-based meningitis surveillance systems in Burkina Faso, a country which maintained high-quality surveillance in the five years post-MACV. These data provide evidence that MACV introduction resulted in a continued lower burden of suspected meningitis (aggregate reporting average incidence: 24.3 cases per 100,000 population) compared to the 4 years prior to MACV introduction (aggregate reporting average incidence 81.2 cases per 100,000) [3]. The highest incidence of suspected meningitis was observed among children aged <5 years, who were too young to be vaccinated during or were born after the 2010 MACV national campaign. These results support previous findings demonstrating the impact of MACV in Burkina Faso and elsewhere in the meningitis belt. Remarkable decreases in meningococcal disease incidence, particularly for NmA disease, have been shown in other countries that introduced MACV [13]. Cross-sectional meningococcal carriage surveys conducted before and a year after MACV introduction demonstrated elimination of NmA carriage among both vaccinated and unvaccinated populations in Burkina Faso [14]. Similar results have been observed elsewhere in the meningitis belt, implying that there is a vaccine-induced herd protection effect [15,16]. Additionally, it has been shown that a single dose of MACV induces sustained levels of NmA antibodies in children aged 12-23 months for up to five years [17]. These findings, along with the high community acceptance [18] and low cost of the vaccine, provide compelling evidence for continued MACV rollout in all countries inside or bordering the meningitis belt [4,19,20].
Of six NmA cases detected among Burkina Faso residents since MACV introduction in 2010, five occurred during the 2014-2015 meningitis epidemic season, with two of these occurring in children who were too young to be vaccinated during the 2010 MACV campaign. While these cases were reported and thoroughly investigated, it is possible that additional NmA cases occurred but were undetected, unconfirmed, or unreported. Confirmation of the six cases described here suggests a recent increase in NmA transmission. This is likely the result of an increase in the pool of susceptible persons and a decrease in herd protection that occurred as years passed without routine MACV infant vaccination or catch-up campaigns for children born after the mass MACV campaign. In 2015, the estimated susceptible population eligible for MACV vaccination in Burkina Faso was over 3.5 million, equaling at least 30% of the population size originally vaccinated with MACV in 2010. To assess the long-term impact of MACV on the prevalence, serogroup distribution, and molecular characteristics of nasopharyngeal carriage of N. meningitidis, particularly NmA, meningococcal carriage evaluations are underway in Burkina Faso in 2016-2017. These evaluations include the districts of Ouahigouya, where three of the five recent NmA cases were reported, and Kaya, which had the highest NmA carriage prevalence prior to MACV introduction [14].
Burkina Faso's detection, investigation, confirmation, and reporting of NmA cases motivated Gavi and the international community to accelerate Burkina Faso's timeline for MACV introduction into the Expanded Programme on Immunization (EPI), in order to halt NmA transmission. Burkina Faso conducted a MACV catch-up campaign of infants and children aged 1-5 years in November 2016 and integrated MACV into the EPI in March 2017 for administration to children aged 15-18 months. Mathematical models have shown that the strategy implemented in Burkina Faso-conducting catch-up campaigns prior to EPI introduction-would produce the lowest overall annual incidence of NmA meningitis and maintain long-term population protection [21,22]. The high community acceptance of MACV may also benefit other EPI vaccines co-administered at the same age [18,23]. Burkina Faso strategically decided to integrate MACV into their EPI at age 15-18 months concomitantly with the second dose of measles-containing vaccine (MCV2); a follow-up MACV and MCV2 vaccination coverage survey is planned to measure potential impact of MACV EPI introduction on MCV2 vaccination coverage.
Burkina Faso's NmA case investigations confirmed that one of the six patients was vaccinated with MACV in 2010, signifying the first reported MACV failure among the estimated 260 million individuals vaccinated across 19 countries in the meningitis belt since 2010 [4]. The country's investigation and transparent reporting to the international community are a key component of monitoring MACV implementation and success. The MenAfriNet Consortium developed a Serogroup A Case Investigation Protocol based on Burkina Faso's experience, to guide standardized data collection, laboratory confirmation, and reporting for each NmA case detected in all countries that have introduced MACV, regardless of surveillance capacity.
In addition to the reemergence of NmA disease in Burkina Faso and other countries that previously introduced MACV (Benin, Ghana, Guinea, Mali and Togo) [24], regional meningitis epidemiology is changing. S. pneumoniae has been the predominant endemic bacterial meningitis pathogen in Burkina Faso since 2011 [25], and N. meningitidis serogroup W has accounted for the majority of epidemic meningitis [26]. Regionally, serogroup C and X have also emerged as significant causes of epidemic meningitis: serogroup C tends to be clustered, while serogroup X has been dispersed. The strain identified in the serogroup C case reported by Burkina Faso in 2015 shared similar molecular characteristics with the serogroup C strain that emerged in Nigeria in 2013 and caused the largest global epidemic of this serogroup in Niger in 2015 [27][28][29]. Burkina Faso's experience in conducting high-quality meningitis surveillance demonstrates that long-term investment in case-based surveillance is a valuable platform both for monitoring evolving meningitis epidemiology and for evaluating the effectiveness of bacterial meningitis vaccines, especially MACV integration into the EPI, pneumococcal conjugate vaccines, and a potential polyvalent meningococcal conjugate vaccine [30]. Despite increases in surveillance quality over time in Burkina Faso, challenges remain regarding specimen transport and laboratory confirmation (see S2 Table); however reported incidence rates adjust for changes in culture and real-time PCR testing capacity over time and likely reflect true trends in pathogen-specific incidences.
An immense effort has been invested in mass vaccination campaigns in Burkina Faso and other countries in the region, resulting in major successes in mobilizing international and local communities and encouraging high vaccine uptake, and in an extraordinary and unprecedented reduction in meningitis burden and NmA transmission. However, the data provide concerning evidence of a return of NmA transmission and disease as the susceptible population size increases. It is critical that the initial MACV roll-out is completed in the remaining countries of the meningitis belt and that MACV is thoughtfully and successfully integrated into EPI programs [30]. Continuing the momentum of MACV roll-out, along with long-term investments in surveillance and thorough investigations into reported NmA cases, must remain a high priority for the international community and countries within the meningitis belt. Failure to take these measures will undermine the outstanding successes that have already been achieved, and will place future generations at risk of experiencing devastating meningitis epidemics. | 2018-04-03T00:11:00.623Z | 2017-11-02T00:00:00.000 | {
"year": 2017,
"sha1": "9fa5acac99e0f0b996c328742075fc239c3d1199",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187466&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fa5acac99e0f0b996c328742075fc239c3d1199",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250150859 | pes2o/s2orc | v3-fos-license | Multidimensional Coding of Multimodal Languaging in Multi-Party Settings
In natural language settings, many interactions include more than two speakers, and real-life interpretation is based on all types of information available in all modalities. This constitutes a challenge for corpus-based analyses because the information in the audio and visual channels must be included in the coding. The goal of the DINLANG project is to tackle that challenge and analyze spontaneous interactions in family dinner settings (two adults and two to three children). The families use either French or LSF (French sign language). Our aim is to compare how participants share language across the range of modalities found in vocal and visual languaging in coordination with dining. In order to pinpoint similarities and differences, we had to find a common coding tool for all situations (variations from one family to another) and modalities. Our coding procedure incorporates the use of the ELAN software. We created a template organized around participants, situations, and modalities, rather than around language forms. Spoken language transcription can be integrated, when it exists, but it is not mandatory. Data created with other software can be imported into ELAN files if it is linked using time stamps. Analyses performed with the coded files rely on ELAN's structured search functionalities, which allow fine-grained temporal analyses and can be complemented by using spreadsheets or the R language.
Analyzing real-life interactions
Language has long been studied out of its ecological context, first through written forms characterized by their linearity, then through invented sentences, and finally with a focus on speech, in experimental studies or semi-guided interactions. Even when gestures are integrated in the analyses, environments are most often stripped of objects or other activities whose affordances have a multitude of impacts on their use (but see Mondada, 2016). Those limits can be viewed as strengths, as they have led to fruitful research on the language system. However, in order to capture the full complexity of language use, new approaches and methods are needed in which all our semiotic resources can be analyzed as they are deployed in their natural habitat, involving the orchestration of bodies engaged in a variety of situated activities with a diversity of artifacts.
The aim of our research is to capture language in its ecological environment in order to articulate its actional roots and its symbolic functions. Following Boutet (2018), we thus analyze the bodies of our participants as both the support (the instrument) and the substrate (that which constitutes and structures) for "languaging" (Linell, 2009). Our approach grounds language in embodied action rather than viewing it only as a code or a symbolic system (Bottineau, 2012). What we call "languaging" is not only relative to the languages and cultures a subject uses, but also to the available semiotic resources that can be coordinated and enable us to embody mental constructions. Conversely, the semiotic resources we use shape, construct and contribute to the meaning of our interactive productions.
We share Mondada's multimodal approach (2019: 47) according to which "research in multimodality, that is, the diversity of resources that participants mobilize to produce and understand social interaction as publicly intelligible actions, including language, gesture, gaze, body postures, movements, and embodied manipulations of objects, can be further expanded by considering not only embodied resources for interacting but also embodied practices for sensing the world in an intersubjective way." This means that, in real-life interaction, understanding other people does not only rely on the language produced. Linguistic analyses, even when determined with great precision, i.e. through standardized notation systems with very good intercoder agreement, rely on a range of cues and on the context. To participate fully in an interaction, a semiotic understanding of the situation needs to be achieved. In many situations, what was linguistically produced is insufficient to understand the meaning, or is even sometimes misleading. The linguistic material is a vital part of semiotic understanding, but the context and all the actions of the participants are also crucial.
The goal of this paper is to present an analytical framework and method that will be used to annotate and study a corpus of natural multiparty interactions in family dinner settings. This corpus is collected in the context of a project funded by the ANR, "Multimodal LANGuage practices in French family DINners" (DINLANG), in which our main aim is to analyze the coordination between the co-activities of dining and languaging in French and in French sign language. The material used for this project, especially the ELAN template described in this paper and our annotation guide, can be downloaded from the NAKALA repository (https://doi.org/10.34847/nkl.97ff535e). This material will be updated during the course of the project.
Family dinners are shared moments of everyday life which present a perfect opportunity to study how language and interactive practices are transmitted to and used by children in order for them to construct meaning (Morgenstern et al., 2021). Because the subtle interweaving of these practices while eating fully engages the body, our project highlights the semiotic differences between participants using a spoken language, French, and a sign language, Langue des Signes Française (LSF), including children at different ages.
The project includes recordings from families primarily using a spoken language and from families primarily using a sign language (families where at least one member is deaf). This means that the analytical framework cannot use a written language form of transcription as the underlying structure of coding features, contrary to what is most often found in language corpora, because such a transcription does not exist for French Sign Language and is not sufficiently relevant to code various multimodal forms of language and action.
Moreover, using written language for semiotic analyses is dangerous, as the main features of written language tend to hide the real properties of language interactions. Indeed, a body of literature (see Harris, 1990; Linell, 2005; Love, 2017) has demonstrated how written language forms have led to misunderstandings about what language actually is.
Using a phonetic transcription as the basic structure of the corpus contents could be possible, as we could conduct the required analyses for both vocal languaging (using mouth movement description) and visual languaging (using hand movement description). But these analyses would not be sufficient to achieve our goal of providing a semiotic analysis, and in both vocal and visual languaging it would be difficult and would not be desirable, according to our perspective, to draw a semiotic division between vocal productions, signs and gestures.
To avoid drawing a dividing line between actions, gestures, speech and sign, and conducting interpretations that suffer from the "written language bias" we use an analysis based on modalities and on interactions between participants at the highest structural level, and transcription or symbolic coding at the lowest structural level. This does not mean that our coding will not contain spoken language transcriptions when they are possible, or written descriptions and translations when they are useful. But these elements are not the ideal theoretical representation of language (see Harris 1990, Linell 2005, Love 2017).
Interaction and modalities
Our theoretical framework combines language socialization, cognitive grammar, interactionist and multimodal approaches to languaging. We borrow the term languaging to refer to multimodal language use, "linguistic actions and activities in actual communication and thinking" (Linell, 2009: 274), expanding the term to include speaking, gesturing and signing. We study how children's socialization to a variety of modes of expression in their daily experiencing (Ochs, 2012) through dinners shapes the development of their language use.
Interaction is a powerful theoretical framework for the analysis of semiotics (see Linell, 2009;Mondada, 2008). So as to avoid the pitfalls described above, we have based our analysis on two main features: interaction and modalities. Who is participating, and in which modality, are our topmost levels of analysis. We also include collective activities (that cannot be ascribed to one participant only) at our top-level analysis.
Recording set-up
Our goal of gathering and coding real-life interactions in dinner settings also has consequences on the equipment used to record the interactions. Our recording set-up is designed to collect as much information as possible without being a hindrance to the dinner participants.
As the location of the dinner is fixed, always around a table (of any shape), the recording equipment is also fixed. We have three recording points where a camera and a sound recorder are placed. A 360° camera is placed above the table on a boom stand. A 360° sound recorder is also positioned on the boom stand (see Figure 1).

ELAN (Brugman & Russel, 2004) has many powerful features which can be used to organize both audible and visible data (Boutet & Blondel, 2016; Vincent, 2020). The two main features are the template and the searching module, and each feature can be set up according to a temporal or a structural organization.
Template, tier structure, vocabularies
The template is the most well-known feature of the ELAN software. It is a static organization that is decided at the beginning of the coding process and where later on changes are limited in scope. A template organizes all the tiers used in ELAN, and especially the relations between the tiers.
At the highest level of organization in a template, we created tiers (that we called the main tiers) with a temporal organization. The main tiers are all independent and they contain annotations that are characterized by their beginning and their end boundaries.
The other tiers (that we call dependent tiers) are organized according to their relationship with elements of the tier called the "parent" tier. The parent tier can be one of the main tiers, or any of the other dependent tiers, producing a tree-like organization of the data. There are several modes of organization defined in ELAN, but in our current project, we only use the "Time Subdivision" mode. In this mode, a dependent tier contains a description of a temporal division of the annotations in the parent tier.
Coding the dependent tier is not mandatory, but if coded, the dependent tier cannot contain blanks and must have the same span as the parent tier.
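To make the Time Subdivision constraint concrete, the sketch below uses a simplified list-of-tuples model of annotations (not the actual ELAN data format) and checks that the annotations of a dependent tier exactly cover the parent annotation's span, without gaps or overlaps:

```python
def check_time_subdivision(parent, children):
    """parent: (start_ms, end_ms); children: list of (start_ms, end_ms, value).
    Returns True if the children exactly cover the parent span with no gaps or overlaps,
    mirroring ELAN's Time Subdivision constraint (simplified model)."""
    if not children:
        return True  # coding the dependent tier is optional
    children = sorted(children)
    if children[0][0] != parent[0] or children[-1][1] != parent[1]:
        return False  # must have the same span as the parent annotation
    for (s1, e1, _), (s2, e2, _) in zip(children, children[1:]):
        if e1 != s2:
            return False  # no blanks (gaps) and no overlaps allowed
    return True

# Hypothetical example: a visible-production annotation subdivided into two spans.
parent = (12_000, 15_000)
children = [(12_000, 13_200, "action: passes bread"), (13_200, 15_000, "languaging")]
print(check_time_subdivision(parent, children))  # -> True
```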
When coding data, it is possible to restrict the annotations to a specific (controlled) vocabulary, so as to avoid using erroneous or undescribed annotations. We plan to use this feature whenever it is possible, once our inventories of all categories found in the data seem sufficiently comprehensive, especially to code gaze properly.
Searching module
The first function of the searching module is to find elements in the annotation on the basis of any string or keyword. ELAN presents many powerful search features, including features that search words or strings of words in as many files as needed, and features that allow the user to replace targeted elements by new items.
One of the searching tools of ELAN is the "Structured search of multiple EAF". As indicated by its name, this searching tool can structure data and process as many files as necessary.
A very interesting feature of the structured search is that it allows the user to improve on the functionalities offered by the template. A major reason for using templates is that they enable us to organize our data and to improve the quality of our annotations. But they also make it easy to extract information from the transcriptions and export them to a statistical software or a spreadsheet for further analysis.
However, the template organization of ELAN, although powerful, is limited because it accepts neither temporal overlaps nor distant temporal relationships between the dependent tier and the main tier. Dependent tiers have to be included within the temporal boundaries of the parent tier. If two annotations have only partial overlap, or do not overlap, the template tool cannot be used, as the extraction and exportation of those annotations is not possible.
It is possible to go beyond this limitation thanks to the "Multiple Layer Search" option of the "Structured Search of multiple EAF" searching tool. With this option, we can search for co-occurrences across multiple tiers (layers). A co-occurrence can be specified by a temporal or a hierarchical relation. Results can be exported in a tabular format (CSV). For each line, the export contains all the information for one hit, thus with this format, it can be further exploited in any spreadsheet processor or statistic tool. For example, we can find all the gazes of one specific participant that occur before the gazes of another participant and overlap with them. The time elapsed between the gaze of the first participant and the gaze of the second participant can be controlled to avoid having a first gaze that ends too much time in advance (by setting a maximal value, for example two seconds between the beginning of the first and the second gaze) and a first gaze that begins nearly at the same time (by setting a minimal value, for example 100 milliseconds between the beginning of the first and the second gaze).
The Multiple Layer Search can find any temporal relation, even when a long time has elapsed between the occurrences in each tier. It can also find sequences of annotations for the same participant, or any combination of sequences and overlaps. The number of overlaps and the number of annotations in a sequence can be more than two annotations long. Finally, the relationship between annotations can be described using all the possible temporal and structural settings of ELAN, but also using sets of participants, or sets of coded tiers, etc. The searching tool can be used to locate elements in the files, as well as to export the results. Finally, the queries can be saved, so complex queries can be reused easily whenever new data is available.
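The same kind of temporal constraint can also be reproduced outside ELAN on annotations exported in tabular form. The following minimal sketch assumes a hypothetical CSV export with participant, start_ms and end_ms columns (not the actual export layout); it retrieves the gazes of one participant that begin 100 ms to 2 s before a gaze of another participant and overlap with it, mirroring the example above.

```python
import csv

def leading_overlapping_gazes(rows, first, second, min_lead_ms=100, max_lead_ms=2000):
    """rows: dicts with 'participant', 'start_ms', 'end_ms' keys (assumed export format).
    Returns (gaze_of_first, gaze_of_second) pairs where the first participant's gaze
    begins min_lead_ms to max_lead_ms before the second's and the two gazes overlap."""
    gazes_first = [r for r in rows if r["participant"] == first]
    gazes_second = [r for r in rows if r["participant"] == second]
    pairs = []
    for a in gazes_first:
        for b in gazes_second:
            lead = b["start_ms"] - a["start_ms"]
            overlap = min(a["end_ms"], b["end_ms"]) - max(a["start_ms"], b["start_ms"])
            if min_lead_ms <= lead <= max_lead_ms and overlap > 0:
                pairs.append((a, b))
    return pairs

# Hypothetical file exported from a structured search (column names assumed).
with open("gaze_export.csv", newline="") as f:
    rows = [{"participant": r["participant"],
             "start_ms": int(r["start_ms"]),
             "end_ms": int(r["end_ms"])}
            for r in csv.DictReader(f)]
print(len(leading_overlapping_gazes(rows, first="Ca", second="M")))
```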
Coding features for a multimodal analysis
Coding features are organized hierarchically. We placed the participants and collective tiers (for example conversational topics) at the main level. Collective tiers link several participants together and are rarely attributed to a single participant (there could be for example occurrences of the youngest child producing a vocal or gestural monologue while the other members of the family are conversing together). Coordinated tiers also include the participation framework (Goffman, 1981), e.g. the group of participants that are involved in an interaction. Even if some of them do not actually produce vocal or visual language and are not either the main speaker/signer or the main addressee, they form a 'framework' in which each member has a participation status. We thus code the cues that enable us to assess that status (as speaker/signer, addressee or overhearer for instance). For each participant, the coding describes a set of dimensions. A dimension can be a resource, in the sense of Mondada (2019) and Feyaerts et al. (2022), which can correspond to language (mouth and hand), gaze, body postures, actions, manipulations of objects, … Dimensions can also be analyses of the situation, for example the theme of the conversation, the participation framework, the discourse topic ... As a rule, coding indicates that the participant is doing something, either producing sound or movement (with hands, arms, head, bust, gaze or facial articulators), thus the production is not necessarily verbal or with conventional meaning.
Implementation in the current project

6.1 Organization of the ELAN template
The ELAN template includes a series of main tiers and dependent tiers. On the main level, we placed the participants as individuals, and the participants as groups.
Each of these main tiers is divided into all the modalities necessary for our analysis. Modalities are logical subdivisions of participants and groups, but we did not use the structural mechanisms of ELAN to organize the division into modalities, so as to keep a maximal flexibility in the use of ELAN (structural properties cannot be used to describe overlaps). Queries about the relationships between modalities (as presented in 4.2.2) use the temporal organization of the transcription rather than its structure.
Participants
There are four to five participants in our recordings. Two parents, typically but not necessarily the mother and the father, and two or three children. Each participant is associated to a unique tag. Each participant has the same set of dimensions coded.
Groups
Groups refer to situations where it is necessary to code something that is shared between several participants. In our work, this includes: Presence: Who is present in the situation, including participants that are not producing anything at the target moment, but who could participate (in real life you can talk to several people, but not all of them will necessarily answer you. Nonetheless, your discourse will take their presence into account, so it is necessary to code this information). There can be presence in the audio channel, the visual channel or both.
Themes: What is happening in general? What is the situation about? What is the topic of the conversation? Several topics can co-exist. Themes are often deduced from the situated languaging, or from the semantics of non-verbal actions.
Participants: Who is actually involved in the participation framework?
Other dimensions
For all participants, several dimensions can be analyzed. The four main dimensions are: presence, gaze, audible production, visible production. These four dimensions are structurally independent from each other. In all dimensions, all events have a beginning and an end. But there is no specific structural relation between them. This is not because such a relation cannot exist, but because we cannot use structural constraints on these relations, as structural constraints in ELAN limit the possible temporal constraints (it becomes impossible to indicate overlap, precedence, or succession). However, analyses of these temporal relations can be conducted thanks to the searching system described below.
The main level of analysis simply indicates:
• A participant is present in the room, is not actually in the room but it is possible to hear her or him, or is not present at all.
• A participant produces something visible.
• A participant produces something audible.
• A participant gazes at one or several persons or objects.
A controlled vocabulary is used in the tier referring to presence: in-camera field, off-camera field, out of-roomcan-be-heard, out of-room-cannot-be-heard. For gaze, the annotations contain information about the participants or inanimate elements (objects) the gaze is directed at. For an audible or visual event, we can describe both the actions and the languaging that occur during the event.
The main transcription level allows us to segment and tag the events, but without specifying their nature. The information is included in the dependent action and languaging tiers. These annotations are temporally organized (with beginning and ending boundaries) using ELAN's "Time subdivision". The coding system for languaging is not the same in the vocal and visual modalities.
Descriptions of actions
All actions are first coded as dining-related or not. Further description can be included in a comment or in a sub-tier in natural language, whether or not the action carries a communicative value. If it has a communicative value, the description of the action includes the symbol "§". Actions can be quite automatic and non-intentional, or intentional. Most actions are coded in the visual part of the template, but audible actions can also be coded.
6.4.2 Descriptions of audible languaging
Audible languaging contains the name of the language used in the main tier line. In a dependent 'script' tier, there are orthographic transcriptions of what is said. They can be completed by symbolic codes to indicate intonation, onomatopoeia, laughter, etc. They follow the principles used in classic spoken language transcriptions. More specifically, they follow the conventions of the CHILDES system (CHAT: MacWhinney, 2000), as the audible languaging is first transcribed in the CHAT format independently from the rest of the coding system. When it is finished, a conversion is performed using a specific tool (TEICORPO: Parisse et al., 2022) which produces an ELAN file with the transcription in the correct ELAN tier.
6.4.3 Descriptions of visual languaging
These descriptions target all symbolic gestures (arms, face, torso). In our theoretical approach, we consider that sign language and gestures are on the same continuum, because gestures have symbolic meaning in sign languages and are used productively in languaging.
Moreover, there is ambiguity in sign languages between what is a gesture specific to LSF and a shared gesture with the surrounding cultural community (that signers and nonsigners may use, such as pointing, shrugs or headshakes), as these shared gestures are often fully incorporated (or grammaticalized) in sign language.
So, we do not separate sign language and gesture in our template. Visual languaging can also be produced by hearing speakers when they produce symbolic gestures.
French Sign Language (LSF) is annotated in the sub-tier named "script" and using ID-Gloss (consistent labels in the written surrounding language, including codes for non or semi-lexical unit, see Johnston, 2014). A free translation is provided on another dependent tier.
More fine-grained analyses of the hand-movements are not currently included in our work, but will be conducted later in the project.
6.4.4 Addressee(s)
The description can be completed by a dependent description of the addressee(s) of the languaging. This line, called "interloc", contains only a controlled vocabulary with the codes of the participants. It is possible to include more than one addressee at the same time.

[Table 1: simplified version of speaking family annotation]

Table 1 shows a simplified version of part of the coding for a speaking family. The notations '---' delimit the duration of the father's visible or audible production. Code F is for Father, Ca for elder child. Aud is for Audible, Lng for Languaging, Interloc is the addressee (so for example interloc-lng-aud-Ca corresponds to the addressee(s) of the elder child when producing languaging in audible form). 1-Ca corresponds to languaging directed at the elder child only, 1-F to languaging directed at the father only. Table 2 shows a simplified version for a signing family. M is the mother, Cb is the younger child. The languaging part is in the lng-vis-M tier instead of the lng-aud-F tier. An example of theme is given.
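The tier names shown in Tables 1 and 2 follow a regular pattern (dimension, channel, participant code, with an optional interloc prefix). A small sketch, assuming this naming pattern generalizes across the template (which may not hold for every tier), can build and parse such names:

```python
PARTICIPANTS = {"F": "father", "M": "mother", "Ca": "elder child", "Cb": "younger child"}

def tier_name(dimension, channel, participant, addressee_tier=False):
    """dimension: e.g. 'lng' for languaging; channel: 'aud' or 'vis'; participant: code such as 'F'."""
    name = f"{dimension}-{channel}-{participant}"
    return f"interloc-{name}" if addressee_tier else name

def parse_tier(name):
    """Split a tier name back into its components (assumed pattern only)."""
    parts = name.split("-")
    addressee_tier = parts[0] == "interloc"
    if addressee_tier:
        parts = parts[1:]
    dimension, channel, participant = parts
    return {"dimension": dimension, "channel": channel,
            "participant": PARTICIPANTS.get(participant, participant),
            "addressee_tier": addressee_tier}

print(tier_name("lng", "aud", "F"))       # -> lng-aud-F
print(parse_tier("interloc-lng-aud-Ca"))  # addressee tier for the elder child's audible languaging
```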
Queries
As presented in part 4.2.2, queries will be very useful to structure the data available in our corpus. Indeed, having a languaging and semiotic approach means that there is no preset organization of the data such as what can be imagined in a theory based on the primacy of conventional speech or sign. In real-life, gestures can take on symbolic meaning for both the speaker/signer and their addressee(s) on the spur of the moment, language forms can be used in a repetitive manner just to tease someone or emphasize a situation, there can be a gaze before or after either a word or a gesture, etc. Data organization is not stable. This is expressed in our coding by using annotations that are organized according to their temporal boundaries. We thus simply indicate within the beginning and ending boundaries of an annotation if a participant is present, if a participant is producing audible or visible languaging or audible or visible acting, if a participant is gazing at a specific person or object. A preset structure simply cannot be used. Therefore, in order to structure our data and obtain results, we must use the search options.
For example, if we want to know if a child's gaze precedes the mother's gesture or her spoken production, we can use a searching option to find all the possible occurrences. Or we can find all relationships between children's gaze and mothers' speech and conduct statistical analyses.

[Figure 2: query for some languaging followed by at least 300 ms by an action for the mother]
We can perform the same type of search within the coding for each participant. For example, we can find out what specific spoken utterance (annotated in the audible languaging tier) is produced by the mother before certain gestures (annotated in the visible languaging tier). This can be found by searching for the right overlap between speech and gesture. If we want to know only when the overlap is at least 300ms, we can add this condition (see Figure 2). There is no need to have a predefined organization of the template to analyze how speech and gesture are coordinated, as long as they are correctly coded within their temporal deployment.
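Under the same assumptions as the previous sketch (a simple list-of-tuples representation, not ELAN's internal format), the 300 ms condition mentioned above amounts to keeping only the speech-gesture pairs of one participant whose temporal overlap reaches that threshold:

```python
def speech_gesture_pairs(speech_anns, gesture_anns, min_overlap_ms=300):
    """speech_anns / gesture_anns: lists of (start_ms, end_ms, value) for one participant.
    Returns (speech, gesture, overlap_ms) triples with an overlap of at least min_overlap_ms."""
    results = []
    for s_start, s_end, s_val in speech_anns:
        for g_start, g_end, g_val in gesture_anns:
            overlap = min(s_end, g_end) - max(s_start, g_start)
            if overlap >= min_overlap_ms:
                results.append((s_val, g_val, overlap))
    return results

# Hypothetical annotations for the mother (values are illustrative only).
mother_speech = [(5_000, 6_800, "tu veux encore du pain ?")]
mother_gesture = [(5_400, 6_200, "points at bread basket")]
print(speech_gesture_pairs(mother_speech, mother_gesture))
# -> [('tu veux encore du pain ?', 'points at bread basket', 800)]
```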
If we want to find out the specific spoken interactions between the speaking participants, we can find them. If we want to cross tabulate this data with their gestures, or the absence of gesture, we can do that as well.
Queries are the perfect answer to a coding situation that is not clearly predefined, or that relies mostly on timing. ELAN is thus an excellent tool to conduct analyses of multimodal multiparty situated interactions with no preconceived ideas on how to pair form (including action, LSF, French, gaze, and gesture) and meaning.
Limits of the implementation
There are cases where the temporal information is not sufficient to determine the degree of relationship between what is coded in the various modalities. One example is long distance temporal relations, which can be found using queries, but which could be hidden within multiple other relations that do not make sense. Another example is that things that occur at the same time might not be related, which is the case in multiparty interactions when at least two conversations occur at the same time.
These limits can be handled using structural information, which we do not use much because, as explained above, it is difficult to organize. Another means is to use the semantics of the values used in the coding process. For example, coding which people are engaged in a participation framework provides information about which temporal relations are meaningful or not. Using the right codes allows us to make relevant queries that are based solely on temporal information.
Conclusion
In this paper, we have presented the organization of an ELAN template and the use of the ELAN query module which will allow us to test our research questions. These include:
1. Because of the specialized role of gaze and of the articulators involved (mouth, hands, arms…), are there crucial differences between coordinating speaking vs. signing, and eating?
2. Will children become increasingly expert at coordinating semiotic resources and at navigating between activities?
3. Will developmental regularities according to age, cognitive, social and linguistic development as well as mode of expression (speech, sign, gesture) be identifiable despite individual and family variation?
4. Will the mode of expression and its formal components, when deployed in situated activities, have a major impact on how children construct meaning and develop language?
More research issues might be raised in the course of our project, but as of now, the multimodal nature of language has led us to develop a method to investigate how various semiotic systems such as speech, sign, gesture, posture, facial expressions and gaze but also actions and object manipulations, are simultaneously deployed, transmitted and used in the situated activities combined in family dinners. During multiparty, multimodal situated interactions in coordination with other body activities, every move, every part of the body, every object is potentially meaningful. They are deployed in a multitude of skillful variations in the collective coordination of bodies, activities and artifacts.
We thus use the affordances of the ELAN software specifically designed to annotate gesture and sign as well as other semiotic resources. ELAN can be extremely useful to analyze multiparty co-activities such as conversing and dining as it integrates temporal boundaries for the annotations and since both independent main tiers and dependent secondary tiers can be articulated in the template. Its very powerful searching options allow us to use queries in order to structure our analyses of the data. We can thus obtain results on all the possible research questions we might have concerning the orchestration of actions, gaze, signs, speech and gestures in the varying participation frameworks that occur in spontaneous ecological conversations. | 2022-07-01T10:43:47.624Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "bc09ed4634f29ad62109c21b4ace38fceced8015",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bc09ed4634f29ad62109c21b4ace38fceced8015",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
188600392 | pes2o/s2orc | v3-fos-license | New Methods for Sustainable Circular Buildings
Cities can and should be an open field for sustainable circular guidelines, since the complexity of their scale gives them an impact (positive or negative) on the environment that is as deep as their dimension. In this scenario, the construction industry aims to develop products that fulfil functional requirements while also ensuring safety and durability throughout all life cycle phases, promoting reversible buildings in order to avoid building obsolescence and the waste of resources. Most Building Sustainability Assessment methods require detailed input data, hampering their usability at early design stages. Therefore, the paper presents two complementary methods that are being developed to promote early-stage sustainability through both sustainability design decision-making guidance and the assessment of investment willingness and affordability. The first method enables project teams to compare design alternatives and verify which is the most sustainable choice, and alerts them to how sustainability concerns are linked to all design criteria, constraints and decisions. The second method is a cost-benefit analysis method to analyse and compare building solutions, taking into account the stakeholders' investment willingness and market availability. These new approaches can lead to a more sustainable built environment and contribute to a more circular economy, since they allow reversible and transformable buildings to be considered from the early design stages and solutions to be chosen that are closer to the building stakeholders' investment willingness and the users' affordability.
Introduction
The building sector is one of the most resource-consuming sectors in the European Union. Over their whole life cycle, from the extraction of materials and the manufacturing of construction products to construction, use and maintenance, buildings in the EU account for around one half of extracted materials and energy consumption, and one third of water consumption and generated waste. On the other hand, the building sector also has a significant impact at the social and economic level: it is estimated to be worth 10% of global GDP and employs 111 million people [1].
In the Roadmap to a Resource Efficient Europe, buildings are highlighted as one of three key sectors to be addressed. Better construction and use of buildings could yield significant resource savings: 42% of final energy consumption; about 35% of total GHG emissions; 50% of extracted materials; and up to 30% of water in some regions [2].
Concept of sustainability and building industry
Sustainable Development is a concept whose importance has grown significantly in recent decades. The global economic crisis has reinforced growing environmental concerns and raised the population's awareness of a necessary and inevitable change in the values of their societies. The complacency and lack of concern of people worldwide, especially in industrialised countries, lie at the root of this increasingly three-dimensional crisis (social, environmental and economic). In this scenario, the positive contribution of new initiatives aimed at changing this global attitude of the world population should be highlighted.
Today there is an explosion, and even a trivialisation, of the concept of sustainability, which seems to be omnipresent in the daily lives of today's population. Environmental issues have been widely publicised by the media and are often manipulated by advertisers in order to achieve objectives other than improving the planet's environmental, economic and social condition. Therefore, in order to witness real change in society aimed at improving sustainability performance, it is necessary to inform populations and raise their awareness in an accurate way [3].
Building Sustainability Assessment methods
The major reason behind the development of systems to support the environmental performance assessment of buildings was that countries were unable to say how sustainable a building was. This was also true for countries and design teams that believed they were experts in this field. In this regard, several countries have developed their own sustainability assessment systems, adapted to their reality and presented as capable of guiding the overall performance of this sector. Most of these systems are based on local rules and legislation and on locally conventional construction technologies, with the default weight of each indicator set according to the actual local socio-cultural, economic and environmental context [4].
Among the systems and assessment tools currently available on the market, some stand out for their wide use and accuracy: BREEAM (Building Research Establishment Environmental Assessment Method); CASBEE (Comprehensive Assessment System for Building Environmental Efficiency); DGNB (Deutsche Gesellschaft für Nachhaltiges Bauen); Green Star; HQE (Association pour la Haute Qualité Environnementale); LEED (Leadership in Energy & Environmental Design); NABERS (National Australian Built Environment Rating System); and SBTool (Sustainable Building Tool).
SBTool is a generic framework for rating the sustainable performance of buildings and projects and authorized third parties can be allowed to establish adapted SBTool versions as rating systems to suit their own regions and building types. For instance, owners and managers of large building portfolios, can also use it to express in a very detailed way their own sustainability requirements to their internal staff or as briefing material for competitions. Lastly, it can also be an educational tool, since developing benchmarks for a wide range of issues is a useful experience for graduate and post-graduate students.
In Portugal, some tools have been developed under the structure of SBTool. There are already three available, focused on the following building types: residential buildings; office buildings; and tourism buildings. In addition to buildings a method and a tool to assess the sustainability of urban areas and urban neighbourhoods (SBTool Urban) is also under development.
International Policies
The differences between the criteria of the different assessment tools make the definition of "Sustainable Construction" subjective and make it difficult to compare the results obtained from each of the methods. In this context, the International Organization for Standardization (ISO) and the European Committee for Standardization (CEN) have been active in producing standards (eight and eleven, respectively) for the environmental and sustainability assessment of buildings. Considering funding programs, it is possible to highlight Horizon 2020, the biggest EU Research and Innovation program ever, running over seven years (2014 to 2020). The European Union (EU) has established demanding targets to be achieved by 2020, 2030 and 2050: reduction of GHG emissions; share of renewable energy consumption; energy savings compared with the business-as-usual scenario; and share of renewable energy in the transport sector.
Regarding legislation, directives and standards have been published on building materials (EN 15804), construction and demolition waste (Directive 2008/98/EC) and indoor environment quality (EN 15251). However, regarding sustainable buildings, the legislation is mainly focused on energy: the Energy Performance of Buildings Directive (2010) and the Energy Efficiency Directive (2012).
Recently, Directive 2018/844 of 30 May 2018, amending Directive 2010/31/EU on the energy performance of buildings and Directive 2012/27/EU on energy efficiency, was published. The main objectives of this new Directive are to accelerate the cost-effective renovation of existing buildings, to introduce building automation and control systems as an alternative to physical inspections, to encourage the implementation of the infrastructure necessary for efficient mobility, and to introduce an intelligence indicator to assess the technological readiness of the building. Among the changes introduced, the following stand out [5]: the introduction of new definitions, such as "building automation and control system"; the implementation, by 2050, of a long-term strategy to support the renovation of the Member States' building stocks, transforming them into energy-efficient and decarbonised stocks; instructing the EC to act legally, through actions complementing this Directive, by establishing a common voluntary scheme for rating the degree of readiness of buildings for smart applications, with the definition of an indicator and a calculation method; establishing mandatory periodic inspections of heating and air-conditioning installations with a rated output of more than 70 kW; and determining primary energy consumption in kWh/(m².year) as the numerical indicator for certification purposes and for meeting minimum energy efficiency requirements.
Actions for sustainable circular economy
The circular economy provides an industrial system that is restorative and regenerative by design, considering the different building life cycle phases from the early design stage, supporting a low-carbon economy and sustainable growth, maximising the benefits and reducing the costs. In addition, it also delivers dynamic and interactive services and enables expert assistance, learning, and peer-to-peer sharing of experiences to reduce human error [6]. Even with all these actions being carried out, the majority of buildings are still not sustainable, and only a small number of commercial and residential buildings are certified by a BSA method. The main reasons for this are that sustainable solutions have higher initial costs, the lack of information on the costs and benefits of solutions, the lack of public awareness, disbelief in the social and economic benefits of sustainability, and the lack of political support and incentives. So, what can be done?
Methods for sustainable circular buildings
According to the Ellen MacArthur Foundation and SYSTEMIQ report [7], three main themes within the built environment should be invested in to promote a circular economy: (i) designing and producing circular buildings, through designing and producing multi-use, highly modular buildings and energy-positive buildings made of durable, non-toxic materials; (ii) closing building loops, by ramping up the recycling and re-manufacturing of building materials; and (iii) developing circular cities, by integrating circularity into urban developments through innovative business models. The first of these themes relates directly to building design. Moreover, in order to implement this concept and make it sound, decision-making support tools are required to aid building design. Such tools help practitioners implement new design solutions oriented towards the circular economy and sustainability.
Accounting for sustainability concepts should occur as early as possible in the building design process, like any other required aspect of the project, to increase the probability of a successful, sustainable outcome [8]. It is essential to define key goals and establish targets against which alternative design solutions can be evaluated and compared. This enables the identification of measurable criteria to assist designers in defining the solutions that will accomplish the project goals with minimal environmental impacts and costs. If the goals are not easily measurable and understandable, limitations and inefficacy in their achievement may occur [9]. Most of the existing building sustainability assessment (BSA) methods are not applicable during early design, as they require a level of data detail that is not available early in the project [10].
Moreover, it is essential to bear in mind the economic viability of the design solutions under study. Even if a solution has a high performance with low environmental impact and could lead to a low life cycle cost, it cannot be considered sustainable or economically viable if stakeholders are not willing to pay for it. Several studies have already addressed the economic viability of building solutions [11] and the stakeholders' willingness to pay for sustainable solutions [12]. However, a comparative methodology that takes this parameter into account is still lacking.
Taking the above into consideration, it is of major importance to consider the stakeholders' opinion in the process of comparing building solutions, because it leads to the selection of the solutions that best suit their interests. Therefore, two novel methods are presented in this work. The first is a design support tool which enables designers to set sustainability goals early in the project and aids them in attaining those goals, allowing the comparison of alternative solutions. The second is a cost-benefit analysis method, in which the selection of the best building solution is based not only on LCC and solution performance, but also on the stakeholders' desire/willingness to invest in sustainable solutions.
Early stage design method for building sustainability
The decision-making support method intends to assist early-stage design in pursuing a building's life cycle sustainability. The main aim of the method is to help designers define sustainability goals and accomplish them through guidance and through the evaluation and comparison of the performance of design solutions. It intends to make designers aware that all design criteria, constraints, and decisions relate to sustainability concepts.
To cope with that, the method was developed with two main approaches: (i) quantification and (ii) decision-making. The first enables measuring the potential impacts of design alternatives within all the sustainability indicators and their burden on whole-building sustainability. The second provides critical information for the decision-making process by comparing the performance of design alternatives. Both viewpoints contribute to improving the sustainability of the built environment and to endowing designers with sustainability awareness.
Additionally, the method has the following premises: (i) be simple and easy to use; (ii) be in line with international standards for sustainable construction; (iii) embrace the three sustainability dimensions; (iv) allow quantitative and qualitative criteria to be used simultaneously; and (v) give the guidance required to understand the implications of sustainability in the design [13].
Considering the above, existing BSA tools and international standards were reviewed, together with relevant literature on the topic, and questionnaires were applied to understand the designers' perspectives. From that research and data analysis resulted the sustainability matrix included in the decision-making method. The sustainability matrix is organised in a tree structure with the following broad categories: Materials and resources, comprising the life cycle environmental impact of materials and the efficient use of resources; Wellbeing, consisting of inhabitants' health and comfort indicators and the building's functionality; Life cycle costing, covering investment, operational and end-of-life costs; Location, encompassing the site conditions, ecology and social constraints; and Technical and management, accounting for project quality and management.
Each of these categories is further divided into indicators and sub-indicators, amounting to nineteen indicators and thirty-nine sub-indicators. Designers can select which indicators and sub-indicators to assess, and in which order. Unlike other BSA tools, this method does not weight or aggregate results; these are presented individually, as mid-point indicators. Figure 1 presents the method's workflow. Objectives can be defined first and then, for each indicator, the performance of the design solutions can be estimated, or guidelines can be given to achieve the desired levels, and the solutions compared. The comparison of alternative solutions enables designers to identify the one that best fits their goals and has the highest performance.
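To make the indicator-by-indicator comparison concrete, the sketch below shows one possible way of organising such a comparison without weighting or aggregating results. The categories, indicators, alternatives and values are purely illustrative assumptions and are not taken from the actual tool.

```python
# Minimal sketch: compare design alternatives indicator by indicator,
# without weighting or aggregating results (all values are illustrative).

# Hypothetical sustainability matrix excerpt: indicator -> whether a lower
# value is better (e.g. cost, impact) or a higher value is better.
LOWER_IS_BETTER = {
    "Materials and resources / Embodied impact": True,
    "Life cycle costing / Investment cost": True,
    "Wellbeing / Daylight factor": False,
}

# Estimated performance of two hypothetical design alternatives.
alternatives = {
    "Alternative A": {"Materials and resources / Embodied impact": 320.0,
                      "Life cycle costing / Investment cost": 95_000.0,
                      "Wellbeing / Daylight factor": 2.1},
    "Alternative B": {"Materials and resources / Embodied impact": 280.0,
                      "Life cycle costing / Investment cost": 110_000.0,
                      "Wellbeing / Daylight factor": 2.6},
}

def best_per_indicator(alts):
    """Return, for each indicator, the alternative with the best mid-point value."""
    result = {}
    for indicator, lower_better in LOWER_IS_BETTER.items():
        pick = min if lower_better else max
        result[indicator] = pick(alts, key=lambda name: alts[name][indicator])
    return result

for indicator, winner in best_per_indicator(alternatives).items():
    print(f"{indicator}: best alternative is {winner}")
```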
Cost-Benefit Analysis (CBA) method
The CBA method aims at comparing the sustainability performance and costs of different building solutions. The method uses a visualization approach based on a two-dimensional graphical representation (Figure 2), where the horizontal axis depicts the Sustainability Level (SL), the vertical axis represents the LCC, and each point represents a solution. The cheaper solutions appear at the bottom of the graph, while the solutions with better sustainability performance appear furthest to the right. In this method the sustainability assessment is carried out through the evaluation of seven key indicators, defined after analysing several European sustainable building assessment methodologies, European projects, and ISO and CEN standards. The following indicators were chosen: energy consumption, water consumption, building material LCA, thermal comfort, acoustic comfort, lighting, and indoor air quality. Each design solution is then analysed from an LCC perspective. In the end, the global assessment of each solution is achieved by a multi-criteria analysis.
This method considers relevant aspects when comparing solutions. When a solution is cheaper and has a better sustainability performance than the others, it is easily concluded that it is the better one (in the graph it appears in the bottom right corner, quadrant IV). However, when comparing alternatives in which one is more expensive but performs better than the other, such conclusions may not be so obvious. Therefore, a comparison method is required that accounts for the value of money, or for the stakeholder's willingness to invest in expensive high-performance solutions or in cheap low-performance solutions. This comparison method is crucial to answer the question: to what extent is someone willing to pay for a certain sustainability performance improvement in a building?
Mathematically, this corresponds to selecting an ideal cost-benefit ratio for a given budget. In figure 2, the red line represents this ratio. This ratio can vary between stakeholders. In order to define it, stakeholders' investment willingness should be analysed to adapt the line to each stakeholder.
This will allow not only the comparison of solutions but also the selection of the one that best suits each individual. As the willingness to invest in sustainable measures can diverge for different investment or desired sustainability levels, the line can take a linear or non-linear shape.
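As an illustration of how the willingness line could be operationalised, the following sketch screens hypothetical solutions in the (SL, LCC) plane against a linear willingness-to-invest line. The solutions, the line parameters and the selection rule are assumptions made for the example; the method's full multi-criteria analysis is not reproduced here.

```python
# Minimal sketch: screen building solutions against a stakeholder's
# willingness-to-invest line in the (sustainability level, life cycle cost)
# plane. The solutions, the line and the selection rule are illustrative.

# Hypothetical solutions: sustainability level SL (0-1) and LCC in euros.
solutions = {
    "Solution 1": {"SL": 0.45, "LCC": 48_000.0},
    "Solution 2": {"SL": 0.62, "LCC": 61_000.0},
    "Solution 3": {"SL": 0.78, "LCC": 90_000.0},
}

def willingness_line(sl, base_cost=40_000.0, rate=45_000.0):
    """Maximum LCC a hypothetical stakeholder accepts for a given SL.
    Linear here, but the relation could equally be non-linear."""
    return base_cost + rate * sl

# Keep solutions on or below the line, then prefer the highest SL among them.
affordable = {name: s for name, s in solutions.items()
              if s["LCC"] <= willingness_line(s["SL"])}
if affordable:
    best = max(affordable, key=lambda name: affordable[name]["SL"])
    print("Preferred solution under this willingness line:", best)
else:
    print("No solution is within the stakeholder's investment willingness.")
```

With these invented figures, Solution 2 would be selected: Solution 3 performs better but exceeds what this hypothetical stakeholder is willing to pay for the improvement.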
Case Study
A typical Portuguese single-family building was considered (Figure 3). The building has two bedrooms and a total built area of 110 m². Hypothetically, it is located in Lisbon at an altitude of 71 m, and the objective of the analysis is to identify the comparatively best solution for the improvement of energy efficiency.
The case study building solutions were defined taking into account the most common building solutions used in Portugal between 1960 and 1990. The building was considered to be air conditioned through mobile heating and cooling systems (COP = 1; SREE = 3.5) and to be naturally ventilated. The comfort temperatures recommended by the Portuguese thermal regulation were considered for the analysis of the energy needs: 18 ºC for the heating season and 25 ºC for the cooling season. The ventilation was assessed through dynamic simulation using the EnergyPlus AirflowNetwork module. The building was considered to be occupied by three persons from 7 pm to 8 am on weekdays and all day during the weekends.
Rehabilitation Scenarios
Three rehabilitation scenarios were analysed (Table 1). In scenario 1, only passive measures were considered. In scenario 2, in addition to the passive measures, more efficient building systems were defined. In scenario 3, the measures from scenario 2 were combined with a heat pump. The rehabilitation scenarios are therefore:
1. The passive measures only.
2. The passive measures of scenario 1 plus the substitution of the building acclimatization equipment by an air conditioning system (COP = 4.12; SREE = 8.53) for heating and cooling, and of the DHW system by a gas condensing heater (COP = 0.881).
3. The measures of scenario 2 with the addition of a self-consumption photovoltaic kit with a production of 1500 W (Eren = 2290 kWh/year, where Eren is the photovoltaic system energy production across a year; the panels were considered to face South with a slope of 35º).

Figure 4 presents the energy use of each rehabilitation scenario.

Considering the indicator I5 (Energy Efficiency) of the early stage design method, it is possible to estimate the total primary energy required by the building. Briefly, the indicator estimates the energy needs using the ISO 13790 general framework for the heating and cooling calculations, while EN 15316-3-1 is used for the estimation of the DHW production needs. The model requires as input the building envelope solutions (materials and thickness for each building element) and the heating, cooling and domestic hot water preparation systems (type of system). The energy needs for heating, cooling and domestic hot water preparation are then presented. If needed, alternative solutions can be tested and their performances compared.
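Indicator I5 follows the ISO 13790 and EN 15316-3-1 procedures; as a rough illustration of the type of inputs involved (envelope areas and U-values, ventilation and gains), the snippet below computes a heavily simplified seasonal heat balance. All values are hypothetical and the calculation is not the standardised one used by the indicator.

```python
# Heavily simplified seasonal heating-need estimate in the spirit of a
# quasi-steady-state heat balance (illustrative only; not the ISO 13790
# procedure used by indicator I5). All values are hypothetical.

def seasonal_heating_need(elements, ach, volume, degree_hours, gains_kwh, utilisation=0.9):
    """
    elements: list of (area_m2, u_value_W_m2K) for the building envelope
    ach: air changes per hour; volume: heated volume in m3
    degree_hours: heating-season degree-hours in kKh (kelvin-kilohours)
    gains_kwh: seasonal solar plus internal gains in kWh
    """
    h_transmission = sum(a * u for a, u in elements)        # W/K
    h_ventilation = 0.34 * ach * volume                     # W/K (air heat capacity)
    losses_kwh = (h_transmission + h_ventilation) * degree_hours  # kWh when degree_hours is in kKh
    need = losses_kwh - utilisation * gains_kwh
    return max(need, 0.0)

# Example: small single-family envelope, mild climate (made-up numbers).
envelope = [(180.0, 0.5), (110.0, 0.35), (20.0, 2.8)]  # walls, roof/floor, glazing
q_heat = seasonal_heating_need(envelope, ach=0.6, volume=275.0,
                               degree_hours=45.0, gains_kwh=2500.0)
print(f"Estimated seasonal heating need: {q_heat:.0f} kWh")
```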
Results
It was verified that the simple adoption of passive measures decreases the annual building energy use by 69 kWh/m².year. These passive measures combined with more efficient but conventional building systems allow the energy use to be decreased by around 118 kWh/m².year. The adoption of a heat pump, which is a more efficient but usually also more expensive piece of equipment, allows an even better energy performance to be obtained, decreasing energy use by 127 kWh/m².year. Table 1 presents the economic analysis regarding the CBA method. The payback time was verified to be high in all three rehabilitation scenarios. Nevertheless, the scenario with the best payback time is scenario 2. It was also verified that, regarding the payback time, the scenario based only on passive measures presents a performance very similar to that of the other scenarios, which combine these measures with more efficient energy systems.
The initial cost has a very significant influence on the economic analysis of the rehabilitation measures of residential buildings. Even in scenarios with relevant annual savings, the initial costs of the solutions mean that the return on investment is only achieved in the last years of the life cycle.
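To illustrate why high initial costs push the return on investment towards the end of the life cycle, the snippet below runs a simple, non-discounted payback calculation. The costs and annual savings are hypothetical placeholders, not the values behind Table 1.

```python
# Simple (non-discounted) payback sketch with hypothetical numbers.

def simple_payback(initial_cost, annual_saving):
    """Years needed for cumulative savings to cover the initial investment."""
    return float("inf") if annual_saving <= 0 else initial_cost / annual_saving

scenarios = {
    # scenario: (hypothetical initial cost in euros, annual energy-cost saving)
    "Scenario 1 (passive only)": (18_000.0, 750.0),
    "Scenario 2 (+ efficient systems)": (24_000.0, 1_300.0),
    "Scenario 3 (+ photovoltaic kit)": (29_000.0, 1_450.0),
}

for name, (cost, saving) in scenarios.items():
    print(f"{name}: payback of about {simple_payback(cost, saving):.1f} years")
```

Under these invented figures scenario 2 has the shortest payback, which mirrors the pattern reported in Table 1 but is not derived from it.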
Conclusions
The early stage sustainability design tool fills an existing gap in the universe of BSA tools, as it enables sustainability concepts to be considered from the early design stages. This method aids the decision-making process through the comparison of the sustainability performance of design alternatives at the indicator and sub-indicator level. This ensures that decisions regarding sustainability concerns are made consciously. Acting so early in the project will not only improve the possibilities to promote sustainability but will also reduce the associated costs.
The cost-benefit method helps stakeholders to easily compare building solutions and understand the benefits of their investment. It also encourages them to invest in high-performance solutions with higher initial costs, since the benefits of such an investment, in terms of sustainability and cost savings, are easily understood. The method also allows the identification of measures with low investment availability and high performance. This is important to inform government bodies of the measures and aspects for which it is necessary to develop funding programs in order to improve sustainability. | 2019-06-13T13:22:20.387Z | 2019-02-24T00:00:00.000 | {
"year": 2019,
"sha1": "4cab00dcd8f49fb18213b0ebffbb36d07b462731",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/225/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d4fa4e7ddfa6dc2d91fb7f23243ee9258e29efba",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science",
"Physics"
]
} |
233557082 | pes2o/s2orc | v3-fos-license | Outsourcing of Pharmaceutical Care Services: A New Initiative Project in the Kingdom of Saudi Arabia
ABSTRACT Objectives: To explore the outsourcing of pharmaceutical care services as a new initiative project in the Kingdom of Saudi Arabia. Methods: This is a new initiative project driven by the international outsourcing of pharmaceutical care services guidelines. The project has been formulated from the global business model, pharmaceutical project guidelines, and professional project management of a new project. The initiative project was written by project management professionals. It consists of the following parts: the initial phase, the planning phase, the execution phase, and finally, the monitoring and controlling phase. Results: We explored the outsourcing of pharmaceutical care services with a defined vision, mission, and goals. The services had various benefits, including clinical and economic outcomes for patients. The risk management model was explored, which assures the continuity of the project. Moreover, the monitoring and controlling of the project's services was described. The transition to an operational project through the closing project stage has also been explored in this study. Conclusion: The outsourcing of pharmaceutical care services is a new initiative project and part of the pharmacy strategic plan within the Saudi Vision 2030 programs. The outsourcing of pharmaceutical care services meets the pharmacy workforce's demand, completes the requirement of some pharmacy services, and improves clinical pharmacy sections without additional cost. We highly recommend it to be implemented in Saudi Arabia.
INTRODUCTION
Recently, several implementations and improvements have been made in pharmaceutical care services in the Kingdom of Saudi Arabia, including unit-dose drug distribution systems, intravenous admixture services, medication safety measures, drug information services, and clinical pharmacy services. [1][2][3] The workforce of distributive pharmacists and clinical pharmacists has also increased. [4][5][6] However, these improvements do not meet the demand for pharmacy workforces, the service requirements, or the fundamentals of the pharmacy strategic plan and the updated strategies aligned with the New Saudi Vision 2030. [7][8][9][10][11][12][13] It will take a long time until the plan is executed. As a result, the improvement of pharmacy services needs to be facilitated within a short period and with an appropriate economic burden on healthcare services through the utilization of outsourced pharmaceutical care services. [14][15][16] The American Society of Health-System Pharmacists (ASHP) addressed the outsourcing of pharmacy services around 20 years ago 17 and released specific outsourcing guidelines related to intravenous admixture services. 18 The outsourcing of pharmaceutical care services can either fully operate or fully substitute the pharmacy services on behalf of the owner. 17 So far, various studies have reported the experience and benefits of using outsourcing to start or implement pharmaceutical services, for instance, home total parenteral nutrition, medication repackaging systems, drug distribution systems, and intravenous admixture services. [19][20][21][22][23][24] Recently, after the implementation of a new pharmacy strategic plan with the New Saudi Vision 2030, 25,13 the outsourcing of primary healthcare centers through the dispensing of medications on behalf of healthcare institutions, called Saudi managed care pharmacy, was implemented. 26 However, to the best of our knowledge, no studies discuss the outsourcing of pharmaceutical care services in the Gulf and Middle Eastern countries. Therefore, we aimed to describe the outsourcing of pharmaceutical care services as a new initiative project in the Kingdom of Saudi Arabia.
PROJECT METHODS
This new initiative project was driven by the international pharmaceutical outsourcing programs. [14][15][16][17] A task force team for the outsourcing of pharmaceutical care services was formulated, consisting of the authors' pharmacy administration and clinical pharmacy practitioner expertise. The committee developed the guidelines for the outsourcing of pharmaceutical care services by deriving information from international sources of literature and by utilizing pharmacy project guidelines, the international business model, and project management institution guidelines for a new project. [27][28][29][30] The outsourcing of pharmaceutical care services is adjusted based on the outsourcing of the pharmacy services, general pharmacy outsourcing regulations, and the transformation from regular pharmacy services to outsourced pharmaceutical services. The project was written by project management professionals and contains various parts, including the initial phase, the planning phase, the execution phase, and the monitoring and control phase.
Initiative phase
Assessment of needs
Full pharmacy services are not available at most Ministry of Health (MOH) or private healthcare institutions. Implementing any of the pharmacy services will take time and will be costly, whereas utilizing outsourced pharmaceutical services will save time and be less expensive. Moreover, the number of personnel working as pharmacists and pharmacy technicians is not adequate to fully operate the current pharmacy services; therefore, the pharmaceutical companies should provide enough staff to meet the pharmacy staff shortage. Education and training are other important factors determining the outsourcing of new pharmacy services; outsourcing companies should provide educated and trained pharmacists or pharmacy technicians for further services. The utilization of outsourced clinical pharmacy services, including drug information services, will save time for healthcare institutions and expand their current clinical pharmacy at a reasonable cost. Moreover, the Hajj period is a unique situation with a high demand for pharmacy services and the pharmacy workforce. Therefore, the outsourcing pharmaceutical care companies should aim to provide the best pharmacy services to all pilgrims with a high-quality workforce in a short time. As a result, the outsourcing of pharmaceutical care needs to meet the high demand for complete, high-quality pharmacy services, including covering any pharmacy workforce shortage, at a reasonable cost burden for the healthcare system.
SWOT analysis
The SWOT analysis is considered one of the popular tools for assessing the outcome of a new project. It consists of four parts: analysis of the strengths, weaknesses, opportunities, and threats to the project. This project's strengths are that outsourcing pharmaceutical care services covers the shortage of staff, reduces the pharmacy workload, reduces or avoids medication errors, and builds a medication safety culture. This project's weaknesses are the need for education, training, and updating of the pharmacy services system. This project's opportunities include the fact that outsourcing supports the foundations required by current quality accreditation and patient safety programs and meets the Saudi Vision 2030 by utilizing the private sector for the outsourced pharmaceutical care system. This project's threats include the possibility that outsourcing pharmaceutical care companies might suddenly stop operating, or that administrative changes to the plan might occur.
Market Analysis
The majority of pharmacy services are operated by governmental or private healthcare organizations; it is very rare for them to outsource pharmaceutical care services to pharmaceutical companies. Recently, a transformation (from a governmental to a private operation system) has been taking place through the implementation of the Saudi Vision 2030 and the MOH's strategic health plan. 25,31 Therefore, the outsourcing of primary healthcare pharmacy services was implemented through the Saudi managed care pharmacy system (dispensing of MOH prescriptions on behalf of community pharmacies). 26,32 Moreover, for the outsourcing of total parenteral nutrition, compounding pharmaceutical companies prepare total parenteral nutrition (TPN) based on neonatal, paediatric, and adult physicians' orders. These services have been implemented at private hospitals, but not yet at all healthcare institutions. Furthermore, the medication supply company, for instance the National Unified Procurement Company (NUPCO), utilizes logistics to purchase and distribute all pharmaceutical medications. Currently, the medical supply outsourcing system is well established and covers most MOH hospitals and primary healthcare centers, and NUPCO covers non-MOH healthcare organizations. 33 The new plan is to convert all logistic services, including all ambulatory care services, intravenous administration, clinical pharmacy services, and drug information services, from regular operations to the outsourcing system in the Saudi market.
Planning phase
Scope of the project
The scope of the project covers the outsourcing of pharmaceutical care services, including narcotics and psychotropic medications for inpatient services, ambulatory care services, total parenteral nutrition, intravenous admixture preparations, clinical pharmacy services, and compounding or extemporaneous preparations, in addition to drug information services, the inpatient pharmacy, and the medication repackaging system.
Vision, Missions, and Goals
This project's vision is to provide the best outsourcing of pharmaceutical care services, and its mission is to provide appropriate outsourcing of pharmaceutical care services for most pharmacy units. The goals of this project are as follows: to establish the outsourcing of pharmaceutical care services during the transformation to privatization, to provide any missing pharmacy services within a short period, to meet the demand and requirements arising from the shortage of pharmacy staff, to prevent any drug-related problems during pharmacy activities, to reduce the workload of pharmacy staff and healthcare providers, and to avoid additional unnecessary costs to the pharmacy and healthcare system through the utilization of outsourced pharmaceutical care services.
Project description
The following suggested policies and procedures were put in place for every pharmacy staff member and other healthcare individuals:
✓ The guidelines for the outsourcing of pharmaceutical care services should be formulated at healthcare organizations.
✓ The outsourcing of pharmaceutical care services committee should consist of the director of pharmacy, the head of each pharmacy unit, pharmacy quality management, a medication safety pharmacist, and physician and nurse representatives.
✓ The committee should revise the standards for the outsourcing of pharmaceutical care services and update them at least annually.
✓ Education and training sessions about the outsourcing of pharmaceutical care services should be provided.
✓ The committee should deliver these sessions on the outsourcing of pharmaceutical care services to all pharmacy staff and healthcare providers.
✓ The policies and procedures related to outsourcing pharmaceutical care services should be distributed to the healthcare sectors at the organization.
✓ The physician should write the prescription based on the Saudi regulations, and the medication should be dispensed based on the outsourcing pharmaceutical care and medication formulary regulations. 32
✓ If the physician wishes to prescribe outside the outsourcing of pharmaceutical care services guidelines, the justification should be documented.
✓ The prescription should be sent to the pharmacy, and the inpatient or outpatient pharmacist and the pharmacy technician will prepare it based on the outsourcing of pharmaceutical care services.
✓ The pharmaceutical staff send the medications to the ambulatory care patients or the nursing department, and the nurse administers the medicines based on the outsourcing of pharmaceutical care services guidelines.
✓ The pharmaceutical department should measure the clinical outcome of the outsourcing of pharmaceutical care services. 32
✓ The pharmacy department should perform an economic analysis of the outcome of the outsourcing of pharmaceutical care services. 32
✓ The pharmaceutical department should document any medication nonadherence to the outsourcing of pharmaceutical care services through the electronic system. 32
Plan cost management
For each new project regarding the outsourcing of pharmaceutical care services, the management team must set out the financial budget, including the cost of educational courses on outsourcing pharmaceutical care services, the cost of management team meetings, and updated medical or pharmaceutical references on outsourcing pharmaceutical care services. The management team should supervise the budget from the beginning to the end of the project and through the switch to the operating system.
Execution phase
Management team
Project management professionals follow various steps, among which one of the most essential is the execution phase. The execution phase should be led by a team leader. The outsourcing of pharmaceutical care services should be followed from the beginning to the end of the project and converted from a new project into a fully operating system at the healthcare organizations. The team should consist of the following organization members: the director of the pharmacy, clinical pharmacists, distributive pharmacists, pharmacy technicians with expertise in the outsourcing of pharmaceutical care services, physicians and nurses, pharmacy quality management officers, and medication safety officers. The team should implement and follow the guidelines for outsourcing pharmaceutical care services and follow up with regular updates and expansion of the outsourced services. Moreover, the team needs to educate and train the pharmaceutical and healthcare professionals about the new outsourcing of pharmaceutical care services and measure the project's clinical and economic outcomes.
Education and training
Each new project on outsourcing pharmaceutical care services requires special education and training of the pharmacy staff, including clinical pharmacists, pharmacists, and pharmacy technicians. Moreover, healthcare professionals, including physicians and nurses, need additional specific education and training on the outsourcing of pharmaceutical care services. Furthermore, the management team needs to provide orientation concerning the project for all healthcare professionals. The orientation should focus on new healthcare providers who have joined the healthcare institutions covered by outsourced pharmaceutical care services.
Project total quality management
During the implementation phase, various tools can be used to manage total quality within the current project of outsourcing pharmaceutical care services. The balanced scorecard is one tool used during the implementation phase. 34 The instrument consists of four parts: the customer, finance, internal process, and education and innovation perspectives. The assessment of the healthcare services delivered through the outsourcing of pharmaceutical care services is an example of the internal process type. The clinical outcome of outsourcing pharmaceutical care services might reflect the education and competency of all clinical pharmacists, distributive pharmacists, and pharmacy technicians. The financial type includes measurement of the cost avoidance achieved by the outsourcing of pharmaceutical care services. The fourth, customer, type measures the satisfaction of patients and healthcare providers, including healthcare professionals, pharmacists, and pharmacy technicians, with the outsourced pharmaceutical care services in Saudi Arabia.
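A minimal sketch of how the four balanced-scorecard perspectives could be tracked for this project is given below. The perspectives follow the text; the specific KPIs and target values are illustrative assumptions only.

```python
# Illustrative balanced scorecard for the outsourcing project. The four
# perspectives come from the text; KPI names and targets are hypothetical.

scorecard = {
    "Internal process": [
        {"kpi": "Pharmacy services assessed against outsourcing guidelines (%)", "target": 100},
    ],
    "Education and innovation": [
        {"kpi": "Pharmacy staff completing outsourcing training (%)", "target": 90},
    ],
    "Finance": [
        {"kpi": "Documented cost avoidance (SAR/year)", "target": 500_000},
    ],
    "Customer": [
        {"kpi": "Patient satisfaction with pharmaceutical care (score 1-5)", "target": 4.0},
        {"kpi": "Healthcare provider satisfaction (score 1-5)", "target": 4.0},
    ],
}

def report(card):
    """Print a flat list of KPIs per perspective for periodic review."""
    for perspective, kpis in card.items():
        for item in kpis:
            print(f"{perspective}: {item['kpi']} (target {item['target']})")

report(scorecard)
```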
Risk Management
This project has various risks: schedule risks, scope risks, budget risks, personnel risks, technical risks, and quality risks. 35,36 In particular, the project can be exposed to a lack of personnel, budget, technical support, and quality. The project can suffer from personnel risks if there are no trained healthcare professionals or insufficient pharmacists and pharmacy technicians. As an example of budget risk, the education and training sessions for all pharmacy staff and healthcare professionals may not be included in the budget for outsourcing the pharmaceutical care services. The project is also exposed to certain technical risks, such as limited access to electronic scientific resources or the absence of an electronic system in pharmacy practice. Finally, the project may be exposed to quality risks if safety tools are not implemented or personnel are not trained.
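One simple way to operationalise these risk categories is a small risk register ranked by likelihood and impact, sketched below with hypothetical entries and scores.

```python
# Illustrative risk register for the risk categories named above; the
# specific risks, likelihoods and impacts are hypothetical examples.

risks = [
    {"category": "Personnel", "risk": "Insufficient trained pharmacists",  "likelihood": 3, "impact": 4},
    {"category": "Budget",    "risk": "Training costs not budgeted",       "likelihood": 2, "impact": 3},
    {"category": "Technical", "risk": "No electronic pharmacy system",     "likelihood": 2, "impact": 4},
    {"category": "Quality",   "risk": "Safety tools not implemented",      "likelihood": 2, "impact": 5},
]

# Rank risks by a simple likelihood x impact score so the committee can
# prioritise mitigation actions.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f"{r['category']}: {r['risk']} (score {r['likelihood'] * r['impact']})")
```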
Closing of the project
The outsourcing of pharmaceutical care services for healthcare organizations in the governmental and private sectors is highly required to prevent drug-related problems, meet the shortage of pharmacy staff, and expand the pharmaceutical services. The outsourcing of pharmaceutical care services can reduce morbidity and mortality and improve patient outcomes. Moreover, we recommend adopting the outsourcing of pharmaceutical care services to avoid an economic burden on the pharmacy and healthcare system, including hospital and primary healthcare center services in Saudi Arabia. The project should continue outsourcing pharmaceutical care services at each pharmacy unit, with supervision maintained through the related committees. The education and training related to the outsourcing of pharmaceutical care services should be implemented accordingly. The guidelines pertaining to outsourcing pharmaceutical services should be updated regularly, and the number of pharmacy services should be expanded in the future. An annual celebration of all outsourcing pharmaceutical care staff, including pharmacists and pharmacy technicians, is highly recommended in Saudi Arabia. | 2021-04-30T09:01:21.136Z | 2021-04-06T00:00:00.000 | {
"year": 2021,
"sha1": "8841a56a8e0367d98c0299eddf0d7a3b8d84dca6",
"oa_license": "CCBY",
"oa_url": "http://ptbreports.org/sites/default/files/PTBReports-7-1-01.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8841a56a8e0367d98c0299eddf0d7a3b8d84dca6",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
247169254 | pes2o/s2orc | v3-fos-license | Preserved stem cell content and innervation profile of elderly human skeletal muscle with lifelong recreational exercise
Abstract Muscle fibre denervation and declining numbers of muscle stem (satellite) cells are defining characteristics of ageing skeletal muscle. The aim of this study was to investigate the potential for lifelong recreational exercise to offset muscle fibre denervation and compromised satellite cell content and function, both at rest and under challenged conditions. Sixteen elderly lifelong recreational exercisers (LLEX) were studied alongside groups of age-matched sedentary (SED) and young subjects. Lean body mass and maximal voluntary contraction were assessed, and a strength training bout was performed. From muscle biopsies, tissue and primary myogenic cell cultures were analysed by immunofluorescence and RT-qPCR to assess myofibre denervation and satellite cell quantity and function. LLEX demonstrated superior muscle function under challenged conditions. When compared with SED, the muscle of LLEX was found to contain a greater content of satellite cells associated with type II myofibres specifically, along with higher mRNA levels of the beta and gamma acetylcholine receptors (AChR). No difference was observed between LLEX and SED for the proportion of denervated fibres or satellite cell function, as assessed in vitro by myogenic cell differentiation and fusion index assays. When compared with inactive counterparts, the skeletal muscle of lifelong exercisers is characterised by greater fatigue resistance under challenged conditions in vivo, together with a more youthful tissue satellite cell and AChR profile. Our data suggest a little recreational level exercise goes a long way in protecting against the emergence of classic phenotypic traits associated with the aged muscle.

Key points: The detrimental effects of ageing can be partially offset by lifelong self-organized recreational exercise, as evidenced by preserved type II myofibre-associated satellite cells, a beneficial muscle innervation status and greater fatigue resistance under challenged conditions. Satellite cell function (in vitro), muscle fibre size and muscle fibre denervation determined by immunofluorescence were not affected by recreational exercise. Individuals who are recreationally active are far more abundant than master athletes, which sharply increases the translational perspective of the present study. Future studies should further investigate recreational activity in relation to muscle health, while also including female participants.
Introduction
Age-related loss of muscle mass and function is often unnoticeable and negligible during mid-life, but gradually accelerates, causing most individuals entering their eighth decade of life to have a greatly diminished muscle function (Janssen et al. 2000;Kostka, 2005;Suetta et al. 2019). Among the myriad changes associated with the ageing muscle, myofibre denervation and a decline in the number (Verdijk et al. 2014) and function (Pietrangelo et al. 2009) of muscle stem (satellite) cells are clear features. Myofibre denervation occurs following decay of α-motoneurons in the spinal cord (Campbell et al. 1973;Tomlinson & Irving, 1977;Mittal & Logmani, 1987;Power et al. 2010;Piasecki et al. 2016) or destabilization of neuromuscular junctions (NMJ) (Bütikofer et al. 2011). Loss of myofibre innervation removes the transcriptional specialization normally confined to the small synaptic area, and alters gene expression in the extra-synaptic area of the myofibre (Covault & Sanes, 1985). For example, a strong upregulation of the acetylcholine receptors (AChR), normally confined to the NMJ, is evident along the length of the myofibre upon denervation (Merlie et al. 1984). We (Soendenbroe et al. 2019, 2020) and others (Gigliotti et al. 2015;Baehr et al. 2016;Kelly et al. 2018;Sonjak et al. 2019;Daou et al. 2020;Skoglund et al. 2020;Lagerwaard et al. 2021;Monti et al. 2021) have availed ourselves of this to indirectly investigate myofibre innervation status in human muscle tissue.
Satellite cells are indispensable during embryonic myogenesis and for muscle regeneration during adulthood (Engquist & Zammit, 2021), due to their ability to proliferate, fuse and form myotubes. Given their role as the sole source of myonuclei, satellite cells are also involved in the hypertrophic response to exercise (Murach et al. 2021a). Studies using satellite cell-depleted mice have shown that some hypertrophy can be achieved without satellite cells, but in order to maximize the response to long-term training, satellite cells are required (Englund et al. 2020). It is now also clear that satellite cells interact directly with muscle fibres (Murach et al. 2021b) and with other cell types located in the microenvironment surrounding the muscle fibre, including fibroblasts (Fry et al. 2017) and endothelial cells (Nederveen et al. 2021). Maladaptation of the muscle is evident during persistent overload in the absence of satellite cells, such as increased extracellular matrix and fibroblast number, indicating a regulatory role for satellite cells in ameliorating unfavourable remodelling of the muscle environment (Murach et al. 2018). In relation to the NMJ, it has been shown that a subgroup of satellite cells generate and maintain the specialized myonuclei at the NMJ (Liu et al. 2017;Larouche et al. 2021) and that depletion of satellite cells dampens the regeneration of NMJs following nerve damage (Liu et al. 2015). Although not completely depleted, the aged human muscle has been shown to have fewer satellite cells, especially those associated with type II fibres (Verdijk et al. 2007, 2014;Karlsen et al. 2019, 2020). Furthermore, a link between denervation and satellite cells has been shown, where satellite cells exit the quiescent state following denervation and mount an attempt at compensatory myogenesis (Borisov et al. 2001). Long-term denervated fibres also possess viable satellite cells with preserved renewal capability (Wong et al. 2021).
A key tool in improving muscle function is increasing levels of physical activity (Pahor et al. 2020). Numerous studies have documented the beneficial effects of intense, supervised and short-term interventions (<1 year) on muscle mass (Gylling et al. 2020), strength (Erskine et al. 2011) and other parameters of health (Nordby et al. 2012). However, while short-term interventions of increased physical activity undoubtedly remain an effective countermeasure against age-related loss of muscle function, the effects of self-organized physical activity are less clear. Most studies on aged exercising and sedentary individuals focus on aged master athletes, meaning the best-functioning individuals within their age group , which is a highly select group that constitutes a minor proportion of the general population (Ng & Popkin, 2012). Less than 20% of men and women aged ≥60 performed ≥20 min of vigorous intensity physical activity on three or more days per week (Hallal et al. 2012). In contrast, the group of recreationally active individuals constituted around 60%. From the master athlete studies we know that high levels of physical activity, maintained over many years, preserve muscle mass, strength and power (Klitgaard et al. 1990;Grassi et al. 1991;Mikkelsen et al. 2013;Mosole et al. 2014). Furthermore, electrophysiological (Power et al. 2010) and muscle biopsy (Mosole et al. 2014;Sonjak et al. 2019) studies indicate that exercise influences the neuromuscular system, possibly by facilitating myofibre reinnervation. However, there exists a paucity of knowledge on recreationally active individuals, especially in relation to myofibre morphology, satellite cell numbers and function, and how these relate to indices of muscle denervation.
The potential of exercise to influence the neuromuscular system is substantial. However, there are discrepancies in outcomes between experimental and self-organized exercise interventions, as well as limited data on recreationally active individuals compared with master athletes. We therefore designed the present study to investigate muscle morphology, satellite cells and myofibre denervation in two well-matched groups of elderly individuals different only in their physical activity history. We hypothesized that physically active individuals would possess a higher lean body mass and better muscle function than sedentary individuals, although an inherent decline in muscle morphology and function due to ageing would still exist (relative to the young control group). Furthermore, we hypothesized that positive effects of lifelong recreational physical activity would be evident for indices of myofibre denervation, myofibre size, type II myofibre-associated satellite cells, and satellite cell function in cell culture in comparison with a sedentary lifestyle.
Ethical approval and participants
Experimental procedures were approved by The Committees on Health Research Ethics for The Capital Region of Denmark (Ref: H-19000881) and were conducted according to the standards set by the Declaration of Helsinki, except for registration in a database. Participants signed an informed consent agreement. Two hundred and twenty-three men responded to either newspaper or online advertisements and were screened by telephone and asked wide-ranging questions on their physical activity pattern. Exclusion criteria were age between 40 and 67, obesity (body mass index (BMI) >32 kg/m²), smoking, >14 alcoholic beverages per week, prior muscle biopsies (vastus lateralis), knee pain, current disease and use of anticoagulant medication. Fifty-six men were included into one of three groups: young, elderly lifelong exercise (LLEX) and elderly sedentary (SED). Seven individuals did not complete the study: injury not related to study (1), loss of interest (1), knee pain during exercise protocol (1), muscle biopsy only obtained from one leg (3) or no information (1). Subjects in the LLEX group correspond to Tier 1 in the participant classification framework by McKay et al. (2022). These individuals meet the recommendations for physical activity set by the World Health Organization, often through a combination of different activities, and without a specific aim at competing. Three additional LLEX subjects were excluded, as they ultimately proved markedly less trained in comparison with the rest of the group. The final number of participants included was 46 (15 young, 16 LLEX and 15 SED).
Young and SED were healthy and had not performed structured physical activity, such as regular football or resistance exercise, or any physical activity during everyday life (e.g. cycling or walking for transportation) for at least 10 (young) or 30 (SED) years prior to enrolment. LLEX had performed multiple sports throughout their adult life. We sought to include participants who had at least partially performed sports which would lead to recruitment of type II myofibres in the lower extremities (high force or high speed). Specific activities reported were as follows (individuals performing each activity; individuals performing each activity as their primary activity): strength training (10;3), ball games (5;3), racket sports (5;3), cycling (5;3), rowing (4;1), running (4;1), gymnastics (3;1), athletics (2;1), martial arts (1;0) and swimming (1;0).
Study design
The study was comprised of three visits to the research facility, taking place between 08.00 and 13.00 (Fig. 1A).
The participants were instructed to refrain from physical activity from two days before visit 1 and for the entire course of study, and they were asked to transport themselves to the institute by car or public transportation. On visits 1 and 3 they were instructed to drink a provided protein shake (Bodylab ShakeUp!, 330 ml, 26 g protein, 284 kcal) at home 2 h before the experiment started instead of their normal breakfast.
Visit 1 consisted of a dual energy x-ray absorptiometry (DEXA) scan, blood sampling, maximal strength testing and a bout of unilateral heavy resistance exercise. Visit 2 consisted of a blood sample. On visit 3, another blood sample was taken, followed by bilateral muscle biopsies.
The leg that was subjected to the exercise bout was block-randomized for dominant/non-dominant, resulting in 8/7 (young), 8/8 (LLEX) and 6/9 (SED). The SED group ended up being unbalanced, as two participants dropped out after being allocated to a group.
DEXA scan
Thirty minutes before the scan, the participants drank 0.5 l of water, and they emptied their bladder immediately before lying down in the scanner (Lunar DPX-IQ, GE-Healthcare). The participants were carefully positioned and lay supine for 10 min before the scan. Lean body mass (LBM), total bone mineral content, fat percentage and android fat mass were chosen as the outcomes.
Blood samples
Blood samples were obtained from an antecubital vein. General health parameters were analysed on visit 1, and creatine kinase was analysed on all visits, following standard methods at the Department of Clinical Biochemistry.
Figure 1. Study design and exercise protocol. A, three visits spread over 7 days, with timing of exercise, blood samples and biopsies indicated. B, unilateral bout of heavy resistance exercise performed on visit 1. Two rounds, separated by a 5−10 min break, each consisting of four sets of concentric and four eccentric isokinetic contractions. The first, fifth and 10th concentric repetitions and the first, third and fifth eccentric repetition from each set was sampled. Maximal voluntary contractions were performed before and immediately after the exercise bout and after a 5 min break. Abbreviations: DEXA, dual energy x-ray absorptiometry; MVC, maximal voluntary contraction.
Maximal voluntary contraction
Participants had their assigned leg tested for maximal voluntary contraction (MVC) in a dynamometer (KinCom, model 500−11; Kinetic Communicator). The protocol was similar to the one used in our previous study, except that the angular velocity was 30°/s (2.67 s per repetition). The isometric test was repeated after the exercise bout.
Acute resistance exercise bout
Participants underwent a bout of unilateral heavy resistance exercise in the KinCom using the same leg as for the MVC. The exercise protocol is illustrated in Fig. 1B. Two rounds were performed separated by a 5−10 min break. Each round consisted of four sets of 10 concentric contractions (30°/s) at >70% of MVC. This was followed by four sets of five eccentric contractions (30°/s) at >100% of MVC. Torque was sampled from the first, middle and last repetition from each set. Verbal encouragement and visual feedback were provided. The participants rested for 1.5−2.5 min between sets.
Muscle biopsy
Muscle biopsies were obtained from the middle portion of the vastus lateralis muscle from both legs. Biopsies were taken under local anaesthetic (1% lidocaine), using the percutaneous needle biopsy technique (Bergstrom, 1975) with manual suction. Care was taken to align the incision sites between the legs. Two biopsies were taken from each leg in immediate succession, through the same incision, with the biopsy needle angled proximally and distally from the incision. Pieces of muscle appropriate for histology were carefully aligned in Tissue-Tek (Sakura Finetek), frozen in liquid nitrogen-cooled isopentane (JT Baker) and stored at -80°C. The remaining tissue was immediately processed for cell culture.
Cell culture
The cell culture protocol has previously been described in detail (Agley et al. 2017;Bechshøft et al. 2019). Briefly, tissue was digested using collagenase B (11088815001; Roche) and dispase II (D4693; Sigma-Aldrich) for 1 h in a humidified incubator (37°C and 5% CO₂), then filtered through a cell strainer (352340; BD Falcon) and transferred to a cell culture flask (690170/658170; Cellstar) and grown in culture medium (C-23060; PromoCell) until ∼80% confluency (mean 6.3 ± 1.4 SD days). The medium was changed after 3 days and old medium was spun down, and unattached cells were returned to the flask. Afterwards, the medium was changed every second day. Cells were detached using diluted Trypsin-EDTA (25200-056; Gibco) and then incubated with MACS running buffer (130-091-221; Miltenyi Biotec) and CD56 magnetic beads (130-050-401; Miltenyi Biotec). Cells were passed through a pre-separation filter (130-041-407; Miltenyi Biotec) and a large cell column (Miltenyi Biotec) attached to a MultiStand magnet (130-090-312; Miltenyi Biotec), capturing the CD56+ (myogenic) fraction. Approximately 3000 and 5000 CD56+ cells/cm² were plated on glass coverslips (0111580; Marienfeld) in 12-well plates (353503; Corning), for proliferation (PRO) and differentiation (DIF) experiments. Three 12-well plates were used for PRO and DIF each, and cells were plated in duplicate (IHC or RNA) on each plate, providing three replicates for each analysis. Control leg and exercised leg for each participant were cultured on the same plates. Cells were cultured for 3 days for PRO and 3 + 4 days for DIF. After 3 days in CM, PRO cells were exposed to 10 μM of 5-bromo-2-deoxyuridine (BrdU) for 5 h. For DIF, the cells were also cultured in CM for 3 days, after which the medium was changed to differentiation medium (C-23260; PromoCell). The medium was changed again after 2 days, and the experiment was stopped after a further 2 days. At the end of PRO and DIF, the cells were either fixed using Histofix (Histolab) for immunostaining or processed for RNA extraction.
RNA extraction
Coverslips containing cells were moved to an empty well in a new plate. One millilitre of TriReagent (TR118; Molecular Research Inc.) was added and, after pipetting several times, the mixture was moved to a 2 ml BioSpec tube (5225; Bio Spec Products Inc.) and stored in a -80°C freezer. At the end of the experiment, all samples were thawed, and RNA purified with added glycogen as previously described .
For the tissue samples, 100 sections (10 μm each) from the frozen biopsies were transferred to the 2 ml BioSpec tubes and dissolved in 1 ml TriReagent by shaking with five steel beads (2.3 mm, BioSpec) for 15 s in a FastPrep homogenizer (MP Biomedicals). The RNA was purified as for the cell culture, except no glycogen was added.
Real-time RT-qPCR
Fifty nanograms (cell culture) or 400 ng (tissue) of total RNA per sample was converted to cDNA using OmniScript reverse transcriptase (Qiagen) and poly-dT (Qiagen) as previously described. 0.25 μl of cDNA was amplified in a 25 μl SYBR Green polymerase chain reaction (PCR) containing 1× Quantitect SYBR Green Master Mix (Qiagen) and 100 nM of each primer for every target mRNA (Table 1). An MX3005P real-time PCR machine (Stratagene) was used for monitoring the amplification, and a standard curve was made with known concentrations of DNA oligonucleotides (Ultramer oligos, Integrated DNA Technologies) corresponding to the expected PCR product. The Ct values were related to the standard curve. Melting curve analysis after amplification was used to confirm the specificity of the PCR products, and RPLP0 mRNA was originally chosen as the internal control for normalization. To support the use of RPLP0, another unrelated 'constitutive' mRNA, GAPDH, was measured (normalized to RPLP0) and showed no change in response to exercise (shown together with the rest of the mRNA data). However, the basal level was higher in the young group, showing that either GAPDH mRNA decreases with age or RPLP0 mRNA increases with age. As the former would suggest lower metabolic activity in aged muscle and the latter more protein synthesis, we find the former more likely and therefore used RPLP0 as the normalizer for all the mRNA targets. The data are expressed relative to the SED group (control leg) or, for the exercised leg, relative to the individual control leg (exercise response).
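As a minimal sketch of the quantification step, the snippet below fits a standard curve to hypothetical Ct values, converts sample Cts to relative amounts and normalises a target mRNA to RPLP0. The numbers are invented for illustration; the actual analysis relied on the instrument software outputs.

```python
# Sketch of standard-curve quantification and RPLP0 normalisation with
# hypothetical Ct values and standard amounts (arbitrary units).
import math

def fit_standard_curve(standards):
    """Least-squares fit of Ct = slope * log10(amount) + intercept."""
    xs = [math.log10(amount) for amount, _ in standards]
    ys = [ct for _, ct in standards]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def amount_from_ct(ct, slope, intercept):
    """Invert the standard curve to obtain a relative template amount."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series (amount, Ct) and sample Ct values.
standards = [(1e2, 30.1), (1e3, 26.8), (1e4, 23.4), (1e5, 20.0)]
slope, intercept = fit_standard_curve(standards)

target_ct, rplp0_ct = 27.5, 22.3
target_amt = amount_from_ct(target_ct, slope, intercept)
rplp0_amt = amount_from_ct(rplp0_ct, slope, intercept)
print(f"Target mRNA normalised to RPLP0: {target_amt / rplp0_amt:.3f}")
```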
The slides for Pax7 staining were fixed using Histofix before incubation with the primary antibodies. All other stainings were fixed after incubation with the secondary antibodies. Sections were incubated overnight at 5°C with primary antibodies diluted in blocking buffer consisting of 1% BSA and 0.1% sodium azide in Tris-buffered saline (TBS). Slides were then incubated for 45 min at room temperature with secondary antibodies diluted in blocking buffer. Slides were washed in TBS between each step. Sections were finally mounted with cover glasses using ProLong Gold Antifade (P36931; Thermo Fisher Scientific) containing 4′,6-diamidino-2-phenylindole (DAPI). The immunofluorescence staining protocol for the cultured cells has been described before. Briefly, cells were permeabilized with Triton X-100 (9002-93-1; Sigma-Aldrich) for 8 min and incubated overnight with primary antibodies (desmin and myogenin for DIF; desmin and BrdU for PRO) diluted in blocking buffer (1% BSA and 0.1% sodium azide in TBS). Cells were then incubated for 1 h at room temperature with secondary antibodies diluted in blocking buffer. Coverslips containing the cells were mounted on glass slides using ProLong Gold Antifade containing DAPI.
Microscopy
Tissue biopsy sections were imaged using a 20×/0.50 NA (slide 4) or a 10×/0.30 NA objective and a 0.5× camera (DP71, Olympus) mounted on a BX51 Olympus microscope. Greyscale 4080 × 3072 or 2040 × 1513 pixel images were obtained, and sections stained with MyHCn or NCAM were stitched into one seamless image using Fiji (ImageJ, v.1.51). BrdU staining of the proliferating cells was not strong enough to analyse in a reliable manner so only mRNA data are provided for PRO. Differentiating cells, stained with desmin and myogenin, were imaged with an AxioScan.Z1 slide scanner (Carl Zeiss). A standardized region of interest (ROI), which covered approximately 90% of the coverslip, was defined (Fig. 3A). Damaged areas (due to handling of the coverslips) or large air bubbles were removed from the ROI before imaging. Images were captured using a plan-apochromat 10×/0.45 NA objective and a MultiBand filter cube (DAPI/FITC/TexasRed) using excitation wavelengths of 353, 493 and 577 nm (LED light source) and both coarse and fine focusing steps. Each channel was imaged separately and sequentially with an AxioCam MR R3 and a 10% overlap between images. Merged images were stitched using ZEN blue software (Carl Zeiss).
Image analyses
The same person, blinded to group and leg, analysed all samples. The number of fibres included in each analysis is provided in Table 3.
Myofibre size and type. Myofibre cross-sectional area, type composition and type area percentage were analysed on composite images (dystrophin/myosin/DAPI) using a semi-automated macro, run in Fiji, as described. Transversally cut myofibres were delineated and classified as type I, type II or hybrid based on median staining intensity. Hybrid fibres were detected in all three groups (0.9 [0-7.8]% in young, 0.5 [0-2.7]% in LLEX and 1.2 [0-5.6]% in SED) and were removed from the analysis. Myofibre type composition was also manually assessed on the same composite images by counting all visible type I, type II or hybrid myofibres using the ObjectJ plugin in Fiji. Myofibre-type compositions obtained by manual counting and by the semi-automated macro were strongly correlated (R² = 0.971). Fibre-type area percentage was determined as a function of fibre-type percentage and fibre cross-sectional area (CSA).
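A minimal sketch of the classification idea is given below, assuming intensity thresholds placed around the per-section median; this illustrates the logic only and is not the published Fiji macro, and the threshold factors are invented.

```python
import numpy as np

def classify_fibres(myhc1_intensity, low_frac=0.5, high_frac=1.5):
    """Toy classifier: compare each fibre's MyHC I intensity with the section median.

    Fibres well above the median are called type I, well below type II,
    and intermediate fibres are flagged as hybrids.  The factors 0.5/1.5
    are illustrative assumptions, not the values used in the published macro.
    """
    intensity = np.asarray(myhc1_intensity, dtype=float)
    median = np.median(intensity)
    fibre_type = np.full(intensity.shape, "hybrid", dtype=object)
    fibre_type[intensity >= high_frac * median] = "I"
    fibre_type[intensity <= low_frac * median] = "II"
    return fibre_type

# Example with made-up intensities; hybrids would be removed before
# computing the type I / type II composition, as in the text.
intensities = [12, 310, 290, 15, 160, 305, 18]
types = classify_fibres(intensities)
kept = [t for t in types if t != "hybrid"]
print(list(types), "type I fraction among kept fibres:", kept.count("I") / len(kept))
```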
Satellite cells.
Satellite cells were manually quantified on composite images (laminin, Pax7, myosin I, DAPI) using the ObjectJ plugin in Fiji. Pax7+ cells that were also DAPI+ were classified as satellite cells and were allocated to type I or type II fibres. If the 'parent' fibre could not be clearly identified, the respective satellite cell was marked separately and later shared between fibre types; this occurred for 14 out of a total of 4614 satellite cells counted. Satellite cell number was expressed relative to the number of fibres included in the analysis. Two samples were excluded from the type II analysis due to a low number of fibres (SED control leg, n = 14, and LLEX exercised leg, n = 15).

Denervated fibres. The presence of MyHCn+ and NCAM+ fibres was manually assessed on composite images (dystrophin, NCAM/MyHCn, DAPI) using the ObjectJ plugin in Fiji. The CSA of all NCAM+ fibres was measured and checked for co-expression of MyHCn and MyHC I. Then the CSA of all MyHCn+ fibres was measured. Lastly, we removed all NCAM+ or MyHCn+ fibres that were not merosin+ and desmin+, or merosin+ and phalloidin+, as further confirmation that the included cells were of myogenic origin. Fibres that had disappeared on a subsequent section or could not be convincingly located were marked separately as 'lost'.
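One possible reading of the satellite cell allocation rule above (cells with an ambiguous parent fibre are "shared between fibre types") is an equal split, as sketched below with invented counts; the published analysis may have used a different sharing rule.

```python
def satellite_cells_per_fibre(sc_type1, sc_type2, sc_unassigned,
                              n_fibres_type1, n_fibres_type2):
    """Satellite cells per fibre, by fibre type.

    Unassigned cells (parent fibre unclear) are split equally between the two
    fibre types; this 50/50 split is an assumed interpretation of 'shared
    between fibre types', used here only for illustration.
    """
    t1 = sc_type1 + 0.5 * sc_unassigned
    t2 = sc_type2 + 0.5 * sc_unassigned
    return t1 / n_fibres_type1, t2 / n_fibres_type2

# Invented example numbers (not data from the study):
per_type1, per_type2 = satellite_cells_per_fibre(
    sc_type1=40, sc_type2=18, sc_unassigned=2,
    n_fibres_type1=350, n_fibres_type2=300)
print(f"satellite cells per type I fibre: {per_type1:.3f}, per type II fibre: {per_type2:.3f}")
```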
Cell culture. The stitched images were separated into regions (2.26 × 1.80 mm, 3510 × 2790 pixels) equal to 3 × 3 of the original image tiles of the slide scanner. As it was observed that cells were more densely located toward the centre of the coverslip, only regions within a central rectangular ROI on the coverslip were used. Automated thresholding of the DAPI channel was used to determine the approximate number of nuclei within each region and the region with a nuclei count closest to the median for that coverslip was selected for further analysis. As we had three technical replicates placed on separate plates, we analysed cells of the exercised and control leg that were cultured on the same plate. The next step included a manual correction of any mistakes made by the macro in delineating nuclei, e.g. fusing a single nucleus that had been split or separating several nuclei that were clumped together. Then the corrected nuclei were superimposed on the desmin channel, and nuclei that were located within myotubes with three nuclei or more were manually selected. Due to a small amount of bleed-through of desmin signal in the myogenin channel, the myogenin signal in each image was corrected by fitting the myogenin intensity vs. desmin intensity outside of nuclei (containing no true myogenin signal) and subtracting this fit from the intensity of the entire myogenin image. To improve homogeneity between samples with differing staining intensity, a contrast enhancement was performed on the desmin and myogenin channels. Data lists containing intensities in all channels for each nucleus were exported from Fiji and a custom MATLAB script (MATLAB R2019a, The MathWorks Inc.) was used for aggregating the data and determining desmin + and myogenin + cells by a threshold in the intensity of the respective channels within each nucleus. Area covered by myogenic cells (area of desmin + signal) was automatically measured. Fusion index was determined as the ratio of fused nuclei to desmin + nuclei, and differentiation index was determined as the ratio of myogenin + nuclei to desmin + nuclei. Samples with a cell purity, determined as percentage desmin + cells, below 90% were removed from all data sets. Twelve of 92 samples (5/7 control/exercised leg and 1/7/4 young/SED/LLEX) were removed (Fig. 3B).
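The bleed-through correction and the two indices described above can be summarized in a small numerical sketch. The per-nucleus intensities are simulated, and the linear fit, intensity thresholds and purity criterion are written as assumptions mirroring the description in the text rather than as the actual MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Bleed-through correction (per image) -------------------------------
# Pixels outside nuclei carry no true myogenin signal, so any myogenin
# intensity there is modelled as a linear function of desmin intensity.
desmin_outside = rng.uniform(0, 200, 500)
myogenin_outside = 0.05 * desmin_outside + rng.normal(0, 1, 500)  # simulated bleed-through

slope, intercept = np.polyfit(desmin_outside, myogenin_outside, 1)

def correct_myogenin(myogenin, desmin):
    """Subtract the fitted desmin-dependent component from the myogenin channel."""
    return myogenin - (slope * desmin + intercept)

# --- Per-nucleus classification and indices ------------------------------
# Invented per-nucleus measurements: desmin intensity, raw myogenin intensity,
# and whether the nucleus lies inside a myotube with three or more nuclei.
desmin_nuc = rng.uniform(0, 300, 200)
myogenin_nuc_raw = 0.05 * desmin_nuc + rng.normal(0, 1, 200)
myogenin_nuc_raw[:80] += 50            # pretend 80 nuclei are truly myogenin positive
in_myotube = np.zeros(200, dtype=bool)
in_myotube[:60] = True                  # pretend 60 nuclei are fused

myogenin_nuc = correct_myogenin(myogenin_nuc_raw, desmin_nuc)

DESMIN_THR, MYOGENIN_THR = 50.0, 20.0   # assumed thresholds, for illustration only
desmin_pos = desmin_nuc > DESMIN_THR
myogenin_pos = myogenin_nuc > MYOGENIN_THR

fusion_index = (in_myotube & desmin_pos).sum() / desmin_pos.sum()
differentiation_index = (myogenin_pos & desmin_pos).sum() / desmin_pos.sum()
purity = desmin_pos.mean()              # samples below 90% purity were discarded

print(f"fusion index {fusion_index:.2f}, differentiation index {differentiation_index:.2f}, "
      f"purity {purity:.2f}")
```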
Statistical analyses
Data are presented as means ± standard deviations or as individual values with the median, unless stated otherwise in the figure legend. A significance level of P < 0.05 was chosen, with tendencies (P < 0.1) also reported. LLEX and SED were directly compared, and young was compared with the two old groups combined. Within-group differences between the rested and exercised leg were also compared. Cell culture data and NCAM/MyHCn analyses were not normally distributed, so non-parametric statistics were used (Mann-Whitney rank sum test and Wilcoxon's signed rank test). All remaining data appeared normally distributed (mRNA data after log-transformation), prompting the use of unpaired and paired t tests. Isometric strength tests performed before and after the exercise bout and log-transformed creatine kinase values were evaluated with one-way repeated-measures ANOVA (Tukey post hoc) for each group. Data from the exercise bout were averaged into rounds and analysed using a two-way ANOVA (group × round) with the Holm-Sidak post hoc analysis.
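A compact sketch of the test-selection logic described above, using SciPy; the example data are simulated and the group sizes are only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def compare_groups(a, b, paired=False, assume_normal=True):
    """Two-sample comparison following the scheme described in the text.

    Normally distributed data: unpaired or paired t test.
    Non-normal data: Mann-Whitney U (unpaired) or Wilcoxon signed-rank (paired).
    """
    if assume_normal:
        return stats.ttest_rel(a, b) if paired else stats.ttest_ind(a, b)
    return stats.wilcoxon(a, b) if paired else stats.mannwhitneyu(a, b)

# e.g. log-transformed mRNA levels, LLEX vs. SED (unpaired, assumed normal)
llex = rng.normal(0.0, 0.3, 16)
sed = rng.normal(-0.2, 0.3, 15)
print(compare_groups(llex, sed, paired=False, assume_normal=True))

# e.g. fusion index, exercised vs. control leg (paired, non-parametric)
control = rng.uniform(0.2, 0.6, 14)
exercised = control + rng.normal(0.0, 0.05, 14)
print(compare_groups(control, exercised, paired=True, assume_normal=False))
```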
Participant characteristics and heavy resistance exercise
LLEX and SED did not differ in age, height, weight or BMI (P = 0.679, 0.482, 0.124 and 0.277, Table 4). Young had lower levels of C-reactive protein and HbA1c compared with old (P = 0.052 and 0.001, Table 4). Young were stronger and had a higher LBM than old (P < 0.0001 and P = 0.035), while LLEX had a lower fat percentage than SED (P = 0.006, Fig. 4 and Table 4). Relative strength tended to be higher in LLEX compared with SED (P = 0.087, Fig. 4B). Force produced, expressed relative to MVC, was lower in round 2 than round 1 in all groups, and LLEX produced force at a higher relative level across all sampled repetitions than both young and SED (Fig. 5A). There was a decline in MVC immediately following the exercise bout, and creatine kinase increased at day 2 in all groups (P < 0.0001, Fig. 5B, C).
Myofibre size and denervation
LLEX had a larger proportion of type I fibres than SED (P = 0.033), while there was a tendency for young to have a lower proportion of type I fibres than old (P = 0.060, Fig. 6B). Fibres that were only weakly stained with MyHC I (hybrid fibres) were detected in low numbers in all three groups (0.9 [0-7.8]% in young, 0.5 [0-2.7]% in LLEX and 1.2 [0-5.6]% in SED). Given that hybrid fibres are common in aged muscle, and are composed of two or three distinctive MyHCs (Andersen et al. 1999), we removed these fibres from our analysis as our myosin I staining provided insufficient insight into the myosin composition. Fibre-type area followed a similar pattern to fibre-type distribution. Young had larger type II fibres than old (P < 0.0001), while their type I fibres tended to be larger (P = 0.072, Fig. 6C). Both old groups had smaller type II fibres compared with their own type I fibres (LLEX, P = 0.003, SED, P = 0.015). Myofibre morphology is illustrated in histograms, where the type II fibres of the old participants have shifted leftwards (Fig. 6A).
The percentage of NCAM+ and MyHCn+ fibres was larger in old than in young (P = 0.003 and 0.034), while no difference was observed between LLEX and SED (P = 0.984 and 0.352, Fig. 7A,B). NCAM+ fibres were classified as pure type I or II myofibres, or hybrids, with almost even numbers of types I and II. Furthermore, between 10 and 30% of NCAM+ fibres co-expressed MyHCn (Fig. 7D). A large proportion of the NCAM+ and MyHCn+ fibres were <500 μm² (Fig. 7C). We observed an area in one sample that was reminiscent of the MTJ, similar to what we have previously described. Control stainings with COL22 revealed that 14 out of 37 NCAM+ fibres from that biopsy were related to the MTJ and were removed. On average, 4.5 ± 4.6 (NCAM+) and 1.8 ± 2.1 (MyHCn+; range 0-9) fibres initially included in the counts were removed following assessment for merosin, desmin and phalloidin (26 and 28% reductions in NCAM+ and MyHCn+ fibres, respectively). It was predominantly the very small myofibres that could not be detected on serial sections.
Satellite cells and cell culture
In the control leg, LLEX had a greater number of type II myofibre-associated satellite cells than SED (P = 0.016), while no difference was observed for type I fibres (P = 0.609, Fig. 8A). Young had more satellite cells associated with both type I and type II fibres compared with old (P = 0.035 and P < 0.0001, Fig. 8A). LLEX and SED had fewer type II-associated satellite cells than type I (P < 0.0001 and P = 0.006, Fig. 8A). No difference in differentiation index was observed (P = 0.695), while a tendency for a higher fusion index in young compared with old was found (P = 0.091, Fig. 9A). Young had a higher cell count than old (P = 0.002), and a tendency for an increased desmin area in young compared with old was also observed (P = 0.081, Fig. 9A). We observed no effect of acute exercise on satellite cell number, differentiation index, fusion index, cell count or desmin area (P values ranged from 0.094 to 0.922, Figs 8B and 9B).
Gene expression
At the tissue level, AChR δ, AChR α1 (tendency), MuSK and MyHCn mRNA levels were lower in young compared with old (P = 0.047, 0.086, 0.014 and P < 0.0001, Fig. 10A). AChR β1 and γ were higher in LLEX compared with SED (P = 0.022 and 0.026, Fig. 10A). MyHCe gene expression was upregulated in the exercised leg of LLEX (P = 0.035), and AChR α1 and MuSK tended to be higher in the exercised leg of LLEX and young, respectively (P = 0.098 and 0.074, Fig. 10A).
Discussion
Skeletal muscle of lifelong recreationally active elderly individuals retains a higher number of type II fibre-associated satellite cells, possesses a beneficial innervation status when assessed by RT-qPCR, and performs substantially better during acute resistance exercise, compared with sedentary individuals. These findings indicate that lifelong recreational activity can partially offset the emergence of classic phenotypic traits associated with the aged muscle.
In vivo measure of muscle function
The acute exercise bout caused a pronounced decline in force output both within sets and between sets for all groups. Strikingly, LLEX outperformed both SED and the young group, confirming their status as exercise-habituated individuals. Despite this, no differences were observed in LBM or MVC between the old groups, indicating that these standard assessments may not allow subtle differences to be detected. Studies investigating the impact of a recreationally active lifestyle on LBM and MVC are inconclusive. When heavy resistance exercise is performed, or the participants are at the pinnacle of sporting performance within their age group in a strength or explosive type of event, then both muscle mass and function will have increased accordingly (Klitgaard et al. 1990; Ojanen et al. 2007; Unhjem et al. 2016; Sonjak et al. 2019). On the other hand, several studies investigating recreationally active individuals have seen limited effects on LBM and MVC (Klitgaard et al. 1990; Lanza et al. 2008; Unhjem et al. 2016; St-Jean-Pelletier et al. 2017), suggesting that these measures are unable to discriminate between recreationally active and sedentary individuals of similar age. Accordingly, it is only under challenged conditions that functional differences become apparent between recreationally active and inactive elderly individuals. In support of this notion, a recent study found no correlation between daily steps and in vivo measurements of muscle function, except during challenged conditions in elderly men and women (Varesco et al. 2022).

Figure 5. Acute bout of heavy resistance exercise. A, the first, fifth and 10th concentric repetitions and the first, third and fifth eccentric repetitions from each set were sampled during the exercise bout. Maximum torque values are expressed relative to concentric isokinetic MVCs and are shown as averages with standard deviations. n = 15 (young), 16 (LLEX) and 15 (SED). Sets 1-4 (round 1) and 5-8 (round 2) average values were statistically evaluated using two-way ANOVA (group × round) with the Holm-Sidak post hoc analysis. #P < 0.05 vs. SED, *P < 0.05 vs. young, $P < 0.05 round 2 vs. round 1. B, isometric MVC, shown as individual values, before, immediately after the exercise bout and following a 5 min rest period. n = 15 (young), 16 (LLEX) and nine (SED). Data were analysed using one-way RM ANOVA with Tukey's post hoc analysis; *P < 0.05 vs. before. C, creatine kinase was measured on days 0, 2 and 6 and is shown as geometric mean with 95% CI. n = 15 (young), 15 (LLEX) and 14 (SED). Data were analysed using one-way RM ANOVA with the Tukey post hoc analysis. *P < 0.05 vs. before/day 0. Abbreviations: MVC, maximal voluntary contraction.

Figure 6 (partial legend): fibre-size distribution shown as averages with standard deviations (A, in part); B, fibre-type distribution and fibre-type area shown as averages with standard deviations; C, fibre size of types I and II shown as connected individual values and averages. n = 15 (young), 16 (LLEX) and 15 (SED). *Significantly different from type I within group; #significantly different from old; (#) tendency for a difference from old; $significantly different from SED; ($) tendency for a difference from SED.
Satellite cell quantity and function
One of the main novel findings of the present study is the difference in type II myofibre-associated satellite cells between the physically active and inactive elderly men. Satellite cells are the sole source of new myonuclei and are important not only for long-term muscle growth by facilitating accretion of myonuclei (Kadi et al. 2004; Fry et al. 2014) but also for inter-cell communication (Murach et al. 2021b) and NMJ maintenance (Liu et al. 2017). Satellite cell quantity is reduced with ageing, disease (Verdijk et al. 2012) and inactivity (Arentson-Lantz et al. 2016) and increased with acute (Heisterberg et al. 2018) and long-term (Kadi et al. 2004) exercise and during muscle regeneration. Type II myofibre-associated satellite cells are more severely affected by ageing than type I (Verdijk et al. 2014; Karlsen et al. 2019, 2020), but this decline could also be attributed to a reduced type II myofibre activation with ageing. The larger type II fibre satellite cell pool in LLEX thus provides a larger capacity to mount a myogenic response in the event of injury or denervation (Shefer et al. 2006), while simultaneously secreting signals taken up by muscle fibres and single-nucleated cells in or around the satellite cell niche (Murach et al. 2021b). Surprisingly, the exercise bout did not lead to an increase in satellite cell content, which might be related to the exclusive use of slow contractions, the timing of biopsy sampling or an insufficient stimulus (Hyldahl & Hubal, 2014; Snijders et al. 2015). To explore the function of the satellite cells, we performed cell culture studies and compared the capacity of satellite cells to differentiate and fuse, in addition to measuring mRNA levels of genes related to myogenesis and muscle innervation. Importantly, cultivated myogenic satellite cells have been shown to retain intrinsic capabilities reminiscent of their former in vivo environment (Teng & Huang, 2019). Contrary to our hypothesis, the two primary measures of cell function, differentiation and fusion index, were similar in LLEX and SED, while only a tendency for an age-related difference in fusion index was observed, which might be explained by a higher cell number. Satellite cell proliferation could not be assessed due to problems relating to the staining protocol, so we cannot rule out potential differences between groups in myoblast proliferation. The literature on whether ageing affects satellite cell function in culture is mixed, as some studies indicate phenotypic differences (Balan et al. 2020) while others do not (Alsharidah et al. 2013; Chaillou et al. 2020). For example, we recently showed that fusion capabilities were reduced in old compared with young subjects, while Chaillou et al. (2020) found no difference in fusion index or myotube diameter between young and old. The cause of these discrepancies between studies is unclear but may at least partly be due to differences in the employed cell culture models (cell lines or primary cells) or the immunofluorescence and image analyses. As such, a strength of the present study is that entire coverslips were imaged, which allowed the analysis of areas with the most representative cell presence, and that technical replicates were used. In line with our earlier studies, several age-related differences in gene expression of proliferating and differentiating satellite cells were observed (AChR γ subunit, myogenin, COL1A1, MyHCn, MyHCe and p16) (Soendenbroe et al. 2020).
No significant differences were observed between LLEX and SED. Overall, the satellite cell data are supportive of age-related differences in both satellite cell quantity in vivo, measured by immunofluorescence microscopy, and function in vitro, as evidenced by differences in the gene expression of several genes related to myogenesis and muscle innervation. However, neither differentiation nor fusion index, the primary measures of cell function, were affected by age, although the influence on proliferation remains to be determined. Lifelong recreational exercise affected satellite cell numbers positively, while no change in satellite cell function was observed. Next, we wanted to know if these differences amounted to differences in muscle innervation status and myofibre morphology.
Muscle innervation status
Innervation status was assessed by immunofluorescence microscopy and RT-qPCR analyses. Both methods were used as they might represent myofibres at different stages of denervation or differ in how they are regulated. NCAM and MyHCn were used as IHC markers for denervated myofibres, as we (Soendenbroe et al. 2019) and others (Mosole et al. 2014; Sonjak et al. 2019; Daou et al. 2020; Burke et al. 2021; Monti et al. 2021) have previously done. It should be noted that NCAM and MyHCn are also associated with other physiological processes and structures within muscle, which can challenge the interpretation. NCAM is found at the NMJ and MTJ (Jakobsen et al. 2018), during muscle regeneration (Irintchev et al. 1994) and in neuromuscular disease. MyHCn is found during muscle regeneration (Sartore et al. 1982), in neuromuscular disease (Fitzsimons & Hoh, 1981) and in intrafusal fibres (Walro & Kucera, 1999). However, in healthy vastus lateralis muscle tissue, MTJ and NMJ structures are easily recognized, intrafusal fibres are rare, and muscle regeneration is unlikely to be present. Furthermore, experimentally induced muscle denervation leads to a large upregulation in the expression of NCAM and MyHCn (Covault & Sanes, 1985; Schiaffino et al. 1988), together making muscle fibre denervation the most likely explanation for the observation of NCAM+ and MyHCn+ fibres in our study. In accordance with our previous findings, old subjects had a higher number of NCAM+ and MyHCn+ fibres than young. However, in contrast to our hypothesis, we did not see indications of a favourable innervation status in the LLEX group using our immunofluorescent approach. Importantly, several novel findings relating to exercise status were observed in the gene expression data. LLEX had significantly higher mRNA levels of both AChR β1 and γ subunits compared with SED, and young had lower AChR δ and α1 (tendency) compared with old. AChR gene expression has been reported to be affected by disease (Kapchinsky et al. 2018; Kelly et al. 2018), injury (Gigliotti et al. 2015; Karlsen et al. 2020), ageing (Spendiff et al. 2016; Soendenbroe et al. 2020), inactivity (Monti et al. 2021) and acute exercise. Given the remarkable similarity of the AChR gene expression profile between LLEX and the young group, it could be speculated that the young group might have been habitually more active than SED, which would push them in the direction of LLEX. Activity levels are well known to change with ageing (Hallal et al. 2012). As previously mentioned, very few studies have examined human AChRs, and this is the first study to report data for all muscle-specific AChR subunits in lifelong recreationally active elderly men. Overall, it appears that the analysis of AChR gene expression is more sensitive than the currently available immunofluorescent markers of denervation. However, the use of immunofluorescent markers in the present study has added important details on the morphology of the denervated fibres. Most denervated fibres are very small, often with a CSA of less than a tenth of the mean normal fibre size of the elderly groups. Furthermore, the rigorous assessment required the presence of several myogenic markers such as dystrophin (sarcolemma), merosin (basal lamina), MyHC and desmin, as well as general cellular actin. Approximately similar proportions of the denervated fibres in LLEX and SED are type I, II and hybrid fibres.

Figure 10. Gene expression in biopsies and cells. A, gene expression in muscle biopsies (n = 15, 16 and 15 for young, LLEX and SED). B, proliferating myoblasts (n = 14/13, 15/14 and 12/8 for control/exercise leg of young, LLEX and SED). C, differentiating myotubes (n = 15/14, 14/13 and 14/11 for control/exercise leg of young, LLEX and SED). Control (left) and exercised (right) leg. mRNA data were normalized to RPLP0 and are shown as geometric means with 95% confidence intervals. Control leg is shown relative to SED control leg and exercise response is shown relative to own control leg. Baseline differences were analysed using unpaired t tests, and exercise responses were analysed using paired t tests. *P < 0.05 young vs. old. (*) P < 0.1 young vs. old. #P < 0.05 LLEX vs. SED. (#) P < 0.1 LLEX vs. SED. $P < 0.05 exercised vs. control leg. ($) P < 0.1 exercised vs. control leg.

Interestingly, the overlap between the used markers (NCAM and MyHCn)
was limited, which might be due to temporal variation in the protein expression of denervated fibres, or to the existence of subgroups of denervated fibres. Also, MyHCn-positive fibres have a segmented staining profile, which, due to the cross-sectional approach, could also explain at least a portion of the discrepancy (Schiaffino et al. 1988; Soendenbroe et al. 2019, 2021). Lastly, myofibre morphology was comprehensively studied as it ties closely with both innervation status and satellite cell numbers. We found, as expected, that the young group had larger type II and type I (tendency) fibres than the old groups combined, which has been shown before (Klitgaard et al. 1990; Zampieri et al. 2015; St-Jean-Pelletier et al. 2017; Karlsen et al. 2019; Sonjak et al. 2019). In contrast to our hypothesis, however, no difference in fibre size was observed between LLEX and SED. The reason for the lack of difference in average fibre size is unclear, but it is possible that the activities performed by the individuals of the LLEX group did not provide a large enough hypertrophic stimulus for the fibres to increase in size. In general, heavy loading has been shown to be crucial for type II myofibre hypertrophy (Klitgaard et al. 1990), and endurance exercise has a limited effect on type II fibre CSA (McKendry et al. 2020). Ten of the subjects in the present study reported performing resistance exercise, although some of these only did it once a week, some only during the off season of their primary activity and some with light loads. Only three subjects reported resistance exercise as their primary activity. Since muscle loading, volume and training frequency are all major determinants of hypertrophy, it is likely that the activities performed have not forced an adaptation in myofibre size. It is also noteworthy that while the amount and type of activity performed by LLEX did not appear to preserve type II myofibre size, it was associated with a preservation of the number of type II myofibre-associated satellite cells, suggesting that these two entities are not tightly regulated in healthy elderly muscle. The study of master athletes remains a suitable model to study ageing disentangled from physical inactivity. However, the number of individuals performing exercise at a level where they can be considered master athletes is low (Hallal et al. 2012), which makes the study of recreationally active individuals all the more relevant. All individuals in the present study were independent and well-functioning, meaning that a decline in muscle function would be expected in the years ahead. It has been shown that the muscle of very old individuals remains amenable to improvement (Kryger & Andersen, 2007), indicating that although few differences between the groups were observed, the recreationally active individuals might be on a different trajectory, which could benefit them later in life when phenotypic traits of the aged muscle are more pronounced.
Conclusion
Recreational physical activity preserves type II myofibre-associated satellite cells during ageing, and leads to a more beneficial muscle innervation status. These data strongly suggest that detrimental effects of ageing can be partially offset by lifelong self-organized recreational exercise. Furthermore, this is the first attempt in humans to investigate satellite cells and myofibre denervation in parallel, and how they are each influenced by exercise. The study is limited by the lack of objective measures of levels of physical activity and the inclusion of only male participants. In our earlier study on young and elderly females, similar findings on myofibre denervation were reported . Clearly, studies of lifelong exercise in females are needed. The translational perspective of the present study is heightened due to the focus on recreationally active individuals rather than master athletes, as the former constitute a far larger part of the general population aged 60 and above. | 2022-03-02T06:23:43.618Z | 2022-02-28T00:00:00.000 | {
"year": 2022,
"sha1": "029449d031d2420779f208f6a2d2a5b269f7f9f5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "18b5d759dececb360441d53d71b7c9b3e2890d97",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17620212 | pes2o/s2orc | v3-fos-license | An insight into the complex prion-prion interaction network in the budding yeast Saccharomyces cerevisiae
The budding yeast Saccharomyces cerevisiae is a valuable model system for studying prion-prion interactions as it contains multiple prion proteins. A recent study from our laboratory showed that the existence of Swi1 prion ([SWI+]) and overproduction of Swi1 can have strong impacts on the formation of 2 other extensively studied yeast prions, [PSI+] and [PIN+] ([RNQ+]) (Genetics, Vol. 197, 685–700). We showed that a single yeast cell is capable of harboring at least 3 heterologous prion elements and these prions can influence each other's appearance positively and/or negatively. We also showed that during the de novo [PSI+] formation process upon Sup35 overproduction, the aggregation patterns of a preexisting inducer ([RNQ+] or [SWI+]) can undergo significant remodeling from stably transmitted dot-shaped aggregates to aggregates that co-localize with the newly formed Sup35 aggregates that are ring/ribbon/rod- shaped. Such co-localization disappears once the newly formed [PSI+] prion stabilizes. Our finding provides strong evidence supporting the “cross-seeding” model for prion-prion interactions and confirms earlier reports that the interactions among different prions and their prion proteins mostly occur at the initiation stages of prionogenesis. Our results also highlight a complex prion interaction network in yeast. We believe that elucidating the mechanism underlying the yeast prion-prion interaction network will not only provide insight into the process of prion de novo generation and propagation in yeast but also shed light on the mechanisms that govern protein misfolding, aggregation, and amyloidogenesis in higher eukaryotes.
The Effect of Overexpression in Prionogenesis
It was thought that rapid synthesis of a prion protein would lead to its misfolding, aggregation, and thus a higher frequency of prion de novo formation. However, the efficiency of such an overproduction event in promoting prion conversion is not clear. Our recent finding that Swi1 aggregates formed from transient Swi1 overproduction were not inheritable suggests that Swi1 overproduction is not an effective means to induce [SWI+] de novo formation.1 We also observed that Swi1 overproduction in [pin−] cells caused Rnq1 aggregation. We failed, however, to obtain prion-like aggregates of Rnq1. In addition, we found that Rnq1 overproduction alone dramatically increases its own aggregation; however, only 3.3% of these aggregates are inheritable.1 Our findings are in agreement with a previous report that overproduction of Sup35 alone in a non-prion strain is ineffective in inducing Sup35 aggregation.2,3 Together, these results suggest that prion protein overproduction in non-prion cells is not an effective way to promote prion-prone aggregation. Even in the presence of [PIN+], most Sup35 amyloids formed upon overproduction are shown to be non-inheritable.3 Thus, without a positive selection system, it would be difficult to obtain prions by simply tracing the aggregates generated upon overproduction. An earlier study suggests that the non-inheritable Sup35 aggregates caused by overproduction are actually SDS-resistant amyloids that cannot be sheared by chaperones and thus cannot propagate.2 It is unknown whether the Swi1 and Rnq1 aggregates formed upon overproduction are SDS-resistant amyloids or just amorphous aggregates. In either case, these non-inheritable aggregates may represent distinct conformations with poor seeding capacity compared with that of prion-prone aggregates (Fig. 1).
While overproduction of Sup35 alone is not effective in promoting [PSI+] conversion, a series of elegant studies by the Liebman group demonstrated that [PSI+] de novo formation upon Sup35 overproduction can be dramatically promoted by a pre-existing prion, [RNQ+] or [URE3], or by co-overproduction of one of the several non-Sup35 Q/N-rich proteins they examined, including Ure2, Swi1, Cyc8 or New1.3,4 This [PSI+]-promoting phenotype is called Pin+, for [PSI+] inducibility. Intriguingly, although [RNQ+] is an effective Pin+ factor, Rnq1 was not on the list of Q/N-rich proteins identified in a systematic genetic screen for Pin+ factors upon overexpression. We also observed that the Pin+ activity associated with Swi1 overproduction is at least 10 times lower than that of [SWI+],1 demonstrating that a pre-existing prion is a significantly stronger Pin+ factor than overproduction of its corresponding protein determinant. It is worth noting that Pin+ activities were also observed upon overproduction of Mod5, a non-Q/N-rich prion protein,5 and of the polyglutamine (polyQ)-containing domain of huntingtin,6 suggesting that neither prion formation nor a Q/N-rich feature is essential for the Pin+ function.
Although the molecular mechanism underlying these observed Pin+ phenomena remains elusive, 2 models, "cross-seeding" and "titration," have been proposed to explain the [PSI+] inducibility by [RNQ+] or by an overproduction event of a Q/N-rich protein.4,7 In the cross-seeding model, a direct protein-protein contact is considered the basis of the observed Pin+ function (Fig. 1). A pre-existing amyloid prion, or amyloid-like aggregates resulting from overproduction of a protein such as polyQ, might serve as templates to allow de novo formation of a new prion.6 In the case of [PSI+] induction upon co-overproduction of Swi1 and Sup35, it is likely that only a very small portion of the Swi1 aggregates formed under the overproduction condition are prion-prone amyloids and they are most often buried among the massive amorphous non-inheritable aggregates; therefore the [PSI+] inducibility is insufficient (Fig. 1A). In [SWI+] cells, however, the availability of templates that can cross-seed Sup35 is significantly increased and thus [PSI+] is more efficiently induced (Fig. 1B). A similar explanation can be used to interpret the different Pin+ activities observed for Rnq1 overproduction alone and in the presence of the [PIN+] prion. Alternatively, the titration model predicts that pre-existing prion aggregates, or protein aggregates newly formed by overproduction, may compete for binding, or perhaps sequester, anti-prion cellular factors, such as chaperones and proteases, and thereby increase the likelihood of a new prion conversion.7 Indeed, it was recently shown that aggregation of several Q/N-rich Pin+ factors upon overproduction results in chaperones being sequestered from Sup35 aggregates and in some cases alters the chaperone levels in a [PSI+] strain.8 Similar chaperone sequestration events may also occur when Pin+ factors, such as Q/N-rich proteins, are overproduced in a [psi−] strain, thereby enhancing the susceptibility of yeast to prion formation. It is also possible that the low Pin+ activity and poor inheritability of Swi1 or Rnq1 aggregates formed upon overproduction may be partly attributable to the toxic effects associated with overproduction, which has been broadly noted to be stressful to yeast. Taken together, our results suggest that the cross-seeding and titration models are not mutually exclusive. Overproduction of a prion protein may increase the amount of misfolded prion protein(s) but may not effectively promote prion formation. The availability of amyloid-like templates capable of cross-seeding is also essential for efficient prion de novo formation.4,7 Subsequent studies showed that prion-prion interactions can also be mutually antagonistic, suggesting that there is a complicated interaction network among heterologous prions.9-11

Figure 1 legend (fragment): since the amyloidogenic [SWI+] aggregates can be used as an imperfect template to directly cross-seed Sup35 for [PSI+] de novo formation, [SWI+] is a better Pin+ factor than Swi1 overproduction, as more templates are available for cross-seeding Sup35. (C) [PIN+] induction by [SWI+] without Rnq1 overproduction: Rnq1 has a complex prion domain with an amino acid composition more similar to that of Swi1 than to that of Sup35; thus, [SWI+] amyloids might have a higher cross-seeding ability towards Rnq1 than towards Sup35, resulting in [PIN+] formation even in the absence of Rnq1 overproduction.
The discovery of [SWI+] has provided us with an additional system to investigate interactions among heterologous prions. There are several unique properties of [SWI+] that are different from those of [PSI+] or [PIN+]. First, Swi1 is a nuclear protein involved in chromatin remodeling. Second, a Swi1 region of fewer than 40 amino acid residues that is free of glutamine but rich in asparagine is sufficient to maintain and propagate [SWI+].12 While more systematic studies are needed to address the interesting question of how many heterologous prion species one yeast cell can harbor concurrently, our results seem to suggest that it would be difficult for one yeast cell to harbor more than 3 heterologous prions. In our study, the unstable propagation of [SWI+] is probably not due to direct interactions among the 3 prions, because no co-localization of these prion aggregates was observed in cells carrying the 3 prions. Instead, toxic effects and the stability of individual prion species might have determined the prion-harboring capacity of a cell and the compatibility of the co-existing prion species. Co-existence of 3 or more prion species may cause an unbearable stress to the cell, leading to cellular toxicity. This cellular stress will likely modulate the steady-state level of molecular chaperones, resulting in the collapse of proteostasis.
Interactions of [SWI+] with [PSI+] and [PIN+]
We showed that [SWI+] has a significantly weaker Pin+ function than that of [PIN+]. Interestingly, although overproduction of Sup35 or its prion domain is required for detectable [PSI+] de novo formation in [PIN+] or [SWI+] cells, [SWI+] can promote a significant amount of [PIN+] appearance without Rnq1 overproduction. These results suggest that [SWI+] is a stronger inducer of [PIN+] than of [PSI+]. As shown in Figure 1C, this difference might be explained by the fact that the asparagine-rich PrD of Swi1 has a higher homology to the Rnq1 PrD than to the Sup35 PrD.12,13,17 The amino acid compositional differences of these PrDs may result in differences in their amyloid core structures, which in turn determine their cross-seeding abilities on different prion conformations. Furthermore, the Rnq1 PrD has a complex sequence feature including 4 distinct and semi-independent aggregation determinants that may provide more opportunities for cross-seeding.18

Earlier studies showed that overproduction of Sup35PrD-GFP in [PIN+] cells can result in the formation of both dot-shaped and ring/ribbon/rod-like aggregates, and only the ring/ribbon/rod-like aggregates are prone to establish stable [PSI+].4,19-22 Interestingly, the Sup35 aggregates in mature [PSI+] cells are also dot-shaped.1,4,19 While it is unclear how the ring/ribbon/rod-shaped Sup35 aggregates are processed to the final dot-shaped prion aggregates, it is reasonable to speculate that the non-inheritable Sup35 dots newly formed upon overproduction are structurally distinct from the dot-shaped aggregates in mature [PSI+] cells. Similarly, we found that Swi1 or Rnq1 aggregates formed upon overproduction are mostly dot-like and non-inheritable, distinct from the prion-prone ring/ribbon/rod-like aggregates.1 The fact that both the ring/ribbon/rod-shaped and the mature dot-like Sup35 foci in [PSI+] cells have a similar bundled-fibrillar structure22,23 may explain why the ring/ribbon/rod-like aggregates of Sup35NM-GFP can be further processed to mature stable [PSI+]. Interestingly, forming ring/ribbon/rod-like aggregates seems to be a feature shared by several Q/N-rich candidate proteins besides Sup35 and Rnq1.24 Our results indicate that the transition from ring/ribbon/rod-like aggregation to dot-shaped stable prion foci requires many generations.1 In the [PSI+] de novo formation process, co-localization of a pre-existing Pin+ factor and the newly generated ring/ribbon/rod-like Sup35 aggregates upon Sup35 overproduction has only been observed at the early initiation stage of [PSI+] prionogenesis.1,6,25 Our finding that the prion aggregates of both Rnq1 and Swi1 in [PIN+] and [SWI+] cells formed a beads-on-string organization with the newly formed Sup35 ring/ribbon/rod-shaped aggregates demonstrates that these pre-existing prion aggregates (beads) are physically associated with the newly formed prionogenic Sup35 aggregates (string) (see Fig. 2, middle panel), providing direct evidence supporting the cross-seeding model for prionogenesis. In addition, we observed that the co-localization frequency of the Pin+ factor and the newly formed Sup35 aggregates is positively correlated with the prion-promoting activity of the Pin+ factor,1 implying that a direct contact between the Pin+ factor and Sup35 is essential for [PSI+] induction. It is worth emphasizing that direct associations between 2 heterologous prions are rarely seen when they co-exist as mature prions.
When a newly formed prion is stabilized, the interaction between the newly formed prion and its facilitator is rarely detectable.1 These observations are consistent with an earlier report that [PIN+] is only required for [PSI+] de novo formation but not for its propagation.4,26 Once [PSI+] is established, it can stably propagate in the absence of [PIN+].
It has been proposed that when Sup35 is overproduced in [PIN+] cells, Sup35 rings/ribbons/rods are produced in and elongated from an ancient protein quality control compartment, IPOD (insoluble protein deposit), which is adjacent to the vacuole.22 IPOD may serve as a reservoir that retains multiple heterologous amyloid species, including prion aggregates;22 therefore, the occasionally observed overlap between mature heterologous prion aggregates may occur in IPOD. It seems that IPOD could serve as an ideal site for de novo heterologous prion cross-seeding, filamentous growth, and elongation of the prion chain.22 Though the newly formed Sup35 ring/ribbon/rod-shaped aggregates are found interacting with IPOD, whether prion de novo formation initiates in IPOD needs to be further investigated.22 Our observation that, in [SWI+] or [PIN+] cells, multiple Swi1 or Rnq1 aggregates were overlapping with the newly formed ring/ribbon/rod structures of Sup35 to form the beads-on-string-like organization beyond IPOD argues that cross-seeding might happen at multiple cellular sites. IPOD may be just one of the possible sites for cross-seeding and prionogenesis. The newly formed Sup35 rings are located in the cell periphery, and the actin cytoskeleton is believed to be critical in this earlier ring-processing event, perhaps by serving as a platform for prion initiation.20,27,28 Figure 2 illustrates a likely scenario where prion-prion interactions might occur in the prion initiation and maturation processes.
In the prion maturation process, smaller dot-like aggregates can be derived from the ring/ribbon/rod-shaped prion aggregates. This remodeling occurs through processing of large amyloid aggregates into smaller aggregates by the action of a group of chaperones, including Hsp104, Ssa1, and Sis1. Some of these smaller aggregates serve as seeds for prion transmission as they are believed to be distributed to daughter cells when cells divide. Intriguingly, the ring-like aggregates are frequently observed for [PSI+] but rarely seen for [PIN+] at the initiation stage of prion de novo formation. The [PIN+] prion aggregates appear mostly rod-like, not ring-like, suggesting a structural difference between the 2 premature prion aggregates. It may also suggest that the elongated Sup35 and Rnq1 pre-prion aggregates have distinct binding affinities to actin patches and/or other cytoskeleton components.
Once again, our data, together with other accumulating evidence,1,6,25 support the cross-seeding model in terms of mutually promoting prion-prion interactions in yeast. For example, Rnq1 can be immunocaptured with Sup35 during the de novo induction of [PSI+] in [PIN+] cells, indicating a direct physical association of Sup35 and Rnq1.29 The fact that preformed Rnq1 amyloid fibrils could be used as templates to cross-seed soluble Sup35NM in vitro also supports the cross-seeding model.6 However, our results do not exclude other possible mechanisms, such as those proposed in the titration model.
Closing Remarks
The prion concept has recently been extended beyond the territory of the proteinaceous pathogen PrPSc and protein-conformation-based fungal epigenetic elements. There are ample data suggesting that many amyloidogenic proteins can be transmitted in a way similar to that of PrPSc and are now considered as "prions." They can be associated with either functional protein aggregation or disease-associated pathogenesis,30,31 and include β-amyloid, α-synuclein, tau, and mutant SOD1.32 Aggregation of one amyloidogenic protein might trigger a complex interaction among multiple aggregation-prone proteins. As a consequence, formation of one prion can lead to modulation of one or more biological pathways, resulting in either beneficial phenotypes or pathogenic disorders. Studying prion-prion interactions in yeast might provide valuable information to aid our understanding of not only the prion phenomena in yeast but also the mechanisms underlying protein folding, aggregation, and prionogenesis in mammals.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Funding
The related work was supported by grants from the US National Institutes of Health.

Figure 2 legend (fragment): when Sup35 (or its prion domain) is overproduced, Sup35 ring/ribbon/rod-shaped aggregates are formed, presumably through cross-seeding with the pre-existing Swi1 prion amyloids at IPOD or at multiple cellular sites. At the same time, [SWI+] prion aggregates undergo significant morphological remodeling from multiple distinct dots to ring/ribbon/rod shapes. The remodeled [SWI+] prion aggregates are drastically co-localized with the newly formed Sup35 ring/ribbon/rod-shaped aggregates to form a beads-on-string organization (middle), supporting the cross-seeding model. During the maturation process, Sup35 ring/ribbon/rod-shaped structures are processed into dotted mature prion aggregates, likely through the action of chaperones such as Hsp104, Ssa1, and Sis1. These dotted aggregates can serve as seeds for prion transmission during cell division. In mature [PSI+] cells, the aggregates of [SWI+] and [PSI+] are mainly dot-shaped and do not interact, with the possible exception of IPOD (right). | 2018-04-03T01:52:13.340Z | 2014-11-02T00:00:00.000 | {
"year": 2014,
"sha1": "851ca477bc5dbcab0f1b28c3c3caaffa31f8ac9e",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/19336896.2014.992274?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "851ca477bc5dbcab0f1b28c3c3caaffa31f8ac9e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221397141 | pes2o/s2orc | v3-fos-license | Impact of the small-scale structure on the Stochastic Background of Gravitational Waves from cosmic strings
Numerical simulations and analytical models suggest that infinite cosmic strings produce cosmic string loops of all sizes with a given power-law. Precise estimations of the power-law exponent are still matter of debate while numerical simulations do not incorporate all the radiation and back-reaction effects expected to affect the network at small scales. Previously it has been shown, using a Boltzmann approach, that depending on the steepness of the loop production function and the gravitational back-reaction scale, a so-called Extra Population of Small Loops (EPSL) can be generated in the loop number density. We propose a framework to study the influence of this extra population of small loops on the Stochastic Background of Gravitational Waves (SBGW). We show that this extra population can have a significant signature at frequencies higher than $H_0(\Gamma G\mu)^{-1}$ where $\Gamma$ is of order $50$ and $H_0$ is the Hubble constant. We propose a complete classification of the gravitational wave power spectra expected from cosmic strings into four classes, including the model of Blanco-Pillado, Olum and Shlaer and the model of Lorenz, Ringeval and Sakellariadou. Finally we show that given the uncertainties on the Polchinski-Rocha exponents, two hybrid classes of gravitational wave power spectrum can be considered giving very different predictions for the SBGW.
Introduction
The first direct observation of Gravitational Waves (GWs) coming from the merger of two black holes [1] was both a wonderful check of the theory of General Relativity and the onset of GW astronomy. Since GWs propagate freely throughout the Universe, they are not limited by the last scattering surface, and give us an unprecedented opportunity to look for topological defects, and in particular cosmic strings. Cosmic strings are one-dimensional topological defects that may have formed during a symmetry-breaking phase transition in the early Universe [2][3][4][5]. Nambu-Goto strings are a powerful one-dimensional approximation to study these solitonic solutions on cosmological scales. The evolution of a Nambu-Goto string network in an expanding background has been studied both analytically [2,[6][7][8][9][10][11][12][13][14][15][16][17][18] and through numerical simulations [19][20][21][22][23][24][25] in the last decades, and is still the subject of intense research.
A general result is that the network relaxes to an attractor solution known as the scaling solution and remains self-similar with the Hubble radius. If cosmic strings were formed, scaling means they survive during the whole history of the Universe and are present all over the sky. Strings can induce anisotropies on the Cosmic Microwave Background and have been searched for in the Planck data. The current CMB constraints give an upper bound for the string tension µ of Gµ < 1.5 × 10^{-7} for Nambu-Goto strings and Gµ < 2 × 10^{-7} for Abelian-Higgs strings, where G is Newton's constant [26][27][28][29].
These bounds are calculated assuming a given scenario for the evolution of the loop number density throughout the history of the Universe (see below), and can depend strongly on those assumptions. Furthermore, each closed cosmic string loop radiates GWs, and their superposition produces a Stochastic Background of Gravitational Waves (SBGW) [30][31][32][33][34][35] which could be detected by gravitational wave detectors. This background has been searched for in the LIGO/Virgo O1 and O2 data and already gives a tighter upper bound on Gµ which is, however, very dependent on the cosmic string model used, ranging from Gµ < 1.1 × 10^{-6} to Gµ < 2.1 × 10^{-14} [36,37]. In section 4.3 we will explain the origin of the orders-of-magnitude difference between these two constraints. The most stringent and stable constraint today comes from pulsar timing experiments, giving Gµ ≲ 10^{-10} [38].
Building a model for the evolution of the cosmic string network is challenging, and involves both analytical modelling and numerical simulations. Nambu-Goto simulations are necessary to determine the large-scale behavior of the loop number density, but are unable to provide a description of the smallest scales as they include neither gravitational radiation nor the back-reaction that dominates on these scales [39][40][41]. One of the difficulties is the proliferation of kinks, which are discontinuities in the tangent vector of the string. Kinks are formed every time two strings intersect each other, are removed by outgoing loops and are smoothed by gravitational back-reaction. While the scaling of the large scales is today well supported by numerical simulations, the build-up of a population of kinks has raised some doubts on the scaling properties of the small scales [8-11, 16, 17, 24, 42, 43], and this situation cannot be settled with the simulations available today. A first attempt to model analytically the number of kinks using the one-scale model was performed in [8,9], and showed that kinks accumulate until their number reaches a scaling regime, introducing another scale to the system [42]. Models were later introduced to take this small-scale structure into account; these include the three-scale model [10], a renormalized velocity-dependent one-scale model [16,17] and the Polchinski-Rocha model based on fractal dimensions [11,12,34,44], which we will use in the following. It introduces a positive exponent χ, defined later in equation (2.1), and one of its particular predictions is that the gravitational back-reaction scale is not ΓGµt as in [34,43], but rather the smaller scale Υ(Gµ)^{1+2χ} t, where Υ is of order 20.
The goal of this article is to provide a unified framework which can continuously describe, with a limited set of parameters, different cosmic string loop models from the literature and give predictions for the SBGW. It is built using the analytical model of Polchinski and Rocha [11][12][13] and later developments [14,18], and therefore includes the parameter χ. With this framework, we aim at gaining a deeper understanding of the SBGW and why constraints on the string tension from LIGO/Virgo are so model-dependent. We also expect to use this framework to give model-independent constraints on the string tension.
Using our unified framework, we can furthermore focus on two particular models, the Blanco-Pillado, Olum and Shlaer (BOS) [15] and the Lorenz, Ringeval and Sakellariadou (LRS) [14] models. The BOS model is based on the simulations conducted in [15,23] and makes the assumption that the production of loops with sizes smaller than the gravitational radiation scale tΓGµ, where Γ ≈ 50, is suppressed. On the other hand, the LRS model is based on the simulations conducted in [25] and on the analytical studies of [11,12], which assume that small loops are produced down to the gravitational back-reaction scale, which is smaller than the gravitational radiation scale by several orders of magnitude. As a result the two models give very different predictions for the loop number density. Relative to the first one, the second gives rise to an Extra Population of Small Loops (EPSL). The smaller back-reaction scale à la Polchinski-Rocha can be introduced in the BOS model, also producing an additional population of small loops [18]. It is therefore interesting to understand its effect on the SBGW. This paper is set up as follows. Section 2 describes the theoretical framework used to unify several cosmic string models found in the literature. In particular, we show that the loop number density is naturally composed of two distinct populations, a Standard Loop Number Density (SLND) which is very similar to the prediction of the one-scale model, and an EPSL. Section 3 shows how to calculate analytically an estimate of the SBGW from cosmic strings and discusses the validity of the approximations made. Section 4 then combines the results to obtain the sensitivity of several types of GW experiments to the uncertainties on the cosmic string parameters. Finally, section 5 presents our conclusions.
The network of infinite strings
A standard way to model the evolution of cosmic strings is to study infinite strings and closed loops as two distinct populations in interaction. The infinite strings, of cosmological sizes, are stretched by the expansion of the universe, characterized by the scale factor a(t) which evolves as t^ν, where ν = 1/2 in the radiation-dominated era and ν = 2/3 in the matter-dominated era. At the same time, they lose energy by forming loops. Closed loops are formed when two infinite strings intersect each other or when one self-intersects. In principle these loops can rejoin the infinite strings or fragment into smaller loops. At the end of the fragmentation, one is left with smaller, non-self-intersecting, long-lived loops. It is this population of long-lived, non-self-intersecting loops that dominates the SBGW and that we model.
In this article, we assume the inter-commutation probability to be equal to one, although for some types of cosmic strings it may be strictly smaller than one [45]. Based on analytical models [4] and numerical simulations [15,23,25], we expect the network of infinite strings to scale in the radiation-dominated or in the matter-dominated era. Scaling is an attractor solution of the network in which all the relevant length scales are proportional to the horizon size d_h, which itself is proportional to the cosmic time t. During scaling, the energy density contained in cosmic strings evolves as ρ_∞ ∝ t^{-2}.
The loop production function P(ℓ, t) is the number of long-lived, non-self-intersecting loops of invariant length ℓ formed per unit volume per unit time at cosmic time t. In scaling, t^5 P(ℓ, t) is expected to be a function of the scaling variable γ = ℓ/t only. There exist different calculations in the literature concerning the shape of this loop production function. In the one-scale model introduced in [7], all loops are assumed to be formed with the same size, meaning the loop production function is a Dirac-delta distribution. This typical size is then inferred from numerical simulations. In the work of [11,12], it has been argued that the loop production function is a power law, something which was found in the simulations of [24] and is compatible with the simulations of [25]. In such a case the loop production function is parameterized by an exponent χ and a multiplicative constant c. The analytical study of the small-scale structure in [11] suggested the introduction of a gravitational back-reaction scale γ_c = Υ(Gµ)^{1+2χ}, where Υ is of order 20, below which the production of loops by the network is suppressed. This loop production function was developed in an attempt to take into account the small-scale structure of the network. It was shown in [14] that the precise shape of the loop production function below γ_c has only a small impact on the Loop Number Density (LND). It has been used in [14,46] to calculate the loop number density and leads to a significant Extra Population of Small Loops (EPSL) with respect to the one-scale scenario. To fit the numerical simulations of [25], their analysis assumed the network to be sub-critical, meaning χ < χ_crit. Critical (χ = χ_crit) and super-critical (χ > χ_crit) networks were later studied in [18]. This super-critical regime is supported by the Nambu-Goto simulations of [15]. It is therefore important to include super-critical regimes in our framework for future applications.
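For concreteness, a commonly assumed scaling form for the power-law loop production function just described, with sharp cutoffs at the back-reaction scale and at horizon size, is the following sketch; the exact normalization and cutoff shape of equation (2.1) may differ.

```latex
% Sketch of a scaling power-law loop production function with a sharp
% back-reaction cutoff at gamma_c and an infrared cutoff at gamma_inf;
% the exact normalization and cutoff shape of equation (2.1) may differ.
t^{5}\,\mathcal{P}(\ell,t) \;=\; c\,\gamma^{2\chi-3}\,
   \Theta(\gamma-\gamma_c)\,\Theta(\gamma_\infty-\gamma),
\qquad \gamma \equiv \frac{\ell}{t},
\qquad \gamma_c = \Upsilon\,(G\mu)^{1+2\chi}.
```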
Loop number density
Once the loop production function is known, it can be injected into the Boltzmann equation for the loop number density, in which the effect of the expansion of the universe is taken into account by introducing the scale factor a. The loops radiate GW at a rate assumed to be constant and set by the parameter Γ, which is of order 50 [30,32,47]. This Boltzmann equation can be solved if one assumes either radiation or matter domination and that the network of infinite strings is scaling, so that the loop production function scales and is given by equation (2.1). The complete set of solutions can be found in [18]. The loop number density no longer necessarily scales, unless one assumes that the loop production function is cut off for γ ≥ γ_∞, where γ_∞ is expected to be of the order of the Hubble horizon. The authors suggest the inclusion of a sharp infrared cutoff to regularize those new solutions and showed that the precise shape of the cutoff only has a small effect on the loop distribution. We neglect it in the remainder of this paper. Even in these critical and super-critical regimes, one can observe a large population of small loops in the LND up to a new value χ = χ_ir, introducing an additional knee in the LND at a new characteristic scale. The fact that critical and super-critical models present an extra population of small loops motivates us to study the impact of this population on the SBGW.
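A standard form of this kinetic equation and of the constant loop-decay rate, as commonly written in the literature, is the following sketch; conventions may differ from the exact equations used here.

```latex
% Standard form of the loop kinetic (Boltzmann) equation and of the constant
% gravitational-wave decay rate, written as a sketch; conventions may differ.
\frac{\partial n(\ell,t)}{\partial t}
  \;-\;\Gamma G\mu\,\frac{\partial n(\ell,t)}{\partial \ell}
  \;+\;3\,\frac{\dot a}{a}\,n(\ell,t)
  \;=\;\mathcal{P}(\ell,t),
\qquad
\frac{\mathrm{d}\ell}{\mathrm{d}t}\bigg|_{\mathrm{GW}} = -\,\Gamma G\mu,
\qquad \Gamma \simeq 50 .
```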
Normalization of the loop production function
Currently there is a debate on how to normalize the loop production function, that is, how to fix the constant c in (2.1), based on measurements from numerical simulations. In this section, we review two different approaches followed in the community. The first approach (explicitly stated in [23,48] and implicitly used in the one-scale model [4]) is to use an energy conservation equation to put an upper bound on the energy lost by the network of infinite strings into loops. One assumes that the energy density of the infinite string network ρ_∞ is lost through the expansion of the Universe, through redshifting, and through the formation of non-self-intersecting loops [4], which leads to the energy-balance equation (2.9).
Figure 1: Normalization of the loop production function. The boundary of the blue region is given by the "one-scale energy balance". The green region is given by measurements in [25]. The red line shows the set of parameters giving order unity loops per Hubble radius, see section 2.4. The blue dot corresponds to the parameters of the BOS model, and the orange dot to those of the LRS model.
In equation (2.9), H is the Hubble parameter and v̄²_∞ is the mean squared velocity of the infinite strings, which has been measured to be 0.45 (resp. 0.40, 0.35) in flat space-time (resp. in the radiation-dominated, matter-dominated era) [23]. On assuming that the scale factor evolves as a ∝ t^ν and inserting (2.1) into (2.9), the energy density of the infinite strings follows the well-known attractor scaling solution ρ_∞ ∝ µt^{-2}. This can be compared to the values found for each era in numerical simulations and used to give an upper bound on the parameter c once χ, γ_c and γ_∞ are fixed. The corresponding allowed parameter space for (c, χ) is denoted as "one-scale energy balance" in figure 1. It should be noted that numerical simulations include neither gravitational radiation nor back-reaction, meaning that the only equivalent of a lower cutoff in the integral of equation (2.10) is the smallest length scale set at the initialization of the simulation. If χ ≤ 1/2, the integral is dominated by this nonphysical lower bound, and one expects t²ρ_∞ to diverge if the simulation runs long enough [48].
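For reference, a commonly used one-scale energy balance of this kind takes the form sketched below; the precise form and conventions of equation (2.9) may differ.

```latex
% Sketch of the standard one-scale energy balance for the infinite-string
% network; the precise form and conventions of equation (2.9) may differ.
\frac{\mathrm{d}\rho_\infty}{\mathrm{d}t}
  \;=\; -\,2H\!\left(1+\bar v_\infty^{2}\right)\rho_\infty
  \;-\;\mu\!\int_{0}^{\infty}\!\ell\,\mathcal{P}(\ell,t)\,\mathrm{d}\ell .
```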
Another approach, advocated in [14], is to consider only the large-scale LND determined in simulations as trustworthy. It can be parameterized as a power law on large scales γ > γ_d and fitted to the analytical predictions of [18]. In the numerical simulations of [25], the authors obtain a value of the amplitude A which is compatible with other numerical simulations. As shown in section 2.4, the value of A is related to the parameters of the loop production function (c, χ). Hence a given value of A determines a curve in the (c, χ) plane, which is the red line of figure 1.
While there seems to be a general agreement on the parameter A, there is a strong tension on the parameter p. Even though the uncertainty interval given for p in [25] does not exclude the degenerate value 5/2 in the radiation era, their best fit systematically points to a value higher than 5/2, and the authors have used the best-fit value p = 2.6 since then, thus selecting the green region of parameter space shown in figure 1. One can see that these two interpretations of two different numerical simulations do not agree on the values of the different parameters. It should be noted that the loop production function has been measured directly in [23], giving values for (c, χ) compatible with the energy-balance argument. The group of Ringeval et al. is currently working to improve the measurement of the loop production function in their own simulations, to see whether an agreement can be reached and whether the results of [49] can be reproduced.
For the remainder of this paper, for a given value of χ, we will determine the normalization factor c so as to fit the parameter A of the large-scale LND. This assumption allows us to study both models on the same footing and is more likely to remain valid once an agreement is found.
Decomposition of the contributions in the different eras
The aim of this study is to determine whether the Extra Population of Small Loops (EPSL) described in [14,18] is an observable feature of the SBGW. To this end, we propose a natural decomposition of the loop number density into two parts, as figure 2 illustrates. The first contribution, which we call the Standard Loop Number Density (SLND), is given by equation (2.12): a power law with amplitude C and exponent p, where γ_∞ is a cutoff on the sizes of the loops. It is, for instance, the result of a Dirac loop production function t^5 P = c δ(γ_∞ − γ) [4]. In this particular case C = c(γ_∞ + γ_d)^{3−3ν} and p = 4 − 3ν. It also describes well the large-scale behavior if the loop production function is the power law of equation (2.1) [18]. The constants are then fixed as follows:
• in the sub-critical regime χ < χ_crit, C = c and p = 3 − 2χ;
• in the super-critical regime χ > χ_crit, the constants involve a quantity given in equation (2.8), and c is fixed by the normalization of the loop production function, as discussed in section 2.3.
These approximations break down near χ_crit, and one should add regularization terms coming directly from the analytical expressions of [18]. For clarity we omit these terms here and give the details in appendix C.
On top of the SLND, we superimpose an Extra Population of Small Loops (EPSL), described as a piece-wise function motivated by the work of [18]. This definition comes directly from the fact that we assumed a sharp cutoff at the back-reaction scale γ_c. The analytic formulae would be a little more complicated with a power-law cutoff, but the result would not be qualitatively modified.
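The following minimal Python sketch illustrates this two-population structure. All functional forms and numbers are schematic placeholders chosen for illustration; in particular the slope and amplitude of the extra population below γ_d are invented parameters and do not reproduce equation (2.12) or the formulae of [18].

```python
# Schematic two-population loop number density (illustrative placeholders only).
# SLND: power law in (gamma + gamma_d), cut off at gamma_inf.
# EPSL: extra power law between the back-reaction scale gamma_c and gamma_d;
# its slope q_extra and amplitude A_extra are invented illustration parameters.
Gmu, Gamma, Upsilon, chi = 1e-11, 50.0, 20.0, 0.2
gamma_d = Gamma * Gmu                      # assumed gravitational decay scale
gamma_c = Upsilon * Gmu ** (1 + 2 * chi)   # back-reaction scale (see section 2.1)
gamma_inf = 0.1                            # illustrative large-scale cutoff
C, p = 0.2, 2.5                            # illustrative SLND amplitude and slope
A_extra, q_extra = 1e-12, 1.8              # illustrative EPSL amplitude and slope

def slnd(gamma):
    """Standard loop number density, t^4 * n(gamma) (schematic)."""
    return C / (gamma + gamma_d) ** p if gamma < gamma_inf else 0.0

def epsl(gamma):
    """Extra population of small loops, nonzero only for gamma_c < gamma < gamma_d."""
    return A_extra * gamma ** (-q_extra) if gamma_c < gamma < gamma_d else 0.0

for exponent in (-16, -14, -12, -10, -8, -4, -1):
    g = 10.0 ** exponent
    print(f"gamma = 1e{exponent:+03d}   SLND = {slnd(g):.3e}   EPSL = {epsl(g):.3e}")
```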
In the following, the analysis focuses on the impact of these two populations either in the radiation-dominated era or in the matter-dominated era. One should note that large loops produced during the radiation era can survive long enough to be an important source of GW in the matter era. They are a non-scaling population of loops, and some models (see [47]) predict that they dominate during the matter-dominated era. Their contribution to the SBGW is calculated in Appendix E.3 and taken into account in our analysis. On the contrary, loops of size smaller than γ_d in the radiation era, which is the case for the EPSL, do not survive long enough into the matter era to contribute significantly to the SBGW.
Emission of gravitational waves
Cosmic string loops oscillate and emit GW. The incoherent sum of their gravitational radiation forms a SBGW, which was first calculated in [30]. The oscillation of the loops is not the only channel of gravitational radiation: burst-like events from cusps, kinks and kink-kink collisions are also sources of gravitational radiation, whose wave-forms were calculated in [33,46,50].
There exist two main methods to calculate the SBGW. The first consists in introducing an effective decomposition into harmonics P_m, m ∈ N, where the lowest modes are dominated by the oscillatory movement of the loop, with typical frequency 2/ℓ, where ℓ is the invariant length of the loop, and the higher modes are dominated by burst-like events [47]. Typically P_m ∝ m^{-q} with q = 4/3 (respectively 5/3, 2) for cusps (respectively kinks and kink-kink collisions). The energy density carried by the GW per unit logarithmic interval of frequency is given by equation (3.1) [47], in which H(z) is the Hubble parameter, t(z) is the cosmic time, and f is the frequency of the wave in the detector. Details on the cosmological parameters used in this paper are summarized in appendix A. The redshift at which cosmic strings were formed is denoted by z_*, and it depends on the energy scale of the phase transition, determined by the string tension. Considering that the phase transition happened during the radiation era and that the temperature today is T_0, the redshift z_* can be computed explicitly; we will take it to be infinite in the following.
The other method to calculate the SBGW consists in considering the sum of all burst-like events which are typically not isotropic [33,35,51]. This approach allows one to remove events resolved inside a detector from the SBGW, as they are not part of the background anymore. A detailed discussion of the differences of the two approaches can be found in [52].
In this paper we will use the first method. To keep the following analysis simple, we make the simplifying assumption that cosmic string loops emit only in their fundamental mode. The modes m > 1 give only a small modification of the qualitative properties of the spectrum [38,53], and we briefly discuss their impact in section 3.3. Introducing Q = 16π/(3Γ), the fraction of the critical density carried by the energy of GW can then be expressed in terms of the loop number density; this is the expression we evaluate contribution by contribution below.
Asymptotic description of the stochastic background of GW
With the assumptions made in this framework, one can calculate the energy density power spectrum for each contribution individually, namely the contribution from SLND on one side and the contribution from the EPSL on the other side. Consider for instance the SBGW produced by the SLND in the radiation era.
In the radiation era, we can make standard approximations for the Hubble parameter and the cosmic time, with H_r = H_0 √Ω_r. This allows us to simplify equation (3.4). Inserting the SLND contribution from equation (2.12) and noticing that p > 1, one obtains a closed-form expression for the spectrum. We can make several remarks on this particular result that can be extended to the other contributions.
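The standard deep-radiation-era relations behind these approximations are, as a sketch:

```latex
% Deep radiation-era approximations (standard relations, quoted as a sketch).
H(z) \simeq H_r\,(1+z)^{2},
\qquad
t(z) \simeq \frac{1}{2H(z)} = \frac{1}{2H_r\,(1+z)^{2}},
\qquad
H_r \equiv H_0\sqrt{\Omega_r}.
```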
The power spectrum has two characteristic frequency scales. In particular, f = 4H_r(1 + z_eq) γ_∞^{-1} is a low-frequency cutoff for the energy density. This frequency is so low with respect to the frequency range of the GW detectors that we omit it in the following. The frequency f = 4H_r(1 + z_eq) γ_d^{-1} is a knee in the SBGW. These two scales are well separated, and the power spectrum can be approximated by power laws far from these frequencies.
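As a rough numerical orientation, the sketch below places these two frequencies for illustrative parameter values; the rounded cosmological numbers, γ_d = ΓGµ and γ_∞ = 0.1 are assumptions used only for this estimate.

```python
import math

# Rough placement of the two characteristic frequencies of the radiation-era
# SLND contribution.  All numbers are illustrative: rounded cosmological
# values, gamma_d = Gamma*Gmu and gamma_inf = 0.1 are assumptions.
H0 = 2.2e-18        # s^-1  (~ 68 km/s/Mpc)
Omega_r = 9.0e-5
z_eq = 3400
Gmu, Gamma = 1e-11, 50.0
gamma_d = Gamma * Gmu
gamma_inf = 0.1

H_r = H0 * math.sqrt(Omega_r)
f_knee = 4 * H_r * (1 + z_eq) / gamma_d       # knee of the spectrum
f_cutoff = 4 * H_r * (1 + z_eq) / gamma_inf   # low-frequency cutoff

print(f"knee frequency       ~ {f_knee:.1e} Hz")
print(f"low-frequency cutoff ~ {f_cutoff:.1e} Hz")
```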
We performed the same calculations for the other contributions, the SLND and EPSL during the radiation and the matter era, in Appendices D and E, and summarize the asymptotic behaviors in tables 1, 2 and 3. We can make the following general remarks:
• there is a typical frequency scale at which the power spectrum presents a knee, roughly H_0 γ_d^{-1} for the SLND and H_0 γ_c^{-1} for the EPSL; these two frequencies are very well separated;
• at low and high frequencies, the power spectrum behaves as a power law;
• the width of the knees can be estimated from the complete calculations but is essentially small compared to the separation between H_0 γ_d^{-1} and H_0 γ_c^{-1} for Gµ ≪ 1;
• the power spectrum is cut off at low frequencies, roughly at H_0 γ_∞^{-1} for the SLND and H_0 γ_d^{-1} for the EPSL.
From these tables, one recovers that the loops produced during the radiation-dominated era give a plateau at high frequencies while all the other contributions decay as f^{-1}, meaning that at high enough frequencies the SBGW is a plateau whose dominant contribution comes from the radiation era. On the contrary, the low-frequency region is usually dominated by GW produced during the matter-dominated era. Indeed, the contributions from the radiation era and from the loops produced in the radiation era and decaying into the matter era have similar shapes in the low-frequency range, but as Ω_m ≫ Ω_r, the latter contribution dominates. Another feature one can see is that in the sub-critical case (table 1), the slopes of the SBGW from the large-loop population depend on the values of χ_r and χ_m, where the index r denotes radiation domination and m matter domination, whereas in the super-critical regime (table 2) the frequency dependence of the spectrum is completely frozen.
For the EPSL, the spectrum presents a knee at the frequency scale H_0 γ_c^{-1} and is completely suppressed at frequencies below H_0 γ_d^{-1}. Therefore, any impact on the SBGW occurs at frequencies higher than H_0 γ_d^{-1}. In this frequency range, the dominant contribution coming from the SLND is the radiation-domination one.
Beyond the fundamental mode
In subsection 3.2 we made the assumption that a loop emits GW only in its fundamental mode, but this is not generally the case, especially if cusps or kinks are present on the loop [33,35]. If cusps or kinks are present, the higher modes of the spectral power P_m are not zero but behave as m^{-q}, where q = 4/3 for cusps, 5/3 for kinks and 2 for kink-kink collisions. Even though there have been attempts to calculate the spectral power for all values of m [32], some even taking into account the gravitational back-reaction [39], we will make an Ansatz for P_m proportional to m^{-q}, with the Riemann zeta function ζ ensuring the normalization of P_m. Starting from equation (3.1) during the radiation era and injecting this spectral power P_m, one can evaluate the spectrum for the SLND of equation (2.12). At high frequency, and under the assumption that γ_∞ ≫ γ_d, the spectral power factorizes and one recovers the result obtained assuming only the fundamental mode. At low frequency the picture is slightly different. Even though we have only included the effects of the spectral power P_m in this single case, a simple calculation shows that this result can be generalized to the other types of loop distribution we have discussed so far. At high frequencies, the SBGW is insensitive to the decomposition into harmonics, while at low frequencies it is multiplied by the factor given in equation (3.14).
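A small numerical check of such a normalized Ansatz, P_m ∝ m^{-q}/ζ(q) with the total power summing to Γ (our own normalization convention, used here only for illustration), is:

```python
from scipy.special import zeta

# Normalized harmonic decomposition P_m = Gamma * m**(-q) / zeta(q), so that
# the sum over m approaches Gamma.  The exponents q correspond to cusps (4/3),
# kinks (5/3) and kink-kink collisions (2); the convention is illustrative.
Gamma = 50.0
for q, label in [(4.0 / 3.0, "cusps"), (5.0 / 3.0, "kinks"), (2.0, "kink-kink")]:
    z = zeta(q, 1)                     # Hurwitz zeta with a=1 equals Riemann zeta(q)
    partial = sum(Gamma * m ** (-q) / z for m in range(1, 100001))
    print(f"{label:9s} q = {q:.2f}   sum over m <= 1e5 : {partial:.2f}  (target {Gamma})")
```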
Results
The aim of this section is to characterise the shape of the SBGW as a function of the loop production function exponents χ_r and χ_m. In particular, we assess the influence of the EPSL on the SBGW and divide the parameter space (χ_r, χ_m) into four classes with specific features.
Figure 3: Impact of the extra population of small loops onto the SBGW in the parameter space (χ_r, χ_m) for Gµ = 10^{-13}. In the blue region, the high-frequency plateau of Ω_gw is dominated by the extra population of small loops produced during the radiation era. In the red region, the spectrum presents a peak produced by the EPSL during the matter era. Outside those regions, the population of small loops can be neglected.
Influence of the Extra Population of Small Loops on the SBGW
We can split the parameter space (χ_r, χ_m) into different regions depending on whether the EPSL from the radiation or the matter era has a significant imprint on the SBGW.
Loops from the radiation era produce a plateau at high frequency in the SBGW. The extra population of small loops introduces new features in the spectrum if its plateau is higher than the plateau of the SLND; this condition defines the blue region of figure 3. In this figure we have used the regularized formulae of appendix C around χ_crit. We provide an analytical expansion in terms of 1/ln(Gµ) in appendix F for the position of the blue region. It should be noted that the EPSL produced during the radiation era can be dominant at high frequencies even if the network is super-critical. This sets a new scale for χ_r, between χ_crit and χ_ir. For loops produced during the matter era, we assume that the extra population of small loops is visible if its peak, located at the frequency 3H_m γ_c^{-1}, is higher than all the other contributions at this frequency. This is represented as the red region in figure 3. Contrary to the loops produced during the radiation era, only a subset of the sub-critical models during the matter era produce detectable features in the SBGW. From figure 3, one can see that the BOS model can be safely replaced by an effective Dirac-distribution loop production function, for two reasons. First, the network is super-critical during both the matter and the radiation era, meaning the SLND is universal, with slope −5/2 during the radiation era and −2 during the matter era [18]. Secondly, figure 3 shows that the extra population of small loops has a negligible impact on the SBGW. Figure 3 can be used to build a classification of the various SBGW in the parameter space (χ_r, χ_m). Including the separation between sub-critical and super-critical regimes, there are nine different classes of spectra one can expect. For simplicity, let us neglect the separation between sub-critical and super-critical and present four classes having distinctive features in terms of the SBGW.
Hybrid models
The first two classes are represented by the well-known BOS model in figure 4a and the LRS model in figure 4b, whose properties are summed up in the figure. As we showed in the previous section, the BOS model can effectively neglect the EPSL entirely. On the contrary, the EPSL is a dominant source of GW in both the radiation and the matter era for the LRS model.
We can add to this list two new hybrid classes of models. In figure 4c, the EPSL of the radiation era can be neglected but not that of the matter era, leading to a peak around the frequency 3H_m γ_c^{-1}. As we explain in the following section, this peak leads to interesting features when we consider detection by GW detectors. Figure 4d shows the opposite class, in which the EPSL of the matter era can be neglected but not that of the radiation era, producing a small valley in the SBGW.
As we attempted to make apparent in figure 4, each of those classes has a different shape from which one can read off the parameters of the cosmic string network, apart from models like the BOS model, for which the shape of the SBGW does not depend on (χ_r, χ_m).
Constraints on the string tension from GW experiments
No SBGW has yet been detected, either by the European Pulsar Timing Array [38] or in the first two LIGO/Virgo runs [36,37], giving only upper bounds on the cosmic string tension. New data analysis techniques are being devised for the next generation of GW detectors such as LISA [54]. While ongoing and future GW experiments could potentially detect the SBGW coming from cosmic strings, it is a challenging data analysis problem to characterize the observed spectrum and distinguish between the variety of expected astrophysical and cosmological sources.
In this section, we do not attempt to tackle the technical difficulties of the detection of a SBGW. In particular, we will assume that we are able to separate the astrophysical foreground from the cosmological source of GW. The theoretical GW detector is modeled as having a given sensitivity curve as a function of frequency. We will make the assumption that the bandwidth of the detector is infinitely thin around a typical frequency, with a given sensitivity in Ω_gw. This is of course a crude assumption; however, we expect that progress in data analysis techniques can be effectively taken into account by changing the sensitivity of the instrument. As we possess analytic expressions for the stochastic background of gravitational waves within our framework, we can easily explore the parameter space (Gµ, χ_r, χ_m). The results are summarized in figure 5.
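To make the logic of this idealized detector concrete, the toy sketch below applies the thin-bandwidth criterion to a deliberately crude placeholder spectrum (a plateau plus a bump that moves with Gµ, not our analytic SBGW formulae) and shows that the excluded region in Gµ can be a union of disjoint intervals rather than a single upper bound.

```python
import math

# Toy thin-bandwidth detection criterion: a model is "excluded" when its
# predicted Omega_gw at the detector frequency exceeds the sensitivity.
# The spectrum is a placeholder (plateau + bump moving with Gmu), chosen only
# to illustrate how excluded Gmu values can form disjoint intervals.
f_det, omega_sens = 30.0, 1e-9     # illustrative detector frequency and sensitivity

def toy_spectrum(f, Gmu):
    plateau = 1e-2 * math.sqrt(Gmu)                 # placeholder plateau level
    f_peak = 3e-15 / Gmu                            # placeholder peak position
    width = 0.3                                     # bump width in decades
    bump = 20 * plateau * math.exp(-0.5 * (math.log10(f / f_peak) / width) ** 2)
    return plateau + bump

grid = [10 ** (-11 - 0.02 * k) for k in range(320)]   # Gmu from 1e-11 down to ~1e-17.4
excluded = [toy_spectrum(f_det, g) > omega_sens for g in grid]
n_intervals = int(excluded[0]) + sum(
    1 for i in range(1, len(grid)) if excluded[i] and not excluded[i - 1]
)
print(f"disjoint excluded Gmu intervals for this toy spectrum: {n_intervals}")
```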
As was shown in previous sections, the extra population of small loops modifies the GW spectrum at frequencies higher than 3H_m γ_d^{-1}; hence we expect it to have an impact on high-frequency instruments such as LIGO/Virgo. It turns out the effect of the EPSL is quite dramatic for ground-based telescopes, as illustrated in figure 5a. Not only does the constraint on Gµ span nearly 10 orders of magnitude over the parameter space, it also presents a folding for small values of χ_m ≲ 0.3 and χ_r ≲ 0.3. The folding is illustrated by a slice at constant χ_m in figure 5b. This peculiar feature means that the constraint on Gµ for these models is not an upper bound on Gµ but rather a set of excluded intervals for Gµ. This can be understood by looking at figure 4c. The peak at f = 3H_m γ_d^{-1} caused by the EPSL produced during the matter era enters the bandwidth of the detectors for a certain range of Gµ, excluding an additional interval of Gµ.
On the contrary, experiments at lower frequencies are not affected by the extra population of small loops and are only sensitive to the slopes of the SLND. As the shape of the SLND is universal for super-critical models, we expect the detection surface to be flat in the upper-right corner for low-frequency experiments. For sub-critical networks, however, the shape of the spectrum is modified and we expect the detection surface to depend on the values of χ_r and χ_m, as can be seen in figures 5c and 5d.
Conclusion
Our framework allowed us to produce analytic formulae for the SBGW from cosmic strings including their small-scale structure. In particular, the introduction of a back-reaction scale γ_c ≪ γ_d produces an Extra Population of Small Loops (EPSL), which can have an important effect on the SBGW for the LRS model [14]. We proposed a parametrization, using the variables χ_r and χ_m, of the uncertainty on the dynamics of the infinite string network [14,18]. We showed that the predictions of BOS [47] are stable if one introduces this back-reaction scale, and that the extra population of loops is subdominant in terms of GW production in this particular model. We are also in agreement with LRS [14].
We showed that the small-scale structure of cosmic strings can have a significant impact on the SBGW even outside the super-critical regime and calculated the region of the parameter space where its effect cannot be neglected. We classified the GW power spectra coming from cosmic strings into four different classes, two of which are new and which we call hybrid models. The values of the parameters χ_r and χ_m for these two hybrid models are not supported by any numerical simulation; however, the uncertainty on χ_r and χ_m motivates us to consider them.
Figure 5: Note that the detection surface is folded for LIGO/Virgo, explaining why constraints on Gµ jump several orders of magnitude in the lower left corner. Figure 5b is a slice at constant χ_m.
We have also systematically estimated the constraints on the string tension Gµ from different types of GW detectors and showed that low-frequency experiments will provide more stable and model-independent bounds, while ground-based detectors will be very sensitive to the details of the small-scale structure of the cosmic string network.
F Analytic estimation for the boundary in χ_r
The question is to find for which values of χ_r the EPSL leaves a signature in the SBGW. This boils down to finding the value of χ_r at which the two contributions are equal. Gµ being a very small quantity, one can perform an expansion in 1/ln(Gµ), in which the quantity χ_* = √(3ν − 1)/2 = 1/(2√2) appears; one then obtains the position of the boundary of the blue region. | 2020-09-02T01:01:27.704Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "29346a7987317c84244df338ff8b54a43f0573fd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2009.00334",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "29346a7987317c84244df338ff8b54a43f0573fd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
216399155 | pes2o/s2orc | v3-fos-license | Maternal Early Warning Scores (MEWS): Development of a Nigerian National Maternal Early Warning Scores (MEWS) Version
Maternal Early Warning Scores (MEWS) is an acute maternity illness severity scoring and escalation trigger system. Background: The use of the MEWS in Nigerian obstetric practice is rare, but a strong desire for its use has been reported and the number of international partnerships with clinicians in Nigeria for the introduction of this tool is increasing. The use of multiple versions of MEWS in Nigerian clinical practice has therefore been recognised as a potential patient safety risk, best managed through the development of a Nigerian national MEWS version. Aims: We set out to explore the development of a Nigerian national MEWS version that would make an acceptable fit for the Nigerian acute obstetric care environment and be used by all clinicians involved in acute obstetric care in Nigeria. Methods: The United Kingdom MOEWS was used as the baseline (template); following planned training on MEWS using the above template, experts at these meetings were surveyed on the suitability (for Nigeria) of the template and the necessity for modifications, using SurveyMonkey and paper-based questionnaires. Results: One hundred and forty-one experts responded out of two hundred requests (70.5%); one hundred and twenty-five (88.6%) opted for modifications of the template. Of these, one hundred and three (82.4%) favoured the addition of a parameter directly related to pre-eclampsia. Conclusion: A modification of the UK MOEWS by the addition of another parameter related to pre-eclampsia was favoured by the majority of experts in this study for the development of a Nigerian national MEWS.
Introduction
Maternal Early Warning Scores (MEWS) is an acute maternity illness severity scoring and escalation trigger system, the absence of this track and trigger tool or the failure of staff to use this tool for the early recognition and escalation of acutely deteriorating obstetric conditions have been recognized as contributory factors in several cases of sub-standard care, morbidities and mortalities [1,2,3,4] .
Improving the recognition of acute deterioration and preventing mortality require a step-wise solution involving staff education, patient monitoring, recognition of patient deterioration, a system to call for help, and an effective clinical response [5] . This five-ringed "chain of prevention" can provide a structure for hospitals to design care processes to prevent and detect patient deterioration and death. The Maternal Early Warning Scores (MEWS) provides solutions to many of these and can set the foundation for team approach to emergency obstetric care (EMOC).
The development of the Maternal Early Warning Scores (MEWS) arose from the report of the United Kingdom Confidential Enquiries into Maternal and Child Health (CEMACH) of 2003-2005. In that triennial report, the evidence of substandard care and the mortalities reviewed were linked to the failure of clinical staff to recognize acutely deteriorating obstetric conditions and trigger escalation of care sooner. This prompted a recommendation in the report: "There is an urgent need for the routine use of a national early warning chart, which can be used in all obstetric women which will help in the more timely recognition, treatment and referral of women who have, or are developing, a critical illness" [6].
The track and trigger system that emerged from this recommendation was called Modified Obstetric Early Warning Scores (MOEWS) [7] in the United Kingdom -named Modified Obstetrics EWS (MOEWS) to distinguish it from the non-obstetric Early Warning Scores (EWS) because of the physiological changes of pregnancy, outside the United Kingdom, it is simply referred to as Maternal Early Warning Scores (MEWS).
Following the endorsement of MOEWS and EWS by the United Kingdom National Institute for Health and Clinical Excellence (NICE) [8], the tool rapidly gained popularity. Unfortunately, different hospitals began to develop local versions, to the point that over 72 recorded versions of the early warning scoring systems were in use at different hospitals in the United Kingdom prior to the call of the Royal College of Physicians, London, for a national early warning score [9,10,11].
Internationally, other versions of the MEWS are also in use in the United States of America [12], Belgium, the Republic of Ireland and elsewhere. With the adoption of this tool in the United Nations post-2015 Sustainable Development Goals (SDGs) within Goal 3, target 13 (SDG3:13), "Strengthen the capacity of all countries, in particular developing countries, for early warning, risk reduction and management of national and global health risks", the risk of many more different versions of this tool (MEWS) in simultaneous use in hospitals within countries and around the world becomes ever greater.
Although Isemede and Unuigbe [13] reported in 2019 that MEWS was rare in Nigerian obstetric practice, that study also showed a couple of other pertinent points: first, that a quarter of the respondents reported a locally designed MEWS-like system (physician-specific calling systems), indicating a hunger for a standard MEWS, and secondly, that 96% of respondents indicated a desire for the introduction and implementation of the MEWS in their hospitals. Against this background, the present study was undertaken.
Aims
We set out to explore the development of a Nigerian National MEWS version that would make an acceptable fit for the Nigerian emergency obstetric care environment to be used by all clinicians involved in acute obstetric care in Nigeria.
Methods
Following planned training on MEWS (the second phase of the Patient Safety Africa three-phase project for the introduction of MEWS for routine use in acute obstetric practice in Nigeria), using the UK MOEWS, at seven regional medical centres and at two national conferences in Nigeria in 2018 and 2019, experts (senior registrars and consultants in obstetrics, midwifery staff of sister level and above, and senior registrars and consultants in obstetric anaesthesia care in Nigeria) at these meetings were surveyed on the suitability of the UK MOEWS template, the need for modifications and the choice of necessary modifications to render it suitable for the Nigerian acute obstetric care environment, using SurveyMonkey and paper-based questionnaires.
A switch to paper-based questionnaires was made following poor response rates from the online surveys after visits to the first three centres. Online reminders were sent to the experts from the first three centres, while paper-based questionnaires were used in the last four centres and at the national conferences.
The national spread of the centres, the inclusion of all professional groups (Obstetricians, Midwives and Obstetric anaesthetists) in maternity care and the fact that the national conferences had participants drawn from all over the country made this sample a good representation of the national picture.
The response rate turned out better than initially feared, because responses to the online reminders were good and better response rates were seen with the paper-based system used in the last four centres and at the national conferences.
Results 1
A total of two hundred requests were sent to the experts, one hundred and forty one responses were received (70.5%) this was made up of 68 responses out of 100 requests to obstetricians, 21 responses out of 30 requests sent to Midwives and 52 responses out of 70 requests sent to Obstetric anaesthetists.
Results 2
One hundred and twenty five (88.6%) of experts opted for modifications of the template while sixteen (11.4%) favoured retaining the baseline (MOEWS/UK) template without modifications.
Results 3
Out of the hundred and twenty five in favour of modification of the UK MEWS template, one hundred and three (82.4%) opted for the addition of pre-eclampsia related parameter.
Discussions
Acute illnesses in the obstetric patient need to be recognized early and adequate monitoring instituted to prevent physiologic deterioration and a cascade of events leading to organ failure, multi-organ failure and cardiorespiratory arrest. Routine patient observations, which are only periodic (done at fixed intervals, or sometimes not done at all), are inadequate for acutely deteriorating obstetric emergencies, where maternal collapse and death can occur precipitously.
Most pregnancies and labours tend to be normal physiological events, but potential risks of complications and deterioration exist in each and every case, and because not all deteriorations can be predicted, it is necessary to monitor these women very closely. This involves recording and acting on vital signs to ensure early detection of actual or potential deterioration of the patient's physiological state in order to reduce morbidities and mortalities; the maternal early warning scoring system encapsulates these actions and benefits [15,16].
MEWS utilises the vital signs in common use in the ABCDE approach to emergency care. These vital signs (monitored in the MEWS) are as follows: respiratory rate, heart rate, blood pressure (systolic and diastolic), temperature, oxygen saturations, and level of consciousness (using AVPU: (A)lert, response to (V)oice, response to (P)ain and (U)nresponsive). Every recorded vital sign generates a score of 0-3, depending on the size of the deviation from normal: 0 for parameters within normal physiological limits and a score of 3 for the most severe deviation; a total track and trigger score is generated by adding all the scores generated from the vital signs.
A graded response (escalation) strategy for patients identified to be at risk of clinical deterioration is used. Low score group: increased frequency of observations, document/report; medium score group: urgent call to local team leader; high score group: immediate response and emergency call to specialist team.
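As a purely illustrative sketch of how such a track-and-trigger score is computed and escalated, the snippet below scores a set of vital signs and maps the total to an escalation level. The threshold bands and escalation cut-offs used here are invented placeholders; they are not the UK MOEWS chart nor any proposed Nigerian chart.

```python
# Illustrative track-and-trigger scoring (placeholder thresholds only; these
# bands are NOT the UK MOEWS chart nor any proposed national chart).
PLACEHOLDER_BANDS = {
    # parameter: list of (low, high, score) bands; first match wins
    "respiratory_rate": [(10, 20, 0), (21, 30, 2), (0, 9, 3), (31, 99, 3)],
    "heart_rate":       [(50, 100, 0), (101, 120, 1), (121, 140, 2), (0, 49, 3), (141, 250, 3)],
    "systolic_bp":      [(100, 140, 0), (141, 160, 1), (90, 99, 1), (161, 180, 2), (0, 89, 3), (181, 300, 3)],
    "temperature":      [(36.0, 37.4, 0), (37.5, 38.4, 1), (35.0, 35.9, 1), (38.5, 42.0, 2), (30.0, 34.9, 3)],
}

def score_observation(obs):
    """Sum the per-parameter scores for one set of vital signs."""
    total = 0
    for name, value in obs.items():
        for low, high, score in PLACEHOLDER_BANDS[name]:
            if low <= value <= high:
                total += score
                break
    return total

def escalation(total):
    """Map the aggregate score to a graded response (illustrative cut-offs)."""
    if total >= 6:
        return "high: immediate emergency call to specialist team"
    if total >= 4:
        return "medium: urgent call to local team leader"
    if total >= 1:
        return "low: increase observation frequency and document"
    return "routine monitoring"

obs = {"respiratory_rate": 24, "heart_rate": 125, "systolic_bp": 95, "temperature": 38.6}
total = score_observation(obs)
print(total, "->", escalation(total))
```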
The MEWS is useful in providing visual aids of trends, revealing "hidden" trends, facilitating shared understanding, and providing legitimacy for escalation that entails timely recognition of deterioration, good communication between teams, expedited treatment, and/or referral [17,18] .
Sixteen experts out of one hundred and forty one respondents (11.4%) were happy for the introduction of the UK MOEWS template without modifications, one hundred and twenty five experts (88.6%), opted for template modification to make it more suitable for the Nigerian acute obstetric care environment. This majority in favour of modification, may be related to the high drive in this study population to achieve reductions in the country's very high maternal mortality rates through preventing deterioration, reducing delays and getting acutely ill obstetric patients in the country to points of definitive care in a more expeditious manner which a more robust MEWS version may offer [19,20] .
Hypertension associated with unremitting headache was the favoured additional parameter by 103 out of the 125 experts (82.4%), others are: hourly urine output measurements, 3 (2.4%), Mean Arterial Blood Pressure recordings, 6 (4.8%) and bedside glucose measurement 13 (10.4%). The choice of hypertension associated with unremitting headache may be due to the fact that hypertensive diseases in pregnancy has become the highest cause of maternal mortalities in Nigeria and several parts of Sub-Saharan Africa [21] , also, the choice of bedside glucose measurements as an additional parameter may be a pointer to common clinical experiences of fitting obstetric patients in this study population.
Mean Arterial blood Pressure (MAP) and hourly urine output recordings were less favoured as additional parameters by the experts, this may be due to experts desiring a version of MEWS that would be suitable for both the secondary care as well as the rural primary care centres, a version that would improve communications and expedite transfer rather than hinder them if the required processes are complex or cumbersome [22,23] .
In implementing the MEWS in Nigeria, caution must be exercised to ensure that scores and numbers do not replace comprehensive patient assessment, as overdependence on scores by recorders, without due regard to clinical judgement, has also been shown to be a risk in this process [24]. Likewise, the early warning system is not a replacement for adequate staffing; in Sub-Saharan Africa, where the challenge of skilled birth attendants is acute, this temptation must be resisted. The MEWS is also not intended for chronic patients or patients on an end-of-life pathway.
The similarities in acute obstetric care in most of Sub-Saharan Africa [21] may make this proposed Nigerian national MEWS version suitable for use in several of these countries until specific country level studies are available to underpin individual national MEWS.
MEWS Implementation research -protocol development, training, systematic pilots in both secondary and primary healthcare settings and audits in a collaborative approach for the introduction and implementation of a national Maternal Early Warning Scores (MEWS) in Nigeria is keenly advocated.
Conflicts of interest
There are no conflicts of interest. | 2020-04-27T20:43:14.045Z | 2020-02-19T00:00:00.000 | {
"year": 2020,
"sha1": "ae50ee56bdcde3d10ccde73511a734284e2d1934",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijirms.in/index.php/ijirms/article/download/841/612",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "882e343866eb38d0d9673a70e0c0d091fc014e08",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
466449 | pes2o/s2orc | v3-fos-license | Effect of epilepsy on autism symptoms in Angelman syndrome
Background Autism spectrum disorder and epilepsy often co-occur; however, the extent to which the association between autism symptoms and epilepsy is due to shared aetiology or to the direct effects of seizures is a topic of ongoing debate. Angelman syndrome (AS) is presented as a suitable disease model to explore this association. Methods Data from medical records and questionnaires were used to examine the association between age of epilepsy onset, autism symptoms, genetic aberration and communication level. Forty-eight participants had genetically verified AS (median age 14.5 years; range 1–57 years). A measure of autism symptoms (the Social Communication Questionnaire; SCQ) was completed for 38 individuals aged ≥ 4 years. Genetic cause was subgrouped into deletion and other genetic aberrations of the 15q11-q13 area. The number of signs used to communicate (< 20 sign and ≥ 20 signs) was used as a measure of nonverbal communication. Results Mean age of epilepsy onset was 3.0 years (range 3 months–7.8 years). Mean SCQ score for individuals without epilepsy was 13.6 (SD = 6.7) and with epilepsy 17.0 (SD = 5.6; p = 0.17); 58% used fewer than 20 signs to communicate. There were no age differences between groups according to presence of epilepsy, level of nonverbal communication or type of genetic aberration. SCQ scores were higher in individuals with the deletion than in those with other genetic aberrations (18.7 vs 10.8 p = 0.008) and higher in the group who used < 20 signs to communicate (19.4 vs 14.1 p = 0.007). Age of epilepsy onset was correlated with SCQ (r = − 0.61, p < 0.001). Multiple regression showed that age of seizure onset was significantly related to SCQ score (β = − 0.90; p = 0.006), even when the type of genetic abnormality was controlled (R2 = 0.53; F = 10.7; p = 0.001). Conclusions The study provides support for the notion that seizures themselves contribute more to autism symptoms than expected from the underlying genetic pathology alone. The study demonstrates how a rare genetic syndrome such as Angelman syndrome may be used to study the relation between epilepsy and autism symptomatology.
Background
Angelman syndrome (AS) is a neurodevelopmental disorder caused by an absent or non-functioning maternal allele of chromosome 15q11-q13 [1]. The typical AS phenotype is characterized by intellectual disability (ID), lack of speech, hyperactivity, ataxic gait, microcephaly, sleep disturbances, frequent laughter/smiling and an apparently happy demeanour [1][2][3][4]. ID ranges from moderate to profound, with most individuals functioning in the severe to profound range [5,6]. Epilepsy occurs in 80% or more of cases [2,7], usually involving multiple seizure types and starting in early childhood [7,8]. High rates of autistic symptoms are also reported [9][10][11], with prevalence estimates of autism spectrum disorder (ASD) ranging from 24 to 81% [6,10]. AS can be due to UBE3A mutations, uniparental disomy and imprinting defects [1,12], but deletions are the predominant cause and are found in 68-75% of patients. Deletions are also associated with more severe AS-phenotype, and codeletion of GABA A -receptor genes (GABRB3, GABRA5 and GABRG3) located adjacent to UBE3A gene is suggested as a possible explanation for this [1]. Dysfunction of GABRB3 is highly associated with both epilepsy and autism symptoms [13,14].
A strong association between autism symptoms, epilepsy and ID has been found in a number of other genetic syndromes, such as fragile X and tuberous sclerosis complex (TSC), as well as in AS [6,10]. It is evident, too, that the negative effect of seizures is particularly strong during infancy and early childhood [15][16][17][18]. Thus, onset of seizures during the first year of life is associated with increased prevalence and severity of ID and ASD and increased prevalence of brain abnormalities [19,20]. However, there is a continuing debate [21][22][23][24] as to whether autism symptoms, epilepsy and ID are independent comorbidities [15,16,21,[25][26][27], whether they are all outcomes of the same underlying pathophysiological/genetic mechanisms [17,21,25,28], or whether the epilepsy itself contributes to more severe cognitive and behavioural impairments than might be expected from the underlying pathology alone [15,17,29,30], i.e. a so-called encephalopathic effect [30].
There are several reasons why AS offers a suitable disease model to investigate the association between epilepsy, ID and autism symptoms. Firstly, the rate of epilepsy in AS (> 80%) is as high as or higher than other genetic disorders in which epilepsy and autism commonly co-occur (e.g. TSC [80-90%]; fragile X syndrome [10-20%]) [29,31,32]. Secondly, epilepsy in AS tends to start in very early childhood. Seizures are also often treatment-resistant and refractory epilepsy has been shown to be an important predictor of autism symptoms [33]. Thirdly, unlike genetic conditions such as TSC, in which the numbers and location of tubers are associated with autism symptoms [17,34], there are no specific structural brain abnormalities in AS that are known to affect the phenotype. Fourthly, knowledge of the specific genetic defects that cause AS makes it possible to evaluate the degree to which the association between epilepsy and autism symptoms is a result of the underlying genetic abnormality and to assess the independent contribution of seizures on level of autism symptoms.
The aims of the current study were to describe epilepsy characteristics and then investigate the relationship between epilepsy, autism symptoms, communication level and genetic cause in individuals with AS. Based on previous research on other populations with childhood epilepsy including TSC [18,33,[35][36][37], we hypothesized that age of onset of epilepsy would be related to the number of autism symptoms in AS independent of the effect of the specific genetic abnormality.
Methods
The study was approved by the regional ethics committee in Norway (REK 2014/1880).
Recruitment procedures
From the records of the Frambu Resource Centre for Rare Disorders in Norway and the Norwegian Angelman Association, 115 individuals with AS were identified. Letters were sent to the parents/guardians of these individuals, and they were asked to complete two questionnaires: the Social Communication Questionnaire (SCQ), which measures autism symptoms [38], and a study-specific questionnaire assessing epilepsy, medication and developmental parameters. Written informed consent was given by all parents/guardians allowing the researchers access to medical records from all hospitals in Norway (Fig. 1).
Clinical information on epilepsy and genetic abnormality
Participants' medical records were used to collect information regarding epilepsy and the nature of the genetic abnormality. Information on age of epilepsy onset, type of seizure and treatment with anti-epileptic drugs was recorded when available. Medical records were not comprehensive for all individuals, and formal seizure classification was not always performed.
Genetic data were also variable. When information was available, the genetic abnormality was dichotomised into 'deletion' or 'other' (i.e. uniparental disomy, imprinting defects and point mutations).
Autism symptoms
The lifetime version of SCQ was used to assess the number of autism symptoms [38]. The SCQ contains 40 items scored 0 or 1 and was designed to screen for a possible diagnosis of autism in individuals aged 4 years and older and with a mental age above 2 years [38]. It has also frequently been used to measure autistic-type symptoms in individuals with genetic syndromes including those with AS [9,11]. We did not classify participants as meeting/not meeting the suggested cut-off scores for autism or ASD (≥ 22 and ≥ 15, respectively [38]) since the validity of these criteria has not been established for individuals with genetic disorders associated with severe ID. Nevertheless, SCQ has often been used as the screening tool in samples with low IQ [39,40].
Communication level
Information about level of development was particularly variable and often very limited. Although many parents reported that they had previously been told their child had severe to profound intellectual disability (in 7 cases, the description was of 'moderate' disability), formal test results were rarely recorded, and hence, the validity of these categories was unknown. Although there were no adequate data on IQ/developmental level, we did have data on communication level. Signing was the major mean of communication for most of the participants; the majority had no use of words and no one used more than 20 words. Categorical ratings of 'use of signs' (< 20 and 20-100 and > 100) were used to divide individuals into two groups; those using fewer than 20 signs to communicate and those with more than 20 signs.
Inclusion criteria
For the descriptive part of the study ('Epilepsy characteristics'), individuals were included if their parents/guardians gave their consent to participation/access to medical records and if their son/daughter had a genetically verified diagnosis of AS. For the second part of the study ('Relation between epilepsy and autism symptoms, nonverbal communication level and genetic aberration'), individuals were required to be at least 4 years of age (i.e. minimum age for the SCQ).
Parents/guardians of 56 out of the 115 individuals identified from the records (49%) consented to participate; 48 of these individuals (age range 1-57 years; median 14 years 6 months) had a genetically verified AS diagnosis. At the time of questionnaire completion (see Fig. 1), medical records confirmed that 34 individuals had epilepsy and 11 individuals did not. Three boys (aged 1, 1, and 4 years, respectively) subsequently developed seizures; hence, the 4-year-old was included in the no-epilepsy group in the part 2 of the study. SCQ questionnaires were completed for 38 of 40 individuals aged 4 years or older (SCQ was not completed for two participants aged 57 and 40 years). See Table 1 for participants' characteristics.
Statistical analysis
Associations between quantitative measures were analyzed by parametric statistics in SPSS (t test, Pearson's r). Due to small sample size, Mann-Whitney U test was used when comparing SCQ in subgroups with/ without epilepsy and when comparing SCQ and age of epilepsy onset in subgroups with/without deletion. Fisher's exact test was used for categorical data. Due to small and unequal sample sizes, Hedges' g was used for effect sizes. Normality of residuals was checked using visual inspection of P-P plots. Multiple regression analysis was conducted to assess the impact of 'age at epilepsy onset'and 'type of genetic aberration' on SCQ scores. Due to the combination of dichotomous and continuous covariates, we report the standardized coefficients (β). To correct for multiple comparisons, a significance level of p ≤ 0.01 was chosen; Bonferroni 'rule of thumb' was used to determine appropriate p level (p = 0.05/5 = 0.01).
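As a sketch of what the regression step looks like in practice, the snippet below fits an ordinary least squares model of SCQ score on age at epilepsy onset and genetic subtype using synthetic placeholder data; the numbers are not the study data, and standardizing the continuous variables first yields coefficients comparable to the standardized betas reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative analysis skeleton only: synthetic placeholder data, not the
# study data.  Standardizing SCQ and onset age makes the fitted coefficients
# comparable to standardized betas.
rng = np.random.default_rng(0)
n = 31
onset_years = rng.uniform(0.25, 8.0, n)      # age at epilepsy onset (years)
deletion = rng.integers(0, 2, n)             # 1 = 15q11-q13 deletion, 0 = other
scq = 20 - 1.5 * onset_years + 3 * deletion + rng.normal(0, 3, n)

df = pd.DataFrame({"scq": scq, "onset": onset_years, "deletion": deletion})
df["scq_z"] = (df.scq - df.scq.mean()) / df.scq.std()
df["onset_z"] = (df.onset - df.onset.mean()) / df.onset.std()

res = smf.ols("scq_z ~ onset_z + deletion", data=df).fit()
print(res.params)
print(res.pvalues)
print(f"R^2 = {res.rsquared:.2f}")
```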
Part 1: epilepsy characteristics
Age of first seizure ranged from 3 months to 7 years 10 months (mean 3 years 0 months, SD 2 years 2 months). Focal seizures were seen in four individuals. Sixteen individuals had their first seizure during a febrile episode, and 10 participants were reported to have epileptic seizures that were aggravated by fever. EEGs were recorded repeatedly in several participants, and findings were typical of those reported in AS [2]. When EEGs were recorded prior to first seizure, delta waves but no epileptiform activity were often reported. More epileptiform discharges in EEGs were recorded during periods of seizure aggravation. Seizures were commonly reported to be resistant to anti-epileptic drugs and drug resistance was particularly marked before 6 years of age, and 21 individuals had received benzodiazepine as emergency treatment. Three individuals had been treated with only one anti-epileptic drug, and all others had tried two or more anti-epileptic drugs. Valproate was the most frequently prescribed anti-epileptic drug (31 participants), followed by nitrazepam (18) and clonazepam (16).
Part 2: the relation between epilepsy and autism symptoms, nonverbal communication level and genetic aberration
Mean SCQ was 16.3 (SD = 5.9 range: 0-27). SCQ scores were higher in individuals with epilepsy (n = 31) than in those without (n = 7), but the difference was not significant (see Table 2). SCQ and age were not correlated (p = 0.12). Level of nonverbal communication did not differ between individuals with and without epilepsy; 19 of 33 (58%) with epilepsy and 4 of 7 (57%) (exact p = 1.000) without epilepsy used fewer than 20 signs to communicate. Individuals with the deletion were more likely to be in the group using < 20 signs to communicate than individuals with other genetic aberrations (exact p = 0.022). Within the epilepsy group, age of epilepsy onset was lower among individuals using < 20 signs to communicate. Individuals with the deletion had significantly higher SCQ scores and lower age at epilepsy onset than individuals with other genetic aberrations. There were no differences in age between groups (see Table 2 for details).
Age at epilepsy onset was highly correlated with SCQ score (r = − 0.61, p = 0.0004). A linear regression was conducted with SCQ as the dependent variable and age at seizure onset and type of genetic abnormality as the covariates (forced entry). Age at onset of seizures had an independent contribution when entering the type of genetic aberration as a covariate. The type of genetic aberration did not have an independent contribution in this model (see Table 3 and Fig. 2). As a supplementary analysis, we included level of nonverbal communication as a third covariate. Age of epilepsy onset was significant also in this model (β = − 0.81, p = 0.007).
Discussion
This study explored the relationship between age of epilepsy onset, autism symptomatology, type of genetic aberration and nonverbal communication level in a Norwegian sample of individuals with AS. Among the 56 individuals with AS identified from the available databases, 48 (86%) had genetically verified AS. This is in line with other reports noting that no genetic abnormality can be identified in 10-15% of individuals with AS [4]. Other clinical findings were similar to those of previous studies of AS. Thus, deletions were the most common genetic cause identified [1,4]. With regard to epilepsy, the prevalence in this study was 77%, somewhat lower than the rates of ≥ 80% commonly reported [4,7,8,41]. However, our sample included several very young participants who may not yet have had their first seizure. We also excluded individuals in whom the cause of AS was unknown, and there is some indication that this may have contributed to the lower rate [7]. The epilepsy characteristics (early-onset epilepsy, multiple seizure types, a tendency to have seizures during febrile episodes and commonly treatment-resistant seizures, particularly in early childhood) are also in line with the findings reported by others [2,7,8,41,42], and the use of anti-epileptic drugs is comparable to other studies [7,8,41].
The main focus of the study was the association between age of epilepsy onset and extent of autism symptomatology when type of genetic abnormality was controlled for. Our findings from this study of individuals with AS provide support for the notion that seizures themselves contribute more to autism symptoms than might be expected from the underlying pathology alone [15][16][17]21]. As anticipated, individuals with a deletion of 15q11-q13 had substantially more autism symptoms than individuals with other genetic aberrations (g = 1.48). However, when entered into a regression model with epilepsy onset, genetic aberration made no significant contribution to the number of autism symptoms reported. Although the lack of an independent effect of type of genetic aberration is likely due to the low number of causes other than deletion, it should be noted that the slope of the regression lines is similar for both genetic subgroups, thus supporting the importance of age at seizure onset across the sample. These findings from AS parallel evidence from studies in other rare disorders such as TSC; although both early seizures and encephalopathy are highly associated with type of genetic abnormality, early seizures may contribute to a worsening of developmental outcome [17,43]. Similarly, from fragile X syndrome, research indicates that males with the FMR1 premutation are more likely to have ASD and ID if seizures occur in childhood [29,44].
Although individuals with epilepsy had more autism symptoms than those without epilepsy, and despite a moderate to large effect size, this difference was not significant [15]. This may be due to the rarity of non-epilepsy cases among individuals with AS and hence the very small size of the no-epilepsy group. However, the findings also point towards the importance of viewing epilepsy as a spectrum disorder rather than a dichotomy [15]. Hence, the comorbidity between autism symptoms and epilepsy may be related both to the underlying pathology and to the effect of seizures. The high risk of ASD in populations with early-onset epilepsy has been used to support the encephalopathy hypothesis, i.e. that seizures may cause ASD [16,25]. Others have argued against this because the relationship is bi-directional: individuals with ASD are at increased risk of future epilepsy, and seizures may occur in adolescence or adulthood [21,22,45,46]. This study highlights the importance of considering the additive effects of the underlying genetic aetiology and of seizures in contributing to autism symptoms in AS, which may be relevant also for other conditions [15,29]. The encephalopathic effect may be greater when seizures start early. Early-life seizures may result in molecular changes which impact neural network structure, and the hippocampal region may be of particular importance. Molecular changes may also influence the expression of genes involved in autism symptoms and genetic syndromes, such as GABRB3, FMR1, TSC1 and TSC2 [16,29]. Moreover, research suggests that effects of seizures on GABA-A receptor expression are age-dependent, a finding that further supports the notion that early seizures are particularly harmful [16]. There was no difference in the level of nonverbal communication between the epilepsy group and the no-epilepsy group. Age at first seizure, however, was associated with nonverbal communication (g = 0.56), and individuals with the lowest level of nonverbal communication had earlier seizure onset than those who used more signs to communicate. A number of other studies have found that earlier age of seizure onset is associated with poorer cognitive outcome [18, 33, 35-37, 47, 48]. Our study did not include a measure of development, only a measure of nonverbal communication. However, supplemental analysis showed that age at epilepsy onset remained significant also when nonverbal communication was entered as a covariate. This suggests that the number of autism symptoms was not explained only by the level of nonverbal communication.
Although the findings of this exploratory study have potentially important implications for understanding the complex links between autism symptoms and epilepsy, there are a number of limitations that must be taken into account in the interpretation of the data. Firstly, the sample size was small and the age range of participants was very wide, ranging from infancy to adulthood. In addition, we did not have data on the level of ID; only an estimate of nonverbal communication was available. There were also few individuals with a genetic cause other than the 15q11 deletion, and we lacked data on the size of deletions. Furthermore, information from medical records was often incomplete, and formal seizure classification, except for tonic-clonic seizures, was rarely performed. Hence, some individuals may have had more types and a higher frequency of seizures than reported (particularly seizures of short duration or of lower severity, such as absences and myoclonic seizures). Finally, there was no clinical assessment of autism, and rather than a categorical distinction between ASD/non-ASD, we focused on the frequency of autism symptoms as measured by the SCQ. While this avoided the problems of misdiagnosing ASD in a population with severe developmental delay, it is well established that the number of autism symptoms is highly related to severity of ID [11]. Thus, high rates of autism symptoms were to be expected in this sample of individuals with AS [9,10]. The severity of ID in AS is the main limitation when using this disorder as a disease model for studying the relation between autism symptoms and epilepsy.
It is clear that information from a larger sample of individuals with AS, with a larger range of genetic causes other than deletions, and detailed information on developmental level is needed to increase confidence in the current findings. More details of the genetic aberration, such as size and exact break points of the deletions, are also needed. Finally, further studies in this area should investigate which autism symptoms are particularly vulnerable to early seizures and which are less affected. Such knowledge may be of relevance for better understanding of the biology of ASD.
Conclusions
This study provides support for the notion that, in individuals with AS, seizures themselves contribute more to autism symptoms than expected from the underlying genetic pathology. This study demonstrates how a rare condition may illuminate core issues in research on developmental disorders. Individuals with Angelman syndrome show limited variation in genetic aetiology, and the condition is therefore a suitable one in which to investigate the relation between epilepsy and autism symptoms. | 2018-01-10T00:44:06.521Z | 2018-01-08T00:00:00.000 | {
"year": 2018,
"sha1": "f753006a5ea6e35516019b628f65739292d4276f",
"oa_license": "CCBY",
"oa_url": "https://molecularautism.biomedcentral.com/track/pdf/10.1186/s13229-017-0185-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f753006a5ea6e35516019b628f65739292d4276f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235401267 | pes2o/s2orc | v3-fos-license | Atypical Power Doppler Ultrasound Findings in Juvenile Idiopathic Inflammatory Myositis (JIIM) Flare
Juvenile idiopathic inflammatory myositis (JIIM) is a multisystem inflammatory disease that impacts the muscles, skin, and blood vessels. Gray-scale power Doppler ultrasound is a technique that can be used to assist the diagnosis of JIIM and myositis in general. We report a case of an atypical symptomatic JIIM myositis flare that shows increased muscle echogenicity without the corresponding increase (complete absence) of Doppler flow.
Introduction
Juvenile idiopathic inflammatory myositis (JIIM) is a multisystem inflammatory disease that impacts the muscles, skin, and blood vessels [1]. Major symptoms include rashes, pain, and weakness of the arms and legs causing difficulty in walking, climbing stairs, and lifting objects above the head [2]. Diagnosis of JIIM and accompanying disease flares are traditionally based on the Bohan and Peter criteria [2]. Definite juvenile dermatomyositis consists of classic skin involvement and at least three of the following: 1) proximal muscle weakness, 2) elevation of a muscle enzyme(s), 3) myopathic changes on electromyography, and 4) abnormal muscle biopsy suggestive of inflammatory myopathy [3,4]. Probable juvenile dermatomyositis (JDM) is defined as patients who have the characteristic rash and fulfil only two of the above criteria. An expanded definition was proposed in 2006 using an international consensus survey [5]. These new criteria include: 1) typical findings on muscle magnetic resonance imaging (MRI), 2) nailfold capillaroscopy abnormalities, 3) calcinosis, and 4) dysphonia [5]. Imaging techniques, such as MRI, are often heavily used in cases where the diagnosis is equivocal or where reporting of symptomology may be unreliable, such as when treating young children who cannot express or explain symptoms; however, MRI remains expensive and cumbersome and has a high false-negative rate [6,7]. Gray-scale ultrasound with power Doppler is a modality that has been postulated to be a potential alternative for diagnosis, one that is not only less expensive than MRI but also highly sensitive in detecting myositis patterns found in JIIM [8].
Case Presentation
A 17-year-old male presented with a two-year history of skin rashes over his hand and lower extremities weakness. Physical examination revealed Gottron changes on his right third metacarpophalangeal joint and proximal interphalangeal joints, right fourth proximal interphalangeal joints, and left fifth metacarpophalangeal joint and proximal interphalangeal joints. A patch of dry skin with mild erythema was discovered on the left ankle and on the anterior aspect of the lower leg. Laboratory workup showed elevated muscle enzymes, including elevated aspartate aminotransferase (186 IU/L), alanine aminotransferase (341 IU/L), aldolase (17.8 U/L), LDH (350 U/L), and CK (1,092 U/L). Based on modified Bohan and Peter criteria, he was diagnosed with dermatomyositis [5]. High resolution and frequency (Res) gray-scale static and cine imaging with power Doppler, along with an MRI, were performed to assist with the diagnostic workup. The MRI showed findings congruent with a typical flare of JIIM. Lower extremity ultrasound images with corresponding ultrasound parameters depicted below, however, showed increased muscle echogenicity consistent with dermatomyositis, but with no detectable Doppler flow (Figure 1). Typically, increased muscle echogenicity in inflammatory processes coincides with mild to moderate increases in Doppler flow unless a pathological process, such as compartment syndrome, is occurring [8]. In this case, compartment syndrome was clinically absent. To our knowledge, the pairing of this finding of increased muscle echogenicity with complete absence of Doppler flow has not been documented in a patient with JIIM flare. An example showing the typical power Doppler ultrasound findings in JIIM, along with corresponding parameters, has been provided ( Figure 2).
Discussion
Accurate discrimination of myositis is important to both the research community and the practicing clinician. Consensus on the utility of gray-scale power Doppler ultrasound in the workup of JIIM and other inflammatory myopathies remains limited but evolving. Early clinical studies indicated that power Doppler ultrasound, in conjunction with gray-scale ultrasound, could reasonably be used in the diagnosis of myositis-related diseases, such as JIIM [8]. Specifically, Meng et al. found a statistically significant difference in peak vascularity when comparing inflamed muscle to non-inflamed muscle [9]. Later studies, however, have focused on attempts to standardize reporting of power Doppler ultrasound findings in JIIM and other myositis-related diseases [8]. A study conducted by Weber et al. was able to demonstrate the increased perfusion of muscle during symptomatic myositis flare on power Doppler ultrasound by analyzing the replenishment kinetics of microbubbles, perfusion-related parameters, and flow volume and velocity [10]. It was noted that contrast-enhanced blood flow had the greatest sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the diagnosis of dermatomyositis and polymyositis [10]. There have been no recent trials that examine the true sensitivity and specificity of gray-scale power Doppler ultrasound in the diagnosis of JIIM or related flares.
Conclusions
Based on the symptomatology and MRI findings, our patient was experiencing a JIIM flare; however, the power Doppler ultrasound findings were more suggestive of compartment syndrome, a pathological condition that we know was not present in this patient. Our case provides an example of how ultrasound images with Doppler can be misleading in JIIM and may not fully detect a flareup. More research is needed to understand the diagnostic potential of gray-scale ultrasound with power Doppler as well as its relative sensitivity and specificity in the diagnosis of JIIM and other related diseases.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-06-12T05:19:05.859Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "c0ad404c8e7a489416abe9e77bbbdf9c19c4d029",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/57888-atypical-power-doppler-ultrasound-findings-in-juvenile-idiopathic-inflammatory-myositis-jiim-flare.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0ad404c8e7a489416abe9e77bbbdf9c19c4d029",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
223609798 | pes2o/s2orc | v3-fos-license | THE INTERTEXTUALITY IN THE LITERARY DISCOURSE
The article presents a comprehensive study of the notion of intertextuality, its usage and its role in the study of literary discourse. Functions and types of intertextuality in literary discourse are analyzed in this article.
Introduction
Richard Nordquist describes intertextuality as the interdependent ways in which texts relate to one another (as well as to the culture in general) to produce meaning. A central idea of modern literary and cultural theory, intertextuality has its fundament in 20th-century linguistics, specifically in the work of the Swiss linguist Ferdinand de Saussure. The term itself was offered by the Bulgarian-French philosopher and psychoanalyst Julia Kristeva in the 1960s.
The term "intertextuality" is borrowed from the Latin intertexto, meaning to intermingle while weaving. In scientific researches such as "Word, Dialogue, and Novel," Kristeva broke with traditional notions of the author's influences and the text's references, positing that all signifying systems, from table settings to poems, are constituted by the manner in which they transform earlier signifying systems.
The notion "intertextuality" has been derived and altered many times since it was offered by the poststructuralist Julia Kristeva in 1966. As philosopher William Irwin wrote, the term "has come to have almost as many meanings as users, from those faithful to Kristeva's original vision to those who simply use it as a stylish way of talking about allusion and influence".
Kristeva's understanding of "intertextuality" represents an attempt to build on Ferdinand de Saussure's semiotics, one of the most significant studies of how signs derive their meaning within the structure of a text. For Kristeva, the concept of intertextuality replaces the concept of intersubjectivity once we comprehend that signification is not transferred directly from writer to reader but is instead filtered through, or mediated by, "codes" imparted to the writer and reader by other texts. For instance, when we read James Joyce's Ulysses we understand it as a modernist literary investigation, or as a response to the epic heritage, or as part of some other discourse, or as part of all of these dialogues at once. This intertextual view of literature, as represented by Roland Barthes, reinforces the notion that the meaning of a text does not reside in the text itself, but is produced by the reader in relation not only to the text in question, but also to the complex network of texts invoked in the reading process.
More recent post-structuralist theory, such as that formulated in Daniela Caselli's Beckett's Dantes: Intertextuality in the Fiction and Criticism, reexamines "intertextuality" as a production within texts, rather than as a series of relationships between different texts. Some postmodern theorists like to talk about the relationship between "intertextuality" and "hypertextuality"; intertextuality makes each text a "living hell of hell on earth" and part of a larger mosaic of texts, just as each hypertext can be a web of links and part of the whole World-Wide Web. Indeed, the World-Wide Web has been theorized as a unique realm of reciprocal intertextuality, in which no particular text can claim centrality, yet the Web text eventually produces an image of a community: the group of people who write and read the text using specific discursive strategies. One can also make distinctions between the notions of "intertext", "hypertext" and "supertext", and a single work may function as all three at once. As a hypertext, it consists of links to different articles within itself and also of every individual trajectory of reading it. As a supertext, it combines male and female versions of itself, as well as three mini-dictionaries in each of the versions.
In "Merriam-Webster Dictionary" intertextuality is defined as "the complex interrelationship between a text and other texts taken as basic to the creation or interpretation of the text".
In "Nation Mater Encyclopaedia" intertextuality is defined as the shaping of texts' meanings by other texts. It can refer to an author's borrowing and transformation of a prior text or to a reader's referencing of one text in reading another.
Intertextuality means the shaping of texts' meanings by other texts, or the study of the way in which the text of one poem may relate to the text of another poem. It can also be defined as the relationship between texts.
I. Forms of intertextuality and linguistic means of its fulfilment
The semiotic and synergetic interpretation of discourse provides an integration of achievements in linguistics, cognitology, semiotics and synergetics. It offers opportunities for a comprehensive review of how a literary work functions in the semiosphere (the semiosphere being the set of sign systems, including text, language and culture in general). This approach is universal because it is appropriate for the description of different types of discourse. The approach does not contradict conventional theories of discourse analysis; it is based on generally accepted linguistic statements and supplements modern scientific theories and research guidelines. Besides, the approach is dynamic and open for further development.
In this research, discourse, as a constituent part of the semiosphere, is considered to be a developing synergetic system that has the following basic principles of organization: hierarchical structure, instability, nonlinear nature, emergence, symmetric/asymmetric property and openness. In terms of its hierarchical structure, the semiosphere consists of micro- (intertext), macro- (discourse) and mega- (interdiscourse) levels: the interdiscursive semiosphere is formed by a diverse set of discourses, each of which consists of many intertexts.
The system "intertextdiscourseinterdiscourse" is characterized by instability due to changes in the intertextual inclusions that lead to the discourse transformation which, in turn, affects interdiscourse of the semiosphere as a whole. The property of openness allows the system to evolve from simple to complex state because each hierarchical level acquires an opportunity to develop and become complicated. Meanwhile discourse is characterized by the emergence that provides the appearance of spontaneously occurring properties that are nonrelevant for certain hierarchical levels (intertext, discourse or interdiscourse), but peculiar to the system as a holistic functional formation. Due to its inherent non-linearity and instability textual environment is considered as unpredictable, but it is always ready to create new semantic variations. Dominant meaning synchronizes symmetric (which are in dynamic equilibrium) and asymmetric (which are in the dynamic disequilibrium) system elements; it is the creative attractor that organizes discourse.
In linguistics, two forms of intertextuality are distinguished: iterability and presupposition. Iterability refers to the "repeatability" of certain textual fragments, to citation in its broadest sense, including not only explicit allusions, references, and quotations within a discourse, but also unannounced sources and influences, clichés, phrases in the air, and traditions. That is to say, every discourse is composed of "traces," pieces of other texts that help constitute its meaning. Presupposition refers to assumptions a text makes about its referent, its readers, and its context: to portions of the text which are read, but which are not explicitly there. "Once upon a time" is a trace rich in rhetorical presupposition, signaling to even the youngest reader the opening of a fictional narrative. Texts not only refer to but in fact contain other texts.
R. S. Miola separated seven types of intertextuality: 1) Revision. This type of intertextuality features a close relationship between anterior and posterior texts, wherein the latter takes identity from the former, even as it departs from it. The process occurs under the guiding and explicitly comparative eye of the revising author. The revision may be prompted by external circumstance -censorship, or theatrical, legal, or material exigencies. Alternatively, the revision may simply reflect an author's subsequent wishes. The reviser who is not the author presents another scenario and an entirely different set of problems and considerations. In all cases, however, the transaction is linear, conscious, and specific, marked by evidence of the reviser's preference and intentionality.
2) Translation. Translation transfers, 'carries across', a text into a different language, recreating it anew. The later text explicitly claims the identity of the original, its chief project an etiological journey to itself, or to a version of itself. Translations are generally grouped according to source language, and judged by standards of 'fidelity', i.e., the closeness of the rendering to the original and the success of the translator in representing the original's literary quality and effects. But the usual distinctions among verbatim translation, paraphrase, and metaphrase deflect attention from the real difficulty inherent in this type of intertextuality, namely the unbridgeable cultural and linguistic spaces between languages and cultures.
3) Quotation. Quotation literally reproduces the anterior text (whole or part) in a later text. Quotations may be variously marked for reader recognition, by typographical signals, by a switch in language, for example, or by the actual identification of the original author or text. 4) Sources. Source texts provide plot, character, idea, language, or style to later texts. The author's reading and remembering directs the transaction, which may include complicated strategies of imitatio.
The source text in various ways shapes the later text, its content, or its rhetorical style and form. There are at least three subdivisions possible here.
The source coincident. Here the earlier text exists as a whole in dynamic tension with the later one, a part of its identity. The later one may simply respond to an earlier one: Ralegh writes a famous reply to Marlowe's Passionate Shepherd, for example. Gabriel Harvey and Thomas Nashe engage in a pamphlet war. The serious literature of controversy, political and religious, employs extensive quotation and reference so that the originating text and present response take on a new identity.
The source proximate. This is the most familiar and frequently studied kind of intertextuality, that of sources and texts. The source functions as the book on-the-desk; the author honours, reshapes, steals, ransacks, and plunders. The dynamics include copying, paraphrase, compression, conflation, expansion, omission, innovation, transference, and contradiction. Shakespeare's use of North's Plutarch in Julius Caesar provides a good example of a proximate source.
The source remote. This last term includes all sources and influences that are not clearly marked, or that do not coincide with the book-on-the-desk model. The field of possibilities here widens to include all that an author previously knew or read: grammar-school texts, classical stories and authors, the Bible, evident in allusions, turns of phrase, or re-appropriated motifs. The dynamic still consists of reading and remembering, even if the process of recollection and re-articulation occurs in the subconscious mind of the author. Remote sources often include the work of particularly original, earlier playwrights: Thomas Kyd, for example, who readapted Senecan conventions to the Elizabethan stage.
5) Conventions and configurations.
Poets constantly appropriated and adapted numerous conventions from classical, medieval, and continental literatures, formal and rhetorical. Senecan conventions in tragedy, the chorus, messenger, domina-nutrix dialogue, stichomythia, and soliloquy, for example, have all attracted due attention. So have Plautine and Terentian conventions in comedy: eavesdropping, disguise, lockouts, stock characters like the witty slave, bragging soldier, blocking senex, and so on. Configurations of classical character and situation also appear importantly in the drama: Shakespeare adapts the New Comedic triangle consisting of importunate adulescens, blocking senex, and nubile virgo into marvellous, varied, and expressive tensions throughout his career.
6) Genres. These may appear in individual signifiers (e.g., the play-within-the-play of revenge tragedy, the singing shepherds in pastoral), which function much like conventions, or range to broader and less discrete forms. On the far end of the spectrum often a sophistication and smoothness of adaptation makes difficult positive identification of origins: Spenser's The Faerie Queene absorbs classical, medieval, and contemporary works into a new creation; Milton yokes and challenges epical and Biblical traditions in Paradise Lost.
One Shakespearean example may demonstrate the subtlety and evocative power of generic intertextuality. No one has ever successfully proved that Shakespeare ever read a single line of Petrarch's Canzoniere. Yet any reader of Shakespeare's sonnet sequence or Love's Labour's Lost recognizes an intimate familiarity with the conventions and genre that Petrarch (along with Dante and others) originated. These conventions and assumptions, in turn, Shakespeare further adapts in Romeo and Juliet, where Petrarch is appropriately invoked by Mercutio. Romeo in love with Rosaline seems to be a conventional Petrarchan lover, full of fanciful and literary paradoxes.
7) Paralogues. Unlike the preceding types, paralogues do not depend on lineation through the author's mind or intention. Today, critics can adduce any contemporary text in conjunction with another, without bothering at all about verbal echo, or even imprecise lines of filiation. In some ways the discussion of paralogues departs from past critical practices, bringing new freedom; but, of course, new perils threaten: rampant and irresponsible association, facile cultural generalization, and anecdotal, impressionistic historicizing.
II. Intertextuality as a literary device
Intertextuality is a sophisticated literary device used in writing. In fact, it is a textual reference within some text that reflects the text used as a reference. Instead of employing referential phrases from different literary works, intertextuality draws upon the concept, rhetoric or ideology from other texts to be merged into the new text. It may be the retelling of an old story, or the rewriting of popular stories in a modern context; for instance, James Joyce retells The Odyssey in his very famous novel Ulysses.
Although allusion and intertextuality seem similar to each other, they are slightly different in their meanings: an allusion is a brief and concise reference that a writer uses in another narrative without affecting the storyline, whereas intertextuality uses the reference of the full story in another text or story as its backbone.
Intertextuality examples from literature include the following. Example 1: Wide Sargasso Sea by Jean Rhys. In her novel Wide Sargasso Sea, Jean Rhys draws on events from Charlotte Bronte's famous novel Jane Eyre. The purpose is to tell the readers an alternative tale. Rhys presents the wife of Mr. Rochester, who played the role of a secondary character in Jane Eyre. The setting of this novel is Jamaica, not England, and the author develops the backstory for her major character. While reworking Jane Eyre, she gives her own interpretation amid the narrative by addressing issues such as the roles of women, colonization and racism that Bronte did not otherwise point out in her novel.
Example 2. A Tempest by Aime Cesaire. Aime Cesaire's play A Tempest is an adaptation of The Tempest by William Shakespeare. The author parodies Shakespeare's play from a post-colonial point of view. Cesaire also changes the occupations and races of his characters. For example, he transforms Prospero, who was a magician, into a slave-owner, and changes Ariel, though he was a spirit, into a mulatto. Cesaire, like Rhys, makes use of a famous work of literature and puts a spin on it in order to express the themes of power, slavery and colonialism.
Example 3. Lord of the Flies by William Golding.
William Golding, in his novel Lord of the Flies, takes the story implicitly from Treasure Island, written by Robert Louis Stevenson. However, Golding utilizes the concept of adventures, which young boys love, on the isolated island where they are stranded. He changes the narrative into a cautionary tale, rejecting Stevenson's glorified stories of exploration and swashbuckling. Instead, Golding grounds this novel in bitter realism by demonstrating the negative implications of the savagery and fighting that can take control of human hearts once the characters have lost the idea of civilization.
Example 4. The Lion, the Witch, and the Wardrobe by C.S. Lewis.
In this case, C.S. Lewis adapts Christ's crucifixion in his fantasy novel The Lion, the Witch and the Wardrobe. He very shrewdly weaves together religious and entertainment themes for a children's book. Lewis takes an important event from The New Testament and transforms it into a story about redemption. In doing so, he uses Edmund, a character who betrays his saviour, Aslan, causing him to suffer. Generally, the motive of this theme is to introduce other themes such as evil actions, the loss of innocence and redemption.
Example 5. For Whom the Bell Tolls by Ernest Hemingway. In the following example, Hemingway uses intertextuality for the title of his novel. He takes the title from Meditation XVII, written by John Donne, an excerpt of which reads: "No man is an island… and therefore never send to know for whom the bell tolls; it tolls for thee." Hemingway not only uses this excerpt for the title of his novel, he also makes use of the idea in the novel, as he clarifies and elaborates Donne's abstract philosophy in the context of the Spanish Civil War. By the end, the novel expands into other themes such as loyalty, love and camaraderie.
The majority of writers borrow ideas from previous works to give layers of meaning to their own works. In fact, when readers read the new text in the light of another literary work, all the related assumptions, effects and ideas of the other text provide them with a different meaning and change the way the original piece is interpreted. Since readers take influence from other texts, and sift through archives while reading new texts, this device gives them relevance and clarifies their understanding of the new texts. For writers, intertextuality allows them to open new perspectives and possibilities for constructing their stories. Thus, writers may explore a particular ideology in their narrative by discussing recent rhetoric in the original text.
III. Intertextuality in translation
Intertextuality is a quality of any literary text and represents the ability of a text to accumulate information not only directly from personal experience, but also indirectly from other texts; intertextuality is an ontological quality of any text, and, first of all, of the fictional text. It is intertextuality that determines the adoption of a fictional text into the process of literary evolution. It means that a piece of fictional writing becomes a text only when its intertextuality is actualized. In the fictional text intertextuality is actualized by the author's usage of so-called "intertextual inclusions", or, to be more exact, by the usage of intertextual elements. In the process of translation of a fictional text, the translation of the intertextual elements requires special attention from the translator, and these facts allow us to identify the intertextual element as a unit of translation. Intertextual elements are "multifunctional: they increase time frames and cultural space of the text", thus making a basis for the creation of multiple associations; they can be a means to express evaluation (as a way to affect by evaluation, which is made not directly, but with the help of precedent texts), and they can also be used to strengthen arguments or to create irony. The inclusion of existing texts into new forms and their cultural and literary transformation at different levels give us the opportunity to consider intertextual elements as the most important part of intertextuality, which is defined by the reference of text elements to precedent facts. On the one hand, intertextuality is associated with ways of signification and labelling at the structural level; on the other, with the creation of associations aimed at the textual and the discursive levels.
A text with intertextual elements is always stylistically marked, as intertextual elements may lose connection with a source text, becoming, thus, the speech stereotypes. Thus, the preservation of intertextual element in the process of translating a literary text is a necessary condition for the equivalent translation, which allows us to consider intertextual element as a unit of translation.
In the modern translation studies, the problem of defining a unit of translation is one of the most debatable and difficult. R.K. Minyar-Beloruchev identifies two possible approaches to understanding of units of translation in the aspect of intertextuality: 1) "Semantic" approach in the isolation of the units of translation enables us to follow the source text strictly. The author notes that the very isolation of the units of translation at the same time, like any other segmentation of the text is, firstly, linear, and secondly, has subjective nature. Among the supporters of the "semantic" approach are the following researchers: J.-P. Vinay and J. Darbelnet, Y.S. Stepanov, A.F. Shiryaev, R.K. Minyar-Beloruchev, V. Alimov, V.N. Comissarov, T. Kazakova and others.
In determining the principles of selection of the units of translation, T. Kazakova believes that "the main condition for the correct determination of the initial units of translation is identification of the textual features of a unit". In the process of defining the units of translation in a source text, the text should be evaluated in terms of the relations that determine the content or the structural and functional properties of its constituent words. The author notes that the unit of translation may be any segment, from a single word to the whole text.
According to R.K. Minyar-Beloruchev, it is impossible to specify the units of translation in advance and thereby compile a list of possible solutions for all the cases encountered in the practice of translation. These units can be any unit of speech requiring a separate decision during the process of translation. The selection of such units of speech is also determined by the conditions of work.
2) "Functional" approach to the defining of the units of translation is featured by such authors such as Y.I. Retsker, L.S. Barkhudarov, S. Tyulenev, V. Sdobnikov etc. These researchers are based upon the proposition that every minimal amount of source code that executes in any function must have its compliance in the translation. And such a minimal amount of time is determined only by comparing the original text with the translated text. The functional approach allows us to speak about the translation of units mainly in the presence of inconsistencies between the source and target texts.
Thus, in the process of translating an intertextual element from one language into another, a translator should: 1) identify the intertextual element in the fictional text; 2) choose an appropriate variant of translation. These conditions are necessary to keep the meaning of the intertextual element in the translated text, as an intertextual element, being a unit of translation, requires a separate translation solution. When an intertextual element is not identified in the original text, there may be a mistake in the choice of the unit of translation, which may lead to a disturbance of the equivalence of the translated text.
IV. Conclusions
The foregoing research explores articles, scientific works and studies conducted on the theme of intertextuality. The sources, however, vary in their definitions of the intertextual notion, its forms and the linguistic means of its realization. For instance, some sources define intertextuality as the determination of text meanings through other texts, others offer the notion of the complex relations between a text and other texts, and sometimes intertextuality has been considered as plagiarism. However, this negative assumption did not affect the final conclusion, which is that intertextuality is extremely important for the full understanding of any literary text.
This article highlights seven types of intertextuality, which are translation, revision, quotation, sources, conventions and configurations, genres and paralogues. What is more, three types of intertextual frames and some of the means by which intertextuality can be created were shown. Every notion is supported by an example and comments on each unit. It can be clearly seen from the research that intertextuality carries a double focus. On the one hand, it draws attention to the importance of the texts which were used for intertextual creation; on the other hand, intertextuality leads readers to an understanding of the prior texts as contributions to a code which alone makes possible the various effects of significance.
Numerous studies have been conducted on various facets of intertextuality as a literary device, and it is often compared to allusion; however, intertextuality uses the reference of the story as a whole within another text. This idea is supported and proved by the five examples above.
Considering the question of intertextuality in translation, it can be defined as one of the most debatable and difficult ones. Two possible approaches to understanding the units of translation are defined in the intertextual context, and seven definitions concerning the translation of intertextuality are offered. In addition, the process of translation should include the identification of the intertextual elements and a choice of the appropriate variant of translation.
Finally, the theoretical research on intertextuality has shown that intertextuality includes an appeal to already created texts, and the most popular intertextual elements include allusion, quotation, translation and duplication. From the comparative analysis which was conducted, it can be seen that intertextual elements are used mainly in literary sources. Given that there are no unique methods of transferring or rendering intertextual elements in translation, they can represent some difficulty when translating the source text into the target one.
"year": 2020,
"sha1": "d4265c9fb3ea56102cbd7e02cacfcca179020dd3",
"oa_license": null,
"oa_url": "https://doi.org/10.15863/tas.2020.06.86.99",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "34c4f6220fff691c4221b6522c3ad0111a98e1a9",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Art"
]
} |
258195378 | pes2o/s2orc | v3-fos-license | Nonlinear Modeling of Contact Stress Distribution in Thin Plate Substrates Subjected to Aspect Ratio
The foundation substrate’s basal contact stresses are typically thought to have a linear distribution, although the actual form is nonlinear. Basal contact stress in thin plates is experimentally measured using a thin film pressure distribution system. This study examines the nonlinear distribution law of basal contact stresses in thin plates with various aspect ratios under concentrated loading, and it establishes a model for the distribution of contact stresses in thin plates using an exponential function that accounts for aspect ratio coefficients. The outcomes demonstrate that the thin plate’s aspect ratio significantly affects how the substrate contact stress is distributed during concentrated loading. The contact stresses in the thin plate’s base exhibit significant nonlinearity when the aspect ratio of the test thin plate is greater than 6~8. Compared to linear and parabolic functions, the exponential function model with an added aspect ratio coefficient can better optimize the strength and stiffness calculations of the base substrate and more accurately describe the actual distribution of contact stresses in the base of the thin plate. The correctness of the exponential function model is confirmed by the film pressure distribution measurement system that directly measures the contact stress at the base of the thin plate, providing a more accurate nonlinear load input for the calculation of the internal force of the base thin plate.
Introduction
When establishing power transmission and communication towers on soft soil near rivers and lakes, a flexible foundation slab is a common foundation form [1,2]. The form of foundation reaction force distribution greatly influences the internal force response of the foundation base plate. In engineering, a concentrated load is a common form of load used between the foundation and building, and the contact stress distribution between the thin slab and substrate with different width-to-height ratios is usually assumed to be linear [3][4][5]. However, actual contact stresses are generally nonlinearly distributed and can be saddle-shaped, parabolic, or anti-parabolic, which is mainly determined by the substrate bed factor or soil type and the material properties of the sheet [6]. The three most common nonlinear contact stress distributions described in the generalized Winkler foundation model are saddle-shaped, parabolic, and anti-parabolic [7,8]. Scholars have conducted studies on the contact stresses of foundation footings under concentrated loads, and Wang et al. [9] found that the measured field test values of foundation contact stress distribution present a convex parabolic form, small at the edges and large in the middle, and that the distribution of reaction forces does not vary much for different soils. Similarly, a large number of field measurements of foundation contact stress distribution in engineering practice show a similar parabolic variation law [10][11][12]. To account for this nonlinear variation law, Wang et al. [13] used a parabolic surface function to describe the foundation contact stress distribution and used K to control the adjustment function, where K > 1 for sandy foundations, K < 1 for clay foundations, and K = 1 for a uniform distribution of reaction forces. The contact stress at a specific location x0 in the foundation can then be expressed as P(x = x0) = K · P0, where K is the control adjustment coefficient function. The relationship of K can be determined through experimental simulations. However, this parabolic function does not accurately reflect the nonlinear law of the stresses in the thin plate and the substrate, and a more accurate functional model is needed to characterize it.
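As a numerical illustration of the K-adjusted parabolic description attributed to Wang et al. above, the sketch below constructs a one-dimensional parabolic contact stress profile over a plate of width B and scales it so that it balances the applied load. The specific functional form, parameter names and values are illustrative assumptions rather than the surface function used in [13].

```python
# Illustrative sketch: a K-adjusted parabolic contact stress profile p(x) over a
# foundation strip of width B, normalized so the stresses balance the applied load P.
# This is only a plausible one-dimensional analogue, with K controlling the
# centre-to-edge ratio; it does not reproduce the exact surface function of Wang et al.
import numpy as np

def parabolic_profile(x, B, P, K):
    """Contact stress at position x (0 <= x <= B) for a total load P per unit length.

    K > 1: centre-peaked (sandy foundation), K < 1: edge-peaked (clay),
    K = 1: uniform distribution.
    """
    xi = 2.0 * x / B - 1.0                     # map position to [-1, 1]
    shape = 1.0 + (K - 1.0) * (1.0 - xi**2)    # parabolic adjustment about the mean
    mean_shape = 1.0 + 2.0 * (K - 1.0) / 3.0   # analytic mean of 'shape' over [-1, 1]
    return (P / B) * shape / mean_shape        # scaled so the profile integrates to P

x = np.linspace(0.0, 0.075, 101)               # 75 mm wide plate, as in the tests below
p = parabolic_profile(x, B=0.075, P=1000.0, K=1.5)
print(np.trapz(p, x))                          # ~1000: resultant matches the applied load
```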
The Technical Provisions for the Design of Power Transmission Line Foundations (DL/ T5219-2005) [14] and the Code for the Design of Building Foundations (GB50007-2002) [15] assume linear contact stress distribution when the aspect ratio of foundation steps is less than or equal to 2.5 to simplify calculations. However, when the step width-to-height ratio exceeds 2.5, no calculation method for the internal force of the thin plate is provided in the specification. Determining the distribution of contact stresses between thin slabs and footings for calculating internal forces in foundation footings when the aspect ratio is greater than 2.5 is crucial for optimizing thin slab designs and reducing engineering costs [16]. The problem of contact stress distribution between the thin plate and the substrate has been studied by many scholars, and various methods have been developed to solve this problem. Solving the contact stress distribution of a four-sided free plate on an elastic substrate is important for calculating the analytical solution of displacement and internal forces in the plate [17]. Numerous scholars at home and abroad have studied the methods for solving the bending problem of a rectangular thin plate with four free sides on an elastic foundation, including the numerical method, analytical method, and semi-analytical method with semi-numerical values [18][19][20]. Each solution method must satisfy not only the fourth-order differential bending equation for the plate [21] but also the geometric and internal force boundary conditions on the four free edges [22]. However, the geometric and internal force boundary conditions related to the form of distribution of contact stresses [23,24] are essential for the bending equations [25]. Therefore, accurately describing the distribution form of contact stresses through model studies is necessary. In engineering and scientific fields, measuring contact stress is vital for evaluating material performance, designing new products, and optimizing production processes.
Many mechanical systems' durability and performance are significantly impacted by contact stress, a crucial factor. The deformation, wear, and failure of these parts can all be significantly influenced by the distribution of stresses that take place at the interface between two contacting bodies. Therefore, for designing and optimizing mechanical systems, it is crucial to comprehend how contact stress arises and how it can be measured and controlled. Guan investigated the creation of tri-axial stress measuring and sensing technology for tire-pavement contact surfaces [26] and described a cutting-edge method for determining the tri-axial stress distribution at the tire-pavement contact surface using a sensor array [27].
First of all, it has been hard to quantify contact stress. Secondly, the distribution of contact stress for thin plates is primarily linear and parabolic, which is relatively conservative for the input conditions of internal force calculation. In this paper, we introduce an innovative type of nonlinear distribution of contact stress called the exponential function model. In order to determine contact stresses in the substrate of thin plates with different aspect ratios, this paper performs film pressure measurement tests. This method takes into account the problems posed by the aspect ratio of the base plate as well as the phenomenon of nonlinear stress distribution in thin plates and substrates. We investigate how aspect ratio affects contact stress distribution and propose a nonlinear distribution model of contact stresses in the substrate of thin plates under concentrated loading. The results of our study have important theoretical and engineering value by providing more precise external boundary conditions for designing and calculating the strength and stiffness of thin plates with different aspect ratios.
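Since the proposed model is an exponential-type distribution governed by an aspect ratio coefficient, a natural data-processing step is to fit such a curve to measured centreline stresses. The sketch below does this with SciPy's curve_fit for an assumed exponential form; the form itself, the coefficient named beta and the sample measurements are placeholders, not the paper's actual model or data.

```python
# Illustrative sketch: fitting an assumed exponential-type profile to measured
# centreline contact stresses. The functional form (peak value decaying exponentially
# towards the plate edges, with 'beta' standing in for the aspect ratio coefficient)
# is an assumption for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(x, p0, beta, B=0.075):
    """Contact stress: p0 at the plate centre, decaying exponentially to the edges."""
    return p0 * np.exp(-beta * np.abs(x - B / 2.0) / (B / 2.0))

# Hypothetical centreline measurements (position in m, stress in kPa).
x_meas = np.linspace(0.0, 0.075, 11)
p_meas = np.array([55, 80, 120, 190, 300, 380, 310, 200, 125, 85, 60], dtype=float)

(p0_fit, beta_fit), _ = curve_fit(exp_profile, x_meas, p_meas, p0=[350.0, 2.0])
print(f"fitted peak = {p0_fit:.1f} kPa, decay coefficient beta = {beta_fit:.2f}")
```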
Experimental Programs
This section first introduces the component composition and working principle of the thin film pressure distribution test system, as well as the data acquisition method and process. Then, an experimental plan is developed to obtain the real contact stress distribution of the thin plate substrate under graded loading by changing the aspect ratio of the thin plate used in the test, based on the thin film pressure distribution measurement system.
Introduction to Thin Film Pressure Distribution Measurement System
The film pressure distribution measurement system is divided into a hardware part and a software part [28]. The hardware part consists of a polyester film sensor, an Analog to Digital Converter (A/D) conversion circuit for the sensor (handle), and software based on a PC using C language to complete an intelligent display system on a microcontroller.
In the test device, Figure 1a shows a 5076-P1-35611DT1-50 type thin film pressure sensor. The area of the thin film pressure sensor is 83.8 mm wide × 83.8 mm high. The pressure sensor consists of two very thin polyester films, as shown in Figure 1b, where the inner surface of one film is laid with a number of band conductors arranged in rows, and the inner surface of the other film is laid with a number of band conductors arranged in columns conductors. The intersection of the rows and columns is the pressure sensing unit, which has a total of 44 rows and 44 columns, forming a uniform distribution of 1936 measurement points. The conductors are made of conductive material with a certain width, and the distance between rows can be changed according to requirements. Therefore, polyester film sensors are available in a variety of sizes and shapes and can be used in different contact conditions. The outer surface of the conductor in the sensing area is coated with a special pressure-sensitive semiconductor material. When two films are combined into one, the intersection of a large number of transverse and longitudinal conductors forms an array of stress-sensing points. When an external force is applied to the sensing unit, the resistance value of the conductor changes linearly with the change in the external force, thus reflecting the stress value at the sensing point. The resistance value is maximum when the pressure is zero and decreases as the pressure increases. This linear change in voltage can reflect the magnitude and distribution of the pressure between the two contact surfaces. The small thickness and high flexibility of the thin-film pressure sensor have no influence on the contact environment, so it can directly measure the real contact stress distribution under different conditions and has the characteristics of high measurement accuracy.
The connection diagram between the conductive handle and the sensor is shown in Figure 1a. The connection part in the handle has a conductive interface that can transmit the electrical signal from the sensor film to the computer.
The handle is the connection device between the computer software and the sensor and is also the A/D converter. When an external force is applied to the sensing point, the resistance value of the semiconductor changes proportionally to the change in the external force. This change in electrical signal is transmitted to the control circuit on the handle, which is then input to the software and displayed on the computer screen to reflect the stress value and its distribution at the sensing point. The width and spacing of the conductors within the sensor determine the number of sensing points per unit area and the spatial resolution, which can be determined as needed. The value of the spatial resolution on the sensor area can meet a variety of measurement requirements. The sensors can be fabricated in various dimensions and configurations, exhibiting a stress measurement capacity ranging from 0.1 to 175 MPa. The unique feature of this grid sensor is that the sensing area is completely insulated from the non-sensing area. Knowing the spatial size distribution of the sensing points, the force applied to the sensor on a certain area can be intelligently digitized and displayed with Light Emitting Diode (LED) or Liquid Crystal Display (LCD) using the software.
The software part of the thin film pressure distribution measurement system is used to process the measured 2D matrix voltage data and convert it into 2D graphics or 3D graphics display. When a calibration file is not added, the initial values are displayed without units. When the calibration file is added, the initial values are converted into force values and divided into 17 color levels to display the measured stress distribution and the magnitude of the combined force values, indicated from small to large, in a blue-to-pink gradient 2D display graphic. Each pixel point corresponds to the intersection of the band conductors in the film, where the color reflects the measured data at the level set by the corresponding calibration file. In addition, the calibration file is different, and the same force value will be displayed in different colors.
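To make the data path from raw sensel readings to displayed forces concrete, the sketch below converts a hypothetical 44 x 44 matrix of raw counts into calibrated pressures and a resultant force, assuming the simple linear response described above; the calibration constant, array values and function names are invented for illustration and do not reproduce the vendor calibration file.

```python
# Illustrative sketch: converting raw readings from the 44 x 44 sensing grid into
# calibrated pressures and a resultant force. A simple linear calibration is assumed
# (raw counts proportional to pressure), mirroring the linear response described in
# the text; the constants below are placeholders, not the actual calibration file.
import numpy as np

SENSOR_SIDE_MM = 83.8
GRID = 44
CELL_AREA_M2 = (SENSOR_SIDE_MM / GRID / 1000.0) ** 2    # area of one sensing cell

def calibrate(raw_counts, kpa_per_count=0.85):
    """Map raw sensel counts to pressure in kPa using an assumed linear calibration."""
    return raw_counts * kpa_per_count

# Hypothetical raw frame: background noise plus a loaded patch near the centre.
rng = np.random.default_rng(0)
raw = rng.integers(0, 3, size=(GRID, GRID)).astype(float)
raw[18:26, 18:26] += 40.0

pressure_kpa = calibrate(raw)
total_force_n = pressure_kpa.sum() * 1000.0 * CELL_AREA_M2   # kPa -> Pa, times cell area
print(f"resultant force over the sensor: {total_force_n:.1f} N")
```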
Experimental Program
Square glass sheets of the same width and different thicknesses are used as test specimens; the widths of the specimens are all 75 mm, and the thicknesses are 4 mm, 5 mm, 8 mm, 10 mm, and 12 mm, respectively. Test specimens with aspect ratios of 18.75, 15, 9.375, 7.5, and 6.25 are thereby obtained, as shown in Figure 2a.
The actual stress distribution of the square plate when subjected to the concentrated load is observed by applying a graded concentrated load on the center of the square glass plate and placing a thin-film pressure transducer at the bottom. The width of the glass plate is kept constant in the test, and the thickness of the glass is varied to achieve a change in the aspect ratio. A schematic diagram of the test assembly is shown in Figure 3 below. The overall picture of the experimental assembly can be visualized according to Figure 3.
The loading device for the test is a microcomputer-controlled electronic universal testing machine, as shown in Figure 4. The loading speed should not be too fast to avoid irregular discrete damage by too rapid force, and 10 N/s is used. Glass plates with different aspect ratios are placed on the upper part of the pressure film and loaded by 250 N, 500 N, Sensors 2023, 23, 4050 5 of 16 750 N, 1000 N, and 1250 N in a graded manner. The film pressure distribution system is used to measure the real contact stress distribution of the thin plate substrate, and finally, the system-recorded data is displayed.
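For reference, the aspect ratios quoted above follow directly from the stated plate width and thicknesses, and a nominal average pressure (applied load divided by plate area, the quantity later denoted p0 in Section 4) can be formed for each load level. The short Python sketch below only reproduces that arithmetic; it assumes nothing beyond the dimensions and loads given in the text.

```python
width_mm = 75.0
thicknesses_mm = [4.0, 5.0, 8.0, 10.0, 12.0]
loads_n = [250, 500, 750, 1000, 1250]

# Aspect ratio lambda = a / delta for each specimen
aspect_ratios = [width_mm / t for t in thicknesses_mm]
print(aspect_ratios)  # [18.75, 15.0, 9.375, 7.5, 6.25]

# Nominal average pressure F / a^2 in N/cm^2, with a = 7.5 cm
a_cm = width_mm / 10.0
for f in loads_n:
    print(f, round(f / a_cm**2, 2))
```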
Test Results and Analysis
The film pressure distribution measurement system displays the combined value of contact stresses in thin plates with different aspect ratios under graded concentrated loads measured by 1936 measurement points on the film pressure sensor. The actual distribution of the contact stress of the thin plate is also reflected visually using a cloud map. By repeating the test several times, the accuracy of the film pressure distribution measurement system and the real situation of the contact stress distribution in the substrate of the thin plate can be verified by comparing the combined values of the load and the test results. The test results provide a basis for a more accurate study of the contact stress distribution at the base of the thin plate.
Combined Force Value of Film Pressure Distribution Measurement Results
By applying graded concentrated loads (250 N, 500 N, 750 N, 1000 N, and 1250 N) to square thin plates with different aspect ratios (18.75, 15, 9.375, 7.5, and 6.25), the stability of the combined force values is demonstrated in the test results. The reproducibility of the test has an important impact on the use of the film pressure distribution measurement system, and for this reason, multiple sets of data are measured for each load in the contact stress distribution test on thin plates with different aspect ratios. Studying whether the measured combined force value of the film pressure distribution measurement system is controlled within a certain error range under the same electronic universal testing machine load provides an accurate and realistic distribution for the study of contact stress distribution. The following conclusions can be obtained from Table 1. First, the film pressure distribution measurement system can basically control the input-graded load value and the output stress combined force value within an error range of 10%. When measuring the contact stress, the total combined force in the test result can be lower than the applied load, resulting in a negative test error. Second, for the thin plate with a large aspect ratio, the combined force value measured by the test has a relatively large error in comparison with the load. The reason for this is that when the concentrated load reaches 1000 N or more, the stress concentration phenomenon may occur, or the stress value at the central position of the sensor exceeds the range of its measuring point due to the large aspect ratio of the thin plate. Thirdly, as the aspect ratio increases, the combined force value of the test results and the load are gradually matched, and the error can be controlled even within 5%, which provides an accurate test basis for the study of contact stress distribution. The overall test results show that the contact stress measurements can be controlled within a reasonable range regardless of the aspect ratio, and the test results of the film pressure measurement system are relatively reliable [29].
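The repeatability check described above amounts to summing the force carried by each of the 1936 sensing cells and comparing that combined force with the load applied by the testing machine. A minimal sketch of that bookkeeping is given below; the effective cell area, the synthetic data, and the variable names are illustrative assumptions rather than values taken from the sensor's documentation.

```python
import numpy as np

def combined_force(stress_map_n_per_cm2, cell_area_cm2):
    """Sum the force over all sensing cells: F = sum(sigma_i * A_cell)."""
    return float(np.sum(stress_map_n_per_cm2) * cell_area_cm2)

def percent_error(measured, applied):
    """Signed percent error of the measured combined force vs. the applied load."""
    return 100.0 * (measured - applied) / applied

# Illustration with synthetic data for a 44 x 44 grid covering ~7.5 cm x 7.5 cm
cell_area = (7.5 / 44) ** 2            # assumed effective cell area in cm^2
rng = np.random.default_rng(1)
stress_map = rng.uniform(0.0, 40.0, size=(44, 44))   # invented stress readings
applied_load = 1000.0                  # N
f_meas = combined_force(stress_map, cell_area)
print(f_meas, percent_error(f_meas, applied_load))
```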
Characteristics of Nonlinear Distribution of Contact Stress in Thin Plate Substrate
The results of the contact stress test measurements at the bottom of the thin plate can be represented visually in the measurement software by means of a stress cloud [30]. The magnitude of the contact stress is differentiated by color, and the stress is reflected as a gradation from blue to pink in the cloud.
As an example, in the test results for a square glass plate with a maximum aspect ratio of 18.75, the distribution of substrate contact stresses exhibited a significant nonlinearity when the aspect ratio of the sheet is fixed and different loads are applied. From Figure 5, it can be seen that for the same square glass sheet under the central concentrated load, the total contact area of the sheet and the substrate is fixed. When the concentrated load is small, the pink area with larger contact stress in Figure 5a,b is small in proportion to the total contact area. With the increase of load, the corresponding area of contact stress increases, and the area with larger contact stress also increases gradually, which is shown by the expansion of the pink area in Figure 5d,e.
Similarly, when the thin plate load is fixed and the aspect ratio of the plate is changed, the test results of the basal contact stress distribution are compared, and it is found that there is also a significant difference in the form of basal contact stress distribution. Examining the distribution of basal contact stresses in thin plates with different aspect ratios under the same size of the concentrated load, as shown in Figure 6. Figure 6a-e show the distribution clouds of contact stresses for thin plates with aspect ratios of 18.75, 15, 9.375, 7.5, and 6.25 under a fixed load of 1000 N, respectively. From Figure 6a,b, we can see that when the aspect ratio of the plate is changed under a fixed load, the pink area in the contact stress distribution of the thin plate with aspect ratios of 18.75 and 15 is larger and non-linearly obvious. When the aspect ratios are 9.375, 7.5, and 6.25, respectively, which are shown in Figure 6c-e, the color difference of the stress cloud becomes smaller. As the aspect ratio of the plate decreases, the contact stress distribution gradually tends to be uniform, indicating that the aspect ratio of the plate has a significant effect on its contact stress distribution.
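A cloud map of the kind discussed here can be reproduced from the measured matrix with standard plotting tools. The sketch below uses matplotlib's `pcolormesh` with a blue-to-pink style colormap as a stand-in for the measurement software's display; the colormap choice and the synthetic data are illustrative assumptions, not the system's actual output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stress field peaked at the plate centre (illustration only)
n = 44
x = np.linspace(0.0, 7.5, n)
xx, yy = np.meshgrid(x, x)
stress = 60.0 * np.exp(-((xx - 3.75) ** 2 + (yy - 3.75) ** 2) / 2.0)

fig, ax = plt.subplots()
mesh = ax.pcolormesh(xx, yy, stress, cmap="cool", shading="auto")  # blue-to-pink style gradient
fig.colorbar(mesh, label="contact stress (N/cm$^2$)")
ax.set_xlabel("x (cm)")
ax.set_ylabel("y (cm)")
ax.set_aspect("equal")
plt.show()
```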
Modeling of Exponential Function Distribution of Contact Stress under Concentrated Load
An analytical model of the contact stress in the base of the thin plate is established, as shown in Figure 7, where a is the side length of the square thin plate, δ is the thickness of the square thin plate, and the aspect ratio is defined as λ = a/δ. A concentrated load F is applied at point O (a/2, a/2) in the center of the thin plate. The x-axis and y-axis represent the plane where the test plate is located, and the z-axis indicates the direction of the applied load and the direction of the contact stress.
In the Introduction, Wang [13] mentioned that parabolic functions can be used to reflect the distribution form of contact stresses in the base slab, and the contact stress distribution function at the bottom of the slab is assumed to take the parabolic form of Equation (1). The five parameters A1, A2, A3, A4, and A5 in Equation (1) are coefficients to be determined, which can be found from a stress balance equation and a continuity condition on the reaction force distribution at the four corner points. However, the solution process is complicated. In addition, in the actual situation, the distribution form of contact stress is not limited to a single paraboloid; there are other common nonlinear distribution cases, such as the saddle surface and the bell shape. In order to better describe the nonlinear distribution of contact stresses in the base of a square thin plate with four free sides under concentrated loading, an exponential function contact stress distribution model is proposed in this paper [31].
Equation (2) can simultaneously describe a variety of nonlinear contact stress distribution forms, including parabolic, saddle surface, and bell-shaped, among other nonlinear shapes, where A, B, and C are coefficients to be determined.
The magnitude of the contact stress in the center of the thin plate is defined as the average pressure under the plate multiplied by the corresponding aspect ratio coefficient, i.e., p = kp0, where p0 = F/a². Here p and p0 are the actual central contact stress of the thin plate and the uniform (average) pressure under the thin plate, respectively, and k = k(λ) is the aspect ratio coefficient of the thin plate. According to the equilibrium equation, the symmetry of the square thin plate structure, and the central contact stress, the coefficients A, B, and C of Equation (2) can be determined.
According to the previous definition, the contact stress at the center of the thin plate is the mean contact stress multiplied by the aspect ratio factor, p(a/2, a/2) = kp0. Integration of the contact stress over the plate region must balance the applied load, which leads to the equilibrium equation ∬ p(x, y) dx dy = F over the plate area. For a square thin plate, the concentrated load acts at the center of the plate, and the square thin plate structure is symmetrical. Therefore, the contact stress distribution model of the substrate is also symmetrical:
p(x, y) = p(y, x). The corresponding coefficients to be determined can be obtained by solving the system of equations (3) to (5).
Different external conditions may affect the baseline value of the contact stress at the base of the thin plate, but under the test conditions the environment is stable, the four sides are free, and the boundary contact stress value is 0. In order for the model to satisfy these external conditions, the value of A is therefore taken as 0.
The values of A and B that have been determined are substituted into the contact stress distribution model (2) and integrated over the plate region to obtain Equation (8), from which F is calculated. Using the polar coordinate transformation x = r cos θ + a/2, y = r sin θ + a/2, the symmetry of the problem allows the integration to be restricted to the wedge bounded by r cos θ = a/2, i.e., r ∈ [0, a/(2 cos θ)], θ ∈ [0, π/4], which together with its symmetric copies covers the original integration region (the integration region is shown in Figure 8); Equation (9) is thus obtained. The transformation of the integration region can be better understood from Figure 8. Carrying out the integration in Equation (9) and simplifying yields Equation (11).
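The change of variables described above can be checked numerically: for any distribution that shares the symmetries of the square plate, eight copies of the wedge θ ∈ [0, π/4], r ∈ [0, a/(2 cos θ)] centred on the plate tile the full square. The sketch below verifies this with scipy quadrature for a symmetric trial function; the trial function is an assumption introduced only to exercise the identity and is not the model of Equation (2).

```python
import numpy as np
from scipy import integrate

a = 7.5  # plate side length (cm)

def p_trial(x, y):
    """Any distribution symmetric about the plate centre will do for this check."""
    return np.exp(-((x - a / 2) ** 2 + (y - a / 2) ** 2))

# Full-square integral (dblquad integrates func(y, x) with x outer, y inner)
full, _ = integrate.dblquad(lambda y, x: p_trial(x, y), 0.0, a, 0.0, a)

# Eight times the wedge integral in polar coordinates about the centre
def integrand(r, theta):
    x = r * np.cos(theta) + a / 2
    y = r * np.sin(theta) + a / 2
    return p_trial(x, y) * r  # Jacobian r

wedge, _ = integrate.dblquad(integrand, 0.0, np.pi / 4,
                             0.0, lambda theta: a / (2 * np.cos(theta)))
print(full, 8 * wedge)  # the two values agree to quadrature accuracy
```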
Equation (13) is approximated by Equation (14), from which the solution for C is obtained.
Finally, the contact stress distribution model, including the aspect ratio factor for a square thin plate under a central concentrated load, is derived as Equation (15).
Comparison of Model Results with Experimental Data
The experimental results of the film pressure distribution measurement system show that the contact stress distribution in the substrate of a square thin plate under concentrated loading is nonlinear. The aspect ratio of the plate has a significant effect on the contact stress distribution in the substrate. In this section, the correctness of the exponential function contact stress distribution model proposed in this paper is investigated by further analysis of the experimental results and comparison with the theoretical values of different contact stress distribution models.
Comparison of Model and Experimental Stress Values of Contact Stress Distribution by Exponential Function
In this section, the data analysis is processed according to the contact stress exponential function distribution model Equation (15) presented in Section 4 for thin plates with different aspect ratios. It is found through experimental practice that the aspect ratio coefficient k proposed in this paper generally lies between 2λ/15 and λ/5. The test results in Section 3 are compared with the model values of Equation (15) in a three-dimensional simulation. The applicability of the exponential function model proposed in this paper for nonlinear contact stress distribution can be visualized from the three-dimensional comparison graph. Figure 9a-e are 3D comparison plots of the results comparing the theoretical and experimental values of the exponential function model under a concentrated load of 1000 N for aspect ratios λ of 18.75, 15, 9.375, 7.5, and 6.25, respectively. The x- and y-axes are the coordinates of the plane position of the sheet (in cm), the z-axis is the corresponding contact stress value (in N/cm²), the scatter points are the stress values measured by the test sensing units, and the colored surface is the model result. It can be intuitively concluded that when the aspect ratio is above 6~8, the nonlinearity of the contact stress distribution is obvious, and the model of this paper has a high degree of fit to the test results. When the aspect ratio is below 6~8, the contact stress distribution tends to be uniform, and the nonlinearity is reduced.
To further analyze the model results, this paper uses the film pressure distribution measurement system under concentrated load, whose sensing points are arranged in a 44 × 44 grid, for a total of 1936 measurement points. The amount of test data is therefore large. In addition, due to the range of the thin film pressure sensor, some regions exhibit stress concentration, which is not helpful when comparing the experimental results with the model results. Choosing the central path for comparison, by contrast, reflects the nonlinearity of the stress distribution well. The contact stress distribution of the square thin plate under concentrated loading is symmetrical, so the comparison can also be extended to the contact stress distribution of the whole thin plate substrate, with the specific contact stress values obtained by selecting the corresponding paths. To facilitate the comparison between the model stress values and the experimentally measured stress values, the central path of the surface equation in Equation (15), i.e., y = a/2, is chosen, giving the contact stress distribution function on this path as Equation (16), which is then compared with the experimental stress values.
For the analysis and processing of a large number of experimental measurement data, the general value of the aspect ratio coefficient k is taken between 2λ/15 and λ/5. In this paper, we take k = λ/5, determine the value of k, and substitute it into Equation (16) to obtain Equation (17).
Equation (17) is the exponential function distribution model of contact stress for the central path of the thin plate under concentrated load containing the aspect ratio.
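Along the central path, the only model input beyond the geometry is the aspect ratio coefficient. Taking k = λ/5 as above, the central (peak) contact stress p = k·p0 = k·F/a² implied for each test plate under the 1000 N load can be tabulated directly. The short sketch below does only this arithmetic and does not attempt to reproduce the full functional form of Equation (17), which is not reproduced here.

```python
a_cm = 7.5          # plate side length
load_n = 1000.0     # applied concentrated load
aspect_ratios = [18.75, 15.0, 9.375, 7.5, 6.25]

p0 = load_n / a_cm**2                 # mean pressure under the plate, N/cm^2
for lam in aspect_ratios:
    k = lam / 5.0                     # aspect ratio coefficient, taken as lambda/5 per the text
    print(lam, k, round(k * p0, 2))   # central contact stress p = k * p0
```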
When the load is fixed and the aspect ratio of the thin plate is changed, the theoretical value curves of the exponential function contact stress distribution model and parabolic model of the thin plate are compared with the corresponding experimental values in Section 3. Finally, the results comparing the theoretical values of the model with the experimental values when the aspect ratio λ is 18.75, 15, 9.375, 7.5, and 6.25 are obtained, as shown in Figure 10. From the comparison of the above model's theoretical values and the corresponding test values, the following conclusions can be drawn: 1. For the thin plate with an aspect ratio of 6~8 or more in Figure 9, the contact stress values of the tested bottom plate are nonlinearly distributed, and the exponential function contact stress distribution model is closer to the test results than the original parabolic model, which verifies the correctness of the model in the case of a larger aspect ratio; 2. For the thin plate with an aspect ratio below 6~8, it can be seen from Figure 9 that as the aspect ratio decreases, the contact stress value of the bottom plate is gradually stabilized within a certain stress range, and the exponential function contact stress distribution model and the parabolic model also converge at the same time. This phenomenon shows that when the aspect ratio is small, the exponential function contact stress distribution model and the original parabolic model are equally feasible and can reflect the contact stress distribution.
Error Analysis of Model and Test Values
The correctness of the exponential function contact stress distribution model can be verified by comparing the theoretical value curves of the exponential function contact stress distribution model on the central path of the thin plate in Section 5.1, the parabolic surface model, and the experimental results in Section 3. In order to apply the model to the engineering practice of foundation base plate design, an error analysis is performed for the above comparison results [32].
In the error analysis, the center of the thin plate is subjected to a concentrated load of 1000 N. A total of 5 intervals in the plate, marked as intervals 1, 2, 3, 4, and 5, are taken along the x-axis through the center, on the central path shown in Figure 11. The selection of the error intervals can be visualized in Figure 11. The average values of the test stresses in these intervals are compared with the theoretical stress values of the different models in order to avoid, as much as possible, the chance variation brought by the test measurements. The results of the error analysis are shown in Tables 2-6 (error comparison between the different models and the experimental stress values, in N/cm², for each aspect ratio). As seen from Tables 2-4, for the thin plate with an aspect ratio greater than 8, when the contact stresses at the base are calculated according to the assumption of a linear distribution, the contact stress values show a large error with respect to the experimental values, the error reaching 36.35~51.18%. The linear model is the most commonly used model in engineering today. It is conservative in design and does not take into account the nonlinear nature of the contact stress distribution [33,34]. As a result, it has a larger error compared with the nonlinear parabolic model and the exponential function model proposed in this paper. In contrast, the exponential function contact stress distribution model keeps the error between the model stress value and the test stress value within 5%. It can be seen that the exponential function model can more accurately represent the contact stress distribution of the thin plate under concentrated loading. From Tables 5 and 6, for the thin plate with an aspect ratio below 8, the substrate contact stress distribution gradually tends to be uniform, and the errors between the exponential function contact stress distribution model and the test stress values remain less than 5%. Meanwhile, when the parabolic model values are compared with the experimental values, there is an error of 38.18%~39.54%. The large parabolic model error under concentrated loading when the aspect ratio is relatively large arises mainly because the model requires stress equations and reaction forces at the four corner points when determining the model coefficients [8]. However, the free boundary conditions under this test condition result in inaccurate reaction forces at the four corner points, which increases the parabolic model error. The linear model values used in engineering are on the conservative side, with an error of 20.03~24.03% compared with the experimental values. The small error of the exponential function model arises because the model proposed in this paper inherently includes the aspect ratio, so that, especially for thin plates with large aspect ratios, it is more consistent with the distribution of the real substrate contact stresses. Thus, the exponential function contact stress distribution model proposed in this paper has the characteristics of small error and accurate calculation, which has practical significance in engineering applications.
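The error analysis described here reduces to averaging the measured stresses over each of the five intervals along the central path and comparing those averages with the corresponding model predictions. A generic sketch of that comparison is given below; the interval boundaries, the sample data, and the model callable are placeholders, since the actual tabulated values of Tables 2-6 are not reproduced here.

```python
import numpy as np

def interval_errors(x, measured, model_fn, edges):
    """Percent error of the model vs. the interval-averaged measured stress.

    x        : positions along the central path
    measured : measured stresses at those positions
    model_fn : callable returning the model stress at a position
    edges    : interval boundaries, e.g. [x0, x1, ..., x5] for 5 intervals
    """
    errors = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        mean_test = measured[mask].mean()
        mean_model = model_fn(0.5 * (lo + hi))   # model evaluated at the interval midpoint
        errors.append(100.0 * (mean_model - mean_test) / mean_test)
    return errors

# Illustration with synthetic data
x = np.linspace(0.0, 7.5, 44)
measured = 30.0 * np.exp(-((x - 3.75) ** 2) / 3.0) + 2.0
model = lambda xi: 30.0 * np.exp(-((xi - 3.75) ** 2) / 3.0) + 1.5
print(interval_errors(x, measured, model, np.linspace(0.0, 7.5, 6)))
```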
Conclusions
Multiple measurement tests based on the film pressure distribution measurement system are used to confirm the repeatability of the system. The repeated tests also provide precise experimental support for measuring the base combined force values and the contact stress distribution forms of thin plates with various aspect ratios. With the base force value controlled within a tolerable error range, the contact stress distribution can be studied more thoroughly and the impact of the aspect ratio on it can be confirmed. A graded loading test scheme is created for various load forms based on the thin film pressure distribution measurement system. The thin film pressure transducer directly measures the real contact stresses in the substrate of sheets with various aspect ratios and material properties, revealing the nonlinear character of the real contact stress distribution.
The form of the contact stress distribution at the bottom of thin plates with various aspect ratios is examined in this paper, and a model for the contact stress distribution with an exponential function and aspect ratio coefficients is suggested. The model, which can more accurately describe the nonlinear situation of the actual distribution of contact stresses at the bottom of the substrate and is supported by the experimental data, has a straightforward form and only requires the determination of an aspect ratio coefficient. The theoretical value of the exponential function model, the experimentally measured value of the real contact stress, and the nonlinear model of the current contact stress distribution are compared. Additionally, the corresponding error analysis is derived to confirm the accuracy of the model in this paper. The exponential function distribution model of contact stress, which is more accurate and consistent with the actual nonlinear distribution, is proposed in accordance with the characteristics of the real test values of the contact stress distribution and model distribution. The exponential function contact stress distribution model can provide nonlinear load input and more precise external force boundary conditions for the internal force calculation of thin slabs with aspect ratios of 6~8 or more, improving the strength and stiffness calculation of foundation base slabs. This model is more accurate than the original linear model of contact stress distribution and the parabolic model.
Only the contact stress distribution in the base of thin plates with different aspect ratios under concentrated loading is taken into account by the exponential function contact stress distribution model proposed in this paper. However, the subsequent work will look into the thin plate's material characteristics as well as additional influencing factors like the type of loading action.
Data Availability Statement: All data generated or analyzed during this study are included in this published article.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-04-19T15:34:27.074Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "47e9d2f9994e5b82649813cc9d762cdb79d00715",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/8/4050/pdf?version=1681726533",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b811c526c289c4a0355fbdcae21d5812d3b89d5",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
244052404 | pes2o/s2orc | v3-fos-license | Consolidation with Autologous Stem Cell Transplantation in Patients with Primary Central Nervous System Lymphoma Primer Santral Sinir Sistem Lenfomalarında Otolog Kök Hücre Nakli ile
Introduction: Primary central nervous system lymphomas (PCNSL) are a rare extranodal Non-Hodgkin lymphoma subgroup. The induction regimens for patients with PCNSL mostly involve high-dose methotrexate-based chemotherapies. There is still no standard approach for consolidation therapy. Recently, consolidation with autologous stem cell transplantation (ASCT) after high-dose chemotherapy has been widely used in the treatment of PCNSL. We aim to evaluate the results of PCNSL patients who underwent ASCT in our center. Methods: The data of PCNSL patients diagnosed in the Hematology Unit of Dr. Abdurrahman Yurtaslan Ankara Oncology Hospital between 2010 and 2021 were analyzed retrospectively. Results: Eleven patients were diagnosed with PCNSL. The median age of the patients included in the study was 53.5 years (range 38-68). Eight patients underwent ASCT for upfront consolidation. Seven patients achieved CR three months after ASCT; one patient was not evaluated due to death in the first month after the transplant. Three patients could not undergo ASCT due to transplantation ineligibility or mobilization failure. The median follow-up period in the study was 26 months (range 8-82 months). The median overall survival was not reached. Transplant-related mortality was 12.5%, and the mortality rate was 27% in the whole cohort. Of the patients who received ASCT, 62.5% had an almost two-year survival advantage. For the whole cohort, 73% of the patients had a chance for more prolonged survival during follow-up. Discussion and conclusion: In our cohort, the PCNSL patients mostly had high-risk disease; however, three-quarters of the patients could receive ASCT, and the same proportion of patients had a chance of long-term survival.
Introduction
Primary central nervous system lymphomas (PCNSL) are defined as a rare extranodal Non-Hodgkin lymphoma (NHL) subgroup that is typically localized to the brain, eye, spinal cord, and cerebrospinal fluid (CSF) without a primary tumor in the body [1][2][3]. Primary central nervous system lymphomas constitute 4% of all brain tumors and 4-6% of extranodal lymphomas [2]. The annual overall incidence rate of PCNSL is up to 0.5 cases per 100,000 [4]. Among immunocompetent individuals, the median age at diagnosis is 60 years [2]. Approximately 90-95% of PCNSL are Diffuse Large B Cell Lymphoma (DLBCL) [2]. The initial presentation usually involves an intracranial mass with associated headache, confusion, weakness, and neurological deficits. Tumor infiltration can be observed in the following percentages: brain hemispheres 38%, thalamus 16%, basal ganglion 14%, corpus callosum 14%, periventricular area 12%, cerebellum 12.5%, meninges 20%, cranial nerves 16%, and spinal nerves 1% [5]. Meningeal involvement can be detected by cytological examination of CSF in 16% of cases with PCNSL. Isolated leptomeningeal involvement is observed in less than 5% of PCNSL cases [4]. Spinal cord lymphoma is the rarest form of PCNSL and has an inferior prognosis [4,6,7]. In patients with PCNSL, various prognostic scoring systems are used to predict prognosis. The International Extranodal Lymphoma Study Group (IELSG) prognostic scoring is the most widely used one. It is based on age, Eastern Cooperative Oncology Group (ECOG) performance status (PS), lactate dehydrogenase (LDH) level, CSF protein concentration, and involvement of the deep brain structures [8]. Scores of 0-1 represent low risk, 2-3 intermediate risk, and 4-5 high risk [9]. According to the IELSG scoring system, PCNSL patients with at least two negative factors (IELSG score ≥2) showed poor survival [9], similar to that of patients who did not achieve complete response (CR) after two courses of induction chemotherapy [9,10]. Therefore, a more effective treatment strategy is required, especially for PCNSL patients with an IELSG score of ≥2 or those who cannot achieve CR after two courses of induction chemotherapy [8][9][10]. The standard treatment approach for PCNSL consists of induction and consolidation. PCNSL is sensitive to both chemotherapy and radiotherapy. High-dose methotrexate (HDMTX)-based chemotherapy can cross the blood-brain barrier and is an essential part of the treatment. Nevertheless, whole-brain radiotherapy (WBRT) as the sole therapy has been associated with poor survival and an increased risk of treatment-related neurotoxicity [10,11]. For most patients with PCNSL, induction mostly involves HDMTX-based chemotherapies. However, there is still no standard approach for consolidation therapy. In recent years, regardless of the patients' initial prognostic scores, consolidation with autologous stem cell transplantation (ASCT) after high-dose chemotherapy has been widely used in the treatment of PCNSL [12][13][14]. The efficacy of upfront ASCT in PCNSL is not clearly defined due to the limited number of studies and the limited number of patients enrolled in these studies [15]. For this reason, we aim to evaluate the results of PCNSL patients who underwent ASCT in our center.
Methods
The data of PCNSL patients diagnosed in the Hematology Unit of Dr. Abdurrahman Yurtaslan Ankara Oncology Hospital between 2010 and 2021 were analyzed retrospectively. Primary central nervous system lymphoma was defined as histologically confirmed NHL restricted to the CNS, including the brain parenchyma, spinal cord, eyes, cranial nerves, or meninges [16]. Diagnosis of PCNSL was histologically confirmed by stereotactic brain biopsy, surgical resection, or CSF cytology in all patients. All patients underwent pre-evaluation with contrast-enhanced brain magnetic resonance imaging (MRI), positron emission tomography to exclude systemic NHL, unilateral bone marrow aspiration and biopsy, and lumbar puncture for CSF analysis unless contraindicated. High CSF protein concentration was defined as more than 0.45 g/l in patients younger than 60 years old and more than 0.6 g/l in patients at the age of 60 and older [8]. Involvement of deep brain structures was defined as involvement of periventricular regions, basal ganglia, brain stem, and cerebellum. At the time of diagnosis, IELSG scoring (age > 60 years, ECOG PS ≥2, high LDH, high CSF protein concentration, and deep brain structures involvement) was performed in all patients [8,10]. Response to induction chemotherapy was evaluated by comparing brain MRI performed before and after the second induction course and after the induction regimen. Response to treatment was assessed according to the criteria of the International PCNSL Collaborative Group [17]. Patients with a chemosensitive response to induction therapy underwent upfront ASCT consolidation. Patients who had a chemorefractory response received a salvage regimen. For ASCT, neutrophil engraftment was defined as the first day on which the absolute neutrophil count (ANC) was >500/mm³ or >1000/mm³ for three consecutive days, and thrombocyte engraftment was defined as the first day on which the thrombocyte count was >20,000/mm³ for three consecutive days without transfusion. All patients received weight-adapted G-CSF before neutrophil engraftment. Overall survival was calculated from the date of histological diagnosis to death or the last date of follow-up. Progression-free survival was calculated from the date of histological diagnosis to disease progression, death, or the date of the latest follow-up for progression-free patients, whichever occurred first. Transplant-related mortality (TRM) was defined as death within the first 100 days after ASCT [18].
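Because the IELSG score used here is a simple sum of five dichotomised factors, it can be computed mechanically, as sketched below. The thresholds follow those stated in this section (age > 60 years, ECOG PS ≥ 2, elevated LDH, CSF protein above the age-specific cut-off, deep brain involvement); the function and variable names are our own, and the sketch is illustrative rather than part of the study's actual workflow.

```python
def ielsg_score(age, ecog_ps, ldh_elevated, csf_protein_g_per_l, deep_brain_involved):
    """Return (score, risk group) using the IELSG criteria described in the text."""
    csf_cutoff = 0.45 if age < 60 else 0.6          # g/l, age-specific threshold
    score = sum([
        age > 60,
        ecog_ps >= 2,
        bool(ldh_elevated),
        csf_protein_g_per_l > csf_cutoff,
        bool(deep_brain_involved),
    ])
    if score <= 1:
        group = "low risk"
    elif score <= 3:
        group = "intermediate risk"
    else:
        group = "high risk"
    return score, group

print(ielsg_score(age=65, ecog_ps=1, ldh_elevated=True,
                  csf_protein_g_per_l=0.7, deep_brain_involved=False))  # (3, 'intermediate risk')
```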
The local human research ethics committee approved this study. All procedures performed in studies involving human participants were in accordance with the ethical standards of the national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was carried out with the permission of the Ethics Committee of Dr. Abdurrahman Yurtaslan Ankara Oncology Training and Research Hospital (Permission granted/Decision number: 2020-12/910).
All statistical analyses were conducted using SPSS V21.0 (SPSS Inc., Chicago, IL) software. Descriptive statistics were applied to summarize the data. Categorical data were reported as rates, and numeric data were reported as medians and averages ± standard deviations. The Kruskal-Wallis test was used to compare engraftment times between the groups. The Kaplan-Meier method was used to analyze PFS and OS.
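For readers who prefer open-source tooling, the same kinds of survival and engraftment comparisons can be reproduced outside SPSS. The sketch below uses the `lifelines` and `scipy` packages as stand-ins; the example data are invented and the grouping is hypothetical, so it illustrates the methods rather than reproducing the study's actual analysis.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from scipy.stats import kruskal

# Invented follow-up data: months observed and death indicator (1 = died)
months = np.array([8, 12, 20, 26, 29, 35, 48, 63, 70, 80, 82])
died   = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=died, label="OS")
print(kmf.median_survival_time_)      # may be inf when the median OS is not reached
print(kmf.survival_function_.tail())

# Kruskal-Wallis comparison of engraftment times between two hypothetical conditioning groups
teca_engraftment = [8, 10, 12, 12, 16]
thiotepa_carmustine_engraftment = [9, 12, 14]
print(kruskal(teca_engraftment, thiotepa_carmustine_engraftment))
```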
Results
Eleven patients were diagnosed with PCNSL at our center. The median age of the patients included in the study was 53.5 years (range 38-68 years). The male-to-female ratio was 4.5. All patients' disease stage was ... The IELSG score was low risk in three patients; the score was ≥2 (intermediate or high risk) in eight patients. One patient with an IELSG score of 1 died due to sepsis in the first month after the transplant. As the induction regimen, R-HyperCVAD (course A: cyclophosphamide, vincristine, doxorubicin, dexamethasone, and rituximab; course B: methotrexate, cytarabine, and rituximab) was given to four patients, methotrexate-cytarabine to two patients, MATRIX (HDMTX, high-dose cytarabine, thiotepa, and rituximab) to four patients, and R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone) to one patient.
Seven patients achieved CR1 after induction therapy. One patient who received methotrexate-cytarabine and one patient who received R-CHOP had stable disease after first-line induction, so they received HyperCVAD as salvage and achieved CR2. Two patients who had MATRIX for induction had refractory disease; they received RT and temozolomide as salvage.
Six patients received radiotherapy during follow-up; three of them received radiotherapy after upfront ASCT. Six patients received a median of four intrathecal chemotherapy doses during the induction regimen.
Eight patients underwent upfront ASCT. The TECA (thiotepa, etoposide, and carboplatin) regimen was used as the conditioning regimen in five patients, and the thiotepa-carmustine regimen was used in three patients. The median neutrophil engraftment duration was 12 (range 8-16) days, and the median platelet engraftment duration was 12 (range 9-14) days.
Seven patients achieved CR three months after ASCT; one patient was not evaluated due to death in the first month after the transplant. Three patients could not undergo ASCT due to transplantation ineligibility (n=2) or mobilization failure (n=1). Of those who did not receive ASCT, one had temozolomide for consolidation and two had temozolomide for salvage.
The median follow-up period in the study was 26 months (range 8-82 months). The median follow-up time after ASCT was 20 months (range 1-63 months).
The median overall survival was not reached. Transplant-related mortality was 12.5%, and the mortality rate was 27% in the whole cohort. Three patients died in the first year after ASCT due to infection. In patients who received ASCT, the median survival time from diagnosis was 29 months (range 8-82 months), and 62.5% of patients had an almost two-year survival advantage. For the whole cohort, 73% of the patients had a chance for more prolonged survival during follow-up.
Discussion
Primary central nervous system lymphomas are rare and aggressive malignancies. Besides their low incidence, data on standard induction and consolidation treatment are limited. We aim to evaluate the outcome of the PCNSL patients who received ASCT. In our cohort, the IELSG score was ≥2 in 72% of patients, who thus mostly had high-risk disease. However, 73% of the patients could receive ASCT, and the same proportion of patients had long-term survival.
The two-year OS is 80% in patients with an IELSG score of 0-1 points, 48% in those with 2-3 points, and 15% in those with 4-5 points [8]. As the expected 2-year OS is less than 50% in patients with an IELSG score of 2 at the time of diagnosis, intensive treatments are required, especially for this high-risk group of patients. Furthermore, failure to achieve CR after HDMTX-based induction chemotherapy is associated with poor survival, requiring a more effective treatment strategy for these patients [10,19]. Consolidation with WBRT or ASCT should be considered, especially in high-risk patients, due to the low response rates of standard HDMTX-based induction chemotherapy alone [8,20]. In a recent randomized phase III study, no survival advantage was detected when WBRT was used as consolidation treatment [19]. Additionally, WBRT has been associated with neurotoxicity and a high relapse rate [21]. WBRT should be considered in refractory patients or patients who cannot tolerate high-dose chemotherapy. In our cohort, nearly half of the patients received WBRT for palliation or as part of a salvage regimen. We preferred ASCT initially for consolidation instead of WBRT to avoid neurotoxicity.
Since ASCT was first used in relapsed PCNSL in 1996, most centers have used it as a consolidation treatment in patients with PCNSL [22]. In a multicenter phase II study, 23 PCNSL patients younger than 65 years received HDMTX-based induction followed by ASCT with a carmustine-thiotepa conditioning regimen. Twenty-one patients received WBRT (45 Gy, two doses of 1 Gy/d) for consolidation after ASCT. With a median follow-up of 63 months, the 5-year estimated OS was 87%, and the 5-year probability of relapse-related death was 8.7% [23]. In another study, 13 patients with PCNSL received HDMTX-based induction followed by ASCT with a carmustine-thiotepa conditioning regimen. Radiotherapy was restricted to patients who did not achieve CR. With a median follow-up of 25 months, 3-year disease-free survival (DFS) and OS were 77% [24]. In a review, 2-year PFS was 69% (range 54%-81%), 2-year OS was 84% (range 83%-91%), and TRM was 3% for PCNSL patients undergoing upfront ASCT following regimens containing thiotepa and/or WBRT. In the same review, 2-year PFS was 44% (range 25%-62%), 2-year OS was 65% (range 60%-70%), and TRM was 4% in patients who underwent upfront ASCT following thiotepa-free regimens. Thiotepa-containing regimens were found to be associated with better PFS and OS when compared to thiotepa-free regimens. The addition of WBRT has not been shown to affect OS [15]. In the study conducted by Cho et al., 2-year OS was 93.3% in patients who underwent upfront ASCT and 72.9% in patients who underwent ASCT after salvage therapy, but this difference did not achieve statistical significance. However, PFS was significantly higher in patients consolidated with upfront ASCT than in those who underwent ASCT after salvage therapy (91.7% and 25.0%, respectively; P = 0.001) [25]. In our center, if patients were eligible for ASCT, upfront ASCT was performed instead of waiting for relapse. We observed that 62.5% of the PCNSL patients who underwent upfront ASCT survived during follow-up. In the same study [22], the median durations for neutrophil and platelet engraftment during ASCT were ten days (9-13 days) and 14 days (11-24 days), respectively. We mostly administered thiotepa-based conditioning, and the median neutrophil and platelet engraftment duration was 12 days in our study.
We administered HDMTX-based induction regimens, and we mostly preferred ASCT consolidation for transplantation-eligible patients. Although the IELSG score was mostly intermediate or high risk in our cohort, 73% of the patients had a chance for long-term survival during follow-up. One of our limitations was the small size of the cohort, which was due to the low incidence of PCNSL. The other limitation was the retrospective design of our study.
In conclusion, PCNSL is a rare subtype of NHL, and DLBCL is the most commonly seen histological subtype. There is no standard induction treatment, consolidation approach, or conditioning regimen.
Therefore, bone marrow transplantation centers need to report PCNSL patients' outcomes to contribute to the literature to determine the optimum induction treatment, consolidation, or conditioning regimen. | 2021-11-13T16:02:38.789Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a72aad8a8b981744319eaf8c0225cb23bcd9d23e",
"oa_license": null,
"oa_url": "https://doi.org/10.5505/aot.2021.09821",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f43b660a463ef9e7b954f9db8bafa22684e48134",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
12614978 | pes2o/s2orc | v3-fos-license | Applying black hole perturbation theory to numerically generated spacetimes
Nonspherical perturbation theory has been necessary to understand the meaning of radiation in spacetimes generated through fully nonlinear numerical relativity. Recently, perturbation techniques have been found to be successful for the time evolution of initial data found by nonlinear methods. Anticipating that such an approach will prove useful in a variety of problems, we give here both the practical steps, and a discussion of the underlying theory, for taking numerically generated data on an initial hypersurface as initial value data and extracting data that can be considered to be nonspherical perturbations.
I. INTRODUCTION
The formation of a black hole is, in principle, one of the most efficient mechanisms for generation of gravitational waves. Such sources tie together two major research initiatives. Laser interferometric gravity wave detectors [1] hold out a promise of the detection of gravitational waves from astrophysical events. To interpret the results of the gravitational wave signals, and to help find signals in the detector noise, a broad and detailed knowledge will be needed of astrophysical gravitational waveforms. This is one of the underlying motivations for the "grand challenge" [2] in high performance computing, aimed at computing the coalescence of black hole binaries.
Evolving numerical spacetimes and extracting outgoing radiation waveforms is indeed a challenge. In a straightforward numerical approach, a good estimate of the asymptotic waveform requires long numerical evolutions so that the emitted waves can be propagated far from the source. The necessary long evolutions are difficult for a number of reasons. General difficulties include throat stretching when black holes form, numerical instabilities associated with curvilinear coordinate systems, and the effects of outer boundary conditions which are approximate. [3] We suggest here that at least part of the cure for this problem may lie in the use of the theory and techniques of nonspherical perturbations of the Schwarzschild spacetime ("NPS"). By this we mean the techniques for treating spacetimes as deviations, first order in some smallness parameter, from the Schwarzschild spacetime. These techniques differ from "linearized theory" which treats perturbations of the spacetime from Minkowski spacetime and which cannot describe black holes. The basic ideas and methods were set down by many authors and lead to "wave equations" for the even parity [4] and odd parity [5] perturbations.
NPS has been used to compute outgoing radiation waveforms from a wide variety of black hole processes, including the scattering of waves [6], particles falling into a hole [7], and stellar collapse to form a hole [8]. The general scheme of NPS also underlies the techniques for extraction of radiation from numerically evolved spacetimes [9]. NPS computations have recently been used in conjunction with fully numerical evolution, as a code test [10] and as a strong-field radiation extraction procedure [3].
Here we are interested in another sort of application of NPS theory. To understand such applications we consider an example: two very relativistic neutron stars falling into each other, coalescing and forming a horizon, as depicted in Fig. 1. The curve "hypersurface" in Fig. 1 indicates a spacelike "initial" surface. The spacetime can be divided into three regions by this initial surface and the horizon. The early evolution, in region I, below the initial hypersurface, is highly dynamical and nonspherical. Spherical perturbation theory is clearly inapplicable. Above the initial surface the spacetime remains highly nonspherical in region II inside the event horizon, but outside the event horizon, in region III, it may be justified to consider the spacetime to be a perturbation of a Schwarzschild spacetime. This is essentially guaranteed if the initial hypersurface is chosen late enough, in some sense, after the formation of the horizon. The evolution in region III, then, is determined by Cauchy data on the initial hypersurface exterior to the horizon. It is important to note that this is made possible by the fact that the horizon is a causal boundary which shields the outer region from the dynamics of the highly nonspherical central region.
The scheme inherent in this division of spacetime has the potential to greatly increase the efficiency of the computation of the radiation generated when strong field sources form black holes. If one starts from the Cauchy data on the initial hypersurface, one can evolve forward in time with the linear equations of perturbation theory. Many of the long-time evolution problems of numerical relativity are avoided and the interpretation of the computed fields in terms of radiation is immediate.
The approach suggested would then seem to be: use numerical relativity up to the initial hypersurface; use the techniques of nonspherical perturbations in the future of the initial hypersurface. In fact, the efficiency that can be achieved may be even greater. In the early, highly nonspherical, pre-initial-hypersurface phase of the development of the spacetime, there may be relatively little generation of gravitational radiation. By using a computational technique which suppresses the radiative degrees of freedom one may be able to compute the early stages of evolution relatively easily. There are two very recent examples of just such applications of this viewpoint. Price and Pullin [11] used as initial data Misner's [12] solution to the initial value equations for two momentarily stationary black holes. Abrahams and Cook [13] considered two holes moving towards each other, and used numerical solutions of the initial value equations. In neither case was there any use of fully nonlinear numerical evolution. The rather remarkable success of both computations suggests that there is something robust about the underlying idea of separating horizon-forming astrophysical scenarios into an early phase with no radiation and a late phase with small deviations from sphericity outside the horizon. It is plausible that the bulk of the radiation in most processes is generated only in the very strong-field interactions around the time of horizon formation and that radiation generation in the early dynamics can be ignored. One would, however, think that strong radiation would be emitted during the stages at which the early horizon is very nonspherical, at which time nonspherical perturbation theory would seem to be inapplicable. There should be a tendency for this "early" radiation, produced very close to the horizon, to go inward into the developing black hole, so that the application of nonspherical perturbation theory to the exterior really requires only that, on the initial surface, the perturbations be small well outside the horizon. It would seem that something of this sort would have to be happening to explain the accuracy of the Price-Pullin and Abrahams-Cook results.
Whether or not many problems can be treated with no use of fully numerical evolution, it appears clear to us that these perturbation methods will be applied to a variety of problems in which data on the initial hypersurface is available numerically. The primary purpose of this paper is to provide justification and background for earlier work on this subject and a clear recipe for future applications. In the next section we discuss the meaning, and limitations, of extracting a "perturbation" from this numerical data and computing radiated energies. The explicit process of extracting the perturbations from the numerical data is given in Sec. III. In Sec. IV we demonstrate the use of this procedure via application to a specific example, the Misner initial data.
II. INITIAL DATA AS SCHWARZSCHILD PERTURBATIONS
We outline here the formalism for perturbation theory based on work by Regge and Wheeler [5] and by Zerilli [4], but we will draw heavily on the gauge invariant reformulation of those earlier works by Moncrief [14]. Our starting point is an initial hypersurface which can be taken as a surface of constant Schwarzschild time. We assume that the coordinates x i on that surface are almost Schwarzschild coordinates r, θ, φ and we assume that the values are known, on this hypersurface and in these coordinates, for the 3-metric γ ij and the extrinsic curvature K ij . The conditions for finding such a hypersurface and such coordinates will be made explicit in Sec. III.
Underlying perturbation theory is the idea of a family of metric functions g_µν(x^α; ε), depending on the parameter ε, which satisfy the Einstein equations for all ε, and which, in the limit ε → 0, become the Schwarzschild metric functions, such as g_rr = S^{-1}. (Here S ≡ 1 − 2M/r and M is the mass of the Schwarzschild spacetime; we use units throughout in which c = G = 1.) NPS theory amounts to the first-order approximation given in Eq. (1).
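The displayed equation labelled (1) is not reproduced in this extraction. Given the text's description of NPS as a first-order deviation from Schwarzschild, it presumably has the standard form of a truncated Taylor expansion in ε; the LaTeX block below is a hedged reconstruction under that assumption, not a verbatim quotation of the paper.

```latex
% Presumed form of the first-order NPS approximation (Eq. (1));
% reconstructed from the surrounding text, not quoted from the paper.
g_{\mu\nu}(x^{\alpha};\epsilon) \;\approx\;
  g_{\mu\nu}(x^{\alpha};0)
  \;+\; \epsilon \left.\frac{\partial g_{\mu\nu}(x^{\alpha};\epsilon)}
                            {\partial \epsilon}\right|_{\epsilon=0}
```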
A. Choice of expansion parameter
It is of some practical importance to realize that the choice of the expansion parameter can have a considerable effect on the range over which perturbation theory gives a good approximation. Let us imagine that we introduce a new parameter ε′ which is a function of ε such that dε′/dε approaches unity as ε → 0. If we take ε′ to be the basis of our perturbation approach, the approximation becomes that of Eq. (2). At ε = 0 the derivatives of g_µν with respect to ε and with respect to ε′ have the same values, so for a given spacetime (that is, for a given value of ε) the nonspherical perturbation in (2) differs from that in (1) by the factor ε′/ε. Computed energies (which are quadratic in the nonspherical perturbations) will differ by the square of this ratio. Different choices of parameterization will change this factor and affect the accuracy of the linearized approximation.
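To make the parameterization dependence concrete, the short script below compares linearized energy estimates for a family of hypothetical reparameterizations ε′ = ε(1 + kε), all of which satisfy dε′/dε → 1 as ε → 0. The functional form and the numbers are illustrative only and are not taken from the paper.

```python
import numpy as np

def energy_ratio(eps, k):
    """Ratio of energies computed with expansion parameter eps' = eps*(1 + k*eps)
    versus eps.  Energies are quadratic in the perturbation amplitude, so the
    ratio is (eps'/eps)**2 = (1 + k*eps)**2."""
    return (1.0 + k * eps) ** 2

for eps in (0.01, 0.1, 0.5, 1.0):
    ratios = {k: round(energy_ratio(eps, k), 3) for k in (-0.5, 0.0, 0.5)}
    print(f"eps = {eps:4.2f}:", ratios)

# For small eps all parameterizations agree; as eps approaches unity the
# computed energies differ by factors of order one, mirroring the spread
# among the curves in Fig. 2.
```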
To show the effects of this parameterization dependence, we take as an example Misner data [11,12] for two holes. The initial separation of the holes, in units of the mass of the spacetime, is described by Misner's parameter µ_0. The metric perturbations, however, are not analytic in µ_0 as µ_0 → 0, so µ_0 cannot be used as the expansion parameter in (1). The actual expansion parameter used by Price and Pullin was a function of µ_0 denoted κ_2. We consider here what would be the results of perturbation theory done with the expansion parameter defined in Eq. (3). Figure 2 shows the results, along with the energies computed by numerical relativity applied to full nonlinear evolution [15]. For all choices of k the agreement between perturbation theory and numerical relativity is good at sufficiently small initial separation (sufficiently small µ_0), but as µ_0 grows larger, the agreement increasingly depends on which parameterization is used. The k = 0 parameterization, the parameter of the Price-Pullin paper, is a reasonably good approximation even up to separations (µ_0 > 1.36) for which the initial apparent horizon consists of two disjoint parts. For positive values of k the agreement is less impressive, while for k = −4, it appears that perturbation theory is giving excellent answers for initial data that are very nonspherical. Clearly the k = −4 parameterization is "better," at least for the purpose of computing radiated energy. There exist yet better choices; in principle a parameterization could be found for which the energy computed by linearized theory is perfect for any initial separation. The crucial point is that we have no a priori way of choosing what is and what is not a good parameterization. The choice of expansion parameter κ_2 was made in the Price-Pullin analysis because it occurred naturally in the mathematical expressions for the initial geometry. There was no a priori reason for believing it to be a particularly good, or particularly bad, parameterization. This point will be discussed again in connection with numerical results presented in Sec. IV.
The fact, demonstrated in Fig. 2, that the results of linear perturbation theory are arbitrary may seem to suggest that perturbation answers, from a formal expansion or numerical initial data, are of little value. It should be realized, however, that the arbitrariness exhibited in Fig. 2 is simply a demonstration of the fact that linearized perturbation results are uncertain to second order in the expansion parameter. The fact that the results for different parameterizations start to differ from each other around µ_0 ≈ 1.5 simply signals that κ_2 is around unity. (In fact, κ_2 ≈ 0.24 for µ_0 = 1.2.) Higher order uncertainty is an unavoidable feature in the range where the expansion parameter is of order unity. But there is a potential misunderstanding about the meaning of "expansion parameter around unity." To see this, consider a change to a new expansion parameter ε = 10⁻⁴ κ_2. The new expansion parameter ε is of order unity for µ_0 ≈ 7, yet we know that perturbation theory fails dramatically for such a large value of µ_0. The issue here is that we need some way of ascribing an appropriate "normalization" to the expansion parameter. A sign that the normalization is good is that physically based measures of distortion start getting large for ε around unity. If we had reliable measures of this type then we could have some confidence about the range of the expansion parameter for which we could neglect second order uncertainty, whether due to parameter arbitrariness or the omission of higher order terms in the calculation. One can formulate interesting measures for the normalization of the expansion parameter, such as the extent to which the linearized initial conditions violate the exact Hamiltonian constraint [16]. Most such measures are useful only for finding a very rough normalization for κ_2 (equivalently, for roughly finding the range in which linearized perturbation theory is reliable). The only reliable procedure is to carry out computations of radiated waveforms and energy to second order in the expansion parameter. The ratio of second order corrections to first order results gives the only direct measure of the reliability of perturbation results. If one computes an energy for which the second order correction to the first order result is 10%, then one knows that the third order correction (due to a change in parameterization or an inclusion of third order terms in the computation) will be on the order of 1%.
B. Treating nonlinear initial data as a perturbation expansion
We turn now to the central question of this paper: How does one apply perturbation theory to numerically generated initial data? To do this we consider our numerical initial data to be initial data for a solution in a parameterized family g_µν(x^α; ε) corresponding to ε = ε_num. The application of perturbation theory is equivalent to replacing g_µν(x^α; ε_num) by the approximation of Eq. (4). An added, familiar complication is that we can introduce a family of coordinate transformations. Such a transformation takes the original family to a new family g′_µν(x^α′; ε), which satisfies the same requirements as the original family. We follow Moncrief [14] in constructing, from the 3-metric γ_ij on constant-t surfaces, quantities q_i which are invariant to first order in ε ("gauge invariant") under coordinate transformations. The construction of these Moncrief q_i is done in two steps. First, the multipole moments of the metric are extracted. In practice this is done by multiplying the metric functions by certain angular factors and integrating over angles. Since we are only interested in quadrupole and higher order for radiation, this step also eliminates the spherically symmetric background parts of the metric functions. The second step is to form linear combinations of these multipoles and of their derivatives with respect to radius. We symbolically represent the process of forming these quantities as in Eq. (5). Here the symbol "Q_i" represents the process of multiplying by angular functions and integrating, then multiplying by certain functions of r and taking linear combinations of the results. (Our notation here disagrees with that of Moncrief [14] in a potentially confusing way. Moncrief's perturbation quantities are independent of the size of ε. In order to have definitions that can be applied to numerical data we use quantities that - to first order - are proportional to ε.) The Moncrief gauge invariants play two different roles. For even parity one of the gauge invariants, q_2, is a constraint; it vanishes in linearized theory as a result of the initial value equations. In linearized theory, the remaining Moncrief quantities, denoted q_1 here, satisfy wave equations L(q_1) = 0: the Regge-Wheeler equation in odd parity and the Zerilli equation in even parity.
From our numerical data we construct the quantities q_i precisely according to (5). Our numerically constructed "perturbation" quantities will not be invariant under coordinate transformations, but rather will transform as q′_i = q_i + O(ε²_num). Similarly, the linearized constraint q_2 will not vanish, but will be of order ε²_num. The numerically constructed wavefunctions q_1 will satisfy L(q_1) = O(ε²_num), where L is the Regge-Wheeler or Zerilli wave operator.
The use of NPS methods is equivalent to ignoring the second order terms in the wave equations. The wavefunction q_1 can then be propagated forward from the initial hypersurface and the radiation waveforms extracted from it. To evolve q_1 off the initial hypersurface, however, requires the initial time derivative ∂q_1/∂t. This can be computed from the initial extrinsic curvature, but some care is needed. Indeed, the possible ambiguities that arise here are the justification for the somewhat protracted discussion in this section.
If n is the future-directed unit normal to the initial hypersurface, then the rate at which the 3-metric is changing is given by Eq. (6), where K_ij is the extrinsic curvature and L_n is the Lie derivative along the unit normal. The unit normal is related to the derivative with respect to Schwarzschild time by ∂/∂t = S^{1/2} n. The time derivative of the Moncrief function can then be written as in Eq. (7). To evaluate the right hand side we need to know how Q_1 changes when it is Lie dragged by n. Since Q_1 depends only on γ_ij, it might appear that one need only Lie drag γ_ij to find the change in Q_1, and that L_n Q_1 = Q_1(L_n γ_ij, ∂L_n γ_ij/∂r). From this it would follow that ∂q_1/∂t = −2S^{1/2} Q_1(K_ij, ∂K_ij/∂r). It is important to note that this is not the correct relationship between K_ij and the Cauchy data for the wave equation. The fallacy in this procedure lies in the fact that q_1 must be computed from the 3-metric on a slice on which Schwarzschild time is constant (to first order in ε_num). Lie dragging by n moves the 3-metric to a surface that is not (to first order) a constant time surface. The cure is clearly to compare quantities on surfaces of constant t by using L_t ≡ S^{1/2} L_n. It is the Schwarzschild time derivative that commutes with the Schwarzschild radial derivative: L_t(∂/∂r)^a = 0. The correct prescription then follows from Eq. (8). We note that the perturbed Schwarzschild metric does have a shift vector β^i of order ε, and in principle the shift vector influences the time development of γ_ij according to ∂_t γ_ij = ∂_t′ γ_ij + 2∇_(i β_j), where t′ is a time coordinate in which the shift vector vanishes. But the shift vector can be considered to be "pure gauge." It is necessary if one wants a complete specification of the coordinates and the metric components, but its value is a matter of choice and is not needed for a complete specification of the physics. The initial value, and evolution, of the gauge invariant quantity q_1 is invariant with respect to the choice of β^i, and q_1 carries all the (physically meaningful) information about gravitational waves.
The evaluation of q_1 from (5) and ∂q_1/∂t from (8) completes the extraction, from the numerical data for γ_ij and K_ij, of the Cauchy data for the Regge-Wheeler or Zerilli wave equation. An alternative procedure arises if one uses the scalar wave equations derived from the perturbative reduction of the nonlinear wave equation for the extrinsic curvature which arises in a new explicitly hyperbolic form of the Einstein equations [17]. In this system, the scalar wave equations are one order lower in time derivatives than the usual Regge-Wheeler and Zerilli equations, so the Cauchy data consist of the extrinsic curvature and its time derivative (which involves the 3-dimensional Ricci curvature).
From the above it is clear that linearized evolution should give good accuracy when applied to numerically generated initial data with sufficiently small deviations from sphericity. For initial data which are known in analytic form one can, of course, apply linearized theory even to cases in which initial deviations from sphericity are only marginally small. The results in Fig. 2, for example, show that the results of such application of perturbation theory give reasonable accuracy for values of µ 0 at which an initial horizon is highly distorted. It is worrisome to apply linearized evolution to marginally nonspherical initial data, which do not, for example, satisfy the constraint q 2 = 0 with reasonable accuracy. Such a procedure -linear evolution of nonlinear initial data -has, among other disadvantages, no clear theoretical framework.
C. Calculating radiated energy by "forced linearization"
We wish to point out here that NPS methods can be used more broadly, and a procedure we call "forced linearization" can be applied to numerically generated initial data in a way that amounts to extracting the linearized part of the data and evolving it linearly. This procedure circumvents the difficulty of performing formal linearization on data which are known only numerically. We imagine that we start with an initial value problem in which there is some adjustable parameter, call it µ, such that µ = 0 corresponds to the Schwarzschild initial data. There is no requirement that the family of solutions g_µν(x^α; µ) be analytic in µ as µ → 0. There may be additional parameters, call them p_i, such as the parameters governing the initial momenta of the holes. To apply forced linearization we fix the values of the p_i and make a choice of µ such that the computed initial data γ^vns_ij, K^vns_ij are "very nearly spherical." One criterion for this would be that q_2 is very small. We then interpret this initial data as being essentially linearized data, to which the approximation in (4) applies. We extract multipoles, form a gauge invariant wave function q_1, and evolve it with the Zerilli or Regge-Wheeler equation, all as described above. The result of this will be a late-time waveform q^vns_1(r, t) and the energy E^vns that it carries. The next step is to characterize the results with a well behaved gauge invariant parameter. To do this we choose some fiducial radius r_fid and evaluate ε^vns ≡ q_1(r_fid, t = 0), the gauge invariant wave function of the initial hypersurface at this radius.
Next, we leave the p_i unchanged, but choose a larger value of µ for which the numerically generated initial data set γ^mrgnl_ij, K^mrgnl_ij is "marginal" in that it corresponds to deviations from sphericity large enough that it differs significantly from linearized initial conditions; one sign of this would be that the condition q_2 = 0 is significantly violated. For this data set we go through the same procedure as above in characterizing the data set by a parameter ε^mrgnl ≡ q_1(r_fid, t = 0). For this marginally spherical initial data we take the solution for the wavefunction and energy to be the appropriately scaled very nearly spherical results. The idea underlying this method is that the very nearly spherical data give us the solution for ∂g_µν/∂ε|_{ε=0}. For the marginal initial data set we then need only multiply this solution by the appropriate factor telling us how much larger the linear part of the nonsphericity is than that of the very nearly spherical initial data. The success of forced linearization then requires that ε evaluated at r_fid be a well behaved parameterization of the linearized part of the nonsphericity in the numerical data. Since our expansion parameter ε is the magnitude of the perturbation, it will be a good expansion parameter as long as it is evaluated in a region where the nonlinear deviations from sphericity are small, i.e., where (4) is a good approximation. For this reason it is important that r_fid be chosen fairly large. For processes of the type pictured in Fig. 1, the deviations from sphericity fall off quickly in radius, so that at large enough r one can be certain that the initial data are an excellent approximation to linearized data. Evidence for this is that the violations of the q_2 = 0 constraint are always confined to small radii. One easily implemented check on the forced linearization procedure is to look at the factor ε^mrgnl/ε^vns and confirm that it is independent of r for r > r_fid. In Sec. IV we show that this test is easily passed by a numerical example, and that the results of forced linearization are essentially the same as those of formal linearized theory.
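As a concrete illustration of the scaling step in forced linearization, the sketch below assumes one already has the evolved very-nearly-spherical waveform and energy, together with the gauge invariant wave function evaluated at t = 0 on a radial grid for both data sets; the function and variable names are hypothetical, and only the scaling logic described above is implemented.

```python
import numpy as np

def forced_linearization(q1_vns_waveform, E_vns,
                         q1_vns_t0, q1_mrgnl_t0, r_grid, r_fid):
    """Scale the very-nearly-spherical (vns) results up to the marginal data set.

    q1_vns_waveform : late-time waveform q1(r, t) from evolving the vns data
    E_vns           : radiated energy carried by that waveform
    q1_vns_t0, q1_mrgnl_t0 : gauge invariant wave functions at t = 0 on r_grid
    r_fid           : fiducial radius at which the amplitude ratio is read off
    """
    i_fid = np.argmin(np.abs(r_grid - r_fid))
    ratio = q1_mrgnl_t0[i_fid] / q1_vns_t0[i_fid]      # eps_mrgnl / eps_vns

    # Consistency check: the ratio should be (nearly) independent of r
    # for r > r_fid if the data are linear in the exterior region.
    outer = r_grid > r_fid
    spread = np.ptp(q1_mrgnl_t0[outer] / q1_vns_t0[outer])
    if spread > 0.01 * abs(ratio):
        print("warning: amplitude ratio varies with r beyond r_fid; "
              "consider a larger r_fid")

    q1_mrgnl_waveform = ratio * q1_vns_waveform         # first-order scaling
    E_mrgnl = ratio**2 * E_vns                          # energy is quadratic
    return q1_mrgnl_waveform, E_mrgnl
```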
III. EXTRACTION OF PERTURBATIONS FROM NUMERICAL DATA
Here we assume that the reader has numerical solutions for the 3-metric on an approximately t = const surface. The first step in applying NPS to numerical results is to transform to coordinates which are "almost Schwarzschild" coordinates. It is assumed that the numerical γ_ij and K_ij are expressed in a coordinate system R, θ, φ in which the approximate spherical symmetry is manifest. This means that K_ij and certain ratios formed from the metric components must be small. They all are, in fact, formally of order ε_num, so if they are not all reasonably small compared to unity there is little reason to think that NPS will work. A Schwarzschild-like areal radial coordinate r needs to be introduced. This can be defined as a function of R by an integral taken over a surface of constant R. The metric component γ_rr, in terms of this quantity, gives us another test of how close the geometry is to that of a constant time Schwarzschild slice: the resulting quantity should be nearly equal to the constant 2M, where M is the mass of the spacetime. The variability of this quantity in r, θ, and φ is formally of order ε_num. There are, of course, other ways of specifying the Schwarzschild-like coordinates. We could, for example, have defined r² ≡ γ_θθ. All these coordinate choices, however, should agree to order ε_num and are therefore equivalent within a linearized gauge transformation.
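One way to construct the Schwarzschild-like areal radial coordinate numerically is sketched below: the proper area of each constant-R surface is obtained by integrating the angular part of the 3-metric over the sphere, and r is defined so that the area equals 4πr². This is the standard areal-radius construction offered as an illustration; the grid layout and function names are hypothetical, and the paper's own defining integral and 2M test may differ in detail.

```python
import numpy as np

def areal_radius(gamma_thth, gamma_phph, gamma_thph, theta, phi):
    """Areal radius of one constant-R surface.

    gamma_thth, gamma_phph, gamma_thph : angular metric components sampled
        on a (theta, phi) grid covering the constant-R surface.
    Returns r such that 4*pi*r**2 equals the proper area of the surface.
    """
    # Area element of the induced angular 2-metric on the surface.
    dA = np.sqrt(gamma_thth * gamma_phph - gamma_thph**2)
    area = np.trapz(np.trapz(dA, phi, axis=1), theta)
    return np.sqrt(area / (4.0 * np.pi))

def schwarzschild_slice_test(r, gamma_rr):
    """Closeness test: for a constant-time Schwarzschild slice
    gamma_rr = (1 - 2M/r)**(-1), so r*(1 - 1/gamma_rr) should be nearly
    constant and equal to 2M; its variability is formally of order eps_num."""
    return r * (1.0 - 1.0 / gamma_rr)
```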
To compute the gauge invariant perturbation functions, we first assume that an ℓm multipole of the 3-metric may be expanded as in Eq. (12), where, for clarity, we have suppressed multipole indices and have replaced Moncrief's h_1 and h_2 odd parity perturbation functions with c_1, c_2. The multipole moments c_1, c_2, h_1, H_2, K, and G are computed by projection onto the relevant spherical harmonics, which can be found in Moncrief [14]. Explicit formulas for the important special case of even parity, axisymmetric perturbations may be found in Ref. [10]. For odd parity perturbations, one function can be constructed from the amplitudes c_1 and c_2 which is gauge invariant and satisfies the Regge-Wheeler equation given below. The situation for even parity perturbations is more complicated. Two gauge invariant functions, k_1 and k_2, may be formed out of the multipole moments. From k_1 and k_2 it is possible to form two new functions, one of which is radiative and one of which is equivalent to the perturbed Hamiltonian constraint. The scaled function, with Λ ≡ (ℓ − 1)(ℓ + 2) + 6M/r, satisfies the Zerilli equation given below. The time derivatives of the radiative gauge invariant functions Q×_ℓm and Q+_ℓm are found by substituting (1 − 2M/r)^{1/2} K_ij for γ_ij in the multipole moment computation and forming the same combinations of moments.
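A minimal numerical sketch of the projection step for the axisymmetric (m = 0), even parity case is given below: an ℓ-pole moment of a metric function is obtained by integrating it against the corresponding Legendre polynomial over the sphere. The normalization convention, the sample function, and the function names are illustrative assumptions rather than Moncrief's exact definitions, which should be taken from Ref. [14].

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moment(f_of_theta, theta, ell):
    """Project an axisymmetric function f(theta), sampled on a theta grid,
    onto the Legendre polynomial P_ell(cos theta).

    Uses the orthogonality relation
        int_0^pi P_l(cos t) P_l'(cos t) sin t dt = 2/(2l+1) delta_ll',
    so the returned moment a_ell satisfies
        f(theta) = sum_ell a_ell P_ell(cos theta)   (for smooth f).
    """
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0
    P_ell = legendre.legval(np.cos(theta), coeffs)
    integrand = f_of_theta * P_ell * np.sin(theta)
    return (2 * ell + 1) / 2.0 * np.trapz(integrand, theta)

# Example: the ell = 2 moment of a weakly distorted metric function.
theta = np.linspace(0.0, np.pi, 401)
metric_component = 1.0 + 0.05 * legendre.legval(np.cos(theta), [0, 0, 1])
print(legendre_moment(metric_component, theta, ell=2))   # ~0.05
```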
The wavefunctions Q×_ℓm and Q+_ℓm obey the Regge-Wheeler and Zerilli wave equations, respectively. The wave operator appropriate to the Schwarzschild spacetime is written in terms of the "tortoise coordinate" r* = r + 2M ln(r/2M − 1), and the potentials appearing in the two equations are the Regge-Wheeler and Zerilli potentials. Once the Zerilli and Regge-Wheeler equations are integrated for all the desired ℓ and m modes, the total radiated energy can be calculated from the asymptotic time series for Q+_ℓm and Q×_ℓm.
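The following sketch illustrates the kind of 1+1 evolution involved: a wavefunction Q(r*, t) is advanced with a standard second-order finite-difference scheme for ∂²Q/∂t² = ∂²Q/∂r*² − V(r)Q, using the odd-parity Regge-Wheeler potential V = (1 − 2M/r)[ℓ(ℓ+1)/r² − 6M/r³] as a concrete example. The Gaussian initial data, the discretization, the boundary handling, and the unnormalized energy estimate are illustrative assumptions; in practice one would use the extracted gauge invariant data, the paper's Zerilli potential and equation (20), and its energy formula (24).

```python
import numpy as np

M, ell = 1.0, 2
rstar = np.linspace(-20.0, 500.0, 5201)            # tortoise-coordinate grid
dr = rstar[1] - rstar[0]
dt = 0.5 * dr                                       # Courant-stable time step

def r_of_rstar(rs):
    """Invert r* = r + 2M ln(r/2M - 1) by Newton iteration (vectorized)."""
    r = np.where(rs > 0, rs + 2*M, 2*M * (1.0 + np.exp(rs / (2*M) - 1.0)))
    for _ in range(50):
        f = r + 2*M*np.log(r/(2*M) - 1.0) - rs
        r = r - f * (r - 2*M) / r                   # f'(r) = r/(r - 2M)
    return r

r = r_of_rstar(rstar)
V = (1 - 2*M/r) * (ell*(ell + 1)/r**2 - 6*M/r**3)   # Regge-Wheeler potential

# Time-symmetric initial pulse (dQ/dt = 0), standing in for the gauge
# invariant wave function extracted from numerical initial data.
Q_old = np.exp(-((rstar - 50.0) / 5.0)**2)
Q = Q_old.copy()                                    # simple first leapfrog step
i_obs = np.argmin(np.abs(r - 100.0*M))              # "observer" at r = 100M
energy_integrand = []

for n in range(4000):
    Q_new = np.zeros_like(Q)                        # crude reflecting boundaries
    Q_new[1:-1] = (2*Q[1:-1] - Q_old[1:-1]
                   + dt**2 * ((Q[2:] - 2*Q[1:-1] + Q[:-2]) / dr**2
                              - V[1:-1] * Q[1:-1]))
    dQdt = (Q_new[i_obs] - Q_old[i_obs]) / (2*dt)
    energy_integrand.append(dQdt**2)
    Q_old, Q = Q, Q_new

# The radiated energy is proportional to the time integral of (dQ/dt)^2 at
# large radius; the proportionality constant depends on the normalization
# conventions of the wavefunction.
print("unnormalized energy:", np.trapz(energy_integrand, dx=dt))
```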
IV. EXAMPLE OF PERTURBATION EXTRACTION
In this section we demonstrate the extraction of a perturbation from a numerical solution to the nonlinear constraint equations - the Misner data representing two black holes at a moment of time symmetry. The Misner 3-geometry may be written in the conformally Schwarzschild form of Eq. (25) [11], with the conformal factor Φ given by Eqs. (26) and (27). For this exercise, we pretend that the initial geometry is known only numerically, so no explicit formal linearization can be done. The odd parity perturbations vanish in the Misner solution. We compute the even parity gauge invariant wavefunction for ℓ = 2 using numerical evaluations of (26)-(27). Specifically, we compute the moments K and H_2 of (12); all the other moments in (12) vanish for the conformally Schwarzschild metric of (25). The function Q+_20 is evaluated at values of r corresponding to the range r* = −20M to r* = 50M. The initial value of Q+_20 (along with its time derivative, which is zero for the Misner time-symmetric initial data) provides initial values for integration of (20). At large radius, r = 100M, the value of ∂Q+_20/∂t is used in (24) to compute the radiated energy. First, in Fig. 3 we show the result of directly computing the gauge invariant function Q+_20 from the nonlinear initial data, integrating the Zerilli equation, and computing the radiated energy. For small values of µ_0 the agreement with the explicitly linearized data of Ref. [11] is excellent. At about µ_0 ≃ 1.2 the agreement breaks down and the qualitative behavior becomes dramatically different. It is interesting to note that the apparent horizon encompassing both black holes does not exist for µ_0 > 1.36, close to the dramatic reversal in the energy curve.
In Fig. 4 the violation of the linearized constraint by the nonlinear data is shown as a function of radius. We plot the ratio of the constrained gauge invariant function q_2 to the radiative function q_1, scaled in such a way as to compensate for the large violation at r = 2M. The value of q_2 clearly grows much faster than the radiative variable q_1 as the separation is increased.
As discussed in Sec. II, it is possible to obtain the results of formal perturbation theory directly from the numerical data without ever making reference to the analytic solution. In Fig. 5 we demonstrate the application of the forced linearization procedure to the nonlinear Misner data for various values of the fiducial radius r fid . For very small values of µ 0 , such as µ 0 = 0.5, the geometry outside the event horizon is everywhere well approximated by (4) and forced linearization works even for small values of r fid /M. When µ 0 is larger than around 1.5, on the other hand, the initial geometry near the horizon contains significant nonlinear effects, and large values of r fid /M must be used to get results equivalent to those of formal linearized theory.
As r fid gets large, the results become indistinguishable from those of formal perturbation theory reported in Ref. [11]. For r fid = 30M the difference in radiated energy for µ 0 = 3.0 is less than 10 −3 %. This high-accuracy equivalence deserves some explanation. In particular, why is forced linearization equivalent to formal linearization with expansion parameter κ 2 ? Why is that expansion parameter singled out? The equivalence is a result of two features of the way in which the linearizations were done: First, both the formal linearization of Ref. [11], and the forced linearization results in Fig. 5, use precisely the same coordinates. (The forced linearization results, in fact, are not based on initial values that were generated by genuinely numerical means. Rather, the closed form solutions for the Misner metric functions were used. The "almost-Schwarzschild" coordinates of the forced linearization, were precisely the same as the "almost-Schwarzschild" coordinates in Ref. [11]). Secondly, in the "almost-Schwarzschild" coordinate system, the parameter κ 2 is, to all perturbation orders, the coefficient of the dominant nonsphericity at large radius. Forced linearization (in the limit of large r fid ) results in a parameterization based on a gauge invariant measure of nonsphericity at large radius. It therefore must be proportional to κ 2 and produce results equivalent to those of the formal linearization of Ref. [11], in which κ 2 was the expansion parameter.
It should be understood that this does not imply that the parameter κ 2 is physically singled out. A first order change in the "almost-Schwarzschild" coordinates will change the coefficient of the dominant large-radius nonsphericity. We might, for example, transform from the "almost-Schwarzschild" radial coordinate r of (25) to a new coordinate r ′ ≡ r[1 + κ 2 P 2 (cos θ)]. In this case the coefficient of the leading large r ′ term in the metric will be κ 2 + O(κ 2 2 ), and the results of forced linearization with the resulting "numerical" data will differ, when perturbations are large, from the results in Ref. [11]. The forced linearization will have induced an expansion parameter different from κ 2 .
AMA was supported by National Science Foundation grant PHY 93-18152/ASC 93-18152 (ARPA supplemented). RHP was supported by the National Science Foundation under grants PHY9207225 and PHY9507719.
FIG. 1. The "legs of the trousers" represent the world tubes of two compact objects before coalescence; region I cannot be considered to be nearly spherical. The objects coalesce in region II, which is also highly nonspherical, but lies inside a horizon. Region III, above the hypersurface and outside the horizon, can be treated as a nearly spherical spacetime. | 2014-10-01T00:00:00.000Z | 1995-08-29T00:00:00.000 | {
"year": 1995,
"sha1": "778d9105fff8a87998bdd66de75e620f135cfd65",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/9508059",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ca0780a5a762cdddee755c75feb42052e2f52b92",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
267454789 | pes2o/s2orc | v3-fos-license | Korean Nationwide Exploration of Sarcopenia Prevalence and Risk Factors in Late Middle-Aged Women
This study examined specific clinical risk factors for age-related loss of skeletal muscle mass in late middle-aged women with sarcopenia. This Korean nationwide cross-sectional study analyzed data from 2814 community-dwelling women aged 50 to 64 years and screened them for sarcopenia. The risk factors examined included age, height, weight, body mass index, waist circumference, skeletal muscle mass index, systolic and diastolic blood pressure, smoking and drinking habits, fasting glucose level, and triglyceride and cholesterol levels. Complex sampling analysis was used for the data set. The weighted prevalence of sarcopenia was 13.43% (95% confidence interval: 12.15–15.78). The risk factors for sarcopenia were height, body mass index, waist circumference, skeletal muscle mass index, systolic blood pressure, diastolic blood pressure, triglyceride level, and total cholesterol level (p < 0.05). Weight, fasting glucose level, drinking status, and smoking status were not significant (p > 0.05). These results are expected to contribute to the existing literature on sarcopenia and to identify potential risk factors associated with the development of sarcopenia in late middle-aged females. Awareness of this prevalence and of the recognized risk factors may help healthcare professionals recognize potential cases of sarcopenia in female patients.
Introduction
Sarcopenia is identified via age-related declines in skeletal muscle mass, leading to progressive and generalized skeletal muscle disorder [1]. Although various studies have indicated that hormonal changes, immobility, age-related changes in muscle composition, nutritional factors, and neurodegenerative processes contribute to its development, the precise mechanism of sarcopenia is not yet fully understood. Importantly, sarcopenia is particularly common among individuals aged 65 years and above [2].
The elderly population in Asia is rapidly growing, and Korea is among the countries with the highest rates of population aging worldwide. As of 2021, approximately 16.5% of the Korean population was aged 65 years or older, and this percentage is expected to increase to approximately 40% by 2050 [3]. Sarcopenia is more prevalent in Korea and Asia than in other countries.
Several studies have reported a higher prevalence of sarcopenia in females compared to males. In a screen of 10,063 individuals, Dam et al. reported a prevalence of 11.80% in females and 5.10% in males [4]. Similarly, Hunt et al. examined a community-dwelling older population of about two thousand Japanese individuals and found a sarcopenia prevalence of 16.56% in females and 10.34% in males [5]. A significant proportion of the elderly population in Korea, particularly women, is vulnerable to sarcopenia.
However, compared with the extensive research on sarcopenia in males, the early detection of sarcopenia in females remains a challenging task [6][7][8][9]. Healthcare professionals, encompassing physical therapists and primary care clinicians, face difficulties in diagnosing sarcopenia owing to their limited knowledge and diagnostic tools, despite the potential negative consequences associated with sarcopenia and the growing population of elderly females. In primary clinical settings, clinicians must assess the likelihood of sarcopenia before considering referrals for diagnosis and treatment. Moreover, the lack of awareness among clinicians regarding sarcopenia as a distinct disease increases the risk of missed diagnoses [10]. Therefore, it is crucial for healthcare professionals and primary clinicians to understand the characteristics of key risk factors associated with early detection and prevention to effectively address this challenge [11]. Prompt identification and early detection of individuals manifesting symptoms indicative of sarcopenia are essential to ensure timely diagnosis and intervention. Failure to diagnose the condition promptly can result in complications. However, the majority of studies on sarcopenia have primarily focused on individuals aged 65 years and older [12][13][14][15], even though age-related muscle loss can begin as early as the 50s [16][17][18][19][20]. It is crucial to identify the risk factors for muscle loss at an earlier stage to effectively prevent and treat this condition. Therefore, this study aimed to investigate the prevalence of sarcopenia and its associated risk factors in women aged 50 to 64 years. We hypothesized that this age group would have specific risk factors and prevalence rates that would differ from those observed in older individuals.
Study Population
The Korean National Health and Nutrition Examination Survey (KNHANES) was designed to investigate the health and nutritional status of non-institutionalized individuals in South Korea. The program was conducted by the Disease Control and Prevention Center. The KNHANES experimental procedures were approved by the Disease Control and Prevention Center Ethics Review Board, and all participants signed a written informed consent form. The ethics review board screened for ethical considerations involving human subjects, reviewing aspects such as the research plan, the informed consent process, participant safety, potential conflicts of interest, and the protection of personal information. The KNHANES IRB ensured that the research was lawful and ethical and safeguarded the rights and safety of participants.
For eligibility in the sarcopenia group, participants had to satisfy three criteria: (1) be female, (2) be aged 50 to 64 years, and (3) meet the diagnostic criteria for sarcopenia. Conversely, the normal group comprised women participants meeting the following criterion: (1) being aged between 50 and 64 years. Exclusions encompassed (1) pregnant individuals and (2) those who had undergone a diagnostic procedure involving contrast agent use in the week preceding the survey.
A total of 37,753 participants took part in the 2008-2011 KNHANES survey. Of these, 34,123 individuals were excluded because they were male, younger than 50 years, or older than 64 years, leaving 4087 female participants. Another 1273 subjects were excluded because they did not undergo health surveys or dual X-ray absorptiometry procedures. The number of female participants included in the final analysis set was 2814. The participants were divided into two groups based on the sarcopenia criteria, with 378 subjects in the sarcopenia group and 2436 in the normal group.
Variables
The study encompassed multiple variables for analysis, including age, height (in cm), weight (in kg), body mass index (BMI), waist circumference (WC), skeletal muscle index (SMI), smoking and drinking status, fasting glucose, triglycerides, total cholesterol (TC), systolic blood pressure, and diastolic blood pressure.
To measure WC, the circumference was determined at the midpoint between the lower rib cage and the upper edge of the iliac crest while the participant was exhaling fully.
Blood analyses were performed following an eight-hour fasting interval, and measurements of systolic and diastolic blood pressures were obtained utilizing a mercury sphygmomanometer after a ten-minute repose in a seated posture. This meticulous approach to data collection ensured the accuracy and reliability of the physiological parameters under investigation.
The fasting period of eight hours, during which participants abstained from food intake, is a standard practice aimed at obtaining baseline blood values unaffected by recent meals. This allowed for a more accurate assessment of fasting glucose levels and other metabolic markers, providing insights into the participants' physiological status.
Additionally, the choice of utilizing a mercury sphygmomanometer for the measurement of systolic and diastolic blood pressures underscored the commitment to precision in the study. The sphygmomanometer, a traditional and widely accepted instrument for blood pressure measurement, is known for its accuracy. The readings obtained using this device are considered reliable indicators of cardiovascular health, contributing to the robustness of the study's findings.
The ten-minute rest period in a seated position before blood pressure measurements served multiple purposes. Firstly, it allowed participants to achieve a stable physiological state, minimizing the potential influence of transient factors on blood pressure readings. Secondly, the seated position standardized the conditions, ensuring uniformity across participants and reducing confounding variables that could compromise the internal validity of the study.
The assessment of smoking and drinking habits involved classifying participants into distinct categories based on their usage patterns. Participants were stratified into three groups: non-users, ex-users, or current users, providing a nuanced understanding of their tobacco and alcohol consumption behaviors.
This categorization approach was crucial for delving deeper into the complexities of smoking and drinking statuses within the study cohort. "Non-users" denoted individuals who had never engaged in smoking or drinking, establishing a baseline for those with no history of tobacco or alcohol consumption. "Ex-users" referred to individuals who were previously involved in smoking or drinking but had since ceased these behaviors. This category recognized the dynamic nature of lifestyle choices and allowed for the examination of potential long-term effects even after discontinuation. Lastly, "current users" represented individuals actively engaged in smoking or drinking at the time of the study, shedding light on immediate associations between these behaviors and other risk factors under investigation.
Moreover, the inclusion of these detailed categories offered a more nuanced exploration of the interplay between smoking, drinking, and various health parameters. Understanding the distinct characteristics of each subgroup enables researchers to discern potential trends, associations, or contrasts in health outcomes. For instance, comparing the health profiles of non-users, ex-users, and current users can illuminate whether the cessation of smoking or drinking is associated with specific health improvements or challenges. These variables were included in the analyses.
Criteria for Sarcopenia
The identification of sarcopenia, classified under the ICD-10-CM code M62.84, necessitates a meticulous evaluation of skeletal muscle mass in the extremities. In this study, the quantification of skeletal muscle mass in the limbs was conducted using dual X-ray absorptiometry (DXA), employing QDR4500A equipment from Hologic, Inc., Bedford, MA, USA. This technologically advanced method ensured precise and reliable measurements for an accurate assessment of skeletal muscle mass.
To gauge muscle mass effectively, the study employed the ASM (kg)/BMI (kg/m²) ratio, commonly referred to as the skeletal muscle mass index (SMI). The SMI calculation offers a quantitative representation of the relationship between muscle mass and body mass, allowing for a more nuanced understanding of muscular health. This index is particularly valuable in the context of sarcopenia diagnosis.
The diagnostic threshold for sarcopenia in women was established based on the SMI value. According to the criteria stipulated by the Foundation for the National Institutes of Health Sarcopenia Project [21], a diagnosis of sarcopenia was confirmed when the SMI value fell below 0.521. This criterion provided a standardized and objective measure for identifying individuals with insufficient skeletal muscle mass relative to their body mass, aligning with established norms in the field.
The robustness of the diagnostic methodology employed in this study underscored its reliability in accurately identifying sarcopenia among the study participants. By adhering to established criteria and utilizing advanced DXA technology, the research not only contributes to the scientific understanding of sarcopenia but also sets a benchmark for accurate and consistent diagnostic practices. The utilization of specific equipment and adherence to standardized criteria enhance the credibility of the study's findings, ensuring that the diagnosis of sarcopenia was grounded in rigorous scientific methodology. This meticulous approach has established a foundation for further research and clinical applications related to sarcopenia and its diagnostic parameters.
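A minimal sketch of this classification step is given below, using the ASM/BMI ratio and the female cutoff stated above; the function names and the example values are illustrative, and ASM is taken to mean appendicular skeletal muscle mass in kilograms.

```python
def skeletal_muscle_mass_index(asm_kg: float, bmi_kg_m2: float) -> float:
    """SMI defined as appendicular skeletal muscle mass (kg) divided by BMI (kg/m^2)."""
    return asm_kg / bmi_kg_m2

def has_sarcopenia(asm_kg: float, weight_kg: float, height_m: float,
                   cutoff_women: float = 0.521) -> bool:
    """Classify a female participant using the SMI cutoff stated in the text."""
    bmi = weight_kg / height_m ** 2
    return skeletal_muscle_mass_index(asm_kg, bmi) < cutoff_women

# Example: ASM = 14 kg, weight = 62 kg, height = 1.56 m -> BMI ~ 25.5, SMI ~ 0.55
print(has_sarcopenia(asm_kg=14.0, weight_kg=62.0, height_m=1.56))  # False
```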
Data Analysis
This study used the mean and standard deviation as statistical measures to summarize the data for each measurement. To ensure a representative analysis at the national level in Korea, complex sampling analysis was employed, incorporating the individual weights provided by the KNHANES. The statistical analyses were conducted using SPSS software (version 22.0; IBM Corporation, Armonk, NY, USA).
The data employed adhered to a stratified, clustered, and multistage probability sampling design. To examine the variations in clinical parameters between participants afflicted with sarcopenia and those without, independent t-tests and chi-square analyses were employed. This sampling approach ensured a comprehensive and representative selection, allowing for a nuanced exploration of the clinical profiles in relation to sarcopenia. Multiple logistic regression analysis was used to calculate the odds ratios for sarcopenia. The statistical significance level was set at p = 0.05 to determine the presence of statistically significant associations.
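As an illustration of the modeling step, the sketch below fits a weighted logistic regression with statsmodels and reports odds ratios with 95% confidence intervals. Incorporating the KNHANES sampling weights as frequency weights is a simplification of a full complex-survey analysis (which would also use the strata and cluster information), and the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# df is assumed to hold one row per participant; 'sarcopenia' is 0/1 and
# 'weight_svy' is the survey weight (file and column names are hypothetical).
df = pd.read_csv("knhanes_women_50_64.csv")

predictors = ["age", "height", "bmi", "wc", "smi", "sbp", "dbp",
              "fasting_glucose", "triglyceride", "total_cholesterol"]
X = sm.add_constant(df[predictors])
y = df["sarcopenia"]

model = sm.GLM(y, X, family=sm.families.Binomial(),
               freq_weights=df["weight_svy"]).fit()

# Odds ratios with 95% confidence intervals.
odds = pd.DataFrame({"OR": np.exp(model.params),
                     "2.5%": np.exp(model.conf_int()[0]),
                     "97.5%": np.exp(model.conf_int()[1])})
print(odds.round(3))
```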
Clinical Risk Factors
Height, BMI, WC, SMI, TC level, triglyceride level, systolic blood pressure, and diastolic blood pressure were significantly different between the two groups (p < 0.05). Weight, fasting status, smoking status, and drinking variables were not significantly different between the groups (p > 0.05) (Table 2).
Multiple Logistic Regression for Odds Ratios
Table 3 shows the odds ratios (ORs) with 95% confidence intervals, determined through multiple logistic regression analysis. BMI, body mass index; WC, waist circumference; SMI, skeletal muscle mass index; SBP, systolic blood pressure; DBP, diastolic blood pressure; and TC, total cholesterol.
Discussion
This study aimed to evaluate the prevalence and risk factors associated with sarcopenia among community-dwelling late middle-aged females. Aging populations in Korea and Asia are rapidly increasing, leading to a higher occurrence of sarcopenia, especially among females. Despite the potential adverse effects of sarcopenia, healthcare professionals face challenges in diagnosing the condition due to a lack of adequate knowledge and diagnostic tools, resulting in overlooked diagnoses and complications. It is useful to utilize variables such as age, height, weight, BMI, WC, SMI, smoking and drinking status, SBP, DBP, fasting glucose level, triglyceride, and TC. These variables offer a cost-effective, convenient, and accessible approach for identifying patients with potential sarcopenia. Recognizing the risk factors is essential for the early detection and prevention of sarcopenia. The identified risk factors in females included WC, SMI, SBP, DBP, triglyceride level, and TC.
The consistent identification of waist circumference as a risk factor for sarcopenia has been a focal point in various female sarcopenic studies [25][26][27]. One American study revealed that sarcopenic individuals had an enlarged waist circumference [25]. Likewise, a Brazilian cohort study reported a greater waist circumference among individuals with sarcopenia than among those without sarcopenia [26]. A separate investigation conducted in Japan revealed that those identified with sarcopenia demonstrated greater waist circumferences compared to a healthy population [28].
The theoretical underpinning of the observed increase in waist circumference in adults with sarcopenia is rooted in the interconnected relationship between increased fat mass and diminished muscle mass [28]. Individuals with sarcopenia often have problems with muscle power and function due to muscle loss, resulting in decreased engagement in physical activities, such as difficulty in sit-to-stand transfers and in walking extended distances both indoors and outdoors [29]. This decline in physical activity is strongly correlated with a reduction in total daily energy expenditure and an increase in body fat stores. In particular, fat accumulates in the visceral and abdominal regions, ultimately leading to the expansion of waist volume [29]. Consequently, the correlation between diminished muscle mass and fat mass accumulation in sarcopenia is bidirectional and reinforces this hypothesis [30]. Thus, evidence consistently highlighting waist circumference as a discernible risk factor for sarcopenia emphasizes the need for a nuanced understanding of the intricate interplay between muscle and fat mass.
The other identified risk factor in the blood laboratory examination was an elevation in triglyceride level, a finding consistent with previous investigations [31][32][33]. In a cross-sectional study of subjects from east China, Lu et al. [33] observed that females presenting with sarcopenia manifested heightened serum triglyceride levels. Similarly, in their examination of an older adult cohort in the northern region of Taiwan, Lu et al. discerned a notable increase in triglyceride levels within a demographic characterized by sarcopenia. Correspondingly, Buchmann et al. [31], through their examination of an elderly population in Berlin, concluded that triglyceride levels were elevated within the subset afflicted with sarcopenia compared to their nonsarcopenic counterparts. This collective body of evidence underscores the consistency of the association between sarcopenia and elevated triglyceride levels across diverse geographic and demographic spectra, thereby reinforcing the robustness of the observed correlation.
Insulin resistance is a plausible underlying mechanism for the observed correlation between sarcopenia and elevated triglyceride levels.Insulin resistance disrupts lipid metabolism.Under normal conditions, insulin facilitates the uptake of fatty acids and glucose by adipose tissue.In insulin resistance, this regulatory process is impaired, resulting in an increased release of fatty acids from adipose tissue into the bloodstream [34].Skeletal muscle is a pivotal primary repository, storing approximately 80% of ingested glucose after meals, thereby acting as a critical prevention of hyperglycemia in the bloodstream [19].However, individuals with sarcopenia, particularly women, frequently exhibit a notable reduction in insulin sensitivity.This lowered insulin sensitivity displays a diminished capacity for glucose uptake by skeletal muscles, stemming from lower proportions of type I muscle fibers and a reduced capillary density susceptible to insulin action [35].Furthermore, the accumulation of fat in sarcopenic adults, as mentioned in the discussion on waist circumference, contributes to the synthesis of triglycerides, facilitated by the liver through lipogenesis, and the liver's orchestration of triglyceride synthesis via lipogenesis [36].The liver, a central organ in the intricate lipid metabolism network, demonstrates a discerning reaction when confronted with excess circulating fatty acids in a systemic environment.This response is characterized by the initiation of lipogenesis.Within this intricate biochemical pathway, the liver engages in triglyceride synthesis from an abundance of fatty acids and glycerol molecules, representing a pivotal juncture in the overall metabolic panorama.Lipogenesis, at its core, embodies a molecular performance orchestrated within hepatic cells.This intricate choreography of enzymatic reactions unfolds within hepatocytes and culminates in the conversion of fundamental building blocks, fatty acids, and glycerol molecules into more intricate and storage-ready triglycerides.These enzymatic transformations transcend biochemical processes.Instead, they reflect the meticulously regulated finesse of the liver's molecular machinery, with each enzymatic step intricately controlled to ensure seamless synthesis of triglycerides [37,38].
Total cholesterol was recognized as a risk factor for sarcopenia, which was consistent with the results of previous studies [27,32]. According to Du et al. [32], females with sarcopenia exhibit elevated total cholesterol levels compared to their counterparts in the normal group. Similarly, Sanada et al. [27] assessed a Japanese population and observed significantly higher total cholesterol levels in individuals diagnosed with sarcopenia than in those in the normal group.
The potential cause of elevated total cholesterol levels may be attributed to both inflammation and mitochondrial dysfunction. Aging is characterized by chronic low-grade inflammation, commonly referred to as "inflammaging", which plays a role in muscle wasting by facilitating the breakdown of muscle proteins and hindering the regeneration of muscle tissue [39]. Furthermore, the intricate relationship between sarcopenia and mitochondrial dysfunction contributes to our understanding of the mechanisms behind altered cholesterol levels [40]. Mitochondria, the cellular powerhouses responsible for energy production, undergo changes with aging that can lead to diminished energy output and subsequent muscle fatigue.
To delve deeper into the relationship between total cholesterol and sarcopenia, it is crucial to explore the specific mechanisms by which inflammation and mitochondrial dysfunction influence lipid metabolism and muscle health.Inflammatory mediators, such as cytokines, can impact the liver's synthesis of lipoproteins, including cholesterol-carrying molecules [41].This alteration in lipid metabolism may contribute to the observed elevation in total cholesterol levels in individuals with sarcopenia.Mitochondrial dysfunction, on the other hand, can affect the energy balance within muscle cells.The decline in mitochondrial function diminishes the efficiency of energy production, potentially influencing the regulation of lipid metabolism [42].This not only contributes to muscle fatigue, a characteristic of sarcopenia, but may also play a role in the dysregulation of cholesterol levels.Moreover, underscoring the importance of considering hormonal influences and metabolic differences in the interplay between cholesterol and muscle health [43].Hormonal changes associated with aging, including alterations in growth hormone and estrogen hormones, can impact both lipid metabolism and muscle mass maintenance.Understanding the intricate connections between total cholesterol, inflammation, mitochondrial dysfunction, and sarcopenia necessitates a comprehensive examination of cellular and molecular processes.Research in this area holds promise not only for elucidating the pathophysiology of sarcopenia but also for identifying potential targets for therapeutic interventions aimed at mitigating muscle loss and optimizing metabolic health.
Our study's findings demonstrated that systolic and diastolic blood pressures serve as risk factors for women, which is consistent with prior research [33,44]. An investigation conducted in Taiwan by Lu et al. revealed that individuals within the sarcopenia group exhibited elevated SBP and DBP compared to their counterparts in the normal group [33]. Correspondingly, a British cohort study by Atkins et al., involving 4252 participants, revealed significantly elevated systolic and diastolic blood pressure in sarcopenia compared with a healthy population [45]. Androga and colleagues [44] demonstrated that the sarcopenia group had a higher prevalence of elevated blood pressure compared to their healthy counterparts.
The increase in SBP and DBP in individuals with sarcopenia can be attributed to skeletal muscle loss resulting from metabolic alterations and a decline in muscle mass.This phenomenon contributes to diminished energy expenditure, decreased physical activity, insulin resistance, and heightened arterial stiffness in older adults.Additionally, the accumulation of excessive visceral fat may induce an inflammatory response, leading to the thickening of blood vessel walls, constriction of vascular passages, and hindrance of blood flow.Hence, it is imperative to emphasize the potential health implications of elevated SBP and DBP in individuals with sarcopenia.These repercussions include reduced energy expenditure and physical activity, heightened susceptibility to insulin resistance, and an increased likelihood of arterial stiffness among the elderly.Furthermore, the accrual of surplus visceral fat mass has emerged as a pivotal factor in triggering an inflammatory response, subsequently fostering structural changes in blood vessels, constricting vascular passages, and impeding blood flow.
The current study demonstrated a notable strength by focusing on the investigation of risk factors, specifically among females within the representative population of late middle-aged individuals.This age group is particularly significant because sarcopenia progresses rapidly and complications begin in this age group [46][47][48][49].These findings offer valuable insights into the early detection and treatment of sarcopenia.However, it is crucial to acknowledge several limitations of the present study that should be addressed in future research.First, despite the inclusion of a substantial sample size of 2814 participants with representative statistical weights, the use of a cross-sectional design may have restricted the ability to establish causal relationships for the identified risk factors.Factors such as elevated triglyceride and total cholesterol levels have been implicated as potential predictors of sarcopenia.The cross-sectional nature of the study raises the possibility that sarcopenia itself could influence the blood test results.Thus, further research is imperative to comprehensively elucidate the intricate relationship between these predictors and the development of sarcopenia.Future studies could enhance the robustness of their findings by considering longitudinal or randomized case-control designs.Another limitation was the omission of an examination of sarcopenic obesity, a condition characterized by low muscle mass and high body fat.The absence of consideration for sarcopenic obesity is particularly relevant, as it may influence alterations in total cholesterol and triglyceride levels.To facilitate a more nuanced interpretation of the study results, future research should consider the potential impact of sarcopenic obesity on the identified metabolic parameters.In addition, the present study did not investigate the assessment of protein intake.Protein intake is essential for the prevention and intervention of sarcopenia.Furthermore, the diagnosis of sarcopenia did not take into account muscle strength and function.If these factors had been considered, better results could have been achieved.Finally, the study did not consider variables from psychosocial aspects.This should be considered in the next study.
Conclusions
The current nationwide investigation provides the first clinical findings on the prevalence of, and risk factors for, sarcopenia in late middle-aged women.
Within this demographic, the prevalence of sarcopenia was estimated to be 13.87% (confidence interval, 12.15% to 15.78%). The study identified clinical risk factors associated with sarcopenia, including waist circumference, systolic and diastolic blood pressure, and triglyceride and total cholesterol levels. By taking both the prevalence and the identified risk factors into account, healthcare professionals may be better able to detect potential cases of sarcopenia among female patients. However, further research is required to deepen our understanding of the relationship between these risk factors and sarcopenia and to bolster the robustness of these findings; longitudinal or randomized case-control study designs hold promise for unraveling this association.
Table 2.
Clinical risk factors for sarcopenia.
Values are presented as the mean with accompanying standard deviation. The statistical analyses employed included the independent t-test and the chi-square test. BMI, body mass index; WC, waist circumference; SMI, skeletal muscle mass index; SBP, systolic blood pressure; DBP, diastolic blood pressure; FG, fasting glucose; and TC, total cholesterol.
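As a rough illustration of how results of this kind are typically produced, the following Python sketch runs an independent t-test for a group comparison (as in Table 2) and a multiple logistic regression whose exponentiated coefficients give odds ratios with confidence intervals (as in Table 3). The data, variable names, and coefficients are hypothetical stand-ins, not the authors' actual dataset or analysis code, and the survey weights used in the real analysis are omitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 2814  # sample size reported in the text

# Hypothetical stand-in data for the variables abbreviated above.
df = pd.DataFrame({
    "WC":  rng.normal(82, 9, n),    # waist circumference, cm
    "SBP": rng.normal(125, 15, n),  # systolic blood pressure, mmHg
    "DBP": rng.normal(78, 10, n),   # diastolic blood pressure, mmHg
    "TG":  rng.normal(130, 60, n),  # triglycerides, mg/dL
    "TC":  rng.normal(195, 35, n),  # total cholesterol, mg/dL
})
# Simulated sarcopenia status, for illustration only.
logit = -14 + 0.06 * df["WC"] + 0.04 * df["SBP"] + 0.01 * df["TC"]
df["sarcopenia"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Group comparison (as in Table 2): independent t-test on a continuous variable.
t, p = stats.ttest_ind(df.loc[df["sarcopenia"] == 1, "SBP"],
                       df.loc[df["sarcopenia"] == 0, "SBP"])
print(f"SBP: t = {t:.2f}, p = {p:.4f}")

# Multiple logistic regression (as in Table 3): exp(coefficients) are odds ratios.
fit = smf.logit("sarcopenia ~ WC + SBP + DBP + TG + TC", data=df).fit(disp=0)
ors = pd.DataFrame({"OR": np.exp(fit.params),
                    "CI low": np.exp(fit.conf_int()[0]),
                    "CI high": np.exp(fit.conf_int()[1])})
print(ors.round(3))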
Table 3.
Multiple logistic regression for odds ratios of sarcopenia. | 2024-02-06T17:51:36.647Z | 2024-01-31T00:00:00.000 | {
"year": 2024,
"sha1": "17d095306e70bb7d02d2f4880e9cafee64833296",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/12/3/362/pdf?version=1706693164",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da1412a7040f58b8d08333a59b26ab9c281046c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216500708 | pes2o/s2orc | v3-fos-license | Backlash against the Court of Justice of the EU? The Recent Jurisprudence of the German Constitutional Court on EU Fundamental Rights as a Standard of Review
Abstract This article discusses two landmark judgements by the German Federal Constitutional Court (CC) on the relationship between domestic and EU fundamental rights protection (Right to be forgotten I and II). In these judgements, for the first time, the CC uses EU fundamental rights as a standard of review. In addition, the CC establishes a novel framework of “parallel applicability” of EU and domestic fundamental rights for subject matters that are not fully harmonized by EU law. The article first presents the new approach, showing that it structurally changes the parameters of the relationship between the CC and the CJEU. Second, the article assesses the legal-political tendency reflected in this change: is this constructive dialogue or rather pushback against the CJEU? The article argues that this new jurisprudence should be characterized as an instance of resistance. The CC resists the CJEU in its function as fundamental rights court, attempting to reduce the authority of the CJEU and reversing a development that it considered to be unfavourable to its own authority. This is structural pushback aimed at the CJEU’s function rather than at individual decisions or norms - however, without rejecting the CJEU as an institution altogether.
A. Introduction
In two landmark judgements delivered on Nov. 6, 2019 (1 BvR 276/17 and 1 BvR 16/13), the German Federal Constitutional Court (CC) overturned its jurisprudence on the relationship between domestic fundamental rights protection and the fundamental rights protection provided by EU law. Going beyond its previous approach of assessing only whether domestic fundamental rights are respected, it has now decided to also use EU fundamental rights as a standard of review in certain situations. In addition, the court establishes a novel framework of "parallel applicability" of EU and domestic fundamental rights.
The initial scholarly feedback to these decisions was rather positive. Several commentators welcomed the new approach emphasizing that it allows for the CC to be "back in the game", 1 arguing that the approach would focus on what links EU and domestic law rather than on what divides it, 2 or highlighting that the new approach strengthens the CC institutionally and that it gives the CC a "new strong voice" in the European system of fundamental rights protection. 3 This is in line with more general suggestions over the previous years that constitutional courts in EU Member States should use EU law as a standard of review, welcoming instances in which constitutional courts did so. 4 Is this jurisprudence really such a positive development for European integration as these voices seem to suggest? If one takes into account the motivation of the CC for establishing this jurisprudence and assesses the subtleties of the new approach as to when EU fundamental rights will actually play a role in practice and who has a say on that, the picture might seem less bright. In fact, this new approach is a reaction to the expanding fundamental rights jurisprudence of the Court of Justice of the EU (CJEU), through which the court saw itself increasingly marginalized as an institution. To re-establish its influence, the court expands its own jurisdiction by also using the EU Fundamental Rights Charter as a standard of review; and at the same time, it expands the impact of domestic fundamental rights as opposed to EU fundamental rights in this context. This raises the question whether this jurisprudence is an instance of resistance or even backlash against the CJEU or rather a constructive contribution to a polyphonic human rights protection system in Europe.
To address this question, I first present in detail the content of the new framework of the relationship between EU and domestic fundamental rights (part B). This will allow us to assess to what extent this jurisprudence fulfils the criteria that constitute an act of pushback against an international or supranational court in the form of resistance or backlash (part C). If it does fulfil these criteria, this jurisprudence of the CC will contribute to an increasingly critical environment that many international courts are facing. It will join the ranks of other instances of pushback against these courts, a pushback which has become the object of broad scholarly attention in the recent past. 5 Before turning to this discussion, I give a brief overview of the facts and the substantive arguments made in the "twin decisions" issued on 6 November 2019. This provides the necessary background to the discussion of the applicable fundamental rights framework, on which this paper will focus. In substance, the decisions concern, in two very similar constellations, the recently created "right to be forgotten" regarding personal information available on the internet.
In the first case (Right to be forgotten I, 1 BvR 16/13), the complainant challenged in a constitutional complaint a judgment of the Federal Court of Justice (Bundesgerichtshof). The complainant sued the German weekly journal "Der Spiegel". In 1982 and 1983, the journal had published two articles about the complainant's criminal trial, in which he had been convicted of murder. These articles have been available in the online archive of the journal since 1999. A search for the complainant's name via an online search engine showed these articles among the top results. The complainant argued that given the time that has passed since the events, his general right of personality as enshrined in Articles 2 (1) and 1 (1) of the German Basic Law gives him the right to request that these articles should not appear as results of a simple name-based online search anymore. The CC agreed with this line of argument. Based on German constitutional law, the CC assessed the Federal Court of Justice's balancing of the personality right of the claimant and the freedom of opinion and the freedom of press of the journal. Even though the CC considered that there is still an interest of the public in having access to these journal articles via the online archive of the journal, the CC decided that the journal has to ensure through technical measures that these articles are not included in the top results of a simple name-based search by general online search engines.
3 Thomas Kleinlein, Neue starke Stimme in der europäischen Grundrechts-Polyphonie, VERFASSUNGSBLOG, (Dec 1, 2019), https://verfassungsblog.de/neue-starkestimme-in-der-europaeischen-grundrechts-polyphonie/.
4 Davide Paris, Constitutional Courts as European Union courts. The current and potential use of EU law as a yardstick for constitutional review, 24 MAASTRICHT JOURNAL OF EUROPEAN AND COMPARATIVE LAW 792-821 (2017); referring to a letter of the then-President of the CJEU to the President of the Austrian Constitutional Court as well as to a presentation by the then EU commissioner Viviane Reding, who both welcomed the Austrian CC's application of the EU Charter of Fundamental Rights in its decision of 14 March 2012, docket number U 466/11-18, U 1836/11-13: Theo Öhlinger, Vorlagepflicht bei Verstoß eines nationalen Gesetzes gegen Artikel 47 GRCh - Anmerkungen, 25 EUROPÄISCHE ZEITSCHRIFT FÜR WIRTSCHAFTSRECHT 955 (2014).
In the second case (Right to be forgotten II, 1 BvR 276/17), the CC based its argumentation on EU fundamental rights as enshrined in the Charter. The case revolved around a TV broadcast that was uploaded to an online archive in 2010. In the broadcast entitled "Dismissal: the dirty practices of employers", the complainant is identified by name and accused of unfair treatment of an employee, who was dismissed by her company. A search for the complainant's name on Google displayed the link to this broadcast among the top results - a situation that the complainant sought to change. In contrast to the above case, the complainant did not take legal action against the broadcaster but against the search engine operator who refused to remove the broadcast from the search results. The CC emphasised that in this constellation, three sets of rights need to be balanced: the complainant's right to private and family life and to the protection of personal data pursuant to Article 7 and Article 8 of the Charter; Google's freedom to conduct a business pursuant to Article 16 of the Charter; and the freedom of expression of the broadcasting corporation pursuant to Article 11 of the Charter as a directly affected fundamental right of a third party. Based on the specific circumstances of the case, in particular the relatively short time that had passed since the broadcast was first published and the fact that the complainant had voluntarily contributed to it by agreeing to be interviewed, the CC concluded that the complainant's right does not have precedence over the other rights concerned.
Both cases are governed by EU legislation on data protection, currently the General Data Protection Regulation (GDPR). The cases differ with regard to the applicable EU law in one aspect: in the case Right to be forgotten I, the so-called media privilege applies. According to Article 85 of the GDPR, Member States shall reconcile the right to the protection of personal data with the right to freedom of expression and information, including processing for journalistic purposes. The CC said that where this provision is applicable - as in the case Right to be forgotten I - Member States are given a leeway in how to reconcile the rights mentioned in Article 85 GDPR. In contrast, where this provision does not apply - as in the case Right to be forgotten II - the Member States do not have any leeway, i.e. the subject matter is fully harmonized by EU law. This distinction is crucial because, as I will lay out in part B of this paper, the court creates different frameworks for the applicability of EU fundamental rights depending on whether the subject matter in the case is fully harmonized by EU law or not.
B. A New Framework for the Relationship between Domestic and EU Fundamental Rights Protection in the German Legal Order
In its twin judgements of November 2019, the CC establishes a two-prong framework for the relationship between domestic and EU fundamental rights protection. I will present both pillars of this jurisprudence below.
I. EU Fundamental Rights as Standard of Review
The first major innovation of this jurisprudence relates to the standard used by the CC in the context of its constitutional review. For the first time, the CC uses the rights enshrined in the EU Fundamental Rights Charter as a standard for review. Until now, the court had considered itself competent only to review whether domestic constitutional rights have been violated in a given case brought before it by way of a constitutional complaint. This corresponded to reading the constitutional provision conferring jurisdiction on the CC 6 in a way that the term fundamental rights in this provision refers only to the fundamental rights section of the German Basic Law. So far, the court had repeatedly stated that "it is inadmissible to challenge the violation of European Community law. Rights under Community law are not among the fundamental rights, or rights that are equivalent to fundamental rights, the violation of which can be challenged under Article 93.1 no. 4a of the Basic Law ...". 7 The court now takes a different approach. In Right to be forgotten II, it argues that based on its "responsibility with regard to EU integration", the court shall ensure that EU fundamental rights are guaranteed. 8 The notion of "responsibility with regard to EU integration" is an established concept of German constitutional jurisprudence that the CC deduced from Article 23 of the Basic Law, a norm that addresses matters of Germany's participation in the EU. While the CC has used this concept on multiple occasions to determine the competences and obligations of other constitutional organs such as the government or the legislator, in the present case, it applies this concept to determine the scope of its own competence. The court argues that, in light of this concept, German constitutional law has provided the court with the mandate to engage in a review of EU fundamental rights because these rights are a part of European integration. The claim is thus that German constitutional law requires the CC to ensure the protection of both domestic and EU fundamental rights.
By adopting this new approach, the CC joins a number of other constitutional courts in the EU that have recently started to apply the EU Fundamental Rights Charter as a standard of review. 9 The Austrian Constitutional Court was the first court to choose that path and to provide a detailed explanation for taking that step. In its decision of Mar. 14, 2012, 10 the court decided to henceforth review the cases brought before it both with regard to domestic fundamental rights and - if applicable - to EU fundamental rights. 11 Notably, the Austrian CC chooses a very different line of reasoning when compared to the German CC. It bases its jurisprudence on an argument drawn from EU law rather than from domestic constitutional law: the principle of equivalence. This principle requires that when Member States lay down the "procedural rules governing actions for safeguarding rights which individuals derive from Community law", the Member States have to ensure that these rules "are not less favourable than those governing similar domestic actions (the principle of equivalence) and do not render virtually impossible or excessively difficult the exercise of rights conferred by Community law (the principle of effectiveness)". 12 Thus, the rationale of the Austrian court is that because there is a review mechanism for domestic fundamental rights, review for EU fundamental rights should also be available. 13 According to the Austrian court, that is at least true in situations in which the guarantee enshrined in the relevant Charter right corresponds to a right enshrined in the Austrian constitution. 14 The German CC does not refer to this argument or any other argument drawn from EU law. Rather, it argues based on domestic law that there would be a "gap" in the fundamental rights protection if it did not engage in a review of EU fundamental rights as well. This argument will be discussed in more detail in part C of this paper. At its crux, the argument of the German CC is more comparable to the approach taken by the Italian CC. 15 The Italian court has recently established a doctrine of "dual preliminarity" requiring domestic general courts to refer a preliminary question to the Italian CC when domestic and EU fundamental rights are at stake in a case. This doctrine was presented as a necessity in order to guarantee effective fundamental rights protection and to preserve the centralized model of fundamental rights review that exists in Italy.
6 Article 93 (1) of the Basic Law.
7 BVerfG, March 28, 2006, docket number 1 BvR 1054/01, para. 77. This decision was rendered by the Second Senate of the CC, while the decision Right to be forgotten II is a decision of the First Senate. To avoid the perception of a conflict between the two senates (which also would have had procedural implications), the First Senate has gone to great lengths in arguing that both approaches are in fact compatible (paras 87-93). The First Senate said: "The treatment of corresponding constitutional complaints as inadmissible [by the Second Senate] was not based on an independent statement by this case law that fundamental Union rights were not applicable, but was merely a reflection of the inapplicability of the Basic Law" (para. 89). It remains to be seen how the Second Senate will respond to this argumentation.
In contrast to other constitutional courts that have started to conduct a review regarding EU fundamental rights, the German CC takes a more restrictive approach, establishing a rather complex system of applicable review standards. The CC does not simply accept EU fundamental rights as a standard of review in situations in which EU fundamental rights are applicable according to EU law. Instead, the court reduces the scope of its new review standard to the minimum necessary for achieving its goals. In order to do so, the CC establishes a distinction between subject matters that are fully harmonized by EU law and subject matters in which Member States have a leeway. The constitutional courts in other Member States have not referred to this distinction in that way.
Concerning fully harmonized subject matters, EU fundamental rights are, according to the CC's new jurisprudence, exclusively applicable. Domestic fundamental rights have in general no role to play here - save for the existing jurisprudence on ultra vires and constitutional identity (see below). 16 The exclusiveness approach for fully harmonized subject matters continues the established line of jurisprudence, i.e. that "sovereign acts of the European Union and acts of German public authority that are determined by Union law shall in principle not be measured against the standards of the Basic Law." 17 The new approach - reviewing certain acts of German public authority that are determined by Union law with regard to their compatibility with the Charter rights - thus enables the CC to exercise jurisdiction in a field where, at least in principle, it has not adjudicated since the Solange II decision of 1986. 18 However, in contrast to the pre-Solange II era, the CC will now use EU fundamental rights rather than domestic fundamental rights as standard of review.
For subject matters beyond full harmonisation, the novel framework created by the CC is more multilayered. As I will outline in the next section, EU fundamental rights can potentially be a standard of review in these cases - but their role is very limited. The CC gives as much space as possible to domestic fundamental rights.
12 CJEU, Case C-326/96, Levez v. Jennings Ltd, E.C.R. 1998 I-7835, para. 18.
13 The CJEU has given its appreciation of this argument in its decision of Sep. 11, 2014, Case C-112/13, A v B and others. The CJEU stressed that, in the context of the concrete review of legislation, a system in which general courts refer to the CC by way of interlocutory proceedings is only permissible under EU law when several strict conditions are fulfilled. The argumentation is based on the principle of primacy of EU law and the relevant Simmenthal jurisprudence of the court. However, in the context of constitutional complaints such as in the present decisions of the CC, this restriction does not apply. See on the EU law limits on the application of EU law as yardstick for CCs in the different contexts, Paris, supra note 4, 811-814.
II. "Parallel" Applicability of EU and Domestic Fundamental Rights
The second novelty of the twin decisions is that the CC abandons its concept of an exclusive relationship between EU and domestic fundamental rights. While the CC thus far considered that either EU or domestic fundamental rights are applicable to a case at hand, 19 it has now turned to recognizing a parallel applicability of both sets of fundamental rights. However, this parallel applicability relates only to subject matters that are not fully harmonized by EU law. As highlighted above, the CC exclusively uses EU fundamental rights as a standard of review in situations of full harmonisation. In this regard, the exclusiveness approach of the CC is still alive. Only beyond full harmonisation has the CC changed its opinion on this issue. Both EU and domestic fundamental rights are now considered (at least prima facie) as standards of review by the CC.
Insofar as the CC has turned to parallel applicability, the approach corresponds - at its basis - to the approach taken by the CJEU on the matter. In Åkerberg Fransson, the CJEU accepted that in a "situation where action of the Member States is not entirely determined by European Union law", EU and domestic fundamental rights standards can be applied at the same time. 20 The CC, however, does not simply adopt the jurisprudence of the CJEU on the matter. It creates its own framework for how the newly recognized "parallel applicability" should translate into practice.
The particularity of this approach is that, despite its label, the new "parallel applicability" framework for situations beyond full harmonisation does not amount to the CC using EU and domestic fundamental rights as standards of review in an equal and/or simultaneous manner. Rather, in Right to be forgotten I, the CC establishes domestic law as the primary standard of review: in general, the court will use domestic fundamental rights as the standard of review, and only exceptionally will it use EU fundamental rights.
This concept follows from a two-step argumentation. The starting point of this argumentation is a novel presumption that the CC presents here for the first time. This presumption has two elements. The first element serves as a justification for parallel applicability beyond full harmonisation. According to the court, whenever EU law leaves a leeway to the Member States for implementing the EU law provisions in question, it can be presumed that this leeway for implementation includes a leeway with regard to fundamental rights protection. 21 That means that beyond full harmonisation, it is for the Member States to decide how and to what extent they protect fundamental rights. In this argumentation, the limits to parallel applicability as called for by the CJEU, i.e. primacy, unity and effectiveness of European Union law, 22 do not play a role. For the CC, wherever there is no full harmonisation, there is in principle room for domestic fundamental rights standards. Moreover, the concept of a leeway not only makes it possible to support a parallel applicability of both standards but also allows one to argue for domestic standards being the natural first point of reference. This element of the presumption thus relates to the distribution of competence between the EU and the Member States. The CC uses a subsidiarity rationale, claiming that where EU law does not harmonize a subject matter, there is no need for a compulsory EU regulation of fundamental rights and therefore there is no common EU fundamental rights standard.
The second element of the presumption, which from a conceptual perspective is more remarkable than the first, relates to the substantive level of fundamental rights protection. The CC claims that it can be presumed that domestic fundamental rights guarantee a level of protection equivalent to that required by the EU Charter of Fundamental Rights. 23 The argument is that even if domestic fundamental rights are used as the standard of review, their application also ensures - in substance - the Charter rights. Therefore, the CC argues, domestic fundamental rights can be used as a primary standard of review. Their application does not undermine the level of protection required by the Charter. Remarkably, this reasoning inverts the reasoning on which the presumption in the Solange II decision of the CC is constructed. In Solange II, the CC argued that it can be presumed that the application of EU fundamental rights guarantees a level of protection that is equivalent to the level of protection required by domestic fundamental rights. 24 This presumption served as a justification for the CC to step back and not to engage in the review of EU acts and domestic acts that are fully determined by EU law. In contrast, the new presumption - which does not replace the old presumption but coexists with it - serves the opposite purpose. The CC aims to step forward and to apply domestic fundamental rights in cases in which, according to EU law, EU fundamental rights might (also) be applicable. Compared to the Solange II presumption, the new presumption, being a claim made by a domestic court, is rather bold. While in Solange II, the CC decided on the extent of the required domestic fundamental rights protection and claimed that EU law provides in substance an equivalent protection, the CC now decided on the extent of the required EU fundamental rights protection and claimed that domestic law provides in substance an equivalent protection. While it is comparatively natural for a domestic CC to decide on alternative ways in which domestic standards can be ensured, it is much less natural for a domestic court to decide on how EU law standards can be ensured. As this would be a question on which the CJEU might want to adjudicate, it will be interesting to see whether and how the CJEU will react to this jurisprudence of the CC.
19 This approach has been prominent in the decision BVerfG, April 24, 2013, Antiterrorism Legislation, docket number 1 BvR 1215/07 (see the discussion of this decision in part C of this paper).
The presumption for domestic fundamental rights as the primary standard of review is rebuttable. If it is rebutted, the CC will use EU fundamental rights as standard of review even in situations beyond full harmonisation. 25 However, the court sets the bar for rebutting the presumption very high. Only "in exceptional circumstances" will the CC in fact apply EU fundamental rights. 26 Two scenarios are possible for such a rebuttal: when there are "specific and sufficient indications" that (1) the ordinary EU legislation at issue contains stricter fundamental rights requirements rather than allowing Member States to apply their domestic fundamental rights standards; or (2) the specific level of protection required by the Charter exceptionally does not correspond to domestic constitutional law. In order to establish the first scenario, the CC requires that the provisions of ordinary EU legislation determine a specific fundamental rights standard, explicitly expressing the wish of the EU legislator that this specific standard is applied by Member States as a harmonized standard. Notably, the court clarifies that it does not consider it to be a sufficient indication when the EU legislator merely refers to certain provisions of the Charter in the recitals to a legal act. 27 This clarification shows that the CC is likely to be rather reluctant to recognize that the presumption for domestic law as a primary standard of review is rebutted. A similarly high bar seems to apply to the second scenario. Here, the CC considers the presumption that domestic fundamental rights standards correspond to those required by the Charter to be rebutted when it is evident from the jurisprudence of the CJEU that the CJEU considers certain Charter provisions to require specific standards that are not guaranteed by domestic fundamental rights law. For the CC, this is the case when the Charter contains guarantees that are not part of the domestic fundamental rights law. This overview shows that in situations beyond full harmonisation, the parallel applicability of EU and domestic fundamental rights amounts to a superior position for domestic fundamental rights in this area. They are the primary standard of review based on a presumption that can only be rebutted in very limited circumstances. In practice, the parallel applicability is thus likely to remain largely rhetorical. This observation is one of the aspects that prompt the question whether the new approach of the CC really contributes to a constructive judicial dialogue between the CJEU and the CC in fundamental rights matters or whether it is an example of judicial resistance against international and supranational courts, an issue discussed in the following section.
C. Constructive Judicial Dialogue or Pushback against the CJEU?
The relationship between the CJEU and the CC has never been free of friction. In some instances, the CC took a more critical stance towards the CJEU and EU law in general; in others, its jurisprudence was more "European law friendly". As this new approach structurally changes the parameters of the relationship between EU and domestic fundamental rights law in the German legal order as well as the relationship between the two courts, it is important to assess which legal-political tendency this change reflects: constructive dialogue or rather pushback against the CJEU. Determining this tendency also contributes to situating this jurisprudence in the broader debate about what seems to be increasing backlash and resistance against international courts. Given that the CJEU has so far encountered very few instances of open resistance by domestic courts, 29 qualifying the nature of this judgement of the CC is particularly interesting.
In order to assess whether this jurisprudence of the CC should be read as an instance of resistance or even as backlash against the CJEU, it is crucial to clarify these notions. In the discussion about the pushback against international courts, authors have suggested diverging concepts of backlash and resistance. Generally, backlash is considered to be the strongest form of pushback against courts. It is characterized by various factors. First, backlash is a "reaction to a development with the goal of reversing that development". 30 The actor that is lashing back aims to reinstate a status quo ante that has been modified by a development that is perceived by this actor as unfavourable. The thrust is thus a resetting response: the actor responds to a specific development rather than merely changing its approach towards an unmodified situation; and the actor aims at restoring a previous state of affairs. Second, backlash "targets the institutions as such and their authority" 31 rather than contesting a certain legal norm or its interpretation by a court. 32 The backlashing actor aims "to reduce the authority, competence, or jurisdiction of the court". 33 The pushback is thus of a structural nature; it attacks the court as an institution rather than challenging its jurisprudence with regard to a certain subject matter. Third, for pushback in its extreme form, the actor ultimately rejects the institution as a whole. 34 This is for example the case when a State withdraws from the jurisdiction of a court or when domestic actors including domestic courts cease to procedurally engage with a court and to implement its judgements. As some authors suggest, one can differentiate "backlash" as the extreme form of pushback from "resistance" as a lesser form of pushback. 35 Although fulfilling the first and the second criterion of backlash, resistance thus understood does not reject the institution as such. Instead, an actor that resists against an international court without lashing back "is still invested in the institution and seeks to reform it from within". 36 In the remainder of this section, I will use these three criteria for backlash and resistance in order to assess whether the recent jurisprudence of the German CC can be characterized as either of these forms of pushback.
29 Hofmann, supra note 5. More generally, backlash in various contexts is characterized by "actions taken in opposition to the system itself", Caron and Shirlow, supra note 5, at 160.
35 Soley & Steininger, supra note 5, at 241 using a different terminology than Madsen, Cebulak & Wiebusch, supra note 5.
I. A Resetting Response of the CC
The first criterion - a "reaction to a development with the goal of reversing that development" - allows for a relatively clear evaluation in the current setting. The CC does react to a development: the jurisprudence of the CJEU on the applicability of the EU Fundamental Rights Charter and its relationship to the domestic fundamental rights protection - in combination with an increasing density of EU law in many subject matters. Since the much-discussed decisions Melloni 37 and Åkerberg Fransson 38 from 2013, the CJEU has taken a broad approach to the applicability of the Charter, limiting at the same time the applicability of domestic fundamental rights. With regard to the scope of the Charter rights, it interpreted Article 51 of the Charter in a manner that went beyond the wording of this provision. While Article 51 stipulates that the Charter is addressed to "the Member States only when they are implementing Union law", the CJEU took the applicability to be broader, i.e. "where national legislation falls within the scope of European Union law". 39 According to this jurisprudence, "the applicability of European Union law entails the applicability of the fundamental rights guaranteed by the Charter". 40 This jurisprudence had consequences on two levels. By broadening the applicability of the Charter, the CJEU has expanded its authority in fundamental rights matters within the EU. As the institution competent for interpreting the Charter, the court obtained a say in a larger number of legal situations than before. This development towards a broadened authority is multiplied by the increase in the body of EU law. From the perspective of the CJEU, its say on fundamental rights questions is exclusive concerning fully harmonized subject matter. The applicability of domestic fundamental rights is excluded. 41 In contrast, the CJEU - at least in principle - shares its jurisdiction regarding fundamental rights in situations where the "action of the Member States is not entirely determined by European Union law". 42 The court accepted in Åkerberg Fransson a limited parallel applicability of EU and domestic fundamental rights. This parallel applicability is, however, subject to the condition that "the level of protection provided for by the Charter, as interpreted by the Court, and the primacy, unity and effectiveness of European Union law are not thereby compromised". 43 As a result, the room for the applicability of domestic fundamental rights and thus for the domestic constitutional courts to exercise their jurisdiction is restricted.
This jurisprudence has triggered critique by many judicial and non-judicial actors in the EU. The German CC was one of the fiercest critics of this jurisprudence. It has explicitly voiced its opposition to this understanding of Article 51 of the Charter in its decision on the antiterrorism legislation from April 2013. 44 The court threatened to consider the Åkerberg Fransson decision as an ultra vires act of the CJEU if it was to be understood as meaning that "absolutely any connection of a provision's subject-matter to the merely abstract scope of Union law, or merely incidental effects on Union law, would be sufficient for binding the Member States by the Union's fundamental rights set forth in the EUCFR". 45 This threat expressed the CC's concern that domestic fundamental rights as the court's own field of influence would lose practical importance. These concerns were primarily based on the CC's exclusiveness approach with regard to fundamental rights protection in the EU. The court considered - until it took, as outlined above, a different turn in November 2019 - that it can be either domestic or EU fundamental rights that are applicable to a given case, but not both. In an exclusiveness framework, a broader applicability of EU law automatically reduces the scope for applying domestic fundamental rights; and that would reduce the impact of the CC as an institution. The concern of the CC that such a development would take place could not be overcome by the subsequent jurisprudence of the CJEU. Although the CJEU took the critique by the German CC and by other actors into account and did not push the Åkerberg Fransson approach to its limit, a relatively broad applicability of EU fundamental rights remains the reality. 46 This is due both to the jurisprudence of the CJEU on the applicability of the Charter and to the general increase in subject matters that are regulated by EU law.
36 Id. Different understandings of the term resistance are adopted by other authors, e.g. Sandholtz, Bei & Caldwell, supra note 33, at 160; Madsen, Cebulak & Wiebusch, supra note 5.
The second consequence of the broad applicability of EU fundamental rights relates to the relationship between constitutional courts and general courts on the domestic level. A broad scope of the Charter leads to an empowerment of general courts in fundamental rights matters. This affects the judicial structure in legal orders such as the German one, which is characterized by a centralized - rather than a diffuse - model of fundamental rights protection. The empowerment of general courts in fundamental rights matters is facilitated by the principles of direct effect and primacy of EU law, which since their creation have strengthened domestic courts also beyond fundamental rights matters. 47 Domestic courts have the power and the duty to assess whether acts of domestic authorities are in conformity with EU law, including EU fundamental rights law, and to set aside these acts if they do not comply with EU law. Based on these principles, the general domestic courts become fundamental rights reviewing actors in their own right; they become "miniature constitutional courts". 48 This has caused the CC to lose its monopoly for fundamental rights review in the domestic legal order. In addition to this "disempowerment", 49 ordinary courts have gained influence on the fundamental rights jurisprudence of the CJEU. The preliminary reference procedure gives them a tool to influence how and to what extent the CJEU shapes fundamental rights protection in the EU. 50 The CC, which so far did not use the Charter rights as a standard of review, could not influence the CJEU's fundamental rights jurisprudence in the same manner. In other words, where the applicability of the Charter is broad, the general domestic courts obtain a stronger voice in fundamental rights matters to the detriment of the CC.
45 Id. at para. 91.
46 For the more restrictive formulation in the subsequent jurisprudence see e.g. Case C-206/13, Siragusa, EU:C:2014:126, paras. 24-25: "[Article 51 of the Charter] requires a certain degree of connection above and beyond the matters covered being closely related or one of those matters having an indirect impact on the other (...). In order to determine whether national legislation involves the implementation of EU law for the purposes of Article 51 of the Charter, some of the points to be determined are whether that legislation is intended to implement a provision of EU law; the nature of that legislation and whether it pursues objectives other than those covered by EU law, even if it is capable of indirectly affecting EU law; and also whether there are specific rules of EU law on the matter or capable of affecting it (...)." Bobek, supra note 47.
49 Piqani, supra note 47.
50 On the general empowerment of domestic courts, inter alia by the preliminary reference procedure, see e.g. Karen Alter, The European Court's Political Power, 19 WEST EUROPEAN POLITICS 458 (1996).
Unlike the Austrian CC, the German CC seems to have been affected most notably by the broadened authority of the CJEU and the resulting perceived encroachment on the CC's competence. In Austria, it was a struggle for authority between the Austrian CC and the general domestic courts that led the Austrian CC to apply EU fundamental rights. 51 In Germany, there was no such prominent struggle between domestic courts. Instead, the CC has aimed its attempt to re-establish its authority directly at the CJEU.
This struggle for authority with the CJEU has been apparent on several occasions since the Åkerberg Fransson decision of the CJEU. Most prominently, the CC had tried to maintain a say in fundamental rights cases within the scope of EU law by broadening its concept of constitutional identity review. Based on this notion and the idea of certain constitutional principles that are beyond the reach of European integration, the court constructed a way to conduct a limited review of fundamental rights. 52 This review has become possible by partially expanding the protection of human dignity as required by the German Basic Law to other rights. However, conducting this kind of review comes with considerable challenges. Conceptually, it claims the primacy of certain constitutional law provisions over EU law and thus carries an inherent potential for conflict with the CJEU. It also provides a target for critique from the perspective of domestic constitutional law, in particular for eroding the notion of human dignity as conceived of by the German Basic Law. 53 From the beginning, this review mechanism therefore did not seem able to fill the jurisdictional gap that the CC feared would result from a broad applicability of EU fundamental rights.
It is clear from the decisions Right to be forgotten I and Right to be forgotten II that the CC was motivated by the above developments when establishing its new jurisprudence on the Charter as review standard. In Right to be forgotten II, the CC addresses the reduction in its own importance that results from an increasing density of EU law in many subject matters and thus from a broadening applicability of EU fundamental rights rather than domestic fundamental rights. 54 It argues that without using the Charter as review standard, the court would be less and less able to exercise its judicial function with regard to fundamental rights protection. 55 Although claiming to give reasons why this new jurisprudence is necessary to guarantee fundamental rights protection for the individuals concerned, the court uses institutional arguments rather than arguments referring to the level of substantive protection. This shows the extent to which the court is mainly motivated by its importance as an institution and the attempt to regain more influence.
The institutional nature of the argumentation is especially vivid when the court addresses the alleged "gap" in the fundamental rights protection that would ensue if it did not use EU fundamental rights as standard of review. 56 The court does not show - or even attempt to show - that the substantive level of human rights protection would be lower for the individuals concerned. Instead, the court seeks to show that the existing procedural mechanisms provided by general courts do not correspond to the procedural mechanism that is the centralized review by the CC. In doing so, it assumes that effective fundamental rights protection necessarily requires a centralized system as in the German legal order, where a constitutional court holds the monopoly for setting aside legal acts which are not in conformity with fundamental rights law. The court does not provide arguments to support this assumption. Notably, it does not compare the centralized system to other possible systems for fundamental rights protection. In the EU, the fact that the centralized system is not the only option is evidenced by Member States who opted for a diffuse system of fundamental rights protection where general courts exercise such protection. 57 EU law follows a similar approach. Based on the CJEU's long-standing jurisprudence on the primacy of EU law, all domestic courts are required to set aside domestic law that is in conflict with EU law. As established in Simmenthal, "every national court must, in a case within its jurisdiction, apply Community law in its entirety and protect rights which the latter confers on individuals and must accordingly set aside any provision of national law which may conflict with it". 58 It is not a required part of this framework that a constitutional court oversees how general domestic courts apply EU fundamental rights. 59 This goes to show that choosing a centralized system of fundamental rights review is not a legal requirement but rather a conceptual decision. 60 The CC transposes this choice that the German Basic Law has made for domestic fundamental rights to the context of EU fundamental rights protection - without giving any solid legal arguments. Rather than a legal requirement, applying this model to the EU law context is a deliberate strategic move that benefits the CC as an institution - and that is indeed aimed at doing so.
51 See the reaction of the Austrian Supreme Court, decision of Dec. 17, 2012, 9 Ob
55 Id.
The new approach taken by the CC responds to both aspects in which the jurisprudence of the CJEU has challenged the influence of the CC: the relationship to the general domestic courts and the relationship to the CJEU. Regarding the first aspect, the CC has taken back some of the control that it had lost to the general domestic courts. The CC regains full oversight over how general domestic courts apply fundamental rights law. In matters in which EU fundamental rights are applicable, the CC can annul judgments of the general courts that do not comply with the Charter (as interpreted by the CC). This re-establishes the relationship that existed between general courts and the CC before EU fundamental rights even came into being. It goes considerably beyond the limited review that had existed until Right to be forgotten II. Before this decision, the CC had confined itself to reviewing whether general domestic courts of last instance have fulfilled their obligation to address the CJEU in a preliminary reference procedure. This review, which was based on domestic constitutional law (the right to one's lawful judge as enshrined in Article 101 of the Basic Law), had led to different approaches taken by a chamber of the first senate and the second senate of the CC as to how strict the review exercised by the CC should be. This issue will now have less relevance as a full review has been re-established. Further, the new jurisprudence is likely to change the situation with regard to preliminary references. Although the CC has left it open in its decisions on the Right to be forgotten, the obligation of last instance courts under Article 267 TFEU to ask a preliminary question to the CJEU is likely to shift from the general courts of last instance to the CC. It is possible that these courts will thus be less inclined to involve the CJEU on a voluntary basis - which would leave more room for the CC to frame the dialogue with the CJEU.
As to the relationship between the CC and the CJEU, the new jurisprudence aims to reinstate the status quo ante. This means recreating a power balance between the two courts that is at least as favourable to the CC as it was before the CJEU started to expand its authority in fundamental rights matters. Assessing how the court achieves this aim is the object of the following section.
II. Reducing Authority of the CJEU
The second criterion for categorizing the decisions on the Right to be forgotten as backlash or resistance against the CJEU is that these decisions represent an action that aims "to reduce the authority, competence, or jurisdiction" of the CJEU. When assessing this criterion, the result is more ambivalent than with regard to the first criterion. Although some elements of this new jurisprudence might have an EU law friendly effect, other elements limit, for the German legal order, the impact of the Charter as well as - directly and indirectly - the authority of the CJEU in fundamental rights matters.
57 E.g. Finland: Chapter 10 Section 106 of the Constitution; Sweden: Instrument of Government Chapter 11, Article 14.
To start with, it is uncertain whether this new jurisprudence will strengthen the Charter within the domestic legal order or whether it will marginalize it to some extent. In the latter case, the impact of the jurisprudence of the CJEU in fundamental rights matters and the institutional standing of the CJEU as fundamental rights court would be affected. The Charter is the main legal instrument on which the authority of the CJEU in fundamental rights matters is based. Effects on the Charter and on the authority of the CJEU can result directly from the legal requirements that the new jurisprudence imposes on domestic actors (see below) and indirectly from the broader incentives that the jurisprudence sets for these actors. With regard to the broader incentives, some early commentators of these CC decisions have argued that the new jurisprudence will have positive effects on the visibility and application of the Charter. 61 According to this argumentation, the fact that the CC uses the Charter as a review standard might motivate legal practitioners to engage more with the content and interpretation of the Charter and to bring forward more arguments based on the Charter than before. However, one can have doubts as to whether the incentive created by the CC will be as positive. As the new framework means limiting the applicability of the Charter as standard of review for the CC almost exclusively to fully harmonized subject matters, the CC takes an explicitly restrictive approach. By communicating this restrictive approach to other domestic actors, the CC might create an atmosphere of reluctance rather than of encouragement. It thus remains to be seen to what extent such a restrictive approach can in fact motivate legal practitioners to engage more with the Charter than before or whether it might, conversely, even discourage them in certain situations.
To assess how the authority of the CJEU is affected by the new jurisprudence of the CC, it is useful to distinguish between situations of full harmonisation and situations beyond full harmonisation. For fully harmonized subject matters, one can argue that the effectiveness of the Charter and the impact of the CJEU have a reasonable chance of being enhanced in the German legal order. The CC will be an additional actor monitoring the respect of Charter rights, reviewing the decisions of general courts as to how they guarantee these rights. If the CC closely follows the jurisprudence of the CJEU as to the interpretation of the Charter rights, the new approach of the CC would not negatively affect the authority of the CJEU. This, however, requires the CC to address more preliminary questions to the CJEU than it did so far - which it claims to intend to do. 62 The alternative scenario is that the CC will interpret the Charter by itself without involving the CJEU. If this scenario becomes reality, the monopoly of the CJEU for the interpretation of EU law will be undermined. Future practice will thus show what the effect of the CC's new approach on the authority of the CJEU in fully harmonized subject matters will be.
That being said, the CC does not completely cede the yardstick in fully harmonized situations to EU law and to the interpretation by the CJEU. In particular, it is important to note that the CC does not give up its jurisprudence on constitutional identity review. The concept of constitutional identity aims at disapplying EU law based on domestic constitutional law provisions. In order to have a say in situations that the CC considered to be entirely determined by EU law (such as in the context of the European Arrest Warrant), the CC has in the past used constitutional identity to establish a limited form of fundamental rights review based on domestic law. 63 This jurisprudence has been developed by the second senate of the CC, while the decisions on the Right to be forgotten have been issued by the first senate of the CC. Considering that the latter declares that its new approach has no effect on the existing jurisprudence on constitutional identity review, it is likely that this instrument will continue to be used by the CC in the future if deemed necessary. 64 As a result, the CC has broadened its possibilities in fully harmonized situations: it can decide based on EU fundamental rights and, as before, it can set aside EU law based on the domestic law principle of constitutional identity. However, by applying EU fundamental rights to situations of full harmonisation in which the CC previously did not adjudicate, the CC might have less incentive to use its other mechanisms, such as constitutional identity review, to construct a basis for its jurisdiction. From now on, the CC can have a say on such situations by simply using EU fundamental rights as a review standard. The motivation to continue broadening the constitutional identity review will thus be less strong. This might reduce the cases in which the CC challenges the CJEU with regard to specific norms of EU law, i.e. it might reduce the risk of norm-related contestation. 65 Yet the potential cases in which the CC will challenge the CJEU based on constitutional identity might appear even more conflictual than before.
61 Kleinlein, supra note 3. See also the reaction of the president of the CJEU Koen Lenaerts as quoted here: https://twitter.com/KlausHempel2/status/1200071216654159874.
The prospects for the authority of the CJEU with regard to subject matters that are not fully harmonized are rather unfavourable from the outset. Here, the novel framework of parallel applicability is of particular interest. In fact, the parallel applicability of domestic and EU fundamental rights as spelled out by the CC seems to be a double-edged sword. On the one hand, the CC might advocate that its approach represents more tolerance towards EU law as standard of review. This seems to be an improvement for the relationship between EU and domestic law and thus for the relationship between the CC and the CJEU - as it is less confrontational than the concept of exclusive applicability of either EU or domestic fundamental rights. However, the way in which the CC designs its version of parallel applicability has the potential of having the opposite effect. The issue lies with the court's "primary standard of review". As outlined above, the CC has created a presumption that - beyond full harmonisation - domestic law will in general serve as standard of review; only in exceptional cases can the presumption be rebutted to the effect that EU fundamental rights are used as standard of review. The role of EU fundamental rights in this context is thus very limited.
This presumption for domestic fundamental rights, which the CC has created as a guideline for its own review, raises the question as to how it might affect the review exercised by general domestic courts. Although the presumption appears to be of a procedural nature, thus only influencing the review by the CC, the statements of the CC in Right to be forgotten I convey a different impression. They seem to imply that the "primary standard of review" translates into a substantive rank relationship between EU and domestic fundamental rights - with domestic fundamental rights applicability trumping EU fundamental rights in situations beyond full harmonisation. If these statements are meant to establish such a substantive rank relationship, general domestic courts in Germany would henceforth be required by the CC to give precedence to domestic fundamental rights in this context. Now, it is true that the CC states that its jurisprudence does not prevent general domestic courts from applying EU fundamental rights and, where necessary, addressing the CJEU in a preliminary reference procedure according to Article 267 TFEU. 66 This statement appears to go against the establishment of a substantive rank relationship. Immediately after this statement, however, the CC stresses that the general courts have to "also" apply the domestic fundamental rights (in situations without full harmonisation); and that in this context, the "abovementioned" principles regarding the "substantive relationship" ("materielles Verhältnis") between EU and domestic fundamental rights apply. 67 The CC thus refers to its "primary standard of review" as the "substantive relationship" between EU and domestic fundamental rights. Accordingly, the CC seems to consider the "primary standard of review" to mean "primary applicability" of domestic fundamental rights by all courts within the German legal order. Primary applicability would thus require general domestic courts to give precedence to domestic fundamental rights. Contrary to what the CC claims, the general courts would thus no longer be free to apply EU fundamental rights in the way they did before. This reduces the impact of EU fundamental rights on the German legal order.
Obliging the general domestic courts to primarily apply domestic fundamental rights potentially has a restricting effect on how often general courts will initiate preliminary reference procedures under Article 267 TFEU. When, beyond full harmonisation, domestic fundamental rights are applicable in principle and when the bar to rebut the presumption for this primary standard is very high, parties to legal disputes are likely to focus on their domestic constitutional law arguments rather than trying to rebut the presumption. Courts might thus receive fewer incentives from parties to initiate a preliminary reference procedure before the CJEU. Further, the demanding requirements to rebut the presumption would oblige courts to make a considerable additional effort in their argumentation if they want to apply EU fundamental rights according to the guidelines established by the CC. This requirement creates a new practical hurdle for applying EU fundamental rights as compared to what existed prior to the new jurisprudence of the CC.
The authority of the CJEU is affected in two ways by the new approach of the CC concerning the substantive relationship between EU and domestic fundamental rights in situations in which EU law does not provide full harmonisation. First, if domestic courts apply EU fundamental rights to a lesser degree, this reduces the impact in Germany of the CJEU's fundamental rights jurisprudence. The less EU fundamental rights are applied, the less it matters for the German legal order what the CJEU has to say on these rights. The importance of the CJEU as a fundamental rights court diminishes for the German legal order. Second, the scope of authority exercised by the CJEU is affected when general domestic courts are de facto dissuaded from referring preliminary questions to the CJEU. It is important for the CJEU, as it is for other international courts, that a broad number of actors are able and willing to bring legal questions before the court. 68 Fewer cases mean less opportunity for the court to impact the legal order. For the CJEU in its capacity as a fundamental rights court, it is particularly relevant if general domestic courts are dissuaded from bringing cases to the court. There is no direct access for individuals to the CJEU in fundamental rights matters, as there is before the ECtHR. Consequently, the preliminary reference procedure and the ability and willingness of domestic courts to initiate this procedure are important as a substitute for direct access. If the jurisprudence of the CC dissuades the general courts from this procedure, it affects the authority of the CJEU, not only with regard to the German legal order but also conceptually with regard to the CJEU as the fundamental rights court of the EU.
The authority of the CJEU is, however, not only affected in the context of subject matters that are not fully harmonized. There are two further aspects that concern the fundamental rights authority of the CJEU on a more general level. First, by establishing this new approach to the relationship between EU and domestic fundamental rights, the CC de facto rejects the jurisprudence of the CJEU on Article 51 of the Charter in its entirety. If henceforth both the CC and the general domestic courts in Germany decide about the applicability of EU fundamental rights according to the standards set out by the CC in the decisions of November 2019, this de facto leaves no room for Article 51 of the Charter and the jurisprudence of the CJEU on the matter. For the German legal order, the competence of the CJEU for determining the applicability of the Charter is undermined. The CC, which has been very critical of how the CJEU interpreted Article 51, has now decided not to fight about this interpretation any longer. Rather, it appears to opt for a more radical solution, substituting the Article 51 system with its own approach to the relationship between EU and domestic fundamental rights. In doing so, the CC takes back control over this relationship for the German legal order to the detriment of the authority of the CJEU.
The second general aspect in which the new jurisprudence of the CC affects the authority of the CJEU relates to the dialogue between the two courts. At first sight, the CC seems open to actively involving the CJEU in EU fundamental rights matters through the preliminary reference procedure. The CC mentions that it wants to address the CJEU in relation to the interpretation of Charter provisions. However, when it comes to applying EU fundamental rights, determining the content of Charter rights is only the second step. The first step is to decide whether the Charter rights are applicable at all. With regard to the applicability of the Charter, the CC does not intend to involve the CJEU by way of preliminary reference. In particular, the CC does not plan to ask a preliminary question on the issues that are crucial to the CC's new delimitation between EU and domestic fundamental rights: whether a specific provision of EU secondary law fully harmonizes a subject matter or leaves leeway for Member States; and whether a provision of EU secondary law contains specific fundamental rights requirements that would be able to rebut the presumption for domestic fundamental rights as the primary standard of review. These are questions of EU law that, so it seems, the CC wants to decide on its own. From the perspective of the CC, this approach makes sense. Only by not involving the CJEU in the question of whether EU fundamental rights should be applied in a specific case can the CC re-establish its authority over the relationship between EU and domestic fundamental rights and put into effect its motive to reinstate a situation in which the CJEU has less power.
Finally, the CC weakens the authority of the CJEU for determining the applicability of the Charter in a more indirect manner as well. According to the CC's approach, the EU actor competent for determining the applicability of the Charter is primarily the EU legislator rather than the CJEU. This claim results from the statement of the CC about how the presumption for domestic fundamental rights can be rebutted, concerning the first of the two possibilities described in Part B of this paper. A rebuttal requires that ordinary EU legislation explicitly establishes a certain fundamental rights standard that precludes Member States from taking their own approach. A mere reference to one of the Charter rights in the legislative act is not sufficient. Conceptually, the CC thus gives considerable power to the EU legislator to decide whether Charter rights are applicable or not. However, such full discretion of the EU legislator is not in line with the idea that the Charter has a legal rank equivalent to primary EU law. Primary EU law should be binding on the EU legislator, and its applicability should not be at the legislator's full discretion. Rather, it should be (inter alia) for the CJEU to ensure that the legislator respects these standards. Thus, when taking the formulation chosen by the EU legislator as the point of reference for whether to apply the Charter rights or not, the CC undermines both the primary law rank of the Charter and the corresponding authority of the CJEU.
Despite some ambiguities as to the future effects that the jurisprudence of the CC will have in practice, the above assessment has highlighted that the CC at least to some extent aims at reducing the authority of the CJEU in fundamental rights matters. One can thus characterize this jurisprudence as pushback of a structural nature that addresses the CJEU as a fundamental rights court.
III. Rejection or Acceptance of the CJEU as an Institution?
The motivation of the CC to react to a development triggered by the CJEU and its aim to reduce the authority of the CJEU in fundamental rights matters do not necessarily mean that the CC rejects the CJEU as an institution altogether. Such a rejection would require additional indications, for example an unwillingness to engage in judicial dialogue with the CJEU or open noncompliance with its judgments. This is not the case here. At least formally, the CC emphasises that the CJEU has the monopoly of interpretation with regard to EU law and that the CC will refer to the CJEU in a preliminary reference procedure in cases in which the interpretation of the Charter provisions is unclear. In view of its reluctance so far to refer a question to the CJEU, the CC even stresses that its new jurisprudence will lead it to refer questions to the CJEU more often. The CC thus accepts the CJEU as an institution competent to rule in fundamental rights matters.
Moreover, based on its new approach, the CC seeks to actively influence the fundamental rights jurisprudence of the CJEU in substance. Already in the past, the CC was not shy to explicitly put forward its own interpretation of EU law provisions on different subject matters, with the aim to convince the CJEU of this interpretation. 69 Increasing the number of preliminary references to the CJEU on fundamental rights issues will give more opportunities to the CC to suggest and promote its view as to how to interpret Charter provisions. Seeking to influence the CJEU is based on an inherent acceptance of the CJEU as an institution. This motivation corresponds to the abovementioned criterion for resistance as a category of pushback: a willingness to "reform" the institution from within. 70 The CC takes this approach.
In sum, the new jurisprudence of the CC should be characterized as an instance of resistance. The CC resists the CJEU in its function as a fundamental rights court, attempting to reduce the authority of the CJEU and reversing a development that it considered to be unfavourable to its own authority. This is structural pushback aimed at the CJEU's function rather than at individual decisions or norms. However, the CC does not reject the CJEU as a fundamental rights institution altogether. It accepts, in principle, that the CJEU has such a role, although seeking to limit this role. Consequently, the third criterion of backlash, which would be the extreme form of pushback, is not fulfilled. The jurisprudence thus qualifies as resistance rather than backlash.
An additional aspect that contributes to categorizing the jurisprudence of the CC as resistance is the question of how many open conflicts between the two courts are likely to arise in the future. It is true that it is uncertain to what extent the CJEU will accept the new approach of the CC to consider domestic law as a primary standard of review. This element of the new jurisprudence indeed bears the potential for a conflict. Yet at least on its surface, the language of parallel applicability of EU and domestic fundamental rights seems less confrontational than the previous exclusiveness approach of the CC; and less confrontational than the CC's constitutional identity review. Although the struggle for authority between the courts persists in substance, the "parallel applicability" framing might trigger a more harmonious perception. If, however, the CJEU rejects the CC's version of parallel applicability, the harmony might be short-lived.
D. Conclusion
The German CC is neither the first nor the only constitutional court in the EU to apply the Charter as a standard of review. However, the new jurisprudence of the CC is remarkable in several respects. The CC has clearly expressed its motivation to counter a development that had reduced its importance as an institution. The court aims at rebalancing to its benefit the institutional relationship between the CJEU and itself. To achieve that aim, the CC establishes a complex system delimiting the applicability of domestic and EU fundamental rights. This system reduces the relevant authority of the CJEU as far as possible without openly defying the CJEU. In doing so, the new jurisprudence qualifies as resistance against the increasing role of the CJEU in fundamental rights matters, though without constituting an instance of backlash. The CC measures its resistance to the extent necessary to regain authority without rejecting the CJEU as an institution. 69 Eg. BVerfG, Jan. 14, 2014, docket number 2 BvR 2728/13, paras. 55-100; BVerfG, Jul. 18, 2017, docket number 2 BvR 859/15. On such "preemptive opinions" about the interpretation of EU law by domestic courts, see Stacy Nyikos, Strategic interaction among courts within the preliminary reference process - stage 1: national court preemptive opinions, 45 EUR. J. POL. RES. 527 (2016). 70 Soley & Steininger, supra note 5, 241.
Compared with several other much-discussed frictions between domestic courts and the CJEU, the CC's decisions on the Right to be forgotten are of a more structural nature. Many of these frictions mainly concerned the contestation of specific norms of EU law or their interpretation by the CJEU, such as in the Ajos decision of the Danish Supreme Court 71 or the Taricco saga between the Italian Constitutional Court and the CJEU. 72 The decisions of the CC go beyond that. They address the institutional role of the CJEU rather than its jurisprudence on isolated legal issues. As a result, their importance as instances of pushback is higher from a conceptual perspective, despite the less confrontational tone.
If one takes into account the role of the CC as a well-established, influential constitutional court in Europe, it is not surprising that the CC takes, yet again, a clear stance against the CJEU. Powerful actors are arguably more likely to push back against a court than less powerful actors. 73 And this might be even more so when these powerful actors see their authority diminishing. As the CC has been a powerful actor both domestically with regard to other judicial and constitutional actors as well as within the European sphere in relation to the CJEU and the constitutional courts of other member states, this has made it more likely for the court eventually to resist on a structural basis. At the same time, the influential role of the CJEU increases the suspense for the future: it makes it particularly interesting to see how the CJEU and other courts in the EU will react to this development. | 2020-03-26T10:33:42.113Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "5df377c4ee05a2ffcf09ee42eff39b12b6b4860b",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/710D095B616D3B65D5EB0BAEE0BB92B2/S2071832220000164a.pdf/div-class-title-backlash-against-the-court-of-justice-of-the-eu-the-recent-jurisprudence-of-the-german-constitutional-court-on-eu-fundamental-rights-as-a-standard-of-review-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9159664288ad0bb4901778791f9a1670b2dbab26",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Political Science"
]
} |
210988249 | pes2o/s2orc | v3-fos-license | Sugar Beet Agronomic Performance Evolution in NW Spain in Future Scenarios of Climate Change
Changes in environmental conditions resulting from Climate Change are expected to have a major impact on crops. In order to foresee adaptation measures and to minimize yield decline, it is necessary to estimate the effect of those changes on the evapotranspiration and on the associated irrigation needs of crops. In the study presented herein, future conditions extracted from RCP4.5 scenario of IPCC, particularized for Castilla-y-León (Spain), were used as inputs for FAO crop simulation model (AquaCrop) to estimate sugar beet agronomic performance in the medium-term (2050 and 2070). A regional analysis of future trends in terms of yield, biomass and CO 2 sequestration was carried out. An annual ET 0 increase of up to 200 mm was estimated in 2050 and 2070 scenarios, with ETc increases of up to 40 mm/month. At current irrigation levels, temperature rise would be accompanied by a 9% decrease in yield and a ca. 6% decrease in assimilated CO 2 in the 2050 and 2070 scenarios. However, it is also shown that the implementation of adequate adaptation measures, in combination with a more efficient irrigation management, may result in up to 17% higher yields and in the storage of between 9% and 13% higher amounts of CO 2 .
Introduction
Spring-sown sugar beet is an industrial crop of great importance in the Castilla-y-León region (Northwestern Spain), which accounts for 87% of Spanish production of spring-sown sugar beet, with over 24,000 ha [1], and which is the area of the European Union that achieves the highest yields per hectare [2]. As is also the case for many other crops in Continentalized Mediterranean climate areas, sugar beet requires more water than that provided by rainfall, and thus irrigation is necessary to satisfy its water requirements. Besides, irrigation is the most determining factor in its production, being an indispensable practice in Spain [3]. However, the availability of water for crop irrigation is expected to decrease in the future due to increased demands from other sectors (drinking and household needs, recreation, industry and commerce, etc.) and because of changes in environmental conditions [4,5]. The latter are, in fact, the main source of uncertainty for the viability of sugar beet cultivation in this region in the future.
Baseline Scenario Climatic Data
As indicated in the flowchart shown in Figure S1, daily climatic data from 2001 to 2014 was collected from 29 weather stations that belong to the SIAR (Agroclimatic Information System for Irrigation) network of the MAPA (Spanish Ministry of Agriculture, Fisheries and Food). This climatic data was used to calculate a representative meteorological year for each season, in order to build the baseline scenario for the study. This time series was chosen over other longer (but less local) data series because field data can improve the regional projections of crop models [31]. Mann-Kendall tau test was conducted to detect trends in the dataset.
Projected Climatic Data for 2050 and 2070 Scenarios
Climate data projected for 2050 and 2070 in the locations of the SIAR stations were obtained through the WorldClim global climate layers [32] (http://www.worldclim.org). In this project, different global climate models output data from CMIP5 (IPCC Coupled Model Intercomparison Project Phase 5) were downscaled and calibrated (bias corrected) using WorldClim 1.4 as a reference baseline "current" climate [33]. More information on CMIP5 coordinated multi-model dataset, which ensembles 40 GCMs (Global Climate Model) from 20 research groups, may be found in Taylor et al. [34]. Future climate data generated with those GCMs usually has a spatial resolution of hundreds of kilometers, which is problematic for regional studies that consider variation at much higher spatial resolution. Hence, high-resolution information from low-resolution variables needs to be inferred through a downscaling process, which can be conducted in different ways [35]. In particular, WorldClim project uses a methodology that assumes that change in climate is relatively stable over space (high spatial autocorrelation).
The layers selected in this study (monthly average minimum temperature, monthly average maximum temperature and monthly total precipitation) were projections of the Earth system model MPI-ESM-LR, developed by the Max Planck Institute for Meteorology (MPI-M) for RCP4.5, and had a 30-s (of a longitude/latitude degree) spatial resolution (about 900 m at the equator), and 1 month temporal resolution. The MPI-ESM consists of coupled general circulation models for the atmosphere and the ocean, as well as subsystem models for land and vegetation, and for the marine biogeochemistry. Thus, the carbon cycle has been added to the model system [36].
The RCP4.5 scenario, which represents a stabilization-without-overshoot pathway to 4.5 W·m−2 at stabilization after 2100 [37,38], was chosen because it is not as optimistic regarding GHG reduction as RCP2.6, but it does consider a reduction in greenhouse gases starting before 2050. In this model, global mean surface temperature change is estimated at 1.4 °C in 2046-2065, and at 1.8 °C in 2081-2100.
Calculated Reference Evapotranspiration and Crop Evapotranspiration
Penman-Monteith simplified equation [39], adopted by the FAO, was used for the calculation of monthly and annual ET 0 values, both in 2050 (average for 2041-2060) and 2070 (average for 2061-2080), using temperature and precipitation data from previous section. Wind, humidity and radiation parameters were assumed stationary. The choice of this method would be supported by the fact that many studies have successfully applied it to different climates and time scales [40]. Moreover, in Spain it has been used, for example, by Espadafor et al. [9] and by Vicente-Serrano et al. [10] to examine historical trends of ET 0 .
To validate this method, calculated values were compared with real values coming from the stations, and regression lines were fitted, obtaining a coefficient of determination R 2 of 0.998 (thus confirming that the method of calculation was perfectly acceptable).
Monthly crop evapotranspiration (ETc) was calculated as the product of monthly ET 0 and Kc crop-specific coefficients. Monthly values for sugar beet K C in the area of study were obtained from AIMCRA (Spanish Research Association for Sugar Beet Crop Improvement) [41].
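As an illustration of the calculation chain described above (FAO-56 Penman-Monteith ET 0 followed by ETc = Kc × ET 0), a minimal Python sketch is given below. It is not the workflow used in the study, which was driven by SIAR station records and assumed stationary wind, humidity and radiation; the function signature, the fixed altitude, the Kc value and the example July inputs are assumptions chosen only for demonstration.

```python
import math

def et0_fao56_pm(t_mean, rn, g, u2, es, ea, altitude=800.0):
    """Daily reference evapotranspiration (mm/day), FAO-56 Penman-Monteith.
    t_mean: mean air temperature (deg C); rn: net radiation (MJ m-2 day-1);
    g: soil heat flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s);
    es, ea: saturation and actual vapour pressure (kPa)."""
    # Atmospheric pressure (kPa) from altitude, FAO-56 Eq. 7
    p = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26
    gamma = 0.000665 * p  # psychrometric constant (kPa/degC)
    # Slope of the saturation vapour pressure curve (kPa/degC), FAO-56 Eq. 13
    delta = (4098.0 * 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
             / (t_mean + 237.3) ** 2)
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

def etc_from_kc(et0, kc):
    """Crop evapotranspiration as ETc = Kc * ET0 (same units as ET0)."""
    return kc * et0

# Illustrative July day for a continental Mediterranean site (values assumed)
et0 = et0_fao56_pm(t_mean=24.0, rn=17.5, g=0.0, u2=2.0, es=3.0, ea=1.2)
print(round(et0, 2), "mm/day ET0;", round(etc_from_kc(et0, kc=1.15), 2), "mm/day ETc")
```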
AquaCrop
A detailed description of the model can be found in [42] and [18]. Minimum and maximum temperatures, ET 0 , rainfall and CO 2 concentrations (from Mauna Loa Observatory (Hawaii) records and estimated values for the future, discussed in Section 2.3) were supplied as climate inputs. An appropriate irrigation schedule for crops in this region, based on AIMCRA recommendations [41], with a total dose of 553 mm during the whole cycle, was chosen. This irrigation dose was considered as fixed for predictions in the future scenarios.
Crop Model Calibration
By default AquaCrop offers files for the simulation of different crops, and in the case of sugar beet, the model is automatically calibrated and validated to Foggia (Italy) in 2000. Stricevic et al. [43] calibrated the model for the specific conditions of northern Serbia, concluding that this calibration only implied small changes of a few of the default model coefficients, illustrating the resilience of the model. Garcia-Vila et al. [25] recently calibrated and validated the model for different irrigation water allocations in the two main producing areas in Spain. Consequently, in this study the crop parameters for sugar beet were adjusted taking into consideration aforementioned works to obtain typical yields in the area of study (of over 100 t/ha).
In general, it is more suitable to study the different crop stages through the growing-degree days (GDD), to better reflect the plant physiology [17]. In this study, GDD was used for the baseline scenario, and both GDD and days were used for future projections of the crop growth cycle (GDD for comparisons with the baseline scenario and days for the extended cultivation period calculations). The necessary GDD to achieve each growth stage (Table 1) were chosen on the basis of field data collected in different locations in the area of study [44] and on data available in the literature. Details on sowing and harvesting dates have been reported in a previous paper [44]. As in the calibration proposed by Garcia-Vila et al. [25], it was also decided to slightly increment the water productivity (WP) parameter. This modification would be supported by the conclusions of previous research works that suggest that, although sugar beet is a C3 species, it is very efficient in water use, with a behavior closer to that of C4 crops [48]. Planting density in the model was increased to 125,000 plants/ha, which was the density used in real cultivation conditions [41,44]. The harvest index (HI) was kept at 70%, in agreement with Martínez Quesada [49] and with field data. As in the study by Stricevic et al. [43], soil fertility was not addressed, given that nutrient requirements were fully satisfied following AIMCRA recommendations [41]. Soil types were not specified either.
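To make the growing-degree-day logic concrete, the sketch below accumulates daily GDD from minimum and maximum temperatures and reports the day on which a stage target is reached; with a warmer series, the same target is reached earlier, which is the mechanism behind the shorter future crop cycles discussed later in the paper. This is an illustrative sketch only: the base temperature, the optional upper cut-off and the synthetic temperature series are assumptions, not parameters taken from Table 1 or from the AquaCrop calibration.

```python
def daily_gdd(t_min, t_max, t_base=3.0, t_upper=None):
    """Growing degree days for one day from min/max temperature (deg C).
    t_base and t_upper are illustrative thresholds, not values from the study."""
    t_mean = (t_min + t_max) / 2.0
    if t_upper is not None:
        t_mean = min(t_mean, t_upper)
    return max(t_mean - t_base, 0.0)

def day_of_stage(tmin_series, tmax_series, gdd_target):
    """Return the first day index on which cumulative GDD reaches gdd_target."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(zip(tmin_series, tmax_series), start=1):
        total += daily_gdd(tmin, tmax)
        if total >= gdd_target:
            return day
    return None  # target not reached within the series

# With a warmer series, the same GDD target is reached earlier (shorter cycle)
baseline = day_of_stage([5.0] * 300, [17.0] * 300, gdd_target=900)
warmer = day_of_stage([7.4] * 300, [19.4] * 300, gdd_target=900)  # ~+2.4 degC
print(baseline, "days vs", warmer, "days to reach the same stage")
```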
Validation was conducted by calculating the error between real production data, available from experiments conducted in 2011 and 2012 growing seasons, and that calculated with AquaCrop. For this, as in other works [50], the root mean square error (RMSE) and the normalized root mean square error (RMSEn) were calculated. The global Root Mean Square Error (RMSE) was determined as:

RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (S_i - A_i)^2 }

where A_i = experimental yield, S_i = simulated yield and n = number of observations. Units are t/ha of dry matter yield. The normalized root mean square error was calculated as:

RMSEn = \frac{RMSE}{\bar{A}} \times 100

where \bar{A} = mean observed data. Values of RMSEn smaller than 10% are considered as excellent, between 10 and 20% as good, between 20% and 30% as fair and, if larger than 30%, as poor [51].
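For completeness, the two error statistics defined above translate directly into code. The snippet below is a straightforward transcription of those formulas; the observed and simulated yields used in the example are invented for illustration and are not the values behind Table 2.

```python
import math

def rmse(observed, simulated):
    """Root mean square error between observed and simulated yields (t/ha)."""
    n = len(observed)
    return math.sqrt(sum((s - a) ** 2 for a, s in zip(observed, simulated)) / n)

def rmse_n(observed, simulated):
    """Normalized RMSE (%), i.e. RMSE divided by the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    return 100.0 * rmse(observed, simulated) / mean_obs

# Hypothetical dry-matter yields (t/ha); an RMSEn below 10% rates as "excellent"
obs = [22.1, 24.3, 21.8, 23.5]
sim = [21.0, 25.1, 22.6, 22.9]
print(round(rmse(obs, sim), 2), "t/ha;", round(rmse_n(obs, sim), 1), "%")
```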
Carbon Sequestration
Data about carbon content in the different parts of the sugar beet was obtained from experimental data [44], with average values of 43.5% in roots and 37.5% in leaves. Net CO 2 uptake (i.e., carbon storage) was obtained by multiplying dry matter, C content and the C to CO 2 conversion factor (44/12, i.e., the ratio between the molar mass of CO 2 and the molar mass of C).
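The conversion from dry matter to net CO 2 uptake is plain arithmetic and can be written out as follows. The carbon contents (43.5% for roots, 37.5% for leaves) and the 44/12 factor are those stated above, while the dry-matter partition used in the example is hypothetical.

```python
C_TO_CO2 = 44.0 / 12.0  # ratio of the molar masses of CO2 and C

def co2_uptake(dry_matter_t_ha, carbon_fraction):
    """CO2 stored (t/ha) = dry matter (t/ha) x C content x 44/12."""
    return dry_matter_t_ha * carbon_fraction * C_TO_CO2

# Hypothetical partition of total dry matter between roots and leaves (t/ha)
roots_dm, leaves_dm = 24.0, 6.0  # illustrative values only
total_co2 = co2_uptake(roots_dm, 0.435) + co2_uptake(leaves_dm, 0.375)
print(round(total_co2, 1), "t CO2/ha")
```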
Cartography and Spatial Interpolation
ArcGIS v.10 software (Esri, Redlands, CA, USA) was used to extract monthly average minimum and maximum temperatures and monthly total precipitation data (used as inputs to the crop model) from WorldClim layers for the coordinates of every weather station. It was also used to generate annual reference evapotranspiration, monthly crop evapotranspiration, annual yield, annual biomass and annual CO 2 sequestration maps. These are shown in the results section below. As for spatial interpolation, ordinary kriging method in Geostatistical Analyst toolbox was used. A spherical isotropic semivariogram [52] was chosen on the basis of RMSE criterion. IDW (Inverse Distance Weighting) was avoided in order to minimize points isolated by bull's-eye effect, while Spline was not chosen either, since it would be more appropriate for smaller scales [53]. The spatial analyst toolbox from ArcGIS was used to implement this interpolation.
Interpolation validation was conducted by "leave one out" cross-validation, which consists in using one of the stations as the validation dataset, and using the rest of stations as the training set to calculate the error of the resulting model in that validation station; and then repeating this process for each and every station in the dataset.
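The leave-one-out procedure can be sketched independently of ArcGIS. The snippet below reproduces the idea with the open-source pykrige package as a stand-in for the Geostatistical Analyst workflow (ordinary kriging with a spherical variogram, matching the choice described above); the station coordinates and ET 0 values are placeholders, and the call signatures refer to pykrige, not to the software actually used in the study.

```python
import numpy as np
# pykrige is used here only as an open-source stand-in for the ArcGIS
# Geostatistical Analyst ordinary-kriging workflow described in the text.
from pykrige.ok import OrdinaryKriging

def loo_rmse(x, y, values, variogram_model="spherical"):
    """Leave-one-out cross-validation RMSE for ordinary kriging."""
    errors = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i  # drop station i from the training set
        ok = OrdinaryKriging(x[mask], y[mask], values[mask],
                             variogram_model=variogram_model)
        pred, _var = ok.execute("points", x[i:i + 1], y[i:i + 1])
        errors.append(float(pred[0]) - float(values[i]))
    return float(np.sqrt(np.mean(np.square(errors))))

# Placeholder station coordinates (degrees) and annual ET0 values (mm)
lon = np.array([-4.7, -5.3, -4.1, -5.0, -4.4, -3.8, -5.6, -4.9])
lat = np.array([41.6, 41.9, 41.3, 42.2, 41.1, 42.0, 41.5, 40.9])
et0 = np.array([1290.0, 1210.0, 1335.0, 1180.0, 1310.0, 1240.0, 1225.0, 1350.0])
print(round(loo_rmse(lon, lat, et0), 1), "mm LOO RMSE")
```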
Evapotranspiration Baseline
Monthly average temperature, precipitation and ET0 values from the 29 weather stations, with daily data from 2001 to 2014, are shown in Figure 2a. It may be observed that the highest ET0 values were found between June and August, while the lowest ones corresponded to the November to March period, as expected in a continental Mediterranean climate.
Despite the fact that the general trend in Spain in the last decades has been towards an increase in ET0 [10], it is worth noting that Mann-Kendall trend test did not identify any clear trend in ET0 data for the years considered herein. This would be beneficial when it comes to the creation of a "representative year" model.
WorldClim Data
Annual average temperatures for each station in the area of study are shown in Figure 2b. The average temperature increase of the whole set of stations was 2.4 and 2.7 °C for 2050 and 2070, respectively. It should be noted that these values would be higher than the global averages for the RCP4.5 scenario. Ribalaygua et al. [54], working with SRES scenarios, reported increases in maximum and minimum temperature averages ranging from 1.5 to 2.5 °C (depending on the scenario), relative to the 1971-2000 period, for mid-21st century in Aragón (north-eastern Spanish region). In that study, regarding rainfall, the authors explained that there was not a clear trend, but rather higher uncertainties. However, all the scenarios suggest a moderate decrease in rainfall for the mid-century (2%-4%). In this work, the percentage of change of annual precipitation for 2050 and 2070 in the area of study (not shown) was minimal compared with the current scenario, so it should not influence the results in a significant manner.
Crop Model Validation
The minor adjustments made to the calibration discussed in Section 2.5.1 showed an adequate performance when applied to the area of study, in agreement with Heng et al. [55]. In comparison with experimental data from 2011 and 2012 growing seasons, errors remained below 10% in all cases, except for one plot in 2012 (14.72%) ( Table 2). The yield RMSE value, of 2 t/ha, and the yield RMSEn, below 10% (rated as 'excellent', according to [51]), imply that the model would be adequate with a view to analyzing spatial trends for the crop in future scenarios of Climate Change.
Interpolation Validation
A "leave one out" cross-validation of ET 0 values was used to assess the behavior of the spatial kriging interpolation applied to the variables under study. With a RSME of 69.5 mm (5.2% mean error) between observed and interpolated values, this interpolation method may be regarded as a consistent tool for the creation of maps that are representative of the spatial distribution of the studied variables (Table S1).
ET 0 and ETc
In the baseline scenario, the central and southern zones of the area of study showed higher ET0 values, decreasing as one moves towards the periphery, especially towards the north and northeast (Burgos province). In future scenarios, a clear annual ET0 increase was calculated for all the area under study both in 2050 and in 2070 (Figure 3), more marked for the latter, with differences vs. the baseline scenario of up to 200 mm (Figure 4). This increase would be a direct consequence of the temperature rise, given that temperature was the only parameter modified in FAO-56 PM equation.

Within the area of study, the areas that would be most affected by the increase in annual ET0 would be the province of Valladolid, the south of Palencia, the north of Salamanca and Ávila, the northeast of Segovia, and the east of Zamora. That is, the annual ET0 increase in future scenarios would make the spatial distribution differences that already exist in the baseline scenario more dramatic. The increase would be more marked in the central zone, with a clear southwards direction (and slightly to the west) so that areas with lower annual ET0 would move towards the north and northeast over time.
This same trend was also reflected in the monthly differences in ETc values with respect to the baseline scenario, depicted in Figure 5. Between March and May, monthly ETc increases ranging from 1.5 to 20 mm/month are foreseen in the future scenarios. July would be the month in which the monthly ETc increase would be maximum, of up to 40 mm/month.
Yield
Assuming that irrigation doses are not increased, yield would be affected by changes in ET0 (Figure 6a). Taking into consideration the pattern for annual ET0 evolution described above, yield would decrease by ca. 9% in both 2050 and 2070 in the central zone of the area of study. An in-depth analysis of modelled effects of sugar beet responses to different irrigation doses may be found in [25].
It is worth noting that the decrease in yield would be slightly larger in 2050 than in 2070. This unexpected result may be tentatively ascribed to a higher photosynthetic activity in 2070 resulting from higher CO2 concentrations. That is, the negative effect of higher temperatures would be partly compensated by larger photosynthetic rates due to the expected CO2 increase [56]. This would be consistent with studies that have carried out FACE (Free Air Carbon Enrichment) experiments to simulate future scenarios with larger CO2 concentrations, such as those by Manderscheid et al. [57], finding a yield increase between 7% and 16% for the CO2 concentration levels foreseen for mid-21st century in the A1B IPCC scenario (SRES scenario). The projections of Vanuytrecht et al. [58] for sugar beet in Belgium showed that although higher temperatures and a shorter growth period alone would reduce potential yield, sugar beet would substantially benefit from the CO2 fertilization effect (with a mid-century yield increase between 6% and 13%).
On the other hand, an increase in yield would be expected in 2050 in northwestern León and northeastern Burgos, an area that would expand to include also the north of Palencia in 2070, reaching a 10% yield increase. This can be explained because, in these colder zones in the baseline scenario, the plants would actually benefit from the higher temperatures. Therefore, as it has been reported for other crops, future global warming may be beneficial in some regions [50,59], but may reduce productivity in zones where optimal temperatures already exist [60]. Moreover, a shorter crop cycle may result in a reduction of attainable yield [30,61] or in a yield increment [31,59] depending on the crop, the region and the chosen adaptation strategies, such as matching crops to soils [29]. Other strategies such as shifting the sowing date, changing the required cultivar growth duration, the development of heat tolerant plants may have to be adopted depending on the location and crop, as noted by Khordadi et al. [30].
Concerning the study presented herein, an important point that should be taken into consideration is that the yield simulations generated by the model in 2050 and 2070 involve changes in the crop cultivation period. While in the baseline scenario sugar beet cultivation goes from March until November, in future simulations the crop season would start earlier, and would finish by October or by the end of September, depending on the area. This responds to the fact that crops, with increasing temperatures, take less time to reach the necessary GDD in each growth stage. Although the temperature rise can increase the developmental rate of the crop, resulting in an earlier harvest, such "heat stress" may have negative effects on crop production [16].
However, the future increase in temperatures could also allow a lengthening of the cultivation period, allowing for earlier sowing. The temperature rise would allow an earlier development of the photosynthetic organ (the leaves), making the most of solar radiation: Due to the slow leaf development in spring, sugar beet crop achieves its highest canopy when the maximum solar radiation of the year has already passed [62,63]. By getting the largest field coverage in the least amount of time, and keeping this coverage for as long as possible, the plant can thus optimize solar radiation interception [64]. In this way, the softer winter temperatures could lead to higher productivity in that part of the year, somehow balancing the losses from the other seasons [56].
According to the simulations, as long as the water requirements of the plant were met with the same water allocation-which involves higher irrigation efficiencies and/or the development of new irrigation strategies [65]-, if the cultivation period was extended, bigger yields could be achieved. In this sense, data about the positive influence of cycle extension on yield, based on field studies conducted in the area of study [66], suggests that yield increases of up to 20% could be attained. This would be in agreement with Hoffmann et al. [67] and Hull et al. [68], who claimed that lengthening of the growing season would have a strong positive effect on sugar beet yield. In Figure 6b, the simulations have been extended until November, generally obtaining larger yields, both in 2050 and 2070 (with an increase of up to 17%). These increases would be more noticeable in the north and eastern zones than in the center and in the south. In 2070, because of higher CO 2 concentrations, the increase would be more marked, especially in the northern zone (Palencia and León) and in the east (Burgos).
Biomass and CO 2
Besides offering data on yield, AquaCrop also provides results about the total biomass achieved by the crop (Figure 7a), thus allowing to estimate CO 2 sequestration ( Figure 7c). As it occurred with crop yield, a decrease in the average biomass production and in CO 2 assimilation of around 7% in 2050 may be expected when the entire area of study is considered. Such decrease would be slightly mitigated in 2070 (5% decrease) for the same reason discussed above (related to the increase in CO 2 levels). In absolute terms, the assimilated CO 2 would decrease from 49 t/ha in the baseline scenario to ca. 46 and ca. 47 t/ha in the 2050 and 2070 scenarios, respectively.
As in the case of yield, if the cultivation period was expanded, larger average quantities of biomass (Figure 7b) and captured CO 2 (Figure 7d) would be obtained throughout the area of study, with 8% and 12% increases in 2050 and 2070, respectively. In this case, captured CO 2 would evolve from 49 t/ha in the baseline scenario to 53 and 55 t/ha (on average) in 2050 and 2070, respectively (9 and 13% increase, approximately).
To sum up, for the studied variables (annual ET 0 , monthly ETc, yield, biomass and CO 2 assimilation), the trends predicted in this work would match in a regional scale what other studies have observed: A northward movement of crop suitability zones, as well as increased crop productivity in Northern Europe [69]. Furthermore, the simulations suggest that there is a greater potential for adaptation in northern, cooler zones, in which the reduction in yields can be compensated by shifting the crop growing season to cooler months [30], by advancing sowing, and by taking advantage of an extended growing period through the use of suitable varieties [70]. Although a regional approach is necessary to assess the effects of Climate Change on future yields and changes in crop suitability, it must be kept in mind that there are many uncertainties associated with this kind of yield simulations, including uncertainties in the GCM models and projections of future climate [70], crop model uncertainties [61,71], assumptions, and observation errors [72]. To these, other likely factors such as an increase in extreme rainfall events and droughts may be added, which should also be taken into consideration in future studies [59,69].
Conclusions
The spatial distribution of evapotranspiration in 2050 and 2070 was simulated with AquaCrop at a regional scale for spring-sown sugar beet in the region of Castilla-y-León (Spain), based on data from the MPI-ESM-LR climate model and the RCP4.5 emission scenario from the AR5. A clear annual ET 0 increase was observed in all the area of study in 2050 and 2070, with differences vs. the baseline scenario of up to 200 mm, which would result in monthly ETc increases of up to 40 mm in July. Yield (at current irrigation levels) would decrease by ca. 9% in both 2050 and 2070 in the central zone of the area of study. This overall yield decrease would be aggravated in the case of decreasing precipitation levels, or increased frequency of extreme events of drought, which seem to be likely in the future. In a similar fashion, the assimilated CO 2 would decrease from 49 t/ha in the baseline scenario to 46 and 47 t/ha in the 2050 and 2070 scenarios, respectively. However, new opportunities for adaptation may arise by lengthening the sugar beet cultivation cycle, delaying the harvest and advancing the sowing. These measures, along with more efficient strategies of irrigation, could result in higher yields (up to 17% higher) and higher amounts of stored CO 2 (9% and 13% higher in 2050 and 2070, respectively). Keeping in mind the uncertainty and errors associated to these methodologies, the results coincide with the findings of other studies at different scales and in different regions of Europe: The most suitable cultivation zones for some crops would move northwards to cooler zones, and even higher yields may be obtained by the implementation of appropriate adaptation measures. In this context, in order to have tools available to face potential future adverse situations, efforts aimed at minimizing the uncertainty of the methodologies used for the projection of future scenarios, at evaluating the existing strategies of adaptation of the different crops to Climate Change, and at devising new ones are more necessary than ever.
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4395/10/1/91/s1, Figure S1: Flowchart summarizing the methodology used in the study; Table S1: Comparison between observed and interpolated annual ET 0 values for the weather stations for "leave one out" cross-validation. | 2020-01-16T09:04:17.019Z | 2020-01-09T00:00:00.000 | {
"year": 2020,
"sha1": "801b8b54965d261545cfd4a4159f13a5913aea89",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/10/1/91/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "634eff663a041a8e1e399bffcd204c45b4230df9",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
4610380 | pes2o/s2orc | v3-fos-license | The relationship of gross upper and lower limb motor competence to measures of health and fitness in adolescents aged 13–14 years
Introduction Motor competence (MC) is an important factor in the development of health and fitness in adolescence. Aims This cross-sectional study aims to explore the distribution of MC across school students aged 13–14 years old and the extent of the relationship of MC to measures of health and fitness across genders. Methods A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13–14 years), pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Results Girls performed lower for MC and health/fitness measures compared with boys. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls and combined genders. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=−2.21, degrees of freedom=307, P=0.03, 95% CI −0.253 to –0.011). Conclusion Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. Trial registration number NCT02517333.
Introduction
Children and adolescents with low motor competence (MC) display decreased fitness and lower physical activity (PA) levels affecting their health and well-being as adults. [1][2][3][4][5] Interestingly, although MC generally improves throughout development, this is not always true, with some young people, particularly girls, showing a decline in MC. 6 The development of MC is not straightforward and can be explained by a number of psychosocial, biological and environmental factors. Furthermore, while MC development in young children is affected by biological maturation, practice and opportunity are more influential during adolescence. 6 The relationship of PA to MC has been extensively explored, but there is less research exploring the impact of MC on PA. 6

How it might impact clinical practice
► Adds to the knowledge that MC may be a target for improving fitness, health and sporting activity in UK secondary school adolescents, particularly girls.
► Need to monitor and measure MC levels across secondary school-aged adolescents at the population level.
► Both upper and lower limb MC are related to fitness and health markers and may offer routes to improve sporting activity.
► Supports the need to deliver physical education lessons in gender-specific groups.
MC has also been linked to measures of health and fitness (eg, body fat percentage) and levels of PA. [7][8][9][10][11][12][13][14] Fundamental motor skills performed by less coordinated young people, low MC, are known to require greater physical and cognitive effort and can be fatiguing. 15 Less coordinated individuals are also known to have altered bioenergetics and struggle with reduced performance on aerobic and anaerobic sporting activities affecting enjoyment and self-esteem. 1 6 10 16-20 Importantly, reduced MC, activity levels and associated comorbidities of low activity are known to persist into adulthood, particularly in adolescents with impaired MC. [21][22][23] Considering the suggested relationship of MC to low PA in childhood and the current crisis in physical inactivity in young people, there is a need to determine MC in young people. Further considering that MC can effectively be trained in school, 24 MC should be measured to identify young people who could benefit from interventions to improve their long-term health and well-being. 25 With only limited research exploring the hypothesis of MC as being causative, we set out to explore MC in relation to fitness and health markers. PA is known to reduce in young people in secondary school, particularly girls, and so we set out to measure MC in secondary schools. 6 Furthermore, recent evidence indicates that MC skills which predict health-related fitness measures are inconsistent between ages and genders. 26 While MC has been studied in a number of countries in secondary school students, 6 performance and changes over time are known to vary across nations and there is a need to study performance within each nation, considering its unique context. 26 27 Therefore, this study measures MC in relation to aerobic and anaerobic fitness and health measures in adolescents aged 13-14 years in the UK for the first time. 28 With limited evidence to support which MC measures are most appropriate when screening, monitoring and developing interventions across different abilities, ages and genders for young people internationally, there is a need to use a mix of measures. 26 There is a wide range of movement battery tests that assess MC as a whole or as individual subsections such as manual dexterity, throwing and catching, and balance tasks (static and dynamic), with specific tests for different age groups. [29][30][31] Evidence has suggested that measures of overall MC are invalid due to vast differences in fine and gross MC in a single participant, with many tasks not representing normal distributions. These subsections of MC should be assessed separately. 32 33 Therefore, this study uses normally distributed measures of gross upper and lower limb MC, which are required for sports games and activities associated with PA. 34 35 This study will (1) describe MC, aerobic and anaerobic fitness and health measures in secondary school students in the UK, and then (2) evaluate the extent of gender differences across MC, fitness and health measures, (3) examine the extent of relationships of MC to aerobic and anaerobic power in boys and girls aged 13-14 years and (4) compare the difference in correlation strengths between upper limb MC and lower limb MC with a corresponding health/fitness measure.
Methods
Participants and procedures
This cross-sectional study collected data as part of the Engagement, Participation, Inclusion and Confidence in Sport (EPIC) study (NCT02517333). Data were gathered from three secondary schools in Oxfordshire; all students enrolled in year 9 (aged 13-14 years) were tested. All testing took place in the respective schools' sports halls, within an allocated physical education (PE) lesson. Participants were split into equal groups and rotated around each station, which evaluated MC, anthropometric and fitness measures. A total of 718 participants were tested across the three schools (311 girls, 407 boys). After pairwise deletion of missing values, the proportion of participants with complete data for each variable was above 77%.
Permission to recruit participants was gained from each school's head teacher, and opt-out consent was collected from each participant's parent or legal guardian.
Measures
Anthropometrics
Each subject was measured for height and body mass while dressed in light sports clothing with shoes on. Data were adjusted by subtracting 1 kg from each participant's weight to compensate for clothing and 2 cm from height to compensate for shoes. This method was used because of the time restrictions imposed by the length of the PE lesson and the number of students tested within that time period. Grip strength was measured using a hand-held dynamometer, Takei model TKK 5001 (to the nearest 0.1 kg). A SECA medical 770 digital floor scale measured body mass (to the nearest 0.01 kg) and a Harpenden stadiometer measured height (to the nearest 0.01 m). BMI was calculated as mass divided by height squared (kg/m 2 ).
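A minimal sketch of the adjustment and BMI calculation described above (the 1 kg and 2 cm corrections are those stated in the text; the function name and example values are otherwise illustrative):

```python
def adjusted_bmi(measured_mass_kg: float, measured_height_m: float) -> float:
    """Return BMI after correcting for clothing (~1 kg) and shoes (~2 cm)."""
    mass = measured_mass_kg - 1.0      # subtract 1 kg for light sports clothing
    height = measured_height_m - 0.02  # subtract 2 cm for shoes
    return mass / height ** 2          # BMI = kg / m^2

# Example: a participant measured at 56.3 kg and 1.62 m with clothing and shoes on.
print(round(adjusted_bmi(56.3, 1.62), 1))
```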
Aerobic and anaerobic measures
Aerobic fitness was measured using the 20-metre shuttle run test, 36 which shows good validity when compared with VO 2 peak (r=0.69; F(1, 46)=42.54; P≤0.001) and good reliability (intraclass correlation coefficient (ICC)=0.93; F(1,19)=2.58, P≥0.13) when testing large numbers of participants in a field setting. 7 37 The test was explained to all participants before they attempted it, with a maximum of 15 participants measured simultaneously. Each participant was instructed to run to the 20 m marker before the beep sounded, then turn and run back to the start position before the next beep. As the test progressed, the time between beeps became shorter, requiring the participants to run faster. If a participant was unable to reach the end of the 20 m distance before the beep on three consecutive occasions, or removed themselves from the test, the final level and stage completed were recorded as their score.
The broad jump was used to measure anaerobic power. This test has shown good reliability (ICC=0.94, 95% CI 0.93 to 0.95, P<0.001) and validity, compared with leg extension one repetition max (r=0.79, P<0.01), when measuring large numbers of students in a field setting. 38 The test required each participant to jump as far as possible from a standing start behind a marked line. Each participant had to land with both feet together. Two attempts were permitted, with the longest jump recorded as the final score.
Motor competence measures
The alternate hand wall toss was used to measure upper body gross MC. The test required each participant to stand 1 metre away from a wall. Then, using an underarm action, a tennis ball was thrown against the wall and caught with the opposite hand. This was repeated for 30 s, with the total number of completed catches recorded as the score. 39 Single leg stationary hopping was used to measure lower body gross MC. Each participant was instructed to place their hands on their hips and hop as many times as possible on their preferred leg. The total number of correctly maintained hops was counted over 15 s. 40

Data analysis
Descriptive statistics were calculated to characterise anthropometric, MC and health/fitness measures by gender; an unpaired t-test compared gender differences. Graphical and statistical methods (Shapiro-Wilk) were used to explore the normality of the distributions of upper and lower limb MC, taking into account that, in large samples, small deviations from normality are detected by inferential statistical methods but have little effect on the results of a parametric test. 41 Pearson's bivariate correlation was used to assess the associations of aerobic fitness, anaerobic fitness and BMI with upper and lower limb MC for boys, girls and combined genders. A comparison of two overlapping correlations based on dependent groups was used to assess differences between correlations. 42 Hendrickson et al's modification of Williams' t-test was used to evaluate the differences between dependent correlations. 43

In boys and the whole group, there was a stronger relationship of upper limb compared with lower limb MC to aerobic capacity. Our findings support that both upper and lower limb MC may be a target for improving fitness, health and sporting activity in secondary school adolescents. As a cross-sectional study, and considering the low levels of activity and high obesity in UK schools, our findings support an urgent need to investigate the impact of training MC on fitness and health in this age group. 45 Anaerobic fitness and BMI were similar to previous studies in adolescents of the same age across different nationalities. 26 27 However, comparing aerobic fitness and upper and lower limb MC with previous research has proven difficult because of differences in the methods used to measure these movement skills. 14 26 46 This is important as children and adolescents in the UK are reported to be less active compared with other countries. 47 48 Therefore, there is a need to engage more adolescents in higher levels of PA, and understanding the relationship of MC to fitness and health is important in order to promote PA in this age group. 45 The observed low fitness levels support previous findings that as children progress into adolescence there is a significant reduction in PA and increased sedentary behaviour, particularly in girls. 49 50 Interestingly, girls also performed significantly lower on both measures of MC. Evidence from a longitudinal study indicated a link between levels of MC and cardiorespiratory fitness. 51 Barnett et al 51 evaluated the relationship between MC, as measured by object control (throwing, catching, kicking) and locomotor skill (hop, side gallop, jump and sprint), and aerobic fitness. Their results showed that MC in elementary school predicted subsequent aerobic fitness in adolescence, with 25.9% of the variance in fitness attributed to levels of object control.
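The statistical comparisons described in the Data analysis subsection above can be sketched as follows in Python with SciPy; the Williams/Hendrickson formula shown is one standard form for comparing two overlapping dependent correlations, and the simulated data and variable names are purely illustrative, not taken from the study:

```python
import numpy as np
from scipy import stats

def williams_t(r_jk, r_jh, r_kh, n):
    """Williams' t (Steiger 1980 form) for two overlapping dependent correlations:
    does variable j correlate more strongly with k than with h, given corr(k, h) = r_kh?"""
    det = 1 - r_jk**2 - r_jh**2 - r_kh**2 + 2 * r_jk * r_jh * r_kh
    r_bar = (r_jk + r_jh) / 2
    t = (r_jk - r_jh) * np.sqrt(
        (n - 1) * (1 + r_kh)
        / (2 * det * (n - 1) / (n - 3) + r_bar**2 * (1 - r_kh) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return t, p

# Illustrative data: aerobic fitness score vs upper- and lower-limb MC scores.
rng = np.random.default_rng(0)
aerobic = rng.normal(size=200)
upper_mc = 0.5 * aerobic + rng.normal(size=200)
lower_mc = 0.3 * aerobic + rng.normal(size=200)

print(stats.shapiro(upper_mc))                 # normality check (Shapiro-Wilk)
r_up, _ = stats.pearsonr(aerobic, upper_mc)    # Pearson's bivariate correlations
r_lo, _ = stats.pearsonr(aerobic, lower_mc)
r_mm, _ = stats.pearsonr(upper_mc, lower_mc)
print(williams_t(r_up, r_lo, r_mm, n=200))     # is the upper-limb correlation stronger?
```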
Object control may be a predictor of aerobic fitness in adolescents as a result of greater participation in sports and organised games. These types of activities require high levels of skill in controlling or moving an object such as a football, netball or hockey ball, and also expose the participant to higher physical intensities. 51 52 This is important as many PE lessons are predominantly sport-based or game-based activities which require MC. 53 Therefore, it is important to consider MC skills, especially in this age group, where PE lessons are in some cases the only mechanism for increasing moderate to vigorous physical activity (MVPA) levels. 54 This exposure to higher intensities of PA in sports may further explain the difference between upper limb MC and lower limb MC in their relation to aerobic capacity in the present study. Object control (alternate hand wall toss) was used to measure upper limb MC, whereas lower limb MC was measured with a type of locomotor test (single leg stationary hopping). Therefore, using an object control measure for lower limb MC may produce more valid comparisons and identify which adolescents require more support in game-based PE lessons, or conversely may require another activity where these skills do not prevent participation. This could lead to improved engagement, more effective PE lessons and increased levels of MVPA.
Previous research 6 55 in other nations and age groups showed results similar to the present study, with significant correlations of upper and lower limb MC to aerobic and anaerobic capacity. 6 34 In that work, the MC measures consisted of three gross motor tests: one upper limb test (ball throwing speed) and two lower body tests (jump distance and ball kicking speed). These relationships may suggest MC plays a role in achieving better levels of health and fitness in school-aged children, but limitations of the cross-sectional study design require caution when making these conclusions. Furthermore, differences in the tests used to assess MC and health/fitness between these two studies, and the different age group used in the previous study (18-25 years), make it difficult to compare these results further. The significant but weak relationship of upper and lower limb MC to BMI is similar to that reported by previous studies. 56 57 The inverse relationship between BMI and gross MC is suggested to weaken as children age into adolescence, which could be partly explained by growth spurts associated with this age group and limitations of indirect measures of body composition. 56 The evidence for MC and its relationship to measures of physical fitness is well documented, 7 12 20 31 52 55 58 however, there is a lack of consistent measures for MC and of relationships to health and fitness across all abilities, ages and genders. 12 This is highlighted by the vast selection of movement battery tests across the literature. These range from assessments designed to identify clinical deficits in MC 39 40 to recently created assessments which incorporate measures from previous assessments designed for typically developed children. 31 These inconsistencies explain the differences in results and why there is no agreement on an optimal assessment of MC. 12 Therefore, this study looks to add to the knowledge of both upper and lower limb MC and their relationship to fitness and health, using simple measures easily employed in school PE screening.
Our study has a number of limitations, including the cross-sectional study design. This limits any conclusion regarding cause and effect between health/fitness measures and MC. In addition, there are limitations to the measurements used when assessing MC and fitness. A systematic review 52 reported that measures of fitness and MC are used interchangeably across different studies. In the present study, the standing broad jump was used to measure anaerobic power; however, previous studies have used the standing broad jump to measure lower limb MC. Therefore, measures assessing health/fitness which also require high levels of MC would not be appropriate tests when assessing the relationship between these two variables. 59 Evidence also suggests object control may be a better indicator for associations between MC and fitness. 51 MC measures such as hopping and fitness measures such as the standing broad jump may not be able to detect subtle differences in performance in this population. This indicates a need for future research to evaluate the most appropriate measures of MC and fitness when assessing specific outcomes, for example, age, gender and ability. Further to this, the differences in strength of relationship of upper and lower body MC with aerobic capacity may be a result of the different methods used to assess MC. Using object control as a measure for upper limb MC, but not for lower limb MC, may limit further conclusions. BMI has limitations when assessing obesity, especially in adolescents. This measure assesses body composition indirectly and is reported to have low sensitivity. However, it is recommended for screening adolescents at risk of obesity, 5 and we believe this study has recruited an adequate sample, across whole year groups, from the population to answer the research question. Furthermore, evidence indicates a positive correlation between MC and PA. 12 This suggests PA may promote MC in young children, which develops into a reciprocal relationship in adolescence. This indicates a need to control for PA when assessing MC and health/fitness in future work. 26 Therefore, this study supports that MC may play an important role in health and fitness in children/adolescents, particularly girls, with recommendations that MC levels be measured and monitored across all abilities, along with direct measures of health and fitness. | 2018-04-26T22:47:31.931Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "674d5e499a512a323c2863ae00c51a0cb032b6d7",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopensem.bmj.com/content/bmjosem/4/1/e000288.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "674d5e499a512a323c2863ae00c51a0cb032b6d7",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226959380 | pes2o/s2orc | v3-fos-license | Price equation captures the role of drug interactions and collateral effects in the evolution of multidrug resistance
Bacterial adaptation to antibiotic combinations depends on the joint inhibitory effects of the two drugs (drug interaction [DI]) and how resistance to one drug impacts resistance to the other (collateral effects [CE]). Here we model these evolutionary dynamics on two-dimensional phenotype spaces that leverage scaling relations between the drug-response surfaces of drug-sensitive (ancestral) and drug-resistant (mutant) populations. We show that evolved resistance to the component drugs – and in turn, the adaptation of growth rate – is governed by a Price equation whose covariance terms encode geometric features of both the two-drug-response surface (DI) in ancestral cells and the correlations between resistance levels to those drugs (CE). Within this framework, mean evolutionary trajectories reduce to a type of weighted gradient dynamics, with the drug interaction dictating the shape of the underlying landscape and the collateral effects constraining the motion on those landscapes. We also demonstrate how constraints on available mutational pathways can be incorporated into the framework, adding a third key driver of evolution. Our results clarify the complex relationship between drug interactions and collateral effects in multidrug environments and illustrate how specific dosage combinations can shift the weighting of these two effects, leading to different and temporally explicit selective outcomes.
Introduction
Understanding and predicting evolutionary dynamics is an ongoing challenge across all fields of biology. Microbial populations offer relatively simple model systems for investigating adaptation on multiple length scales, from the molecular to the population level, on timescales ranging from a few generations to decades. Yet even these simplest of systems exhibit rich and often counterintuitive evolutionary dynamics. Despite enormous progress, both theoretical and experimental, predicting evolution remains exceedingly difficult, in part because it is challenging to identify the phenotypes, selective gradients, and environmental factors shaping adaptation. In turn, controlling those dynamics -for example, by judicious manipulation of environmental conditions -is often impossible. These challenges represent open theoretical questions but also underlie practical public health threats -exemplified by the rapid rise of antibiotic resistance (Davies and Davies, 2010;Levy and Marshall, 2004) -where evolutionary dynamics are fundamental to the challenge, and perhaps, to the solution.
Drug combinations are a particularly promising approach for slowing resistance (Baym et al., 2016b), but the evolutionary impacts of combination therapy remain difficult to predict, especially in a clinical setting (Podolsky, 2015;Woods and Read, 2015). Antibiotics are said to interact when the combined effect of the drugs is greater than (synergy) or less than (antagonism) expected based on the effects of the drugs alone (Greco et al., 1995). These interactions may be leveraged to improve treatments -for example, by offering enhanced antimicrobial effects at reduced concentrations. But these interactions can also accelerate, reduce, or even reverse the evolution of resistance (Chait et al., 2007;Michel et al., 2008;Hegreness et al., 2008;Pena-Miller et al., 2013;Dean et al., 2020), leading to tradeoffs between short-term inhibitory effects and long-term evolutionary potential (Torella et al., 2010). In addition, resistance to one drug may be associated with modulated resistance to other drugs. This cross-resistance (or collateral sensitivity) between drugs in a combination has also been shown to significantly modulate resistance evolution (Barbosa et al., 2018;Rodriguez de Evgrafov et al., 2015;Munck et al., 2014).
Collateral effects (Pál et al., 2015;Roemhild et al., 2020) and drug interactions (Bollenbach et al., 2009;Chevereau et al., 2015;Lukačišin and Bollenbach, 2019), even in isolation, reflect interactions -between genetic loci, between competing evolutionary trajectories, between chemical stressors -that are often poorly understood at a mechanistic or molecular level. Yet adaptation to a drug combination may often reflect both phenomena, with the pleiotropic effects that couple resistance to individual drugs constraining, or constrained by, the interactions that occur when those drugs are used simultaneously. In addition, the underlying genotype space is high-dimensional and potentially rugged, rendering the genotypic trajectories prohibitively complex (de Visser and Krug, 2014).
In this work, we attempt to navigate these obstacles by modeling evolutionary dynamics on lower-dimensional phenotype spaces that leverage scaling relations between the drug-response surfaces of ancestral and mutant populations. Our approach is inspired by the fact that multiobjective evolutionary optimization may occur on surprisingly low-dimensional phenotypic spaces (Shoval et al., 2012;Hart et al., 2015). To develop a similarly coarse-grained picture of multidrug resistance, we associate selectable resistance traits with changes in effective drug concentrations, formalizing the geometric rescaling assumptions originally pioneered in Chait et al., 2007;Hegreness et al., 2008;Michel et al., 2008 and connecting evolutionary dynamics with a simple measurable property of ancestral populations. We show that evolved resistance to the component drugs -and in turn, the adaptation of growth rate -is governed by a Price equation whose covariance terms encode geometric features of both (1) the two-drug-response surface in ancestral populations (the drug interaction) and (2) the correlations between resistance levels to those drugs (collateral effects). In addition, we show how evolutionary trajectories within this framework reduce to a type of weighted gradient dynamics on the two-drug landscape, with the drug interaction dictating the shape of the underlying landscape and the collateral effects constraining the motion on those landscapes, leading to deviations from a simple gradient descent. We also illustrate two straightforward extensions of the basic framework, allowing us to investigate the effects of both constrained mutational pathways and sequential multidrug treatments. Our results clarify the complex relationship between drug interactions and collateral effects in multidrug environments and illustrate how specific dosage combinations can shift the weighting of these two effects, leading to different selective outcomes even when the available genetic routes to resistance are unchanged.
Results
Our goal is to understand evolutionary dynamics of a cellular population in the presence of two drugs, drug 1 and drug 2. These dynamics reflect a potentially complex interplay between drug interactions and collateral evolutionary tradeoffs, and our aim is to formalize these effects in a simple model. To do so, we assume that the per capita growth rate of the ancestral population is given by a function $G(x, y)$, where $x$ and $y$ are the concentrations of drugs 1 and 2, respectively. We limit our analysis to two-drug combinations, but it could be extended to higher-order drug combinations, though this would require empirical or theoretical estimates for higher-dimensional drug-response surfaces (see, e.g., Zimmer et al., 2016;Zimmer et al., 2017;Russ and Kishony, 2018;Tekin et al., 2016;Tekin et al., 2017;Tekin et al., 2018). At this stage, we do not specify the functional form of $G(x, y)$, though we assume that this function can be derived from pharmacodynamic or mechanistic considerations (Engelstädter, 2014;Bollenbach et al., 2009) or otherwise estimated from experimental data (Greco et al., 1995;Wood et al., 2014). In classical pharmacology models (Loewe, 1953;Greco et al., 1995), the shape of these surfaces -specifically, the convexity of the corresponding contours of constant growth ('isoboles') -determines the type of drug interaction, with linear isoboles corresponding to additive drug pairs. In this framework, deviations from linearity indicate synergy (concave up) or antagonism (concave down). While there are multiple conventions for assigning geometric features of the response surface to an interaction type -and there has been considerable debate about the appropriate null model for additive interactions (Greco et al., 1995) -the response surfaces contain complete information about the phenotypic response. The manner in which this response data is mapped to a qualitative interaction type -and therefore used to label the interaction as synergistic, for example -is somewhat subjective, though in what follows we adopt the isobole-based labeling scheme because it is more directly related to the geometry of the response surface than competing models (e.g., Bliss independence; Greco et al., 1995).
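As a side illustration of the isobole-based convention, the following sketch computes a Loewe-style combination index for a dose pair lying on a measured isogrowth contour; all numerical values are hypothetical:

```python
def loewe_index(x, y, x_alone, y_alone):
    """Loewe combination index for a dose pair (x, y) that produces the same growth
    inhibition as x_alone of drug 1 alone or y_alone of drug 2 alone.
    Index == 1: additive (linear isobole); < 1: synergy (concave up); > 1: antagonism."""
    return x / x_alone + y / y_alone

# Hypothetical isobole point: half-maximal inhibition reached at (0.3, 0.4) in combination,
# versus 1.0 of either drug alone.
print(loewe_index(0.3, 0.4, x_alone=1.0, y_alone=1.0))  # 0.7 -> synergistic by this convention
```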
Resistance as a continuous trait and rescaling in a simple model
The primary assumption of the model is that the phenotypic response (e.g., growth rate) of drug-resistant mutants, which may be present in the initial population or arise through mutation, corresponds to a simple rescaling of the growth rate function $G(x, y)$ for the ancestral population. As we will see, this scheme provides an explicit link between a cell's level of antibiotic resistance and its fitness in a given multidrug environment. Specifically, we assume that the growth rate ($g_i$) of mutant $i$ is given by

$$g_i(x, y) = G(\alpha_i x, \beta_i y), \tag{1}$$

where $\alpha_i$ and $\beta_i$ are rescaling parameters that reflect the physiological effects of mutations on the growth rate. In some cases -for example, resistance due to efflux pumps or drug degrading enzymes (Yurtsev et al., 2013) -this effective concentration change corresponds to a physical change in intracellular drug concentration. More generally, though, this hypothesis assumes that resistant cells growing in external drug concentration $x$ behave similarly to wild-type (drug-sensitive) cells experiencing a reduced effective concentration $\alpha x$. Similar rescaling arguments were originally proposed in Chait et al., 2007, where they were used to predict correlations between the rate of resistance evolution and the type of drug interaction. These arguments have also been used to describe fitness tradeoffs during adaptation (Das et al., 2020) and to account for more general changes in the dose-response curves, though in many cases the original two-parameter rescaling was sufficient to describe the growth surface in mutants (Wood et al., 2014).
When only a single drug is used, this rescaling leads to a simple relationship between the characteristic inhibitory concentrations -for example, the half-maximal inhibitory concentration (IC 50 ) or the minimum inhibitory concentration (MIC) -of the ancestral (sensitive) and mutant (resistant) populations. In what follows, we refer to these reference concentrations as $K_i^j$, where $i$ labels the cell type and, when there is more than one drug, $j$ labels the drug. Conceptually, this means that dose-response curves for both populations have the same basic shape, with resistance (or sensitivity) in the mutant corresponding only to a shape-preserving rescaling of the drug concentration ($D \to D/K_i$; Figure 1A). In the presence of two drugs, the dose-response curves become dose-response surfaces, and rescaling now corresponds to a shape-preserving rescaling of the contours of constant growth. There are now two scaling parameters, one for each drug, and in general they are not equal. For example, in Figure 1B, the mutant shows increased sensitivity to drug 1 ($\alpha \equiv K^1_{\mathrm{WT}}/K^1_{\mathrm{Mut}} > 1$) and increased resistance to drug 2 ($\beta \equiv K^2_{\mathrm{WT}}/K^2_{\mathrm{Mut}} < 1$), where superscripts label the drug (1 or 2) and subscripts label the cell type (wild type, WT; mutant, Mut).
The power of this rescaling approach is that it directly links growth of the mutant populations to measurable properties of the ancestral population (the two-drug-response surface) via traits of the mutants (the rescaling parameters). Each mutant is characterized by a pair of scaling parameters, $(\alpha_i, \beta_i)$, which one might think of as a type of coarse-grained genotype (Figure 1C). When paired with the ancestral growth surface, these traits fully determine the per capita growth rate of the mutant at any external dosage combination $(x, y)$ via Equation 1. While the scaling parameters are intrinsic properties of each mutant, they contribute to the phenotype (growth) in a context-dependent manner, leading to selection dynamics that depend in predictable ways on the external environment (Figure 1).
We assume a finite set of $M$ subpopulations (mutants), $i = 1, \ldots, M$, with each subpopulation corresponding to a single pair of scaling parameters. For simplicity, initially we assume each of these mutants is present in the original population at low frequency and neglect mutations that could give rise to new phenotypes, though later we show that it is straightforward to incorporate them into the same framework. We do not specify the mechanisms by which this standing variation is initially generated or maintained; instead, we simply assume that such standing variation exists, and our goal is to predict how selection will act on this variation for different choices of external (true) drug concentrations. As we will see, statistical properties of this variation combine with the local geometry of the response surface to determine the selection dynamics of these traits.
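Before turning to the dynamics, a minimal sketch of the rescaling assumption itself (Equation 1) may be useful; the Hill-type surface and all parameter values below are illustrative placeholders, not the measured response surfaces used elsewhere in this work:

```python
import numpy as np

def G(x, y, g0=1.0, K1=1.0, K2=1.0, n=2.0):
    """Illustrative ancestral growth surface: growth falls off with a Loewe-style
    combined dose (x/K1 + y/K2). Any measured surface could be substituted here."""
    dose = x / K1 + y / K2
    return g0 * (1.0 - dose**n / (1.0 + dose**n))

def mutant_growth(x, y, alpha, beta):
    """Equation 1: a mutant with scaling parameters (alpha, beta) grows like the
    ancestor at rescaled effective concentrations (alpha * x, beta * y)."""
    return G(alpha * x, beta * y)

# Ancestor (alpha = beta = 1) vs a mutant resistant to drug 2 but sensitized to drug 1.
x0, y0 = 0.6, 0.8
print(G(x0, y0), mutant_growth(x0, y0, alpha=1.3, beta=0.4))
```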
Population dynamics of scaling parameters
The mean resistance trait to drug 1, which in our case is the scaling parameter $\bar{\alpha}(t) \equiv \sum_{i=1}^{M} \alpha_i f_i(t)$, evolves according to

$$\frac{d\bar{\alpha}}{dt} = \sum_{i=1}^{M} \alpha_i \frac{df_i}{dt}, \tag{2}$$

where $f_i(t)$ is the population frequency of mutant $i$ at time $t$ in the population. Assuming that each subpopulation grows exponentially at the per capita growth rate ($dn_i/dt = g_i n_i$, with $n_i$ the abundance of mutant $i$ and $g_i$ the per capita growth rate given by Equation 1), the frequency $f_i(t)$ changes according to

$$\frac{df_i}{dt} = f_i \left( g_i - \bar{g} \right), \tag{3}$$

where $\bar{g} = \sum_{i=1}^{M} f_i g_i$ is the (time-dependent) mean value of $g_i$ across all $M$ subpopulations (mutants). Combining Equations 2 and 3, we have

$$\frac{d\bar{\alpha}}{dt} = \mathrm{Cov}_{\vec{x}}(\alpha, g), \tag{4}$$

where $\mathrm{Cov}_{\vec{x}}(\alpha, g) \equiv \sum_{i=1}^{M} \alpha_i f_i (g_i - \bar{g})$ is the covariance between the scaling parameters $\alpha_i$ and the corresponding mutant growth rates $g_i$. The subscript $\vec{x}$ is a reminder that the growth rates $g_i$ and $\bar{g}$ that appear in the covariance sum depend on the external (true) drug concentration $\vec{x} \equiv (x, y)$. An identical derivation leads to an analogous equation for the scaling parameter with respect to drug 2 (the mean susceptibility to drug 2, $\bar{\beta}$, relative to the original population), and the full dynamics are therefore described by

$$\frac{d\bar{\alpha}}{dt} = \mathrm{Cov}_{\vec{x}}(\alpha, g), \qquad \frac{d\bar{\beta}}{dt} = \mathrm{Cov}_{\vec{x}}(\beta, g). \tag{5}$$

Figure 1. Drug resistance as a rescaling of effective drug concentration. The fundamental assumption of our model is that drug-resistant mutants exhibit phenotypes identical to those of the ancestral ('wild type') cells but at rescaled effective drug concentration. (A) Left panel: schematic dose-response curves for an ancestral strain (blue) and a resistant mutant (red). Half-maximal inhibitory concentrations ($K_{\mathrm{WT}}$, $K_{\mathrm{Mut}}$), which provide a measure of resistance, correspond to the drug concentrations where inhibition is half maximal. Fitness cost of resistance is represented as a decrease in drug-free growth. Right panel: dose-response curves for both cell types collapse onto a single functional form, similar to those in Chait et al., 2007;Michel et al., 2008;Wood et al., 2014. (B) Left panel: in the presence of two drugs, growth is represented by a surface; the thick blue curve represents the isogrowth contour at half-maximal inhibition; it intersects the axes at the half-maximal inhibitory concentrations for each individual drug. Right panel: isogrowth contours for ancestral (WT) and mutant cells. In this example, the mutant exhibits increased resistance to drug 2 and increased sensitivity to drug 1, each of which corresponds to a rescaling of drug concentration for that drug. These rescalings are quantified with scaling constants $\alpha \equiv K^1_{\mathrm{WT}}/K^1_{\mathrm{Mut}}$ and $\beta \equiv K^2_{\mathrm{WT}}/K^2_{\mathrm{Mut}}$, where the superscripts indicate the drug (1 or 2). (C) Scaling factors for two different mutants (red square and red circle) are shown. The ancestral cells correspond to scaling constants $\alpha = \beta = 1$. Mutant 1 exhibits increased sensitivity to drug 1 ($\alpha > 1$) and increased resistance to drug 2 ($\beta < 1$). Mutant 2 exhibits increased resistance to both drugs ($\alpha < \beta < 1$), with higher resistance to drug 1. (D) Scaling parameters describe the relative change in effective drug concentration experienced by each mutant. While scaling parameters for a given mutant are fixed, the effects of those mutations on growth depend on the external environment (i.e., the drug dosage applied). This schematic shows the effective drug concentrations experienced by WT cells (blue circles) and the two different mutants (red circles and red squares) from panel (C) under two different external conditions (open and closed shapes). True dosage 1 (2) corresponds to higher external concentrations of drug 1 (2).
The concentrations are superimposed on a contour plot of the two-drug surface (similar to panel B). Right panel: resulting growth of mutants and WT strains at dosage 1 (bottom) and dosage 2 (top). Because the dosages are chosen along a contour of constant growth, the WT exhibits the same growth at both dosages. However, the growth of the mutants depends on the dose, with mutant 1 growing faster (slower) than mutant 2 under dosage 2 (dosage 1). A key simplifying feature of these evolutionary dynamics is that the selective regime (drug concentration) and the phenotype (effective drug concentration) have the same units.
The online version of this article includes the following video for figure 1: Figure 1-video 1. Detailed selection dynamics associated to Figure 2. https://elifesciences.org/articles/64851#fig1video1

To complete the model described by Equation 5, one must specify the external (true) concentration of each drug ($\vec{x}$); a finite set of scaling parameter pairs $(\alpha_i, \beta_i)$ corresponding to all 'available' mutations; and an initial condition $(\bar{\alpha}(0), \bar{\beta}(0))$ for the mean scaling parameters. When combined with the external drug concentrations, the scaling parameters directly determine the effective drug concentrations ($D^{\mathrm{eff}}_1$, $D^{\mathrm{eff}}_2$) experienced by each mutant according to

$$D^{\mathrm{eff}}_1 = \alpha_i\, x, \qquad D^{\mathrm{eff}}_2 = \beta_i\, y. \tag{6}$$

We note that drug concentrations can be above or below the MIC contour, with higher concentrations leading to population collapse ($G < 0$) and lower concentrations to population growth ($G > 0$) in the ancestral (drug-sensitive) cells. However, selection dynamics remain the same in both cases as selection depends only on differences in growth rates between different subpopulations.
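For concreteness, a minimal numerical sketch of the selection dynamics defined by Equations 1, 3 and 6 is shown below; the response surface, the external dosage and the small set of (alpha, beta) pairs are all illustrative placeholders rather than the measured values used later in the paper:

```python
import numpy as np

def G(x, y, g0=1.0, n=2.0):
    """Illustrative ancestral growth surface (placeholder for a measured one)."""
    dose = x + y
    return g0 * (1.0 - dose**n / (1.0 + dose**n))

# External (true) drug concentrations and a small set of mutants (alpha_i, beta_i).
x0, y0 = 0.6, 0.8
alphas = np.array([1.0, 0.5, 1.2, 0.3])       # ancestor first, then three mutants
betas  = np.array([1.0, 0.9, 0.2, 0.4])
f = np.array([0.99, 0.0033, 0.0033, 0.0034])  # initial frequencies (mostly ancestor)

dt, steps = 0.05, 2000
for _ in range(steps):
    g = G(alphas * x0, betas * y0)            # Equation 1: growth at effective concentrations
    g_bar = np.dot(f, g)
    f += dt * f * (g - g_bar)                 # Equation 3: frequency (replicator) dynamics
    f /= f.sum()                              # guard against numerical drift

# Mean traits (whose rate of change is the covariance in Equation 5) and mean growth.
print("mean alpha:", np.dot(f, alphas),
      "mean beta:", np.dot(f, betas),
      "mean growth:", np.dot(f, G(alphas * x0, betas * y0)))
```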
Equation 5 is an example of the well-known Price equation from evolutionary biology (Price, 1970;Price, 1972;Frank, 1995;Lehtonen et al., 2020), which says that the change in the (population) mean value of a trait is governed by the covariance of traits and fitness. In general, fitness can be difficult to measure and, in some cases, even difficult to define. However, the rescaling assumption of our model replaces fitness with g, which can be directly inferred from measurable properties (the two-drug-response surface) in the ancestral population. In what follows, we will sometimes casually refer to Equation 5 as a 'model,' but it is important to note that the Price equation is not, in and of itself, a mathematical model in the traditional sense. Instead, it is a simple mathematical statement describing the statistical relationship between variables, which are themselves defined in some underlying model. In this case, the mathematical model consists of a collection of exponentially growing populations whose per capita growth rates are linked by scaling relationships. Equation 5-the Price equation -does not include additional assumptions, mechanistic or otherwise, but merely captures statistical relationships between the model variables. We will see, however, that these relationships provide conceptual insight into the interplay between collateral effects and drug interactions.
Equation 5 encodes deceptively rich dynamics that depend on both the interaction between drugs and the collateral effects of resistance. First, it is important to note that $\alpha$s and $\beta$s vary together in pairs, and the evolution of these two traits is not independent. As a result, constraints on the joint co-occurrence of $\alpha_i$ and $\beta_i$ among the mutant subpopulations can significantly impact the dynamics. These constraints correspond to correlations between resistance levels to different drugs -that is, to cross-resistance (when pairs of scaling parameters simultaneously increase resistance to both drugs) or to collateral sensitivity (when one scaling parameter leads to increased resistance and the other to increased sensitivity). In addition, $g$ contains information about the dose-response surface and, therefore, about the interaction between drugs. The evolution of the scaling parameters is not determined solely by the drug interaction or by the collateral effects, but instead by both -quantified by the covariance between these rescaled trait values and the ancestral dose-response surface.
As an example, we integrated the model numerically to determine the dynamics of the mean scaling parameters and the mean growth rate for a population exposed to a fixed concentration of two drugs whose growth surface has been fully specified (Figure 2A). The dynamics can be thought of as motion on the two-dimensional response surface; if the initial population is dominated by the ancestral cells, the mean scaling parameters are approximately 1, and the trajectory therefore starts near the point representing the true drug concentration (in this case, $(x_0, y_0)$), where growth is typically small (Figure 2A). Over time, the mean traits evolve, tracing out a trajectory in the space of scaling parameters (Figure 2B). When the external concentration of drug is specified, these dynamics also correspond to a trajectory through the space of effective drug concentrations, which, in turn, can be linked with an average growth rate through the drug-response surface (Figure 2B). The model therefore describes both the dynamics of the scaling parameters, which describe how resistance to each drug changes over time (Figure 2C), and the dynamics of growth rate adaptation in the population as a whole (Figure 2D).
Figure 2. Using the rescaling framework for predicting resistance evolution to two drugs in a population of bacterial cells (axes: drug 1 concentration $x$, drug 2 concentration $y$). (A) Growth landscape (per capita growth rate relative to untreated cells) as a function of drug concentration for two drugs (drug 1, tigecycline; drug 2, ciprofloxacin; concentrations measured in µg/mL) based on measurements in Dean et al., 2020. We consider evolution of resistance in a population exposed to a fixed external drug concentration $(x_0, y_0) = (0.03, 0.05)$ (white asterisk). (B) Growth landscape from (A) with axes rescaled to reflect scaling parameters $(\alpha, \beta)$. The point $(x_0, y_0)$ in drug concentration space now corresponds to the point $(1, 1)$ in scaling parameter space; intuitively, a strain characterized by scaling parameters of unity experiences an effective drug concentration equal to the true external concentration. We consider a population that is primarily ancestral cells (fraction 0.99), with the remaining fraction uniformly distributed between an empirically measured collection of mutants (each corresponding to a single pair of scaling parameters, white circles). At time 0, the population is primarily ancestral cells and is therefore centered very near $(1, 1)$ (gray circle). Over time, the mean value of each scaling parameter decreases as the population becomes increasingly resistant to each drug (black curve). (C) The two-dimensional mean scaling parameter trajectory from (B) plotted as two time series, one for $\bar{\alpha}$ (red) and one for $\bar{\beta}$ (blue). For the detailed selection dynamics, see Figure 1-video 1. (D) The mean fitness of the population during evolution is expected to increase and can be computed from the selection dynamics by numerically integrating the model at each time step.
Selection dynamics depend on drug interaction and collateral effects
This rescaling model indicates that selection is determined by both the drug interaction and the collateral effects, consistent with previous experimental findings. As a simple example, consider the case of a fixed drug interaction but different qualitative types of collateral effects -that is, different statistical relationships between resistance to drug 1 (via $\alpha$) and resistance to drug 2 (via $\beta$). In Figure 3A, we consider cases where resistance is primarily to drug 2 (black), primarily to drug 1 (cyan), strongly positively correlated (cross-resistance, pink), and strongly negatively correlated (collateral sensitivity, green). Using the same drug interaction surface as in Figure 2, we find that a mixture of both drugs leads to significantly different trajectories in the space of scaling parameters (Figure 3B) and, in turn, significantly different rates of growth adaptation. In this example, cross-resistance (pink) leads to rapid increases in resistance to both drugs (rapid decreases in scaling parameters, Figure 3C, D) and the fastest growth adaptation (Figure 3E). By contrast, if resistance is limited primarily to one drug (cyan or black), growth adaptation is slowed (Figure 3E) -intuitively, purely horizontal or purely vertical motion in the space of scaling parameters leads to only a modest change in growth because the underlying response surface is relatively flat in those directions, meaning that the rescaled concentration, in each case, lies near the original contour (Figure 3). As a result of the contour shapes (drug interaction), resistance to both drugs can develop at approximately the same rate (for example), even when collateral structure suggests resistance to one drug will dominate (e.g., cyan case in Figure 3). We note that the dynamics will depend on the (true) external drug concentrations $(x_0, y_0)$, even if the rescaling parameters remain identical, because a given rescaling transformation will lead to different effective drug concentrations, and therefore different growth rates, for different values of $(x_0, y_0)$. When both collateral effects and drug interactions vary, the dynamics can be considerably more complex, and the dominant driver of adaptation can be drug interaction, collateral effects, or a combination of both. Previous studies support this picture as adaptation has been observed to be driven primarily by drug interactions (Chait et al., 2007;Michel et al., 2008;Hegreness et al., 2008), primarily by collateral effects (Munck et al., 2014;Barbosa et al., 2018), or by combinations of both (Baym et al., 2016b;Barbosa et al., 2018;Dean et al., 2020). Figure 4 shows schematic examples of growth rate adaptation for different types of collateral effects (rows, ranging from cross-resistance [top] to collateral sensitivity [bottom]) and drug interactions (columns, ranging from synergy [left] to antagonism [right]). The growth adaptation may also depend sensitively on the external environment -that is, on the true external drug concentration (blue, cyan, and red). In the absence of both drug interactions (linear isoboles) and collateral effects (uncorrelated scaling parameters), adaptation is slower when the drugs are used in combination than when they are used alone, consistent with the fact that adaptation to multiple stressors is expected to take longer than adaptation to each stressor alone (Figure 4; middle row, middle column).
As in Figure 3, modulating collateral effects with drug interaction fixed can have dramatic impacts on adaptation (Figure 4, columns). On the other hand, modulating the drug interaction in the absence of collateral effects will also significantly impact adaptation, with synergistic interactions leading to accelerated adaptation relative to other types of drug interaction (Figure 4, middle row; compare green curves across row). Similar interaction-driven adaptation has been observed in multiple bacterial species (Chait et al., 2007;Michel et al., 2008;Hegreness et al., 2008;Dean et al., 2020).
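The collateral structures in Figure 4 (positively correlated, uncorrelated or negatively correlated resistance levels) can be generated, as described in the Figure 4 caption, by sampling scaling-parameter pairs from a bivariate normal distribution; the means and variances below are placeholders:

```python
import numpy as np

def sample_scaling_pairs(n_mutants, rho, mean=0.6, sigma=0.2, seed=0):
    """Draw (alpha_i, beta_i) pairs with correlation rho between resistance levels.
    rho > 0: cross-resistance; rho < 0: collateral sensitivity; rho = 0: uncorrelated."""
    rng = np.random.default_rng(seed)
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    pairs = rng.multivariate_normal([mean, mean], cov, size=n_mutants)
    return np.clip(pairs, 0.05, None)  # keep effective concentrations positive

cross_resistant      = sample_scaling_pairs(100, rho=+0.6)
collateral_sensitive = sample_scaling_pairs(100, rho=-0.6)
print(np.corrcoef(cross_resistant.T)[0, 1], np.corrcoef(collateral_sensitive.T)[0, 1])
```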
Selection as weighted gradient dynamics on the ancestral response surface
To gain intuition about the dynamics in the presence of both collateral effects and drug interactions, we consider an approximately monomorphic population where scaling parameters are initially narrowly distributed around their mean values. In this case, we can Taylor expand the growth rate $g_i = G(\alpha_i x_0, \beta_i y_0)$ about the mean scaling parameters,

$$g_i \approx G(\bar{\alpha} x_0, \bar{\beta} y_0) + (\alpha_i - \bar{\alpha})\, x_0\, \partial_x G + (\beta_i - \bar{\beta})\, y_0\, \partial_y G, \tag{7}$$

where we have neglected terms quadratic and higher, and the partial derivatives are evaluated at the mean effective concentrations $(\bar{\alpha} x_0, \bar{\beta} y_0)$. In this regime, $g_i$ is a linear function of the scaling parameters, and the covariances can therefore be written as

$$\mathrm{Cov}_{x_0}(\alpha, g) = \sigma_{\alpha\alpha}\, x_0\, \partial_x G + \sigma_{\alpha\beta}\, y_0\, \partial_y G, \qquad \mathrm{Cov}_{y_0}(\beta, g) = \sigma_{\alpha\beta}\, x_0\, \partial_x G + \sigma_{\beta\beta}\, y_0\, \partial_y G, \tag{8}$$

where $\sigma_{uv} \equiv \overline{uv} - \bar{u}\,\bar{v}$ and we have used the fact that $\bar{g} \equiv \sum_i f_i\, G(\alpha_i x_0, \beta_i y_0) \approx G(\bar{\alpha} x_0, \bar{\beta} y_0)$ to first order. This is a type of weak-selection approximation: the trait that is evolving may have a very strong effect on fitness, but if there is only minor variation in that trait in the population, there will only be minor differences in fitness. Equation 5 for the rate of change in mean traits therefore reduces to

$$\frac{d\bar{\mathbf{a}}}{dt} = \Sigma\, \tilde{\nabla} G, \tag{9}$$

where $\bar{\mathbf{a}}$ is a vector of mean scaling factors with components $\bar{\alpha}$ and $\bar{\beta}$, $\Sigma$ is a covariance-like matrix given by

$$\Sigma = \begin{pmatrix} \sigma_{\alpha\alpha} & \sigma_{\alpha\beta} \\ \sigma_{\alpha\beta} & \sigma_{\beta\beta} \end{pmatrix}, \tag{10}$$

and $\tilde{\nabla} G$ is a weighted gradient of the function $G(x, y)$ evaluated at the mean scaling parameters,

$$\tilde{\nabla} G = \begin{pmatrix} x_0\, \partial_x G(x, y) \\ y_0\, \partial_y G(x, y) \end{pmatrix} \Bigg|_{(x, y) = (\bar{\alpha} x_0,\, \bar{\beta} y_0)}. \tag{11}$$

Equation 9 provides an accurate description of the full dynamics across a wide range of conditions (Figure 4-figure supplement 1) and has a surprisingly simple interpretation. Adaptation dynamics are driven by a type of weighted gradient dynamics on the response surface, with the weighting determined by the correlation coefficients describing resistance levels to the two drugs. In the absence of collateral effects -for example, when $\Sigma$ is proportional to the identity matrix -the scaling parameters trace out trajectories of steepest ascent on the two-drug-response surface. That is, in the absence of constraints on the available scaling parameters, adaptation follows the gradient of the response surface to rapidly achieve increased fitness, and because the response surface defines the type of drug interaction, that interaction governs the rate of adaptation. On the other hand, collateral effects introduce off-diagonal elements of $\Sigma$, biasing trajectories away from pure gradient dynamics to account for the constraints of collateral evolution.

Figure 3 (caption continued). The dynamics of mean growth rate for the four cases. For the underlying heterogeneity, we drew 100 random $\alpha_i$ and $\beta_i$ as shown in (A) and initialized dynamics at ancestor frequency 0.99, with the remaining 1% evenly distributed among available mutants. Each collateral structure in terms of the available $(\alpha_i, \beta_i)$ leads to different final evolutionary dynamics under the same two-drug treatment. In this particular case, the fastest increase in resistance to the two drugs and in growth rate occurs for the collateral resistance (positive correlation) case. The time course of the detailed selective dynamics in these four cases is depicted in Figure 3-videos 1-4. The online version of this article includes the following videos for figure 3: Figure 3-video 1. Evolutionary dynamics for collateral effects in Figure 3A. https://elifesciences.org/articles/64851#fig3video1 Figure 3-video 2. Evolutionary dynamics for collateral effects in Figure 3B. https://elifesciences.org/articles/64851#fig3video2 Figure 3-video 3. Evolutionary dynamics for collateral effects in Figure 3C. https://elifesciences.org/articles/64851#fig3video3 Figure 3-video 4. Evolutionary dynamics for collateral effects in Figure 3B.
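A numerical sketch of the reduced dynamics in Equations 9-11 is given below, using finite differences for the weighted gradient and a fixed, hand-picked covariance matrix; the response surface and all numbers are illustrative placeholders:

```python
import numpy as np

def G(x, y, n=2.0):
    dose = x + y
    return 1.0 - dose**n / (1.0 + dose**n)    # illustrative ancestral response surface

def weighted_gradient(a_bar, b_bar, x0, y0, h=1e-5):
    """Equation 11: gradient of G at the mean effective concentrations, weighted by (x0, y0)."""
    x, y = a_bar * x0, b_bar * y0
    dGdx = (G(x + h, y) - G(x - h, y)) / (2 * h)
    dGdy = (G(x, y + h) - G(x, y - h)) / (2 * h)
    return np.array([x0 * dGdx, y0 * dGdy])

# Covariance-like matrix Sigma (Equation 10); off-diagonal terms encode collateral effects.
Sigma = np.array([[0.04, -0.02],
                  [-0.02, 0.04]])             # negative covariance: collateral sensitivity

x0, y0 = 0.6, 0.8
a = np.array([1.0, 1.0])                      # mean scaling parameters (alpha_bar, beta_bar)
dt = 0.1
for _ in range(500):                          # Equation 9: d a_bar / dt = Sigma * grad G
    a += dt * Sigma @ weighted_gradient(a[0], a[1], x0, y0)

print("final mean scaling parameters:", a)
```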
Model predicts experimentally observed adaptation of growth and resistance
Our model makes testable predictions for adaptation dynamics of both the population growth rate and the population-averaged resistance levels to each drug (i.e., the mean scaling parameters) for a given drug-response surface, a given set of available mutants, and a specific combination of (external) drug dosages. To compare predictions of the model with experiment, we solved Equation 5 for the 11 different dosage combinations of tigecycline (TGC) and ciprofloxacin (CIP) used to select drug-resistance mutants in Dean et al., 2020. We assumed that the initial population was dominated by ancestral cells ($\alpha = \beta = 1$) but also included a subpopulation of resistant mutants whose scaling parameters were uniformly sampled from those measured across multiple days of the evolution (see Materials and methods). The model predicts different trajectories for different external doses (selective regimes: red to blue, Figure 5, top panel), leading to dramatically different changes in resistance (IC 50 ) and growth rate adaptation (Figure 5, bottom panels), similar to those observed in experiment. Specifically, the model predicts (and experiments confirm) a dramatic decrease in CIP resistance as TGC concentration is increased along a contour of constant growth. As TGC eclipses a critical concentration of approximately 0.025 µg/mL, selection for both CIP resistance and increased growth is eliminated. We note that comparisons between the model and experiment involve no adjustable parameters, and while the model captures the qualitative trends, it slightly but systematically underestimates the growth across TGC concentrations -perhaps suggesting additional evolutionary dynamics not captured by the model. Intuitively, the experimental results can be explained by the strongly antagonistic interaction between the two drugs, which reverses the selection for resistance to CIP at sufficiently high TGC concentrations (compare CIP resistance at TGC = 0 and TGC = 0.04 µg/mL, which both involve CIP ≈ 0.2 µg/mL; similar results have been seen in other species and for other drug combinations; Chait et al., 2007). We also compared model predictions with experimental adaptation to two additional drug pairs (Figure 5-figure supplements 1 and 2) and again found that the model captures the qualitative features of both resistance changes to the component drugs and growth adaptation.

Figure 4. Adaptation depends on drug interactions and collateral effects of resistance. Drug interactions (columns) and collateral effects (rows) modulate the rate of growth adaptation (main nine panels). Evolution takes place at one of three dosage combinations (blue, drug 2 only; cyan, drug combination; red, drug 1 only) along a contour of constant growth in the ancestral growth response surface (bottom panels). Drug interactions are characterized as synergistic (left), additive (center), or antagonistic (right) based on the curvature of the isogrowth contours. Collateral effects describe the relationship between the resistance to drug 1 and the resistance to drug 2 in an ensemble of potential mutants. These resistance levels can be positively correlated (top row, leading to collateral resistance), uncorrelated (center row), or negatively correlated (third row, leading to collateral sensitivity). Growth adaptation (main nine panels) is characterized by growth rate over time, with dashed lines representing evolution in single drugs and solid lines indicating evolution in the drug combination. In this example, response surfaces are generated with a classical pharmacodynamic model (symmetric in the two drugs) that extends Loewe additivity by including a single drug interaction index that can be tuned to create surfaces with different combination indices (Greco et al., 1995). The initial population consists of primarily ancestral cells ($\alpha = \beta = 1$) along with a subpopulation (fraction $10^{-2}$) of mutants uniformly distributed among the different phenotypes. Scaling parameters are sampled from a bivariate normal distribution with equal variances ($\sigma_\alpha = \sigma_\beta$) and correlation ranging from 0.6 (top row) to -0.6 (bottom row).

Figure 5. (A) Experimentally measured growth response surface for tigecycline (TGC) and ciprofloxacin (CIP). Circles represent 11 different adaptation conditions, each corresponding to a specific dosage combination $(x_0, y_0)$. Solid lines show 100 simulated adaptation trajectories (i.e., changes in the mean effective drug concentrations $D^{\mathrm{eff}}_1$ and $D^{\mathrm{eff}}_2$ over a total time $T$) predicted from the full rescaling model. Black x's indicate the mean (across all trajectories) value of the effective drug concentration at the end of the simulation. In each case, the set of available mutants -and hence, the set of possible scaling parameters $\alpha_i$ and $\beta_i$ -is probabilistically determined by uniformly sampling approximately 10 scaling parameter pairs from the ensemble experimentally measured across parallel evolution experiments involving this drug pair (see Materials and methods). (B) Half-maximal inhibitory concentration (IC 50 , normalized by that of the ancestral strain) for populations adapted for a fixed time period $T \approx 72$ hr in each of the 11 conditions in the top panel. The solid curve is the mean trajectory from the rescaling model (normalized IC 50 corresponds to the reciprocal of the corresponding scaling constant) for 100 simulated samples of the scaling parameter pairs; the shaded region is ±1 standard deviation over all trajectories. Small circles are experimental observations from individual populations; larger markers are average experimental results over all populations adapted to a given condition. Note that drug 1 (TGC) concentration (x-axis) here is just a proxy for the different conditions as you move left to right along the contour in panel (A). (C) Change in population growth rate ($g_m - g_a$, where $g_m$ is the per capita growth of the adapted (mutant) population and $g_a$ that of the ancestral strain; all growth rates are normalized to ancestral growth in the absence of drugs) for populations adapted in each condition. The solid curve is the mean trajectory across samplings, with the shaded region ±1 standard deviation. Small circles are results from individual populations; larger markers are averages over all populations adapted to a given condition. Experimental data from Dean et al., 2020.
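As the Figure 5 caption notes, the model's predicted (normalized) IC 50 is simply the reciprocal of the corresponding mean scaling parameter, so converting simulated trait values into predicted fold changes in resistance is a one-line computation; the values below are hypothetical:

```python
mean_alpha, mean_beta = 0.25, 0.8       # hypothetical evolved mean scaling parameters
fold_ic50_drug1 = 1.0 / mean_alpha      # normalized IC50 = reciprocal of the scaling constant
fold_ic50_drug2 = 1.0 / mean_beta
print(fold_ic50_drug1, fold_ic50_drug2) # 4-fold and 1.25-fold predicted increases in IC50
```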
Effects of mutations
While we have focused on selection dynamics, the model can be extended to include mutation events linking different phenotypes (i.e., linking different pairs of scaling parameters). These mutational pathways may be necessary to capture certain evolutionary features -for example, historical contingencies between sequentially acquired mutations (Barbosa et al., 2017;Card et al., 2019;Das et al., 2020). To incorporate mutational structure and processes, we modify the growth equation for each subpopulation $n_i$ to include mutational influx and outflux, where $\mu$ is the mutation rate and $m_{ji}$ is the probability that strain $j$ mutates to strain $i$, given that there is a mutation in strain $j$. We will refer to the matrix formed by the parameters $m_{ji}$ as the mutation matrix, as it contains information about allowed mutational trajectories linking different phenotypes (Day and Gandon, 2006). The Price equation for the mean traits (the mean scaling parameters) then acquires an additional mutational term involving $\bar{\alpha}_m$ and $\bar{\beta}_m$, the mean trait values among the mutants that arise. When the mutation matrix has a particularly simple structure, the effects of mutation will be similar to those of selection. For example, when mutations occur with uniform probability between the ancestral strain and each mutant, the evolutionary dynamics are qualitatively similar to those in the absence of mutation (Figure 6-figure supplement 1, compare to Figure 4). On the other hand, certain mutational structures can lead to new behavior. As an example, we consider a toy model consisting of four phenotypes defined by the level of resistance (sensitive, S, or resistant, R) to each of two drugs. The phenotypes are designated SS ($\alpha = \beta = 1$), SR, RS, and RR. Note that these phenotypes do not assume, a priori, any particular relationships between the underlying genotypes. For example, each phenotype could correspond to a single mutation -in which case the RR phenotype would be said to exhibit strong cross-resistance -or alternatively, the phenotypes could correspond to single drug mutants (SR and RS) and double mutants (RR), which implies a particular sequence in which the mutations can be acquired. These two situations would lead to significant differences in the expected structure of the mutational matrix and, as we will see, in the evolutionary dynamics.
For simplicity, we consider two different mutational structures: one corresponding to direct and uniform pathways between the ancestor (SS) and all mutant phenotypes (SR, RS, and RR), and the second corresponding to sequential pathways from ancestor to single mutants (SR, RS) and then from single mutants to double mutants (Figure 6). To illustrate the impact of mutational structure, we consider two drugs that interact antagonistically and choose the external drug concentrations such that $D_1 > D_2$ (Figure 6C). While both mutational structures lead eventually to a population of double-resistant phenotypes (RR), the sequential pathway leads to slower adaptation of growth as the population evolves first toward the most fit single mutant (in this case, RS, because the drug combination contains a higher concentration of drug 1) before eventually arriving at RR (Figure 6C-E).
The specific trajectory will of course depend on both the drug interaction (the shape of the growth contours) and the specific dosage combination (see Figure 6-figure supplement 2).
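A sketch of how the two mutational structures above could be added to the selection dynamics is shown below. The way mutation is coupled to reproduction here (a fraction μ of offspring mutate according to m_ji) is an assumption of this sketch, as are the placeholder response surface and scaling parameters:

```python
import numpy as np

def G(x, y, n=2.0):
    dose = x + y
    return 1.0 - dose**n / (1.0 + dose**n)   # placeholder ancestral response surface

# Four phenotypes: SS, RS (resistant to drug 1), SR (resistant to drug 2), RR.
labels = ["SS", "RS", "SR", "RR"]
alphas = np.array([1.0, 0.2, 1.0, 0.2])      # hypothetical scaling parameters
betas  = np.array([1.0, 1.0, 0.2, 0.2])

# Mutation matrices m[j, i]: probability that a mutation in strain j produces strain i.
uniform = np.array([[0, 1/3, 1/3, 1/3],      # SS mutates directly to any mutant
                    [0, 0,   0,   0  ],
                    [0, 0,   0,   0  ],
                    [0, 0,   0,   0  ]])
sequential = np.array([[0, 0.5, 0.5, 0],     # SS -> RS or SR
                       [0, 0,   0,   1],     # RS -> RR
                       [0, 0,   0,   1],     # SR -> RR
                       [0, 0,   0,   0]])

def simulate(m, x0=0.8, y0=0.4, mu=1e-3, dt=0.05, steps=4000):
    n = np.array([1.0, 0.0, 0.0, 0.0])       # start as pure ancestor (SS)
    for _ in range(steps):
        g = G(alphas * x0, betas * y0)
        births = g * n
        # assumed coupling: (1 - mu) of births stay in-type, mu are redistributed via m[j, i]
        n = n + dt * ((1 - mu) * births + mu * births @ m)
        n /= n.sum()                         # track frequencies only
    return dict(zip(labels, np.round(n, 3)))

print("uniform:   ", simulate(uniform))
print("sequential:", simulate(sequential))   # passes through RS before reaching RR
```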
There are other types of mutational constraints that can be implemented in the formalism, including explicit functional dependencies between different antibiotic resistance phenotypes. As an example, we assumed a distance-based mutation matrix where the mutation probability between different strains depends on the Euclidean distance between their scaling parameter pairs $(\alpha_i, \beta_i)$. Intuitively, this structure means that mutations are more likely between strains with similar rescaling parameters (and therefore similar levels of resistance to the component drugs). As expected, this constrained mutational structure leads to slower growth rate adaptation than a model with a simple uniform mutation structure (Figure 6-figure supplement 3). These effects are further compounded by statistical properties of the collateral structures (e.g., positive or negative correlations between the possible $\alpha$s and $\beta$s; Figure 3), the specific (stochastic) realization of those scaling parameters (Figure 6-figure supplement 4), and the precise shape of the two-drug growth surface. Note, however, that these dynamics depend fundamentally on the global mutation rate $\mu$, which modulates the relative balance between selection and mutation in a given environment.
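One way to encode the distance-based constraint described above is sketched below; the exponential kernel and its length scale are illustrative choices rather than the specific form used in the study:

```python
import numpy as np

def distance_based_mutation_matrix(alphas, betas, length_scale=0.2):
    """m[j, i] ~ exp(-d_ji / length_scale), where d_ji is the Euclidean distance between
    the scaling-parameter pairs of strains j and i; rows are normalized so each strain's
    mutational output is a probability distribution over the other strains."""
    pairs = np.column_stack([alphas, betas])
    d = np.linalg.norm(pairs[:, None, :] - pairs[None, :, :], axis=-1)
    m = np.exp(-d / length_scale)
    np.fill_diagonal(m, 0.0)                  # no self-mutation
    return m / m.sum(axis=1, keepdims=True)

m = distance_based_mutation_matrix(np.array([1.0, 0.8, 0.3]), np.array([1.0, 0.9, 0.2]))
print(np.round(m, 2))
```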
Evolutionary dynamics under temporal sequences of drug combinations
Evolutionary dynamics in the presence of multiple drugs can be extremely complex. While past work has focused primarily on the use of either temporal sequences or (simultaneous) combinations of antibiotics, more complex scenarios are possible, in principle, but difficult to analyze even in theoretical models. The simplicity of the Price equation framework allows us to investigate evolutionary dynamics in response to a more complex scenario: temporal sequences of antibiotic combinations.
As proof of principle, we numerically studied time-dependent therapies consisting of two sequential regimes (treatment A and treatment B; Figure 7) characterized by different dosage combinations of tigecycline (drug 1) and ciprofloxacin (drug 2) (the growth surface was measured in Dean et al., 2020). The two dosage combinations were chosen to lie along a contour of constant growth (Figure 7A), meaning that the net inhibitory effects of A and B are the same when applied to ancestral cells. Using experimental estimates for the scaling parameters $(\alpha_i, \beta_i)$ (Figure 3-figure supplement 1C), we find that both the resistance levels to the two drugs and the growth rate increase during treatment, as one might expect. However, the dynamics of these changes depend on both the relative duration of each treatment and the total treatment length (Figure 7-figure supplements 1 and 2). For example, consider a treatment of total length $T$ consisting of an initial period of treatment A followed by a final period of treatment B. If we vary the relative length of the two epochs while keeping the total treatment length fixed, we find that the resistance to each drug and the growth rate increase monotonically as the fraction of time in B increases (Figure 7H-J). This is a consequence of the interplay between the distribution of available mutants and the shape of the growth surface under these particular two drugs; the mutants tend to feature higher levels of resistance to drug 2 than drug 1, yet the benefits of this resistance -that is, the increase in growth rate due to rescaling the concentration of drug 2 -are realized much more strongly under condition B than condition A (where rescaling tends to produce effective drug concentrations that lie near the original growth contour). It is notable, however, that the effects (both resistance levels and final growth) change nonlinearly as the fraction of time in B is increased -that is, even in this simple model, the effects of a two-epoch (A then B) treatment cannot be inferred as a simple linear interpolation between the effects of A-only and B-only treatments.
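A sketch of the two-epoch protocol is shown below, reusing the illustrative selection dynamics from the earlier sketches; the surface, the mutant pool and the dosages for treatments A and B are placeholders, chosen only so that A and B inhibit the ancestor equally:

```python
import numpy as np

def G(x, y, n=2.0):
    dose = x + y
    return 1.0 - dose**n / (1.0 + dose**n)    # placeholder response surface

alphas = np.array([1.0, 0.3, 0.9, 0.5])       # ancestor plus three hypothetical mutants
betas  = np.array([1.0, 0.9, 0.2, 0.5])
f0 = np.array([0.99, 0.0033, 0.0033, 0.0034])

def evolve(f, dose, duration, dt=0.05):
    x0, y0 = dose
    for _ in range(int(duration / dt)):
        g = G(alphas * x0, betas * y0)
        f = f + dt * f * (g - np.dot(f, g))   # selection only (Equation 3)
        f = np.clip(f, 0, None); f /= f.sum()
    return f

dose_A, dose_B = (0.9, 0.1), (0.1, 0.9)       # equal ancestral inhibition under this surface
T = 100.0
for frac_B in (0.0, 0.25, 0.5, 0.75, 1.0):    # vary the relative length of epoch B
    f = evolve(f0.copy(), dose_A, (1 - frac_B) * T)
    f = evolve(f, dose_B, frac_B * T)
    print(frac_B, "mean alpha:", round(np.dot(f, alphas), 3),
          "mean beta:", round(np.dot(f, betas), 3))
```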
Discussion
Antibiotic resistance is a growing threat to modern medicine and public health. Multidrug therapies are a promising approach to both increase efficacy of treatment and prevent evolution of resistance, but the two effects can be coupled in counterintuitive ways through the drug interactions and collateral effects linking the component drugs. Our results provide a unified framework for incorporating both drug interactions and collateral effects to predict phenotypic adaptation on coarse-grained fitness landscapes that are measurable features of the ancestral population (Table 1). Special cases of the model reproduce previous experimental results that appear, on the surface, to be contradictory; indeed, adaptation can be driven primarily by drug interactions, primarily by collateral effects, or by a combination of both, and the balance of these effects can be shifted using tunable properties of the system (i.e., the ratio of drug dosages). Our model was inspired by rescaling arguments that were originally introduced in Chait et al., 2007 and have since been shown to capture phenotypic features of dose-response surfaces or adaptation in multiple bacterial species (Michel et al., 2008; Hegreness et al., 2008; Torella et al., 2010; Wood et al., 2014; Dean et al., 2020; Das et al., 2020). Our results complement these studies by showing how similar rescaling assumptions, when formalized in a population dynamics model, lead to testable predictions for the dynamics of both growth adaptation and phenotype (resistance) evolution. Importantly, the model also has a simple intuitive explanation, with evolutionary trajectories driven by weighted gradient dynamics on two-dimensional landscapes, where the local geometry of the landscape reflects the drug interaction and the collateral effects constrain the direction of motion. We have also illustrated how de novo mutation can be integrated in the same framework, provided the mutational structure among a pool of possible phenotypes is known or if specific assumptions are made regarding functional links and constraints for shifts between different resistance levels. Mutation at a constant rate is indeed a classic version of the complete Price equation. However, more complex mutational effects, including stress-dependent modulation of the mutation rate (Kohanski et al., 2010; Vasse et al., 2020), could be included as a more flexible, tunable term in Equation 5.
It is important to keep in mind several limitations of our approach. The primary rescaling assumption of the model is that growth of a drug-resistant mutant is well approximated by growth of the ancestral strain at a new 'effective' drug concentration, one that differs from the true external concentration. This approximation has considerable empirical support (Chait et al., 2007; Wood et al., 2014; Das et al., 2020) but is not expected to always hold; indeed, there are examples where mutations lead to more complex nonlinear transformations of the drug-response surface (Wood et al., 2014; Munck et al., 2014). In addition, it is possible that selection can act on some other feature of the dose-response curve characterizing single-drug effects, modulating, for example, its steepness (rather than merely its scale). While these effects could in principle be incorporated into our model, for example by assuming transformations of the ancestral surface, perhaps occurring on a slower timescale, that go beyond simple rescaling, we have not focused on those cases. For simplicity, we have also neglected a number of features that may impact microbial evolution. For example, we have assumed that different subpopulations grow exponentially, neglecting potential interactions including clonal interference (Gerrish and Lenski, 1998), intercellular (Koch et al., 2014; Hansen et al., 2017; Hansen et al., 2020) or intra-lineage (Ogbunugafor and Eppstein, 2017) competition, and cooperation (Yurtsev et al., 2013; Sorg et al., 2016; Estrela and Brown, 2018; Frost et al., 2018; Hallinen et al., 2020), as well as potential effects of demographic noise and population extinction (Coates et al., 2018). These complexities could also be incorporated in our model, perhaps at the expense of some intuitive interpretations (e.g., weighted gradient dynamics) that currently apply. In addition, we have not explicitly included a fitness cost of resistance (Andersson and Hughes, 2010), that is, we assume that growth rates of mutants and ancestral cells are identical in the absence of drug. This assumption could be relaxed by including a prefactor to the growth function, g_i → (1 - γ_i(a_i, b_i)) g_i, where γ_i(a_i, b_i) is the cost of resistance, which in general depends on the scaling parameters (if not, it can be easily incorporated as a constant). While such fitness costs have traditionally been seen as essential for reversing resistance (with, e.g., drug-free 'holidays'; Dunai et al., 2019), our results underscore the idea that reversing adaptation relies on differential fitness along a multidimensional continuum of environments, not merely binary (drug/no drug) conditions. Our results indicate that resistance and bacterial growth can be significantly constrained by optimal tuning of multidrug environments, even in the absence of fitness cost. Finally, our model deals only with heritable resistance and therefore may not capture phenotypic effects associated with, for example, transient resistance (El Meouche and Dunlop, 2018) or cellular hysteresis (Roemhild et al., 2018).
Our goal was to strike a balance between analytical tractability and generality vs. biological realism and system specificity. But we stress that the predictions of this model do not come for free; they depend, for example, on properties of the dose-response surfaces, the collection of scaling parameters, and the specific mutational structure. In many cases, these features can be determined empirically; in other cases, some or all of these properties may be unknown. The evolutionary predictions of the model will ultimately depend on these inputs, and it is difficult to draw general conclusions (e.g., 'synergistic combinations always accelerate resistance') that apply across all classes of drug combinations, collateral effects, and mutational structures. But we believe the framework is valuable precisely because it generates testable predictions in situations that might otherwise seem intractably complex.
Our approach also has pedagogical value as it connects evolutionary effects of drug combinations and collateral effects with well-established concepts in evolutionary biology and population genetics (Price, 1970; Price, 1972; Day and Gandon, 2006). Approximations similar to Equation 9 have been derived previously in quantitative genetics models (Abrams et al., 1993; Taylor, 1996) and other applications of the Price equation (Lehtonen, 2018). While the gradient approximation does not require that the population be monomorphic with rare mutants or a particular form for the phenotype distribution, it does require that the majority of trait values (here scaling parameters) are contained in a regime where G(x, y) is approximately linear; in our case, that linearity arises by Taylor expansion and neglecting higher-order deviations from the population mean. More generally, the direction of evolutionary change in our model is determined by the gradient of the fitness function ∇G evaluated at the mean trait values (a, b); when the gradient vanishes, this point corresponds to a singular point (Waxman and Gavrilets, 2005) or an evolutionarily singular strategy (Geritz et al., 1998). Whether such a point may be reached (convergence stability) and how much variance in trait values can be maintained around such a point (whether it is evolutionarily stable [ESS]) depend on other features, such as higher-order derivatives (Otto and Day, 2011; Eshel et al., 1997; Lehtonen, 2018; Smith, 1982; Parker and Smith, 1990). In the case of drug interactions, the fitness landscape in (a, b) space will typically have a single maximum at (0, 0) corresponding to effective drug concentrations of zero. However, whether that point is reachable in general or within a given time frame will depend on the available mutants preexisting at very low frequencies in the population, or on the speed and biases in the mutational process itself, if such mutants are to be generated de novo during treatment. In principle, the model also allows for long-term coexistence between different strains; in that case, the rescaled effective drug concentrations experienced by both strains would fall along a single contour of constant growth. Hence, while variance in the population growth will necessarily decrease over time, variance in the traits (scaling parameters) themselves can change non-monotonically.
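As a rough, self-contained illustration of this weighted-gradient picture (and of the two-epoch treatments discussed earlier), the toy script below evolves the mean scaling parameters on an assumed growth surface; the surface, the trait covariance matrix, the dosage schedule, and all numerical values are placeholders for illustration, not the measured data or fitted model from the paper.

```python
import numpy as np

def growth(a, b, D1, D2):
    """Toy ancestral growth surface evaluated at effective drug
    concentrations (a*D1, b*D2); a simple decreasing form, not the
    measured tigecycline-ciprofloxacin surface."""
    x, y = a * D1, b * D2
    return 1.0 / (1.0 + x + y)

def gradient(a, b, D1, D2, eps=1e-6):
    """Numerical gradient of growth with respect to the traits (a, b)."""
    ga = (growth(a + eps, b, D1, D2) - growth(a - eps, b, D1, D2)) / (2 * eps)
    gb = (growth(a, b + eps, D1, D2) - growth(a, b - eps, D1, D2)) / (2 * eps)
    return np.array([ga, gb])

# Assumed trait covariance matrix; its off-diagonal terms stand in for
# collateral effects coupling resistance to the two drugs.
C = np.array([[0.02, -0.01],
              [-0.01, 0.02]])

traits = np.array([1.0, 1.0])        # mean scaling parameters (a, b), ancestor = 1
dt, steps = 0.1, 400
schedule = [(2.0, 0.5), (0.5, 2.0)]  # dosage pairs: treatment A, then treatment B

for (D1, D2) in schedule:
    for _ in range(steps // len(schedule)):
        # Mean traits move along the covariance-weighted growth gradient.
        traits = traits + dt * C @ gradient(traits[0], traits[1], D1, D2)
        traits = np.maximum(traits, 0.0)   # keep scaling parameters non-negative

print("final mean scaling parameters:", np.round(traits, 3))
```

On this toy surface the gradient points toward smaller effective concentrations, so the mean traits drift toward (0, 0), consistent with the single maximum described above; switching the dosage pair partway through mimics the two-epoch (A then B) treatments.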
The framework is sufficiently flexible to integrate different strands of empirical data, and our results underscore the need for quantitative phenotype data tracking resistance to multiple drugs simultaneously, especially when drug combinations are potentially driving selection dynamics. At an epidemiological level, the dominant approach in describing resistance has been to use fixed breakpoints in MIC and track the percentage of isolates with MIC above (drug-resistant) or below (drug-sensitive) that breakpoint (e.g., ECDC, 2019; Chang et al., 2015). By missing or decoupling patterns of co-occurrence between MICs to different drugs across isolates, this approach remains incomplete for mechanistic predictions. Our framework suggests that going beyond such a binary description towards a more continuous and multidimensional phenotypic characterization of drug resistance is possible, with applications not just in microbiology but also in the evolutionary epidemiology of drug resistance (Day and Gandon, 2012; Day et al., 2020). In the long run, these advances may yield better and more precise predictions of resistance evolution at multiple scales and, in turn, optimized treatments that balance the short-term inhibitory effects of a drug cocktail with its inseparable, longer-term evolutionary consequences.
Perhaps most importantly, our approach provides a low-dimensional approximation to the high-dimensional dynamics governing the evolution of resistance. In contrast to classical genotype-centric approaches to resistance, our model uses rescaling arguments to connect measurable traits of resistant cells (scaling parameters) to environment-dependent phenotypes (growth). This rescaling dramatically reduces the complexity of the problem, as the two-drug-response surfaces, and effectively the fitness landscape, can be estimated from only single-drug dose-response curves. Such coarse-grained models can help extract simplifying principles from otherwise intractable complexity (Shoval et al., 2012; Hart et al., 2015). In many ways, the classical Price equation performs a similar function, revealing links between trait-fitness covariance and selection that, at a mathematical level, are already embedded in simple models of population growth. In the case of multidrug resistance, this formalism reveals that drug interactions and collateral effects are not independent features of resistance evolution, and neither, alone, can provide a complete picture. Instead, they are coupled through the local geometry of the two-drug-response surface, and we show how specific dosage combinations can shift the weighting of these two effects, providing a framework for systematic optimization of time-dependent multidrug therapies.
Materials and methods
Estimating scaling parameters from experimental dose-response curves
The scaling parameters for a given mutant can be directly estimated by comparing single-drug dose-response curves of the mutant and ancestral populations. To do so, we estimate the half-maximal inhibitory concentration (K_i) for each population by fitting the normalized dose-response curve to a Hill-like function g_i(d) = (1 + (d/K_i)^h)^(-1) using nonlinear least-squares fitting, where g_i(d) is the relative growth at concentration d and h is a Hill coefficient measuring the steepness of the dose-response curve. The scaling parameter for each drug is then estimated as the ratio of the K_i parameters for the ancestral and mutant populations. For example, an increase in resistance corresponds to an increase in K_i for the mutant population relative to that of the ancestor, yielding a scaling parameter of less than 1. Estimates of the scaling parameters for the three drug combinations used here are shown in Figure 3-figure supplement 1 (from data in Dean et al., 2020).
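A minimal sketch of this estimation procedure, using synthetic dose-response data and SciPy's nonlinear least squares, is shown below; the concentrations, noise level, and fit bounds are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)

def hill(d, K, h):
    """Hill-like dose-response: relative growth at drug concentration d."""
    return 1.0 / (1.0 + (d / K) ** h)

def estimate_K(concentrations, relative_growth):
    """Fit the normalized dose-response curve and return the half-maximal
    inhibitory concentration K (nonlinear least squares)."""
    popt, _ = curve_fit(hill, concentrations, relative_growth,
                        p0=[1.0, 1.0], bounds=([1e-6, 0.1], [1e3, 10.0]))
    return popt[0]

# Synthetic example data (not measurements from the study): the mutant's
# curve is shifted to higher concentrations, i.e., it is more resistant.
conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
ancestor = hill(conc, K=1.0, h=2.0) + np.random.normal(0, 0.02, conc.size)
mutant   = hill(conc, K=4.0, h=2.0) + np.random.normal(0, 0.02, conc.size)

K_anc, K_mut = estimate_K(conc, ancestor), estimate_K(conc, mutant)
scaling = K_anc / K_mut   # < 1 for a resistant mutant, as described in the text
print(f"K_ancestor={K_anc:.2f}, K_mutant={K_mut:.2f}, scaling parameter={scaling:.2f}")
```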
While it is straightforward to estimate the scaling parameters for any particular isolate, it is not clear a priori which isolates are present at time 0 of any given evolution experiment. To compare predictions of our model with lab evolution experiments, we first estimated scaling parameters for all isolates collected during lab evolution experiments in each drug pair (Dean et al., 2020). This ensemble comprises 50-100 isolates per drug combination, including isolates collected at different timepoints during the evolution (after days 1, 2, or 3) as well as isolates selected in different dosage combinations (Figure 3-figure supplement 1). We then randomly sampled from this ensemble to generate low-level standing diversity (on average, approximately 10 distinct pairs of scaling parameters) at time 0 for each simulation of the model, and we repeated this subsampling 100 times to generate an ensemble of evolutionary trajectories for each condition.
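The subsampling step could be sketched as follows; the log-normal ensemble, subset size, and replicate count stand in for the experimentally derived pool of scaling parameters and are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of (a, b) scaling-parameter pairs, standing in for
# the 50-100 isolates characterized per drug pair in the experiments.
ensemble = rng.lognormal(mean=-0.3, sigma=0.5, size=(80, 2))

def sample_standing_diversity(ensemble, n_strains=10, n_replicates=100, rng=rng):
    """Draw random subsets of scaling-parameter pairs to seed each simulated
    population with low-level standing diversity at time 0."""
    subsets = []
    for _ in range(n_replicates):
        idx = rng.choice(len(ensemble), size=n_strains, replace=False)
        subsets.append(ensemble[idx])
    return subsets

initial_conditions = sample_standing_diversity(ensemble)
print(len(initial_conditions), initial_conditions[0].shape)  # 100 subsets of 10 (a, b) pairs
```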
The results of the simulation can, in principle, depend on how these scaling parameters are sampled. While the qualitative differences between simulations do not depend heavily on this choice of subsampling in the data used here (Figure 5-figure supplement 3), one can imagine scenarios where details of the subsampling significantly impact the outcome. Similarly, precise comparison with experiment requires accurate estimates for the total evolutionary time and for the initial frequency of all resistant mutants, though for these data the qualitative results do not depend sensitively on these choices (Figure 5-figure supplement 4 and Figure 5-figure supplement 5). We stress that these are not fundamental limitations of the model itself, but instead arise because we do not have a precise measure of the standing variation characterizing these particular experiments. In principle, a more accurate ensemble of scaling parameters could be inferred from cleverly designed fluctuation tests (Luria and Delbrück, 1943) or, ideally, from high-throughput, single-cell phenotypic studies (Baltekin et al., 2017). At a theoretical level, subsampling could also be modulated to simulate the effects of different effective population sizes, with standing diversity expected to be significantly larger for large populations. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. The following previously published dataset was used:
"year": 2021,
"sha1": "234b988f8acc5e7c1c6fa5a9e658bd9d4d9f2b53",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.64851",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e26bba11a5fea1b9e2ad96254bff94cbec880a7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
231710274 | pes2o/s2orc | v3-fos-license | LNK promotes granulosa cell apoptosis in PCOS via negatively regulating insulin-stimulated AKT-FOXO3 pathway
Background: Polycystic ovary syndrome (PCOS), which is often accompanied by insulin resistance, is closely related to increased apoptosis of ovarian granulosa cells. LNK is an important regulator of the insulin signaling pathway. When insulin binds to its receptor, the PI3K/AKT/FOXO signaling pathway is activated, and FOXO translocates from the nucleus to the cytoplasm, thereby inhibiting the expression of pro-apoptotic genes. Methods: Granulosa cells were collected from PCOS patients to investigate the relationship between LNK, cell apoptosis and insulin resistance. KGN cells underwent LNK overexpression/silencing and insulin stimulation. The AKT/FOXO3 pathway was studied by western blot and immunofluorescence. LNK knockout mice were used to investigate the effect of LNK on the pathogenesis of PCOS. Results: The level of LNK was higher in the PCOS group than in the control group. LNK was positively correlated with granulosa cell apoptosis and insulin resistance, and negatively correlated with oocyte maturation rate. LNK overexpression in KGN cells inhibited the insulin-induced AKT/FOXO3 signaling pathway, causing nuclear translocation of FOXO3 and promoting granulosa cell apoptosis. LNK knockout partially restored the estrous cycle and improved glucose metabolism in PCOS mice. Conclusions: LNK was closely related to insulin resistance and apoptosis of granulosa cells via the AKT/FOXO3 pathway. LNK knockout partially restored the estrous cycle and improved glucose metabolism in PCOS mice, suggesting LNK might become a potential biological target for the clinical treatment of PCOS.
INTRODUCTION
Polycystic ovary syndrome (PCOS) is a common endocrine and metabolic disease that affects 7-9% of women of reproductive age [1]. PCOS is usually characterized by ovulation dysfunction, androgen excess and polycystic ovaries observed by ultrasound. PCOS patients are often accompanied by overweight, insulin resistance (IR) and glucose metabolism disorders [2]. Studies show that the high insulin level in PCOS promotes granulosa cell (GC) apoptosis [3,4], which leads to follicular developmental disorders. However, in PCOS patients with IR, the mechanism by which high insulin promotes GC apoptosis has not been fully elucidated. LNK (SH2B3) belongs to the Src homology 2B (SH2B) family, which contains SH2 and PH domains. The family consists of intracellular adaptor proteins which regulate various pathways. LNK is considered an important regulator of the insulin signaling pathway, and plays an important part in glucose homeostasis as well as reproduction [5]. Our previous studies showed that levels of LNK were elevated in ovaries of insulin-resistant PCOS patients compared with the non-PCOS group, and that LNK co-localized with the insulin receptor [6]. In addition, overexpression of LNK in the human ovarian granulosa cell line (KGN) inhibited insulin-induced AKT activation [6].
The forkhead box O (FOXO) family plays a vital role downstream of the insulin signaling pathway [7] and actively participates in a variety of cellular and physiological processes including cell proliferation, apoptosis and regulation of the cell cycle [8,9]. FOXO subcellular localization and function are controlled by post-translational modifications such as acetylation and phosphorylation [10], and insulin plays an important role in this process [11]. In the nucleus, FOXO triggers apoptosis by inducing the transcription of pro-apoptotic genes such as FasL, and thus actively participates in the process of apoptosis. When insulin or other growth factors are present, FOXO proteins are relocated from the nucleus to the cytoplasm [12]. PI3K/AKT is one of the most important pathways regulating FOXO function in different types of organisms [11]. When insulin or other growth factors bind to their receptors, the PI3K/AKT pathway is activated, and activated AKT phosphorylates FOXO, thereby negatively regulating the nuclear localization of FOXO [9]. Studies suggest that insulin can activate the PI3K/AKT pathway in granulosa cells (GCs) [13]. Moreover, GC apoptosis is increased in PCOS patients with high insulin levels [3]. We therefore speculate that a complex mechanism regulates granulosa cell apoptosis in PCOS patients with IR, and hypothesize that LNK negatively regulates the insulin-activated AKT/FOXO3 pathway, promotes granulosa cell apoptosis and dysfunction, and thereby affects oocyte maturation, contributing to the etiology of ovulation disorder in polycystic ovary syndrome.
In our study, we demonstrate that LNK expression is increased in PCOS patients, and the higher LNK level in KGN can inhibit the AKT/FOXO3 pathway, thereby inducing apoptosis of the granulosa cells. LNK knockout can partially restore estrous cycle and improve glucose metabolism in PCOS mice.
LNK was positively correlated with the severity of insulin resistance and granulosa cell apoptosis, and negatively correlated with oocyte maturation rate
We obtained metabolic profiles of the included subjects and found that the incidence of IR was higher in PCOS patients: 27 women in the PCOS group and 12 in the control group were diagnosed with IR. Table 1 shows the characteristics of patients involved in this research. RT-PCR indicated that GLUT4 mRNA expression in luteinized GCs from the PCOS group was lower than that in the control group (Figure 1A), indicating a certain degree of glucose metabolic disorder. Meanwhile, we detected the level of LNK in luteinized granulosa cells in the different groups, and found that, compared with the control patients, the level of LNK in PCOS patients was increased (Figure 1B). Pearson correlation analysis revealed that the level of LNK was positively correlated with the severity of insulin resistance in the total population (Figure 1C). Respective correlation analyses for the PCOS and control groups can be found in Supplementary Tables 3, 4 and Supplementary Figure 2. Generally speaking, the r values were higher when the correlation analysis was restricted to the PCOS group.
In order to further investigate the changes in reproductive function of patients with PCOS, we examined the apoptosis level of luteinized GCs and oocyte maturation rate in both groups. Results showed that the apoptosis level of luteinized GCs from PCOS patients was higher compared with the control group (Figure 2A), and the maturation rate of oocytes in PCOS patients was significantly lower ( Figure 2B). Interestingly, Pearson correlation analysis revealed that LNK expression was positively correlated with granulosa cell apoptosis rate, and was negatively correlated with oocyte maturation rate ( Figure 2C, 2D).
These results indicate that LNK is elevated in GCs of PCOS patients, and may be an important regulator in insulin resistance, granulosa cell dysfunction and follicular development.
FOXO3 was increased in granulosa cells of PCOS patients and positively correlated with IR, LNK, and apoptosis
The FOXO transcription factor family participates in many cellular processes, such as apoptosis and proliferation, and plays an important role downstream of the insulin and insulin-like growth factor receptors [14]. We detected the levels of FOXO3 in GCs of the PCOS and control groups. Results showed that FOXO3 mRNA expression was significantly elevated in the PCOS group (Figure 3A1). Western blot analysis showed that the p-FOXO3/FOXO3 level was lower in the PCOS group (Figure 3A2). Immunofluorescence staining of GCs showed that, in the PCOS group, FOXO3 expression was elevated in the nucleus compared with the control group (Figure 3A3). Then we investigated the relationship between FOXO3 and apoptosis, LNK, IR, and oocyte maturation rate with Pearson correlation analysis. The results showed that FOXO3 mRNA expression was positively correlated with LNK mRNA, cell apoptosis rate and IR-related parameters, and negatively correlated with oocyte maturation rate (Figure 3B-3I), indicating that FOXO3 dysregulation might play an important part in the apoptosis of GCs and follicular development disorder.
Overexpression of LNK impaired insulin signaling and induced apoptosis of GCs
Patients with PCOS are often accompanied by IR [15]. When insulin or other growth factors are present, the AKT/FOXO3 pathway is activated, and FOXO3 is transferred from the nucleus to the cytoplasm, thereby inhibiting the expression of pro-apoptotic factors [16]. Otherwise, FOXO3 is translocated to the nucleus and induces apoptosis. In order to explore the relationship and interaction mechanism between LNK, FOXO3 and granulosa cell apoptosis, KGN cells were transfected with LNK pcDNA3.1 (pc LNK), LNK mutant pcDNA3.1 (mut LNK), LNK siRNA (si LNK), empty pcDNA3.1 vector (vec) or negative control RNA (neg). After treatment with insulin, the levels of AKT and FOXO3, the subcellular localization of FOXO3, and the apoptosis level of KGN cells were detected. The results showed that LNK overexpression inhibited phosphorylation of AKT and FOXO3, promoted the nuclear localization of FOXO3 and increased the apoptotic level of KGN cells; the opposite results were obtained when LNK was knocked down (Figure 4A-4G). Previous studies have revealed that the FSHR level is closely related to PCOS [17,18]. In this study, we found that LNK overexpression inhibited the expression of FSHR (Figure 4G).
These results indicate that LNK overexpression can negatively regulate the insulin-induced AKT/FOXO3 pathway and promote KGN apoptosis, which may be closely related to the ovulation dysfunction in PCOS patients.
LNK knockout partially restored estrous cycle and improved glucose metabolism in PCOS mice
In order to further explore the role of LNK in PCOS in vivo, a wild-type PCOS mouse model (WT/PCOS) and a PCOS mouse model with LNK gene knockout (KO/PCOS) were constructed. We monitored estrous cycle, body weight and glucose metabolism, and results showed that LNK KO could partially restore the estrous cycle of PCOS mice (Figure 5A, 5B). The weight and fat rate of the PCOS mice (WT/PCOS and KO/PCOS) were higher compared with the control groups (WT/CON and KO/CON) (Figure 5C). In addition, we performed a glucose tolerance test (GTT) and an insulin tolerance test (ITT), and measured the mRNA level of GLUT4. The results indicated that knocking out LNK significantly increased the level of GLUT4 and improved glucose metabolism in PCOS mice (Figure 5D-5F).
Taken together, these results indicate that LNK is closely related to the estrous cycle and glucose metabolism of PCOS mice. The increased level of LNK in ovarian GCs of PCOS patients may be an important mechanism leading to ovulation dysfunction. Therefore, LNK may become a potential target for the clinical treatment of polycystic ovary syndrome.
DISCUSSION
In this study, we found that LNK was closely related to insulin resistance and apoptosis of granulosa cells via the AKT/FOXO3 pathway. LNK knockout partially restored estrous cycle and improved glucose metabolism in PCOS model mice, suggesting LNK might become a potential biological target for the clinical treatment of PCOS.
PCOS is often accompanied by insulin resistance, and high levels of insulin may be one of the causes of PCOS [19]. This study found that the incidence of IR in PCOS was significantly higher compared with the control group, and BMI, WHR, insulin level and HOMA-IR were significantly increased in the PCOS group, which is consistent with previous studies. Studies show that hyperinsulinemia or excessive secretion of LH may cause an abnormal response of granulosa cells to LH and thus impair follicular development [20][21][22]. Some studies imply that LNK is associated with the pathogenesis of human diseases including type 1 diabetes, hypertension, and cardiovascular diseases [23][24][25]. SH2B adaptor protein 3 (SH2B3), also named LNK, is widely studied in malignant tumors [26,27]. LNK is a negative signal-transduction regulator, which is widely involved in cytokine signaling and cell metabolism [28]. In previous studies, we have proposed that LNK is a significant factor in the development of IR in patients with PCOS and is closely related to the insulin signaling pathway in the ovary [29]. We have also found that LNK regulates glucose transport in adipose tissue through affecting the insulin-mediated IRS1/PI3K/AKT/AS160 pathway [29]. In the current study, we found that the LNK level was significantly increased in granulosa cells of PCOS patients, and its expression was positively correlated with insulin resistance and GC apoptosis. These results indicate that the high level of LNK is closely related to granulosa cell dysfunction and insulin resistance in PCOS.
As an adaptor protein, LNK recognizes and binds tyrosine-phosphorylated proteins through its SH2 domain, thereby inhibiting their signaling [28]. LNK is considered to be an important regulator of inflammation and insulin resistance in several tissues and organs [29]. In previous studies, we found that LNK and insulin receptors colocalized [13]. In this study, we discovered that elevated LNK impaired insulin-stimulated AKT and FOXO3 phosphorylation, thereby promoting nuclear localization of FOXO3 and leading to increased apoptosis of granulosa cells. In addition, knocking out LNK could effectively improve glucose metabolism and the estrous cycle of PCOS mice, suggesting it might become a potential therapeutic target for PCOS.
In the current study, we found that GLUT4 mRNA level was elevated in LNK knockout PCOS mice. The mechanism of LNK-altered GLUT4 expression was further explored in our other work, in which similar results were found in HFD-induced insulin resistant mice [30].
Our previous DNA sequencing analysis of the PCOS and control groups found an rs78894077 polymorphism in exon 1, within the PH domain of LNK, in which allele C was mutated to T (unpublished). By constructing the corresponding mutant LNK plasmid, we aimed to preliminarily investigate the function of this polymorphism. Our current study showed that cells transfected with mut LNK gave results similar to those transfected with pc LNK. We will investigate this polymorphism further in our future work.
There are some limitations in this study. The molecular mechanism of the upstream regulation of LNK remains uncertain. Although our results show that LNK regulates FOXO3 function by affecting its phosphorylation status and subcellular location via the AKT pathway, the cause of FOXO3 up-regulation in granulosa cells from PCOS patients remains unclear. Some researchers reported that altered m6A modification was involved in the elevated level of FOXO3 mRNA expression in the luteinized granulosa cells from PCOS patients [31]. In addition, the effects of LNK on oocyte maturation and granulosa cell-oocyte interaction still need further study. Moreover, hyperandrogenism, another important feature of PCOS, is not fully explored in the current study. We will carry out related investigations in subsequent research work.
In conclusion, our study indicates that LNK expression is elevated in ovarian granulosa cells of patients with PCOS. LNK expression is closely related to insulin resistance, granulosa cell apoptosis, and oocyte maturation rate. LNK overexpression may promote granulosa cell apoptosis by inhibiting insulin-stimulated AKT/FOXO3 pathway. Compared with wild-type PCOS mice, glucose metabolism is improved, and the estrous cycle is more regular in PCOS mice with LNK knockout. This study suggests that LNK dysregulation may play a significant role in the pathogenesis of PCOS, and LNK might become a potential biological target for the clinical treatment of PCOS.
Clinical samples
A total of 82 women aged between 20 and 40 years were enrolled in our study from January 2016 to January 2017. Among them were 41 PCOS patients diagnosed according to the Rotterdam criteria [32]. They were scheduled to receive in vitro fertilization and embryo transfer (IVF-ET) for anovulation (5 cases), oligo-ovulation (32 cases), or other reasons (4 cases) at the reproductive center, department of obstetrics and gynecology of Sun Yat-Sen Memorial Hospital. In addition, 41 non-PCOS women aged between 20 and 40 years with regular menstrual cycles, who were undergoing IVF-ET (long GnRH-a protocol) for tubal or male factor infertility, were enrolled as the control group. The exclusion criteria were hyperprolactinemia (prolactin > 25 mg/L), thyroid dysfunction (hyperthyroidism or hypothyroidism), adrenal diseases, tumors that produce androgen, or recent use of medications which might affect endocrine function (e.g., oral contraceptives). The hospital ethics committee approved the study, and we obtained written informed consent from all participants.
Physical examinations were performed on every participant. Height, weight, waist circumference (WC) and hip circumference were measured, and body mass index (BMI) (kg/m²) and waist/hip ratio (WHR) were calculated. Participants were also evaluated for acne, acanthosis nigricans and terminal hair. To assess hirsutism, the modified Ferriman-Gallwey (mFG) score [33] was applied. FSH, LH, total testosterone, fasting plasma glucose (FPG) (mmol/L) and fasting plasma insulin (FIN) (mU/L) levels were measured. Transvaginal ultrasound was performed to detect polycystic ovarian changes and count the number of antral follicles. Abnormal glucose metabolism was diagnosed according to the guidelines published by the American Diabetes Association (ADA) [34].
The homeostasis model assessment for insulin resistance (HOMA-IR), defined as (FPG (mmol/L) × FIN (μIU/mL)) / 22.5, is a widely used index for the clinical evaluation of insulin resistance [35,36]. We used 2.14 as the cut-off point for insulin resistance [37]. Participants who met one of the following criteria were considered insulin resistant in this study: 1) delayed insulin peak (higher 2-hour insulin compared with 1-hour insulin in the oral glucose tolerance test), or pre-diabetes or type 2 diabetes mellitus as defined by the American Diabetes Association, or HOMA-IR ≥ 2.14; or 2) clinical observation of the presence of acanthosis nigricans, as described in our previous study [6].
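For illustration, the HOMA-IR computation and cutoff described above can be expressed as a short code sketch; the example values are placeholders rather than patient data, and only the HOMA-IR criterion is encoded (the delayed-insulin-peak, diabetes, and acanthosis nigricans criteria are not).

```python
def homa_ir(fpg_mmol_per_l: float, fin_uiu_per_ml: float) -> float:
    """HOMA-IR = (fasting plasma glucose [mmol/L] x fasting insulin [uIU/mL]) / 22.5."""
    return (fpg_mmol_per_l * fin_uiu_per_ml) / 22.5

def insulin_resistant_by_homa(fpg: float, fin: float, cutoff: float = 2.14) -> bool:
    """Flag insulin resistance using the HOMA-IR cutoff applied in this study."""
    return homa_ir(fpg, fin) >= cutoff

# Example (hypothetical values): FPG = 5.2 mmol/L, FIN = 12.0 uIU/mL.
print(round(homa_ir(5.2, 12.0), 2))        # -> 2.77
print(insulin_resistant_by_homa(5.2, 12.0))  # -> True
```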
Oocyte maturation rate (%) = (number of MII oocytes / total number of collected oocytes) × 100.
Isolation of granulosa cells
All women received IVF treatment (GnRH-a long protocol for ovarian stimulation). Thirty-six hours after human chorionic gonadotropin was administered, oocyte retrieval was performed by aspirating follicular fluids. After picking up the oocytes, the remaining follicular fluids were collected and centrifuged. Lymphocyte separation medium was used to isolate granulosa cells from the precipitate. Red blood cells were removed using lysis buffer. The cells were washed by centrifugation, and the supernatant was removed. Granulosa cell isolation was done within 1 h after follicle fluid collection to avoid post-aspiration cell death. Once isolated, the granulosa cells were divided and either used for subsequent experiments such as RNA or protein extraction, cultured for immunofluorescence staining, or stored at −80° C until further analysis.
Cell culture
The KGN cells maintain the function of steroid hormone synthesis and the characteristics of granulosa cells, and are often used to study the proliferation, apoptosis, hormone secretion and receptor expression of granulosa cells. The DMEM/F12 medium was used for cell culture. The culture medium was also supplemented with 10% FBS and 100 U/mL penicillin-streptomycin (Invitrogen), as mentioned in our previous study [6]. Cells were incubated in a humidified incubator at 37° C with 5% CO2.
Quantitative real-time PCR (qRT-PCR)
TRIzol reagent (Takara) was used for RNA extraction. Total RNA was quantified with a NanoDrop 2000 spectrophotometer and transcribed to cDNA with the PrimeScript RT Master Mix (RR036A, TAKARA) following the manufacturer's instructions. TB Green Premix Ex Taq II (TAKARA) was used in quantitative real-time polymerase chain reaction. Detailed primer information can be found in Supplementary Table 1.
Western blot
Cells were lysed on ice for 30 min using ice-cold RIPA lysis buffer with protease inhibitor mix and phosphatase inhibitors. Adequate amounts of protein were loaded into the wells of the SDS-PAGE gel. After running the gel, the proteins were transferred to a membrane (Sigma-Aldrich), which was blocked with 5% BSA for 1 hour at room temperature. The membrane was incubated with primary antibodies overnight at 4° C. β-ACTIN was used as the loading control. Then the membrane was incubated with secondary antibodies conjugated with HRP (Cell Signaling) at room temperature for 1 hour. Detailed information about the antibodies (LNK (Santa Cruz), FOXO3 (Cell Signaling, CST), phosphorylated-FOXO3 (CST), AKT (CST), phosphorylated-AKT (CST) and β-ACTIN (CST)) can be found in Supplementary Table 2.
Expression vectors and transfection
Human LNK cDNA and mutant LNK (exon 1, PH domain, allele C was mutated to T) were cloned into respective pcDNA 3.1 vector. siRNA for LNK was synthesized. Cells were grown in 6-well plates. Transfection was performed on KGN cells using 1.0 μg/mL LNK plasmid (pc LNK), LNK mutant plasmid (mut LNK), LNK siRNA (si LNK), empty pcDNA3.1 vector (vec) or negative control RNA (neg) with Lipofectamine 2000 reagent (Thermo Fisher Scientific) following the manufacturer's protocols. The efficiency of transfection is shown in Supplementary Figure 1A, 1B.
Treatment with insulin
Forty-eight hours after transfection and after serum starvation for 12 h, the cells were treated with 100 nM recombinant human insulin (Sigma-Aldrich) for 30 min and then harvested for subsequent experiments such as protein extraction.
Annexin V-FITC apoptosis assay
Flow cytometry was performed for the assessment of apoptosis levels. An Annexin V-FITC/PI Double-Staining Apoptosis Detection Kit (BD Biosciences) was used to stain KGN cells or GCs. Flow cytometry was conducted with a flow cytometer (BD FACSJazz) according to the manufacturer's protocols. Granulosa cells or KGN cells without staining were used as controls. Representative plots showing cell gating events can be found in Supplementary Figure 1C.
TUNEL analysis
The In Situ Cell Death Detection Kit (Roche) was used for TUNEL analysis following the manufacturer's protocols. A negative control was set up following the manufacturer's protocols: Label Solution (without terminal transferase) was used for cell incubation instead of the TUNEL reaction mixture. A confocal microscope (Zeiss LSM700) was used for observation and photography.
Immunofluorescence
After rinsing the cells, 4% paraformaldehyde was used to fix the cells for 15 minutes at room temperature. After washing the cells, 0.1% Triton X-100 was used for permeabilization for 2 minutes on ice. 5% Bovine Serum Albumin (BSA) was used for blocking. Immunofluorescence staining was performed with purified primary antibodies against FOXO3a (Cell Signaling) and p-FOXO3a (Cell Signaling). Primary antibodies were diluted 1:500 and incubated at 4° C overnight. Then cells were incubated with conjugated anti-mouse or anti-rabbit secondary antibodies (CST #4408 and #8889). After washing, DAPI was added for 10 s. Cells without immunofluorescence staining were used as negative controls. Detailed information about antibodies can be found in Supplementary Table 2.
Animals
The ethics committee for animal research at Sun Yat-sen University approved the experiments, and we followed the NIH Guide for the Care and Use of Laboratory Animals [30]. Mice were kept under standard conditions (12 hours light/dark, with free access to feed and water).
Whole-body LNK-knockout (KO) C57BL/6 mice were designed by Can-Zhao Liu and were produced with CRISPR/Cas by Cyagen Biosciences Inc., as described in our previous study [30]. The animals were divided into four groups. Wild-type (WT) C57BL/6 mice and KO mice fed normal chow served as the WT control and KO control groups, respectively. WT and KO mice fed a high-fat diet and injected with dehydroepiandrosterone (DHEA, Sigma) served as the WT PCOS and KO PCOS groups, respectively. Typically, prepubertal mice, aged approximately 21 days, were injected daily with DHEA (PCOS groups, 6 mg/100 g body weight, diluted in 200 μL sesame oil) or sesame oil only (control groups) for up to 21 days. Meanwhile, mice were fed a high-fat diet (PCOS groups, 60% fat calories) or regular chow (control groups, 10% fat calories). The same high-fat diet was also used in our previous study [30].
Glucose tolerance test (GTT) and insulin tolerance test (ITT)
In GTT, mice were fasted for 12 hours and injected intraperitoneally with 1 g/kg body weight dextrose (Sigma). In ITT, after fasting for 6 hours, intraperitoneal 0.75 units/kg body weight insulin (Sigma) injection was performed. Glucose levels were detected from tail venous blood with an automated glucometer (Roche) at 0 min, 15 min, 30 min, 60 min, 90 min, 120 min after injection, which was also described in our previous study [30].
Evaluation of estrus cyclicity
Vaginal cell smears were obtained from mice with normal saline at 10 AM every morning for 2 weeks. Smears were placed on glass slides and allowed to air dry. Crystal violet was used for staining.
Body fat assessment
EchoMRI 100 (Echo Medical Systems, US) was used to measure mouse body fat rate without anesthetization. Three repeated measurements, each lasting about 90 seconds, were performed for each mouse. Values of fat rate were generated instantly once the measurements were done, and the average fat rate for each mouse was calculated.
Tissue collection
8-week-old mice were sacrificed to collect ovarian tissue. Tissues underwent subsequent experiments such as RNA extraction or were frozen in liquid nitrogen for future analysis.
Statistical analysis
All data are shown as mean ± SD or mean ± SEM. After normal distribution of the data was confirmed, an unpaired two-tailed Student t test or ANOVA with Bonferroni post hoc test was used to test for differences. The Pearson test was used to analyze correlations. SPSS 22.0 (SPSS, USA) was used for statistical analysis. Figures were generated with PRISM 7.0 (GraphPad). Statistical significance was set at p < 0.05.
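A minimal sketch of the core statistical comparisons described here (unpaired two-tailed t test and Pearson correlation), written with SciPy rather than SPSS/Prism and using placeholder arrays instead of patient measurements, is given below:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder data standing in for per-patient measurements (not study data).
lnk_pcos    = rng.normal(1.8, 0.4, 41)   # relative LNK mRNA, PCOS group
lnk_control = rng.normal(1.0, 0.3, 41)   # relative LNK mRNA, control group
homa_ir     = 0.8 * lnk_pcos + rng.normal(0, 0.3, 41)

# Unpaired two-tailed Student t test between groups.
t_stat, p_group = stats.ttest_ind(lnk_pcos, lnk_control)

# Pearson correlation between LNK level and HOMA-IR within the PCOS group.
r, p_corr = stats.pearsonr(lnk_pcos, homa_ir)

print(f"group difference: t={t_stat:.2f}, p={p_group:.3g}")
print(f"LNK vs HOMA-IR:   r={r:.2f}, p={p_corr:.3g}")
```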
Data availability statement
The data underlying this article cannot be shared publicly due to the privacy of individuals that participated in the study. The data will be shared on reasonable request to the corresponding author.
(Figure legend) Lnk mRNA level was positively correlated with clinical insulin resistance parameters in PCOS patients; a negative correlation between the level of Lnk and oocyte maturation rate in PCOS patients was also found. (B) FOXO3 mRNA level was positively correlated with clinical insulin resistance parameters in PCOS patients; a negative correlation between the level of FOXO3 and oocyte maturation rate in PCOS patients was also shown.
"year": 2021,
"sha1": "93f4b88a23fde4334331b3bed2142e0b65fe0838",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18632/aging.202421",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d2d01e3f448b5a5d47cb1bd85a0dbda3f3bac2ce",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4473303 | pes2o/s2orc | v3-fos-license | Burn Injury May Have Age-Dependent Effects on Strength and Aerobic Exercise Capacity in Males
Whether burn injury affects boys and men differently is currently unknown. To test the hypothesis that burned boys have lower exercise capacity and exercise training-induced responses compared with burned men, 40 young boys (12 ± 4 years, 149 ± 20 cm, 46 ± 18 kg) were matched to 35 adult men (33 ± 9 years, 174 ± 10 cm, 84 ± 16 kg) based on extent of burn injury (total body surface area burned, boys 46 ± 14% vs men 47 ± 30%, P = .85) and length of hospital stay (boys 33 ± 23 vs men 41 ± 32 days, P = .23). Strength (peak torque) and cardiorespiratory fitness (peak VO2) were normalized to kg of lean body mass for group comparisons. Each group was also compared with normative age- and sex-matched values at discharge and after an aerobic and resistance exercise training (RET) program. A two-way factorial analysis of covariance assessed interaction and main effects of group and time. We found that boys and men showed similar pre-RET to post-RET increases in total lean (~4%) and fat (7%) mass (each P ≤ .008). Both groups had values below age- and sex-matched norms at discharge for peak torque (boys 36%; men 51% of normative values) and peak VO2 (boys: 44%; men: 59%; each P ≤ .0001). Boys' strength was 13-15 per cent lower than men's at discharge and after RET (main effect for group, P < .0001). Cardiorespiratory fitness improved to a greater extent in men (19%) compared with boys (10%) after the RET (group × time interaction, P = .011). These results show that, at discharge and after RET, burn injury may have age-dependent effects, and these should be considered when evaluating efficacy and progress of the exercise program.
Burns are a major cause of injury in children and the third leading cause of injury-related death, behind motor vehicle accidents and drownings. 1 Notably, poorer outcomes are associated with scalding injuries, younger age, increased burn size, and the presence of inhalation injury, with infants and young children having the greatest risk of death from burn injury. 2 In addition, because cognitive development lags behind motor skill development, children lack comprehension of danger and awareness of their environment, increasing risk-taking behavior that leads to burn-related injuries. 3,4 Most importantly, children are not little adults. Children have about three times the body surface area-to-body mass ratio of adults, owing to the fact that their head and neck are proportionally much larger than those of adults. It is also estimated that children sustain burn injury in only a quarter of the exposure time required for adults, 5,6 possibly leading to a more severe injury.
Burn injury is associated with skeletal muscle catabolism and weakness that are accompanied by hypermetabolism, respiratory injury, and diminished lean body mass (LBM) in both adults and children as well as disturbed growth patterns in children. [7][8][9][10][11][12] These changes have deleterious effects that may alter children's growth and development. Burn injury also instigates an inflammatory response that appears to differ between children and adults as seen by dissimilar cytokine profiles. 13 This difference between children and adults suggests that these populations may benefit from different therapeutic interventions.
We have found that immediately following discharge, a rehabilitation exercise program improves muscle strength, endurance, and LBM. [14][15][16][17][18][19][20][21] The effectiveness of resistance training is influenced by multiple factors, including age, maturation, sex, and the frequency, duration, and intensity of the training program. 22 However, whether burn injury differentially affects exercise capacity and body composition in adults and children is unknown. Whether the benefits of rehabilitative exercise differ between these populations is also unclear. We designed this study to compare exercise capacity and body composition in adult men and boys before and after a rehabilitative exercise program (RET). We hypothesized that boys with burn injury have lower exercise capacities and greater body composition changes than burn-injured men.
Ethical Approval
All experiments were approved by the Institutional Review Board of the University of Texas Medical Branch and were conducted in accordance with the Declaration of Helsinki. Before subjects participated in the study, informed consent was obtained from the burned adults and parents or legal guardians of burned children, in addition to the child assent, as applicable.
Study Design
Subjects were grouped with respect to age (adults and children) and matched for total body surface area (TBSA) burned (Table 1). The study included 40 children aged 7 to 17 years and 35 adults aged 18 to 45 years, with both groups having 30 per cent or greater TBSA burns. Exercise training was initiated immediately after discharge once wounds were at least 95 per cent healed. At discharge, subjects provided written informed consent and underwent testing for body composition, exercise strength, and aerobic capacity. They then completed a 6- to 12-week (6 weeks for 30-59% TBSA burns and 12 weeks for >60% TBSA burns) aerobic and resistance rehabilitative exercise training program. Following the rehabilitative training, they again underwent body composition, exercise strength, and aerobic exercise testing.
Rehabilitative Exercise Program
Subjects underwent supervised aerobic and resistance exercise 3 to 5 days per week for 6 to 12 weeks at 60-75 per cent of peak VO2. Aerobic exercise intensity was maintained at 60 to 85 per cent of the patient's peak heart rate for 20- to 40-minute sessions (five metabolic equivalents at ~75 per cent of the volume of peak expired oxygen [peak VO2]) for at least 150 minutes per week by weeks 6 to 12. During the first week, each main exercise session began with a 10-minute aerobic warm-up and ended with a cool-down. Patients completed as much as they could, as open skin graft wounds may have limited mobility and the ability to exercise on the treadmill or cycle ergometer. The strength training program consisted of at least 3 days per week of whole-body resistance exercise with free weights, such as bench, leg, and shoulder presses; leg extensions; biceps, leg, and triceps curls; and toe raises. During the first week of training, the patients were familiarized with the equipment and proper technique using minimal weights or loads. Weights or loads were then gradually increased over time from 50 to 60 per cent of the patient's 3-repetition maximum to a goal of 80 to 85 per cent of the 3-repetition maximum by weeks 6 to 12.
Cardiorespiratory Capacity
Cardiorespiratory capacity was determined by measuring VO2 during a modified Bruce treadmill test. The treadmill test consisted of progressive, 3-minute increases in speed and incline until volitional exhaustion. Expired gases were analyzed by indirect calorimetry (MedGraphics CardiO2 metabolic cart, St. Paul, MN, USA). Gases and air flow were calibrated using known gases (O2 and CO2) and a 3-L syringe before each test. In all cases, subjects were considered to have reached peak oxygen consumption when they signaled to stop and met three of the following criteria: a respiratory exchange ratio ≥ 1.05, a leveling off in VO2 with increasing workloads (<2 mL·kg−1·min−1), final heart rate ≥ 190 bpm, or a final test time of 8 to 15 minutes.
Muscle Strength
Muscle strength was measured using an isokinetic test performed on the dominant leg extensors. The Biodex System 4 dynamometer (Biodex Medical Systems Inc., Shirley, NY, USA) measured maximal voluntary muscle contractions at an angular velocity of 150°/s, and data were recorded to obtain peak torque. All subjects were familiarized with this procedure before each test using visual and verbal explanations.
Body Composition
Body composition was determined using dual-energy x-ray absorptiometry (DXA, Hologic model QDR-4500-W, Hologic, Inc., Marlborough, MA, USA). Subjects underwent low-energy whole-body x-ray scans using pediatric and adult software for the measurement of LBM and fat mass. The DXA instrument was calibrated using the procedures provided by the manufacturer.
Statistical Analysis
Unpaired t-tests were performed to compare demographics between burned boys and men. A two-way (group × time) analysis of covariance (ANCOVA) was performed for body composition, peak strength, and the peak VO2 test. Additionally, a two-way (training × stage) ANOVA was performed to examine differences between pre- and post-training for percent peak heart rate and percent peak oxygen uptake at three stages of the modified Bruce exercise test. Post hoc testing was performed using Sidak's multiple comparisons test. To control for growth and body morphology variations between men and boys, we normalized oxygen uptake (VO2) to kg of total body mass (TBM) and LBM. Nonburned normative data for peak VO2 were obtained from previously published norms. 23 Slopes and intercepts for percent peak heart rate and percent peak VO2 were compared between men and boys, and the Pearson product-moment correlation coefficient determined the strength of the linear association. Nonburned normative data for peak torque were obtained from our database and our previously published studies. [14][15][16][17][18][19][20] Data were analyzed and figures generated using GraphPad Prism (Version 6.0, La Jolla, CA, USA), with significance set at P < .05. All data are reported as mean ± SD.
Physical and Exercise Characteristics at Discharge
Physical and exercise characteristics of subjects at discharge (pre-exercise) are presented in Table 1. Men's burn injuries were flame (78%), scald (5%), and electrical (17%), whereas boys' were flame (80%), scald (6%), electrical (8%), and chemical (6%). Men were white-Hispanic (55%), white-Caucasian (40%), and black (5%). The boys were all white-Hispanic. Inhalation injury was present in 17 per cent of men and 22 per cent of boys. Additionally, both groups were matched for drugs to control for the effects of these agents. Both groups had an equal proportion of participants taking oxandrolone or propranolol (6%), propranolol only (57%), and placebo (37%). Length of stay from admission to discharge was similar between groups (men: 41 ± 32 days vs boys: 33 ± 23 days, P > .05). Men were about 20 years older than boys (P < .0001), were taller, weighed more, and had more absolute fat mass and LBM than boys (each P < .0001). Both groups were matched for percent TBSA burns and percent third-degree burns (P > .05). Based on their TBSA, both groups had similar exercise training durations (men 7.0 ± 2 weeks; boys 6.7 ± 2.3 weeks, P > 0.49). Cardiorespiratory fitness (peak VO2) was significantly lower in boys than in men when expressed as an absolute value (55%) and when normalized to kg of TBM (22%) or LBM (19%; each P ≤ .006). Similarly, absolute strength measures were lower in boys than in men (peak torque 51%; average power 53%). Peak torque normalized to kg of TBM or LBM was similar between boys and men (P > .05); however, LBM-normalized average power was lower in boys than men (by 53%, P ≤ .04).
Rehabilitative Exercise Training Changes Body Composition Similarly in Burned Boys and Men
The percent change in fat mass and LBM from discharge (pre-exercise) to the end of the rehabilitative training is presented in Figure 1. The percent increase in lean and fat mass did not differ between boys and men (P > .05). In boys, LBM increased by 3 ± 6% and fat mass by 6 ± 9%, whereas in men LBM increased by 4 ± 7% and fat mass by 8 ± 14% (each P ≤ .008).
Burn Injury Affects Strength and Aerobic Exercise Capacity to a Greater Extent in Boys Than in Men, Whereas Rehabilitative Exercise Training Improves These in Both
LBM-normalized peak torque and LBM-normalized cardiorespiratory fitness (peak VO2) at discharge (pre-exercise) and after rehabilitative training are reported in Figure 2. Both were lower in boys than men at discharge and after rehabilitative training (pre-training: peak torque 15% lower and peak VO2 20% lower; post-training: peak torque 16% lower and peak VO2 23% lower; main effect for group, P < .0001). At discharge, both boys and men had peak torque and peak VO2 values that were lower than age- and sex-matched normative values (peak torque by 36-51% and peak VO2 by 44-59%; P < .0001). Peak torque expressed relative to normative values was lower in boys than men at discharge (15% lower) and after exercise training (13% lower; main effect for group, P < .0001). Cardiorespiratory fitness improved to a greater extent in men (19% increase in peak VO2) than in boys (10% increase) after the rehabilitative training (group × time interaction, P = .011).
Exercise Rehabilitation Improves Relative Submaximal Heart Rate and Oxygen Uptake in Burned Boys and Burned Men
Pre- and post-exercise training responses for relative (percentage of peak values) heart rate and VO2 during the first three stages of the modified Bruce cardiorespiratory test in boys and men with severe burn injury are presented in Figure 3. Men showed reductions in relative exercise oxygen consumption during the first three stages after exercise training (training × stage interaction, P = .0001). Boys also showed reductions in relative submaximal exercise oxygen consumption after exercise training, although the training × stage interaction was not significant (P = .32); the reductions were significant only in stages 2 and 3 (each P < .01). Both men and boys showed similar reductions in relative heart rates at each of the first three stages of the Bruce protocol after exercise training (training × stage interaction, P = .0001).
The Relative Relationships Between Heart Rate and Oxygen Uptake Differ Between Burned Boys and Burned Men
Pre- and post-exercise training relationships between relative (percentage of peak values) heart rate and oxygen uptake did not differ and were therefore combined to form one linear regression for each group, presented in Figure 4. Both men and boys had a strong positive relationship between percent peak heart rate and percent peak oxygen uptake (r ≥ .89). However, the slopes were significantly different between groups (P = .0004). The regression equation for estimating percent peak VO2 from percent peak heart rate for men was: %Peak VO2 = 1.274 × %Peak heart rate - 33.39, and for boys: %Peak VO2 = 1.608 × %Peak heart rate - 64.94. The explained variance for these regression equations was strong for both groups (r² = .81 for men and r² = .79 for boys). In Table 2, we present a summary of these estimations for prescribing oxygen uptake intensity from relative heart rate values in men and boys.
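Using the regression coefficients reported above, a small code sketch can convert between relative heart rate and relative oxygen uptake, for example to prescribe a heart-rate target for a desired exercise intensity; the inversion and the example intensity are illustrative and not part of the original analysis.

```python
def pct_peak_vo2_from_hr(pct_peak_hr: float, group: str) -> float:
    """Estimate percent of peak VO2 from percent of peak heart rate using the
    group-specific linear regressions reported in the text."""
    if group == "men":
        return 1.274 * pct_peak_hr - 33.39
    if group == "boys":
        return 1.608 * pct_peak_hr - 64.94
    raise ValueError("group must be 'men' or 'boys'")

def target_pct_peak_hr(pct_peak_vo2: float, group: str) -> float:
    """Invert the regression to prescribe a heart-rate target for a desired
    exercise intensity expressed as percent of peak VO2."""
    if group == "men":
        return (pct_peak_vo2 + 33.39) / 1.274
    if group == "boys":
        return (pct_peak_vo2 + 64.94) / 1.608
    raise ValueError("group must be 'men' or 'boys'")

# Example: heart-rate targets for training at 75% of peak VO2.
for grp in ("men", "boys"):
    print(grp, round(target_pct_peak_hr(75, grp), 1), "% of peak heart rate")
```

For a target of 75% of peak VO2, this yields roughly 85% of peak heart rate in men and 87% in boys, consistent with the group difference in slopes noted above.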
DISCUSSION
The purpose of this study was to compare the effects of burn injury on strength, aerobic exercise capacity, and body composition in adults and children from discharge until after rehabilitative exercise training. Our results show that men and boys have similar relative body composition at discharge and that lean and fat mass increase similarly after rehabilitative exercise training. We found consistent reductions in strength and aerobic exercise capacity in men and boys when these were normalized to kg of LBM and expressed relative to age- and sex-matched normative values; however, boys had greater reductions than men both at discharge and after exercise training. Additionally, we found that submaximal oxygen uptake improved after rehabilitative training only in men. To the best of our knowledge, we are the first to report that burn injury may differentially affect measures of strength and cardiorespiratory fitness in young boys and men.
Severe burn injuries are associated with skeletal muscle catabolism accompanied by hypermetabolism, inflammatory responses, respiratory injury, and loss of LBM in both adults and children, as well as disturbed growth patterns in children. 13,19,24,25 It is important to understand that, unlike adults, children are in the process of reaching developmental milestones. Nonburned children and adolescents may differ from adults in their hormonal and metabolic responses to physical activity. Specific hormones such as growth hormone (GH), insulin-like growth factors, and steroid sex hormones increase growth velocity and assist in cellular growth and proliferation, bone and muscle maturation, metabolic adaptation, and functional ability. 26 Exercise-induced development of physical capacity and performance may, therefore, be greatly influenced by these changes in childhood and adolescence. Current research on this topic is lacking, though our data suggest that exercise rehabilitation regimens should follow age-specific guidelines for maximal benefits. This study builds on our previous work showing that a rehabilitative exercise program started immediately after discharge improves muscle strength, muscle endurance, and LBM in children with burn injury. 14-21 Why strength and aerobic exercise capacity are affected to a greater degree in boys than in men is not entirely clear. We have previously reported that the cardiovascular response to submaximal exercise is diminished in burned children compared with nonburned age- or sex-matched children. 27,28 Thus, burn trauma may have a more pronounced effect in children than in adults, and this requires further investigation. Notably, adults have important body morphology characteristics that differ from those of children in early stages of development. 29 One important difference is a child's body surface area-to-mass ratio. Children are also less economical than adults during submaximal exercise (using more oxygen at similar exercise workloads). Moreover, the cardiovascular system is proportionally smaller in children than in adults. Healthy nonburned children have smaller hearts, less blood volume, and lower stroke volume during exercise. 30 Most importantly, Reynolds et al found that, in children, burn injury causes cardiac failure, particularly left ventricular myocardial depression, an outcome that likewise differed from that seen in burned adults. 31 We have recently reported that, during submaximal exercise, burned children have exercise intolerance and attenuated peak heart rate values compared with nonburned age- or sex-matched children 27,28 ; however, this type of investigation has not been conducted in burned adults to date.
[Figure 3 legend: Relative submaximal heart rate (row A) and peak oxygen uptake (row B) during the first three stages of the modified Bruce exercise test in men and boys at discharge (pre-training) and after rehabilitation training (post-training). **P < .01, ****P < .0001 for pre- to post-rehabilitative training.]
[Figure 4 legend: Comparison of the relative relationship between percent peak oxygen uptake (VO 2 ) and percent peak heart rate between burned men and burned boys. Dotted lines represent 95% confidence intervals.]
Cardiorespiratory fitness (peak VO 2 ) is a strong predictor of all-cause mortality. 32 Others have reported that, at 5 or more years after burn injury, adults have reduced aerobic exercise capacity, 33,34 suggesting that long-term cardiorespiratory impairments may be present.
However, whether this is due to cardiovascular dysfunction or to reduced physical activity postburn is not entirely clear. 35 Ganio et al reported that, at least 10 years after sustaining a burn injury, 88 per cent of adults had cardiorespiratory values below the American Heart Association's age-adjusted normative values. We found that, when cardiorespiratory fitness was normalized to kg of LBM, adult men had values that were 59 per cent of age- or sex-matched normative values at discharge and that these improved to 78 per cent after training. In contrast, cardiorespiratory fitness was at 44 per cent of normative values in young boys at discharge, and this improved to only 54 per cent after rehabilitative training. Additionally, when we compared strength (peak torque) between adult men and young boys, men were at 51 per cent of normative values at discharge and 58 per cent after training, whereas boys were at 36 per cent at discharge and 45 per cent after exercise training.
In addition to having morphological differences, children and adults differ greatly in metabolic efficiency. The influence of exercise on growth is an important question that remains to be answered in nonburn populations. In adults, the magnitude of the endocrine response that regulates adaptations to exercise is intensity dependent. For example, in adults, the GH response after aerobic exercise occurs at work rates as low as 40 per cent 36 of peak aerobic capacity (peak VO 2 ) but is greatest at 75 to 90 per cent. 36,37 Likewise, resistance exercise produces a greater GH response with high total work and short rest intervals at moderate power (70% or greater). 38,39 Pediatric exercise responses rely more on the aerobic system and less on glycolytic metabolism. Thus, children rely more on fat oxidation during exercise than adults because children's glycolytic capacity is not fully developed, as supported by the finding that children produce less lactate during exercise than adults. 29,40 Additionally, children and young adults oxidize exogenous carbohydrates and fat at higher rates than adults. 41,42 These differences may alter the counterregulatory hormones involved in the response to exercise in nonburned populations. Furthermore, children reportedly have a higher proportion of slow-twitch, or type I, fibers in the quadriceps than adults. Thus, burn injury likely affects children to a greater degree than adults because of their ongoing maturation and growth, which may negatively affect their ability to adapt to exercise training through increases in contractile proteins and proliferation of mitochondria. [43][44][45] However, this is speculative, and whether burn injury differentially affects these regulators of exercise adaptation in burned children and adults is unknown and requires further study.
In nonburned children, aerobic exercise training has been reported to produce relatively smaller improvements in aerobic capacity (peak VO 2 ), a comparison that is further confounded by the rate of growth and development. 41 However, our previous work has found that burned children do improve aerobic and strength capacities. 14-21 Further, our results have consistently shown that the burned men responded to a greater degree than the burned children. It is not clear why these differences exist, but we show (in Figure 3 and Table 2) that the relative relationship between heart rate and oxygen uptake differs between groups. The American College of Sports Medicine (ACSM) generally recommends training at 50 to 65 per cent of maximum heart rate for beginners, 60 to 75 per cent for intermediate-level exercisers, and 70 to 85 per cent for established aerobic exercisers. We typically use a prescribed target heart rate of 60 to 85 per cent of peak heart rate. It may be that the exercise intensity we prescribe does not provide the same training stimulus in children as it does in adults. In Table 2, we highlight that, at lower intensities, burned children are working at a lower percentage of their peak VO 2 than men; this reduced exercise workload (percentage of peak VO 2 ) may be one reason. It may be that children need to work at a greater relative heart rate than adults because of these differences. For example, a prescribed intensity of 70 per cent peak VO 2 would require a relative heart rate of 85 per cent for men and 87 per cent for children. At lower intensities, where we initially start at 60 per cent peak heart rate, men are training at 43 per cent of their peak VO 2 while children are at 48 per cent, well below the ACSM-recommended guidelines. Further research should determine the dose response of both resistance and aerobic training adaptation for obtaining optimal improvements specific to burn populations. Our regression equations provide heart rate-based guidelines for men and boys with severe burn injury.
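Conversely, a target oxygen uptake intensity can be converted into a group-specific heart rate prescription by inverting the same equations. This is a hedged sketch using the coefficients printed above; because those coefficients are rounded, the heart rates it returns may differ by a few percentage points from the worked examples in the preceding paragraph and from Table 2.

def hr_for_target_vo2(pct_peak_vo2, slope, intercept):
    # Solve %peak VO2 = slope * %peak HR + intercept for the required %peak HR
    return (pct_peak_vo2 - intercept) / slope

MEN = (1.274, -33.39)   # slope, intercept for burned men
BOYS = (1.608, -64.94)  # slope, intercept for burned boys

for target in (50, 60, 70):
    print(f"{target}% peak VO2 -> men {hr_for_target_vo2(target, *MEN):.0f}% peak HR, "
          f"boys {hr_for_target_vo2(target, *BOYS):.0f}% peak HR")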
An important limitation of our study is that only boys were tested. Whether girls have exercise characteristics similar to those of women after burn injury is unknown. Of the individuals admitted to our institution, only about 30 per cent are female, in agreement with an American Burn Association report that burns affect 69 per cent males and 32 per cent females. 46 Others have reported that sex differences in mortality exist after burn injury and that women have a greater risk of death in all age groups from 10 to 70 years old. 47,48 However, whether female children are affected to a greater degree than women with regard to strength and aerobic exercise capacity is unclear. Another limitation of the study is the racial makeup of the groups, which differed and may have affected the results. Moreover, over 60 per cent of each group was taking propranolol alone or with oxandrolone. We controlled for the effects of drugs by matching adults and children taking similar drugs. We have previously reported that exercise in isolation or with oxandrolone increases LBM in children. 16 Propranolol likewise improves lean mass, strength, and peak VO 2 in children. 49 However, whether these drugs affect adult men after an exercise training program and offer greater improvements in strength and aerobic exercise capacity is unknown. It bears mentioning that, as we have recently reported, administration of the beta-blocker propranolol does not affect the exercise heart rate response; therefore, burned children on propranolol can appropriately maintain the prescribed intensity of exercise during training sessions when heart rate is used to guide exercise intensity during rehabilitation. 50
In summary, we found that boys with burn injury have relative body composition similar to that of burn-injured men at discharge and that exercise training increases both lean and fat mass in both groups. We also found that, at discharge, boys experience a greater reduction in strength and cardiorespiratory fitness, with exercise training improving strength to a similar degree in boys and men and aerobic exercise capacity to a greater degree in men. Further studies should determine whether understanding these differential responses can be exploited to improve the rehabilitative process, particularly with regard to tailoring exercise regimens to children and adults with burn injury. | 2018-04-04T00:06:17.680Z | 2018-08-17T00:00:00.000 | {
"year": 2018,
"sha1": "129911edec1183d77541cc8411ae37f63c187c69",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jbcr/article-pdf/39/5/815/25506935/irx057.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a40d208fbce2aae8eae9763d3287e6001e53c7bb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258184902 | pes2o/s2orc | v3-fos-license | Inside‐Out Surgical Anatomy of Superior Laryngeal Artery Endoscopic Dissection and Proposal for Nomenclature of Branches
Abstract Objectives To describe the inside out surgical anatomy of the superior laryngeal artery and to resolve the ambiguities in the nomenclature of its main branches. Study Design Endoscopic dissection of the superior laryngeal artery in the paraglottic space of larynges of fresh frozen cadavers and a review of the literature. Setting A center for anatomy encompassing facilities for latex injection into the cervical arteries of human donor bodies and a laryngeal dissection station equipped with a video‐guided endoscope and a 3‐dimensional camera. Methods Video‐guided endoscopic dissection of 12 hemilarynges in fresh frozen cadavers whose cervical arteries were injected with red latex. Description of the inside‐out surgical anatomy of the superior laryngeal artery and its main branches. Review of the previous reports describing the anatomy of the superior laryngeal artery. Results From inside the larynx, the artery was exposed upon its entry through the thyrohyoid membrane or through the foramen thyroideum. It was traced ventrocaudally in the paraglottic space exposing its branches to the epiglottis, the arytenoid, and the laryngeal muscles and mucosa. Its terminal branch was followed until it left the larynx through the cricothyroid membrane. Branches of the artery, previously described under different names, appeared to supply the same anatomical domains. Conclusion Mastering the inside out anatomy of the superior laryngeal artery is mandatory to control any intraoperative or postoperative hemorrhage during transoral laryngeal microsurgery or during transoral robotic surgery. Naming the artery's main branches according to their domain of supply would resolve the ambiguities resulting from various nomenclatures.
The larynx receives its arterial blood supply from 3 different sources: the superior laryngeal artery (SLA), the cricothyroid branch of the superior thyroid artery, and the inferior laryngeal artery (ILA). The SLA arises in most cases from the superior thyroid artery and the ILA from the inferior thyroid artery. [1][2][3] The SLA is a dominant nutrient artery of the larynx. 1 A single SLA can provide blood supply to the whole larynx through free anastomoses with its contralateral fellow artery and with the inferior laryngeal arteries on both sides. 4,5 Pertinent knowledge of the anatomy of the SLA and its main branches from the endolaryngeal vantage point is mandatory to manage any arterial hemorrhage during or after transoral laryngeal microsurgery (TLM) or transoral robotic surgery (TORS). The lack of standardized naming of the intralaryngeal branches of the SLA and the absence of their representation in the Nomina Anatomica 2 have created discrepancies between authors 3 regarding the anatomical description of the main branches of the SLA.
Oki first described the anatomy of the SLA in 1958. He studied the arterial distribution of 120 hemilarynges by X-ray stereogram after injecting a radio-opaque material into the arteries. He described 5 branches of the SLA and named them the ascending branch, the descending branch, the dorsal branch, the ventral branch, and the medial branch. He cited 7 different types of distribution of the SLA in the larynx, 5 patterns of vascularization of the epiglottis, and 4 patterns of vascularization of the arytenoid. 4 In 1975, Pearson microdissected 20 hemilarynges whose vessels were injected with colloidal radiopaque particles. He adopted the same branch naming suggested by Oki and added that the descending branch of the SLA splits into an anterior and a posterior division. 5 Later, Sato also used the branch nomenclature adopted by Oki and Pearson. 6 Tissues and organs having the capacity to vibrate require adapted vascular structures to overcome the risk of hypoxia from blood flow interruption. 6 Accordingly, undulation or meandering is a feature of some branches of the SLA. 4 Imura et al, 7 assessed the meandering of some branches of the SLA in 26 hemilarynges of female cadavers. They described 6 branches of the SLA and called them the superior posterior, the anterior, the medial posterior, the medial, the antero-inferior, and the postero-inferior branches.
Rusu et al, 2 did intralaryngeal dissection of 32 hemilarynges, after removing the laminae of the thyroid cartilages and separating the cricothyroid joint. They advocated that the SLA exhibits 5 constant branches and termed them the superior, the anterior, the posteromedial, the antero-inferior and the postero-inferior branches.
Imanishi et al, 1 demonstrated the SLA with angiograms on 3 fresh cadavers injected with a radiopaque material in the femoral and common carotid arteries. They described the SLA as having only an ascending and a descending branch, the descending branch ending into an anterior and a posterior division.
Goyal et al, 3 used a surgical robot to perform a transoral dissection of the supraglottic region in 5 fresh frozen cadaveric heads vascularly injected with silicone. They adopted the same nomenclature of the branches of the SLA as Rusu et al, 2 and cited the anterior, the superior, and the posteromedial branches.
Perotti et al performed in 2018 a microdissection of 11 fresh frozen cadavers, tracing the course of the laryngeal vessels from outside inwards after removal of the thyroid cartilage. In their work, they described only 3 branches of the SLA, namely the epiglottic artery, the postero-inferior artery, and the antero-inferior artery. They tagged the last 2 branches as the terminal branches of the main trunk of the SLA. According to these authors, the postero-inferior artery divides in turn into 2 terminal branches. Perotti et al were the first to advocate the presence of 2 arterial anastomotic networks in the larynx, a medial and a lateral one, connecting the antero-inferior artery to the postero-inferior artery. 8 In the present study, we combined the technique of injecting red latex into the cervical arteries 9 of fresh human cadavers with inside-out endoscopic dissection of the larynges while in the surgical position. Our first aim was to offer an accurate description of the inside-out surgical anatomy of the SLA and its main branches.
Second, we aimed to resolve the ambiguities in the nomenclature of the main branches of the SLA, through reviewing and comparing the previous relevant reports on the surgical anatomy of the SLA and its main branches.
Methods
The research was exempt from institutional review board (IRB) approval since it was performed on cadavers from body donors who had provided written consent for the use of their bodies after death to the Centre for Anatomy, Charité-Universitätsmedizin Berlin. The exemption was issued by the ethics committee at the Charité-Universitätsmedizin Berlin.
Six fresh cadavers of human body donors were transferred to the Center for Anatomy, Charité-Universitätsmedizin Berlin. Their race, age, sex, and cause of death were noted. The carotid arteries on both sides were meticulously dissected. 10 Breaching the vessels or their branches was avoided. Plastic tubes/cannulas were introduced into the common carotid artery on each side through a small incision secured by sutures. The internal carotids were ligated at the skull bases. We flushed the arterial system of the necks with saline to remove any debris or clots. Leaking points, if any, were identified and secured with a hemostat. Twenty to 45 cc of red latex (Ward Science) were injected into the arterial system of the necks. The injection continued until the red latex spilled out from the plastic tubes/cannulas of the contralateral sides.
After injection, the cadavers were frozen for at least 48 hours at −20°C. Once the latex had solidified, the necks were transected at the level of C7-T1 and stored at −20°C until the start of the transoral video-guided endoscopic dissection of the larynges.
Each specimen was dissected 3 times on average, on 3 nonconsecutive days. Before each session, the specimens were allowed to defrost at room temperature for 20 hours. Each dissection session lasted 5 hours on average. After each session, the heads were refrozen at −20°C for at least 24 hours.
The cadaveric heads were put in a position similar to the surgical position during TLM with the necks hyperextended. The light guide was used to position the laryngoscope. The laryngoscope displaced the epiglottis ventrally and was suspended by a support. We used a special laryngoscope (Spiggle & Theis) equipped with a whole length socket encompassing a 0°, 14 cm long endoscope mounted to an EndoSURGERY 3D Spectar camera (Xion GmbH).
The larynges were dissected from the inside out using microphonosurgical instruments (Spiggle & Theis). The dissection started, under videoendoscopic guidance, by an incision in the mucosa of the aryepiglottic fold. The adipose tissue was extracted from the paraglottic space and the adjacent pre-epiglottic space by a mixture of sharp and blunt dissection while preserving the muscles and neurovascular structures.
The SLA and its branches were dissected and traced in the paraglottic space from the dorso-cranial entry point to the ventro-caudal exit point. Dissections were videorecorded and photos were shot. Photos and videos were reviewed several times. We reviewed the previous relevant reports describing the anatomy of the SLA and compared the different nomenclatures of the branches of the SLA in different reports.
Results
The body donors' ages at death ranged from 73 to 98 years (mean = 85 years). They were 4 females and 2 males, all Caucasians. They all died from systemic conditions and did not suffer from any known laryngeal disease.
The red latex filled the superior laryngeal arterial system in 9 out of the 12 hemilarynges. The SLA was seen entering the paraglottic space at the dorso-cranial part of its lateral wall by piercing the thyrohyoid membrane in 10 hemilarynges and through a foramen thyroideum in 2 hemilarynges, 1 on each side, in 2 different heads. Of note, Oki reported the entry of the SLA through a foramen thyroideum in 20% of his specimens. 4 From the endolaryngeal aspect, we found that the point of entry of the SLA was surrounded in all 12 hemilarynges by a significant amount of fat, more abundant than the cushion of fat surrounding the artery along its entire course.
After its entry into the larynx, the main trunk of the SLA ran ventro-caudally while remaining tightly attached to the outer wall of the larynx. The course of the main trunk ran smoothly from cranial to caudal, and we did not notice a "point of inflexion" in any of the 12 hemilarynges except 1. In the other 11 specimens, the main trunk of the SLA rather descended obliquely (Figure 1). In all specimens, the main trunk of the SLA was separated from the inner perichondrium of the thyroid plate by a thin muscle layer derived from the inferior constrictor muscle of the pharynx (Figure 2).
After running in the paraglottic space for a variable distance, the SLA started to branch. It bifurcated at the supraglottic level into an ascending branch directed towards the epiglottis and showing a marked undulation (meandering) and a descending branch directed towards the laryngeal mucosa and the intrinsic muscles of the larynx. The descending branch bifurcated again into a terminal anterior and a terminal posterior branch. The level of this second bifurcation was variable. The terminal anterior branch coursed ventro-caudally towards the anterior commissure and left the larynx through the cricothyroid membrane ( Figure 2).
The epiglottis was occasionally supplied by extra ascending branch(es) from the anterior division of the descending branch of the SLA (Figure 3). A branch of the SLA supplied the arytenoid area and showed considerable meandering. This branch was occasionally doubled (Figure 4) and usually had a descending course, except in 1 cadaver in which it ran an ascending course on both sides (Figure 5). Rusu et al reported that the branch of the SLA directed to the arytenoid is descending in 77% of their specimens and ascending in 18.5%. 2 One or more medial branches of the SLA bridged the paraglottic space from lateral to medial on their way towards the ventricle and the false and true vocal folds (Figure 6). A branch from the main trunk of the SLA ran caudally and laterally and left the larynx in close proximity to the lower horn of the thyroid cartilage (Figure 7).
Upon reviewing the anatomic descriptions of the branches of the SLA elaborated by various authors, we deduced that the branch Rusu et al, 2 designated as the superior branch nearly coincides with the ascending branch according to Oki, Pearson, Sato, and Imanishi et al 1,[4][5][6] and with the superior posterior branch described by Imura et al. 7 Similarly, the anterior branch described by Rusu corresponds to the medial posterior branch described by Imura 7 and to the dorsal branch according to Oki, Pearson, and Sato. Table 1 correlates the names of the constant branches of the SLA cited in previous publications to their areas of distribution.
Discussion
Postoperative hemorrhage from the SLA or its branches is a major complication after open laryngeal surgery. It is especially serious when it occurs after TLM or TORS because the bleeding takes place in unprotected airways not secured by a tracheostomy. 6,7 Such a complication is difficult to predict, and its incidence ranges from 0.6% to 8% after TLM. 8 It may reach up to 14% if supraglottic laryngectomy is included. 3 We performed our inside-out dissection of the SLA on specimens that were fresh frozen and not formalin fixed. Compared with fixed specimens, fresh frozen specimens more closely depict normal human anatomy. 8 In order to preserve the natural fixation points of the vessels, 10 we did not dissect or remove the thyroid cartilages from their genuine anatomic positions, nor did we extract the larynges from the necks.
While describing the intralaryngeal course of the SLA, Rusu et al 2 advocated that the point of inflexion from the transverse course of the SLA to its descending vertical one is located anterior to the base of the superior horn of the thyroid cartilage. From the inside aspect, we did not notice such an inflexion point except in only 1 out of the 12 hemilarynges. A possible explanation for the discrepancy between our findings and those of the abovementioned authors may be that 8 of our 12 hemilarynges were from females, where the thyroid laminae are too short 6 to allow for such an inflexion.
[Figure legend abbreviations: AL, left arytenoid area; E, epiglottis; L, laryngeal lumen; SLA, superior laryngeal artery; T, thyroid cartilage.]
Souviron et al, 11 microdissected 20 formalin-fixed cadaveric larynges through an endoluminal approach. They claimed that the superior laryngeal vessels were located under the mucosa of the superior third of a triangle limited by the epiglottic attachment with the aryepiglottic fold, the anterior commissure, and the apex of the vocal process. They proposed that this triangle is the landmark for identification and clamping of the neurovascular elements in the supraglottis. Based upon our dissection and upon the data extracted from Goyal et al 3 and Sato, 6 we do not agree that the artery designated by Souviron et al is the main trunk of the SLA. It is rather its superior (ascending or epiglottic) branch. At the said level, the main trunk of the SLA is located in a more dorsal plane, 6 being tightly attached laterally to the inner lamina of the thyroid cartilage.
In the literature, the anatomic description of the SLA branches shows wide nomenclature discrepancies 2,3,7-11 secondary to a lack of consensus on the naming of its branches. We agree with Rusu et al that "the branching pattern of the SLA must be re-discussed." 2 We also agree that an exhaustive description of all possible branches of the SLA is of minimal surgical interest. Goyal et al focused only on the main trunk of the SLA besides the SLA branches distributed to the epiglottis and arytenoids. 3 Perotti et al highlighted only 3 branches of the SLA, namely the epiglottic, the antero-inferior, and the postero-inferior arteries. Beyond that, they described an anastomotic network in the paraglottic space between the antero-inferior and the postero-inferior arteries. 8 Pearson stated, decades ago, that the branches of the SLA arose in many orders and combinations but were distributed to fairly constant locations. 5 Accordingly, and based upon the comparison we made between the different names of the SLA branches (Table 1), we suggest naming the main branches of the SLA according to their distribution, that is, their area of supply, instead of naming them according to their direction. More easily understandable designations would be "the epiglottic branch," "the arytenoid branch," "the muscular branches," "the luminal mucosal branches," and so forth. Pearson already called the ascending branch "the epiglottic branch" 5 and also stated that many "unnamed arterial branches and arterioles" are present. 5
Conclusion
The inside-out surgical anatomy of the SLA and of its main branches must be well understood to prevent and manage intraoperative and postoperative bleeding after TLM or TORS. An exhaustive anatomic description of all the branches of the SLA is, however, of minimal surgical importance. Naming the main branches of the SLA according to their distribution would resolve much of the ambiguity surrounding the nomenclature of the SLA branches. | 2023-04-18T15:02:09.228Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "00e4d3a8e12aef307e40b03185e44326e3c6b73a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/oto2.42",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2edd957b487131ecb30c2771715511f069c37a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
237620718 | pes2o/s2orc | v3-fos-license | Reduced menin expression leads to decreased ERα expression and is correlated with the occurrence of human luminal B-like and ER-negative breast cancer subtypes
Purpose Menin, encoded by the MEN1 gene, was recently reported to be involved in breast cancers, though the underlying mechanisms remain elusive. In the current study, we sought to further determine its role in mammary cells. Methods Menin expression in mammary lesions from mammary-specific Men1 mutant mice was detected using immunofluorescence staining. RT-qPCR and western blot were performed to determine the role of menin in ERα expression in human breast cancer cell lines. ChIP-qPCR and reporter gene assays were carried out to dissect the action of menin on the proximal ESR1 promoter. Menin expression in female patients with breast cancer was analyzed and its correlation with breast cancer subtypes was investigated. Results Immunofluorescence staining revealed that early mammary neoplasia in Men1 mutant mice displayed weak ERα expression. Furthermore, MEN1 silencing led to both reduced ESR1 mRNA and ERα protein expression in MCF7 and T47D cells. To further dissect the regulation of ESR1 transcription by menin, we examined whether and in which way menin could regulate the proximal ESR1 promoter, which has not been fully explored. Using ChIP analysis and reporter gene assays covering − 2500 bp to + 2000 bp of the TSS position, we showed that the activity of the proximal ESR1 promoter was markedly reduced upon menin downregulation independently of H3K4me3 status. Importantly, by analyzing the expression of menin in 354 human breast cancers, we found that a lower expression was associated with ER-negative breast cancer (P = 0.041). Moreover, among the 294 ER-positive breast cancer samples, reduced menin expression was not only associated with larger tumors (P = 0.01) and higher SBR grades (P = 0.005) but also with the luminal B-like breast cancer subtype (P = 0.006). Consistent with our clinical data, we demonstrated that GATA3 and FOXA1, co-factors in ESR1 regulation, interact physically with menin in MCF7 cells, and MEN1 knockdown led to altered protein expression of GATA3, the latter being a known marker of the luminal A subtype, in MCF7 cells. Conclusion Taken together, our data provide clues to the important role of menin in ERα regulation and the formation of breast cancer subtypes. Supplementary Information The online version contains supplementary material available at 10.1007/s10549-021-06339-9.
Introduction
Breast cancers are among the most common malignancies worldwide and remain the leading cause of cancer-related mortality in women [1]. Previous receptor expression analyses enabled their classification into 4 major clinical subtypes, including luminal A, luminal B, HER2-enriched and triple-negative [2]. The luminal A subtype encompasses approximately 44% of breast cancers. This subtype is estrogen receptor (ER)-positive and/or progesterone receptor (PR)-positive and human epidermal growth factor receptor 2 (HER2)-negative, which displays a reduced expression of proliferation-related genes [3] and is sensitive to endocrine therapy with an overall favorable prognosis. The luminal B subtype represents around 20% of breast cancers and displays lower expression of ERα-related genes, a variable expression of (HER2), and a higher expression of proliferation-related genes [4]. This subtype harbors more genomic instability and has a poorer prognosis than the luminal A subtype [5]. The HER2-enriched subtype is ER-negative, PR-negative, HER2-positive, and highly sensitive to therapies targeting the HER2 receptor. The triple-negative breast cancer (TNBC) subtype is negative for all three receptors [6] and is the most aggressive with the worst prognosis.
Patients harboring MEN1 mutations are predisposed to multiple endocrine neoplasia type 1 (MEN1) syndrome, which is associated with multi-occurring endocrine tumors [7], as well as several types of non-endocrine tumors [8]. Numerous studies have revealed that menin is a multifaceted protein involved not only in the development and control of cell growth of endocrine cells but also in a variety of biological processes, including hematopoiesis and osteogenesis [9][10][11]. The wide range of biological functions regulated by menin results from its interaction with numerous proteins [12]. These menin-interacting proteins include transcription factors (the components of AP1, NFkB, and the TGFβ signaling pathways) and chromatin-modifying proteins (mixed lineage leukemia (MLL), Sin3A, and HDAC) [12,13]. Notably, menin physically interacts with a range of nuclear receptors, including ERα and the androgen receptor (AR), to regulate their pathways [14][15][16].
Over the last few years, evidence has emerged, in vivo, to suggest that menin may play a role in breast cancers [17]. (1) Female heterozygous Men1 knockout mice develop cancers of mammary cells at a low frequency [18], and conditional mammary gland-specific Men1 disruption leads to the development of mammary intraepithelial neoplasia (MIN) in over 50% of female mutant mice [19]. (2) Importantly, an exhaustive analysis of several cohorts of MEN1 patients revealed a significant predisposition to breast cancer [20]. (3) Menin downregulation was detected in a substantial proportion of human sporadic breast cancer samples [19], and MEN1 mutations were found, although rarely, in sporadic breast cancers, justifying its addition to the list of driver mutations/genes of this pathology [21,22]. Of note, the abovementioned analyses all highlight the suppressive role of menin in mammary cell tumorigenesis. However, Imachi et al. found that, among 65 ERα+ breast cancer samples treated with tamoxifen, menin-positive tumors (20 patients) had worse clinical outcome and were more resistant to tamoxifen than menin-negative tumors, suggesting that menin exerts oncogenic effects in these cases [15]. Interestingly, a recent publication, revealing the role of menin in regulating the enhancer of the ESR1 gene coding for ERα, suggests distinct functions for menin in primary normal mammary cells and in breast cancers [23]. The authors showed that, although menin possesses a crucial tumor-suppressive role in normal mammary cells, it acts as an oncogenic factor in ERα+ breast cancer cell lines through an enhancer-mediated regulation of ESR1 transcription. Notably, using the ZR75-1 breast cancer cell line, which does not express menin, they demonstrated that re-expression of menin leads to enhanced ERα expression. Given the heterogeneous nature of breast cancers, we sought to further investigate the regulation of ESR1 by menin and assess the putative relationship between ESR1 dysregulation due to menin inactivation and the occurrence of human breast cancer subtypes.
Men1 deficiency in mice leads to ERα downregulation in early mammary lesions
We previously reported the occurrence at a high incidence of mammary intraepithelial neoplasia (MIN) lesions, displaying weak ERα expression, in Men1 mammary conditional mutant mice [19]. To further determine the causative role of menin deficiency in reduced ERα expression, we carried out double IF analysis of menin and ERα expression in normal and young mutant mice with MIN lesions, before the development of breast cancer. Three mice per control or mutant group were analyzed. As shown in Fig. 1a, all of the mammary luminal cells in Men1 F/F -WapCre − control mice expressed menin. Conversely, menin expression was lost in 71.1% of mammary cells in these young Men1 F/F -Wap-Cre + mice (Fig. 1a). ERα was expressed in approximately 52.5% of luminal cells expressing menin in the former group (Fig. 1a, upper panel), whereas immunofluorescence revealed that ERα expression was nearly 3.2-fold lower in Men1-deficient cells in Men1 F/F -WapCre + mice (lower panel), compared to Men1 F/F -WapCre − mice. The merged images of menin and ERα staining clearly highlight that ERα is less expressed specifically in the nuclei of menin-deficient luminal cells (Fig. 1a).
Menin downregulation in human ERα + mammary cells leads to reduced ERα expression
Next, we further dissected the regulation of ERα expression by menin using different approaches in ERα + breast cancer cell lines. To achieve this, we first performed MEN1 knockdown (KD) using a siRNA approach. As shown in Fig. 1b, MEN1 KD MCF7 and T47D cells displayed reduced ERα protein expression by western blot analysis, unlike the menin-negative ERα + cell line, ZR75-1. Moreover, MEN1 KD led to a twofold decrease in ESR1 mRNA levels by RT-qPCR analysis in MCF7 and T47D cell lines (Fig. 1c). We then verified the effects of MEN1 KD on ESR1 mRNA and ERα expression levels under estrogen (E 2 ) stimulation. Western blot and RT-qPCR analyses showed that MEN1 silencing further abrogated ERα expression under E 2 stimulation but had no additional effect on ESR1 transcription (Fig. 1d, e), most likely due to the fact that transcription had already reached its lowest level upon estrogen stimulation. All the above data thus confirm our in vivo analyses, indicating that menin is essential in maintaining ESR1 transcription and ERα expression.
Menin binds to the proximal region of the ESR1 promoter
Dreijerink et al. reported that menin plays a crucial role in the regulation of ESR1 transcription in an enhancer-mediated way [23]. We noticed that, although the study revealed the binding of menin at the transcription start site (TSS) of the ESR1 promoter, no further analyses were reported on this region. We thus carried out ChIP analyses to evaluate the binding of menin to the − 2500 bp to + 2000 bp region around the TSS, defined based on previously reported works ( Fig. 2a) [23,24], to fully decipher the regulation of ESR1 transcription by menin at the proximal promoter region. Menin was significantly enriched in the ESR1 promoter region encompassing the TSS to + 2000 bp in MCF7 (Fig. 2b, left panel) and T47D (Fig. 2b, right panel) cells and more specifically in the promoter area C in MCF7 cells. Importantly, we confirmed by luciferase reporter assays that the transcriptional activity of the proximal ESR1 promoter region A/B and C was markedly reduced when MEN1 was knocked down (Fig. 2c).
Considering the data published by Dreijerink et al. [23] showing that H3K4me3 marks are more abundant in the proximal part of the ESR1 promoter, we sought to investigate the involvement of the MLL complex, a major actor modifying H3K4me3 marks [13], in the regulation of the ESR1 promoter. By treating MCF7 and T47D cells with MI503, an inhibitor of the menin-MLL interaction, RT-qPCR analyses unveiled a more than twofold decrease in ESR1 transcription (Fig. 3a). Western blot analyses in MCF7 and T47D cells using the same inhibitor also revealed a decrease in ERα expression at the protein level (Fig. 3b). We then verified the potential alteration of H3K4me3 marks at this region upon inhibition of the MEN1/MLL complex. ChIP analysis with anti-H3K4me3 antibodies showed that, while MI503 treatment led to markedly reduced binding of menin at 48 h (Fig. 3c, left panel), H3K4me3 methylation was not altered upon menin/MLL inhibition in the tested region at this time point (Fig. 3c, right panel). We then performed the same analysis at 72 h and 96 h and found that, at 72 h, one of the tested H3K4me3 marks had decreased significantly, while the other H3K4me3 marks decreased slightly but not significantly (Fig. S1, upper panel). At 96 h, all H3K4me3 marks had decreased significantly, except for the one at the TSS site (Fig. S1, lower panel), although a substantial proportion of the MI503-treated cells had stopped growing by this time. Furthermore, RT-qPCR analyses showed that neither siMLL1, nor siMLL2, nor their combination affected ESR1 transcription (Fig. 3d, left panel) or ERα expression (Fig. 3d, right panel). Taken together, our data provide evidence that menin regulates the proximal ESR1 promoter and raise the question of the involvement of factors other than the MLL complex in this regulation.
Having confirmed and extended the role of menin in regulating ESR1 transcription, we sought to further confirm its role in the growth of ER+ breast cancer cells, as previously reported [23]. To achieve this, we used colony formation assays to investigate cell growth behavior after MEN1 knockdown. As shown in Fig. 3e, MEN1 silencing in both MCF7 and T47D cells led to reduced colony formation, supporting that menin is needed for the growth of these ER+ breast cancer cells.
Lower menin expression is associated with luminal B-like and ER-negative breast cancer subtypes
Our observations prompted us to perform a thorough investigation of the levels of the menin protein in a cohort of breast cancer patients having undergone surgery at the Centre Léon Bérard (CLB) hospital from 2001 to 2003. Among 354 patients, 151 (42.7%) had a low menin expression, while 203 (57.3%) had a high expression. Among the 294 patients with ER+/HER2− tumors, 116 patients (39.5%) had a low nuclear menin expression and 178 patients (60.5%) had a high expression. In the cohort of 354 patients, we found that lower nuclear menin (H score ≤ 100) expression was significantly associated with ER-negative breast cancers (P = 0.041) and with the HER2-enriched subtype (P = 0.049, Table 1). Moreover, among the 294 ER+/HER2− patients, we observed that low menin expression was associated with the luminal B-like breast cancer subtype (P = 0.006), larger tumors (P = 0.016), and higher SBR grades (P = 0.005, Table 2).
Interestingly, among the ER+/HER2− cohort, we found that low menin expression was associated with worse distant metastasis-free survival (DMFS), with a 10-year DMFS of 71.5% versus 81.2% in patients with high menin expression, P = 0.053 (Fig. 4a). Furthermore, low menin expression was also associated with a trend for worse disease-free survival (DFS), with a DFS of 65.7% at 10 years versus 75.0% in patients expressing high levels of menin (P = 0.088, Fig. 4b).
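The survival comparisons above were performed in SPSS (see the Statistical analyses section); purely as an illustration of the type of analysis involved, the following Python sketch shows how a Kaplan-Meier estimate and a log-rank test for a menin-high versus menin-low split could be computed with the lifelines package. The file name and column names (dmfs_months, event, menin_high) are hypothetical placeholders, not variables from this study.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")       # hypothetical per-patient table
high = df[df["menin_high"] == 1]     # H score > 100
low = df[df["menin_high"] == 0]      # H score <= 100

kmf = KaplanMeierFitter()
for label, grp in (("menin high", high), ("menin low", low)):
    kmf.fit(grp["dmfs_months"], event_observed=grp["event"], label=label)
    print(label, "10-year DMFS:", float(kmf.predict(120)))  # survival at 120 months

result = logrank_test(high["dmfs_months"], low["dmfs_months"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print("log-rank P =", result.p_value)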
Finally, lower expression of menin was also associated with
The abovementioned data obtained in human patients, reminiscent of the observations made in Men1-deficient mutant mice, highlight a relationship between reduced menin expression and weaker ERα expression, suggesting that decreased ERα expression triggered by Men1 deficiency could be related to the occurrence of luminal B-like and ER-negative breast cancer subtypes.
Menin downregulation alters GATA3 and FOXA1 expression in ER+ breast cancer cells
Having demonstrated a clinical correlation between menin inactivation and breast cancer subtypes, we wondered whether factors important for luminal cell differentiation could be affected by menin in ER+ breast cancer cells. GATA3 is known to be a major factor involved in the regulation of ESR1 expression and is ubiquitously present in luminal A breast cancers [25]. Western blot analysis revealed that GATA3 expression was greatly reduced at the protein level in both MCF7 and T47D cells after MEN1 KD, although its mRNA level was not affected (Figs. 5a, b, S2). In parallel, we investigated the expression of FOXA1, which plays an important role in mammary cell differentiation and tumorigenesis, and found that its protein expression increased upon MEN1 KD in MCF7 cells but remained unchanged in T47D cells, whereas no transcriptional alteration could be detected in either cell line (Figs. 5a, b, S2). Since GATA3 has been reported to interact with menin in lymphocytes [26], and menin is known to interact with one member of the FOXA family, FOXA2 [27], we performed immunoprecipitation (IP) and PLA analyses to determine whether menin could interact with GATA3 and FOXA1 in breast cancer cells. The data obtained demonstrated that both factors interact with menin in MCF7 cells, as evidenced by IP at the endogenous level (Fig. 5c), by GST pull-down (Fig. 5d), and by PLA (Fig. 5e). Taken together, the current work revealed that menin interacts with both GATA3 and FOXA1 in ER+ breast cancer cells. Moreover, its expression could be critically related to the expression of GATA3, a well-recognized marker of the luminal A subtype.
Discussion
The current work provides both clinical and experimental data showing that menin is critically involved in ERα expression and that its inactivation in mammary cells is correlated with the occurrence of luminal B and ER-negative breast cancer subtypes. Our data highlighted cellular and molecular consequences of reduced menin expression in mammary cells, which may affect not only cell proliferation but also other hallmarks of cancer cells, in particular, cell differentiation.
We previously observed that the mammary lesions developing in mammary cell-specific Men1 mutant mice displayed low ERα expression [19]. Our current study further demonstrates that this decrease occurs in precancerous lesions, suggesting that menin inactivation favors the tumorigenesis of mammary cells with weak ERα expression. Interestingly, by analyzing this expression in a large cohort of breast cancer patients, we found that reduced menin expression is significantly correlated with both ERα-negative and luminal B-like breast cancer subtypes. Consistently, low levels of menin were correlated with larger tumors, more advanced SBR grades, and worse prognosis, all of which are major features of these two breast cancer subtypes [4]. It is worth mentioning that these clinical data, together with those concerning reduced ERα expression in Men1-deficient mouse MIN lesions, further support the oncosuppressive role played by the MEN1 gene in the tumorigenesis of normal mammary gland cells [19,23]. Moreover, while searching for luminal cell factors likely to interact with menin, we found that menin binds physically to GATA3 and FOXA1 in mammary cells and that MEN1 silencing reduces GATA3 expression in MCF7 and T47D cells. Of note, reduced GATA3 expression is often seen in the luminal B breast cancer subtype but not in luminal A [28]. However, the mechanisms leading to the occurrence of these two luminal breast cancer subtypes remain elusive. The current work may provide useful insight and generate interest for further studies. In the meantime, considering the retrospective nature of the study and the heterogeneity of the therapies received by the patients included, the clinical analyses, which may be limited by the IHC-based cutoff definition, should be confirmed in other cohorts, preferably through prospective studies.
Dreijerink et al. first described the capacity of menin to regulate ESR1 transcription by binding to the remote upstream part of the regulatory sequences of ESR1 through an enhancer-mediated looping mechanism involving GATA3 [23]. Moreover, the occupancy of this enhancer sequence by GATA3 has been reported to play an important role in the regulation of ERα expression upon estradiol stimulation [25]. Our findings provide complementary information related to the role of menin in ESR1 regulation through its proximal promoter. Intriguingly, our data showed a delayed decrease in H3K4me3 methylation on the proximal ESR1 promoter upon MI503 treatment, as well as a lack of clear ESR1 transcriptional alteration after single MLL1 or MLL2 knockdown or their combined knockdown with siRNA. Since MI503 has been demonstrated not only to inhibit the interaction between menin and MLL1/MLL2 but also to reduce menin expression itself [29], our data may suggest that factors other than the MLL complex also participate in this regulation. It would be interesting in the future to identify the factors or cofactors that interact, positively or negatively, with menin to regulate this gene. In addition, our data seem to support the oncogenic role played by menin in ERα+ breast cancer cell lines, the proliferation of which is highly ERα-dependent. Therefore, by combining the data obtained from our experimental and clinical analyses, we consider that menin most likely acts as an oncogenic cofactor in the luminal A breast cancer subtype.
[Fig. 1 legend: Reduced menin expression leads to a decrease in ERα expression. (a) Co-immunofluorescence for menin and ERα on mammary gland sections from Men1 F/F WapCre − and Men1 F/F WapCre + mice at < 12 months of age, with quantification of IF signals shown on the right. (b) Western blots for menin and ERα in MCF7, T47D, and ZR75-1 cells treated with siRNA control (siCtrl) or siRNA targeting MEN1 (siMEN1 hs1). (c) RT-qPCR of MEN1 and ESR1 in MCF7 and T47D cells treated with siCtrl or two different siMEN1 (hs1 or hs2). (d, e) Western blot and RT-qPCR analyses of MCF7 and T47D cells treated with siCtrl or siMEN1 hs1 and then subjected to estradiol (E 2 , 10 nM) stimulation; the PS2 transcript was used as a positive control. Data are mean ± SEM; ns P > 0.05, *P < 0.05, **P < 0.01, ***P < 0.001.]
[Figure legend: Representative images of the foci formation assay with MCF7 and T47D cells treated with siMEN1(1) + (3) or siCtrl; quantification is shown on the right. Data are mean ± SEM; ns P > 0.05, *P < 0.05, **P < 0.01, ***P < 0.001.]
Conclusion
The emerging role of the MEN1 gene in mammary cell tumorigenesis appears to be multifaceted. Our current results provide further evidence that menin may play different, even opposite, roles in the development of different breast cancers, in agreement with the findings reported by Dreijerink et al. Taken together, these results may explain the seemingly controversial data reported so far, in particular when comparing data obtained from naturally occurring tumors with those from cultured cancer cells. Furthermore, our findings may also raise awareness of the breast cancer subtypes selected when designing new therapeutic strategies involving the eventual use of menin and MLL inhibitors.
Patients
We screened a total of 433 consecutive female patients with breast cancer who underwent surgery and (neo)adjuvant therapy at the Centre Léon Bérard (CLB) between January 2001 and December 2003 (Additional file 1). A total of 354 patients had complete data and adequate samples assessable for menin by IHC, among whom 294 had ER+/HER2− tumors. The intrinsic subtypes of breast cancer were defined by histological grade and IHC surrogates as per the St Gallen 2013 consensus [30]. Patients were defined as luminal A-like if their tumors were positive for ER and PR, negative for HER2 expression, and showed low proliferation (grade I, or grade II with low Ki67 or mitotic index). Luminal B-like tumors were defined as ER-positive with either PR negativity, HER2 positivity, or high proliferation. The study was conducted in accordance with the guidelines of the Declaration of Helsinki, and the use of all patient tissues was approved by the local IRB and carried out according to French laws and regulations.
TMA analysis of human breast cancers
Formalin-fixed paraffin-embedded breast cancers were prepared and processed for immunostaining as previously described [19]. Tissue micro-array (TMA) block preparation, menin nuclear expression assessment using IHC, and statistical analyses were performed as previously described [19]. The percentage of stained cells was multiplied by the intensity of staining to obtain the H score [31]. For the sake of correlations and survival analyses, the most discriminative cutoff in terms of DFS (as determined by Kaplan-Meier method) was chosen to divide the whole cohort of patients into high menin expression (H score > 100) and low menin expression (H score ≤ 100).
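To make the scoring concrete, the H-score calculation and the dichotomization described above can be written as a short Python sketch. This is illustrative only: the text states that the percentage of stained cells was multiplied by staining intensity and that the cutoff of 100 was chosen as the most discriminative for DFS; the function names and example values below are ours, not the authors'.

def h_score(pct_stained_cells, staining_intensity):
    # H score = percentage of stained cells (0-100) x staining intensity
    return pct_stained_cells * staining_intensity

def menin_group(score, cutoff=100):
    # High menin expression: H score > 100; low menin expression: H score <= 100
    return "high" if score > cutoff else "low"

print(menin_group(h_score(60, 2)))  # 120 -> "high"
print(menin_group(h_score(40, 2)))  # 80  -> "low"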
Animal breeding
Men1 F/F -WapCre + and Men1 +/+ -WapCre + mice previously generated in our lab were used [19]. All animal experiments were conducted in accordance with accepted standards of animal care and were approved by the Animal Care and Use Committee of the University Lyon 1.
[Fig. 5 legend: Menin interacts with GATA3 and FOXA1 and influences their expression. (a) Western blots for menin, GATA3, and FOXA1 in MCF7 cells treated with siCtrl or siMEN1 hs1. (b) RT-qPCR analyses of GATA3 and FOXA1 transcription in MCF7 cells treated with siCtrl or siMEN1 hs1. (c) Co-immunoprecipitation of nuclear lysates of MCF7 cells with anti-IgG, anti-GATA3, or anti-FOXA1 antibodies, followed by western blot. (d) GST pull-down using GST-full-length (FL) menin and nuclear protein lysates of MCF7 cells, detected by western blot with anti-GATA3 or anti-FOXA1 antibodies; a Coomassie blue-stained gel shows the recombinant GST proteins used. (e) PLA analysis with anti-menin and anti-GATA3 antibodies in MCF7 and ZR75-1 cells, the latter expressing no menin, with quantification shown below. Data are mean ± SEM; ns P > 0.05, *P < 0.05, **P < 0.01, ***P < 0.001.]
For treatment with E2 and MI503, cells were grown in phenol red-free medium supplemented with 10% charcoal-stripped serum (Biowest) in order to remove steroid hormones (steroid depletion). Cells were then treated for 3 h with E 2 (Sigma) at 10 −8 M and with MI503 for 48 h. The treatment was repeated after 24 h due to the degradation of the inhibitor over time. Please also see Additional file 2-Supplemental Materials & Methods.
Foci formation assay
For foci formation assay, cells were seeded in 6-well culture plates at 5 × 10 2 cells for MCF7 and T47D. Cells were transfected with siRNA or treated with MI503 and cultured for 2 weeks. The ensuing colonies were stained with 0.05% crystal violet. The images of the plates were analyzed using ImageJ software. Each experiment was conducted in triplicate and statistical analyses were performed using the Prism software.
Luciferase assays
For luciferase assays, MCF7 cells were cultured in 24-well plates. Forty-eight hours after transfection with 250 ng of the reporter plasmid PrAB or PrC and 5 ng of the pRL-TK internal control vector, cell lysates were prepared and analyzed using a dual-luciferase reporter assay system (Promega, Madison, WI), as previously reported [27]. Comparisons between mean values were assessed using the two-tailed Student t test.
Real-time reverse transcription and qPCR analyses
RNAs were extracted using RNeasy kits (Qiagen, Valencia, USA). Real-time PCR analyses were carried out on a StepOne real-time PCR system (Applied Biosystems, France) using SYBR Green (Life Technologies, France) and the corresponding primers (Additional file 2). The results for each sample were normalized.
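The normalization scheme is not detailed above; relative quantification by the 2^-ΔΔCt method against a reference gene is one common approach, sketched below. The Ct values are placeholders and the choice of reference gene is an assumption, not information from the study.

```python
import numpy as np

def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control).
    """
    dct_treat = np.mean(ct_target_treat) - np.mean(ct_ref_treat)
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(dct_treat - dct_ctrl)

# Hypothetical Ct triplicates; the reference gene is assumed, not specified above.
fold = ddct_fold_change(
    ct_target_treat=[24.1, 24.3, 24.0],
    ct_ref_treat=[18.2, 18.1, 18.3],
    ct_target_ctrl=[22.5, 22.6, 22.4],
    ct_ref_ctrl=[18.1, 18.2, 18.0],
)
print(f"fold change vs control: {fold:.2f}")
```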
Protein extraction, immunoprecipitation, GST pull-down, and immunoblotting
Total protein extracts from cells and immunoprecipitation were prepared and analyzed as described previously [27]. For GST pull-down assays, 1.25 µg purified GST menin protein or GST control protein was incubated with 1 mg or 2 mg of nuclear cell extracts prepared from MCF7 cells, as previously described [27]. The co-sedimented proteins were detected by western blot using standard conditions.
Immunostaining
Tissue preparation, immunostaining, and statistical analyses were performed as previously described [19]. Briefly, endogenous peroxidases were quenched in 3% H2O2 solution for 30 min at room temperature. Heat-induced epitope retrieval was performed by immersion in antigen-unmasking solution (catalog no. H-3300; Vector Laboratories) in a microwave oven for 15 min. After blocking with antibody diluent (Dako), sections were incubated overnight with a primary antibody (Additional file 2). For immunofluorescence (IF) staining, signals were detected with a Cy3 or Cy5 tyramide amplification kit (PerkinElmer), with prior incubation with the appropriate biotinylated secondary antibody according to the manufacturer's instructions. Images were acquired on an Eclipse-NiE NIKON microscope using the NIS-Elements Software.
Proximity ligation assay (PLA), image acquisition, and analysis
MCF7 cells were fixed in methanol for 5 min and washed twice in PBS and then treated and analyzed according to the manufacturer's instructions (Duolink II Fluorescence, Olink Bioscience, Sweden). Images were acquired on an Eclipse NiE NIKON microscope using the NIS-Elements Software. For each sample, at least one hundred cells were counted. Analysis and quantification of these samples were performed using the ImageJ software (free access). PLA dots were quantified on 8-bit images using the 'Analyse Particles' command, while cells were counted using the cell counter plugin.
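Downstream of the ImageJ 'Analyse Particles' and cell-counter outputs described above, a per-image dots-per-cell statistic and its SEM can be computed as in the following sketch. The counts shown are placeholders, not values from the study.

```python
import numpy as np

# Hypothetical per-image outputs exported from ImageJ:
# number of PLA dots (Analyse Particles) and number of cells (cell counter plugin).
dots_per_image  = np.array([412, 388, 455, 430])
cells_per_image = np.array([105, 98, 112, 101])

dots_per_cell = dots_per_image / cells_per_image
mean = dots_per_cell.mean()
sem = dots_per_cell.std(ddof=1) / np.sqrt(dots_per_cell.size)
print(f"PLA dots per cell: {mean:.2f} ± {sem:.2f} (mean ± SEM)")
```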
ChIP-qPCR assay
Chromatin for ChIP analysis was prepared from 5 million MCF7 or T47D cells. Briefly, cells were fixed in 1% formaldehyde for 10 min; nuclei were then isolated and lysed in 300 μl of ice-cold RIPA buffer prior to chromatin-DNA shearing with a Bioruptor sonicator (Diagenode). ChIP was performed using 5 μg of primary antibodies. Dynabeads® Protein G (10003D, Life Technologies, France) were used to retrieve the immunocomplexes according to the manufacturer's instructions.
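How the ChIP-qPCR enrichment was expressed is not stated above; percent-of-input is one common convention and is sketched here purely for illustration. The input fraction and Ct values are placeholders.

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """ChIP-qPCR enrichment as percent of input.

    The input Ct is first adjusted for the fraction of chromatin kept as input
    (e.g. a 1% input corresponds to a dilution factor of 100), and the
    enrichment is then 100 * 2^(adjusted input Ct - IP Ct).
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values for one primer pair.
print(f"{percent_input(ct_ip=27.4, ct_input=24.1, input_fraction=0.01):.3f} % of input")
```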
Statistical analyses
For molecular biology experiments, statistical analyses were performed as described in the figure legends; unpaired Student's t tests were used unless otherwise indicated. All analyses were conducted using Prism 5 software (GraphPad, USA); a P value of < 0.05 was considered significant. Results are expressed as means ± standard errors of the means (SEM). For the patient samples, numerical variables were compared using Student's t test, while categorical variables were compared using the χ2 test. Distant metastasis-free survival (DMFS) was defined as the time from diagnosis to the date of distant metastasis, death, or last follow-up. Disease-free survival (DFS), defined as the time from diagnosis to death, progression, or the date of last follow-up (for censored patients), was also calculated. Survival rates were estimated using the Kaplan-Meier method, and comparisons between menin expression groups were performed using the log-rank test. All statistical tests were two-sided, and P values lower than 5% were considered statistically significant. Statistical analyses were performed using the SPSS 20.0 statistics package. | 2021-09-25T13:37:03.923Z | 2021-09-24T00:00:00.000 | {
"year": 2021,
"sha1": "0de531a197803704dbe99974098bf31775bbb6a3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10549-021-06339-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0de531a197803704dbe99974098bf31775bbb6a3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245959404 | pes2o/s2orc | v3-fos-license | Numerical simulation of gas-liquid transport in porous media using 3D color-gradient lattice Boltzmann method: trapped air and oxygen diffusion coefficient analysis
ABSTRACT In non-aqueous Li–air batteries, the liquid electrolyte penetrates porous media such as the carbon nanotube (CNT) paper structure, transports dissolved substances such as oxygen, and plays a role in generating reactants on the surface of the porous media. Although the trapped air generated during the electrolyte penetration process could affect the oxygen transport and performance of the battery, this issue has not been sufficiently investigated. Therefore, in this study, the patterns of electrolyte penetration and air entrapment in porous media were investigated through numerical analysis. A multi-relaxation-time color-gradient lattice Boltzmann method was employed for modeling. Based on a two-phase flow simulation in porous media, electrolyte penetration and trapped-air saturation were analyzed in terms of porosity, wettability, and viscosity ratio. The porosity and viscosity ratio did not considerably affect the trapped-air saturation, whereas wettability had a significant effect on this parameter. In addition, the effective diffusion coefficient increased with increasing porosity, hydrophilicity, and viscosity ratio.
Nomenclature
L_x  x-direction length of the porous medium
L_y  y-direction length of the porous medium
L_z  z-direction length of the porous medium
M  transformation matrix
M  viscosity ratio
m  mass
n  unit normal vector of desired contact angle
n*  estimated unit normal vector to the triple contact line
p  pressure
Q  correction term components
S  relaxation matrix
S  saturation
s_ph  indicator function depending on the phase
s  relaxation parameters
s_q, s_π  nonhydrodynamic moment relaxation parameters
t  time
u  macroscopic velocity
u  fluid velocity
W  sixth-order weight function
x  spatial position
x  x-direction coordinate
y  y-direction coordinate
z  z-direction coordinate
Greek symbols
α  free parameter related to density ratio
β  free parameter related to interface thickness in f^+
δ  free parameter related to interface thickness in τ_ν
δ_t  time step
δ_x  lattice spacing
ε  porosity
θ  contact angle
θ  angle between unit normal vectors n_s and n*
μ  dynamic viscosity
ν  kinematic viscosity
ρ  density
ρ^N  color function
σ  surface tension
T  tortuosity
τ  relaxation time
ϕ  angle between color gradient and lattice velocity
φ  coefficient related to α
χ  free parameter related to ψ
ψ  coefficient related to surface tension generation operator
ω  weight coefficient
Ω  collision operator
Subscripts or Superscripts
A  air
B  blue fluid
b  bulk
eff  effective
eq  equilibrium
i  discrete lattice velocity direction
in  initial
L  liquid
LB  lattice scale conversion
l  fluid color
R  red fluid
r  relative
Introduction
The demand for non-aqueous Li-air batteries, which are eco-friendly and have excellent reversible capacities, is increasing because of their superior theoretical efficiency in mass and volume-specific energy densities, compared with those of Li-ion batteries (Lai et al., 2020;Xu et al., 2010). Non-aqueous Li-air batteries are composed of an anode, an air electrode containing porous media, and a liquid electrolyte. During the discharging process of the battery, Li ions from the anode move toward the air electrode through the liquid electrolyte. Concurrently, gaseous oxygen in the air is dissolved in the liquid electrolyte, adheres to the surface of the air electrode, and produces reactants. These reactants are important because they improve the maximum discharge capacity of the battery (Li et al., 2011). However, because typical non-aqueous electrolytes have low oxygen solubility, the oxygen transport must be increased for gaseous oxygen to reach the surface (Read et al., 2003;Yuan et al., 2015). Carbon papers are mainly used as the material of the air electrode. Among the carbon papers, the carbon nanotube (CNT) fibrous network structure is used for resisting mechanical stress caused by discharge solid deposition and is outstanding in maintaining a conductive network during discharge (Nomura et al., 2017;Ushijima et al., 2020). This structure takes the form of porous media, and the geometric characteristics of the porous media, such as the porosity (Tan et al., 2017), affect oxygen transport. Since oxygen transport is measured while the electrolyte has already infiltrated the CNT structure, it is not necessary to analyze the permeability of liquid transport in the air electrode. When conducting a twophase flow penetrating simulation of the air electrode, the shape of the infiltrated liquid must be determined. Because reaction products that can affect the battery efficiency form on the surface, the liquid electrolyte must completely cover the surface to improve the battery efficiency (Gwak & Ju, 2016). However, owing to the properties of both the surface and fluid, the fluid may not penetrate the voids between the porous structures and may leave empty spaces while flowing. This is referred to as trapped air, generated naturally in porous media through several mechanisms, such as geological carbon dioxide sequestration, water infiltration in complex soil, and liquid seepage through sediments (Iglauer et al., 2013;Mahabadi et al., 2018;Suekane et al., 2010).
Trapped air is defined as the volume ratio of air to the pores in the porous media that are penetrated by liquid, and this ratio is affected by the structural and surface properties of porous media (Wang et al., 2019). This study is aimed at investigating the relationship between the areas wherein the liquid electrolyte cannot reach the surface and the decrease in the oxygen transport efficiency. As the trapped air lowers the amount of liquid electrolyte present in the porous media, the amount of reactants produced through the electrochemical reaction decreases, which affects battery performance (Shodiev et al., 2021;Tran et al., 2010). Therefore, the trapped air during twophase flow penetration must be studied to understand the behavior of the liquid electrolyte in the air electrode. Lee and Jeon (2014) studied air-trapped regions with respect to electrolyte filling in Li-ion batteries, and Shi et al. (2019) analyzed the droplets and liquid layers in a porous structure in terms of the properties of both the fluid and surface. In addition, Jeon (2021) investigated liquid electrode penetration of lithium-ion batteries based on two-phase flow simulation and confirmed that the effect of particle-size on cathode wettability is important. The aforementioned research groups found that properties such as porosity, wettability, and viscosity ratio can influence the penetration rate of the fluid. However, these groups focused only on the liquid transport phenomenon and did not confirm the correlation between the efficiency of the battery and oxygen transport. In a previous study related to oxygen transport in porous media, Yuan and Sundén (2014) analyzed various properties, such as the porosity, diffusivity, and tortuosity, which can affect the transport phenomena. Bao et al. (2021) also studied oxygen diffusion according to the compression of the fibrous structure. However, these studies are centered on the contents related to morphology, such as the randomness and complexity of porous media, as well as limited to the gas diffusion layer of fuel cells. Although studies on CNT paper are being conducted (Cho et al., 2021;Ushijima et al., 2020), transport properties are analyzed from the viewpoint of the manufacturing process, such as structural transformation. Therefore, studies on the effects of the properties of the liquid electrolyte, such as wettability and viscosity, in relation to oxygen transport in the CNT paper structures remain inadequate. To understand the two-phase flow phenomenon, especially in porous media computational fluid dynamics (CFD) have been conducted (Baumann et al., 2020;Jiang et al., 2020;Mosavi et al., 2019;Sergi et al., 2016;Shamshirband et al., 2020). This is because for air electrodes such as CNT paper structures, it is difficult to visualize liquid penetration in situ through experiments due to the micro-size and random geometries (Shodiev et al., 2021). Among the various CFD modeling techniques, the lattice Boltzmann method (LBM) is primarily utilized to study two-phase flows in porous media. Unlike traditional computational fluid dynamics based on the Navier-Stokes equation, the LBM has the ability to express dynamic fluid interfaces and complex boundaries Lee & Jeon, 2014;Yuan et al., 2021). In this study, we used the color-gradient LBM model devised by Gunstensen et al. 
(1991), which is advantageous for relatively low generation of spurious velocities, high viscosity ratio between the two fluids, and controlling the numerical interface thickness (Ba et al., 2016;Leclaire et al., 2017). Leclaire et al. (2017) studied twophase flow in porous media using a color-gradient LBM. The flow pattern was analyzed in accordance with the viscosity ratio and capillary number, and the presence of trapped air was confirmed. Moreover, the numerical instability that results from the viscosity-dependent velocity field and the consideration of a two-phase flow with high density and high viscosity ratios can be overcome using the multiple relaxation time (MRT) model (Wen et al., 2019).
This study confirmed that the trapped air can occur in air electrode of CNT paper structure and oxygen transport, an indicator of battery efficiency, can be decreased because of the trapped air that interferes with the reaction products. In addition, it was confirmed that trapped air can affect oxygen transport as it changes according to the liquid and solid properties, such as porosity, wettability, and viscosity ratio. For the two-phase flow simulation, the basic simulation model was a 3D color-gradient model, and the MRT method was utilized to ensure a high density and high viscosity ratio. The proposed model was validated based on the experimental results reported in the literature. The porous media used in this study were constructed similar to carbon nanotube (CNT) structures characterized using scanning electron microscopy (SEM) and commercial software, such as GeoDict. These porous media were employed in the simulation after structural reconstruction through data conversion.
3d MRT color-gradient LBM model
To simulate the two-phase flow, we employed a 3D 19-velocity (D3Q19) MRT color-gradient LBM. This model, first proposed by Gunstensen et al. (1991), represents immiscible fluids using two density distribution functions: red and blue fluid. However, the model is numerically unstable at high densities and high viscosity ratios owing to the undesired terms in the macroscopic momentum equation. Ba et al. (2016) proposed an equilibrium distribution function that eliminates these undesired terms, and Wen et al. (2019) extended this function to a 3D MRT model to improve the numerical stability and accuracy. Therefore, we used the model proposed by Wen et al. (2019) to ensure accuracy at high densities and viscosity ratios.
In the color-gradient LBM, the evolution of the distribution functions can be defined as follows: where x is the spatial position, and f l i is the density distribution function; i represents the lattice velocity direction, and l is the fluid color, which is either red (l = R) or blue (l = B). The total density distribution function is Thus, f R i and f B i denote the density distribution functions of red fluid and blue fluid, respectively. In Equation (1), δ t is the time step, t is time, and e i is the lattice velocity in the i th direction, which is expressed as follows: Here, c = 1 is the lattice speed, which can be defined as δ x /δ t , where δ x denotes the lattice spacing. e i is shown in detail in Figure 1. Moreover, l i is a collision operator that consists of the following three parts: is the surface tension generation operator, and ( l i ) (3) is the segregation operator. The macroscopic values can be obtained using the density distribution functions of all fluids based on the following relationship: where ρ R and ρ B are the macroscopic densities of the red and blue fluids, respectively, ρ is the total density, andu is the macroscopic velocity. Segregation is required to divide the interface of immiscible fluids. Gunstensen et al. (1991) first proposed the recoloring process, which could have a very thin fluid interface. However, in the above algorithm, there were many spurious currents, and there was a problem in that the interfaces were pinned to the lattice in the creeping flow. Therefore, we applied Latva-Kokko and Rothman's recoloring algorithm, which presents few spurious velocities and can solve the lattice-pinning problem (Leclaire et al., 2012), to replace the streaming process of the LBM Equation (1) using Equation (5): wheref l,+ i is the post-segregation distribution function that replaces the segregation operator ( l i ) (3) . f l,+ i is obtained through the following recoloring steps: where β is a free parameter that determines the thickness of the interface, and is between 0 and 1: The smaller this value, the thicker the interface (Leclaire et al., 2012). f * i is the total post-perturbation distribution function, which is obtained through the following steps: is the equilibrium distribution function. The improved equilibrium distribution function to remove the error term of momentum equation occurring in Chapman-Enskog analysis is as follows (Li et al., 2012): In Equation (9), ω i is the weight coefficient, ω 0 = 1/3, ω 1−6 = 1/18, and ω 7−18 = 1/36. And c l s is the speed of sound in the fluid with the color l. Compared with lattice speed c, c l s has a value that changes with the density to match the pressure equilibrium. φ l i is a parameter for adjusting the pressure of the fluid and is defined as whereα l is a free parameter with a value between 0 and 1, to prevent the pressure and equilibrium functions from being negative. And since pressure must have a continuous value at the interface, the following relation is satisfied for all fluids (Liu et al., 2012): Therefore, the values of f l, eq i (ρ l , α l , u = 0) in Equations (6) and (7) become f l, eq i = ρ l φ l i . 
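The macroscopic densities and velocity above are the zeroth- and first-order moments of the two distribution functions. The explicit moment equations are not reproduced in the extracted text, so the sketch below follows the standard color-gradient form (ρ_l as the sum of that fluid's populations, ρu as the momentum of both fluids combined); the D3Q19 velocity ordering and the placeholder distribution values are illustrative only.

```python
import numpy as np

# D3Q19 lattice velocities (one common ordering; the exact ordering of
# the paper's Figure 1 is not reproduced here).
E = np.array(
    [[0, 0, 0]]
    + [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
    + [[1, 1, 0], [-1, -1, 0], [1, -1, 0], [-1, 1, 0],
       [1, 0, 1], [-1, 0, -1], [1, 0, -1], [-1, 0, 1],
       [0, 1, 1], [0, -1, -1], [0, 1, -1], [0, -1, 1]],
    dtype=float,
)

def macroscopic(f_red: np.ndarray, f_blue: np.ndarray):
    """Zeroth and first moments at one node: rho_R, rho_B, rho, and u."""
    rho_r = f_red.sum()
    rho_b = f_blue.sum()
    rho = rho_r + rho_b
    momentum = (f_red + f_blue) @ E   # sum over directions and colors of f_i^l e_i
    u = momentum / rho
    return rho_r, rho_b, rho, u

# Placeholder distributions for a single node (19 values per fluid).
f_r = np.full(19, 0.05)
f_b = np.full(19, 0.001)
print(macroscopic(f_r, f_b))
```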
The pressure of each fluid, p l , was determined using the following equation: The total pressure, p, is the sum of the pressures of all fluids, that is, In Equations (6) and (7), ϕ i is the angle between the phase-field parameter gradient (or color gradient) ∇ρ N and lattice velocity, and cos ϕ i is expressed as follows: where the phase-field parameter (or color function) ρ N is defined as Here, ρ R in and ρ B in are the initial densities of the red and blue fluids, respectively. ∇ρ N is calculated as follows, according to numerical derivative of the LBM: The single-phase collision operator and surface tension generation operator in Equation (8) were calculated using the MRT collision operator. The single-phase collision operator is defined as follows (Li et al., 2010;Li et al., 2013): where I is the unit matrix, S is the relaxation matrix, and C l is the correction term. This correction term is applied to remove the error term caused by the off-diagonal elements of the third-order moment of the equilibrium distribution (Guo et al., 2002). M −1 is the inverse of the transformation matrix M (M is described in Appendix A). The relaxation matrix S, is expressed as : where s is the relaxation parameter, and s b and s ν represent the relaxation parameters corresponding to the bulk and kinematic viscosities, respectively. The other relaxation parameters, s q and s π , are related to nonhydrodynamic moments and in this study, the value of s b , s q , s π is set to 1. The relaxation parameters corresponding to the kinematic and bulk viscosities were determined using τ ν and τ b , as follows: where τ ν and τ b are the relaxation times related to the kinematic and bulk viscosities, respectively.
where μ l is the dynamic viscosity, and μ l b is the bulk viscosity. The following relationships involving τ ν are incorporated to ensure the continuity of the viscosity values at the interface between the two fluids: where δ is a free parameter and is usually set as 0.98 (Wen et al., 2019); τ R ν and τ B ν are the relaxation times for the red and blue fluids, respectively, and are determined by the kinematic viscosities of these fluids. The kinematic viscosity ν l is expressed as ν l = (c l s ) 2 (τ l ν − 0.5)δ t . g R and g B represent quadratic equations with the color function ρ N as the variable and with the viscosity differentiating at the interface (Leclaire et al., 2012). The correction term C l in Equation (15) is described in Appendix A.
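The relation ν_l = (c_s^l)^2 (τ_ν^l − 0.5) δt stated above links each fluid's kinematic viscosity to its relaxation time, so the relaxation times can be set from target viscosities (and vice versa). A small helper illustrating this is given below; using the relaxation times reported later in the paper (τ_R = 1.5, τ_B = 0.515) and assuming the same c_s^2 for both fluids, it reproduces a viscosity ratio of roughly 66.

```python
def viscosity_from_tau(tau: float, c_s_sq: float = 1.0 / 3.0, dt: float = 1.0) -> float:
    """Kinematic viscosity in lattice units via nu = c_s^2 (tau - 0.5) dt.

    c_s^2 = 1/3 is the usual D3Q19 value for c = 1; in the color-gradient
    model above c_s^l varies with density, so treat it as an input.
    """
    return c_s_sq * (tau - 0.5) * dt

def tau_from_viscosity(nu_lattice: float, c_s_sq: float = 1.0 / 3.0, dt: float = 1.0) -> float:
    """Inverse relation: relaxation time for a target lattice viscosity."""
    return nu_lattice / (c_s_sq * dt) + 0.5

# Relaxation times used in the simulations described below.
nu_r = viscosity_from_tau(1.5)
nu_b = viscosity_from_tau(0.515)
print(nu_r / nu_b)   # ~66.7, consistent with the stated viscosity ratio of 66
```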
To generate surface tension in the color-gradient model, an additional collision function is required. Tölke et al. (2002) proposed a surface tension generation operator in the D3Q19 model, but it caused an error in the stress tensor, resulting in an inaccurate surface tension value. In this study, we used the method proposed by Liu et al. (2012) to generate the surface tension using the concept of a continuum surface force. Using this method, it is possible to model low spurious currents and accurate surface tension values, even in the presence of density differences. To ensure the generality of the MRT model and independence from the relaxation parameters, the surface tension generation operator in Equation (8) is defined as follows: where A l is a free parameter that controls the magnitude of the surface tension, and the surface tension σ is derived as follows: In order to maintain the generality in this simulation, it is assumed that A R and A B have the same value (Liu et al., 2012). To conserve mass and momentum, ψ i should satisfy the following equation: The solutions,ψ i , derived from Equation (22) are expressed as: where χ is a free parameter set to 2 to give simplicity (Liu et al., 2012).
To implement the desired contact angle θ, we used the wetting boundary condition adopted by Xu et al. (2017). First, the color gradient ∇ρ N * at the triple contact line, where the red and blue fluids meet the solid surface, is obtained using Equation (14). The estimated unit normal vector n * can then be defined as n * = −∇ρ N * /|∇ρ N * |, and the direction of n * must be adjusted to produce the desired contact angle. Based on the unit normal vector of the solid wall and the estimated unit normal vector n *, the normal vector n of the interface tangent corresponding to the desired contact angle is derived (Akai et al., 2018). As the unit normal vector is modified, the color gradient is also replaced as ∇ρ N = |∇ρ N * |n. This allows the interface in the simulation to assume a shape that matches the desired contact angle.
Validation
To validate the LBM model used in this study, as depicted in Figure 2(a), a porous medium composed of a spherical bed was simulated (Hao & Cheng, 2010). The porosity ε of the medium was 0.55 and contact angle of 180°was set for all surfaces. After the periodic boundaries on all sides of the domain were set and constant liquid saturations were established, two-phase flow simulations were conducted. The simulations were terminated when the phase distribution reached a steady state. A two-phase flow motion was induced by applying a body force to the fluid, which had capillary and Reynolds numbers of the order of 10 −6 and 10 −4 , respectively. These low values ensured that the two-phase flow was unaffected by the viscosity, density, and gravity. Here, an additional term is required for the single-phase collision operator to apply a body force. Therefore, the body force term F l mentioned in Ba et al. (2016) and Guo et al. (2002) were used for validation (F l is described in Appendix B). Based on the simulation results, the relative permeabilities of the red and blue fluids were calculated as follows, and were compared for each saturation: where k l is the absolute permeability of the porous medium and k l r is the relative permeability of fluid color l and has a value between 0 and 1. and F l is the body force applied to fluid l. As depicted in Figure 2(b), the simulation result for the variation of the relative permeability within the initial red fluid saturation agrees well with the existing experimental results (Bryant & Blunt, 1992). The red fluid saturation S R is the space occupied by the red fluid inside porous media.
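The relative-permeability expression itself is not reproduced in the extracted text. One common way to evaluate it in body-force-driven LBM simulations is to take, for each phase, the ratio of its domain-averaged flow-direction velocity in the two-phase run to the value obtained when that fluid occupies the whole pore space under the same body force and viscosity; the sketch below assumes exactly that and uses placeholder velocity data, so it is an illustration rather than the paper's formula.

```python
import numpy as np

def relative_permeability(u_phase_two_phase: np.ndarray,
                          u_single_phase: np.ndarray) -> float:
    """Relative-permeability estimate for one fluid.

    Assumes both runs use the same body force and viscosity, so k_r reduces to
    the ratio of domain-averaged flow-direction velocities (two-phase vs.
    single-phase occupancy of the pore space); the result lies in [0, 1].
    """
    return float(np.mean(u_phase_two_phase) / np.mean(u_single_phase))

# Placeholder flow-direction velocity samples (lattice units).
u_red_two_phase = np.array([1.2e-5, 0.8e-5, 1.0e-5])
u_red_single = np.array([2.4e-5, 2.5e-5, 2.6e-5])
print(relative_permeability(u_red_two_phase, u_red_single))
```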
Simulation setup
The CNT paper shape was used as the model geometry for the LBM simulations, as illustrated in Figure 3. First, the surface geometry of the CNT paper was generated in the stereolithography (STL) format, which describes a 3D structure with triangular faces. The surface geometry was constructed using a commercial software package, GeoDict (Math2Market, Germany), based on the experimentally determined porosity (0.5-0.7), diameter distribution (15-20 mm), and SEM images. Using this method, scenarios with various porosities and compression along the thickness direction were evaluated. The resulting porosity values were 0.800, 0.711, and 0.665, respectively.
To create CNT structures in LBM simulations, structural reconstruction is a prerequisite (Han et al., 2016) because the geometry needs to be defined as a grid. The STL format created using the GeoDict program defines only irregular surface geometry (Rypl & Bittnar, 2006). However, grid files are described as binary information for all nodes in the structure. Grid files provide an efficient method for processing and accessing geometric information. Therefore, a data conversion method is required to reproduce a porous structure. Thus, a grid file is created by separating fluid node and the solid node respectively. And then, some areas of the grid file are extracted. To determine the number of grids in the simulation domain, we conducted a grid-independence test. Using a CNT paper structure with a porosity value of 0.800 in Figure 3(a), three grid files with the same domain size but a different number of grids, were extracted as shown in Figure 4(a). The number of lattices in the three grids was 34 × 34 × 34, 100 × 100 × 100, and 200 × 200 × 200, respectively. The porosity value approaches the original value of 0.800 as the number of girds increases and a comparison of the porosity values indicated that the difference in porosity between Cases 2 and 3 was smaller than that of Case 1 (within a 3.5% error). Figure 4(b) demonstrates the progress in the liquid saturation for each case. The liquid saturation S L is the space occupied by the liquid inside porous media, and shows the small difference between Cases 2 and 3. The error is within 5.26% for all sections; when the steady state is reached, the liquid saturation value is almost the same (within a 0.5% error). Consequently, a simulation was conducted using the condition corresponding to Case 2, considering the computational cost. Figure 5 depicts the approximate simulation domain for the transient liquid transport process. This domain contains the porous medium structure in the middle, with buffer spaces above and below to allow the fluid to flow. The simulation domain is composed of a mesh consisting of L * x × L * y × L * z = 100 × 100 × 140 in lattice value, including free fluid space above and below. In the LBM simulation, the values of all lattice units, such as the length, time, and mass, are set equal to 1, i.e. L * LB = t * LB = m * LB = 1. Because the actual physical value of the channel size in the x-direction is L x = 0.0003m, the unit lattice scale is equal to L LB = 3.0 × 10 −6 m based on unit conversion (L * x /L x = L * LB /L LB ). Accordingly, we calculated the lattice value and unit lattice scale for each physical value, as listed in Table 1. The initial density ρ R in and ρ B in correspond to liquid and air, respectively, and the density ratio is 1000:1. Also, the viscosity ratio (M) was set to 66, which is similar to the average ratio of organic electrolytes used in Li batteries (Read, 2006). The relaxation time was τ R ν = 1.5 for the red fluid and τ B ν = 0.515 for the blue fluid. The upper and lower parts were set as pressure (or Dirichlet) boundary conditions (Hecht & Harting, 2010). The remaining parts were set as the periodic boundary conditions. The solid wall parts were set as halfway bounce-back boundary conditions. At the inlet, the liquid saturation was set as a film that permeated the porous medium owing to the pressure difference (P) between the inlet and outlet applied to the boundary.
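The lattice-to-physical conversion described above (all lattice units set to 1, with L_LB = 3.0 × 10⁻⁶ m obtained from the 0.3 mm channel divided over 100 nodes) can be captured with a small helper. Only the length scale comes from the text; the helper itself is an illustrative convenience, not part of the published workflow.

```python
# Unit conversion between lattice and physical length scales, following
# L*_x / L_x = L*_LB / L_LB as used above.
N_X = 100               # lattice nodes across the channel in x
L_X_PHYS = 0.0003       # physical channel size in x [m]
L_LB = L_X_PHYS / N_X   # physical length of one lattice spacing [m]

def to_physical_length(n_lattice_units: float) -> float:
    """Convert a length given in lattice units to metres."""
    return n_lattice_units * L_LB

def to_lattice_length(length_m: float) -> float:
    """Convert a physical length in metres to lattice units."""
    return length_m / L_LB

print(L_LB)                    # 3e-06 m per lattice unit
print(to_physical_length(140)) # height of the 140-node domain in z, in metres
```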
Results and discussion
A numerical study was conducted on liquid transport in a porous layer composed of CNTs, and the liquid and air saturation were analyzed. Based on the porosity, wettability, and viscosity ratio of the simulated porous layer, changes in the liquid transport pattern and air saturation were confirmed. The oxygen transport in the liquid electrolyte was analyzed using the effective diffusion coefficient.
Liquid transport and air saturation in porous media
The liquid transport simulation results for a porosity ε of 0.737, contact angle θ of 90°, and viscosity ratio M of 66 are shown in Figure 6(a). The sky-colored surface indicates the interface between the liquid and air, and δ t is the time step. The simulation results confirmed that the liquid gradually penetrated the empty space within the porous layer. Owing to the randomness of the CNT structures, the shape of the liquid transport was non-uniform when the liquid reached the end point of the porous layer. Figure 6(b) shows a plot of the total liquid saturation, S L , and total air saturation, S A , in the porous layer. The saturation of each phase is calculated as follows: where ε L and ε A are the porosity of liquid and air, which denote the amount of space occupied by the liquid and air, respectively. Therefore, saturation is defined as the ratio of each phase to the porosity of the porous media. As depicted in this figure, the total liquid and air saturation values were 1. Moreover, as the liquid was transported through the porous layer, the liquid saturation increased, whereas the air saturation decreased. Subsequently, when the liquid reached the end of the porous layer, the amount of liquid saturation became constant. However, liquid saturation did not reach a value of 1. This suggests that a region existed inside the porous medium where the liquid did not flow; this is the region containing trapped air. Figure 6(c) indicates that the air distribution in the porous media is in a constant liquid saturation state, which appears in the vertical direction of the flow. The transparent gray regions represent the porous media, and the red regions indicate the interface between the liquid and air, that is, the residual air in the porous media. The air saturation distribution illustrated in Figure 6(c) verifies the various air shapes produced on the surface, owing to an irregular porosity distribution.
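Each saturation above is the fraction of the pore space occupied by that phase. Treating the simulation output for the porous-layer region as a voxel field of phase labels, the liquid and air saturations can be computed as in the following sketch; the label convention and the random field are placeholders for illustration.

```python
import numpy as np

# Hypothetical voxel labels for the porous-layer region of the domain:
# 0 = solid, 1 = liquid (red fluid), 2 = air (blue fluid).
labels = np.random.default_rng(0).choice(
    [0, 1, 2], size=(100, 100, 100), p=[0.3, 0.5, 0.2]
)

eps = np.count_nonzero(labels != 0) / labels.size         # porosity
eps_liquid = np.count_nonzero(labels == 1) / labels.size  # liquid-occupied fraction
eps_air = np.count_nonzero(labels == 2) / labels.size     # air-occupied fraction

S_L = eps_liquid / eps   # liquid saturation
S_A = eps_air / eps      # air saturation (trapped air at steady state)
print(S_L, S_A, S_L + S_A)   # the two saturations sum to 1
```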
Numerical analyses were performed for the three geometries described in the simulation setup to investigate the effect of the properties of the porous layer. Figure 7 shows the change in the total liquid saturation with the porosity, wettability, and viscosity ratio. Figure 7(a) depicts the total liquid saturation for each porosity case over time when the contact angle θ = 90° and viscosity ratio M = 66 are fixed. As observed, a lower porosity was associated with a slower approach to a constant liquid saturation. Additionally, the amount of trapped air remained almost unchanged, indicating that the trapped-air saturation was largely independent of the carbon fiber structures used in this study. Figure 7(b) shows the total liquid saturation for the different wettability cases for a fixed porosity ε = 0.737 and viscosity ratio M = 66. Wettability is the ability of an immiscible fluid to wet a surface in the presence of other immiscible fluids; changes in wettability can affect the flow pattern (Aziz et al., 2020; Kim et al., 2020). The wettability cases in this study involved contact angles of 30°, 90°, and 150°, where 30° and 150° correspond to hydrophilic and hydrophobic surfaces, respectively. Unlike the changes in porosity, changes in the contact angle did not significantly alter the transfer rates of the liquid. In all cases, the liquid reached the ends of the porous medium at similar times, and the liquid saturation attained a constant value. However, the amount of air remaining in the medium varied with wettability. Relative to the 90° case, the total air saturation decreased on the hydrophilic surface and increased on the hydrophobic surface.
Quantitative studies were performed for the three different viscosity ratios described in the simulation setup to investigate the effect of the viscosity ratio of the porous media. A range of 50-80 was specified for the viscosity ratio, based on the value of 66 determined in a previous study (Read, 2006). Figure 7(c) depicts the total liquid saturation according to the viscosity ratio when the porosity ε = 0.737 and contact angle θ = 90° are fixed. For a viscosity ratio of 50, the liquid reached the end of the porous medium more rapidly, whereas the liquid flow was slow for a viscosity ratio of 80. This contrasts with the behavior expected from porosity, where liquid transport accelerates as the porosity increases, and vice versa. These results also confirmed that the amount of air saturation did not change, as in the porosity cases. Trapped air was found on the surface during liquid penetration, and the distribution of trapped air was confirmed to differ according to the type of geometry. However, as seen in Figure 8(b), the amounts of trapped air differ between the hydrophobic and hydrophilic cases. Considerably more air is trapped in the hydrophobic case than in the hydrophilic case, because the liquid adheres more to hydrophilic surfaces owing to interactions with the surface during flow and thus penetrates the voids in the porous media. In the case of hydrophobic surfaces, by contrast, the strong repulsive force between the two phases prevents the fluid from passing through the narrow spaces in the porous medium. It was evident that, for the hydrophobic surface, the liquid did not adhere to the surface but reached the end of the porous medium along the flow direction, and that the amount of remaining air increased. Thus, the trapped air occupies larger regions in the case of a hydrophobic surface. Figure 8(c) illustrates the pattern of air saturation according to the viscosity ratio. The air saturation distribution at different viscosity ratios indicated that the shapes of the trapped air pockets were slightly different, but the positions and degree of penetration were similar.
Effective diffusion coefficient
Oxygen transport in a liquid electrolyte occurs through diffusion, which is a measure of the extent to which a material spreads from areas of high concentration to areas of low concentration. This parameter is critical as it enables the performance of a battery to be evaluated in a state in which Li ions or oxygen are dissolved in a liquid electrolyte, such as in a non-aqueous Li-air battery (Huang et al., 2019). The effective diffusion coefficient D eff quantifies the diffusive transport in a porous medium and represents the correlation between porosity and tortuosity (Yuan & Sundén, 2014).
where D is the diffusion in the bulk material and is affected by temperature. In this simulation, as the temperature was the same on all sides, D was a constant value for all the cases. Further, ε eff and T are the effective porosity and tortuosity of the CNT structure, respectively. Effective porosity refers to a portion of the porosity for which the fluid can be analyzed in porous media. Tortuosity indicates the extent to which the flow path in a porous material can become curved, and is expressed as follows: where L e denotes the average distance a fluid travels when passing through the porous material. In porous media, the tortuosity is always greater than 1. In this study, the tortuosity was calculated as follows based on the simulation results (Wang, 2014): where u(x) and u z (x) are the velocity vectors for the fluid at position x and the velocity component along the flow direction z, respectively. The effective diffusion coefficient is a measure of the amount of oxygen required for diffusion and is affected by the structural characteristics of the porous medium. However, the trapped air shown in the simulation results can affect the effective diffusion coefficient of the oxygen dissolved in the liquid electrolyte. The trapped air was primarily generated on the surfaces of the CNT structures, as confirmed by the simulation results. The trapped air on the CNT surfaces suggests that the oxygen dissolved in the electrolyte does not react with a surface on which trapped air is present. This allows the trapped air to act as a dead-end zone, resulting in pores that are inaccessible to the fluid. Therefore, the total air saturation generated after the liquid flowed through the porous media resulted in a decrease in the effective porosity of the CNT structure. The effective porosity, ε eff was calculated as follows: That is, when the simulation reaches a steady state, the total amount of trapped air is same as ε A . Figure 9 presents the results of an analysis of the porosity of air and the effective diffusion coefficient for all cases considered in this study. Among the three variables, namely the porosity, wettability, and viscosity ratio, the simulation was carried out by assigning a certain value to one of these variables. Then, the porosity of air and the effective diffusion coefficient were calculated by varying the other two variables to ensure that the effect of each variable could be clearly confirmed. As shown in Figure 9(a) and (b), the viscosity ratio was fixed at M = 66, and the changes in the porosity of air were determined by varying the porosity and wettability. Figure 9(c) and (d) show the changes according to the wettability and viscosity ratio for ε = 0.737. Figure 9(e) and (f) show the changes in the porosity and viscosity ratio by keeping the wettability value constant by using a contact angle of θ = 90 • . A comparison of the porosity of air in Figure 9(a), (c), and (e) reveals that the wettability has the greatest effect.
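The tortuosity estimate of Equation (28) above uses the simulated velocity field (the domain-averaged velocity magnitude relative to the averaged flow-direction component), and the effective diffusion coefficient then combines the bulk diffusivity, effective porosity, and tortuosity. The exact functional form is not reproduced in the extracted text, so the sketch below uses the common choice D_eff = D · ε_eff / T as an assumption consistent with the definitions above; the velocity samples and porosity values are placeholders.

```python
import numpy as np

def tortuosity(u: np.ndarray) -> float:
    """Tortuosity estimated from the velocity field (Wang, 2014 style):
    ratio of the summed velocity magnitudes to the summed flow-direction (z)
    components. u has shape (N, 3) with one row per fluid node."""
    speed = np.linalg.norm(u, axis=1)
    return float(speed.sum() / u[:, 2].sum())

def effective_diffusion(D_bulk: float, eps_eff: float, tort: float) -> float:
    """Effective diffusion coefficient; D_eff = D * eps_eff / T is an inferred
    form consistent with the text, not a formula quoted from the paper."""
    return D_bulk * eps_eff / tort

# Placeholder velocity field and porosities (porosity 0.737, trapped air 0.05).
u_field = np.array([[1e-5, 0.0, 3e-5], [0.0, 1e-5, 2.5e-5], [2e-6, 1e-6, 2.8e-5]])
T = tortuosity(u_field)
eps_eff = 0.737 - 0.05
print(T, effective_diffusion(D_bulk=1.0e-9, eps_eff=eps_eff, tort=T))
```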
For the hydrophobic surface, the porosity of air was highly distributed, but the effects of the porosity and viscosity ratio were insignificant. In the case of the porosity, which corresponds to the geometry, owing to the complex structure of the porous medium, the porosity of air did not change by following a constant pattern; however, the total change was not significant. Additionally, in the case of the viscosity ratio, the magnitude of change was not considerable; however, the greater the overall viscosity ratio, the smaller the porosity of air. Figure 9(b), (d), and (f) show the effective diffusion coefficients for each case based on the effective porosity. The arrows in each graph show that the corresponding variable increases in the direction of the respective arrow. In conclusion, the overall effective porosity and effective diffusion coefficient are proportional to each other. As shown in Figure 9(b), the effective diffusion coefficient increases with porosity, specifically owing to the wettability. In the case of the hydrophobic surface, the difference in the reduction in the effective porosity was substantial, and the effective diffusion coefficient was thus low. Considering one geometry (ε = 0.737), as shown in Figure 9(b) and (d), when the surface is hydrophobic, the distribution of the effective diffusion coefficient is narrow; conversely, when the surface is hydrophilic, the coefficient is highly distributed. Based on the effect of the viscosity ratio for each pattern, the larger the viscosity ratio, the larger is the effective diffusion coefficient. As seen in Figure 9(f), similar to the previous results, it was confirmed that the greater the porosity and viscosity ratio, the greater the effective diffusion coefficient. Figure 10 shows the changes in the tortuosity according to the effective porosity for all the simulation results. Overall, the tortuosity decreased when the porosity increased. The relationship between porosity and tortuosity was also confirmed by the correlations available in the literature for porous structures (Bhatia et al., 2011;Lanfrey et al., 2010). Owing to the complexity and randomness of the porous structure, it is impossible to accurately define the L e in tortuosity. Therefore, in this study, the tortuosity was estimated based on the fluid velocity of the simulation results using Equation (28). A comparison of the relationship between the effective porosity and tortuosity with literature data shows that certain tendencies correspond with those reported in the literature; however, the tortuosity is overestimated as the wettability on the surface increases. These results confirmed that the wettability properties contributed to the increase in the tortuosity. Figure 11 compares the diffusivity D eff /D in all simulation cases with theoretical relations previously reported in the literature. The Bruggeman equation, which was used to calculate the black dashed line in Figure 11, is the most widely used equation for electrode modeling and analysis. The orange dashed line is the diffusivity that was calculated on the basis of a carbon paper structure similar to the geometry used in this study (Zamel et al., 2009). As shown in the figure, the diffusivity values calculated in the simulation were larger than those obtained with the correlation equation represented by the orange dashed line. Therefore, it is necessary to assess the trapped air in terms of the effective porosity, wettability, and viscosity ratio to obtain the effective diffusion coefficient.
Conclusions
In this study, the characteristics of liquid transport and trapped-air saturation were analyzed based on their patterns and parameters (porosity, wettability, and viscosity ratio) by using a two-phase flow simulation in porous media consisting of CNTs. Compared with the porosity and viscosity ratio, the wettability did not significantly affect the penetration rate of the liquid transport. However, it had the strongest effect on the amount of trapped air: the more hydrophilic the surface, the smaller the amount of trapped air. Because this entrapped air decreases the effective porosity and increases the tortuosity, the effective diffusion coefficient can be affected by porosity, wettability, and viscosity ratio. This suggests that these variables must be considered to accurately determine the effective diffusion coefficient.
In this study, it was confirmed that the amount of trapped air did not change significantly with the porosity and viscosity ratio. However, due to the limited number of cases, the characteristics of the CNT structure with a wider range of porosity could not be confirmed. In addition, the liquid transport of materials with higher viscosities, such as gels, could be studied. In the future, we plan to conduct further studies by considering larger viscosity ratios and media with various porosities.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, and the Ministry of Food and Drug Safety) (KMDF_PR_20200901_0104, 9991007267).
Appendices Appendix A
The transformation matrix M in Equation (15) is expressed as follows: | 2022-01-15T16:07:47.586Z | 2022-01-13T00:00:00.000 | {
"year": 2022,
"sha1": "14e951496305c750c97f7df3a0e784e04e8ce7ae",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19942060.2021.2008012?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "0a09a7c75cf89bb1acd63631097eb3811137b034",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": []
} |
252641007 | pes2o/s2orc | v3-fos-license | Hasselt University’s Roadmap to a European University
Hasselt University (UHasselt) recently joined EURECA-PRO, an eight-university partnership spanning seven European countries working to create a European University on Sustainable Consumption and Production. EURECA-PRO pursues sustainability research focused on the processes of resource extraction, production, and consumption in order to investigate the entire value chain of materials and products. EURECA-PRO partners and UHasselt have similar research expertise in the field of sustainability, present complementary educational profiles, and share multiple strategic policy priorities, such as inclusivity, lifelong learning, sustainability, and internationalisation. In what follows, we outline our motives and hopes in joining EURECA-PRO: the enhancement of international staff/student mobility, increased academic collaboration, further development of research/education innovation, and positive spillover effects for UHasselt's wider academic and geographical community. As a relatively new partner, UHasselt will invest substantially in communication strategies and initiatives to build connections between UHasselt staff/students and partnering institutions. UHasselt appreciates EURECA-PRO's accomplishments to date and acknowledges the facilitation of UHasselt's entrance into the alliance. Overall, UHasselt considers EURECA-PRO a valuable platform to enhance research, teaching, and public engagement on topics already a part of UHasselt's DNA. At the same time, we believe that Hasselt University has meaningful contributions to make to EURECA-PRO, for example through expertise in international programmes, specialised and rare infrastructure (e.g. the Ecotron) and an established reputation as a leading research institute and teaching hub on topics, such as circular economy and energy transition, that reside at the heart of the EURECA-PRO Alliance.
Introduction
The European Commission, through the Erasmus+ programme, is committed to the development of European Universities that will strengthen the European Education, Research, and Innovation Area and create additional career opportunities for students and researchers. The European Commission defines European Universities as "transnational alliances of higher education institutions developing long-term structural and sustainable cooperation. They mobilise multi-disciplinary teams of students and academics through a challenge-based approach in close cooperation with research, business and civil society. European Universities will pool together their online and physical resources, courses, expertise, data, and infrastructure to leverage their strengths and to empower the next generations in tackling together the current challenges that Europe and the world are facing. They promote all forms of mobility (physical, online, blended) as well as multilingualism via their inclusive European inter-university campuses" [1].
Since this definition was given in 2020, it has become clear that European University Alliances will only grow in importance. As a forward-looking University, UHasselt has therefore been keen to explore potential partnerships. During our search, UHasselt was approached by several universities and explored numerous potential collaborations. Our hope was to find a balanced collaboration with likeminded partners who shared not only a common network mission (both now and moving forward) and understanding of expectations and commitments but also had a similar perspective on European University development.
For our part, we perceive the development of a European University as a profound commitment that requires a great investment from all partners including a focus on cooperation, a mindset able to overcome present obstacles, and a willingness to face future challenges together. Quickly after initiating talks with the European University on Sustainable Consumption and Production (EURECA-PRO), it became clear that the structure, focus, mission, and vision was a perfect fit for UHasselt. Things therefore moved quickly after that: UHasselt was invited to present its application and, following discussion and the approval of several UHasselt decision-making bodies, a EURECA-PRO Rector's meeting granted UHasselt membership in the alliance.
In January 2022, UHasselt became the eighth EURECA-PRO member. Hasselt University is in full support of EURECA-PRO's ambition to become a leading European University capable of spurring innovation and inspiring transformation in the field of sustainable consumption and production. We share the EURECA-PRO belief that this can be achieved through intense inter-university collaboration in education, research, and public engagement. UHasselt is committed to strengthening academic collaboration between scholars across the alliance and to increasing international knowledge exchange while delivering interdisciplinary, cutting-edge research outputs and enhancing partners' research reputations in this field. By doing so, UHasselt intends to play a significant role in bringing EURECA-PRO students and staff together as an international community of learners and in validating research results at the regional and international level through Open Science practices and the organisation of Science Awareness events. Together with EURECA-PRO partners, Hasselt University has the ultimate ambition to become the prime partner and consultant of the European Commission as well as of industrial players and public agencies active in the field of sustainable consumption and production in Europe and beyond.
Motives and Hopes
EURECA-PRO membership is an important step toward UHasselt's vision of internationalisation and a growing reputation as an international hub for research, teaching, and public engagement on topics related to sustainability. We therefore hope to create exciting new learning opportunities for students and staff within EURECA-PRO to increase international mobility and knowledge production while spurring interdisciplinary research collaboration beyond geographical borders. Doing so within the EURECA-PRO Alliance will grow our reputation as an innovative university in the field of sustainability as well as strengthen our ability to drive innovation and transformation at the local, regional, European, and global levels.
Focus of European and Flemish Research Agenda
Flanders shares Europe's vision and believes that the future of universities will be defined at the European level. In 2019, only two years after French President Macron first raised the idea of "European Universities" , the European Commission moved forward with the creation of 17 alliances. Last year, another 14 alliances were added to these already-existent European Universities. Europe's commitment to European Universities is undeniably strong and convincing: a third call for alliances is on its way, and the ambition to have 60 European Universities by 2024 is backed up by a 1.1 billion budget. Seated at the heart of Europe and housing many European institutions, Belgium is very attuned to Europeanlevel developments. The same is true for the regions of Belgium: Flanders, Wallonia, and the Brussels' Capital Region. Flanders, which houses UHasselt, goes beyond sharing European ambitions and provides substantial financial incentives for Flemish Universities' participation in European University Initiatives to enhance internationalisation within Flemish research-intensive universities. UHasselt has a 904k grant from the Flemish government in support of our EURECA-PRO membership. As mentioned before, EURECA-PRO acts as a force multiplier for enhancing internationalisation processes within Hasselt University. We intend to invest in the international mobility of staff and students, to stimulate interdisciplinary collaboration amongst researchers, to participate in joint grant writing with partners, and to improve the student experience of Hasselt and EURECA-PRO students.
At the same time, we believe that the established sustainability expertise of UHasselt researchers and institutes (e.g. the Centre for Environmental Sciences (CMK) and the Institute for Materials Research (IMO)), our academic community's central position in a variety of international networks (e.g. the Copernicus Alliance and the European Network on Higher Education for Sustainable Development) and the highly specialised equipment at our disposal (e.g. a Field Research Centre with ECOTRON and large-scale biodiversity and climate research infrastructure) will benefit EURECA-PRO partners. In sum, Hasselt University seeks to enhance the international profile of our education in line with the European and Flemish agendas and, by doing so, to increase the quality of our research-led teaching, the joys of life-long learning and student exchange in a 'borderless' educational environment. We believe that these ambitions dovetail perfectly with those of EURECA-PRO and that our approach will benefit all EURECA-PRO partners to create a win-win situation for all involved.
Synergies with UH Policy Priorities
UHasselt's membership in EURECA-PRO is a good match. EURECA-PRO's policy objectives align with UHasselt's four transversal policy objectives: Sustainability, Inclusiveness, Lifelong Learning, and Internationalisation. In terms of focus, UHasselt's EURECA-PRO participation flows naturally from our most prominent research and teaching expertise. The majority of our educational and research programmes circle around the topic of sustainability and explore themes such as circular economy, energy transition, and sustainable resources. Moreover, UHasselt's students and scholars put their sustainability research/education into practice within various organisations and networks in this field and are already involved in public engagement. UHasselt also has a strong commitment to equality and diversity and thus shares the EURECA-PRO conviction that international mobility and the cultural exchange it fosters will contribute to a more open-minded society and a more inclusive Europe. In addition, UHasselt's conviction that research and education is without age or borders runs parallel with EURECA-PRO's focus on "borderless" , flexible, and international educational programmes. Moreover, as an organisation UHasselt's persistent efforts to widen access to higher education, to increase educational opportunities for everyone and to create a supportive academic community for students and staff support EURECA-PRO's plans to create a more inclusive European Higher Education Environment.
EURECA-PRO's vision and dedication reflect UHasselt's mission as a Civic University in multiple ways. We deeply believe in our civic responsibilities as an institution that plays a key role in driving social, economic, and environmental change in our region, Limburg. We share, for instance, a desire to educate engaged citizens and purposeful leaders to increase their employability in fields that address pertinent societal challenges. Like EURECA-PRO, we too aim to positively impact our home regions and their economies while being determined to initiate developments at the European and global level in line with the UN SDGs. Finally, similar to all EURECA-PRO partners, UHasselt believes that joining forces with other stakeholders (higher education institutions, governments, businesses, and civil society organisations) plays an essential role in tackling pressing issues on the local, regional, European, and global levels.
Synergies with EURECA-PRO Partners
We are thrilled to be part of a geographically diverse alliance whose vision, mission, and focus align so well with ours. UHasselt is itself committed to Sustainable Development Goal (SDG) 12; unsurprisingly, similarities, commonalities, and complementarities in research, education, and policy agenda arose rapidly with EURECA-PRO partners. Already after the first meeting, a foundation of common ground for a partnership that produces mutually beneficial outcomes and promising opportunities for teaching, research, and public engagement collaboration had been established. Though rudimentary, Fig. 1 displays the many synergies that exist between EURECA-PRO topics and those researched and taught at UHasselt's institutes, centres, and faculties.
Additionally, UHasselt has proven expertise in providing education that supports societal innovation and transformation focused on sustainability, with inclusivity and accessibility as guiding principles. As such, UHasselt is well positioned to contribute to the development of BSc and MSc programmes in European Studies on Sustainable Consumption and Production. An example of the work we have already done in this direction is our brand-new MSc in Materiomics, which combines the study of (1) quantum materials and technologies, (2) materials for energy generation, storage, and conversion, (3) materials in circular processes, and (4) materials for health with basic academic research and employability skills. Beyond these thematic synergies, UHasselt also offers a creative and interactive study environment. Our education is student-centred, with a strong focus on differentiated teaching methods and a substantial investment in the development and implementation of innovative educational concepts (such as blended and distance learning). This is reflected in several of EURECA-PRO's educational pillars (such as a dedication to project-based learning and work-based learning).
Challenges Related to the (Delayed) Participation in a European University Alliance
As mentioned, UHasselt entered the EURECA-PRO Alliance only quite recently, after seven other partners had already established a shared history. In order to create true structural collaboration and a veritable European University beyond geographical, disciplinary, cultural, and political borders, it is essential for university partners to know each other well. We at UHasselt were warmly welcomed by all EURECA-PRO members and are excited to strengthen ties with partner universities while supporting mobility amongst university staff. To this end, Review Weeks are valuable opportunities for us to get to know our partners better. We cannot emphasise enough how much we appreciate the efforts that have been made to organise these. Each partner university resides in a different educational environment with its own set of rules (e.g. inter-university agreements govern what courses UHasselt can offer), regulations (e.g. in Flanders we have strong language regulations limiting our ability to offer English courses) and peculiarities (e.g. evaluation regimes and accreditation conditions). As in every international collaboration, we will need some time to find forms of collaboration that take these varied constraints into account. Nonetheless, we believe that our experience with the Transnational University Limburg, a cooperation between Maastricht University and Hasselt University, has provided us with valuable lessons and experience. Moreover, UHasselt students are currently rather unfamiliar with the educational profile of EURECA-PRO partner universities. As such, these universities might not immediately be considered as potential destinations for student exchange. The same goes for incoming students at EURECA-PRO partner universities, who might not have heard of UHasselt or its programmes before.
EURECA-PRO will need time to become fully embedded in the UHasselt community as a strategic research partnership. Not only do researchers across the alliance need to find each other intellectually, they also need to meet each other socially to allow partnerships to grow. Moreover, established scholars might need encouragement to consider alliance partners over already existing partnerships. More junior researchers, on the other hand, will need support to ensure sufficient investment in collaboration with partners. Luckily, UHasselt can financially incentivise collaborative EURECA-PRO research. Nonetheless, nothing can replace researchers finding synergy with other scholars themselves by encountering like-minded colleagues to collaborate with. Finally, we appreciate that EURECA-PRO's research focus might not always suit the agendas of all UHasselt researchers, institutes or centres. We will of course not force scholars to frame their research within EURECA-PRO's vision and are strongly committed to supporting all UHasselt scholars regardless of their EURECA-PRO involvement.
Solution Strategies
UHasselt has an established reputation as a creative and innovative university with strong problem-solving skills and imaginative solutions. The already existing EURECA-PRO Alliance expertise plays in our favour; it, for instance, greatly facilitates the formation of structural collaboration across the network (e.g. by providing Erasmus+ partnerships or opportunities for PhD students). At the same time, the current pace of the project allows us to inscribe ourselves into the EURECA-PRO story and to let collaborations grow. It also enables us to carve out a place for the ambitions of our university. Several proposals to counter the challenges listed in the previous section follow. We believe that strong communication strategies related to research dissemination and project information will play a vital role in ensuring the successful development and implementation of EURECA-PRO. This will be crucial to driving staff and student engagement as well as incentivising industrial and strategic partners to collaborate with EURECA-PRO. We also need a strong communication strategy between universities to ensure we tell our story in a way that motivates staff and students and gets them excited about the opportunities EURECA-PRO can offer. Making sure that staff feel involved is essential as we strive to live up to our university-wide policies of inclusiveness while moving toward EURECA-PRO's interdisciplinary ambitions. Getting the word out will also be crucial in enhancing citizen science initiatives and turning EURECA-PRO into a trusted partner for industrial and social organisations at the Flemish, European, and international levels.
EURECA-PRO has proposed that UHasselt lead a research cluster, a so-called 'Lighthouse' in EURECA-PRO jargon. This offers the ideal context to integrate UHasselt's research activities with those of the alliance. Reaching an agreement with research-active staff on the topic of the UHasselt Lighthouse is the first step in establishing UHasselt research within EURECA-PRO. Next, we hope to incentivise interdisciplinarity and cross-faculty collaboration by supporting joint seminars or conferences at UHasselt centred around UHasselt's Lighthouse Mission. Furthermore, we plan to host a EURECA-PRO conference on the topic of the UHasselt Lighthouse to bring scholars from across the alliance together. Social events could be organised alongside academic events to showcase UHasselt's welcoming environment while enabling international encounters for our own staff. Additionally, we will financially support staff in initiatives that drive UHasselt's Lighthouse Mission research agenda forward, especially when such initiatives include mobility between partner universities.
Finally, we will develop an internal communication strategy (perhaps including presentations, podcasts and videos) to support students in their funding applications while enhancing student exchange and engagement within EURECA-PRO. | 2022-10-01T15:20:32.512Z | 2022-09-29T00:00:00.000 | {
"year": 2022,
"sha1": "36744be064a4c236c591bbdc5cdd4e201fb296ed",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00501-022-01279-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e48e378aa07859c7e996d88550f73eff5df6a100",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
52013657 | pes2o/s2orc | v3-fos-license | NetScatter: Enabling Large-Scale Backscatter Networks
We present the first wireless protocol that scales to hundreds of concurrent transmissions from backscatter devices. Our key innovation is a distributed coding mechanism that works below the noise floor, operates on backscatter devices and can decode all the concurrent transmissions at the receiver using a single FFT operation. Our design addresses practical issues such as timing and frequency synchronization as well as the near-far problem. We deploy our design using a testbed of backscatter hardware and show that our protocol scales to concurrent transmissions from 256 devices using a bandwidth of only 500 kHz. Our results show throughput and latency improvements of 14--62x and 15--67x over existing approaches and 1--2 orders of magnitude higher transmission concurrency.
Introduction
The last few years have seen rapid innovations in low-power backscatter communication [20,29,18,15,17], culminating in long-range and reliable backscatter systems [25,22,27]. These designs enable wireless devices to communicate at microwatts of power and operate reliably at long ranges to provide whole-home or warehouse coverage. To achieve this, they employ low-power coding techniques such as chirp spread spectrum to decode weak backscatter signals below the noise floor [22,25] and deliver long ranges.
While these long-range backscatter systems are promising for enabling power-harvesting devices (e.g., solar and vibration) as well as cheap and small Internet-connected devices that operate on button cells or flexible printed batteries, they primarily work at the link layer and are not designed to scale with the number of devices -- all these prior designs [25,22,27] are evaluated in a network of 1-2 devices.
Our goal in this paper is to design a network protocol that enables these low-power backscatter networks to support hundreds to thousands of concurrent transmissions. This is challenging because the resulting design must operate reliably with weak backscatter signals that can be close to or below the noise floor. To this end, we present NetScatter, the first wireless protocol that can scale to hundreds and thousands of concurrent transmissions from backscatter devices. Our design enables concurrent transmissions from 256 devices over a bandwidth of 500 kHz. Consequently, it can support transmissions from a thousand concurrent backscatter devices using a total bandwidth of only 2 MHz.
Our key innovation is a distributed coding mechanism that satisfies four key constraints: i) it enables hundreds of devices to concurrently transmit on the same frequency band, ii) it can operate below the noise floor while achieving reasonable bitrates, iii) its coding operation can be performed by low-power backscatter devices, and iv) it can decode all the transmissions at the receiver using a single FFT operation, thus minimizing the receiver complexity.
We introduce distributed chirp spread spectrum coding, which uses a combination of chirp spread spectrum (CSS) modulation and ON-OFF keying. In existing CSS systems (e.g., LoRa backscatter [25]), the AP transmits a continuous wave signal which each device backscatters and encodes bits using different cyclic shifts of a chirp signal. In contrast, in our distributed CSS coding, we assign a different cyclic shift of the chirp to each of the concurrent devices. Each device then uses ON-OFF keying over these cyclic shifted chirps to convey bits, i.e., the presence and absence of the corresponding cyclic shifted chirp correspond to a '1' and '0' bit respectively, as shown in Fig. 2.
Figure 2: In traditional CSS systems, a single device uses different cyclic shifts to convey bits. In distributed CSS coding, each cyclic shift is assigned to a different backscatter device. Each device then uses the presence and absence of its cyclic shift to send '1' and '0' bits.
Note that in comparison to existing CSS systems where each device transmits $\log_2 N$ bits using $N$ cyclic shifts, our distributed design enables $N$ concurrent devices, each of which transmits a single bit using ON-OFF keying. Thus, our design transmits a total of $N$ bits within a chirp duration, providing a theoretical gain of $\frac{N}{\log_2 N}$. Our design leverages the fact that creating concurrent cyclic-shifted chirps at a single device requires distributing its transmit power amongst all the cyclic shifts, which reduces the ability of the receiver to decode each chirp. Instead, we generate concurrent cyclic-shifted chirps across a distributed set of low-power devices in the network. This allows us to efficiently leverage the coding gain provided by chirp spread spectrum under the noise floor [9]. Further, we can decode all the concurrent transmissions using a single FFT operation, since cyclic shifting the chirps in the time domain translates to offsets in the frequency domain.
Using the above distributed coding mechanism in practice, however, is challenging for two key reasons.
• Near-far problem. A fundamental problem with enabling concurrent transmissions is that signals from a nearby backscatter device can overpower a farther concurrent device. To address this issue, we introduce two main techniques. First, we present a power-aware cyclic shift allocation technique in §3.2.3, where lower SNR devices are assigned cyclic shifts far from those of higher SNR devices. We show that such an allocation can allow backscatter devices that have an SNR difference of up to 35 dB to be concurrently decoded. Second, to account for channel variations over time, we develop a zero-overhead power adaptation algorithm where backscatter devices use reciprocity to estimate their SNR at the AP, using the signal strength of the AP's query message. The backscatter devices then adjust their transmission power to fall within the tolerable SNR difference. Since this calibration is done independently at each backscatter device using the AP's query, it does not require additional communication overhead at the AP.
• Timing synchronization. The above design requires all the devices to start transmitting at the same time so as to enable concurrent decoding. However, hardware variation and propagation delays of different devices can make it challenging for hundreds of devices to be tightly synchronized in time. To avoid this coordination overhead, we leave gaps between cyclic shifts to ensure that concurrent devices are sufficiently distinguishable and can be decoded. We explore the trade-off between the required gaps and the chirp bandwidth in §3.2.1.
We implement NetScatter on a testbed of backscatter devices. We create backscatter hardware that implements NetScatter and includes circuits to perform automatic power adaptation before each transmission. We deploy our backscatter testbed with 256 devices in an office building spanning multiple rooms as shown in Fig. 1. We implement our receiver algorithm using USRP X-300 software-defined radios. Our results reveal that over a 256 node backscatter deployment, NetScatter achieves a 14-62x gain over prior long-range backscatter systems [25] for its end-to-end link layer data rates. The key benefit however is in the network latency which sees a reduction of 15-67x.
Contributions. Our paper demonstrates, to the best of our knowledge, the first network protocol that achieves orders of magnitude more concurrent transmissions than existing backscatter systems. The closest work to our design is Choir [12] in the radio domain, which decodes concurrent transmissions from 5-10 LoRa radios at a software radio. Choir leverages frequency imperfections to disambiguate between LoRa radios. However, backscatter devices achieve low power operations by running at a lower frequency (1-10 MHz) than radios (900 MHz) and thus have much smaller frequency differences between backscatter devices. This severely limits the ability to rely on frequency imperfections to disambiguate between a large number of backscatter devices (see §2.2). In contrast, our distributed chirp spread spectrum coding mechanism provides a systematic approach to enable large scale backscatter networks.
Primer on Chirp Spread Spectrum
In CSS, data is modulated using linearly increasing frequency signals or upchirps. The receiver demodulates these symbols in a two-step process. First, it de-spreads the upchirp symbols by multiplying them with a downchirp, and it then performs an FFT on the de-spread signal. Since the slope of the downchirp is the inverse of the slope of the upchirp, the multiplication results in a constant frequency signal, as shown in Fig. 3(a). Thus, taking an FFT of this signal will lead to a peak in an associated FFT bin. Changing the initial frequency of an upchirp will result in a change in the demodulated signal's FFT bin peak index which corresponds to the initial change in frequency, as shown in Fig. 3(b). This property is used to convey information. When the sampling rate is equal to the chirp bandwidth (BW), frequencies higher than $\frac{BW}{2}$ will alias down to $-\frac{BW}{2}$, as shown in Fig. 3(c). This means cyclically shifting in time is equivalent to changing the initial frequency, and thus, to conserve bandwidth, CSS uses cyclic shifts of the chirp in the time domain instead of frequency shifts. This means that to modulate the data we just need to cyclically shift the baseline upchirp in time. Note that one can transmit multiple bits within each upchirp symbol. In particular, say the receiver performs an N-point FFT. It can distinguish between N different cyclic shifts, each of which corresponds to a peak in one of the N FFT bins. Thus, we can transmit $SF = \log_2 N$ bits within each upchirp symbol, where SF is called the spreading factor.
Based on the above, CSS can be characterized by two parameters: chirp bandwidth/sampling rate and spreading factor. Each chirp symbol duration is equal to $\frac{2^{SF}}{BW}$ and the symbol rate is $\frac{BW}{2^{SF}}$. Since CSS sends $SF$ bits per symbol, the bitrate is equal to $\frac{BW}{2^{SF}} \cdot SF$. This means increasing SF or decreasing BW decreases the bitrate. Further, the sensitivity of the system depends on the chirp symbol duration: it improves as SF increases and degrades as BW increases.
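To make the cyclic-shift-to-FFT-bin mapping concrete, here is a minimal numerical sketch (not from the paper; sampling at fs = BW and the specific chirp phase convention are our assumptions). It generates one upchirp symbol, cyclically shifts it in time, despreads it with the conjugate downchirp, and recovers the shift as the index of the FFT peak:

```python
import numpy as np

def upchirp(sf, cyclic_shift=0):
    # One baseband upchirp symbol sampled at fs = BW, i.e. 2**sf samples,
    # cyclically shifted in time by `cyclic_shift` samples.
    n = 2 ** sf
    k = np.arange(n)
    base = np.exp(1j * np.pi * (k ** 2 / n - k))  # linear chirp spanning the band
    return np.roll(base, -cyclic_shift)

def demod_shift(symbol, sf):
    # Despread with the conjugate (down)chirp, then locate the FFT peak.
    despread = symbol * np.conj(upchirp(sf))
    return int(np.argmax(np.abs(np.fft.fft(despread))))

sf = 9                                    # 2**9 = 512 possible cyclic shifts
for shift in (0, 1, 137, 511):
    assert demod_shift(upchirp(sf, shift), sf) == shift
```

With SF = 9, a conventional CSS symbol carries log2(512) = 9 bits per device; this is the baseline that the distributed design below departs from.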
Existing Collision Approaches
While existing CSS-based backscatter systems do not support collision decoding, we outline potential approaches to deal with collisions in CSS radio systems, i.e. LoRa, and explore whether they can be adopted for backscatter.
Using different spreading factors. One way to enable concurrent transmissions is to assign different spreading factors to each device. There are three problems with using multiple spreading factors in the same network: i) the receiver needs to use multiple FFTs and downchirps with different spreading factors to despread the upchirp symbols of different devices, which increases the receiver complexity with the number of concurrent transmissions; ii) in LoRa, chirps with different BW and SF can be concurrently decoded without sensitivity degradation only if their chirp slopes are different [24]. Specifically, if two chirp symbols are transmitted concurrently with different BW and different SF that result in the same chirp slope, $\frac{BW^2}{2^{SF}}$ (shown in Fig. 6 as well), the receiver cannot decode the concurrent transmissions. This leaves only 19 different BW and SF pairs that could be used concurrently; iii) further, requiring a receiver sensitivity better than -123 dBm and bit rates of at least 1 kbps limits these concurrent configurations to only 8, which does not support hundreds of concurrent devices on a 500 kHz band. Note that, ignoring the receiver complexity, this approach is orthogonal to our design since we could in principle run multiple concurrent NetScatter networks with the above 8 SF and BW pairs. Evaluating this is not in the scope of this paper.
Choir [12]. Recent work on decoding concurrent LoRa transmissions leverages the hardware imperfections in radios to disambiguate between multiple transmissions. Specifically, radios have slight variations which result in timing and frequency offsets, which translate to fractional shifts in the FFT indexes. Choir [12] uses these fractional shifts, with a resolution of one-tenth of an FFT bin, to map the bits to each transmitter. However, as demonstrated in [12], in practice this approach does not scale to more than 5 to 10 concurrent devices. To understand this limitation in theory, consider N concurrent devices. The probability that each of these transmitters has a different FFT peak index fraction, given the resolution of one-tenth of an FFT bin, is equal to $\frac{10!}{(10-N)!\,10^N}$. When N is 5 this probability is only 30%. Moreover, if any two transmitters use the same cyclic shifted upchirp symbol at the same time, it will result in a collision that cannot be decoded. In the case of LoRa modulation, if there are N transmitters and assuming each device transmits a random set of bits during each symbol interval, the probability of two transmitters using the same cyclic shift is equal to $1 - \frac{2^{SF}!}{(2^{SF}-N)!\,(2^{SF})^N}$. For SF = 9 and N = 10, this probability is around 9%. This means that there is around a 9% probability that within each CSS symbol, two transmitters will use the same upchirp cyclic shift, which the receiver cannot disambiguate. This probability increases to 32% with 20 devices, preventing concurrent decoding of a large number of transmitters.
Figure 5: Bandwidth Aggregation. Here we use an aggregate bandwidth of 2BW but each device transmits only using BW. Upchirps with different cyclic shifts are shown in different colors. Each upchirp is assigned to a device.
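The two probabilities above are easy to reproduce; the short check below (our own sketch, with the per-symbol collision modelled as a standard birthday-problem calculation over the $2^{SF}$ cyclic shifts) recovers the quoted ~30%, ~9% and ~32% figures:

```python
import math

def p_choir_distinct_fractions(n, resolution=10):
    # Probability that n radios land in n distinct FFT-bin fractions (Choir).
    return math.factorial(resolution) / (math.factorial(resolution - n) * resolution ** n)

def p_shift_collision(n, sf):
    # Probability that at least two of n transmitters pick the same cyclic
    # shift (out of 2**sf) within one symbol (birthday problem).
    bins = 2 ** sf
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (bins - k) / bins
    return 1 - p_distinct

print(p_choir_distinct_fractions(5))   # ~0.30
print(p_shift_collision(10, 9))        # ~0.085, roughly the 9% quoted above
print(p_shift_collision(20, 9))        # ~0.31, roughly the 32% quoted above
```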
Moreover, Choir relies on oscillator imperfections causing frequency variation across devices, and it cannot differentiate two concurrent transmissions if both fall into the same FFT bin fraction. Choir uses an active radio system which generates frequencies in the 900 MHz band. However, since backscatter systems are designed to consume less power and only generate baseband signals, their output frequency is less than 10 MHz. Even in the ideal scenario where the same crystal oscillator is used for both radios and backscatter devices, the frequency variation of the backscatter devices is 90 times smaller than that of radios and can be less than 1 FFT bin depending on the SF and BW. This means a backscatter network cannot use all 10 different FFT bin fractions that Choir uses. Fig. 4 shows the CDF of the FFT bin variation for our actual backscatter hardware, recorded over time. These results show that the FFT variation is always less than a third of an FFT bin. Thus, Choir cannot enable a large number of concurrent transmissions with backscatter.
In conclusion, the desired solution must satisfy three constraints: 1) ability to differentiate between FFT peaks corresponding to different backscatter devices, 2) ability to associate the FFT peaks to the corresponding devices, and 3) ensure that two devices do not use the same FFT peak at the same time. NetScatter design satisfies all these constraints.
Distributed CSS Coding
Our approach is to take advantage of the low power and high sensitivity of CSS modulation to design a communication and networking system that enables hundreds of backscatter devices to transmit at the same time.
At a high level, we use a combination of CSS modulation and ON-OFF keying to enable concurrent transmissions. Our intuition is as follows: if we look at the FFT plots of Fig. 3, all the FFT bins except one are empty; however, these empty bins could be utilized for orthogonal transmissions. While it is difficult to design low-power backscatter devices that can transmit multiple cyclic shifts at the same time, we can leverage all these empty bins by having different devices transmit different shifts and make use of the unused FFT bins. In particular, each device is assigned a particular cyclic shifted upchirp symbol. It sends data by either sending the upchirp symbol or not sending it, i.e., by using ON-OFF keying of its assigned cyclic shifted chirp. Since there are $2^{SF}$ FFT bins, ideally we can support $2^{SF}$ concurrent transmissions. This modulation satisfies the above three requirements: the peaks can be differentiated and assigned to their corresponding devices, and no two devices use the same FFT bin at the same time.
Figure 6: Timing Mismatch, in detecting the beginning of a chirp symbol and its translation to FFT bin variation.
We note the following about our distributed design.
• Receiver complexity. The received signal is composed of multiple transmissions. They can be demodulated by despreading with a baseline downchirp multiplication and performing an FFT operation. Then, we can determine the presence or absence of a peak in each FFT bin and find whether the corresponding backscatter device is sending '0' or '1' (see the sketch after this list). The key point is that despreading and performing the FFT, which are the major contributors to the demodulation process and provide a coding gain for each of the backscatter devices enabling them to operate below the noise floor, are done once and do not depend on the number of concurrent transmissions. This means that the receiver complexity is nearly constant with the number of devices.
• Throughput gain. In our approach, ideally there can be as many as $2^{SF}$ transmissions in each symbol period. Since each backscatter device uses ON-OFF keying over a symbol, its individual data rate is $\frac{BW}{2^{SF}}$. Thus, the aggregate network throughput is equal to $BW$. In comparison, LoRa has a throughput of $\frac{BW}{2^{SF}} \cdot SF$. Thus, we can achieve a throughput gain of $\frac{2^{SF}}{SF}$, which shows that the gain increases exponentially with the SF value used in the system. This is expected since the number of concurrent devices we can support is an exponential function of SF, i.e., $2^{SF}$.
• NetScatter and CDMA. Our distributed CSS coding can be thought of as a low-power code-division multiplexing mechanism in which each of the $2^{SF}$ cyclic shifts is one code in an orthogonal set of codes of a CDMA system. These orthogonal codes are then assigned to $2^{SF}$ different backscatter devices, which enables $2^{SF}$ concurrent transmissions.
• Gain in the context of Shannon capacity. A key gain we are achieving in our design stems from using the power across all the concurrent backscatter devices. Specifically, we note that the Shannon capacity of a multi-user network that operates under the noise floor increases linearly with the number of devices. Said differently, the multi-user capacity of an access point network is given as [26] $C = BW \log_2\left(1 + \frac{N P_S}{P_N}\right)$. Here $BW$ is the channel bandwidth, $P_N$ and $P_S$ are the noise and signal power and $N$ is the number of concurrent devices. At SNRs below the noise floor, the above equation can be approximated as $\frac{BW}{\ln 2} \cdot \frac{N P_S}{P_N}$, since $\ln(1 + x) \approx x$ when $x$ is small. This means that for systems that operate below the noise floor, the network capacity scales linearly with the number of users. This linear increase stems from the fact that the $N$ backscatter devices put $N$ times more power back to the AP than a single device.
• Bandwidth Aggregation. The bitrate achieved by each backscatter device in our distributed design is given by $\frac{BW}{2^{SF}}$ and the number of concurrent devices is $2^{SF}$. Thus, while we can increase the number of devices by increasing SF, doing so decreases the bitrate of each device. Therefore, to increase both the bitrate and the number of devices we should increase the bandwidth, BW. Say we want to support twice the number of devices while maintaining the same bitrate by using twice the bandwidth. This can be achieved in two ways. First, we can use two filters and independently operate two sets of devices across the two bands. This approach requires two different FFTs to be performed independently across the bands. The second approach is to use one aggregate band with twice the bandwidth, $2BW$, but use the same SF and chirp BW as before and alias down to $-BW$ whenever the chirp frequency hits the maximum, as shown in Fig. 5. To demodulate this signal, we just need to multiply the signal, which is composed of the aggregate band, by the downchirp and perform a $2 \times 2^{SF}$-point FFT once. The complexity of this method is lower than the former since there is no need to use filters to separate the bands.
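The sketch below (our own illustration, not the paper's released code; the chirp phase convention, the noise level and the detection threshold are assumptions) shows the receiver operation described in the first item above: all concurrent ON-OFF transmissions are decoded with one despreading multiplication and one FFT, reading out one bit per cyclic shift:

```python
import numpy as np

def upchirp(sf, cyclic_shift=0):
    # Baseband upchirp (2**sf samples at fs = BW), shifted by `cyclic_shift` samples.
    n = 2 ** sf
    k = np.arange(n)
    return np.roll(np.exp(1j * np.pi * (k ** 2 / n - k)), -cyclic_shift)

def decode_concurrent(rx, sf, threshold):
    # One despread + one FFT decodes every device: the device assigned to
    # cyclic shift c sent '1' iff FFT bin c holds a peak above the threshold.
    spectrum = np.abs(np.fft.fft(rx * np.conj(upchirp(sf))))
    return (spectrum > threshold).astype(int)

sf, n = 9, 2 ** 9
rng = np.random.default_rng(0)
active = [4, 10, 200]                                  # devices sending '1' this symbol
rx = sum(upchirp(sf, c) for c in active)
rx = rx + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
bits = decode_concurrent(rx, sf, threshold=n / 2)      # ideal peak height is n
assert [c for c in range(n) if bits[c]] == active
```

In practice the per-device threshold is derived from the preamble rather than fixed, and zero-padding is used to localise fractional peaks; this sketch only illustrates the single-FFT structure of the decoder.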
Timing Mismatch
The above design requires all the backscatter devices to be time synchronized. To understand why, consider two consecutive upchirps being sent by a device, as shown in Fig. 6. If we demodulate the signal in these two timing windows, shown in blue and red, we will get different FFT peak locations. Specifically, with a $\Delta t$ time difference between these windows, the corresponding FFT bin peak location changes by $\Delta_{\text{FFT bin}} = \Delta t \cdot BW$. When this change is greater than a single FFT bin, backscatter devices that are assigned to consecutive cyclic shifts interfere with each other and hence cannot be decoded. Thus, all the devices should be time synchronized. In our design the access point sends a query message telling devices to transmit concurrently. The devices use this query to synchronize and respond concurrently. First, we explain the sources of time delay in our system and then we explain our solution. There are multiple factors that contribute to time delays in practice, and these can be different for different backscatter devices.
• Hardware delay. Unlike Wi-Fi devices which use much higher clock frequencies for processors, backscatter devices use low-power microcontrollers (MCUs) that can introduce a variable delay into the system. For backscatter devices, these hardware delay variations come from the time between the envelope detector receiving the query message from the access point, communicating it to the MCU, and the device backscattering the chirp. As we show in §4.2, these hardware delay variations can be as high as 3.5 µs, which can translate to more than one FFT bin at 500 kHz bandwidth.
• Propagation delay and multipath. Since backscatter devices can be at different distances from the access point, their times of flight (ToF) can differ. However, since our target application is whole-home or whole-office sensing, the propagation distance is less than 100 m, which translates to $ToF < \frac{2 \times 100\,\text{m}}{3 \times 10^8\,\text{m/s}} \approx 666\,\text{ns}$ and corresponds to only a 0.33 FFT bin change, assuming a bandwidth of 500 kHz. The multipath delay spread for indoor environments is between 50 and 300 ns [23,11]. For 500 kHz, this delay spread translates to less than a 0.15 FFT bin change, which is negligible.
Our solution: Bandwidth-based cyclic-shift assignment. Hardware delay variations over time are hard to correct for. As described above, by nature of operating on MCUs and other low-power computational platforms, these devices have a hardware delay variation over time that changes between packets. Our solution to this problem is to put a few empty FFT bins adjacent to each FFT bin assigned to a device. That is, if FFT bin i is assigned to a device, the adjacent SKIP − 1 FFT bins are empty and not assigned to any device. This can be done by using only every SKIP-th cyclic shift of the chirp. This ensures that the hardware delay does not result in adjacent devices interfering with each other.
Achieving such an assignment requires us to answer the following key question: how do we pick the value of SKIP? As described earlier, given the hardware delay variation $\Delta t$, the shift in the number of FFT bins is $\Delta t \cdot BW$. This means that there is a trade-off in our system between the total network throughput, the bitrate for each device and the sensitivity. In particular, increasing BW increases the number of FFT bins that have to be left empty and decreases the total network throughput. On the other hand, decreasing BW reduces the number of empty FFT bins needed but decreases the bitrate per device for the same SF. To compensate for the decreased device bitrate, we can decrease the SF. Note that we can choose the total bandwidth, chirp BW and SF of the system by considering the hardware delay variations, the required bitrate per device, the sensitivity for each device and the total number of devices. For our implementation, we pick the same total bandwidth and chirp BW of 500 kHz and SF = 9, which supports a bitrate of around 1 kbps (976 bps) at the devices while separating devices by SKIP = 2 cyclic shifts, i.e., one empty bin between occupied shifts.
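The numbers used in our implementation can be sanity-checked with a few lines (our own back-of-the-envelope sketch; the 3.5 µs worst-case delay variation is the figure quoted from §4.2):

```python
bw = 500e3      # chirp bandwidth (Hz)
sf = 9          # spreading factor -> 2**9 = 512 cyclic shifts
skip = 2        # occupy every 2nd cyclic shift (one empty bin in between)
dt = 3.5e-6     # worst-case hardware delay variation (s)

bins = 2 ** sf
print(dt * bw)                           # ~1.75 FFT bins of worst-case timing shift
bitrate_per_device = bw / bins           # ON-OFF keying: one bit per chirp symbol
max_devices = bins // skip
print(bitrate_per_device)                # ~976 bps per device
print(max_devices)                       # 256 concurrent devices
print(max_devices * bitrate_per_device)  # ~250 kbps aggregate network throughput
```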
Frequency Mismatch
The devices experience frequency offsets because of hardware variations in the crystals used in their oscillators. As explained in §2.1, a change in frequency translates to a change in the FFT bin of the demodulated device packet. This, again, can cause one device to be misinterpreted as another device. Considering a bandwidth of BW and a spreading factor of SF, the frequency difference between FFT bins is equal to $\frac{BW}{2^{SF}}$. This means that a $\Delta f$ frequency offset results in a change in the FFT bin of $\Delta_{\text{FFT bin}} = \frac{2^{SF} \Delta f}{BW}$. Therefore, either increasing the spreading factor SF or decreasing the BW increases the shift in the FFT bin. Crystals' frequency tolerance can be as high as 100 ppm [2]. Since backscatter devices run at a few MHz, this frequency variation translates to less than one FFT bin for the bandwidths and spreading factors in this paper, which makes it negligible for our backscatter network. Table 1 shows the timing and frequency mismatch that can be tolerated for different modulation configurations. As can be seen, there are multiple options for achieving the same bitrate and sensitivity. These options result in different tolerable timing and frequency mismatches, requiring a different SKIP value; this is validated using experiments in §4.2.
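A similar back-of-the-envelope check (our own sketch; the 3 MHz baseband subcarrier is taken from the COTS implementation described later, where the device shifts the AP's signal by 3 MHz) confirms that crystal tolerance is negligible at these frequencies:

```python
bw, sf = 500e3, 9
crystal_ppm = 100                     # worst-case crystal tolerance [2]
subcarrier = 3e6                      # backscatter baseband frequency (a few MHz)
df = crystal_ppm * 1e-6 * subcarrier  # 300 Hz worst-case frequency offset
print(2 ** sf * df / bw)              # ~0.31 FFT bins -> well below one bin
```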
Near-Far Problem
Since our networks are designed to work in below-noise conditions, we need to address the near-far problem in our decoding process at the receiver. Specifically, to account for the residual timing and frequency offsets, a CSS receiver has to achieve a sub-FFT-bin resolution. To do so without increasing the sampling rate, the receiver uses zero-padding, which adds zeros at the end of the time domain samples of the single chirp [12]. The zero-padding operation in the time domain is effectively a multiplication with a pulse, which translates to convolution with a sinc function in the FFT domain. This makes it easier to locate the FFT peak. However, convolving with a sinc function introduces side lobes, as shown in Fig. 8. Assume that there are two devices with cyclic shifts $C_1 = 0$ and $C_2$. If the power of $C_2$ is lower than the power of $C_1$'s side lobes, it cannot be decoded.
Our solution. To address this issue, we propose two techniques that work together to increase our dynamic range.
Coarse-grained power-aware cyclic shift assignment. Our intuition here is as follows: Fig. 8 suggests that we should assign adjacent FFT bins to devices that have a small SNR difference. In particular, when SKIP is 2, the neighboring backscatter device will be drowned by the power of the higher SNR device if its power is more than 13.5 dB below that of the high SNR device. Further, the side-lobe power of a high SNR device decreases as we move to farther FFT bins. Thus, we need to ensure that a lower SNR device corresponds to FFT bins that are farther from the FFT bins corresponding to higher SNR devices. This ensures that the side lobes of the high-SNR device do not affect the decoding of the low-SNR devices. Specifically, we assign different cyclic shifts to different devices at the association phase to ensure that the FFT bins corresponding to the lower-SNR devices are close to each other and are far from the higher-SNR devices. To do this, the AP computes the signal strength of the incoming device in the association phase (see §3.3.2) and assigns its cyclic shift based on its signal strength and also the strengths of the devices already in the network.
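One simple way to realise this policy, sketched below under our own assumptions (the paper does not spell out the exact assignment rule), is to sort devices by their association-time SNR and hand out the spaced cyclic shifts in that order, so that similar-SNR devices occupy adjacent bins while weak and strong devices end up far apart:

```python
def assign_cyclic_shifts(device_snr_db, sf, skip=2):
    # device_snr_db: {device_id: SNR estimated by the AP at association}
    # Returns {device_id: cyclic shift}, spaced `skip` bins apart, ordered by SNR.
    shifts = list(range(0, 2 ** sf, skip))                   # usable (spaced) shifts
    ordered = sorted(device_snr_db, key=device_snr_db.get)   # weakest first
    if len(ordered) > len(shifts):
        raise ValueError("more devices than available cyclic shifts")
    return {dev: shifts[i] for i, dev in enumerate(ordered)}

# hypothetical association-time SNR estimates (dB) for four devices
print(assign_cyclic_shifts({"a": -8, "b": 15, "c": -5, "d": 20}, sf=9))
# In a full 256-device network this ordering makes the FFT-bin distance grow
# with the SNR-rank difference, so a strong device's side lobes land far from
# the bins of the weakest devices.
```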
We run simulations to understand the benefits of this allocation. Specifically, we assign two devices to FFT bins 2 and 258, with SF = 9 and BW = 500 kHz. To be realistic, we added a Gaussian frequency mismatch with a variance of 300 Hz to each device to account for timing and frequency mismatches between them. We change the power of the second device and measure the bit error rate (BER) for the first device. Fig. 12 shows the BER over $10^4$ symbols, for different power differences between the two devices. As can be seen, the BER remains unaffected even when the second device is around 40 dB stronger than the first device. This shows that our power-aware allocation can in theory tolerate a power difference of 40 dB between devices. In practice, however, this is a little lower at 35 dB (see §4.3).
Figure 9: Backscatter Devices SNR Variance. CDF of SNR variance of backscatter devices in an office environment, when people were walking around, over 30 mins.
Fine-grained self-aware power-adjustment. While the above assignment is determined at association, mobility in the environment and fading will change the SNR of each of the devices over time (see Fig. 9). To address this, each device adjusts its power over time using the signal strength of the query message from the AP, using three different levels. We define the maximum power of the device as 0 dB power gain. First, during association, we consider two cases for the associating device. If it sees a low received signal strength for the AP's query packet, it sets its power gain to the maximum. Otherwise, it sets its gain to the middle level. This gives the higher signal strength backscatter devices leeway to both increase and decrease their power, after association. The AP uses the resulting backscatter signal strengths during association to assign a corresponding cyclic shift. The backscatter devices use the signal strength at association as a baseline and either increase or decrease their power gains for the rest of the concurrent transmissions, i.e., if the signal strength for the AP's query message increases (decreases), the backscatter devices decrease (increase) their power gain. If the device cannot meet its expected SNR requirements given its limited power levels and assigned cyclic shift, it does not join the concurrent transmissions. If this happens more than twice, the backscatter device re-initiates association after which the AP reassigns the cyclic shifts to account for the new significantly different power value (see §3.3.2).
The key question, however, is: how can a low-power backscatter device change its transmission power gain? This is interesting since power adaptation has not been used before in networks of backscatter devices. In backscatter, the transmit power gain is $Gain_{power} = \frac{|\Gamma_0 - \Gamma_1|^2}{4}$. Here $\Gamma_0$ and $\Gamma_1$ are the reflection coefficients for switching between two impedance values, $Z_0$ and $Z_1$. Backscatter hardware is designed to maximize the difference between reflection coefficients to maximize the transmission power. This corresponds to $Gain_{power} = 0$ dB. One way to achieve this is to switch between extreme impedance values, $Z_0 = 0\,\Omega$ and $Z_1 = \infty\,\Omega$. To achieve power adaptation, in contrast, we pick impedance values that correspond to multiple power settings. In particular, as shown in Fig. 7a, instead of switching from $Z_0 = 0\,\Omega$, we switch from intermediary impedances and hence achieve lower power gains. Our hardware implementation achieves three power gains of 0 dB, -4 dB and -10 dB. Note that [25] uses a similar circuit structure as Fig. 7b to cancel higher order harmonics. We instead design this circuit structure to control the power.
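As a quick illustration of the relation above (our own sketch; the 50 Ω antenna impedance and the intermediary load values are illustrative choices, not the component values used in our hardware), one can compute the backscatter power gain obtained by switching between two impedances:

```python
import numpy as np

def gamma(z_load, z_ant=50.0):
    # Reflection coefficient seen at the antenna for load impedance z_load.
    return (z_load - z_ant) / (z_load + z_ant)

def backscatter_gain_db(z0, z1, z_ant=50.0):
    # Gain_power = |Gamma_0 - Gamma_1|^2 / 4 when switching between z0 and z1.
    return 10 * np.log10(abs(gamma(z0, z_ant) - gamma(z1, z_ant)) ** 2 / 4)

print(backscatter_gain_db(0.0, 1e9))    # short <-> (near-)open:  ~0 dB (maximum)
print(backscatter_gain_db(29.0, 1e9))   # intermediary load:      ~-4 dB
print(backscatter_gain_db(108.0, 1e9))  # intermediary load:      ~-10 dB
```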
Design tradeoff. Readers might wonder if reducing the power of high SNR devices would decrease the network throughput, since high SNR devices in traditional LoRa backscatter designs can achieve a higher bitrate. In contrast, by reducing their power we are enabling a large number of concurrent transmissions with a fixed bitrate. Thus, we are encouraging concurrency by reducing the bitrate of high SNR devices. §4.4 compares the results for NetScatter with one where each backscatter device uses rate adaptation to pick its ideal bitrate, while transmitting alone using LoRa backscatter [25]. The results show that the network throughput and latency gains due to large scale concurrency outweigh the reduction in the power for high SNR devices.
NetScatter Protocol & Receiver Details
Putting it together, the AP transmits an ASK-modulated query message which is used to synchronize all the participating concurrent devices. This message conveys information about the cyclic shift assignments, which are based on the devices' signal strengths at the AP. The devices measure the query message's signal strength using the envelope detector and use it to fine-tune their transmit power gain. In the rest of this section, we describe various protocol details required to make our design work in practice. Note that our focus in the protocol design is on scheduling a set of concurrent transmissions. Typically, networks could have more devices than the concurrent transmitters supported by our design. Since the AP knows the duty cycle of each device from the association phase (see §3.3.2), it can i) assign the cyclic shifts and ii) schedule the devices involved in concurrent transmissions.
Link-layer Backscatter Packet Structure
Similar to LoRa, the device packet starts with upchirp and downchirp preambles. They are designed to serve two purposes: i) finding the start of the packet and ii) detecting the transmissions. We emphasize here that the device transmits the same assigned cyclic shift for both upchirps and downchirps in the preamble as well as the payload. The preamble consists of six upchirps followed by two downchirps. This is then followed by the payload and the checksum. We note that in our design, all the devices send their preambles concurrently. This reduces the overhead of transmitting preambles for each device, which in turn increases the end-to-end throughput gain achieved by NetScatter. The AP uses the above structure to achieve two goals.
i) Finding the exact packet start. We use the downchirp in the preamble to find the start of the packet transmission. Specifically, we use the middle point between an upchirp and downchirp and switch by six upchirp symbols to the left to find the packet beginning. We suspect that the LoRa preamble has a downchirp for this exact purpose. We note that in our case, since the upchirp and downchirp in the preamble from each of the devices uses the same cyclic shifts, they are symmetric around the middle point and hence the same algorithm for estimating the packet beginning is applied.
ii) Detecting and decoding each concurrent transmitter. Now that we have found the packet start, we need to find out which transmitters are in the network. To do so, for each preamble symbol, we demodulate it and look at the peaks in the FFT domain. If there is an FFT peak in the demodulator output which repeats in all the preamble symbols, we conclude that the device corresponding to that cyclic shift is sending data. After finding the current devices in the network, we compute the average power over the six preamble symbols for each device. This average power is used as a threshold to demodulate the payload of each device. In particular, if the power of the device's FFT peak for a payload symbol is more than half this average, we interpret that as a 1 and as a 0 otherwise.
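A minimal sketch of this thresholding step (our own illustration; the power values are made up) is shown below. Each device gets its own threshold of half its average preamble peak power, and each payload symbol is sliced against that threshold:

```python
import numpy as np

def decode_payload(preamble_powers, payload_powers):
    # preamble_powers: (6, n_devices) FFT-peak powers over the preamble upchirps
    # payload_powers:  (n_symbols, n_devices) FFT-peak powers over the payload
    threshold = preamble_powers.mean(axis=0) / 2      # one threshold per device
    return (payload_powers > threshold).astype(int)   # '1' if the peak is present

preamble = np.array([[100.0, 80.0]] * 6)              # two devices, six preamble symbols
payload = np.array([[95.0, 3.0],                      # device 0 sends 1, device 1 sends 0
                    [4.0, 70.0],                      # device 0 sends 0, device 1 sends 1
                    [105.0, 85.0]])                   # both send 1
print(decode_payload(preamble, payload))              # [[1 0] [0 1] [1 1]]
```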
Network Association
Say the network already has N devices associated to the AP and the (N+1)-th device wants to join the network. A naïve approach is to periodically dedicate time periods for association. This however can lead to high association delays depending on the frequency of the association periods. Our approach instead is to reserve $N_{assoc}$ cyclic shifts and the corresponding FFT bins for association and use the rest for communication. In other words, all the devices transmit at the same time, but the ones that want to enter the network transmit with the $N_{assoc}$ association cyclic shifts.
To address the near-far problem, we reserve two cyclic shifts, one in the high-SNR and the other in the low-SNR cyclic shift region. The incoming device chooses in which association region to transmit based on the signal strength of the AP's query message, calculated using the envelope detector. However, to account for the hardware delay variations, as before, we skip two cyclic shifts to ensure that the association packets from the devices can be decoded and won't interfere with the communication cyclic shifts. Finally, to support scenarios where more than one device wants to associate at the same time, one can use the Aloha protocol with binary exponential back-off in the association process. Our deployment does not implement this option; it turns ON the backscatter devices one at a time and runs the network only after all the devices are associated.
After the incoming device sends its packet to the AP in the association process using the association cyclic shifts, the AP computes its signal strength and decides which cyclic shift and timing schedule it should be assigned to. The AP piggybacks these assignments in its query messages. Fig. 11 shows the ASK-modulated query message that the AP sends. The message has a group ID which identifies the set of 256 devices that should concurrently transmit. In our implementation, since there are only 256 devices, we set this group ID to 0. In a larger network, the AP can assign different sets of devices to different groups depending on their signal strengths, i.e., devices that have a similar signal strength are grouped into the same group to enable concurrent transmissions while further minimizing the near-far problem. This is then followed by an optional association response payload that assigns an 8-bit network ID and an 8-bit cyclic shift. Note that prior LoRa backscatter designs are request-response systems that query each backscatter device sequentially and need most of the fields in Fig. 11 other than the group ID and cyclic shift assignment. Since these additional 12 bits are transmitted using the 160 kbps ASK downlink, the overhead is negligible compared to the 1 kbps backscatter uplink. Finally, we note that if the AP is unable to assign a new device given the existing assignments, the AP updates the cyclic shift assignments for all the devices in the network. It does so by transmitting the identifier for one of the 256! orderings, which requires $\log_2(256!)$ (≤1700) bits. This occupies less than 11 ms using our 160 kbps downlink.
Figure 12: Near-Far BER Results. We show the effect of the second device's power on the first device's BER vs. SNR for different ratios of the second device's to first device's power with power-aware cyclic shift assignments.
Figure 13: Our Backscatter Devices. They are arranged closely for this picture. They are spread out across more than ten rooms in our deployment.
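The overhead figures quoted above check out with a couple of lines (our own sketch): $\log_2(256!)$ is about 1684 bits, which at the 160 kbps ASK downlink takes roughly 10.5 ms.

```python
import math

bits_full_reassignment = math.lgamma(257) / math.log(2)       # log2(256!) via lgamma
downlink_bps = 160e3                                           # ASK downlink rate
print(round(bits_full_reassignment))                           # ~1684 bits (< 1700)
print(round(bits_full_reassignment / downlink_bps * 1e3, 1))   # ~10.5 ms (< 11 ms)
```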
Network protocol
Fig. 10 summarizes our network protocol. First, the AP broadcasts its query. Device 1, which is already associated to the network, receives the query and sends its data using its assigned cyclic shift after performing any necessary power control. Concurrently, device 2 sends an Association Request using one of the $N_{assoc}$ association cyclic shifts. The AP receives these two messages and broadcasts another query which includes association information for device 2. Upon receiving this query, device 1 continues to send its data, while device 2 extracts the cyclic shift assignment from the query and then transmits an Association ACK to the AP in the assigned cyclic shift. If the AP receives the Association ACK, it adds device 2 to the associated devices. Otherwise, it repeats the association information in the following queries. After association, each device uses its assigned cyclic shift for sending data.
Hardware implementation
COTS Implementation. Our COTS hardware, shown in Fig. 13, consists of an RF section and a baseband section, both implemented on a four-layer FR4 PCB. On the RF receive side, we implemented an envelope detector similar to [18] but at 900 MHz; it has a sensitivity of -49 dBm to receive downlink query messages from the AP. (Note that since the ASK-modulated AP query received by a backscatter device experiences one-way path loss, its required sensitivity is only -44 dBm, in contrast to the -120 dBm sensitivity for the backscatter signals.) The RF transmit side consists of five ADG904 [3] switches cascaded in three levels to build an impedance switch network for backscatter, power gain control and switching between transmit and receive modes. Our backscatter device uses a 2 dBi whip antenna to transmit packets and receive query messages in the 900 MHz ISM band. The baseband side is implemented using an IGLOO nano AGLN250 FPGA [1] and an MSP430FR5969 [5]. We generate CSS packets on the FPGA and output the real and imaginary components of the square wave signal to the backscatter switch network. The envelope detector is controlled by the MCU, and the downlink receiver algorithm is implemented on the MCU. To be resilient to self-interference caused by the AP's single tone, the baseband at the backscatter device shifts the AP's signal by 3 MHz. Note that the COTS implementation is for prototyping and proof-of-concept; an ASIC is typically required to achieve the orders of magnitude power benefits of backscatter communication.
IC Simulation. We designed and simulated an IC for our backscatter device using the TSMC 65nm LP process. It consists of four blocks with a total power consumption of 45.2 µW: i) An envelope detector that demodulates the AP's ASK query messages and consumes less than 1 µW. ii) A baseband processor that extracts AP data from the envelope detector, interfaces with sensors, and sends the chirp specifications and data sequence to the chirp generator, consuming 5.7 µW of power. iii) A chirp generator that takes the SF, BW, cyclic shift assignment and data sequence from the baseband processor to generate the sequence of ON-OFF keyed chirps. We used Verilog code to describe the baseband signal's phase behavior and generate the assigned cyclic shift with the required frequency offset. We used Synthesis, Auto-Place and Route (SAPR) to simulate the Verilog code on chip. The power consumption of this block is 36 µW. iv) A switch network composed of three resistors connected to NMOS switches to generate the backscatter signal with three power gain levels. Note that since these resistors and NMOS switches consume minimal area, more power gain levels can be added at almost no cost. The power consumption of the switch network is 2.5 µW with a 3 MHz frequency offset.
Reader Implementation. We implement the reader on the X-300 USRP software-defined radio platform by Ettus Research [8]. We use a mono-static radar configuration with two co-located antennas separated by 3 feet. The transmit antenna is connected to a UBX-40 daughterboard, which transmits the query message and the single-tone signal. The USRP output power is set at 0 dBm and we use an RF5110 RF power amplifier [6] to amplify the transmit signal to 30 dBm. The receiver antenna is connected to another UBX-40 daughterboard, which down-converts the NetScatter packets to baseband signal and samples it at 4 Msps.
Frequency and Timing Mismatch
Measurements 1: Hardware frequency variations. We measure the frequency offsets of our hardware by recording a thousand packets for each device. Using the method described in §3.3.3, we compute the frequency offset for the 256 backscatter devices in our network deployment, which we show in Fig. 14a. The variations of the backscatter devices are less than 150 Hz, which is nearly 0.15 of one FFT bin when BW = 500 kHz and SF = 9. Therefore, our system is not affected by the frequency variation of different devices.
Measurements 2: Timing offsets. Next, we characterize how the timing offsets affect $\Delta_{\text{FFT bin}}$. This helps us understand how many empty cyclic shifts, SKIP − 1, we need to put for each occupied cyclic shift. To do this, we set up a wireless experiment sending query messages from the AP and receiving transmissions from the backscatter devices deployed in our system. By decoding these transmissions and comparing the received cyclic shifts with what we have programmed the devices to send, we can find the $\Delta_{\text{FFT bin}}$ for each device; this measurement is a combination of both timing offsets and the small frequency variations of the hardware. Fig. 14b shows the residual $\Delta_{\text{FFT bin}}$ for the backscatter devices. The plots show that the $\Delta_{\text{FFT bin}}$ is considerable. This is because in backscatter devices, the energy detector receives the amplitude-modulated query message and sends an interrupt to initiate the backscatter transmission. Both these steps add to the timing variations. Specifically, the hardware delay variation comes from variation in receiving the query message and initiating the transmission on the FPGA, which can vary from packet to packet. In our deployment in §4.4 with backscatter devices, we use BW = 500 kHz, SF = 9 and leave one FFT bin between occupied cyclic shifts (SKIP = 2). This translates to supporting 256 devices with an aggregate throughput of around 250 kbps and a bitrate per tag of around 1 kbps.
Measurements 3: Doppler effects. Other than hardware frequency offsets, the Doppler effect can cause changes in frequency as well. However, its effect will be much less than 1 FFT bin, $\frac{BW}{2^{SF}}$, in most cases. As an example, assume a backscatter device is moving with a speed of 10 m/s. Considering a carrier frequency of 900 MHz, the Doppler-induced frequency change would be 30 Hz, which is much less than 1 kHz, the FFT bin frequency, assuming BW = 500 kHz and SF = 9. To confirm this, we run various mobility experiments where a subject holds a backscatter device and moves with different average speeds, which we measure using an accelerometer. We receive transmissions from the device and compute the $\Delta_{\text{FFT bin}}$ for different motion scenarios. Fig. 15a shows $\Delta_{\text{FFT bin}}$ for various speeds, which confirms that these speeds do not have an effect on $\Delta_{\text{FFT bin}}$.
Near-Far Problem
Measurements 1: Power-aware cyclic shift assignment. As mentioned in §3.2.3, we assign cyclic shifts to devices depending on their signal strength values. To evaluate the effectiveness of this technique, we run experiments with two devices where one of them transmits at a high power (equivalent to being near the AP) with a cyclic shift corresponding to the beginning of the FFT spectrum. Then, we sweep the cyclic shift of the second device from cyclic shifts with a small FFT bin difference to ones with a large FFT bin difference. At each of the cyclic shifts, we decrease the power of the second device using an attenuator for as long as it maintains a packet error rate of less than one percent. Fig. 15b shows the maximum power difference that can be tolerated between these two devices versus the assigned FFT bin difference. As can be seen, as the FFT bin difference grows, we can tolerate more power difference between the two devices. Note that, because of aliasing, Fig. 15b is symmetric around the center. The maximum occurs in the middle and is equal to 35 dB. This is the dynamic range that our system can support in practice. We also note that when the second device is assigned to an FFT bin 2 cyclic shifts away from the first device, it can be up to 5 dB below the latter and still be correctly decoded. This means there is an in-built 5 dB dynamic range resilience to channel variations between devices that have close cyclic shifts.
Measurements 2: Self-aware power-adjustment. The second method to address the near-far problem and also increase the dynamic range is power adjustment at the devices using the signal strength of the AP's query message. To evaluate this, we first measure how well we can adjust power on the devices and then evaluate its efficacy in practical deployments. We use three different backscatter impedance values to be able to transmit packets at three different power gains. Fig. 16 shows the spectrum of the backscattered signal at different power levels. These plots show that the hardware creates a spectrum that is clean and does not introduce noticeable nonlinearities into the backscattered signal. Furthermore, we can achieve three different power levels: 0, -4, and -10 dB.
Network Deployment
We evaluate three key network parameters: • Network PHY bit-rate. This is the bitrate achieved across all the devices during the payload part of the packet.
• Link-layer data rate. This is the data rate achieved in the network which is defined as the data rate for sending useful payload bits, after considering overheads including the AP's query message and the preamble of the packet transmission.
• Network latency. This is the latency to get the payload bits from all the backscatter devices in the network.
We compare three schemes: i) LoRa backscatter [25] where all devices use a fixed bitrate of 8.7 kbps, ii) LoRa backscatter with rate adaptation where each device uses the best bitrate given its channel conditions, and iii) NetScatter. Note that the authors of [25] did not publicly release their code, so we replicate the implementation, adding the missing details and using BW = 500 kHz and SF = 9. We also note that [25] is not designed to work with more than one to two users. Here, we use a query-response design with scheduling when there are more users, where the AP queries each device. While LoRa backscatter does not support rate adaptation, we wanted to compare with an ideal approach that maximizes the bitrate of each device by picking the optimal SF and BW.
Figure 18: Link-layer Data Rate. We evaluate the link-layer data rate for NetScatter and compare it with other schemes.
To do so, we measure the signal strength from each of the backscatter devices and compute the bit-rate using the SNR table in [4]; this is the ideal performance a single-user LoRa backscatter design achieves with rate adaptation.

Network PHY bit-rate. We set each device's bit-rate to 976 bps, BW_agg = 500 kHz, SF = 9, and a payload size of five bytes. We deploy 256 backscatter devices across the floor of an office building with more than ten rooms; Fig. 1 shows this deployment. Fig. 17 shows the network PHY rate for our backscatter network deployment. The plot highlights the following key observations.
• The network data rate scales with the number of concurrent backscatter devices. When the number of concurrent devices is less than 128, the variance in the throughput is small. This is because in these scenarios the backscatter devices are effectively separated from each other by more than two cyclic shifts (SKIP ≥ 3; see the sketch after this list), so the devices do not interfere with each other and can operate concurrently. As we increase the number of concurrent devices to 256, we push the system to its theoretical limit (SKIP = 2) and thus see larger variance in the network data rate.
• With 256 backscatter devices, NetScatter increases the PHY bit-rate by 6.8x and 26.2x over LoRa backscatter with and without rate adaptation, respectively. The gains are lower against ideal rate adaptation since, with rate adaptation, high-SNR devices can pick the maximum LoRa bit-rate of 32 kbps.
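For concreteness, one plausible way to read the SKIP values quoted above (our inference; the text does not give this formula explicitly) is that the 2^SF available cyclic shifts are divided evenly among the concurrent devices:

```python
# Assumed relation (not stated explicitly above): with SF = 9 there are
# 2**9 = 512 cyclic shifts, shared evenly among the concurrent devices.
def skip_between_devices(num_devices, sf=9):
    return (2 ** sf) // num_devices

print(skip_between_devices(128))  # 4 -> SKIP >= 3, devices do not interfere
print(skip_between_devices(256))  # 2 -> the theoretical limit quoted above
```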
Link-layer data rate. While the above plots measure the data rate improvements for the message payload, they do not account for the end-to-end overheads, including preambles and the AP's query message that coordinates the concurrent transmissions. To see the effect of the AP query packet overhead for NetScatter, we consider two configurations.
• NetScatter Config 1. In this scenario the cyclic shifts are all assigned during the association phase and the AP query packet coordinating the concurrent transmissions is 32 bits long without the optional fields in Fig. 11.
• NetScatter Config 2. In this scenario, the AP query packet contains cyclic shift assignments for all the devices in the network and has a length of 1760 bits.

Figure 19: Network Latency. We evaluate the latency of NetScatter and compare it with other schemes.
The above two configurations represent the two extremes of our deployment. We set the backscatter payload and CRC to 40 bits and use a total of 8 upchirps and downchirps for the preamble. For LoRa backscatter, which queries each individual device sequentially, the AP query is 28 bits long. Fig. 18 shows that the link-layer gains of NetScatter over LoRa backscatter without and with rate adaptation are 61.9x (50.9x) and 14.1x (11.6x), respectively, for config#1 (#2). This is because, in NetScatter, the overhead of the devices' preambles is incurred once, and at the same time, for all devices. The other schemes instead use TDMA, so preambles are not sent concurrently; each backscatter device has to send its own preamble because, in traditional designs, the AP queries each device sequentially. Further, in LoRa backscatter the AP query message is transmitted once for each device in the network, versus once for all devices in our design. Finally, since the downlink uses ASK at 160 kbps, the overhead of transmitting 1760 bits in config#2, while reducing the link-layer data rate relative to config#1, is still small because the backscatter links achieve a much lower bit-rate.
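To make the overhead accounting concrete, the rough Python model below recomputes the link-layer rates from the quantities above, taking the chirp duration as 2^SF/BW and ignoring guard intervals and other implementation details. It is a simplification of ours, so the resulting ratios only approximate the reported gains.

```python
# Rough timing model; guard times and implementation details are ignored.
SF, BW = 9, 500e3
CHIRP = (2 ** SF) / BW            # chirp duration in seconds (~1.02 ms)
PREAMBLE = 8 * CHIRP              # 8 up/down chirps
PAYLOAD_BITS = 40                 # payload + CRC per device
DEVICE_RATE = 976.0               # per-device bit-rate (bps)
QUERY_RATE = 160e3                # downlink ASK bit-rate (bps)

def netscatter_link_rate(n_devices, query_bits):
    """Useful payload bits per second when all devices transmit concurrently."""
    t = query_bits / QUERY_RATE + PREAMBLE + PAYLOAD_BITS / DEVICE_RATE
    return n_devices * PAYLOAD_BITS / t

def tdma_link_rate(n_devices, query_bits=28, rate=8700.0):
    """Baseline in which the AP queries and receives from one device at a time."""
    t_total = n_devices * (query_bits / QUERY_RATE + PREAMBLE + PAYLOAD_BITS / rate)
    return n_devices * PAYLOAD_BITS / t_total

print(netscatter_link_rate(256, 32))     # config#1
print(netscatter_link_rate(256, 1760))   # config#2
print(tdma_link_rate(256))               # sequential LoRa backscatter baseline
```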
Network latency. Finally, Fig. 19 shows that NetScatter reduces latency by 67.0x (55.1x) and 15.3x (12.6x) over prior LoRa backscatter without and with rate adaptation, respectively, in network config#1 (#2). This is the key advantage of using concurrent transmissions in low-power backscatter networks. Since the downlink AP query bit-rate is 160 kbps, the AP query duration is negligible compared to the duration of the backscatter devices' preambles, both for prior backscatter methods and for config#1. For config#2, the AP query duration is significantly higher than in config#1; however, the total duration is still dominated by the backscatter payload + CRC and preamble. As a result, the AP query is not the dominant factor in link-layer latency.
Related Work
Recent systems that backscatter Wi-Fi signals [10,29,18] have a receiver sensitivity of only -90 dBm and hence have limited range; they cannot work across rooms unless the RF source is placed close to the backscatter tag [19,18]. LoRa backscatter [25] can achieve long ranges by generating LoRa-compliant packets at the backscatter device. pLoRa [22] backscatters ambient LoRa signals in the environment, in contrast to the single tone used as the RF source in NetScatter as well as in [25]. We note that all SemTech LoRa chipsets have the capability, in software, to transmit single-tone signals. All of these prior long-range systems are evaluated in networks of only 1-2 devices and propose to use time division to support multiple backscatter devices. In contrast, our design enables large-scale concurrent transmissions and achieves much higher link-layer data rates as well as lower latencies. We also note that these long-range backscatter systems [25,22] claim a kilometer range in outdoor scenarios such as open fields; this, however, requires placing the RF source close to the backscatter devices. In indoor environments, where the signal propagates through walls and the RF source is not placed close to the backscatter devices, our network's operational range across ten different rooms is consistent with this prior work. Finally, we note that while prior work [25,22] decodes the backscatter signal on SemTech LoRa chipsets, our distributed CSS protocol is decoded on a software radio. We note, however, that the SemTech LoRa SX1257 [7] chipset provides I-Q samples, and hence our approach could also be implemented on these off-the-shelf chipsets together with a low-power FPGA for baseband processing; this, however, is beyond the scope of this paper.
Finally, recent work on decoding concurrent transmissions from RFID tags does not achieve the long-range, below-noise operation of CSS-based systems. Buzz [28], LF-Backscatter [13], and others [14,21,16] leverage differences in the time-domain signal transitions and changes in the constellation diagram to decode multiple RFIDs. However, the number of concurrent transmissions in these designs is limited: the latest in this line of work, FlipTracer [16], can reliably decode up to five concurrent RFID tags. Further, these systems were tested at ranges of 0.5 to 6 feet [28,13,16] and within a single room. Finally, the receiver sensitivity of even battery-powered backscatter tags for RFID EPC-Gen2 readers is around -85 dBm, so they cannot support the long ranges and whole-home deployments that CSS-modulation-based backscatter achieves.
Conclusion
We present a new wireless protocol for backscatter networks that scales to hundreds of concurrent transmissions. To this end, we introduce distributed chirp spread spectrum coding, which uses a combination of chirp spread spectrum (CSS) modulation and ON-OFF keying. Further, we address practical issues including the near-far problem and timing and frequency synchronization. Finally, we deploy our system in an indoor environment with 256 concurrent devices to demonstrate its throughput and latency performance.
"year": 2018,
"sha1": "f68875795f6c900374f50d6768971b5803846964",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3f2e1e879390ce36f4bef952da474c90fd64cf16",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
On Statistical Properties of Sets Fulfilling Rolling-Type Conditions
Motivated by set estimation problems, we consider three closely related shape conditions for compact sets: positive reach, r-convexity, and the rolling condition. First, the relations between these shape conditions are analyzed. Second, for the estimation of sets fulfilling a rolling condition, we obtain a result of ‘full consistency’ (i.e. consistency with respect to the Hausdorff metric for the target set and for its boundary). Third, the class of uniformly bounded compact sets whose reach is not smaller than a given constant r is shown to be a P-uniformity class (in Billingsley and Topsøe's (1967) sense) and, in particular, a Glivenko-Cantelli class. Fourth, under broad conditions, the r-convex hull of the sample is proved to be a fully consistent estimator of an r-convex support in the two-dimensional case. Moreover, its boundary length is shown to converge (almost surely) to that of the underlying support. Fifth, the above results are applied to obtain new consistency statements for level set estimators based on the excess mass methodology (see Polonik (1995)).
Introduction
Three closely related geometric properties, with fairly intuitive interpretations, are analyzed in this paper. They are called positive reach (see, e.g., Federer [13], Rataj [27], Ambrosio et al. [1]), r-convexity (Perkal [23], Mani-Levitska [17], Walther [32]) and the rolling condition (Walther [32,33]). Our interest in these "rolling-type properties" is motivated by the framework of set estimation, that is, the problem of reconstructing a set S ⊂ R d (typically a density support or a density level set) from a random sample of points; see e.g. Cuevas and Fraiman [8] for a recent survey. The classical theory of this subject, dating back to the 1960's, is largely concerned with the assumption that S is convex; see e.g. Dümbgen and Walther [12] or Reitzner [28] for a survey. The rolling-type properties have been employed as shape restrictions on S alternative to (and much broader than) the convexity assumption.
The rolling-type conditions are useful in statistics and stochastic geometry in at least two ways. First, they can sometimes be incorporated into the estimator: for example, if the (compact) support S of a random variable X is assumed to be r-convex, one could estimate S from a random sample of X by taking the r-convex hull of the sample points, much in the same way as the convex hull has been used to estimate a convex support; see Rodríguez-Casal [29]. Second, they can be used as regularity assumptions in order to get faster rates of convergence for the estimators; see, e.g., Cuevas and Rodríguez-Casal [10], Pateiro-López and Rodríguez-Casal [22]. Of course other, perhaps more standard, regularity conditions have also been used in set estimation. They rely on usual smoothness assumptions on the boundary or the underlying density, defined in terms of derivatives. See Biau et al. [3,4] and Mason and Polonik [18] for some recent interesting examples. Some deep results on the connection between differentiability assumptions and rolling-type conditions can be found in Federer [13], Walther [33] and Ambrosio et al. [1].
The most popular among the rolling-type properties is by far the positive reach condition. It was introduced by Federer [13] in a celebrated paper which could be considered as a landmark in geometric measure theory. Among other relevant results, Federer [13] proved that (Theorem 5.6), for a compact set S with reach r > 0, the volume (Lebesgue measure) of the ǫ-parallel set B(S, ǫ) can be expressed as a polynomial in ǫ, of degree d, for 0 ≤ ǫ < r. This is a partial generalization of the classical Steiner formula that shows this property for convex sets for all ǫ > 0. So the positive reach property can be seen as a natural generalization of convexity in a much deeper sense than that suggested by the definition. For some recent interesting contributions on this property see Ambrosio et al. [1] and Colesanti and Manselli [7]. An application in set estimation, more specifically in the problem of estimating the boundary measure, can be found in Cuevas et al. [9].
The r-convexity property provides a different but closely related generalization of convexity (see Section 2 for precise definitions). Whereas this property has also a sound intuitive motivation it is much less popular. An earlier reference is Perkal [23] but, to our knowledge, the first statistical application is due to Walther [32] who uses this condition in the setting of level set estimation. A study of the r-convex hull as an estimator of an r-convex support can be found in Rodríguez-Casal [29].
Let us now establish some notation and basic definitions. We are concerned here with subsets of R d, although some concepts and results can be stated, with little additional effort, in the broader setup of metric spaces. The Euclidean norm in R d will be denoted by ‖ · ‖. Given a set A ⊂ R d, we denote by A^c, int(A) and ∂A the complement, interior and boundary of A, respectively. We denote by B(x, r) the closed ball with centre x and radius r. For convenience, the open ball int(B(x, r)) will be denoted by B̊(x, r).
In the problem of estimating a compact set and/or its boundary we need to use some suitable distances in order to assess the quality of the estimation and establish asymptotic results (concerning consistency, convergence rates and asymptotic distribution). The most usual distances in this setting are the Hausdorff distance d H and the distance in measure d ν . The definitions are as follows.
Let M be the class of closed, bounded, nonempty subsets of R d. For A, C ∈ M, the Hausdorff distance between A and C is defined by

d_H(A, C) = max{ sup_{a ∈ A} δ_C(a), sup_{c ∈ C} δ_A(c) },

where δ_A(x) = inf{‖x − a‖ : a ∈ A} denotes the distance from the point x to the set A. Let ν be a Borel measure on R d, with ν(C) < ∞ for any compact C. Let A, C be Borel sets with finite ν-measure. The distance in measure between A and C is defined by

d_ν(A, C) = ν(A∆C),

where ∆ denotes the symmetric difference between A and C, that is, A∆C = (A \ C) ∪ (C \ A). Often ν is either a probability measure or the Lebesgue measure on R d, which we will denote by µ. Note that the distance function d_ν is actually a pseudometric, but it becomes a true metric if we identify two sets differing in a ν-null set. This amounts to working in the quotient space associated with the corresponding equivalence relation.
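For finite point sets and grid discretisations, both distances are straightforward to compute; the Python sketch below is only an illustration of the definitions (brute-force pairwise distances, with no attempt at efficiency, and all parameter choices are ours).

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets given as (n x d) arrays."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def distance_in_measure(ind_A, ind_B, cell_volume):
    """Measure of the symmetric difference of two sets given as boolean
    indicator arrays over a common grid of equal-volume cells."""
    return np.logical_xor(ind_A, ind_B).sum() * cell_volume

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.0], [2.0, 0.0]])
print(hausdorff(A, B))  # 1.0, driven by the point (2, 0) of B
```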
Besides the estimation of the set S and the boundary ∂S, some functionals of S are sometimes of interest as estimation targets. This is the case of the (d − 1)-dimensional boundary measure L(S) (that is, the perimeter of S for d = 2 and the surface area for d = 3). There are several, not always equivalent, definitions for L(S) (see Mattila [19]), but we will mainly use the outer Minkowski content,

L(S) = lim_{ǫ→0} µ(B(S, ǫ) \ S) / ǫ.    (1)

Under regularity conditions, the limit in (1) coincides with the usual (two-sided) Minkowski content given by

L_0(S) = lim_{ǫ→0} µ(B(∂S, ǫ)) / (2ǫ).    (2)

We refer to Ambrosio et al. [1] for general conditions ensuring the existence of the outer Minkowski content and its relation to other measurements of the boundary of S.
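The outer Minkowski content also suggests a simple numerical approximation of L(S) when S is accessible only through a membership oracle: fix a small ǫ, estimate µ(B(S, ǫ) \ S) on a grid, and divide by ǫ. The sketch below is our own illustration; the grid, the test set and all parameter choices are arbitrary.

```python
import numpy as np

def outer_minkowski_content(inside, eps, step, lo=0.0, hi=1.0):
    """Grid approximation of mu(B(S, eps) \\ S) / eps for S inside [lo, hi]^2,
    where `inside(p)` is a membership oracle for S."""
    xs = np.arange(lo, hi, step)
    pts = np.array([(x, y) for x in xs for y in xs])
    in_S = np.array([inside(p) for p in pts])
    S_pts = pts[in_S]
    # a grid point is in the eps-dilation of S if some point of S is eps-close
    d = np.linalg.norm(pts[:, None, :] - S_pts[None, :, :], axis=-1).min(axis=1)
    extra = np.count_nonzero(d <= eps) - np.count_nonzero(in_S)
    return extra * step ** 2 / eps

disc = lambda p: np.hypot(p[0] - 0.5, p[1] - 0.5) <= 0.25
print(outer_minkowski_content(disc, eps=0.05, step=0.02))
# about 1.7 here; the limit as eps -> 0 is the perimeter 2*pi*0.25 ~ 1.57
```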
The contributions in this paper can be now summarized as follows. In Section 2 we clarify the relations between the rolling-type conditions pointing out that if the reach of a set is r, then it is r-convex and, in turn, r-convexity entails the r-rolling property. See Propositions 1 and 2. It is also shown that the converse implications are not true in general. In Section 3 we prove that, under very general conditions on ν and S, d H (S n , S) → 0 plus d H (∂S n , ∂S) → 0 (we call this simultaneous convergence full convergence) implies d ν (S n , S) → 0 (Theorem 2). We also show (Theorem 3) that if the sets S n fulfill the rolling condition, the d H -convergence d H (S n , S) → 0 implies full convergence. This result (which has some independent interest) is used below in the paper. In Section 4 we show that, under broad conditions, the class of uniformly bounded sets with reach ≥ r is a P -uniformity class (in the sense of Billingsley and Topsøe [5]) and therefore also a Glivenko-Cantelli class. This is interesting from the point of view of empirical processes, see e.g., Devroye et al. [11] or van der Vaart [31], and will be also used in Section 5 of this paper. In subsection 5.1 we show that, if a support S in R 2 is assumed to be r-convex then, the natural estimator (which, as mentioned above, is the r-convex hull of the sample) is consistent in all the usual senses. In particular, it provides also a plug-in consistent estimator of the boundary length whose practical performance is checked through some numerical comparisons in the appendix. This is an interesting contribution to the theory of nonparametric boundary estimation which so far relies mostly on the use of two samples (one inside and the other outside the set S); see Cuevas et al. [9], Pateiro-López and Rodríguez-Casal [22] and Jiménez and Yukich [16]. The results mentioned in the above points are used in subsection 5.2 for the problem of estimating density level sets of type {f ≥ λ} using the excess mass approach (see Polonik [25]). In particular, we obtain new consistency properties for the estimation of {f ≥ λ} as well as uniform consistency results (with respect to λ) for the same problem.
Rolling-type assumptions
Convexity is a natural geometrical restriction that arises in many fields. While the study of this topic dates back to antiquity, most important contributions and applications date from the 19th and 20th centuries. We refer to the Handbook edited by Gruber and Wills [14] for a complete survey of convex geometry and its relations to other areas of mathematics. However, convexity leaves out usual features of sets such as holes or notches and may be considered an unrealistic assumption in some situations. As a natural consequence, there are meaningful extensions of the notion of convexity; see e.g. Mani-Levitska [17]. In this Section, we shall consider three different shape restrictions that generalize that of convexity. We group them under the name of rolling-type conditions. Their formal definitions are as follows.
Definition 1. Given r > 0, a set S ⊂ R d is said to fulfill the (outside) r-rolling condition if for all x ∈ ∂S there is a closed ball with radius r, B x , such that x ∈ B x and int(B x )∩S = ∅.
The radius r of the ball acts as a smoothing parameter. We refer to Walther [33] for a deep study of this condition.
Following the notation by Federer [13], let Unp(S) be the set of points x ∈ R d having a unique projection on S, denoted by ξ_S(x). That is, for x ∈ Unp(S), ξ_S(x) is the unique point of S attaining the distance δ_S(x) from x to S.

Definition 2. For x ∈ S, the reach of S at x is defined as reach(S, x) = sup{ρ ≥ 0 : B̊(x, ρ) ⊂ Unp(S)}, and the reach of S is reach(S) = inf{reach(S, x) : x ∈ S}. The set S is said to have positive reach if reach(S) > 0.

Besides all convex closed sets (which have infinite reach) and regular submanifolds of class 2 of R d, the class of sets of positive reach also contains nonconvex sets or sets whose boundary is not a smooth manifold. Sets with positive reach were introduced by Federer [13], who also obtained their main properties. In particular, these sets obey a Steiner formula in the following sense.
Theorem 1. (Federer [13], Th. 5.6) Let S ⊂ R d be a set with reach(S) ≥ r > 0. Then, for all 0 ≤ ǫ < r,

µ(B(S, ǫ)) = Σ_{j=0}^{d} b_j Φ_{d−j}(S) ǫ^j,    (3)

where b_0 = 1 and, for j ≥ 1, b_j is the j-dimensional measure of a unit ball in R j, and Φ_0(S), . . . , Φ_d(S) denote the (total) curvature measures of S introduced by Federer [13].
Definition 3. Given r > 0, a set S ⊂ R d is said to be r-convex if S = C_r(S), where

C_r(S) = ⋂_{y : B̊(y,r) ∩ S = ∅} (B̊(y, r))^c.    (4)

That is, S is r-convex if for all x ∈ S^c there exists y ∈ B̊(x, r) such that B̊(y, r) ∩ S = ∅. We refer to Perkal [23] for elementary properties of r-convex sets and connections between convexity and r-convexity. Walther [33] and Rodríguez-Casal [29] also deal with this shape restriction in the context of set estimation. A natural question is whether certain characterizations of convex sets are still meaningful in the context of generalized notions of convexity. For instance, given a set S and r > 0 it is possible to find the minimal r-convex set containing S, the so-called r-convex hull of S. In fact, it follows from the properties of r-convex sets that the r-convex hull of S coincides with the set C_r(S) given in (4), see Perkal [23]. Note that the definition of C_r(S) resembles that of the convex hull (with balls of radius r instead of halfspaces). However, the same property does not hold when we consider the reach condition. It is not always possible to define the so-called r-hull of S, that is, the minimal set containing S and having reach ≥ r, see Colesanti and Manselli [7]. In fact, when S admits such a minimal set it happens to coincide with C_r(S), see Corollary 4.7 by Colesanti and Manselli [7]. This result provides an indirect proof of Proposition 1 below, which states that every closed set with reach at least r is also r-convex. Proposition 2 establishes the relation between r-convexity and the rolling condition.
Proposition 1. Let S ⊂ R d be a closed set with reach(S) ≥ r for some r > 0. Then S is r-convex.

Proof. Let x ∈ S^c and set η_x = (x − ξ_S(x))/‖x − ξ_S(x)‖, where η_x ∈ Nor(S, ξ_S(x)) and Nor(S, ξ_S(x)) is the set of all normal vectors of S at ξ_S(x). Define y_λ = ξ_S(x) + λη_x with 0 < λ ≤ r and take λ ∈ (0, r). Now, reach(S, ξ_S(x)) ≥ r > λ and, by part (12) of Theorem 4.8 in Federer [13], we get that δ_S(y_λ) = λ.

Remark 2. Borsuk's conjecture on local contractibility of r-convex sets. The converse of Proposition 1 is not true in general; see Figure 1 (a) for an example of an r-convex set that does not have reach r. Even so, we prove in Theorem 6 that if S is a compact r-convex support in R 2 fulfilling a mild regularity condition (which we call interior local connectivity; see Section 5 for details), then S has positive reach, though not necessarily r. If we do not assume any additional regularity condition on S, then the conclusion of positive reach does not seem that simple to get. This is closely related to an unsolved conjecture by K. Borsuk (see Perkal [23] and Mani-Levitska [17]): is an r-convex set locally contractible? Note that proving that a compact r-convex set has positive reach would give a positive answer to Borsuk's conjecture since, according to Remark 4.15 in Federer [13], any set with positive reach is locally contractible. Recall that a topological space is said to be contractible if it is homotopy equivalent to a point; in intuitive terms, this means that the space can be continuously shrunk to a point. The space is called locally contractible if every point has a local base of contractible neighborhoods.
Proposition 2. Let S ⊂ R d be a compact r-convex set for some r > 0. Then S fulfills the r-rolling condition.
Proof. Let x ∈ ∂S. Since x is a limit point of the set S^c, there is a sequence of points {x_n}, where x_n ∈ S^c, that converges to x. By the r-convexity, for each n there exists y_n ∈ B̊(x_n, r) such that B̊(y_n, r) ∩ S = ∅. Since {y_n} is bounded it contains a convergent subsequence, which we denote by {y_n} again. Then y_n → y, and it is not difficult to prove that B̊(y, r) ∩ S = ∅ and ‖y − x‖ ≤ r. Now, x ∈ S since S is closed and, therefore, x ∈ ∂B(y, r), which concludes the proof.
The converse implication is not true in general, see Figure 1 (b) for an example of a set fulfilling the r-rolling condition but not r-convex.
3 Boundary convergence and full convergence in sequences of sets

As mentioned in the introduction, the focus in this paper is on the reconstruction (in the statistical sense) of an unknown support S from a sequence of estimators {S n } based on sample information. In many practical instances, including image analysis, the most important aspect of the target set is the boundary ∂S. However, it is clear that, even in very simple cases, the Hausdorff convergence d H (S n , S) → 0 does not entail the boundary convergence d H (∂S n , ∂S) → 0. See e.g. Baíllo and Cuevas [2], Cuevas and Rodríguez-Casal [10] and Rodríguez-Casal [29] for some results on boundary estimation. We introduce the following notion of convergence.
Definition 4. Let {S_n} be a sequence of nonempty compact sets in R d and let S be a nonempty compact set. We say that S_n converges fully to S if both d_H(S_n, S) → 0 and d_H(∂S_n, ∂S) → 0.

The following result shows that, under very general conditions, the Hausdorff convergence of the sets and their boundaries implies also the convergence with respect to the distance in measure. This accounts for the term "full convergence".

Theorem 2. Let {S_n} be a sequence of compact non-empty sets in R d endowed with a Borel measure ν, with ν(C) < ∞ for any compact C. Let S be a compact non-empty set such that ν(∂S) = 0, d_H(S_n, S) → 0 and d_H(∂S_n, ∂S) → 0. Then d_ν(S_n, S) → 0.
Proof. Take ǫ > 0. We first prove that, for n large enough, and To see (5) take x ∈ S such that δ ∂S (x) > 2ǫ and n large enough such that S ⊂ B(S n , ǫ) and The proof of (6) follows along the same lines. Finally, note that as a consequence of (5) and (6), which for large enough n is a subset of {x ∈ S : δ ∂S (x) ≤ 3ǫ}, that decreases to ∂S as ǫ ↓ 0. Since ν is finite on bounded sets and ν(∂S) = 0 this entails lim sup ν(S n ∆S) = 0 which concludes the proof.
We next show that the above result applies to the important class of sets fulfilling the r-rolling condition. Therefore, as shown in Propositions 1 and 2, it also applies to the class of r-convex sets and to that of sets with positive reach. Proof. (a) Assume that the result is not true. Then, (i) There exists ǫ > 0 such that for infinitely many n ∈ N there exists x n ∈ ∂S with δ ∂Sn (x n ) > ǫ or (ii) There exists ǫ > 0 such that for infinitely many n ∈ N there exists x n ∈ ∂S n with δ ∂S (x n ) > ǫ.
First, assume that (i) is satisfied. Since S is compact, there exists a convergent subsequence of {x n } which we will denote again as the original sequence. Let x ∈ ∂S be the limit of {x n }. The Hausdorff convergence of S n to S implies that δ Sn (x) ≤ ǫ/2 for infinitely many n. Furthermore, for large enough n, But, since x ∈ ∂S and S is closed, we can consider y ∈ S c such that x − y < ǫ/2 and δ S (y) > 0. Again by the Hausdorff convergence of S n to S we get that y ∈ S c n for infinitely many n which yields a contradiction. Assume now that (ii) is satisfied. We can assume ǫ < r. The Hausdorff convergence of S n to S implies that δ S (x n ) ≤ ǫ for infinitely many n. This, together with δ ∂S (x n ) > ǫ, yields x n ∈ int(S) for infinitely many n. Let 0 < λ < ǫ/2 < r. Since x n ∈ ∂S n and the sets S n satisfy the r-rolling property, there exists for each n ∈ N a ball B(c n , λ) such that x n ∈ ∂B(c n , λ) andB(c n , λ) ∩ S n = ∅. Again, let us denote by {x n } a convergent subsequence of {x n }. Then x n → x, with x ∈ S and δ ∂S (x) ≥ ǫ/2, that is, B(x, ǫ/2) ⊂ S. Now, let {c n } be a convergent subsequence of {c n }. Then c n → c, with c ∈ S c n and δ Sn (c) > λ/2 for infinitely many n. By the Hausdorff convergence of S n to S, we get that c ∈ S c which yields a contradiction with B(x, ǫ/2) ⊂ S.
(b) By Theorem 4.13 in Federer [13], reach(S) ≥ r and, in particular, µ(∂S) = 0, see Remark 1. The result is now straightforward from (a) and Theorem 2. On the other hand, Theorem 3 shows that this in fact applies to the class of sets with reach ≥ r, which is d H -closed from Theorem 4.13 and Remark 4.14 in Federer [13]. Thus, the class of subsets with reach ≥ r of a compact set is d µ -compact. This will be useful below (subsection 5.2) in order to apply the results in Polonik [25] for classes of sets defined in terms of reach properties.
4 P-uniformity: Billingsley-Topsøe theory and its application to classes of sets with positive reach

Let X 1 , X 2 , . . . be a sequence of independent and identically distributed random elements on a probability space (Ω, F, P) with values in a measurable space (E, B), where E is a metric space and B stands for the Borel σ-algebra on E. Denote by P the probability distribution of X 1 on B and let P n be the empirical probability measure associated with X 1 , . . . , X n .
The almost sure pointwise convergence on B of P n to P is ensured by the strong law of large numbers. Moreover, for appropriate classes A ⊂ B the uniform convergence

sup_{A ∈ A} |P_n(A) − P(A)| → 0, a.s.,    (7)

also holds. A class A of sets fulfilling (7) is called a Glivenko-Cantelli class (GC-class). They are named after the classical Glivenko-Cantelli theorem, which establishes the result (7) for the case where E = R and A is the class of closed half-lines A = (−∞, x], x ∈ R. The study of uniform results of type (7) is a classical topic in statistics. A well-known reference is Pollard [24]. A useful summary, targeted to the most usual applications in statistics, can be found in Chapter 19 of van der Vaart [31].
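The classical half-line case is easy to reproduce numerically. The Python sketch below is illustrative only (it uses SciPy's standard normal as the model P and our own parameter choices): it computes the supremum over half-lines of |P_n − P|, i.e. the Kolmogorov-Smirnov discrepancy, and shows it shrinking as n grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gc_discrepancy(n):
    """sup over half-lines (-inf, x] of |P_n(A) - P(A)| for a standard normal P."""
    x = np.sort(rng.standard_normal(n))
    F = norm.cdf(x)
    upper = np.abs(np.arange(1, n + 1) / n - F).max()   # P_n((-inf, x_i])
    lower = np.abs(np.arange(0, n) / n - F).max()       # left limits of P_n
    return max(upper, lower)

for n in (100, 1000, 10000):
    print(n, gc_discrepancy(n))   # decreases towards 0 as n grows
```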
A popular methodology to obtain Glivenko-Cantelli classes is based on the use of the well-known Vapnik-Cervonenkis inequality (see, e.g. Devroye et al. [11]) which relies on combinatorial tools. This inequality provides upper bounds for P{sup A∈A |P n (A)−P (A)| > ǫ} which depend on the so-called shatter coefficients and VC-dimension of the class A. However, this approach is not useful in those cases where the VC-dimension of A is infinite. This is the case where A is the class of closed convex sets in R d and therefore also for the class of closed r-convex sets.
Nevertheless, it can be shown that the family of all convex sets in R d is a GC-class. This can be done following an alternative approach (maybe less popular than the VC-method), due to Billingsley and Topsøe [5]; see also Bickel and Millar [6]. This methodology is rather based on geometrical and topological ideas and therefore turns out to be more suitable for set estimation purposes. The basic ideas of Billingsley-Topsøe approach can be summarized as follows.
Let P n and P be probability measures on B. A set A is called a P -continuity set if P (∂A) = 0. The sequence P n is said to converge weakly to P if P n (A) → P (A) for each P -continuity set A ⊂ B. A subclass A ⊂ B is said to be a P -continuity class if every set in A is a P -continuity set. A subclass A ⊂ B is said to be a P -uniformity class if sup A∈A |P n (A) − P (A)| → 0 holds for every sequence P n that converges weakly to P . Note that the P -uniformity concept is not established just for sequences of empirical distributions but in general for any sequence of probability measures converging to P . Billingsley and Topsøe [5] derived several results establishing conditions for a class A to be a P -uniformity class. As a consequence, some useful criteria for obtaining GC-classes immediately follow.
The following theorem provides three sufficient conditions to ensure that a class A is a P -uniformity class.
Theorem 4. (Billingsley and Topsøe (1967, Th. 4)) If E is locally connected and if
A is a P -continuity class of subsets of E, then each of the following three conditions is sufficient for A to be a P -uniformity class.
(i) The class ∂A = {∂A : A ∈ A} is a compact subset of the space M of non-empty closed bounded subsets of E.
(ii) There exists a sequence {C n } of bounded sets with P (int(C n )) → 1 and such that, for each n, the class ∂(C n ∩ A) = {∂(C n ∩ A) : A ∈ A} is a compact subset of M.
(iii) There exists a sequence {C n } of closed, bounded sets with P (int(C n )) → 1 and such that, for each n, the class C n ∩ ∂A = {C n ∩ ∂A : A ∈ A} is a compact subset of M.
Theorem 4 can be used to prove that the class of all convex sets in R d is a P -uniformity class, and in particular, a Glivenko-Cantelli class. The following theorem provides a partial extension of this property: we show that, under an additional boundedness assumption, convexity can be replaced with the broader condition of having a given positive reach. Again, the basic tool is Theorem 4.
Theorem 5. Let K be a compact non-empty subset of R d and A = {A ⊂ K : A = ∅, A is closed, and reach(A) ≥ r}, for r > 0. Then the class ∂A = {∂A : A ∈ A} is a compact subset of M. Moreover, A is a P -uniformity class for every probability measure P such that P is absolutely continuous with respect to the Lebesgue measure µ.
Proof. Let {∂A n } be a convergent sequence of sets in ∂A. By Federer's closeness theorem for sets of positive reach (Theorem 4.13 in Federer [13]) it follows that the class A is compact with respect to the Hausdorff metric and, therefore, A n has a convergent subsequence whose limit is a set A ∈ A. In view of Propositions 1 and 2, we can apply Theorem 3 to the class A, and this yields ∂A n → ∂A in d H . Now, from Theorem 4 (i), A is a P -uniformity class.
Estimation of r-convex sets and their boundary lengths
Estimation of r-convex supports

As indicated in Section 2, r-convexity is a natural extension of the notion of convexity. From the point of view of set estimation, r-convexity is particularly attractive, as the estimation of an r-convex compact support S from a random sample X 1 , . . . , X n drawn on S can be handled very much in the same way as the case where S is convex. In this classical situation (which has been extensively considered in the literature, see, e.g., Dümbgen and Walther [12]) the natural estimator of S is the convex hull of the sample.
In an analogous way, if S is assumed to be r-convex, the obvious estimator is the r-convex hull of the sample, which we will denote by S n . Recall from (4) that

S_n = ⋂_{y : B̊(y,r) ∩ {X_1, . . . , X_n} = ∅} (B̊(y, r))^c.    (8)
This estimator can be explicitly calculated in a computationally efficient way (at least in the two-dimensional case) through the R-package alphahull; see Pateiro-López and Rodríguez-Casal [21]. In Figure 2 we show an example of the r-convex hull estimator. Note that the boundary of the r-convex hull estimator is formed by arcs of balls of radius r (besides possible isolated sample points). The arcs are determined by the intersections of some of the empty balls that define the complement of the r-convex hull, see Equation (8). The r-convex hull estimator was first considered, from a statistical point of view, in Walther [32]. Rodríguez-Casal [29] studied its properties as an estimator of S and ∂S, providing (under some rolling-type assumptions for S) convergence rates for d H (S n , S), d H (∂S n , ∂S) and d µ (S n , S) which essentially coincide with those given by Dümbgen and Walther [12] for the convex case.
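In practice the alphahull package computes S_n exactly from the sample geometry; the Python sketch below is a much cruder, purely illustrative alternative (our own construction, not the package's algorithm) that approximates (8) on a grid by discarding every grid point covered by some open ball of radius r whose interior misses the sample. Candidate ball centres are also taken from a finite grid, so the result is only an approximation.

```python
import numpy as np

def r_convex_hull_mask(sample, r, grid, centers):
    """Boolean mask over `grid` approximating the r-convex hull of `sample`:
    a grid point is discarded if it lies strictly inside some ball of radius r
    (centre taken from `centers`) whose open interior contains no sample point."""
    d_centre = np.linalg.norm(
        centers[:, None, :] - sample[None, :, :], axis=-1).min(axis=1)
    empty = centers[d_centre >= r]            # open balls that miss the sample
    if len(empty) == 0:
        return np.ones(len(grid), dtype=bool)
    d_grid = np.linalg.norm(
        grid[:, None, :] - empty[None, :, :], axis=-1).min(axis=1)
    return d_grid >= r                        # kept iff no empty ball covers it

rng = np.random.default_rng(1)
sample = rng.uniform(0, 1, size=(300, 2))     # uniform sample on the unit square
gx = np.linspace(0, 1, 60)
grid = np.array([(x, y) for x in gx for y in gx])
cx = np.linspace(-0.4, 1.4, 46)               # candidate centres, also outside S
centers = np.array([(x, y) for x in cx for y in cx])
mask = r_convex_hull_mask(sample, r=0.2, grid=grid, centers=centers)
print(mask.mean())                            # fraction of the grid kept
```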
Estimation of the boundary length
We are interested in estimating the boundary length of S, L(S), as defined by the outer Minkowski content given in (1). Recall that (see Remark 1) if reach(S) > 0, then the outer Minkowski content is finite.
In Pateiro-López and Rodríguez-Casal [22] the estimation of the usual (two-sided) Minkowski content L 0 (S) given in (2) is considered under double smoothness assumptions of rolling-type on S (from inside and outside ∂S) which in fact are stronger than r-convexity. Under these assumptions, these authors improve the convergence rates for the Minkowski content L 0 (S) obtained in Cuevas et al. [9]. Another recent contribution to the problem of estimating boundary measures is due to Jiménez and Yukich [16]. In all these cases the estimators are based on a sample model which requires random observations inside and outside S in such a way that for each observation one is able to decide (with no error) whether or not it belongs to S.
Theorem 6 provides further insights on the approach of these papers in the sense that, under minimal assumptions, it gives a fully consistent estimator S n of an r-convex set S ⊂ R 2 and a plug-in consistent estimator L(S n ) of L(S) based on a single, inside sample. The performance of L(S n ), in terms of bias and variance, is analyzed in the appendix.
Before stating Theorem 6 we need some preliminaries.
Definition 5. We will say that a set S fulfills the property of interior local connectivity (ILC) if there exists α 0 > 0 such that for all α ≤ α 0 and for all x ∈ S, int(B(x, α) ∩ S) is a non-empty connected set.
Let us define an r-circular triangle as a compact plane figure limited by three sides: two of them are arcs of intersecting circumferences of radius r; the third side is a linear segment which is tangent to both circumference arcs. In other words, an r-circular triangle is an isosceles triangle with a linear segment as a basis and two r-circumference segments as the other sides. See Figure 4.

Theorem 6. Let S ⊂ R 2 be a compact r-convex set with µ(S) > 0 that fulfills the ILC property. Let X_1, . . . , X_n be a random sample drawn from a uniform distribution with support S. Denote by S_n the r-convex hull of this sample, as defined in (8). Then,
(a) S_n is a fully consistent estimator of S.
(b) Define S̃_n = S_n \ I(S_n), where I(S_n) is the set of isolated points of S_n. Then L(S̃_n) → L(S), almost surely.

For large enough n, with probability one, S̃_n is a compact (not necessarily connected) set whose boundary is the union of a finite sequence of r-arcs, that is, circumference arcs of radius r. We will say that a boundary point of S̃_n is "extreme" if it is the intersection of two boundary r-arcs from different r-circumferences. The proof is based on the following two lemmas.

Lemma 1. Under the assumptions of Theorem 6, the sequence reach(S̃_n) is, with probability one, bounded away from zero.

Proof. If the sequence reach(S̃_n) were not bounded from below a.s., we would have, with positive probability, a sequence {z_n} of points with z_n ∉ S̃_n and another sequence (x_n, y_n), where x_n and y_n are boundary points of S̃_n, with x_n ≠ y_n and such that

‖z_n − x_n‖ = ‖z_n − y_n‖ = inf_{x ∈ S̃_n} ‖z_n − x‖ = r_n → 0.    (9)
In principle, the "projection points" x n and y n could be extreme points (as defined above) or "boundary inside points", that is, points belonging to a boundary r-arc but different from the extremes of this arc. However, it is easily seen ( Figure 5) that the latter possibility (i.e., x n or y n is a boundary inside point) leads to a contradiction: Indeed, note that if x n belongs to an r-arc in ∂S n with extremes a and b and x n = a, x n = b, then at a fixed distance r n < r there is only one point in the complement ofS n (necessarily equal to z n ) such that x n is the projection of z n onS n . This would imply that the projection of z n is r b a x n z ñ S n Figure 5: If z n has two projections x n and y n ontoS n , neither x n nor y n can be boundary inside points.
unique since the arc with extremes a and b is an arc of a circle of radius r that does not intersectS n .
Thus, x n and y n must be extreme points and our proof reduces to see that we cannot have (9) with a positive probability. So, let us assume that (9) were true with a positive probability (let us denote by A the corresponding event).
Let T_n^1 and T_n^2 be two circular triangles, with vertices x_n and y_n, respectively, determined by the r-arcs in ∂S̃_n whose intersections are x_n and y_n, see Figure 6. Note that these triangles are not necessarily included in S̃_n; just a portion of each triangle close to the vertex is in general included in S̃_n.
First, let us prove that the "heights" (i.e. the distances from the vertices to the bases) of T_n^1 and T_n^2 must necessarily be bounded from below a.s. on A. To see this, recall that each x_n is the intersection of the boundary arcs of two balls of radius r, B(c_{1n}, r) and B(c_{2n}, r), whose interiors do not intersect S̃_n. By construction, height(T_n^1) → 0 for some subsequence of T_n^1 implies that the centers (c_{1n}, c_{2n}) must fulfill c_1 = lim c_{1n} = lim c_{2n} = c_2. This means that the r-circumferences providing the boundary of S̃_n at both sides of x_n tend to coincide as n tends to infinity. We will prove that c_1 = c_2 leads to a contradiction. Indeed, if c_1 = c_2 we would have that, for large enough n, the centers c_{1n} and c_{2n} would be very close and the boundary arcs ∂B(c_{1n}, r) and ∂B(c_{2n}, r) would intersect each other not only at x_n but also at another point x*_n such that ‖x_n − x*_n‖ > γ > 0. We conclude that B(z_n, r_n) \ {x_n} ⊂ int(B(c_{1n}, r) ∪ B(c_{2n}, r)) ⊂ S̃_n^c for large enough n (to see this, note that the boundary of B(c_{1n}, r) ∪ B(c_{2n}, r) near x_n coincides with the triangular sides of T_n^1). Now, from (9), we have obtained a contradiction: y_n ∈ ∂B(z_n, r_n) ∩ S̃_n and y_n ≠ x_n.

Figure 6: In gray, T_n^1 and T_n^2.

As the space of compact sets endowed with the Hausdorff metric is locally compact,
there exist a.s. convergent subsequences of T_n^1 and T_n^2, {x_n} and {y_n}, which we will denote again as the original sequences.
Let T^1 and T^2 be the a.s. Hausdorff limits of T_n^1 and T_n^2, respectively. Since the heights of T_n^1 and T_n^2 are bounded from below, T^1 and T^2 must also be non-degenerate circular triangles. Denote x = lim x_n = lim y_n = lim z_n, a.s. Note that the fact that z_n ∉ S̃_n ensures that x ∈ ∂S. Then, we have two possibilities. (i) If T^1 ∩ T^2 = {x} we would get a contradiction: to see this, note that T = T^1 ∪ T^2 is just the union of two r-circular triangles which are disjoint except for the common vertex {x}. Thus, for ǫ small enough, B(x, ǫ) ∩ S is included in T. This contradicts the ILC of S at x. See Figure 7 (a).
(ii) On the other hand, the possibility T^1 ∩ T^2 ≠ {x} also leads to a contradiction. As the vertices of T_n^1 and T_n^2 tend to the same point, we would necessarily have that, with probability one, y_n (or x_n) belongs to the interior of one of the r-circles defining the arcs of T_n^1 (or T_n^2), see Figure 7 (b). However, by construction, the r-circles defining the arcs of T_n^1 and T_n^2 do not intersect S̃_n. This concludes the proof of Lemma 1.
Lemma 2. Under the assumptions of Theorem 6, with probability one,

d_H(S̃_n, S) → 0.    (10)

Proof. Assume that (10) is not true. Then, with positive probability, we have either (i) there exists ǫ > 0 such that for infinitely many n ∈ N there exists x_n ∈ S̃_n with δ_S(x_n) > ǫ, or (ii) there exists ǫ > 0 such that for infinitely many n ∈ N there exists x_n ∈ S with δ_{S̃_n}(x_n) > ǫ. First, since S is r-convex, S_n ⊂ S with probability one. Therefore S̃_n ⊂ S_n ⊂ B(S, ǫ) for all ǫ > 0, and we cannot have (i).
With regard to (ii), let {x_n} be a sequence of points in S. Since S is compact, there exists a convergent subsequence, which we will denote again by {x_n}. Let x ∈ S be the limit of {x_n} and let y ∈ int(S) be such that ‖x − y‖ < ǫ/2; note that the existence of such a y follows from the ILC property. Then, there exists ǫ_0 > 0 (ǫ_0 ≪ r) such that B(y, ǫ_0) ⊂ S. Using (a) in Theorem 6 and a reasoning similar to that of (5) we get that, for n large enough, B(y, ǫ_1) ⊂ S_n for some ǫ_1 < ǫ_0. Finally, for n large enough, That is, (ii) cannot be true either. This concludes the proof of Lemma 2.

Now, in view of Lemmas 1 and 2, the assumptions of Theorem 5.9 in Federer [13] are fulfilled (see also Remark 4.14 in that paper). Theorem 5.9 in Federer [13] essentially establishes that S has positive reach and that the curvature measures are continuous with respect to d_H (see Remark 5.10 in Federer [13]). In particular we obtain that Φ_{d−1}(S̃_n, K) → Φ_{d−1}(S, K) a.s. for any closed ball K such that S ⊂ K. Using Remark 5.8 in Federer [13] and S̃_n ⊂ S we get that Φ_{d−1}(S̃_n, K) = Φ_{d−1}(S̃_n, K ∩ ∂S̃_n) = Φ_{d−1}(S̃_n, ∂S̃_n) and also Φ_{d−1}(S, K) = Φ_{d−1}(S, ∂S). The proof of Theorem 6 (b) concludes noting (see Remark 1) that L(S_n) = L(S̃_n) = Φ_{d−1}(S̃_n, ∂S̃_n) and L(S) = Φ_{d−1}(S, ∂S).
Remark 4. This result provides, using stochastic methods, a partial converse of Proposition 1: we prove that r-convexity implies positive reach for ILC sets in R 2 . Thus we also get a partial answer to Borsuk's question (is an r-convex set locally contractible?) mentioned in Remark 2, since from Federer [13], Remark 4.15, any set with positive reach is locally contractible.
Applications to the excess mass approach
Typically, the results on uniform convergence, of Glivenko-Cantelli type, are useful in order to establish the consistency of set estimators which are defined as maximizers of appropriate functionals of the empirical process. An interesting example in set estimation is the excess mass approach, proposed by Hartigan [15] and Müller and Sawitzki [20] and further developed by Polonik [25] and Polonik and Wang [26], among others.
The basic ideas of this method can be simply described as follows: given λ > 0, denote by P an absolutely continuous distribution with respect to the Lebesgue measure µ and let f denote the corresponding µ-density. Define the excess mass functional on the class B(R d) of Borel sets by

H_λ(A) = P(A) − λµ(A).

This suggests a method to define an estimator of S(λ) = {f ≥ λ} which can incorporate some shape restrictions previously imposed on this set. Let us assume that S(λ) belongs to some given class of sets A (for example, the class of compact convex sets or the class of compact r-convex sets in R d). As S(λ) is the maximizer on A of the unknown functional H_λ(A), we could define S_n(λ) as the maximizer on A of the empirical excess mass functional

H_{n,λ}(A) = P_n(A) − λµ(A).

Proposition 3. Let A be the class of compact sets A with reach(A) ≥ r included in a given ball B(0, R). Given a sample X_1, . . . , X_n from an absolutely continuous distribution P with density f in R d, let S_n(λ) denote the empirical level set estimator defined by S_n(λ) = argmax_{A ∈ A} H_{n,λ}(A). Assume that the level set S(λ) = {f ≥ λ} belongs to A. Then,
(a) S_n(λ) is a fully consistent estimator of S(λ).
Proof. (a) For simplicity, denote H_{n,λ} = H_n, H_λ = H, S_n(λ) = S_n and S(λ) = S. First, note that, by definition, H is a d_µ-continuous functional and, from Theorem 2, it is also continuous with respect to d_H. We have, with probability one,

sup_{A ∈ A} |H_n(A) − H(A)| = sup_{A ∈ A} |P_n(A) − P(A)| → 0,    (11)

since, from Theorem 5, A is a P-uniformity class. From Theorem 3 and Proposition 2, to prove the full consistency we only need to establish the d_H-convergence

d_H(S_n, S) → 0, a.s.    (12)

Now, let us take a value ω ∈ Ω (Ω is the common probability space on which the random variables X_i are defined) for which d_H(S_n(ω), S) → 0 does not hold but (11) holds. Then, as A is compact, we should have d_H(S_n(ω), T) → 0 for some subsequence {S_n(ω)} of S_n = S_n(ω) and for some T ≠ S, T ∈ A. Then, (11) and the continuity of H imply that H(T) = H(S). However, the convergence to T ≠ S is not possible, since S ∈ A is the (unique) maximizer of H in A. Thus we should have d_H(S_n(ω), S) → 0 for all ω such that (11) holds. We conclude that (12) must hold with probability one.

(b) Follows again directly as a consequence of (a) together with Remarks 4.14 and 5.8 and Theorem 5.9 in Federer [13].
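As a toy illustration of the empirical excess mass approach (ours, and not part of the proposition: the optimisation is carried out over a small family of balls rather than over the class A), one can search a parametric family for the set maximising H_{n,λ}.

```python
import numpy as np

def empirical_excess_mass(sample, lam, indicator, volume):
    """H_{n,lambda}(A) = P_n(A) - lambda * mu(A)."""
    return np.mean(indicator(sample)) - lam * volume

def best_ball(sample, lam, centers, radii):
    """Maximise the empirical excess mass over a family of closed balls
    (a crude stand-in for the class A of the proposition)."""
    best, best_val = None, -np.inf
    for c in centers:
        for r in radii:
            ind = lambda x, c=c, r=r: np.linalg.norm(x - c, axis=1) <= r
            val = empirical_excess_mass(sample, lam, ind, np.pi * r ** 2)
            if val > best_val:
                best, best_val = (c, r), val
    return best, best_val

rng = np.random.default_rng(2)
# bimodal sample in the plane: the lambda-cluster need not be convex or connected
sample = np.vstack([rng.normal([0.0, 0.0], 0.3, (200, 2)),
                    rng.normal([2.0, 0.0], 0.3, (200, 2))])
centers = [np.array([x, 0.0]) for x in np.linspace(-0.5, 2.5, 13)]
print(best_ball(sample, lam=0.2, centers=centers, radii=[0.3, 0.6, 0.9]))
```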
Our main point in this subsection is to show that our P -uniformity results fit in the framework developed by Polonik [25]. This author obtains results of consistency (uniform in λ) of level set estimators, of type sup λ d µ (S n (λ), S(λ)) → 0, a.s. One of his key assumptions is that the involved classes C are Glivenko-Cantelli classes. To be more specific, the main result in Polonik's paper (which is Theorem 3.2) can be applied to our new P -uniformity class of sets with reach ≥ r.
Let us first briefly recall the basic assumptions for the result in Polonik [25].
(A1) For all λ ≥ 0 the excess mass functional H(C) = ∫_C f dµ − λµ(C) and its empirical counterpart H n (C) = P n (C) − λµ(C) attain their maximum values on the class C at some sets denoted by Γ(λ) and Γ n (λ), respectively.
(A2) The underlying density f is bounded in R d .
(A3) C is a GC-class of closed sets with ∅ ∈ C.
Note that the maximizer over C, Γ(λ), of H(C) will only coincide with the level set S(λ) = {f ≥ λ} whenever S(λ) ∈ C. For this reason Γ(λ) is called the generalized λ-cluster. Now, let us denote by g* the "measurable cover" of any function g (which of course coincides with g when it is measurable). The main result in Polonik's paper is as follows.

Theorem 7. (Polonik (1995, Th. 3.2)) Let Λ ⊂ [0, ∞). Suppose that, in addition to (A1)-(A3), the following two conditions hold: (i) For a distribution Q in R d with strictly positive density, the space (C, d Q ) is compact.
Then, our point is that this result applies directly to the case in which C is the class A considered in Theorem 5, plus the empty set. This is made explicit in the following statement.
Theorem 8. Under assumptions (A1), (A2) and (ii) of Theorem 7, the conclusion (13) is valid if we take C = A ∪ {∅}, where A is the class of compact subsets of a given ball B(0, R) with reach ≥ r.
Proof. We only need to prove that conditions (i) and (A3) hold in this case. The validity of (i) follows easily since A is d H -compact from Theorems 3 and 5, and therefore, from Theorem 2, it is also d Q -compact for any absolutely continuous distribution Q. Also C = A ∪ {∅} is d Q -compact.
As for the condition (A3) it also holds as a direct consequence of Theorem 5.
Note that none of the typical examples of GC-classes (convex sets, ellipsoids, balls,...) allows us to consider multimodal densities. In this sense, Theorem 8 can be seen as an extension of uniformity results in the excess mass approach beyond the realm of convex sets. The class of sets with reach bounded from below is much larger and includes nonconnected members which are now candidates to be considered as possible level sets for multimodal densities.
Appendix: some numerical comparisons
In Theorem 6 we have proved that the boundary length L(S) of an r-convex set S ⊂ R 2 can be consistently estimated in a plug-in way by L(S n ), where S n is the r-convex hull of the sample.
Another estimator of L(S) (in fact of L 0 (S)) has been recently proposed by Jiménez and Yukich [16], based on the use of Delaunay triangulations. This estimator does not rely on any assumption of r-convexity but requires the use of sample data inside and outside the set S. The numerical results reported by Jiménez and Yukich [16] show a remarkable performance of their sewing based estimator (denoted by L s n (S)) which in fact outperforms that proposed in Cuevas et al. [9].
Our plug-in estimator is not directly comparable with L s n (S), since the required sampling models are different in the two cases. Moreover, L(S n ) incorporates the shape assumption of r-convexity on the target set S. Nevertheless, it is still interesting to analyze to what extent the use of the r-convexity assumption in L(S n ) could improve the efficiency of the estimation.
We have checked this through a small simulation study. We have considered two r-convex sets S, defined as the domains inside two well-known closed curves: the Catalan's trisectrix (as in Jiménez and Yukich [16]) and the astroid. See Figure 8. The "true" maximal value of r in the first set is r = ∞ (since it is convex). For the second set r is close to 1. In practice, these values are not known, so one must assume them as a model hypothesis, keeping in mind that small values of r correspond to more conservative (safer) choices; recall that the class of r-convex sets increases as r decreases. The considered sample sizes are n = 5000, 10000. In the case of the estimator in Jiménez and Yukich [16], the sample observations are uniformly generated on a square containing the domain S and it is assumed that we know (without error) whether a sample point belongs to S or not. In the case of our estimator L(S n ), all the observations are drawn from a uniform distribution on S. The results are summarized in Table 1. The reported values in the columns "mean" and "std" correspond to averages over 1000 runs. The true values of L(S) are 20.7846 (for the Catalan's trisectrix) and 6 (for the astroid). The outputs show a better behavior of L(S n ), especially in terms of variability.

Table 1: Mean and standard deviation of the sewing-based estimator L s n (S) and the plug-in estimator L(S n ), where S n is the r-convex hull of the sample. The estimator S n is computed for different values of r. The reported values correspond to 1000 runs with sample sizes n = 5000 and n = 10000.
"year": 2012,
"sha1": "2aac2a2e72a24401e5eb103036c9dc5c2396c6d1",
"oa_license": null,
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A6AE8535D52961D2A9A4A2521F82C23F/S0001867800005619a.pdf/div-class-title-on-statistical-properties-of-sets-fulfilling-rolling-type-conditions-div.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "2aac2a2e72a24401e5eb103036c9dc5c2396c6d1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Undergraduate Student Concerns in Introductory STEM Courses: What They Are, How They Change, and What Influences Them
Introductory STEM courses represent entry points into a major, and student experiences in these courses can affect both their persistence and success in STEM disciplines. Identifying course-based student concerns may help instructors detect negative perceptions, areas of struggle, and potential barriers to success. Using an open-response survey question, we identified 13 common concerns expressed by students in introductory STEM courses. We converted these student-generated concerns into closed-ended items that were administered at the beginning and middle of the semester to students in 22 introductory STEM course sections across three different institutions. Students were asked to reflect on each item on a scale from very concerned to not concerned. A subset of these concerns was used to create a summary score of course-based concern for each student. Overall levels of student concern decreased from the first week to the middle of the semester; however, this pattern varied across different demographic groups. In particular, when controlling for initial concern and course grades, female students held higher levels of concern than their peers. Since student perceptions can impact their experiences, addressing concerns through communication and instructional practices may improve students’ overall experiences and facilitate their success.
Introduction
Undergraduate students' experiences in introductory gateway science, technology, engineering, and mathematics (STEM) courses contribute to student retention and overall persistence in STEM majors and careers (Suresh 2006;Watkins and Mazur 2013). Half of college students who intend to graduate with STEM degrees fail to do so within six years of starting college (Eagan et al. 2014). The majority of students who leave STEM majors do not persist past the first year (Seymour and Hewitt 1997). Increasing student retention in STEM majors has been proposed as a key strategy to producing the overall number of graduates required to meet the growing need for a trained workforce (President's Council of Advisors on Science and Technology 2012). Improving student retention rates is also necessary to increase diversity in STEM because retention rates are lower for students from underrepresented groups (Alexander et al. 2009;Beasley and Fischer 2012;Cataldi et al. 2018;Hughes 2018;Riegle-Crumb et al. 2019).
Students cite course experiences including their perceptions of classroom climate, faculty behaviors, and interactions with faculty as reasons to leave STEM majors (Geisinger and Raman 2013;Seymour and Hewitt 1997;Suresh 2006;Watkins and Mazur 2013). One challenge for students may be that they can experience significant shifts in instructional methods between their high school and college STEM courses (Akiha et al. 2018), and students who are experiencing their first semester in college or are first generation have different predictions about classroom instruction when compared to their peers (Brown et al. 2017;Meaders et al. 2019). These factors may contribute to students having negative experiences in their introductory level courses. Consequently, identifying the course-related issues about which students perceive and express concern could lead to ways to improve introductory course experiences for all students.
To date, the undergraduate education literature has largely focused on describing students' adverse experiences in course environments through the lens of anxiety (e.g., Barrows et al. 2012;Chapell et al. 2005;Foley et al. 2017). Anxiety, which can be described as a negative, prospective emotion, is associated with lower college exam grades and retention in STEM courses (Barrows et al. 2012;Bellinger et al. 2015;Chapell et al. 2005;England et al. 2017;Foley et al. 2017;Pekrun et al. 2007). For example, England et al. (2017) found that 16% of students (total n = 327) in three introductory biology sections reported moderately high classroom anxiety, and students with greater anxiety were more likely to self-report lower grades and intent to leave the biology major.
One type of anxiety extensively studied in undergraduate education is test anxiety, or fear of failure during exams, and this research has often included an investigation of gender differences. Studies frequently report that female students have overall higher levels of test anxiety than male students (e.g., Chapell et al., 2005;Harris et al. 2019;Núñez-Peña et al. 2016). With respect to how anxiety relates to performance, there is often an inverse relationship between test anxiety and exam performance (e.g., Harris et al. 2019) and between test anxiety and undergraduate GPA (e.g., Chapell et al., 2005). However, studies show mixed results with respect to how the relationship between anxiety and performance plays out by gender. While female students have overall higher levels of test anxiety, some studies find that high test anxiety similarly affects performance for both male and female students (Chapell et al. 2005;Harris et al. 2019;Seipp 1991) or negatively influences female student performance alone (Salehi et al. 2019). In some cases, test anxiety does not impact performance for either gender (England et al. 2019). These studies highlight the complex interplay between emotional states, student background characteristics, and course performance.
Instructors' actions and course design can potentially contribute to anxiety, even in active learning environments (Eddy et al. 2015;England et al. 2017). An investigation of specific active learning strategies revealed that practices such as call and response, worksheets, clicker questions, and peer discussions caused anxiety for some students (Cooper et al. 2018;England et al. 2017). In addition, a student's demographic background can contribute to how they experience different types of instructional strategies. For example, female students report higher levels of anxiety related to whole-class discussions, and international students report higher levels of anxiety regarding peer discussions (Eddy et al. 2015).
While there are many investigations into undergraduate anxiety, other emotions such as concern are less frequently explored. In this article, we aim to identify the salient course-based concerns that undergraduate students have about their introductory STEM courses. We define "concerns" as sources of apprehension, uncertainty, or difficulty that students perceive to affect their interaction with or success in a course. Concerns can arise early in the semester and can relate to a variety of in-class or out-of-class activities. Furthermore, concerns as we have defined them encompass sources of stress as well as other challenges that do not meet the threshold of anxiety. We focused on concerns rather than anxiety because we wanted to identify a broad list of issues that instructors could act on to improve students' course experiences.
Given the importance of course experiences to student persistence, we sought to investigate concerns based on their potential to highlight course components and challenges that could be addressed by the instructor and potentially improve student outcomes. We focused on introductory courses because these courses have reputations as "weed-out courses" (Mervis 2011) and include experiences that influence students' decisions to remain in STEM (Alting and Walser 2007;Chang et al. 2008;Seymour and Hewitt 1997). Additionally, we explored if any concerns varied between students with different demographics because certain groups leave STEM majors at higher rates than others (Alexander et al. 2009;Beasley and Fischer 2012;Cataldi et al. 2018;Hughes 2018;Riegle-Crumb et al. 2019). Differences in concerns between demographic groups could reveal specific areas that instructors can address to provide students with positive experiences. To describe student concerns in introductory STEM courses, we investigated the following research questions: (1) what concerns do students hold about their introductory courses, (2) how do students' concerns in introductory courses change within a semester, and (3) do concerns differ based on student demographic characteristics?
Identifying the Range of Concerns
To explore course-based concerns, we distributed surveys to undergraduate students in introductory STEM courses (Table 1). In Fall 2017, we conducted pilot surveys during both the first-week and mid-semester in 13 introductory STEM course sections (disciplines surveyed included biology, chemistry, and physics) at two public, research-intensive universities and received 2181 student responses from the first-week and 1920 responses from the mid-semester data collection (Supplemental Table 1). The courses included in the pilot survey were taught by nine different instructors who typically used lecture or interactive lecture. For the interactive lecture, the instructors utilized clicker questions, peer discussion, or small group activities (personal communication and past observations).
Two open-ended questions were included in the pilot survey: (1) "How do you expect the use of class time in [course title] to be different from the [subject] course(s) you took in high school?" and (2) "What concerns, if any, do you have regarding these differences in how class time is used?" We included the first question to focus student responses on differences between their high school and college courses. We performed inductive content analysis on all responses to the second question (Thomas 2006). Two co-authors (coders) AKL and AIM separately read through 100 responses collected at one institution during the first week of class. Together, they developed a list of ideas appearing in those responses, which served as an initial codebook. The coders separately tested this codebook on sets of 50-100 responses cycling through both institutions and the first-week and mid-semester datasets. After each iteration, the coders met to compare results and modify the codebook, and this process continued until the coders constructed a finalized codebook.
Next, the coders calculated a consensus estimate of interrater reliability to determine if they held similar understandings of the codes or if any code definitions required further refinement (Stemler 2004). Each coder independently coded 50 random responses, which included all combinations of institutions and first-week and midsemester surveys. If 90% agreement per code was not reached, the previous steps were repeated.
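As a concrete illustration of this consensus estimate, per-code percent agreement can be computed from two coders' code-by-response matrices. The sketch below (in R, the language used for the study's other analyses) uses simulated codes and responses; the object names and data are illustrative assumptions rather than the authors' actual coding records.

```r
# Minimal sketch: consensus estimate of interrater reliability as percent
# agreement per code. coder_a and coder_b are logical matrices
# (responses x codes) marking whether each coder applied each code.
set.seed(1)
codes   <- c("getting_help", "pace_too_fast", "knowing_what_to_study")
coder_a <- matrix(sample(c(TRUE, FALSE), 50 * 3, replace = TRUE),
                  nrow = 50, dimnames = list(NULL, codes))
coder_b <- coder_a
flip    <- sample(length(coder_b), 10)   # simulate 10 disagreements
coder_b[flip] <- !coder_b[flip]

agreement <- colMeans(coder_a == coder_b)   # proportion of responses in agreement, per code
round(agreement, 2)
names(agreement)[agreement < 0.90]          # codes that would trigger another refinement round
```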
A single student response could be coded for more than one concern. Ten percent of student responses were coded as "other" because their concerns varied widely from the codes, the responses were challenging to interpret, or the responses were not on topic (e.g., criticizing the instructor instead of describing a concern). The coders reviewed all responses coded as "other" together to make sure no potential codes were overlooked, which resulted in two additional codes (concerns due to large class sizes and a lack of efficiency in how class time was used) that were less common but shared among students. These responses were coded to consensus by both coders.
Final Survey Content
The final codebook included 17 codes, 13 of which we modified to become closed-ended items for the final survey implemented in Fall 2018 (Supplemental Table 2). The codes were adapted from their original wording to use language that would be accessible to students and align with the question format. We excluded codes that captured student responses that explicitly described their lack of concerns or their preferences for college courses compared to their high school courses. Consequently, the final list of 13 items only included student-generated concerns and represented both common and uncommon ideas from the pilot survey (Supplemental Appendix A).
In the final survey (Table 1), we asked students to indicate their level of concern for each of the 13 items through a 3-point Likert scale question, with 0 being "Not Concerned," 1 being "Somewhat Concerned," and 2 being "Very Concerned." The question layout included three boxes for students to drag and drop each item into the bin that best captured their level of concern (Supplemental Appendix B). The surveys were built and distributed using Qualtrics (Provo, UT). Each survey included two additional short answer questions asking students to identify their top course-based concern out of the 13 items and explain why it was their top concern. At the end of each survey, we included seven optional demographic questions (Supplemental Appendix B).
Final Survey Administration
During Fall 2018, we distributed final surveys online, out-of-class during the first week and mid-semester to students in 22 introductory STEM course sections at three research-intensive universities. These courses were taught by faculty participating in a professional development group that met on a monthly basis to explore ways to help students with the transition from high school to college STEM courses. Student responses were voluntary, and faculty varied in their distribution of participation credit points as incentives for participation. The course subjects in our study broadly covered STEM disciplines and included biology, chemistry, computer science, earth science, ecology and environmental science, economics, engineering, forestry, mathematics, physics, and statistics (National Science Foundation, National Center for Science and Engineering Statistics 2015). The total course enrollment was 3916 students.
Data Analysis
We received 2436 student responses for the first-week survey and 1671 responses from the mid-semester survey. We removed student responses from the final dataset according to the steps detailed in Supplemental Figure 1. Broadly, for each survey we removed responses from students that (1) were not complete survey responses; (2) were responses from students who filled out the survey for a single course multiple times (after keeping their first response); or (3) did not include responses to all of the demographic questions. We then matched students' responses for those who responded to both the first-week and mid-semester survey by name and student ID. For matched students who answered both surveys, we removed those who changed their answers for demographic questions from the first-week to the mid-semester survey from the dataset.
If a matched student left a demographic question blank on one survey, but answered it on the other survey, their answer was filled in to match on both surveys. After matching student responses, we removed responses from students who did not rank their level of concern for all 13 items and from students who did not receive a final course grade. This processing left 650 student responses. Demographic information about student participants is included in Table 2.
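A minimal sketch of this de-duplication and matching logic is given below; the column names, toy records, and base-R approach are assumptions for illustration, not the processing scripts used in the study.

```r
# Toy exports standing in for the survey data (columns are illustrative)
first_week_raw <- data.frame(
  name = c("A", "A", "B", "C"), student_id = c(1, 1, 2, 3),
  course = "BIO101", timestamp = c(2, 1, 1, 1), gender = c("F", "F", "M", "F"))
mid_semester_raw <- data.frame(
  name = c("A", "B"), student_id = c(1, 2),
  course = "BIO101", timestamp = c(1, 1), gender = c("F", "M"))

clean_survey <- function(d) {
  d <- d[order(d$timestamp), ]                      # earliest submission first
  d[!duplicated(d[, c("student_id", "course")]), ]  # keep first response per student/course
}

# Match first-week and mid-semester responses on name and student ID
matched <- merge(clean_survey(first_week_raw), clean_survey(mid_semester_raw),
                 by = c("name", "student_id", "course"), suffixes = c("_wk1", "_mid"))

# Drop matched students whose demographic answers changed between the two surveys
matched <- matched[matched$gender_wk1 == matched$gender_mid, ]
matched
```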
Top Concern Question Analysis
In the final surveys, students were additionally asked to identify their top concern by submitting a written response. Two authors, AKL and JKS (coders), performed deductive content analysis using the 13 items as the codebook to characterize students' top concern. In cases where responses listed more than one top concern, the response was deleted from the data set since it did not follow the prompt. Some responses contained top concerns that were not from the 13 items. These responses were coded as "other." After familiarizing themselves with the codebook, the coders independently coded 90 responses (15 per survey per institution) and compared results while discussing any differences. This process was done iteratively until all codes reached 90% agreement. JKS coded the remaining responses, consulting with AKL as necessary for unclear cases. Lastly, JKS and AKL returned to all responses coded as "other" to determine if there were common ideas; however, any new concern appeared in only 10 or fewer responses in either the first-week or mid-semester datasets.
Concern Index
To explore general levels of student concern over time, we sought to condense multiple items into one score representing overall student concern. We calculated the correlation coefficients between the 13 items during the final first-week and mid-semester surveys and identified seven items with moderate (r ≥ 0.3) correlations. These seven items also loaded onto the first component of a Principal Components Analysis (PCA) for the first-week and mid-semester survey data (Supplemental Appendix C). These items include (1) knowing what to study, (2) the course being too difficult, (3) the pace of the course being too fast, (4) being expected to do too much independent learning outside of class, (5) having the necessary skills/background to succeed in the course, (6) receiving too few in-depth explanations, and (7) being able to get help. Since students scored each of these seven items as 0 (not concerned), 1 (somewhat concerned), or 2 (very concerned), we collapsed the seven items into one cumulative score with a potential range between 0 and 14. This cumulative raw score represented their Concern Index, or CI.
We also explored adjusting student responses to each of the seven items by multiplying each response by its corresponding loading scores onto the first component (Supplemental Appendix C). We summed these adjusted responses to create an adjusted Concern Index (Adj.CI). We found a strong correlation between the adjusted and non-adjusted CIs (r = 0.99, Supplemental Appendix C), so for ease of interpretation, we used the non-adjusted CI values for further analyses. Finally, we ran reliability analyses of student responses to the seven CI items in R using the psych package (R Core Team 2019; Revelle 2018). Students responded consistently across these seven items on both the first-week (Cronbach's α = 0.76) and mid-semester surveys (Cronbach's α = 0.81).
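A minimal sketch of the Concern Index calculation and the reliability check is shown below, assuming a data frame with one column per item scored 0-2; the item labels, toy responses, and placeholder component loadings are illustrative and do not reproduce the study's values.

```r
library(psych)  # for Cronbach's alpha, as in the reliability analysis described above

ci_items <- c("study", "difficulty", "pace", "independent_learning",
              "skills_background", "explanations", "getting_help")

# Toy responses: 20 students scoring each of the seven items 0, 1, or 2
set.seed(2)
resp <- as.data.frame(matrix(sample(0:2, 20 * 7, replace = TRUE),
                             nrow = 20, dimnames = list(NULL, ci_items)))

resp$CI <- rowSums(resp[, ci_items])   # raw Concern Index, potential range 0-14

# Adjusted CI weights each item by its loading on the first principal component
loadings <- rep(1 / sqrt(7), 7)        # placeholder loadings, not the study's values
resp$Adj.CI <- as.numeric(as.matrix(resp[, ci_items]) %*% loadings)

alpha(resp[, ci_items])                # internal consistency of the seven items
```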
Statistical Analyses and Data Visualization
To assess the impact of demographic characteristics on total levels of student course-based concern, we followed the recommendations outlined in Theobald (2018) and used automated model selection to identify separate models for the first week and mid-semester (Supplemental Appendix D). The demographic variables considered during model selection were English as a first language, first-generation student status, first-semester on a college campus status, gender, international student status, transfer student status, and underrepresented racial/ethnic minority student status.
For the mid-semester model selection, we also tested three additional variables: students' first-week CI as a predictor variable to account for baseline levels of student concern, final standardized course grades as a predictor variable to account for the impact of performance in the course, and the percentage of female students enrolled in students' courses to test for the impact of gender balance within a course. We included students' first-week CI as a predictor because first-week CI and mid-semester CI showed a moderate correlation (r = 0.58). We included final standardized grades because in our dataset 20 out of 22 course sections (or 638 out of 650 students) had administered exams by the mid-semester time point and student knowledge of their course performance could impact student concerns. In order to account for the differences in how each course weighted mid-semester exams compared to other course assignments, we included final standardized grades as a predictor variable for overall course performance. Finally, we included the percentage of female students enrolled in students' courses, obtained from each university's registrar, as a predictor because gender balance varied across courses, with the percent of female students ranging between 9% and 64% (Supplemental Figure 2). We also included an interaction between gender and the percentage of female students enrolled to test for potential differential impacts of gender balance on male and female students.
Across the three universities in this study, there were two different GPA scales, with one university using a 4.3 scale and the others using a 4.0 scale. To account for grading scheme differences across universities, we standardized students' final course GPAs using z-scores, which then represented the distance between students' raw final grades and the population mean from each university in units of standard deviation. We calculated z-scores using the formula z = (X − μ) / σ, where X is the score of interest, μ is the class mean score, and σ is the standard deviation.
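For illustration, this grouped standardization can be written as a one-line z-score within each reference group (here, university); the data frame and column names are assumptions.

```r
# Standardize final course GPA within each university: z = (X - mean) / sd
grades <- data.frame(university = c("U1", "U1", "U1", "U2", "U2"),
                     final_gpa  = c(3.2, 2.8, 4.0, 3.9, 2.5))
grades$z <- ave(grades$final_gpa, grades$university,
                FUN = function(x) (x - mean(x)) / sd(x))
grades
```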
We compared models with all combinations of fixed effects using the MuMIn R package, which evaluates models using measures of Akaike's information criterion (AIC) (Bartoń 2019). All modeling was conducted using the R statistical software (v.3.3.1) and the lmer and lmerTest packages (Bates et al. 2015;Kuznetsova et al. 2017; R Core Team 2019).
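A sketch of this model-selection step is given below using the packages named above; the simulated data, the reduced set of predictors, and the random intercept for course section are illustrative assumptions rather than the exact specification reported in Supplemental Appendix D.

```r
library(lme4)      # lmer
library(lmerTest)  # p-values in summaries
library(MuMIn)     # dredge() for all-subsets model comparison

# Toy dataset: one row per student with CI, a few demographic predictors, and course section
set.seed(3)
n <- 400
d <- data.frame(
  CI = sample(0:14, n, replace = TRUE),
  gender = sample(c("F", "M"), n, replace = TRUE),
  first_generation = rbinom(n, 1, 0.3),
  first_semester = rbinom(n, 1, 0.6),
  section = factor(sample(1:10, n, replace = TRUE)))

# Global model; na.fail is required by dredge()
full_model <- lmer(CI ~ gender + first_generation + first_semester + (1 | section),
                   data = d, REML = FALSE, na.action = na.fail)

model_set  <- dredge(full_model)   # fits all combinations of fixed effects
                                   # (ranked by AICc by default; rank = "AIC" gives plain AIC)
best_model <- get.models(model_set, subset = 1)[[1]]
summary(best_model)
```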
We constructed graphs and figures using the ggplot2, RColorBrewer, and ggalluvial packages in R (Brunson 2019;Neuwirth 2014;Wickham 2016). For construction of the Sankey diagrams, we designated students with final standardized grades (i.e., z-scores) between −0.5 and 0.5 SDs from the mean as receiving "medium grades" and students above 0.5 SD or below −0.5 SDs from the mean as receiving "higher grades" and "lower grades," respectively. In order to categorize CI, we split the data into tertiles and designated students with a CI between 0 and 4 as "low concern," 5-9 as "medium concern," and 10-14 as "high concern."
Results
What Are the Types of Course-Based Concerns Students Have?
In the pilot survey, introductory STEM students responded to an open-ended question during the first week and mid-semester asking them to identify what, if any, concerns they had regarding the differences in how class time was used in their high school and college STEM introductory courses. Inductive content analysis of these answers (first-week n = 2181, mid-semester n = 1920) identified 13 common course-based concerns, which included concerns related to the structure and content of the course as well as students' backgrounds and instructional preferences. Nearly all of the survey responses focused on general course-based concerns rather than concerns about the difference in class time between high school and college. The most common course-based concern from the pilot first-week survey was not getting help or knowing where to get help. Students with this concern described worry over not having enough personal access to their instructors, falling behind and not being able to find help, and not getting immediate feedback. One student responded, "Sometimes I feel that I will get behind in class and feel that I won't have anyone to help," while another wrote that they were "concerned that we won't be able to get immediate feedback on practice problems." Additional representative student responses are included in Supplemental Table 2.
The open-response results from the pilot survey helped to identify types of course-based concerns held by students. However, the open-ended response format resulted in variation in the types of responses we received from students, with some students only submitting one response, and some students submitting multiple concerns. Furthermore, students may have had additional concerns beyond the 1-2 sentences included in their responses. To compare levels of concern held by students across all of the identified course-based concerns, we modified the survey for the Fall 2018 final implementation as described in the methods and in Supplemental Appendix B. Converting the open-response question to a closed-response format allowed us to identify which course-based concerns were the most common and if students changed in their levels of concerns about these items over the course of the semester.
We used stacked bar charts to display the relative frequency of student levels of concern for each of the 13 items. Knowing what to study was an item that students often expressed being "Very Concerned" about, with 34% of students reporting this level during the first week of the semester and 26% at mid-semester (Fig. 1). In addition, knowing what to study was also the item that students most frequently cited as their "top concern" with 10.3% citing it at the start of the semester and 11.1% at the mid-semester time point (Fig. 2).
Fig. 1 Percent of student responses at each level of concern for the 13 items, ordered by percent of students citing each level of concern on the final first-week and mid-semester surveys
Fig. 2 Top concerns held by students during the final first-week and mid-semester surveys. Percent of student responses who cited each of the 13 items as their top concern, ordered by percent of students citing each level of concern during the first-week survey. Students who reported not having any concerns, students who did not submit a response, students who identified multiple top concerns, and students who wrote responses other than the 13 categories are excluded from the figure. Concern labels are abbreviated, full labels are provided in Supplemental Appendix C
How Do Student Course-Based Concerns Change within a Semester?
We explored how student course-based concerns changed over the semester, across multiple levels of resolution. For approximately half of their selections, students did not change in their levels of reported concern about individual items during the semester, but among students who shifted in their levels, more students decreased in their levels of concern (Table 3). There were two items for which more students reported being very or somewhat concerned at mid-semester compared to the first week: being able to pay attention in class and having enough practice problems (Fig. 1). While at the mid-semester time point knowing what to study was still the top concern, being able to pay attention in class was the second most common concern from students (Fig. 2).
We used the Concern Index (CI, see Methods) as a metric to compare if overall course-based concern levels change between the first-week and mid-semester surveys. During the first week, the mean CI reported by students was 6.5 (or 46.4% of total possible concern; Fig. 3). Overall student concerns decreased between the beginning and middle of the semester. Mid-way through the semester, the mean CI reported by students was 5.1 (or 36.5% of total possible concern), which was significantly lower (p < 0.001) than mean student CI values at the beginning of the semester.
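The comparison reported here corresponds to a paired Wilcoxon signed rank test on matched first-week and mid-semester CI values; a minimal sketch with made-up scores:

```r
# Paired comparison of first-week and mid-semester Concern Index (as in Fig. 3)
set.seed(4)
ci_first <- sample(0:14, 100, replace = TRUE)
ci_mid   <- pmax(ci_first - sample(0:3, 100, replace = TRUE), 0)  # toy mid-semester decline
wilcox.test(ci_first, ci_mid, paired = TRUE)   # Wilcoxon signed rank test
c(mean_first = mean(ci_first), mean_mid = mean(ci_mid))
```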
Do Concerns Differ Based on Student Demographic Characteristics?
Given the range of concern levels held by students during the first-week and mid-semester surveys, we explored if CI findings varied by student-level and course-level demographic variables (Supplemental Appendix D). The best-fitting model for concerns during the first week included first-generation student status, first-semester on a college campus status, and gender as predictors, and explained 5% of the variation in the first-week CI data (Table 4). While the amount of variation explained by the model is low, this model was significant, indicating that these variables had a significant relationship with the CI of the students during the first week of the semester. According to the model, first-generation, female, first-semester students had a CI of 8.13. This value is the equivalent of reporting "very concerned" for four out of seven items, or a mix of "somewhat" and "very concerned" for more than four items. The effects of being continuing-generation, male, and a returning student were all negative and significant, indicating that these demographic groups were associated with a decrease in first-week CI. The best-fitting model predicted that a continuing-generation, male, returning student would report a CI of 5.21.
We also determined if changes in mid-semester CI were similar for students from different demographic groups. We used model selection to identify if any demographic variables explained variation in mid-semester student CI and included first-week CI to control for students' initial levels of concern (Supplemental Appendix D). Additionally, we included final standardized grades (z-scores, see Methods) from students as a predictor for each model to account for differences in course performance.
Table 3 Percent of students who maintained their levels of concern, increased in their levels of concern, or decreased in their levels of concern about the 13 items (excerpt: for the "Study" item, 52% maintained, 17% increased, and 31% decreased)
Fig. 3 Boxplot showing distribution of student concern index (CI) during the first-week and mid-semester surveys. Wilcoxon signed rank test for significance with a p value <0.001 indicated by ***. Diamonds represent mean values and horizontal lines on each bar represent median values
Statistical significance is indicated by *, p < 0.05; **, p < 0.01; and ***, p < 0.001
The best-fitting model for mid-semester CI explained 41% of the variation in student responses. According to the model, a female, first-generation, domestic student who grew up speaking English at home, with an average CI from the first week of the semester, who received the average final grade at their institution, and who was enrolled in a course with the average percentage of female students (39% female) would report a CI of 5.84 (Table 5). Although first-semester on a college campus was significant during the first week, it was not part of the best-fitting model at the mid-semester point. A one-unit increase in initial CI is associated with a higher (0.58 unit) mid-semester CI, reflecting that while overall student concerns decrease during the semester, students' initial levels of concern can predict their levels of concern midway through the semester. Similarly, a one-standard deviation increase in final grade was associated with a 0.78 unit decrease in mid-semester CI. In other words, students who ultimately performed better than their peers in the course were less concerned than their peers midway through the semester. Four demographic variables were included in the best-fitting model for the mid-semester time point after controlling for initial CI and final standardized grades (Table 5). English spoken at home and international student status were included but were not significant. The effect of being a student who spoke English at home lowered CI relative to the intercept, while the effect of being an international student raised CI relative to the intercept. The low number of students who did not speak English at home (n = 38) as well as international students in our sample (n = 20) and large confidence intervals for these students requires further investigation with a larger sample size. First-generation status was similarly included in the mid-semester model, but it was also not a significant predictor of CI. One possible explanation for this result is that the effect of being a first-generation student was masked by differences in initial concern or final standardized grades. In our dataset, we found that on average first-generation students received lower final standardized grades than their peers (Supplemental Figure 3).
On the other hand, the effect of being a female student was significant after controlling for higher initial concern, final standardized grades, and percentage of female students in the course. A female student who had reported the same level of initial concern as a male peer, received the same final standardized grade, and was enrolled in a course with the average percentage of female students (39%) reported a 0.66 unit higher level of mid-semester concern than a male peer (Table 5). This outcome is notable because female students had no significant differences in raw or standardized final grades from male students (Supplemental Figure 3). Taken together, the results from these models suggest that initial course-based concern and final course performance impact mid-semester student course-based concern, but even when controlling for these variables, female students still have significantly more total concern than their peers midway through the semester.
The models identified that gender was the only significant demographic variable with an effect on mid-semester CI. Therefore, we used Sankey diagrams to provide a visual representation of students' changes in initial CI to mid-semester CI disaggregated by gender (Fig. 4a). The Sankey diagrams confirmed that male students hold consistently lower levels of concern during both the beginning and middle of the semester than their female peers. While the overall percent of students increasing in their levels of concern is smaller than the percent of students decreasing in their concern (indicated by flow size in Fig. 4a), a larger percentage of female students increased in their concern compared to male students.
The percentage of female students enrolled in a course also affected students' CI. A 1% increase in the percentage of female students enrolled in a course was associated with a 0.048 unit decrease in CI for female students (Table 5). However, the interaction of gender with the percentage of female students showed that female student concerns decreased more sharply than male student concerns as the percent of female students in a course increased. The difference in rate is calculated by starting with the interaction (0.033) and adding the effect of the percentage of female students enrolled in a course (−0.048), which results in a 0.015 unit decrease in CI for male students per 1% increase in female student enrollment. The model predicts a steeper decrease in CI for female students when compared to male students as the percentage of female students enrolled increases (Supplemental Figure 4).
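The two slopes implied by this interaction can be recovered directly from the coefficients quoted above; a two-line check (values taken from the text, variable names assumed):

```r
pct_female_effect <- -0.048   # change in CI per 1% increase in female enrollment (female students)
interaction_term  <-  0.033   # adjustment to that slope for male students
slope_female <- pct_female_effect                     # -0.048 per 1%
slope_male   <- pct_female_effect + interaction_term  # -0.015 per 1%
c(female = slope_female, male = slope_male)
```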
Our regressions identified that course performance had a significant effect on midsemester CI. The Sankey diagrams also revealed interesting patterns for levels of concern based on final standardized grades (Fig. 4b). Students who received lower final standardized grades held higher levels of concern at the beginning of the semester, and the majority of them did not decrease in their levels of concern. On the other hand, students who received higher final standardized grades in the course were initially less concerned than their peers and a larger proportion of them decreased in their levels of concern.
We also examined whether any individual concern was more likely to correlate with course performance, because studies have shown that students' perceived course difficulty is inversely related to course performance (England et al. 2019). Each of the concerns in our study is negatively correlated with final standardized grades but the overall CI has the strongest correlation (Supplemental Table 3), indicating that no single concern is driving the connection with course performance in our dataset.
Discussion
We described concerns held by undergraduate students in introductory STEM courses, observed if those concerns changed over the course of the semester, and assessed if there were differences in concerns based on student characteristics such as gender or course grades. We identified a broad range of course-based issues that may be leveraged to improve undergraduates' experiences in introductory STEM courses ( Figs. 1 and 2). Students' total concern level, measured using the Concern Index (CI), was used to investigate demographic differences in concern and the relationship between student concern and course performance (Tables 4 and 5, Fig. 4).
What Concerns Do Students Hold about their Introductory Courses?
Students in introductory STEM courses expressed a variety of course-based concerns, which we further quantified using closed-ended survey items. For every item, there were students who reported being concerned, indicating that these items resonated with students across three campuses and in many different introductory STEM courses (Fig. 1). The list of concerns included topics that may be actionable by the instructor, such as those related to course structure or pedagogy (e.g., pace of the class). There were also concerns related to students' incoming preparation and skills (e.g., having the necessary skills/background to succeed in this course), which may indicate areas that could be covered early in the semester to help students with the transition from high school to college STEM courses or that could be provided through supplemental instruction programs.
How Do Students' Concerns in Introductory Courses Change within a Semester?
We used CI as a metric for students' overall level of concern and found that student concern went down from the first week to mid-semester (Table 3 and Fig. 3). When investigating the range of concerns (Fig. 1) and students' top concerns (Fig. 2), the highest reported concern during the first week and mid-semester time point was not knowing what to study. These results may indicate that students were not developing study skills over the course of the semester. Providing students with study strategies early in the course as well as directing them towards specific resources to help them build their study skills may reduce this concern over the semester as well as improve student grades.
Interestingly, at the mid-semester time point, not being able to pay attention for the entire class period rose to become a commonly reported concern (Figs. 1 and 2). Given that undergraduate STEM courses predominantly use lecture, one way to increase student attention could be through the introduction of more active learning (Lane and Harris 2015). Another potential issue related to attention is that students can use electronic devices for a wide array of non-academic purposes, even though they report knowing that technology use can distract them from learning (McCoy 2013;Sana et al. 2013;Tindell and Bohlander 2012). One way to help students pay attention is to couple the use of technology in the classroom to active learning strategies, such as discussion boards and online problem solving (e.g., Barak et al. 2006).
Do Concerns Differ Based on Student Demographic or Course Characteristics?
Our work adds student concern to the growing list of differences between how female and male students experience STEM courses. In our study, female students reported higher levels of concern than male students at the beginning of the semester (Table 4), and their concerns remained higher than male students at the mid-semester point even when controlling for initial concerns, course performance, and the percentage of female students enrolled in their courses (Table 5 and Fig. 4a). This pattern of concern is consistent with that seen for course-related anxiety, as previous work has shown that female students have increased anxiety and test anxiety (e.g., Chapell et al. 2005;Harris et al. 2019;Núñez-Peña et al. 2016;Salehi et al. 2019). Additionally, female students can have lower participation in STEM classes and have different views of success and failure when compared to male students with comparable grades (e.g., Eddy et al. 2014;Freedman et al. 2018;Grunspan et al. 2016;Lowe, 2015;Marshman et al. 2018;Robnett and Thoman 2017). However, our work additionally reveals that as courses increase in their percentage of female students, levels of mid-semester concern decrease for all students and at a higher rate for female students (Table 5, Supplemental Figure 4). This variable was not identified in the first-week model, which could indicate that as the semester progresses, classroom environments differ in courses based on the percentages of female students enrolled. Programs designed to aid and increase representation of women in STEM may want to consider measuring concern levels of female students and ascertaining whether targeted interventions can lower female concerns and impact subsequent performance.
Studying the change in concern across two time points also revealed patterns in the relationship between students' final standardized grades and course-based concerns. Students who ultimately performed below the mean held higher levels of concerns at the first-week and mid-semester time points (Fig. 4b). Since students with lower performance enter the course with higher concern, it may be possible to ask students about their concerns, including the commonly identified concerns in Figs. 1 and 2, and instructors could talk to their students and make adjustments to alleviate top concerns. Instructors may also consider using a concern metric similar to a CI for identifying students who could benefit from early intervention, such as peer tutoring or supplemental instruction (e.g., Batz et al. 2015;Deslauriers et al. 2012;Lizzio and Wilson 2013;Stanich et al. 2018). Studies about these kinds of supplemental resources often report evidence of effectiveness for improving student performance and retention. However, do such supplemental resources also impact student anxiety and concern? Combining course performance with anxiety and concern measures may also help advance research on course interventions for struggling students.
Conclusions and Future Directions
In this study, we measured course-based concerns, but there are many other factors that may affect students. For example, students likely also hold concerns outside of the course that have potential to impact their performance, such as those about financial resources. Furthermore, our work focused on undergraduates at research institutions, and students at other institution types (e.g., community colleges, regional comprehensives, or minority serving) may hold different concerns. It is important for future work to explore the concerns of students at other institution types to capture the variety of concerns and assess if different interventions are required to provide those students with positive course experiences.
Future work should also investigate the concerns of international students and students who did not speak English at home growing up. While international student status and growing up speaking English at home were included in the mid-semester model, they were not significant (Table 5). Low numbers of international students and students who are non-native English speakers in our sample could be one reason for this unclear result and indicate that more data are needed to explore these trends. However, combining these results with Eddy et al.'s (2015) finding that international students report higher levels of anxiety related to peer discussions suggests that the concerns and anxiety of international students are an important future direction.
Further investigation is also required to understand the complex interplay between student characteristics, concerns, and success. For example, the relationship between first-generation status, concerns, and grades is complex. First-generation status was identified in the best-fitting model but was not significant when accounting for final standardized grades (Fig. 3), which may be because first-generation students had lower performance than continuing-generation students in our sample, thereby conflating the measures of performance and first-generation status (Supplemental Figure 3). More investigation is required to empirically study first-generation students' experiences and disentangle grades and first-generation status when it comes to concerns.
Our work focused on identifying course-based concerns held by students across STEM disciplines, but future work could compare variation of course-based concerns within disciplines. To do this, student concern data could be collected across numerous courses within the same STEM disciplines. In addition, the relationship between gender and course gender representation with CI in a discipline should also be investigated. Previous work revealed that female students have lower levels of confidence in their mathematical abilities than male students in calculus and are 1.5 times more likely to leave STEM after college calculus (Ellis et al. 2016). Similar trends are seen in physics and computer science (Marshman et al. 2018).
Taken together, this work focused on identifying student concerns across several demographic groups. There are many studies that aim to describe sources of student anxiety or concern, but to date there are few that use measurements of anxiety or concern to assess the impact of specific interventions. Future investigations of the links between student concerns and instructional practices would be beneficial for educators. For example, autonomy-supporting teaching practices such as providing students with decision-making opportunities related to classroom management, choice in how to present their ideas, and opportunities to evaluate work (Reeve 2016;Stefanou et al. 2004;Williams and Deci 1996) aim to provide students with learning environments that support their daily autonomy. Higher student autonomy support from teachers is correlated with higher student self-efficacy and lower drop-out intention among first-year students (Girelli et al. 2018). Interventions that focus on supporting student self-direction may alleviate a number of course-based student concerns. While we can speculate on practices that may help lessen anxiety and concerns for students, their effectiveness would require further evidence. When studying these interventions, it will be important to look at any differing impact based on student demographics, especially gender. Addressing the concerns of female students in particular could be an important step to reducing disparities in STEM majors and future careers. | 2020-04-30T09:11:44.433Z | 2020-04-29T00:00:00.000 | {
"year": 2020,
"sha1": "c47111b181a02930f97b8586e33294d91bcb3d7d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41979-020-00031-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a181cc3015364147fd486b48ab858ec78e294fc7",
"s2fieldsofstudy": [
"Engineering",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
225321284 | pes2o/s2orc | v3-fos-license | Labor or Capital Income Tax for Growth in an Aging Society
This study analyzes whether taxation of labor income or capital income maximizes the growth rate in an aging society, using a model with labor-augmenting public capital. There are certain conditions under which the growth rate is maximized, indicated by the government's expenditure share between public capital and public pensions. The results of this analysis indicate that taxing capital income is better in an economy, such as an aging society, where private capital is drastically larger than public capital.
Introduction
It is a well-known fact that the trend of lower birthrates and an aging population has accelerated. According to the Cabinet Office, by 2050, Japan will become a super-aged society, where one out of 2.5 people will be elderly (aged 65 or older). 1 A demographic survey by the Ministry of Health, Labour and Welfare (MHLW) indicated that the total fertility rate fell to 1.42 in 2018. What has caused the aging of the population and the decreasing birthrate so rapidly? Why has the trend not shown any sign of stopping, despite the implementation of diverse measures and policies? It is thought that the situation discussed above is strongly associated with the socioeconomic background of the Japanese population.
People who faced the so-called employment ice age (after the economic bubble implosion in 1991-1993) entered a labor market that was predominantly a buyer's market with fewer career options. The economic growth rate then became stagnant with low stock prices, which, along with other factors such as the deterioration of corporate profit margins and labor environments, led to a worsening work-life balance. Consequently, the employment systems of private businesses started to change, as represented by the increase in temporary employment, the end of the permanent employment system, and changes in retirement allowance systems (early retirement programs and the abolishment of retirement allowance schemes). From a long-term perspective, workers should determine their spending based on their estimated lifetime income. According to the overlapping generation model advocated by Diamond (1965), a person's lifetime income consists of incomes from two periods, namely young and old. Policies relating to pensions include raising the age at which pension benefits start, lowering the retirement pension for active employees, and increasing pension premiums. The national pension premiums have consistently increased since the introduction of the system in 1961. 2 In fact, the present premiums are more than twice as much as those in 1990. Meanwhile, since 2020, the amount of pension benefits for new pensioners has decreased.
An important point raised in resolving these issues in Japan is the relation between financial sources and benefits: in other words, what type of tax should be imposed and how it should be distributed. These factors are determined based on a trade-off between (1) efficiency and (2) fairness. The first point to be considered is (1) efficiency. To maximize efficiency, one needs to maximize the economic growth rate. This study establishes a model based on the neoclassical theory that capital growth will drive up gross domestic product (GDP) and lead to a greater growth rate for the whole nation. With respect to (2) fairness, social welfare in the household finances of the nation needs to be considered. Maximized social welfare at the scale of the whole nation is not necessarily a sign of maximized welfare for individual households, because there are inter- and intra-generational gaps. The key is to minimize the gaps and work out policies that maximize both the growth rate and welfare. In the body of this article, the endogenous growth model (Romer, 1986) is used with the introduction of public capital, as proposed by Barro (1990), Barro and Sala-i-Martin (1992), Futagami et al. (1993), Turnovsky (1997), and Yakita (2008).
While Maebayashi (2013) introduces a public pension financed by taxation of both labor income and capital income, I establish separate models with taxation of capital income and of labor income in order to analyze how each affects the growth rate and social welfare.
Intuitively, if the increase in capital that accompanies a rise in GDP is greater than the decline in private capital, the economy will grow continuously. Under a capital income tax, growth occurs with no conditions; under a labor income tax, however, growth requires a strict condition, indicated by the share of government expenditure devoted to public capital investment as opposed to public pensions. In today's aging society, the welfare of the elderly, who make up a majority of nationals, should be the top priority when the welfare of the entire nation is taken into account. An increase in pension benefits should improve the welfare of the elderly generation. However, from a long-term perspective and in the interest of future generations, as well as to boost productivity, public capital investment is one of the important driving factors for a sustainable growth rate. Furthermore, the increased production efficiency of the current working generations will increase future financial resources, which can be used to increase pension benefits. Hence, the government's cautious long-term investments, rather than conventional public capital investments, are the decisive factor for the growth rate and social welfare of Japan and the rest of the world. Specific analyses of public capital investments are left for further studies.
This study is based on Maebayashi (2013), who indicated that the relative scale of private to public capital converges stably to a steady state. Kamiguchi and Tamai (2019) showed the optimal tax rate that maximizes the growth rate and social welfare subject to the government budget. However, these analyses consider taxation of both capital income and labor income; they do not compare the two types of tax and how each affects growth and social welfare. In this study, I clarify the different effects of these two types of taxes on growth and welfare. If the government emphasizes growth, then a capital income tax is the better option; if the government emphasizes welfare, a labor income tax is the better option. The remainder of this study is organized as follows. The next section constructs a dynamic system of private and public capital formation over two periods of life, namely young and old, using the overlapping generations model advocated by Diamond (1965), and derives the dynamics of the ratio of private to public capital. Both private capital and public capital are formalized as stock variables.
Individuals
I assume homogeneous consumers and no population growth (N_t = N_{t+1} = N). Each consumer obtains consumption in the working and old-age periods and supplies labor inelastically in the first period, where I assume every consumer has one unit of labor. The consumer allocates first-period income between consumption and savings, and in the second period consumes all income from savings plus the public pension, leaving no bequest. I assume a perfect insurance market, following Yaari (1965) and Blanchard (1985), and each individual faces a survival rate lying in (0,1). I assume a log-linear utility function; the lifetime budget constraint is shown as follows, where the time preference parameter lies in (0,1) and the capital income tax rate lies in (0,1).
Production
I consider a Cobb-Douglas production function in which labor is augmented by public capital, as per Romer (1986), and the firm's product is a homogeneous good. Inputs are capital and labor. The function is as follows.
I assume a perfectly competitive market and solve the firm's profit-maximization problem as follows.
Government
The government taxes capital income and divides the tax revenue between public capital investment and pensions. The share allocated to public capital investment lies in (0,1), and the capital income tax rate lies in (0,1). The depreciation rate of both private and public capital is assumed to be 1. The government budget constraint is as follows.
Equilibrium
This model has three markets: goods, labor, and capital. By Walras' law, I consider only the capital market, with equilibrium condition (15).
Dynamics
The ratio of private to public capital is denoted by x. I assume x ≥ 1, which means private capital is larger than or equal to public capital. The growth rate of private capital is as follows.
The growth rate of public capital is shown as follows with (13).
The growth rate of x is indicated by (20).
The second derivative of (21) is derived as follows. In the first quadrant, with x_t on the horizontal axis and x_{t+1} on the vertical axis, x will converge to the steady state if (21), (24), and (25) [the Inada conditions] are fulfilled (Fig. 1); x converges stably to the steady state. The ratio of private to public capital will increase if the sign of (21) is positive. The steady-state value of x is denoted by x*. If equation (26) is satisfied, the growth rates of GDP, private capital, and public capital will be the same.
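To make the convergence argument concrete, the sketch below iterates a stylized map for x: any increasing, concave map that crosses the 45-degree line once produces monotone convergence of the private-public capital ratio. The functional form and parameters are purely illustrative assumptions and do not reproduce equations (20)-(26).

```r
# Stylized map x_{t+1} = phi(x_t): increasing, concave, single crossing of the
# 45-degree line, so x converges monotonically to its steady state x*.
phi <- function(x, A = 2, a = 0.5) A * x^a   # illustrative functional form only

x <- numeric(30); x[1] <- 1                  # start from x = 1 (private capital equals public capital)
for (t in 1:29) x[t + 1] <- phi(x[t])

x_star <- uniroot(function(x) phi(x) - x, c(1.0001, 100))$root  # fixed point of the map
round(tail(x, 3), 4); round(x_star, 4)       # trajectory approaches x* = 4
```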
Optimal Tax Rate for Maximizing Growth
The growth rate at the steady state is given by (27). I then derive the effect of the tax on growth.
Here, one term indicates the elasticity of x with respect to the tax rate; it must be positive for the tax to raise growth. In (28), this term is written in a simplified form.
The growth rate will increase with no condition in the case of the capital income tax, because capital increases together with the rise in GDP. I consider two effects: "A: push-up effect" and "B: reduction effect". Effect "A" means that GDP is pushed up by the increase in the capital income tax, because the additional revenue raises public capital and makes labor more productive. Effect "B" indicates the reduction in private capital, which operates through two routes: first, the decrease in after-tax income reduces savings; second, the increase in the public pension raises second-period income and lowers savings. In the case of the capital income tax, effect "A" always exceeds effect "B", so sustainable growth is possible with no conditions.
Proposition 1.
The growth rate will be positive if the elasticity of x with respect to the capital income tax is positive. No condition is required for this to hold.
The optimal capital income tax rate is shown below.
Individuals
Optimal consumption in the working and retirement periods and optimal savings are as follows, where the tax is now a labor income tax.
Government
The government spends on public capital investment and pensions. The share allocated to public capital investment lies in (0,1).
Dynamics
The growth rate of private capital is shown as follows.
The following shows the growth rate of x.
x will grow if condition (43) is fulfilled. In the first quadrant, with x_t on the horizontal axis and x_{t+1} on the vertical axis, x will converge to the steady state if (43), (46), and (47) are fulfilled (Fig. 2).
x will converge stably to the steady state.
Policy Maximizing the Long-Run Growth Rate
I analyze the effect of taxing labor income on growth; the steady-state value of x is again denoted by x*.
Here, the relevant term indicates the elasticity of x with respect to the labor income tax; it must be positive to achieve positive growth, and the condition for it to be positive is the same as (43). In the case of the labor income tax, growth driven by the tax is possible only if certain conditions are satisfied. As in the case of the capital income tax, there are two effects. The first is the "A: push-up effect", the same as in the capital income tax case. The second consists of two effects that decrease private capital, operating through two routes: (1) the decrease in disposable income due to the tax, and (2) the increase in second-period income due to the public pension. In the case of the labor income tax, the growth rate rises with the tax only if effect "A" exceeds effect "B", but this requires a strict condition. Details are described in the next section.
Proposition 2
The growth rate will be positive if the elasticity of x with respect to the labor income tax is positive. However, a condition is required for this to hold.
Numerical Approach
I evaluate condition (43) with a numerical approach. Following Cabinet Office (2020), 3 Mai Chi Dao et al. (2017), and Maebayashi (2013), three cases are set out as follows. Mai Chi Dao et al. (2017) indicated that the ratio of the capital share to the labor share was 4:6 in 1970 but is currently 6:4; this reflects that AI- and IT-related companies are increasingly driving innovation and replacing labor with relatively cheaper capital. Cabinet Office (2020) anticipates that Japan's population will shrink to two-thirds of its current level by 2070 and that the workforce will decrease drastically because of the aging society. Case 1 and Case 2 cannot fulfill the condition because the sign is negative. Although Case 3 fulfills the requirement (≤ 1), it requires a capital share of less than 0.05, which is not suitable for the current economic situation.
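The flavor of this numerical check can be reproduced for a stylized economy in which a labor income tax finances public capital. The closed-form growth factor below rests on simplifying assumptions (Cobb-Douglas production, full depreciation, a constant saving rate out of net wages, and no pension feedback on savings), so it is not condition (43), and the parameter sets are not the paper's Cases 1-3; it only illustrates how the sign of the growth response to the tax can be checked case by case.

```r
# Stylized balanced-growth factor when a labor income tax (tau) finances public capital:
#   g(tau) = (1 - alpha) * (s * (1 - tau))^alpha * (theta * tau)^(1 - alpha)
# alpha = capital share, theta = share of revenue invested in public capital,
# s = saving rate out of net wages. Illustrative assumptions only.
growth <- function(tau, alpha, theta, s = 0.3)
  (1 - alpha) * (s * (1 - tau))^alpha * (theta * tau)^(1 - alpha)

cases <- data.frame(case  = 1:3,
                    alpha = c(0.60, 0.55, 0.30),   # hypothetical capital shares
                    theta = c(0.50, 0.50, 0.80))   # hypothetical public-investment shares

tau <- 0.5; h <- 1e-6
cases$dg_dtau <- with(cases, (growth(tau + h, alpha, theta) -
                              growth(tau - h, alpha, theta)) / (2 * h))
cases$grows_with_tax <- cases$dg_dtau > 0
cases
```

In this simplified closed form the public-investment share only scales the level of growth, not the sign of the response, whereas in the full model with a pension the expenditure share also matters for the sign, which is the point of condition (43).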
Proposition 3
An economy in which the government taxes labor income will not grow if the government cannot devote a large share of spending to public capital investment, and the required capital share is a very low value.
In the Case of Capital Income Tax
The social welfare function is as follows, where the social discount rate lies in (0,1) and the prices w and r are constant because they are determined in each market. Using the sum of an infinite geometric series, (51) can be written as follows.
The effect of the capital income tax on social welfare is given by the next equation, where the capital income tax is denoted with subscript 1.
The condition for (56) to be positive is as follows.
In the Case of Labor Income Tax
The optimal consumption in each period is as follows. Taxation of labor income increases social welfare with no condition, whereas taxation of capital income requires a condition in order to increase social welfare.
Concluding Remarks
The intuitive effects on growth are the same in both cases. In the case of the capital income tax, if the GDP push-up effect of the increase in public capital investment is larger than the decrease in private capital caused by the decline in the after-tax interest rate and the increase in the pension, then the growth rate is positive; in other words, the elasticity of x with respect to the tax will be positive, with no conditions. In the case of the labor income tax, the same holds only if the GDP push-up effect is larger than the reduction in private capital caused by the decline in disposable income and the increase in the pension. The important point of this paper is the condition on the government's expenditure share between public capital investment and the public pension. This condition is very severe considering the current economic situation, in which the capital share is large and the population is declining. However, it may still be that a labor income tax is desirable: taxing labor income is better in the sense that no condition is required for it to increase social welfare. | 2020-09-10T10:19:27.406Z | 2020-09-09T00:00:00.000 | {
"year": 2020,
"sha1": "761f725c36c7f6db87c067fd21f27d05d24f9112",
"oa_license": null,
"oa_url": "http://redfame.com/journal/index.php/aef/article/download/4969/5189",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f93bb0eeade4b34936389774e4fe5de315898788",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
235362570 | pes2o/s2orc | v3-fos-license | Trajectories of verbal fluency and executive functions in multilingual and monolingual children and adults: A cross-sectional study
The development of verbal fluency is associated with the maturation of executive function skills, such as the ability to inhibit irrelevant information, shift between tasks, and hold information in working memory. Some evidence suggests that multilinguistic upbringing may underpin disadvantages in verbal fluency and lexical retrieval, but can also afford executive function advantages beyond the language system including possible beneficial effects in older age. This study examined the relationship between verbal fluency and executive function in 324 individuals across the lifespan by assessing the developmental trajectories of English monolingual and multilingual children aged 7–15 years (N = 154) and adults from 18 to 80 years old (N = 170). The childhood data indicated patterns of improvement in verbal fluency and executive function skills as a function of age. Multilingual and monolingual children had comparable developmental trajectories in all linguistic and non-linguistic measures used in the study with the exception of planning, for which monolingual children showed a steeper improvement over the studied age range relative to multilingual children. For adults, monolinguals and multilingual participants had comparable performance on all measures with the exception of nonverbal inhibitory control and response times on the Tower of London task: monolinguals showed a steeper decline associated with age. Exploratory factor analysis indicated that verbal fluency was associated with working memory and fluid intelligence in monolingual participants but not in multilinguals. These findings raise the possibility that early acquisition of an additional language may impact on the development of the functional architecture serving high-level human cognition.
Introduction
The ability to articulate speech fluently (verbal fluency) is crucial in typically developing children and a reliable predictor for their academic success (Memisevic et al., 2018). A large body of research carried out with children and adults has provided evidence for an association between verbal fluency and the broader domain of executive function (e.g., Aita et al., 2019;Luo et al., 2010;Shao et al., 2014). Executive function refers to a set of vital and voluntary controlled cognitive skills that allow us to suppress irrelevant information, shift between tasks, and hold and update information in working memory (e.g., Miyake et al., 2000). Executive function skills might therefore be considered the building blocks of higher level cognitive abilities such as reasoning, problem-solving, and decision-making (Diamond, 2006), supporting effective learning and knowledge acquisition.
Verbal fluency is typically assessed via administration of tasks requiring oral generation of words within defined parameters. One of those most widely employed is the Verbal Associative Fluency Test, which requires participants to spontaneously produce as many words as possible, beginning with a given letter, within 1 min. Typically, the letters used are "F," "A," and "S," so much so that this test is routinely referred to as "F-A-S." Fluency is then inferred by the quantity of eligible words produced, either summed or averaged across the three manipulations. Another approach is most commonly referred to as semantic or category fluency, in which the ability to produce category exemplars is measured using the same basic procedure and scoring. Typical categories include animals, fruits/vegetables, vehicles, and tools (e.g., Bright et al., 2008). Additional constraints are minimal in both tasks (for F-A-S letter fluency, proper nouns are not allowed, and the same word with a different suffix or repetitions are not allowed in either test).
Both letter and category fluency are considered useful measures of how well participants are able to organise lexical retrieval and apply strategic thinking (e.g., Estes, 1974;Lezak et al., 2004). Performance on these tests, therefore, is thought to rely on higher level cognitive control, although verbal fluency is more universally accepted as a "frontal lobe" or executive function test, with category fluency impairments interpreted in the context of semantic knowledge breakdown in addition to executive deficits. Consistent with this view, Alzheimer patients tend to have greater difficulty with category fluency, implicating disproportionate temporal lobe involvement in performance on this task relative to verbal fluency (e.g., Fama et al., 1998;Monsch et al., 1994). In neurologically healthy participants, performance is usually better on category fluency relative to letter fluency, but both are markedly sensitive to ageing and frontal lobe integrity, consistent with disproportionate age-related cortical deterioration in the frontal cortex relative to posterior regions, and to the importance of frontal regions in the creation and organisation of retrieval strategies.
The literature has not provided a clear answer about which executive control mechanisms are most important for successful performance in the letter and category fluency tasks. Some authors have emphasised the role of working memory, selection, and suppression (e.g., Henry & Crawford, 2004;Moss et al., 2005;Rende et al., 2002;Rosen & Engle, 1997). Indeed, to perform fluency tasks, participants must hold the instructions and their earlier responses in working memory and they must also suppress irrelevant words (e.g., words that do not start with the target letter or belong to a certain category) and repetitions. In addition, participants often develop a strategy, which involves the ability to create clusters based on a systematic memory search (e.g., pets cluster = dog, cat; farm cluster = cow, pig; birds cluster = robin, pigeon). However, others have stressed the importance of switching ability (Abwender et al., 2001) and general inhibitory control (Hirshorn & Thompson-Schill, 2006), highlighting the association between verbal fluency and novel problem-solving or fluid intelligence (e.g., Roca et al., 2012).
Another interesting line of research concerns the relationship between verbal fluency, executive function, and multilanguage acquisition. Multilingual speakers are often found to be at a disadvantage in tasks requiring lexical access, on the assumption that they generally have a smaller vocabulary in each known language compared to monolingual speakers of those languages (e.g., Bialystok & Feng, 2011;Oller et al., 2007). However, they also have to resolve the greater selection demands associated with fluency in more than one language, which in turn results in slower word retrieval when compared to monolingual speakers (Gollan et al., 2005;Ivanova & Costa, 2008).
In contrast to this potential disadvantage, a large body of evidence has been reported in the last three decades for a possible bilingual advantage in executive function. In particular, children and older multilingual adults often outperform their monolingual peers in tasks of nonverbal inhibitory control, shifting, and updating (see Bialystok, 2017, for a review). The reason for this executive advantage is believed to stem from the lexical disadvantage: the higher competitive demand of dealing with two or more languages in a single mind on a daily basis and for protracted periods of time may in turn strengthen frontoparietal networks functionally and structurally implicated in nonverbal cognitive control (Bialystok, 2017). This has been prompted, in part, by an increasing understanding of neuroplasticity and how specific and diverse skills and experiences may be underpinned by a core, domain general "control" network (e.g., Duncan, 2013;Voytek et al., 2010). What is less clear is whether this network can somehow be enhanced through a process of multilanguage acquisition and daily multilingual communication.
Neuroplasticity refers to the brain's ability to adapt in response to environmental stimulation through forming, pruning, and reorganising synaptic connections (Pascual-Leone et al., 2005). Richer environments and experiences such as higher social economic status and formal education may have identifiable effects on brain structure and networks as well as measurable behavioural cognitive benefits in areas such as executive function and nonverbal intelligence (Kramer et al., 2004;Noble et al., 2012). Experimental evidence has shown that, in the bilingual brain, both languages are always active even in monolingual settings (Bialystok, 2017;Dijkstra, 2003). This joint activation requires bilinguals to pay attention to changing contexts, select and apply the appropriate language while preventing interference from the non-target language (Bialystok, 2017). Intriguingly, multilingual speakers often underperform in comparison to monolingual peers in category fluency, but not on letter fluency (Gollan et al., 2002;Rosselli et al., 2002). To the extent that letter fluency is disproportionately underpinned by frontal/ executive function (in comparison to category fluency), it has therefore been argued that the use of frontal networks responsible for executive function may, in part, explain why there is typically no disadvantage for letter fluency in multilinguals (Luo et al., 2010). However, although neurological evidence supports the existence of domain general cognitive differences between language groups, the behavioural evidence for the bilingual advantage has been more controversial and the mechanism(s) that underlie the advantage reported in these studies is currently a topic of vigorous debate (see Paap et al., 2015, for a critical review).
In this study, we explored the relationship between verbal fluency and executive function from childhood to older age using a cross-sectional design. A developmental trajectory approach in cross-sectional designs has been successfully used in studies comparing the development of typically and atypically developing children (Karmiloff-Smith et al., 2004; Thomas et al., 2001, 2009). We employed this approach, comparing performance of multilingual and English monolingual speakers from the age of 7 to the age of 80 years.
Our primary objective, therefore, was to address whether early acquisition of a second language alters the functional architecture of higher level cognition. We also evaluate whether there are differences in these developmental trajectories that might be explained by linguistic ability (i.e., monolingual vs multilingual status). To achieve our objectives, we assess performance on a range of measures of executive function and cognitive control and determine their sensitivity to verbal fluency in monolinguals and multilinguals across the lifespan trajectory.
Participants
This project was approved by the Science and Technology Research Ethics panel at Anglia Ruskin University (FST/FREP/15/505) and was conducted in accordance with the tenets of the Declaration of Helsinki. A total of 324 individuals, all living in the United Kingdom at the time of testing, took part in this study (see Table 1 for the age breakdown and gender details). One hundred and fifty-four (154) were typically developing children with ages ranging from 7 to 15 years (mean age = 9.6, SD = 1.6, 72 females) and 170 were healthy adults from 18 to 80 years of age (mean age = 38.6, SD = 16.6, 62 males).
Participant scores were extracted from a larger dataset of 536 participants who took part in a 5-year investigation of the effect of multilingualism across the lifespan. In this study, only the participants who completed the relevant tasks were included.
Within the children group, 77 were English monolinguals and 77 were bilinguals/multilinguals of different linguistic backgrounds enrolled in UK primary schools. Their parents completed an online questionnaire designed to establish demographic, socio-economic, and linguistic information (Filippi, Ceccolini, Periche-Tomas, Papageorgiou et al., 2020). All multilingual children started the acquisition of two or more languages, with English being one of them, either simultaneously from birth (N = 59) or within the first 5 years of life (N = 18). All monolingual children reported a basic knowledge of French or Spanish learned at school. However, they did not report daily exposure to or use of a foreign language, nor the ability to hold a basic conversation in a language other than English.
All multilingual children were reported to be highly proficient in both English and an additional language which they reported to use on a daily basis at home and with the extended family. Twenty-five children were reported to be exposed to a third or a fourth language, although their level of competence in these languages was considered lower.
Within the adult participants, 86 were English monolinguals with none or little exposure to a second language when at school, and 84 were multilinguals from a large variety of linguistic backgrounds. They also completed an online questionnaire in which biographical, socio-economic, and linguistic information was provided.
They all reported to be highly proficient in English plus an additional language, which they used on a daily basis. Fifty-five individuals were raised as bilinguals since birth and 29 within early stages of their lives. Thirty-nine of them reported the knowledge of a third or a fourth language.
A list of all languages spoken by the children and the adults is reported in the online Supplementary Material, Table A1 and A2.
Socio-economic status (SES) information was calculated on the basis of parental (father and mother) highest level of education, employment (adults only), and household income. Each item was scored for academic achievement (i.e., 1 = no formal/primary, 2 = secondary, 3 = undergraduate, 4 = post-graduate, 5 = doctorate), occupation (1 = unemployed, 2 = part-time, 3 = full-time), and a score from 1 to 6 depending on their total household income (from less than £20,000 to more than £100,000). Scores were averaged to create a composite SES score and also analysed separately.
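As a concrete illustration of the composite described above, the following Python sketch averages the item scores. The field names, the specific items included (two parental education scores, one occupation score, one income band), and the example values are assumptions rather than the authors' scoring script.

```python
# Hypothetical sketch of the composite SES scoring described in the text.
EDUCATION = {"none/primary": 1, "secondary": 2, "undergraduate": 3,
             "post-graduate": 4, "doctorate": 5}
OCCUPATION = {"unemployed": 1, "part-time": 2, "full-time": 3}
# Household income bands run from 1 (< GBP 20,000) to 6 (> GBP 100,000).

def composite_ses(mother_edu, father_edu, occupation, income_band):
    """Average the item scores into a single composite SES value."""
    items = [EDUCATION[mother_edu], EDUCATION[father_edu],
             OCCUPATION[occupation], income_band]
    return sum(items) / len(items)

print(composite_ses("undergraduate", "secondary", "full-time", 4))  # 3.0
```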
Procedure and materials
As described in the "Participants" section, this study is part of a larger project in which a total of 536 participants performed a total of 10 tasks (Table 2) that were split into two blocks of 5 (part A and part B), counterbalanced to ensure an equal distribution of participants who were tested starting with part A followed by part B and vice versa. Testing was also carried out at different times of the day, with children predominantly tested in the morning and early afternoon. Overall, with this design, we aimed to reduce the probability that the order of tests or other factors adversely influenced the results. The whole testing session lasted 1 hr and 20 min on average. The experimental battery was administered on an ASUS laptop with a mouse, a standard keyboard, and a Technopro® USB gamepad that was adapted with a red and a blue sticker attached to the buttons for the execution of the Simon task, and a green sticker for the execution of the go/no-go task. All instructions were given in English.
Ethical approval for this study was granted by the university committee. Only the children whose parents returned written informed consent were included in the sample. Children were tested in a quiet room made available in three primary schools, two in London and one in the Cambridge area. Adults were tested in the testing rooms available at Anglia Ruskin University in Cambridge and at UCL-Institute of Education in London. All participants gave their written and verbal consent before starting the session.
To address the experimental questions of this study, we only included the participants who fully completed the following tasks.
Verbal fluency. Participants performed two conditions, one measuring letter (or phonemic) fluency and one measuring category (or semantic) fluency (e.g., Controlled Oral Word Association Test, COWAT, Strauss et al., 2006). For letter fluency, they were instructed to say, out loud, as many words as they could think of beginning with a specific letter (i.e., F, A, and S) within a time limit of 60 s. For semantic fluency, participants were again given 60 s to produce words belonging to a specific category; these were (1) animals, (2) vehicles, (3) fruits and vegetables, and (4) tools. The number of words generated were summed to provide a letter fluency and a semantic fluency score (Lezak et al., 2004). Any word repetitions and category errors were excluded from data analysis.
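A minimal Python sketch of the scoring rule just described, counting eligible, non-repeated words. The simple first-letter check stands in for the experimenter's eligibility judgment (proper nouns and suffix variants are not handled), and the example responses are invented.

```python
# Sketch of letter-fluency scoring: count eligible words, excluding
# repetitions and words that do not start with the target letter.
def letter_fluency_score(responses, target_letter):
    seen, score = set(), 0
    for word in (w.strip().lower() for w in responses):
        if not word.startswith(target_letter.lower()):
            continue  # letter error, excluded
        if word in seen:
            continue  # repetition, excluded
        seen.add(word)
        score += 1
    return score

fas_total = sum(letter_fluency_score(words, letter)
                for letter, words in {"f": ["fish", "fog", "fish"],
                                      "a": ["apple", "ant"],
                                      "s": ["sun"]}.items())
print(fas_total)  # 2 + 2 + 1 = 5
```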
Executive function tasks
Visual interference suppression: Simon task. A computerised version of the Simon task (Simon & Wolf, 1963) was programmed in E-Prime version 2.0 (Schneider et al., 2007). A USB gamepad with coloured stickers (red and blue) was used to record response time and accuracy.
The task consisted of 36 trials in which either a blue star or a red star randomly appeared to the left or the right side of a white screen; each colour was presented in equal number of times to the left and to the right. A fixation cross appeared for 800 ms preceding each trial. The participants were instructed to press the left button (labelled with a red sticker) when the red star would appear on the screen and the right button (labelled with a blue sticker) for the blue star. Half of the trials were incongruent, that is, the location of the stimulus and the response button did not match (e.g., red star on the right-hand side of the screen) thereby requiring participants to inhibit the conflicting spatial information and focus on the colour (i.e., conflict resolution). Congruent trials (red star on the left and blue star on the right) did not require conflict resolution. The dependent measure was the "Simon effect" (i.e., the difference between the mean response times for congruent and incongruent trials).
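For clarity, here is a small sketch of how the Simon effect could be computed from trial-level data. The dictionary-based trial format, the restriction to correct trials, and the incongruent-minus-congruent sign convention are assumptions, not details taken from the authors' pipeline.

```python
# Simon effect: mean RT on incongruent trials minus mean RT on congruent trials.
def simon_effect(trials):
    def mean_rt(congruent):
        rts = [t["rt_ms"] for t in trials
               if t["correct"] and t["congruent"] == congruent]
        return sum(rts) / len(rts)
    return mean_rt(False) - mean_rt(True)

trials = [{"congruent": True, "rt_ms": 420, "correct": True},
          {"congruent": True, "rt_ms": 440, "correct": True},
          {"congruent": False, "rt_ms": 500, "correct": True},
          {"congruent": False, "rt_ms": 520, "correct": True}]
print(simon_effect(trials))  # 510 - 430 = 80.0 ms
```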
Response inhibition: go/no-go task. All participants performed a go/no-go task called Whack-A-Mole (Petitclerc et al., 2015). They were instructed to press the green button on the USB gamepad as fast as they could when a mole popped up on the screen (go trials). They were also instructed not to press the button when an aubergine appeared on the screen instead of a mole. Trials began with an open mole hole (fixation point) appearing for 500 ms in the centre of a black screen. Go and no-go stimuli were presented for 1,800 and 1,300 ms respectively, unless a response was made. Correct responses were visually rewarded for 200 ms with a "WHACK!" graphic for whacking the mole and "AWESOME!" for leaving the aubergine; "OOPS!" was displayed for missing the mole or whacking the aubergine. The ITI was 2,500 ms. Following a practice block of 10 trials (3 no-go trials), participants were given the opportunity to ask questions before progressing on to the first of four blocks. Each block contained 56 trials (25% no-go) presented in a pseudorandom order.
Planning and problem-solving: Tower of London. A computerised 12-trial version of the Tower of London (Shallice, 1982), included in the free-access PEBL battery (Mueller & Piper, 2014), was administered. Each problem required participants to use the computer mouse to move coloured discs (red, blue, and green) from their initial position to match their target position in the fewest possible moves. The participants were instructed to move only one disc at a time, and only the disc on the top of a stack could be moved. A move counter on the right-hand side of the screen informed them how many moves they could make and how many moves they had left. There was no time limit for each problem, but all participants were advised to carefully plan their moves before they clicked on any discs. Trials ended when participants reached the move limit and the screen displayed feedback on whether or not they had successfully completed the problem.
The trials were presented in a progressively increased order of complexity and consisted of four easy problems requiring two to three moves, four trials with problems requiring four moves, and four trials with more difficult fivemove problems that required planning multiple sub-goals.
Fluid intelligence: Raven's Advanced Progressive Matrices Set 1. Participants completed Raven's Advanced Progressive Matrices Set 1 (Raven, 1998) consisting of 12 items of increasing complexity. Each item consisted of a 3 × 3 matrix containing eight different black and white designs that are logically related and one piece missing at the bottom right; participants were required to deduce from eight potential pieces which piece completes the matrix. The number of correct items out of 12 was recorded. Although no time limit was given, all participants completed the task within 10 min.
Verbal Working memory: digit span forwards and backwards. All participants were administered the digit span backward and forward, subtests of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008).
They were instructed to repeat aloud a sequence of numbers produced by a native English speaker. In the forward condition, the numbers had to be repeated in the same order. In the backward condition, they had to be reversed. Trials began with 2-digit sequences (e.g., 1-7) that the participant verbally recalled either forwards or in reverse order. As trials progressed, the sequence length gradually increased by one digit. Testing was interrupted when participants failed to recall the digits in two consecutive trials. Each correct response scored 1 point. The sum of correct forward and backward trials was recorded for each participant to provide an ability score.
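The discontinue-and-sum rule described above can be illustrated with a short sketch; the trial-by-trial data format and the example sequences are hypothetical.

```python
# Digit span scoring: 1 point per correct trial, stop after two consecutive
# failures; forward and backward scores are summed into an ability score.
def span_score(trial_correct):
    score, consecutive_fails = 0, 0
    for correct in trial_correct:
        if correct:
            score += 1
            consecutive_fails = 0
        else:
            consecutive_fails += 1
            if consecutive_fails == 2:
                break  # discontinue rule
    return score

forward = span_score([True, True, True, False, True, False, False])   # 4
backward = span_score([True, True, False, False])                     # 2
print(forward + backward)  # ability score = 6
```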
English receptive vocabulary: British Picture Vocabulary Scale. All participants were administered the British Picture Vocabulary Scale: Third edition (BPVS-III; Dunn et al., 1997), which consists of 14 sets of words, each containing 12 items. Sets are linked with levels of complexity, starting from simple words understood by 2-3 year olds (e.g., ball, Set 1) to more difficult and infrequent words (e.g., lacrimation, Set 14). Panels of four pictures are presented for each item and the researcher says aloud a word that corresponds to only one picture. All participants started with an age-appropriate set. If two or more errors were made on the starting set, then the researcher established the base set by going back a set at a time until a maximum of one error was made. Next, a ceiling set was established by presenting the participant with progressively more difficult sets until eight or more errors were made on a set. Raw (ability) scores were calculated as the highest number on the ceiling set minus the total number of errors made during the assessment.
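A sketch of the raw-score rule just described, assuming that the "highest number on the ceiling set" means the ceiling set number multiplied by the 12 items per set; that reading, and the example numbers, are assumptions.

```python
# BPVS-III raw (ability) score: highest item number reached in the ceiling
# set minus the total number of errors made during the assessment.
ITEMS_PER_SET = 12

def bpvs_raw_score(ceiling_set, total_errors):
    highest_item = ceiling_set * ITEMS_PER_SET
    return highest_item - total_errors

# e.g., ceiling reached on Set 9 with 21 errors accumulated overall
print(bpvs_raw_score(ceiling_set=9, total_errors=21))  # 108 - 21 = 87
```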
Design
This study had a mixed design in which the developmental trajectories of verbal fluency and executive function were built for children and adults in both linguistic groups. Ability scores were obtained for phonological and semantic fluency (number of words produced in each condition), English receptive vocabulary (BPVS-III), fluid intelligence (Raven's matrices), and working memory (digit span forward and backward). Accuracy and response time scores were calculated for the executive function tasks. T-tests, correlation, and regression analyses were performed using SPSS version 25 for Mac. Factor analysis was performed using the "FactorAnalyzer" package with Python (https://pypi.org/project/factor-analyzer/).
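Since the text names the Python factor-analyzer package, a minimal usage sketch of the kind of analysis reported below may be helpful. The input file, the variable set, and the choice of four factors (taken from the eigenvalue ⩾ 1 criterion reported later) are placeholders rather than the authors' exact analysis.

```python
# Exploratory factor analysis with promax (or varimax) rotation,
# using the factor_analyzer package named in the text.
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("scores.csv")  # hypothetical file: one column per measure

fa = FactorAnalyzer(n_factors=4, rotation="promax")  # or rotation="varimax"
fa.fit(df)

loadings = pd.DataFrame(fa.loadings_, index=df.columns)
variance, proportional, cumulative = fa.get_factor_variance()
print(loadings.round(2))
print("Cumulative variance explained:", cumulative[-1])
```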
There were no significant gender differences in verbal fluency skills (p = .74 for letter fluency and p = .95 for category fluency). Independent t-tests and Bayes factors indicated that English monolinguals and multilinguals were comparable across all verbal and nonverbal measures (Table 3).
Correlations between verbal fluency and executive function
Pearson correlation analysis showed that both semantic and phonological fluency were significantly correlated (at p < .001) with measures of inhibitory control (go task reaction time), accuracy in planning (Tower of London), fluid intelligence (Raven's matrices), working memory (digit span), and receptive vocabulary (BPVS). The correlations with measures of inhibitory control accuracy (no-go trials), shifting and updating (Simon task), and response time for planning (Tower of London) were not significant (p > .05 in all cases). All correlations are reported in the Supplementary Material, Table B1. Stepwise linear regressions were also computed in which semantic and phonological fluency were regressed on digit span, Simon, go/no-go, and Tower of London measures. For prediction of semantic fluency, three variables were entered: forwards digit span (explaining 18% of the variance), go task reaction time (an additional 8%), and no-go trial accuracy (an additional 3%). The best fit model for phonological fluency was virtually identical, with the same variables and ordering (explaining 19%, +5%, and +5%, respectively). All other variables were excluded as meaningful predictors using the standard inclusion criterion of p = .05.
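As an illustration of the stepwise procedure described above (not the SPSS routine the authors used), a simple forward-selection sketch with a p = .05 entry criterion might look like this; the variable names and data loading are assumed.

```python
# Forward stepwise selection: at each step, add the candidate predictor with
# the smallest p-value, stopping when no candidate meets the alpha criterion.
import statsmodels.api as sm

def forward_stepwise(y, X, alpha=0.05):
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Example call (hypothetical DataFrame columns):
# predictors = forward_stepwise(df["semantic_fluency"],
#                               df[["digit_fwd", "simon", "go_rt", "tol_acc"]])
```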
The role of development and multilingualism for linguistic and non-linguistic skills
Regression analyses, checked for outliers with Cook's distance (Cook, 1977), were performed to explore the developmental trajectories of verbal and nonverbal abilities. They revealed that age was a reliable predictor of best performance in both linguistic groups in measures of verbal fluency, receptive vocabulary, fluid intelligence, working memory, and response time in inhibitory control (p ⩽ .001). Age was a significant predictor of accuracy in the executive function planning task (Tower of London) for the monolingual group (p < .001), but not for the multilingual group (p = .38). For both groups, age was not a reliable predictor for time of planning the first move and for completing the task in the Tower of London, and for inhibitory control accuracy (p > .10). Finally, there was a trend in the relationship between age and the Simon effect in monolinguals (p = .07) while this relationship was just significant in multilinguals (p = .04).
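Here is a hedged sketch of the kind of outlier screening described above, regressing a score on age and flagging influential cases by Cook's distance; the 4/n cut-off is a common rule of thumb and an assumption here, since the authors do not state their threshold.

```python
# Flag influential observations in an age-trajectory regression via Cook's distance.
import statsmodels.api as sm

def cooks_outliers(age, score):
    model = sm.OLS(score, sm.add_constant(age)).fit()
    cooks_d, _ = model.get_influence().cooks_distance
    threshold = 4 / len(score)            # assumed rule-of-thumb cut-off
    return [i for i, d in enumerate(cooks_d) if d > threshold]

# Example call (hypothetical arrays):
# flagged = cooks_outliers(df["age"], df["letter_fluency"])
```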
Fisher r-to-z analysis for comparison between correlation coefficients for the monolingual and the multilingual groups indicated that the children's developmental trajectories were largely comparable. However, the trajectory of accuracy for planning/reasoning in resolving the Tower of London task significantly differed between the two groups (p = .009) indicating that age predicts best performance more closely in monolinguals in comparison to multilinguals ( Figure 1).
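The Fisher r-to-z comparison of two independent correlations (here, the age-performance correlation in each language group) can be sketched as follows; the example r and n values are hypothetical.

```python
# Compare two independent correlation coefficients via Fisher's r-to-z transform.
import math
from scipy.stats import norm

def fisher_rz_compare(r1, n1, r2, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - norm.cdf(abs(z)))  # two-tailed
    return z, p

print(fisher_rz_compare(0.65, 77, 0.35, 77))  # hypothetical group correlations
```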
All results, including Fisher r-to-z analyses, are reported in the Supplementary Material, Table C1.
The relationship between verbal fluency and executive function across development in children
All verbal and nonverbal measures with the addition of the variable age were factor-analysed across all groups with both varimax (orthogonal) and promax (oblique) rotations. Considering that the variables were highly correlated, we opted to report the promax rotation which may offer more valid factor loadings. However, the varimax rotation results are also available in the Supplementary Material, Table D, for comparison purposes.
The analyses yielded four factors with Eigenvalues ⩾ 1, explaining on average 54.0% of the variance for the entire set of variables. Figures 2 to 4 illustrate the factor loadings, which are also reported in the Supplementary Material, Table E. Examination of the factor loadings in the whole children population, that is, monolinguals and multilinguals collapsed, shows a strong fluency construct (Factor 1), largely independent from age and all measures of working memory and executive function. Factor 2 is strongly dominated by age but also reflects response time and vocabulary knowledge. Factor 3 appears to reflect an underpinning executive planning/working memory construct which is independent from response inhibition (Factor 4).
The comparison between monolingual and multilingual children, although presenting some moderate differences in loading distributions, generally confirms an emergent fluency construct in both groups (Factor 1 in monolinguals, Factor 2 in bilinguals). Nevertheless, only in monolinguals is there reliable evidence of coinvolvement of working memory and fluid intelligence within this fluency factor. In bilingual children fluid intelligence, working memory and executive planning ability dominated one factor (in this case, Factor 3), consistent with an underpinning fluid ability/psychometric g construct operating in this group. In monolingual children, stimulus/response conflict monitoring and executive planning ability emerged as distinct constructs (Factors 3 and 4, respectively), with only the former emerging as Go/No Go accuracy performance (Factor 4) in bilinguals.
Male participants showed better verbal fluency performance than females with 50.2 mean words produced for phonological fluency (females = 43.2) and a mean of 75.8 for semantic fluency (females = 71.6). The difference was highly significant for letter fluency skills (p = .002) but not for semantic fluency (p = .08). Independent t-tests and Bayes factors indicated that English monolinguals and multilinguals performed comparably on measures of fluid intelligence and working memory but monolinguals showed significantly better performance on verbal fluency, English vocabulary knowledge, inhibitory control, and planning response times (Table 4).
Correlations between verbal fluency and executive function
Pearson's correlation analysis showed that phonological fluency was significantly correlated with measures of executive function (Simon task, p = .01), working memory, and receptive vocabulary (p < .001). Semantic fluency was significantly correlated with fluid intelligence (p = .01), working memory, and receptive vocabulary (p < .001), but not with executive function and inhibitory control measures (p > .10).
There was a statistical trend in the correlation between semantic fluency and accuracy in performing the Tower of London task (p = .07). All correlations are reported in the Supplementary Material, Table B2. Stepwise linear regressions were also computed in which semantic and phonological fluency were regressed on digit span, Simon, go/no-go, and Tower of London measures. For prediction of semantic fluency, only forwards digit span was included, explaining 16% of the variance. The best fit model for phonological fluency included forwards digit span (18%) and the Simon effect (explaining an additional 4% of the variance). All other variables were excluded as meaningful predictors in both models using the standard inclusion criterion of p = .05.
The role of age and multilingualism for linguistic and non-linguistic skills
Regression analyses, checked for outliers with Cook's distance (Cook, 1977), were performed to explore the developmental trajectories of verbal and nonverbal abilities.
They revealed that age was a reliable predictor of best performance for phonological fluency in both linguistic groups (monolinguals p = .003; multilinguals p = .001). However, for semantic fluency, monolinguals' performance was not significantly associated with age (p = .37), whereas for multilinguals age was still a significant predictor of best performance (p = .04).
For other measures, age played a different role in the two linguistic groups. For monolinguals, age was a significant predictor of performance in response time for planning (Tower of London first move, p = .05; Tower of London response time for completing a trial, p = .003), and there was a statistical trend for measures of fluid intelligence and working memory (p = .06). For multilinguals, age was not a significant predictor of working memory (p = .83) and response time for planning (p > .40), but it predicted performance in fluid intelligence (p = .001). In both groups, age was not significant in measures of accuracy in inhibitory control and planning (p > .20).
Fisher r-to-z analysis for comparison between correlation coefficients for the monolingual and the multilingual groups indicated a statistical trend for response time in the Go/No-go task (p = .05). As shown in Figure 5c, monolingual speakers showed a longer response time than multilinguals as they aged. There was a statistical trend in the trajectories of response time for planning (p = .06). Figure 5d and e shows that monolingual speakers were faster than multilinguals at a younger age, but they performed increasingly similarly in older age. The multilinguals' performance did not appear to decline with ageing and remained stable across the lifespan.
All other comparisons were non-significant (p > .10). Regression analysis results, including Fisher r-to-z analyses, are reported in the Supplementary Material, Table C2.
The relationship between verbal fluency and executive function across development in adults
Exploratory factor analysis with promax rotation was conducted with both linguistic groups collapsed and then separately for monolingual and multilingual adults.
The analyses performed with both groups collapsed and separately for monolingual and multilingual adults yielded four factors with Eigenvalues ⩾ 1, explaining on average 45.50% of the variance for the entire set of variables. Figures 6 to 8 illustrate the factor loadings, which are also reported in the Supplementary Material, Table F.
With all adults entered into the analysis, four factors were identified, which we interpret based on the assumption that variable loadings above 0.4 are stable (e.g., Field, 2013). Factor 1 is dominated by verbal fluency and digit span performance and therefore appears to reflect controlled lexical access.
Factor 2 is best represented by visuospatial planning ability (Tower of London accuracy scores), nonverbal abstract reasoning (Raven's matrices scores), and stimulus/response conflict processing (Simon cost). We therefore consider the underpinning construct to be nonverbal fluid intelligence/psychometric g. Factor 3 is virtually entirely characterised by vocabulary knowledge (BPVS). Factor 4, disproportionately represented by performance on the Go/No-Go task, appears to reflect response inhibition.
As in the analysis of children, notable differences in the loadings emerged when language groups (monolinguals/ multilinguals) were analysed separately (Figures 7 and 8). In multilinguals, Factor 1 is disproportionately associated with fluency performance with more evidence for co-dependence on verbal short-term/working memory in monolinguals (again consistent with monolingual children). Consistent with the full group analysis, in multilinguals, Factor 2 was dominated by visuospatial planning ability, nonverbal abstract reasoning, and stimulus/response conflict monitoring ability, therefore indicative of an underpinning fluid intelligence/psychometric g construct. In monolinguals, there was little or no evidence for a shared construct underlying these abilities. Instead, visuospatial planning and stimulus/response conflict monitoring emerged as distinct constructs (Factors 3 and 4, respectively). Notably, in our monolingual group, Raven's matrices scores showed low and unstable loadings across all emergent factors. Overall, factor analysis in children and adults has shown that (1) verbal fluency appears to be largely independent of measures of working memory, fluid intelligence, and executive function in bilinguals, but is more integrated with working memory and fluid intelligence in monolinguals; and (2) executive planning ability and fluid intelligence dominate the same factor in bilinguals but not in monolinguals. If these differences in the patterns of variable loadings occurred only in the children or the adult participants, they should be regarded as holding limited intrinsic value, but the consistency in the patterns across both sets of data indicate that the differences in the characteristics of these emergent factors may warrant further consideration.
Discussion
This study investigated the developmental trajectories of verbal fluency and executive function in a sample of 324 participants, 154 children from 7 to 15 years old and 170 adults from 18 to 80 years old. Half of the total sample was made up of bilingual speakers who started to acquire a second language in addition to English from early stages of life. The other half consisted of English monolingual participants. We sought to identify which component of executive function is most associated with verbal fluency skills. In addition, possible effects of multi-language experiences in the development of linguistic and non-linguistic skills were explored by comparing the performance of the English monolingual and multilingual groups. Semantic and phonological fluency were measured according to the standard procedure requiring oral elicitation of words belonging to specific semantic categories or beginning with a given letter. Executive function was measured through a set of tasks, including the Simon task, a go/no-go task (Whack-A-Mole), and the Tower of London task. Each task targeted specific components of executive function, that is, shifting, updating, inhibitory control, and planning. Measures of short-term and working memory (digit span forward and backward), fluid intelligence (Raven's matrices), and receptive vocabulary (BPVS) were also acquired. Biographical and SES information were collected through administration of an online questionnaire.
Results showed that age was a significant predictor of best linguistic and non-linguistic performance across the whole sample. Multiple regression of fluency measures on our measures of working memory, executive planning, and response inhibition showed limited evidence for a meaningful relationship between phonological or category fluency and executive function. In both age groups, forwards digit span was robustly identified as the best predictor variable, which is typically assumed to be a straightforward measure of short-term memory (unlike backwards digit span, which requires online manipulation of data held in short-term/working memory). Multilingual and monolingual children had comparable trajectories in all measures with the exception of planning skills (Tower of London) where multilingual children did not seem to improve their performance across development as steadily as the monolinguals. In all other measures, neither linguistic disadvantages nor executive function advantages were observed in the multilingual sample.
Similar results were obtained in the adult sample. However, as opposed to children, adult multilingual participants demonstrated a different trajectory in reaction time in inhibitory control (on the Go/No go task). In comparison to monolingual speakers, a slower deterioration in response time over the age distribution was observed on this measure in the bilingual group. This result offers some evidence that managing two or more languages in a single mind may confer possible benefits in the ageing population and in a specific cognitive skill: inhibitory control.
Factor analysis was performed for both groups to explore the relationship between verbal fluency, age, vocabulary knowledge, and nonverbal measures of IQ and executive function. Common patterns were observed. First, verbal fluency appears to be largely independent from executive function measures across the whole sample. However, when monolinguals and multilinguals were compared separately, some significant differences also emerged. Children and adult English monolinguals' verbal fluency performance were associated with measures of fluid intelligence, working memory, vocabulary knowledge, executive function, and age. In multilingual children and adults, verbal fluency remained largely independent from all other nonverbal measures. We offer a tentative interpretation in the following section.
Empirical and theoretical considerations
Overall, the results indicate similar performance levels in both monolingual and multilingual participants on our tests of verbal and nonverbal ability. The developmental trajectories in children and adults also show similar patterns. Considering that the multilingual participants were all learners of English and another language from early stages of life and were all living in the United Kingdom at the time of testing, it is perhaps not surprising that their knowledge of English was like native monolingual speakers when performing the verbal fluency task. The children's developmental trajectories for all nonverbal measures were comparable with the exception of the cognitive planning component measured with the Tower of London task. Here, monolingual children outperformed multilingual peers. This finding is consistent with evidence that the visuospatial planning and problem-solving demands operating in the Tower of London may be served by cognitive mechanisms distinct from those serving verbal working memory performance and nonverbal inhibitory control (e.g., D'Antuono et al., 2017;Kaller et al., 2011;Zook et al., 2004). To the extent that performance on the Tower of London reflects goal-directed planning proficiency, these results indicate that multilingual acquisition during childhood might have negative consequences in this domain but render other aspects of executive functioning unaffected. In earlier work, we have reported a bilingual disadvantage in metacognitive processing evidenced by disproportionately lower confidence in test performance (Folke et al., 2016) and while purely speculative, we raise the possibility that reduced confidence might, in part, manifest in poorer actual performance on complex measures of goal-directed strategic planning such as the Tower of London.
With regard to the adults, again monolingual and multilingual participants had comparable performance on all measures with the exception of nonverbal inhibitory control measured with the go/no-go task and response time on Tower of London trials, on which monolinguals showed a trend towards steeper decline with age. While these findings may suggest slower age-related cognitive deterioration associated with multilingualism, we caution against accepting this inference on the basis of this statistically marginal observation.
Other studies provide less equivocal results (e.g., Bialystok et al., 2004), offering the interpretation that lifelong multilingualism may protect the brain from the effect of ageing (e.g., Craik et al., 2010). These findings have generated a heated debate in the field. Some authors argue that positive results may be task-dependent (e.g., Paap et al., 2015;Paap & Greenberg, 2013) and a recent large-scale meta-analysis of 152 studies on adults found no systematic evidence for a bilingual advantage in inhibitory control (or any other cognitive ability) after controlling for publication bias (Lehtonen et al., 2018). Consistent with this review, recent research from our lab did not find any significant difference between monolingual and bilingual elderly participants with classical measures of executive function such as the Simon task and the Tower of London (Papageorgiou et al., 2018) and our current finding, based on evidence from a single test, should therefore be interpreted in the context of this increasing weight of pooled evidence against the existence of a straightforward multilingual advantage in any aspect of cognitive control.
Intriguingly, we observed disparity between monolinguals and multilinguals in the patterns of interdependency among our variables revealed via exploratory factor analysis. Furthermore, these differences in the patterns of intercorrelation generally held in both the child and adult groups. Most notably, evidence that verbal fluency, working memory, and nonverbal fluid intelligence share a common underpinning construct was observed in monolinguals but not in multilinguals. In both multilingual children and adults, a strong fluency factor emerged, on which other variables associated with working memory, executive function, and fluid intelligence showed only low or marginal loadings. Our analysis also revealed that while fluid intelligence, working memory, and executive planning ability dominated the same factor in bilinguals, this was not the case in monolinguals-an observation that was again observed in both child and adult groups.
These findings raise the possibility that early acquisition of an additional language may impact on the development of the functional architecture serving high-level human cognition. In earlier work, we have published evidence that the whole-brain network topology underpinning the control of interference during language processing may show divergence in response to multilanguage (vs single language) acquisition (Filippi et al., 2011) and, in this context, it is plausible that functional adaptation and qualitative specialisation of cognitive subsystems responsible for selective attention, working memory, and control may develop. Such a perspective is consistent with the adaptive coding model of neural function (Duncan, 2001) in which neurons are hypothesised to adapt their properties in direct response to ongoing goal-relevant demands. In the current context, the claim is that the networks responsible for controlling language and thought in the multilingual brain must adaptively tune themselves to a more diverse range of inputs than is the case in the monolingual brain, and this leads to differences in the functional selectivity and adaptability of the latent variables serving bilingual cognition.
Why would such group differences in the latent variables explaining performance across our tasks emerge in the absence of group differences in levels of performance? The Inhibitory Control Model (ICM; Green, 1986, 1998) and its expansion, the Adaptive Control Hypothesis (ACH; Green & Abutalebi, 2013), propose that inhibition is the key mechanism for bilingual language processing: to produce one language, bilinguals must inhibit the non-target language. The ACH provides the most detailed account of the bilingual language selection processes. According to this model, there are eight different control processes: (1) goal maintenance, (2) conflict monitoring, (3) interference suppression, (4) salient cue detection, (5) selective response inhibition, (6) task disengagement, (7) task engagement, and (8) opportunistic planning, which are recruited differently in relation to the specific linguistic context in use.
The ACH also describes three different interactional contexts: (1) single language, (2) dual language, and (3) dense code-switching. A single-language context operates when languages are used separately (e.g., L1 at home, L2 at work). A dual-language context operates when both languages are mixed (e.g., interactions in which one speaker uses L1 and the other L2). The dense-code switching context occurs when interactions are not only mixed but speakers also "play" with their languages with frequent switches within a single sentence or by creating novel words (e.g., merging two languages in a single word).
For each one of these contexts, the ACH makes distinctive predictions in terms of control process demands. For example, in single- or dual-language contexts, goal maintenance and interference control processes are required, placing an overall increased demand on the speaker's cognitive system. In contrast, in the dense code-switching context, the speaker does not need such a high level of control: both languages can be used freely in the same interaction.
Our observation that verbal fluency performance is relatively independent from performance on standard measures of working memory and fluid intelligence in multilinguals might be considered consistent with the ACH because the task is performed in a single language context (English), and this model proposes that it is only in a dual-language context (i.e., neither single-language nor dense language switching contexts) that significant recruitment of inhibitory control mechanisms will occur in the bilingual mind. Furthermore, given the model prediction that it is only under dual-language contexts that a bilingual advantage is conferred (for a discussion, see Kałamała et al., 2020), the lack of performance differences between our monolingual and bilingual groups across all our tasks (all presented in English) can also be accommodated. Thus, if we assume that all our multilingual participants are frequent (or dense) language switchers and they habitually use both languages in their daily interactions at work and with friends and family, the interpretation seems consistent with the ACH's prediction that active control processes should not be required to monitor the currently active language.
We acknowledge the potential limitations of this study that are associated with drawing inferences on lifespan developmental trajectories on the basis of data which are necessarily cross-sectional. However, we also acknowledge that this approach has been successfully demonstrated in previous research (Karmiloff-Smith et al., 2004; Thomas et al., 2001, 2009). We therefore encourage further work aimed at understanding how second language learning may alter unity and diversity in the functional organisation and network topology of high-level cognitive processes across the lifespan and recommend that such efforts avoid unnecessary focus on the question of whether there is a genuine bilingual cognitive advantage.
In conclusion, our findings suggest that the brain may adapt functionally in response to the demands associated with multilanguage acquisition, encouraging convergence and divergence in the functional specificity of the cognitive latent variables revealed in patterns of covariation at the behavioural level. It therefore follows that functional mechanisms serving cognitive control may differ between multilinguals and monolinguals but, as the present findings suggest, these differences may not manifest in a performance advantage. | 2021-06-08T06:16:40.157Z | 2021-06-06T00:00:00.000 | {
"year": 2021,
"sha1": "a61776dcc94d75506e2bb127a533c8a248ba2314",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/17470218211026792",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8aca8df9ede2d5f2ad809fc9935c4cb8139f8706",
"s2fieldsofstudy": [
"Linguistics",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2540439 | pes2o/s2orc | v3-fos-license | Characterization of the Phytochelatin Synthase of Schistosoma mansoni
Treatment for schistosomiasis, which is responsible for more than 280,000 deaths annually, depends exclusively on the use of praziquantel. Millions of people are treated annually with praziquantel and drug resistant parasites are likely to evolve. In order to identify novel drug targets the Schistosoma mansoni sequence databases were queried for proteins involved in glutathione metabolism. One potential target identified was phytochelatin synthase (PCS). Phytochelatins are oligopeptides synthesized enzymatically from glutathione by PCS that sequester toxic heavy metals in many organisms. However, humans do not have a PCS gene and do not synthesize phytochelatins. In this study we have characterized the PCS of S. mansoni (SmPCS). The conserved catalytic triad of cysteine-histidine-aspartate found in PCS proteins and cysteine proteases is also found in SmPCS, as are several cysteine residues thought to be involved in heavy metal binding and enzyme activation. The SmPCS open reading frame is considerably extended at both the N- and C-termini compared to PCS from other organisms. Multiple PCS transcripts are produced from the single encoded gene by alternative splicing, resulting in both mitochondrial and cytoplasmic protein variants. Expression of SmPCS in yeast increased cadmium tolerance from less than 50 µM to more than 1,000 µM. We confirmed the function of SmPCS by identifying PCs in yeast cell extracts using HPLC-mass spectrometry. SmPCS was found to be expressed in all mammalian stages of worm development investigated. Increases in SmPCS expression were seen in ex vivo worms cultured in the presence of iron, copper, cadmium, or zinc. Collectively, these results indicate that SmPCS plays an important role in schistosome response to heavy metals and that PCS is a potential drug target for schistosomiasis treatment. This is the first characterization of a PCS from a parasitic organism.
Introduction
Schistosomiasis is a chronic disease caused by trematode flatworms of the genus Schistosoma. This neglected, poverty-related disease is found in more than 70 countries. It is estimated that more than 200 million people are afflicted with schistosomiasis, with 779 million at risk of infection, resulting in 280,000 deaths annually [1]. Currently only one drug, praziquantel, is used against schistosomiasis. The low cost of the drug and its efficacy against adult worms of all schistosome species that infect humans have led to its widespread use; currently tens of millions receive annual treatments of PZQ [2]. However, because of high reinfection rates, drugs must be administered on an annual or semiannual basis. It is speculated that the exclusive use of a single drug should hasten the evolution of drug-resistant parasites [2]. In the laboratory, S. mansoni subjected to drug pressure can develop resistance to praziquantel over the course of relatively few passages [3]. There are clinical reports of praziquantel failures in S. mansoni- and S. haematobium-endemic areas of Senegal, Egypt, and Kenya [4,5,6]. The availability of alternatives to praziquantel is extremely limited; they are more expensive, have unacceptable side effects and/or are effective on only one schistosome species [7]. Therefore, there is an urgent need to identify new targets and drugs for schistosomiasis treatment.
A pathogen protein can be considered a good drug target if it is essential for pathogen survival, unique to the pathogen, and druggable. We hypothesized that one such potential target in schistosomes is phytochelatin synthase (PCS). Phytochelatins (PCs) are a family of peptides that chelate heavy metals [8,9]. They are synthesized non-translationally by PCS from glutathione (GSH), a tripeptide (γ-glutamic acid-cysteine-glycine, or γ-Glu-Cys-Gly, or γ-ECG) [10,11] that is itself synthesized by two enzymes, γ-glutamyl-cysteine ligase and GSH synthase [12]. GSH is the most abundant low molecular weight thiol in most cells, provides protection against oxidative damage and plays important roles in cell proliferation, redox regulation of gene expression, xenobiotic metabolism, and several other metabolic functions [13].
Phytochelatins have the general formula (γ-EC)n-G, where n = 2-11, and are formed by the transfer of the γ-EC dipeptide from one GSH to a second GSH with the release of glycine [10,11]:
γ-Glu-Cys-Gly + γ-Glu-Cys-Gly → γ-Glu-Cys-γ-Glu-Cys-Gly + Gly
PCS proteins are γ-EC dipeptidyl transpeptidases (EC 2.3.2.15) [11] and belong to the papain superfamily of cysteine proteases with conservation of the 3D geometry of the catalytic Cys-His-Asp triad [14,15]. Recently, PCS have been identified in a number of additional organisms. Caenorhabditis elegans has a single-copy PCS gene that confers cadmium resistance when heterologously expressed in yeast [16]. Silencing of C. elegans PCS by RNA interference led to a cadmium-hypersensitive phenotype [16]. PCS genes or transcripts have been identified in the parasitic nematodes Brugia malayi and Parascaris univalens [17], in other metazoan organisms, including lower chordates [18], and in several prokaryotic genomes [18,19,20,21]. In contrast, there are no PCS genes encoded in the genomes of mammals, which instead use GSH and metallothioneins (low molecular weight, Cys-rich proteins) to regulate the availability of heavy metals [22].
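Written more generally (in assumed notation, with PC_n denoting a phytochelatin of chain length n), the reaction above is one step of an iterated chain elongation, which is how the longer oligomers with n = 2-11 arise:

```latex
\begin{align*}
\mathrm{PC}_n &= (\gamma\text{-Glu-Cys})_n\text{-Gly}, \qquad n = 2,\dots,11 \\
\mathrm{PC}_n + \mathrm{GSH} &\;\xrightarrow{\;\mathrm{PCS}\;}\; \mathrm{PC}_{n+1} + \mathrm{Gly}
\end{align*}
```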
A number of heavy metals (e.g., Fe, Zn, Cu, Mn) are essential micronutrients for most organisms, and are involved in the catalytic activity or structural stability of numerous enzymes. However, an excess of these heavy metals is often toxic and their cellular levels must be tightly controlled. For instance, an excess of Fe or Cu can lead to increased production of toxic oxygen radicals via the Fenton and Haber-Weiss reactions [23]. Non-essential heavy metals (e.g., Cd, Hg, As) are generally toxic because they displace appropriate metals from enzymes or react with active thiol residues in proteins. Phytochelatin synthase is found in plants and a wide variety of other organisms, including cyanobacteria, algae, ferns, fungi, and nematodes [18,24]. Phytochelatins synthesized by PCS are involved in the chelation of a variety of heavy metals.
Little is known about the regulation of metal availability in schistosomes or the detoxification of toxic heavy metals [25]. Because schistosomes live in an iron-rich environment, regulation of iron is the best characterized of the metals [26]. Previous studies have found that iron is stored in female worms in yolk ferritin [27,28] and iron acquisition in schistosomes has been shown to be accomplished by divalent metal transporters [29].
Here we present an initial characterization of the S. mansoni (Sm)PCS gene and protein. We found that three S. mansoni PCS transcripts are produced by alternative splicing from the unique SmPCS gene, potentially encoding three different PCS proteins. Two of these proteins containing the complete phytochelatin synthase domain resulted in large enhancements of tolerance to cadmium toxicity when expressed in Saccharomyces cerevisiae. We also found that this tolerance required free GSH. Phytochelatins containing 2-5 repeat units are produced by recombinant SmPCS expressed in S. cerevisiae. Multiple SmPCS mRNAs are expressed in all the mammalian phases of S. mansoni life cycle. Expression of SmPCS increases when ex vivo worms are cultured in media containing cadmium, iron, copper, or zinc. Collectively, this study indicates that SmPCS plays an important function in the schistosome-host interaction and is a potential candidate for drug development against schistosomiasis.
Parasite Preparation
Infection of mice (NIH Swiss, National Cancer Institute) with S. mansoni cercariae (NMRI strain) obtained from infected Biomphalaria glabrata snails and perfusion of adult worms (6-7 wk) from mice were done as described [30]. This study was approved by the Institutional Animal Care and Use Committee at Rush University Medical Center (IACUC number 08-058; DHHS animal welfare assurance number A3120-01).
RNA isolation and cDNA synthesis
RNA was isolated from adult worms collected from mice using the TRI reagent (Sigma-Aldrich), with subsequent chloroform extraction and isopropanol precipitation of RNA following the manufacturer's instructions. The quality and quantity of the RNA were checked by A260/A280 in a Shimadzu UV-1800 spectrophotometer. Complementary DNA (cDNA) was synthesized using 1 µg RNA, 1 µl oligo dT (500 µg/ml), and 1 µl 10 mM dNTP mix along with 0.1 M DTT, 1 µl RNaseOut, and 1 µl reverse transcriptase (200 units) from the SuperScript II Reverse Transcriptase Kit (Invitrogen), following the manufacturer's protocol.
Cloning of SmPCS transcripts and analysis of alternate splicing
An adult worm cDNA library (kindly provided by Dr. Philip LoVerde) was used as a PCR template to amplify the SmPCS open reading frame using Taq DNA polymerase and gene-specific primers (see Table 1 for primer sequences). The PCR product was cloned into the pCRII TOPO TA vector (Invitrogen).
Author Summary
Schistosomiasis is a chronic, debilitating disease that affects hundreds of millions of people. The treatment of schistosomiasis relies solely on monotherapy with praziquantel and there is concern that drug-resistant parasites will evolve. Therefore, it is imperative to identify new drugs for schistosomiasis treatment. In this study our goal was to characterize a unique gene of Schistosoma mansoni that may be a candidate for drug targeting to control schistosomiasis. This gene, phytochelatin synthase (PCS), is a single copy gene in S. mansoni but is absent from humans. Our results confirm that schistosome PCS produces phytochelatins that are capable of scavenging and detoxifying heavy metals. The expression of the PCS gene in ex vivo adult schistosome worms was increased by exposure to a number of heavy metals. These results indicate that S. mansoni PCS regulates the availability of metal ions that the worm may be exposed to, either as cofactors in metalloenzymes or as excess metals encountered in the blood stream of their mammalian host. Collectively, these results have important implications for drug development for the control of schistosomiasis. Since other helminth parasites have PCS, drug development targeting this enzyme may have wide applications in the control of multiple neglected diseases.
Modified 5′RACE was performed using the T3 primer of pBlueScript II or the trans-spliced leader primer [31] and a reverse, gene-specific primer. PCR products ligated in the pCRII vector were transformed into the TOP10 Escherichia coli strain following the manufacturer's protocol (Invitrogen) and plated on LB agar plates overnight at 37°C with 50 μg/ml kanamycin. Plasmids were isolated using a Qiagen mini plasmid isolation kit and sequenced on an Applied Biosystems 48-capillary 3730xl DNA Analyzer. Sequencing reactions were done using 100 ng of template plasmid DNA, the M13 forward or reverse primer and BigDye Terminator v3.1 Cycle Sequencing chemistry at the DNA services facility of the University of Illinois at Chicago (www.uic.edu/depts/rrc/dnas/).
Expression analysis of SmPCS by reverse transcriptase PCR
The expression patterns of SmPCS transcripts were analyzed in different stages of the worm life cycle. Total RNA was isolated from eggs, schistosomula, juvenile liver worms, and male and female adult worms using the TRI reagent (Sigma-Aldrich), and cDNA was prepared as described above. One μl of cDNA was used as a template for PCR amplification. Forward primers were designed such that each set of primers would amplify only one transcript of the differentially spliced gene (Table 1). For the mitochondrial transcript, the forward primer was designed from a 35 nucleotide region of exon 2 that is spliced out of the two cytoplasmic transcripts (SmPCS-1 and SmPCS-2). To amplify SmPCS-1 only, the forward primer was designed so that half of its sequence lies immediately upstream of the 35 nucleotide spliced-out region and the other half immediately downstream of it. For the SmPCS-2 transcript, the forward primer was made by taking the 5′ half of its sequence from exon 1 and the other half from exon 3. The reverse primers were designed downstream of the forward primer sequences, with comparable primer melting temperatures, to amplify 690, 553, and 566 bp products from the mitochondrial transcript and cytoplasmic transcripts 1 and 2, respectively. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH, GenBank accession M92359) was also amplified as the PCR template loading control.
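The splice-junction logic of these primers can be illustrated with a short sketch; the sequences, lengths and names below are hypothetical placeholders rather than the actual SmPCS sequences or the primers of Table 1. A primer straddling the junction matches the spliced transcript perfectly but not the unspliced one.

```python
# Sketch of a splice-junction-spanning forward primer (hypothetical sequences only).
upstream_exon2 = "ATGGCTTCAAGTGTTCGA"    # placeholder: sequence just before the 35-nt block
spliced_35nt = "G" * 35                  # placeholder for the 35-nt region removed in SmPCS-1
downstream_exon2 = "TTGGACCAAGAATCCGTA"  # placeholder: sequence just after the 35-nt block

# Illustrative transcripts: the mitochondrial form keeps the 35 nt, SmPCS-1 loses them.
mito_transcript = upstream_exon2 + spliced_35nt + downstream_exon2
smpcs1_transcript = upstream_exon2 + downstream_exon2

# Forward primer with half of its sequence on each side of the splice junction.
half = 10
junction_primer = upstream_exon2[-half:] + downstream_exon2[:half]

print(junction_primer in smpcs1_transcript)  # True: perfect match only on the spliced variant
print(junction_primer in mito_transcript)    # False: the 35-nt insertion breaks the match
```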
Expression of SmPCS during the mammalian life cycle of S. mansoni determined by quantitative PCR

Total RNA was isolated from the different stages of S. mansoni with TRI reagent (Sigma-Aldrich) and cDNA was then made as described above. One μl of cDNA was used as a template in triplicate assays for quantitative (q)PCR using the SYBR Green PCR Core Reagents and an ABI PRISM 7900 sequence detection system (Applied Biosystems) following the manufacturer's instructions. Primers were also designed for S. mansoni GAPDH for use as the internal control. For graphical representation of the qPCR data, raw cycle threshold (ΔCt) values obtained for schistosomula, liver stage, female and male worm transcripts were subtracted from the ΔCt value obtained for egg transcripts using the delta-delta Ct (ΔΔCt) method [32,33], with GAPDH transcript levels serving as the internal standard.

Table 1. PCR primers used in this study.
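For orientation, the arithmetic of the ΔΔCt method is sketched below with invented Ct values rather than data from this study; the 2^(−ΔΔCt) conversion assumes roughly 100% amplification efficiency.

```python
# Relative expression by the delta-delta Ct method (illustrative Ct values only).
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of a target transcript versus a reference condition,
    normalized to GAPDH and assuming a doubling of product per cycle."""
    delta_ct = ct_target - ct_gapdh              # ΔCt for the sample of interest
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # ΔCt for the reference (e.g., eggs)
    delta_delta_ct = delta_ct - delta_ct_ref     # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical example: SmPCS in adult males relative to eggs.
fold = relative_expression(ct_target=24.1, ct_gapdh=19.8,
                           ct_target_ref=25.6, ct_gapdh_ref=20.1)
print(f"fold change vs. eggs: {fold:.2f}")  # ~2.3 with these invented numbers
```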
Analysis of the expression of SmPCS in adult S. mansoni worms in the presence of heavy metals

Adult worms collected by perfusion from mice were incubated in RPMI 1640 medium (Gibco) containing 25 mM HEPES (pH 7), 100 μg/ml streptomycin and 100 U/ml penicillin for 4 days. Cadmium (II) chloride, iron (III) chloride, zinc (II) chloride or copper (II) sulfate at 50 or 100 μM was added to the culture medium. Six pairs of worms per well were incubated in 6-well tissue culture plates at 37°C with 5% CO2. Media and salts were replaced with fresh solutions after 48 hours. After 4 days worms were collected for RNA extraction, followed by qPCR as described above.
Expression of recombinant S. mansoni PCS proteins in yeast and cadmium tolerance assay

Saccharomyces cerevisiae strain K601 was selected to study cadmium tolerance in the presence and absence of variants of recombinant SmPCS. The strain is uracil deficient. The ORFs encoding the SmPCS mitochondrial or cytosolic variants were cloned into the yeast expression plasmid pYES2.1 (pYES2.1 TOPO TA Expression kit; Invitrogen), which carries a GAL promoter. PCR amplification from adult worm cDNA was done using 1 unit of Pfu DNA polymerase (Stratagene) following the manufacturer's instructions. To add the A-overhang for cloning the PCR products into pYES2.1, the PCR products were treated with 2.5 units of Taq DNA polymerase (Promega) for 12 minutes at 72°C. The primers used for cloning are listed in Table 1. The sequences of the recombinant plasmids were verified. Plasmids were transformed into yeast cells following the instructions in the cloning kit. Control yeast cells were prepared carrying the empty pYES2.1 vector. Yeast strains were cultured in uracil-deficient SC minimal medium following the manufacturer's instructions. Recombinant protein was expressed by addition of 2% galactose as the carbohydrate source. For the cadmium tolerance assay, yeast strains were cultured in uracil-deficient SC medium containing CdCl2 from 0 to 1000 μM at 30°C for up to 72 hours. Cell growth was monitored by A600. The DL-buthionine-(S,R)-sulfoximine (BSO, Sigma-Aldrich) solution was prepared by dissolving the BSO in autoclaved water and sterilizing by filtration through a 0.44 micron filter (Millipore).
Determination of PC production: HPLC analysis
Yeast strains expressing SmPCS proteins were grown for 48 hours at 30°C in 500 ml SC minimal medium with 2% galactose. The cells were collected by centrifugation and then broken in 1.5 ml of CelLytic Y (Sigma-Aldrich). The supernatants were cleared by centrifugation and 75 μl were injected for HPLC analysis after dilution with an equal amount of water. HPLC analysis was done on a Jupiter C18 column (Phenomenex, Torrance, CA) using an HP1100 HPLC system (Agilent Technologies) at the University of Illinois-Chicago proteomics core facility. Mobile phase A consisted of 0.1% trifluoroacetic acid in Milli-Q water and mobile phase B consisted of 0.1% trifluoroacetic acid in acetonitrile. The column was equilibrated with 5% solvent B. After sample injection, the column was eluted with a linear gradient from 5% solvent B to 100% solvent B in 40 min at a flow rate of 1 ml/min. Eluates were collected and assayed with 500 μM 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) in 50 mM potassium phosphate buffer (pH 8) to detect the free sulfhydryls of cysteine, γ-GluCys, GSH and PCs at 412 nm in a Multiskan Spectrum plate reader [34,35,36]. Synthetic PC2 (γ-GluCys-γ-GluCys-Gly), PC3 (γ-GluCys-γ-GluCys-γ-GluCys-Gly) and PC4 (γ-GluCys-γ-GluCys-γ-GluCys-γ-GluCys-Gly) were prepared at the proteomics core facility at the University of Illinois-Chicago.
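As a rough guide to how an A412 reading from this assay converts into a free-thiol concentration, the sketch below applies the Beer-Lambert law with a commonly quoted extinction coefficient for the released TNB anion; the coefficient, path length and absorbances are assumed illustrative values, not parameters of this study.

```python
# Beer-Lambert estimate of free sulfhydryl concentration from a DTNB (Ellman) assay.
EXT_COEFF_TNB = 14150.0  # M^-1 cm^-1 at 412 nm (commonly cited value; an assumption here)
PATH_LENGTH_CM = 0.55    # effective optical path of a filled microplate well (assumed)

def thiol_concentration_uM(a412_sample, a412_blank):
    """Approximate free -SH concentration (micromolar) from background-corrected A412."""
    corrected = a412_sample - a412_blank
    molar = corrected / (EXT_COEFF_TNB * PATH_LENGTH_CM)  # A = epsilon * c * l
    return molar * 1e6

# Hypothetical readings for a DTNB-positive HPLC fraction and a buffer blank.
print(f"{thiol_concentration_uM(0.42, 0.05):.1f} uM free thiol")  # ~47.5 with these values
```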
Determination of PC production: mass spectrometric analysis
The HPLC eluates that were positive in the DTNB assay were selected for mass spectrometric analysis. α-Cyano-4-hydroxycinnamic acid (CHCA) was used as the matrix for matrix-assisted laser desorption/ionization time of flight mass spectrometric analysis (MALDI-TOF MS) of the peptide solutions. Sample solutions were mixed 1:1 with the matrix solution (10 mg CHCA in 1 mL aqueous solution of 50% acetonitrile containing 0.1% TFA). Aliquots (2 μl) were spotted onto a MALDI-TOF target and analyzed using a Voyager-DE PRO mass spectrometer (Applied Biosystems) equipped with a 337 nm pulsed nitrogen laser. Peptide mass was measured using positive-ion linear mode over the range m/z 50-3000. Analyses were conducted at the University of Illinois-Chicago proteomics core facility.
Genomic Analysis of SmPCS
Analysis of the S. mansoni genome sequence [37] indicated that there is a single PCS protein encoded in the genome (Smp_072740; Schistosoma mansoni Genome Project, http://www.genedb.org/Homepage/Smansoni). One genome scaffold (Smp_scaff000249) encodes the entire PCS gene. No orthologous or paralogous sequences were identified in the S. mansoni genome databases. The SmPCS gene encodes a predicted protein of 591 amino acids with a theoretical pI of 7.81 and Mw of 67,355 Daltons, which is considerably larger than PCS from other eukaryotes (Fig. 1A). The larger size of SmPCS is due to both N- and C-terminal extensions. Alignment of the PCS domain of SmPCS (the N-terminal half of the protein) with PCS domains from other organisms indicates that there is high sequence identity (36%-53%) and conservation of the catalytic triad of Cys, His and Asp in SmPCS (Figure S1). The identity of the PCS domain of SmPCS with the bacterial PCS proteins is lower, only 15%-26%. In addition, three of the four Cys residues in the N-terminal portion of eukaryotic PCS proteins thought to bind cadmium and in some way be involved in the activity or activation of PCS [38] are present in SmPCS. An unrooted neighbor-joining tree (Fig. 1B) shows that SmPCS is phylogenetically most related to PCS from other metazoans, with PCS from plants, bacteria, protozoa and yeast clustering on separate branches.
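Percent-identity figures of this kind are computed over an existing alignment; the toy sketch below shows one common convention (identities divided by aligned, gap-free columns) on made-up sequences, not on the PCS alignments of Figure S1.

```python
# Percent identity between two pre-aligned sequences (toy example, not SmPCS data).
def percent_identity(aligned_a, aligned_b):
    """Identities divided by aligned columns where neither sequence has a gap ('-')."""
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

print(percent_identity("MCSLA-QRT", "MCTLAGQKT"))  # 75.0 on this toy alignment
```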
Alternative splicing of SmPCS transcripts
Analysis of the SmPCS gene using a variety of predictive bioinformatic tools suggests that the gene product may be targeted to the mitochondria. MitoProt II 1.0a4 predicts SmPCS has a probability of export to mitochondria of 0.9063, the Predotar v. 1.03 value for mitochondrial targeting is 0.19, about 2x above the expected value, and TargetP 1.1 predicts SmPCS has a probability of export to mitochondria of 0.857. There is also a strong, but not conclusive, prediction that mitochondrial SmPCS is secreted through a non-classical secretory process (SecretomeP 2.0, NN-score = 0.491; non-classically secreted proteins should obtain an NN-score exceeding the normal threshold of 0.5). Using adult worm cDNA as the template, three alternatively spliced SmPCS transcripts were identified (Fig. 2). The complete SmPCS gene has 5 exons and 4 introns. Exon 1 was present in all transcripts as part of the 5′ non-coding region of the gene. The transcript that encodes the complete SmPCS ORF (591 amino acids) has a 12 amino acid long mitochondrial leader sequence, including the start methionine residue, in exon 2. One cytosolic SmPCS variant (SmPCS-1) is formed by alternative splicing resulting in the deletion of 35 nucleotides from exon 2. This alternative splicing places the mitochondrial targeting sequence in a different reading frame than the predicted protein, with translation starting at a different methionine in exon 2. Complete splicing out of the second exon generates the second cytosolic transcript of SmPCS (SmPCS-2), which lacks the Cys residue of the C-H-D catalytic triad. The predicted start methionine for this splice variant is located in exon 3. Using the trans-spliced leader sequence in reverse transcription-PCR, no evidence was found that SmPCS transcripts are trans-spliced (results not shown).
Analyses of the expression of PCS in S. mansoni
Expression of the SmPCS gene in different stages of the worm life cycle was determined using quantitative reverse transcription-PCR (qPCR). GAPDH was used as the internal control. qPCR was done in triplicate on two biological replicates and is represented graphically (Fig. 3). Expression was found to be slightly higher in adult worms and schistosomula than in eggs and liver-stage worms. Due to the high degree of similarity between the three transcripts, it was not possible to make specific primers to run qPCR for each transcript individually. Similar difficulties were encountered in designing primers for qPCR to distinguish the four PCS transcripts of Sesbania rostrata [39]. The results of reverse transcription-PCR showed that all three transcripts are present at approximately the same abundance in the five stages of the life cycle analyzed (results not shown). We confirmed the specificity of the primers used in the reverse transcriptase PCR by running separate PCR reactions using sequenced plasmid clones, each specific for a particular transcript, as templates. The transcript-specific primers were only capable of amplifying the expected sized products when transcript-specific clones were used as the template (data not shown).
Expression of SmPCS in adult S. mansoni in response to heavy metals
Adult ex vivo worms were cultured in the presence of heavy metals and SmPCS expression was analyzed by qPCR using the ΔΔCt method with GAPDH as the internal control (Fig. 4). In the presence of heavy metals an increase in PCS expression compared to the non-treated control was found. The highest increase in expression, almost 5 fold, was seen when worms were cultured with 100 μM iron or copper. Almost a 3-fold increase in expression occurred in worms incubated with 100 μM cadmium. At 50 μM concentrations of metals, the expression of SmPCS was significantly above the control, between 1.5-2 fold. A smaller, but significant, increase in PCS expression was seen in response to zinc exposure at both 50 μM and 100 μM.
Tolerance to cadmium toxicity conferred by expression of SmPCS in yeast
Saccharomyces cerevisiae was chosen to investigate the activity of SmPCS because it is sensitive to cadmium toxicity and produces negligible amounts of phytochelatins [16,40,41]. When the mitochondrial variant of the SmPCS gene is expressed in yeast, cadmium tolerance is dramatically increased compared to yeast cells carrying the empty vector (Fig. 5A). Yeast cells expressing mitochondrial SmPCS were capable of growth in 1000 μM CdCl2 (Fig. 5B). By comparison, control cells carrying the empty vector were unable to grow in 50 μM CdCl2 (Fig. 5A). To investigate the importance of the N-terminal and C-terminal ends of SmPCS, an N-truncated SmPCS was made by deleting the first 65 N-terminal amino acids of SmPCS, including the mitochondrial signal sequence. When assayed in yeast, the growth of cells expressing the N-truncated SmPCS protein (Fig. 5C) was the same as yeast cells expressing the mitochondrial SmPCS (Fig. 5B). A C-truncated SmPCS construct was made by deleting the C-terminal 100 amino acids. Yeast carrying the C-truncated SmPCS gene were unable to grow in the presence of CdCl2 above 50 μM (Fig. 5D). It should be noted that both the N- and C-truncated proteins contain the complete phytochelatin synthase activity domain. The SmPCS cytosolic transcript that lacks the Cys residue of the C-H-D catalytic triad was also assayed in yeast for cadmium tolerance. Under the same growth conditions, yeast carrying this transcript was highly sensitive to cadmium exposure (Fig. 5E). These results suggest that SmPCS with the complete catalytic triad is capable of synthesizing PC, which then scavenges and neutralizes cadmium, permitting yeast growth. If phytochelatins are formed, there should be a dependence on GSH in the cadmium tolerance induced by SmPCS expression. To examine whether GSH is involved, 500 μM L-buthionine sulfoximine (BSO), an inhibitor of γ-glutamyl cysteine ligase, the first step in GSH synthesis [12], was added to yeast cell cultures carrying the mitochondrial variant of the SmPCS gene. BSO or CdCl2 alone caused no reduction of yeast growth. However, cell growth was greatly reduced when both 500 μM BSO and 500 μM CdCl2 were present in the culture medium (Fig. 5F). The same result was found for the yeast cells expressing the cytoplasmic variant of the SmPCS protein having the complete catalytic triad.

Figure 4. PCS transcript expression in adult ex vivo Schistosoma mansoni in response to heavy metal exposure. Quantitative reverse transcription (q)PCR amplification of PCS transcripts was performed in triplicate on three biological replicates. For graphical representation of the qPCR data, raw cycle threshold (Ct) values obtained for metal-exposed worms were compared to transcript levels in control worms not exposed to any metals using the delta-delta Ct (ΔΔCt) method [32,33], with glyceraldehyde phosphate dehydrogenase (GAPDH) expression levels serving as the internal standard. The concentrations of the heavy metal salts tested were 50 μM (50) and 100 μM (100). Bars labeled 'a' are significantly higher (p<0.05) than the control; bars labeled 'b' are significantly higher (p<0.05) than the control and 'a'. doi:10.1371/journal.pntd.0001168.g004
Detection and identification of the phytochelatins formed in yeast by SmPCS activity
Yeast strains carrying the full-length SmPCS gene or the empty vector were grown for 48 hours at 30°C in the presence or absence of 500 μM CdCl2 in yeast induction medium. Cell extracts were fractionated by HPLC and the HPLC eluates were analyzed by the classical DTNB assay [34] (Fig. 6). The results clearly show that five peaks (fractions 5, 7, 12, 15 and 18) were generated from yeast cell extracts expressing the SmPCS gene, while the yeast cell extracts made from cells with the empty vector had two peaks (fractions 5 and 7). Synthetic PCs fractionated by HPLC under the same conditions were identified in the same fractions as shown for the yeast extracts in Figure 6 (results not shown).
DTNB-reactive fractions were selected for further examination by mass spectrometry for peptide identification. MS analysis identified the presence of Cys and γ-GluCys in fraction 5 and of GSH in fraction 7 of both yeast cell extracts (Fig. 5). It appears that phytochelatin production in yeast cells expressing the SmPCS protein significantly depleted GSH levels, as a smaller GSH peak is seen compared to the corresponding sample from yeast cells with the empty vector.
Thiol-reactive peaks in fractions 12, 15 and 18 were only found in yeast cells expressing SmPCS. Mass spectrometric analysis detected phytochelatins with 2 (γ-GluCys-γ-GluCys-Gly), 3 (γ-GluCys-γ-GluCys-γ-GluCys-Gly) and 4 (γ-GluCys-γ-GluCys-γ-GluCys-γ-GluCys-Gly) repeats of γ-GluCys in fractions 12, 15, and 18, respectively (Fig. 6). Identification of phytochelatins by mass spectrometry confirms the activity of recombinant SmPCS in the production of PCs that can then act as cadmium scavengers. Synthetic PCs were found at the same MW by MALDI-TOF as shown for the yeast extracts in Figure 6 (results not shown).
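The expected neutral masses of these peptides can be estimated from standard monoisotopic residue masses, as sketched below; because the γ-linkage has the same elemental composition as an α-peptide bond, ordinary residue masses apply. The printed values are approximations for orientation only and are not the measured m/z values of this study.

```python
# Approximate monoisotopic masses of phytochelatins PCn = (gamma-Glu-Cys)n-Gly.
RESIDUE = {"Glu": 129.04259, "Cys": 103.00919, "Gly": 57.02146}  # monoisotopic residue masses
WATER, PROTON = 18.01056, 1.00728

def pc_mass(n_repeats):
    """Neutral monoisotopic mass of (gamma-Glu-Cys)n-Gly."""
    return n_repeats * (RESIDUE["Glu"] + RESIDUE["Cys"]) + RESIDUE["Gly"] + WATER

for n in (2, 3, 4):
    m = pc_mass(n)
    print(f"PC{n}: M = {m:.2f} Da, [M+H]+ = {m + PROTON:.2f}")
```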
Discussion
We have shown that a homologue of PCS is present in S. mansoni. Genome analyses indicate that humans and other mammals do not have genes for PCS. In mammals, heavy metal availability/sequestration is accomplished by metallothioneins and GSH [42]. It has recently been determined that humans use GSH, not PCs, in conjunction with an ATP-binding cassette (ABC) transporter to detoxify cadmium [43]. Because PCS is absent from the human genome, we hypothesized that schistosome PCS may be a suitable drug target for schistosomiasis treatment. To establish this, we present here an initial characterization of S. mansoni PCS. We find that SmPCS has conserved amino acid residues in its active site, suggesting that the catalytic mechanism of SmPCS should be similar to that of characterized PCS proteins. When expressed in yeast, SmPCS provides protection from cadmium toxicity, allowing yeast to multiply in the presence of high concentrations of cadmium in a GSH-dependent manner. SmPCS is expressed in all mammalian life cycle stages and its expression is increased in response to the presence of heavy metals. Collectively, these results indicate that PCS plays an important role in the regulation of heavy metal availability in S. mansoni.
Although the overall similarity of SmPCS to other PCS proteins is relatively low, on the order of 28%, if one considers only the most conserved, N-terminal half of the proteins, identity is ~45%, which is similar to the conservation seen when comparing plant, fungal and nematode PCS proteins. When the complete PCS domain is present in the SmPCS construct expressed in yeast, with or without the mitochondrial targeting sequence, large increases in cadmium tolerance were observed, with growth occurring even in 1000 μM cadmium. However, expression of a form of SmPCS with truncation of the 100 C-terminal amino acids (but with the complete PCS catalytic site) does not provide cadmium tolerance. The C-terminal domains of PCS proteins from eukaryotic organisms are poorly conserved and their function in enzyme activity has not been clearly defined. It has been speculated that the C-terminal region of PCS proteins may have functionally diverged in individual organisms. PCS in cyanobacteria, which lack the C-terminal domain, have been reported to primarily catalyze the hydrolysis of GSH and GSH conjugates, in which GSH is converted to γ-glutamylcysteine and γ-glutamylcysteine S-conjugates [20,21]. However, the C-terminal domain in PCS from eukaryotes has a role in PC synthesis, since an Arabidopsis thaliana PCS mutant truncated in the C-terminal domain has a cadmium-sensitive phenotype and C-terminally truncated PCS has decreased thermal stability and responsiveness to heavy metals [44,45]. It has been suggested that heavy-metal binding by cysteine residues present in the C-terminal domain has a role as a sensor for heavy metals [46,47,48]. These cysteine residues may function by binding the heavy metal, with its subsequent transfer into closer proximity to the catalytic domain [49]. The C-terminal domain in SmPCS contains eight cysteines, and deletion of the one hundred C-terminal amino acids removed five of these cysteines. Expression of this mutant protein did not allow the growth of yeast in cadmium, suggesting that this region plays an important function in the synthesis of PCs by SmPCS. Alternatively, deletion of this region of the protein may have resulted in a PCS protein with poor stability and a short half-life. The precise functions of the C-terminal region of SmPCS remain to be determined.
PCS proteins are structurally and catalytically related to Clan CA cysteine proteases, including papain and lysosomal cathepsins from animals [15]. Both classes of enzymes require a nucleophilic cysteine, which is made more nucleophilic by a 3D-proximal histidine residue [15]. The third residue in the catalytic triad of cysteine proteases is an asparagine, which is sometimes replaced by an aspartic acid [50]. In PCS proteins, an aspartic acid aligns with the conserved asparagine/aspartic acid in cysteine proteases. Site-directed mutagenesis was used to determine that the catalytic triad of Cys-56, His-162 and Asp-180 is absolutely required for phytochelatin synthesis in A. thaliana PCS [51]. Recently, the structure of a prokaryotic PCS-like protein was described, confirming that PCS proteins belong to the papain superfamily of cysteine proteases and display conservation of the 3D geometry of the catalytic cysteine-histidine-aspartic acid triad [14]. The catalytic cysteine-histidine-aspartic acid triad is conserved in SmPCS. Deletion of the cysteine of the triad results in a protein that is unable to confer resistance to cadmium toxicity in yeast, indicating that a similar catalytic mechanism must occur in SmPCS as in other PCS proteins. However, transcripts spliced so that they lack this catalytic cysteine were identified in cDNA populations from adult S. mansoni worms. The function of proteins expressed without a complete catalytic triad is not clear. It should be noted that alternative transcript splicing resulting in partial deletions of the catalytic triad has been seen in other organisms as well. The tropical legume Sesbania rostrata has been reported to have four transcripts, including two that lack the complete catalytic triad. These variants were not able to confer cadmium tolerance when expressed in S. cerevisiae [39]. Splice variants of PCS genes in Lotus japonicus have also been reported [52]. We find that all three PCS transcripts, including cytoplasmic transcript 2, which lacks the complete catalytic triad, are expressed in all the life stages of S. mansoni interacting with its human host. Expression of the cytoplasmic transcript 2 is intriguing but its function remains to be determined.
Although it is difficult to imagine that S. mansoni parasites are routinely exposed to elevated levels of toxic heavy metals within the controlled environment of their definitive host, dietary influx could potentially expose adult and juvenile liver worms in the hepatic portal system to elevated levels of both essential and non-essential heavy metals. In addition, schistosomes degrade host hemoglobin as a source of both heme and amino acids. However, excess heme is toxic due to the ability of its reduced iron to generate oxygen radicals and other toxic reactive species. The polymerization of heme to a nontoxic, insoluble waste product, hemozoin, is important for schistosomes and other hematophagic parasites [53]. SmPCS may be crucial for heavy metal sequestration in the worm. We hypothesize that PCS and phytochelatins are involved in the detoxification of iron produced during the breakdown of host hemoglobin in the parasite gut. Several metals, notably Cu, Fe, Mn, and Zn, are cofactors in essential metalloenzymes within the mitochondria [54]. Since some PCS appears to be targeted to mitochondria, it may have a role in metal homeostasis in this cellular compartment in schistosomes.
Reverse genetic approaches have been used to verify the role of PCs and PCS in resistance to heavy metals in a number of systems. RNA interference silencing of PCS in C. elegans produced a cadmium-hypersensitive phenotype [16]. Deletion of the pcs gene in Schizosaccharomyces pombe produced strains that were ten times more sensitive to cadmium and more sensitive than the wild type to arsenic, but no increased sensitivity to copper, zinc, mercury, selenium, silver, or nickel was seen [55], although others found increased sensitivity to copper in S. pombe pcs knockouts [41]. Arabidopsis cad1 mutants are cadmium-hypersensitive and deficient in phytochelatins [55] and are mutated in the gene for PCS1 [55]. Attempts to silence SmPCS in adult S. mansoni worms have not been successful thus far (results not shown). However, the role of SmPCS in defense against heavy metals may be inferred from analysis of factors inducing its expression. Therefore, it was of interest to know whether SmPCS expression shows any changes after exposure of worms to heavy metals. Worms exposed to cadmium had increased SmPCS expression. Increased PCS expression also occurred in response to exposure to copper and iron and, to a lesser extent, to zinc. These results indicate that SmPCS is involved in the processes that regulate the availability of copper and iron, potentially including iron released from heme, and suggest a role in the detoxification of other heavy metals. Increases in PCS expression in response to copper, iron and zinc have been reported in a variety of plants [35,56-58].
Our investigations suggest that the SmPCS protein has similar activity to PCS proteins from other organisms. Expression of S. mansoni PCS in all life stages of S. mansoni interacting with its human host strongly suggests that it is an essential gene and therefore it can be considered as a prospective target for new drugs. Previous studies have found significant increases in iron storage proteins and zinc transporters in skin stage parasites relative to cercariae [59], reinforcing the importance of metal homeostasis in schistosomes. The druggability of PCS proteins is unknown. No PCS inhibitors are known because their identification has not been a priority of previous research. The structural relationship of PCS proteins to cysteine proteases could be exploited to identify and develop inhibitors of PCS. Cysteine proteases are currently targets for many diseases including cancer, inflammatory diseases, malaria, Chagas disease, schistosomiasis and other parasitic diseases [60-62] and chemical libraries targeting cysteine proteases are available. We could potentially tap into the rich array of known cysteine-protease inhibitors to identify PCS inhibitors. This is a goal of our future studies on S. mansoni PCS. Since both parasitic nematodes and trematodes (and potentially cestodes) have PCS genes/proteins, the identification of compounds targeting PCS activity could have broad impacts on drug development for a number of important human pathogens, which are largely neglected by the pharmaceutical industry. Figure S1 Multiple sequence alignment of the phytochelatin synthase domain containing the active site of S. mansoni PCS with the phytochelatin synthase domains from other organisms. The amino acid residues of the phytochelatin domains were aligned using the ClustalW multiple sequence alignment program. Identical residues are shown with a black background and conservative changes are shown with a gray background. The conserved catalytic triad in the phytochelatin domains, C-H-D, is shown in red and cysteine residues thought to be involved in cadmium binding are shown in yellow. One cysteine residue is substituted by a lysine in S. mansoni PCS and is indicated by a blue circle. The accession numbers for the sequences used are shown in Figure 1B. (TIF) | 2014-10-01T00:00:00.000Z | 2011-05-01T00:00:00.000 | {
"year": 2011,
"sha1": "298c10f24f79719f9466b6c94ba27dd520e02942",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0001168&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "863f0309e54ccae35a010aa6cea2f0a400a25287",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
136258776 | pes2o/s2orc | v3-fos-license | Studies and research on the crack testing for brazed aluminium alloys specimens
The scope of this paper is the identification of an optimum technological solution for brazing aluminum alloys, using crack-tested specimens. To obtain conclusive results, these tests are conducted on two different sets of specimens. We thus obtain two sets of data, which we then compare. These tests are part of the standardized series of tests required by the ASME standards. They are called exfoliation tests and are used to determine where the crack occurs: in the base material or in the filler material. Thus, we can determine whether the cracking is cohesive or adhesive.
Introduction
Aluminum and its alloys are used in all industrial branches. In order to obtain a permanent joint between one or more components of this material, we can use various technologies such as welding, brazing, bonding, riveting, etc. In all cases, the technologies used have advantages and disadvantages that we can only highlight with destructive and non-destructive testing [3]. Considering that the brazing operation is a welding-related process, at the boundary between welding and bonding operations, peel tests are part of the standardized series of tests.
In order to determine the values at which the aluminum specimens break, two types of specimens will be presented in detail in this paper. To carry out this destructive test procedure, we used the INSTRON 8801 universal fatigue testing machine shown in Figure 1, in accordance with the applicable standards; the loading speed was 1 mm/min and the gripping force was 10% of the force applied to the specimens. Local deformations were recorded with an extensometer with a 50 mm gauge length, applied to the specimens until the specific deformation reached 3%. The specimens were fastened with a series 3520 clamping system, whose characteristics are described by the manufacturer. The device is shown in Figure 2.
Experimental Data
To determine the force settings, the test instrument must be calibrated on a set of samples from the same batch. Thus, 13 samples were randomly selected and cut to the same size. The tests produced several graphs; by averaging these interim results, we can determine the value range at which the device should be set.
The results are presented in Figure 3 and in Table 1.
Crack testing of brazed aluminum alloy specimens was carried out on two sets of specimens, referred to hereafter as "folded specimens" and "experimental specimens". The first set consists of 6 specimens that were brazed using an optimum technology: pickling with Aloclene 100 solution, applying the filler material on both sides of the base material, and using spectral acetylene and a neutral flame.
A single operator assembled all the specimens throughout, in order to avoid human error. The "folded specimens" (Figure 4) were taken from the same material as the "experimental specimens".
The specimens were brazed head to head by folding the ends over a length of 5 mm, in accordance with SR EN 12797 [2], which refers to the specimens used for crack testing (peel testing). The brazing area is constant across all the specimens.
The brazing technology used for "folded specimens" 3 and 6 is the following: pickling the aluminum alloy in Aloclene 100 solution, depositing the filler material on both sides of the base material, and using spectral acetylene and a neutral flame [5]. The tests performed on the test machine shown in Figure 1 resulted in a set of values for the breaking forces of the folded specimens. These are presented graphically in Figure 5. The correlation between maximum tensile stress and maximum breaking force is shown in Table 2.

Table 2. Maximum tensile stress versus load for the "folded specimens".
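The conversion between breaking force and maximum tensile stress behind Table 2 is the force divided by the loaded cross-section; the sketch below shows the arithmetic with assumed specimen dimensions, which are placeholders rather than the dimensions of the specimens tested here.

```python
# Maximum tensile stress from a measured breaking force (assumed cross-section).
def max_tensile_stress_mpa(breaking_force_n, width_mm, thickness_mm):
    """sigma = F / A, returned in MPa (N/mm^2)."""
    area_mm2 = width_mm * thickness_mm
    return breaking_force_n / area_mm2

# Hypothetical specimen: 25 mm wide, 2 mm thick, breaking at 5.4 kN.
print(f"{max_tensile_stress_mpa(5400.0, 25.0, 2.0):.1f} MPa")  # 108.0 MPa for these values
```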
The second set of specimens used for crack testing comprises the "experimental specimens". These are presented in Figure 6. Technologies that gave very good results under non-destructive examination were considered representative. Figure 7 shows the three graphs corresponding to the 3 "experimental specimens". It should be noted that these specimens were brazed using the same technology as "folded specimens" 3 and 6.
It can be seen that the values of the breaking forces are similar to those shown for "folded specimens" 3 and 6.
Conclusion
Standardized peel tests refer only to folded specimens. These are difficult to produce because the edge must be bent over a relatively small area. At the same time, rapid bending may change the properties of the aluminum alloy through deformation in the bending zone and thus cause a spurious fracture when the loads are applied.
Experimentally, we demonstrated that a set of "experimental specimens" gives similar results, provided that the following conditions are met: all samples are taken from the same batch of material, all samples are brazed using the same technology and, most importantly, the same technician is used, so as not to introduce additional human error.
In the case of the folded and the experimental specimens, the crack appears in the base material, not in the brazed joint.
For both specimen sets, the dispersion of the results validates the experiments. The technology consisting of pickling with Aloclene 100 solution, applying the filler metal on both sides of the base material, and using spectral acetylene with a neutral flame can be considered optimal, because the crack testing (peel testing) showed that it yields the maximum strength of the brazed joint. | 2019-04-29T13:17:43.531Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "31a1c0ac3c66718215f5962ed39c12f0a38928b0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/227/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4c8000936c8caba27be2413fe78a9e16015f6b8c",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
45152454 | pes2o/s2orc | v3-fos-license | An extended phase space for Quantum Mechanics
The standard formulation of Quantum Mechanics violates locality of interactions and the action reaction principle. An alternative formulation in an extended phase space could preserve both principles, but Bell's theorems show that a distribution of probability in a space of local variables cannot reproduce the quantum correlations. An extended phase space is defined in an alternative formulation of Quantum Mechanics. Quantum states are represented by a complex valued distribution of amplitude, so that Bell's theorems do not apply.
Introduction
According to the EPR argument [6], the standard formulation of Quantum Mechanics (QM) is incomplete. The authors did not consider the possibility that measurements could have non local effects. On the other hand, Bell's theorems [2] prove that probabilistic theories with local hidden variables contradict the quantum (and experimental [1]) correlations.
The action reaction principle (ARP) is a fundamental ingredient of Mechanics. In its simplest formulation, it states that when two systems interact both of them depart from their isolated evolution law. Otherwise, there would be changes of state in a system without any cause.
In section 2, a simple argument shows that the ARP is violated in the standard formulation of QM. If this happens in the framework of Classical Mechanics, the phase space is incomplete, and additional variables of state will restore the ARP. The same hypothesis, incompleteness of QM, is considered in this paper. Section 3 presents another paradox between the ARP and indirect measurements on an elementary particle, when some virtual paths are discarded. If interactions are local, we conclude that there is an accompanying system, the de Broglie wave [5]. Additionally, there must be variables of state for the corpuscular component, the point particle. The quantum particle is a composite.
The thesis of Bell's theorems can be avoided if some hypothesis is not fulfilled by the mathematical model. A quantum state will be represented, as in standard QM, by a complex valued distribution of amplitude in a space of state variables. In section 4 a generic quantum system, with an arbitrary family of magnitudes as variables of state, is considered, generalizing the case of spin variables presented in [8]. Relative frequencies are obtained by application of Born's rule [4] to the marginal amplitudes, so that the typical interference phenomenon appears in superpositions. Complex amplitudes, like (real instead of non-negative) quasi-probabilities [9], do not fulfil the hypotheses of Bell's theorems, which therefore do not apply to this formalism.
The phase space of three particular systems is described in the following sections: isolated spinless point particle, the paradigmatic two slit experiment, and spin variables, both for an individual particle and for a composite of two in a total null spin state (Bell's experiment). Predictions of standard QM are reproduced.
The proposed formalism is just a new interpretation of QM, up to the wave like accompanying system 1 and its observable effects when it is spatially isolated from the corpuscular component. Vacuum must be the background of the wave like system. The last section contains some remarks about its physical interpretation.
The action reaction principle in QM
The following analysis shows that the standard formulation of QM either is incomplete or it violates the ARP.
Let A be a self-adjoint operator, representing a physical magnitude of a quantum system in the associated Hilbert space, and |a_1> an eigenvector of A with eigenvalue a_1, representing the initial state of the system. A measurement of magnitude A is performed; the result of the measurement is a_1 and the final state is |a_1>, a trivial projection of state. The pointer has moved from the "neutral" initial position to "a_1". One of the systems in interaction (the apparatus) has changed its state and the other has not. The action reaction principle (ARP) is violated. The ARP is violated in standard QM because the maximal number of commuting operators is lower than the dimension of the classical phase space. 2 For another arbitrary magnitude B, [A, B] ≠ 0, we can express the initial state as |a_1> = Σ_j z_j |b_j>. The system in state |a_1> does not have a definite value of B. If B instead of A were measured, some value, say b_1, would be obtained. Let us consider the hypothesis that b_1 is a hidden value in the quantum state; being B arbitrary, the same hypothesis applies to all magnitudes. An extended phase space contains variables giving account of the result of arbitrary hypothetical measurements. When A is measured, some of these variables change value and the ARP is preserved.
We describe quantum states through complex valued distributions of amplitude in a space of state variables. In the space of N-tuples of values of N physical magnitudes P ≡ {(a_i, b_j, c_k, . . .)}, the orthodox quantum formalism assigns a family of distributions of amplitude Σ_i z_i |a_i> 3, Σ_j z_j |b_j>, Σ_k z_k |c_k>, etc., on each space {a_i}, {b_j}, {c_k}, . . . respectively, sets of eigenvalues of each magnitude. The correspondence between these distributions is determined by the change of bases.
If λ 1 = (a 1 , b 1 , c 1 , . . .) were a complete description of state of an individual system, we would expect the existence of a classical distribution of probability P (λ) in P, representing an ensemble of independent systems. Bell's theorems show that P (λ) does not exist. Wave particle duality and the two slit experiment suggest that there is an additional, accompanying wave like system, the de Broglie wave.
Elementary particle as a composite
Let z_1 |a_1> + z_2 |a_2> be the vector of state of an elementary particle, such that the |a_1> and |a_2> components have spatially separated wave packet representations, i.e., follow different virtual paths 4. A particle detector is placed in one of the paths, say |a_1>. There is no detection (an indirect, negative measurement). The final state of the system is |a_2>, different from the initial one. The detector has not (apparently) changed its state, and the ARP is violated.
There are possibly unobserved variables in the detector that change value during the process. After all, the detector is designed to show an observable reaction when a particle hits it, i.e., through a local interaction. Perhaps the particle following the other path interacts with the detector in an unknown and non local way.
Let us suppose that interactions are local. The particle following path |a 2 > does not interact with the spatially separated detector, located at path |a 1 >. There must be another physical system, the de Broglie wave, that locally interacts with the detector, because there is a change of state in the (composite) system caused by the detector. Perhaps we could find an observable reaction in an appropriately designed detector, looking for a wave like system 5 .
We will suppose that an isolated elementary particle is a composite of corpuscular and wave like subsystems. In the previous experiment, the corpuscular subsystem has a definite position coordinate (the spatial wave packet |a_2>), while the wave subsystem has two components, at |a_1> and |a_2>. The distribution of amplitude represents an ensemble of physical systems, and a statistical correlation between wave and corpuscular subsystems through Born's rule. Notice that the negative result of measurement is not just a reset of information about the state of the system, because there is an observable change of state 6.
In the alternative formulation sketched in the previous section, λ 1 = (a 1 , b 1 , c 1 , . . .) is not a complete description of state of an individual system. In the former example the particle has precise value of the position magnitude, x 2 at the wave packet |a 2 >, and does not interact (locally) with the detector. Instead, the wave component at |a 1 > does, and later on the interaction between a perturbed wave and the particle gives account of a new final state of the particle. The correlation (interaction) between particle and wave can not be represented by a distribution of probability on the space of state variables; interference phenomena appear in the superposition of complex amplitudes, but not with classical probabilities.
Extended phase space
In Classical Mechanics the phase space of a system is usually finite dimensional and physical magnitudes are real functions on it; there are functional relations between them, as energy and angular momentum depending on position and momentum, etc. In standard QM, magnitudes are represented by non commuting operators, and can not generically have precise values jointly, common eigenvectors. The classical functional dependence between magnitudes is not fulfilled by the quantum eigenvalues 7 .
The extended phase space of a quantum system is defined in two steps. First, a set P of N -tuples of all possible (eigen)values of N physical magnitudes. Magnitudes functionally dependent of two or more commuting ones are redundant, while those functionally dependent of non commuting magnitudes are not. All functions of the classical phase space could be in the list, modulo redundancies, for a maximal resolution 8 . In a second step, a distribution of amplitude is defined in this set P.
Let H be the Hilbert space of a quantum system in orthodox QM, and F = {A, B, C, . . .} a family of self adjoint operators representing physical magnitudes of the system.
A state of the physical system is represented by a pair [λ 0 , Z], λ 0 ∈ P. The first component λ 0 determines the value of all magnitudes in this state. λ 0 contains therefore the result of every hypothetical direct measurement of an arbitrary magnitude. Z represents the composite 9 , and the (possibly stochastic) interaction between wave and particle subsystems.
Being λ_0 hidden, we can at most calculate relative frequencies for any maximal subset of compatible magnitudes from the distribution Z; in fact, we can get relative frequencies for any subset of magnitudes, but if they are incompatible the corresponding relative frequencies are just formal, not observable. The relative frequencies are obtained in two steps, computing first the marginal amplitudes from Z, and then applying the usual Born's rule. For example, relative frequencies for a joint measurement of A and B 10 are found as follows. The marginal amplitudes are Z(a_i, b_j) = Σ_{c_k, d_l, ...} Z(a_i, b_j, c_k, d_l, . . .), and Born's rule gives P(a_i, b_j) :: |Z(a_i, b_j)|². Bell's theorems apply to hypothetical P′(a_i, b_j, c_k) whose marginals were the observable P(a_i, b_j), and prove that such P′ do not exist.
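The two-step prescription can be made concrete with a small numerical sketch using arbitrarily chosen amplitudes: the marginal amplitude is summed over the unmeasured magnitude first and only then squared, which is what allows interference terms to survive in the observable distribution, in contrast with a classical mixture where the squares are summed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex distribution of amplitude over triples (a_i, b_j, c_k) of eigenvalues.
Z = rng.normal(size=(3, 4, 5)) + 1j * rng.normal(size=(3, 4, 5))

# Step 1: marginal amplitude over the unmeasured magnitude C.
Z_ab = Z.sum(axis=2)                      # Z(a_i, b_j) = sum_k Z(a_i, b_j, c_k)

# Step 2: Born's rule applied to the marginal amplitude.
P_ab = np.abs(Z_ab) ** 2
P_ab /= P_ab.sum()                        # relative frequencies for a joint A, B measurement

# Squaring before summing (a classical mixture over c_k) gives a different answer.
P_classical = (np.abs(Z) ** 2).sum(axis=2)
P_classical /= P_classical.sum()

print(np.allclose(P_ab, P_classical))     # False: the cross (interference) terms matter
```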
In the correspondence with the standard formalism we must take into account that a state of the system is defined by a ray in the Hilbert space. Normalization factors and arbitrary phases in the definition of a basis must be taken into account. For example, if we get from Z(a_i, b_j, c_k, d_l, . . .) the marginals Z(a_i, b_j), the usual representation is a unit vector |S> = N^{-1} Σ_{ij} Z(a_i, b_j) |ã_i, b̃_j>, where N is the normalization factor and |ã_i, b̃_j> = exp(iθ_ij)|a_i, b_j> relates eigenvectors generated in the alternative formulation with some "established", used by convention, eigenvectors in the standard formulation 12.
9 An ensemble of individual particle plus wave systems, where λ_0 takes all allowed values.
11 Even if we normalize Z, Σ |Z(a_i, b_j, c_k, . . .)|² = 1, the associated marginals are not generically normalized.
12 Two vectors |S_1> = Σ_k z_k |c_k> and |S_2> = Σ_k exp(iφ_k) z_k |c_k> represent different states, although P_1(c_k) = P_2(c_k), because relative frequencies for some other magnitude will not match. On the other hand, |S_1> = Σ_k (exp(iφ_k) z_k)[exp(−iφ_k)|c_k>] is obviously the same physical state represented in two orthonormal bases of eigenvectors |c_k> and |c̃_k> = exp(−iφ_k)|c_k>, which can be used interchangeably.
Entanglement
If we apply the previous formalism to a quantum system made of two or more particles, there will be some physical magnitudes associated to individual particles (each particle position, momentum, etc.) and others associated to the composite (potential energy, angular momentum, . . . ). We can define projections from the space P of N-tuples of eigenvalues of the whole family of magnitudes onto spaces of eigenvalues of magnitudes in each subfamily, P_I for particle I, P_II for particle II, . . . , P_comp for global magnitudes of the composite. The distribution of amplitude Z(λ), λ ∈ P, can be written as Z(λ_I, λ_II, . . . , λ_comp). It is, as before, an ensemble representation of the composite, accompanying de Broglie wave and corpuscular components of the system. Magnitudes of an individual particle are compatible with magnitudes of another, so that distributions of probability such as P(a^I_i, b^II_j) are observable, and are obtained through the marginals Z(a^I_i, b^II_j) = Σ_{λ : π_{A^I}(λ) = a^I_i, π_{B^II}(λ) = b^II_j} Z(λ), where π_{A^I} : P → A^I and π_{B^II} : P → B^II are the natural projections from P = A^I × · · · × B^II × · · · onto the corresponding factors, sets of eigenvalues. When two systems (e.g., elementary particles) interact, some physical magnitudes can become correlated and some constraints appear in P, i.e., equations fulfilled by the eigenvalues, the state variables. The correlation with the wave like component is now expressed as a distribution in the subspace of P determined by the constraints. The simplest example is correlation in one magnitude of each particle, say A^I + A^II = A_T, with the system in a particular eigenstate of A_T, e.g., |a_T> of eigenvalue a_T. Magnitudes (eigenvalues) of both particles fulfil a^I_i + a^II_i = a_T 13, and a subset P_{a_T} ⊂ P is determined by the constraint. Z is a distribution of amplitude in P_{a_T}; equivalently, it can be defined to vanish elsewhere. Consequently, P(a^I_i, a^II_j) = 0 whenever a^I_i + a^II_j ≠ a_T.
13 A common index i characterizes the correlated pair of eigenvalues.
Independent measurements of A^I on particle I and A^II on particle II, on a correlated pair, have null probability of giving a result that does not fulfil the correlation. The state of the system, [λ_0, Z], contains hidden variables a^I_i = a = π_{A^I}(λ_0), a^II_i = a_T − a = π_{A^II}(λ_0), determining the result of measurements of A^I and A^II, and marginals determine the observable relative frequencies, P(a^I_i, a^II_i) :: |Z_i|². These marginals also define the orthodox state |S> = Σ_i Z_i |a^I_i> |a^II_i>. We can generalize the previous case to an arbitrary number of correlated magnitudes, to obtain a subset of correlated states P_corr ⊂ P, and a distribution of amplitude in P_corr. A^I correlated to A^II, and B^II correlated to B^I, are now indirectly correlated 14; P(a^I_i, b^II_j) is obtained through marginals and Born's rule, as usual. Equivalence with the standard formulation will be established when the alternative representations Σ_i z_i |a^I_i> |a^II_i>, Σ_j z_j |b^I_j> |b^II_j>, Σ_{ij} z_{ij} |a^I_i> |b^II_j> and Σ_{ji} z_{ji} |b^I_j> |a^II_i> of the orthodox vector of state are all obtained as marginals from the common distribution Z in P_corr, |z_i| :: |Z_i|, |z_j| :: |Z_j|, |z_{ij}| :: |Z_{ij}|, |z_{ji}| :: |Z_{ji}|, i.e., modulo normalization and arbitrary phases in the definition of the bases.
14 Through the restriction to P_corr in the sum for marginals.
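A minimal numerical sketch of the simplest constrained case, with illustrative amplitudes: two two-valued magnitudes with a^I + a^II = 0, an amplitude supported only on the constraint subset, a joint distribution that vanishes on pairs violating the correlation, and single-particle marginals that remain uniform.

```python
import numpy as np

values = [+1, -1]                 # eigenvalues of A^I and of A^II
a_T = 0                           # constraint: a^I + a^II = a_T

# Amplitude Z(a^I, a^II) supported only on the constraint subset P_{a_T}.
Z = np.zeros((2, 2), dtype=complex)
Z[0, 1] = 1 / np.sqrt(2)          # pair (+1, -1)
Z[1, 0] = -1 / np.sqrt(2)         # pair (-1, +1), singlet-like relative sign

for i, ai in enumerate(values):
    for j, aj in enumerate(values):
        if ai + aj != a_T:
            assert Z[i, j] == 0   # vanishes off the constraint subset

P_joint = np.abs(Z) ** 2          # Born's rule on the (already marginal) amplitude
P_I = P_joint.sum(axis=1)         # marginal for particle I alone

print(P_joint)                    # 0.5 on the two anticorrelated pairs, 0 elsewhere
print(P_I)                        # [0.5, 0.5]: each single-particle outcome equally likely
```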
Spinless particle
Relevant magnitudes of an isolated and spinless point particle are position and momentum coordinates; energy, for example, is redundant E(p). A distribution of amplitude Z(x, p) in the classical phase space, together with precise values (x 0 , p 0 ), represents the physical state of the system. Z(x, p), similarly to Wigner's quasi probability distribution [9] W (x, p), should reproduce the standard representation, either orthodox amplitudes Ψ(x) and ξ(p), or probabilities P (x) = |Ψ(x)| 2 and P (p) = |ξ(p)| 2 respectively, through marginals.
A simple (not necessarily unique) solution exists with Ψ(x) the standard position wave function and ξ(p) its Fourier transform momentum representation. Its marginals over p and over x are in correspondence with Ψ(x) and ξ(p) respectively, up to normalization factors 15, and with the correspondence between bases |x̃> = exp(i x p_0/ħ)|x> and |p̃> = exp(−i x_0 p/ħ)|p>, with |x> and |p> the standard ones. The physical interpretation is a point particle with definite position and momentum, although both magnitudes cannot be jointly and consistently measured and their values must therefore be partially hidden 16, and an accompanying wave, for a wave particle composite whose correlation is represented, in the ensemble, by the distribution of amplitude.
In the same way that Ψ(x) is, in the path integral formalism [7], the integral of elementary amplitudes e^{iS/ħ}, S the action integral, over the set of all virtual paths with end point x, we can by analogy interpret Z(x, p) as the result of a path integral over virtual (or abstract) paths with end point x and final momentum p. In the general case, abstract paths with common value λ contribute to the amplitude Z(λ).
By linearity, quantum evolution can be expressed as Ψ(x, t_2) = ∫ dy K(x, t_2; y, t_1) Ψ(y, t_1), and it is a matter of interpretation to associate the kernel K(x, t_2; y, t_1) to a path integral over all virtual paths joining (y, t_1) with (x, t_2). In relativistic mechanics, the integral can be applied to an arbitrary spatial sheet (in the past or future), and causality is manifest when restricting the domain of integration; the value of the amplitude at a given space time event depends exclusively on values in a spatial sheet inside the past (or future) light cone. Similarly, in the proposed formalism, Z(x, p, t_2) = ∫ dy dq K(x, p, t_2; y, q, t_1) Z(y, q, t_1) can be easily generalized to the relativistic framework. The generic evolution with arbitrary magnitudes is Z(λ, t_2) = Σ_{λ′} K(λ, t_2; λ′, t_1) Z(λ′, t_1), for an adequate "path integral like" kernel. Evolution of λ_0(t), the corpuscular variables, in a particular system is hidden and stochastic.
The two slit experiment
In the two slit experiment, a third variable is relevant because of interaction with the slits. The state of the system is [(x 0 , p 0 , S 0 ), Z(x, p, S)], with x and p position and momentum variables at the final screen, and S ∈ {L, R} the slit variable, i.e., position at an earlier time.
The marginal amplitude for the final position is Z(x) = ∫ dp Z(x, p, L) + ∫ dp Z(x, p, R), i.e., Z(x) = Z_L(x) + Z_R(x), and gives account of the diffraction pattern, as usual. The particle hits the final screen at position x_0, with momentum p_0 and coming from slit S_0. The wave particle interaction, with wave components coming from both slits, is represented through marginals and Born's rule, i.e., the wave superposition "guiding" the particle trajectory, in analogy with Bohm's Mechanics [3] 17. In the measurement of the final position, both the momentum p_0 and the slit S_0 are hidden. Let us suppose an additional system is located at the R slit and locally interacts with the system, either with both the particle (when going through R, S_0 = R) and the wave component at R, or exclusively with the R wave component if the particle goes through L, S_0 = L. There will be a change of state, an impulse on the particle (for S_0 = R) and a phase shift on the R wave component. The new marginal is Z_L(x) + e^{iθ} Z_R(x) 18, and we get a displacement of the virtual diffraction pattern.
If the particle is a photon and the additional system is an optical plate with a fixed phase shift θ, the diffraction pattern is displaced but preserved. If the additional system is able to show an observable reaction to the presence of the particle, the phase shift on the wave will be stochastic, and it will appear independently of a positive or negative (indirect-measurement) result for the detection of the particle; the statistical average over θ destroys the diffraction pattern.
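A minimal numerical sketch, assuming a simple far-field (Fraunhofer) model for the slit amplitudes Z_L and Z_R, makes the two regimes explicit: a fixed phase shift θ displaces the fringes of |Z_L + e^{iθ} Z_R|^2 while preserving their contrast, whereas averaging over a random θ removes the interference term and leaves |Z_L|^2 + |Z_R|^2.

```python
import numpy as np

# Far-field amplitudes from two slits at +/- a, observed at screen coordinate x
# (simple Fraunhofer-type plane-wave model; the geometry is an assumption of this sketch).
k, a, D = 2 * np.pi / 0.5e-6, 50e-6, 1.0      # wavenumber, slit half-separation, screen distance
x = np.linspace(-0.02, 0.02, 2001)
Z_L = np.exp(1j * k * (-a) * x / D)
Z_R = np.exp(1j * k * (+a) * x / D)

def pattern(theta):
    """Intensity |Z_L + e^{i theta} Z_R|^2 for a given phase shift on the R component."""
    return np.abs(Z_L + np.exp(1j * theta) * Z_R) ** 2

def visibility(P):
    return (P.max() - P.min()) / (P.max() + P.min())

P_0 = pattern(0.0)                                      # undisturbed fringes
P_fixed = pattern(np.pi / 2)                            # fixed plate: fringes displaced, contrast kept
thetas = np.random.default_rng(0).uniform(0, 2 * np.pi, 2000)
P_avg = np.mean([pattern(t) for t in thetas], axis=0)   # stochastic shift: average over theta

print("visibility, theta = 0      :", round(visibility(P_0), 3))
print("visibility, theta = pi/2   :", round(visibility(P_fixed), 3))
print("visibility, theta averaged :", round(visibility(P_avg), 3))
# The averaged pattern approaches |Z_L|^2 + |Z_R|^2 = 2: no interference term survives.
```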
The result of the measurement allows us to distinguish particles arriving at the final screen from the L or the R slit, and to observe the relative frequencies P(x, L) and P(x, R), i.e., P(x|S_0 = L) and P(x|S_0 = R). Even if both components of the wave are present we can, for all practical purposes, project onto the one accompanying the corpuscular component of the system (footnote 19). The projection of state appears here as a practical rule, grounded on decoherence between de Broglie wave components. Footnote 17: There is, however, no deterministic law of evolution for the particle; the wave-particle interaction is most probably stochastic.
Footnote 18: For simplicity, we do not consider a modified |Z_R|. Footnote 19: With a known phase shift θ, both wave components interfere, for a total distribution P(x, θ) = |Z_L(x) + e^{iθ} Z_R(x)|^2. Photons arriving at x can be (formally) classified according to the conditional probabilities of coming from the (hidden-variable) L or R slit, which are proportional to |Z_L(x)|^2 and |Z_R(x)|^2; the average over the stochastic phase shift under measurement of the slit variable simplifies with the denominator, and the standard projection rule is reproduced.
Bell's experiment
Let us consider a spin-1/2 particle. The maximal family of spin variables contains the spin value in any direction of space; spin operators in two different directions are incompatible, so none of them is redundant. The elementary spin state is a map λ(n) from S^2 onto {+, −}, spin up or down, and must be antisymmetric, λ(−n) = −λ(n), because the spin operators fulfill S_{−n} = −S_n (footnote 20). We assign a fixed quaternion (footnote 21) (instead of complex) amplitude to spin s in direction n, Z(s, n) = s N(n), where N(n) is the quaternion N = n_x I + n_y J + n_z K associated to the unit vector n = n_x i + n_y j + n_z k, with I^2 = J^2 = K^2 = −1, IJ = −JI = K, etc., and I* = −I, …
We assign to the spin state λ, with all values of spin prescribed, a quaternion amplitude defined as a kind of "spin path" integral, a sum of elementary amplitudes. We next restrict to a finite, but arbitrary, number of directions {n_i}, i = 1, …, N, so the former integral becomes a finite sum. With this assignment of amplitudes we can now consider particular ensembles. For example, the quantum state with spin up in direction n_1 (footnote 22) is represented by the ensemble of elementary states λ_1 fulfilling λ_1(n_1) = +, which defines a subset P_{+1} of the phase space of spin states P = {λ}. We either restrict the sum to P_{+1} or declare Z(λ) = 0 when λ(n_1) = −.
We have the marginals Z(s_1 = −) = 0 and Z(s_1 = +) ≠ 0, and the orthodox state |+_1⟩ is obtained. If we calculate the marginals Z(s_2 = ±), i.e., the distribution for the basis {|+_2⟩, |−_2⟩}, we get, up to a global factor and using the variables λ ≡ (s_1, s_2, s_3, …, s_N), marginal amplitudes proportional to N_1 ± N_2. It is easy to check from the relation N*N = 1 and the quaternion product rules that |N_1 ± N_2|^2 = 2(1 ± n_1·n_2). The orthodox quantum state in an arbitrary basis is reproduced; in other words, marginals from the global distribution of amplitude match the change of bases in the standard Hilbert space, up to normalization and irrelevant phases in the definition of eigenvectors. Z(λ_1) is the distribution for the ensemble of states with spin up in direction n_1. An individual state has hidden variables λ_1^0 determining the result of a measurement of spin in an arbitrary direction. Obviously λ_1^0(n_1) = +, and P(±_k) is found from marginals and Born's rule, P(±_k) ∝ |N_1 ± N_k|^2. Recall that in this interpretation the particle is accompanied by a (here we can say spin) wave, and wave-like interference cannot be reproduced by a hypothetical classical probability distribution. Footnote 20: In other words, the map λ acts on the projective space by determining an orientation on each line. Footnote 21: The algebra of quaternions is the Lie algebra of rotations. Footnote 22: Each orthodox spin state for an individual particle is an eigenvector of a spin operator.
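The quaternion relations used above can be checked numerically. The sketch below builds N(n) = n_x I + n_y J + n_z K for two random unit vectors, verifies |N_1 ± N_2|^2 = 2(1 ± n_1·n_2), and confirms that the normalized Born-rule weights reproduce the orthodox probabilities cos^2(θ_12/2) and sin^2(θ_12/2); the helper functions exist only for this check.

```python
import numpy as np

def quat_from_vec(n):
    """Pure quaternion N(n) = n_x I + n_y J + n_z K, stored as (w, x, y, z)."""
    return np.array([0.0, n[0], n[1], n[2]])

def quat_norm2(q):
    """Squared quaternion norm |q|^2 = w^2 + x^2 + y^2 + z^2."""
    return float(np.dot(q, q))

rng = np.random.default_rng(0)
n1 = rng.normal(size=3); n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=3); n2 /= np.linalg.norm(n2)
N1, N2 = quat_from_vec(n1), quat_from_vec(n2)

c = float(np.dot(n1, n2))                              # n_1 . n_2 = cos(theta_12)
plus, minus = quat_norm2(N1 + N2), quat_norm2(N1 - N2)
print("|N1 + N2|^2 =", plus,  " vs 2(1 + n1.n2) =", 2 * (1 + c))
print("|N1 - N2|^2 =", minus, " vs 2(1 - n1.n2) =", 2 * (1 - c))

# Born-rule weights for a spin prepared "up" along n1 and measured along n2:
P_up, P_down = plus / (plus + minus), minus / (plus + minus)
theta = np.arccos(c)
print("P(+_2) =", P_up,   " vs cos^2(theta/2) =", np.cos(theta / 2) ** 2)
print("P(-_2) =", P_down, " vs sin^2(theta/2) =", np.sin(theta / 2) ** 2)
```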
Let us consider two particles in a total null-spin state, S_n^I + S_n^II = S_n^T (for all n) with eigenvalue 0. In the space of elementary spin states {λ_T = (λ^I, λ^II)}, the constraint λ^I + λ^II = 0 determines the subset P_0 of allowed states. The amplitude assigned to each allowed state is the former (fixed) quaternion amplitude, and Z_T, restricted to P_0, reproduces the quantum results (footnote 23). By definition, the marginals Z(s_k^I, s_k^II) with s_k^I = s_k^II vanish for all directions n_k, so that perfect (anti)correlation exists in each correlated pair.
For two different directions, the probabilities P(s_1^I, s_2^II) for up/down results in direction n_1 for particle I and direction n_2 for particle II are calculated in two steps: the marginal amplitudes are obtained first, up to a global factor, and the relative frequencies are then found through Born's rule. Recall that hidden variables λ_T^0 ∈ P_0 accompany the distribution Z_T for a complete description of an individual state of the composite. The result of measurements in arbitrary directions on each particle of a correlated pair is prescribed from the generation event.
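As an illustration only, the sketch below assumes a joint amplitude Z(s_1, s_2) proportional to s_1 N(n_1) − s_2 N(n_2); this particular form is an assumption of the sketch, chosen to be consistent with the anticorrelation constraint λ^I + λ^II = 0. It checks that Born's rule then yields the orthodox singlet probabilities P(s_1, s_2) = (1 − s_1 s_2 n_1·n_2)/4 and the correlation E = −n_1·n_2.

```python
import numpy as np

def N(n):
    """Pure quaternion N(n) = n_x I + n_y J + n_z K as a 4-vector (w, x, y, z)."""
    return np.array([0.0, n[0], n[1], n[2]])

def unit(v):
    return v / np.linalg.norm(v)

def prob_table(n1, n2):
    """Assumed joint amplitude Z(s1, s2) ~ s1*N(n1) - s2*N(n2); Born's rule, then normalize."""
    amps = {(s1, s2): s1 * N(n1) - s2 * N(n2) for s1 in (+1, -1) for s2 in (+1, -1)}
    weights = {k: float(np.dot(a, a)) for k, a in amps.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

rng = np.random.default_rng(1)
n1, n2 = unit(rng.normal(size=3)), unit(rng.normal(size=3))
P = prob_table(n1, n2)
c = float(np.dot(n1, n2))

for (s1, s2), p in sorted(P.items()):
    print(f"P({s1:+d},{s2:+d}) = {p:.4f}   quantum singlet value: {(1 - s1 * s2 * c) / 4:.4f}")
E = sum(s1 * s2 * p for (s1, s2), p in P.items())
print("correlation E =", round(E, 4), "  quantum singlet value -n1.n2 =", round(-c, 4))
```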
It is interesting to see explicitly the analogy between the interference behavior in the two-slit experiment and the interference-like behavior of the "spin" wave. If we consider another direction n_3 of hypothetical measurement on particle II, the corresponding marginals are obtained, up to a global factor, in the same way, and from them we get a formal probability distribution P(s_1^I, s_2^II, s_3^II), just as in the two-slit experiment. The analogy can be extended to Bell's theorems if we consider the following trivial inequality: two strictly positive probability distributions for independent events cannot generate a marginal distribution with zeros. If we consider a point particle arriving at the final screen from either the R or the L slit as an isolated system, then particles through R and particles through L are independent events, and relation (24) holds, with λ, λ' additional variables, e.g., momentum. The diffraction pattern, which contains zeros, cannot then be reproduced. An additional wave-like system accompanying the point particle can account for the phenomenon.
Physical interpretation
Standard QM contains the maximal amount of information we can have about a quantum system. In an interaction, magnitudes that do not commute with the Hamiltonian of interaction can change value. Measurement is an interaction. Therefore, only a maximal family of commuting magnitudes can be jointly and consistently measured. However, standard QM seems to be incomplete:
1. The spatial region where the wave function has significant values is much larger than the observed region where the corpuscular system is located.
2. Measurement, the projection of state, violates the principle of local interactions.
3. The action-reaction principle, i.e., symmetric effects on both systems in an interaction, is also violated.
An extended phase space contains hidden variables: unknown values of some magnitudes of the system. Measurement of any of these variables unavoidably destroys the information about previously known, precise values of other variables of state. Hypothetical probability distributions on the extended phase space are not observable. Moreover, Bell's theorems forbid the existence of probability distributions whose marginals match the quantum predictions. Points, the elements of this phase space, therefore cannot be a complete description of an individual system of the ensemble.
In the proposed formulation of QM, a complex distribution of amplitude is the representation of an ensemble of quantum systems, each individual system being a composite of corpuscular and wave-like subsystems. The state variable λ_0 of the corpuscular component(s) prescribes, for each individual system of the ensemble, the result of an arbitrary direct measurement, but it is not a complete description of the composite. Marginal amplitudes and Born's rule determine the observable relative frequencies in the ensemble. The interference that appears in the sum of amplitudes generates wave-like behaviour, and it is interpreted as a statistical representation of the correlation between the de Broglie wave and the corpuscular component in the composite.
Isolated effects of a wave component in indirect measurements, when the particle is spatially separated from the apparatus, are predicted in this formulation in order to preserve both locality and the action-reaction principle. These effects are absent in standard QM, where only the results and probabilities of direct measurements are described. Such effects have not been observed; the equivalence of its predictions with the orthodox theory for direct measurements makes this formalism an alternative interpretation of QM.
The physical character of the de Broglie waves is a deeper issue. The vacuum is considered a relevant physical system, which stores energy and interacts with matter and radiation at both ends of the length scale. In Quantum Field Theory it is the source of annoying divergences, and of the Casimir effect. In Cosmology, the dark (vacuum) energy density is responsible for the observed accelerated expansion of the universe. At intermediate scales, the vacuum energy density could be behind dark matter, if it were not homogeneously distributed. All we can say about dark matter is that it is dark, stores energy and has gravitational pull, which fits well with the vacuum hypothesis. Vacuum fluctuations could be the physical interpretation of the de Broglie waves, as in Quantum Field Theory. The de Broglie wave accompanies all particles and quantum systems, i.e., it must be independent of the known fields such as, e.g., the electromagnetic one. It must be associated to an aether.
There is a formal aether, a Lorentz-invariant one. It is the volume form on the light cone of momentum space at each point of spacetime, understood as a distribution of density of particles with null rest mass. The total volume of the cone is obviously infinite (in both density of particles and energy), and represents a classical counterpart of the quantum divergences. The total momentum vanishes by symmetry, in all inertial frames! Local, spatially inhomogeneous displacements from this equilibrium distribution store a (positive or negative) finite energy increment, as well as nonvanishing momentum. In a classical fluid there are relevant wave-like phenomena at all length scales, from Brownian motion to Earth-scale tides. It is appealing to consider a parallelism with quantum fluctuations and dark energy. The classical aether hypothesis (rejected because there is no distinguished rest frame) has a relativistically invariant and divergent counterpart, which could also be relevant in the theory of gravitation. | 2015-09-23T15:18:12.000Z | 2015-09-23T00:00:00.000 | {
"year": 2015,
"sha1": "24e283116d11d4c51222f9fd7e769c2addd7ae2b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "24e283116d11d4c51222f9fd7e769c2addd7ae2b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
55172500 | pes2o/s2orc | v3-fos-license | The Vortex Aircooler System for Cutting Materials in Aerospace Industry
In the aerospace industry there are situations in which liquid cooling of the cutting zone cannot be used, because of a special material or a special part design. In such cases a jet of compressed air is often used instead, but this creates problems with tool life, machining quality and cutting time. To solve these problems we use the vortex effect, which was discovered less than one hundred years ago. The device that produces the vortex effect is called a vortex tube. The vortex tube of an air-cooler system for cutting was designed on the basis of previous research [1-5], and experimental data provided verification. The construction was made of metal and is now in use at JSC «Salut», Samara.
INTRODUCTION
Modern machining takes place in the presence of a standard lubricating coolant, which is fed directly into the cutting zone through channels in the tool or through external tubes. However, a large group of materials, owing to their low corrosion resistance, has to be treated with a special coolant, and it should be noted that in real mechanical production it is not always possible to stop the machine to replace the coolant. Taking into account that all aerospace production is small-scale, frequent replacement of the coolant is economically inappropriate. There is also a whole group of structural materials whose mechanical processing is carried out without the use of a lubricating cooling liquid at all. There are also cases when the use of coolant must be abandoned not because of the properties of the material of the part, but because of the design itself. For example, the drilling of large-sized sheet-metal parts is carried out without the use of coolant because, in view of the large dimensions of the component, the liquid will not circulate in the machine as it should, but will drain over the sheet metal into the non-working space, which is unacceptable. It is also worth noting that not all machines are equipped with a coolant supply system.
Traditionally, in these cases a jet of compressed air is used for cooling, but its efficiency is several times lower, which negatively affects tool durability, the quality of the machined surface and the labor intensity of machining. To partially remove these negative effects, it is proposed to use the vortex effect, which was discovered less than a hundred years ago. This phenomenon, realized in a special device called a vortex tube, allows a compressed gas flow to be divided into hot and cold streams [6]. The cold stream is fed into the cutting zone, while the hot stream is discharged to the periphery. The vortex tube is used in the design because of its reliability and the simplicity of the device.
CALCULATION OF THE MAIN PARAMETERS OF THE DEVICE
The arrangement of the countercurrent vortex tube is shown in Fig. 1. It comprises the following elements: a tube 2 with a tangential supply of compressed air through the scroll (cochlea) 1; one end of the tube is closed by a lid with a diaphragm 4, through which the cold flow exits, while the hot stream emerges through the other end of the tube past the throttle 3. Despite the relative simplicity of the device, the vortex effect is not fully understood. At present, the most accurate and logical description of the vortex effect in the countercurrent tube is provided by the "vortex interaction hypothesis". There are two vortices in the tube: the first, "free" vortex displaces air toward the throttle, and the second, "forced" vortex acts in the axial direction and forces air through the diaphragm. As a result of the intense turbulent interaction of these vortices, cooled gas exits through the diaphragm and hot gas through the throttle [1]. There are a large number of papers, including [1-5, 7], in which differential equations describing the processes of turbulent energy and mass exchange are presented. The numerical solution of these equations explains the processes qualitatively but not quantitatively. It is also worth noting that the results of turbulent energy-mass-exchange simulations depend strongly not only on the boundary conditions but also on the methods used to solve the equations. Because there is no consensus on either the initial system of equations or the methods of solution, we abandon numerical calculation and turn to empirical calculation methods. Here it is worth noting that different researchers of the vortex effect have proposed their own techniques, which, unfortunately, give different values of the geometric parameters of the device for the same initial data.
At present there is a large number of not only theoretical but also experimental studies of the vortex effect. Using the body of experimental data representing the influence of various factors on Δt_x (the difference between the temperature at the tube inlet and the temperature of the cold flow), experimental dependences of Δt_x on the following factors were found [2]: the area of the inlet nozzle F_n; the vortex tube diameter D; the relative diameter of the diaphragm d̄ = d/D; the air pressure P in front of the vortex tube; the ratio μ of the cold flow rate to the flow rate at the inlet to the tube; and the tube length in calibers L̄ = L/D. Having considered these characteristics, we can conclude that Δt_x is a function of six variables. Since the considered construction was tested only with a spiral inlet, and different nozzle areas were investigated on a tube of the same diameter, it is advisable to introduce, instead of F_n, the relative equivalent diameter d̄_eq of the nozzle inlet of the vortex tube. Taking this into account, we rewrite the function as Δt_x = f(d̄, P, D, μ, L̄, d̄_eq). Since the graphical dependences considered are continuous functions (on the ranges studied), we expand this function in a Taylor series. The Taylor series for a function of six variables contains total differentials of increasing order, where the total differential of the n-th order is taken in a neighborhood of the point (x_01; …; x_06).
We draw attention to the fact that, from the graphical dependences given in [2], we can determine only partial derivatives of the n-th order with respect to one variable. The complete differential of the first order contains only first-order partial derivatives and can easily be determined, while the differentials beginning with the second order contain derivatives with respect to several variables. By the property of the Taylor series, each subsequent order of the differential has less "weight" than the previous one; therefore, for approximate calculations, the expansion is often limited to the first order. The last remark enables us to conclude that the effect of the individual variables on the function is significantly stronger than their mutual influence. Neglecting, therefore, partial derivatives with respect to several variables, we can rewrite (4) in terms of first-order single-variable terms, where x_1 … x_6 are the variables and x_01 … x_06 are particular values of these variables. For the expansion in a series according to the formula for six variables, we choose well-traced values of the variables on the considered graphical dependences (the common point): L̄_0 = 15; P_0 = 0.5 MPa; D_0 = 28 mm; d̄_0 = 0.5; μ_0 = 0.5; d̄_eq,0 = 0.25. The value of Δt_x at this point is 38 K.
The values of the coefficients a_1 … a_10 were obtained from the experimental data [2] by means of a PC and are summarized in Table 1. The resulting dependence (7) showed the best agreement with the empirical technique given in [1]. By this method, the basic geometric parameters of the vortex tube for cooling an object with a heat release of up to 1 kW are: vortex tube diameter 17.9 mm; nozzle height and width 3.5 mm and 7 mm; diaphragm aperture diameter 9.6 mm; vortex tube length not less than 161.1 mm.
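The structure of the first-order estimate can be written out explicitly, as in the minimal sketch below; the coefficient values a_i are placeholders (set to zero) rather than the values of Table 1, so only the form Δt_x ≈ Δt_x,0 + Σ a_i (x_i − x_i,0) around the common point (Δt_x,0 = 38 K) is illustrated.

```python
# First-order Taylor estimate of the cooling effect dt_x around the common point given
# in the text (dt_x,0 = 38 K). The coefficients a_i below are PLACEHOLDERS; the actual
# values belong to Table 1 and are not reproduced here.
base_point = {"L_bar": 15.0, "P": 0.5, "D": 28.0, "d_bar": 0.5, "mu": 0.5, "d_eq": 0.25}
dt_x0 = 38.0            # K, value of dt_x at the base point

a = {                    # partial derivatives d(dt_x)/d(x_i) at the base point (placeholders)
    "L_bar": 0.0, "P": 0.0, "D": 0.0, "d_bar": 0.0, "mu": 0.0, "d_eq": 0.0,
}

def dt_x_linear(point):
    """dt_x ~ dt_x0 + sum_i a_i * (x_i - x_i0); cross-derivative terms are neglected,
    since the mutual influence of the variables is much weaker than their individual effect."""
    return dt_x0 + sum(a[k] * (point[k] - base_point[k]) for k in base_point)

# Example: estimate dt_x for a slightly higher inlet pressure, everything else unchanged.
query = dict(base_point, P=0.6)
print("estimated dt_x at P = 0.6 MPa:", dt_x_linear(query), "K")
```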
DEVELOPMENT OF THE CONSTRUCTION AND INTRODUCTION INTO PRODUCTION
Based on the obtained parameters, the design of the product was developed in the form of a 3D model, shown in Fig. 2. The figure shows a vortex tube 1 with a tangential gas supply, a collet with a throttle 3, a handle 2 for adjusting the temperature and the ratio of the cold and hot flows, and a poly-hinge hose 4 that feeds the jet into the cutting zone.
The construction has the following features: 1. Reliability, owing to the absence of moving elements during operation; 2. Ease of connection to a pneumatic system through a standard factory fitting; 3. Convenient supply of cold air to the cutting area through a poly-hinge hose; 4. A wide adjustment range.
According to the developed 3D models of the individual parts and of the vortex tube assembly, drawings were produced. The design was realized in metal in the tool shop of JSC «Salut»; since the throttle 3 (Figure 2) was equipped with a fitting clamp for a standard pneumatic tube, connection to the factory compressor station caused no difficulties. The temperature and the ratio of the hot and cold flows were adjusted freely with the handle 2. Moreover, with the throttle opened wide, the tube operated as a vacuum pump, that is, air was sucked in through the diaphragm. The air temperature and pressure at the compressor station were measured automatically and displayed. During the experiment the pressure was 0.67 MPa and the temperature was 98 °C, so the product was tested with rather hot air that contained moisture. Temperature measurements were made in the range of μ from 0.4 to 0.8 using a chromel-alumel thermocouple (TXA). The experimental dependence of the cooling effect on the flow coefficient is shown in Fig. 3. The introduction into production took place in the turning of a structural material that is very sensitive to thermal processes, the aluminum alloy AL4. When machined, this material tends to stick to the chip, and this process limits the cutting modes. After installing the vortex tube on the machine, the machine operator was able to increase the feed of the cutter from 0.6 mm/rev to 1 mm/rev at the same depth of cut and spindle speed, while the morphology of the chips did not change. Thus, the introduction of the vortex cooler made it possible to reduce the labor intensity of the operation by a factor of 1.7.
Figure 1. Arrangement of a countercurrent vortex tube.
Figure 3. Dependence of the cooling effect on the flow coefficient.
Table 1. Values of the coefficients a_1 … a_10, c_1 … c_6.
Formula (7), Δt_x = f(d̄, P, D, μ, L̄, d̄_eq), can be reliably used only within a certain range of values; the range of values of the function variables is given in Table 2.
Table 2. Range of application of formula (7) for the calculation of Δt_x. | 2018-12-07T22:52:17.935Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "dc7ecaecc9ba5111d6ea01ab3d8ef3720860af92",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/38/matecconf_2mae2018_02007.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dc7ecaecc9ba5111d6ea01ab3d8ef3720860af92",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
93251551 | pes2o/s2orc | v3-fos-license | High-performance doping-free carbon-nanotube-based CMOS devices and integrated circuits
Ballistic n-type carbon nanotube (CNT)-based field-effect transistors (FETs) have been fabricated by contacting semiconducting single-walled CNTs (SWCNTs) using Sc or Y. The n-type CNT FETs were pushed to their performance limits through further optimizing their gate structure and insulator. The CNT FETs outperformed n-type Si metal-oxide-semiconductor (MOS) FETs with the same gate length and displayed better downscaling behavior than the Si MOS FETs. Together with the demonstration of ballistic p-type CNT FETs using Pd contacts, this technological advance is a step toward the doping-free fabrication of CNT-based ballistic complementary metal-oxide-semiconductor (CMOS) devices and integrated circuits. Taking full advantage of the perfectly symmetric band structure of the semiconductor SWCNT, a perfect SWCNT-based CMOS inverter was demonstrated, which had a voltage gain of over 160. Two adjacent n- and p-type FETs fabricated on the same SWCNT with a self-aligned top-gate realized high field mobility simultaneously for electrons (3000 cm2 V−1 s−1) and holes (3300 cm2 V−1 s−1). The CNT FETs also had excellent potential for high-frequency applications, such as a high-performance frequency doubler.
Carbon nanotubes (CNTs) are hollow cylinders rolled from single- or multi-layer graphene, and they have drawn tremendous attention since they were first observed by Iijima in 1991 [1]. There are two categories of CNTs according to the number of shells: single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) [2]. A SWCNT is an ideal one-dimensional conductor that typically has a diameter of 1-3 nm and a length of up to several millimeters [3]. Depending on the way it is rolled from graphene, which can be characterized by the chiral index (n, m), a SWCNT can be metallic or semiconducting, with a band gap inversely proportional to its diameter [2,4]. Metallic CNTs can be used as interconnecting wires in integrated circuits owing to their ultra-strong ability to carry current, with density up to 1 × 10^9 A/cm^2 [2,5]. Semiconducting CNTs have been considered ideal materials for high-performance nanoelectronics owing to their perfect combination of small size, extremely high carrier mobility and long mean-free-path length, large current density, small intrinsic gate delay and high intrinsic cut-off frequency [6-9].
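To make the inverse relation between band gap and diameter concrete, the sketch below uses the commonly quoted tight-binding estimate E_g ≈ 2 γ_0 a_CC / d, with the diameter computed from the chiral index (n, m); the estimate and the parameter values (γ_0 ≈ 2.9 eV, a_CC ≈ 0.142 nm) are standard literature approximations, not values taken from this paper.

```python
import math

A_LATTICE = 0.246   # nm, graphene lattice constant
A_CC = 0.142        # nm, carbon-carbon bond length
GAMMA0 = 2.9        # eV, nearest-neighbour hopping energy (typical literature value)

def diameter_nm(n, m):
    """SWCNT diameter from the chiral index (n, m)."""
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def band_gap_eV(n, m):
    """Approximate gap of a semiconducting SWCNT: E_g ~ 2 * gamma0 * a_CC / d.
    Tubes with (n - m) divisible by 3 are treated as metallic (gap 0) in this simple model."""
    if (n - m) % 3 == 0:
        return 0.0
    return 2 * GAMMA0 * A_CC / diameter_nm(n, m)

for n, m in [(13, 0), (10, 5), (17, 0), (10, 10)]:
    print(f"({n},{m}): d = {diameter_nm(n, m):.2f} nm, E_g ~ {band_gap_eV(n, m):.2f} eV")
# Larger-diameter semiconducting tubes have proportionally smaller gaps (E_g ~ 1/d).
```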
It is well known that silicon complementary metal-oxide-semiconductor (CMOS) technology will reach its physical and technological limits in the very near future, predicted to be around 2020 [10-12]. A new technology must then replace silicon CMOS technology to extend Moore's law. Building field-effect transistors (FETs) on semiconducting materials other than silicon, such as carbon nanotubes [6-9], nanowires [13] and III-V compound semiconductors [14], in which carriers (electrons and holes) flow faster, could be a way to extend Moore's law in the post-CMOS era [12]. Compared with other semiconductors, SWCNTs present ultra-high electron mobility, exceeding 100000 cm^2 V^-1 s^-1 even at room temperature [15], as well as similarly high mobility for holes. Moreover, owing to their cylindrical structure, CNTs are suitable for building gate-all-around FETs to further improve transconductance [16]. Additionally, compared with their carbon cousin graphene [17], CNTs are more suitable for digital applications, which require a transistor with a large on/off current ratio, because of the existence of a band gap.
n-Type CNT FET and n-type ohmic contact
The first CNT-based FET was fabricated by researchers at Delft University of Technology in the Netherlands in 1998 [18], and the design of carbon-based devices, including those made of CNTs and graphene, subsequently became one of the hottest fields in nanoelectronics over the following ten years. It is worth noting that the early FETs were almost all p-type (i.e., FETs having an abundance of holes), with very large Schottky barriers at the contacts. The devices thus performed very poorly owing to the poor contacts, which limited the current to tens of nanoamps, and they had very low speed.
Operation principle of CNT FETs
Unlike conventional Si-based metal-oxide-semiconductor (MOS) FETs, for which the polarity of the transistor is determined by doping the conduction channel of the device with suitable dopant atoms, FETs built on intrinsic SWCNTs were found to be Schottky-barrier FETs, for which the polarity of the FETs can be determined by controlling the injection of carriers to the channel [19][20][21][22]. The control of the majority carrier type in a semiconducting CNT channel (i.e., electron (n-type) or hole (p-type)) is realized here by the selective injection of electrons or holes into the CNT. If a metal with a high work function is used to contact with the CNT, the Fermi level of the electrode will be close to the valence band of the CNT, affording a smaller Schottky barrier for the hole than for the electron. Therefore, holes are injected into the channel more easily than electrons, and the transistor is a p-type transistor [18][19][20][21][22][23]. In the same way if a low-work-function metal is used to contact with the CNT, the Fermi level of the electrode will be close to the conduction band of the CNT, leading to an n-type transistor. Initially, stable metals such as Pt and Au were chosen for contact electrodes in CNT FETs [18,23], and the FETs were generally p-type owing to these contacting metals having high work functions exceeding 4.5 eV, which is the work function of an intrinsic CNT. In accordance with this principle, CNT FETs fabricated with high-work-function metals as contacts were surely p-type, while those contacted with low-work-function metals such as Al and Ca were n-type [24,25]. Devices using Ti for contact electrodes were ambipolar FETs owing to the equal Schottky barrier height for the electron and hole [19][20][21][22].
It is well known that since there are generally plenty of interface states, the height of the Schottky barrier between the metal and planar semiconductor is not sensitive to the work function of the metal owing to Fermi-level pinning. Therefore, the Schottky barrier height cannot be controlled or adjusted by only selecting the metal for conventional metal-semiconductor (MS) contacts [26]. Fortunately, for a one-dimensional channel such as a CNT, Fermi-level pinning at the metal-CNT interface is weak [27]. This is because in a one-dimensional channel, any interface-stateinduced potential change (due to interface dipoles, for example) decays to zero rapidly away from the interface. The Schottky barrier at the metal-CNT interface is thus much thinner than that for a three-dimensional channel and carrier tunneling through the Schottky barrier is important in CNT devices. The Schottky barrier height is primarily governed by the energy difference between the Fermi level of the metal electrode and the position of the valence (p-type) or conduction (n-type) band edge of the CNT. A metal with a large work function, such as Pt (5.65 eV), Au (5.1 eV) or Pd (5.1 eV), thus tends to line up with the valence band of the CNT and forms a p-channel for hole transport through the CNT [18,23,28].
n-Type ohmic contact
Ohmic contact is a prerequisite condition for fabricating high-performance FETs. As opposed to the conventional method of forming ohmic contact by heavily doping a semiconductor, ohmic contact on CNTs can only be realized by selecting a metal with a work function high or low enough to form a zero or negative Schottky barrier to the carrier since the stable doping of CNTs is not possible. Although the polarity of CNT FETs is affected by the work function of contact electrodes, the work function is not the only factor determining the Schottky barrier at the metal-CNT interface. Therefore, real ohmic contact for CNT FETs was not realized until 2003, when Dai [29] found that palladium can form an excellent p-type ohmic contact with CNTs. The first ballistic CNT FET transistor was then demonstrated at room temperature. For example, the ON-state conductance increased with decreasing temperature and approached the quantum limit at low temperature. Afterward, other metals such as Rh were found to form an ohmic contact with CNTs [30], and the problem of p-type ohmic contact for CNT FETs was completely solved. Compared with p-type ohmic contact, n-type ohmic contact for CNT FETs was difficult to achieve. Although Al and other low-work-function metals were used to contact CNTs and an obvious n-type field effect was observed, large Schottky barriers still existed between the contacts [24,25]. A breakthrough for n-type CNT FETs was not achieved until the end of 2007, when Sc was found to form ideal n-type ohmic contact with CNTs [31].
The excellent performance of the Sc-contacted n-type CNT FETs is due to several favorable factors, including a suitably low work function of about 3.3 eV and excellent wetting of the CNT (Figure 1(a)). The properties of Sc-contacted CNT FETs are shown in Figure 1. The device is an n-type FET having an ON state at high V_gs (~10 V) and near-ballistic ON-state conductance G_on = 0.49 G_0 (G_0 = 4e^2/h) at 250 K (Figure 1(b)). The ON-state conductance increases with decreasing temperature and reaches G_on = 0.62 G_0 at 4.3 K. The metallic-like temperature dependence of the ON-state conductance and the almost perfectly linear I_ds-V_ds characteristics suggest that electron injection from the Sc electrode into the conduction band of the CNT is effectively barrier-free; i.e., Sc forms an ohmic contact with the n-channel (the conduction band) of the CNT. At low temperature (4.3 K, Figure 1(c)), the I_on/I_off ratio exceeds 10^9 for V_ds = 0.1 V. Although Sc is an ideal electrode metal for n-type CNT FETs, its exorbitant price and scarcity will prevent large-scale applications in the future. Yttrium (Y) was therefore used to substitute for Sc as the contact metal for CNT FETs. The performance of the Y-contacted CNT FET was compared directly with that of the Sc-contacted CNT FET, with the FETs fabricated adjacent to each other on the same SWCNT, and it was found that the Y-contacted CNT FETs outperform the Sc-contacted CNT FETs in many ways [32]. Since Y is extremely cost-effective and widely used in industry, it is expected that Y-contacted devices will be more suitable for fabricating large-scale integrated nanoelectronic circuits.
Pushing n-type CNT FETs to their performance limits
In the two years following the realization of p-type ohmic contacts for CNT FETs, Dai and coworkers [33,34] pushed the performance of the p-type CNT FET to its limit by combining ohmic contacts and a high-k gate insulator.
The development of n-type FETs lagged far behind the development of p-type FETs. Traditionally, n-type CNT FETs were fabricated through chemical doping such as doping with K [35]. However, there were two obvious disadvantages of chemically doping CNTs. Firstly, it is uncontrollable and unstable in air. Secondly, the doped atom in the CNT reduces carrier mobility by introducing scattering.
In 2005, researchers at Intel benchmarked CNT devices of the day with some key general parameters including gate delay and the energy-delay product [36]. The intrinsic gate delay represents the speed potential of a device, and the energy-delay product indicates its power dissipation. They found that the p-type CNT devices outperformed the silicon p-MOS in terms of both speed and power dissipation. However, the n-type CNT FETs were far inferior to n-type Si MOS FETs owing to the lack of good contacts.
Compatibility of the Sc contact and high-κ insulator
In addition to a good contact, a high-quality gate insulator is another key component of high-performance FET devices, especially for high transconductance [37]. High-κ insulators (κ being the dielectric constant of the dielectric layer) such as HfO2 were successfully integrated in Pd-contacted CNT FETs through atomic layer deposition (ALD) at low temperature, e.g. 90 °C, and the p-type FETs exhibited excellent ON-state (characterized by transconductance) and OFF-state (characterized by subthreshold swing) properties without obvious degradation of the contact quality [33]. In the same way, the performance of Sc-contacted n-type CNT FETs was further improved by integrating high-κ HfO2 dielectrics as the gate insulator.
Although various high-κ dielectrics have been demonstrated to be technically compatible with carbon-based devices [33,34], it has proven very difficult to grow uniform thin high-κ films directly on the surface of CNTs via a general method such as ALD. This is because, on a good-quality CNT or graphene, there are only delocalized bonds on the surface of the sp^2 hybridization plane, and not many nucleation sites, e.g., defects or dangling bonds [38]. Growing a thick gate dielectric via ALD to bury the CNTs is one way to produce a top-gate high-κ dielectric in CNT FETs [33-35,38]. This method is simple to implement but at the same time limits the ultimate scaling down of the thickness of the gate dielectric. For example, ALD-grown HfO2 film thicker than 8 nm is needed to fully cover a CNT and to avoid gate leakage [38]. As a result, the corresponding subthreshold swing of CNT FETs fabricated this way remains considerably higher than the theoretical value, i.e., 60 mV/decade.
To realize direct nucleation on the surface of CNTs and consequently grow a uniform high-κ film, several methods have been developed that build nucleation sites via surface treatments before ALD growth. These methods include functionalizations with the introduction of perylene tetracarboxylic acid, deoxyribonucleic acid, NO 2 , and O 3 . The introduction of noncovalent functionalization layers (NCFLs) or pre-treatments not only adds technical complexity but also affects the transport properties of the fabricated CNT and graphene FETs, leading to electric field variation and extra scattering due to the functionalization molecules, and sometimes even damage to the sp 2 carbon framework.
High-quality yttrium oxide (Y2O3) was investigated as an ideal high-κ gate dielectric for carbon-based electronics through a simple and inexpensive process. Utilizing the excellent wetting behavior of Y on an sp^2 carbon framework, ultra-thin (a few nanometers) and uniform Y2O3 layers have been directly grown on the surfaces of CNTs and graphene without using noncovalent functionalization layers or introducing large structural distortion and damage. A top-gate CNT FET adopting a 5 nm Y2O3 layer as its top-gate dielectric has excellent device characteristics, including an ideal subthreshold swing of 60 mV/decade (up to the theoretical limit of an ideal FET at room temperature), as shown in Figure 2 [39].
Self-aligned gate
In state-of-the-art silicon CMOS technology, a self-aligned structure is used to ensure the accuracy of the entire fabrication process. As the size of the device becomes smaller and smaller, there is a need for a more precise and reliable way to fabricate MOS FETs automatically, and the self-aligned structure ensures that the edges of the source (S), drain (D) and gate (G) electrodes are precisely and automatically positioned such that no overlapping or significant gaps exist between these electrodes. The use of self-aligned gates is one of the many innovations that have enabled computing power to increase steadily over the last 40 years. A selfaligned structure is therefore necessary for the massive fabricating of high-performance CNT FETs and for the construction of CNT-based CMOS integrated circuits.
A self-aligned structure that utilizes the native oxide present on the surfaces of certain metals, such as Al2O3 on Al, has been developed and used to fabricate near-ballistic p-type CNT FETs with near-ideal performance [34]. However, this self-aligned structure is not suitable for n-type devices. A novel self-aligned gate structure that is suitable for fabricating both n- and p-type CNT FETs with a desired threshold voltage, and indeed for any FETs based on one-dimensional nanomaterials, has been presented [40]. For this self-aligned structure, we take advantage of the different growth mechanisms of HfO2 and Ti films. While the ALD-grown HfO2 film is continuous, with excellent thickness uniformity and step coverage (i.e., uniform film is present even on the sidewalls of the S and D electrodes, which effectively insulates G from S and D), the Ti film grown via e-beam evaporation is basically two-dimensional and is not present on the sidewalls of the S and D electrodes; therefore, the part of the Ti film between S and D is disconnected from that on top of the S and D or on top of the poly(methyl methacrylate) that defines the gate window. The final structure fabricated in this way has precisely positioned edges of the S, D and G electrodes, and its electrical properties are shown in Figure 3. Both near-ballistic (with a channel length L ~ 120 nm) and long-channel (with L ~ 2 μm) n-type CNT FETs were fabricated with this structure on a single SWCNT with a diameter of ~1.5 nm. Quantitative fitting of the electric characteristics of the long-channel FETs reveals a high electron mobility exceeding 4650 cm^2 V^-1 s^-1 and a mean free path l_m = 191 nm, as shown in Figure 4. A careful evaluation of the 120 nm devices shows that the self-aligned gate can effectively control the channel, yielding a very small gate delay time of 0.86 ps and a large room-temperature I_on/I_off ratio exceeding 10^4. In addition, the intrinsic gate delay of sub-100 nm CNT devices fabricated using either type of self-aligned structure has been demonstrated to be less than 1 ps, suggesting potential applications of CNT devices in the terahertz regime.
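The mobility quoted above comes from quantitative fitting of the long-channel characteristics; that fitting procedure is not reproduced here. As a rough orientation only, the sketch below shows the standard first-order field-effect estimate μ = g_m L^2 / (C_g V_ds) for a diffusive channel, with the gate capacitance per unit length, transconductance and bias all being illustrative assumptions.

```python
# Generic first-order field-effect mobility estimate for a long-channel (diffusive) CNT FET.
# Linear-region model: I_ds ~ mu * (C_g / L^2) * (V_gs - V_th) * V_ds
#   =>  mu = g_m * L^2 / (C_g * V_ds),  with g_m = dI_ds/dV_gs the peak transconductance.
# All numbers below are illustrative assumptions, not values from the paper.
L = 2.0e-6           # m, channel length of the long-channel device (~2 um)
c_g = 150e-12        # F/m, assumed gate capacitance per unit length for a top-gated CNT
C_g = c_g * L        # total gate capacitance of the channel
V_ds = 0.1           # V, small drain bias (linear region)
g_m = 2.0e-6         # A/V, assumed peak transconductance read off a transfer curve

mu = g_m * L ** 2 / (C_g * V_ds)        # m^2 V^-1 s^-1
print("field-effect mobility ~ %.0f cm^2 V^-1 s^-1" % (mu * 1e4))
```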
After optimizing the contacts and structure, the n-type CNT FETs were benchmarked according to the intrinsic gate delay and energy-delay product as shown in Figure 5. Comparison of the intrinsic gate delay and energy-delay product shows that the performance of the n-type CNT FETs substantially exceeds that of the length-dependent scaling of planar n-type Si MOS FETs and is comparable to that of p-type CNT FETs of similar length [41]. It is obvious that the improvement of speed and energy is primarily due to the tremendously enhanced mobility within CNTs.
The measured frequency response of the CNT device falls far below the intrinsic performance, and this is mainly due to the large parasitic capacitance between the source/drain and gate electrodes, which is typically three orders of magnitude larger than the intrinsic gate capacitance of the CNT device. It should be noted that in all previously published self-aligned device structures there exists a high-κ dielectric layer (Al2O3 or HfO2) between the gate and the source/drain, and this high-κ dielectric layer remarkably enlarges the parasitic capacitance (by a factor of ~κ). A self-aligned U-gate structure for a CNT FET has been introduced, as shown in Figure 6, and shown to yield excellent DC properties and high reproducibility, comparable to those of the best CNT FETs based on the previously developed self-aligned device structures [42]. In particular, the subthreshold swing of the U-gate FET is 75 mV/decade and the drain-induced barrier lowering is effectively zero, indicating that the electrostatic potential of the whole CNT channel is most efficiently controlled by the U-gate and that the CNT device is a well-behaved FET. The parasitic capacitance of the device has been measured and shown to be one order of magnitude smaller than that of the previously developed self-aligned device structures. The significantly reduced parasitic capacitance of the U-gate device originates mainly from replacing the high-κ dielectric material between the source/drain and gate electrodes in the other two self-aligned device structures with a vacant space with κ ~ 1 [34,40,42].
Threshold adjustment and control
The threshold voltage V th is one of the most important device parameters to consider when integrating FETs into a complex CMOS circuit. In principle, the threshold voltage of an FET may be adjusted by controlling the doping of its conduction channel, but this method is not suitable for implementing CNT-based doping-free CMOS technology. Alternatively, a gate metal with a suitable work function may be selected to control the threshold of the device, and this is the method we used in the doping-free fabrication of CNT-based CMOS circuits.
The main advantage of the self-aligned gate structure is that it can be used with any gate material having a desired property. In particular, this structure allows us to adjust the threshold voltage of the FET by choosing a suitable gate metal with a desired work function to meet the requirements of the circuit design. Two self-aligned FETs with the same channel length were fabricated on the same SWCNT with a diameter of 2.0 nm. The only difference between the two devices is that one FET used Ti for the gate electrode and the other used Pd. The transfer characteristics of the two devices are given in Figure 7(a); the threshold voltage of the Pd-gated device is clearly shifted with respect to the threshold voltage of the device fabricated using the Ti gate. This shift is found to be dependent on the channel length, as shown in Figure 7(b), in which results from 13 devices with different channel lengths but on the same CNT are shown. Among the 13 devices, five were fabricated using a Ti gate and eight using a Pd gate. V_th for each device is extracted employing the standard peak-transconductance method under a bias of 0.1 V. The channel-length dependence of V_th might result from the fact that the electric property of the FETs varies from being channel-dominated to being contact-dominated as the channel length decreases from the diffusive regime toward the ballistic regime. The V_th values for the FETs with a Pd gate are obviously different from those with a Ti gate. For simplicity, two parallel lines are used to fit the data of the two types of FETs, and the V_th shift between these two lines is found to be 0.50 V [40].
Scaling behavior of n-type CNT FETs
The ultimate performance of the CNT FETs is reflected in the scaling behavior of the devices, especially the gate-length scaling. CNT FETs were fabricated with gate lengths from 300 nm to 50 μm, and for simplicity the transfer characteristics of only five of these devices are shown in Figure 8(a). For the FETs with a gate length of 5 μm, the field-dependent mobility was calculated using the diffusive model and is shown in Figure 8(b) [32]. The peak mobility of 5100 cm^2 V^-1 s^-1 is greater than all published electron mobility values for n-type CNT FETs at room temperature. The scaling behavior of the CNT FETs was explored by investigating five key device parameters, namely the ON-state resistance, transconductance, intrinsic cut-off frequency, intrinsic gate delay and energy-delay product, which are presented in Figure 8(c) and (d). For a long-channel device, for which the transport is in the diffusive regime, R_on is expected to increase linearly with the channel length L_g [43,44]. As the gate length is scaled down to a length much shorter than the electron mean free path L_m, electron transport in the channel becomes ballistic and the channel resistance becomes independent of the channel length. The electron mean free path retrieved from the R_on-L_g curve reaches 0.638 μm, which means the device with L_g = 300 nm is clearly ballistic. The cut-off frequency f_T increases rapidly with decreasing channel length and reaches 123 GHz for L_g ~ 300 nm. It is expected that for a device with a channel shorter than 100 nm, f_T exceeds 1 THz.
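One common way to extract such a mean free path is to fit R_on versus L_g with the one-dimensional Landauer form R_on ≈ R_c + (h/4e^2)(1 + L_g/λ), where h/4e^2 is the quantum resistance of the four conducting channels of a CNT. The sketch below illustrates such a fit on synthetic data; the model form is the commonly used one, and the numbers are assumptions rather than the measurements behind Figure 8.

```python
import numpy as np

# Extract a mean free path lambda from R_on vs gate length using the 1D Landauer form
#   R_on ~ R_c + (h / 4 e^2) * (1 + L_g / lambda)
# The "data" below are synthetic (lambda_true = 0.6 um, R_c = 5 kOhm), not the paper's.
R_Q = 6.45e3                                   # ohms, h / 4e^2 (quantum resistance, 4 channels)
lam_true, R_c_true = 0.6e-6, 5.0e3

L_g = np.array([0.3, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0]) * 1e-6   # m
rng = np.random.default_rng(2)
R_on = R_c_true + R_Q * (1 + L_g / lam_true)
R_on = R_on * (1 + 0.03 * rng.normal(size=L_g.size))                 # 3 % measurement noise

# Linear fit R_on = intercept + slope * L_g;  slope = R_Q / lambda.
slope, intercept = np.polyfit(L_g, R_on, 1)
lam_fit = R_Q / slope
print("fitted mean free path: %.2f um (true value 0.60 um)" % (lam_fit * 1e6))
print("fitted L_g -> 0 resistance (R_c + R_Q): %.1f kOhm" % (intercept / 1e3))
```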
The L_g-dependent intrinsic gate delay and energy-delay product for our CNT FETs are shown in Figure 8(e) and (f), together with those of Si-based n-type FETs. The figures show that while the energy-delay product for CNT devices decreases with channel length at a rate similar to that for Si-based devices, the gate delay of CNT devices decreases much more rapidly with decreasing channel length than that of the Si-based devices (Figure 8(e)). Continuing this trend, it is expected that a 30 nm CNT device would have a gate delay of ~100 fs, which is among the shortest gate delays ever achieved by a Si-based device with a channel length of less than 10 nm. For long-channel devices, the gate delay scales with the channel length as ~L_g^2, and the energy-delay product as ~L_g^3. Therefore, the CNT FETs have better downscaling potential than Si MOS devices.
Doping-free CMOS device technology
CMOS circuits are a major class of integrated circuits with tremendous advantages of high noise immunity and low static power consumption. CMOS is sometimes referred to as complementary symmetry metal-oxide-semiconductor (or COS-MOS) to emphasize that typical digital circuitry design uses complementary and symmetrical pairs of p-type (hole) and n-type (electron) MOS FETs for logic functions [45]. Unfortunately, perfectly symmetric CMOS has not been realized. This is because the band structures of all important semiconductors are intrinsically asymmetric around their band gaps or between the conduction and valence bands. Typically, electrons have a smaller effective mass than holes, and the performance of n-type FETs is much better than that of p-type FETs. As a result of the intrinsically asymmetric band structures of Si and all major semiconductors (including III-V and II-VI compounds), holes move much more slowly in MOS FET devices than electrons, dragging down the overall performance of the CMOS circuits.
Semiconductor CNTs have an almost perfectly symmetric band structure between the conduction and valence bands and consequently have essentially the same effective mass for electrons and holes. This band structure symmetry may in principle lead to the same electron and hole mobilities and similar performance for n-and p-type FETs, which are necessary for perfect CMOS performance. Unfortunately, perfectly symmetric CNT-based CMOS devices and integrated circuits have not been realized, and this is largely due to the lagging development of n-type devices [46][47][48][49][50][51].
Doping-free CNT-based CMOS technology
Unlike conventional Si-based CMOS, where the polarity of the FETs is determined by doping the conduction channel of the device with suitable dopant atoms, in CNT-based CMOS the polarity of the FETs can be determined by controlling the injection of carriers into the channel [31,52]. While Pd may be used to inject holes barrier-free into the valence band of the CNT to form high-performance p-type FETs, Sc may be used to inject electrons barrier-free into the conduction band of the CNT to form almost perfect n-type FETs. This is a doping-free process. Figure 9(a) shows that the CNT CMOS inverter with a back gate consists of an n-type CNT FET and a p-type CNT FET, and these CNT FETs are fabricated simply by contacting the CNT channel with Pd (p-type) and Sc (n-type) electrodes. The input voltage for the inverter is provided by the common back-gate voltage V_in = V_gs of the n- and p-type CNT FETs, while the output of the inverter is read from the common drain of the two FETs.
Almost perfectly symmetric CNT CMOS devices and circuits
Although symmetric n-type and p-type CNT FETs were fabricated with a back-gate structure in our earlier works, these back-gate devices cannot deliver near-perfect performance owing to the intrinsic limitations of the back-gate geometry. To further explore the advantages of CNT CMOS devices and circuits, a highly efficient self-aligned top-gate geometry should be employed. Figure 10 shows the structure and electrical properties of a CNT-based CMOS inverter with a self-aligned top gate [53]. This inverter comprises a pair of adjacent n- and p-type FETs fabricated on the same SWCNT with d = 2 nm and the same gate length of L_g = 4.0 μm. The field transfer (Figure 10(b)) and output characteristics (Figure 10(c)) are almost perfectly symmetric between the n- and p-type FETs. The mobility curves (Figure 10(d)) show peak mobilities of about 3000 cm^2 V^-1 s^-1 for the electron and about 3300 cm^2 V^-1 s^-1 for the hole for the two adjacent n- and p-type FETs on the same SWCNT. The near-perfect symmetry of the mobility between electron and hole manifests experimentally the intrinsically symmetric band structure of the CNT. The voltage transfer characteristics of the CNT-based CMOS inverter show a perfect "1" state (with V_out = V_dd) and "0" state (with V_out = V_GND) and the highest-to-date voltage gain of over 160.
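How a well-matched complementary pair translates into a sharp voltage transfer curve and a large gain can be illustrated with a toy model: the sketch below uses idealized, perfectly symmetric square-law FETs (an assumption, not the measured CNT device characteristics) and solves for the output voltage at which the n- and p-FET currents balance.

```python
import numpy as np

# Voltage transfer curve of a complementary inverter built from two idealized, perfectly
# symmetric square-law FET models. This only illustrates how symmetry gives a high gain;
# it is NOT the measured CNT device model, and all parameter values are assumptions.
VDD, VT, K, LAM = 2.0, 0.4, 1e-4, 0.05   # supply, |threshold|, current factor, channel-length mod.

def i_fet(v_gate, v_drive):
    """Square-law drain current with saturation and mild channel-length modulation."""
    if v_gate <= VT:
        return 0.0
    vov = v_gate - VT
    i = K * (vov * v_drive - 0.5 * v_drive ** 2) if v_drive < vov else 0.5 * K * vov ** 2
    return i * (1 + LAM * v_drive)

def vout(vin, grid=4001):
    """Output voltage where the n-FET and p-FET currents balance (simple grid search)."""
    v = np.linspace(0.0, VDD, grid)
    mismatch = [abs(i_fet(vin, vo) - i_fet(VDD - vin, VDD - vo)) for vo in v]
    return v[int(np.argmin(mismatch))]

vin = np.linspace(0.0, VDD, 401)
vo = np.array([vout(v) for v in vin])
gain = np.abs(np.gradient(vo, vin))
print("logic levels: Vout(0) = %.2f V, Vout(VDD) = %.2f V" % (vo[0], vo[-1]))
print("maximum small-signal gain |dVout/dVin| ~ %.0f" % gain.max())
```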
CNT-based CMOS devices are not only more symmetric, faster and less power-consuming than Si-based CMOS devices but their fabrication is also simpler. We compare the main process steps of CNT-based and standard twin-well Si CMOS technology before interconnection [54] in Table 1, showing clearly that the CNT-based CMOS technology is much simpler than Si-based CMOS technology. This is largely due to the doping-free and isolating-free process we developed for the CNT-based CMOS process. This process also requires fewer steps in other main processes than the Si-based CMOS process, including fewer steps in lithography, etching and film growth.
High-frequency applications of CNT FETs
In principle, CNTs have an extraordinary radio-frequency (RF) response up to the gigahertz regime owing to their ultra-high carrier mobility, suggesting terahertz working potential for future CNT-based electronic devices [55]. In practice, the cutoff frequency of CNT FETs is well below this performance limit owing to the large parasitic capacitance between electrodes [56]. Therefore, FETs fabricated on parallel CNT arrays were used to reduce the parasitic capacitance per tube [56], and a cutoff frequency as high as 80 GHz was then measured [57]. However, it is difficult to produce a parallel array of semiconducting CNTs, and such a device is not easy to miniaturize. Exploring and increasing the frequency response of an FET based on a single CNT faces two obstacles. One is the large parasitic capacitance between electrodes, and the other relates to the measurement. The key parameter, the cutoff frequency, can be obtained through standard S-parameter measurement using a network analyzer. However, it should be noted that the standard S-parameter measurement cannot accurately measure the frequency response of a single-CNT FET, in which the output resistance is much larger than 50 Ω, the ideal value for an impedance-matched measurement.
High-frequency response of devices based on a single SWCNT
Since the real frequency response of a CNT FET is limited by parasitic capacitance rather than by the intrinsic limit, there remains much room for reducing the parasitic capacitance and improving the RF performance of the CNT FET by optimizing the geometry of the device. To reduce the main parasitic capacitance between the gate and the source/drain, the self-aligned U-gate structure has been preferred [42]. The frequency response of the self-aligned U-gate CNT FETs has been assessed via a direct AC measurement, usually referred to as a large-signal frequency-domain measurement. Figure 11(a) and (b) shows the geometry of the final device for high-frequency measurement. The setup used to measure the frequency response of the CNT FETs is shown in Figure 11(c), where a signal generator is used to apply a sine wave and a spectrum analyzer is used to measure the output crosstalk signal. The measurement results are shown in Figure 11(d), in which P_CT+CNTFET is the total signal power and P_CT is the crosstalk power with the CNT FET turned off. At low frequency (less than 200 MHz), P_CT+CNTFET far exceeds P_CT. However, as the input frequency increases, P_CT+CNTFET and P_CT approach each other and almost coincide at about 800 MHz. Since the coincidence between P_CT+CNTFET and P_CT suggests that the CNT FET no longer works, the cut-off frequency of the device is estimated to be about 800 MHz. It should be noted that this cut-off frequency measured for our self-aligned U-gate device is much higher than any previously reported value for an FET fabricated on a single CNT and recorded by direct measurement. In principle, both the contact width and the device channel can be reduced, and the large parasitic capacitance due to the silicon substrate used in this work can be eliminated by replacing the silicon substrate with a more insulating substrate. We expect that the cut-off frequency f_T will be significantly improved by further optimization of the device geometry.
High-performance frequency doubler based on large CNTs
In addition to typical semiconducting CNTs and typical metallic CNTs, there are CNTs with a small band gap (SBG) and a small current on/off ratio of between 1 and 100 [58]. SBG CNTs are characterized by their small band gap, low current on/off ratio, and typically ambipolar field-effect characteristics. These CNTs are therefore not suitable for applications in logic circuits or as interconnects. However, RF applications do not require the device to reach an off state, offering SBG CNTs a promising field of application. SBG CNTs can also be used to construct frequency doublers, and because they can operate in a strong-signal mode (i.e., unlike previous weak-signal RF transistors, which operate only in the weak-signal region), SBG-CNT-based FETs may operate over a much larger signal range.
Table 1. Comparison of main processing steps (lithography, etching, ion implantation) for SWCNT-based and standard twin-well Si CMOS technology with shallow trench isolation before interconnection [53,54].
Figure 11(d). Measurements of the crosstalk power P_CT and the total power P_CT+CNTFET for the self-aligned U-gate CNT FETs, for V_gs = 0.5 V and V_ds = 0.5 V; the input power is 10 dBm [42].
Figure 12. AC performance of a CNT-based frequency doubler. (a) Schematic diagram illustrating the geometry of a CNT-based ambipolar FET and its working principle for a frequency doubler. When a sinusoidal wave is applied to the top-gate electrode of the FET, with the source electrode grounded, an output sinusoidal wave with doubled frequency is measured at the drain electrode. (b) Input and output waveforms for an input 1 kHz sinusoidal wave with input V_pp = 800 mV and output V_pp = 120 mV. (c) Schematic diagram depicting the measurement setup for frequency-spectrum analysis. The input signal is applied to the gate of the CNT FET and the output AC signal is coupled through a bias-T to the spectrum analyzer (SA). (d) Measured output signal spectrum for 1 kHz input [59].
When applied to the gate electrode, the input signal may drive the FET from its p-region to its n-region, yielding a large output at the drain electrode with more than 95% of the output power concentrated at double the frequency of the input AC signal, as shown in Figure 12 [59]. SBG CNT-based FETs have not only perfectly symmetric ambipolar transfer characteristics but also extremely high carrier mobility (in principle higher than 100,000 cm² V⁻¹ s⁻¹ in CNTs versus about 20,000 cm² V⁻¹ s⁻¹ in graphene) on a SiO₂ substrate, owing to suppressed substrate scattering. Therefore, SBG CNT-based FETs could potentially be used to build high-frequency doublers in the terahertz regime.
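A small numerical sketch (Python; an idealized, hypothetical transfer curve rather than a measured device characteristic) of why a symmetric ambipolar characteristic doubles the frequency: biasing the gate at the p/n crossover and applying a sine produces a drain current whose dominant spectral component sits at twice the input frequency.

import numpy as np

fs, f_in = 100_000, 1_000                       # sample rate and input frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)                  # 10 ms of signal
v_gate = 0.4 * np.sin(2 * np.pi * f_in * t)     # sine centred on the crossover bias

# Idealised symmetric ambipolar transfer curve: the current rises for both
# positive and negative gate excursions (a V-shape around the minimum).
i_drain = np.abs(v_gate)                        # arbitrary units

spectrum = np.abs(np.fft.rfft(i_drain - i_drain.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant output component at {peak:.0f} Hz for a {f_in} Hz input")

Because the rectified waveform contains only even harmonics of the input, essentially all of the AC output power lands at 2f, which is the behaviour exploited in Figure 12.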
Conclusion
Ballistic n-type CNT-based FETs have been fabricated by contacting semiconducting SWCNTs using Sc or Y. The n-type CNT FETs were pushed to their performance limits through further optimization of their gate structure and gate insulator, and they outperformed Si NMOS FETs with the same gate length. In addition, the CNT FETs had better downscaling behavior than Si MOS FETs. Doping-free CNT CMOS technology was then developed. Taking full advantage of the perfectly symmetric band structure of the semiconducting SWCNT, a perfect SWCNT-based CMOS inverter was demonstrated, which had a voltage gain of over 160. For two adjacent n- and p-type FETs fabricated on the same SWCNT with a self-aligned top gate, high field mobility was realized simultaneously for electrons (3000 cm² V⁻¹ s⁻¹) and holes (3300 cm² V⁻¹ s⁻¹). The CNT FETs also showed excellent potential for high-frequency applications, such as a high-performance frequency doubler. | 2019-04-04T13:15:30.402Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "3d809f4ed184eb862f57f48e1cb1f39ea0e4d548",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11434-011-4791-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "65f10f8cce52f4c1a436fd053c609e5f6fdb5bd0",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
260968468 | pes2o/s2orc | v3-fos-license | The onchocerciasis hypothesis of nodding syndrome
Nodding syndrome (NS) is a phenotypic presentation of onchocerciasis-associated epilepsy (OAE). OAE is an important public health problem in areas with high ongoing Onchocerca volvulus transmission. OAE, including NS, is preventable by strengthening onchocerciasis elimination programs. The presence of tau in OAE postmortem brains could be the consequence of neuroinflammation directly or indirectly induced by O. volvulus. Omics research is needed to investigate whether O. volvulus worms contain a neurotropic virus.
Introduction
Nodding syndrome (NS) was initially believed to be a unique condition that was restricted to certain areas in Tanzania, northern Uganda, and South Sudan and thought to be linked to certain events or living conditions in those areas (e.g., war and displacement of persons in camps) [1].
In recent years, NS cases have been reported in many other countries, all of which had a high level of ongoing or past Onchocerca volvulus transmission [2]. To determine the cause of NS, it is important to investigate whether this condition could be part of a wider clinical spectrum. The latter is indeed suggested by a growing number of epidemiological studies and the following arguments [2].
1. NS and Nakalanga syndrome (characterized by morphological deformities, retarded growth, and delayed/absent secondary sexual development) appear in the same onchocerciasis-endemic areas with high O. volvulus transmission, together with high numbers of other forms of epilepsy with similar characteristics except head nodding seizures [2]. These characteristics are the criteria of the onchocerciasis-associated epilepsy (OAE) case definition proposed for epidemiological studies. This form of epilepsy appears in previously healthy children between the ages of 3 and 18 years, without an obvious cause for epilepsy, in an onchocerciasis-endemic region with high ongoing O. volvulus transmission [2]. Only a relatively small proportion of individuals with OAE present with NS, the most debilitating form of OAE associated with most severe cognitive impairment [3].
2. The NS epidemic in northern Uganda [4] and South Sudan [5] appeared together with an epidemic of other forms of epilepsy meeting the criteria of OAE.
3. Nodding and Nakalanga syndromes are often observed in families with siblings with other forms of OAE and may be associated with blindness [6].
4. Both NS and other forms of OAE present with similar cerebral and cerebellar atrophy on magnetic resonance imaging [7]. Persons with NS may have a higher degree of global cerebral atrophy, but this may be related to a longer duration of epilepsy [7]. In postmortem studies, NS and OAE also present with similar pathological findings [8].
Findings suggesting that O. volvulus directly or indirectly may induce epilepsy
1. A case-control study in the Mbam valley, an onchocerciasis-endemic region in Cameroon, revealed more intense infections with O. volvulus in persons with epilepsy than in nonepileptic controls and a strong positive association between community microfilarial (mf) load and epilepsy prevalence. In addition, the study also found an inverse relationship between villages' distance from the river (the breeding site for the blackfly vectors) and epilepsy prevalence [9]. Also, in South Sudan, the highest epilepsy prevalence was observed among households living close to blackfly breeding sites, and families at these sites often had several children with OAE [5,6].
2. In population-based surveys in onchocerciasis-endemic areas, a positive association between O. volvulus prevalence and the prevalence of epilepsy was observed [10]. A meta-analysis of 8 population-based studies in onchocerciasis-endemic areas, conducted before 2008, showed that the epilepsy prevalence increased, on average, by 0.4% for each 10% increase in onchocerciasis prevalence [10].
6. Successful onchocerciasis elimination strategies reduced the incidence of epilepsy, including NS, in onchocerciasis-endemic regions, as was observed in northern Uganda, Mahenge, and Maridi, and in western Uganda. OAE stopped appearing once onchocerciasis was eliminated (Table 1).
Pathogenesis of OAE
While there is a very strong epidemiological association between onchocerciasis and epilepsy, the exact pathophysiology of OAE, including NS, is still unknown. A plausible explanation for the OAE pathology is that the epilepsy is induced by O. volvulus mf occasionally penetrating the brain of heavily infected young children. Indeed, before community-directed treatment with ivermectin (CDTi) was implemented, mf were detected in CSF, e.g., in 1976 by Duke in Cameroon in persons with high O. volvulus mf loads [12]. It is unlikely that the CSF was contaminated with mf from the skin in this study, because the first 5 to 6 drops of CSF were discarded [12]. Additionally, the intensity of mf infection in the CSF increased from 2 mf/ml to 19 mf/ml after administration of diethylcarbamazine (DEC) [12]. Six persons with a high concentration of mf in CSF (8 to 31 mf/ml) developed severe vertigo, and one of them a temporary parkinsonian condition. DEC is known to cause inflammation, which could increase blood-brain barrier (BBB) permeability [16]. This increased permeability might make it easier for mf to penetrate the central nervous system (CNS). Duke hypothesized that mf enter the CSF through the capillary wall of the choroid plexus in the lateral, third, and fourth ventricles [12].
In more recent studies, neither O. volvulus mf nor DNA could be detected in the CSF of persons with OAE [17] or in their brains at postmortem examination [8]. However, this could be due to the fact that the study participants had developed their epilepsy many years before, and in the meantime, the parasite might have been eliminated by immune cells of the CNS [2].
Alternative nodding syndrome hypotheses and research priorities
Several alternative hypotheses have been proposed, but so far, none of them have been confirmed [2]. In postmortem studies, tau deposits were detected in the brain of all persons with NS [18] and in most persons with OAE [8]. Signs of neuroinflammation (gliosis and activated microglia) were noted as well, colocalised with tau-reactive neurofibrillary tangles and threads [8]. In addition, signs of earlier ventriculitis were observed in 8 of 9 persons who died with OAE, suggesting involvement of the choroid plexus as proposed by Duke [12]. Microfilariae in the CSF might gain access to the pituitary gland, where their presence might lead to dwarfism (Nakalanga syndrome) [12]. We hypothesise that the tau deposits are the consequence of a neuroinflammatory reaction induced, directly or indirectly, by O. volvulus.
A systemic infection or physiological stress (e.g., a provoked seizure) in a young child, similar to DEC, may cause CNS inflammation that will increase the permeability of the BBB. In case such children harbour a very high mf load, O. volvulus mf, secretory/excretory products, or endosymbionts, including viruses, could occasionally cross the weakened BBB causing neuroinflammation, resulting in epilepsy and tau deposits. Thereupon, the epilepsy and tau deposits could sustain each other (Fig 1).
Recently, an additional risk factor for the development of NS and Nakalanga syndrome was proposed [13]. In a case-control study in Uganda, preterm birth was identified as a risk factor for NS [13]. O. volvulus infection during pregnancy has been associated with an increased risk of spontaneous abortions [13]. Therefore, preterm birth of children who later developed NS may have been the consequence of an O. volvulus infection in the pregnant mother [2]. Such an O. volvulus infection during pregnancy may lead to parasite tolerance that can be transmitted in utero [2]. Thereupon, when this child is exposed to O. volvulus-infected blackflies, he/she may develop a very high mf load at a young age, potentially causing NS and/or Nakalanga syndrome, which are the most severe forms of OAE with an earlier epilepsy onset [3]. This hypothesis is currently being investigated, in Cameroon and South Sudan, in a prospective cohort study of children born to O. volvulus-infected and noninfected mothers. These children, not yet eligible for ivermectin treatment, will be followed for a 4-year period and assessed annually for O. volvulus infection and neurocognitive development. In case of complicated febrile seizures or epilepsy, a lumbar puncture will be performed, and the collected CSF will be examined for the presence of mf and of O. volvulus and Wolbachia DNA.
In addition, we will conduct omics studies to increase our knowledge about the biology of O. volvulus. With proteomics, we hope to identify O. volvulus excretory/secretory proteins that could play a role in the pathogenesis of OAE. Moreover, a viral metagenomic study of adult O. volvulus worms, extracted in Maridi, South Sudan, from nodules from persons with OAE and persons without epilepsy, is planned (ClinicalTrials.gov registration NCT05868551) to identify possible neurotropic viruses. Proteomic and metagenomic studies may not only reveal a potential pathogenetic mechanism of OAE but also lead to new ways to treat and diagnose onchocerciasis.
The importance of recognising the link between onchocerciasis and epilepsy
Recognition of OAE as a morbidity of onchocerciasis and acceptance that OAE, including NS, can be prevented through strengthening onchocerciasis elimination programs is of paramount importance. The prevention of OAE should be prioritized in public health intervention agendas. Increased awareness about OAE will also improve uptake of CDTi and eventually decrease the burden of onchocerciasis and OAE as well as reducing the time required to eliminate these diseases. | 2023-08-19T05:08:55.026Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "38ff1ffea88ba020046a76ffd109a70bd8a32c67",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "38ff1ffea88ba020046a76ffd109a70bd8a32c67",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
191896467 | pes2o/s2orc | v3-fos-license | Social Comparison Information, Ethnocentrism, National Identity Associated with Purchase Intention in China
Introduction
International marketers are confronted by many environmental changes. One of them is change in the global economic environment. In the next two decades, another three billion people will be added to the middle-class consumer segment, exclusively from emerging markets (Ernst & Young, 2013). Responding to this profound trend of the 21st century, international marketers are preparing for this unparalleled market opportunity (Cavusgil et al., 2018).
In emerging markets, the middle class is developing and becoming the fastest-growing consumer market, in contrast with developed countries, where the middle class is maturing or shrinking. Kharas (2016) and the Pew Research Center (2016) reported that total middle-class consumption accounted for two-thirds of global spending, with only one-quarter of these consumers coming from advanced economies. Meanwhile, international marketers are critically interested in these rapidly rising groups (Bang, Joshi, & Singh, 2016). Thomas (2018) agreed that economic reforms, worldwide liberalization of trade, aging advanced economies, and the emergence of the middle class were the main contributors to the growth of emerging markets (Sheth, 2011). KPMG (2014) reported that many Western firms are highly attracted to this middle class. China has been following the same trajectory: since 2000, the Chinese middle class has grown to 54% of urban households and has expanded more than twice as much as that of the United States.
Some researchers have concluded that middle-class consumers are characterized by disposable income, youth, better education, and demand for a wider range of products and services (Cavusgil et al., 2018; Guo, 2013; Swoboda, Pennemann, & Taube, 2012). Boisvert and Ashill (2018) suggested that Millennials, especially young consumers in their 20s, are worth investigating in future research.
Regarding age, one of the demographic antecedents, different researchers have reported different relationships between age and consumer ethnocentrism (CET). On the one hand, Schooler (1971) and Bannister and Saunders (1978) reported a negative effect of age on CET. On the other hand, some researchers found a positive relationship between age and CET (Han, 1988; Good & Huddleston, 1995; Caruana, 1996; Klein & Ettenson, 1999). However, Festervand, Lumpkin, and Lundstrom (1985), Sharma (1995), and Balabanis et al. (2001) found no relationship between age and CET. <Table 1> summarizes these findings.
Based on social identity theory (SIT), Zeugner-Roth, Žabkar, and Diamantopoulos (2015) advanced extant research by examining the predictive power of CET as an anti-out-group construct, distinct from national identity (NI) as a pro-in-group construct, on socio-psychological traits of consumer behavior. Meanwhile, Festinger (1954) pointed out that social comparison is an automatic psychological mechanism rooted in the consumer's mind (Haferkamp & Krämer, 2011), and a consumer's attention to social comparison information (ATSCI) influences his/her attitudes, opinions, and even behaviors towards certain objects (Mussweiler & Rüter, 2003).
Although some existing studies have focused on CET, NI, and purchase intention, no previous study has evaluated the joint predictive validity of consumer ATSCI, CET, and NI as drivers of consumer behavior, and thereby explicitly tested their relative importance in terms of direct and indirect effects on consumers' purchase intentions for domestic brands (PIDB).
Our study draws on social comparison theory (SCT), originally proposed by Festinger (1954), and social identity theory (SIT), proposed by Tajfel (1974); samples Chinese consumers under 30 years of age, who are representative of the emerging middle class; and focuses on ATSCI, CET, and NI to explain and predict consumer behavior towards domestic brand products.
Against this background, the threefold objectives of our study are as follows. First, we conceptualized ATSCI, CET, and NI drawing on SCT and SIT, proposed several hypotheses regarding these constructs' effects on PIDB while controlling for potential stimulus bias, and subsequently tested the hypotheses. Our study focused on young Chinese consumers to avoid the age bias found in earlier CET research. Second, our study jointly investigated CET and NI, as previous research (Zeugner-Roth, Žabkar, & Diamantopoulos, 2015) presented NI as a pro-in-group construct distinct from CET as an anti-out-group construct. Third, our study presented an empirical structural equation model using ATSCI, CET, and NI as clustering variables and identified their direct and indirect effects on PIDB.
Conceptual Framework and Hypotheses
We conceptualized ATSCI, CET, NI, and PIDB. <Figure 1> shows a proposed model.
SCT and ATSCI
SCT was originally proposed by Festinger (1954) to understand how an individual's self-evaluation affects social activities. Festinger (1954) argued that an individual is motivated to evaluate himself/herself to reduce uncertainty. When objective criteria are absent, people tend to compare themselves with another person (the comparison target) to judge their own ability and performance. According to Festinger (1954), if the comparison target (others) performs better than the individual (the comparer) does, he or she will feel worse; on the contrary, if others are worse off, the individual will feel better. Thus, two different dimensions of social comparison emerged: "upward social comparison" and "downward social comparison".
Social comparison is an automatic psychological mechanism; as Mettee and Riskind (1974, p. 348) said, it is "effectively forced upon the individual by his social environment" (Mussweiler & Rüter, 2003), and it influences individuals' attitudes, opinions, and even their behaviors towards certain objects. Festinger (1954) asserted that social comparison, as a psychological tendency, is rooted in the mind (Haferkamp & Krämer, 2011).
Attention to Social Comparison Information (ATSCI) refers to the degree of a person's attentiveness to social comparison cues. Calder and Burnkrant (1977) pointed out that, based on others' behaviors, ATSCI helps an individual present himself/herself in a social setting. The need for social comparison differs from person to person (Lennox & Wolfe, 1984). The ATSCI scale proposed by Lennox and Wolfe (1984) attempts to measure a person's behavior in society and the extent to which he/she pays attention to social cues. A high-ATSCI person is more likely than a low-ATSCI consumer to present himself/herself through purchases. Consumers with high ATSCI pay more attention to others' opinions when purchasing branded products than low-ATSCI consumers do (Das & Saha, 2017). Deval et al. (2013) pointed out that high-ATSCI consumers are more open to the influence of social acceptability. Consumers engaging in high-ATSCI behavior comparatively show more concentration and interest (Berlyne, 1960; Bilkey & Nes, 1982). Therefore, it is hypothesized that: H1: ATSCI influences consumer purchase intention for domestic brands (PIDB) among young consumers.
CET
Shimp and Sharma (1987, p. 280) constructed CET as "beliefs held by ... consumers about the appropriateness, indeed morality, of purchasing foreign-made products" (Zeugner-Roth, Žabkar, & Diamantopoulos, 2015), and research on CET has grown substantially since the construct was introduced (Cleveland, Laroche, & Papadopoulos, 2009; Zeugner-Roth, Žabkar, & Diamantopoulos, 2015; Shoham & Gavish, 2016). According to Shimp and Sharma (1987), ethnocentric consumers' domestic-country bias is primarily based on an economic motive and the normative belief that supporting domestic companies by purchasing domestic products is necessary (Verlegh, 2007; Shan Ding, 2017). Shankarmahesh (2006) employed the CET construct to explain why consumers are apt to purchase their home country's products rather than foreign alternatives. In effect, ethnocentric consumers want to protect the domestic economy by consuming domestic products (Sharma, 2011; Supphellen & Rittenburg, 2001). Highly ethnocentric consumers hold favorable attitudes toward purchasing domestic brands and products because of perceived economic and cultural threats from foreign brands and products (Cleveland, Laroche, & Papadopoulos, 2009; Barbarossa, Pelsmacker, & Moons, 2018). Josiassen (2011) asserted that CET in particular is a preferred basis of local bias in attitudes and behaviors toward products. To explain consumer preferences for local and foreign products comprehensively, it is necessary to consider CET within an extended range of consumer characteristics (Zeugner-Roth, Žabkar, & Diamantopoulos, 2015), distinguishing pro-in-group from anti-out-group constructs (Balabanis & Diamantopoulos, 2004; Sharma, Shimp, & Shin, 1994).
However, age, one of the demographic antecedents, has shown different relationships with CET. Through interviews, Schooler (1971) and Bannister and Saunders (1978) reported a negative effect of age on CET. In contrast, Festervand, Lumpkin, and Lundstrom (1985) sampled US consumers for various products (mechanical, food, fashion, electronics, and leisure products) and reported no relationship between age and CET. Similarly, Sharma (1995), sampling Korea, and Balabanis et al. (2001), sampling the Czech Republic, found the same result. Using survey methods, other researchers reported a positive relationship between age and CET (Han, 1988; Good & Huddleston, 1995; Caruana, 1996; Klein & Ettenson, 1999; Balabanis et al., 2001). Thus, it is hypothesized that: H2: CET has a positive effect on DBPI in the young generation.
The concepts of CET and ATSCI are distinct, with seemingly little potential for a relationship; still, a careful assessment of the two concepts indicates that there might be a connection. Smith (1992) viewed CET sentiments as rooted in human values, and consumer decision-making also includes social considerations. CET is a major determinant of consumer behavior (Zolfagharian, Saldivar, & Braun, 2017), and its impact has generally been framed using social identity theory (Tajfel, 1982; Tajfel & Turner, 1986). Siamagka and Balabanis (2015) found that CET positively and significantly influenced susceptibility to interpersonal influence. Meanwhile, Deval et al. (2013) found that high-ATSCI consumers were more likely to be swayed during purchases by social acceptability appeals. Therefore, it is hypothesized that: H3: CET is positively affected by ATSCI in the young generation.
NI
Drawing on SIT, Brewer (1999) found that, for consumers, in-group bias due to NI results from feelings of association with the in-group (e.g., the home country), without any explicit stimulus from out-groups (e.g., a foreign country). Zeugner-Roth, Žabkar, and Diamantopoulos (2015) advanced extant research by examining the predictive power of NI as a pro-in-group construct and CET as an anti-out-group construct on socio-psychological traits of consumer behavior. Therefore, NI is fundamentally different from CET.
According to Coombes et al. (2001), NI is defined as consumers' cultural expression of national traditions. In a given cultural context, NI represents the common cultural expression of national traditions by consumers in the same nation (Stöttinger & Penz, 2018). Müller-Peters (1998) described this special expression of social identity as one whose reference group is the citizens of a nation. Blank and Schmidt (2003, p. 296) stated that NI refers to an inner bond with the nation as well as the subjective significance and importance of national affiliation. Tajfel (1978) found that NI indicates the extent to which consumers identify with, and feel a positive sense of affiliation to, the nation, and the importance the consumer attaches to that feeling (Feather, 1981).
Rooted in consumers' attachment to a nation, NI can be either positive or negative, stretching from an explicit contra-identity (negative identity) to a positive identity (Blank, 2003). However, in most cases, NI is likely to be positive. In our study, the positive form of NI was adopted, rather than negative national identity as national disidentification presented by Josiassen (2011). Dinnie (2002) suggested that consumers may possess an NI that influences purchase behavior. Moreover, Suarez and Belk (2017) suggested incorporating local culture, based on NI, to develop localization strategies for global firms. Thus, it is hypothesized that: H4: NI has a positive effect on DBPI in the young generation. H5: ATSCI has a positive effect on NI in the young generation.
Data Collection and Sample
Chinese consumers are the sample of our study because China is the world's largest emerging market (Wu & Zhou, 2018). The hypotheses were tested on Chinese consumers under 30 with an online survey administered through WeChat, an application with more than one billion monthly Chinese users. Based on prior research, the questionnaire was first developed in English; to ensure translation equivalence, it was double-back-translated between English and Chinese, with inconsistencies resolved through discussion between translators and researchers to preserve the meaning of each item. In total, 579 completed questionnaires were collected from different locations (identified by back-tracking the participants' IP addresses), ensuring the geographic diversity of the sample. <Figure 2> shows that almost 80% of participants responding to the survey were located in China and that nearly 20% were overseas at the time. Our study primarily focused on the young generation, so those over 30 and those who failed to finish within 100 seconds were excluded, leaving 415 usable responses for the final analysis. <Table 2> summarizes the demographic characteristics of the final sample with respect to age, gender, education, and location. All participants were under 30, consistent with the study requirement, and 92.3% of them were between 20 and 30 years old. Female respondents (67.2%) outnumbered male respondents (32.8%) by more than two to one. Over 90% of respondents were relatively well educated (61.4% had graduated from university and 31.6% held at least a master's degree), and monthly income ranged broadly from less than 3,000 RMB to more than 15,000 RMB.
Measurements
For all constructs, measures were adapted from previous research, and a seven-point Likert scale anchored by one ("strongly disagree") and seven ("strongly agree") was used for all items. Following Lennox and Wolfe (1984), we measured Attention to Social Comparison Information with a three-item version of the ATSCI scale, which was validated by Deval et al. (2013). Similar to Zeugner-Roth, Žabkar, and Diamantopoulos (2015), we operationalized consumer ethnocentrism using the four-dimensional CETSCALE measure initially proposed by Shimp and Sharma (1987), which has been widely validated (Balabanis & Diamantopoulos, 2004; Shankarmahesh, 2006; Verlegh, 2007) and was adopted in a recent study on Chinese consumers (Ding, 2017). As Blank and Schmidt (2003) noted, there is "little disagreement on the measurement of national identity." Thus, a three-item version was used in our study, based on previous research (e.g., Mlicki & Ellemers, 1996; Verlegh, 2007).
Consumer purchase intention for domestic brand products from the stimulus country was measured with two items from Putrevu and Lord (1994). <Table 3> summarizes the measures and items; sample items include "Being Chinese is important to me" (national identity) and, for Domestic Brand Purchase Intention (DBPI), "It is very likely that I will buy Chinese brand products" and "I will definitely try Chinese brand products in the future" (Putrevu & Lord, 1994).
Reliability and Validity
Before testing the hypotheses, we assessed the reliability and validity of the items for the four constructs. Cronbach's α was used to check internal consistency. <Table 4> shows that α for the overall instrument was 0.786, higher than 0.7, and that the coefficients for each construct were higher than 0.6: ATSCI (α = 0.673), CET (α = 0.856), NI (α = 0.894), and PIDB (α = 0.836). Hair et al. (2014) suggested 0.6 as the minimum acceptable value for Cronbach's α; thus, the reliabilities of the study measures were acceptable. A confirmatory principal component analysis with Varimax rotation was conducted to explore the principal components; <Table 4> reports the detailed estimation results. In addition, Fornell and Larcker (1981) pointed out that average variance extracted (AVE) values higher than 0.5 for all constructs can be used to confirm convergent validity. The AVE values are reported in <Table 5>. Following Fornell and Larcker (1981), who state that convergent validity can still be considered adequate if AVE is below 0.5 but above 0.4 and composite reliability (CR) is higher than 0.6, we determined that the convergent validity of the constructs was adequate. Moreover, <Table 4> shows that the Φ coefficients among the constructs, none equal to 1.0, indicate correlations between distinct constructs. <Table 5> shows that the discriminant validity of the constructs was confirmed (Anderson & Gerbing, 1988).
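For readers unfamiliar with these statistics, the following Python sketch shows how Cronbach's α, AVE, and composite reliability (CR) are computed; the item scores and loadings below are hypothetical placeholders, not the study's data.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) array of Likert scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def ave_and_cr(loadings):
    # loadings: standardized factor loadings of one construct's items.
    lam = np.asarray(loadings, dtype=float)
    err = 1 - lam**2                         # error variances under standardization
    ave = (lam**2).mean()
    cr = lam.sum()**2 / (lam.sum()**2 + err.sum())
    return ave, cr

# Hypothetical 7-point responses to a three-item scale and hypothetical loadings.
rng = np.random.default_rng(0)
base = rng.integers(2, 7, size=(200, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(200, 3)), 1, 7)

print(f"alpha = {cronbach_alpha(scores):.3f}")
ave, cr = ave_and_cr([0.71, 0.68, 0.64])
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")     # AVE > 0.4 with CR > 0.6 deemed adequate here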
Tests of Hypotheses
To assess the causal relationships among ATSCI, CET, NI, and PIDB, we estimated the path coefficients. <Table 6> reports the standardized path coefficients of the structural equation model from the model estimation. The χ², CMIN/DF, GFI, AGFI, CFI, TLI, IFI, RFI, NFI, and RMSEA indices were used to evaluate model fit. The results revealed that χ² was 93.000 (DF = 49, P = 0.000); CMIN/DF was 1.898, thus less than 3; RMSEA was 0.047, less than 0.05; CFI was 0.979; and GFI, AGFI, TLI, IFI, RFI, and NFI were all above 0.9. As suggested by Hu and Bentler (1999), these indices indicate satisfactory model fit. Both <Table 6> and <Figure 3> show that the direct path coefficient from ATSCI to PIDB was not significant (γ = 0.047, t-value = 0.804 < 1.96, p = 0.…).
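A minimal sketch (Python) of the fit-evaluation step described above, using the cut-offs applied in the text (CMIN/DF < 3, RMSEA < 0.05, and incremental indices above 0.9); these are the thresholds used here rather than a universal standard.

def check_fit(indices):
    # indices: dict of SEM fit statistics -> dict of booleans (threshold met?).
    rules = {
        "CMIN/DF": lambda v: v < 3,
        "RMSEA":   lambda v: v < 0.05,
        "CFI":     lambda v: v > 0.9,
        "GFI":     lambda v: v > 0.9,
        "TLI":     lambda v: v > 0.9,
    }
    return {k: rules[k](v) for k, v in indices.items() if k in rules}

print(check_fit({"CMIN/DF": 1.898, "RMSEA": 0.047, "CFI": 0.979, "GFI": 0.95}))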
Discussion and Implications
The results of our study show that, for the young Chinese generation, ATSCI had no direct impact on PIDB. CET showed a positive and significant relationship with PIDB, consistent with previous research findings (Josiassen, 2011; Strizhakova & Coulter, 2015). The original definition of CET reported by Shimp and Sharma (1987) does not include consumers' intentions towards domestic products, but rather highlights a clear bias against foreign products (Zarkada-Fraser & Fraser, 2002). Plenty of subsequent research, however, shows that CET positively biases consumers' PIDB in different countries (Herche, 1992; Olsen, Biswas, & Granzin, 1993; Klein, Ettenson, & Morris, 1998; Suh & Kwon, 2002). The positive influence of young Chinese consumers' ATSCI on CET was consistent with Siamagka and Balabanis' (2015) finding of a positive association between CET and susceptibility to interpersonal influence. The indirect mediation effect from ATSCI through CET to PIDB was significant; however, such findings have been little explored in previous research. National identity (NI) positively and significantly influences the young Chinese generation's PIDB, in agreement with previous literature (Verlegh, 2007; Zeugner-Roth, Žabkar, & Diamantopoulos, 2015). ATSCI was positively and significantly related to NI. In addition, the indirect mediation effect from ATSCI through NI to PIDB was significant. These mediation-effect findings are new. Furthermore, after accounting for the mediation effects of ATSCI through CET to PIDB and of ATSCI through NI to PIDB, we found that the direct path from ATSCI to PIDB was not significant.
Managerial Implications and Contributions
There are some managerial implications for international marketers wishing to stimulate young Chinese consumers to purchase domestic brand products. First, the results show that CET and NI influence young Chinese consumers' PIDB. Therefore, international marketers should consider Chinese market entry modes (foreign direct investment, international joint ventures, and co-branding with a local brand) and branding decisions to mitigate young consumers' domestic bias. Second, as positive effects of ATSCI on CET and NI were found, international marketers could use this information to catch consumer attention in targeted product promotions. Furthermore, because ATSCI exists among young Chinese consumers, it indirectly affects their PIDB. Meanwhile, consumers pay attention to electronic word of mouth in decision-making (Lin & Kalwani, 2018). Thus, to build positive attitudes towards domestic brands, international marketers could focus on consumers' information channels when promoting products.
Limitations and Further Research
There are some limitations to the current research. First, our study was conducted in China and collected data only from young consumers under 30. Replication in other settings and countries would help mitigate the age bias. According to Hofstede's (2001) cultural dimensions, China is a collectivist country; thus, individualist countries could be sampled in future studies, and examining the results in other collectivist countries may also be worthwhile. Meanwhile, comparing the findings across countries in the future is necessary for generalization. Second, our study did not examine specific categories of domestic brand products. In terms of product attributes, functional and symbolic attributes are the classical classifications of products (Park & Jeon, 2018); thus, specific brand categories or products should be considered in further research. Third, although the convergent validity of the constructs is adequate in our study, according to Fornell and Larcker (1981) an AVE above 0.5 would be better; thus, approaches to achieving higher validity are worth exploring in further research. Finally, our study excluded country-of-origin (COO) effects and focused on purchase intention toward domestic brand products without considering foreign brands. Moon and Oh (2017) suggested that research on COO effects together with CET and NI is needed; thus, further studies combining ATSCI, COO effects, CET, and NI would be valuable.
Table 2 :
Demographic Characteristics of the Sample
Table 3 :
Measures, Items and Scale Sources
Table 4 :
Analyzing Components Results of Constructs and Items
Table 5 :
Results of Analyzing AVE and Correlations Notes: AVE is on the diagonal line in bold text, and the squares of correlation coefficients are in ( ).
Table 6 :
Results of Testing Hypotheses
Table 7 :
Effects of CET and NI between ATSCI and PIDB | 2019-06-14T15:15:18.744Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "6d1230638acda081a55f29ca8057c8910a7b985f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15722/jds.17.5.201905.39",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "21a01904482cb8b9aea15e0a089730f795add8d6",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
199551189 | pes2o/s2orc | v3-fos-license | Trends in energy and nutrient supply in Ethiopia: a perspective from FAO food balance sheets
Background Ethiopia is the second-most populous country in Africa. Although most people still live in rural areas, the urban population is increasing. Generally, urbanisation is associated with a nutrition transition and an increase in risk factors for non-communicable diseases (NCDs). The objective of this study was to determine how the nutritional composition of the Ethiopian food supply has changed over the last 50 years and whether there is evidence of a nutrition transition. Methods Food balance sheets for Ethiopia from 1961 to 2011 were downloaded from the FAOSTAT database and daily per capita supply for 17 commodity groupings was calculated. After appropriate coding, per capita energy and nutrient supplies were determined. Results Per capita energy supply was 1710 kcal/d in 1961, fell to 1403 kcal/d by 1973, and increased to 2111 kcal/d in 2011. Carbohydrate was by far the greatest energy source throughout the period, ranging from 72% of energy in 1968 to 79% in 1998; however, this was mostly provided by complex carbohydrates as the contribution of sugars to energy only varied between 4.7% in 1994 and 6.7% in 2011. Energy from fat was low, ranging from 14% of energy in 1970 to 10% in 1998. Energy from protein ranged from 14% in 1962 to 11% in 1994. Per capita supplies of calcium, vitamin A, C, D, folate and other B-vitamins were insufficient and there was a low supply of animal foods. Conclusions The Ethiopian food supply is still remarkably high in complex carbohydrates and low in sugars, fat, protein, and micronutrients. There is little evidence yet of changes that are usually associated with a nutrition transition.
Background
Over the last 50 years, dietary patterns around the world have changed dramatically [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. Starting in developed countries, and more recently in developing countries, a pattern of "Westernisation" of the diet has emerged, with traditional, largely plant-based diets being replaced by increased intakes of animal products, fats and oils, highly processed foods (e.g. soft drinks, sweet or savoury snacks, reconstituted meat products, and pre-prepared frozen dishes), added sugars and salt, accompanied by a shift towards more sedentary work and leisure patterns. This phenomenon, known as the nutrition transition, generally occurs when a population moves from a predominantly rural, traditional lifestyle to an urban, industrial one [2,14] and is nearly always preceded by epidemiological transitions in that population, such as declining fertility rates, lower maternal and infant mortality, reduced mortality from infectious diseases and increased life expectancy [15]. The major concern is that this transition is strongly associated with rising rates of obesity and other non-communicable chronic diseases (NCDs) [16]. By 2020, NCDs are expected to account for almost three-quarters of all deaths worldwide, and over 70% of deaths from ischaemic heart disease, stroke and type 2 diabetes will be occurring in developing countries [16]. Obesitya major risk factor for NCDs -is already becoming a serious problem in parts of Africa despite the continued presence of undernutrition (defined as insufficient intake of energy and nutrients to meet an individual's needs to maintain good health) [12,17,18], while all around the world the burden of obesity is shifting more towards the poor [11].
Ethiopia is the second-most populous country in Africa and the 13th most populous in the world, with an estimated population of 99.4 million in 2015 [19]. It is also one of the poorest countries in the world, with almost one-quarter of its people living on less than $1 a day [20]. The age profile of the population is very young (median age in 2015 was 18.3 years) and most people (> 80%) live in rural areas [21]; however, the urban population is increasing [20,22]. Between 1990 and 2014 the urban population increased dramatically, from 6,064,000 (13% of total) to 18,363,000 (19%) and is forecast to reach 70,522,000 (39%) by 2050 [23]. Recent data show that 38% of children less than 5 years of age are stunted, 24% are underweight, 10% are wasted and more than 50% are anaemic, along with 18% of men and 23% of women in the 15-49-year age group [24]. Micronutrient deficiencies, including vitamin A, zinc, selenium, iron and iodine deficiency are major public health concerns [20,[25][26][27][28][29]. At the same time, risk factors for NCDs may be increasing, especially in urban areas. In 2005, 14% of urban women were overweight or obese compared with 2% in rural areas, and the highest prevalence (18%) was in the Addis Ababa region [30].
Because of the link between early childhood nutritional deprivation and later adult disease [31][32][33], the Ethiopian population, which is currently experiencing a high prevalence of fetal and post-natal growth retardation, will face an even greater risk for NCDs once these individuals reach adulthood. Thus, it is essential to monitor dietary trends in the country in order to identify the emergence of dietary patterns that are known to promote the development of NCDs. Dietary trends or food supply at a national level can be monitored crudely by the food disappearance method [34] using the food balance sheets that are produced annually by the United Nations Food and Agriculture Organisation (FAO) [35]. This method uses annual data on production and utilisation of all food commodities (including production within the country, imports, exports, stock changes, industrial non-food use, animal feed use, seed use, and waste) to derive a value for the average per capita supply of each commodity. By inputting these values into an appropriate food composition table, the average per capita supply of nutrients can then be calculated. Although they only show per capita supply of food commodities and not the actual dietary intakes of individuals, food balance sheets provide useful and timely information that can lead to a better understanding of current nutrition-related problems at the country level and assist in the development of more effective national public health nutrition policies [36][37][38][39][40][41][42][43][44]. Therefore, the objective of the present study was to analyse the FAO food balance sheets for Ethiopia from 1961 to 2011 to determine what changes have taken place in the energy and nutrient supply in the country over the last 50 years and to investigate whether there is evidence of a nutrition transition.
Methods
FAO food balance sheets for Ethiopia from the period of 1961-2011 were downloaded from the FAOSTAT database [45]. Up to and including 1992, the Ethiopian food balance sheets include data from Eritrea, but not thereafter, as Eritrea gained independence from Ethiopia in 1993. These food balance sheets provide the overall per capita supply (as kilogram (kg)/year) for 98 food commodities, including cereals, starchy roots, vegetables, fruits, oilseeds and oilseed products, tree nuts, animal fats, milk, meats, eggs, and fish. The full list of commodities is shown in Appendix 1. After importing the data into Microsoft® Office Excel 2010 (Microsoft Corp., Redmond, WA, USA), we converted per capita supply (kg/year) to grams/day (g/d).
To evaluate the trends in energy and nutrient supply we constructed a food composition table in Microsoft® Office Excel 2010 and matched the commodities on the food balance sheets with appropriate foods from McCance and Widdowson's The Composition of Foods, 5th and 6th editions plus supplements [46], or, in a small number of cases, the USDA food composition tables [47]. For consistency, foods were coded as being in their least processed form. For example, sweet potatoes were coded as 'sweet potatoes, raw' , barley as 'barley, whole grain, raw' , and eggs as 'eggs, chicken, whole, raw'. Certain broad categories of commodity in the food balance sheets, such as peas, beans, nuts, fish, and those labelled 'other' , are lacking in detail about the specific foods that make up the category. To code these categories in a way that would be region-specific, we reviewed food lists of food frequency questionnaires and other relevant literature from Ethiopia and neighbouring countries [20,[48][49][50][51][52] in order to identify commonly consumed foods that belonged in that category. For categories where this procedure was carried out, supply was divided equally among the constituent items. In total, 53 commodities were matched with foods from McCance and Widdowson's Food Composition Tables, 4 commodities (sorghum and products, cottonseed, fish body oil and ricebran oil) were matched with foods from the USDA food composition database, and 27 commodities were matched to composite codes that we created. The remaining 14 commodities, including such items as alcohol (non-food), sugar beet, sugar cane, aquatic plants, aquatic mammals (other), and infant food, were not coded because they contributed little or nothing to the Ethiopian food supply: for most of these commodities no values at all were provided by the food balance sheets for the entire period under investigation, while for the remainder, supply was either zero or else a very small supply (≤2.5 kg/capita/year) was recorded during certain specific years. Depending on the year, the commodities we coded accounted for between 97 and 99.3% of the total energy supply.
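A simplified sketch of the food-disappearance calculation described above (Python); the commodity quantities and nutrient values below are illustrative placeholders rather than actual FAO or McCance & Widdowson figures.

# Hypothetical per capita supply (kg/year) and a toy composition table
# (per 100 g of the raw commodity); real values would come from the
# FAO food balance sheets and McCance & Widdowson / USDA tables.
supply_kg_per_year = {"wheat": 50.0, "maize": 45.0, "sorghum": 30.0, "palm oil": 2.0}

composition_per_100g = {             # kcal, protein g, fat g, carbohydrate g per 100 g
    "wheat":    {"kcal": 334, "protein": 12.7, "fat": 2.2,   "carb": 68.5},
    "maize":    {"kcal": 362, "protein": 9.4,  "fat": 4.7,   "carb": 74.3},
    "sorghum":  {"kcal": 339, "protein": 11.3, "fat": 3.3,   "carb": 74.6},
    "palm oil": {"kcal": 884, "protein": 0.0,  "fat": 100.0, "carb": 0.0},
}

totals = {"kcal": 0.0, "protein": 0.0, "fat": 0.0, "carb": 0.0}
for food, kg_year in supply_kg_per_year.items():
    g_per_day = kg_year * 1000 / 365             # convert kg/year to g/day
    for nutrient, per100 in composition_per_100g[food].items():
        totals[nutrient] += per100 * g_per_day / 100

print({k: round(v, 1) for k, v in totals.items()})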
Statistical analysis
The FAO food balance sheets themselves provide estimates of per capita energy, protein and fat supply; however, they do not provide any figures for carbohydrate supply. In order to check the level of agreement between the FAO estimates for energy, protein and fat and our calculated values we obtained Pearson correlations using Microsoft® Office Excel 2010. Statistical significance of correlations was accepted at the 5% level. All tests were two-sided.
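For illustration, the same agreement check can be reproduced outside a spreadsheet; the yearly series below are hypothetical and merely mimic the kind of comparison reported in the Results.

import numpy as np

# Hypothetical series: FBS-reported vs. independently calculated energy supply (kcal/d).
fao_energy  = np.array([1710, 1650, 1580, 1520, 1480, 1900, 2050, 2150], float)
calc_energy = fao_energy * 0.98 + np.array([30, -45, 25, -20, 40, -60, 35, -15], float)

r = np.corrcoef(fao_energy, calc_energy)[0, 1]
mean_diff_pct = 100 * (calc_energy - fao_energy).mean() / fao_energy.mean()
print(f"Pearson r = {r:.3f}, mean difference = {mean_diff_pct:+.1f}%")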
Results
Correlations between our calculated values for energy, fat and protein supply and the FAO estimates
Figure 1(a) shows the correlation between the per capita energy supply as reported on the FAO food balance sheets and the values we calculated using the food composition database we constructed specifically for this study. On average, our values for energy were 2.1% lower than the food balance sheet estimates, but the correlation between them was statistically significant (r = 0.927, P = 0.0000). Our values for fat supply were, on average, 0.1% higher and our values for protein were 2.2% lower than the food balance sheet estimates; however, again, there was a strong and significant correlation between them (r = 0.695, P = 0.0000 for fat and r = 0.956, P = 0.0000 for protein) (Fig. 1b and c).
Trends in energy and macronutrient supply
The 50-year trends in per capita energy supply in Ethiopia between 1961 and 2011 are shown in Fig. 2(a). According to our calculations, per capita energy supply was 1710 kcal/d in 1961 and it fell to as low as 1403 kcal/d by 1973. Since then there has been an increase in per capita energy supply, especially since the early 1990s, so that by 2011 the value was about 50% higher at 2111 kcal/d. The trends according to the FAO food balance sheets estimates are also shown in Fig. 2(a) for comparison. Our values were slightly lower than the FAO values until the mid-1970s, but there was excellent agreement between the two datasets after that.
The 50-year trends in per capita supply of protein and fat in Ethiopia are shown in Fig. 2(b) and (c), respectively. According to our calculations, between 1961 and 1976, per capita protein supply fell from 61 g/d to 45 g/d, and per capita fat supply fell from 25 g/d to about 20 g/d. From 1976 to the early 1990s there were fluctuations in the supply but little evidence of any major reversal in the trend. However, since 1993 the per capita supplies of both protein and fat have been increasing, reaching 60 g protein and 30 g fat/d by 2011. Carbohydrate supply also fell during the 1960s and early 1970s, from 327 g/d in 1961 to 270 g/d in 1973 (data not shown); however, since then, carbohydrate supply has been on the rise and by 2011 it was more than 50% higher, at 421 g/d. The FAO food balance sheets do not provide estimates of carbohydrate supply but their estimates for fat and protein are shown in Fig. 2(b) and (c); both sets of figures are in good agreement with our calculated values.
Contributions of macronutrients to energy supply
The contribution of proteins, fats and carbohydrates to total energy supply in Ethiopia from 1961 to 2011 is shown in Fig. 3. Carbohydrate was by far the biggest energy source throughout the period, ranging from 72% of energy in 1968 to 79% in 1998. Energy from fat was exceptionally low, ranging from 14% of energy in 1970 to only 10% in 1998. Energy from protein ranged from 14% in 1962 to 11% in 1994. Alcohol made a very minor contribution to energy, ranging from < 0.2% in 1995 to a maximum of about 0.7% in 1976 (data not shown).
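These percentage contributions follow from applying energy conversion factors to the macronutrient supplies; below is a brief sketch (Python) using the 2011 figures quoted in this section and assuming the generic Atwater factors (4, 9 and 4 kcal/g, and 7 for alcohol), since the exact factors used in the analysis are not stated.

ATWATER = {"carbohydrate": 4, "protein": 4, "fat": 9, "alcohol": 7}  # kcal per gram

def energy_shares(grams):
    # grams: dict of macronutrient supply in g/day -> % of energy by source.
    kcal = {k: g * ATWATER[k] for k, g in grams.items()}
    total = sum(kcal.values())
    return {k: round(100 * v / total, 1) for k, v in kcal.items()}

# Approximate 2011 per capita supply figures quoted in this section.
print(energy_shares({"carbohydrate": 421, "protein": 60, "fat": 30}))
# -> roughly 77% / 11% / 12%; small differences from the reported 75.8/11.3/12.9
#    arise from rounding of the gram values.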
Trends in fatty acid supply and P:S ratio
The 50-year trends in saturated (SFA), monounsaturated (MUFA) and polyunsaturated (PUFA) fatty acid supply in Ethiopia and in the polyunsaturated-to-saturated (P:S) ratio are shown in Fig. 4(a). In general, per capita fatty acid supply, which was already low, declined between 1961 and the mid-1990s and there was a slight increase in the P:S ratio, from a minimum of 0.77 to a maximum of 1.15. These trends have now reversed and since 1998 there has been a consistent, albeit small, increase in fatty acid supply and a fall in the P:S ratio.
Trends in micronutrient supply
The 50-year trends in micronutrient supply in Ethiopia between 1961 and 2011 are shown in Fig. 5(a)-(d). Calcium (Ca), iron (Fe), zinc (Zn), vitamin A, B1, B2, B6, B12, niacin, folate, C and D supplies were already low in the 1960s and all declined during the next 2 decades before eventually stabilising and starting to increase from the mid-1990s onwards. The trend for vitamin E was slightly different, as minimum supply occurred earlier, in the mid-1970s.
Trends in supply of major commodities
The commodities providing the most energy in the Ethiopian food supply between 1961 and 2011 are shown in Fig. 6(a). Cereals have been, and remain, by far the major contributors to energy although there has been a change in their relative contributions over time. During the 1960s and 1970s 'Cereals, other' (which in the Ethiopian context is predominantly teff ) was the major energy source, but since the 1980s this has been overtaken by wheat and maize. Sorghum was the second most important contributor to energy supply in the early 1960s, but its importance has diminished over time, although it still ranks fourth in terms of calories provided. Cereals (along with pulses) are also the major sources of protein in Ethiopia, and even in 2011 there was no animal food among the top 5 protein sources ( Fig. 6(b)). In the early 1960s, bovine meat was the major source of fat but in recent decades it has been overtaken by palm oil and milk (Fig. 6(c)).
Discussion
The objective of the present study was to analyse the FAO food balance sheets for Ethiopia from 1961 to 2011 and to investigate whether there is evidence of a nutrition transition, which is typically marked by an increase in the proportion of energy coming from fats and sugars, a fall in the percentage of energy coming from starch, and an increase in the usage of vegetable oils and animal-derived foods. Our analysis revealed that although the per capita energy supply in Ethiopia has increased substantially over the course of the last two decades, cereals remain the major contributors to dietary energy and indeed protein, and there has only been a small increase in energy from sugars and in the usage of vegetable oils and animal-derived foods. This suggests that although rapid urbanisation of the population is occurring, the country as a whole is still in the early stages of a nutrition transition.
On average, our calculated values for protein and energy supply were 2.2 and 2.1% lower, and our values for fat were 0.1% higher than those given on the food balance sheets. Both sets of values were significantly correlated and trends observed over time were virtually identical ( Fig. 2(a)-(c)). This suggests that the way we coded the commodities was appropriate. The small differences in absolute values may be due to differences between the food composition tables upon which our analysis was based [46,47] and the older nutrient values used in the FAO statistical databases over time. FAO cautions that for a variety of reasons these older compositional data may not be reflective, in many cases, of the foods and nutrients consumed today [53].
Our analysis showed that energy, protein and fat supply, which were already low, declined from the early 1960s until the early to mid-1970s (Fig. 2(a)-(c)). Since then, there has been an improvement in the situation, with an increase in energy supply of 570 kcal/capita/d or 40% being recorded since 1992-1993. This is consistent with trends internationally, as Kennedy [54] reported that the global per capita energy supply increased by some 500 kcal/d between 1961 and 1999. Also, results from the Ethiopian Household Income Expenditure Surveys showed an increase of almost 700 kcal/adult equivalent between 1995-96 and 2004-05 [55]. Protein supply in Ethiopia has also increased by 40% since the early 1990s but despite this, the prevalence of undernourished people in Ethiopia is still reported to be one of the highest in East Africa [56]. (FAO uses the Prevalence of Undernourishment indicator to estimate the extent of chronic hunger in the world, thus "hunger" (i.e., insufficient consumption of dietary energy) may also be referred to as undernourishment.) The WHO recommends that 55-70% of dietary energy should come from carbohydrate (with < 10% coming from free sugars, which it defines as "all monosaccharides and disaccharides added to foods by the manufacturer, cook or consumer, plus sugars naturally present in honey, syrups and fruit juices") and 15-30% should come from fat [16]. Our data show that unlike most countries, which are experiencing increasing fat and decreasing carbohydrate supply, the food supply in Ethiopia has, throughout the last 50 years, exceeded the WHO recommendation for energy from carbohydrate and fallen below their recommendation for energy from fat. Within the last decade and a half, starting from the mid-1990s, energy from carbohydrate has been falling slowly and energy from fat has been increasing (notably from palm oil), and there has been a downward shift in the P:S ratio to less than 1.0. This may indicate the very early stages of a nutrition transition similar to what has been experienced elsewhere, but the food supply is still remarkably high in starch (from cereals and starchy roots) and low in fat, protein and sugars. For example, our data show that in 2011 carbohydrates provided 75.8% of energy, while only 12.9% came from fat and 11.3% came from protein; moreover, sugars provided only 6.7% of energy. Using different methodology, broadly similar findings were reported by the Ethiopia National Food Consumption Survey [57]. Based on a single 24-h recall collected between June and September 2011 in a nationally representative sample of 6702 women of childbearing age, the contributions of carbohydrates, fats and proteins to energy were 73.5, 16.5, and 9.7% respectively. Interestingly, however, in their smaller sample (n = 377) of urban men, carbohydrates provided only 68.1% of energy, while the contribution from fat increased to 20.7%.
The WHO recommendation for fibre intake is > 25 g/d of total dietary fibre [16]. Our data show that per capita supply of fibre has been increasing since the mid-1970s and was about 43 g/d in 2011; however, it is important to recognise that food balance sheets do not take into account the losses that occur beyond the retail level such as those due to peeling, food preparation and wastage, so this figure is likely to be an over-estimate of true dietary intake.
The present results (Fig. 5(a)-(d)) show that per capita supply for a number of important micronutrients increased slowly over the last two decades, in line with the general increase in energy supply. However, supplies of calcium, vitamin B2, folate and vitamin C are still below the WHO and FAO nutrient intake recommendations [58] (Appendix 2), substantially so in the case of calcium and folate, while vitamin A and B6 are borderline. Given the low supply of animal foods and the low bioavailability of iron and zinc from plant-based diets, these (along with vitamin B12 and D) are also nutrients of concern. Consistent with these results, a recent study on the micronutrient intakes of urban adults in Northern Ethiopia [27] reported inadequate intakes of calcium, retinol, vitamin B1, B2, niacin and vitamin C in the vast majority of study participants (73-100%, depending on the nutrient). The Ethiopia National Food Consumption Survey [57] reported a very high prevalence of inadequate intakes of vitamin A in women of childbearing age (81.9%) and in urban male adults (91.3%); moreover, zinc intakes were inadequate in 50.4% of women of childbearing age and in over 60% of urban adult males. The low supply of micronutrients reflects the fact that the Ethiopian diet continues to be composed mainly of cereals, roots, tubers, and pulses. There is low dietary diversity and low consumption of fruit and vegetables, fish and animal products, all of which are important sources of micronutrients.
This study does have limitations. Food balance sheets overestimate actual food consumption and nutrient intakes because they fail to take into account losses that occur beyond the retail level, such as wastage during food preparation, losses due to processing, food that is spoiled or simply not eaten, and food fed to animals in the home [39]. Also, no account is taken of regional differences in food supply within a country, or of differences between different age groups, social classes or rural versus urban dwellers. Thus, it is not possible to compare food balance sheet data directly with data from national food consumption surveys because each approach measures different levels of dietary information [59]. This challenge can be overcome to some extent by expressing results on an energy density basis, as % of total energy, or as ratios. A further limitation is the fact that the food balance sheets show basic commodities rather than specific food products, which poses a challenge when it comes to choosing the most appropriate codes for nutritional analysis. For consistency purposes, our approach was to code at the level of the raw unprocessed commodity wherever possible; however, this could result in an overestimation of some dietary components, such as fibre. The lack of detail regarding the specific foods that make up certain categories of commodity in the food balance sheets is another limitation. As in previous studies [44,60,61], we tried to overcome this limitation by populating these categories using information about commonly consumed foods from region-specific food frequency questionnaires and other relevant literature. Another potential limitation is the fact that the food composition table we constructed was based on UK rather than African food composition data; however, we would not expect the composition of basic commodities to differ all that much between countries, and the UK food composition tables are much more comprehensive than existing African food composition tables [62], which facilitates more accurate coding.
Conclusions
In conclusion, unlike many lower- and middle-income countries which have experienced major shifts in the composition of their food supply over the last 50 years, the Ethiopian food supply is still remarkably high in complex carbohydrates (mainly from cereals, roots and tubers) and low in fat, protein and sugars. Since the early 1990s there has been an increase in the overall energy and protein supply, and the micronutrient supply has also improved, but it is still insufficient for calcium, vitamin A, folate and other B-vitamins. Iron and zinc bioavailability will continue to be compromised by the continuing high reliance on grains and pulses and low use of animal foods. The increased usage of maize and wheat at the expense of teff and the appearance of palm oil and milk as fat sources in recent years may be signalling the emergence of a more highly processed food supply, but there is little evidence yet of the kinds of changes that are usually associated with the nutrition transition. These data should provide a useful starting point for further and more detailed studies on diet and chronic disease associations in Ethiopia, and for developing nutrition and health promotion strategies. Owing to the inherent limitations of food balance sheets, further research should be carried out using different methodologies to corroborate these findings.
"year": 2019,
"sha1": "91be5ace02754e94d37fbc2d9edd8fb574305296",
"oa_license": "CCBY",
"oa_url": "https://nutritionj.biomedcentral.com/track/pdf/10.1186/s12937-019-0471-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91be5ace02754e94d37fbc2d9edd8fb574305296",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ABDOMINAL CUTANEOUS NERVE ENTRAPMENT SYNDROME (ACNES)
Abdominal cutaneous nerve entrapment syndrome is caused by entrapment of an intercostal nerve in a fibrous ring in the rectus abdominis muscle and causes neuropathic pain. It remains an overlooked cause of chronic abdominal wall pain. Carnett's test is useful to make the diagnosis. An injection of a local anaesthetic and corticosteroid combination relieves pain and is both diagnostic and therapeutic. This review article describes its pathophysiology, clinical diagnosis and management. The databases Medline and Google Scholar were searched using the terms chronic abdominal pain in general, surgical and gynaecological practice. The databases were merged and duplicates were removed. The aim of the review is to update knowledge on this topic for day-to-day clinical practice.
Introduction
Chronic abdominal pain causes anxiety and loss of work and income for patients, and an economic burden for the health care system, and it often prompts multiple investigations and management attempts. The differential diagnoses of chronic abdominal pain include intra-abdominal disorders such as irritable bowel syndrome, spastic colon and gastritis. When a correct diagnosis is not reached, patients may be given a psychiatric diagnosis such as psychoneurosis, depression, anxiety, hysteria or malingering.

There is an under-recognized and underappreciated cause of chronic abdominal pain, called abdominal cutaneous nerve entrapment syndrome (ACNES).

If a patient presents with chronic abdominal pain and no diagnosis has been reached, ACNES should be considered a probable diagnosis. (1, 2)
Epidemiology
It is estimated that the incidence of ACNES is 1 in 1800. Further, it is seen in up to 30% of patients with chronic abdominal pain who had negative results on prior diagnostic work-up. (2) The peak incidence of the condition is seen in the age group of 30-50 years, and it is reported in 12% of pediatric outpatients with chronic abdominal pain.
Pathophysiology of ACNES
The anterior abdominal wall receives its sensory supply via the anterior and lateral cutaneous branches of the anterior rami of the 7th-12th thoracic nerves.
The infrasternal area is supplied by T7, the umbilicus by T10, and the suprapubic area by T12 and L1 via the iliohypogastric and ilioinguinal nerves (Figure 1).
The cutaneous sensory nerves and vascular bundles lie in the plane between the internal oblique and transversus abdominis muscles (Figure 2). They supply the skin after passing towards the posterior wall of the rectus sheath and through the neurovascular channel in the rectus muscle. These neurovascular bundles are freely mobile within a fibrous ring in the rectus muscle (Figure 3). Entrapment and mechanical irritation occur when they change direction to enter a fibrous or osseo-fibrous tunnel or when they pass over a fibrous or muscular band. (3) The fibrous ring is the site most susceptible to entrapment, where nerve compression and ischaemia produce the symptoms of ACNES. Nerve traction and compression are also caused by rectus muscle contraction.
Localised swelling due to the irritation may directly injure the nerve or compromise the nerve's circulation. Valleix phenomenon explains that the tenderness of the main nerve trunk may be found proximally or distally to the affected portion. Proximal tenderness may result from vascular spasm or from unnatural traction on the nerve trunk against the point of entrapment. In ACNES, all these mechanisms can be at work. (4)
Clinical presentation
Patients often present with abdominal wall pain, most often at the lateral edge of the rectus abdominis muscle on the right side, but it can occur at multiple locations. The pain also radiates within the affected dermatome.

It is sharply localized to a small (<2 cm) area that is always felt in the same place and is usually dull or stabbing in character. There are features of neuropathic pain, such as retrograde radiation (Valleix phenomenon), due to entrapment neuropathy. The pain is aggravated when the patient lies on the affected side or sits. Tight clothing, sneezing, coughing, laughing and physical exercise are other aggravating factors. Even though patients often do not feel "sick," their quality of life can be impaired.

Pain is felt horizontally in the upper abdomen and more obliquely in the lower abdomen, owing to the course of the nerves responsible for ACNES in these regions. When radiation occurs only with movement, it suggests entrapment within the muscle. When the cutaneous nerve branches are entrapped in scar tissue following abdominal surgery, the direction of pain radiation follows the dermatomal distribution of the entrapped nerve.
Recognized risk factors for ACNES include previous laparotomy or laparoscopic surgery and rectus muscle strengthening exercises. In addition, obesity, pregnancy and oral contraceptive use are also risk factors for ACNES.
Physical examination
The physical examination is performed with the patient in the supine position and is important for arriving at a diagnosis of ACNES. In most patients, the pain is exactly localized with a fingertip at the linea semilunaris, i.e., the lateral border of the rectus abdominis muscle.
Carnett's test
When the examiner presses on the point of greatest pain with a fingertip, the pain worsens when the anterior abdominal wall is contracted (positive Carnett's test). However, the pain does not always become worse during the examination.

When the pain originates from the abdominal viscera, it is less marked during the examination (negative Carnett's test). Adequate voluntary contraction of the anterior abdominal musculature is essential for a proper examination.
The neurovascular channel is constricted when the rectus muscle contracts, which worsens the symptoms of neuropathy. Abdominal hernias, abdominal wall haematomas, and rib tip syndrome also produce a positive Carnett's sign on examination. (5) Pressure over the nerve at the anterior openings in the rectus sheath causes pain (positive Hover sign). Hypaesthesia, hyperalgesia, or allodynia around the area of pain also supports a diagnosis of ACNES and has been reported by 75% of patients with ACNES. (6) The "pinch test" is useful if the site of origin of the pain cannot be identified. In this test, the patient's skin and subcutaneous fat are picked up between the thumb and index finger, first on one side of the midline of the abdomen and then on the other; the patient states whether one side hurts more than the other. A cotton swab and pinprick technique can be used to check for hypoesthesia or hyperesthesia around the pain site. (7)

Management

Health education about ACNES and rectus muscle stretching exercises should be provided to all patients. Although the efficacy of non-specific pharmacological therapies is unclear, heat or cold application, abdominal binders and transcutaneous electrical nerve stimulation may be useful.
Recommended Treatment for ACNES
First-line therapy is generally an injection of a local anaesthetic and corticosteroid combination, which relieves pain and reduces herniation of the neurovascular bundle through the ring. A local injection of an anaesthetic agent completely relieves the pain. The combined injection is the most commonly used treatment for ACNES, and it is both diagnostic and therapeutic. Typically 0.5 to 1 ml of 2% lidocaine is used; the length of the needle varies according to the thickness of the subcutaneous tissue, and a 21 G or 22 G needle is usually used. A spinal needle may occasionally be needed to reach the injection point. (8)
Technique for inserting the needle
There are several techniques to identify the landmark for injection: palpation with the fingers, use of a nerve stimulator to identify the nerve, and ultrasound-guided injection. Ultrasound-guided local anaesthetic injection is increasingly recommended in the literature and gives a median duration of pain relief of 12 weeks. (9) There are palpable depressions on the lateral edge of the rectus muscle, and this is the point of injection, where the needle is introduced through the skin, subcutaneous tissue and aponeurosis to the fatty plug that surrounds the neurovascular bundle as it emerges from the fibrous channel.

When passing through the tissues, the aponeurosis and the fatty plug produce resistance to the needle; the needle should not be inserted deeper than this level, as this would further increase the pressure within the fibromuscular channel.

The tip of the needle should be placed in front of the fibrous ring just beneath the aponeurosis, and the examiner should confirm the position of the needle before the injection by withdrawing the needle into the subcutaneous tissue and reinserting it.

The injection is best given with the patient standing and bearing down, but it can be given with the patient lying down if this is more comfortable.

The needle position is identified with the middle finger of one hand placed in the aponeurotic opening, while the other hand is used to clean the area with alcohol and to insert the needle above the tip of the finger. The palpating hand should not be removed until the needle is correctly positioned, and the same hand is then used to stabilize the needle while the drugs are injected. The patient should be instructed to hold their breath during the injection.

Neuromodulation using pulsed radiofrequency lesioning has been attempted to prolong pain relief. These injection procedures carry a small but significant risk of inflicting a second nerve injury. Surgical options are also available; surgical neurectomy has been suggested for ACNES, but the long-term outcome is not yet known. (10,11)
"year": 2019,
"sha1": "99eca1fb10e413fbc89cea86217f642971464aa1",
"oa_license": "CCBY",
"oa_url": "http://jmj.sljol.info/articles/10.4038/jmj.v31i1.62/galley/127/download/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e87de188fbf25e8c9a7c5c6d95951f8c8f8549b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Tool Use as Gesture: New Challenges for Maintenance and Rehabilitation
There are many ways to capture human gestures. In this paper, consideration is given to an extension of the growing trend to use sensors to capture movements and interpret these as gestures. However, rather than have sensors on people, the focus is on the attachment of sensors (i.e., strain gauges and accelerometers) to the tools that people use. By instrumenting a set of handles, which can be fitted with a variety of effectors, it is possible to capture the variation in grip force applied to the handle as the tool is used, and the movements made using the handle. These data can be sent wirelessly (using Zigbee) to a computer where distinct patterns of movement can be classified. Different approaches to the classification of activity are considered. This provides an approach to combining the use of real tools in physical space with the representation of actions on a computer. This approach could be used to capture actions during manual tasks, say in maintenance work, or to support development of movements, say in rehabilitation.
INTRODUCTION
Researchers in the field of Human-Computer Interaction (HCI) have expended a great deal of time and effort on the design and psychology of graphical user interfaces, but significantly less attention on the design of the devices with which users can interact with these interfaces [4][7][8]. Our current range of interaction devices restricts people to a very limited number of actions which tend to be performed in series and tend to require only one hand. Many aspects of our everyday lives involve the use of our hands to grasp, manipulate and operate objects in the world around us. We have a well-developed repertoire of movements to allow fine motor control of our hands and fingers, and yet these movements are rarely supported in HCI. More often than not, the flexibility of the human hand is ignored and movements are reduced to either pressing buttons or grasping a mouse to make small, constrained movements in order to control a cursor on the screen [4][7][8]. Developing interactive technology that reflects the richness of dexterous behavior remains a challenge for HCI.

There are, of course, exceptions to this statement. Over the past decade or two, pen-based computing has allowed people to hold a stylus and manipulate it much like a pen to write and draw on screen, or to hold a stylus to manipulate objects in virtual environments and receive haptic feedback [4], and in the past few years, gaming devices, such as the Nintendo Wii, have supported hand and arm movements that are similar to those used in sport and dance. One reason for this trend is the desire to produce 'multi-functional' devices, such as the mouse, which can be used to perform a variety of functions. These devices share a common underlying approach: the device that is held in the hand is first and foremost intended to be used to act upon virtual objects on the computer screen or in the virtual world. This can be contrasted with the many ways that we use devices (or tools) to act upon real objects in the real world. Not only are the compliances and behaviours of these real objects more complex than their virtual counterparts, but also the range of actions we perform with tools to exploit these compliances is more varied. Notwithstanding the fact that a single device is unlikely to satisfy all of the requirements of all users of a computer [43], there is an obvious irony in the use of the term multi-functional to describe a mouse. The functionality of the mouse lies not in the device (which can only offer the functions of linear movement in the horizontal plane and depression of one, two or three buttons) but in the objects on the graphical user interface that the mouse is used to manipulate. While this provides a means of linking physical activity to a graphical display, in many of our everyday activities the use of the tool provides implicit feedback to the user. This feedback takes the form of the feel of the tool and the effects that the user can make on objects in the world using the tool.

An alternative perspective would be to define the physical device in terms of its functionality, and to capture user behaviour to manage HCI. This is the approach that underlies Tangible User Interfaces. In broad terms, interaction with tangible user interfaces can be considered as follows: 'a user manipulates a physical artifact with physical gestures, this is sensed by the system, acted upon, and feedback is given' ([32] p. 253). The physical artifacts can range from models of real objects [9][11][38], to construction blocks [40], to everyday objects that have been adapted to connect to a digital environment, such as the MediaCup [14]. It is this latter class that is the focus of this paper. Thus, one can conclude that 'interaction devices can be developed as significant components of the computer systems, not only acting as transducers to convert user action to computer response, but communicating all manner of feedback to the user and supporting a greater variety of physical activity' ([4] p. 276).

The manipulation of pointing devices, such as a mouse, joystick or game controller, requires specific control movements; users cannot simply adapt movements that are familiar to them but need to learn new ones. Admittedly, this learning is not particularly onerous because the range of movements permitted is so small. However, it does mean that there is a gap between making a movement in 'real life' and making a movement in order to control virtual objects on a screen. This latter type of movement has the goal of performing actions in order to control something. Some devices, such as styli and pens, are able to support movements that are similar to learned movements, such as drawing and writing. However, it is interesting to note that there are still some differences between performing these movements with pen and paper versus stylus and screen [4]. This means that the movements that a person is performing can be considered partly 'natural' (i.e. learned and practiced in everyday life) and partly a response to the demands of the computer.

Rather than the user having to learn sets of movements that the computer can interpret, the computer could be made to adapt to the sets of movements that the person naturally wants to make. By presenting people with a well-defined task, such as hitting a tennis ball on a screen, it is possible to produce a good specification of the range of motion that might be expected and, using accelerometers or vision-tracking, it is possible to measure this motion from a handheld unit in order to recognize an action (which, after all, is the approach taken by the Nintendo Wii). In this latter case, the person performs an action which is intended to be functionally equivalent to that performed in real life and expects the computer to make an appropriate response, by having the avatar that the person is controlling perform the same action. Current commercial approaches to capturing such actions rely on accelerometers to respond to movements of the device held in the hand. This provides a reliable means of capturing gross movement but struggles with the finer movements that might characterize many dexterous actions. Thus, in order for the capture of human movement to develop, there is a need to allow computers to respond to fine motor control.

For this paper, the focus will be on ways in which one can capture the actions that people perform with objects in the real world, and treat these in much the same way as gestures are treated, i.e., as actions that can be recognized and evaluated. One way in which this can be achieved is through further refinement of the objects that the person holds when interacting with the computer. In this paper, the focus lies on capturing data from the handles of domestic tools and using these data to model different types of performance. It is proposed that such developments not only provide an interesting ground for exploring ways of analyzing human activity but also lead to the development of novel forms of interaction device. There are a number of domains in which such recognition could prove valuable, and this paper considers two of these: (i) capturing and recording everyday actions of people undergoing healthcare or rehabilitation, not in the laboratory but in their own home; and (ii) monitoring the actions of technicians involved in maintenance work and producing a computer log of the work.

These domains of application are considered further in the discussion section. In the next section, a review of approaches to analysing human activity from sensor data is presented. This is followed by a discussion of capturing data from human interaction with tool handles, and a description of a prototype system. Then the results of initial trials and the analysis of different activities are presented, before the paper concludes with a discussion of future developments.
Using Sensors to Analyse Human Activity
Previous research has looked at the automated analysis of ambulatory motion, with some real-time feedback, to aid in rehabilitation of walking [19][28], and at the use of activity recognition to monitor arm movement [15][17]. Such systems enable rehabilitation to be carried out at home, with the use of unobtrusive sensor systems at a reasonable cost compared to current hospital medical systems [6]. Amft and Tröster [1][2] used a range of sensors on the person to define movements involved in eating, as part of a diet monitoring application. For example, a microphone and electromyography sensor was used to recognize chewing and swallowing, and accelerometers on the lower arm indicated movements towards the mouth.

In maintenance work, Ogris et al. [30] combined a body-worn ultrasonic unit to track hand location with an accelerometer to track hand movements when people performed bicycle repair tasks. By capturing the action performed, the system was able to provide guidance and feedback to the user regarding appropriate actions to perform. Another paper reports a system in which combined RFID and bar-code reading is used to identify tools and components, and accelerometers on the wrists to define actions, with a head-mounted web-camera to record and check maintenance activities [31].

In this case, recognition was used to both guide feedback to the user and also to capture novel approaches to a task (which could then be filmed, using the head-mounted camera, for inclusion in future training videos). In a similar manner, Maurtua et al. [25] developed a system to recognize picking up a tool or component and using this to determine whether a car assembly task was being performed correctly. Stiefmeier et al. [34] defined car assembly as a series of sub-tasks and sought to recognize when each sub-task had been completed. In these papers, recognition had the primary goal of checking maintenance procedures against 'good practice'.
Using Sensors to Record Grasp
In the field of ergonomics, grasp is often evaluated through grip dynamometry. This involves the person pulling against a sprung handle; the amount of force used to pull the handle is measured off a calibrated scale. This shows effective grip strength but does not provide an indication of how well the person can grasp an object or how grasp varies with activity. The instrumentation of tools to measure grip force has been explored previously in many specific applications such as golf grip [22] and children's handwriting [9]. The approach in these studies was to cover the handle of the tool in a force-sensing mat. Both studies used the Tekscan 9811 sensor, which consists of a 0.1 mm array of force-sensing cells that respond to force with a linear change in resistance. However, there are more traditional forms of sensor that are much cheaper and which could provide usable data, in the form of strain gauges. Murphy et al. [29] used strain gauges on the top and sides of a knife blade, near the handle, in order to measure forces applied during cutting. Memberg and Crago [27] designed a two-sided handle, with strain gauges on each side. This design was used as the basis for the initial prototype in this paper (see Figure 1).

McGorry [26] used a three-sided handle, with strain gauges on each side, and this was used as the basis for the design of the second prototype (Figure 2). These previous studies, regardless of the sensors used, concentrated on the design of the handle and the collection of data from the sensors. However, there was little attempt at using these data to interpret the activity beyond simple visual analysis. If these devices are to be useful in HCI, there is a requirement to develop techniques for classifying and recognising activity from instrumented tools. To this end, Kranz et al. [23] fitted a torque sensor between the handle and blade of a large chef's knife. The data collected from this sensor, combined with the data from load cells under a cutting board, could be used to characterise the cutting of different foods. This shows how the use of instrumented tools can provide data to support activity recognition. However, the Kranz et al. [23] study was concerned with the forces applied through the tool's blade rather than the interaction between hand and handle. It is, therefore, of interest to ask whether hand-handle interactions can be captured with sufficient reliability to allow actions to be classified. In this paper, our aim is to model the hand-handle interactions (through motion and grip), and it would be interesting to consider whether this approach could be comparable to that used by Kranz et al. [23]. Consequently, the testing procedure that is employed requires users to perform activities using an instrumented knife; the activities include cutting different foods and spreading butter.
CLASSIFYING ACTIONS
Modelling of human performance, on the basis of accelerometer data, has been performed by neural network analysis [24][39], through hidden Markov modelling [3][20][42], or through Gaussian mixture models [31]. Each approach has the potential to be computationally intensive and, in this project, the objective was to use a technique which could run on a low-power processor, and so would be less computationally demanding. This could involve the use of classifiers such as naive Bayes and C4.5 [5][33][36]. The features in the naive Bayes classifier were modelled using a Gaussian distribution.

Training data are used to calculate parameters that define a probability distribution for each feature in each class. These parameters form the classification model. Classification involves using a probability distribution function with the parameters of the model to calculate the probability of each feature of the unknown sample data. Naturally varying phenomena, such as human actions, tend to vary with a Gaussian distribution; hence this approach is deemed useful in human activity recognition. The C4.5 algorithm generates decision trees, where the leaves are the classifications, and the rest of the nodes above them are the features. The decision tree is then used to classify unknown data samples by traversing down the tree based on the value of each feature of the unknown sample. When a leaf of the tree is reached, the classification is found.

In contrast to naive Bayes, decision trees strongly model interdependence of features. The C4.5 algorithm is a well-developed algorithm for building trees which deals with issues such as over-fitting of data. The implementation of the C4.5 decision tree and naive Bayes classifiers is relatively simple, compared to the implementation of many other classifiers.
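As a concrete illustration of the two classifier families described above, the following sketch trains a Gaussian naive Bayes model and a decision tree on pre-computed feature vectors. It is an illustrative Python/scikit-learn sketch rather than the authors' C# implementation (scikit-learn's tree is CART-based, not C4.5), and the feature matrix and activity labels are hypothetical placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB        # Gaussian-distributed features, as described above
from sklearn.tree import DecisionTreeClassifier   # CART tree, standing in for the C4.5 tree

# Hypothetical data: one row per sub-sample, one column per feature (grip/accelerometer statistics).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.integers(0, 8, size=600)                  # eight activity classes

nb = GaussianNB().fit(X[:400], y[:400])           # learns per-class mean/variance of each feature
tree = DecisionTreeClassifier().fit(X[:400], y[:400])

print("naive Bayes accuracy:", nb.score(X[400:], y[400:]))
print("decision tree accuracy:", tree.score(X[400:], y[400:]))
```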
Software Development
The project required a number of software modules to be developed to capture data from the instrumented handles and perform classification. While there are various commercial packages that can do some of these tasks, it was felt that developing modules in-house would allow greater control over the manner in which the data were processed. All modules were written in C#, running under Windows .Net.

The first module was an application for real-time visualisation and recording of incoming data from the sensors (Figure 3). It displayed graphs for analysing the real-time output of the three accelerometer axes and the strain gauge output. Following the capture of data, the next step required segmentation to be done with relatively minimal effort from the user while maintaining a high level of precision (Figure 4). A second module allowed the recorded data to be visualised on a scrollable graph, and segmented and categorised simply by clicking on the graph. This allowed rapid removal of all irrelevant and null data. Before classification, two feature subset selection (FSS) methods were used to remove features calculated from samples that had low salience to the classification of the data. This is important because features that do not help characterisation can dramatically reduce the accuracy of recognition. There are various approaches to FSS that can be used. Although it is theoretically possible to test every possible subset against the classification algorithm, in practice this is impractical. This project uses over 100 features, resulting in over 10^100 calls to the classifier algorithm (which would take years of processing time on conventional computers). For this reason, FSS methods generally use some kind of search method, which involves gradually building up the feature set using heuristics to reduce the search space. The wrapper method [21] is one of the more powerful methods and involves the use of the classifier algorithm to help evaluate the best subset. The wrapper method usually gives superior results to filter methods (methods that do not use the classification algorithm) due to the fact that they produce results specifically suitable for the classification method [16].

Filter methods use similar search methods to the wrapper method, but they do not use the classification algorithm; instead they use a function that evaluates the merit of the features against the training data. These methods tend to be much faster than the wrapper method [16] and they also have the potential to be useful with many different classification methods. Correlation-based Feature Selection (CFS) is demonstrated by [16] and shown to give significant improvement when used with a naive Bayes classifier (this kind of classifier is also used in this work).

Both filter and wrapper methods can have different search methods applied to them. Kohavi and John [21] show that the best-first search method, which uses some simple heuristics, generally finds better subsets than a simple greedy search. Both wrapper and CFS filter FSS were individually tested in this project using best-first searches.
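The sketch below illustrates the wrapper idea in its simplest form: features are added one at a time, keeping whichever candidate most improves the cross-validated accuracy of the classifier. This is a greedy forward selection, a simplification of the best-first search actually used in the project, and it assumes a feature matrix X and label vector y such as those in the earlier sketch.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def forward_wrapper_fss(X, y, max_features=10, cv=3):
    """Greedy forward wrapper selection: repeatedly add the feature that most
    improves cross-validated accuracy of the wrapped classifier."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = [(np.mean(cross_val_score(GaussianNB(), X[:, selected + [f]], y, cv=cv)), f)
                  for f in remaining]
        score, feat = max(scores)
        if score <= best_score:
            break                      # no candidate improves the current subset; stop
        best_score = score
        selected.append(feat)
        remaining.remove(feat)
    return selected, best_score
```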
ACTIVITIES FOR CLASSIFICATION
The instrumented knife was used to perform a range of simple tasks that could commonly occur in domestic settings. These tasks involve basic actions on a variety of materials. The actions and materials included the following:

iv. cut_cucumber_flat (slice)
v. cut_orange (through peel)
vi. cut_toast
vii. get_butter
viii. spread_butter_toast

The test set-up was the same for all participants and activities; the participant, sitting at a table, was presented with items on a plate. These items (i.e., cheese, orange, cucumber, toast) were cut or otherwise acted upon using the instrumented knife.

Each activity started and ended with picking up and putting down the knife, with multiple cutting/spreading/slicing actions performed in between. All of the data that did not contain any action-related data (irrelevant leading and trailing data and long pauses) were removed, splitting some of the actions into multiple samples. All the samples were automatically segmented into uniformly sized subsamples suitable for feature creation. Each dataset used in the leave-one-out testing was acquired from a separate occasion of data collection.
Figure 5: Accelerometer plots showing two activities
As Figure 5 illustrates, the accelerometer showed some variation in terms of the broad type of activity. In comparison with the 'spreading' activity, the cutting activities generally returned very small degrees of motion, as demonstrated in the second part of the above diagram. In Figure 6, the activity of cutting a piece of toast is recorded. The uppermost plot shows variation in grip, as measured by the strain gauge, and the other three plots show movement in the three axes of the accelerometer. It can be seen that the cutting action is preceded by an increase in grip force, which is maintained until the cut has been made, and then the force reduces.

Combinations of the data from the different sensors were used to classify the actions. For example, Figure 7 clearly shows the separation of multiple activity classes by two accelerometer features; some of the classification was successfully carried out using only accelerometer features.
Recognition Accuracy
It was pointed out, in section 2.1, that the classifiers used in this study had been selected because of their relatively low computational overhead. This means that they might be expected to perform less well than more sophisticated methods. In terms of recognition performance, both the naive Bayes and the C4.5 classifiers achieved precision and recall above 60%, which indicates that these classifiers work. On average, the naive Bayes classifier performed better than the C4.5 decision tree classifier. Across both datasets, the errors were most common between the two cheese-cutting and two cucumber-cutting activities, indicating that these activities are similar. If these pairs of similar classes were regarded as the same, the naive Bayes classifier achieved precision and recall values of 90% and above. None of the feature reduction methods improved the results when using the C4.5 classifier. The wrapper FSS method was the most beneficial, but due to its extreme computational complexity, it could take prohibitively long to run when used on larger datasets. The CFS method only slightly improved precision and reduced recall, but the reduction of the features is still useful as it reduces the computation time. The most useful feature subsets found did not reject the features calculated from grip force, which shows that there was value in including the force sensor. However, some features calculated from the accelerometer were ranked as more valuable, indicating that a combination is required.
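To make the reported effect of merging confusable classes concrete, the sketch below computes macro-averaged precision and recall before and after merging pairs of similar activities. The true and predicted labels, and the choice of which class indices to merge, are hypothetical placeholders, not the study's results.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical true/predicted labels for eight activity classes.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 8, size=600)
y_pred = np.where(rng.random(600) < 0.7, y_true, rng.integers(0, 8, size=600))

prec, rec, _, _ = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)
print("macro precision/recall over all classes:", prec, rec)

# Merge pairs of easily confused classes (hypothetical indices, e.g. two cheese cuts, two cucumber cuts).
merge = {1: 0, 3: 2}
y_true_m = np.array([merge.get(int(c), int(c)) for c in y_true])
y_pred_m = np.array([merge.get(int(c), int(c)) for c in y_pred])
prec_m, rec_m, _, _ = precision_recall_fscore_support(y_true_m, y_pred_m, average="macro", zero_division=0)
print("after merging similar classes:", prec_m, rec_m)
```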
DISCUSSION
This paper demonstrates the development of a prototype instrumented handle. The use of off-the-shelf sensors means that the device is potentially cheap to produce, and the simple classifiers that have been implemented show that it is possible to determine which actions are being performed. While the recognition rates for individual actions vary from 60% to 90%+, it is likely that the handles would be used in conjunction with other sensors, e.g., RFID, which would provide additional data to support the classification of activity. If the electronics were reduced further (e.g., through the implementation of a MEMS solution) it would be possible to embed the sensors, processor and communications entirely in the handle of the tool. The challenge lies less in the implementation of the sensing components and more in the capture and processing of the data that are produced. In terms of HCI, the concept underlying this design is to provide a means of allowing people to use familiar, everyday tools and objects in their normal environments. This allows them to focus on the physical tasks that they would normally perform, with a computer being able to record specific actions. In a previous study of maintenance work [31], we demonstrated how the capture of activity concerning user movement, from sensors on the person, and RFID could be used both to generate sets of instructions for performing the tasks (in the form of training videos for uncommon tasks) and to log actions (which could be compared against a job-list or procedures). This paper shows how it might be possible to have the sensors fitted on the tools that a person uses, which we argue would be less intrusive than having the sensors on the person. In terms of rehabilitation, the ability to capture behaviours in the person's normal and familiar environment, in terms of re-learning simple domestic tasks, could prove an interesting and beneficial development. Having a means of capturing sensor data and classifying specific actions could provide an indication of changes in performance. In terms of maintenance, the ability to monitor tool use could not only provide a way of tracking performance (and comparing this against the standard procedures that need to be followed, particularly in safety-critical systems) but also a way to assess the condition and wear of tools or the level of ability of the tool user. This information could form part of a tool-replacement programme in preventative maintenance or an indication of the need for refresher training of personnel. It may be possible to recognise anticipatory grip force before the user starts different phases of the activity.

In familiar situations where an increase in load is predictable, e.g., when picking up an object, grip force is typically adjusted in phase with changes in load [12][13][41]. Studies such as [18] and [37] show that grip force adjustments when holding a tool, prior to a collision, anticipate the impact force in terms of velocity. These studies imply that people adjust their grip force, on the handles that they are holding, in anticipation of future actions or effects. This notion could be used to further refine the modelling processes, e.g., either in terms of structuring the activity into phases, or in terms of defining the sequences with which actions are performed. We could then compare the time spent in anticipation or action across different types of user or different conditions. This could, for example, provide a fine-grain measure to compare performance over time in order to see if the performance of the user has improved, perhaps as the result of practice or training.

The focus of this paper has been on the use of simple classification schemes to label particular tasks performed with instrumented handles. This can provide a record of when tasks were performed (by logging them in a time-stamped database), perhaps for monitoring maintenance work or for recording everyday behaviour in a home setting for rehabilitation. Further work can be applied beyond the simple classification to consider the performance of individual tasks.
Figure 3: The program for visualising and recording data

Each action was performed 15 times by each of the five volunteers who participated in the data collection phase. After processing, this gave some 600 samples of data for testing. The classifier algorithms were evaluated using hold-out testing; the dataset was split into three equal sets, and then every two-set combination was used as the training set, with the third used for testing. This makes sure that the test data have never been seen by the classifier; this is important because the point of a classification algorithm is to recognise unknown data. Although testing on the training data would show that the classifier is working, it would not tell you whether it has the ability to cope with real data with random variation. It is possible for a classifier to get perfect classification results on the training set but completely fail in a real example, because problems such as over-fitting of data would not be shown in testing against the training data.
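The three-set hold-out scheme described above can be expressed as a three-fold split in which each fold serves once as the test set while the other two form the training set. The sketch below is illustrative only: it uses a random shuffle, whereas in the study each of the three sets came from a separate data-collection occasion, and it assumes a feature matrix X and labels y as in the earlier sketches.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB

def three_set_holdout(X, y, seed=0):
    """Split the data into three equal sets; train on each pair of sets, test on the third."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=seed).split(X):
        clf = GaussianNB().fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return scores  # one accuracy per held-out set
```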
Figure 6: Plots of the strain gauge and 3 axes of the accelerometer
Figure 8: Performance of the classifiers

For example, by comparing the pattern of activity performed by an individual against a template representing 'good' performance, it is possible to compare experts against novices (in maintenance work) or to evaluate changes in performance (in rehabilitation). While these analyses are beyond the scope of this paper, the prototypes and data collection capabilities we have developed will support this as the next stage of development for the work.

"Activity Recognition in the Home using Simple and Ubiquitous Sensors," In Proceedings of the Second International Conference on Pervasive Computing (Pervasive 2004), 158-175
[37] Turrell, Y.N., Li, F.-X. and Wing, A.M., 1999, Grip force dynamics in the approach to a collision, Experimental Brain Research, 128, 86-91
[38] Underkoffler, J. and Ishii, H., 1999, Urp: a luminous-tangible workbench for urban planning and design, CHI '99, New York: ACM, 386-393
[39] Van Laerhoven, K., Aidoo, K. and Lowette, S., 2001, Real-time analysis of data from many sensors with neural networks, 5th International Symposium on Wearable Computers, Los Alamitos, CA: IEEE Computer Society, 115-123
[40] Weller, M.P., Do, E.Y-L. and Gross, M.D., 2008, Posey: instrumenting a poseable hub and strut construction toy, Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, New York: ACM, 39-46
[41] Westling, G. and Johansson, R.S., 1984, Factors influencing the force control during precision grip, Experimental Brain Research, 53, 277-284
[42] Westyn, T., Brashear, H., Atrash, A. and Starner, T., 2003, GeorgiaTech Gesture Toolkit: supporting experiments in gesture recognition, ICMI03 - 5th International Conference on Multimodal Interfaces, New York: ACM, 85-92
[43] Whitefield, A., 1986, Human factors aspects of pointing as an input technique in interactive computing systems, Applied Ergonomics, 17, 97-104
"year": 2010,
"sha1": "ad40bc7714d081e44221495a91d67d4037644d0e",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/92fb1435-882c-4fd9-a113-357b99814f3a/ScienceOpen/241_Parekh.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ad40bc7714d081e44221495a91d67d4037644d0e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Categorization of tinnitus listeners with a focus on cochlear synaptopathy
Tinnitus is a complex and not yet fully understood phenomenon. Often the treatments provided are effective only for subgroups of sufferers. We are presently not able to predict benefit with the currently available diagnostic tools and analysis methods. Being able to identify and specifically treat sub-categories of tinnitus would help develop and implement more targeted treatments with higher success rate. In this study we use a clustering analysis based on 17 predictors to cluster an audiologically homogeneous group of normal hearing participants, both with and without tinnitus. The predictors have been chosen to be either tinnitus-specific measures or measures that are thought to be connected to cochlear synaptopathy. Our aim was to identify a subgroup of participants with characteristics consistent with the current hypothesized impact of cochlear synaptopathy. Our results show that this approach can separate the listeners into different clusters. But not in all cases could the tinnitus sufferers be separated from the control group. Another challenge is the use of categorical measures which seem to dominate the importance analysis of the factors. The study showed that data-driven clustering of a homogeneous listener group based on a mixed set of experimental outcome measures is a promising tool for tinnitus sub-typing, with the caveat that sample sizes might need to be sufficiently high, and higher than in the present study, to keep a meaningful sample size after clustering.
Introduction
Subjective tinnitus, the perception of a phantom sound in the absence of an external stimulus, is a complex phenomenon whose causes and mechanisms are not yet completely understood. Due to a lack of standardization in assessing tinnitus and due to multiple definitions of the phenomenon, the real prevalence of tinnitus is largely unknown. Studies report prevalence ranging from a few percent to around 30% [1]. The American Tinnitus Association estimates that around 15% of Americans have subjective tinnitus, which, unlike objective tinnitus, is not related to blood flow or musculoskeletal mechanisms [2]. Despite this, tinnitus research is still very inconclusive, and treatment outcomes are, in many cases, only placebo effects [3]. One of the challenges is that much of the current literature on tinnitus sufferers includes listeners with tinnitus and a hearing loss [4], which potentially complicates the interpretation of the results. With respect to the mechanisms, hearing loss can roughly be divided into sensorineural hearing loss (SNHL) and conductive hearing loss (CHL). SNHL is mainly defined as reduced audibility due to damage to the inner ear or the auditory nerve and is attributed to aging or induced by noise. CHL is related to problems in the transmission of the sound either in the external ear (e.g. cerumen impaction) or in the middle ear (e.g. fluid presence). Regardless of the type of hearing loss, tinnitus can also be present. Moreover, hearing loss and tinnitus are also two possible symptoms of Meniere's disease and acoustic neuroma [5]. In addition, in human studies the otologic background of each subject can be difficult to assess. A characterization of tinnitus in homogeneous subgroups would be helpful to understand the underlying mechanisms behind the issue and to proceed to the development of focused treatments [3].
One approach to shed light on the mechanisms underlying tinnitus is to identify biomarkers that are independent of any subjective reporting of the listener. The results of different studies aimed to find biomarkers for tinnitus are inconsistent and often suggest opposing theories to explain the observed phenomena. However, a recurrent hypothesis associates tinnitus with an increased neural firing that could be either more centrally located or in more peripheral locations [6]. Non-invasive electrophysiological and imaging techniques like electroencephalography (EEG) [7], auditory brainstem response (ABR) [8][9][10], magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) [11,12] and positron emission tomography (PET) have been widely used to try to test this hypothesis and localize the source of the tinnitus.
One potential mechanism for reduced input into the brain is the deafferentation of auditory nerve fibers (cochlear synaptopathy, CS) in the inner ear. This might lead to a compensatory increase in neural gain, which itself could cause tinnitus [13]. Numerical simulations have shown that CS might lead to an overrepresentation of specific frequencies, which might be one mechanism underlying tinnitus [8]. One of the main assumptions underlying this approach is that a noise trauma triggers the subsequent degeneration of spiral ganglion neurons (SGN) (Fig 1). This degeneration will not immediately be visible in the audiometric thresholds, but will first become evident after a critical time and after a critical amount of SGN degeneration has been reached [14,15]. It has also recently been shown in human temporal bones that synaptopathy correlates highly with age, suggesting it is omnipresent also in normal hearing listeners [16].

Fig 1. The assumption is that a noise trauma triggers a temporary threshold shift, accompanied by cochlear synaptopathy. With time, the thereby reduced input into the brain stem leads to tinnitus and to spiral ganglion neuron (SGN) degeneration. Progressive effects of synaptopathy and SGN degeneration lead to high-frequency hearing loss under the assumption of higher vulnerability of the basal part of the cochlea. The same timeline might apply to age-related cochlear synaptopathy and be shortened by noise overexposure.
A recent study showed that tinnitus sufferers had lower middle ear muscle reflexes (MEMRs) compared to a control group [17]. This suggested that the presence of tinnitus might be related to the functioning of the efferent system which, in turn, might be connected to CS. The use of MEMR as a measure of synaptopathy-related tinnitus is particularly promising both for being fast and for being a physiological objective biomarker. However other studies showed no connection between MEMR and tinnitus [18].
Besides physiological measures, behavioral measures might contribute to identifying subgroups of mechanisms underlying tinnitus.
A combination of assessing tinnitus frequency, tinnitus loudness, psychophysical tuning curves (PTC) and tinnitus tuning curves (TTC) revealed differences in a group of tinnitus listeners [19]. This variability potentially provides information about the tuning properties of the system and thereby the potential mechanism underlying tinnitus. High frequency audiometry (HFA) has been selected as one proxy for the presence of synaptopathy, as animal models showed a higher prevalence of synaptopathy at tonotopic places corresponding to high frequencies [20]. Finally, adaptive categorical loudness scaling (ACALOS) was reported to provide indications about the presence of hyperacusis, which has been suggested to be an important confound to take into account [21].
To develop a screening procedure for tinnitus which allows tinnitus subtyping, the selection of effective indicators of tinnitus and synaptopathy is critical. At the same time, the screening should be testing different parts of the system while not being too extensive in time to avoid effects of fatigue.
The present study attempts to categorize a subclass of tinnitus sufferers with normal hearing thresholds. A screening procedure was developed, consisting of psychophysical and physiological measures hypothesized to be sensitive to the presence of tinnitus and cochlear synaptopathy.
The data collected were used as input for a clustering algorithm. The objectives of applying a clustering algorithm were to check whether the algorithm is able to discriminate between the tinnitus and control groups, and to evaluate whether application of this algorithm can identify subgroups within the group of tinnitus sufferers.
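As a rough illustration of this data-driven approach, the sketch below standardizes a mixed set of predictors and applies k-means. It is illustrative only: the study's actual clustering algorithm, number of clusters and predictor set are not specified here, the predictor names and values are hypothetical, and one-hot encoding is just one possible way to handle the categorical measures that the abstract notes can dominate the analysis.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans

# Hypothetical predictor table: a few continuous measures plus one categorical measure.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hfa_threshold": rng.normal(15, 8, 35),        # dB HL
    "memr_threshold": rng.normal(90, 6, 35),       # dB SPL
    "acalos_slope": rng.normal(0.4, 0.1, 35),
    "memr_present": rng.choice(["yes", "no"], 35),  # categorical predictor
})

pre = ColumnTransformer([
    ("num", StandardScaler(), ["hfa_threshold", "memr_threshold", "acalos_slope"]),
    ("cat", OneHotEncoder(), ["memr_present"]),
])
model = make_pipeline(pre, KMeans(n_clusters=3, n_init=10, random_state=0))
labels = model.fit_predict(df)
print(np.bincount(labels))  # cluster sizes
```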
Participants
Twenty listeners suffering from tinnitus participated in the experiment (tinnitus group, mean age 32.2 years, 6 females). An age- (+/-4 years, mean 0.73, std 1.34) and audiogram-matched (+/-10 dB at each frequency) group of 15 normal hearing listeners (mean age 30.2 years, 6 females) was included as control (control group). Inclusion criteria for the tinnitus group were a) non-pulsatile, subjective, persistent tinnitus (i.e. not transient, as related to a temporary threshold shift or an upper respiratory tract infection); b) audiometric thresholds not higher than 20 dB HL at the following audiometric frequencies: 125, 250, 1000, 2000, 3000, 4000, 6000, 8000 Hz. Each listener in the tinnitus group completed the Tinnitus Handicap Inventory (THI) questionnaire (mean: 22.2, min: 4, max: 54, standard deviation: 13.34). Otoscopy was performed to exclude obstruction by cerumen or infections. All participants provided informed consent and all experiments were approved by the Science-Ethics Committee for the Capital Region of Denmark (reference H-16036391).
Apparatus
The psychoacoustical measures (high frequency audiometry, psychophysical tuning curves, tinnitus likeness, adaptive categorical loudness scaling) were implemented as custom software in MATLAB [22]. Stimuli were generated digitally, converted into an analogue waveform (RME Fireface UCX soundcard), amplified (SPL Phonitor MINI) and presented through headphones (Sennheiser HDA 200). The calibration was done by equalizing the transfer function (amplitude and phase) of the transducer using an FIR filter of order 2047. The transfer function was measured using a sound level meter (NorSonic Nor139), an artificial ear (G.R.A.S. 43AA-S2, ear simulator kit according to IEC 60318-1 & -2, with pre-polarized microphone) and a calibrator (Brüel & Kjaer 4230 or 4231 sound calibrator). The wideband tympanometry (WBT) and MEMR latencies were measured with the Interacoustics Research Platform and the Titan Suite software (Interacoustics A/S). The Titan research platform was used to measure the WBT and MEMR, and the Titan Suite software was used to calculate the latencies.
Tinnitus likeness
Tinnitus likeness was measured for all listeners in the tinnitus group. The loudness and pitch of the individual listeners' tinnitus was measured using a tinnitus likeness (TL) procedure based on [23] using a) pure tones and b) 1/3rd octave wide narrow-band noise with Gaussian amplitude distribution as a probe. In a first step, the listeners were asked to adjust the level of a 1 kHz probe tone to match the loudness of the tinnitus. This was repeated three times. The average level of these three repetitions was used as the probe level for the frequency rating of the tinnitus in the second step. To match the pitch of the tinnitus, probes tones with frequencies between 0.125 kHz and 14 kHz were randomly presented for a duration of 2 s including 50 ms raised-cosine ramps at on-and offset. For each frequency, the listener was asked to rate the likeness of the probe with their tinnitus on a scale between 1 ("not similar at all to my tinnitus") to 10 ("sounds exactly like my tinnitus"). The probe was repeated in intervals with 2 s silent intervals, until the participant did not give an answer. The listener was also given the option "not heard". The maximum of the likeness spectrum was identified and used for loudness matching with tones in the third step: The listeners were asked to match the level of the probe centered at this frequency with the loudness of their tinnitus. This was repeated three times. The average level of these three repetitions was used in step four: To match the pitch of the tinnitus with the narrow-band noise probes, probes with the same center frequencies as in the second step were used.
Adaptive categorical loudness scaling
Loudness growth was measured using an adaptive categorical loudness scaling (ACALOS) procedure [24] for all listeners in the control group and the tinnitus group. The stimulus consisted of a 1/3rd octave wide noise centered at the test frequency (0.25 kHz, 0.5 kHz, 1 kHz, 2 kHz, 4 kHz, 6 kHz) with a duration of 1000 ms including 50 ms raised-cosine on- and offset ramps. The frequencies were presented in random order and the listeners were asked to rate each sound from "Not heard" (corresponding to a value of 1) to "Extremely loud" (corresponding to a value of 50). The maximum level of the stimulus was set to 95 dB HL. In the first trial, the level of the stimulus was set to 65 dB SPL. The level in the next trial of a run was chosen depending on the answer given to the previous sound presentation, and a continuous function was fit through the measured data points to obtain the frequency-specific loudness growth function. For details on the procedure, please see [24] and [25].
Wideband tympanometry and middle ear muscle reflex
Wideband tympanometry (WBT) and the MEMR were measured ipsilaterally using the Titan system (Interacoustics A/S) and custom software (Interacoustics Research Platform) implemented in [22] for all listeners in the control group and the tinnitus group. To measure the WBT, the ear canal pressure was swept between -300 daPa and 200 daPa in descending and ascending direction with a speed of 100 or 300 daPa/s. The data were consistent across the different pump speeds in the same listener when tested in a subset of the listeners. The individual tympanometric peak pressure (TPP) was defined as the peak of the WBT and used for the pressurization before measuring the MEMR. The middle ear muscle reflex (MEMR) was measured using a paradigm based on [26] consisting of a series of clicks and activators (click-activator-click). The activator in this case was 506 ms of white noise with onset and offset ramps of 2.3 ms, respectively, obtained with a Kaiser window. The level of the noise was varied between 75 dB SPL and 105 dB SPL in steps of 5 dB. The transducer was calibrated based on the peak-to-peak voltage of a 1 kHz tone. The strength of the reflex was quantified as the change in sound energy between the average of four stimuli with the elicitor and the baseline [27]. This will be referred to as DELTA absorbance. The threshold criterion for the presence of the MEMR was chosen arbitrarily to be a DELTA absorbance of 0.03. The latency of the MEMR was recorded with the Titan Suite software for pure-tone probes with a frequency of 500 Hz. The threshold used in this part of the experiment was calculated with the Titan Suite. The latency was calculated at a level of 10 dB above the reflex threshold, subject to an upper limit of 100 dB SPL.
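A simplified Python sketch of the DELTA absorbance metric described above; the exact aggregation across frequency and the sign conventions follow [27] and may differ from this illustration.

import numpy as np

def delta_absorbance(baseline, elicited):
    # baseline: absorbance spectrum of the baseline click, shape (n_freq,)
    # elicited: absorbance spectra of the four clicks after the elicitor, shape (4, n_freq)
    return elicited.mean(axis=0) - baseline   # per-frequency change in absorbance

def memr_present(delta, criterion=0.03):
    # Presence criterion used in the text: a DELTA absorbance of 0.03,
    # here applied to the mean absolute change across frequency (an assumption).
    return np.mean(np.abs(delta)) >= criterion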
Fast high-frequency audiometry, psychophysical tuning curves and tinnitus tuning curves
High-frequency audiometry (HFA), psychophysical tuning curves (PTC), and tinnitus tuning curves (TTC) were implemented using a Bayesian algorithm based on [28]. HFA and PTC were measured for all listeners in the control group and the tinnitus group. TTC were measured only for the listeners in the tinnitus group. The algorithm maximizes the information obtained from each estimate by selecting the measurement parameters that most reduce the uncertainty given the listeners' responses. The parameters used for HFA, PTC and TTC are summarized in Table 1.
The HFA was implemented as a 1-interval-2-alternative forced choice (1-I-2-AFC) procedure. One stimulus interval of the HFA contained a series of three tone bursts, each with a duration of 250 ms including 20 ms on- and offset ramps and an inter-pulse gap of 100 ms. The tone bursts had frequencies between 8 kHz and 16 kHz. The listeners were asked to indicate whether they heard the tone ("Yes") or not ("No"). The PTC was implemented as a 2-interval-2-alternative forced choice (2-I-2-AFC) procedure. For the PTC, the stimulus contained two noise bursts with a bandwidth of 0.5 octaves and Gaussian amplitude statistics. The noise bursts were generated in the frequency domain by setting everything outside the pass band to zero. The probe tones were centered at 1 kHz (for the PTC at 1 kHz, PTC1k) or at the tinnitus frequency identified by the tinnitus likeness measure (TL). The probe for the PTC at the tinnitus frequency (PTCtf) had a constant level corresponding to the loudness found in the tinnitus loudness matching. If that level was below threshold, the level was set to 5 dB above the value found in the loudness matching experiment. The same frequency and level used for a given listener from the tinnitus group were used for the matched listener in the control group. The probe for the PTC at 1 kHz had a level of 10 dB above the individual threshold in quiet at 1 kHz. The center frequency of the noise was varied between ±1 octave around the reference frequency. The listeners were asked whether the two sounds were different.
The TTC was implemented as a 1-interval-2-alternative forced choice (1-I-2-AFC) procedure. For the TTC, the stimulus was a single 0.5-octave-wide noise burst with Gaussian amplitude distribution and a duration of 2000 ms including 20 ms on- and offset ramps. The noise was centered at the tinnitus frequency. The listeners were asked to indicate whether the tinnitus was audible ("Yes") or not ("No") in the presence of the noise.
Clustering
The collected data were organized into a 35 x 18 matrix (participants x measures). The MEMR was measured twice, but only one of the two measurements was included in the processing. The decision was based on the WBT: the WBT had to be positive and have its maximum peak in the pressure range between -150 and 150 daPa. If both measurements met these criteria, the first measurement was used. In addition to the strength of the MEMR, the MEMR latency for a tone at 500 Hz was also included in the predictors. The PTCtfs were projected onto categorical outputs: U-shaped (USH), flat (F), other (O). The TTCs were projected onto the categories USH, F, O or non-maskable (NM). The categorization was conducted by three independent judges, who distributed the raw data into the provided categories. The ACALOS was quantified as a single number derived from the mean (across frequencies) of the difference between the most comfortable level and the hearing threshold. The tonal/noisy classification and the tone likeness of the tinnitus were retrieved from the tinnitus likeness test; the former indicates the type of tinnitus and the latter the highest likeness rating obtained with tones. The HFA was tested twice; the median across frequencies was calculated separately for each test, and the mean of the two medians was used as input for the clustering. Age, tinnitus type (monaural or binaural), and the Tinnitus Handicap Inventory (THI) score were obtained through direct communication and the questionnaire.
To determine the optimal number of clusters, the silhouette algorithm was used [29]. For each observation, the silhouette value compares the intra-cluster distance with the distance to the other clusters. The closer the silhouette value is to 1, the better the clustering; the closer it is to 0, the higher the uncertainty about which cluster the observation belongs to. In this analysis the Gower distance was used. The Gower distance, or similarity, can be calculated for numerical, categorical, logical or text data. In particular, the distance between two observations is the average of the feature-specific distances, each in the range [0, 1]. If a feature is numeric, its distance is the ratio between the difference of the two values and the maximum range for that feature. If a feature is categorical, its distance is 0 if the two values fall in the same category and 1 otherwise.
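A small Python sketch of a Gower distance matrix and the silhouette criterion, assuming the predictor table is a pandas DataFrame in which tinnitus-only measures are NaN for controls; this mirrors the description above but is not the authors' MATLAB code.

import numpy as np
import pandas as pd
from sklearn.metrics import silhouette_score

def gower_matrix(df, categorical):
    # Pairwise Gower distances: numeric features contribute |xi - xj| / range,
    # categorical features contribute 0 if equal and 1 otherwise; missing (NaN)
    # entries are skipped feature-wise, and each pair is averaged over the
    # features available for that pair.
    n = len(df)
    dist = np.zeros((n, n))
    weight = np.zeros((n, n))
    for col in df.columns:
        x = df[col].to_numpy()
        if col in categorical:
            valid = ~(pd.isna(x)[:, None] | pd.isna(x)[None, :])
            d = (x[:, None] != x[None, :]).astype(float)
        else:
            x = x.astype(float)
            rng = np.nanmax(x) - np.nanmin(x)
            valid = ~(np.isnan(x)[:, None] | np.isnan(x)[None, :])
            d = np.abs(x[:, None] - x[None, :]) / (rng if rng > 0 else 1.0)
        dist += np.where(valid, d, 0.0)
        weight += valid
    return dist / np.maximum(weight, 1)

# Silhouette-based choice of the number of clusters on the precomputed matrix,
# using the k-medoids sketch given after the next paragraph:
# for k in (2, 3, 4, 5):
#     labels = pam(gower, k)
#     print(k, silhouette_score(gower, labels, metric="precomputed"))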
After selecting the number of clusters, the Partitioning Around Medoids (PAM) algorithm was applied to identify the clusters. The algorithm identifies k objects, the medoids, defining the clusters, where k is the number of clusters. The medoids are always chosen among the elements to be clustered [30]. The strategy of the algorithm is to minimize the distance between each object to classify and one of the medoids. The distance between the objects to classify and the medoids is given by the Gower distance [31].
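A minimal k-medoids sketch in Python operating on the precomputed Gower distance matrix; it uses a simple alternating update rather than the full PAM build/swap phases, so it is only an approximation of the procedure described above.

import numpy as np

def pam(dist, k, n_iter=100, seed=0):
    # dist: precomputed (e.g. Gower) distance matrix, shape (n, n)
    # Returns cluster labels in 0..k-1; medoids are always actual observations.
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)        # assign each object to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            # new medoid = member minimizing the summed distance to the other members
            cost = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(cost)]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1)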
Random forest
Given the output of the clustering, the next step was to understand which features contributed the most to the classification. To select the most informative predictors, a Random Forest algorithm [32] available in MATLAB [22] was used. First, a set of 500 decision-tree weak learners was generated and trained for regression with a bagging method. Bagging, like boosting, trains multiple models with the same learning algorithm by randomly sampling data points to generate the training data sets for the weak learners [33]. We then used the ensemble regression function, which combines the weak learner models with the data to improve the prediction accuracy. To avoid selecting the settings manually, Bayesian optimization (BO) was used to choose the hyperparameters [34]. The parameters that were optimized are: the method (either bagging or LSBoost), the number of cycles for which the ensemble is trained, the learning rate, and the complexity. The importance of the different predictors was calculated for better interpretability of the clusters relative to the used dimensions. To maximize the reliability of the result, this routine was run 50 times and the importance factor was evaluated as the mean of all outputs.
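The sketch below illustrates the predictor-importance step with scikit-learn in Python; it is only an analogue of the MATLAB routine described above, omits the Bayesian hyperparameter optimization, and assumes that categorical predictors have already been numerically encoded.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def mean_predictor_importance(X, cluster_labels, n_runs=50):
    # X: (participants x predictors) matrix with categorical measures numerically encoded
    # cluster_labels: cluster index of each participant, used here as the regression target
    importance = np.zeros(X.shape[1])
    for run in range(n_runs):
        forest = RandomForestRegressor(n_estimators=500, random_state=run)
        forest.fit(X, cluster_labels)
        importance += forest.feature_importances_
    return importance / n_runs   # mean importance over the repetitions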
Tinnitus likeness
The results of the tinnitus likeness test were split into two categories: i) the rating of similarity of the participant's tinnitus with the tone that they picked as the most similar (TonLiken), and ii) the information whether the tinnitus was rated more similar to a tone or to noise (Ton/Nois). Out of 20 tinnitus participants, five (s03, s09, s12, s14, s20) rated their tinnitus to be more similar to noise, twelve rated it to be closer to a tone, and three (s19, s15, s08) rated the noise and the tone in the same way. The TonLiken was very variable across participants, ranging from 1.67 (s12) to 10 (s18) on a scale from 0 to 10. The participants whose ratings were below 6 were s01, s09, s12 and s20; these roughly overlap with the group which rated the noise to be more representative of their tinnitus. Participant s01, however, could not find any sound which resembled their tinnitus. Individual data can be found in S1, S4 and S5 Figs. The majority of the tinnitus group rated their tinnitus to be more similar to high-frequency sounds. Since the PTCtf test was performed by presenting a tone whose frequency was supposed to be the closest to the tinnitus, the information for tones is reported here. Nine out of 20 participants chose the tone most similar to their tinnitus to be at 14 kHz, two at 12.5 kHz, one at 10 kHz, four at 8 kHz, two at 6 kHz, one at 3 kHz and one at 500 Hz (Table 2).
Adaptive categorical loudness scaling
Individual data can be found in S6 and S7 Figs. The loudness growth functions were quantified using the lower slope, the upper slope, and the breakpoint of the fitting function. The ACALOS was only performed up to 95 dB HL. For some of the listeners, the loudness rating did not reach the maximum (50) at this stimulus level, either for all frequencies or for a subset of them. Hence, the mean of the difference between the most comfortable level (MCL) and the hearing threshold (HTL) across frequencies was used as an indicator of the presence of hyperacusis. The ACALOS measure ranged from 54.67 dB (s09) to 82.67 dB (c18). The median of the tinnitus group was 68.83 dB and the median of the control group was 75.33 dB. In line with the initial hypothesis, the most comfortable level was higher for the control group compared to the tinnitus group, but the difference was not statistically significant (Fig 2, panel C). Because no clear criteria exist, no binary classification (hyperacusis versus non-hyperacusis) was made before proceeding to the clustering, and the MCL-HTL measure was used as input for the clustering. Individual data can be found in S12 and S13 Figs.
Wideband tympanometry and middle ear muscle reflex
The MEMR strength was derived as the mean of the DELTA absorbance across frequency at each level for further analysis. In general, the MEMR showed a stronger response (grey) for high stimulus levels compared to low stimulus levels (red). The MEMR shows, however, high variability across listeners. For 105 dB SPL, for example, the minimum MEMR strength was 0.0324 (s13), while the maximum (not considering the outlier c20) was a factor of 20 higher at 0.6661 (c04). Similar patterns were found at the other levels. While some listeners had a strong response, others did not show any, even at higher levels (Fig 3, panel B). When comparing the tinnitus and control groups, there was no significant difference at any level (Wilcoxon rank sum test, p > 0.05). The MEMR was tested twice, but only the less noisy result was kept. The MEMR for s11 resulted in too noisy data and was therefore discarded.
One listener (s9) in the tinnitus group had a particularly high compliance of the tympanic membrane; therefore the WBT was tested with the clinical setting before testing the MEMR. The latency of the MEMR was calculated with the clinical setting of the Titan (Interacoustics A/S) at a frequency of 500 Hz. In the following analysis it will be referred to as 'RL500'. The results for this variable ranged from 35 ms to 239 ms (mean 127.31 ms, standard deviation 51.60 ms). There was no significant difference between the tinnitus and the control group (Wilcoxon rank sum test, p > 0.05).
Fast high-frequency audiometry, psychophysical tuning curves and tinnitus tuning curves
[Fragment of the Fig 4 caption: exemplary HFA results for two listeners, showing the 50% contour and the one-standard-deviation range; circles mark trials that were heard ("yes" to "Did you hear the sound?") and crosses trials that were not heard; the standard audiometry (125-8000 Hz) is shown in the inset; panels A and B illustrate the differences found within the group of audiometrically normal-hearing listeners (125 Hz to 8 kHz).]
Some listeners showed thresholds <15 dB HL over the whole frequency range up to 16 kHz, while others showed a strong decline in sensitivity at frequencies higher than 8 kHz. Across subjects, the HFA results showed high variability. The HFA was measured twice, and for each measurement the median across frequencies was calculated. The mean of the two medians was used for further analysis ('MMedian'). The intersubject standard deviation of this mean of the two medians was 13.99 dB (max: 41.42, min: -4.22). This was expected since the age of the participants ranged from 22 to 60 years. A strong correlation of the HFA with age was found (Spearman correlation, 0.74; p = e-7). The test-retest reliability of the median of the two measurements was quantified using the ICC measure [35,36]. The repeatability of the HFA results was strong (ICC = 0.989). The median of the HFA measure was 7.815 dB HL for tinnitus listeners and 3.25 dB HL for controls. The difference was not statistically significant (Wilcoxon rank sum test, p > 0.05).
Fig 5 shows the PTC at 1 kHz (PTC1k, panel A), the PTC at the tinnitus frequency (PTCtf, panel B) and the tinnitus tuning curve (TTC, panel C) for the same listener. Individual data can be found in S2, S3 and S14-S18 Figs. The vertical lines indicate the probe frequency for the PTCtf and TTC derived from the tinnitus (the output of the tinnitus likeness). This listener had a tuned response at the tinnitus frequency (panel B). For many subjects, however, the PTCtf was challenging, especially for those with tinnitus at very high frequencies (12.5 or 14 kHz). Therefore, the data above 12.5 kHz were considered unreliable and excluded from the analysis. The PTCtf results were classified as either Tuned or Non-Tuned (T, NT) by three independent judges. Out of 35 participants, 15 had a tuned response (s01, s03, s04, s05, s10, s12, s14, s16, s17, s19, s20, c14, c15, c16, c18) and the rest had a non-tuned response.
The TTC (panel C) was reported to be challenging especially by the participants describing their tinnitus as bilateral. All tests were conducted in one ear only; hence, the masker was only provided in one ear. It is likely that the difficulty in dissociating the percepts in the two ears made the test harder for this group of listeners. For each listener, the ear in which the tinnitus was subjectively considered stronger was selected. If the tinnitus was self-reported as equally bothering, the choice was random. Participants reporting their tinnitus as central (s01, s07, s20) were treated in the same way as those reporting it on both sides (s03, s04, s05, s06, s10, s11, s12, s13, s14, s17, s18, s19), taking into account a potentially different behaviour of the TTC. Fewer participants had monaural tinnitus (s02, s08, s09, s15, s16). This information was reported as 'Mono/Bi'. The TTC results were classified as Tuned, Non-Tuned or Non-Maskable (T, NT, NM) by three independent judges. Out of 20 tinnitus participants, 4 had non-maskable tinnitus (s01, s02, s10, s20), 6 had a tuned TTC (s07, s13, s14, s17, s18, s19) and the rest a non-tuned TTC.
Clustering
The results of the clustering differed depending on the inclusion or the exclusion of the tinnitus-specific predictors.
The silhouette algorithm resulted in similar scores for 2 and 5 clusters when all predictors were included. Hence, the clustering algorithm was performed with both 2 and 5 clusters. The results for 2 clusters showed a clear separation of the tinnitus and the control group (not shown). Of the tinnitus listeners, only subject s15 was assigned to the control cluster, despite suffering from tinnitus. The clear separation between the listener groups is anticipated because all predictors, including those exclusive to the tinnitus group, were available to cluster the data.
Although trivial, this outcome shows the feasibility of the suggested clustering approach for the provided input data. The results for 5 clusters (Fig 6) showed one outlier in the control group (c20), which is the only element of Cluster 5. One cluster (Cluster 4) contained all the remaining listeners from the control group plus one tinnitus participant (s15). The rest of the tinnitus group divided into three clusters (Clusters 1, 2 and 3). It is interesting to note how the tinnitus participants grouped into three different clusters. However, also in this case, since the clustering included tinnitus-specific measures (tinnitus likeness, TTC, Mono/Bi, Ton/Nois) that were absent for controls (NaN), the separation between controls and tinnitus participants was anticipated.
To avoid biases due to the tinnitus-specific predictors, an additional clustering analysis was performed excluding the measures exclusive to the tinnitus group. Based on the silhouette algorithm with the tinnitus-specific parameters excluded, the clustering algorithm was run with 3 clusters.
The result of this clustering is shown in Fig 7. One cluster contained only the outlier c20, as in the clustering results with 5 clusters (see Fig 6). The other two clusters contain both control and tinnitus participants in different ratios. The first cluster contains the majority of the controls (10 out of 15) and about half of the tinnitus participants (9 out of 20). The second cluster contains only a few controls (4 out of 15) and about half of the tinnitus participants (11 out of 20). Table 3 shows the means of the numerical predictors and the categorical predictors for each cluster.
For the 5-cluster analysis, clusters 1-3 (which contained all but one of the listeners from the tinnitus group) varied mainly in terms of HFA, MEMR95, RL500 and the PTCtf. Consistently, the on-average youngest listeners (Cl1) had the lowest HFA thresholds and the strongest MEMR95, while the on-average oldest listeners (Cl3) had the highest HFA thresholds and the weakest MEMR95. The longest RL500 latencies were found in Cl2, which contained exclusively listeners with non-tuned PTCtf. Clusters 4-5 contained all listeners from the control group plus one listener reporting tinnitus (s15). Cluster 5 contained only one listener (c20). All values for Cl4 were within the range of clusters 1-3, which indicates that it was the [...]. For the 3-cluster analysis, cluster 3 contained a single listener (c20). The main differences between clusters 1 and 2 were the PTCtf, with only tuned and only non-tuned characteristics, respectively. Listeners in Cl2 also had longer RL500 latencies compared to the listeners in Cl1. The main differences between clusters 1-2 and cluster 3 are the low HFA for c20 paired with a strong MEMR95. Overall, the listeners could be grouped into clearly separable clusters, but even for a relatively low number of clusters (3 and 5), the number of listeners in each group became relatively low. Hence, the small sample size might have contributed to the heterogeneity of the members (control/tinnitus) in each cluster.
Table 3. Resulting parameter values of the clustering analysis with five (upper rows) and three (lower rows) clusters. For each cluster (Cl, first column), the following parameters are shown: HFA mean (dB HL), MEMR mean, ACALOS mean, age mean, RL500 mean (in ms), PTCtf, PTCtf summary, THI mean, TonLik mean, T/N summary, Mono/Bi (M/B) summary, and TTC summary. In the 5-cluster analysis all measures were included, while in the 3-cluster analysis the tinnitus measures were excluded.
Random forest
The analysis of importance (Fig 8) showed that, for the five-cluster analysis, the most important predictors were: tonal or noisy tinnitus (0.1297), TTC (0.1133), and monaural or binaural tinnitus (0.1127). The next factors had an importance of around 0.03 and were more variable, despite representing the mean of 50 repetitions of the analysis. These factors included HFA (MMedian), MEMR at 95 dB (MEMR95), ACALOS and the likeness of the tinnitus to a tone (TonLiken). The three most important measures were in the tinnitus-only category, i.e. applicable only to the tinnitus group. Not surprisingly, these tinnitus-only measures easily separate the controls (which lack this information) from the tinnitus group. The TonLiken predictor is also tinnitus-related, but is not among the highest rated.
The importance analysis for the clustering without tinnitus-specific measures is shown in Fig 9. In this analysis, the PTCtf is the most important predictor (0.1127) and all the others, except the latency of the reflex, are on a comparable level (around 0.02). The maximum importance, however, is around 0.1-0.13 in both cases.
Discussion and conclusions
In the present study, various data were collected with the goal of having multiple sources of variability as input for a cluster analysis. The test groups were compiled to be as homogeneous as possible within each group (tinnitus and controls) and as comparable as possible between the groups, in order to reduce noise in the data. The analysis showed no significant difference between the tinnitus and the control group.
According to our hypothesis, a group of participants should have shown characteristics usually linked to cochlear synaptopathy (CS). Our hypotheses related to CS were: i) increased thresholds at high frequencies [20], ii) decreased activation of the MEMR [26], and iii) a lower MCL. Considering the clustering with five groups, the cluster showing these characteristics was cluster 3. Interestingly, cluster 3 was also the one with the highest average age (Table 3). The type of tinnitus was mainly tonal for the first two clusters and noise-like for the cluster whose characteristics were closest to CS. Similarly, evidence exists that noise-like tinnitus can be more common in older patients with higher degrees of hearing loss [37]. Also, the mean of the THI questionnaire responses was higher for cluster 3. However, the three tinnitus clusters were relatively small (Cluster 1 = 8 participants, Cluster 2 = 5 participants, Cluster 3 = 6 participants). Due to the small size of the clusters, no statistical analysis was run to assess the significance of these differences. We hypothesized that participants with noise-like tinnitus would show a different PTCtf compared to the tinnitus participants with tonal tinnitus; in fact, a tonal tinnitus percept could be a distraction when trying to distinguish a trial with the probe tone from one without. The results showed that clusters 1 and 3 were the ones with mainly tuned (T) PTCtf, which we can consider to be consistent with PTCs in normal-hearing listeners. Cluster 1 was composed of participants with mainly tonal tinnitus, while cluster 3 was mainly composed of listeners with noise-like tinnitus. Therefore, this result was not in agreement with the initial hypothesis. In addition, listeners in cluster 3 had a lower mean tinnitus frequency than those in cluster 1 (7.67 kHz for cluster 3, 11.06 kHz for cluster 1), which made the task even harder. Listeners reported the task to be more demanding at higher frequencies compared with the same test performed at 1 kHz. Clusters 2 and 3 are in line with the hypothesis that the PTCtf is more similar in shape to that of non-tinnitus sufferers for noise-like tinnitus than for tonal tinnitus; this is, however, inconsistent with cluster 1.
Regarding the monaural or binaural character of the tinnitus, the distribution was uniform across the three groups, so no group had a single value for this predictor. We hypothesized that the TTC would be mainly non-tuned (NT) or non-maskable (NM) for binaural tinnitus, but this was not the case in the data, although participants with binaural tinnitus reported great difficulty in disentangling the percepts in the two ears.
It needs to be underlined that in this experiment HFA, PTC and TTC were performed with a Bayesian procedure. This made it possible to decrease the duration of the experiment and to obtain a continuous result. However, the PTC at very high frequencies was perceived as a very hard task, giving noisy results. Therefore, we decided to handle the results of the last two tests (PTC, TTC) by grouping them into categories, and this might have introduced some bias into the clustering. Exemplary outcomes of the PTC are shown in Fig 10. Panel A indicates the unreliability of the data for listeners with a high-pitched tinnitus. Panel B illustrates a listener where the tuning of the PTC seemed to be affected by the presence of the tinnitus. Panel C illustrates a listener where the tinnitus frequency and the PTC seemed independent of each other. These data illustrate the strong variability in this outcome measure. An automated procedure with a higher number of categories might refine the clustering result, but would require a much higher number of listeners for the analysis.
In the second clustering we see, in fact, that the PTCtf dominates the clustering. However, the TTC and the PTC can give interesting insights into the maskability of the tinnitus and the tuning of the cochlea, respectively, and shed light on the hearing mechanisms connected to tinnitus. The second clustering, without tinnitus measures, is less informative, and the mean differences across groups are not very large (Table 3). However, what can be extrapolated from this result is that the tinnitus participants had no homogeneous and distinct features with respect to the controls. With more participants, we would have expected to see a mixed group of tinnitus and control listeners with characteristics leaning towards CS.
The reason for the different findings across studies might lie in differences in how the measures were assessed. One study [18] measured the MEMR using a threshold criterion of ". . . reduction in compliance of 0.02 ml or greater with appropriate morphology and no evidence of significant measurement artifact" [18], using a clinical paradigm with a tonal probe and coupler calibration. Another study [17] and the present study used a broadband elicitor and an in-situ calibration method. [18] and the present study used ipsilateral elicitors, while [17] used contralateral elicitors. The metric used in [18] was a threshold, while [17] and the present study used a metric reflecting the MEMR strength. In [18] and in the present study, listeners were assessed with audiometry up to 14 kHz (16 kHz in the present study), while [17] only screened the audiogram up to 8 kHz and included listeners with thresholds up to 30 dB HL in the tinnitus group. In the present study, no significant differences were found in MEMR strength between the tinnitus group and the control group at any level measured (Wilcoxon rank sum test, p > 0.05). The authors of [17] argue that their paradigm reduced the influence of the medial olivocochlear (MOC) reflex through the low rate of click presentation. The paradigm used in [18] and in the present study is, however, more consistent with the most sensitive measure of synaptopathy in the mouse [38]. Hence, it cannot be excluded that variability of the MEMR strength across listeners and mixed effects of the MOC reflex and high-frequency sensitivity loss had an effect on the results. While the studies [17] and [18] rely on pairwise comparisons of outcome measures, grouping the elements of the high-dimensional predictor space into clusters might be more robust against the introduced variability and more sensitive to consistencies in correlations across multiple predictors. This benefit might, however, be jeopardized by the need for larger sample sizes to allow proper statistical estimates of the distributions of the data.
There might be additional factors contributing to the formation of clusters in the present study. One factor not explicitly controlled for is the gender of the listeners. Previous studies on gender effects on comorbidities of tinnitus found no or only weak correlations between gender and suggested tinnitus-related comorbidities. A recent study evaluated tinnitus severity using the THI along with multiple indicators of psychiatric distress in a sample of 245 listeners [39]. They found no gender effect on tinnitus severity in their sample, but weak correlations between gender and a number of psychiatric conditions such as depression (females) or stress (males). Another study [40] found similar results in a sample of 107 listeners, with correlations only between gender and psychiatric comorbidities, but not with direct measures of tinnitus such as the THI. The sample size in the present study is considerably lower and does not allow for a statistical analysis of similar correlations. However, the numbers in the various clusters were rather balanced and indicate no trend towards an impact of gender on the results (11/4 and 11/8 male/female in the two clusters of the 3-cluster analysis containing more than one listener). It might be interesting to evaluate putative correlations between the measures used in the present study and psychiatric conditions by adding another dimension to the clustering analysis.
Subject 15 (s15) was an outlier in the 5-cluster analysis: it was assigned to a cluster of listeners from the control group, despite reporting tinnitus. Even though this concerns only an individual listener, the result warrants some speculation. Based on the results, it is challenging to link the assignment of this listener to similarities with the control group. But it might be interesting to speculate about the exclusion of this listener from the tinnitus-dominated clusters, despite its proximity in the projection onto the first two principal components. The listener reported a broadband tinnitus (TL, panel O), similar to a high number of other listeners in this group. The listener showed a relatively flat HFA (below 20 dB HL in the measured range), which was highly similar to a large number of other listeners in the group. Also the loudness perception (ACALOS) and the psychophysical tuning at 1 kHz (PTC1k) seemed similar to a high number of other listeners. This listener differed from the other listeners in the tinnitus group in terms of the MEMR. Compared to most other listeners, the MEMR of s15 (panel N) showed a relatively flat profile with little variation with intensity. At low frequencies, there is a tendency towards a rather constant delta absorbance, which is different from most other listeners: for the other listeners, the delta absorbance tends to be more negative between 0.5 kHz and 1 kHz compared to frequencies below 0.5 kHz. Only at the highest level measured (105 dB) was the MEMR more in agreement with that of the other listeners, with negative values up to about 1 kHz and positive values above 1 kHz. The psychophysical tuning curve at the tinnitus frequency (PTCtf) showed a relatively shallow U-shape compared to the other listeners. Also in the tinnitus tuning curve (TTC, panel M), this listener showed a concave shape with a maximum at the tinnitus frequency. The relatively unique shape of the MEMR indicates a reduced sound-level sensitivity of the mechanism modifying the sound transduction into the cochlea; this mechanism was only visible in the data at the highest level measured. The PTCtf and the TTC might suggest that there is an interaction of the acoustic probe with the tinnitus mechanism. In the PTCtf, the U-shape seems broadened due to the presence of the tinnitus. In the TTC, the concave shape and the peak at the tinnitus frequency might be a sign of entrainment of the tinnitus to the external probe tone. All these indications are derived from the data of a single listener. It might be interesting to confirm similarity across these measures in a group of listeners selected based on a single one of the measures. It needs to be noted, however, that the relatively small sample size might lead to a poor estimate of the high-dimensional distributions derived in the predictor space. This undersampling might hence lead to noise in the assignment to clusters and hence to the appearance of this outlier.
In conclusion, the use of clustering for the categorization of tinnitus participants revealed that tinnitus participants with normal hearing can still have very different outcomes in many tests, confirming, in other words, that tinnitus is a very diverse phenomenon in terms of causes, comorbidities and associated hearing disorders. However, the value of the silhouette function, indicating the validity of the classification, was not very high. This means that, to confirm the results of this preliminary study, a higher number of participants needs to be recruited. Finally, clustering with a larger data set, potentially including dimensions of psychiatric comorbidities, could be the starting point towards a refined categorization of tinnitus.
Supporting information
[Fragment of a supplementary figure caption, individual HFA results] (from s01 to s10). Each participant was tested twice, except participant s17, who was struggling with the task. Each label at the top right of each panel is composed of two parts: the first is an incremental index and the second is the label for the specific participant. The y-axis shows the hearing thresholds and the x-axis the frequencies (from 8 kHz up to 16 kHz). The central lighter line represents the threshold. The two darker lines represent the limits of the one-standard-deviation shift. Circles represent trials that have been heard, and crosses represent sounds that have not been heard.
[Fragment of a supplementary figure caption, individual PTC at 1 kHz, tinnitus group] The results for participants s17 and s19 are missing due to technical problems. In each panel, the y-axis shows the level of the masker and the x-axis the frequency of the masker. The target is a tone with a frequency of 1 kHz and a level 5 dB above the individual threshold. The central lighter line represents the PTC. The two darker lines represent the limits of the one-standard-deviation shift. The circles represent the trials in which the target was heard, and crosses represent the trials in which the target was not heard. (EPS)
S15 Fig. Individual results for the psychophysical tuning curve at 1 kHz (control group). The figure shows the individual results of the psychophysical tuning curve (PTC) at 1 kHz for all control participants (from c01 to c20, panels A-R). The numeric part of the label for each participant is equal to the label of the matched tinnitus participant (in hearing loss and age). In each panel, the y-axis shows the level of the masker and the x-axis the frequency of the masker. The target is a tone with a frequency of 1 kHz and a level 5 dB above the individual threshold. The central lighter line represents the PTC. The two darker lines represent the limits of the one-standard-deviation shift. The circles represent the trials in which the target was heard, and crosses represent the trials in which the target was not heard.
"year": 2022,
"sha1": "97109f2309c6c1504133883eadd77a498b3d3985",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "97109f2309c6c1504133883eadd77a498b3d3985",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207863195 | pes2o/s2orc | v3-fos-license | Existential Types for Relaxed Noninterference
Information-flow security type systems ensure confidentiality by enforcing noninterference: a program cannot leak private data to public channels. However, in practice, programs need to selectively declassify information about private data. Several approaches have provided a notion of relaxed noninterference supporting selective and expressive declassification while retaining a formal security property. The labels-as-functions approach provides relaxed noninterference by means of declassification policies expressed as functions. The labels-as-types approach expresses declassification policies using type abstraction and faceted types, a pair of types representing the secret and public facets of values. The original proposal of labels-as-types is formulated in an object-oriented setting where type abstraction is realized by subtyping. The object-oriented approach however suffers from limitations due to its receiver-centric paradigm. In this work, we consider an alternative approach to labels-as-types, applicable in non-object-oriented languages, which allows us to express advanced declassification policies, such as extrinsic policies, based on a different form of type abstraction: existential types. An existential type exposes abstract types and operations on these; we leverage this abstraction mechanism to express secrets that can be declassified using the provided operations. We formalize the approach in a core functional calculus with existential types, define existential relaxed noninterference, and prove that well-typed programs satisfy this form of type-based relaxed noninterference.
Introduction
However, noninterference is too stringent and real programs need to explicitly declassify some information about secret values. A simple mechanism to support explicit declassification is to add a declassify operator from secret to public expressions, as provided for instance in Jif [11]. However, the arbitrary use of this operator breaks formal guarantees about confidentiality. Providing a declassification mechanism while still enforcing a noninterference-like property is an active topic of research [5,6,8,9,13,16].
One interesting mechanism is the labels-as-functions approach of Li and Zdancewic [9], which supports declassification policies while ensuring relaxed noninterference. Instead of using security labels such as L and H that are taken from a security lattice of symbols, security labels are functions. These functions, called declassification policies, denote the intended computations to declassify values. For instance, the function λx.λy.x == y denotes the declassification policy: "the result of the comparison of the secret value x with the public value y can be declassified". The identity function denotes public values, while a constant function denotes secret values. Then, any use of a value that does not follow its declassification policy yields a secret result. The labels-as-functions approach is very expressive, but its main drawback is that label ordering relies on a semantic interpretation of functions and program equivalence, which is hard to realize in practice and rules out recursive declassification policies¹.
An alternative approach to labels-as-functions is labels-as-types, recently proposed by Cruz et al. [5]. The key idea is to exploit type abstraction to control how much of a value is open to declassification. The approach was originally developed in an object-oriented language, where type abstraction is realized by subtyping. A security type T ⊳ U is composed of two facets: the safety type T denotes the secret view of the value, and the declassification type U (such that T <: U ) specifies the public view, i.e. the methods that can be used to declassify a secret value. For instance, the type String ⊳ String denotes a public string value, i.e. all the methods of String are available for declassification, while the type String ⊳ ⊤ (where ⊤ is the empty interface type) denotes a secret String value, i.e. there is no method available to declassify information about the secret. Then, the interesting declassification policies are expressed with a type interface between String and ⊤; e.g. the type String ⊳ StringLen exposes the method length of String for declassification. With this type-based approach, label ordering is simplified to standard subtyping, which is a simple syntactic property, and naturally supports recursive declassification. Also, this type-based approach enforces a security property called type-based relaxed noninterference, which accounts for type-based declassification and provides a modular reasoning principle similar to standard noninterference.
We observe that developing type-based relaxed noninterference in an object-oriented setting, exploiting subtyping as the type abstraction mechanism, imposes some restrictions on the declassification policies that can be expressed. In particular, the fact that security types are of the form T ⊳ U, where the declassification type U is a supertype of the safety type T (a necessary constraint to ensure type safety), means that one cannot declassify properties that are extrinsic to (i.e. computed externally from) the secret value. For instance, because a typical String type does not feature an encrypt method, it is not possible to express the declassification policy that "the encrypted representation of the password is public".
In this paper, we explore an alternative approach to labels-as-types and relaxed noninterference, exploiting another well-known type abstraction mechanism: existential types. An existential type ∃X.T provides an abstract type X and an interface T to operate with values of the abstract type X. Then instances of the abstract type X are akin to secrets that can be declassified using the operations described by T . For instance, the existential type ∃X.[get : X, length : X → Int] makes it possible to obtain a (secret) value of type X with get, that only can be "declassified" with the length function to obtain a (public) integer.
Because existential types are the essence of abstraction mechanisms like abstract data types and modules [10], this work shows how the labels-as-types approach can be applied in non-object-oriented languages. The only required extension is the notion of faceted types, which are necessary to capture the natural separation between privileged observers (allowed to observe secret results) and public observers (i.e. the attacker, which can only observe public values)². Additionally, the existential approach is more expressive than the object-oriented one in that extrinsic declassification policies can naturally be encoded with existential types.
The contributions of this work are:
- We explore an alternative type abstraction mechanism to realize the labels-as-types approach to expressive declassification, retaining the practical aspect of using an existing mechanism (here, existential types), while supporting more expressive declassification policies (Section 2).
- We define a new version of type-based relaxed noninterference, called existential relaxed noninterference, which accounts for extrinsic declassification using existential types (Section 3).
- We capture the essence of the use of existential types for relaxed noninterference in a core functional language λ ∃ SEC (Section 4), and prove that its type system soundly enforces existential relaxed noninterference (Section 5).
Section 6 explains how the formal definitions apply by revisiting an example from Section 3. Section 7 discusses related work and Section 8 concludes.
Overview
We now explain how to use the type abstraction mechanism of existential types to denote secrets that can be selectively declassified. First, we give a quick overview of existential types, with their introduction and elimination forms. Next, we develop the intuitive connection between the type abstraction of standard existential types and security typing. Then, we show that to support computing with secrets, which is natural for information-flow control languages, we need to introduce faceted types.
Existential types
An existential type ∃X.T is a pair of an (abstract) type variable X and a type T where X is bound; typically T provides operations to create, transform and observe values of the abstract type X [10].
For instance, the type AccountStore below models a simplified user repository. It provides the password of a user at type X with the function userPass and a function verifyPass to check (observe) whether an arbitrary string value is equal to the password.
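The displayed definition of AccountStore does not survive in this version of the text; a plausible form, reconstructed from the implementation and the login example shown below, is:

AccountStore △= ∃X.[userPass : String → X, verifyPass : String → X → Bool]

Here X stands for the hidden representation of passwords: userPass maps a username to a password at the abstract type X, and verifyPass compares a public guess with such a password and returns a boolean.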
Values of an existential type ∃X.T take the form of a package that packs together the representation type for the abstract type X with an implementation v of the operations provided by T . One can think of packages as modules with signatures.
For instance, the package p △= pack(String, v) as AccountStore is a value of type AccountStore, where String is the representation type and v, defined below, is a record implementing the functions userPass and verifyPass:

v △= [userPass = λx : String. userPassFromDb(x), verifyPass = λx : String. λy : String. equal(x, y)]

Note that the implementation v directly uses the representation type String: e.g. userPass has type String → String and is implemented using a primitive function userPassFromDb : String → String to retrieve the user password from a database. Likewise, the implementation of verifyPass uses equality between its arguments of type String.
To use an existential type, we have to open the package (i.e. import the module) to get access to the implementation v, along with the abstract type that hides the actual representation type. The expression open(X, x) = p in e ′ opens the package p above, exposing the representation type abstractly as a type variable X, and the implementation as term variable x, within the scope of the body e ′ . Crucially, the expression e ′ has no access to the representation type String, therefore nothing can be done with a value of type X, beyond using it with the operations provided by AccountStore.
Type-based declassification policies with existential types
We can establish an analogy between existential types and selective declassification of secrets: an existential type ∃X.T exposes operations to obtain secret values, at the abstract type X, and the operations of T can be used to declassify these secrets.
For instance, AccountStore provides a secret string password with the function userPass, and the function verifyPass expresses the declassification policy: "the comparison of a secret password with a public string can be made public". With this point of view, concrete types such as Bool and String represent public values. A fully-secret value, i.e. a secret that is not declassified, can be modeled by an existential type without any observation function for the abstract type.
We can use the declassification policy modeled with AccountStore to implement a valid well-typed login functionality. The login function below is defined in a scope where the package p of type AccountStore is opened, providing the type name X for the abstract type and the variable store for the package implementation.
open(X, store) = p in ...
String login(String guess, String username) {
  if (store.verifyPass(guess, store.userPass(username))) ...
}

The login function first obtains the user's secret password of type X with store.userPass(username), and then passes this secret password (of type X) to the function verifyPass together with the public guess, to obtain a public boolean result. The above code makes a valid use of AccountStore and is therefore well-typed.
The type abstraction provided by AccountStore avoids leaking information accidentally. For instance, directly returning the secret password of type X is a type error, even though internally it is a string. Likewise, the expression length(store.userPass(username)) is ill-typed.
Note that, because declassification relies on the abstraction mechanism of existential types, we work under the assumption that the person who writes the security policy (the package implementation and the existential type) is responsible for not leaking the secret due to a bad implementation or specification (e.g. exposing the secret password through the identity function of type X → String).
Progressive declassification. The analogy of existential types as a mechanism to express declassification holds when one considers progressive declassification [5,9], which refers to the possibility of only declassifying information after a sequence of operations is performed. With existential types, we can express progressive declassification by constraining the creation of secrets based on other secrets.
Consider the following refinement of AccountStore, which supports the declassification policy "whether an authenticated user's salary is above $100,000": AccountStore provides extra abstract types Y and Z, denoting an authentication token (for a specific user) and a user salary, respectively. The type signatures enforce that, to obtain the user salary, the user must be authenticated: a value of type Y is needed to apply userSalary. Such a value can be obtained only after successful authentication: verifyPass now returns an Option[Y] value instead of a Bool value. Note that the salary itself is secret, since it has the abstract type Z. Finally, the isSixDigit function reveals whether a salary is above $100,000 by returning a public boolean result.
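The refined signature itself is not displayed in this version of the text; a plausible reconstruction, consistent with the description above, is:

AccountStore △= ∃X.∃Y.∃Z.[userPass : String → X, verifyPass : String → X → Option[Y], userSalary : Y → Z, isSixDigit : Z → Bool]

where X is the abstract password type, Y the authentication token obtained from a successful verifyPass, and Z the abstract salary type that can only be observed through isSixDigit.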
Observe how the use of abstract type variables allows the existential type to enforce sequencing among operations. Also, we can provide more declassification policies for a user salary Z, and can use the authentication token Y with more operations. An existential type is therefore an expressive means to capture rich declassification policies, including sequencing and alternation.
Computing with secrets
As we have seen, with standard existential types, values of an abstract type X must be eliminated with operations provided by the existential type. While so far the analogy between type abstraction with existential types and expressive declassification holds nicely, there are some obstacles.
First, with standard existential types, it is simply forbidden to compute with secrets. For instance, applying the function length : String → Int to a (secret) value of type X is a type error. However, information-flow type systems are more flexible: they support computing with secret values, as long as the computation itself is henceforth considered secret, e.g. the value it produces is itself secret [18]. Allowing secret computations is useful for privileged observers, which are authorized to see private values.
Faceted types were introduced to support this "dual mode" of informationflow type systems in the labels-as-types approach [5]. While that work is based on objects and subtyping, here we develop the notion of existential faceted types: faceted types of the form T @U , where T indicates the safety type used for the implementation and U the declassification type used for confidentiality. Figure 1 shows AccountStore with existential faceted types. Given a public string (T L denotes T @T ), userPass returns a value that is a string for the privileged observer, and a secret of type X for the public observer (i.e. the attacker).
When computing with a value of type String@X, there are now two options: either we use a function that expects a value of type String@X as argument, such as verifyPass, or we use a function that goes beyond declassification, such as length, and should therefore produce a fully private result. What type should such private results have? In order to avoid having to introduce a fresh type variable, we assume a fixed (unusable) type ⊤, and write Int H to denote Int@⊤. This supports computing with secrets as follows:

String H login(String L guess, String L username) {
  if (length(store.userPass(username)) == length(guess)) ...;
}

Instead of being ill-typed, length(store.userPass(username)) is well-typed at type Int H, so the function login can return a private result, e.g. a private string at type String H.
Public data as (declassifiable) secret
Information-flow type systems allow any value to be considered private. With existential faceted types, this feature is captured by a subtyping relation such that for any T , T @T <: T @⊤, and for any X, T @X <: T @⊤. Value flows that are justified by subtyping are safe from a confidentiality point of view. In particular, if a (declassifiable) value of type String@X is passed at type String H , it is henceforth fully private, disallowing any further declassification.
Additionally, in the presence of declassifiable secrets, of type T @X, one would also expect public values to be "upgraded" to declassifiable secrets. This requires the security subtyping relation to admit that, for any type T and type variable X, we have T L <: T @X.
Note that admitting such flows means that type variables in a declassification type position are more permissive than when they occur in a safety type position. For instance, isSixDigit can be applied to any public integer (of type Int L ), and not only to ones returned by userSalary. In contrast, userSalary can only be applied to a value opaquely obtained as a result of verifyPass. In effect, the representation of authentication tokens is still kept abstract, at type Y L (i.e. Y @Y ). This prevents clients from actually knowing how these tokens are implemented, preserving the benefits of standard existential types. Conversely, the salary and password expose their representation types (Int and String respectively), thereby enabling secret computation by clients.
Relaxed noninterference with existential types
Existential faceted types support a novel notion of type-based relaxed noninterference called existential relaxed noninterference (ERNI) that defines if a program with existential faceted types is secure. ERNI is based on type-based equivalences between values at existential faceted types. We formally define the notions of type-based equivalence and ERNI in Section 5, but here we provide an intuition for this security criterion and the associated reasoning. Let us first consider simple types, before looking at existential types.
Type-based relaxed noninterference. Two integers are equivalent at type Int@Int = Int L if they are syntactically equal, meaning that a public observer can distinguish any two different integers at type Int L. We can characterize the meaning of the faceted type Int L with the partial equivalence relation Eq Int = {(n, n) ∈ Int × Int}. Using this, two integers v 1 and v 2 are equivalent at type Int L if they are in the relation Eq Int, meaning they are syntactically equal.
Dually, the type Int@⊤ = Int H characterizes integer values that are indistinguishable for a public observer, therefore any two integers are equivalent at type Int H . Consequently, the meaning of the faceted type Int H is the total relation All Int = Int × Int that relates any two integers v 1 and v 2 .
With these base type-based equivalences, one can express the security property of functions, open terms, and programs with inputs as follows: a program p satisfies ERNI at an observation type S out if, given two input values that are equivalent at type S in , the executions of p with each value produce results that are equivalent at type S out . This modular reasoning principle is akin to standard noninterference [18] and type-based relaxed noninterference [5].
Intuitively, S in models the initial knowledge of the public observer about the (potentially secret) input, and S out denotes the final knowledge that the public observer has to distinguish results of the executions of the program p. The program p is secure if, given inputs from the same equivalence class of S in, it produces results in the same equivalence class of S out. Consider the program e = length(x) where x has type String H. The program e does not satisfy ERNI at type Int L, because given two strings "a" and "aa" that are equivalent at String H, i.e. ("a", "aa") ∈ All String, we obtain the results 1 and 2, which are not equivalent at type Int L, i.e. (1, 2) ∉ Eq Int. However, e is secure at type Int H.
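Glossing over the technical development of Section 5, the property sketched above can be summarized as follows, where [[S]] denotes the type-based equivalence relation induced by the security type S (this compact formulation is ours, not a definition taken verbatim from the formal development):

p satisfies ERNI at (S in, S out)  iff  for all inputs v 1, v 2 with (v 1, v 2) ∈ [[S in]], the results of running p on v 1 and on v 2 are related by [[S out]].

For instance, with S in = String H and S out = Int L, the program e = length(x) fails the property, as witnessed by the pair ("a", "aa") ∈ All String.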
Relaxed noninterference and existentials. When we introduce faceted types with type variables such as Int@X, we need to answer: what values are equivalent at type Int@X? Without stepping into technical details yet, let us say that the meaning of a type Int@X is an arbitrary partial equivalence relation R X ⊆ Int × Int, and two values v 1 and v 2 are equivalent at Int@X if they are in R X . Because X is an existentially-quantified variable, inside the package implementation that exports the type variable X, R X is known, but outside the package, i.e. for clients of a type Int@X, R X is completely abstract: a public observer that opens a package exporting the type variable X does not know anything about values of type Int@X.
For instance, consider again the program e = length(x) but assume that x now has type String@Y . Does e satisfy ERNI at type Int L ? Here, we need to know which relation R Y gives meaning to String@Y . Instead of picking only one relation R Y , ERNI quantifies over all possible relations R Y . This universal quantification over R Y corresponds to the standard type abstraction mechanism for abstract types (i.e. parametricity [15]). That is, the program e satisfies ERNI at type Int@Int if it is secure for all relations R Y ⊆ String × String. Then, to show that ERNI at type Int@Int does not hold for e, it suffices to exhibit a specific relation for which ERNI is violated. Take the relation R Y = {("a", "aa")}, and observe that length("a") ≠ length("aa").
Illustration. Finally, we give an intuition of how ERNI accounts for extrinsic declassification policies. We reuse the salary operations from AccountStore, simplifying the retrieval of the secret salary. The type SalaryPolicy provides a secret salary and a function isSixDigit to declassify the salary as before.
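The listing for this example is not reproduced here; as a hypothetical rendering, the following OCaml module plays the same role, with the abstract type salary standing in for the existentially quantified declassification variable and is_six_digit acting as the only declassifier (the names follow the paper's example, the OCaml encoding is our own).

module type SALARY_POLICY = sig
  type salary                        (* plays the role of Int@X *)
  val get_salary : unit -> salary    (* the secret salary *)
  val is_six_digit : salary -> bool  (* the declassifier for the salary *)
end

module SalaryPolicy : SALARY_POLICY = struct
  type salary = int                  (* representation known only inside *)
  let get_salary () = 123_456
  let is_six_digit s = s >= 100_000 && s <= 999_999
end

(* A client can only learn whether the salary has six digits; the integer
   itself remains abstract. *)
let observation = SalaryPolicy.(is_six_digit (get_salary ()))

Unlike in λ ∃ SEC , a plain OCaml abstract type prevents clients from computing on the representation at all; the faceted type Int@X is precisely what additionally allows clients to compute on the secret while keeping its declassification policy extrinsic.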
Formal semantics
We model existential faceted types in λ ∃ SEC , which is essentially the simply-typed lambda calculus augmented with the unit type, pair types, sum types, existential types, and faceted types. All the examples presented in Section 2 can thus be encoded in λ ∃ SEC using standard techniques. This section covers the syntax, static and dynamic semantics of λ ∃ SEC ; the formalization of existential relaxed noninterference and the security type soundness of λ ∃ SEC are presented in Section 5. In addition to the standard constructs of the simply-typed lambda calculus, expressions include injections inl e and inr e to introduce sum types, as well as a case construct to eliminate sums; finally, pack and open introduce and eliminate existential packages, respectively. Types T include function types S → S, primitive types P , the unit type 1, sum types S + S, pair types S × S, existential types ∃X.T , type variables X and the top type ⊤. A security type S is a faceted type T @U where T is the safety type and U is the declassification type.
Well-formedness of security types. We now comment on the rules for valid security types, i.e. facet-wise well-formed types. There are three general forms of security types: T L , T H and T @X. While there is no constraint on forming types T L and T H , such as Int L , X L and Int H , two considerations are needed for types such as Int@X.
The first consideration is that inside an existential type ∃X.T the type variable X, when used as a declassification type, must be uniquely associated to a concrete safety type. For instance, the existential type ∃X.(String@X → Int@X) is ill-formed, while ∃X.(Int@X → Int@X) and ∃X.(X@X → Int@X) are well-formed. For such well-formed types, we use the auxiliary function sftype(∃X.T ) to obtain the safety type associated to X; for instance sftype(∃X.(X@X → Int@X)) = Int (the function is undefined on ill-formed types).
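As an illustration of how sftype might be computed, the sketch below (in OCaml; the constructor names, the traversal and the treatment of the fully-abstract case where X only ever occurs as X@X are our assumptions, since the paper only specifies the function's behaviour on the examples above) collects every safety type paired with X in declassification position and requires it to be unique.

(* A minimal AST for safety types T and security types T@U. *)
type ty =
  | TInt | TString | TUnit | TTop
  | TVar of string
  | TFun of sec_ty * sec_ty
  | TPair of sec_ty * sec_ty
  | TSum of sec_ty * sec_ty
  | TExists of string * ty
and sec_ty = Faceted of ty * ty   (* Faceted (t, u) stands for t@u *)

(* Collect every safety type T' that occurs as T'@X for the variable x. *)
let rec constraints_ty x = function
  | TFun (s1, s2) | TPair (s1, s2) | TSum (s1, s2) ->
      constraints_sec x s1 @ constraints_sec x s2
  | TExists (y, t) when y <> x -> constraints_ty x t
  | _ -> []
and constraints_sec x (Faceted (t, u)) =
  (if u = TVar x then [t] else []) @ constraints_ty x t

(* sftype: the unique safety type associated with X in the body of ∃X.T. *)
let sftype x body =
  let concrete =
    List.sort_uniq compare
      (List.filter (fun t -> t <> TVar x) (constraints_ty x body))
  in
  match concrete with
  | []  -> Some (TVar x)  (* X only occurs as X@X: it stays fully abstract *)
  | [t] -> Some t         (* e.g. sftype (∃X. X@X -> Int@X) = Int *)
  | _   -> None           (* ill-formed, e.g. ∃X. String@X -> Int@X *)

let () =
  let body = TFun (Faceted (TVar "X", TVar "X"), Faceted (TInt, TVar "X")) in
  assert (sftype "X" body = Some TInt)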
The second consideration is when a client opens a package. The expression open(X, x) = e in e ′ binds the type variable X in e ′ , therefore the expression e ′ can declare security types of the form T ′ @X. However, for the declaration of the type T ′ @X to be valid, the safety type of the declassification type variable X must be T ′ . For instance, if the safety type of X is Int, the expression e ′ cannot declare security types such as String@X, otherwise computations over secrets could get stuck. The question is how to determine the safety type T ′ of X in e ′ . Crucially, the expression e necessarily has to be of type (∃X.T ) L , therefore we can obtain the safety type for X with sftype(∃X.T ). To keep track of the safety type for each type variable X, we use a type variable environment ∆ that maps type variables to types T (i.e. ∆ ::= • | ∆, X : T ). With the previous considerations in mind, the rules for well-formed security types are straightforward. In the rest of the paper, we use the judgment ∆ |= S to mean that the security type S is well-formed under the type variable environment ∆. A well-formed security type S is both facet-wise well-formed and closed with respect to type variables. We also use ∆ |= Γ to indicate that a type environment is well-formed, i.e. all types in Γ are well-formed. In the following, we assume well-formed security types and environments.
Static Semantics
Figure 3 presents the static semantics of λ ∃ SEC . Security typing relies on a subtyping judgment that validates secure information flows. The left-most rule justifies subtyping by reflexivity. The middle rule justifies subtyping between two security types with the same safety type when the declassification type of the right security type is ⊤. Finally, the right-most rule justifies subtyping between a public type T L and T @X.
As usual, the typing judgment ∆; Γ ⊢ e : S denotes that "the expression e has type S under the type variable environment ∆ and the type environment Γ ". The typing rules are mostly standard [14]. Here, we only discuss the special treatment of security types.
Rule (TVar) gives a variable the security type recorded in the type environment, and rule (TS) is the standard subtyping subsumption rule. Rules (TP), (TFun), (TPair), (TU), (TInl), (TInr) and (TPack) introduce primitive, function, pair, unit, sum and existential types, respectively. In particular, rule (TPack) requires the representation type of the package to be more precise than the safety type associated to X in the existential type, i.e. T ′ ⊑ sftype(∃X.T ). The precision judgment has only two rules: reflexivity, T ⊑ T , and any type is more precise than a type variable, T ⊑ X.
Rules (TApp), (TOp), (TFst), (TSnd), (TCase) and (TOpen) are elimination rules for function, primitive, pair, sum and existential types, respectively. When a secret is eliminated, the resulting computation must protect that secret. This is done with ⌈S ′ ⌉ S , which changes the declassification type of S ′ to ⊤ if the type S is not public.
Let us illustrate the use of rule (TApp):

  ∆; Γ ⊢ e 1 : S    S = (S 1 → S 2 )@U    ∆; Γ ⊢ e 2 : S 1
  ----------------------------------------------------------
  ∆; Γ ⊢ e 1 e 2 : ⌈S 2 ⌉ S

On the one hand, if the type of the function expression e 1 is (S 1 → S 2 ) L , i.e. it represents a public function, then the type of the function application is S 2 . On the other hand, if the function expression e 1 has type (S 1 → T 2 @U 2 )@X or (S 1 → T 2 @U 2 ) H , i.e. it represents a secret, then the function application has type T 2 @⊤. Rule (TOp) uses an auxiliary function Θ to obtain the signature of a primitive operator and ensures that the resulting type protects both operands with ⌈⌈P ′′ @P ′′ ⌉ P @U ⌉ P ′ @U ′ . Rules (TFst) and (TSnd) use the same principle to protect the projections of a pair. Rule (TCase) requires the discriminee to be of type (S 1 + S 2 )@U , and both branches must have the same type S ′ . Likewise, it protects the resulting computation with ⌈S ′ ⌉ S . Finally, rule (TOpen) applies to expressions of the form open(X, x) = e in e ′ , by typing the body expression e ′ in an extended type variable environment ∆, X : T ′ and a type environment Γ, x : T L . Two points are worth noticing. First, the association X : T ′ allows us to verify that security types of the form T ′ @X defined in the body expression e ′ are well-formed. Second, we make the well-formedness requirement explicit for the result type, ∆ |= S ′ , which implies that S ′ is facet-wise well-formed and closed under ∆, i.e. the type variable X cannot appear in S ′ .
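A minimal sketch of the protection operation ⌈S ′ ⌉ S and of rule (TApp) follows, reusing the type representation from the sftype sketch (repeated here so the fragment stands alone). It treats a security type as public exactly when its two facets coincide, and it omits subsumption, so it is an approximation of the actual rules rather than a faithful implementation.

(* Same representation of safety and security types as in the sftype sketch. *)
type ty =
  | TInt | TString | TUnit | TTop
  | TVar of string
  | TFun of sec_ty * sec_ty
  | TPair of sec_ty * sec_ty
  | TSum of sec_ty * sec_ty
  | TExists of string * ty
and sec_ty = Faceted of ty * ty   (* Faceted (t, u) stands for t@u *)

(* A security type T@U is treated as public when both facets coincide (T L). *)
let is_public (Faceted (t, u)) = (t = u)

(* protect s' s = ⌈s'⌉s : leave s' unchanged when s is public, otherwise
   raise the declassification facet of s' to ⊤. *)
let protect (Faceted (t', u')) s =
  if is_public s then Faceted (t', u') else Faceted (t', TTop)

(* Rule (TApp), without subsumption: the argument type must match the domain
   exactly, and the result type S2 is protected by the function's own type. *)
let t_app s_fun s_arg =
  match s_fun with
  | Faceted (TFun (s1, s2), _) when s_arg = s1 -> Some (protect s2 s_fun)
  | _ -> None

let () =
  let int_l = Faceted (TInt, TInt) in
  let f_pub = Faceted (TFun (int_l, int_l), TFun (int_l, int_l)) in
  let f_sec = Faceted (TFun (int_l, int_l), TTop) in
  (* Applying a public function yields S2 unchanged ... *)
  assert (t_app f_pub int_l = Some int_l);
  (* ... while applying a secret function yields the protected type Int@⊤. *)
  assert (t_app f_sec int_l = Some (Faceted (TInt, TTop)))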
Dynamic semantics and type safety
The execution of λ ∃ SEC expressions is defined with a standard call-by-value small-step dynamic semantics based on evaluation contexts (Figure 4). We abstract over the execution of primitive operators on primitive values using an auxiliary function θ. We define the predicate safe(e) to indicate that the evaluation of the expression e does not get stuck.
Theorem 1 (Syntactic type safety). ⊢ e : S =⇒ safe(e)
Having formally defined the language λ ∃ SEC , we now move to the main result of this paper: showing that λ ∃ SEC is sound from a security standpoint, i.e. its type system enforces existential relaxed noninterference.
Existential relaxed noninterference, formally
In Section 3 we gave an overview of existential relaxed noninterference (ERNI), explaining how it depends on type-based equivalences. To formally capture these type-based equivalences, we define a logical relation by induction on the structure of types. To account for type variables, we build upon prior work on logical relations for parametricity [2,15,17]. We then formally define ERNI on top of this logical relation. Finally, we prove that the type system of λ ∃ SEC enforces existential relaxed noninterference.
Logical relation for type-based equivalence
As explained in Section 3, two values v 1 and v 2 are equivalent at type S if they are in the partial equivalence relation denoted by S. To capture this, the logical relation (Figure 5) interprets types as sets of atoms, i.e. pairs of closed expressions. We use Atom [T 1 , T 2 ] to characterize the set of atoms with expressions of type T 1 and T 2 respectively. This definition appeals to a simply-typed judgment ∆; Γ ⊢ 1 e : T that does not consider the declassification type and is therefore completely standard. The use of this simple type system clearly separates the definition of secure programs from the enforcement mechanism, i.e. the security type system of Figure 3.
In Section 3 we explained what it means to be equivalent at type Int@X appealing to a relation on integers R X ⊆ Int × Int . To formally characterize the set of valid relations R X for types T 1 and T 2 we use the definition Rel [T 1 , T 2 ]. To keep track of the relation associated to a type variable, most definitions are indexed by an environment ρ that maps type variables X to triplets (T 1 , T 2 , R), where T 1 and T 2 are two representation types of X and R is a relation on closed values of type T 1 and T 2 (i.e. ρ ::= ∅ | ρ [X → (T 1 , T 2 , R)]). We will explain later where these types T 1 and T 2 come from. We write ρ 1 (U ) (resp. ρ 2 (U )) to replace all type variables of ρ in types with the associated type T 1 (resp. T 2 ), and ρ R (X) to retrieve the relation R of a type variable X in ρ. Figure 5 defines the value interpretation of a type T , denoted V T ρ, then the value interpretation of a security type S, denoted V S ρ, and finally the expression interpretation of a type S, denoted C S ρ.
Interpreting concrete types. We first explain the definitions that do not involve types variables. V T @⊤ ρ (resp. V T @T ρ) characterizes when values of T are indistinguishable (resp. distinguishable) for the public observer. V T @⊤ ρ is defined as Atom [ρ 1 (T ), ρ 2 (T )] indicating that any two values of type T are equivalent at type T @⊤. Note that this also includes values of type X@⊤.
Two public values are equivalent at a security type T @T if they are equivalent at their safety type, i.e. V T L ρ = V T ρ. The definition V P ρ relates syntactically-equal primitive values at type P . Two functions are equivalent at type S 1 → S 2 , denoted V S 1 → S 2 ρ, if given equivalent arguments at type S 1 , their applications are equivalent expressions at type S 2 . Two pairs are equivalent at type S 1 × S 2 if they are component-wise equivalent. Two values are equivalent at a sum type S 1 + S 2 if they are both left injections (resp. both right injections) of values that are equivalent at S 1 (resp. S 2 ). Finally, two expressions e 1 and e 2 are equivalent at type T @U , denoted C T @U ρ, if they both reduce to values v 1 and v 2 respectively and these values are related at type T @U . (Note that all well-typed λ ∃ SEC expressions terminate.)
Interpreting existential types. We now explain the value interpretation of existential types ∃X.T , type variables X and security types of the form T @X, which all involve type variables. Two public package expressions pack(T 1 , v 1 ) as ∃X.T and pack(T 2 , v 2 ) as ∃X.T are equivalent at type ∃X.T , denoted V ∃X.T ρ, if there exists a relation R on the representation types T 1 and T 2 that makes the package implementations v 1 and v 2 equivalent at type T under the environment ρ extended with [X → (T 1 , T 2 , R)]. Note that if the existential type ∃X.T has a concrete safety type T ′ (not a type variable) for X, then both T 1 and T 2 necessarily have to be equal to T ′ . Otherwise, T 1 and T 2 are arbitrary types. Two values are related at type X, denoted V X ρ, if they are in the relational interpretation R associated to X (retrieved with ρ R (X)). Two values are related at type T @X, denoted V T @X ρ, if they are in ρ R (X), or if they are publicly-equivalent values of type T (i.e. a package can accept public values of type T where values of T @X are expected).
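For reference, the cases just described can be summarised as follows (a partial reconstruction in the notation of the paper, writing V⟦·⟧ and C⟦·⟧ for the value and expression interpretations; the complete definition, including functions, pairs, sums and existential packages, is the one given in Figure 5):

  V⟦T@⊤⟧ ρ = Atom[ρ 1 (T), ρ 2 (T)]
  V⟦T@T⟧ ρ = V⟦T⟧ ρ                  (public values: equivalence at the safety type)
  V⟦P⟧ ρ   = { (v, v) | v a value of primitive type P }
  V⟦X⟧ ρ   = ρ R (X)
  V⟦T@X⟧ ρ = ρ R (X) ∪ V⟦T@T⟧ ρ
  C⟦T@U⟧ ρ = { (e 1 , e 2 ) | e 1 and e 2 reduce to values v 1 and v 2 with (v 1 , v 2 ) ∈ V⟦T@U⟧ ρ }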
We illustrate these formal type-based equivalences in Section 6, after formally defining existential relaxed noninterference and proving security type soundness.
Existential relaxed noninterference
As illustrated in Section 3, ERNI is a modular property that accounts for open expressions over both variables and type variables. To account for open expressions, we first need to define the relational interpretation of a type environment Γ and of a type variable environment ∆. The type environment interpretation G Γ ρ is standard; it characterizes when two value substitutions γ 1 and γ 2 are equivalent. A value substitution γ is a mapping from variables to closed values (i.e. γ ::= ∅ | γ [x → v]). Two value substitutions are equivalent if, for all associations x : S in Γ , the values mapped to x in γ 1 and γ 2 are equivalent at S. Finally, the interpretation of a type variable environment ∆, denoted D ∆ , is a set of type substitutions ρ with the same domain as ∆. For each type variable X bound to T in ∆, such a ρ maps X to triples (T 1 , T 2 , R), where T 1 and T 2 are closed types that are more precise than T , and R is a valid relation for the types T 1 and T 2 . We write ρ 1 (e) (resp. ρ 2 (e)) to replace all type variables of ρ in terms with their associated type T 1 (resp. T 2 ). We can now formally define ERNI. An expression e satisfies existential relaxed noninterference for a type variable environment ∆ and a type environment Γ at type S, denoted ERNI(∆, Γ, e, S), if, given a type substitution ρ satisfying ∆ and two value substitutions γ 1 and γ 2 that are equivalent at Γ , applying the substitutions produces equivalent expressions at type S.
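Spelled out in the notation above, and modulo the order in which the two substitutions are applied, this definition reads (our reconstruction of the formal statement):

  ERNI(∆, Γ, e, S)  iff  for all ρ ∈ D⟦∆⟧ and all (γ 1 , γ 2 ) ∈ G⟦Γ⟧ ρ,
                         (ρ 1 (γ 1 (e)), ρ 2 (γ 2 (e))) ∈ C⟦S⟧ ρ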
Security type soundness
Instead of directly proving that the type system of Figure 3 implies existential relaxed noninterference for all well-typed terms, we prove it through the standard definition of logically-related open terms [5]:
Lemma 1 (Self logical relation implies ERNI). ∆; Γ ⊢ e ≈ e : S =⇒ ERNI(∆, Γ, e, S)
The proof of security type soundness relies on the Fundamental Property of the logical relation: a well-typed λ ∃ SEC term is related to itself.
Theorem 3 (Security type soundness). ∆; Γ ⊢ e : S =⇒ ERNI(∆, Γ, e, S) Proof. By induction on the typing derivation of e. Following Ahmed [2], we define a compatibility lemma for each typing rule; then, each case of the induction directly follows from the corresponding compatibility lemma.
Related Work
We have already extensively discussed the relation to the original formulation of the labels-as-types approach in an object-oriented setting [5], itself inspired by the work on declassification policies (labels-as-functions) of Li and Zdancewic [9]. Formulating type-based declassification with existential types shows how to exploit another type abstraction mechanism that is found in non-object-oriented languages, with abstract data types and modules. Also, existential types support extrinsic declassification policies, which are not expressible in the receiver-centric approach of objects. For instance, the AccountStore example of Section 2 is not supported by design in the object-oriented approach.
The extrinsic declassification policies supported by our approach are closely related to trusted declassification [8], where declassification is globally defined, associating principals that own secrets with trusted (external) methods that can declassify these secrets. In our approach, the relation between secrets and declassifiers is not globally defined, but is local to an existential type and its usage. In both approaches the implementations of declassifiers have a privileged view of the secrets.
Bowman and Ahmed [3] present a translation of noninterference into parametricity with a compiler from the Dependency Core Calculus (DCC) [1] to System Fω. In a recent (as yet unpublished) article, Ngo et al. [13] extend this work to support translating declassification policies, inspired by prior work on type-based relaxed noninterference [5]. They first provide a translation into abstract types of the polymorphic lambda calculus [15], and then into signatures of a module calculus [4]. While that work and ours encode declassification policies via existential types (module signatures), we focus on providing a surface language for information flow control with type-based declassification. In particular, their translated programs do not support computing with secrets, which is enabled in both this work and the original work of Cruz et al. [5] thanks to faceted types. Additionally, they only model first-order secrets (integers), while our modular reasoning principle seamlessly accommodates higher-order secrets.
In another very recent piece of work, Cruz and Tanter [6] extend the object-oriented approach to type-based relaxed noninterference with parametric polymorphism, thereby supporting polymorphic declassification policies. Polymorphic declassification for object types is achieved with type variables at the method signature level, which supports the specification of polymorphic policies of the form T ⊳ X. Existential types are closely related to universal types. In particular, the client of a package that exports a type variable X must be polymorphic with respect to X; hence our work supports a form of declassification polymorphism in the client code. It would be interesting to extend λ ∃ SEC with universal types in order to study the interaction of both abstraction mechanisms in a standard functional setting. Finally, because of the receiver-centric perspective of objects, they have to resort to ad-hoc polymorphism to properly account for primitive types. Here, primitive types do not require any special treatment for declassification polymorphism, because of our extrinsic approach to declassification.
The idea of using the abstraction mechanism of modules to express a form of declassification can also be found in the work of Nanevski et al. [12] on Relational Hoare Type Theory (RHTT). RHTT is formulated with a monadic security type constructor STsec A(p, q), where p is a pre-condition on the heap, and q is a postcondition relating output values, input heaps and output heaps. Thanks to the expressive power of the underlying dependent type theory, preconditions and postconditions can characterize very precise declassification policies. The price to pay is that proofs of noninterference have to be provided explicitly as proof terms (or discharged via tactics or other means when possible), while our less expressive approach is a simple, non-dependent type system. Finding the right balance between the expressiveness and the complexity of the typing discipline to express security policies is an active subject of research.
Conclusion
We present a novel approach to type-based relaxed noninterference, based on existential types as the underlying type abstraction mechanism. In contrast to the object-oriented, subtyping-based approach, the existential approach naturally supports external declassification policies. This work shows that the general approach of faceted security types for expressive declassification can be applied in non-object-oriented languages that support abstract data types or modules. As such, it represents a step towards providing a practical realization of information-flow security typing that accounts for controlled and expressive declassification with a modular reasoning principle about security.
An immediate avenue for future work that would be crucial in practice is to develop type inference for declassification types, which should reduce to standard type inference [7]. Finally, a particularly interesting perspective is to study the combination of the existential approach with the object-oriented approach, thereby bridging the gap towards a practical implementation in a full-fledged programming language like Scala that features all these type abstraction mechanisms. | 2019-11-13T02:00:47.856Z | 2019-11-11T00:00:00.000 | {
"year": 2019,
"sha1": "c7d01b76de036fa3058381b21f4efd577bff0c2a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.04560",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c7d01b76de036fa3058381b21f4efd577bff0c2a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
73680392 | pes2o/s2orc | v3-fos-license | The Socio-Economic Impact of Controlled and Notifiable Wildlife Diseases in the Southern African Development Community (SADC) States of Africa
Introduction
Over the past two decades, wildlife-based ecotourism has rapidly expanded on a global scale and remains an important source of foreign revenue for many developing countries. Almost all member states of the Southern African Development Community (SADC) have an income stream that is derived primarily from ecotourism [1]. It therefore becomes imperative to thoroughly assess the sustainability of the wildlife industry, particularly in these developing countries. The wide spectrum of disease (endemic and/or exotic) that exists within wildlife impedes export and trade and thus contributes toward crippling the rural economies of many African countries. For the purpose of this review, the socio-economic impact of two diseases, i.e. Foot-and-mouth disease (FMD), an endemic disease of cloven-hoofed animals, and Avian influenza virus (AIV), a zoonotic disease of birds, will be discussed. A controlled animal disease is any animal disease in respect of which any general or particular control measure has been prescribed, while for a notifiable disease it is required by law to report the occurrence or identification of such disease to responsible government authorities. FMD and AIV fall in both categories of classification in all SADC member states and are listed as such within the OIE guidelines [2]. Although these two diseases do not enjoy a monopoly among wildlife diseases, they are relevant examples to illustrate the burden that wildlife diseases can impose on communities if not controlled appropriately. It is hoped that by discussing the clinical, biological and socio-economic impact of these diseases, inferences and parallels on similar infectious diseases affecting both wild and domestic animal hosts can be drawn. A comprehensive list of wildlife/domestic host diseases with a potential to disrupt animal health patterns and pose a threat as emerging diseases in both humans and animals is discussed in other work [3,4].
It is worth noting that legal frameworks and responsibilities for wildlife disease investigation and reporting are not clear in most African countries [5,6]. An extensive list of legislation passed in several SADC states, including Botswana, Mozambique, Namibia, South Africa and Zimbabwe, has been comprehensively discussed and listed in the work by Bekker et al [7]. From the list one can deduce that different legislation and policies exist for different countries and that a coordinated effort within the region does not necessarily occur. The impact of disease outbreaks within SADC states such as the Democratic Republic of Congo (Table 1, [8]) is an indication that wildlife disease control policies either do not exist or are inadequately implemented within certain regions. Furthermore, the incidence of disease outbreaks and the reduction in the number of animals destroyed and/or slaughtered as a result thereof (Table 2, [8]) indicate marginal success in the implementation of control policies in countries such as South Africa, Zimbabwe and Botswana. The fact that the number of outbreaks recorded between 2007 and 2010 remains constant highlights the difficulty in eradicating and/or adequately controlling such diseases, especially in the absence of a common regional policy. Recent debates [9] on free agricultural trade within the SADC region indicate a willingness to coordinate and encourage agricultural trade within the region. We therefore envisage the possibility of a coordinated disease control policy applicable to the entire SADC region, which would result in the social and economic upliftment of the communities involved. However, disease control within wildlife and the allocation of responsible authority are currently ill defined at best in the majority of SADC countries. Further complexity is added by the fact that free-ranging wildlife do not easily lend themselves to interventions such as disease surveillance and vaccination. The result is a lack of active research in the field of wildlife disease diagnostics; hence tests and vaccines that are developed for domestic animals have mostly not been tested in wildlife and are therefore not necessarily effective for wildlife disease control. Wildlife therefore remains an effective reservoir for transboundary diseases, which affect not only other wildlife species but also domesticated animals, leading to massive socio-economic losses in the countries concerned.
A high proportion of African countries have game reserves coupled with pastoral, nomadic methods of livestock farming. In countries where wildlife boundaries are clearly demarcated, such as South Africa, there is still a high degree of activity between wildlife and livestock at this boundary interface. Alternative methods are thus needed to control the spread of disease between wildlife and domesticated animals [10]. Failure to implement effective control strategies may result in severe economic losses and social disruption, as the livelihoods of most rural pastoral communities are reliant on the wellbeing of their livestock. Further damage to the economy could result from the loss of valuable wildlife due to disease, leading to reduced revenue from depressed tourism patronage. Therefore, the activity, as well as the intensity of activity, at the wildlife/livestock interface requires innovative control strategies that will permit the country concerned to market its livestock, wildlife and animal products profitably. This includes a greater understanding of the viral disease profiles within wildlife stocks as well as the proper implementation of prevention and control mechanisms that are adapted to the region concerned.
As an example, SADC countries, with the exception of South Africa, Botswana and Namibia, are generally endemic for FMD. As a result there is an unmitigated, permanent ban on the export of most livestock commodities from Southern Africa, and from African countries in general, to lucrative European and Asian markets that are free of the disease. FMD is an endemic disease in Africa that is generally maintained in free-ranging wildlife populations, particularly buffalo. Avian influenza, on the other hand, can be regarded as an exotic disease, as it was introduced into local poultry largely through migratory birds and ducks [11,12]. The costs incurred from both diseases can have an undesirable impact on livestock populations and agriculture. Additional costs are a consequence of mitigation or control efforts, losses in trade and other revenues such as tourism, as well as impacts derived from the emergence of pandemics, as in the case of a zoonotic outbreak of avian influenza. Visible direct costs include death in young stock, reduced livestock growth, reduced milk production and abortion. Some of the invisible costs include reduced fertility, which necessitates larger numbers of breeding animals and thus translates to higher production costs, and the costs incurred in eradicating the disease from animals. Drugs, labour, vaccines, surveillance and forgone revenues are difficult to estimate for both AIV and FMD, as these are dependent on livestock density and the efficiency of the mitigation measures implemented by the responsible authorities [12,13].
Wildlife and Transboundary Diseases
Transboundary animal diseases are diseases that cause damage or destruction to farmers' property, may threaten food security, injure rural economies, and potentially disrupt trade relations. Viral diseases that include, amongst others, Foot and Mouth Disease (FMD), African Swine Fever (ASF) and Avian Influenza (AI) periodically affect the South African commercial agriculture sector and the SADC region in general. The absence of suitable disease surveillance and monitoring technologies, coupled with inadequate pen-side diagnostic facilities, is a major obstacle in controlling these important agricultural diseases [14]. In the SADC context, the absence of efficient control and prevention strategies at the borders of each member state enables the rampant movement of both animals and their associated diseases across geographical regions. This further complicates the epidemiology and eradication of diseases such as FMD. It is therefore critical to control wildlife-linked transboundary diseases more effectively as a region rather than as individual countries within an economically interlinked region.
Foot-and-Mouth Disease (FMD)
Foot-and-mouth disease virus (FMDV) infects a number of wildlife species in the Southern African landscape, and the epidemiology of the virus is greatly influenced by the role of wildlife, particularly the African buffalo (Syncerus caffer), in maintaining and spreading the disease to susceptible domestic animals [10,13,[15][16][17][18][19][20][21][22][23]. Individual infected buffalo are able to retain FMDV for at least five years, while the virus can persist for up to 24 years in an isolated herd [16]. In contrast, cattle are only able to maintain the virus for up to 3.5 years after infection [24]. In the Kruger National Park (KNP) in South Africa, buffalo calves become infected with all three SAT serotypes and individual animals are able to maintain more than one serotype during their lifetime. These serotypes are therefore constantly evolving in buffalo populations in Southern Africa, giving rise to the extensive intratypic variation currently observed for these SAT types [21]. Buffalo calves become acutely infected with FMDV at three to eight months of age, when their maternal antibodies wane. Once infected, they are able to excrete virus in large amounts, thus infecting other animals such as impala, which have been implicated as intermediate hosts. Acutely infected impala and other antelope species are unable to maintain a carrier status, but it has been suggested that they are able to spread the virus to cattle outside the KNP by penetrating the cordon fences commonly used to separate livestock from wildlife [25]. This is, however, limited to areas close to the KNP borders and to other game reserves and farms with infected buffalo. We suspect the same pattern may be repeated throughout the SADC region.
Avian influenza (AIV)
Wild aquatic birds such as ducks, geese, gulls and shorebirds are carriers of various influenza A subtypes [26,27]. Although all bird species are thought to be susceptible to influenza A viruses, some domestic poultry species such as chickens, turkeys and guinea fowl are known to be highly vulnerable to such infections. In susceptible birds, avian influenza is transmitted in a number of ways, including contact with contaminated nasal, salivary or fecal material from infected birds [28]. Indirect transmission via virus-contaminated water and fomites has also been reported. Some studies have shown the incidence of avian influenza outbreaks to coincide with increased populations of migratory ducks in the same region [29]. Open domestic poultry markets have also been implicated in the spread of avian influenza in the past, although waterfowl species have been identified as the well-characterized reservoir of different subtypes of avian influenza [30]. Part of the difficulty with exotic diseases such as avian influenza, particularly with regard to rural flocks, is the challenge of erecting physical barriers to disease, mainly as a result of the financial implications associated with such bio-containment infrastructure [31].
Economic impact of wildlife transboundary diseases
The costs associated with animal disease can change as societies and economies evolve, making it important to monitor such changes in order to respond in a timely and appropriate manner [32]. Following an outbreak, a country's supply of beef and related products (in the case of FMD) or poultry and related commodities (in the case of AIV) is negatively affected through morbidity and mortality. An international economic impact on the affected region follows as trade bans are imposed by the respective international trade partners, further depressing the economic prospects of the affected country. Additional economic depression can be observed through spillover effects such as tourism restrictions that follow the implementation of remedial action to contain and eradicate the outbreak. Financial compensation is usually the route most national livestock administrators follow both to boost outbreak control compliance by farmers and to facilitate quick recovery of the affected sector. This flow of finance is usually not adequately budgeted for and therefore negatively impacts the country's budget allocation. Even when a pre-arranged cost-sharing method between the public and the private sector exists, the local economic depression following an outbreak places an unusually large burden on the country concerned. For African countries whose budgets are relatively small, the effect of an outbreak in a region that was previously a disease-free zone is significantly large in comparison to the total GDP of the country [33].
Foot-and-Mouth Disease (FMD)
Foot-and-Mouth Disease is internationally regarded as the most economically important viral disease of domesticated livestock, with the potential to spread rapidly through susceptible animal populations. Despite the low mortality rates in susceptible animals, outbreaks of FMDV have a significant impact on productivity, and therefore on the livelihoods of resource-poor farmers. Since livestock are highly important in the agriculture-based economies of many of the Southern African Development Community (SADC) member states, trade and quarantine restrictions negatively impact the national economies of such states by blocking rural income generation and job creation and, most importantly, by compromising food security. Despite the accumulation of extensive knowledge of the disease as well as the availability of vaccines, attempts at eradicating FMD have remained unsuccessful. An understanding of the epidemiological complexities of FMD has therefore refocused the emphasis on control rather than eradication. As an example, it has been estimated that an investment of US$ 19.6 million in the reduction of losses linked to cattle morbidity and mortality in Sudan would result in revenue generation equalling US$ 40.5 million [32].
Avian influenza (AIV)
Avian influenza is considered one of the most important transboundary animal diseases to have emerged, with a significant impact on human health. The disease has been recognized as a highly lethal viral disease of poultry since 1901 [34]. Sporadic outbreaks of avian influenza in South Africa have had a significant impact on the poultry industry. According to the Ostrich Business Chamber, South Africa is the foremost supplier of ostrich products to the international market, accounting for up to 67% of exports with revenue of approximately US$ 120 million annually. The recent outbreak of the highly pathogenic H5N2 strain of avian influenza resulted in an immediate ban on all exports of ostrich products to the European Union. This placed the industry under immense financial strain and inevitably resulted in job losses for approximately 20,000 people directly employed by the industry.
In April 2011, the South African ostrich industry was severely affected by an outbreak of avian influenza. Highly pathogenic avian influenza (HPAI) H5N2 was detected on eight commercial ostrich farms in the Oudtshoorn and Uniondale areas in the Western Cape Province. Concerns of a potential outbreak of HPAI in domestic poultry and awareness of the pandemic potential of these viruses led to the rapid, preventative slaughtering of more than 50,000 birds and a suspension of all exports of poultry products, equating to US$ 140 million in export losses. This drastic action was necessitated since phylogenetic studies have indicated that new subtypes are derived from genetic reassortments between LPAI isolates from wild birds and those traditionally found circulating in the poultry and ostrich populations in South Africa. The diversity of avian influenza virus, and its potential to continuously evolve, drives the requirement for (a) the implementation of stringent biosecurity measures at the farm level to control the movement of flocks and prevent virus dissemination; (b) the development and use of sensitive, cost-effective and rapid diagnostic tests, which can be used for outbreak surveillance to assist in the management of this disease; and (c) the eradication of the disease by culling infected flocks [35].
In developing countries, the implementation of some of these containment strategies is not always feasible and therefore other approaches, which include the use of vaccines to manage clinical disease, prevent human infection and ultimately maintain food security, have been adopted [36]. Avian influenza vaccines have been successfully used in the control of HPAI in domesticated poultry and captive birds in countries in Asia, Europe, Africa and South America and have since improved the livelihoods of many rural communities in developing countries [36][37][38].
In a Nigerian study based on the 2011 AIV outbreak, 80% of the workers from the affected farms lost their jobs, while 45% of employees from unaffected farms also lost their jobs as the ripple effects of the outbreak costs followed. A Ghanaian study reflected similar values, in that about 75% of the employees lost their jobs. One can therefore extrapolate that an AIV outbreak in Africa leads to high unemployment within the local region of the outbreak [39]. The current state of veterinary services and preparedness levels in developing countries, especially in Africa, poses a real and present threat to the prevention and control of an AIV outbreak. Smallholder poultry systems tend to have medium- to low-level biosecurity, and animal mortality is higher than in intensive production systems where biosecurity tends to be higher. Financial risk is, however, higher for commercial farmers due to the high density of poultry in their operations.
Social impact of FMD and AIV in SADC
Livestock plays a critical and varied role in the economies of SADC states. At household level, livestock provides food and income and is generally regarded as an asset, while at a national and regional level it contributes to food security, trade and GDP [8,40]. It follows that the disruption of wealth and the exacerbation of poverty caused by animal diseases within rural communities will impede the general social way of life. Examples include the use of cattle to pay dowry, a traditional exchange of formalities throughout the Bantu nations of the SADC region. In certain parts of SADC, crop cultivation requires the use of oxen to plough the fields. An outbreak of FMD during the main planting season can disrupt crop cultivation and threaten the social way of life through increased poverty levels. The majority of SADC communities in which the game parks and reserves are situated are rural communities. Their livelihood is largely dependent on crop and livestock agriculture. Small stock traders are particularly vulnerable, since an avian influenza outbreak would devastate their trade through local and regional bans on poultry trade. It is well established that one of the major obstacles to implementing proper biosecurity for rural or communal livestock is the absence of adequate biosecurity infrastructure, primarily as a result of the prohibitive costs of putting such infrastructure in place. An outbreak of either FMD or AIV within a rural community in a SADC region not only alters the local economy by diverting national funding to control the outbreak, but the resulting changes in livestock herds and/or poultry flocks also drastically affect the day-to-day lives of rural communities.
Foot-and-Mouth Disease (FMD)
Foot-and-mouth disease (FMD) is a highly contagious, acute vesicular disease affecting cloven-hoofed animals (cattle, sheep, pigs, goats, buffalo and various other wildlife species). The disease is endemic in most developing countries, in particular in Africa, Asia and South America. The causative agent, foot-and-mouth disease virus (FMDV), is a positive-sense, single-stranded RNA virus classified in the genus Aphthovirus within the family Picornaviridae [41,42]. The 140S virion of FMDV consists of a single-stranded RNA genome, approximately 8.5 kb in length, enclosed within an icosahedral capsid made up of 60 copies each of four structural proteins (VP1, VP2, VP3, VP4) [41,42]. The mutation rates of these RNA viruses are inherently high due to the lack of RNA polymerase proofreading mechanisms [43,44]. As a result, FMDV exists as seven distinct serotypes (O, A, C, Asia-1, SAT 1, SAT 2 and SAT 3) that reflect significant genetic and antigenic variability [45][46][47]. The Southern African serotypes (SAT1-3) are endemic to sub-Saharan Africa, but several different epidemiological clusters, based on the distribution of the serotypes and topotypes, evaluation of animal movement patterns and the impact of wildlife and farming systems, have been identified for the African continent [48]. The southern SADC countries, i.e. Swaziland, Lesotho, South Africa, Botswana and Namibia, have segregated wildlife areas that harbour African buffaloes known to be infected, asymptomatically, with FMD virus serotypes SAT-1, SAT-2 and SAT-3. These SAT serotypes have thus been shown to co-circulate in the various designated clusters along with the Euro-Asiatic (O, A and C) serotypes [49][50][51][52]. The SAT viruses differ significantly from each other with respect to geographical distribution, incidence of outbreaks in domesticated livestock as well as infection rates in wildlife species [17,53,54]. Within the SAT viruses there are at least eight topotypes within SAT-1, 14 in SAT-2 and six within SAT-3. The SAT-1 viruses are commonly found circulating in buffalo herds, while SAT-2 viruses appear to be the most widely distributed serotype in sub-Saharan Africa and are frequently associated with outbreaks of the disease in livestock [54,55]. It has thus been suggested that the different SAT types may have differential abilities in crossing the species barrier, which relates to the varying degrees of pathogenicity among species [56]. The perplexing epidemiology of FMD is dependent on a number of factors that include, amongst others, the virulence of the viral strain and its ability to produce lesions; the stability of the viral particles in different environmental conditions; the immunological status of the host and its ability to respond to infection; and environmental factors that can provide geographical barriers that either prevent or promote the dissemination and transmission of the virus [14]. FMD is a highly transmissible disease and infection generally occurs via the respiratory route [57,58]. Transmission is also possible through abrasions on the skin or mucous membranes; however, in such instances 10,000 times more virus particles are required for successful infection [57,58]. The clinical outcome of the disease may vary among the host species considered and the infecting virus strain.
In domesticated animals such as cattle and sheep, fever and viraemia usually start within 24-48 hours after infection, followed by progressive spread of the virus to different organs and tissues, finally presenting as secondary vesicles, generally on the feet and tongue [42,[59][60][61]. Excreted virus has also been detected in the milk, semen, urine and feces of infected cattle [58]. In cattle, the incubation period is usually between 2 and 14 days, depending on the infection dose and route of infection. Pigs, on the other hand, are much less susceptible to aerosol infection than cattle and require as much as 6000 TCID50 of virus to establish infection [62,63]. They therefore usually become infected either by eating food contaminated with FMDV or by coming into direct contact with infected animals [62,63]. The incubation period is much shorter (approximately two days) and they are able to excrete far more aerosolized virus particles than both cattle and sheep [56,64].
Avian influenza (AIV)
Avian influenza virus (AIV) is classified as a type A influenza virus belonging to the family Orthomyxoviridae. These viruses have a spherical virion with numerous glycoprotein projections, a helical nucleocapsid and a genome consisting of 8 segments of single-stranded negative-sense RNA that encode 11 viral proteins [65]. Type A viruses are classified on the basis of the antigenic properties of two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA) (World Health Organization Expert Committee, 1980). Thus far, sixteen hemagglutinin (H1-H16) and nine neuraminidase (N1-N9) subtypes, occurring in various combinations (e.g. H1N1, H5N1 and H7N7), have been identified [66][67][68][69][70].
Influenza A viruses are continuously evolving, primarily due to the lack of proofreading activity of the viral RNA polymerase during replication of the genomic RNA segments [71]. The high level of antigenic point mutations introduced into the HA and NA surface proteins is responsible for the annual influenza epidemics and the associated mortalities [72,73]. Antigenic shift, made possible by the segmented nature of the influenza virus genome, is a second mechanism of virus evolution [74]. Due to their surface location, the genes that code for the HA and NA proteins are likely to be under immense selection pressure from the host immune system and are therefore expected to continuously evolve. The reassortment of viral segments leads to the production of novel progeny viruses for which no pre-existing immunity exists, and the new viruses are thus able to escape host immunity. When sufficiently infectious, these new viral strains are the most common cause of influenza pandemics [75][76][77].
Influenza viruses infecting poultry can be divided, according to their virulence, into two categories: the highly pathogenic avian influenza viruses (HPAIV), which cause a systemic infection with mortality rates of up to 100%, and the low pathogenic avian influenza viruses (LPAIV), which cause localized infections that result in mild respiratory disease in poultry [78]. Although there are many subtypes of the virus, the H5 and H7 subtypes are generally associated with high pathogenicity, the prevailing theory being that HPAIV variants evolve from subtypes of LPAIV in domestic poultry by mutation or recombination events [79,80]. The transition from low pathogenicity to high pathogenicity is governed by the insertion of basic amino acids into the haemagglutinin cleavage site, which then causes systemic viral replication and acute generalized disease in domesticated poultry [81][82][83][84][85]. Avian influenza strains lacking this multi-basic cleavage site are considered LPAIV and are perpetuated in nature in wild bird populations [86][87][88].
Avian influenza viruses generally infect the cells that line the respiratory and intestinal tracts of birds and are excreted in high concentrations in their faeces. Transmission of the virus between birds is considered a complex process dependent on the viral strain, bird species and certain environmental factors [89]. Studies have shown that virus concentrations of up to 10^8.7 mean egg infectious doses (EID) per gram of faeces could be detected from infected ducks [90]. In addition, these viruses were shown to remain infective in contaminated lakes or ponds for up to 30 days at low temperatures, leading to the transmission of avian influenza via the faecal-oral or possibly the faecal-cloacal route [91,92]. It has further been suggested that, depending on environmental conditions, the virus could most likely also overwinter and remain a source of infection during the warmer spring seasons [93].
Foot-and-Mouth Disease (FMD)
In Southern Africa, the control and prevention of FMDV is based on (a) the implementation of effective physical barriers (i.e fencing) that separates wildlife from livestock; (b) routine vaccination of cattle in high risk areas exposed to infected buffalo populations; (c) movement control of susceptible animals and animal products and (d) surveillance to monitor outbreaks [20,[94][95][96]. The OIE recognizes fencing as an acceptable method for establishing FMD disease free zones in southern Africa. However, these physical barriers are often subject to both environmental and human pressures such as flooding; breakage due to wildlife and damage from theft [95]. Relying on fencing alone increases the risk of FMD transmission between wildlife and livestock and vaccination therefore currently remains the main tool for the control of the disease in livestock, particularly in endemic areas [97,98].
The current FMD vaccines used worldwide are chemically inactivated whole-virus preparations, typically formulated with a water-in-oil adjuvant and with a potency of at least 3 PD50 (50% protective doses) [98,99]. These formulations enhance humoral immunity, which is known to be the most influential factor in preventing FMD [98,100]. Although the use of inactivated vaccine preparations has been successful in controlling and reducing the number of FMD outbreaks in many parts of the world, there have been considerable concerns and limitations regarding their use in preventative control programs. Due to the antigenic variability of the virus, current vaccine preparations often confer low levels of cross-protection following supplementary vaccinations. Other limitations include the difficulty in adapting some viruses to cell culture, thus slowing the introduction of new vaccine strains, reducing vaccine yield and potentiating, through prolonged passage, the selection of undesirable antigenic changes [101,102]. Furthermore, vaccination does not induce sterile immunity: animals may still be able to infect non-vaccinated animals and may also become persistently infected; lastly, the current vaccines are relatively expensive, especially for small-scale and subsistence farmers [24,[103][104][105]. Towards developing vaccines with improved efficacy and coverage, continuous monitoring of field isolates is required to determine the applicability of existing vaccines and to detect the emergence of novel epidemiological situations [98]. Inactivated vaccines induce short-lived immunity, and it is recommended that naïve animals receive two initial vaccinations (a primary and a secondary dose) 3-4 weeks apart, followed by re-vaccination every 4-6 months to prevent the spread of disease within populations [106]. However, in the African environment this may differ between manufacturers depending on the potency of the vaccine, and some manufacturers recommend five vaccinations per annum. The FMDV particle is also known to be relatively unstable with respect to both temperature and pH, and this has a considerable impact on the shelf life of vaccines, particularly in developing countries where the maintenance of cold chains is sometimes not possible [107]. To that end, reverse genetics approaches for producing infectious cDNA clones, into which novel capsid genes that confer increased capsid stability and/or adaptation to cell culture can be inserted, are currently being explored for a number of FMD serotypes [108][109][110].
Other factors of concern include (a) the requirement for high-containment facilities for handling live viruses for antigen production and the associated risks of virus escape into the environment; (b) the production of FMD antigens in large-scale suspension or monolayer cell lines, which potentially results in lower antigen yields due to the inability of certain serotypes and subtypes to adapt to cell culture; (c) the presence of non-structural viral proteins in vaccine preparations that complicate the distinction between vaccinated and infected animals; (d) the inability to produce rapid protection against challenge by direct inoculation, thus potentially exposing susceptible, vaccinated animals to infection prior to the development of their adaptive immune response; and (e) the possibility of creating a carrier state in vaccinated animals following an FMD infection [98]. While these concerns are being addressed in the development of novel vaccine technologies, alternative control strategies reviewed in [111] include subunit or peptide vaccines, live attenuated vaccines and empty viral capsids. Although much less potent than whole inactivated virus particles, peptide vaccines have been shown to induce either partial or, in some cases, full protective immunity following the administration of multiple vaccine doses [112,113]. Baculovirus-derived virus-like particles and adenovirus-vectored vaccines for delivering interferons or FMDV capsid proteins have both been shown to be highly immunogenic [114][115][116]. Although vaccines are considered to be the most important factor in the global control of FMD, the high levels of genetic diversity observed for the different virus serotypes limit the possibility of developing a single vaccine approach. The interval between vaccinations is critical to prevent a "window of susceptibility", particularly where virus is continuously or sporadically present in carrier animals.
Avian influenza (AIV)
The diversity of avian influenza virus, and its potential to continuously evolve, is the primary factor driving the requirement for (a) the implementation of stringent biosecurity measures at the farm level to control movement of flocks and prevent virus dissemination (b) the development and use of sensitive, cost-effective and rapid diagnostic tests, which can be used for outbreak surveillance to assist in the management of this disease and (c) the eradication of the disease by culling infected flocks [35].
In developing countries, the implementation of some of these containment strategies is not always feasible, and therefore other approaches, which include the use of vaccines to manage clinical disease, prevent human infection and ultimately maintain food security, have been adopted [36,117,118].
Currently available commercial vaccines for the control of avian influenza are inactivated whole-virus AI vaccines. These vaccines have mostly been used to control low pathogenic avian influenza (LPAI) as well as high pathogenic avian influenza (HPAI) outbreaks [119-121]. Although these vaccines have been shown to be safe and efficacious against AIV, they have several disadvantages, including the cost of production, the laborious method of administration and the lack of long-term immunity, which in turn necessitates booster vaccinations. The use of these vaccines further complicates diagnosis, making it impossible to differentiate infected from vaccinated animals and therefore leading to continued shedding of the virus in the field [122]. Furthermore, the biohazards associated with manufacturing these vaccines and the low vaccine yields obtained from embryonated fowl eggs have reduced the efficacy of these vaccines [123,124]. In an attempt to overcome some of these limitations, several different vaccine technologies have been developed, which have been extensively reviewed [125]. Briefly, they include (a) inactivated whole viruses developed using reverse genetics approaches [126-129]; (b) HA protein expressed in vitro in cell cultures (eukaryotic, yeast or plant derived), in bacteria (E. coli) or from insect-derived viral vectors (baculovirus) [130-132]; and (c) HA proteins expressed in vivo using live bacterial or viral vectors (e.g. fowlpox virus, vaccinia virus, Rous sarcoma virus and adenovirus) [133-136].
Despite the availability of different AI vaccine technologies, there are several critical aspects that need to be considered when selecting an appropriate vaccination program. One such concern is the emergence of antigenic drift within the viral population, which results in the occurrence of modified viruses that can escape the immune response elicited by the vaccine strain. It is therefore essential that suitable control programs be implemented such that correct seed viruses are selected for the development of vaccines that enable the detection of field-exposed flocks. Other aspects include reliance on adequate monitoring and surveillance systems being in place to ensure the early detection of, and rapid response to, AI infections [36,137].
Conclusion
Livestock trade contributes about 15% of global agricultural trade, of which more than 80% of exports originate from developed countries [10]. This presents a favourable economic potential for the SADC states in particular and Africa in general, should the endemic status of FMD be managed effectively to create disease-free zones. In Africa, the diverse wildlife species attract local and international tourism, which forms a lifeline for income generation in developing countries. The communities around wildlife reserves and the nomadic cattle-herding practices, in which livestock and wildlife interact, facilitate the transfer of viral diseases to livestock. This adds complexity both to disease control and to determining the loss of revenue for countries where livestock and wildlife each play an integral part. It is clear that effective disease control is beneficial for both the wildlife/conservation sector and the livestock-based export industry, although emphasis has been placed primarily on disease control within the livestock industry. Surveillance of migratory birds is limited even though ducks are known to be the main reservoirs for the transmission of avian influenza. Similarly, although African buffalo are known to be the maintenance host of FMD, the factors that contribute to the transmission of the virus to livestock remain unknown.
Developing countries, with specific emphasis on the African continent, have an obligation and a need to improve the socioeconomic outlook of resource-poor communities by reducing levels of poverty and implementing applicable national development plans. The trade relevance of both AIV and FMD and, in the case of AIV, its zoonotic capacity, have a major impact on the economies of developing countries. Investment in controlling and preventing the spread of disease has significant financial benefits that usually outweigh the costs incurred during outbreak situations. As an example highlighted in the AGRA study, an investment of USD 1 towards the implementation of a disease prevention strategy resulted in the generation of revenue to the value of USD 12 [32]. However, it should be noted that the actual revenue generated from effective and efficient prevention measures will depend on the prevailing conditions within the disease outbreak region, including animal density, the intensity of export activity and the market size of the region.
For exotic diseases such as AIV, outbreaks are best addressed by focusing on the domestic host through test-and-slaughter and mass vaccination. Preventing contact between infected domestic animals and wildlife is desirable, but not always feasible in many African countries. Some industries, such as the South African ostrich sector, by their nature involve animals that are only semi-domesticated, so biosecurity becomes much more difficult to implement or police. When an exotic disease becomes established in a free-ranging wildlife population, the control options become considerably limited and frequently unpopular, since the culling of valuable wildlife remains the main option for control.
Based on the AGRA report [138], about 25% of African countries have no program for the control of viral disease, despite the high incidence of zoonotic and non-zoonotic epizootic diseases. This situation is compounded by a dire lack of qualified personnel to fulfil this role. Furthermore, the lack of sophisticated technical resources in many SADC regions prevents the accurate and timely detection and reporting of FMD outbreaks. The socioeconomic challenges of the African continent will continue owing to weak investment in animal health, the lack of scientific capacity, improper implementation and/or lack of awareness of policies, and generally weak governance of food safety in the face of competing national demands. Access to high-end markets depends on disease control options that include (a) maintaining zones recognized as FMD-free, from which livestock may be exported without the requirement for vaccination; (b) the creation of containment zones with high levels of regulation and biosecurity, thus favouring compliance with export regulations; (c) commodity-based trade, which enables the trading of processed products and precludes the possibility of virus dissemination; and (d) managing the disease and focusing on local trade rather than export.
Thus, regardless of the access strategies being sought, the implementation of effective disease control programmes within the SADC regions remains imperative for both livestock production and revenue generation.
It is therefore imperative that wildlife disease control be further addressed before the SADC states can realize the full economic potential of being endowed with both wildlife and livestock sectors. It is through proper management, effective legislation and increased wildlife disease research that these agriculture-based economies can improve and thereby lift the social well-being of communities within SADC nations. By maximising the revenue generated from these interrelated sectors, long-term sustainable earnings in foreign currency will potentially reduce poverty through local job creation. Wildlife disease detection, prevention and control will become increasingly relevant, since many of the diseases that affect wildlife appear to cause only mild symptoms in wildlife while having devastating clinical effects in livestock and poultry, as demonstrated by FMD and AIV, respectively. Although the economic impact of wildlife diseases is easier to measure empirically, the social impact and the disruption to the way of life in many native communities within SADC states are usually not reported as directly linked to animal disease outbreaks such as FMD and AIV. Social cohesion, arising from wealth accumulation through livestock and the absence of disease, could be an added benefit of properly controlling animal diseases in the most vulnerable rural communities.
"year": 2014,
"sha1": "9f027944581cff2ff2441df857610481f1c96083",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/the-socioeconomic-impact-of-controlled-and-notifiable-wildlife-diseases-in-the-southern-african-development-community-sadc-states-of-africa-2375-446X.1000115.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7ec8ac348e8ed67458fb34f58fcc6a357e4675d3",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Biology"
]
} |
11386879 | pes2o/s2orc | v3-fos-license | Changing patterns of home visiting in general practice: an analysis of electronic medical records
Background In most European countries and North America the number of home visits carried out by GPs has been decreasing sharply. This has been influenced by non-medical factors such as mobility and pressures on time. The objective of this study was to investigate changes in home visiting rates, looking at the level of diagnoses in 1987 and in 2001. Methods We analysed routinely collected data on diagnoses in home visits and surgery consultations from electronic medical records by general practitioners. Data were used from 246,738 contacts among 124,791 patients in 103 practices in 1987, and 77,167 contacts among 58,345 patients in 80 practices in 2001. There were 246 diagnoses used. The main outcome measure was the proportion of home visits per diagnosis in 2001. Results Within the period studied, the proportion of home visits decreased strongly. The size of this decrease varied across diagnoses. The relation between the proportion of home visits for a diagnosis in 1987 and the same proportion in 2001 is curvilinear (J-shaped), indicating that the decrease is weaker at the extreme points and stronger in the middle. Conclusion By comparison with 1987, the proportion of home visits shows a distinct decline. However, the results show that this decline is not necessarily a problem. The finding that this decline varied mainly between diagnoses for which home visits are not always urgent shows that medical considerations still play an important role in the decision about whether or not to carry out a home visit.
Background
Home visits are commonly seen as an important part of general practice. However, in the past decades, there has been a world-wide decrease in home visiting rates. Although there are strong variations between countries, as well as between GPs, this decrease was found in most European countries and North America [1][2][3][4].
How this decrease must be evaluated is debatable. On the one hand, this trend can be an indication of improved efficiency: GPs spend less time on less urgent home-visits, saving more time to treat patients in their practice. On the other hand, some are concerned that an essential part of general practice care might disappear and that this might lead to undesirable and dangerous situations.
Previous studies showed that home visiting rates are affected by demand, as well as supply-related factors. GPs will be more likely to visit patients who are seriously restricted in their ability to come to the practice. These restrictions can be related to age or disability but also to the complaint for which the GP is consulted. A non-medical reason for a home visit may occur if a patient has no transport.
On the supply-side, the GP's style of work has an influence. Some GPs will be more likely to address the wishes of their patients than others. The criteria for the level of discomfort that is acceptable for patients vary across GPs. Also workload related factors and the location of the practice have an influence. GPs in smaller practices make more home visits [5,6], and the proportion of elderly on the GP's list is also positively related to the number of home visits [6,7]. Furthermore, previous studies showed higher home visiting rates in rural areas than in urban areas [5,[8][9][10].
Although the decline in home visits is generally known, very little is known about the nature of this decrease. That is to say: How does this decrease vary across different diagnoses in proportion to their urgency? The purpose of the present study was to analyse and to quantify this decrease in more detail.
The decrease in home visits indicates that GPs have sharpened their criteria for home visiting. However, GPs will still make, at least from their own point of view, responsible decisions, taking into consideration the possible discomfort or danger for the patient. This means that some complaints allow more options than others. If a complaint appears to be very threatening, it is clear that a home visit is indicated; we therefore expect that the decrease in home visits in such cases is small. Less urgent cases, on the other hand, do not allow the opportunity for a strong decrease either, simply because GPs never carried out home visits in these cases anyway. In other words, there is a 'bottom effect'. The most room for deciding whether or not a home visit should be made, and thus for a decrease, is found among the complaints in the middle, the doubtful cases.
We expect, therefore, that the relation between the chance to get a home visit for a specific complaint and this same chance in the past is not a linear, but a J-shaped relation, indicating that the decrease is stronger in the middle and much smaller at the extreme points.
Methods
Data used in this study originate from two Dutch National Surveys of General Practice (DNSGP) [11,12]. In the first DNSGP, data were collected from April 1987 until March 1988 in a stratified sample of 193 general practitioners in 103 practices, who served 335,000 patients in total. In the second Dutch National Survey of General Practice, data were collected during one calendar year (2001) in 104 representative general practices in the Netherlands, comprising 195 general practitioners, who served 385,461 patients in total. The DNSGP was funded by the Dutch Ministry of Health. GPs and other care providers were asked to record every contact in an electronic medical record system. The data used in this study are the diagnosis and the kind of contact, such as a phone call, surgery consultation, or home visit. The diagnosis was coded using the International Classification of Primary Care (ICPC). The type of contact was registered during six weeks in DNSGP2 and during three months in DNSGP1. Due to technical problems, some practices had to be excluded.
A selection of contacts was made based on two criteria. First, the diagnosis had to be registered 50 times or more in both databases. The reason for this is that, below 50 registrations, the percentages are determined too much by individual cases. Second, the contact had to be a face-to-face contact. The decision to pay a home visit is considered a two-step process. First, the decision is made whether or not it is necessary to see the patient and, if not, whether a telephone consultation is an alternative. Second, whether the patient should come to the GP or the GP to the patient. Therefore, we assume that the alternative for a home visit is usually a surgery consultation. A selection of 246,738 contacts, both home visits and surgery consultations, in 1987 and 77,167 contacts in 2001 remained.
Both files were aggregated by diagnosis (ICPC-code). The variable to be aggregated was home visit (yes = 1, no = 0). In this way, for every diagnosis a proportion of home visits was computed for both years. This procedure resulted in 246 diagnoses varying from 0% to 86% home visits. Before aggregating, we weighted the data of 1987 on age and urbanization to the population of 2001. This was done to adjust for these factors, which are commonly known to influence home visits. This weighting had, however, very little influence. The un-weighted results are shown in the annex [see Additional file 1].
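A minimal pandas sketch of the selection and aggregation steps just described; the column names (icpc, contact_type, weight) are hypothetical, since the DNSGP files are not publicly structured this way, and the 50-registration threshold would additionally have to hold in both survey years before the final selection.

```python
import pandas as pd

def proportion_home_visits(contacts: pd.DataFrame, min_count: int = 50) -> pd.Series:
    """Weighted proportion of home visits per ICPC diagnosis for one survey year."""
    # Keep only face-to-face contacts (home visit or surgery consultation).
    f2f = contacts[contacts["contact_type"].isin(["home_visit", "surgery_consultation"])].copy()
    f2f["home_visit"] = (f2f["contact_type"] == "home_visit").astype(float)

    # Keep diagnoses registered at least `min_count` times in this file.
    counts = f2f.groupby("icpc").size()
    f2f = f2f[f2f["icpc"].isin(counts[counts >= min_count].index)]

    # Weighted proportion per diagnosis (weight = 1 for 2001; for 1987 the
    # weights re-balance age and urbanisation towards the 2001 population).
    return f2f.groupby("icpc").apply(
        lambda g: (g["home_visit"] * g["weight"]).sum() / g["weight"].sum()
    )
```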
Statistical analyses
The analyses were done on the level of diagnoses. Two regression analyses were conducted, using the proportion of home visits in 2001 for a specific diagnosis as the dependent variable, and the proportion of home visits for that same diagnosis in 1987 as the independent variable. In the first analysis we estimated a simple linear regression model. Since we hypothesized that this relation is rather curvilinear, J-shaped, instead of linear, in the next step we added a quadratic term to the model. The full model can then be expressed by the following equation: Y = b0 + b1·x + b2·x², where Y represents the proportion of home visits within one diagnosis in 2001 and x the proportion of home visits in 1987. Both models will be presented.
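A compact sketch of the two models, assuming the per-diagnosis proportions for 1987 and 2001 are available as paired arrays (the variable and function names are illustrative, not part of the original analysis):

```python
import numpy as np
import statsmodels.api as sm

def fit_home_visit_models(p87: np.ndarray, p01: np.ndarray):
    """Fit model 1 (linear) and model 2 (quadratic) as described above.

    p87, p01 -- per-diagnosis proportions of home visits in 1987 and 2001.
    """
    # Model 1: p01 = b0 + b1 * p87
    X1 = sm.add_constant(p87)
    model1 = sm.OLS(p01, X1).fit()

    # Model 2: p01 = b0 + b1 * p87 + b2 * p87**2 (the hypothesised J-shape)
    X2 = sm.add_constant(np.column_stack([p87, p87 ** 2]))
    model2 = sm.OLS(p01, X2).fit()
    return model1, model2
```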
Results
Some characteristics of the practices, patients and contacts involved in the analyses are presented in table 1. Of all face-to-face contacts that were included, 14.1% were home visits in 1987 and 7.4% in 2001. Previous studies showed that of all contacts approximately 17% were home visits in 1987 and 9% in 2001 [4]. There were a few differences between both years: the percentage of urban practices was slightly higher, the average list size was higher (which is also the case in the national population) and, lastly, the average age of the patients was also slightly higher.
Home visits are still more often carried out for elderly people. The older the patient, the higher the chance of a home visit. This is illustrated by figure 1. The most striking difference between both years was found among the youngest patients. In 1987 significantly more home visits were carried out for children. In the youngest cohort (0 through 5 years), the percentage of home visits decreased from 20% to 3%. The proportion of home visits is also smaller among the older cohorts, especially those between the age of 55 and 75. Above that age, the difference between both years gets smaller. Table 2 presents the results of the regression analyses. In model 1, the linear coefficient of 0.78 was found to be significant at the 0.001 level. The estimated proportion of home visits for any diagnosis is approximately 75% of the proportion in 1987. The fit of the model is quite high: 79% explained variance. In model 2 the quadratic term was added and was also found to be significant at the 0.001 level. This leads to 4% additional explained variance. The proportion for 2001 can now be expressed as 0.01 + 0.36 times the proportion in 1987, plus 0.66 times the square of this proportion. These results confirm the hypothesized J-shaped relationship.
To get a better insight, both regression lines are displayed in figure 2. When for a diagnosis only 20% of the contacts resulted in a home visit in 1987, in 2001 the estimated proportion is 11%. 40% in 1987 becomes 26% in 2001, 50% becomes 36%. At the level of 80% in 1987 there is still a decrease of 7% but when we reach 90% or more, there is hardly any decrease. Theoretically, at the proportion of 96%, the estimated proportion in 2001 exceeds the proportion in 1987. However, such high proportions do not really exist in the file.
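Evaluating the reported model-2 curve at a few 1987 proportions closely reproduces the figures quoted above (small differences reflect rounding of the reported coefficients):

```python
def predicted_2001_proportion(p87: float) -> float:
    """Model 2 as reported: p2001 = 0.01 + 0.36*p87 + 0.66*p87**2."""
    return 0.01 + 0.36 * p87 + 0.66 * p87 ** 2

for p in (0.20, 0.40, 0.50, 0.80, 0.96):
    print(f"{p:.0%} in 1987 -> {predicted_2001_proportion(p):.1%} in 2001")
```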
Obviously, some diagnoses are closer to their predicted value than others. Although the model fits very well, there are some diagnoses that show relatively large differences between both years. Table 3 shows the five diagnoses with the strongest decreases in the proportion of home visits. These are: fever; acute myocardial infarction; osteoporosis; concussion; and tonsillitis, angina, and scarlatina. Only a few diagnoses ran counter to the overall trend, showing a higher proportion of home visits in 2001 than in 1987. This was the case for 'generalized pain' (A01) and acute stress reaction (P02).
Conclusion and discussion
By comparison with 1987, the proportion of home visits shows a distinct decline. We expected that this decrease was not equal for all kinds of diagnoses, but relatively stronger for the complaints 'in the middle', with median proportions, and smaller at the extreme points. Our findings lend support to this hypothesis.
One plausible explanation for this finding is that every home visit is the outcome of weighing discomfort and/or danger for the patient on the one hand against the discomfort for the GP, for example in terms of the amount of time spent, on the other. Better transport facilities for patients and an increase in the workload experienced over a period of time might have increased the weight of the latter factor. It is obvious that in very severe cases these non-medical factors are of less importance. The more threatening a complaint, the less room the GP has for making medical and other decisions. This finding suggests that the decrease in home visits is not necessarily a problem. There seems to be no reason to assume that GPs take unacceptable risks, since medical factors are still taken into consideration. In urgent cases, most GPs still visit their patients. An explanation for some large decreases is that medical knowledge and commonly accepted ideas about specific complaints have changed. In the list of strongest decreases, fever, streptococcal infections and concussion can be traced back to altered views in medical management. Fever in itself is no reason for a visit; in the case of concussion, advice can often be given without seeing the patient. The reason that patients with a myocardial infarction receive fewer visits is likely to be related to the more active therapeutic approach adopted since 1987. Many of them undergo a PTCA within the first days after their infarction and leave the hospital within a week. In 1987 the treatment was more often conservative, the patients stayed longer in the hospital and were discharged with restrictions on exercise. It is not plausible that the decrease involves the first emergency calls when a patient experiences chest pain. However, the design of our study does not differentiate between several types of visits. The place of osteoporosis in the top five decreases is difficult to interpret within the limits of this study.
Although the results showed that the decrease in home visiting rates becomes smaller when the complaints become more urgent, there is a decrease in the overwhelming majority of the complaints. The finding that GPs make more visits when patients report acute stress reactions or psychological symptoms is surprising in the light of the declining number of visits. An explanation might be that in the case of serious psychological symptoms, it is easier for the GP to visit these patients than to receive them in the practice. So, in such cases it is in the interest of both the GP and the patient to carry out a home visit. Moreover, when an emotionally stressed and possibly confused patient calls, it is often difficult to estimate the urgency of the complaint [14].
What does this information mean for the GP? First, the results show that some complaints provide more room for manoeuvre in the choice of whether or not to carry out a home visit. Furthermore, in the discussion over whether or not the decrease in home visiting is problematic, this information supports the claim that GPs who reduce their number of home visits do not necessarily make irresponsible decisions.
The purpose of this study was to describe the relationship between GPs and their patients in very broad outlines in order to get an insight into the overall pattern of the decrease in home visits on the level of complaints and diagnoses. Therefore we used aggregated data and created abstract research entities. The characteristics of patients, GPs, practices and their context have been shown to play an important role in home visiting but were beyond the scope of this study. However, more insight into the nature of the decrease in home visits can be an important point of departure for more explanatory studies. | 2014-10-01T00:00:00.000Z | 2006-10-17T00:00:00.000 | {
"year": 2006,
"sha1": "ba46d86d692f1373d94c9f68468c799cb1fde4ad",
"oa_license": "CCBY",
"oa_url": "https://bmcprimcare.biomedcentral.com/track/pdf/10.1186/1471-2296-7-58",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba46d86d692f1373d94c9f68468c799cb1fde4ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25536307 | pes2o/s2orc | v3-fos-license | Biochemical Characterization of Human Collagenase-3*
The cDNA of a novel matrix metalloproteinase, collagenase-3 (MMP-13), has been isolated from a breast tumor library (Freije et al. (1994) J. Biol. Chem. 269, 16766-16773), and a potential role in tumor progression has been proposed for this enzyme. In order to establish the possible role of collagenase-3 in connective tissue turnover, we have expressed and purified recombinant human procollagenase-3 and characterized the enzyme biochemically. The purified procollagenase-3 was shown to be glycosylated and displayed a Mr of 60,000, the N-terminal sequence being LPLPSGGD, which is consistent with the cDNA-predicted sequence. The proenzyme was activated by p-aminophenylmercuric acetate or stromelysin, yielding an intermediate form of Mr 50,000, which displayed the N-terminal sequence L58EVTGK. Further processing resulted in cleavage of the Glu84-Tyr85 peptide bond to the final active enzyme (Mr 48,000). Trypsin activation of procollagenase-3 also resulted in hydrolysis of the Glu84-Tyr85 peptide bond.
The human matrix metalloproteinases (MMPs) 1 comprise a family of at least 11 homologous zinc-dependent endopeptidases that degrade the macromolecular components of extracellular matrices. They have been implicated in matrix remodeling processes associated with normal mammalian development and growth and in the degradative processes accompanying arthritis and tumor invasion. The MMPs can be divided into three main subfamilies, collagenases, stromelysins, and gelatinases, together with other enzymes that do not belong to these groupings. Three highly homologous human collagenases, fibroblast (MMP-1), neutrophil (MMP-8), and collagenase-3 (MMP-13), have been identified by analysis of their respective cDNAs (Goldberg et al., 1986; Whitham et al., 1986; Hasty et al., 1990; Freije et al., 1994). Sequence comparison revealed that they share more than 50% sequence identity and three functionally important domains, namely the propeptide, catalytic, and C-terminal domains. Procollagenase latency is due to the propeptide domain, which consists of about 80 amino acids including a free cysteine residue within the highly conserved PRCGVPD sequence motif. The catalytic domain of about 180 amino acids contains one or two calcium and two zinc binding sites, as revealed by x-ray crystallographic analysis of the catalytic domains of fibroblast and neutrophil collagenases in the presence of synthetic inhibitors (Borkakoti et al., 1994; Bode et al., 1994; Lovejoy et al., 1994). The structure comprises a five-stranded β-sheet, two bridging loops, and two α-helices. The C-terminal domain is linked via a short hinge sequence motif to the catalytic domain and shares sequence homology with vitronectin, being essential for the triple helicase activity of fibroblast and neutrophil collagenases (Clark and Cawston, 1989; Sanchez-Lopez et al., 1993; Hirose et al., 1993; Knäuper et al., 1993a). The active enzymes form tight-binding noncovalent complexes with their natural inhibitors, referred to as tissue inhibitors of metalloproteinases (TIMPs), in a 1:1 stoichiometric fashion. The interaction of the collagenases with TIMPs is mainly regulated by the catalytic domain, but C-terminal domain interactions increase the association rates of complex formation.
Biochemical studies on fibroblast and neutrophil collagenases describing their activation mechanism, substrate specificity, and inhibitor interaction in relation to their domain organization are well advanced (Murphy et al., 1987; Clark and Cawston, 1989; Hirose et al., 1993; Sanchez-Lopez et al., 1993; Knäuper et al., 1990a, 1990b, 1993a, 1993b), but there are currently no data available regarding the activation of collagenase-3.
* This work was supported in part by the Arthritis and Rheumatism Council, United Kingdom, by a Wellcome Trust Travelling Fellowship (to V. K.), and by a Comision Interministerial de Ciencia y Tecnologia (Spain) project. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Activation of Procollagenases and Determination of Their Concentration—Procollagenase-3 and neutrophil procollagenase were routinely activated by incubation with 1 mM APMA at 37°C. Procollagenase-3 (20 μg) was activated with either 1.4 or 2.8 μg of active stromelysin at 37°C for up to 7 h. In order to achieve "superactivation" of both neutrophil and fibroblast procollagenase (purified according to Knäuper et al. (1990a) and Murphy et al. (1992)), the enzymes were activated by combined treatment with trypsin and stromelysin (Knäuper et al., 1993b; Murphy et al., 1992). The concentrations of the three active collagenases were determined by titration against the synthetic hydroxamic acid-based metalloproteinase inhibitors CT1399 and CT1847 (kindly provided by Celltech Ltd., Slough, United Kingdom).
Activity Assays—The specific activities of the active collagenases were determined using 14C-labeled type I collagen in a diffuse fibril assay at 35°C essentially as described by Cawston et al. (1981). Correspondingly, the gelatinolytic activity was determined at 37°C using [14C]gelatin as the substrate (Cawston et al., 1981). The degradation of acid-soluble type II and III collagen was quantitated using the gel scanning protocol described by Welgus et al. (1981). The activity of collagenase-3 versus the synthetic quenched fluorescent peptide substrates Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 and Mca-Pro-Cha-Gly-Nva-His-Ala-Dpa-NH2 was assessed using a Perkin Elmer spectrofluorometer (LS 50B) (Knight et al., 1992). The three serpins, α1-antichymotrypsin, antithrombin III, and plasminogen activator inhibitor 2 (5 μg each), were incubated with 86 ng of active collagenase-3 at 37°C for 5 h. The reaction was terminated by the addition of reducing sample buffer prior to electrophoretic analysis. In the case of α1-antichymotrypsin, the reaction products were purified by reverse-phase HPLC on a Vydac 218TP54 column (4.6 × 250 mm) at a constant flow rate of 1 ml/min using a linear gradient of 5-95% acetonitrile and further analyzed by amino acid sequencing.
Purification of Recombinant Tissue Inhibitors of Metalloproteinases TIMP-1, TIMP-2, and TIMP-3 and Determination of Their Concentrations-Recombinant forms of the three human TIMPs were purified from the relevant transfected NSO mouse myeloma clones (Murphy et al., 1991;Willenbrock et al., 1993;Apte et al., 1995). TIMP concentrations were determined by titration against human recombinant stromelysin, the concentration of which had been determined by titration with a standard preparation of TIMP-1 (concentration determined by amino acid analysis) (Murphy and Willenbrock, 1995).
Kinetic Studies Using Synthetic Hydroxamic Acid-based Inhibitors CT1399 and CT1847—The rate constants for the association of 50 pM active collagenase-3 with the synthetic inhibitors CT1847 (2-8 nM) and CT1399 (100-300 pM) were determined by analysis of the progress curves of Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 hydrolysis (Willenbrock et al., 1993). The apparent Ki values for both inhibitors were determined using the following equations: koff = kobs(vs/v0) and Ki = koff/kon.
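A small numerical sketch of these two relations; the kobs, vs/v0 and kon values below are illustrative placeholders, not measurements from this study.

```python
def slow_binding_ki(k_obs: float, v_s: float, v_0: float, k_on: float):
    """Apply k_off = k_obs * (v_s / v_0) and K_i = k_off / k_on.

    k_obs -- observed first-order rate constant from the progress curve (s^-1)
    v_s, v_0 -- steady-state and initial velocities of substrate hydrolysis
    k_on -- association rate constant (M^-1 s^-1)
    Returns (k_off in s^-1, K_i in M).
    """
    k_off = k_obs * (v_s / v_0)
    return k_off, k_off / k_on

# Illustrative numbers only (not data from the paper):
k_off, K_i = slow_binding_ki(k_obs=5e-3, v_s=0.02, v_0=1.0, k_on=1.7e7)
print(f"k_off = {k_off:.2e} s^-1, K_i = {K_i:.2e} M")
```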
N-terminal Amino Acid Sequencing-N-terminal sequence determinations of purified procollagenase-3 or active collagenase-3 were performed by automated Edman degradation using an Applied Biosystems 470A protein sequencer with on-line 190A HPLC for phenylthiohydantoin-derivative analysis.
Expression and Purification of Human Procollagenase-3-
Human procollagenase-3 was expressed by stably transfected NSO mouse myeloma cells and purified using S-Sepharose fast flow and Sephacryl S-200. The procollagenase-3 preparation was free of other matrix metalloproteinases as assessed by gelatin and casein zymographic analysis (results not shown). The final purified procollagenase-3 displayed a Mr of 60,000 when analyzed by SDS-PAGE under reducing conditions (Fig. 1, lane 1). The proenzyme was shown to be glycosylated as demonstrated by N-glycosidase F treatment (Fig. 1, lane 2). This reduced the Mr to 53,600, which is in excellent agreement with the Mr predicted from the cDNA sequence. Thus 10% of the Mr of the proenzyme corresponds to N-linked sugars. N-terminal amino acid sequencing of procollagenase-3 revealed the sequence LPLPSGGD, which is consistent with the cDNA-predicted sequence (Fig. 2). A minor portion of the secreted procollagenase-3 displayed the N-terminal amino acid sequence PLPSGGD. The loss of Leu1 in a part of the enzyme preparation may be due to the activity of a leucine aminopeptidase produced by the NSO mouse myeloma cells during the average culture period of 2 weeks. The procollagenase-3 preparation was >98% latent and displayed barely detectable levels of enzymatic activity prior to activation, and it can be concluded that the loss of Leu1 did not affect the latency of the proenzyme.
In contrast, autoactivated collagenase-3 displayed a M r of 48,000 when analyzed by SDS-PAGE, and its proteolytic activity could not be enhanced by APMA treatment. N-terminal amino acid analysis revealed the sequence YNVFPRTLKWSK-MXL demonstrating the complete loss of the propeptide domain and assigning Tyr 85 as the first amino acid of the active enzyme. The Asn 98 residue was clearly glycosylated due to the lack of a signal during amino acid sequencing.
Activation of Procollagenase-3 by APMA—Purified procollagenase-3 was activated by treatment with 1 mM APMA in a time-dependent fashion (Fig. 3A). The activity generated was monitored using Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 as described by Knight et al. (1992). Full activation was achieved after a time interval of 30 min. Parallel analysis of the Mr of the enzyme by SDS-PAGE revealed that the Mr of the proenzyme was reduced through at least one short-lived intermediate form (Mr 50,000) to the final active collagenase-3 displaying a Mr of 48,000 (Fig. 1, lanes 3-5). Preincubation of procollagenase-3 with a twofold molar excess of recombinant TIMP-1 prior to APMA activation prevented the formation of the low-Mr final active enzyme form (Mr 48,000), clearly demonstrating that this process was autoproteolytic (Fig. 1, lane 7). Under these conditions, two intermediate enzyme forms were demonstrated, displaying apparent Mr values of 56,000 and 50,000. The Mr 56,000 intermediate was not detectable in the absence of TIMP-1, which might indicate that it was extremely unstable, being rapidly converted to the Mr 50,000 species. This might indicate that procollagenase-3 activation by APMA is a three-step process.
N-terminal amino acid sequencing showed the initial generation of the sequence LEVTGKL after a 4-min activation of procollagenase-3 by APMA, which is due to the cleavage of the Gly 57 -Leu 58 peptide bond (Fig. 2). During the progress of activation, the initial intermediate form (M r 50,000) was converted to the final fully active enzyme by hydrolysis of the Glu 84 -Tyr 85 peptide bond leading to the release of the complete propeptide domain (Fig. 2).
Activation of Procollagenase-3 by Stromelysin-1—The ability of stromelysin to activate procollagenase-3 was monitored by following the increase in the rate of hydrolysis of Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 and the loss of the procollagenase-3 propeptide by SDS-PAGE (Fig. 3B and Fig. 1, lanes 9-11). At the concentration of procollagenase-3 tested, there was no detectable activation by autoproteolysis, but the addition of trypsin-activated stromelysin led to an increase in collagenase-3 activity. Activation of procollagenase-3 by stromelysin was dependent on the stromelysin concentration employed during the incubation period (Fig. 3B). Analysis of the reaction products by SDS-PAGE revealed that the proenzyme was converted to a doublet of Mr 50,000 and 48,000, respectively (Fig. 1, lane 10). N-terminal sequence analysis of the stromelysin-activated collagenase-3 showed that the Gly57-Leu58 and Glu84-Tyr85 peptide bonds were cleaved. Thus, stromelysin activation of procollagenase-3 proceeds by a two-step mechanism via intermediates also obtained during APMA-induced autoproteolytic activation (Fig. 2).
Activation of Procollagenase-3 by TPCK-treated Trypsin—Procollagenase-3 was rapidly activated by TPCK-trypsin through a very transient intermediate form of Mr 52,000, which was only faintly visible after mixing of the proenzyme with trypsin at the start of the reaction. During the activation progress, the proenzyme was converted to the active collagenase-3 displaying a molecular mass of 48,000, which showed the new N terminus Tyr85 as a result of the hydrolysis of the Glu84-Tyr85 peptide bond. Thus it was not the result of tryptic cleavage, which should occur after Lys or Arg residues in the P1 position. It must therefore be concluded that an initial tryptic cleavage in the propeptide domain led to the loss of the rest of the propeptide by an autoproteolytic event. Furthermore, the active enzyme was not stable in the presence of TPCK-treated trypsin and was further hydrolyzed into smaller fragments, which might be due to tryptic cleavage at the Lys257-His258 or Lys260-Thr261 peptide bonds within the hinge region of collagenase-3. Identical results were obtained when a mixture of TPCK-treated trypsin and stromelysin was used to activate procollagenase-3. As the collagenolytic activity of collagenase-3 is dependent on the presence of the C-terminal domain, 2 we did not determine the specific activity of trypsin-activated collagenase-3, since high amounts of the catalytic and C-terminal domains were present in the reaction mixture even after only short incubation times (Fig. 1, lane 13). In addition, the activity versus the peptide substrate declined during prolonged incubation such that after 3 h only 60% of the initial maximal activity was retained, while the active enzyme was completely converted to the catalytic and C-terminal domains, respectively (Fig. 1, lane 14).
Determination of the Substrate Specificity of Collagenase-3: Physiologically Relevant Substrates—Active collagenase-3 degraded the interstitial collagens (I, II, III) at 25°C into typical 3/4 and 1/4 fragments. Collagenase-3 cleaved type II collagen about 5 times faster than type I and 6 times faster than type III collagen. Attempts to quantitate the cleavage of soluble type I, II, and III collagen using the SDS-gel scanning protocol (Welgus et al., 1981) were performed at varying enzyme to substrate ratios, but it proved impossible to establish linearity. In addition, we found that the 3/4 fragments were stained more efficiently than intact collagen, which made it impossible to accurately quantify collagenolysis. Quantitative comparison of the activity of collagenase-3 relative to those of fibroblast or neutrophil collagenase was therefore only possible by generating data simultaneously using the 14C-labeled type I collagen diffuse fibril assay, and these are summarized in Table I. Collagenase-3 displayed a specific activity of 100 μg/min/nmol, which was comparable with the values obtained for "superactivated" fibroblast or APMA-activated neutrophil collagenase. In contrast, "superactivated" neutrophil collagenase was 3 times as active and can be assigned as the most efficient type I collagenolytic enzyme in humans.
2 V. Knäuper and G. Murphy, unpublished results.
The gelatinolytic activities of collagenase-3 and its homologous counterparts were determined using [14C]gelatin (Table I). Collagenase-3 displayed the highest specific activity, 90.7 μg/min/nmol. Thus the enzyme was 44 times more efficient than fibroblast collagenase and 3-8 times more efficient than neutrophil collagenase.
The rapid proteolytic degradation of two different serpins (α1-antichymotrypsin and plasminogen activator inhibitor 2) by highly purified active collagenase-3 was demonstrated by SDS-PAGE, while antithrombin III was resistant to degradation (not shown). Further analysis of the α1-antichymotrypsin cleavage products by N-terminal amino acid sequence determination revealed that collagenase-3 hydrolyzed the Ala362-Leu363 peptide bond within the extended reactive site loop of the serpin, two amino acid residues downstream from the reactive site center. The cleavage of the Ala362-Leu363 peptide bond of α1-antichymotrypsin coincides with its inactivation as recently demonstrated by Mast et al. (1991) for collagenase (MMP-1) and stromelysin (MMP-3).
Quenched Fluorescent Peptide Substrates—Active collagenase-3 cleaved the peptide substrates Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 and Mca-Pro-Cha-Gly-Nva-His-Ala-Dpa-NH2 at the Gly-Leu and Gly-Nva peptide bonds as revealed by amino acid analysis of the HPLC-purified reaction products. Active site titrations of fully APMA-activated collagenase-3 were performed using the synthetic inhibitor CT1399 to determine the enzyme concentration. The initial rate of substrate hydrolysis showed linear dependence on substrate concentration in the concentration range 0.7-8 μM, demonstrating that Km >= 8 μM. At substrate concentrations greater than 8 μM, estimates could not be made due to insolubility of the substrates and absorptive quenching. Therefore, individual values of kcat and Km could not be determined. The values of kcat/Km for the hydrolysis of both substrates were estimated at substrate concentrations of 0.7 and 1.4 μM, which fulfilled the requirement of [S] << Km, allowing direct determination of kcat/Km. Simultaneously, kcat/Km values for fibroblast and neutrophil collagenase were determined under identical conditions and compared with the values obtained for collagenase-3 (Table II). Collagenase-3 hydrolyzed both synthetic peptide substrates 70-100 or 7-10 times more efficiently than fibroblast or neutrophil collagenase, respectively. Thus collagenase-3 is the most potent peptidolytic enzyme of all three homologous collagenases.
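Under the [S] << Km condition used here, the initial rate reduces to v0 ≈ (kcat/Km)·[E]·[S], so kcat/Km can be read directly from a single rate measurement. A minimal sketch follows; the numbers are placeholders, not values from the paper.

```python
def kcat_over_km(v0: float, enzyme_conc: float, substrate_conc: float) -> float:
    """Estimate kcat/Km (M^-1 s^-1) from an initial rate measured at [S] << Km.

    v0 -- initial rate of substrate hydrolysis (M s^-1)
    enzyme_conc, substrate_conc -- molar concentrations of enzyme and substrate
    """
    return v0 / (enzyme_conc * substrate_conc)

# Placeholder example: 1 nM enzyme, 0.7 uM substrate, v0 = 1e-10 M/s
print(f"kcat/Km ~ {kcat_over_km(1e-10, 1e-9, 0.7e-6):.2e} M^-1 s^-1")
```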
Inhibition of Active Collagenase-3 by TIMPs—Inhibition studies of active collagenase-3 with TIMP-1, TIMP-2, and TIMP-3 were performed by preincubation of collagenase-3 with TIMP concentrations (determined by active site titration with active stromelysin) up to 2 times the enzyme concentration (determined by active site titration with CT1399), using 2-h preincubations. Residual enzymic activities were determined by hydrolysis of Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 and plotted versus TIMP concentration. Analysis of the data revealed that all three TIMPs inhibited the enzyme in a 1:1 stoichiometric fashion (Fig. 4). Initial kinetic analysis of the collagenase-3-TIMP interaction demonstrated that TIMP-1 showed association rate constants in the region of ~8 × 10^6 M^-1 s^-1 and TIMP-3 ~10 × 10^6 M^-1 s^-1, while the value for TIMP-2 was ~1.8 × 10^6 M^-1 s^-1. 3 Thus TIMP-3 reacted 1.2 times faster than TIMP-1 and 5.5 times faster than TIMP-2.
Inhibition of Active Collagenase-3 by Hydroxamic Acid-based Inhibitors and Kinetic Analysis of Their Interaction-
The collagenase-3 concentration was determined by active site titration using the synthetic hydroxamic acid-based peptide inhibitors CT1399 and CT1847. These inhibitors are competitive and react with 1:1 stoichiometry, as revealed from x-ray crystallographic analyses of structurally related inhibitors with the catalytic domains of fibroblast and neutrophil collagenase (Lovejoy et al., 1994; Borkakoti et al., 1994; Bode et al., 1994). Apparent kon values for their interaction with collagenase-3 were determined as described under "Experimental Procedures." The enzyme (50 pM) was added to the reaction mixture containing 0.7 μM substrate and 100-300 pM CT1399 or 2-8 nM CT1847. Inhibition was observed as curvature in the progress of substrate hydrolysis and analyzed according to Willenbrock et al. (1993). Equivalent assays in the absence of inhibitor revealed that curvature was due only to inhibition by CT1399 or CT1847 and was not due to enzyme instability or substrate depletion. The initial velocities v0 were independent of inhibitor concentration, and kobs showed linear dependence on inhibitor concentration. Thus it can be concluded that inhibition of collagenase-3 by CT1399 and CT1847 proceeds via a simple bimolecular collision. The second order rate constants kon were in the range of 1.4 × 10^6 M^-1 s^-1 for CT1847 and 17.0 × 10^6 M^-1 s^-1 for CT1399. This revealed that CT1399 reacted 12.1 times faster than CT1847. CT1399 showed an apparent Ki value of 4 pM and CT1847 a value of 540 pM. The Ki value for CT1399 of 4 pM can be regarded only as an upper estimate, since analysis at enzyme concentrations below Ki could not be performed due to the limitations in assay sensitivity and the lack of enzyme stability at these low concentrations.
FIG. 3. A, activation of procollagenase-3 by APMA. Procollagenase-3 was incubated at a concentration of 626 nM in the presence of 1 mM APMA at 37°C. At the indicated time points, aliquots were removed and assayed using Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2. Results are presented as rates of substrate hydrolysis (M^-1 s^-1). B, activation of procollagenase-3 by stromelysin. Procollagenase-3 was incubated with 1.4 or 2.8 μg of active stromelysin at 37°C. At the indicated time intervals, aliquots were removed and assayed for activity using Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2. Results are presented as rates of substrate hydrolysis (M^-1 s^-1). The three data series show procollagenase-3 activated by 1.4 μg of stromelysin, procollagenase-3 activated by 2.8 μg of stromelysin, and procollagenase-3 in the presence of buffer.
TABLE I. Comparison of the collagenolytic and gelatinolytic activities of collagenase-3 (MMP-13), fibroblast collagenase (MMP-1), and neutrophil collagenase (MMP-8). The activated collagenases were incubated with 14C-labeled type I collagen in a diffuse fibril assay at 35°C or with [14C]gelatin at 37°C, essentially as described by Cawston et al. (1981).
DISCUSSION
Human collagenase-3 is a novel member of the matrix metalloproteinase superfamily and has been cloned from a breast tumor cDNA library (Freije et al., 1994). The enzyme is expressed in the surrounding endothelia of the tumor and may be involved in tumor progression and metastasis. Consequently, biochemical analysis of the activation mechanism, substrate specificity, and inhibition profile of collagenase-3 is of vital importance in order to understand its possible role in vivo. We have, therefore, expressed and purified recombinant human procollagenase-3 and analyzed its biochemical properties in detail and compared these with the homologous human collagenases and gelatinase A.
Procollagenase-3 showed a high degree of N-linked glycosylation as demonstrated by enzymatic deglycosylation (11.7% of its Mr corresponds to N-linked sugars). Amino acid sequencing revealed a lack of signal for the Asn98 residue; thus it can be deduced that the glycosylation site N98LT carries N-linked sugars. This glycosylation site is conserved between collagenase-3, neutrophil collagenase (Knäuper et al., 1990b), and gelatinase-B and is occupied in all three enzymes. The role of the high levels of glycosylation observed for these three enzymes is not quite clear to date. It has been speculated that glycosylation of neutrophil collagenase and gelatinase-B might be important for targeting these enzymes to the specific granules of neutrophils, where they are stored prior to exocytosis. However, in the case of collagenase-3 it is not clear where the enzyme might be produced in vivo and why it carries a relatively high amount of N-linked sugars. It is most unlikely that the glycosylation will cause any changes in the enzymatic properties, activation, or TIMP interaction of collagenase-3, since studies on the natural and recombinant catalytic domain of neutrophil collagenase have shown that the unglycosylated recombinant protein has indistinguishable enzymatic properties (Knäuper et al., 1993a; Schnierer et al., 1993).
Activation of matrix metalloproteinases is one of the control mechanisms regulating extracellular connective tissue turnover. We have therefore studied the mechanisms leading to procollagenase-3 activation. Stromelysin activated procollagenase-3 by a two-step mechanism, which is similar to that observed for gelatinase-B (Shapiro et al., 1995; Ogata et al., 1992). In addition, neutrophil procollagenase was activated by stromelysin by a single-step mechanism (Knäuper et al., 1993b), while fibroblast procollagenase cannot be directly activated by stromelysin (Murphy et al., 1987; Suzuki et al., 1990). The peptide bonds cleaved within procollagenase-3, neutrophil procollagenase and progelatinase-B seem to be readily accessible to stromelysin, while fibroblast procollagenase is resistant until proteolysis of upstream regions of the propeptide has been effected by combined trypsin-stromelysin treatment, leading to "superactivation" (Murphy et al., 1987; Suzuki et al., 1990). In contrast, procollagenase-3 was very susceptible to either trypsin alone or trypsin in combination with stromelysin, which led to the rapid loss of the C-terminal domain, thereby destroying the collagenolytic activity of the enzyme. Although relatively high amounts of stromelysin were needed to activate procollagenase-3 efficiently over 6 h, this activation pathway may still be of relevance in vivo, since very high levels of stromelysin have been observed under certain pathological conditions (Walakovits et al., 1992; Matrisian and Bowden, 1990).
Collagenase-3 can be assigned to the collagenase subfamily of matrix metalloproteinases, according to substrate specificity analysis, hydrolyzing the interstitial collagens I-III into 3/4 and 1/4 fragments and preferentially cleaving type II collagen over type I and III. In contrast, fibroblast collagenase preferentially cleaves type III and neutrophil collagenase type I collagen (Welgus et al., 1981; Hasty et al., 1987). Thus the three collagenases show distinct collagen substrate specificities, which implies that they may have evolved as specialized enzymes in order to dissolve different connective tissues, which vary in their collagen composition. Collagenase-3 may be especially important in the turnover of articular cartilage, which is rich in type II collagen. The specific activities of the three collagenases against type I collagen were in the range of 100-120 μg/min/nmol enzyme, with the exception of "superactive" neutrophil collagenase, which cleaved 338 μg/min/nmol. By comparison of the ratios of collagenolytic/gelatinolytic activity (Table III) or collagenolytic/peptidolytic activity (not shown) of the three enzymes, it becomes clear that fibroblast collagenase is the most specific collagenase within this group, although the specific collagenolytic activity of "superactive" neutrophil collagenase is 3 times higher.
FIG. 4. Inhibition of active collagenase-3 by the three homologous TIMPs. Active collagenase-3 (2 nM) was incubated with increasing concentrations of either TIMP-1, TIMP-2 or TIMP-3.
Collagenase-3 cleaved gelatin and the two synthetic peptide substrates with highly improved efficiency when compared with fibroblast or neutrophil collagenase. Thus, it appears that collagenase-3 not only efficiently degrades type I collagen, but it might also act as a gelatinase to further degrade the initial cleavage products of collagenolysis to small peptides suitable for further metabolism. This is in agreement with results obtained earlier for rat collagenase, which shows relatively high levels of gelatinolytic activity (Welgus et al., 1985) and shares the highest degree of homology with human collagenase-3, as does mouse collagenase (Henriet et al., 1992;Quinn et al., 1990). According to the high degree of functional and sequence homology between human collagenase-3 and the rodent collagenases, these enzymes belong to the collagenase-3 subfamily (MMP-13) of matrix metalloproteinases and are distinct from human fibroblast collagenase (MMP-1). We therefore propose to introduce a revised nomenclature for the rodent collagenases to prevent further confusion in the literature assigning them as MMP-13. Indeed, it may be concluded that rat and mouse cells express only collagenase-3 (MMP-13), there being no evidence to date for a homologous MMP-1 in either rat or mouse. The relative distribution of fibroblast collagenase (MMP-1) and collagenase-3 (MMP-13) in human tissues awaits detailed studies, but initial observations suggest that MMP-1 is predominant.
Comparison of the ratios of gelatinolytic over peptidolytic activity of collagenase-3 with the values obtained for human gelatinase A revealed that collagenase-3 is 10 times less efficient than wild-type gelatinase A (Murphy et al., 1994). The high efficiency of wild-type gelatinase A against gelatin as a substrate can be attributed to the fibronectin-like type II repeats, since a gelatinase A deletion mutant (Δ V191-Q364 gelatinase A) lacking these sequence motifs has a similar ratio of gelatinolytic over peptidolytic activity to collagenase-3 (Murphy et al., 1994). Thus collagenase-3 shares some proteolytic characteristics with the gelatinase subfamily of matrix metalloproteinases, which is reflected in common structural elements, localized within the active-site cleft, shared by collagenase-3 and the gelatinases, as discussed below.
Sequence alignments of the active site residues of the collagenases with the gelatinases revealed that the Arg (Fig. 5, number 1) in fibroblast collagenase is changed to Ile or Leu in collagenase-3, the rodent collagenases, neutrophil collagenase, and the gelatinases. It has been noted by Stams et al. (1994) that the S′1 pocket in neutrophil collagenase is significantly larger than the equivalent pocket in fibroblast collagenase, and from the presence of Leu within collagenase-3 and the gelatinases we can deduce that these enzymes have a similarly enlarged S′1 pocket and structure. Hence these enzymes should be able to hydrolyze a broader range of substrates. Second, collagenase-3, neutrophil collagenase, and the rodent homologues share a Pro residue (Fig. 5, number 3) with the gelatinases, while fibroblast collagenase has an Ile residue in this position. Furthermore, collagenase-3, the rodent enzymes, and the gelatinases contain negatively charged residues just preceding the third His residue of the catalytic zinc binding motif (either Asp or Glu; Fig. 5, number 2). In contrast, this residue corresponds to Ser or Ala in fibroblast or neutrophil collagenase. The presence of a negatively charged residue in collagenase-3 and the gelatinases might well have implications for the polarization of the zinc-bound water molecule within these enzymes, possibly increasing its nucleophilic nature (Fig. 6). This would certainly account for the increased proteolytic efficiency of collagenase-3 and the gelatinases, as indicated by our experimental results, but it remains to be confirmed by site-directed mutagenesis.
Analysis of the inhibition profile of collagenase-3 by the three homologous TIMPs revealed that all react in 1:1 stoichiometry by forming noncovalent tight-binding complexes, which is in agreement with earlier published data on other matrix metalloproteinases (for a review, see Murphy and Willenbrock (1995)).
Key residues specifically conserved between the gelatinases and collagenase-3 (and partially MMP-8), which may be of importance for gelatinolytic specificity, are indicated in boldface italics.
Comparison of the efficacy of two synthetic hydroxamate inhibitors against collagenase-3 confirmed the structural similarity to the gelatinases. CT1399, which has a Ki of less than 10 pM for gelatinase A and 16 pM for gelatinase B, had an approximate Ki of ~4 pM for collagenase-3 and a Ki of 385 nM for MMP-1. Similarly, CT1847, which has a Ki of 1.55 nM against gelatinase A and 2.1 nM against gelatinase B, had Ki values of 0.54 nM against collagenase-3 and 2.9 nM against MMP-1. It may be concluded that inhibitors directed against gelatinases will also be efficient in the control of collagenase-3.
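As a back-of-the-envelope check of the selectivity implied by these constants, the fold-preference of each inhibitor for collagenase-3 (MMP-13) over fibroblast collagenase (MMP-1) can be computed directly from the quoted Ki values. The short sketch below is our illustration only, not part of the original kinetic analysis.

```python
# Illustrative only: fold-selectivity of the two hydroxamate inhibitors for
# MMP-13 over MMP-1, using the Ki values quoted above (all in nanomolar).
KI_NM = {
    "CT1399": {"MMP-13": 0.004, "MMP-1": 385.0},   # ~4 pM vs 385 nM
    "CT1847": {"MMP-13": 0.54,  "MMP-1": 2.9},
}

for inhibitor, ki in KI_NM.items():
    fold = ki["MMP-1"] / ki["MMP-13"]
    print(f"{inhibitor}: {fold:,.0f}-fold selective for MMP-13 over MMP-1")
# CT1399: roughly 96,000-fold; CT1847: roughly 5-fold
```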
Our studies have indicated that human collagenase-3 is a potent proteinase with a broad spectrum of activity against extracellular matrix proteins (data not shown) as well as collagenolytic and high gelatinolytic activity. The regulation and location of its expression relative to the more specific fibroblast collagenase will be a matter of great importance for future study.
"year": 1996,
"sha1": "abace383b97439a24ca3256ac932d7a4a27034ba",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/3/1544.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "751499d784374eb688ace51553592773e3e67997",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
How do citizens perceive farm animal welfare conditions in Brazil?
The aim of this study is to understand the perceptions of Brazilian citizens about the actual conditions of farm animal welfare in the poultry, beef, and dairy supply chains. To reach this aim, an online survey was conducted. The analysis was based on descriptive statistics and three logistic regression models. Results of the descriptive statistics showed that citizens in Brazil had mostly negative perceptions about the actual conditions of animal welfare in the poultry, beef, and dairy supply chains. Results of the logistic regression models showed that, in the poultry and dairy supply chains, citizens with a background in agricultural/veterinary sciences, and citizens who reported a higher level of knowledge about these supply chains, were more likely to perceive as bad the actual conditions of farm animal welfare. In the poultry supply chain, citizens who reported previous contact with poultry farms were also more likely to perceive as bad the actual condition of farm animal welfare. In addition, the perception that farmers are mainly focused on the economic aspect of farming and less on animal welfare, the perception that animals do not have a good quality of life while housed on farms, and the perception that animals are not adequately transported and slaughtered negatively impacted perceptions about the actual conditions of farm animal welfare in the three supply chains. We conclude that a protocol aimed at improving citizens' perceptions about the actual conditions of farm animal welfare should focus on all phases of the supply chains.
University of Grande Dourados/Faculty of Management, Accounting and Economics. Before starting data collection, the questionnaire was tested with 20 participants. All the questions were translated to Portuguese.
To collect the data, we conducted an anonymous online survey. In a first step, we contacted by phone the human resource departments of several universities across Brazil. In this first contact, we explained the purpose of our research and asked whether the department would forward a survey link to the personal e-mail of students, professors, and administration staff. Upon acceptance, we sent a follow-up e-mail to the human resource departments with the survey link and a brief description of the research, which was then disseminated online to the academic community. Each university disseminated the questionnaire of only one supply chain. We received 1,617 questionnaires, of which three were disregarded because they were incomplete. The final number of questionnaires was 728 for the poultry supply chain, 586 for the beef supply chain, and 300 for the dairy supply chain. The data collection took place from November 2016 until December 2017.
Statistical analysis
Statistical analysis was conducted in two steps. In a first step, we used factor analysis to reduce the number of items used to represent participants' perceptions about animal welfare. Principal component analysis was used as the extraction method. The criterion to define the number of factors was an eigenvalue greater than one [19]. Items were included in a factor when they presented factor loadings greater than 0.5. Factor scores were generated for subsequent analysis [19].
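A minimal sketch of this first step is given below (our illustration, not the authors' code): principal-component extraction on the item correlation matrix, the eigenvalue-greater-than-one rule, and a 0.5 loading cut-off, using only numpy. The array name `items` and the shapes are assumptions.

```python
# Hedged sketch: PCA-based factor extraction with the Kaiser criterion and a
# 0.5 loading cut-off. `items` is an (n_participants x n_items) Likert matrix.
import numpy as np

def extract_factors(items, loading_cutoff=0.5):
    z = (items - items.mean(0)) / items.std(0, ddof=1)   # standardize items
    corr = np.corrcoef(z, rowvar=False)                   # item correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]                       # sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > 1.0                                    # eigenvalue > 1 criterion
    loadings = eigvec[:, keep] * np.sqrt(eigval[keep])     # unrotated loadings
    assigned = np.abs(loadings) > loading_cutoff           # item-to-factor assignment
    scores = z @ eigvec[:, keep]                            # factor scores
    return loadings, assigned, scores
```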
In a second step, we ran three logistic regression models. The three dependent variables were participants' perceptions about the actual conditions of FAW in each supply chain. In the original questionnaires, this variable was measured on a Likert scale from 1 to 5 (S1 Table). In order to run the logistic models, we transformed participants' perceptions about the actual conditions of FAW in each supply chain into a binary variable, where participants who answered 1 or 2 were gathered into a bad-condition group (Bad: 0) and participants who answered 3, 4 or 5 were gathered into a regular-condition group (Regular: 1). We tested the impact of two groups of independent variables: participants' socio-demographic characteristics and participants' perceptions about animal welfare.
The significance level was p < 0.05.
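The second step can be sketched as follows (column names, the file name, and the covariate list are hypothetical; the original analysis may have used different software): the 1–5 Likert response is recoded into Bad (0) / Regular (1) and a logistic regression is fitted via statsmodels.

```python
# Hedged sketch: binarize the Likert outcome and fit one logistic regression.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("poultry_survey.csv")                    # hypothetical file
df["faw_binary"] = (df["faw_likert"] >= 3).astype(int)    # 1-2 -> Bad(0), 3-5 -> Regular(1)

covariates = ["age", "female", "agri_vet_background",
              "prior_farm_contact", "knowledge_level",
              "FI_score", "LQ_score", "HC_score"]          # factor scores from step 1
X = sm.add_constant(df[covariates])
model = sm.Logit(df["faw_binary"], X).fit()
print(model.summary())                                     # p < 0.05 taken as significant
```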
Descriptive statistics
Descriptive statistics of participants' socio-demographic characteristics are presented in S3 Table. Socio-demographic characteristics were similar for the participants in the poultry and beef supply chains, but somewhat different for participants in the dairy supply chain. Participants who answered the dairy supply chain questionnaire were, on average, older, more educated, and earned a higher income compared to participants who answered the poultry and beef supply chain questionnaires. Apart from these differences, other socio-demographic characteristics were similar across the three supply chains: most participants were female, most studied/worked outside fields related to agricultural/veterinary sciences, most had previous contact with farm animals, most lived in urban areas, and most were pet owners.
Descriptive statistics of participants' perceptions about the actual conditions of FAW in each of the three supply chains and other questions related to animal welfare are presented in S3 Table.
Descriptive statistics about the statements used to measure participants' perceptions about animal welfare are presented in S2 Table. For the statements related to FI (Perc1, Perc2, Perc3, Perc4), the means were above or close to 4, which indicates that participants agreed that most farmers focus too much on the economic aspect of farming and less on animal welfare. For the statements related to LQ (Perc5, Perc6, Perc7, Perc8), the means were below or close to 3, which indicates that participants did not agree that animals have a good quality of life while housed on farms. For the statements related to HC (Perc9, Perc10), the means were below or close to 2, which indicates that participants agreed that humans are allowed to use animals for consumption.
Logistic regression models
We tested the impact of socio-demographic characteristics and participants' perceptions about animal welfare on their perceptions about the actual condition of FAW in each supply chain.
Results of the three logistic regression models are presented in Table 1. The socio-demographic characteristics age, gender, pet ownership, and consumption of animal products did not significantly impact participants' perceptions about the actual condition of FAW in any supply chain. In the poultry supply chain, participants who reported previous contact with poultry farms were more likely to perceive as bad the actual condition of FAW compared to participants who did not report previous contact. In the poultry and dairy supply chains, participants in fields of study related to agricultural/veterinary sciences were more likely to perceive as bad the actual conditions of FAW compared to participants outside these fields. In those supply chains, participants who reported a higher level of knowledge about the supply chain were also more likely to perceive as bad the actual conditions of FAW.
A potential limitation of this study concerns selecting participants only from the academic community. In comparison to the Brazilian population, our sample is younger, more educated, and earns a higher income [23]. Although we acknowledge that our sample is unbalanced in terms of education, income, and age, we argue that academic community members have more access to information that might drive changes in production systems.
"year": 2018,
"sha1": "7cc949f3937a9247c672d22c799aa33a797b69c9",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/07/30/380550.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "8b0fe2b88db335024bedf5705f0a54fbd615a724",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Business"
]
} |
Evidence of high production levels of thermostable dextrinizing and saccharogenic amylases by Aspergillus niveus
The aim of this work was to analyze the effect of several nutritional and environmental parameters on amylase production by a novel isolate of the thermotolerant filamentous fungus Aspergillus niveus. This strain produced high levels of amylolytic activity in Khanna liquid medium supplemented with commercial starch, initial pH 6.5, under static conditions for 72 h. Among the tested carbon sources, milled corn, oatmeal, soluble potato starch and maisena were the best inducers of enzymatic secretion (220, 180, 170 and 150 U/mL, respectively). The main hydrolysis products, analyzed by thin layer chromatography, were glucose, maltose and traces of maltooligosaccharides, suggesting the presence of α-amylase and glucoamylase activities in the crude extract. The optimal pH values were 4.5 and 5.5 and the optimum temperature was 65°C. The enzymes were fully stable for up to 1 h at 55°C. It was possible to verify the presence of three bands with amylolytic activity in non-denaturing polyacrylamide gel electrophoresis (PAGE). These aspects and other properties suggest that the amylases produced by A. niveus might be suitable for biotechnological applications.
INTRODUCTION
Amylases are hydrolytic enzymes produced by plants, for the assimilation of starch present in some types of roots; by animals, for the digestion of starch present in food (Aiyer, 2005); and by many prokaryotic (Salahuddin et al., 2011) and eukaryotic microorganisms that use starch as a carbon source (Peixoto et al., 2003; Reddy et al., 2003). α-Amylase (E.C. 3.2.1.1; 1,4-α-D-glucan glucanohydrolase) is a key enzyme in the metabolism of a wide variety of living organisms that use starch as carbon and energy source. This enzyme randomly cleaves the internal α-1,4-glucosidic linkages of starch, glycogen, and related polysaccharides to produce oligosaccharides of different sizes (Reddy et al., 2003; Bhanja et al., 2007). α-Amylases are among the most important industrial enzymes because of their potential application in processes such as fermentation, textiles, pharmaceuticals, detergents, brewing, baking, paper and food production (Gupta et al., 2003; Bhanja et al., 2007; Salahuddin et al., 2011). α-Amylase is essential throughout starch processing and plays an important role in the liquefaction of starch and its subsequent saccharification, in which larger carbohydrate chains are hydrolyzed and converted into smaller carbohydrates (Chaplin and Bucke, 1990; Baks et al., 2006).
Glucoamylase hydrolyzes α-1,4 and α-1,6 linkages of starch and related polymers to produce glucose as the sole end-product. Glucoamylase also hydrolyzes other starch-related oligo- and polysaccharides, and shows a preference for the hydrolysis of maltooligosaccharides of at least six residues (Sukara and Doelle, 1989; Cereia et al., 2006). One of the most important applications of glucoamylases is the production of high-glucose syrups from starch, and these enzymes are also used in the production of ethanol and in the baking and brewing industries (Imai et al., 1994).
The current work constitutes the first study on the concomitant production of α-amylase and glucoamylase by Aspergillus niveus, a fungus considered an excellent producer of other enzymes such as xylanases (Peixoto-Nogueira et al., 2008; Betini et al., 2009), amylases (Silva et al., 2010) and pectinases (Maller et al., 2011). Thus, the aim of this work was to describe the production of amylases by A. niveus in submerged fermentation supplemented with different carbon sources, and to characterize the end-products of hydrolysis of soluble starch.
Cultivation conditions
The compositions of Adams, Czapek, SR and Khanna media (Adams, 1990; Wiseman, 1975; Rizzatti et al., 2001; Khanna et al., 1995, respectively) were evaluated for amylase production using cultivation under stirred or stationary conditions for 72 h at 40°C. Amylase production was standardized in Khanna medium, and a time course was carried out for up to six days at 40°C. In order to study the effects of pH and temperature, the fungus was cultivated in Khanna medium at different pH values (range 4.5 to 7.5) and temperatures (25 to 50°C). The effect of the physical conditions of incubation (static vs. stirred) was tested by incubating the fungus under alternating static and stirred conditions for 144 h at 40°C.
Preparation of crude enzyme and growth quantification
Filtrates were obtained by filtration through filter paper in a Buchner funnel. The filtrates were dialyzed against 0.1 M sodium acetate buffer, pH 5.0, at 4°C, overnight. After that, the samples were used as the source of crude extracellular amylolytic activity, while mycelia were dried in an oven to constant weight for quantification of the dry biomass.
Enzymatic assay
The amylase activities were determined by measuring the production of reducing sugars using 3,5-dinitrosalicylic acid (DNS) as described by Miller (1959). The assay was carried out at 65°C using 1.0% starch solution in 0.1 M sodium acetate buffer, pH 5.0. One enzyme unit was defined as the amount of enzyme that releases reducing sugar at an initial rate of 1 µmol min⁻¹ at 65°C. In addition, the enzyme activity was measured according to Cereia et al. (2006), using soluble starch as substrate, in which the amount of glucose released was estimated by the peroxidase/glucose oxidase method. Protein was determined by the Lowry method (Lowry et al., 1951) using bovine serum albumin as standard.
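For illustration, the unit definition above translates into a simple calculation from the DNS assay reading; the sketch below uses entirely hypothetical numbers (absorbance, standard-curve slope, assay time and enzyme volume) and is not taken from the original work.

```python
# Hedged sketch (all numbers hypothetical): convert a DNS assay reading into
# activity units, 1 U = 1 umol reducing sugar released per minute at 65 C.
def units_per_ml(abs540, slope_abs_per_umol, assay_min, enzyme_ml, dilution=1.0):
    """abs540: blank-corrected absorbance; slope: from a maltose standard curve."""
    umol_released = abs540 / slope_abs_per_umol
    return (umol_released / assay_min) / enzyme_ml * dilution

# Example: A540 = 0.42, standard curve slope = 0.35 abs/umol,
# 10 min assay, 0.1 mL crude extract diluted 1:10.
print(units_per_ml(0.42, 0.35, assay_min=10, enzyme_ml=0.1, dilution=10))  # ~12 U/mL
```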
Effect of different carbon sources on amylases production
The effect of different carbon sources was evaluated using 1% soluble starch, milled corn, oatmeal, maisena, amylopectin, maltose, rice straw, raffinose, wheat bran, sugar cane bagasse, lactose, sucrose, corn cob, arabinose, or no carbon source. The temperature, cultivation time and pH of cultivation were 40°C, 72 h and 6.5, respectively. The enzymatic assay was performed as described in the previous section. The cultures with soluble starch and without a carbon source were used as controls.
Enzymatic characterization
The optimum pH was determined at 65°C using citrate-phosphate buffer (pH range 3.0 to 8.0). The pH stability was determined at 30°C, for 2 h, after pre-incubation of the diluted enzyme in citrate-phosphate buffer at different pH values (pH range 3.0 to 8.0). The thermostability was determined by measuring the residual activity after incubation of the diluted enzyme, in the absence of substrate, at 50 to 70°C in 0.1 M sodium acetate buffer, pH 5.0, for 6 h.
Amylolytic activities in polyacrylamide gel electrophoresis (PAGE)
Non-denaturing polyacrylamide gel electrophoresis (PAGE) was performed in 5 to 10% gels according to the Davis method (Davis, 1964). Two identical samples were applied to the gel. After electrophoresis, the gel was cut vertically into two parts, yielding two lanes with identical protein migration. (A) One part was incubated with 0.5 M acetate buffer, pH 5.0, for 30 min and immediately immersed in 1% (w/v) potato starch, where it was maintained for 20 min; the amylolytic activity was visualized by incubating the gel with a mixture of 0.3% KI and 0.15% I2 solutions until bands of amylolytic activity appeared. (B) The other part of the gel was sliced into several segments, which were macerated in 0.1 M sodium acetate buffer, pH 5.0, and incubated with 1% starch for 2 h. The products formed were quantified by DNS (Miller, 1959) and by GOD (glucose oxidase kit).
Reproducibility of the results
All data were statistically analyzed.
Evaluation of different cultivation media and combinations of physical methods to increase the enzymatic production
The nutrient composition of Khanna medium, under static conditions, gave the best amylase production (Figure 1A), with about 38% higher amylase yields than the SR medium, the second best nutrient composition. This result was used in subsequent experiments, in which the fungus was incubated in Khanna medium under stirred or static conditions, or a combination of the two, with a total incubation time of 72 h, initial pH 6.0, at 40°C (Figure 1B). This experiment was carried out to verify whether changes in the physical conditions of aeration during growth would result in an increase in enzyme secretion. The highest amylolytic activity was once more observed after 72 h of incubation under static conditions.
Effect of initial pH of the cultivation medium, time and temperature on the production of amylases
The highest enzymatic production occurred in a culture medium with initial pH 6.5 (Figure 2). These results are similar to those reported for Aspergillus fumigatus (Goto et al., 1998), Aspergillus oryzae Ahlburg (Cohen) 1042.72 (Bennamoun et al., 2004) and Penicillium fellutanum (Kathiresan and Manivannan, 2006). The specific activity was highest at 72 h of growth (Figure 2B) and the maximum enzymatic activity was detected at 40°C. At a higher temperature (50°C), a marked decrease in enzymatic production, to 25 U/mg protein, was observed (Figure 2C).
Effect of different carbon sources on amylases production
Among the tested carbon sources, milled corn, oatmeal, rice straw and soluble potato starch were the best inducers of enzymatic secretion (Figure 3). It was observed that milled corn and oatmeal, together with soluble potato starch, which have a high content of ions and vitamins, were important compounds for the growth of the microorganism. Other carbon sources tested, such as rice straw, constituted basically of cellulose and hemicellulose, were not specific inducers of amylase. Maisena TM (a commercial product obtained from corn starch), maltose, wheat bran, amylopectin and raffinose also showed excellent induction of amylases. On the other hand, sugar cane bagasse, corn cob, sucrose, lactose and arabinose were the worst inducers of amylase synthesis, showing amylolytic yields close to those of filtrates obtained from cultures incubated without a carbon source.
Different substrates hydrolysis
A. niveus amylases (3.6 U/mg) were used to hydrolyze cassava flour, corn flakes, barley flakes, rye flakes, oat flakes, wheat bran and soy flakes (Figure 4). The highest release of reducing sugars occurred when the enzymatic extract was incubated with cassava flour, and the lowest amount was observed when the substrate was soy flakes.
Hydrolysis products analysis
Crude extracts of A. niveus cultures grown in media supplemented with soluble starch, rice straw, maisena TM or maltose as carbon sources were assayed against commercial 1% starch. The reducing sugars formed during hydrolysis were analyzed by thin layer chromatography (Figure 5). The greatest diversity of maltooligosaccharides was formed when starch was the inducer, revealing mono-, di-, tri- and traces of higher oligosaccharides. On the other hand, when maltose was used as the carbon source, a lower diversity of sugars was observed. Filtrates of cultures induced by Maisena TM and rice straw presented maltose and glucose as the major hydrolysis products. The different end-product profiles of the four inducers suggest that distinct enzymatic complexes could be synthesized by A. niveus, or that the same enzymes could be produced at different expression levels. It is possible that the glucose bands were formed by the hydrolytic action of glucoamylase, while the maltose and maltotriose bands were formed by α-amylase activity.
Effects of temperature and pH on amylolytic activity
The enzymatic assays were carried out at pH 3.0 to 8.0 (Figure 6A). The highest activities were obtained at pH 4.5 and 5.5. These data suggest the presence of more than one enzyme with amylolytic activity in the crude extract, which can only be proven by elution from chromatography columns. An A. terreus α-amylase showed maximal activity at pH 5.0 (Ali and Hossain, 1991). On the other hand, an α-amylase of Cryptococcus flavus showed an optimum pH of 5.5 (Wanderley et al., 2004), and an α-amylase of A. oryzae showed an optimum pH of 6.0, close to neutrality (Carlsen et al., 1996). Testing the pH stability of A. niveus amylase (Figure 6B) showed good performance over a wide pH range (3.0 to 6.5). In order to analyze the effect of temperature, A. niveus enzymatic assays were carried out at 50 to 80°C (Figure 6C). The highest enzymatic levels were observed at 65°C, which is higher than the values determined for the mesophilic enzymes of A. terreus, A. terreus NA-170 and A. fumigatus (Nguyen et al., 2002; Ghosh et al., 1991; Silva and Peralta, 1998, respectively). The enzymes were stable for six hours at 50°C. At 55°C, 50% of the initial activity was retained after four hours. These data are important for possible application in industrial processes, bearing in mind the long hydrolysis periods required when these enzymes are used at high or moderate temperatures.
Amylolytic activity in polyacrylamide gel electrophoresis (PAGE)
The crude extract containing amylolytic activity was applied to a non-denaturing polyacrylamide gel (PAGE) (Figure 7) and, after the run, was revealed as described in the Materials and Methods section. It was possible to verify the presence of three bands with amylolytic activity revealed by the KI and I2 solutions (Figure 7A), by measuring the loss of binding capacity between starch and iodine resulting from the action of amylases. In order to better define the type of enzyme corresponding to each band with amylolytic activity, the gel was sliced and the final products were determined by DNS and GOD. It is possible to suggest that bands 1 and 2 correspond to glucoamylase activity and band 3 corresponds to α-amylase activity, because glucose was not detected in the latter sample (Figure 7B). The ability to produce an enzymatic complex is common among filamentous fungi. This is interesting, because these enzymes might be used, in association, at different stages of starch saccharification.
Conclusion
The presence of three enzymes with amylolytic activity in gel electrophoresis revealed a complex amylolytic system that efficiently produces glucose, maltose and maltooligosaccharides as starch hydrolysis products. The physicochemical characteristics of these enzymes, such as the pH and temperature optima for activity and their stability at high temperatures, demonstrate that the enzymes produced by this newly isolated thermotolerant filamentous fungus, Aspergillus niveus, have great potential for industrial application in the hydrolysis of starch.
"year": 2013,
"sha1": "b5ddb2d177f107e6eb18c738e8bdc45ccaa34c79",
"oa_license": "CCBY",
"oa_url": "http://academicjournals.org/journal/AJB/article-full-text-pdf/F8D2EC823125",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7a21338125515b54f24e1fa1e8bae53328325ff7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Cesium and Strontium Contamination of Nuclear Plant Stainless Steel: Implications for Decommissioning and Waste Minimization
Stainless steels can become contaminated with radionuclides at nuclear sites. Their disposal as radioactive waste would be costly. If the nature of steel contamination could be understood, effective decontamination strategies could be designed and implemented during nuclear site decommissioning in an effort to release the steels from regulatory control. Here, batch uptake experiments have been used to understand Sr and Cs (fission product radionuclides) uptake onto AISI Type 304 stainless steel under conditions representative of spent nuclear fuel storage (alkaline ponds) and PUREX nuclear fuel reprocessing (HNO3). Solution (ICP-MS) and surface measurements (GD-OES depth profiling, TOF-SIMS, and XPS) and kinetic modeling of Sr and Cs removal from solution were used to characterize their uptake onto the steel and define the chemical composition and structure of the passive layer formed on the steel surfaces. Under passivating conditions (when the steel was exposed to solutions representative of alkaline ponds and 3 and 6 M HNO3), Sr and Cs were maintained at the steel surface by sorption/selective incorporation into the Cr-rich passive film. In 12 M HNO3, corrosion and severe intergranular attack led to Sr diffusion into the passive layer and steel bulk. In HNO3, Sr and Cs accumulation was also commensurate with corrosion product (Fe and Cr) readsorption, and in the 12 M HNO3 system, XPS documented the presence of Sr and Cs chromates.
S1. Steel Composition and SEM Analysis of Steel Surface
(Figure S1 caption, in part: the arrows in image (C) show the attacked ferrite stringers; scale bar = 50 μm.)
S2. GD-OES Depth Profiling
For a quantitative assessment of the elemental depth distribution of the steel samples, it is necessary to correlate the GD-OES measurement time with the depth of analysis. In this work, the depth of a GD-OES crater generated after 20 seconds of sputtering was determined using laser confocal microscopy (Figure S2). Assuming a constant sputtering rate,1 time can then be converted into depth by an appropriate scaling calculation. However, as the sputtering rate of a sample could be affected by changes in the composition of the material, the depth-equivalent data can only be considered an estimate.
The crater depth was measured to be 761 ± 59 nm, corresponding to an average sputtering rate of 38 ± 3 nm s⁻¹. Presentation of the GD-OES profiles as a function of sputtered depth (Figures 1, 4, and S3) reveals the surface oxide thickness after acid and alkaline passivation treatment as 6 ± 1 nm and 12 ± 1 nm, respectively. This result is consistent with a 316L stainless steel passivation kinetic study which reported a film thickness of 4.8 nm after immersion in 6 M HNO3 for 24 hours.2 An equilibrium film thickness was not obtained in that 24-hour study, and therefore our reported value of ~6 nm after 720 hours is likely to be reasonable.
The reduced Fe 2p signal after acid passivation treatment is due to an increased Fe solubility at low pH, whereas a similar effect occurs for Cr under basic conditions.
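The time-to-depth scaling described above amounts to a single constant-rate conversion; the short sketch below is our illustration only, using the crater-depth numbers quoted in the text.

```python
# Convert GD-OES sputtering time to an estimated depth, assuming a constant
# sputtering rate derived from one crater-depth measurement.
CRATER_DEPTH_NM = 761.0      # measured by laser confocal microscopy
CRATER_DEPTH_ERR = 59.0
SPUTTER_TIME_S = 20.0

rate = CRATER_DEPTH_NM / SPUTTER_TIME_S        # ~38 nm/s
rate_err = CRATER_DEPTH_ERR / SPUTTER_TIME_S   # ~3 nm/s

def depth_nm(t_s):
    """Estimated depth (and its uncertainty) after t_s seconds of sputtering."""
    return rate * t_s, rate_err * t_s

print(depth_nm(0.3))   # e.g. roughly 11 nm sputtered after 0.3 s
```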
The crater depth was measured to be 761 ± 59 nm, corresponding to an average sputtering rate of 38 ± 3 nm s -1 . The presentation of GD-OES profiles as a function of sputtered depth ( Figures 1, 4, and S3) reveals the surface oxide thickness after acid and alkaline passivation treatment as 6 ± 1 nm and 12 ± 1 nm, respectively. This result is consistent with a 316L stainless steel passivation kinetic study which reported a film thickness of 4.8 nm after immersion in 6 M HNO3 for 24 hours. 2 An equilibrium film thickness was not obtained in this 24 hour study and therefore our reported value of ~6 nm after 720 hours is likely to be reasonable. passivating medium). The reduced Fe 2p signal after acid passivation treatment is due to an increased Fe solubility at low pH, whereas a similar effect occurs for Cr under basic conditions.
The Fe 2p1/2 and 2p3/2 peaks at 724.6 and 710.7 eV, respectively, are associated with Fe2O3. 3 Furthermore, the contributions at 719.8 and 706.7 eV are due to metallic Fe, 4 likely corresponding to photoemission from bulk material. It is important to note that the feature at ~ 720 eV in the spectrum from the alkaline sample may also be ascribed to the Fe 2p3/2 satellite, where the presence of the corresponding Fe 2p1/2 satellite at 733.1 eV supports this assignment. 3 In addition, the feature at ~ 742 eV may also be identified as a daughter peak of one of the Fe 2p peaks, although an exact assignment remains unclear. 5 The Cr 2p1/2 and Cr 2p3/2 peaks at 586.2 and 576.7 eV, respectively may be assigned as Cr2O3. 6 For a similar reason outlined for Fe, elemental Cr was also identified by the corresponding photoelectron lines at 583.4 and 574.1 eV. These results reveal that, in combination with the GD-OES data, a fundamental structure of the passive layer is a Cr2O3 layer underneath a Fe2O3 over layer.
The relative concentrations of these two components are highly sensitive to the solution pH, where Cr grows at the expense of Fe oxide under acidic conditions. The surface enrichment of Fe after alkaline pH treatment is also apparent, although this effect is more subtle owing to the high Fe oxide content in the passive film formed by atmospheric exposure. The increased Fe stability within the passive film under alkaline conditions has important ramifications for the identification of Cs present in the steel material, as the Cs 3d photoelectron peaks are likely to be masked by the more prominent Fe 2p peaks (Figure S4). Thus, despite an increased amount of Cs accumulating on the steel surface at alkaline pH (see Table S3), no Cs could be detected by XPS on the steel surface after contamination under alkaline solution conditions.
Figure S4. XPS high-resolution spectra of (A) Fe 2p and (B) Cr 2p photoelectron peaks of 304 stainless steel as a function of passivation treatment.
S4. Sr and Cs Sorption and Kinetic Modelling
The Ho model pseudo-second order kinetic fits are shown in Figure S5. The model is described in the main paper. The rate of adsorption for the Lagergren pseudo-first order model is dependent on the sorption capacity of the substrate, which is expressed as:7

dq_t/dt = k (q_e − q_t)

where q_e is the equilibrium uptake (g m⁻²), q_t is the uptake at time t, and k is the first-order rate constant (hr⁻¹). The integrated form over the boundary conditions t = 0 to t = t and q_t = 0 to q_t = q_t is

ln(q_e − q_t) = ln(q_e) − k t

Therefore a plot of ln(q_e − q_t) against t will yield a linear relationship with gradient −k and a y-intercept of ln(q_e). A fundamental disadvantage of this kinetic model is that some knowledge of the equilibrium sorption capacity is required. In this work, the maximum q_t value measured for each individual sorption was taken as q_e. The pseudo-first order kinetic plots are shown in Figure S6 for all four systems studied, where the pseudo-first order rate constant can be determined from the gradient of the fit. Another kinetic model tested was the Elovich model (Figure S7). In the Elovich equation, the overall rate of analyte removal from solution is derived from competing adsorption and desorption processes,8 which is expressed as:

dq_t/dt = α exp(−β q_t)

where q_t is the amount sorbed at time t, α is the initial sorption rate (g m⁻² hr⁻¹) and β is a constant related to the rate of desorption (m² g⁻¹). The integrated form over the boundary conditions t = 0 to t = t and q_t = 0 to q_t = q_t is

q_t = (1/β) ln(1 + αβt)

Rearranging into the linear form yields

q_t = (1/β) ln(αβ) + (1/β) ln(t + 1/(αβ))

In order to simplify this kinetic model, it is often assumed that αβt >> 1, i.e. the contribution of t₀ = 1/(αβ) is negligible.9 The rate equation then becomes:

q_t = (1/β) ln(αβ) + (1/β) ln(t)

When q_t is plotted against ln t, a linear plot with gradient 1/β and a y-intercept of ln(αβ)/β is obtained. The corresponding kinetic plots are shown in Figure S7 for all four systems studied, where the values of β and α can be calculated from the slope and intercept of the fits, respectively.
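As an illustration only (not the authors' code), the two linearized fits described above can be reproduced with a few lines of numpy; the uptake data `t` and `qt` below are hypothetical, and q_e is approximated by the maximum measured uptake as stated in the text.

```python
# Hedged sketch: linear fits for the Lagergren pseudo-first-order and Elovich
# models from uptake data qt(t).
import numpy as np

t = np.array([1., 2., 4., 8., 24., 48.])              # contact time, hr (hypothetical)
qt = np.array([0.10, 0.16, 0.22, 0.27, 0.31, 0.33])   # uptake, g m^-2 (hypothetical)
qe = qt.max()

# Lagergren: ln(qe - qt) = ln(qe) - k*t  -> slope = -k
mask = qt < qe                                          # avoid ln(0) at the last point
slope1, intercept1 = np.polyfit(t[mask], np.log(qe - qt[mask]), 1)
print("pseudo-first-order k =", -slope1, "hr^-1")

# Elovich (simplified): qt = (1/beta)*ln(alpha*beta) + (1/beta)*ln(t) -> slope = 1/beta
slope2, intercept2 = np.polyfit(np.log(t), qt, 1)
beta = 1.0 / slope2
alpha = np.exp(intercept2 * beta) / beta
print("Elovich beta =", beta, "alpha =", alpha)
```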
The statistical results of the Ho, Lagergren, and Elovich kinetic fits are summarized in Table S2. It can clearly be seen that the Lagergren and Elovich equations do not give reasonable R² values and that, in all cases, Sr and Cs sorption behavior can be more accurately described by Ho pseudo-second-order kinetics.
"year": 2019,
"sha1": "4dfc4c803954415654f1602da35dc765181274ff",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.9b01311",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1515367e87d1c8f13aaeda899ffc898770d4ed45",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
Recursive Self-Improvement for Camera Image and Signal Processing Pipeline
Current camera image and signal processing pipelines (ISPs), including deep trained versions, tend to apply a single filter that is uniformly applied to the entire image. This despite the fact that most acquired camera images have spatially heterogeneous artifacts. This spatial heterogeneity manifests itself across the image space as varied Moire ringing, motion-blur, color-bleaching or lens based projection distortions. Moreover, combinations of these image artifacts can be present in small or large pixel neighborhoods, within an acquired image. Here, we present a deep reinforcement learning model that works in learned latent subspaces, recursively improves camera image quality through a patch-based spatially adaptive artifact filtering and image enhancement. Our RSE-RL model views the identification and correction of artifacts as a recursive self-learning and self-improvement exercise and consists of two major sub-modules: (i) The latent feature sub-space clustering/grouping obtained through an equivariant variational auto-encoder enabling rapid identification of the correspondence and discrepancy between noisy and clean image patches. (ii) The adaptive learned transformation controlled by a trust-region soft actor-critic agent that progressively filters and enhances the noisy patches using its closest feature distance neighbors of clean patches. Artificial artifacts that may be introduced in a patch-based ISP, are also removed through a reward based de-blocking recovery and image enhancement. We demonstrate the self-improvement feature of our model by recursively training and testing on images, wherein the enhanced images resulting from each epoch provide a natural data augmentation and robustness to the RSE-RL training-filtering pipeline.
Introduction
A digital camera's image and signal processing (ISP) pipeline commonly relies on specialized digital signal processors for image processing. They are used to convert acquired RAW images, captured by the camera's digital sensors, into conventional RGB or JPEG images. Camera manufacturers have been pursuing and requiring the development of sophisticated filters as part of their camera ISPs to resolve diverse image artifacts (distortions) during the conversion process. These image filters include methods for demosaicing [35,18], deblurring [7], white balancing [47], color correction [34], etc. Today's digital camera ISPs, however, need to be even more sophisticated. With increased image resolutions, image artifacts are naturally heterogeneous, as there is a mixing of distortions caused by sensors, lenses, motion, etc. For instance, Fixed-Pattern Noise (FPN) is a known issue, commonly referred to as Dark Signal Non-Uniformity (DSNU) and Photo Response Non-Uniformity (PRNU) [6]. Another example is Bayer filter artifacts, which occur when demosaicing the Color Filter Array (CFA). The two typical artifacts from Bayer filtering are false color, an abrupt color shift along edges that prevents good interpolation, and Moiré ringing.
Deep learning approaches have progressively replaced the image and signal processing applied in conventional computational photography tasks. For instance, with low-level details and hierarchical structures captured by neural networks, one can achieve superior performance for image denoising (e.g. [3,51,45,46]) and deblurring (e.g. [33,44]) tasks. Nevertheless, most deep-learned prior solutions rely on the assumption that there is a single image artifact per RAW image which needs to be diagnosed and filtered, i.e. that artifacts, even mixtures of them, are insensitive to pixel location. This assumption is widely accepted in image signal processing tasks, and particularly in image artifact removal problems, including many deblurring tasks. For a camera ISP, however, it fails to capture the spatially heterogeneous corruption caused by a mismatch between acquisition settings and environmental lighting in realistic scenes. Moreover, one can hardly tell whether the artifacts come from sensor limitations, environmental changes (day vs. night) or post-processing such as lossy compression. These varied real-world image processing issues motivate us to adopt a recursive self-improving machine learning approach for the next generation of camera ISPs.
In this paper, we present a deep reinforcement learning model that works in learned latent subspaces, recursively improves camera image quality through a patch-based spatially adaptive artifact filtering and image enhancement. Our RSE-RL model (section 3) views the identification and correction of artifacts as a recursive self-learning and self-improvement exercise and consists of two major sub-modules: (i) The latent feature sub-space clustering/grouping obtained through an equivariant variational auto-encoder enabling rapid identification of the correspondence and discrepancy between noisy and clean image patches. (ii) The adaptive learned transformation controlled by a trust-region soft actor-critic agent that progressively filters and enhances the noisy patches using its closest feature distance neighbors of clean patches. Artificial artifacts that may be introduced in a patch-based ISP, are also removed through a reward based de-blocking recovery and image enhancement. We demonstrate (section 4) the performance of our RSE-RL model including the self-improvement features by recursively training and testing on images, wherein the enhanced images resulting from each epoch provide a natural data augmentation and robustness to the RSE-RL training-filtering pipeline.
2 Related Work
2.1 Source of Camera Image Artifacts
• Optical Aberrations: The optical instrument has its own limitations. For instance, distortion and blur occur when the lens has a spherical aberration. Chromatic aberration is a failure of a lens to focus all colors to the same point. Vignetting is a reduction in illumination and saturation toward the periphery compared to the image center. Lens flare occurs when rays from a very bright light source have internal reflections and scatter in the lens system of a camera, overlaying the captured image with artifacts such as blown-out starbursts, colored shapes, rainbow patterns and haze.
• Light and Sensor Capturing Issues: Noise, contrast and atmospheric haze in environments definitely affect the result of RAW image capturing. Fixed-pattern Noise (FPN) is a known issue which commonly refers to Dark Signal Non-Uniformity (DSNU) and Photo Response Non-Uniformity (PRNU) [6].
• Bayer Filter Artifacts: These artifacts occur when demosaicing the Color Filter Array (CFA). Two typical artifacts are false color, an abrupt color shift along edges that prevents good interpolation, and Moiré ringing, which is caused by the discretization of continuous signals and yields repeated patterns.
There have been dedicated efforts to calibrate instruments and to invent more sophisticated camera ISP algorithms for the removal of such inevitable artifacts (e.g. [37]). With the advent of powerful computing units, deep learning algorithms are applicable to more and more areas of research, including computational photography (cf. [53]).
Image Filtering and Enhancement
The ubiquity of noise in digital photos leads to a fast-growing image denoising problem. In the early years, traditional methods applied Gaussian blurring, TV regularization [41], or coefficient transforms in the Fourier domain [43]. However, it was the idea of non-local means denoising from Buades et al. [11] that truly made a gigantic leap in denoising performance. The non-local means method is built upon self-similarity and redundant information in realistic images. Later on, another non-local denoising approach, referred to as BM3D [13], used the same idea and exploited the sparsity further. There are also discussions of patch-based schemes [19]. As the deep learning era arrived, more and more works used Convolutional Neural Networks (CNNs) or Generative Adversarial Networks (GANs) that beat most of the classical, sophisticated methods [55,21,50,38,24]. However, due to the expensive capturing procedure, and because noise is diverse in large-scale realistic photos, most prior works study the synthesized domain only. For instance, the most common AWGN noise model cannot effectively remove noise from real images, as discussed in earlier benchmarks [39]. How to fill the gap between synthesized noise models and real-world noisy images remains an open question.
Camera ISP
There are numerous works related to camera image signal processing, where different types of articulated modeling are applied to different vision tasks. Among them, related works such as color demosaicing (e.g. [27]), image denoising (e.g. [45]), auto white balance correction (e.g. [4]), and removal of lossy compression artifacts (e.g. [17]) have been separately discussed under deep learning settings. Within the scope of image denoising, a novel work referred to as CycleISP [54] develops a generative model to synthesize realistic image data in both the forward and reverse directions. Moreover, solving multi-task image enhancement is possible: [42] is an early work that focuses on jointly solving the demosaicing and denoising problems by applying deep learning schemes from the source sampling of sensors. Later, a recent work [25] presented a single deep learning model containing 5 parallel learning levels to replace the entire camera ISP pipeline, referred to as PyNET. The input RAW image from a cellphone is aligned with a DSLR camera output as the supervised training data, and PyNET outputs a visually high-quality sRGB image.
Latent Subspace Learning
One of our major contributions is to learn a latent encoding that maximizes the usage of self-similarity in natural images by probing subspaces. There are at least two different branches of work. One branch focuses on a direct disentanglement of latent space clustering under a variational framework. For example, in [20,8,15], the hierarchical structure of the latent space is explored to obtain richer representations compared to a single prior. Another branch of works is a direct fusion of deep learning models and machine learning methods; these developed algorithms perform clustering or learn a mixture model within the latent space encoding, including hard K-means clustering [52], soft K-means clustering [26], Gaussian Mixture Model VAE [14], and direct subspace clustering VAE [31]. All of these exemplary models manage to learn from visual recognition and classification tasks, yet their performance on a more realistic image denoising task remains to be verified.
Recursive Self-Enhancing Camera ISP
In this section, we describe our reinforcement learning model that recursively improves in spatially adaptive, heterogeneous image artifact filtering and image enhancement. We target the problem of resolving image artifacts, specifically image denoising, but our approach can be extended to solve other comprehensive tasks, such as generating sRGB images from RAW.

Figure 1: The overall pipeline of our RSE-RL. For each given observed image, we split the image into local patches and feed every patch as a stack into the encoding network. The latent space is divided into three subspaces; the encoder projects the YUV features of the patches onto three latent subspaces Z_y, Z_u, and Z_v. Both the clean patches and noisy patches are projected onto the three spaces. A set of transformations T is learned to transform the latent representation of the noisy patches to a corresponding representation of the clean patches in all three subspaces. The transformed noisy representations are sent to the decoder for image reconstruction. After the denoised images are constructed, a PSNR is calculated and used to obtain the reward for a soft actor-critic reinforcement learning model. The RL model uses the distance between the target PSNR and the actual PSNR as the reward to adjust the trainable weights in the transformation T. Hence we have a self-enhancing image denoising network.
In this paper, we assume the observed image I_obs is obtained via the following mixture model:

I_obs = f(I_gt) + ∑_{s=1}^{S} M_s ⊙ Σ_s    (1)

where I_gt is the underlying clean image. The function f is an identity function when we are performing sRGB-to-sRGB image denoising tasks. The noise Σ_s and the mask matrix M_s are independent and blind to the model (⊙ refers to the element-wise product). Moreover, we do not rely on the underlying distribution of Σ_s. Except for the synthesized dataset, we cannot obtain the accurate number of artifact types S, and S is treated as a hyper-parameter in most scenarios. In the following, we present and discuss how to disentangle and filter the artifacts in (1) using our RSE-RL.
Overall Pipeline of RSE-RL
Our recursive self-improving camera ISP in Figure 1 is a variational autoencoder with multiple latent subspaces. For every input image I_obs ∈ R^{H×W×C}, we first divide the input image into D-by-D patches P ∈ R^{D×D×C}, with overlaps allowed. For every D-by-D patch P in the image, we denote by H_n its location mask in the original image domain, so that P ≝ I_obs ⊙ H_n. Even though we cannot probe the magnitude of the artifacts, within a patch they can be approximated by a single dominant artifact model. Our RSE-RL network learns from image patches P and I_gt ⊙ H_n. Both clean patches P_c and noisy patches P_b are fed into the network. Each batch of image patches is fed into our encoder q(P | θ), with parameters θ. The encoder generates three latent vectors z_y, z_u, and z_v in the three subspaces that we define. The three subspaces preserve the features of the three channels of the patches: one luma component Y and two chrominance components, U (blue projection) and V (red projection).
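A minimal sketch of the patch extraction and RGB-to-YUV step described above is given below (our illustration, not the released code); the default values echo the CelebA setting of 16×16 patches with a 4-pixel overlap, i.e. a stride of 12.

```python
# Hedged sketch: split an H x W x 3 image into overlapping D x D patches and
# convert each patch from RGB to YUV before encoding (illustrative only).
import numpy as np

def rgb_to_yuv(patch):
    m = np.array([[ 0.299,  0.587,  0.114],      # Y
                  [-0.147, -0.289,  0.436],      # U
                  [ 0.615, -0.515, -0.100]])     # V
    return patch @ m.T

def extract_patches(img, d=16, stride=12):
    """Return overlapping d x d patches and their top-left coordinates."""
    h, w, _ = img.shape
    patches, coords = [], []
    for y in range(0, h - d + 1, stride):
        for x in range(0, w - d + 1, stride):
            patches.append(rgb_to_yuv(img[y:y + d, x:x + d, :]))
            coords.append((y, x))
    return np.stack(patches), coords
```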
On each subspace, there is a transformation function T that learns to map a noisy projection to a clean projection; for example, in subspace Z_y the transformation is trained so that T_y(z_y^b) ≈ z_y^c, where z_y^b and z_y^c are the projections of a noisy patch and its corresponding clean patch, respectively (and similarly for T_u and T_v). Then, based on the transformed latent-space projections, we set up the decoder. Each patch's transformed latent representation is fed into the decoder and decoded independently. The decoded latent vectors are integrated to reconstruct the patches. The output is the set of reconstructed blocks that approximate I_gt ⊙ H_n.
Loss Function
In practical implementation, our training process optimizes different loss functions. For training the network, the training loss L_vae consists of two parts: the evidence lower bound (ELBO) [30], comprising the data-fitting term and the KL loss, and a regularization term. The gradient computed from L_vae is used to update all the parameters in the network, including the encoder, the decoder, and the transformation functions. The transformation functions T_y, T_u, and T_v are trained under identical terms. On each latent subspace, a transformation loss L_tran is computed from the noisy patch projection and the clean patch projection. The three transformation functions are optimized separately, where the loss function on each latent subspace is computed only from the projections in the corresponding latent subspace. Hence, three losses L_tran_y, L_tran_u, and L_tran_v are computed, and each transformation function is optimized by its corresponding loss: for example, T_y is optimized by L_tran_y.
Patch Assembly and Block Artifacts
Upon getting the reconstructed blocks from the decoders, we merge these blocks back into the original image. If one naively concatenates two adjacent patches without any handling of the overlaps generated when creating them, block artifacts can be observed when the two patches pass through different decoders. We apply post-processing to remove the block artifacts generated by concatenation. For overlapping regions, we average the pixel outputs based on the distance between the pixel and the overlapped patch centers via linear interpolation. To further remove block artifacts, a reinforcement learning algorithm can be applied to learn a better size for the overlapping region.
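A sketch of this distance-weighted reassembly is shown below; the exact weighting used in the paper may differ, and the linear fall-off here is only an assumption consistent with the description above.

```python
# Hedged sketch: reassemble overlapping patches by accumulating
# distance-weighted contributions and normalizing per pixel.
import numpy as np

def assemble(patches, coords, out_shape, d=16):
    acc = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape[:2], dtype=np.float64)

    # Weight falls off linearly with distance from the patch center.
    yy, xx = np.mgrid[0:d, 0:d]
    c = (d - 1) / 2.0
    w = 1.0 - np.maximum(np.abs(yy - c), np.abs(xx - c)) / d   # in (0, 1]

    for patch, (y, x) in zip(patches, coords):
        acc[y:y + d, x:x + d, :] += patch * w[..., None]
        weight[y:y + d, x:x + d] += w
    return acc / weight[..., None]
```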
Reinforcement Learning
The Soft Actor-Critic (SAC) reinforcement learning algorithm [22,23] is utilized for self-enhancing image denoising. The RL model learns the action a, which is the concatenation of the weight vectors a_y, a_u, and a_v used to adjust the transformation functions T_y, T_u, and T_v. The trainable weights are multiplied by the corresponding entries of the weight vectors, hence the action is defined by

T'_i = a_i · T_i    (8)

where T'_i is the updated i-th weight and T_i is the previous weight. The algorithm continuously updates these trainable weights to enhance the transformation from noisy to clean patch representations, which consequently improves the final denoising performance. Alternatively, the RL model can learn to reduce the blocking artifacts.
A central feature of SAC is entropy regularization. The policy is trained to maximize a trade-off between expected return and entropy, a measure of randomness in the policy. This is connected to the exploration–exploitation trade-off: increasing entropy results in more exploration, which can accelerate learning. It can also prevent the policy from prematurely converging to a bad local optimum. In SAC, an entropy bonus is reflected in Q^π:

Q^π(I, a) = E_{I'∼P, ã∼π} [ r(I, a) + γ ( Q^π(I', ã) − α log π(ã | I') ) ]

where I is an input image, I' is the resulting image when the transformation in the network uses the set of weight vectors a, γ is the discount factor, α > 0 is the trade-off coefficient, and log π(ã | I') is the entropy term.
SAC learns a policy π and two Q-functions concurrently. In particular, the policy is learned by the actor network, which maximizes V^π(I), and the Q-functions are learned by the critic networks, which minimize a sample-approximated MSBE loss L(Φ_i, D).
SAC sets up the MSBE loss for each of the two Q-functions:

L(Φ_i, D) = E_{(I,a,r,I',d)∼D} [ ( Q_{Φ_i}(I, a) − y(r, I', d) )² ],  i = 1, 2

where d is the done signal used to mark a terminating state, and the target is given by

y(r, I', d) = r + γ (1 − d) ( min_{j=1,2} Q_{Φ_targ,j}(I', ã') − α log π(ã' | I') ),  ã' ∼ π(· | I').

In this RL algorithm, the states are the images, and the reward r is computed from the distance between the actual PSNR and the target PSNR: it increases as |PSNR − PSNR_t| decreases, with constants k and c adjusting the scale of the reward, where PSNR and PSNR_t are the averaged actual PSNR of the patch-assembled test images and the target PSNR, respectively. The RL model learns to enhance the transformation and reduce the blocking artifacts. Thus, to maximize the reward, the model is optimized so that the average PSNR of the testing images approaches the target PSNR.
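As an illustration of the reward computation, the snippet below is a hedged sketch consistent with the description above; the exact functional form and the values of k and c are our assumptions, not the paper's.

```python
# Hedged sketch: a PSNR-based reward that grows as the denoised output
# approaches the target PSNR (k, c and the exact form are illustrative).
import numpy as np

def psnr(clean, denoised, peak=255.0):
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def reward(avg_psnr, target_psnr, k=1.0, c=5.0):
    """Larger when |avg_psnr - target_psnr| is smaller; k and c set the scale."""
    return k * (c - abs(avg_psnr - target_psnr))
```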
Experiments
To justify the performance of our RSE-RL algorithm, two datasets are considered in our experiments: the synthesized noisy CelebFaces Attributes (CelebA) dataset [36] and the Smartphone Image Denoising Dataset (SIDD) [2]. Gaussian noise is applied to images in the synthesized noisy CelebA dataset, while the SIDD dataset contains realistic artifacts generated by smartphone cameras. An objective of our experiments is to exhibit the performance enhancement obtained by utilizing the RSE-RL network structure. For comparison, an ordinary variational autoencoder, referred to as Single Decoder VAE, is trained under identical experimental settings in each experiment. For more details on training, we refer to the supplementary materials.
Our Networks with CelebA Patches
Dataset Construction
The large-scale CelebFaces Attributes (CelebA) dataset is adapted for our experiment. CelebA_HQ/256 images, which are 256 × 256 pixels in size, are selected from the CelebA dataset. These images are then sampled into two sub-datasets: a training set with 2250 images and a validation set with 11250 images. A heterogeneous artifact generator is applied to the training set to generate noisy images from CelebA, referred to as the synthesized noisy CelebA dataset. Gaussian noise is generated on these images utilizing OpenCV [9]. Each image in the training set, both noisy and ground truth, is divided into 16 × 16 pixel patches, with 4 pixels of overlap with the surrounding patches.
Experimental Setup
We test the denoising performance of our RSE-RL on the synthesized noisy CelebA dataset. Our encoder projects the patch-based images into the latent space using 5 convolutional layers followed by 2 fully-connected layers, while the decoder has a similar structure with 2 upsampling layers and 5 transposed convolutional layers. The model is implemented in Keras [12] and TensorFlow [1], and we use a single 12 GB NVIDIA Tesla K80 GPU for training and testing on the synthesized noisy CelebA dataset. One batch contains 128 patches that are trained simultaneously. The parameters are optimized over 50 epochs using the Adam algorithm [29] with β1 = 0.9 and β2 = 0.999. The learning rate is set by an exponential-decay learning-rate scheduler with an initial rate of 0.001, a decay factor of 0.95, and a decay step of 1000.
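In Keras terms, the stated schedule corresponds to something like the following sketch of the hyper-parameters above (not the authors' exact code):

```python
# Hedged sketch of the stated optimizer settings in TensorFlow/Keras.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.95)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999)
```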
The soft actor-critic algorithm for self-enhancement is implemented using Stable-Baselines3 [40] and OpenAI Gym [10]. We use OpenAI Gym to set up the environment. The reward in the environment is computed as stated in Equation 13. The model tries to maximize the reward by optimizing the actions that adjust the trainable weights in the three transformation functions, as stated in Equation 8. The action space is a set of weight vectors within the bounds (0.999, 1.001), for minor adjustments. The observation space is the actual PSNR score achieved. For the synthesized CelebA dataset, the learning rate of the RL model is set to 0.001. The results before and after the recursive self-enhancing procedure are reported in Table 1, denoted RSE-RL(before) and RSE-RL, respectively. We can observe a significant improvement in all quality metrics by utilizing RSE-RL. A visualization of our denoised image results after self-enhancement can be found in Figure 2.
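A minimal sketch of such an environment and agent follows (our illustration only; the denoiser hook `my_psnr_fn`, the number of weights, the target PSNR, the reward form, and the episode logic are assumptions, written against the classic Gym step/reset API that Stable-Baselines3 accepts):

```python
# Hedged sketch: a Gym environment whose observation is the current average
# PSNR and whose action rescales the three transformation weight vectors.
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import SAC

class DenoiseTuneEnv(gym.Env):
    def __init__(self, evaluate_psnr, n_weights, target_psnr=40.0):
        super().__init__()
        self.evaluate_psnr = evaluate_psnr          # callable: weights -> avg PSNR
        self.target = target_psnr
        self.action_space = spaces.Box(0.999, 1.001, shape=(n_weights,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, 100.0, shape=(1,), dtype=np.float32)
        self.weights = np.ones(n_weights, dtype=np.float32)

    def reset(self):
        self.weights[:] = 1.0
        return np.array([self.evaluate_psnr(self.weights)], dtype=np.float32)

    def step(self, action):
        self.weights *= action                       # Equation (8): T'_i = a_i * T_i
        psnr = self.evaluate_psnr(self.weights)
        reward = -abs(psnr - self.target)            # simple PSNR-distance reward
        obs = np.array([psnr], dtype=np.float32)
        return obs, reward, False, {}

# model = SAC("MlpPolicy", DenoiseTuneEnv(my_psnr_fn, n_weights=64), learning_rate=1e-3)
# model.learn(total_timesteps=10_000)
```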
Among the results in Table 1, the RSE-RL achieves the best performance regarding all three metrics.
We can observe small enhancements in all three quality metrics after applying the RL model for self-improvement. This demonstrates that our self-enhancing model is able to continue being optimized during the testing stage. In addition, the results indicate that learning the transformations on the latent spaces is effective. By comparing the three latent subspaces Z_Y, Z_U, and Z_V, we can observe that the noise has the largest impact on the Y space Z_Y, which represents the luminance (brightness) of the image. There is no significant difference between the noisy and clean patch representations on the other two subspaces. This indicates that Gaussian noise has the most significant impact on brightness, compared to chrominance (represented by U and V). The visualized latent subspaces can be seen in Figure 3.
Another observation is that RSE-RL's denoising performance is not significantly affected by the training data size. To demonstrate this feature, an experiment using 450 training images (0.2 million patches) was conducted. This network is also tested on the 11250 testing images, and the result is shown in Table 1, denoted RSE-RL(S). This result can be compared with RSE-RL(before) to observe how the training data size affects performance. This observation provides an effective way of utilizing this network to largely reduce training time.

SIDD Denoising Result
The SIDD images were captured with smartphone cameras under varied illumination and lighting conditions, and contain signal-dependent noise. This dataset is a benchmark for denoising algorithms, as its noise is generated under realistic scenarios.
In the experiment, 320 sRGB images are selected for training the network, and the SIDD Benchmark Data is used to evaluate its performance. The SIDD Benchmark Data contains 40 noisy sRGB images and their ground truth. 32 patches are selected from each benchmark image for evaluation. For training the network, each noisy image and its ground truth are divided into 24 × 24 × 3 patches, with 8 pixels of overlap. 11.19 million patches make up the training data.
Experiment Setup
The encoder is composed of five convolutional layers and two fully connected layers, while the decoders have the inverse structure of the encoder. Each encoder has approximately 2.2 million parameters, and each decoder has 1.6 million parameters. The model is trained on a single 12 GB NVIDIA Tesla K80 GPU with a batch size of 128. The parameters are optimized over 20 epochs using the same optimizer as for the synthesized noisy CelebA dataset, defined in Section 4.1.
The reinforcement learning model is identical to the model implemented in Section 4.1, which adjusts the trainable weights in the three transformation functions. We want to show the improvement obtained by using the self-enhancing RL technique. The results before and after self-enhancement are shown in Table 2, denoted as RSE-RL(before) and RSE-RL, respectively. The results show that our self-enhancing RL model contributes a small enhancement in PSNR, which demonstrates that our RL model is able to improve the denoising results. Since we only involve PSNR in the reward function, we can only observe improvements in terms of PSNR. Table 2 also lists a set of benchmark denoising methods and deep learning methods used for comparison against our network. In the table, noisy images are the images before any denoising procedure; BM3D, NLM, and KSVD are the benchmark non-DL results; DANet and RDB-Net are two state-of-the-art deep learning methods used for comparison with our method. The performance of our model is significantly better than the traditional methods. The visualized results of self-enhanced denoised images can be seen in Figure 4.2. As for efficiency, our RSE-RL contains only 2.5 million parameters in total, whereas DANet contains ∼60 million parameters, allowing our network to train much faster than the state-of-the-art structure. (Figure 5 caption: blue points are noisy projections and red points are clean projections; Principal Component Analysis is applied to reduce the dimension of each latent subspace to 2 for visualization; a rotation between the noisy and clean patch projections can be observed on the latent space.)
The images from SIDD contain the same set of realistic artifacts generated by smartphone cameras. This leads to the same transformation on the latent space for every patch, since each patch contains the same types of noise. From Figure 5 we can observe a rotation between the noisy patch projections and the clean patch projections on the latent space, which is precisely the learned transformation on the latent space.
Conclusions and Discussions of Broader Impact
Overview of our work We have presented our RSE-RL model, a self-improving camera ISP built upon policy learning. The patch-based transformation is trained in decomposed subspaces to identify and rectify different types of artifacts progressively and respectively. We define the action and reward for a self-enhancement framework and further discuss its potential on real-world image data. Nonetheless, our work is an early-stage exploration, in which the transformation is mostly restricted to linear components. We are moving toward considering patch ordering and other, more complex environment settings to further strengthen our work.
Discussions of Broad Impact The method we propose is an RL-based solution to low-level vision tasks. There may be concerns about collecting private photo information from users when this research is extended to the application side, but we do not currently envision any broad ethical issues surrounding our largely mathematical learning technique. We would of course welcome any issues the reviewers might raise, and promise to address them.
A Detailed Setup of Our RSE-RL Framework
Figure A1: For each given observed image, we split the image into local patches and feed every patch as a stack into the encoding network. For both clean patches P_c and noisy patches P_b, the encoder transforms the channels of the patches from RGB to YUV. The latent space is divided into three subspaces; the encoder projects the YUV features of the patches onto the three latent subspaces Z_y, Z_u, and Z_v. Both the clean patches and the noisy patches are projected onto the three spaces. A set of transformations T = {T_y, T_u, T_v} is learned to match the latent representation of the noisy patches to a corresponding representation of the clean patches in all three subspaces. For instance, T_y(z^b_y) = z^c_y for a noisy patch representation z^b_y and a clean patch representation z^c_y in subspace Z_y. The transformed noisy representations are sent to the decoder for image reconstruction. The decoder reconstructs the YUV channels from the latent space representations and transforms the channels from YUV back to RGB, hence we obtain the denoised images.
Architecture of Patch Transformation -Correspondence Network In our network architecture, the image patches are transformed from RGB to YUV channels prior to the encoding procedure. The RGB-YUV transformation is defined as
[Y, U, V]^T = M · [R, G, B]^T, where M denotes the standard linear RGB-to-YUV conversion matrix.
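A concrete instance of this linear conversion, assuming the common BT.601 coefficients (the paper's exact matrix may differ slightly), could look as follows.

```python
import numpy as np

# Standard BT.601-style RGB-to-YUV matrix (an assumption for illustration).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):                 # rgb: (..., 3) array, channels last
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):                 # inverse transform used after decoding
    return yuv @ np.linalg.inv(RGB2YUV).T
```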
A set of encoders q_y, q_u, and q_v encode the Y, U, and V channels respectively and project the patch information onto three latent subspaces Z_y, Z_u, and Z_v. The dimension of each subspace is set to 72 for both sets of experiments, hence the total latent dimension is 216. In each of the latent subspaces, both clean and noisy patch representations are projected, and we want to learn a transformation that matches noisy patch representations to clean patch representations. The transformations T_y, T_u, T_v are defined and operate in their corresponding latent subspaces. Each transformation T_s (s ∈ {y, u, v}) is a three-layer MLP with identically sized layers and ReLU activations. Each transformation is trained to match a noisy patch representation z^b_s to a clean patch representation z^c_s within its latent subspace; the loss function is defined in Equation (7).
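A minimal sketch of one such transformation T_s and its training objective is shown below; the exact form of Equation (7) is not reproduced here, so a plain L2 matching loss is assumed as a stand-in, and all names are illustrative.

```python
import torch
import torch.nn as nn

def make_transformation(dim=72):
    # Three linear layers of identical size with ReLU activations in between.
    return nn.Sequential(
        nn.Linear(dim, dim), nn.ReLU(),
        nn.Linear(dim, dim), nn.ReLU(),
        nn.Linear(dim, dim),
    )

T_y = make_transformation()
z_b = torch.randn(128, 72)            # noisy-patch latent representations
z_c = torch.randn(128, 72)            # corresponding clean-patch representations
loss = nn.functional.mse_loss(T_y(z_b), z_c)   # assumed stand-in for Equation (7)
```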
Our reinforcement learning setting In the reinforcement learning setting, we start with our pre-trained VAE and transformations and try to enhance the transformations recursively. The RL model starts from the trainable weights of our pre-trained models and learns to make minor adjustments to the trainable weights in the transformations in order to obtain a higher PSNR. The reason for starting from our pre-trained models is that random initialization of the trainable parameters makes it extremely difficult to converge and to reach good performance.
In the soft actor-critic implementation, we use the default parameter settings for the Q-function (9), where the entropy regularization coefficient α = 0.2 and the discount factor γ = 0.99. The reward r for both sets of experiments is defined as in Equation (13). The target PSNR is set to PSNR_t = 30.0 for the experiments on the CelebA dataset and PSNR_t = 34.0 for the SIDD dataset.
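With the environment sketched earlier, an agent with these settings could be instantiated in Stable-Baselines3 roughly as follows; the action dimension of 3 × 72 corresponds to one weight vector per latent subspace, and the PSNR callback here is only a dummy placeholder.

```python
from stable_baselines3 import SAC

env = SelfEnhanceEnv(action_dim=3 * 72,
                     evaluate_psnr=lambda a: 29.0,   # placeholder evaluation
                     target_psnr=30.0)
model = SAC("MlpPolicy", env,
            learning_rate=1e-3,   # 0.001, as used for the CelebA experiments
            gamma=0.99,           # discount factor
            ent_coef=0.2,         # entropy regularization coefficient alpha
            verbose=0)
model.learn(total_timesteps=10_000)
```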
The action space a = {a_y, a_u, a_v} is a set of weight vectors whose dimension is identical to the hidden dimension of T_s. The action function is defined in Equation (8), in which the dot products of the weights in T_s with its corresponding weight vector a_s, s ∈ {y, u, v}, are assigned as the new weights in T_s. More specifically, we have T_{s,j} = a_s · T_{s,j} for s ∈ {y, u, v} and j ∈ {1, 2, 3}, since T_s is a three-layer MLP. The state space is the set of trainable weights in the three transformations; the states are the T_{s,j} in the equation above.
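A small sketch of how such an action could be applied, rescaling each linear layer's weights of T_s by the entries of a_s; the broadcasting convention is an assumption, and only the multiplicative update itself is taken from the text above.

```python
import torch

def apply_action(T_s, a_s):
    """Rescale the weights of the three-layer MLP T_s by the action vector a_s."""
    scale = torch.as_tensor(a_s, dtype=torch.float32)
    with torch.no_grad():
        for layer in T_s:
            if isinstance(layer, torch.nn.Linear):
                layer.weight.mul_(scale)    # broadcasts over the input dimension
    return T_s
```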
B Additional Experimental Results of Our RSE-RL framework
Decomposed Subspace Visualization The following sets of figures show the denoising results on each of the Y, U, and V spaces, which further demonstrate how the noises are removed on each space. The figures A2 and A3 present the examples on both our synthesized CelebA dataset and SIDD dataset. Furthermore, we show some patch-wise matching in Figure A4. The figure gives several specific patches as examples for demonstrating how the noisy patches map to the clean patches. It also shows the noises on the patches specifically, and we can observe the noises on Y, U, and V spaces.
Iterative Image Enhancement Improvement Framework using our RSE-RL network We also present how the images are recursively self-enhanced in the reinforcement learning framework in Figure A5. The resulting images on the CelebA dataset show an improvement from using the RL backbone. Without RL training, the VAE performance stalls at a local minimum, while weight updating under feedback control gives a result closer to optimal. We also note that, based on our observations, random initialization of the entire scheme would yield a much slower convergence rate and an extremely low-PSNR local minimum; the pretrained network parameter initialization is therefore crucial for generating high-quality enhanced images.
Justification of Removing Block Artifacts
There may be cases where our filter generates patch-based enhancement results locally while ignoring the neighboring patches. The one-to-one correspondence from noisy patches to clean patches might cause additional block artifacts, as stated earlier in Section 3. We propose post-processing using overlapping-patch smoothing, or an additional deblocking algorithm, to correct the newly introduced artifacts. Below we show an ablation study on the influence of overlapping patch selection and the use of a deblocking method.
In general testing, we compare the quality of images composed from non-overlapping patches and from overlapping patches, as well as the quality before and after using the deblocking method [28]. The average PSNR for images composed of non-overlapping patches is 27.8214, and obvious blocking artifacts can be observed at the patch edges (Figure A6). When we apply the deblocking method, the average PSNR is slightly reduced to 27.8212 and the block artifacts are still visible.
By comparison, after we apply overlapping patches, there is a smooth transition at each edge between two blocks. The average PSNR for images composed of overlapping patches is 28.84, which is a significant enhancement; the improvement can also be observed in Figure A6. However, when we apply the deblocking method to the images with overlapping patches there is no observable improvement, and the average PSNR stays the same.
Figure A2: CelebA Denoising Visualization in YUV Spaces: images on the top row contain the synthesized artifacts (Gaussian noise). The images on the second row are the denoising results from our RSE-RL, and the images on the bottom row show the difference between the first two rows, i.e., the noise removed by the network. The images are scaled to [0, 255] for all channels. Columns from left to right show the images on the RGB channels, the Y space, the U space, and the V space, respectively. Our method reveals and removes the noise decomposed in the three channels respectively.
Figure A3: SIDD Denoising Visualization in YUV Spaces: specifications are identical to Figure A2. The figure demonstrates noise removal across channels and shows that our patch-based method applies to large-scale, realistic images as well.
Figure A4: Patch-based matching results in YUV spaces: the first row shows the noisy patches, the second row the clean patches matched to the noisy patches, and the third row the difference, representing the noise being removed. Columns from left to right (three columns as a group) are the images in the Y, U, and V spaces respectively. For presentation, the patches are scaled to [0, 255] for all channels. This justifies the correctness of our patch-transfer scheme locally within each patch; the detail is preserved as well. Hence, our method is amenable to images of any size.
Figure A5: Recursive Self-enhancing RL Visualization: the figure shows how the test images are recursively enhanced over 50 epochs of RL training. We observe a performance boost when iterating the weights of the decomposed transformations in the three latent subspaces. With the reinforcement learning agent, the network converges to a better result than what a solo VAE framework can achieve. The last row also shows the difference between the starting image fed into the RL agent and the final result after recursive learning.
Figure A6: Deblocking Results: this figure shows the results of a deblocking method [28], as well as our overlapping-patch smoothing alternative. It shows that our overlapping-patch smoothing can remove the block artifacts that may be created by our patch-based scheme. The columns from left to right show the noisy image, the image composed of patches without overlapping, non-overlapping patches with deblocking enhancement, the image with overlapping-patch smoothing, and the image with overlapping-patch smoothing plus deblocking enhancement. The PSNRs of these images from left to right are 19.89, 29.51, 29.50, 30.31, and 30.31.
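For reference, here is a minimal sketch of the overlapping-patch smoothing discussed above: denoised patches are pasted back at their original locations and pixels covered by several patches are averaged, which removes hard block boundaries. Names are illustrative, not from the original code.

```python
import numpy as np

def recompose(patches, coords, image_shape, patch=24):
    acc = np.zeros(image_shape, dtype=np.float64)   # running sum of patch values
    cnt = np.zeros(image_shape, dtype=np.float64)   # how many patches cover each pixel
    for p, (y, x) in zip(patches, coords):
        acc[y:y + patch, x:x + patch, :] += p
        cnt[y:y + patch, x:x + patch, :] += 1.0
    return acc / np.maximum(cnt, 1.0)               # average over overlapping regions
```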
"year": 2021,
"sha1": "76ea797e29ceeeffe61784972facdad2dd953557",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "76ea797e29ceeeffe61784972facdad2dd953557",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219084560 | pes2o/s2orc | v3-fos-license | Motivation Moderates the Effect of Internal Control Systems and Budgetary Participation on Individual Performance
Individual performance is the level of work performed by individuals in an organization and is used as a basis for evaluating the organization. This study aimed to obtain empirical evidence regarding the influence of the internal control system and budgetary participation on individual performance, with motivation as a moderating variable. The study was conducted at 30 Regional Organizations (OPD) in Bangli Regency, with data collected through questionnaires. The sample consisted of 60 people selected through non-probability sampling. The model was tested using variance-based (component-based) Structural Equation Modeling (SEM) with the SmartPLS 3.0 analysis tool. The results show that the internal control system has a significant positive effect on individual performance. Budgetary participation has a positive but not significant effect on individual performance. The results also show that work motivation strengthens the effect of the internal control system and budgetary participation on individual performance.
Introduction
Government performance is often a concern because performance is an important indicator of the success of a government (Harsasto, 2013). One example of this performance phenomenon in local government is Bangli Regency. Based on the results of the examination by the Supreme Audit Agency (BPK) representative for Bali Province, Bangli Regency was given a disclaimer (no opinion) on its 2013 Local Government Financial Report (LKPD). This opinion was given because the BPK found mismanagement of assets, missing transaction evidence, and weaknesses in the government's internal control system. Bangli also received a disclaimer opinion because there were figures that the BPK could not verify, namely cash balances that exceeded the tolerance limit, as well as errors in the valuation and management of assets, where the figures on the balance sheet were not accompanied by supporting evidence. Based on the results of BPK's audit of the LKPD for the 2014 fiscal year, Bangli Regency obtained a qualified opinion (Fair with Exception, WDP). This opinion improved from the one received by Bangli Regency in the previous year. In the Bangli Regency LKPD for the 2014 fiscal year, the BPK found that cash in the treasury was missing, there were assets that did not match their acquisition value, and there were weaknesses in the government's internal control system. Then, in the 2015 fiscal year LKPD, Bangli Regency still obtained a WDP opinion because the BPK still found weaknesses in the government's internal control system. Furthermore, based on the examination of the LKPD for fiscal years 2016-2018, Bangli Regency was given an unqualified opinion (Fair without Exception, WTP). This opinion improved from the previous five years. Although Bangli Regency earned the WTP opinion, the BPK still found weaknesses in the internal control system in the preparation of financial reports and non-compliance with applicable laws and regulations, such as inadequate asset administration, inadequate tax revenue management, budgeting for employee expenditure, goods and services expenditure, and capital expenditure not in accordance with the provisions, and inadequate management of grant spending (www.denpasar.bpk.go.id).
The findings obtained by the BPK reflect that there are still weaknesses in the performance of the Bangli Regency government; the government has not yet performed optimally, as can be seen from the evaluation of the performance accountability of government agencies submitted by the BPK. Nasir and Oktari (2013) stated that activities within government agencies need to be measured in terms of performance accountability, one aspect of which is individual performance. Individual performance is a measure that can be used to compare the results of the implementation of the duties of employees or government officials against the responsibilities given by the organization in a certain period, and it can be used to measure work performance or organizational performance (Panggesto, 2014). Thus, individual performance describes the level of achievement of goals or objectives, as a translation of the vision, mission, and strategy of local government agencies, indicating the success or failure of carrying out activities in accordance with the main tasks and functions of the government apparatus.
Government internal control systems are very important in the sustainability of a government agency (Adewale, 2014). Every government agency should have a control system that can minimize existing risks (Saleba, 2014). With the existence of internal control, the entire process of audit activities, review, evaluation, monitoring, and other supervisory activities of the organization in order to provide adequate confidence that the activities have been carried out in accordance with established benchmarks (Dewi, 2015). Research conducted by Putri (2013) in the Padang Regional Government Work Unit (SKPD), research conducted by Tresnawati (2012) at the Bandung City Revenue Service and Trihapsoro (2015) on the Boyolali District SKPD conducted a study of the influence of the government's internal control system on the performance. These researchers found the same results, the government's internal control system had a positive effect on performance. While the research conducted by Shodiq (2001), Boritz and Jee (2007) and Santoso (2016) showed different results, this researcher found that the government's internal control system had no effect on performance.
The performance achieved by an organization is basically the achievements of the members of the organization itself, starting from the top level to the bottom level. In order to realize good governance, the government continues to make various efforts to improve performance, one of which is by improving the overall state administration system (Dewi, 2015). The use of budget is a concept that is often used to see performance. The application of performance-based budgeting to government agencies in Indonesia was announced through the enactment of Law No. 17 of 2003 concerning State Finances and was implemented in stages starting in the 2005 fiscal year.
Budgeting is an important and complex activity, so that a budget is right on target and in accordance with its objectives, it requires good cooperation between subordinates and superiors in preparing the budget. Budgeting participation is the best method in preparing the budget, because all components in the organization are involved in budgeting (Yanti, 2016). The budgeting process is an important activity, because the budget will have a functional and dysfunctional impact on the attitudes and behavior of the members of the organization involved in the preparation process. To prevent dysfunctional impacts by providing subordinates in this case employees or employees to participate in the budgeting process (Silmilian, 2013). Employees who are given the authority to participate are expected to provide good results in their participation in preparing the budget. Employee participation in determining organizational goals to be able to encourage organizational effectiveness to be better and able to minimize conflicts or problems that occur between individual goals with organizational goals. With the participation of individuals in the preparation of the budget is expected to motivate these individuals to achieve budget goals so that it will have an impact on improving individual performance.
Research conducted by Ferawati (2011), Utama (2013) and Saraswati (2015) concerning the effect of budgeting participation on performance found that budgetary participation had a positive effect on performance. However, different results were found from research conducted by Bryan and Locke (1967), Nursidin (2008) and Medhayanti (2015) found that budgetary participation had no effect on performance.
The inconsistency of the results of research that has been done previously makes researchers want to test whether the government's internal control system and budgetary participation affect individual performance by adding work motivation as a moderating variable. By using contingency theory, work motivation is able to act as a moderating variable in the relationship between government internal control systems and budgetary participation in individual performance. Wiguna (2016) states the need for research using a contingency approach because it can be used to test external factors that affect a relationship. Wijaya, et al. (2016) states that motivation provides a positive impact on internal control systems and improves performance. This means that motivation can moderate the effect of the government's internal control system on performance. Then research conducted by Dina (2014) that examined the effect of budgetary participation on performance with work motivation as a moderating variable found that motivation moderates the effect of budgeting participation on performance. In this study using a contingency approach using moderating variables, namely work motivation supported by research conducted by Wijaya et al. (2016) and Dina (2014) who both used work motivation as a moderating variable in the effect of the government's internal control system and budgetary participation on performance.
Methods
This research was conducted at all OPD in Bangli Regency. This location was chosen because the researchers were interested in the BPK-RI opinions on the LKPD of the Bangli Regency Government, which received a disclaimer opinion in 2013, WDP opinions in 2014-2015, and WTP opinions in 2016-2018. The study was carried out in 2019 across the 30 OPD in Bangli Regency. The scope of this study is limited to the role of work motivation as a moderator of the influence of the government's internal control system (SPI) and budgetary participation on individual performance. The population of this study consists of government officials working in the 30 OPD in Bangli Regency. The sampling technique used is a non-probability method with a saturated sampling technique, in which all members of the population are used as the sample. Respondents in this study numbered 60, consisting of Financial Administration Officers (PPK) and OPD Treasurers within the Bangli Regency Government. The collected data were analyzed using variance-based (component-based) Structural Equation Modeling (SEM), known as Partial Least Squares (PLS), with SmartPLS version 3.0.
Result And Discussion
In this study there are four constructs consisting of 2 exogenous variables, namely the Government Internal Control System which is measured by sixteen indicators. Second, Budgetary Participation as measured by six indicators. The endogenous variables in this study are Individual Performance as measured by five indicators. The moderating variable in this study is motivation which is measured by six indicators.
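To make the notion of a moderating variable concrete, the sketch below shows how an interaction term enters a simple regression; ordinary least squares with statsmodels is used purely as a stand-in for the PLS-SEM analysis actually employed in this study, and the column names and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "performance":      [3.2, 3.8, 4.1, 2.9, 4.5, 3.6, 3.1, 4.0],
    "internal_control": [3.0, 3.5, 4.0, 2.8, 4.4, 3.3, 3.2, 3.9],
    "motivation":       [3.1, 3.9, 4.2, 2.7, 4.6, 3.4, 3.0, 4.1],
})
# "a * b" expands to the two main effects plus their interaction (a:b);
# a significant interaction coefficient indicates moderation.
model = smf.ols("performance ~ internal_control * motivation", data=df).fit()
print(model.summary())
```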
A construct is said to have high reliability if the Composite Reliability value is above 0.70 and the Cronbach's Alpha value is above 0.60 (Ghozali, 2008). To strengthen the validity of the constructs in this study, the researchers also used the Average Variance Extracted (AVE) method; a good construct requires an AVE value above 0.50. The AVE test results are described as follows: Table 2 gives AVE values above 0.5 for all constructs contained in the research model. The lowest AVE value is 0.601, for the SPI construct, so it can be concluded that the constructs in this study are valid.
In assessing the model with PLS, one starts by looking at the R-square for each latent dependent variable (Ghozali, 2013). Table 3 shows the R-square estimation results using SmartPLS. Based on the coefficient of determination above, the R-square value for Individual Performance is 0.893, meaning that 89.3% of its variance can be explained by the three construct variables.
The significance of the endogenous indicators can be seen from the T-statistic value. If the t-value is greater than the t-table value, all indicators can be said to be significant in measuring the endogenous constructs. The results of testing with the bootstrapping method of the PLS-SEM analysis are shown in Figure 1. The basis used in testing the hypotheses is the value contained in the output for inner weights. Table 4 provides the estimated outputs for testing the structural models. The first hypothesis test shows that the relationship of the internal control system to individual performance has a parameter coefficient of 0.538 with a t-value of 3.817, which is greater than the t-table value (1.906). These results indicate that the government's internal control system has a positive and significant relationship to individual performance, which means that hypothesis 1 is accepted. This shows that an internal control system applied in accordance with existing work rules will improve individual performance. Putri (2013) states that a good internal control system in an organization is able to create a good overall process of activities, so that it provides individuals with confidence that the activities carried out are in accordance with established benchmarks. This study obtained the same results as the research conducted by Trihapsoro (2015) and Njeri (2014), which stated that the government's internal control system has a positive effect on individual performance.
b. Effects of Budgetary Participation on Individual Performance
The results of the second hypothesis testing show that the relationship between budgetary participation and individual performance has a parameter coefficient of 0.164 with t of 0.797 where the value is smaller than t table (1.906). These results indicate that budgetary participation has a positive and not significant relationship to individual performance, which means that hypothesis 2 is rejected. This means that the level of employee involvement is low in the budget preparation process.
Individuals are indeed involved in the budgeting process, but the involvement of an individual is limited to participating in the plan as an obligation, and that involvement is not matched by the use of the individual's creative ideas. Being genuinely involved and working well should produce good performance. The results of this study support the research conducted by Andison and Augustine (2017) and Janah and Rahayu (2015). However, this study does not support the research done by Ermawati (2012) and Aisyah, et al. (2017).
c. Effect of Work Motivation Moderating the Government Internal Control System on Individual Performance
The output results show that the interaction coefficient value of the government's internal control system with work motivation of -0.193 with t of 2.067 where the value is greater than t table (1.906). Based on these data, work motivation moderates the government's internal control system on individual performance, which means hypothesis 3 is accepted. Positive and significant value of the coefficient of interaction between the government's internal control system and work motivation means that work motivation can strengthen the effect of the government's internal control system on individual performance. This means, with a good work motivation through a good government internal control system can affect individual performance.
Research conducted by Atmadja, et al. (2014) states that motivation has a significant influence on internal control systems. This indicates that motivation alone can influence internal auditors to work and provide useful statements for the effectiveness of internal control systems. This research is supported by research conducted by Wijaya, et al. (2016) which states that motivation provides a positive impact on internal control systems and improves performance. This means that motivation strengthens the influence of the government's internal control system on individual performance.
d. Effect of Work Motivation Moderate Budgeting Participation On Individual Performance
The output results show that the interaction coefficient value of budget participation with work motivation is 0.251 with t equal to 2.659 where the value is greater than t table (1.906). Based on these data, work motivation moderates the effect of budgetary participation on individual performance, which means hypothesis 4 is accepted. Positive and significant value of the interaction coefficient of budgeting participation with work motivation means that work motivation can strengthen the influence of budgetary participation on individual performance. This means, with the existence of work motivation through empowerment in anticipation of budgeting can affect maximum individual performance at government agencies.
High or low levels of employee motivation in carrying out the budgeting process can affect the performance of the employee. The higher the motivation of the employee, the more effective the performance of the employee in the budgeting process, because the high motivation possessed by an employee allows the employee to be better at participating in budgeting. The results of this study are in line with research conducted by Adiputra (2002), Becker and Green (1992) in Riyadi (1998) and Mia (1998) which state that work motivation has a significant effect in moderating the influence of budgetary participation on performance. Where motivation has a significant effect on the relationship between budgetary participation and performance
Conclusion
Based on the results of data analysis and discussion described in the previous chapter, it can be concluded that for the first direct test the results state that there is a significant positive influence between the internal control system and individual performance. The second test, the results state that participation in preparing a budget has a positive but not significant relationship to individual performance. In addition, testing with moderating variables obtained results including work motivation which moderates the effect of the internal control system on individual performance. Work motivation is a moderating effect of participation in preparing a budget on individual performance. | 2020-04-23T09:08:18.450Z | 2020-04-20T00:00:00.000 | {
"year": 2020,
"sha1": "cee80b38e61b1740f60dd8fd96130bf19ccc9b27",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.undiksha.ac.id/index.php/IJSSB/article/download/24341/14961",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b86765ccbbff25e43315c0e8fe553808772d1659",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
233481174 | pes2o/s2orc | v3-fos-license | Annihilators of local cohomology modules via a classification theorem of the dominant resolving subcategories
This paper investigates when local cohomology modules have an annihilator that does not depend on the choice of an ideal. Takahashi classified the dominant resolving subcategories of the category of finitely generated modules over a commutative Noetherian ring. We show that his classification theorem describes annihilation results of local cohomology modules over a finite-dimensional ring with certain assumptions or a Cohen-Macaulay ring.
Introduction
Throughout this paper, let R be a commutative Noetherian ring. The vanishing of local cohomology modules has been widely studied in local cohomology theory. Grothendieck's vanishing theorem states, for instance, that H^i_a(M) = 0 for every ideal a of R, every R-module M, and every integer i > dim M. Vanishing phenomena of local cohomology modules can also be seen with the help of other factors. In one example, Faltings' annihilator theorem [5] states that some power of the second ideal annihilates local cohomology modules over a homomorphic image of a regular ring; see also [1, Theorem 9.5.1]. In other examples, Raghavan established the uniform annihilation theorem over a homomorphic image R of a biequidimensional regular ring with a finite dimension; see [7, Theorem 3.1]. His theorem guarantees that there exists an integer n (depending only on a finitely generated R-module M) such that b^n H^i_a(M) = 0 for all ideals a, b of R and all integers i < λ^b_a(M) = inf {depth M_p + ht (a + p)/p | p ∈ Spec(R) \ V(b)}. In particular, the ideal b^n has some annihilator s such that sH^i_a(M) = 0 for all ideals a of R and all integers i < λ^b_a(M). In 2006, Zhou showed that, if a locally equidimensional ring R of finite positive dimension is a homomorphic image of a Cohen-Macaulay ring of finite dimension or an excellent local ring, then R has a uniform local cohomological annihilator; see [10, Corollary 3.3]. A uniform local cohomological annihilator is an element s ∈ R \ (∪_{p∈Min(R)} p) such that, for each maximal ideal m of R, one has sH^i_m(R) = 0 for all integers i < ht m. It should be noted that these annihilators do not depend on the choice of the maximal ideal.
The purpose of this paper is to study vanishing phenomena of local cohomology modules. We start by focusing on [1, Lemma 9.4.3], which is related to the difficult part of a proof of Faltings' annihilator theorem; see [1, Chapter 9.4]. This lemma is concerned with the fact that, if M_p is a non-zero free R_p-module for a finitely generated R-module M and p ∈ Spec(R), then there exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R). In this regard, the paper deals with the following question as to when a similar vanishing phenomenon occurs. It goes without saying that the interesting case is when a prime ideal p belongs to Supp_R(M); see also [1, Lemma 9.4.1].
Question. Let M be a finitely generated R-module and let p ∈ Spec(R). When does there exist an element s ∈ R \ p such that sH i a (M ) = 0 for all ideals a of R and all integers i < grade(a, R)?
Our strategy stems from Dao and Takahashi's classification theorem of the dominant resolving subcategories of the category of finitely generated modules over a Cohen-Macaulay ring; see [4,Theorem 1.4]. Furthermore, it should be noted that Takahashi recently classified such subcategories without the Cohen-Macaulayness of rings; see [8,Theorem 5.4]. These theorems suggest that dominant resolving subcategories have a close relationship with the existence of annihilators of local cohomology modules. Indeed, using [4, Theorem 1.4], we provided an answer to the above question in a finite-dimensional Cohen-Macaulay ring; see [9,Theorem 4.3].
One of our aims is to establish the following theorem, which shows that [8,Theorem 5.4] removes the assumption of finite dimension from [9,Theorem 4.3].
Theorem 1.1. Suppose that R is a Cohen-Macaulay ring, and let p ∈ Spec(R). Then, for a finitely generated R-module M , the following conditions are equivalent.
(1) There exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) The R_p-module M_p is a maximal Cohen-Macaulay R_p-module.
Another purpose is to investigate the above question under (not necessarily Cohen-Macaulay) finite-dimensional rings. The following theorem is one of the main results, which provides an answer to our question under some assumptions. Theorem 1.3. Suppose that R is a finite-dimensional ring, and let p be a prime ideal of R with depth R_q = grade(q, R) for all q ∈ U(p) = {q ∈ Spec(R) | q ⊆ p}. Then, for a finitely generated R-module M, the following conditions are equivalent.
(1) There exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) One has depth M_q ≧ depth R_q for all q ∈ U(p).
The above assumption on prime ideals is not an unduly restrictive condition. Indeed, our assumption is very similar to the Cohen-Macaulayness of rings: [3, Lemma 3.1] states that one has depth R_q = grade(q, R) for all q ∈ Spec(R) if and only if R is an almost Cohen-Macaulay ring; for details, see also Remark 4.2 (2).
Finally, we can investigate all finitely generated modules by combining Theorem 1.3 with the following result.
The organization of this paper is as follows. We dedicate section 2 to the preparation: notations, definitions, and fundamental facts. Section 3 provides the essential result in this paper; see Theorem 3.4. This result describes that Takahashi's classification theorem [8,Theorem 5.4] provides a necessary and sufficient condition that local cohomology modules have an annihilator that does not depend on the choice of the ideal. After showing Theorem 1.3 and Corollary 1.4 in section 4, we establish Theorem 1.1 and Corollary 1.2 in section 5.
Preliminaries
Throughout this paper, let R be a commutative Noetherian ring, and all modules are assumed to be unitary. We denote by R-mod the category of finitely generated R-modules. We suppose that all subcategories of R-mod are full and closed under taking isomorphisms. The symbol N 0 denotes the set of non-negative integers. We adopt the convention that the grade, the depth, and the dimension of the zero module are ∞.
First of all, we will recall a set of N 0 -valued functions on Spec(R) that was introduced by Takahashi in [8, Definition 5.1 (1)]. We note that the symbol N in the paper [8] stands for the set of non-negative integers.
Definition 2.1. Let C be a subcategory of R-mod. We denote by F(C) the set of maps f : Spec(R) → N_0 such that, for every p ∈ Spec(R), there exists E ∈ C satisfying the following two conditions: (a) depth R_p − depth E_p = f(p), and (b) depth R_q − depth E_q ≦ f(q) for all q ∈ Spec(R).
Next, we need to recall the notions of a dominant subcategory and a resolving subcategory of R-mod. We denote by Ω^n_R M the nth syzygy module of an R-module M. In particular, we will write Ω^1_R M = Ω_R M, which is the kernel of some epimorphism from a projective module to M. Note that Ω^n_R M is uniquely determined up to projective summands.
(1) We denote by add X the subcategory of R-mod consisting of the modules isomorphic to direct summands of finite direct sums of copies of modules in X .
(3) We say that X is a resolving subcategory if X contains all finitely generated projective modules and is closed under taking direct summands, extensions, and syzygies.
Takahashi showed that the set F(R-mod) classifies the dominant resolving subcategories of R-mod; see [8, Definition 5.1 and Theorem 5.4].
Theorem 2.3 (Takahashi). There exist mutually inverse order-preserving bijections φ and ψ between the set of dominant resolving subcategories of R-mod and the set F(R-mod), where the maps φ and ψ are given explicitly in [8, Theorem 5.4].
Finally, we will give the notion of a subcategory associated with local cohomology modules and summarize some properties. We adopt the convention that the local cohomology functor H^i_a(−) is the zero functor for all ideals a of R and all negative integers i.
Definition 2.4. Let p be a prime ideal of R. (1) We denote by R(p) the subcategory of R-mod consisting of the finitely generated R-modules M such that there exists s ∈ R \ p with sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) We denote by U(p) the generalization-closed subset {q ∈ Spec(R) | q ⊆ p} of Spec(R) for p ∈ Spec(R).
A finitely generated R-module M over an arbitrary ring R is called a maximal Cohen-Macaulay module if depth M p ≧ dim R p for all p ∈ Spec(R). We note that the depth of the zero module is ∞, and thus we consider the zero module to be maximal Cohen-Macaulay.
We suppose that R is a Cohen-Macaulay ring. Then the subcategory R(p) contains all maximal Cohen-Macaulay R-modules. Our statement is proved by the same argument as in the proof of [9, Proposition 4.2], without the assumption of Cohen-Macaulayness for the ring R. Furthermore, since R is a Cohen-Macaulay ring, we deduce from [8, Corollary 4.7] that the resolving subcategory R(p) is dominant. (We note that the assertion of [8, Corollary 4.7] has been proved without assuming that the base ring has a finite dimension.)
3. Conditions for the existence of an annihilator of local cohomology modules
[8, Theorem 5.4] implies that, if the resolving subcategory R(p) for p ∈ Spec(R) is dominant, then there exists a map f_{R(p)} ∈ F(R-mod) corresponding to R(p). In this section, we will investigate the relationship between the values of the map f_{R(p)} and the existence of annihilators of local cohomology modules.
First of all, in the case when the resolving subcategory R(p) is dominant, the properties in Proposition 2.5 (1) and (2) determine a range of value of the map f R(p) . Lemma 3.1. Let q be a prime ideal of R. Suppose that a dominant resolving subcategory X of R-mod corresponds to a map f ∈ F(R-mod) that is given by Theorem 2.3. Then the following assertions hold.
(1) Let n(q) be an integer with 0 ≦ n(q) ≦ depth R q . If X is contained in {M ∈ R-mod | depth M q ≧ n(q)}, then one has the inequalities 0 ≦ f (q) ≦ depth R q − n(q). (2) If the R-module R/q is in X , then one has the equality f (q) = depth R q .
Proof. [8, Theorem 5.4] states that the subcategory X can be described as follows:
X = {M ∈ R-mod | depth R_p − depth M_p ≦ f(p) for all p ∈ Spec(R)}.   (*)
(1) We note that the map f ∈ F(R-mod) is an N_0-valued map. Therefore, we need to establish the inequality f(q) ≦ depth R_q − n(q).
By the definition of F(R-mod), for the prime ideal q, there exists a finitely generated R-module E(q) satisfying depth R q − depth E(q) q = f (q), and depth R p − depth E(q) p ≦ f (p) for all p ∈ Spec(R).
The above equality (*) implies that the module E(q) is in the subcategory X. Therefore, our assumption yields the inequality depth E(q)_q ≧ n(q). Consequently, we achieve the equality and the inequality f(q) = depth R_q − depth E(q)_q ≦ depth R_q − n(q).
(2) Since our assumption says that the module R/q is in the subcategory X, the above equality (*) yields depth R_q − depth (R/q)_q ≦ f(q). On the other hand, the definition of F(R-mod) gives the equality depth R_q − depth E(q)_q = f(q) for some finitely generated R-module E(q). Consequently, using depth (R/q)_q = 0, we have depth R_q ≦ f(q) ≦ depth R_q, that is, the equality f(q) = depth R_q.
Remark 3.2. Let q be a prime ideal of R.
(1) We note that each map f ∈ F(R-mod) satisfies inequalities 0 ≦ f (p) ≦ depth R p for all p ∈ Spec(R). Considering this fact, the assumption of inequalities 0 ≦ n(q) ≦ depth R q is appropriate to investigate the assertion (1) in Lemma 3.1.
(2) [2, Proposition 1.2.10 (a)] states that one has depth (R/q)_q = 0.
Next, Lemma 3.1 (2) allows us to put the equality (*) in the proof of Lemma 3.1 into a slightly simpler form.
Lemma 3.3. We suppose that a resolving subcategory X of R-mod is dominant. By Theorem 2.3, the subcategory X corresponds to a map f ∈ F(R-mod). Let S be a subset of the set {q ∈ Spec(R) | R/q ∈ X}. Then one has the following equality of subcategories of R-mod:
X = {M ∈ R-mod | depth R_q − depth M_q ≦ f(q) for all q ∈ Spec(R) \ S}.
Proof. Let q be a prime ideal in the set S, and let M be a finitely generated R-module. Since the R-module R/q is in the subcategory X, Lemma 3.1 (2) yields f(q) = depth R_q, so that the inequality depth R_q − depth M_q ≦ f(q) holds automatically. Consequently, we can establish the claimed equality of subcategories of R-mod.
Let p be a prime ideal of R. According to Proposition 2.5 (2), the subcategory R(p) satisfies the relation Spec(R) \ U(p) ⊆ {q ∈ Spec(R) | R/q ∈ R(p)}. Consequently, as an immediate consequence of Lemma 3.3, we now present the essential result in this paper. The following theorem provides a necessary and sufficient condition that local cohomology modules have a dominant annihilator, in the sense that this annihilator does not depend on the choice of the ideal for local cohomology modules.
Theorem 3.4. Let p be a prime ideal of R. Suppose that the resolving subcategory R(p) is dominant, and also that R(p) corresponds to the map f R(p) ∈ F(R-mod) that is given by Theorem 2.3. Then, for a finitely generated R-module M , the following conditions are equivalent.
(1) There exists an element s ∈ R \ p such that sH i a (M ) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) One has depth R_q − depth M_q ≦ f_{R(p)}(q) for all q ∈ U(p). Theorem 3.4 reveals that Takahashi's classification theorem [8, Theorem 5.4] suggests the following conclusion: the questions below have a close relationship with the existence of a dominant annihilator of local cohomology modules.
Question. Let p be a prime ideal of R.
(1) When is the resolving subcategory R(p) dominant? (2) What value does f R(p) (q) take for each q ∈ U (p)? Proposition 2.5 (4) (respectively, (5)) states that the above question (1) has an affirmative answer when R is a finite-dimensional ring (respectively, a Cohen-Macaulay ring). Regarding the above question (2), the next section will give an answer when R has a finite dimension under certain assumptions. Furthermore, the last section will provide a complete answer when R is a Cohen-Macaulay ring.
The case of finite-dimensional rings
The purpose of this section is to establish annihilation results for local cohomology modules over a finite-dimensional ring with certain assumptions.
In the case when a prime ideal p of a finite-dimensional ring R has the equality depth R q = grade(q, R) for all q ∈ U (p), we can describe a condition for the existence of an annihilator of local cohomology modules without using the notion of maps in F(R-mod).
Theorem 4.1. Let p be a prime ideal of R with depth R q = grade(q, R) for all q ∈ U (p). Suppose that the resolving subcategory R(p) is dominant (e.g., the ring R has a finite dimension). Then, for a finitely generated R-module M , the following conditions are equivalent.
(1) There exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) One has sup_{q∈U(p)} {depth R_q − depth M_q} ≦ 0, that is, depth M_q ≧ depth R_q for all q ∈ U(p).
Proof. Note that finite-dimensional rings have the dominant resolving subcategory R(p) by Proposition 2.5 (4).
Let R be a (not necessarily finite-dimensional) ring such that the resolving subcategory R(p) is dominant. Then [8,Theorem 5.4] gives the map f ∈ F(R-mod) corresponding to the subcategory R(p).
We will show that the equality f(q) = 0 holds for each q ∈ U(p). Indeed, Proposition 2.5 (1) and our assumption yield the inclusion relation R(p) ⊆ {M ∈ R-mod | depth M_q ≧ grade(q, R) = depth R_q}. Thus, we can deduce from Lemma 3.1 (1) that 0 ≦ f(q) ≦ depth R_q − depth R_q = 0. Theorem 3.4 establishes the equalities R(p) = {M ∈ R-mod | depth R_q − depth M_q ≦ f(q) = 0 for all q ∈ U(p)} = {M ∈ R-mod | sup_{q∈U(p)} {depth R_q − depth M_q} ≦ 0}. These equalities mean that an R-module M is in the subcategory R(p) if and only if one has sup_{q∈U(p)} {depth R_q − depth M_q} ≦ 0.
Remark 4.2. (1) In the preceding section, we talked about the question for the value of f_{R(p)}(q) for each q ∈ U(p). As in the proof of Theorem 4.1, if R is a finite-dimensional ring and p satisfies the equality depth R_q = grade(q, R) for all q ∈ U(p), then one has the value f_{R(p)}(q) = 0 for each q ∈ U(p).
Theorem 4.1 shows that the discussion now turns to the existence of annihilators of local cohomology modules for a finitely generated R-module M with sup_{q∈U(p)} {depth R_q − depth M_q} > 0. Theorem 4.1 yields the following corollary as an answer to the discussion about such modules.
Corollary 4.3. Let p be a prime ideal of R with depth R_q = grade(q, R) for all q ∈ U(p). Suppose that the resolving subcategory R(p) is dominant (e.g., the ring R has a finite dimension). Let M be a finitely generated R-module such that n = sup_{q∈U(p)} {depth R_q − depth M_q} is a finite non-negative integer. Then there exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R) − n.
Proof. Our assumption guarantees that n is a non-negative integer.
In the case when n = 0, our assertion follows immediately from Theorem 4.1. Now suppose that n ≧ 1. Since each prime ideal q in U(p) has depth R_q ≦ depth M_q + n, the depth lemma [2, Proposition 1.2.9] yields the inequalities and the equality depth ((Ω^n_R M)_q) ≧ min{depth R_q, depth M_q + n} = depth R_q. Therefore, one has the inequality sup_{q∈U(p)} {depth R_q − depth ((Ω^n_R M)_q)} ≦ 0. We now apply Theorem 4.1 to the module Ω^n_R M. Then there exists an element s ∈ R \ p such that sH^i_a(Ω^n_R M) = 0 for all ideals a of R and all integers i < grade(a, R). Our aim is to show that the above element s yields sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R) − n. To prove our assertion, we now fix an ideal a of R.
We suppose that one has grade(a, R) − n ≦ 0. Since the local cohomology functor H i a (−) is the zero functor for all integers i < 0, we can achieve the equality sH i a (M ) = 0 for all integers i < grade(a, R) − n.
Next suppose that we have grade(a, R) − n ≧ 1. It should be noted that one then has the inequality grade(a, R) ≧ 2. A projective resolution · · · → P_k → · · · → P_1 → P_0 → M → 0 of the R-module M provides short exact sequences 0 → Ω^k_R M → P_{k−1} → Ω^{k−1}_R M → 0, and hence induced exact sequences H^i_a(P_{k−1}) → H^i_a(Ω^{k−1}_R M) → H^{i+1}_a(Ω^k_R M) → H^{i+1}_a(P_{k−1}), for all positive integers i and k.
Since the R-module P_{k−1} is projective, or a direct summand of a free module, [1, Theorem 6.2.7] yields H^i_a(P_{k−1}) = 0 for all integers i < grade(a, R). Consequently, the above element s provides that sH^i_a(M) = sH^{i+n}_a(Ω^n_R M) = 0 for all integers i with i + n < grade(a, R). The proof of our corollary is completed.
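For the reader's convenience, the degree shift implicit in this last step can be spelled out; it is a standard argument. Since H^j_a(P_{k−1}) = 0 for all j < grade(a, R), the induced exact sequences above give R-linear isomorphisms

H^i_a(M) ≅ H^{i+1}_a(Ω_R M) ≅ · · · ≅ H^{i+n}_a(Ω^n_R M)   for all integers i with i + n < grade(a, R),

so any element annihilating the right-hand term also annihilates H^i_a(M).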
The case of Cohen-Macaulay rings
This section will investigate the existence of annihilators of local cohomology modules over a Cohen-Macaulay ring. In particular, our purpose is to verify that Cohen-Macaulay rings establish results similar to Theorem 4.1 and Corollary 4.3.
We begin with the following easy lemma.
Lemma 5.1. We suppose that R is a Cohen-Macaulay ring. Let p ∈ Spec(R), and let M be a finitely generated R-module with depth M p ≧ depth R p . Then one has depth M q ≧ depth R q for all q ∈ U (p).
Proof. Let q be a prime ideal of R with q ⊆ p. Since R is a Cohen-Macaulay ring, the module M satisfies the inequalities and the equalities depth M_q ≧ depth M_p − dim (R_p/qR_p) ≧ depth R_p − dim (R_p/qR_p) = dim R_p − dim (R_p/qR_p) = dim R_q = depth R_q.
We now achieve the following conclusion about when local cohomology modules over a Cohen-Macaulay ring have an annihilator that does not depend on the choice of the ideal. The result below removes from [9, Theorem 4.3] the assumption that the base ring has a finite dimension.
Theorem 5.2. Suppose that R is a Cohen-Macaulay ring, and let p ∈ Spec(R). Then, for a finitely generated R-module M , the following conditions are equivalent.
(1) There exists an element s ∈ R \ p such that sH^i_a(M) = 0 for all ideals a of R and all integers i < grade(a, R).
(2) The R_p-module M_p is a maximal Cohen-Macaulay R_p-module.
Proof. Note that the Cohen-Macaulay ring R has the dominant resolving subcategory R(p) by Proposition 2.5 (5). Hence, [8,Theorem 5.4] gives the map f ∈ F(R-mod) corresponding to the subcategory R(p).
We claim that the equality f(q) = 0 holds for each q ∈ U(p). Indeed, since R is a Cohen-Macaulay ring, [2, Theorem 2.1.3] yields the equality depth R_q = grade(q, R). It therefore follows from Proposition 2.5 (1) that R(p) ⊆ {M ∈ R-mod | depth M_q ≧ grade(q, R) = depth R_q}. Applying Lemma 3.1, we obtain 0 ≦ f(q) = depth R_q − depth R_q = 0.
We now establish the equalities R(p) = {M ∈ R-mod | depth R_q − depth M_q ≦ f(q) for all q ∈ U(p)} = {M ∈ R-mod | depth M_q ≧ depth R_q for all q ∈ U(p)} = {M ∈ R-mod | depth M_p ≧ depth R_p} = {M ∈ R-mod | M_p is a maximal Cohen-Macaulay R_p-module}, using Theorem 3.4, the above claim, Lemma 5.1, and the Cohen-Macaulayness of the ring R, respectively. Consequently, we can conclude that an R-module M is in the subcategory R(p) if and only if the R_p-module M_p is a maximal Cohen-Macaulay R_p-module.
Remark 5.3. (1) When R is a Cohen-Macaulay ring, the proof of Theorem 5.2 presents the complete answer to question (2) in Section 3. Indeed, with the notation of Theorem 3.4, we have already established the equality f_{R(p)}(q) = 0 for each q ∈ U(p).
Note that, for each q ∈ Spec(R), one has depth M q = ∞ if and only if M q = 0 if and only if CM-dim Rq M q = −∞.
The corollary below is a generalization of [9, Corollary 4.6], which guarantees the existence of annihilators of local cohomology modules for finitely generated modules over a finite-dimensional Cohen-Macaulay ring. Using Theorem 5.2, the same proof for [9, Corollary 4.6] works without the assumption that the base ring has a finite dimension.
Corollary 5.4. Suppose that R is a Cohen-Macaulay ring, and let p ∈ Spec(R). Then, for a finitely generated R-module M , there exists an element s ∈ R \ p such that sH i a (M ) = 0 for all ideals a of R and all integers i < grade(a, R) − CM-dim Rp M p .
Remark 5.5. Let R be a Cohen-Macaulay ring, let p be a prime ideal of R, and let M be a finitely generated R-module.
(1) We can regard Corollary 5.4 as an immediate consequence of Corollary 4.3 by the equalities in Remark 5.3 (2). Note that, in the case when CM-dim Rp M p < 0, we have the equality CM-dim Rp M p = −∞. Namely, one has M p = 0. It is easy to see that there exists an element s ∈ R \ p such that sM = 0. Since local cohomology functors are R-linear by [1, Properties 1.2.2], we achieve the equalities sH i a (M ) = 0 for all ideals a of R and all integers i; see also [1, Lemma 9.4.1].
(2) Let t be a non-negative integer. We suppose that M has an element s ∈ R \ p such that sH i a (M ) = 0 for all ideals a of R and all integers i < grade(a, R) − t. Then the following investigation will lead to the inequality t ≧ CM-dim Rp M p .
When the equality M p = 0 holds, one has CM-dim Rp M p = −∞. Therefore, there is nothing to prove.
Next suppose that M_p ≠ 0. We now take the prime ideal p as the above ideal a. The flat base change theorem [1, Theorem 4.3.2] yields H^i_{pR_p}(M_p) = 0 for all integers i < grade(p, R) − t. Since Nakayama's lemma guarantees that pR_p M_p ≠ M_p, we deduce from [1, Theorem 6.2.7] and [2, Theorem 2.1.3] that depth M_p ≧ grade(p, R) − t = depth R_p − t.
Consequently, we can conclude t ≧ depth R p − depth M p = CM-dim Rp M p by [6, Theorems 3.8 and 3.9]. | 2021-05-04T01:16:06.541Z | 2021-05-03T00:00:00.000 | {
"year": 2021,
"sha1": "164795fd5d856e7a62803326b16d2a48445a31e9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e5a81c33d33104bc78232e04a08bca87411d53dd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
219515928 | pes2o/s2orc | v3-fos-license | The Influence of Audit Opinion and Managerial Ownership on Income Smoothing in Banking Companies
This research was conducted on banking companies that are included in the top 30 in Indonesia. The data analysis methods used in this study are descriptive statistical analysis and logistic regression, with data processing using SPSS 21. The results of this study indicate that audit opinion has a negative but not significant effect on income smoothing; a low audit opinion does not indicate that the company is engaging in income smoothing. Managerial ownership has a significant negative effect on income smoothing, indicating that share ownership by managers can reduce managers' tendency to engage in income smoothing.
INTRODUCTION
The financial report is a company facility to convey financial information that contains the accountability of management to fulfill the needs of external parties, namely information on company performance. According to the Statement of Financial Accounting Concept (SFAC) No. 1, earnings information is the main concern to assess performance or accountability of management.
Besides that, earnings information also helps the owner or other parties in estimating the company's earnings power in the future. The existence of a tendency to pay more attention to this profit is realized by management, especially managers whose performance is measured based on earnings information, thus encouraging the emergence of deviant behavior, one of which is earnings management (Putri et al, 2014).
The issue of information disclosure has received global attention from various parties due to its importance to investors in making investment decisions. The information asymmetry that exists between managers and shareholders has positioned managers above shareholders in terms of information advantage about the firm. Managers have exclusive access to operational information about the firm's actions and future prospects, which gives them compelling reasons to ensure the confidentiality of that information. Therefore, firms with a weak information disclosure policy could allow managers to take advantage by pursuing their self-interests at the expense of their shareholders (Ghani et al., 2016). Strategy alignment between organizational objectives and business units and support functions is crucial for organizational success. An organization is able to execute its strategy well and compete with its rivals if organizational strategies are linked to business units and support functions within the organization (Yuliansyah, 2015).
The larger the bank, the more information it is likely to have compared with smaller banks, and the more information is available for decision making. Bank size can serve as an indicator for investors assessing a bank's performance. Large banks generate relatively large profits, which can lead them to engage in earnings management, one of the main reasons being to meet the expectations of investors or shareholders (Agusti and Tyas, 2009).
The financial details reported by a company are important information for investors, who base their decisions in part on total earnings as an indicator of the company's financial performance (Makhsun et al., 2018).
Earnings management arises as a consequence of the agency problem, which occurs because of a conflict of interest between shareholders (the principal) and company management (the agent). Income smoothing is a management intervention in external financial reporting intended to benefit the manager. This conflict arises from gaps in the information provided, which is why financial statement audits by competent and independent third parties are required (Lin and Mark, 2010). Company share ownership by management aims to align the interests of the principal and the agent. Giving managers the opportunity to hold shares aligns their interests with those of shareholders, so the greater the proportion of managerial ownership, the more actively management will work for the benefit of shareholders, a group that now includes themselves.
HYPOTHESIS
Agency Theory
Agency theory is a theory that addresses differences in interests between agents and principals (Jensen & Meckling, 1976). Scott (2012) argues that agency theory is the most appropriate form of contract design to integrate principal and agent interests in the event of a conflict of interest. The company has many contracts, for example a work contract between the company and its managers and a loan contract between the company and its creditors.
The employment contract referred to here is the work contract between the owner of capital and the manager of the company. Both the agent and the principal want to maximize their utility using the information they hold, but the agent has more information than the principal, which gives rise to information asymmetry. Because managers hold more information, they may be encouraged to act according to their own wishes and interests, while capital owners find it difficult to effectively control managers' actions because they have only limited information.
Therefore, sometimes there are certain policies carried out by company managers without the knowledge of the capital owners or investors.
Auditing
Auditing is a critical and systematic examination by an independent party of the financial statements prepared by management, together with the accounting records and supporting evidence, in order to provide an opinion on the fairness of the financial statements. Auditor quality is needed to produce good financial statements; high-quality auditors are expected to increase investor confidence (Agoes, 2007).
Unqualified opinion with explanatory language
This opinion is given if the audit has been carried out or completed in accordance with auditing standards, the presentation of financial statements in accordance with generally accepted accounting principles, but there are certain circumstances or conditions that require an explanatory language.
Qualified opinion
According to IAI (2002), this type of opinion is
Managerial ownership
According to Sugiarto (2009), managerial ownership is ownership of shares by the company's management. The greater the managerial ownership in the company, the harder management will try to improve its performance for the benefit of shareholders, thereby avoiding earnings management by company managers.
Earnings management
Earnings management is one of the factors that can reduce the credibility of financial statements; it adds bias to them and misleads users who take engineered profit figures to be unmanipulated numbers (Wiryadi and Sebrina, 2013). Schipper (1989) defines earnings management as deliberate management intervention in the process of determining earnings, usually to fulfill personal goals.
The pattern of earnings management according to Scott (2012) can be done by:
Taking a bath
This pattern occurs during reorganization, including the appointment of a new CEO, by reporting large losses. This action is expected to increase profits in the future because the burden of future periods is reduced.
Income minimization
Income minimization is carried out when the company has a high level of profitability; if profit is expected to drop dramatically in the next period, the shortfall can be offset by the profit deferred from the current period.
This action is taken with the aim of avoiding political attention.
Income maximization
This pattern aims to report high net income for the purpose of greater bonuses, motivation to avoid violations of debt agreements, or to avoid a sharp drop in stock prices. Income maximization is applied when profit decreases.
This pattern is carried out by drawing on profit set aside in previous periods or by borrowing profit from future periods, for example by delaying the recognition of expenses.
Income smoothing
This pattern is carried out by the company by leveling the reported earnings, so as to reduce the fluctuations in profit that are too large, because investors generally prefer relatively stable profits.
Influence of Audit Opinion on Income Smoothing
De Angelo (1981) states that audit quality is the probability that an auditor both discovers and reports a violation in the client's accounting system. The Eckel index calculation results are presented in Table 1, as follows.
Index Eckel Calculation Results
Companies are classified using the Eckel index: (a) if the Eckel index value is ≥ 1, the company is classified as not practicing income smoothing; (b) if the Eckel index is < 1, the company is classified as practicing income smoothing. Table 2 shows the number of companies that do and do not practice income smoothing.
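The Eckel index itself is conventionally computed as the ratio of the coefficient of variation of the change in income to the coefficient of variation of the change in sales. The sketch below illustrates that calculation and the classification rule above; the figures are hypothetical and the function names are ours, not part of the study's SPSS workflow.

```python
import numpy as np

def coefficient_of_variation(x):
    """CV = sample standard deviation divided by the mean of the series."""
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x)

def eckel_index(net_income, sales):
    """Eckel index: CV of the change in income over CV of the change in sales."""
    return coefficient_of_variation(np.diff(net_income)) / coefficient_of_variation(np.diff(sales))

def smoothing_dummy(index_value):
    """1 if the company is classified as an income smoother (index < 1), else 0."""
    return 1 if index_value < 1 else 0

# Hypothetical five-year series of net income and sales for one bank (in billions of rupiah).
income = [120, 125, 128, 131, 133]
sales = [900, 980, 870, 1100, 1020]
idx = eckel_index(income, sales)
print(f"Eckel index = {idx:.2f}, smoothing dummy = {smoothing_dummy(idx)}")
```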
Logistic Regression Test Results
Logistic regression was used in this study because the dependent variable in the study was a dummy variable. Logistic regression is used to test whether the probability of the occurrence of the dependent variable can be predicted by the independent variable (Ghazali, 2013).
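As an illustration of the kind of model described here, the sketch below fits a logistic regression of a binary income-smoothing dummy on audit opinion and managerial ownership. The data frame and variable codings are hypothetical, and the estimation uses Python's statsmodels rather than the SPSS 21 workflow reported in the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical bank-year observations: audit_opinion (1 = unqualified, 0 = otherwise),
# managerial_ownership (fraction of shares held by management), income_smoothing dummy.
data = pd.DataFrame({
    "audit_opinion":        [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "managerial_ownership": [0.02, 0.10, 0.01, 0.15, 0.09, 0.02, 0.20, 0.03, 0.12, 0.01],
    "income_smoothing":     [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

X = sm.add_constant(data[["audit_opinion", "managerial_ownership"]])
model = sm.Logit(data["income_smoothing"], X).fit(disp=False)
print(model.summary())  # coefficients, standard errors and Wald tests
```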
Following are the results of the logistic regression analysis test:
Model Feasibility Test
The first step in establishing that a logistic regression model is appropriate is to assess the fit or feasibility of the overall model. The Hosmer and Lemeshow test is used for this purpose. The Hosmer and Lemeshow test results are as follows.
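For readers without SPSS, the Hosmer–Lemeshow statistic can be reproduced along the following lines: observations are grouped by their fitted probabilities and observed events are compared with expected events in each group. This is an illustrative sketch (the group count, helper name, and the commented example are ours), not the study's output.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Goodness-of-fit: chi-square over groups formed from sorted fitted probabilities."""
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    statistic = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), groups):
        observed = y_true[idx].sum()
        expected = y_prob[idx].sum()
        n, mean_p = len(idx), y_prob[idx].mean()
        statistic += (observed - expected) ** 2 / (n * mean_p * (1 - mean_p))
    return statistic, chi2.sf(statistic, df=groups - 2)

# Usage with a fitted model (larger samples than the toy data above are needed in practice):
# stat, p_value = hosmer_lemeshow(y, fitted_model.predict(X))
# A p-value above 0.05 means the model cannot be rejected as ill-fitting.
```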
Model Fit Test (-2 log likelihood)
Testing is done by comparing the initial -2 log likelihood (block number = 0) with the final -2 log likelihood (block number = 1). If there is a decline, the model is a good regression model. The Model Fit Test tables are as follows:
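The same block comparison can be read off a fitted statsmodels object, as sketched below for the hypothetical model estimated earlier; SPSS reports the equivalent figures in its block 0 and block 1 output.

```python
# llnull is the log likelihood of the intercept-only model (block 0),
# llf that of the model with audit opinion and managerial ownership (block 1).
neg2ll_block0 = -2 * model.llnull
neg2ll_block1 = -2 * model.llf
print(f"-2LL block 0 = {neg2ll_block0:.2f}, -2LL block 1 = {neg2ll_block1:.2f}")
if neg2ll_block1 < neg2ll_block0:
    print("The decline in -2 log likelihood indicates the predictors improve the model fit.")
```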
Regression Coefficient Testing
Regression coefficient testing is done to test how far all independent variables included in the model have an influence on the dependent variable.
The results of the data analysis with logistic regression are as follows. Managerial share ownership can harmonize the potential differences in interests between outside shareholders and management (Jensen and Meckling, 1976), so the agency problem disappears when a manager is simultaneously an owner. The greater the proportion of management ownership in the company, the harder management tends to work for the benefit of shareholders, a group that includes themselves.
The results of this research agree with those of Kouki et al. (2011), who found that managerial ownership has a negative effect on earnings management and can improve the quality of the financial reporting process. They are also in line with Oktovianti and Agustia (2012), who report that managerial ownership has a significant negative effect on earnings management. The research of Pratiwi et al. (2015) likewise shows a significant negative relationship between managerial ownership and information asymmetry, in accordance with the transparency principle of corporate governance: the higher the managerial ownership in a company, the greater the information disclosure within it, which reduces the information gap. In addition, the use of professional management will reduce the information gap in the company, because professional managers will protect their credibility so that the company under their responsibility becomes more transparent.
MANAGERIAL IMPLICATIONS
This research provides empirical evidence that audit opinion does not have a significant effect on income smoothing, while managerial ownership does have a significant effect on income smoothing. Subsequent research is expected to add other variables that are predicted to affect income smoothing and to use a wider sample, not only banking companies but all companies that have gone public in Indonesia.
CONCLUSION
Based on the results of the research as described, it can be concluded that: | 2018-12-19T14:18:52.373Z | 2020-05-21T00:00:00.000 | {
"year": 2020,
"sha1": "b5b86cbfa66f13f3e10ef7533804f3e874350ec7",
"oa_license": "CCBY",
"oa_url": "http://www.irjbs.com/index.php/jurnalirjbs/article/download/1357/pdfrev1",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "907363a4b01222c9ba37b34dd5d56d9a468f49e8",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
228806899 | pes2o/s2orc | v3-fos-license | Assessment of wear micromechanisms on a laser textured cemented carbide tool during abrasive-like machining by FIB/FESEM
The combined use of focused ion beam (FIB) milling and field-emission scanning electron microscopy inspection (FESEM) is a unique and successful approach for assessment of near-surface phenomena at specific and selected locations. In this study, a FIB/FESEM dual-beam platform was implemented to document and analyze the wear micromechanisms on a laser-surface textured (LST) hardmetal (HM) tool. In particular, changes in surface and microstructural integrity of the laser-sculptured pyramids (effective cutting microfeatures) were characterized after testing the LST-HM tool against a steel workpiece in a workbench designed to simulate an external honing process. It was demonstrated that: (1) laser-surface texturing does not degrade the intrinsic surface integrity and tool effectiveness of HM pyramids; and (2) there exists a correlation between the wear and loading of shaped pyramids at the local level. Hence, efforts to enhance the performance of the laser-textured tool should focus on pyramid geometry rather than on the microstructure assemblage of the HM grade used, at least for the intended abrasive applications.
Introduction
Continuous or interrupted sliding contact between hard tools and counterparts is intrinsic to all machining operations of metallic alloys. As a result, the piece gets shaped, but the cutting tool also gets worn. The amount and extent of wear is the synergic result of many different factors in the tribological system, including contact and lubrication conditions as well as material properties. Reliable assessment of degradation phenomena associated with the wear of cutting tools requires both two-dimensional and three-dimensional characterization, directly linked to surface and subsurface scenarios, respectively. Cross-sectional analysis is a practical and efficient approach to revealing wear information related to subsurface integrity and microstructural changes [1-3]. Metallographic sample preparation on cross-sections combined with optical and scanning electron microscopy is an inspection protocol often used for acquiring data at the subsurface level. However, the unintended (but possible) introduction of artifacts (e.g., mechanically induced changes) during the metallographic preparation may be an important drawback. Hence, sample preparation may either affect (by hindering or even removing) wear-induced features, such as cracks or adhered material, or introduce secondary/additional deformation and damage [1]. This additional, unintended damage is particularly likely while assessing near-surface phenomena where critical features exist, especially at the surface edges of the prepared sample. Furthermore, metallographic cross-sectioning does discern general and uniformly introduced changes, but may fail to capture localized or heterogeneously distributed ones.
Following the above ideas, the use of small-scale advanced characterization techniques, such as those involving complementary actions of focused ion beam (FIB) milling and field-emission scanning electron microscopy inspection (FESEM), has emerged as a powerful tool for analyzing tribological phenomena. Unlike conventional scanning or transmission electron microscopy, a FIB instrument employs ions, instead of electrons, as charged particles that are accelerated and focused using electric and magnetic fields. Gallium ions, generated from a liquid gallium (Ga) source, are commonly used [4]. Because of the heavier character of the ions, their sputtering effect enables precise in-situ micro-milling on the target sample, i.e., at specific and selected surface locations. Meanwhile, when combined with an FESEM unit, high-spatial-resolution images can be obtained from the secondary electrons generated by ion sputtering [5-8]. Therefore, the FIB/FESEM dual-beam platform becomes a powerful tool to study concurrently induced changes, at both surface and subsurface levels, at precise locations on the micrometer scale. Implementation of this advanced characterization technique has proven to be quite successful in characterizing degradation mechanisms of engineering materials under different service-like conditions, such as abrasive or adhesive wear [9,10], corrosion [11,12] or even tribocorrosion [13,14]. Recently, pulsed laser processing has emerged as a nonconventional machining approach on the micron scale, and it has been successfully implemented to functionalize the surfaces of cutting tools [15].
In high precision abrasive machining, tools are usually made from super hard materials, such as composites of diamonds or cubic boron nitrides. Such a machining process is usually expensive due to the precious materials and highly accurate control systems. In previous studies, abrasive-like microfeatures of a conventional cubic boron nitride (CBN) honing stone were successfully reproduced on a cobalt-nickel-based cemented tungsten carbide grade (WC-CoNi) using a laser technique [16,17]. The first results of the cutting tests highlighted the possibility of employing cemented carbides as an alternative to the precious materials in abrasive machining processes, as these microfeatures achieved material removal effectiveness similar to that of the CBN honing stone in the cutting tests [16]. However, mild surface degradation of these microfeatures was detected after the cutting tests. Therefore, there is a practical need to assess the wear mechanisms of these "micro-textures" in the cutting tests with an adequate approach. To be more precise, FIB/FESEM is used in this study to assess and understand wear phenomena taking place in laser-shaped microfeatures of the hardmetal (HM) tool during the cutting test.
Tools and abrasive-like testing
The tool material was an HM grade consisting of coarse WC grains embedded in a 28.5 wt% metallic (CoNi) binder. Aiming to replicate the topographic scenario of a conventional CBN-based honing stone, a picosecond laser micromachining system sculpted arrays of hexagonal pyramids morphologically similar to the abrasive grains exposed on the CBN tool surface [16]. Unlike the commonly anisotropic geometrical properties of the diamond or CBN grains in such composites, microfeatures produced on HMs by a laser can possess geometrical regularity, which helps avoid the dysfunction or surface damage of diamond or CBN tools that often results from erupted hard grains. One of the reasons for such abrupt grain eruption is local stress concentration due to irregular geometrical properties. Figures 1(a, b) and Table 1 show basic mechanical and microstructural characteristics for the laser-surface textured (LST) HM tool studied. The cutting capability of the LST-HM tool was evaluated in a workbench designed to simulate an external honing process, under lubricated conditions [16,18]. As illustrated in Fig. 1(c), the workbench was integrated into a lathe. In order to replicate honing, the workpiece was secured on a spindle, which could rotate at a fixed speed. Meanwhile, the test tool sample was oscillated and moved towards the workpiece through a transmission lever with a certain feed. As a result of the movements, the surface of the workpiece was machined by the honing-like action. Detailed machining parameters are listed in Table 2. In this cutting test, the workpiece was made of steel 20MnCr5. The working surface was preliminarily fine-turned with an arithmetic average roughness Ra of about 4 µm. After the abrasive test, Ra came down to 0.7 µm. It was demonstrated that the LST-HM tool smoothed the surface of a steel workpiece, down to roughness levels close to those attained by the reference CBN honing stone.
However, morphological changes of the cutting microfeatures were assumed to be different: surface topography of the CBN tool retained dynamic stability due to the self-dressing phenomena, whereas the pyramids on the LST-HM tool might suffer permanent degradation. Hence, in-depth inspection and understanding of wear mechanisms of these effective cutting microfeatures on the LST-HM tool become mandatory to improve its performance.
Wear-induced changes on surface and microstructural integrity of the LST-HM tool
Following the described abrasive-like test, a direct surface inspection was carried out using FESEM (Sigma VP, Zeiss). Cutting microfeatures exhibited morphological changes at different levels. The most severe damage was found close to the pyramids' cutting fronts, where penetration into the workpiece first occurred (Fig. 2(a)). At the cutting tip, large amounts of heterogeneous material were stacked, resulting in built-up edges (e.g., Fig. 2(b)) [19]. Such degradation was linked to chip adhesive wear when machining the 'sticky' material (20MnCr5), especially at the low cutting speed used in the conducted test. Around and particularly below the cutting tip, some notch wear and breakage of WC grains were also found. These phenomena should also be linked to the referred chip adhesion on the cutting tip, as it would lead to high pressure and consequent local plastic deformation at the surface. At all other sites of the pyramids, surface conditions remained almost unchanged, as sustained by the observation of the laser-induced periodic surface structures [20], e.g., Fig. 2(c). These findings were further confirmed by energy-dispersive X-ray spectroscopy (EDS). Several aspects may be highlighted from the resulting element distribution map (Fig. 2(d)). First, material removal occurred only at the top of the pyramids (effective cutting areas). Second, plenty of workpiece material (Fe) remained, due to adhesion, around the cutting tip of the pyramid. Finally, geometry and surface integrity below the top level of the pyramids did not show any discernible changes. Following these microscopic observations, it may be concluded that the degradation of the pyramids mainly results from sliding abrasion, rather than from abrupt breakdown or cleavage.
Cross-sectional inspections using FIB milling (Helios 600, FEI) were done at the rake surfaces of the pyramids, which are the chip-flow surfaces ahead of the cutting fronts (contact areas), as indicated by arrows in Figs. 3(a, c). These places are evidently subject to damage as a result of the pressing and sliding of the chips. Analysis of a reference pyramid (i.e., before the cutting test) showed a smooth profile, without evidence of any microstructural changes or damage underneath the patterned surface (Figs. 3(a, b)). Corresponding subsurface images after the cutting tests are given in Figs. 3(c, d). In general, the pyramid shape was not affected, except at the position of the rake surface and cutting front. There, changes were found in both micro-constitutive phases: ceramic grains and metallic binder. As indicated by arrows in Fig. 3(d), substructural changes in the binder were concentrated in a very shallow region (about 1 µm in thickness). On the other hand, damage within ceramic grains was discerned even at depths of up to 5 µm. Extensive and detailed FIB/FESEM analysis permits the speculation that the wear-induced degradation of the pyramids is a sequential process resulting from pressing and sliding contact of the pyramids (abrasive units) against the workpiece during the cutting test. Different wear phenomena may be expected: deformation, cracking, adhesion, and material detachment.
At the initial stage, the tool was brought into contact with the workpiece, followed by the application of the normal force. The penetration of the pyramids into the workpiece then occurred because of the difference in relative hardness between tool and workpiece. Under these indentation-like conditions, the two micro-constitutive phases of the HM exhibited distinct irreversible changes. As shown in Fig. 4(a), the binder experienced plastic deformation (marked with the circle). On the other hand, cracks were discerned within the ceramic grains (indicated with the arrows), possibly as a result of strain compatibility forced by the extruded binder or from direct contact with the workpiece. In a subsequent stage, as the workpiece began to rotate, the penetrating pyramids were subjected to tangential forces. As a consequence, they experienced shear stresses in addition to the normal stress. The synergic effect of both stresses increased the deformation level as well as potential crack propagation. As shown in Fig. 4(b), besides damage evidenced within grains, large cracks were also found to extend along the boundaries of adjacent ceramic particles. As the rotation continued, loosened grain particles could be partially spalled or even crushed together with the adhesive binder at some places, which directly led to an uneven topography profile (Fig. 4(c)). Small-scale cavities within such profiles were then filled with workpiece material. This enhanced local adhesion, which later extended over the pyramid surface as a thin film (Fig. 4(d)). It may be expected that such a 1-µm-thick layer is detrimental to the tool functionality, as the tribological condition between the abrasive pyramids and the workpiece changes. In this regard, excessive heat might be produced as galling-like friction rises, and lubrication becomes less efficient. Furthermore, the local stack of adhered material could lessen the material removal precision and increase surface roughness. The degradation of the tribological conditions led to more severe expelling or detachment of material. Based on the FIB/FESEM characterization, it may be speculated that the vertical pressure exerted on the pyramid during penetration was the most important factor in the damage to microstructural integrity. Afterward, shearing stress aggravated the loosening and breakdown of the grains. Besides the mechanical properties of the material, the stress resistance is also strongly related to the geometrical features of these microfeatures. Therefore, it is necessary to analyze the influence of the geometrical features on the stress distribution.
According to the initial design, pyramids with a flat (top) contact area were targeted in order to avoid collapse during the sliding movement (Fig. 1(a)). However, such dull cutting tips indeed translated into rather difficult penetration. As a result, subsequent plowing into the counterpart could take place abruptly, making the workpiece suffer severe normal and shear stresses. Interaction between the pyramids and the workpiece consists of many discrete area-to-area contacts during the cutting process, as a result of the smooth and flat top surface design. Therefore, local stress was hardly redistributed, and it became difficult to build up a full lubrication scenario between tool and workpiece, i.e., contacting areas rub under either mixed or even dry conditions. Such a tribological scenario aggravated the degradation of the pyramid surfaces, yielding (ceramic) grain crushing and material adhesion. Based on the above analysis, it is then suggested that a sharper penetration angle or a reduction of the contact surface area of the pyramids, especially at the positions close to the cutting tips, should be the aim for obtaining smoother penetration and plowing.
Conclusions
In this study, wear mechanisms of an LST-HM tool were characterized, after an abrasive-like cutting test. FIB/FESEM revealed the wear micromechanism of the abrasive microfeatures: 1) At the initial step, cracks appeared through the grains and along the carbide/binder interface.
2) Binder was removed layer by layer. Adhesion of workpiece material was found on the contact surface, especially at those cavities produced by the broken and erupted grains.
Based on the wear analysis, pyramid geometry, contact conditions, and material properties are recalled as critical factors to improve the tool performance: a smoother penetration and plowing should be aimed for enhancing the cutting capability of the tool. Coating of the abrasive units may also be suggested as an action towards increasing wear resistance and maintaining tribological conditions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 2020-11-05T09:03:44.707Z | 2020-11-02T00:00:00.000 | {
"year": 2020,
"sha1": "64711b45319b6655df9b7dafe2cd685a09ba67c7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-020-0422-z.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "dc33108b11522666b57b9ea4f678bbea907210cf",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
231952145 | pes2o/s2orc | v3-fos-license | High-Flow Oxygen Therapy Application in Chronic Obstructive Pulmonary Disease Patients With Acute Hypercapnic Respiratory Failure: A Multicenter Study
Objectives: To evaluate the effect of high-flow oxygen implementation on the respiratory rate as a first-line ventilation support in chronic obstructive pulmonary disease patients with acute hypercapnic respiratory failure. Design: Multicenter, prospective, analytic observational case series study. Setting: Five ICUs in Argentina, between August 2018 and September 2019. Patients: Patients greater than or equal to 18 years old with moderate to very severe chronic obstructive pulmonary disease, who had been admitted to the ICU with a diagnosis of hypercapnic acute respiratory failure, were entered in the study. Interventions: High-flow oxygen therapy through nasal cannula delivered using high-velocity nasal insufflation. Measurements and Main Results: Forty patients were studied, 62.5% with severe chronic obstructive pulmonary disease. After the first hour of high-flow nasal cannula implementation, there was a significant decrease in respiratory rate compared with baseline values, with a 27% decline (29 vs 21 breaths/min; p < 0.001). Furthermore, a significant reduction of Paco2 (57 vs 52 mm Hg [7.6 vs 6.9 kPa]; p < 0.001) was observed. The high-flow nasal cannula application failed in 18% of patients. In this group, the respiratory rate, pH, and Paco2 showed no significant change during the first hour. Conclusions: High-flow oxygen therapy through nasal cannula delivered using high-velocity nasal insufflation was an effective tool for reducing respiratory rate in these chronic obstructive pulmonary disease patients with acute hypercapnic respiratory failure. Early determination and subsequent monitoring of clinical and blood gas parameters may help predict the outcome.
inspiratory effort to an improved blood gas exchange (2). Its benefits were shown in various clinical studies, from which the use of HFNC has been extended to ICUs (3,4). There is evidence that high-velocity nasal insufflation (HVNI) therapy, an advanced form of HFNC, delivering high flow at high velocity provides a more efficient flush mechanism for upper airway deadspace. This use of high velocity has been shown to be noninferior to noninvasive mechanical ventilation (NIMV) as a support strategy in ARF from various causes (5). HFNC has been suggested in successful management of patients with chronic obstructive pulmonary disease (COPD) (6). The results in a series of case reports on the use of HFNC in patients with COPD exacerbation were also encouraging (7)(8)(9). A subgroup analysis of Hypercapnic and COPD patients was performed from a larger study of HVNI in the management of undifferentiated respiratory failure, suggesting the ability to provide adequate ventilatory support by avoiding intubation (10). The efficacy and safety of the use of HVNI as a first-line support treatment strategy in COPD patients with hypercapnic respiratory failure are, however, still unknown.
The main objective of this pilot study was to evaluate the effect of HVNI on the respiratory rate (RR) when used as a first-line ventilation support in COPD patients with acute hypercapnic respiratory failure. The secondary objectives were to determine possible changes in the clinical signs of respiratory failure and in blood gas exchange and the presence of predictors for success or failure of treatment.
MATERIALS AND METHODS
This multicenter, prospective, analytic observational case series study was conducted in five ICUs in Argentina, between August 1, 2018, and September 30, 2019. Patients 18 years old or more with moderate to very severe COPD (in the primary physician's judgment), who had been admitted to the ICU with a diagnosis of hypercapnic ARF (Paco 2 > 45 mm Hg [6.0 kPa] and pH < 7.35), were included in the study. The patients' diagnosis was determined at admission by arterial blood gas (ABG) analysis, taken while at rest receiving supplemental oxygen titrated to maintain an arterial blood oxygen concentration (Sao 2 ) between 88% and 92%, and at least one of the following: RR greater than or equal to 25 breaths per minute, intercostal and/or supraclavicular inspiratory retraction, or thoracoabdominal asynchrony.
Demographic and anthropometric data, comorbidities, hospitalization duration, and clinical parameters were recorded. Patient comfort during HVNI therapy was recorded using a Visual Analog Scale ranked from one to five, one being "very comfortable" and five "very uncomfortable." Exclusion criteria included patients who had been prescribed NIMV (before admission or at the time of evaluation), those requiring intubation and invasive mechanical ventilation (iMV), and those with pH less than 7.20, Paco2 greater than 80 mm Hg (10.7 kPa), a degraded level of consciousness (Kelly-Matthay score [KMS] > 3) (11), unstable hemodynamics (systolic blood pressure < 90 mm Hg [12 kPa] or mean blood pressure < 65 mm Hg [8.7 kPa] with fluid intake), and/or contraindications to the use of HVNI (cannula placement impossible, profuse bleeding of the nasal cavities, or ARF due to neuromuscular disease).
High-Flow Oxygen Therapy Delivery and Devices
The patient was placed half-seated, tilted at 45°. A Precision Flow Plus (Vapotherm, Exeter, NH) HVNI technology was used. Therapy was started at a flow rate of 40 L/min at a temperature of 43°C and Fio 2 of 1.0, which was titrated targeting an Sao 2 between 88% and 92%. Flow rate and temperature were adjusted to the individual patient's work of breathing, comfort, and tolerance.
After the first hour of HVNI, ABG analysis was performed. At that time, the criteria for treatment interruption were tachypnea (RR > 35 breaths per minute), persistent intercostal and/or supraclavicular retraction, persistent thoracoabdominal asynchrony, worsening gas exchange (pH < 7.30 and/or 20% increase in Paco2, and/or Pao2 lower than 60 mm Hg [8 kPa] at an Fio2 of 1.0), or KMS less than 3. A need to escalate treatment to NIMV or iMV was considered a "treatment failure." Absent failure, HVNI was given without interruption for 24 hours. Subsequent treatment suspension was authorized only in the presence of the following criteria for 2 consecutive hours: Fio2 less than 30% with Sao2 greater than or equal to 92% and RR less than 25 breaths per minute. During treatment suspensions, patients received low-flow oxygen therapy (mask or nasal cannula) to keep Sao2 at 88-92%, and restarting HVNI was reevaluated every 30 minutes. HVNI was restarted in the presence of RR greater than 25 breaths per minute, intercostal and/or supraclavicular respiratory retraction, thoracoabdominal asynchrony, or increased oxygen support (an increase ≥ 20% for longer than 5 min). In those patients in whom it had to be restarted, HVNI continued for a period of at least 12 hours until a subsequent evaluation.
HVNI suspension for a period of 24 consecutive hours or longer was considered a treatment success.
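The first-hour interruption criteria above can be restated schematically as a simple check, which may make the protocol easier to follow. The function below is a hypothetical illustration only (the name, argument list, and the omission of the level-of-consciousness criterion are ours), not a clinical decision tool from the study.

```python
def hvni_escalation_needed(rr, ph, paco2_rise_pct, pao2, fio2,
                           persistent_retraction, persistent_asynchrony):
    """True if any first-hour interruption criterion from the protocol is met.
    (The level-of-consciousness criterion is omitted from this sketch.)"""
    return (
        rr > 35                             # tachypnea
        or persistent_retraction            # intercostal/supraclavicular retraction
        or persistent_asynchrony            # thoracoabdominal asynchrony
        or ph < 7.30                        # worsening respiratory acidosis
        or paco2_rise_pct >= 20             # >= 20% increase in Paco2
        or (pao2 < 60 and fio2 >= 1.0)      # Pao2 below 60 mm Hg at Fio2 of 1.0
    )

# Example: RR 24, pH 7.36, Paco2 up 5%, Pao2 70 mm Hg on Fio2 0.5, no retraction/asynchrony.
print(hvni_escalation_needed(24, 7.36, 5, 70, 0.5, False, False))  # False -> continue HVNI
```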
Statistical Analysis
Continuous data were expressed as mean and sd or as median and interquartile range. Normality was assessed by visual inspection and the Shapiro-Wilk test. Categorical data were expressed as absolute values and/or percentages. A sample size of 20 patients was calculated for detecting a 15% difference in RR with 80% power and an α value of 0.05, based on previous studies (3). Nonparametric variables were compared using Friedman, McNemar, and Mann-Whitney U tests. A p value of less than 0.05 was considered significant. SPSS software Version 25.0, IBM Corp., Armonk, NY, was used to perform the statistical analysis.
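For readers who want to retrace the sample-size reasoning, the sketch below runs a conventional power calculation for detecting a 15% reduction in RR from a baseline of about 29 breaths/min, using a paired t-test approximation. The assumed standard deviation of the within-patient change (6.5 breaths/min) is an illustrative guess of ours, since the variance used by the authors is not reported; the study itself relied on nonparametric tests and cited reference (3) for its calculation.

```python
from statsmodels.stats.power import TTestPower

baseline_rr = 29.0                      # breaths/min, approximate baseline in this population
reduction = 0.15 * baseline_rr          # 15% difference targeted by the study
sd_change = 6.5                         # assumed SD of the within-patient change (hypothetical)
effect_size = reduction / sd_change     # Cohen's d for the paired comparison

n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.80, alternative="two-sided")
print(f"Required sample size: about {n:.0f} patients")   # roughly 20 under these assumptions
```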
Ethical Considerations
The study protocol was approved by centers' Institutional Review Board (IRB) (Sanatorio Anchorena Recoleta IRB F004-02-A[04]2017) and an informed consent form was recorded. The study was registered with ClinicalTrials.gov (NCT04109560). This study did not receive any financial support. The HVNI equipment was provided by JAEJ S.A. (Buenos Aires, Argentina).
Tolerance to High-Velocity Nasal Insufflation Therapy, Its Duration, and Scheduling
HVNI treatment was comfortable for the patients, with comfort scores of 1.5 (1-2) at the start of treatment and 1 (1-2) at completion (p = not significant). Intolerance was not recorded as the cause of failure in any patient, and no unexpected adverse events of any kind were observed. The duration of HVNI was 48 hours (34.5-96.2), the longest recorded time being 194 hours (8 d).
Patient Results and Stratified Analysis of Success/Failure
HVNI application failed in seven patients (17%), requiring NIMV within a median of 12 hours (1-36 hr); one progressing to iMV. Stratified analysis of the group of patients with a HVNI failure showed no significant improvement from baseline in the first hour of treatment of RR, Paco 2 , or pH, while such changes were seen in successful HVNI patients ( Table 2). HVNI failure was associated with persistent respiratory acidosis within the hour after the start of HVNI (pH 7.37 vs 7.31, respectively; p = 0.022) ( Table 2).
Of the 40 patients entered in the study, three died (7%); two of them subsequently died after successful HVNI treatment, while the third patient, with very severe COPD, died during iMV. The median durations of ICU stay and hospital stay were 7 and 12 days, respectively.
DISCUSSION
This study evaluated the effects of HVNI administration in patients with COPD and acute hypercapnic respiratory failure. The principal results were as follows: 1) HVNI causes early and sustained changes in the clinical and blood gas parameters; 2) RR, Paco 2 , and pH appear to be early prognostic factors of treatment success, with acidosis at 1 hour of onset of HVNI associated with treatment failure; 3) HVNI was successful as supportive treatment in 83% of cases; and 4) HVNI was well tolerated. Currently, COPD patients with ARF who have a pH of 7.20-7.35 (absent metabolic etiology) are considered good candidates for the application of NIMV, leaving iMV as a second-line treatment option in the case of failure (12,13). Early improvement of pH and/ or RR is a good predictor of favorable NIMV outcome, with a response observed almost universally within the first 2 hours of initiation (14). During the administration of HVNI, we found a 27% decrease of RR during the first hour of treatment, similar to that reported for NIMV (15). The decrease in RR was accompanied by a fall of Paco 2 , suggesting a reduction of Paco 2 possibly linked to either an increase in tidal volume or a decrease of functional dead space. This behavior of both RR and Paco 2 could be useful as an indicator of favorable outcome. In this context, persistent acidosis at the start of HVNI administration was a prognostic factor for treatment failure.
"Accessory muscle" recruitment due to structural and functional alterations are typical of COPD patients, particularly in ARF (16,17). In these patients, the increase in respiratory effort is due to air trapping produced by flow obstruction, placing a mechanical overburden on the respiratory musculature (18). RR decrease with HFNC has been suggested to improve pulmonary emptying through an increase in expiratory time (19), allowing improved diaphragm function by optimizing contraction length (20,21). The positive end-expiratory pressure effect of HVNI could also have played a small role in this direction by counterbalancing the load resulting from air trapping (22,23).
HFNC treatment of COPD patients has been described in several reports of stable or NIMV-intolerant patients (6-9, 24). For this study, it was decided to use HVNI as a first-line supportive treatment, based on the available physiologic (14, 16) and clinical data (5-10, 24) as reported by various studies (15, 25-28). The failure rate of NIMV in COPD patients with hypercapnic ARF and pH less than 7.35 is approximately 15%, and 25% for patients with a pH less than 7.30 (15, 29). In this study, HVNI failed as a supportive therapy in 17% of hypercapnic COPD patients, similar to the rate reported for NIMV in this patient population (15, 29). The average mortality reported for COPD patients with hypercapnic ARF treated using NIMV is approximately 6% (15, 27-30), compared with a mortality of 7% among patients in this study. The length of hospital stay was 12 days for this HVNI patient group, which compares favorably to the length of stay reported for a similar population treated using NIMV (15). Patient comfort during NIMV is one of the known factors to consider for successful therapy (31). Comfort is substantially better with the use of high-flow cannulas as compared with NIMV masks (16, 32, 33). HVNI application was well tolerated in our study; the technique was described as being "very comfortable" or "comfortable" in all cases. There was no interruption of HVNI due to patient discomfort. Cannula-based high-flow therapy, compared with NIMV, also removes any asynchrony (34, 35), reduces caregiver interventions, and lowers the risk of pressure injury from the therapy interface (36).
The average flow with HVNI in our study, as well as that of Doshi et al (5), was lower (32.5 and 30 L/min, respectively), compatible with the recently published data describing 30 L/min as the optimal flow rate to reduce work of breathing in COPD patients, comparable to NIMV at an inspiratory pressure of 11 cm H 2 O (11-13 cm H 2 O) and an expiratory pressure of 5 cm H 2 O (5 cm H 2 O) (22). Cannula type may play a mechanistic role in the effect of therapy. HVNI administration employs a small-bore nasal cannula prong, imparting greater velocity to the gas flow. This has been demonstrated to provide a mechanistic advantage to flush the accessible deadspace in the upper airway, likely through creation of increased turbulent kinetic energy (37)(38)(39).
There were limitations to this study. First, it was not a randomized controlled trial (RCT); it was designed as a pilot study for a subsequent RCT. Second, the study was not blinded to the investigators or to the subjects, which could add bias; however, blinding was impossible given the study design.
CONCLUSIONS
High-flow oxygen therapy using HVNI through a nasal cannula was an effective tool for reducing RR and providing oxygenation support of these COPD patients with acute hypercapnic respiratory failure. HVNI therapy in this study has a 17% failure rate, which may be comparable to NIMV. Clinical behavior and blood gas parameters may help predict the outcome of HVNI management for such patients.
Our study suggests that the use of HVNI as supportive treatment in COPD patients with acute hypercapnic respiratory failure warrants further randomized study comparing it to NIMV. | 2021-02-19T05:04:30.464Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "27cabacbe3e8210020940e3f46a9e0d344e29e5a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cce.0000000000000337",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "27cabacbe3e8210020940e3f46a9e0d344e29e5a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258817790 | pes2o/s2orc | v3-fos-license | How Monet became a millionaire: the importance of the artist’s account books
This essay explores Monet’s rise to great wealth, drawing on evidence provided by the artist’s three account books, housed in the Musée Marmottan Monet, Paris. Assimilating unpublished data, the essay charts Monet’s growing annual income as well as the increasing individual prices for his paintings. It argues for the central role of the artist’s serial painting process in his financial success. The essay examines the seminal contribution of Parisian dealers to Monet’s growing wealth, principally the dealer Paul Durand-Ruel. It also explores Monet’s willingness to work with a range of other dealers in order to raise his prices. The essay looks at the significant role of Monet’s collectors, and particularly the internationalization of his clientele, as a key factor in his success. Overall, it argues for Monet’s commercial acumen, and in general, his recognition of the importance of his own agency in the creation of his market.
Introduction
By the early years of the twentieth century, Monet was a very affluent artist, surrounded by the accoutrements of wealth, including a Panhard & Levassor car, an extensive house at Giverny, with expansive gardens tended by six gardeners, and a large paintings collection. In histories of Impressionism in the early part of the twentieth century, Monet was generally presented as the leading painter of the Impressionist group, and certainly the leading landscape painter (Duret, 1906(Duret, , 1910Mauclair, 1903). How did Monet achieve this celebrity and prosperity? This article focuses on a resource that has been somewhat overlooked but which in fact represents arguably the principal means of comprehending Monet's developing commercial success. This is the group of three account books, which Monet
Monet's annual income
Monet's account books enable us to trace the increase in his annual income from 12,100 francs in 1872 to 369,000 francs in 1912, as is evident from Table 1. Based on the conversion rate to dollars provided by www.historicalstatistics.org (Edvinsson, 2016), Monet's income in 1912 was worth $71,146 at that time. In contemporary terms, this amounts to $2,050,000. 5 Relative income over time can be measured in different ways, however (Officer & Williamson, 2022). Between 1912 and 2021, the average US wage increased by 175 times. In that context, Monet's income in 1912 would be worth $12,400,000 today. 6 Clearly, Monet was a very wealthy man by the early part of the twentieth century, earning millions of dollars a year, in contemporary terms, from his painting. His 1912 income, as for the majority of his career, was based exclusively on the sale of his paintings. 7 Table 1 reveals that Monet enjoyed some commercial success in the early 1870s before his earnings declined significantly in the mid to later part of this decade. The late 1870s was a period in which Monet faced some financial hardship, to which his letters of the time attest, although it is worth noting that the ever-dramatic artist often exaggerated his financial plight. During these years, his income came not only from the direct sale of his paintings but also from a complicated system of advances on pictures that were not yet delivered or payments from supporters to offset imminent bills. 8 In the early 1880s, Monet's annual income picked up significantly, rising to 44,500 francs a year by 1887. Thereafter, there is a gap of eight years in the account book, from 1889 to 1897, for which there is no clear reason. In 1898, Monet began to make entries again, and his 1898 income of 173,500 francs indicates the substantial increase in his earnings in the intervening years. The remaining entries, until 1912, highlight Monet's ongoing affluence.
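The two conversion routes quoted above can be checked with a few lines of arithmetic. The sketch below simply restates the multipliers implied by the figures in the text (the exchange rate, the consumer-price factor, and the 175-fold wage growth); it introduces no new data and the variable names are ours.

```python
income_1912_francs = 369_000
usd_per_franc_1912 = 71_146 / 369_000        # exchange rate implied by the quoted figures

usd_1912 = income_1912_francs * usd_per_franc_1912            # ~$71,146 in 1912 dollars
cpi_factor = 2_050_000 / 71_146                               # implied consumer-price multiplier
wage_factor = 175                                             # average US wage growth, 1912-2021

print(f"1912 income:    ${usd_1912:,.0f}")
print(f"Price-adjusted: ${usd_1912 * cpi_factor:,.0f}")   # ~$2.05 million
print(f"Wage-adjusted:  ${usd_1912 * wage_factor:,.0f}")  # ~$12.4 million
```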
The account books highlight the dominating role of Monet's dealers in his sales. Dealer sales account for approximately 90% of the artist's total income recorded in the books, 2,774,305 francs. The closely related history here is the ability of these dealers to broaden their collector base over the years of Monet's career, particularly through the establishment of an international clientele. 9 Approximately 53% of Monet's total sales (or 59% of his sales to his dealers alone) were to Durand-Ruel and Co., indicating that dealer's pre-eminent role. 10 The account books also reveal the extent to which Monet sold directly to collectors. Although the number of these collectors is high, especially in the 1870s, direct sales to collectors account for only approximately 7% of the artist's total annual income in the books. 11 The account books highlight Monet's evolving subject-matter over the years. Table 2 shows the way in which Monet's landscapes shifted in subject from Paris and Argenteuil in the 1870s to the Normandy Coast in the 1880s to London in the early 1900s and finally to the Giverny Water Lilies and Venice scenes by 1912. They also show the perhaps surprising range of his travels across Europe, from Italy in the South to Norway in the North. There is, however, no clear correlation between the increase in prices for Monet's work and any shift in subject-matter in his production. It is possible to speculate that Monet's views of Étretat on the Normandy Coast, or his scenes of London or Venice, were more commercially successful because he was representing well-known tourist sites. Certainly, it is true that Durand-Ruel repeatedly advised him to paint Venice long before he finally did so in 1908. 12 Yet, at the same time, Monet also found commercial success with what were conventionally mundane motifs-whether a simple haystack or his water lily pond with its scattered waterlilies and grasses. The model of Monet's increasing income over the years is mirrored by the economic careers of the artist's leading Impressionist peers. Edgar Degas's income, for example, increased exponentially in the early years of the twentieth century, when he was selling work to the dealer Ambroise Vollard for up to 210,000 francs in a single year (Tinterow, 2006). 13 Pierre-Auguste Renoir was also selling his paintings for very considerable amounts by this time (Jensen, 2015). Edouard Manet, however, died too soon, in 1883, to benefit from the market rise in his prices at the end of the nineteenth century. His commercial success would only come posthumously. Manet's account books reveal that he never earned very large sums from the sales of his work and failed to find a buyer for his most ambitious paintings (Jensen, 2022; Kelly, 2011).
Increasing painting prices
Monet's account book reveals that his growing income was above all due to the increasing prices of his individual paintings rather than any increase in volume of production. This is evident in Table 3, which indicates the average prices for Monet's paintings over the years, as well as the volume of his production of paintings each year. An overview of the prices for Monet's paintings reveals an increase from approximately 300 francs in the early 1870s to 600 francs in 1881 to 1200 francs in 1886 and then, after the break in the account book entries, to 6500 francs in 1898, before a rise again to 12,000 francs in 1902 and around 14,000 francs in 1909. These sums represent average prices, and Monet's work could sell for higher or lower each year. In 1877, he sold his pictures for as little as 25 francs each while, in 1901, he was selling Rouen cathedral, evening (Pushkin Collection, Moscow), to the Russian collector, Sergei Shchukin, for 18,000 francs. As we shall see, the reasons for these price increases were many and diverse. Table 3 indicates that Monet sold the greatest number of works in the late 1870s, at precisely the time that he was the poorest. In 1877, he thus sold 86 paintings, the most he would ever sell in a single year. During the late 1870s, Monet sold many preparatory sketches-what he called "sketches" ["esquisses"] or "rough sketches" ["pochades"]-for very low amounts. In 1877, for example, he sold 5 "pochades" for a total of 125 francs to the pastry cook Eugène Murer. In contrast, Monet actually sold fewer works in his wealthy later years, in part because these later paintings were complete and resolved works that were often the product of considerable studio reworking. As we shall see, Monet ascribed considerable importance to the amount of labor that he invested in a painting.
What is clear, from an overview of Monet's account books, as well as his exhibitions of the 1890s and 1900s, is that his strategy of making series paintings, beginning with his exhibition of Haystacks in Durand-Ruel's gallery in 1891, was central to his rise to wealth. An overview of Monet's career from the 1890s, and as evidenced in the account books from 1898, highlights the radical increase in his prices after he began to exhibit his work in series. Monet differentiated himself from his peers by his pioneering serial painting strategy. Here, his production of several related paintings-each representing a different temporal or light effect on a particular motif-formed a complementary, and highly commercial, whole. Monet even entered these works in his account books as a "series" rather than as individual painting entries.
By the early years of the twentieth century, Monet was receiving amounts for his individual paintings that are comparable to the amount of 20,000 francs that Robert Jensen has described as the standard for a high-end or premium painting (Jensen, 2022). Decades earlier, Monet's friend, Daubigny, had received this amount for a single painting, Moonrise (Hungarian National Gallery, Budapest) in 1872 (Ambrosini, 2016;Kelly, 2013), the most he was ever paid for a single work. Manet, too priced his most ambitious works, such as Olympia (Musée d'Orsay, Paris), at 20,000 francs each. Such prices, while high, were still far below those of the most prominent Salon or academic artists like Adolphe-William Bouguereau, Jean-Léon Gérôme or Ernest Meissonier. The comparison between the prices of avant-garde "outsider" Impressionist painters and established "insider" Salon painters has been an important area of recent scholarship (Galenson & Jensen, 2007;Etro, Marchesi and Stepanova 2020). The prices of such Salon painters on the secondary market could be more than 100,000 francs. A single painting by Meissonier, for example, 1814, the French Campaign (Musée d'Orsay, Paris), even sold in 1889 for the enormous sum of 850,000 francs, the largest price for a contemporary painting in the nineteenth century. 14 In thinking of his series paintings as a single whole, Monet's sales for a "series" raised comparably large amounts. In 1909, for example, he noted that he had sold "16 Water Lilies series canvases" ("16 toiles série des Nymphéas") to the partnership of the dealers Durand-Ruel and Bernheim-Jeune frères for 233,000 francs.
1872-1880: local Parisian support
The account books show that, in 1872 and 1873, Monet's principal buyer was the dealer Paul Durand-Ruel. Table 5 indicates that Monet sold work to the dealer for 9,800 francs (of a total amount of 12,100 francs) in 1872 and 20,100 francs (of a total amount of 24,800 francs) in 1873. Monet had met Durand-Ruel when they were both in London in 1870-71 during the Franco-Prussian War. The dealer has often been portrayed as an evangelical believer in Impressionism but, of course, as a businessman, he was also preoccupied with making money. The extensive correspondence between dealer and artist over several decades shows a close, if at times tense, relationship, with Monet appearing as the more volatile figure, often dramatizing his struggles and complaining about his financial woes, and Durand-Ruel emerging as more phlegmatic and down-to-earth, repeatedly reassuring the artist that he "should not get flustered." 15 Despite their extensive dealings, they never signed a contract together, although this practice began to become increasingly common in the later part of the nineteenth century (Baetens, 2010). 16 Around 1874, Durand-Ruel began to experience his own financial problems and, for the rest of the 1870s, Monet's sales were focused on local collectors, most of whom lived in Paris and its environs. Table 4 indicates the artist's most important 1870s collectors, in terms of the amount of their purchases. A principal supporter was the opera singer, Jean-Baptiste Faure, who owned between 50 and 60 paintings by Monet, most of which were acquired directly from the artist (Distel, 1990). The account book details the sale of 24 paintings in 1874-75 for 9450 francs in total. 17 Also important was the Romanian homeopathic doctor, Georges de Bellio, who acquired more than 30 Monet paintings between 1876 and 1881 (Distel, 1990, 109). De Bellio bought these pictures for low amounts, always for less than 500 francs. In June, 1877, for example, Monet sold 10 canvases to this collector for 1000 francs, at 100 francs each. Another significant collector was the department store owner, Ernest Hoschedé, who acquired a number of works between 1874 and 1878, including large-scale decorative paintings. The painter Gustave Caillebotte was also a collector, and even paid Monet's rent on several occasions. The two account books for the years between 1874 and 1880 are full of Monet's notes of "advances" and financial transactions "on the account" of these collectors in order to pay his bills (Wildenstein, 1996). 18 Monet sold to approximately 60 collectors between 1872 and 1880, often just on a single occasion. The account book, as indicated in Table 4, demonstrates that he sold work to other artists, like Edouard Manet, the Italian Impressionist, Giuseppe de Nittis, and the Nancy painter and photographer, Charles de Meixmoron de Dombasle. He also sold to writer friends, including Théodore Duret and Zacharie Astruc. Particularly notable in these years is Monet's extensive sale of "sketches" ["esquisses"] and "rough sketches" ["pochades"], generally at a lower price than his "finished" or complete paintings, and often to friends or fellow artists. 19 Around 1880, indeed, the viewpoint
15 A single photograph exists of the two men together, an image from 1893 (Durand-Ruel Archives) of the bowler-hatted Durand-Ruel alongside Monet, as the latter stands, legs astride, in a position of authority, at his home in Giverny, surrounded by his family. 16 Monet never signed contracts with any of his dealers, preferring a more informal engagement. Durand-Ruel too did not use contracts in his gallery practice. 17 In a note in his second account book, after his accounts for the year of 1880, Monet stated that he had sold 19 paintings to Faure between 1872 and 1877 for 6,750 francs. The sales in the first account book, however, indicate a slightly higher amount. 18 Monet seems to have maintained individual accounts with these collectors (de Bellio, Hoschedé and Caillebotte) during this period of some financial hardship in his life.
1881-1888: Durand-Ruel, America, and dealer diversification
The account books indicate that Monet's commercial fortunes began to improve in 1881, largely because of the support of Durand-Ruel, who returned that year to buying Monet's work. For the next five years, from 1881 until 1885, Durand-Ruel exercised a virtual monopoly on Monet's production (see Table 5). Monet's improving fortunes in the 1880s can more generally be connected to the increasing liberalization of the art market during this decade, following the end of the government-controlled Salon in 1880 (Etro et al., 2020). As the Salon declined, dealer gallery shows-such as those of Durand-Ruel-became all the more important, considerably benefiting Monet, who had dealer one-person shows for the first time in this decade.
In 1883, Durand-Ruel held the first dealer solo show of Monet's work, a novel strategy that the artist fully embraced. Correspondence between artist and dealer, however, reveals tensions. 21 Ultimately, the show was critically well received, and an important step in Monet's career. Later in the decade, Monet had a solo show with Boussod and Valadon in 1888, and a major retrospective in Georges Petit's gallery in 1889. Monet's developing success in the 1880s was inextricably connected to the increasing internationalization of his collector support. However, the artist's correspondence with Durand-Ruel in the mid-1880s reveals that he was initially resistant to the internationalization of his collector base, and specifically Durand-Ruel's efforts to find new collectors in America. Monet rejected these efforts, affirming that he would only find true collector taste for his work in Paris. He wrote to Durand-Ruel on July 28, 1885 of his recent work: "I confess that I would not like to see some of these canvases leave for the land of Yankees. I would rather keep a good selection for Paris, because above all it is only here that some good taste still exists." (Durand-Ruel & Durand-Ruel, 2014). Over the coming months, he reiterated his viewpoint. 22 Perhaps Monet here sought to separate himself from academic Goupil artists, like Gérôme and Meissonier, who had greatly benefited from that gallery's extensive international, and particularly American, connections, from as early as the 1840s. 23 Despite Monet's reluctance, Durand-Ruel showed 40 of the artist's works (the most by any artist in the exhibition) in his show of 289 Impressionist paintings and pastels in New York in 1886. This represented an important moment in the growing American market for Impressionism. Over the ensuing years, as American collector interest increased, Monet changed his perspective and embraced a more international collecting base. He may have been responding here to the growing international market for the work of Barbizon artists in the 1880s. His landscapes were compared by critics to the landscapes of Jean-Baptiste-Camille Corot and Théodore Rousseau, whom we know he respected, and whose work sold for very high prices in this decade.
20 In 1880, Monet let de Bellio know about Petit's position. The collector responded on January 12, 1880, expressing hurt that anyone might think he was exploiting Monet, although the accusation was in fact not without foundation. De Bellio noted that the new sales to Petit were "good news" but warned the artist against alienating existing collectors by raising his prices. 21 Monet complained, in a letter of March 5, 1883, that the exhibition of 56 paintings was "a flop…a catastrophe" as a result of Durand-Ruel's failure to offer adequate promotion as well as his poor gallery installation, notably in a space with excessive bright daylight (Durand-Ruel 2014, 171). 22 For example, Monet wrote to Durand-Ruel on January 22, 1886, "Do you really need quite so many paintings for America?…You think only of America, while here we are forgotten, since every new painting you get you hide away. Look at my paintings of Italy which have a special place among all I've done; who has seen them and what has become of them? If you take them away to America, it will be I who lose out over here. I deplore the disappearance of all my paintings like this." On April 11, 1888, Monet wrote to Durand-Ruel, "I am heartbroken to see all of my paintings leave for America."
During the 1880s, Monet increasingly came to believe in selling his work to a wider range of dealers. In his letters, he repeatedly affirmed his belief that an artist should not sell to a single dealer, noting that rivalry between dealers could help to increase the prices of his paintings. On March 22, 1892, for example, he wrote "… for an artist, it is wholly inauspicious and negative to sell through one dealer alone." Table 5 shows the diversification of Monet's sales to other dealers, notably Georges Petit and Boussod and Valadon, in the 1880s. Petit inherited the family business of his father, Francis, and he organized an impressive series of International Expositions, from 1882, in the family galleries on the Rue de Sèze (Fitzgerald, 1995). Monet was impressed by the lavishness of these displays. 24 From the mid-1880s, he regularly sold work to Petit, leading to an increase in his prices. In the summer of 1887, for example, he sold 8 of his paintings of the island of Belle Île in Brittany for an average price of 1500 francs. Monet exhibited repeatedly in Petit's International Expositions, in addition to his 1889 retrospective of 145 paintings. 25 In 1887, Monet also began to sell work to the gallery of Boussod and Valadon, the successor to the prestigious international Goupil company (Penot, 2010; Serafini, 2016; Penot, 2017; David et al, 2020). The Paris gallery of the firm was run by Theo van Gogh, younger brother of Vincent and an unusually insightful gallerist, with whom Monet seems to have rapidly built up a rapport. Monet sold his first work to Boussod and Valadon in April, 1887. The following year, in June, he sold a new series of ten paintings of Antibes and its surroundings on the French Riviera for a combined total of 11,900 francs. As a way of winning Monet over, Theo van Gogh offered a novel marketing strategy, whereby the artist received not only payment for each of these paintings, but also 50% of the profit on their subsequent sale. 26 For example, Monet sold The Beach of Juan-les-Pins (W1187) for 1300 francs in June, 1888. The painting was re-sold soon after for 3000 francs, a profit of 1700 francs, of which Monet received 850 francs. In response to Durand-Ruel's complaints about his sales and related exhibition with Boussod and Valadon, Monet pushed back, in a letter of September 24, 1888, which referenced the dealer's own financial problems: "you find it regrettable that I accepted this engagement but, dear Mister Durand, what would I have become in the last four years without first of all M. Petit and without the maison Goupil [Boussod and Valadon]? No, don't you see, what's regrettable is that circumstances have constrained you from being able to continue to buy…" In general, Monet's account books thus chart the diversification of his dealer base in the 1880s. However, it is worth noting that his prices in this decade, although improved, remained relatively low, certainly when compared with the prices of academic artists, like Gérôme, Meissonier, or Bouguereau.
1889-1897: series paintings and growing international support
As we have seen, Monet's sales from 1889 to 1897 are not recorded in the account book. These years were, however, crucial for the artist's increasing income, and particularly for growing American support. Monet occasionally welcomed American supporters at his home in Giverny. In 1889, for example, he hosted the American collector, James F. Sutton, who was in Paris for the Exposition Universelle of that year. Monet sold many works thereafter to Sutton, generally through the intermediary of the collector's agent, Isidore Montaignac, also a regular visitor to Giverny. 27 In 1893, the painter Camille Pissarro claimed that Sutton owned 120 Monet paintings and, in 1904, the Boston collector, Desmond Fitzgerald, estimated that Sutton owned 50 pictures by Monet (Distel, 1990, 235; Stuckey, 1995). The Monet catalogue raisonné in fact lists 45 works that were once owned by Sutton but this was still a significant amount (Zafran, 2007, 91). 28 By the end of 1892, the Chicago collectors, Bertha and Potter Palmer, owned 50 works by Monet. They visited Monet at Giverny in that year. 29 Monet differentiated himself from his peers by his serial painting strategy, first evident in his exhibition of 20 Haystacks (Meules) paintings at Durand-Ruel's gallery in 1891. Thereafter, he organized exhibitions in the same gallery of Poplars, in 1892, and Rouen Cathedral paintings, in 1895. These shows attracted critical acclaim. During the early to mid-1890s, Monet raised his prices considerably. 30 A key moment here was his production of Rouen Cathedral pictures for which he increased his prices to a new level of 15,000 francs each. 31 He did this because, for him, these were unusually labor-intensive works on which he had worked for three years. Evidence for this comes from Alfred Atmore Pope who, as we have seen, visited Monet at Giverny in 1894.
27 Montaignac, a close colleague of Georges Petit, was a dealer in his own right. The extent to which he acquired works in this capacity versus being an agent for Sutton is unclear. 28 It is, however, difficult today to connect this number of works to Sutton. 29 The Palmers' adviser, Sara Hallowell, visited Monet and wrote to the Palmers on July 9, 1892: "The other day Montaignac secured three of [Monet's] pictures-not direct from him-and sent to me to come see them, they being, as he said, fine examples. These were sold, respectively, for 7000, 6500 and 6000 francs. Both Durand-Ruel and Montaignac tell me they find [Monet] absurd in his prices now, asking them even more than he did you when you visited his studio, so now the dealers are scouring Paris for his pictures." The Palmers' collecting of Monet was largely completed by 1893. 30 Monet asked 15,000 francs each for his Rouen Cathedral paintings, to Durand-Ruel's consternation. On September 10, 1894, Monet wrote to Durand-Ruel that he had succeeded in selling paintings at this price: "I thought that I would write to inform you that, despite your concerns, my Cathedrals have found buyers and that several have departed and that others have been requested from me at the prices that you know." On November 23, 1895, he wrote to the dealer, "It's a certain fact that from the day when I allowed myself to ask for certain prices for my cathedrals, our relationships and business affairs have never been the same."
After his visit, Pope noted: "Monet said that he spent three years over these pictures and was going to have 15,000 francs for them ($3000), that he wouldn't be paid for his time at less price-He is "on to it" that dealers have an agreement to stand out against his price and says he will get it or box them up…" (Zafran, 2007). Monet's increased prices caused tensions with Durand-Ruel.
Monet sold his Rouen Cathedral paintings to a range of buyers. Isidore Montaignac bought 4 paintings in June, 1896, for 52,000 francs. 32 This information appears in an autograph note by Monet that lists his sales to Montaignac between 1892 and 1900. 33 It offers an important addition to the data provided by the account books: it indicates that Monet sold paintings amounting to 280,000 francs to Montaignac between these dates. 34 There were also French collectors willing to pay the increased sums that Monet demanded. The note also indicates that the collector, Isaac de Camondo, paid 55,000 francs for 4 of the Cathedrals in 1895. 35 Nonetheless, Monet's market was increasingly dominated by his international collectors. On March 3, 1895, Durand-Ruel wrote that there were over 300 paintings by Monet in American private collections (Durand-Ruel & Durand-Ruel, 2014, p. 184).
1898-1912: the importance of international support
The account books, when renewed in 1898, show the continuing success of Monet's series paintings. In June, 1898, Monet sold 4 paintings of Cliffs Pourville ("Falaises Pourville") and 2 of Mornings on the Seine to Petit, for 6500 francs each. Monet probably charged less for these paintings than his Rouen Cathedral pictures because he produced them more rapidly. Nonetheless, this amount was still a considerable increase on the price of 1500 francs for which he was generally selling his paintings a decade earlier. The success of his Seine river series may have been connected to their clear relationship with the Barbizon master, Camille Corot, whose comparably misty river scenes sold for very high prices in the 1890s. Monet spoke of his admiration for Corot, and the connection was noted by several critics.
31 Camille Pissarro noted in late October, 1894: "All Paris is talking about the prices Monet is asking for his Cathedrals, a whole series which Durand wants to treat himself to, but Monet is asking 15,000 francs for each". 32 Montaignac had been one of Petit's many employees, and ran his own gallery on 9, Rue Caumartin after 1893. In 1891, Camille Pissarro wrote to his son about Montaignac, "I have known him for about ten years…He worked for Georges Petit; he was the right-hand man at that gallery. He seemed to be smart and likeable, and then last year I learned from Monet that he had been managing his affairs a long time…as well as Sisley's." 33 Vente Archives Claude Monet. Artcurial, December 13, 2006, lot 197. Claude Monet Note autographe. Ventes à Montaignac, 1892-1900. 34 For the years between 1898 and 1900, the note replicates entries in the account book but it also includes additional sales for these years. 35 Vente Archives Claude Monet. Artcurial, December 13, 2006, lot 197. Claude Monet Note autographe. Ventes à Montaignac, 1892-1900.
The account books reveal Monet's increased sales in 1898 and 1899. Again, most of his sales were to dealers, as is evident in Table 6, which records the artist's sales to dealers between 1898 and 1912. Interestingly, the account books indicate that Monet's dealers were not always rivals but, at times, also collaborators. In 1898 and 1899, Georges Petit worked with Alexandre Bernheim the elder and Isidore Montaignac to buy paintings by Monet. Nonetheless, Durand-Ruel remained Monet's principal patron. In 1899, Monet sold 6 "Pond with Water Lilies" ["Bassin aux Nymphéas"] pictures to Durand-Ruel at 6500 francs each for a total of 39,000 francs. These works-which have come to be known as Monet's Japanese Bridge pictures-were probably among those exhibited at Durand-Ruel's gallery in 1900 in another solo exhibition. Durand-Ruel often sold on these works rapidly at considerable profit. He would acquire works from Monet at 6500 francs and sell them on for up to 15,000 francs (Stuckey, 1995). This was a practice of which Monet became aware around 1900, and which he resented.
Monet also painted London frequently, representing the well-known motif of the Houses of Parliament, during his trips to the English capital from 1899 until 1901. Monet's choice of London scenes, which enjoyed widespread commercial success, tapped into a broad French fascination with London as a tourist location, as well as the earlier histories of painting the capital by the noted artists, Joseph Mallord William Turner and James McNeill Whistler. In November, 1901, Monet sold a painting of Waterloo Bridge to the New York dealer, Julius Oehme, for 8000 francs. 36 On May 11, 1904, he sold Durand-Ruel 18 paintings of London views for 188,000 francs. This included the sale of "11 Parliaments" ["11 parlements"] for 99,000 francs. On June 7, he sold the dealer a further 6 London bridge pictures for 56,000 francs. 37 With the addition of another purchase, Monet sold paintings for the very large sum of 252,000 francs to Durand-Ruel in 1904, his largest annual sale to the dealer. In the following year, he noted the sale of 17 more London paintings to the dealer. Among this group were 7 Waterloo Bridge pictures, priced at 10,000 francs each for a total of 70,000 francs. Monet's serial approach was evident in his exhibition strategy. He saw the gallery space as an inherent component of his project, installing his paintings together, and noting that the pictures "take on their true value only through the comparison and succession of the series" (Patry et al., 2015).
Alongside Monet's sales to dealers, Table 7 indicates Monet's direct sales to collectors from 1898 to 1905. 38 This was a practice that caused tension with Durand-Ruel, who considered that he was being by-passed. The account books indicate sales in these years to a small group of American collectors: William Fuller (who wrote Monet's first American biography in 1899 [Fuller, 1899]); an unnamed "American woman," perhaps the Chicago collector, Bertha Palmer; and a direct sale to James F. Sutton.
In the spring of 1912, Monet sold 29 paintings of Venice-the result of his two-month trip to that city in 1908-to the dealers Josse and Gaston Bernheim-Jeune for the very large sum of 339,000 francs. Monet had first begun to sell work in 1900 to these two young dealers, who had inherited their gallery from their father, Alexandre. The Bernheim-Jeune brothers increasingly challenged Durand-Ruel as a result of their ability to cultivate not only an American but also an East European and Russian clientele (Dauberville, 1996). Monet sold them the first batch of "15 Venice canvases" ("15 toiles Venise") for 166,000 francs and the second batch of 14 works for 173,000 francs. 43 The paintings were exhibited in the Bernheim-Jeune galleries in the spring of 1912 to widespread critical success, and the dealers were able to sell them on, often at considerable profit.
1912-1926: the final years
The account books end in 1912. Monet, however, continued to sell paintings intermittently until his death. It is true that Monet became increasingly preoccupied with his legacy, and specifically his Grandes Décorations project, which involved the gifting of a large number of enormous panels to the French State with a view to their permanent installation. Nonetheless, his late correspondence indicates that he also continued to be concerned with sales and that his prices continued to rise until his death. He continued to sell to the Durand-Ruel company and the Bernheim-Jeune brothers, including paintings in 1919 for 20,000 francs each (Mathieu, 2019, 78). 44 The internationalization of his collector base continued at this time, most notably with the Japanese collecting of his work. In 1920, Monet offered mid-size paintings to a Japanese supporter, Shintaro Yamashita, for 25,000 francs each, in a letter which also indicates the way in which he priced paintings according to their size. 45 The following year, the prominent Japanese collector, Kojiro Matsukata, visited him at Giverny, and soon after, in December, 1921, Matsukata purchased the Water-Lily Pond, Weeping Willow Reflections (The National Museum of Western Art, Tokyo) as part of a purchase of 18 paintings in total. This was the only large-scale panel from his Grandes Décorations series that Monet ever sold himself. Soon after, Matsukata apparently sent Monet a check for 800,000 francs for another painting with the instruction that the artist should choose the particular work to be sold (Anon., 1922). It is difficult to confirm this amount but, if true, this was by far the most Monet ever received for a single painting. 46 In 1920, a delegation from the Art Institute of Chicago visited the artist at Giverny, offering to buy 30 decorative paintings for the enormous sum of 3 million dollars (Anon., 1926). Monet refused their offer, preferring to pursue his patriotic ambition of a permanent installation for his late large-scale works, intended to celebrate the greater glory of France. Monet's ambition was ultimately realized in the galleries of the Musée de l'Orangerie, where 22 of his Grandes Décorations panels were eventually installed. The other large-scale panels remained in his studio at his death. After his death, Monet's estate was valued in 1926 at just over 5 million francs (Mathieu, 2017). Over 3 million of this was in shares. Monet's home at Giverny was ascribed a value of 400,000 francs and his studio contents, as well as his extensive personal collection of paintings, an amount of nearly 1.4 million francs. Despite Monet's late wealth, it is notable, as Jensen has shown, that the artist's prices seem to have flattened out after about 1910, in comparison with his Impressionist peers at the high end of the market. In the inter-war years, his prices, on average, although higher than those of Pissarro, Morisot or Cassatt, were lower than those of Renoir, Degas or Manet, and the leading post-Impressionists, such as Cézanne, Gauguin and Van Gogh (Jensen, 2015).
43 As Monet noted in 1912 correspondence, he placed letters on the back of his canvases to signify their prices (A for a work valued at 10,000 francs, B for one at 12,000 francs, C for 14,000 francs, D for 15,000 francs). 44 The Durand-Ruel archives indicate, for example, that Weeping Willow (Columbus Museum of Art, W1869) was acquired jointly (50/50) by Durand-Ruel and Bernheim-Jeune frères from the artist on January 21, 1919 for 20,000 francs (Mathieu, 2019, p. 204). 45 "Canvases between 80 cm and 1 m are priced around 25,000 francs. In the past, I used to sell them between 50 to 100 francs at the most. I have to say again that I feel somewhat embarrassed at this admission." Monet to Shintaro Yamashita, Giverny, February 19, 1920. 46 It is difficult to confirm this amount, as Monet himself makes no mention of it.
Conclusion
What then can we conclude from Monet's account books? In the 1870s, they show that, in his struggling years, Monet sold many works directly to local collectors and colleagues, many of whom were friends. From the 1880s onwards, however, there was a significant change. His growing wealth depended on sales to his circle of dealers, and correspondingly on his dealers' ability to find an increasingly international range of collectors. From this decade, Monet did sell some works directly to collectors, but relatively few. The key decade for the artist's increasing prices was the 1890s, for which, unfortunately, there is a gap in the account book records. Nonetheless, the extensive coverage of Monet's sales elsewhere in the books, over a 40-year period, enables us to clearly chart his rise. The data indicate that Monet's wealth resulted from three principal factors: the development of his serial painting approach in the 1890s; his engagement with a group of successful Parisian dealers, most notably Paul Durand-Ruel; and the related internationalization of his collector base. Other factors, such as his occasional organization of an auction sale or State patronage, also contributed to Monet's success, but these had relatively little importance in the dominant upward arc of his commercial career. 47 By the time of his death in 1926, Monet's priorities had arguably shifted from achieving economic success to establishing his historic legacy. Yet his account books from the earlier decades of his career remain a seminal record of his rise to wealth and fame. They are a key source for understanding Monet's career, and more broadly nineteenth-century artists' careers, and they deserve to be better known. They provide insight into Monet's commercial shrewdness and, more generally, highlight his recognition of the importance of his own agency in constructing the art market around his work. | 2023-05-21T15:03:42.019Z | 2023-05-19T00:00:00.000 | {
"year": 2023,
"sha1": "58d3f9430bcfa3839e88ac635ee9a83a7027e7df",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10824-023-09473-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9b79f82cceff6d2991470e20c007fb26093823a0",
"s2fieldsofstudy": [
"Art",
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
259234816 | pes2o/s2orc | v3-fos-license | Utilization of cervical cancer screening and its associated factors among women of child-bearing age in Mangochi district, Malawi: a facility-based cross-sectional study
Background Cervical cancer screening (CCS) uptake remains low in poor countries. Few studies have assessed individual need and health system factors which facilitate/impede use of healthcare services, including CCS utilization. Thus, we examined associations between these factors and CCS utilization among women of child-bearing age (WCBA) in Mangochi, Malawi. Methods A cross-sectional study, sampling 482 women (18–49 years) using a multi-stage sampling method was conducted in five health facilities (HFs). Data were collected using a structured interview questionnaire from June-July, 2019. Chi-squared or Fisher’s exact tests were used to compare the distribution of CCS utilization according to different independent groups. Results Our study found that 13.1% of the study participants had a history of CCS. The proportion of WCBA with a history of CCS was significantly higher among HIV + women than HIV- women and women with unknown HIV status, respectively [27.3% (33/121) vs. 8.5% (30/353) vs. 0% (0/8), χ2 = 29.18, df = 2, p < 0.001]. Significantly higher among those who had ever heard of cervical cancer (CC) than those who had not [23.0% (60/261) vs. 1.4% (3/221), χ2 = 49.28, df = 1, p < 0.001], among those who heard of CC from HFs than those who heard through radios, friends/family and other sources, respectively [31.2% (44/141) vs. 16.7% (7/42) vs. 9.3% (5/54) vs. 16.7% (4/24), χ2 = 12.62, df = 3, p = 0.006], among those with positive beliefs towards CCS than those with negative beliefs [19.2% (53/276) vs. 4.9% (10/206), χ2 = 21.37, df = 1 p < 0.001], among those recommended for CCS by health workers (HWs) than those not recommended [19.6% (53/270) vs. 4.7% (10/212), χ2 = 23.24, df = 1, p < 0.001], among those willing to be screened by male HWs than those unwilling [14.4% (60/418) vs. 4.7% (3/64), χ2 = 4.57, df = 1, p = 0.033]. Fisher’s exact test showed that CCS uptake among WCBA varied significantly by level of knowledge of CC signs/symptoms, with 66.7% (12/18) and 19.8% (48/243) among those with high-level and low-level knowledge screened, respectively (p < 0.001). Conclusions HIV status, ever heard of CC, sources of information, knowledge of CC signs/symptoms, beliefs, recommendations by HWs for CCS, willingness to be screened by male HWs were associated with CCS utilization. Thus, sensitization campaigns for CCS should be conducted to increase uptake. Further, health facilities should intensify health education on CC, including signs and symptoms to increase knowledge. In addition, CC program implementers should be willing to train both males and females to offer CCS as the clients are open to be attended to by male providers as well.
Background
Women from sub-Saharan Africa have low utilization of cervical cancer screening (CCS), estimated at 12.87%, although evidence suggests that cervical cancer (CC) is a global public health concern [1]. CCS programmes face a number of challenges in these resource-poor settings. For example, low CCS uptake was reported among rural Zimbabwean women, where only 9% of the respondents in a study had ever had CCS [2]. Likewise, Malawi's CCS coverage in 2020 was low at 25.5%, even though the target was to screen 70% of eligible women [3]. In Northern Ethiopia, a study established that 19.8% of age-eligible women had ever been screened for CC [4]. The lower CCS utilization rate in low- and middle-income countries has been attributed to various factors, including poor accessibility of testing facilities, lack of health education, low socioeconomic status, low perceived risk of disease, fear of CC diagnosis, fear of pain and embarrassment, lack of female health care providers, busy schedules, and beliefs that such tests are unnecessary [5].
Malawi started screening for CC in the early 1980s through a donor-funded programme which was phased out due to sustainability problems [6]. Later, in 1999, the CCS programme was relaunched as a pilot, which was thereafter scaled up in 2002. The Ministry of Health (MoH), through the Reproductive Health Directorate (RHD), then formulated the National Sexual and Reproductive Health and Rights (SRHR) policy to integrate CC as an SRHR priority area [7]. In 2013, the Cervical Cancer Control Program (CECAP) started conducting a pilot study on the Human Papilloma Virus (HPV) vaccine [8]. In 2016, the National Cervical Cancer Control Strategy (2016-2020) was developed to guide the implementation of CC control activities by CECAP and other stakeholders [8]. The Southern Africa Litigation Centre report also affirms that lack of awareness of the disease by the general public as well as health workers is a contributing factor towards the low uptake of services and hence the high prevalence and mortality in Southern Africa [9].
Malawi has the highest mortality related to CC, with 51.5 deaths per 100,000 women per year and prevalence estimated at 72.49 cases per 100,000 women per year [10]. Nevertheless, evidence has shown that there are several factors affecting CCS utilization other than lack of knowledge of the disease or any other parameter related to it. For example, there are health system enabling and individual need factors that play a key role in utilization of health services, including CCS uptake among WCBA. A Kenyan study found that although 85.2% of women were recommended by medical personnel to go for CCS, only 46.3% undertook the screening test [11]. These findings are similar to a Malawian study conducted in Blantyre district where 72.4% of the participants had heard of CCS but only 13.2% had gone for CCS [12]. This result was similar to what was found in another Malawian study done in Mangochi district which reported 13.1% CCS uptake among WCBA [13]. Similar CC studies conducted in southern Malawi showed that age, multiple sex partners, lack of husband's approval for screening, lack of knowledge of the disease and screening services, and distance to a facility were statistically significantly associated with the utilization of CCS services. Some of these factors contributed to delays in accessing the screening services [6,12,14,15]. Further, a systematic review on barriers affecting uptake of CCS in low- and middle-income countries found that unfriendly or male staff barred women from undergoing CCS [5]. Mangochi district mirrors national challenges in terms of CCS uptake. For instance, providers could see 2 or 3 women with a VIA-positive result each day at one CCS clinic in the district in 2015 [6]. This was a pointer to the magnitude of the problem of CC in the district. Despite this realization, CCS remains a challenge. By 2016, Malawi had 154 facilities offering CCS services [16]. Of these, 14 facilities offering CCS were in Mangochi district, owned by the government, the Christian Health Association of Malawi (CHAM) and private providers [16]. Limited studies have been conducted to examine associations between individual need and contextual enabling factors and CCS utilization among WCBA. We applied the sixth revised version of Andersen's Behavioral Model of Health Services Use, which indicates that health behaviours are influenced by both contextual and individual characteristics, and within these characteristics are three dynamics, namely predisposing, enabling and need factors [17]. The Model predicts one's use of healthcare services (CCS) by focusing on contextual enabling (health system) factors that facilitate or hinder the utilization of healthcare services, and one's (individual) perceived or influenced need for care [17]. Thus, we examined the utilization of cervical cancer screening and its associated factors, both individual need and health system factors, among WCBA in Mangochi district, Malawi. The identified factors will help health authorities and their partners to implement a CC control programme that will improve the utilization of the CCS services and ensure early diagnosis and treatment among WCBA in the district.
Study design and setting
This was a facility-based cross-sectional study conducted in Mangochi, Malawi [13].
Study participants
The study participants were WCBA attending health facilities [13].
Sample and procedures
A total of 482 WCBA were sampled by applying probability proportional to size procedures [13].
Data collection
Data were collected through a survey method using a paper-based structured interview questionnaire. The data collector was a registered nurse who had been trained on the questionnaire prior to its use. During data collection, skip patterns were observed. For instance, for respondents who answered "no" to the question "ever heard of cervical cancer" in the knowledge section, the rest of that section was skipped and questioning continued in the next section. For example, in the "access to CCS" section, respondents were asked if they had ever undergone CCS regardless of whether or not they had knowledge of it. This was still asked in order to establish whether respondents were getting CCS without being informed (to check if information is given prior to offering the screening).
Questionnaire, independent and dependent variables
In this study, the dependent variable was utilization of CCS. Respondents were asked if they had ever been screened for CC and the response was binary (yes − 1 or no − 0). For independent variables, we assessed both categorical and continuous data. Variables with continuous data were categorized accordingly. The respondents were asked questions pertaining to the health system factors: willingness to be screened for CC by male health workers, distance travelled from their respective villages to the facilities offering CCS, and recommendation given by health workers for CCS; as well as individual need factors: HIV status, history of multiple sex partners, whether participants had ever heard of CC, source of CC information, knowledge of signs and symptoms of CC, knowledge of CC risk factors and beliefs towards CCS.
I. Individual need factors
The following variables were examined under individual need factors: HIV status, lifetime sex partners, whether participants had ever heard of CC, source of CC information, level of knowledge of CC signs and symptoms, level of knowledge of risk factors, and level of beliefs towards CCS.
HIV status
HIV status was categorized and coded as follows: HIV negative [HIV-] (1), referring to respondents who did not have HIV as confirmed by HIV testing; HIV positive [HIV+] (2), referring to respondents who had HIV as confirmed by HIV testing; and unknown HIV status (3), referring to respondents who were unaware of their HIV status.
Life time sex partners
Lifetime sex partners were categorized and coded as follows: those with 2 or more sex partners (1) and those with 0 or 1 lifetime sex partner (2).
Had participants ever heard of CC?
This was categorized and coded as follows: yes (1) and no (2).
Source of CC information
Source of CC information was categorized and coded as follows: health facility (1), radio (2), friends/family (3) and other [television, newspapers/magazines, school/learning institution, and mobile public address system] (4).
Level of knowledge of CC signs and symptoms
Level of knowledge was categorized as low-level (1) and high-level (2). A scoring method was developed. Knowledge about signs and symptoms of CC was measured using a question with five knowledge answers. A total of 5 points was available to respondents who, upon probing, gave all the correct answers. Respondents were asked to mention CC signs and symptoms. The correct answers were: post-coital bleeding (1 point), foul vaginal discharge (1 point), painful sex (1 point), lower abdominal pain (1 point) and abdominal mass as a sign of CC (1 point). A mean score of 0.64 was calculated. Respondents with scores above the mean score were deemed to have high knowledge of CC signs and symptoms whereas those with scores below it were deemed to have low knowledge [18]. Thus, even respondents with one correct answer were deemed to have high-level knowledge of CC signs and symptoms. We therefore considered a score at or above the mean value (≥ 3) for this question as high-level knowledge; otherwise, lower scores were deemed low-level knowledge of CC signs and symptoms [19].
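The mean-split classification described above can be summarized in a short sketch. This is an illustrative reconstruction in Python rather than the authors' SPSS procedure, and the item column names are assumptions introduced only for the example; the same approach applies to the risk-factor knowledge score described further below.

```python
# Illustrative mean-split scoring for the signs/symptoms knowledge items.
# Each correctly mentioned item scores 1 point; respondents above the sample
# mean are classed as "high-level" knowledge, others as "low-level".
import pandas as pd

SIGN_ITEMS = ["post_coital_bleeding", "foul_vaginal_discharge",
              "painful_sex", "lower_abdominal_pain", "abdominal_mass"]

def knowledge_level(df: pd.DataFrame) -> pd.Series:
    """Return 'high-level'/'low-level' using a mean split of the item scores."""
    score = df[SIGN_ITEMS].sum(axis=1)   # 0-5 points per respondent
    cutoff = score.mean()                # reported as 0.64 in this sample
    return (score > cutoff).map({True: "high-level", False: "low-level"})

# Hypothetical usage: df holds one row per respondent with 0/1 item columns.
# df["knowledge_signs_symptoms"] = knowledge_level(df)
```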
Level of beliefs towards CCS
Level of belief was categorized as positive belief (1) and negative belief (2). A scoring method was developed. A total of 4 points was available to respondents who gave all the correct answers. Four statements were read to the respondents to gauge their beliefs towards CCS, namely: "I am at risk of getting cervical cancer hence I need to go for screening" (respondents were awarded 1 point if they responded strongly agree or agree, and 0 points if they responded strongly disagree or disagree); "Cervical cancer screening is important" (1 point for strongly agree or agree, and 0 points for strongly disagree or disagree); "Cervical cancer is curable if diagnosed early" (1 point for strongly agree or agree, and 0 points for strongly disagree, disagree or "not sure"); and, lastly, "I am afraid the screening procedure might be painful, that is why I have not gone for screening" (1 point for strongly disagree or disagree, and 0 points for strongly agree, agree or "not sure").
A mean score of 2.64 was calculated. Respondents with scores above the mean score were deemed to have positive beliefs towards CCS whereas those with scores below it were deemed to have negative beliefs towards CCS [18].
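Because the fourth statement is reverse-scored (disagreement earns the point), the belief score is slightly less mechanical than the knowledge scores. The sketch below is again illustrative only, with assumed column names for the four statements; it is not the authors' original analysis code.

```python
# Illustrative belief scoring: three items reward agreement, the "fear of
# pain" item is reverse-coded and rewards disagreement.
import pandas as pd

AGREE = {"strongly agree", "agree"}
DISAGREE = {"strongly disagree", "disagree"}

def belief_score(row: pd.Series) -> int:
    score = 0
    score += row["at_risk_need_screening"] in AGREE
    score += row["screening_is_important"] in AGREE
    score += row["curable_if_diagnosed_early"] in AGREE
    score += row["fear_pain_not_screened"] in DISAGREE  # reverse-coded item
    return int(score)

def belief_level(df: pd.DataFrame) -> pd.Series:
    scores = df.apply(belief_score, axis=1)
    cutoff = scores.mean()               # reported as 2.64 in this sample
    return (scores > cutoff).map({True: "positive", False: "negative"})
```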
Level of knowledge of risk factors
Level of knowledge was categorized as high-level (1) and low-level (0). A scoring method was developed. Knowledge about risk factors of CC was measured using a question with seven knowledge answers. A list of CC risk factors was read out loud to the respondents and they were asked if they knew them. The respondents had to indicate yes, no or do not know. Answering yes was assigned 1 point. A total of 7 points was available to respondents who gave all the correct answers. The CC risk factors were: having multiple sexual partners (1 point), a history of STIs (1 point), being HIV+ (1 point), early onset of sexual activity (1 point), family history of CC (1 point), having an uncircumcised male partner (1 point) and high parity (1 point). A mean score of 5.62 was calculated. Respondents with scores above the mean score were deemed to have high knowledge of risk factors of CC whereas those with scores below it were deemed to have low knowledge [18].
II. Health system factors
Three variables were assessed under health system factors, namely: recommendations for CCS given by health workers, willingness to be screened for CC by male health workers, and distance travelled to the health facility.
Recommendations for CCS given by health workers
Recommendations for CCS given by health workers were categorized and coded as follows: yes (1) and no (2).
Willingness to be screened for CC by male health workers
Willingness to be screened by male health workers was categorized and coded as follows: yes (1) and no (2).
Distance to the health facility
Data on the distance travelled by respondents from their respective villages to health facilities were collected by asking them to name the village where they came from. The Research Assistant (data collector) had estimated distances from all villages to the health facilities where the study was conducted. The estimated distances from the health facilities to all villages were obtained from the Environmental Health Department of Mangochi District Hospital. Distance was collected as a continuous variable and was later categorized and coded as follows: ≤10 km, meaning a distance of 10 km or less (1); 11-20 km, meaning a distance of 11 to 20 km (2); and ≥21 km, meaning a distance of 21 km or more (3).
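For completeness, the binning of the continuous distance variable can be expressed as below; this is a minimal sketch with an assumed column name, not the procedure actually used in the study database.

```python
# Illustrative binning of distance (km) into the three study categories.
import pandas as pd

def distance_category(distance_km: pd.Series) -> pd.Series:
    return pd.cut(distance_km,
                  bins=[0, 10, 20, float("inf")],
                  labels=["<=10 km", "11-20 km", ">=21 km"],
                  include_lowest=True)

# Hypothetical usage:
# df["distance_cat"] = distance_category(df["distance_km"])
```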
Statistical analysis
Data were entered and analyzed using PASW Statistics for Windows, Version 18.0 (SPSS Inc., Chicago). Chi-squared or Fisher's exact tests were used to compare the distribution of CCS uptake across different independent groups, and statistical significance was considered at p < 0.05.
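For readers who wish to reproduce this kind of comparison outside SPSS, the sketch below reruns two of the reported contrasts with SciPy, using the counts given in the abstract (CCS uptake by HIV status, and by willingness to be screened by a male health worker). The use of SciPy here is purely illustrative and not part of the original analysis.

```python
# Illustrative chi-squared and Fisher's exact tests on counts from the abstract.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: HIV+, HIV-, unknown; columns: screened, not screened
hiv_table = np.array([[33, 121 - 33],
                      [30, 353 - 30],
                      [0,  8 - 0]])
chi2, p, dof, expected = chi2_contingency(hiv_table)
print(f"CCS by HIV status: chi2 = {chi2:.2f}, df = {dof}, p = {p:.3g}")

# Fisher's exact test in SciPy handles 2x2 tables, e.g. screening by
# willingness to be seen by a male health worker (60/418 vs. 3/64).
odds_ratio, p_exact = fisher_exact([[60, 418 - 60], [3, 64 - 3]])
print(f"CCS by willingness: Fisher's exact p = {p_exact:.3g}")
```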
Discussion
This study aimed to examine the associations between individual need factors as well as health system factors and CCS utilization among WCBA. The individual need factors concern how people view their own general health, as well as the professional judgement and test results they receive, and how these lead to a decision to seek medical care or not, whereas the health system factors facilitate or impede healthcare service use [17]. For instance, WCBA are either motivated or demotivated by these factors to undergo CCS.
Individual need factors associated with CCS utilization
We found that the proportion of respondents who had undergone CCS was significantly higher among HIV-positive women than among HIV-negative women and women with unknown HIV status. This was in agreement with an Ethiopian study which suggested that women who had tested HIV positive were 5.6 times more likely to screen for CC than those who had tested HIV negative [4]. Similar findings were shared in a study done by [20], which found that patients who were living with HIV were almost 2 times more likely to screen for CC than patients who were HIV negative. Many of the respondents at Mangochi district hospital were exposed to information on the link between HIV and CC, as health education and screening are offered on a daily basis to all HIV-positive clients accessing services at the ART clinic. The national CECAP strategy also recommends that 80% of women on ART should be screened for CC [16]. Our finding, therefore, is also encouraging feedback on the integration of HIV and CC services, as this will potentially improve uptake of CCS if scaled up across the district. We also found that the proportion of respondents who had undergone CCS was significantly higher among those with general knowledge of CC (who had heard of CC) than among those who had never heard of CC. Several studies have found a significant association, in that women with knowledge of CC were more likely to screen for CC than those without knowledge of the disease [21,22]. However, we observed that 1.4% (3/221) of the respondents who had never heard of CC had undergone CCS. This finding was interesting because we did not expect that respondents who had never heard of CC would ever have been screened. We speculate that these women might simply have followed colleagues who had been recommended for CCS and undergone the screening without receiving health education or a proper explanation. Whatever the case, we encourage health workers to provide comprehensive information to their clients before offering health services. Further, health workers should continue providing health education on CC to more women in the district to increase CCS utilization. This study found that the proportion of respondents who had been screened for CC was significantly higher among those whose source of information on CC was the health facility than among those whose source of CC information was the radio, friends/family or other sources. Similar findings were shared in Namibian and Ethiopian studies where respondents who got information on CC from health facilities were more likely to undergo CCS [23,24]. Another Malawian study also found that women had first heard CC information from health workers (35%), relations or neighbors (34%) and from the radio (30%) [13]. This disagrees with a Ghanaian study which suggested that the media were crucial in influencing women to go for CCS, and the media types highlighted as motivating women to be screened for CC were radio and television [25]. Other studies have also highlighted the critical role that media play in providing CCS information to the masses. For instance, a high screening prevalence has been reported in women who had media exposure in Kenya [26] and Nigeria [26]. Another Malawian study done in Phalombe district also indicated that radios were the main source of information, followed by health workers, among men [27]. This finding shows how important it is for health workers to provide information on CC to women at every opportunity when they present themselves to the health facility.
Our study also found that the proportion of respondents who had undergone CCS was significantly higher among those with high-level knowledge of CC signs and symptoms than among those with low-level knowledge. Similar findings were reported in another Malawian study [12]. However, a Kenyan study [20] found that women who did not know that bleeding after sex is a sign of CC had higher chances of accepting a CCS test than those who knew [20]. Our study had many women with a low level of knowledge about the signs and symptoms of CC. This implies that health education or information-giving on CC should be detailed, covering all areas including signs and symptoms of CC, as being more knowledgeable can result in increased uptake of CCS. Further, our study found that CCS was significantly higher among those with positive beliefs towards CCS than those with negative beliefs. These results are similar to results from Ethiopian and Malaysian studies [4,18,24,28]. Equally, the behavioral model of health services use states that for one to undertake a personal health practice, the practice is influenced by a perceived need, which is described as how people view their own general health and functional status [17]. Thus, the perceived need is what influences the decision of whether or not one should seek medical care.
Health system factors associated with CCS utilization
Our study suggested that the proportion of respondents who had undergone CCS was significantly higher among those who were recommended for CCS by health workers than among those who were not. This is in agreement with findings from a Kenyan study which reported that health education or advice given by health workers was statistically significantly associated with uptake of CCS services [11]. Further, the same Kenyan study found that fewer than half (46.3%) of the women had undergone CCS even though over three-quarters of the women (85.2%) were recommended for CCS [11]. Notwithstanding these findings, it is important that health workers intensify health education, as well as giving proper advice on CCS, to women in order to improve CCS utilization in Mangochi district.
Our results established that the proportion of respondents who underwent CCS was significantly higher among those willing to be screened for CC by male health workers than among those who were not willing. This was a unique finding in a Muslim-dominated setting where modesty is paramount. According to health workers in a Malawian study, the use of male service providers in CCS clinics was a barrier to the provision of services in the country [6]. Further, the health workers had noted that clients preferred older female service providers. Findings from another Malawian study done in Phalombe district with married men reported that most men (77%) had no problem with the gender of the health worker conducting the CCS [27]. Men are decision makers in Malawian culture, as in most African cultures, and their approval of both male and female service providers may ease a woman's decision to undergo CCS. This was different from studies done elsewhere in Africa where women expressed that the age and gender of the service provider were determining factors in whether or not they went for CCS [11]. Further, men indicated that they would not allow their women to go for CCS if the health provider was male, stating that it was a taboo for another man to see their women's private parts except during childbirth. It is, therefore, necessary to consider the cultural requirements of different communities and the personal preferences of women when performing CCS. Our assumption in this study is that WCBA are familiar with male service providers when accessing other services aside from CCS, hence this finding.
Limitations of the study
Our study had its own limitations, which included the following. Firstly, data collectors read out the list of risk factors for CC and the list of beliefs towards CCS, which might have run the risk of women giving socially desirable responses [13]. Furthermore, this study was done in health facilities; as a result, we might have missed the views of women who were at home at the time we conducted data collection.
Conclusions
Distance travelled to health facilities, number of lifetime sex partners and knowledge of CC risk factors were not statistically significantly associated with CCS utilization among WCBA. Recommendations by HWs for CCS, willingness to be screened by male HWs, HIV status, beliefs, having ever heard of CC, knowledge of CC signs and symptoms, and sources of information were statistically associated with CCS utilization. Thus, sensitization campaigns for CCS should be conducted to increase utilization. Further, health facilities should intensify health education on CC, including signs and symptoms, to increase knowledge among WCBA, and CC program implementers should disregard gender when training CCS providers, as clients are open to being attended to by male providers as well. | 2023-06-24T13:38:17.603Z | 2023-06-24T00:00:00.000 | {
"year": 2023,
"sha1": "8022bc74ccc2c0278021c63808d0dd37b00a5ec7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8022bc74ccc2c0278021c63808d0dd37b00a5ec7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
17001067 | pes2o/s2orc | v3-fos-license | Diabetes mellitus in two genetically distinct populations in Jordan
Objectives: To compare clinical, anthropometric, and laboratory characteristics in diabetes type 2 patients of 2 genetically-distinct ethnicities living in Jordan, Arabs and Circassians/Chechens. Methods: This cross sectional ethnic comparison study was conducted in King Abdullah University Hospital, Irbid and The National Center for Diabetes, Endocrinology, and Genetics, Amman, Jordan between June 2013 and February 2014. A sample of 347 (237 Arab and 110 Circassian/Chechen) people living with diabetes were included in the study. Data were collected through direct interviews with the participants. Clinical data were collected using a questionnaire and anthropometric measurements. Laboratory data were extracted from the patients’ medical records. Results: More Arabs with diabetes had hypertension as a comorbidity than Circassians/Chechens with diabetes. Arabs living with diabetes were generally more obese, whereas Circassians/Chechens living with diabetes had worse lipid control. Arabs with diabetes had higher means of glycated haemoglobin (HbA1c) and fasting blood sugar, and more Arabs with diabetes had unsatisfactory glycemic control (60.6%) than Circassians/Chechens with diabetes (38.2%) (HbA1c ≥7.0%). Most participants (88.8%) had at least one lipid abnormality (dyslipidemia). Conclusion: Multiple discrepancies among the 2 ethnic diabetic populations were found. New diabetes management recommendations and policies should be used when treating people living with diabetes of those ethnicities, particularly in areas of glycemic control, lipid control, and obesity.
Type 2 diabetes mellitus (DM) is a chronic systemic metabolic disorder characterized by high blood glucose, caused by defective insulin secretion from the pancreas and/or altered insulin function in cells around the body. These cellular changes are caused by both genetic and environmental factors, making the pathogenesis of the disease complex. 1 As a systemic disease, DM encompasses many short-term and long-term complications caused by chronic hyperglycemia, including the risk of cardiovascular, cerebrovascular, ophthalmic, renal, and neurologic disease, along with acute life-threatening conditions. These complications negatively affect the quality and duration of the individual's life, and pose a huge burden on the patient, their families, and the health system as a whole. 2 Worldwide, DM is an increasing problem reaching epidemic proportions. 3 The International Diabetes Federation (IDF) estimated the number of affected individuals in 2012 to be more than 371 million. 4 Arab countries are also affected by this surge. Studies demonstrate increasing prevalence of DM in Middle Eastern and North African Arab countries. 5 In 2012, the IDF reported a DM prevalence of 9.1% in the Middle East and North Africa. 4 In Jordan, the prevalence of type 2 DM is on the rise; data show an increase from 13% in 1994 to 17.1% in 2008. 6,7 These worrisome results solidify DM status as a burden on an already strained health system in the Middle-Eastern country. Studies show that in addition to the variation of DM prevalence among different countries resulting in a unique geographic distribution, DM prevalence varies among different ethnic groups and subpopulations within the same country. 8 This reflects the role of ethnicity in developing and treating DM. Studying ethnically and genetically isolated diabetic populations sharing the same environment could be of high yield in answering unresolved questions regarding the factors contributing to the development of DM. Furthermore, such studies could guide new management policies and recommendations for DM patients of the studied ethnicity. The Jordanian population consists of multiple ethnic constituents; most Jordanians are of Arab ethnicity and descent. Two other culturally and demographically important ethnicities in Jordan are Circassians, who are originally from the Caucasus, and Chechens, originally from Chechnya. These 2 populations share the same environmental factors, dietary habits, and lifestyle as the Jordanians of Arab ethnicity and are ethnically and genetically isolated, mainly due to endogamous marriages and the preservation of traditions and language. 9,10 Dajani et al 11 studied the prevalence of DM, clinical characteristics, and glycemic control of these 2 populations, and reported a DM prevalence of 9.6% in Circassians and 10.1% in Chechens. This discrepancy with the results of previous analyses on the Jordanian population raises the possibility of a different pathogenic process and etiologic factors in those different ethnicities sharing the same environment in Jordan.
A previous study 12 compared the prevalence of metabolic syndrome and its components between the Circassian/Chechen population living in Jordan and the Arab population living there, and reported multiple differences. The objective of this study is to compare the demographic details, clinical characteristics, and laboratory parameters between a genetically-isolated combined Circassian/Chechen diabetic population and an Arab diabetic population, both sharing the same environment in Jordan. Differences among the 2 groups could be used to guide public health programs and treatment policies that account for such differences. They would also pave the way for a genome-wide association study to discover the genetic factors responsible for these differences, and therefore novel genetic factors in the pathogenesis and management of type 2 DM.
Methods. Study population and data collection.
A comparative study between diabetes patients of Arab descent and diabetes patients of Circassian/Chechen descent was conducted. Ethical approval to carry out the study was granted by the Ethics Committee of Jordan University of Science and Technology, Irbid, Jordan, and all procedures followed were in accordance with the 1964 Declaration of Helsinki. All participants signed a written informed consent to the study. Data were collected in the period from 2013 to 2014 through direct interviews with the participants at King Abdullah University Hospital in Irbid, and at The National Center for Diabetes, Endocrinology, and Genetics in Amman, Jordan. Six hundred type 2 DM patients were approached to participate in the study. Of those, 347 fulfilled the inclusion criteria and consented to be included in the study. Pedigree information covering the previous 3 generations was analyzed in the Circassian, Chechen, and Arab populations to verify their ethnicity and origin. Inclusion criteria included: diagnosis with type 2 DM more than 6 months prior to data collection, confirmed ethnicity (Circassian, Chechen, or Arab) in the last 3 generations using pedigree information, living in Jordan, age >30 years old, and age of diagnosis >18 years old. Patients with type 1 DM were excluded from the study. Circassian and Chechen subjects were combined into one group as analysis showed no significant demographic, clinical, or laboratory differences between them. Information was obtained using a questionnaire designed for the study. The questionnaire consisted of 3 parts. The first part focused on socio-demographic details, including gender, age, marital status, and level of education. The second part included questions regarding the patients' lifestyle, including regular exercise, a healthy dietary plan (reduced sugar, salt, and fat intake), and smoking history. The third part explored the patients' clinical characteristics, including a family history of type 2 DM and hypertension and the presence of comorbidities, specifically asking about hypertension, coronary artery disease, and dyslipidemia.
Measurements and laboratory analysis. Anthropometric measurements were obtained at the interviews. Height was measured to the nearest centimeter using a standard height scale equipped with a headpiece, weight was measured to the nearest kilogram using a standard mechanical weighing scale, and waist circumference was measured to the nearest centimeter at the area of maximal circumference midway between the iliac spines and the lowest costal margin using a measurement tape during minimal respiration. Laboratory data (fasting blood sugar, HbA1c, and lipid profile components) were extracted from the patients' medical records after acquiring their consent. The most recent laboratory data at the time of the interview were included; these had to be no more than 3 months prior to the interview, and the patient had to be on the same treatment plan they were on at the time of the interview.
Definition of variables. Level of education was divided into 3 categories: basic education (defined as not successfully finishing 12 years of school); secondary education (defined as successfully finishing 12 years of school but not holding a college/university degree); and graduate/postgraduate (defined as holding a college/university degree). Regular exercise was defined as 30 minutes of exercise at least 3 days a week as per the Jordanian health authorities' guidelines. A person who had quit smoking more than 6 months prior to the interview was defined as a former smoker. A positive family history of a disease was defined as a confirmed diagnosis of that disease in a first-degree relative. Patients were considered to have abdominal obesity if their waist circumference was 102 cm or greater in males and 88 cm or greater in females. The body mass index (BMI) was calculated using the equation: BMI = weight (kg) / [height (m)]². Patients were considered to be overweight if they had a BMI of 25-29.9 kg/m², and obese if they had a BMI of 30 kg/m² or greater, as defined in the WHO Technical Report Series 894 / 2000. The following definitions and cutoff points were used in the study according to the ADA guidelines / 2015: glycemic control was considered satisfactory if HbA1c levels were <7.0%, and unsatisfactory if HbA1c levels were ≥7.0%. Elevated LDL was defined as ≥100 mg/dL, low HDL was defined as <40 mg/dL in males and <50 mg/dL in females, elevated cholesterol was defined as ≥200 mg/dL, and elevated triglycerides were defined as ≥150 mg/dL. Patients were considered dyslipidemic if they had at least one of the previously mentioned lipid abnormalities.
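The classification rules above can be expressed compactly in code. The following Python sketch is illustrative only (the study reports no code, and all field names are hypothetical); it applies the BMI, central obesity, ADA glycemic, and lipid cutoffs to a single patient record.

```python
# Illustrative sketch, not the authors' code: applies the cutoff definitions
# above to one patient record. All field names are hypothetical.

def classify_patient(sex, weight_kg, height_m, waist_cm, hba1c,
                     ldl, hdl, cholesterol, triglycerides):
    bmi = weight_kg / (height_m ** 2)            # BMI = weight (kg) / [height (m)]^2
    overweight = 25.0 <= bmi <= 29.9
    obese = bmi >= 30.0
    # Abdominal (central) obesity: waist >= 102 cm in males, >= 88 cm in females
    central_obesity = waist_cm >= (102 if sex == "male" else 88)
    # ADA 2015 target used by the study: HbA1c < 7.0% is satisfactory control
    satisfactory_control = hba1c < 7.0
    # Lipid abnormalities per the study's cutoffs (mg/dL)
    low_hdl = hdl < (40 if sex == "male" else 50)
    dyslipidemia = (ldl >= 100 or low_hdl or
                    cholesterol >= 200 or triglycerides >= 150)
    return {"bmi": round(bmi, 1), "overweight": overweight, "obese": obese,
            "central_obesity": central_obesity,
            "satisfactory_glycemic_control": satisfactory_control,
            "dyslipidemia": dyslipidemia}

print(classify_patient("female", 82, 1.60, 95, 7.8, 120, 45, 210, 160))
```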
Statistics. The Statistical Package for the Social Sciences (IBM Corp., Armonk, NY, USA) version 22 was used for data entry and analysis. Categorical variables were presented as frequencies and percentages, while continuous variables were presented using means and standard deviations. A p-value cutoff of 0.05 was used to determine statistical significance of a result.
Results. Participants' characteristics. The socio-demographic details of the 2 populations are shown in Table 1. No statistically significant differences in age, marital status, or level of education were found between Arabs and Circassians/Chechens. Clinical and lifestyle details. As shown in Table 2, more Arabs (78.1%) reported a first-degree relative with DM than Circassians/Chechens (59.1%), but more Circassians/Chechens (77.3%) reported a family history of hypertension than Arabs (65.1%). Of all participants, 69.2% were diagnosed with and treated for hypertension. Arabs had higher rates of hypertension (75.5%) and dyslipidemia (76.8%) than Circassians/Chechens (55.5% and 35.5%, respectively). Less than half of the total population (43.5%) were on a healthy low-sugar, low-salt, and low-fat diet, and 51.0% of all participants regularly exercised (30 minutes a day at least 3 days a week). Circassians/Chechens exercised more than Arabs, but there were more current smokers (20.0% versus 9.3%) and former smokers (27.3% versus 14.4%) in the Circassian/Chechen group. A total of 12.7% of participants were active smokers at the time of the interview.
Anthropometric measurements and obesity. Table 3 shows the anthropometric measurements and obesity details of the 2 ethnic samples. Arabs had significantly higher weight, waist circumference (WC), and waist-to-height ratio (WHtR) means than Circassians/Chechens. No differences were found in terms of BMI means or obesity status between the 2 ethnic groups. More Arabs (90.0%) were centrally obese according to waist circumference than Circassians/Chechens (69.8%).
Glycemic and lipid laboratory values. As shown in Table 4, both glycemic parameters studied (fasting blood sugar and glycosylated hemoglobin) were significantly higher in Arabs. Additionally, significantly more Arabs (60.6%) were found to have unsatisfactory glycemic control (HbA1c ≥7.0%) than Circassians/Chechens (38.2%). A total of 53.5% had unsatisfactory control. Most participants (88.8%) had at least one lipid abnormality (dyslipidemia). No significant differences were found in HDL means or the percentage of patients with low HDL between the 2 groups. However, Circassians/Chechens had higher means of LDL, cholesterol, and triglycerides than Arabs. More Circassians/Chechens had elevated cholesterol (47.3%) and elevated triglycerides (60.0%) than Arabs (20.3% and 42.6%, respectively).
Discussion. Factors for developing DM are generally categorized into modifiable (such as diet, physical activity, lipid levels, and hypertension) and non-modifiable (such as ethnicity, genetics, age, and family history of DM). Studying both types of factors is important to fully understand the pathogenesis of the disease and its progress, as well as its prevention and optimal management. This study was conducted with the aim of finding significant differences in terms of clinical, anthropometric, and laboratory characteristics between 2 genetically distinct ethnic groups sharing the same environment, and multiple differences were found. These differences should be considered when developing a management plan for Arab, Circassian, and Chechen people living with diabetes. Additionally, these findings are worth further exploration, genetically and clinically, in the hope of finding new risk or protective factors for the development and control of DM. Previous studies had explored hypertension, metabolic syndrome, prevalence of DM and glycemic control, and nutrient intake and its relation to diabetes in the Circassian and Chechen populations in Jordan. [11][12][13][14] This study compares a combined diabetic Circassian/Chechen population with a diabetic Arab population living in Jordan in terms of clinical details, obesity, and glycemic and lipid laboratory parameters.
Socio-demographic characteristics were comparable between Arabs living with diabetes and Circassians/Chechens living with diabetes. These environment-determined similarities imply that differences found in other areas can be attributed to genetic differences between the studied ethnic groups. Hypertension is a common comorbidity among diabetes patients, and studies show a higher prevalence of hypertension in diabetics than in nondiabetics. 15,16 Approximately 69.2% of the studied sample were diagnosed with hypertension and treated with antihypertensive medication. A study in the UK demonstrated a difference in hypertension among diabetes patients of different ethnic groups sharing the same environment; 16 similarly, this study showed a significantly higher percentage of hypertension in Arabs with diabetes (75.5%) than in Circassians/Chechens with diabetes (55.5%). Self-reported dyslipidemia was also higher in Arabs (76.8%) than in Circassians/Chechens (35.5%). A high-fat diet and physical inactivity are established risk factors for the development of DM. 1,17 Exercise, on the other hand, is a protective factor against developing DM, and is an effective component in its management. 18 Less than half of both groups were on a healthy low-fat diet. Circassians/Chechens living with diabetes exercise more than Arabs living with diabetes (74.5% of Circassians/Chechens with diabetes versus only 40.1% of Arabs with diabetes regularly exercise). These findings necessitate putting more emphasis on lifestyle modifications as an important component of DM management when treating DM in all studied ethnicities. More Circassians/Chechens were current or former smokers than Arabs.
Anthropometric indices are indicators of metabolic risk in general and DM risk in particular. 19,20 Multiple studies have shown that indices that take the distribution of fat and central obesity into account (waist circumference and waist-to-height ratio) are better predictors of metabolic risk than indices that do not (weight and BMI). 19,21,22 Anthropometric indices were examined and demonstrated higher weight, waist circumference, and waist-to-height ratio means in Arabs. Obesity, in particular, is a risk factor for the development of DM, cardiovascular disease, and reduced life expectancy. 1,23 It is increasing in prevalence in Middle Eastern populations in general 24 and the Jordanian population in particular. 6,25 In the studied sample, 60.6% of all participants were obese (defined as BMI ≥30 kg/m²), and 83.2% were centrally obese (defined as WC ≥102 cm in males or ≥88 cm in females). No differences were found in the percentages of overweight or obese participants between the 2 ethnic groups. However, more Arabs were centrally obese (90%) than Circassians/Chechens (69.8%). Two glycemic laboratory parameters were analyzed, HbA1c and fasting blood sugar; Arabs with diabetes had higher means of both. It was also found that more Arabs (60.6%) had unsatisfactory glycemic control (HbA1c ≥7.0% as defined by the ADA) than Circassians/Chechens (38.2%). Previous studies carried out on Jordanian diabetic populations reported percentages consistent with the percentage found in participants of Arab ethnicity in our study, but much higher than the percentage found in participants of Circassian/Chechen ethnicity. 7,25,26 These results may indicate the presence of protective genetic factors in the Circassian/Chechen population in terms of glycemic control; factors that may have a role in the management of DM in that population in the future.
Dyslipidemia, as described earlier, is a common comorbidity of DM. It is still unclear whether it is an independent risk factor for DM or just a confounding factor related to obesity and glucose intolerance; however, it is an established risk factor for cardiovascular and ischemic heart disease, common causes of death in patients with diabetes. 27 Of all participants, 88.8% were found to have at least one lipid abnormality and hence dyslipidemia. Such a high percentage in both groups suggests a problem in the approach to and management of dyslipidemia in Jordanian DM patients. Analysis of the components of dyslipidemia revealed that Circassians/Chechens had higher means of LDL, cholesterol, and triglycerides than Arabs. Circassians/Chechens also had a higher percentage of patients with elevated cholesterol (47.3%) and elevated triglycerides (60%) than Arabs (20.3% and 42.6%, respectively). This study had several strengths. It analyzed 2 understudied diabetic ethnicities and compared them to a third, all living in the same environment and having similar environmental factors. Furthermore, multiple findings suggesting a different disease process among the studied ethnicities were found. Those differences can be used to guide DM public health programs that take them into account. One limitation of this study is the lack of multivariable analysis; it did not utilize regression techniques.
In conclusion, this study compared genetically isolated ethnic populations sharing the same environment in order to widen the understanding of the factors that contribute to the global type 2 DM epidemic. Multiple discrepancies between the 2 populations were found, suggesting a role for genetic factors. New DM management recommendations and policies should be implemented when treating diabetic patients of these ethnicities, particularly in the areas of glycemic control, lipid control, and obesity. Moreover, further clinical and genetic studies aimed at these populations should be designed in the hope of discovering novel risk and protective factors in type 2 DM pathogenesis, glycemic control, obesity, and dyslipidemia in diabetic patients. | 2018-04-03T06:14:49.429Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "37f8be108e565e4b4c0f2236d0593ea865df82a4",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.15537/smj.2017.2.17910",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37f8be108e565e4b4c0f2236d0593ea865df82a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248530038 | pes2o/s2orc | v3-fos-license | Time to Death and Its Determinant Factors Among Patients With Chronic Heart Failure in Northwest Ethiopia: A Retrospective Study at Selected Referral Hospitals
Background Heart failure (HF) is a major health problem that affects patients and healthcare systems worldwide. It is the leading cause of morbidity and death and negatively impacts the quality of life, healthcare costs, and longevity. However, the determinants of death are not well defined. This study aimed to identify the determinants of death among patients with HF in the Amhara Region, Northwest Ethiopia. Methods A multicenter retrospective cohort study was conducted on 285 patients aged 15 years or older under follow-up from 1 January 2015 to 31 December 2019. Descriptive analyses were summarized using the Kaplan–Meier survival curve and the log-rank test. Then, the Cox proportional hazard regression model was employed to estimate the hazard of death up to 5 years after patients were admitted to the HF department for treatment follow-up. Results Out of 285 patients with HF, 93 (32.6%) died within 5 years of follow-up. Anemia was the most common comorbid disease (30.5%), and valvular heart disease was the most common etiology (33.7%) of chronic heart failure in this study. This study showed a significant mortality difference between hospitals. HF patients with hypertension [adjusted hazard ratio (AHR): 3.5076, 95% confidence interval (CI): 1.43, 8.60], anemia (AHR: 2.85, 95% CI: 1.61, 5.03), pneumonia (AHR: 2.02, 95% CI: 1.20, 3.39), chronic kidney disease (AHR: 2.23, 95% CI: 1.31, 3.77), and diabetes mellitus (AHR: 2.42, 95% CI: 1.43, 4.09) were at a higher risk of death. Moreover, patients with symptoms in New York Heart Association Classes III and IV, patients with ischemic heart disease and unknown etiologies, men (AHR: 2.76, 95% CI: 1.59, 4.78), and those with a high pulse rate (AHR: 1.02, 95% CI: 1.00, 1.04) were at a higher risk of death. Conclusion There was a mortality difference between hospitals. This study has revealed that anemia, diabetes mellitus, pneumonia, hypertension, chronic kidney disease, HF etiology, severe New York Heart Association Class (III and IV), male sex, and high pulse rate were the main factors associated with death in patients with HF. Health professionals should give more attention to male patients, patients whose pulse rate is high, and patients with comorbidities in the ward.
INTRODUCTION
Heart failure (HF) is a major health problem that affects patients and healthcare systems worldwide (1). It is the leading cause of morbidity and mortality and negatively impacts the quality of life, healthcare costs, and longevity (2). It is a pandemic, and at present, an average of 64.3 million people are living with HF worldwide (3). It is also associated with high morbidity and has a significant impact on healthcare expenditures in developed countries (4)(5)(6).
Chronic heart failure (CHF) is the final common pathway of various cardiac diseases and is characterized by high morbidity and mortality (7). While morbidity due to CHF is high in many parts of the world, the etiologies are different. The most common underlying cause of HF in high-income countries is coronary artery disease. In Sub-Saharan Africa (SSA), the predominant causes were rheumatic heart disease, hypertensive heart disease, cardiomyopathy, and Cor pulmonale (8). Low-income nations were disproportionately affected by preventable causes of HF, such as rheumatic heart disease and hypertension (9).
Patients with CHF often have multiple factors that accelerate disease progression to a greater or a lesser extent and worsen the response to treatment (10,11). During CHF management in patients, wide ranges of comorbidities pose important challenges. Since the impact of these comorbidities and their interactions remain incompletely understood, predicting patients' clinical courses is difficult (12). It can increase morbidity and mortality, complicate the care of these patients, and affect the quality of life for patients with CHF (13,14). HF is an increasingly common condition, and patients often experience persistent symptoms and poor quality of life, even when they are receiving the best possible treatment for their HF. Due to the high prevalence of comorbidities with HF, many coexisting medical conditions have been independently associated with the increased risk of morbidity and mortality (15). As such, optimal management of existing comorbidities in the setting of CHF is particularly important to prevent disease progression, reduce CHF hospitalizations, and improve quality of life (16).
Mortality in patients with CHF remains high, but the causes of death are not well defined. Despite the high death rate among patients with CHF, most kinds of HF can be prevented with a healthy lifestyle. Even once HF has been established, premature fatalities can be avoided by seeking medical help as soon as possible. Most patients with HF have other conditions that dominate their health experience (17). To improve outcomes for patients with HF and to ultimately save patients' lives, identifying the determinant factors of HF is important. The rates of death in hospitalized patients with HF have not been adequately studied in Ethiopia. Furthermore, in this specific demographic, critical factors related to in-hospital death have not been addressed in Northwest Ethiopia. However, multiple factors associated with the death of patients with CHF still need to be assessed. Thus, this study was conducted to address this issue and to identify the determinant factors of death among patients with CHF in three selected Amhara Region referral hospitals, in Northwest Ethiopia.
Abbreviations: AF, atrial fibrillation; AHR, adjusted hazard ratio; CHF, chronic heart failure; CHR, crude hazard ratio; CI, confidence interval; CKD, chronic kidney disease; BVF, bi-ventricular failure; DTRH, Debre Tabor Referral Hospital; DBP, diastolic blood pressure; FHSACH, Felege Hiwot Specialized and Comprehensive Hospital; HHD, hypertensive heart disease; IHD, ischemic heart disease; K-M, Kaplan-Meier; LVEF, left ventricular ejection fraction; LVF, left ventricular failure; NYHA, New York Heart Association; OR, odds ratio; PH, proportional hazard; PR, pulse rate; RHD, rheumatic heart disease; RVF, right ventricular failure; RR, respiratory rate; SBP, systolic blood pressure; SD, standard deviation; SSA, Sub-Saharan Africa; UoGSACH, University of Gondar Specialized and Comprehensive Hospital; VHD, valvular heart disease; WHO, World Health Organization.
Study Area and Study Design
Our study area was purposively selected from three referral hospitals in the Amhara Region, namely, Debre Tabor Referral Hospital (DTRH), Felege Hiwot Comprehensive and Specialized Hospital (FHCSH), and the University of Gondar Comprehensive and Specialized Hospital (UoGCSH), located 666 km, 578 km, and 725 km, respectively, from Addis Ababa, the capital city of Ethiopia. DTRH is the only referral hospital in the zone and surrounding regions, serving 2.3 million people with curative and preventive health treatment. There are 91 beds available for inpatient services and 12 outpatient departments (OPDs). Patients with specific chronic conditions are referred to the hospital's specialty chronic illness clinics for follow-up (18). FHCSH is a tertiary referral and teaching hospital with 400 beds and around 15 adult OPDs that serves over 7 million people in the surrounding area (19). UoGCSH is a tertiary teaching and referral hospital in Northwest Ethiopia that has over 450 inpatient beds and provides referral health services to over 5 million people. This hospital provides a variety of services to the community, including chronic disease treatment. It has 13 distinct wards and 14 different OPDs (20). A multicenter retrospective cohort study was used.
Duration of Study
The duration of the study was 5 years. The investigator reviewed the medical profile of each patient from his or her charts. The study period was between 1 January 2015 and 31 December 2019.
Study Population
In this study, patients with CHF aged 15 years or older were selected. All patients with CHF under follow-up in all Amhara Region referral hospitals were our target population. All randomly selected patients with CHF who took CHF treatment for a minimum of 1 month during the follow-up period in the study hospitals were included. Patients with incomplete baseline variables and patients with acute heart failure were not considered for this study.
Source of Data and Method of Data Collection
In this study, we used a secondary source of data. The data were obtained from three selected referral hospitals in the Amhara Region. The variables used in this study were extracted from patients' charts, which contain the epidemiological, laboratory, and clinical information of all patients with CHF under follow-up, including a detailed HF history and socio-demographic variables. The data were collected by healthcare service providers of the CHF clinic after we had given them adequate orientation about the data collection procedures and the variables included in this study.
Sample Size Determination and Sampling Procedure
The sample size estimation was determined by considering financial constraints, time constraints, and data analysis techniques. Before the actual data were collected, emphasis was placed on the determination of the sample size, which mainly depends on the purpose of the study, the available resources, and the precision required. By taking the proper sample size, the degree of precision required for generalization was increased. Thus, the sample size determination formula (21) adopted for this study was the single-proportion formula with a finite-population term, n0 = N·Z²·P(1−P) / [d²(N−1) + Z²·P(1−P)], where n0 is the sample size needed; N is the total population size of the patients with CHF in the three selected referral hospitals (N = 4,064); Z is the upper α/2 value of the standard normal distribution, and, for this study, we used α = 0.05 as the significance level, which gives Z = 1.96; P is the proportion (death of patients with CHF); and d is the level of precision (maximum allowable error). The specification of d must be small to have good precision (21). We used the maximum allowable difference between the maximum likelihood estimate and the unknown population parameter (d = 0.05). We used the probability of the event, that is, 31.3% of the total patients, and this was obtained from a previous study in Ethiopia based on the University of Gondar referral hospital (16). Therefore, we used P = 0.313 as the probability (proportion) of death. Therefore, n0 = 306, and since n0/N > 5%, we used the corrected sample size n = n0/(1 + n0/N), giving a final sample size of 285. The total number of patients with CHF in the study period in each hospital was N DTRH = 1,140, N UoGCSH = 1,354, and N FHCSH = 1,570. We used proportional allocation to select the sample from each hospital. The proportion was calculated as follows: the total number of CHF patient follow-ups at the CHF clinic of a given hospital between 1 January 2015 and 31 December 2019, multiplied by our calculated sample size (285), and then divided by the total number of patients with CHF who started CHF follow-up in the three hospitals in the study period (N = 4,064). The total sample size in each hospital was n DTRH = 80, n UoGCSH = 95, and n FHCSH = 110. A simple random sampling technique was employed to select a representative sample from each hospital.
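The formula itself did not survive extraction; the figures reported (n0 = 306 for N = 4,064, and a final n of 285) are consistent with a single-proportion formula carrying a finite-population term, followed by the stated n0/(1 + n0/N) adjustment. The Python sketch below reproduces those figures under that assumption, together with the proportional allocation across the three hospitals.

```python
import math

# Values stated in the text
N = 4064     # total CHF patients across the three hospitals
Z = 1.96     # upper alpha/2 value of the standard normal, alpha = 0.05
P = 0.313    # assumed proportion of death (prior University of Gondar study)
d = 0.05     # maximum allowable error (precision)

# Assumption: single-proportion sample size with a finite-population term,
# which reproduces the stated n0 = 306 (the exact formula is not shown in the text)
n0 = math.ceil(N * Z**2 * P * (1 - P) / (d**2 * (N - 1) + Z**2 * P * (1 - P)))

# Since n0 / N exceeds 5%, the correction n = n0 / (1 + n0/N) is applied
n = round(n0 / (1 + n0 / N))                                # -> 285

# Proportional allocation to the three hospitals
hospitals = {"DTRH": 1140, "UoGCSH": 1354, "FHCSH": 1570}
allocation = {h: round(size * n / N) for h, size in hospitals.items()}
print(n0, n, allocation)   # 306 285 {'DTRH': 80, 'UoGCSH': 95, 'FHCSH': 110}
```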
Response Variables
The response variable (outcome variable) for this study is the time to death of patients with CHF. In this study, patients who experienced death during the observation period were considered to have experienced the event of interest. Patients who had not experienced death during the follow-up period were censored. These included patients with CHF lost to follow-up, those who were referred to other health institutions, those who were discharged with improvement, or those who stayed admitted beyond the study period.
Independent Variables
The predictors associated with the time to death of patients with CHF are either socio-demographic variables or clinical variables. These variables are gender, age, CHF type (left ventricular failure, right ventricular failure, and biventricular failure), hospitals (DTRH, UoGCSH, and FHCSH), etiology of HF (VHD, HHD, IHD, Cor pulmonale, dilated cardiomyopathy, and other etiologies), NYHA class (Class II, Class III, and Class IV), LVEF, residence (rural, urban), pulse rate, respiratory rate, systolic blood pressure, diastolic blood pressure, weight, and presence of atrial fibrillation, diabetes mellitus, hypertension, pneumonia, chronic kidney disease (CKD), and anemia as comorbidities.
Data Management and Statistical Analysis
SPSS version 23.0 was used for data entry. R version 4.0.3 statistical software was used for statistical analysis. Data are described by numbers and percentages or by means and standard deviations, depending on the scale of measurement. Descriptive statistics for continuous variables were summarized using means and standard deviations. For comparisons between groups, exclusively nonparametric tests were used. The survival probability among patients with CHF, from the start of follow-up to the event, was estimated using the Kaplan-Meier survival curve. After we estimated the survival probability, the Cox proportional hazard regression model was fitted. All variables with a P ≤ 0.25 in the bivariable analysis were included in a multivariable analysis. The Cox proportional hazard model assumption was checked using a formal statistical test, the GLOBAL test. In the final model, hazard ratios with 95% confidence intervals (CIs) and P-values (<0.05) were used to identify statistically significant predictors and to measure the strength of association.
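The study carried out these steps in SPSS and R; the sketch below is an assumed Python equivalent using the lifelines package, shown only to make the workflow concrete. The file name and column names are hypothetical, and categorical covariates are assumed to be numerically (dummy) encoded.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("chf_followup.csv")        # hypothetical extracted chart data

# Kaplan-Meier estimate of the survival probability
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time_months"], event_observed=df["died"])
print(kmf.survival_function_.tail())

# Multivariable Cox proportional hazards model; candidates would be the
# variables with p <= 0.25 in the bivariable analysis
cols = ["time_months", "died", "sex", "hypertension", "anemia", "diabetes",
        "pneumonia", "ckd", "nyha_class", "pulse_rate"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_months", event_col="died")
cph.print_summary()                         # hazard ratios, 95% CIs, p-values

# Schoenfeld-residual based check of the proportional hazards assumption,
# analogous to the GLOBAL test reported in Table 4
cph.check_assumptions(df[cols], p_value_threshold=0.05)
```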
Ethical Approval
Since the data collection was conducted retrospectively from patient medical record charts and patients were not directly involved in data collection, informed consent from patients was not applicable. To access patients' medical record charts, ethical clearance was obtained from the Institutional Review Board, Faculty of Natural and Computational Science, Debre Tabor University, Ethiopia, with reference number: RCS/181/2019. This study was conducted in accordance with the guidelines of Good Clinical Practice and the Principles of the Declaration of Helsinki.
RESULTS
The data consist of 285 patients with congestive HF who were treated under CHF follow-up in three selected hospitals in the Amhara Region, Northwest Ethiopia. We had a 100% response rate. The survival endpoint of interest is the time to death of patients with CHF. Thus, 93 (32.6%) patients died during the study period, while the remaining 192 (67.4%) patients were censored.
Kaplan-Meier Estimates and Log-Rank Tests
The Kaplan-Meier estimator was applied to estimate the survival curves for categorical predictors. Figure 1 indicates that nonhypertensive patients have a higher probability of survival. In addition, Figure 2 shows that female patients have a higher probability of survival throughout the 5-year CHF treatment period than male patients. This means that the probability of death was higher for male patients and hypertensive patients compared with female and nonhypertensive patients (Figures 1, 2).
To check for significant differences among categories of factors, the log-rank tests were applied to all categorical variables. The null hypothesis is that there is no significant difference between the survival experiences of different groups of categorical variables.
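As an illustration of the test itself rather than of the study's data, a two-group log-rank comparison can be run as in the sketch below; the durations and event indicators are toy values.

```python
from lifelines.statistics import logrank_test

# Toy follow-up times (months) and event indicators (1 = died, 0 = censored)
t_htn,  e_htn  = [4, 10, 15, 22, 36, 48], [1, 1, 1, 0, 1, 0]   # hypertensive group
t_none, e_none = [12, 20, 30, 45, 55, 60], [1, 0, 0, 1, 0, 0]  # non-hypertensive group

res = logrank_test(t_htn, t_none, event_observed_A=e_htn, event_observed_B=e_none)
print(res.test_statistic, res.p_value)  # p < 0.05 would indicate differing survival curves
```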
The log-rank tests showed that there was a significant difference in the death rates between groups defined by gender, hospital, residence, CHF type, NYHA class, hypertension, pneumonia, diabetes, anemia, and CKD at the 5% level of significance (Table 3).
Uni-variable Cox Proportional Hazard Model
The univariable Cox proportional hazard regression models were fitted for every covariate to check covariates that affected the survival of patients with CHF before proceeding to higher models. Consequently, the candidate variables for building a multivariable Cox model are the sex of patients, the place of residence, hospital, PR, RR, SBP, DBP, NYHA class, CHF type, etiology, and the presence of hypertension, CKD, anemia, pneumonia, and AF as comorbidity.
Cox Proportional Hazard Assumption
The proportional hazard model assumption asserts that the hazard ratios are constant over time. This means the risk of failure must be the same, no matter how long subjects have been followed. Cox proportional hazard assumptions were checked using Schoenfeld residuals. As shown in Table 4, the P-values of all covariates are >5%, indicating that the correlation between Schoenfeld residuals and survival time is not significant, which implies that all the covariates satisfy the proportionality assumption at the 0.05 level of significance, and the P-value of the GLOBAL test (0.0679) is not significant. This indicates that the proportional hazard (PH) assumption for the Cox model was not violated (Table 4).
Cox PH Model
All the parameter estimates were estimated by considering the other predictors. Notably, 95% of CIs for the hazard ratios of the statistically significant risk factors do not include one (the null value). In contrast, the 95% CIs for the nonsignificant risk factors include the null value. The results of the Cox proportional hazard model are presented in Table 5.
Based on Table 5, sex, hospital, hypertension, DM, anemia, CKD, pneumonia, NYHA class, etiology, and pulse rate were significant factors that increased the risk of death among patients with CHF. We observed that the hazard of death among male patients with CHF was 2.8-fold [adjusted hazard ratio (AHR): 2.76, 95% CI: 1.59, 4.78, P = 0.001] higher than that among female patients. The hazard of death among patients with CHF followed up at FHCSH was 3.6-fold (AHR: 3.64, 95% CI: 1.71, 7.76, P = 0.001) higher, and among patients with CHF followed up at UoGCSH was 2.6-fold (AHR: 2.58, 95% CI: 1.25, 5.29, P = 0.009) higher, compared with patients with CHF followed up at DTRH. The hazard of death of CHF patients with hypertension was 3.5-fold (AHR: 3.51, 95% CI: 1.43, 8.60, P = 0.006) higher than in nonhypertensive patients with CHF. The hazard of death among CHF patients with CKD was 2.2-fold higher compared with those without CKD (AHR: 2.23, 95% CI: 1.31, 3.77, P = 0.003). Additionally, the hazard of death among patients with CHF who had pneumonia was 2-fold (AHR: 2.02, 95% CI: 1.20, 3.39, P = 0.008) higher than in patients who did not have pneumonia as a comorbidity. Furthermore, the hazard of death among patients with CHF caused by IHD was 2.6-fold (AHR: 2.64, 95% CI: 1.13, 6.15, P = 0.024) higher, whereas the hazard among those with other etiologies was lower (AHR: 0.32, 95% CI: 0.13, 0.78, P = 0.012), compared with patients whose CHF was caused by VHD. Lastly, a higher baseline heart rate was a significant predictor of mortality (HR = 1.02, 95% CI: 1.00, 1.04, P = 0.015). For each one-unit increase in PR, the expected hazard of death increases by 2% (Table 5).
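The per-unit reading of the pulse-rate hazard ratio follows from the standard Cox model form (a textbook restatement, not taken from the paper):

```latex
h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \dots + \beta_p x_p),
\qquad \mathrm{HR}_j = e^{\beta_j}
```

With HR = 1.02 for pulse rate, each additional beat per minute multiplies the hazard of death by 1.02 (a 2% increase), and, for example, a 10-beat increase corresponds to a factor of 1.02^10, approximately 1.22.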
DISCUSSION
This study examines the effect of comorbidities and other factors on the survival time of patients with CHF. It also demonstrated that a higher heart rate was associated with adverse outcomes such as a high risk of mortality among patients with CHF. In the multivariable Cox proportional hazard model, sex, hypertension, CKD, pneumonia, diabetes mellitus, anemia, hospital, NYHA class, etiology/cause of HF, and pulse rate are significantly associated with the hazard of death.
In this study, we found that male patients had a higher risk of mortality compared with female patients. This result is supported by most studies (16,22,23), which showed that female patients had a slightly higher survival probability than male patients. In line with this, other studies (24)(25)(26) indicate that male gender carries a high risk of mortality. However, this study contradicts two earlier studies: one (27) showed no significant differences in mortality between the sexes of patients with HF, and the other (28) showed that female patients had a higher hazard of mortality.
Anemia was the most common comorbid disease (30.5%) in this study. The risk of death among anemic patients with CHF was 2.8-fold higher compared with their nonanemic counterparts. Similar to previous studies (11,(28)(29)(30)(31), we found that CHF patients with anemia had a high risk of mortality. The risk of death among CHF patients with diabetes mellitus was 2.8-fold higher compared with nondiabetic patients. These results are in agreement with previous findings (11,22,24,28,(31)(32)(33)(34)(35)(36)(37). Patients with both diabetes mellitus and HF are at a particularly elevated risk of death compared with nondiabetic patients. This study contradicts a previous study (27), which showed that DM has no significant effect on the death of patients with HF.
Another finding of this study revealed that the death rate among patients with CHF with advanced class [NYHA class (IV) and NYHA class (III)] was higher compared with patients with NYHA class (II). This finding was highly supported by previous studies (24,38), which found that a greater NYHA class worsened the quality of life of patients with HF. This finding contradicted the previous study (27), which showed that NYHA has no significant effect on the death of patients with HF.
This study reveals that patients with CHF presenting with hypertension were at higher risk of mortality. This finding was in line with the previous findings (28,29,35,39,40), which showed that hypertension had a positive significant effect on the prevalence of CHF. Similarly, mortality due to CHF was significantly higher in patients with pneumonia and CKD, which is in line with the studies of Jobs et al. (41) and van Deursen et al. and Senni et al., (31,40) respectively.
Further findings of this study demonstrated that VHD was the most common etiology (33.7%). However, patients with CHF caused by IHD were at a higher risk of death compared with patients with CHF caused by VHD. This finding is in agreement with previous Ethiopian studies (42). A significant positive, linear relationship was observed for both baseline and serially measured pulse rates with all-cause mortality (43). This finding is not in line with a previous study (30) that found no significant relationship between CHF mortality and any of the etiologies of CHF (ischemic heart disease, dilated cardiomyopathy, hypertensive heart disease, valvular heart disease, and other etiologies) except cor pulmonale. It also contradicted another study (44), which found that death outcomes were similar across etiological categories.
A higher baseline heart rate was known to be a significant predictor of death. This study was in line with studies (43,45) showing that a higher heart rate/pulse rate was associated with a high hazard of mortality. Both baseline and serially recorded pulse rates were found to have a significant positive, linear relationship with all-cause death (43). A higher baseline heart rate was a significant predictor of mortality, and reducing the heart rate improves prognosis in patients with HF (46). The hazard of death for patients who were treated at FHCSH and UoGCSH was higher compared with patients with CHF followed up at DTRH. This may be because patients with severe HF are treated at the comprehensive and specialized hospitals.
Strengths and Limitations
This study was performed in a multicenter setting, which can enhance the generalizability of the data to the entire population. In addition, this study has provided a real insight into the current clinical pattern among hospitalized patients with CHF in Northwest Ethiopia. However, at the same time, there were certain limitations in this study. First, due to the retrospective nature, the data obtained might be affected by the documentation culture of the hospital and the healthcare providers. Second, potentially relevant variables such as body mass index, alcoholism, marital status, education level, and smoking status were not included.
CONCLUSION
In this study, we aimed to examine the time to death and its determinant factors among patients with CHF in Northwest Ethiopia. According to the findings, male sex, hypertension, CKD, pneumonia, diabetes mellitus, and anemia were significant positive factors for death compared with their counterpart groups. There is a mortality difference between hospitals. A higher heart rate was associated with a high risk of mortality among patients with CHF. Health professionals could provide more attention to patients with CHF whose pulse rate is high and to patients with comorbidities of hypertension, chronic kidney disease, pneumonia, diabetes mellitus, and anemia in the ward. Finally, to show the association between longitudinal and survival outcomes, a future extension of this study, a joint model of longitudinally measured pulse rate and time to death, is recommended.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary materials, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved as follows: to access patients' medical record charts, ethical clearance was obtained from the Institutional Review Board of the Faculty of Natural and Computational Science, Debre Tabor University, Ethiopia, with reference number RCS/181/2019. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
YM contributed to the write-up, development of the proposal, data collection format, data entry, data analysis, and write-up of the manuscript. MM, AB, and SF participated in the design and data analysis, critically read the manuscript, and edited the manuscript. All authors have read and approved the manuscript. | 2022-05-06T13:04:27.647Z | 2022-05-06T00:00:00.000 | {
"year": 2022,
"sha1": "48fc0af544d5e751b313fff15dc8d78583911103",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "48fc0af544d5e751b313fff15dc8d78583911103",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233711614 | pes2o/s2orc | v3-fos-license | Investigation of Oligonucleotide Usage Variance Between SARS-related Coronaviruses and Common Cold Coronaviruses
Background: The widespread outbreak of SARS-CoV-2 has become a serious threat to human health. This newly emerged virus, together with the severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) viruses, belongs to the Coronaviridae family, whose virulent members cause severe acute respiratory syndrome in human beings. However, prior to the emergence of these virulent viruses, coronaviruses were known as leading causes of the mild common cold. Learning more about the genome organization of different strains can show how these viruses evolve and become virulent. Here, we report differences in the oligonucleotide distribution in the genomes of two groups of coronaviruses, SARS-related viruses versus common cold coronaviruses, by employing attribute weighting algorithm approaches. Results: In this study, we found a few oligonucleotides that significantly distinguish the two viral groups. Among the dinucleotide features, the discrepancy in TC and CC between SARS-related viruses and common cold coronaviruses was quite considerable. Furthermore, the CC dinucleotide was sequentially repeated in a few multinucleotide patterns, including the CCA, CCAC, ACCAC, and CACCAC motifs, with the highest values, which also discriminated the two viral groups. Conclusions: These remarkable oligonucleotides might point towards the existence of particular RNA elements that might be involved in viral infectivity.
Background
The Coronaviridae family is one of the largest groups of RNA viruses, causing a broad range of diseases in animal species and humans (1). Although until 2002 it was supposed that human coronaviruses only cause mild, self-limiting respiratory disease, the emergence of severe acute respiratory syndrome (SARS) in 2003, Middle East respiratory syndrome (MERS) in 2012, and recently the widespread outbreak of the SARS-CoV-2 virus in 2019 has prompted more attention to human coronaviruses, as these pathogens cause pneumonia, severe acute respiratory syndrome, and even death in human beings (2). In contrast, some coronavirus strains such as OC43, HKU1, NL63, and 229E are more probably associated with the seasonal common cold and rarely lead to severe disease in humans (3)(4)(5)(6). Although, due to the current global pandemic of SARS-CoV-2, extensive research is being done to expand knowledge about virulence factors in the pathogenicity process (7)(8)(9), the mechanisms by which coronavirus strains cause mild to severe illness in humans remain unclear. Among infectious pathogens, viruses, particularly RNA viruses, have regularly evolved their genomes to adapt to the host and avoid host antiviral mechanisms, ultimately establishing severe disease in the host (10). In fact, the viral genome consists of particular regions, in both coding and non-coding areas, that interact with both viral and host factors and dictate the progression of disease (10,11).
Viruses are able to change their genome under selective pressure to escape from host defense mechanisms (12,13). To take an example, an antiviral protein in humans, the zinc-finger antiviral protein (ZAP), restricts viral replication by binding specific sequences enriched in CG dinucleotides in the viral genome (14)(15)(16); however, some RNA viruses such as HIV are able to decrease the abundance of CG in their genome during evolution and eventually hamper ZAP's binding activity (17). Although the genome organization of coronaviruses has been structurally determined, further investigation of the genome structure of different coronavirus species can provide a guide for better understanding virus evolution and for discovering potential patterns in the viral genome that might have a significant role in the virus life cycle and pathogenicity. The genome of coronaviruses is a positive single-stranded RNA of approximately 27-30 kb, the largest genome among RNA viruses. The genome organization is almost similar in all strains, containing gene 1 (ORF1ab), which occupies two-thirds of the genome (approximately 20 kb) and encodes the replicase enzyme and a number of non-structural proteins, and ORF2-9, which make up only about 10 kb of the genome and encode the S, E, M, and N structural proteins, respectively. The 5′ and 3′ ends of the genome contain untranslated regions (UTRs), which include a leader sequence at the 5′ end, several stem loops at both the 5′ and 3′ ends, and a poly(A) tail at the 3′ end, and which play significant roles in viral replication and translation (18,19). Generally, the genome of a virus is constructed by the distribution of nucleotides, which are able to create particular patterns, including dinucleotides, trinucleotides, and other multinucleotides, that may have a vital role in viral replication and pathogenesis (20). As mentioned above, viruses are able to change their genome during the evolutionary process to maximize their power against host defense activity. In this study, we decided to perform a comprehensive analysis of the genomes of two groups of coronaviruses, SARS-related viruses and common cold coronaviruses, in order to expand our knowledge of the genome structure of human coronaviruses and the oligonucleotide distribution in their genomes. To achieve this goal, first the relative frequencies of dinucleotides to multinucleotides were calculated, and then various attribute weighting algorithms were used to determine the discrepancy in oligonucleotide distribution between the genomes of the two groups of human coronaviruses. The findings of the current study can help identify particular oligonucleotide patterns in the genomes of coronaviruses, which might have a vital role in evolutionary adaptation and viral infectivity.
Datasets generation
Generally, 5 datasets were generated based on oligonucleotide features; each one contained 532 samples (293 viral sequences related to SARS and 239 viral sequences related to common cold) with 16 dinucleotides, 64 trinucleotides, 256 tetranucleotides, 1,024 pentanucleotides, and 4,096 hexanucleotides, respectively, as oligonucleotide attributes in each dataset. Moreover, one dataset containing all 5,440 attributes was also created to be analyzed by the weighting algorithms (Sup 1).
Selection of the most important features
Although data cleaning was performed to remove useless attributes, all attributes proved informative and were retained in each dataset. The importance of each attribute contributing to the viral genome was evaluated by attribute weighting algorithms in the two groups of SARS-related and common cold coronaviruses. Although only a few significant oligonucleotide attributes distinguishing the two viral groups were identified (presented in Table 1 and Sup 2), a considerable oligonucleotide pattern that discriminated the two viral groups was also observed. Briefly, the CC dinucleotide received a significant value among the dinucleotide attributes. Moreover, among the trinucleotide features, the CC dinucleotide was also repeated with a high value in the CCA, GCC, and ACC features. These features were then sought among the tetranucleotides, where three features, CCAC, GCCG, and ACCC, were identified with significant values. Interestingly, our attention was drawn to the CCAC and ACCC patterns, which were also repeated with significant weights among a few penta- and hexanucleotides (Fig 1). Furthermore, to identify the most important oligonucleotide pattern, all attributes from di- to multinucleotides were also run in one dataset through the different weighting methods. Remarkably, the CACCAC oligonucleotide, coupled with a few other features, was highlighted with the highest score (a value of seven), as shown in Table 1. The results of feature selection are provided in Sup 2. The positions of the identified motifs were illustrated on the reference genomes in Fig 2. We found ten conserved CACCAC motifs at different positions in the SARS-CoV-2 and SARS genomes. However, only seven conserved CACCAC motifs were identified in the MERS genome.
Although most repetitions were located in ORF1, the CACCAC motif was also repeated once in the S ORF of SARS and MERS and three times in the SARS S ORF. Moreover, this motif was also observed in the 3′ UTR of SARS-CoV-2 and SARS. This motif was also identified in the genomes of common cold coronaviruses; however, the number of repetitions was quite variable in each species. In addition, the motifs were not well conserved among some strains, especially HKU1 strains.
Discussion
The genomes of RNA viruses contain different structures, such as cis-acting elements, repeated sequences, and RNA motifs, which contribute to the processes of the viral life cycle (11,26). In fact, these elements are able to interact with viral and cellular factors and regulate viral translation, replication, and encapsidation (27,28). For instance, the presence of a particular RNA structure named the internal ribosome-entry site (IRES) at the 5′ end of the genome in many pathogenic viruses, such as hepatitis A virus (HAV), hepatitis C virus (HCV), and poliovirus, allows them to interact with host ribosomal proteins and recruit the eukaryotic translation machinery for their own protein synthesis (29). Moreover, some other features can be involved in virus strategies for the induction and regulation of the host immune system. A conspicuous example of this sort of feature is the existence of pathogen-associated molecular patterns (PAMPs) as small pieces of RNA in the viral genome. In fact, PAMPs are conserved small sequences of the viral genome, or viral replication products, which are recognized by pattern-recognition receptors (PRRs) such as Toll-like receptors (TLRs) or RIG-I-like receptors (RLRs), following which the host innate immune system is activated against the pathogens (30,31). In contrast, the presence of some other motifs or RNA elements in the genomes of some viruses assists them in evading host immune mechanisms. As an example, an RNA structure in the 3C protease ORF of the poliovirus genome inhibits the function of RNase L, an antiviral endonuclease that is activated during viral infections as part of the innate immune system (32,33). Given the importance of these elements in viral replication and infectivity, the current study was performed to comprehensively analyze viral sequences of highly virulent coronaviruses in comparison with coronaviruses related to the common cold, in order to predict a few probable significant RNA motifs. With the development of computational programs, the presence of RNA structures in viral genomes can be anticipated by bioinformatics methods. Recently, feature selection techniques such as attribute weighting algorithms have been used to predict the most important attributes at the nucleotide and amino acid levels among large numbers of protein or genome sequences (23)(24)(25).
In this study, the relative frequency of oligonucleotides (dinucleotides to hexanucleotides) contributing to the viral genomes of different coronavirus strains was calculated as explained in the methods section, and then the most important patterns were identified by different attribute weighting algorithms. Given the results, a sequential pattern ranging from the CC dinucleotide to the CACCAC hexanucleotide, defined by almost 90 percent of all attribute weightings, was identified as the most important set of features distinguishing SARS-related and common cold coronaviruses (Fig 1). A few previous experiments showed that the presence of CCA boxes in viral genomes, particularly the genomes of positive single-stranded RNA viruses, can lead to significant levels of transcriptional initiation at multiple sites. In fact, the viral replicase seems to be able to initiate transcription from CCA boxes without the presence of a unique promoter (34). In the current study, the CCA motif was shown to be a remarkable feature among trinucleotides, and it was repeated sequentially in the CCAC, ACCAC, and CACCAC motifs. Furthermore, among all attribute features (di- to hexanucleotides), CACCAC was also valued by 70 percent of all weighting models (Table 1). There is a possibility that the presence of conserved multiple motifs in the genomes of SARS-related viruses, especially SARS and SARS-CoV-2, which have the highest frequency of this motif, might exert a strong influence on viral RNA synthesis. It is noticeable that this motif was also present as a conserved motif in the 3′ UTR of SARS-CoV-2 and SARS, but it was not observed in this region of the other coronavirus genomes. Given the importance of 3′ UTR sequences in viral replication and infectivity, the role of this remarkable motif should be evaluated. In this study, some other oligonucleotide features with sizable scores were also found to distinguish the two viral groups, as shown in Table 1 and Sup 2. To understand the biological importance of these features in the life cycles of different coronavirus strains, they should be targeted and scrutinized by laboratory techniques in cell culture systems and animal models.
Among the dinucleotide features, the TC and CC dinucleotides, which were confirmed by 80 and 90 percent of all attribute weightings, respectively, also attracted our attention. According to numerous studies, dinucleotide composition constitutes a genomic signature among a variety of virus species, which might have a significant impact on the viral life cycle and host adaptation (20,35). For example, the reduced frequency of UA and UU dinucleotides in the HCV genome leads to interferon (IFN) resistance among some HCV genotypes (36). In other research, it has been proposed that the frequency of CG and UA dinucleotides enables RNA viruses to escape from the host immune system (17). In this study, the relative frequencies of the TC and CC dinucleotides in SARS-related viruses were significantly different from those of common cold coronaviruses. It can be supposed that the TC and CC dinucleotides play an important role in coronavirus pathogenicity. Interestingly, there is a human enzyme named apolipoprotein B mRNA-editing enzyme catalytic polypeptide-like 3 (APOBEC3), which has an effective role in innate antiviral immunity, especially against retroviruses and DNA viruses (37,38). The preferred target sites of two main isoforms of this enzyme, APOBEC3A and APOBEC3G, were reported to be TC and CC, respectively (39). Both of the mentioned dinucleotides were distinguishing features between the two viral groups in the current study. Although in most studies the antiviral activity of this enzyme has been identified against retroviruses and DNA viruses, a recent study on the NL63 coronavirus showed that the replication of RNA viruses can also be restricted by APOBEC3 activity (40). It can be hypothesized that the difference in TC and CC dinucleotides between the genomes of the two groups of coronaviruses is most likely the result of an evolutionary process and thus can have a substantial role in viral pathogenicity.
Conclusion
To conclude, this data mining revealed a few highlighted oligonucleotide features that differ between the genomes of the two groups of common cold and SARS-related coronaviruses. These features might contribute to a better understanding of coronavirus pathogenicity and its encounters with innate immunity in the future.
Viral Genome Sequences
To begin, the nucleotide database of NCBI was searched for each virus species, including the SARS-CoV-2, SARS, MERS, HKU1, OC43, NL63, and 229E viruses, to obtain full-length genome sequences of each strain. In total, nearly a hundred full-genome sequences of each virus species were retrieved as initial data. However, in the case of NL63, 229E, and HKU1, the number of deposited full genomes was less than 100. To confirm that the retrieved sequences belonged to the same species, multiple sequence alignments were computed using the Clustal Omega algorithm in the EBI web service. Finally, after checking the aligned sequences and excluding some genomes related to animal species, the final initial dataset for each human virus strain was created. More detailed information on the viral sequences is summarized in Table 2.
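A hypothetical Biopython sketch of this retrieval step is shown below; the e-mail address and accession list are placeholders, and the multiple sequence alignment itself was computed separately with Clustal Omega through the EBI web service, as stated above.

```python
from Bio import Entrez, SeqIO

Entrez.email = "researcher@example.org"   # required by NCBI; placeholder
accessions = ["NC_045512.2"]              # placeholder, e.g., the SARS-CoV-2 reference genome

records = []
for acc in accessions:
    handle = Entrez.efetch(db="nucleotide", id=acc, rettype="fasta", retmode="text")
    records.append(SeqIO.read(handle, "fasta"))
    handle.close()

SeqIO.write(records, "coronavirus_genomes.fasta", "fasta")
```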
Oligonucleotide frequency analysis and attribute extraction
To carry out the preliminary analysis, a HyperTalk program was written in the LabVIEW software, which accepted FASTA text-formatted files. The program scanned the sequences sequentially and built up the overall nucleotide composition, along with the frequency of each oligonucleotide in turn. In this study, the frequency of each dinucleotide to hexanucleotide in each sequence was computed as the observed oligonucleotide count in the LabVIEW software (21). On completion of the scan, the expected number of each oligonucleotide was also calculated using a Markov method (22). To remove the effect of sequence length and estimate the statistical significance of oligonucleotide occurrences, observed-to-expected oligonucleotide ratios were obtained (21), and each oligonucleotide odds ratio was finally treated as an attribute.
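The original scan and odds-ratio calculation were implemented in a HyperTalk program inside LabVIEW and are not reproduced in the manuscript. As a rough Python sketch of the same idea, assuming the maximal-order Markov expectation E(w) = N(w[:-1]) * N(w[1:]) / N(w[1:-1]) described in reference (22) (the example sequence and the function names are illustrative only):

```python
from collections import Counter
from itertools import product

BASES = "ACGT"

def kmer_counts(seq, k):
    """Count overlapping k-mers, skipping windows with non-ACGT characters."""
    counts = Counter()
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if all(b in BASES for b in w):
            counts[w] += 1
    return counts

def odds_ratios(seq, k):
    """Observed/expected ratio for every k-mer (k >= 2), with the maximal-order
    Markov expectation E(w) = N(w[:-1]) * N(w[1:]) / N(w[1:-1]); for k = 2 the
    'core' count reduces to the sequence length."""
    obs = kmer_counts(seq, k)
    sub = kmer_counts(seq, k - 1)
    core = kmer_counts(seq, k - 2) if k > 2 else None
    ratios = {}
    for w in map("".join, product(BASES, repeat=k)):
        denom = core[w[1:-1]] if k > 2 else len(seq)
        expected = sub[w[:-1]] * sub[w[1:]] / denom if denom else 0.0
        ratios[w] = obs[w] / expected if expected else 0.0
    return ratios

# Attributes (odds ratios) for all di- to hexanucleotides of one genome
genome = "ATGCACCACCACGTTGCA" * 50        # placeholder, not a real coronavirus genome
attributes = {}
for k in range(2, 7):
    attributes.update(odds_ratios(genome, k))
print(len(attributes))                    # 16 + 64 + 256 + 1024 + 4096 = 5456
```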
In total, 5,456 attributes (16 dinucleotides, 64 trinucleotides, 256 tetranucleotides, 1,024 pentanucleotides, and 4,096 hexanucleotides) were extracted for each virus sequence by the LabVIEW software. The list of attributes and calculated values is presented in Sup. 3.
Next, a new dataset was generated for each oligonucleotide feature in the two viral groups: the viral sequences related to Severe Acute Respiratory Syndrome (SARS), including SARS-CoV-2, SARS-CoV, and MERS, and the viral sequences related to the common cold, including OC43, HKU1, NL63, and 229E. The attributes of the SARS-related viruses were then compared with those of the common cold coronaviruses (Sup. 1). To this end, each dataset was imported into RapidMiner software (RapidMiner, Germany) and the following steps were performed sequentially. The processes of dataset creation and data mining are outlined in Fig. 3.
Data Filtering
To obtain a final cleaned database (FCdb), duplicated attributes, useless attributes, correlated attributes with a Pearson correlation coefficient greater than 0.9, and numerical attributes with a standard deviation less than or equal to a given threshold (0.1) were excluded from the datasets (23).
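The filtering itself was done inside RapidMiner; a rough pandas equivalent with the same thresholds (the column names and file name below are hypothetical) could look like this:

```python
import pandas as pd

def clean_attributes(df, corr_thresh=0.9, std_thresh=0.1):
    """Return a final cleaned database: drop duplicated, near-constant,
    and highly correlated attribute columns."""
    df = df.loc[:, ~df.T.duplicated()]            # remove duplicated attributes
    df = df.loc[:, df.std() > std_thresh]         # remove low-variance attributes
    corr = df.corr().abs()
    drop = set()
    cols = list(corr.columns)
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > corr_thresh:
                drop.add(cols[j])                 # keep the first of each correlated pair
    return df.drop(columns=sorted(drop))

# fcdb = clean_attributes(pd.read_csv("oligo_attributes.csv"))   # hypothetical file
```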
Attribute weighting
Ten different attribute weighting algorithms, namely Information Gain, Information Gain Ratio, Rule, Deviation, Chi Squared, Gini Index, Uncertainty, Relief, Support Vector Machine (SVM), and PCA (24,25), were applied to all datasets to find the nucleotide attributes most likely to discriminate coronaviruses that cause SARS from those known as common cold coronaviruses. During execution of the attribute weighting programs, each attribute received a value between 0 and 1 indicating its importance; attributes with a weight higher than 0.7 in the largest number of weighting algorithms were then taken as the most important. All attributes and the relevant weighting models are presented in Sup. 1. The sequential pattern with the highest value, which discriminates the genome of SARS-related coronaviruses from common cold coronaviruses | 2021-05-05T00:08:32.327Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "5ef5b341f5543572ec97cbdae35081b784c61cee",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-328801/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "f65fd4a4c78189d3e31fda9dd63151d2a7f32ed1",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
46747745 | pes2o/s2orc | v3-fos-license | Factors Affecting Surgical Decision-making—A Qualitative Study
Background Guidelines and Class 1 evidence are strong factors that help guide surgeons’ decision-making, but dilemmas exist in selecting the best surgical option, usually without the benefit of guidelines or Class 1 evidence. A few studies have discussed the variability of surgical treatment options that are currently available, but no study has examined surgeons’ views on the influential factors that encourage them to choose one surgical treatment over another. This study examines the influential factors and the thought process that encourage surgeons to make these decisions in such circumstances. Methods Semi-structured face-to-face interviews were conducted with 32 senior consultant surgeons, surgical fellows, and senior surgical residents at the University of Toronto teaching hospitals. An e-mail was sent out for volunteers, and interviews were audio-recorded, transcribed verbatim, and subjected to thematic analysis using open and axial coding. Results Broadly speaking there are five groups of factors affecting surgeons’ decision-making: medical condition, information, institutional, patient, and surgeon factors. When information factors such as guidelines and Class 1 evidence are lacking, the other four groups of factors—medical condition, institutional, patient, and surgeon factors (the last-mentioned likely being the most powerful)—play a significant role in guiding surgical decision-making. Conclusions This study is the first qualitative study on surgeons’ perspectives on the influential factors that help them choose one surgical treatment option over another for their patients.
BACKGROUND
Guidelines and Class 1 evidence are strong factors that surgeons use to help make decisions. With little Class 1 evidence to guide most decisions, dilemmas arise and surgeons often turn to other factors to help them make these decisions. 1,2,5,6 With excellent working environments, the availability of equipment and surgical tools, and the continuous appearance of new surgical techniques, procedures, and research, the question of which surgery would best treat a specific operable condition is a major challenge for surgeons. This dilemma often invites other important factors to help determine individualized surgical treatment options for each patient.
METHODS
The study was approved by the Research Ethics Board at the University Health Network. Informed consent was obtained from all participants.
Study Design
A prospective qualitative study was conducted to examine surgeons' views on the influential factors that encourage them to choose one equally fit surgical procedure over another. Participants for this study were consultant surgeons, surgical fellows, and senior surgical residents from various fields within surgery working at academic hospitals of the University of Toronto, Canada. A total of 733 consultant surgeons and clinical fellows in the Department of Surgery at the University of Toronto were invited to participate in this study by e-mail, using E-clips priority news from the Department of Surgery at the University of Toronto; two senior surgical residents were specifically requested, as their perspectives were expected to be valuable and added to the spectrum of senior and junior surgeons. Those interested in participating e-mailed the co-investigator (C.G.), who scheduled appointments with everyone who responded. Semi-structured face-to-face interviews were conducted with all of the participants using an open-ended questionnaire with specialty-specific clinical vignettes (Appendix). The clinical vignettes were developed based on the most common surgical conditions that currently do not have a single standard surgical approach, guidelines, or Class 1 evidence to guide surgeons.
Setting and Participants
Participants were either senior or consultant surgeons, surgical fellows, or senior surgical residents in the Department of Surgery working at academic hospitals of the University of Toronto. All participants invited to participate in this study were above 18 years of age and spoke and understood English well.
Sample Size
Thirty interviews were sought, a sample size likely to be sufficient for data saturation. Data saturation is a concept used in qualitative research methodology to describe the point at which successive interviews will not likely yield any new themes beyond those already achieved. 15,17
Data Collection
A single co-investigator (C.G.) conducted an open-ended, face-to-face interview with each participant using a semi-structured guide (Appendix). Themes were explored as they emerged. All interviews were digitally audio-recorded and transcribed. Demographic data such as age, gender, education, and employment were collected. One out of the three clinical vignettes from three different specialties (general surgery, orthopedic surgery, and neurosurgery) was provided to each participant based on their preference and specialty. Responses from the clinical vignettes were collected and examined to determine the presence or absence of diversity in decision-making.
Data Analysis
Interview responses were collected in tabular form and examined through modified thematic analysis using open and axial coding. Open coding is the deconstruction of information into common groups based on shared ideas, and axial coding involves organizing information into overarching themes. 15,17
Research Ethics
Participation in this study was entirely voluntary, and informed consent was obtained from all participants. All audiotapes and anonymized transcripts were encrypted and stored securely. This study was approved by the Research Ethics Board at the University Health Network (Toronto).
RESULTS
A total of 733 emails were delivered; e-mails were sent to 516 consultants (including scientists and adjunct faculty), of whom 186 opened the e-mail, and to 217 fellows, of whom 131 opened the e-mail. Two senior surgical residents were specifically requested to participate. In total, 32 interviews were conducted. Table 1 shows the demographic data for the 32 study participants. Table 2 shows surgeons' responses to survey questions, including the number of participants who felt that their patients' age influences their surgical decision-making, the number who felt that their experience working as a surgeon influences their surgical decision-making, the number who felt that their personal views in surgical decision-making outweigh the current prevailing methods of treatment used by the majority of other surgeons, the number who felt that their geographical location (i.e. country, city) influences their surgical decision-making, and participants' views on whether non-financial incentives have an effect on their surgical decision-making.
Thematic Analysis
Seven over-arching themes were drawn from the interview data and are described below.
Patient factors, especially age, are important factors in decision-making
Twenty-eight out of 32 surgeons interviewed felt that patients' age was an important factor to consider when deciding on a specific treatment option. With all of the advances in technology and anesthesia, many surgical procedures are routinely performed on elderly patients. However, surgeons felt that elderly patients were often given less aggressive or more cautious surgical approaches focused on quality of life as opposed to cure. Alternatively, younger patients were often recommended more aggressive surgical approaches focused on cure or longevity.
In pediatric surgery, certain surgical procedures performed at a later stage in the condition, when the patient was older and had matured, produced better and longer-lasting surgical outcomes. Gender was an important factor to consider when the surgical procedure would affect fertility, especially in females, as well as for slight variations in surgical approach. For example, women undergoing craniotomy often requested minimal hair shaving, whereas men did not. In orthopedic surgery, women were often found seeking treatment for their knee arthritis at a later stage in the disease, whereas men sought medical attention at an earlier stage and received better outcomes. This may also be linked to cultural differences, where women are often the primary caretakers at home and often give priority to other family members' health needs over their own. However, surgeons felt that cultural differences did not affect their decision-making but felt that it was important to recognize and respect them. Jehovah's Witness patients' non-acceptance of blood transfusion is well established and respected. Aside from that, surgeons felt patients' religious beliefs did not affect their decision-making, but recognized the importance of respecting them. Surgeons also felt that differences in patient personalities did not impact their decision-making but rather their approach to communicating with the patient. Patient preferences were important factors (such as an aversion to surgery), but depended heavily on the clinical situation.
Surgeons' personal factors influence their decision-making
Male and female surgeons did not think that their gender influenced their decision-making. However, female surgeons recognized that they were more sensitive around surgical procedures that directly impacted their patients' fertility and felt that they were able to understand and relate to their patients better in relation to fertility surgery compared to non-fertility surgery. Cultural differences amongst surgeons did not have any impact on their surgical decision-making. Surgeons who had a strong religious belief felt that it was wrong to allow patients to die in an emergency situation even if the outcomes would not likely provide a favorable quality of life. However, some of these surgeons acknowledged that, through experience and wisdom passed down from their mentors, allowing patients to die in specific circumstances would be the best thing to do rather than saving their life and leaving them in a severely compromised state. They also felt that these situations required one to really know who their patients are and what they would have wanted.
Thirty out of 32 surgeons interviewed agreed that their years of experience as a surgeon influenced their surgical decision-making. Less experienced surgeons felt uncomfortable performing unfamiliar and more complex cases; however, they were very open to learning how to approach and perform specific surgical cases and to engage in learning if it could benefit their surgical practice. Junior surgeons also felt more pressure to be up to date with all of the medical literature, advances in technology, and new techniques. Although the majority of junior surgeons preferred the tried and true surgical procedures, a few surgeons felt comfortable and excited to be early adopters of innovative procedures and would seek help and guidance from senior surgeons when necessary. Senior surgeons felt more familiar and comfortable with performing complex cases as well as seeking help or referring patients to specialized surgeons or colleagues when necessary.
As surgeons got more experienced, they were more in favor of becoming sub-specialized in their field of surgery. Some surgeons acknowledged that their own personal views about a disease and its treatment might trump the current prevailing methods of treatment used by the majority of surgeons.
Training location influences decision-making
Surgeons thought that where they trained has a great impact on the type of surgeons they become, learning from both positive and negative role models. They also thought that working at an academic hospital impacts their decision-making because the majority of their decisions are based on evidence-based medicine and the availability of guidelines. These surgeons felt that working at an academic hospital provides them with easy access to other surgeons who are sub-specialized experts.
Having the support of colleagues, mentors, and even surgical trainees is an important factor that helps surgeons feel more comfortable and prepared to take on challenging surgical cases. With surgical trainees such as residents and fellows, the learning experience for both consultants and trainees is never-ending. In these environments, it is crucial to have weekly morbidity and mortality rounds, where surgeons learn from each other's complications. All surgeons recognized this as a strong factor that led to better decision-making.
The diagnosis heavily influences decision-making
All the surgeons thought that the diagnosis, degree of severity or stage of the disease, and medical comorbidities heavily influence their decision-making. Surgeons often refer their patients to medical consultants and anesthetists to assess fitness for surgery and to optimize their patients' medical status. Medical comorbidities are often a factor in all disciplines within surgery that may alter the risk-benefit ratio for a specific surgical patient.
Geography, socioeconomics, and resource availability influence decision-making
Surgeons thought that their geographical location influences their decision-making. The country/province/city/hospital in which the surgeon practices influences access to resources, such as specific surgical instruments. Surgeons also thought that working at an academic hospital provides them with greater access to such tools and equipment that may not be available in other hospitals. Occasionally these surgeons would receive surgical referrals from non-academic hospitals because of the lack of surgical resources available there, and were happy to take on such cases.
Surgeons' comfort or championing of a procedure affect decision-making
Surgeons felt that if a surgical procedure they were very comfortable with produced outcomes as good as those of other procedures, they would usually select that procedure. Most surgeons were more comfortable with "tried and true" methods, and few surgeons were comfortable being early adopters of novel techniques unless they were the innovator. All surgeons felt very comfortable offering a procedure they "champion" to their patients, and appreciated the need to be aware of not disadvantaging their patients by doing this. Surgeons also thought that they would be more likely to receive referrals from other doctors whose patients were in need of their specialized care or surgical procedure. If there was a known expert who had significantly better outcomes because he/she used a different technical surgical approach, surgeons said they would not have a problem referring their patients to that expert. Surgeons' egos do play a role, and many surgeons admitted that their view of a specific condition and how it should be treated could overrule a body of evidence that stated otherwise, except for good-quality Class 3 evidence.
Personal gains to the surgeon are not strong factors in decision-making
All surgeons were aware of the potential conflicts of interest in everyday practice, for example, when a surgeon is involved in a clinical trial and recommending a certain procedure would help the trial reach fruition. Twenty-nine out of 32 participants felt that receiving a higher reimbursement to perform one surgical procedure over another would not cloud their judgment. However, surgeons were aware that this may be a factor that possibly affects other surgeons, and some were aware of specific examples.
Responses to the Clinical Vignettes
One clinical vignette in the field of neurosurgery, orthopedic surgery, or general surgery was posed to assess variability in approaches (Appendix). Of the nine neurosurgeons who answered the neurosurgical vignette, six different approaches to the case were recommended. In orthopedic surgery, six different surgeons recommended six different approaches to the same vignette. Although only three general surgeons answered the general surgery vignette, eight other surgeons with knowledge and background in general surgery also answered; these 11 surgeons gave seven unique approaches. The clinical vignettes were designed for the sole purpose of capturing diversity among surgeons' responses and to reinforce the point that decision-making among surgeons is quite variable.
DISCUSSION
One would think in our modern era of high-tech surgery, where almost anything is possible, that the surgical solution to most problems, common or rare, would be clear, but we are far from this situation. One might surmise that patients would be perplexed and possibly disturbed to know that so many different approaches to their problem exist, rather than one obvious approach agreed on by most surgeons.
The responses to the clinical vignettes are a simple demonstration of how different surgeons make different clinical decisions, and the variability in responses to each vignette underscores the need within the surgical community for guidance on how to approach decision-making in a more unified and systematic manner.
Medical condition factors consist of the diagnosis, prognosis, signs and symptoms, acuity, and whether the medical condition is benign or malignant. Taking all of these components into consideration is important, as it helps surgeons determine the urgency of treatment and the type of treatment plan or surgical procedure required for their patient.
Information factors include the availability of guidelines or Class 1 evidence on a specific treatment or surgical procedure. 24,25 Surgeons working at academic hospitals of the University of Toronto found that they occasionally receive referrals from non-academic hospitals due to the unavailability of specific surgical equipment and/or expertise. Although surgeons felt that their institution provided many resources regarding surgical equipment, they expressed a concern that there was still room for improvement.
Patient factors, including age, gender, culture, religion, and personal preferences, also influence surgical decision-making. 23-26 Considering them helps the surgeon gain a better understanding of their patient and helps establish a better rapport with them. Age in particular, whether working with the pediatric age group or with adults and the elderly, is a factor that surgeons still consider. Pediatric surgeons found that certain procedures provided better outcomes for their patients if carried out later in their development.
These factors are all very important for a surgeon to keep in mind and to discuss openly and honestly with their patients. 29-31 This in turn empowers the patient to make a well-informed decision regarding their possible treatment options, the possibility of complications, and the overall outcomes of the treatment plan, and allows patients and their families to better prepare themselves mentally, emotionally, and financially for the upcoming lifestyle changes and challenges they may face in the near future. These factors also help surgeons anticipate how their patients will have to prepare for surgery beforehand and afterwards, allowing surgeons to better assist their patients through this difficult and vulnerable experience in their life.
Surgeon factors play a very significant role in surgeons' decision-making. 5,6 The more familiar with and experienced in a specific procedure a surgeon is (i.e. his/her comfort level), the stronger this factor becomes in the decision-making process. Although surgeons in general prefer the tried and true procedures over newer innovative surgical procedures, they recognized the importance of the newer surgical procedures and appreciated that surgery would never advance without them. However, surgeons are cautious about providing these newer surgical procedures to patients and are aware of patients' own biases about newer surgical procedures being better.
CONCLUSION
This study reveals five factors in surgical decision-making: medical condition, information, institutional, patient, and surgeon factors. It also highlights the importance of surgeons re-evaluating and prioritizing four of those factors when there is a lack of information factors available to guide them during the decision-making process.
STUDY LIMITATIONS
This was a qualitative study using a subset of surgeons in a large department of surgery, in an academic health science center within a socialized health-care system. The results may not be generalizable to other health-care systems/hospitals.
Table 1. Surgeon Demographics.
* Surgeons from outside of Canada enrolled in a surgical or research fellowship program at the University of Toronto.† Six surgical fellows and two senior surgical residents. | 2018-04-03T00:22:41.653Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "5d43d9c665d765bf58032a3870bcef4173c13b23",
"oa_license": "CCBY",
"oa_url": "https://www.rmmj.org.il/userimages/775/1/PublishFiles/784Article.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5d43d9c665d765bf58032a3870bcef4173c13b23",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
221341009 | pes2o/s2orc | v3-fos-license | OVI Traces Photoionized Streams With Collisionally Ionized Boundaries in Cosmological Simulations of $z \sim 1$ Massive Galaxies
We analyse the distribution and origin of OVI in the Circumgalactic Medium (CGM) of dark-matter haloes of $\sim 10^{12}$ M$_\odot$ at $z\sim1$ in the VELA cosmological zoom-in simulations. We find that the OVI in the inflowing cold streams is primarily photoionized, while in the bulk volume it is primarily collisionally ionized. The photoionized component dominates the observed column density at large impact parameters ($\gtrsim 0.3 R_{\rm vir}$), while the collisionally ionized component dominates closer in. We find that most of the collisional OVI, by mass, resides in the relatively thin boundaries of the photoionized streams. We discuss how the results are in agreement with analytic predictions of stream and boundary properties, and their compatibility with observations. This allows us to predict the profiles of OVI and other ions in future CGM observations and provides a toy model for interpreting them.
INTRODUCTION
The analysis of the Circumgalactic Medium (CGM), the gas that resides outside galactic discs but still within or near the virial radius, has the potential of giving us valuable information about the past history of galaxy formation and also some hints of its future evolution (e.g. Tumlinson et al. 2017). The importance of studying the CGM in order to characterize the history of galaxy formation is clear: it has been shown conclusively that galaxies alone contain significantly fewer baryons than would be expected from the standard ΛCDM cosmology (the 'missing baryon problem'). While a significant fraction of these baryons may have been ejected to the Intergalactic Medium (IGM) in the early stages of galaxy formation (Aguirre et al. 2001), or remained as warm-hot low-metallicity intergalactic gas throughout cosmic time (Shull et al. 2012), studies suggest that 10-100 percent of the cosmic baryon budget of the universe exists in the metal-rich CGM of galactic halos (Bordoloi et al. 2014). The CGM is thus certainly significant and possibly even dominant in baryonic matter. Further, by calculations of the total amount of metals produced in all stars, only about 20-25 percent remain in the stars, ISM gas, and dust (Peeples et al. 2014). Recent studies have been consistent with the idea that most of the metals produced within stars and released by supernova feedback or stellar winds reside in the metal-rich CGM (Tumlinson et al. 2011; Werk et al. 2013). The mechanisms collectively known as 'feedback', by which metals, mass, and energy are transported to the CGM, are not yet completely understood, and likely include contributions from several processes such as stellar winds, supernova feedback, and interaction with winds from the central AGN. The interactions between these feedback mechanisms, their relative contributions, and their dependence on halo mass and redshift might be better constrained by studies of the kinematics and temperatures of the ions within the CGM.
The CGM properties are also relevant when studying the future evolution of galaxies. Current models of the z ≳ 1 CGM state that cold, relatively lower-metallicity gas inflows from the IGM feed star formation of central galaxies through narrow streams (Keres et al. 2005; Dekel & Birnboim 2006; Dekel et al. 2009; Ocvirk et al. 2008). An overview of the topic is found in Fox & Davé (2017), and references therein. Outside those streams, metal-enriched warm-hot gas ($10^{4.5}\,{\rm K} < T < 10^{6.5}\,{\rm K}$), mainly produced by stellar feedback from the central galaxy and by the virial shock under specific conditions (White & Rees 1978; Fielding et al. 2017; Stern et al. 2020a), fills the rest of the CGM volume. Although widely accepted by the community, these models still suffer from large uncertainties due to the difficulty of observations and of comparing them with numerical simulations or analytic models. A more detailed review of this theoretical picture will also be presented in Section 5.1.
A useful parameter for the interpretation of existing and future data for those larger-scale phenomena is the ionization state of gas within the CGM, which as a rule is highly ionized . While the ionization level of the gas within the CGM can help to constrain the physical interpretation, analyzing the full volume of gas from a single or even several ion species is remarkably difficult (Tumlinson et al. 2017). This is because atoms can be ionized, in general, in two different ways. They can be photoionized (PI), meaning incoming photons from either the galaxy itself or the ultraviolet background light interact with an atom and strip it of electrons, or they can be collisionally ionized (CI), meaning thermalized interactions with nearby atoms will 'knock off' electrons, leaving the atoms in some distribution of ionization states (Osterbrock & Ferland 2006). Broadly speaking, PI gas fractions are a function of density (as denser gas recombines more quickly, biasing gas towards lower ionization states) and CI gas fractions are a function of temperature (as hotter gas will have more kinetic energy per particle, biasing gas towards higher ionization states). Most studies tend to assume that only one of these mechanisms is in play at a time for a given patch of gas, with the rationalization that since denser gas tends also to be cooler, either mechanism will result in cold gas hosting low-ions (e.g. NII, MgII) and hot gas hosting high ions (e.g. NeVIII, MgX). Examples of assuming OVI is in PI-equilibrium include Stern et al. (2016) and assuming OVI is CI include Faerman et al. (2017). However, since OVI was detected in all star-forming galaxies in COS-Halos, such an assumption is very tenuous. In Roca-Fàbrega, S. et al. (2019) (hereafter RF19), it was found that in cosmological simulations of halos which reached roughly Milky-Way masses at z ∼ 1, whether OVI is PI or CI depends strongly on redshift, mass, and position within the CGM, and ranged all the way from ∼ 100 percent PI to ∼ 0 percent PI over the course of their evolution. The main conclusions of RF19 were that OVI is more photoionized in the outer halo than the inner, and that galaxies transform from fully collisionally ionized to mostly photoionized at z=2, after which they diverge by mass, with larger galaxies becoming more collisionally ionized than smaller ones.
Recent detections of very high ionization states such as OVII and OVIII in nearby galaxies indicate that there is indeed a significant warm-hot component of the CGM, and it is a source of major controversy whether OVI should be considered to mostly be cospatial with that gas, whether it should be considered to be mostly cool and more closely connected to H I and low metal ion states, or whether both are relevant simultaneously. We will especially consider Stern et al. (2016) and Stern et al. (2018) (hereafter S16 and S18, respectively). In S16 a phenomenological model is proposed which explains the relations between low, intermediate, and high ionization states as a consequence of hierarchical PI densities, where smaller, denser spherical clouds containing low ions are embedded within larger, less dense clouds containing OVI. This model matched the observed absorption much better than an assumption of a single or small number of densities, and nearly as well as models with a separate gas phase for each ion, with many fewer parameters (see S16, Figure 6). S18, focusing especially on OVI, assumed this hierarchical density structure is global, with OVI residing in the outer halo and the low ions residing in the inner halo. They claimed that the majority of the OVI gas detected is located outside $\sim 0.6R_{\rm vir}$. This radial distance, which is defined as the approximate radius of the median OVI particle, is called $R_{\rm OVI}$. In S18, both a 'high-pressure' (CI) and a 'low-pressure' (PI) scenario are presented which are consistent with the data. However, the PI scenario more naturally explains the observed $N_{\rm HI}/N_{\rm OVI}$ values of 1-3 seen at large impact parameters, and also alleviates the large energy input in the outer halo required by the CI scenario (see also Mathews & Prochaska 2017). It has also been suggested that the CGM might have both phases present at the same time in different regions, most recently in Wu et al. (2020).
Due to the low emission measure of the CGM, it is necessary to do most observations through absorption spectra. With the relatively low number of luminous background sources from which to observe CGM absorption lines, for many years only a few data points were available from each survey, from either quasar spectra (e.g. Rupke et al. 2005) or using background galaxies as sources (Steidel et al. 2010). In recent years with the release of the COS-Halos data from the Hubble Space Telescope (Werk et al. 2013; Tumlinson et al. 2011; Prochaska et al. 2013), there has been an explosion of absorption line studies. This analysis can be used to detect relatively small column densities for different ions, including metal lines, giving insight into the temperature, density, and metallicity of the gas. However, there are important limitations to this type of analysis. First and foremost, absorption-line studies require a relatively rare alignment between the background source and the foreground galaxy, and there are only a few examples where several lines pass through the same CGM in different places (Lehner et al. 2015) or strong gravitational lensing allows the same source to be seen in multiple locations throughout the CGM (Okoshi et al. 2019). This means many assumptions must be made in order to combine the data even within an individual survey, such as assuming relatively similar conditions of the CGM in all L* galaxies, and that the CGM is spherically symmetric (or at least isotropic). The use and results of those assumptions are examined in (Mathews & Prochaska 2017; S16). Finally, the lack of visual imaging makes it very difficult to constrain the position of the detected gas along the line of sight, and it may well be at any distance greater than the impact parameter, or gas from different ions may even be in completely separate clouds.
Numerical simulations are thus playing an important role as tools for testing recent theoretical approaches that try to characterize the CGM properties and evolution (e.g. Shen et al. 2013; Faerman et al. 2017; Stern et al. 2018; Nelson et al. 2018; Roca-Fàbrega et al. 2019; Stern et al. 2019, 2020a). Hydrodynamic simulations are commonly used to supplement analytical models regarding the CGM, as they can break the degeneracy between CI and PI gas. While most cosmological simulations have difficulty resolving the CGM due to the Lagrangian nature of the adaptive resolution, where the spatial resolution becomes very poor in the low density CGM/IGM (e.g. Nelson et al. 2016), several recent groups have implemented novel methods to significantly enhance the resolution in the CGM, obtaining results similar to COS-Halos and other observations (Hummels et al. 2019; Peeples et al. 2019; Suresh et al. 2019; van de Voort et al. 2019; Mandelker et al. 2019b; Corlies et al. 2020). In a broad sense, the CGM remains a useful testing ground for these simulations, as the ionization state of the gas will depend sensitively on the feedback mechanisms incorporated into the code. Several other hydrodynamic simulations have begun analyzing the OVI population in the CGM as well, including Suresh et al. (2017) and Corlies & Schiminovich (2016), generally finding that this ion exists in a multiphase medium, and replicating the bimodality between high column densities in star-forming galaxies and low column densities in quenched galaxies, first seen in Tumlinson et al. (2011).
In this work we analyse galaxies from the VELA simulation suite (Ceverino et al. 2014;Zolotov et al. 2015). These simulations are compared with observations in a number of papers (e.g. Ceverino et al. 2015;Tacchella et al. 2016a,b;Tomassetti et al. 2016;Mandelker et al. 2017;Huertas-Company et al. 2018;Dekel et al. 2020a,b), showing that many features of galaxy evolution traced by these simulations agree with observations, although the VELA simulated galaxies form stars somewhat earlier than observed galaxies. Using these simulations, we develop a model in which CGM gas is characterized by its relation to the aforementioned 'cold accretion streams', giving OVI a unique role where PI OVI gas acts as an indicator of these inflows. Much of the CI OVI gas turns out to lie in an interface layer between these inflowing cold streams and the bulk of the CGM. This paper is organized as follows. In Section 2 we describe the simulation suite and our analysis methods. In Section 3 we explain how we robustly differentiate CI and PI gas for a variety of ions. Focusing on OVI, we analyse in Section 4 what this distinction shows us about the CGM and its structure. This is analyzed in 3D space in Section 4.1, in projections and sightlines in Section 4.2, and its dependence on redshift and galaxy mass in Section 4.3. In Section 4.4 we compare the implications of this model to the findings in S16 and S18. In Section 5 we present a physical model for the origin and properties of the different OVI phases in the CGM, relating these to cold streams interacting with the hot CGM. This is the first discussion of shear layer width around radiatively-cooling cold streams in galaxy halos. In Section 5.1 we summarize our current theoretical understanding of the evolution of cold streams in the CGM of massive highz galaxies, as the streams interact with the ambient hot gaseous halo. In Section 5.2 we examine the properties of the different CGM phases identified in our simulations, in light of this theoretical framework. Finally, in Section 5.3 we use these insights to model the distribution of OVI and other ions in the CGM of massive z ∼ 1 galaxies. Our summary and conclusions are presented in Section 6.
VELA
The set of VELA simulations we used is a subsample of 6 galaxies from the full VELA suite (see Table 1 for details about the galaxies chosen). The entire VELA suite contains 35 haloes with virial masses ($M_{\rm v}$) between $2\times10^{11}$ M$_\odot$ and $2\times10^{12}$ M$_\odot$ at z = 1. The VELA suite was created using the ART code (Kravtsov et al. 1997; Kravtsov 2003; Ceverino & Klypin 2009), which uses an adaptive mesh with best resolution between 17 and 35 physical pc at all times. In the CGM, the resolution is significantly worse than this maximum, as expected. However, most of the mass within the virial radius is actually found to be in cells of resolution better than 2 kpc, as shown in Figure 1. This is within an order of magnitude of several high-resolution CGM simulations of recent years (Peeples et al. 2019; Hummels et al. 2019; Suresh et al. 2019; van de Voort et al. 2019; Bennett & Sijacki 2020), although unlike VELA, those simulations required these high resolutions throughout the CGM. This gives the VELA simulations enough resolution for discussions of the CGM to be physically meaningful, at least with respect to higher ions such as OVI, which should be less dependent on resolution effects than low ions originating from small clouds. As we will see, the results agree qualitatively with current models. Alongside gravity and hydrodynamics, subgrid models incorporate metal and molecular cooling, star formation, and supernova feedback (Ceverino & Klypin 2009; Ceverino et al. 2010, 2014). Star formation occurs only in cold, dense gas ($n_{\rm H} > 1$ cm$^{-3}$ and $T < 10^4$ K). In addition to thermal-energy supernova feedback, the simulations incorporate radiative feedback from stars, adding a non-thermal radiation pressure to the total gas pressure in regions where ionizing photons from massive stars are produced. Recently, the VELA simulations have been re-run with increased supernova feedback according to Gentry et al. (2017), and this new feedback mechanism has led to improved stellar mass-halo mass relations, as in Ceverino et al. (2020, in prep.); we will compare the results of this paper to this newer version in future work. In the VELA simulations, the dark matter particles have masses of $8.3\times10^4$ M$_\odot$, while the average star particle has a mass of $10^3$ M$_\odot$. Further details about the VELA suite can be found in Ceverino et al. (2014) and Zolotov et al. (2015). We chose to continue to use the same subsample of the VELA galaxies as in RF19. In that work, they were chosen according to their virial masses and the final redshift the simulation reached; that is, we use all halos that have been simulated down to z = 1 and have a final mass greater than $10^{11.5}$ M$_\odot$. These selection criteria derived from our desire to analyse the physical state of gas in galaxies near the 'critical mass' at which the volume-filling CGM phases show a transition from free-fall to pressure support (Goerdt et al. 2015; Zolotov et al. 2015; Fielding et al. 2017; Stern et al. 2020b).
Analytical approach and analysis tools
In our analysis of the CGM ionization state, we will study both the photoionization and the collisional ionization mechanisms. We will simplify the problem by assuming that photoionization depends only on the metagalactic background light from Haardt & Madau (2012), and not on other location-dependent sources such as the central galaxy. This assumption is motivated by the evidence that local sources have a major effect mostly on the ionization state of the gas in the inner CGM ($0.1-0.3\,R_{\rm vir}$), while gas outside this region receives a negligible fraction of the ionizing radiation from the galaxy (Sternberg et al. 2002; Sanderbeck et al. 2018).
In the VELA simulations, photoionization and collisional ionization are not directly simulated. Two kinds of metallicity are explicitly recorded: metallicity from SNIa (iron-peak elements) and from SNII (alpha elements). In order to analyse the ionization fraction of different ions we follow a similar approach to the one in RF19. First, we obtain the total mass and density of the different species (e.g. $n_{\rm O}$, $n_{\rm C}$) by multiplying the total SNIa or SNII metal mass by the respective abundances. It is important to mention that, although in RF19 we made the assumption that the Type II metals are entirely oxygen and that Type Ia metals had no oxygen component, here we have relaxed this assumption by using a distribution of metals according to Iwamoto et al. (1999). However, as nearly 90 percent of all Type II supernova ejecta is oxygen by mass, the effect of this change was minimal. The second step was to use the cloudy software (Ferland et al. 1998, 2013) to assign the corresponding ionization fraction to each ion species, based on the gas temperature, density, and the redshift. Finally, to access the total population of any ion species, we multiply this fraction (e.g. $f_{\rm OVI}$) by the total number density of the individual nuclei of that species (e.g. $n_{\rm O}$), that is $n_{\rm OVI} = f_{\rm OVI} \cdot n_{\rm O}$. This procedure was implemented in the simulation analysis package trident (Hummels et al. 2016), which is itself based on the more general yt (Turk et al. 2011) simulation analysis suite. Adding these ion number densities in post-processing requires an assumption of local ionization equilibrium within each cell at each timestep. Note that this does not imply that we assume the gas to be in thermal or dynamical equilibrium; the gas can still be experiencing net cooling, or net heating due to feedback processes from the central galaxy.
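As an illustrative sketch of this post-processing step (the file path is a placeholder, and the field names follow trident's ionization-stage convention, in which O_p5 denotes five-times-ionized oxygen, i.e. OVI; exact names may differ between versions):

```python
import yt
import trident

# Load one snapshot (path is illustrative, not an actual VELA output name)
ds = yt.load("VELA07/10MpcBox_csf512_a0.500.d")

# Add O VI fields from trident's CLOUDY-based ion-balance tables, assuming
# ionization equilibrium under a uniform UV background.
trident.add_ion_fields(ds, ions=["O VI"])

ad = ds.all_data()
f_ovi = ad["gas", "O_p5_ion_fraction"]        # ionization fraction f_OVI
n_ovi = ad["gas", "O_p5_number_density"]      # n_OVI = f_OVI * n_O
```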
In order to emulate the absorption-line studies for direct comparison to observations, we create a large number of sightlines ($\sim 400$) through each CGM. The sightlines are defined via a startpoint and a midpoint. To choose the startpoint, we define a sphere at some maximum radius, outside of the simulation's 'zoom-in region' (which extends to $\sim 2R_{\rm vir}$ from the center). This is for geometrical effect, and to make sure that no significant difference in path length appears between sightlines. It was confirmed, by comparing results with all low-resolution (>15 kpc) cells removed and with them included, that in no simulation did the gas outside the fiducial region have a significant impact on any results. We define this maximum radius R to be at $6R_{\rm vir}$. We randomly choose one of a finite set of polar angles θ and a finite set of azimuthal angles φ according to a probability distribution scheme which distributes the startpoint uniformly across the surface of this sphere. The vector from the galaxy center to the startpoint is defined as normal to an 'impact parameter' plane. A midpoint is then selected from the plane at one of a discrete number of impact parameters, according to a probability distribution which gives a uniform point in the circle $r_\perp \le 2R_{\rm vir}$. A slight bias is introduced to give $r_\perp = 0$ a non-zero chance of selection. However, this has a negligible effect on any results, and only affects our column densities for the few lines that go directly through the galaxy. This line is extended past the midpoint by a factor of 2. A visualization of a sightline generated by this algorithm is shown in Figure 2.
Figure 2. A sample sightline is generated from a random point on the outer sphere and directed to a midpoint at a specified impact parameter, denoted as white dots, and projected to twice that length. This is plotted over a projection of total gas density within the simulation, on the same snapshot and at the same angle as Figure 8.
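A minimal NumPy sketch of this sightline construction, not the authors' actual code (continuous angle sampling is used here for brevity, whereas the text describes a finite grid of angles):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sightline(r_vir, center=np.zeros(3)):
    """Build one sightline: start on a sphere of radius 6 R_vir, aim at a point
    drawn uniformly from the disc r_perp <= 2 R_vir perpendicular to the
    center-to-start direction, and extend past that midpoint by a factor of 2."""
    # Startpoint: uniform on the sphere of radius 6 R_vir
    phi = rng.uniform(0.0, 2.0 * np.pi)
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n_hat = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    start = center + 6.0 * r_vir * n_hat

    # Midpoint: uniform in the disc of radius 2 R_vir normal to n_hat,
    # i.e. P(r_perp) proportional to r_perp
    r_perp = 2.0 * r_vir * np.sqrt(rng.uniform())
    psi = rng.uniform(0.0, 2.0 * np.pi)
    e1 = np.cross(n_hat, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-8:            # n_hat (anti-)parallel to z
        e1 = np.cross(n_hat, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n_hat, e1)
    mid = center + r_perp * (np.cos(psi) * e1 + np.sin(psi) * e2)

    # Extend the line past the midpoint by a factor of 2
    end = start + 2.0 * (mid - start)
    return start, mid, end

start, mid, end = random_sightline(r_vir=100.0)   # r_vir in kpc, for example
```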
This strategy is useful for several purposes. First, by choosing a finite set of sightlines, we can use the same statistical analysis methodology as used in observational studies. In particular, in Section 4.4, we can emulate the inverse-Abel transformation used in S18. Second, we save a significant amount of information within each sightline, allowing us to track correlations between ions within sightlines, and the state of gas within the sightlines (see Section 3), instead of losing it in continual averaging. We note especially that most studies of this kind average over impact parameters measured from some small number of 'preferred' axes, (usually the x, y, and z principal axes), instead of directly from random observation angles and impact parameters, as done here.
COLLISIONAL AND PHOTO IONIZATION
In this section we present a physically motivated definition of 'Collisionally Ionized' and 'Photoionized' gas as distinct states which coexist throughout the CGM. We do this both specifically for OVI, as well as for all other ion species. We will refer to these states hereafter as CI and PI, respectively. We will also refer often to the following temperature states, in accordance with RF19, S16, and Faerman et al. (2017).
• Cold gas: $T < 10^{3.8}$ K
• Cool gas: $10^{3.8}$ K $< T < 10^{4.5}$ K
• Warm-hot gas: $10^{4.5}$ K $< T < 10^{6.5}$ K
• Hot gas: $T > 10^{6.5}$ K
In S18, the two states (CI and PI) were defined by the two peaks in the OVI fraction in temperature-density phase space, and a line was drawn to separate the two. This defined the two states as mutually exclusive, and let them be meaningfully analysed separately. However, it was not clear how to classify gas far from these peaks, where the OVI fraction is non-negligible but clearly depends sensitively on both temperature and density. We present a different definition here, which agrees qualitatively with that definition and with the procedure in RF19, but has some differences. We can define these two states (CI and PI) graphically, using the data from cloudy at redshift 1, assuming a uniform Haardt & Madau (2012) ionizing background. At a given temperature, the distribution of ions for a single atomic species is a function of density. At sufficiently high density, for each ion the ionization fraction either decreases to 0, converges to a stable non-zero fraction, or (for the neutral atom at low temperatures) increases to 1. At sufficiently low density, ionization fractions drop to 0 for all ions except 'fully ionized' states. See Figure 3 for examples of both behaviors.
Figure 3. In this figure, O IV and O V are fully CI, OVI and OVII have transition points at $\log n = -2.5$ and $-1.8$, respectively, and OVIII is fully PI. This can be seen by the end-behavior at high density. The stars indicate the same points as in Figure 4.
There are three 'characteristic' shapes to these graphs:
• 'fully CI': flat after some high density, falls directly to 0 without any significant increase at low density (see red line in Figure 3). Clearly photoionization affects this state at low density, but the critical element is that photoionization only destroys this state, and does not create it. So we will claim that this ion's creation is density-independent, and therefore depends only on temperature; in other words, it is CI.
• 'fully PI': Does not stabilize at high density, but rather decays to 0 after reaching a maximum at some intermediate density (see grey line in Figure 3). Since the ionization fraction is always a strong function of density, this gas is PI.
• 'transition': Stabilizes at high density, but also contains a maximum which is higher than that stable fraction (see green and pink lines in Figure 3). We will define a transition point to be the density at which the ion fraction is exactly twice the stable CI fraction, and if the maximum is not this high, the ion is considered 'fully CI' because while there is a non negligible PI fraction, it is never dominant (see purple line in Figure 3).
So, for each temperature, each ion can be characterized as PI, CI, or it may have a transition point. Iterating over all temperatures from $T = 10^{2.5}$ K to $T = 10^{7.5}$ K in steps of 0.1 dex, we found that each species starts out as PI at low temperatures, has transition points for several consecutive increasing T increments, each time decreasing the transition density n, and then becomes fully CI at sufficiently high temperatures. In Figure 4, we plot the transitions in T-n space. Considering the above discussion, we extend the lines on the right to $\log n = +\infty$ from the lowest-temperature transition, and on the left to $\log n = -\infty$ from the highest-temperature transition. The two transition points from Figure 3 are marked again with stars in Figure 4. Note that in general ion species do not have the same number of transition points, and in fact OII (orange line in Figure 4) has none; it is 'fully PI' below $T = 10^{4.2}$ K and 'fully CI' above. We speculate that with a higher resolution in T space, a narrow temperature range around $10^{4.2}$ K would be found where transition points exist for OII. It is also true that there are regions of n-T space in which an ion will be classified as PI or CI by this definition even though that ion has a fraction of approximately zero there. This naturally has no effect on the distribution of the ion into the two states, as insignificant regions of the graph make negligible contributions. We see in this graph that low ions have their transition from CI to PI in or near cool gas, midrange ions have transitions in the middle range of warm-hot gas, and high ions are CI only in hot gas and PI in warm-hot gas. We note that this implies that whether an ion should be considered 'generally' CI or PI throughout the CGM is a more complicated story than usually assumed in the literature. For example, it is accurate to say that low ions (MgII, SiII, OII, etc.) only exist in cool or cold gas, but for them the CI-PI cutoff is also located within cool gas, so it would be inaccurate to say that those low-ion states are necessarily photoionized (see OII, orange, in Figure 4). A similar statement is true of OVII and OVIII: they certainly can only exist within warm-hot/hot gas, but this does not imply they are entirely collisionally ionized. We can visualize the results of these definitions using a binary field in yt.
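A minimal sketch of how such a transition point could be extracted from an ion-fraction-versus-density curve at fixed temperature (the toy curve below is a placeholder, not actual cloudy output):

```python
import numpy as np

def transition_density(log_n, ion_frac):
    """Given ion fraction vs. log density at fixed temperature (log_n ascending),
    return the log density where the fraction first exceeds twice the stable
    high-density (CI) value, or None if the ion is 'fully CI' or 'fully PI'."""
    f_ci = ion_frac[-1]                      # high-density (collisional) plateau
    if f_ci <= 0.0:
        return None                          # never stabilizes: 'fully PI'
    if ion_frac.max() < 2.0 * f_ci:
        return None                          # PI never dominant: 'fully CI'
    # Highest density at which the fraction is at least twice the CI plateau
    above = np.where(ion_frac >= 2.0 * f_ci)[0]
    return log_n[above.max()]

# Placeholder example at one temperature:
log_n = np.linspace(-6.0, 0.0, 61)
ion_frac = 0.01 + 0.2 * np.exp(-0.5 * ((log_n + 3.5) / 0.7) ** 2)  # toy OVI-like curve
print(transition_density(log_n, ion_frac))
```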
We define a field CI OVI to be 1 if OVI is CI-dominated in that specific cell, 0 otherwise, and PI OVI to be the opposite. Multiplying this field by the actual OVI density allows us to differentiate the two populations of OVI, and similar methods can be applied to other ions.
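A sketch of such a binary field in yt, assuming the dataset ds and the trident O VI fields from the earlier sketch, and a hypothetical helper cutoff_logn(T) that returns the tabulated PI-to-CI transition density of Figure 4:

```python
import numpy as np

def ci_ovi_number_density(field, data):
    """O VI number density restricted to cells where O VI is collisionally
    ionized. cutoff_logn(T) is a hypothetical helper returning the PI/CI
    transition density at temperature T (+inf where O VI is 'fully PI' at
    low T, -inf where it is 'fully CI' at high T; cf. Figure 4)."""
    log_n = np.log10(data["gas", "H_nuclei_density"].to("cm**-3").value)
    temp = data["gas", "temperature"].to("K").value
    is_ci = log_n >= cutoff_logn(temp)          # CI above the transition density
    return data["gas", "O_p5_number_density"] * is_ci

ds.add_field(("gas", "O_p5_CI_number_density"),
             function=ci_ovi_number_density,
             units="cm**-3", sampling_type="cell")
```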
DISTRIBUTIONS OF PI AND CI OVI GAS
We now analyse the actual spatial distribution of OVI within the CGM of the VELA simulations. Unless otherwise noted, we will refer to the gas outside $0.1\,R_{\rm vir}$ and within $R_{\rm vir}$ as the CGM, though in fact recent studies (Wilde et al. 2020) have shown that $R_{\rm vir}$ is not really a 'physical' boundary to the CGM and probably underestimates its true extent. However, considering our decreased resolution outside the virial radius (Figure 1) and the fact that many analytic models use $R_{\rm vir}$ as a starting point (see below, Section 5.1), we will continue to use this definition. We focus here on VELA07 at redshift 1, but other VELA galaxies are similar at this redshift. This galaxy, a large spiral, is plotted in a plane which is approximately face-on, with the x axis at a 25 degree angle from the galaxy angular momentum, which was calculated in Mandelker et al. (2017). Other views of the same galaxy, including the overall distribution of gas and stars, can be seen in Figures 1, 4, 19, and D2 of Dekel et al. (2020b). We will start with the distribution in 3D space, and then look within projections at the fractions of CI and PI gas. We will continue to use the terminology from S18 and call the radius of the median OVI ion $R_{\rm OVI}$, or the 'half OVI radius'. We will show that within the simulation, $R_{\rm OVI}$ is indeed outside half of the virial radius, as suggested by COS-Halos (S18). Furthermore, because OVI extends past the virial radius and because of the concavity of the deprojected OVI profile (see Section 4.4), we find that $R_{\rm OVI}$ is likely even larger than suggested in S18.
OVI Distribution in 3D space
In Figures 5 and 6 we analyse a 2D slice through a representative simulation (VELA07 at z = 1). As a 2D slice it has zero thickness; however, since the simulation has finite resolution, the effective thickness is the resolution of the simulation (see Figure 1). Figure 5 shows several macroscopic properties of gas within the simulation. Here the features visible in this plane are the inflowing streams of cool, high-density gas (see Figure 6 for evidence of inflows) and the hot medium surrounding them. We will call these structures the 'streams' and 'bulk', respectively. These are the streams of baryonic matter necessary to feed star formation, and have been studied before in VELA (Zolotov et al. 2015; RF19) and in similar simulations (Danovich et al. 2015). While these streams look discontinuous, they only appear so due to minor fluctuations moving them outside the plane of the slice. They are part of the same counterclockwise-spiraling dense streams visible in the projection of Figure 2. Overplotting the in-plane velocity, we see that within the hot gas are fast outflows with velocities of $\sim$1000 km s$^{-1}$. On this scale, the inflowing speed of the cool gas is not visible. As noted in RF19, the metallicity of these streams is substantially lower than that of the surrounding hot gas. However, the increased density in these regions more than makes up for their lower metallicity, so we expect them to be detectable in metals. In fact they are essentially traced out by OVI (see below). Finally, we see in the bottom right panel (Figure 5d) that none of these structures are reflected in the pressure diagram; in fact pressure is almost spherically symmetric, with a maximum in the central galaxy. So, 'overall' the cool inflows are in approximate pressure equilibrium with the rest of the galaxy. However, strong pressure variations are observed in locations with strong inflowing velocity. The correlation between pressure fluctuations and inflowing streams in these simulations should be taken as an object for further study in the future.
Focusing on OVI within the same slice, Figures 6a and 6b show the separation of the slice into PI cells (left) and CI cells (right) according to the definition in Section 3 (see green line in Figure 4). Since they are defined to be mutually exclusive, the filled cells in the left panel appear as white space in the right panel, and vice versa. We see from this that CI gas fills the majority of the volume of this slice (more quantitative results, which do not depend on the specific snapshot and slice orientation, can be found in Table 2) and PI gas is found only inside the cool inflows. This follows from the temperature distribution since, as expected from Section 3, the PI-CI cutoff is nearly equivalent to a temperature cutoff. Also marked in the left plot is the velocity of the PI cells within the slice. Since the OVI ions are added to the simulation in post-processing, the velocity is the overall gas velocity in that region, not the separate velocity of only OVI. This shows that the PI OVI clouds, and therefore also the streams shown in Figure 5, are generally inflowing and rotating, with a characteristic velocity of $\sim$100 km s$^{-1}$, significantly slower than the outflows. This gas is contained within filaments which become smaller in cross-section as they spiral towards the central galaxy.
Next we analyse the distribution of OVI by mass (instead of volume as in the previous paragraph) between the two states. It is clear from the top panels of Figure 6 (a and b) that the OVI number density is higher in the PI clouds than in the CI bulk, but since the clouds fill only a small fraction of the CGM, it is not a priori clear which phase would dominate in either sightlines or in the CGM overall. In RF19 it was found that this depended strongly on redshift and galaxy mass. All galaxies at high redshift were CI-dominated and became more PI-heavy with decreasing redshift, eventually diverging by mass, with larger galaxies approaching CI-domination again at low redshift, and smaller galaxies remaining PI-dominated to redshift 0. We find here (Table 2, column 3) that the PI gas contains about 2/3 of the OVI mass, showing that OVI in the CGM is primarily found in cooler, lower-metallicity gas, but that a non-negligible fraction remains CI.
Before we can address which phase will dominate within sightlines, there is one other important feature of Figure 6. While the number density of the CI bulk is significantly lower than that of the PI clouds, within the CI slice (Figure 6b) there are small regions of high number density. These are present only along the edges of the PI clouds themselves. To get a more quantitative understanding of this 'interface layer', we used a KDTree algorithm to query each CI gas cell within 0.1-1.0 R vir for its nearest PI neighbor. We define the interface (for now, though see Section 5 for more details) as CI cells which have any PI neighbor within 2 kpc. In the bottom left panel (Figure 6c) we show only the interface cells as defined above. These cells contain ∼ 92 percent of the CI OVI mass within the virial radius, while occupying only ∼ 3.3 percent of the CI volume. Therefore, in addition to dividing the CGM into CI and PI components, we believe it is useful to further divide the CI gas into two phases: interface and bulk. An interface layer like the one shown in Figure 6 is often found surrounding cold dense gas flowing through a hot and diffuse medium (Gronke & Oh 2018, 2020; Ji et al. 2019; Li et al. 2019; Mandelker et al. 2020a; Fielding et al. 2020). In Section 5 we will present a model for the physical origin and properties of the interface layers found in our simulations.
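The nearest-neighbour query described above is straightforward to reproduce; the sketch below shows one way to do it with scipy's cKDTree, assuming the cell-centre positions of the CI and PI cells have already been extracted from the snapshot as arrays (the variable names and the mock positions are illustrative, not taken from the simulations).

```python
# Sketch: tag CI cells within a distance d_int of any PI cell as "interface".
# Assumes cell-centre positions (kpc) and a PI/CI split were already extracted
# from the simulation; names and mock positions are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def tag_interface(pos_ci, pos_pi, d_int=2.0):
    """Return a boolean array marking CI cells with a PI neighbour within d_int kpc."""
    tree = cKDTree(pos_pi)                # build tree on PI cell centres
    dist, _ = tree.query(pos_ci, k=1)     # nearest PI neighbour for every CI cell
    return dist < d_int

# Example with random mock positions (kpc)
rng = np.random.default_rng(0)
pos_ci = rng.uniform(-100, 100, size=(50000, 3))
pos_pi = rng.uniform(-100, 100, size=(5000, 3))
interface = tag_interface(pos_ci, pos_pi, d_int=2.0)
print(f"{interface.mean():.3f} of CI cells flagged as interface")
```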
The fraction of gas in each phase, calculated using both a 2 kpc and a 1 kpc boundary, is shown in Table 2. A complicating factor, which is outside the scope of this paper to address, is that within these boundary layers, gas is unlikely to be in ionization equilibrium (Begelman & Fabian 1990;Slavin et al. 1993;Kwak & Shelton 2010;Kwak et al. 2011;Oppenheimer et al. 2016) and so it is possible that the mass distribution of CI gas in the two phases will differ significantly from that presented here. In particular, it was found in Ji et al. (2019) that nonequilibrium ionization can increase the column densities of OVI by a factor of ∼ (2 − 3) within turbulent interface layers.
In this simulation, outflowing warm gas is generally too hot to have a significant OVI contribution, making the bulk of the volume CI but negligible in OVI outside the inner halo (∼ 0.3R vir ). We are not claiming that the total gas density in these warm-hot outflows is extremely low, but only their OVI number density. This could be due to a low value of any of the contributing factors of ion fraction, density, or metallicity. We find in these simulations that it is primarily the ion fraction which causes this bulk volume to be negligible in OVI, compared to both the interface and the cool streams (see Section 5.3). The outflows never have low metallicity, as they are driven by supernova winds. Additionally, as seen in Figure 5, the high density and low metallicity inside the streams mostly cancel one another. However, both of those effects are much smaller than the ion fraction dependence (Figure 6d).
Figure 7. The projection profiles of several hundred sightlines per galaxy, randomly generated as described in Section 2.2, disambiguated into total, PI, and CI profiles, shown as solid, dashed, and dotted lines, respectively. Individual galaxies, with the same distinctive linestyles, are shown in colors. Errorbars represent the 16th and 84th percentiles, with the median value indicated by the dot.

The distribution of OVI into the three categories within 0.1 − 1.0R vir for each VELA simulation, and the 'stacked' results (the sum of the total values from each category), are shown in Table 2. Here we see that in each simulation, photoionized OVI makes up ∼ (40 − 90) percent of all OVI by mass, with an average of approximately 62 percent, and the
CI gas is mostly concentrated within the interface. This parallels the findings of RF19, where it was found that cool gas dominates the CGM by mass, and warm-hot gas dominates the CGM by volume. Taking PI and CI OVI to be analogues of 'cool' and 'warm-hot' gas respectively, we see that OVI has a similar distribution. An interface of 1 kpc contains about two-thirds of CI gas, while a 2 kpc interface contains almost 90 percent of CI gas. From a volume perspective, the CI bulk occupies the vast majority (∼ 90 percent) of the CGM, and this does not change appreciably when we consider a 2 kpc boundary layer instead of a 1 kpc boundary. There are effects which both underestimate and overestimate the amount of gas in these interfaces. In the outer parts of the CGM, the resolution is in fact worse than 1-2 kpc, and so even cells adjacent to PI clouds might not register as interface cells, underestimating that value. On the other hand, in the inner part of the CGM the resolution is much better and the 1-2 kpc cutoff might include some gas which is not dynamically 'boundary layer gas' (see Section 5).
OVI Within Sightlines
While we see in Table 2 that by mass, the majority of all OVI within the CGM is PI, the projection of OVI through sightlines will distort the distribution, biasing the observed OVI gas towards the outer halo relative to the impact parameter. This is because the impact parameter is the minimal galactocentric distance of gas along the sightline, so all gas the sightline intersects lies at that distance or farther. We saw in RF19 that, regardless of galaxy mass and redshift, the outer halo was generally more PI than the inner halo, so we should expect the average sightline to be more PI than the gas distribution itself. However, the small volume filling factor could conceivably lead to a majority of sightlines not hitting any PI gas whatsoever.
Table 2. Distribution of CGM gas (within 0.1 − 1.0R vir ) within the selected VELA snapshots at redshift z = 1. Columns 3, 4, and 5 give the mass distribution of OVI into PI gas, CI edge gas, and CI bulk gas (so they should sum to 100 percent, ignoring truncation errors). Columns 6, 7, and 8 give the volume distribution into the same categories. Column 9 is the percentage of CI gas by mass within the edge. The same analysis is repeated assuming a 1 kpc edge and a 2 kpc edge.

We tested this in two ways. First, we used the random sightline procedure defined in Section 2.2. In each sightline, the total OVI column density is recorded, in addition to the
fraction in the CI and PI states. The sightline can then be broken down into a 'total OVI column density', a 'CI OVI column density' and a 'PI OVI column density'. In Figure 7, sightlines are collected by impact parameter from all six galaxies at z = 1 and, at each impact parameter, the median column density for each category is calculated. They are shown in black along with the 16th and 84th percentiles as error bars, with the total OVI in solid lines, the PI OVI in dashed lines, and the CI OVI in dotted lines (a slight offset in x between the lines is included for visualization, but the data are aligned in impact parameter). The individual galaxy medians are also included in lighter colors in the background, with the same format (the colors for individual galaxies in Figure 7 are the same as in Figure 10). Data points for OVI from Werk et al. (2013) are included for order-of-magnitude reference, but it is important to note that these data are not directly comparable to the values from the VELA simulations, as the COS-Halos sightlines are at significantly lower redshift than z = 1, the lowest redshift reached by these simulations. So, while the median OVI column density is approximately 0.5 dex lower than the observations, the redshift difference means that this does not necessarily represent a disagreement between the observations and the VELA simulations.
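A minimal sketch of how such a binned median profile can be assembled is given below; it assumes per-sightline impact parameters and column densities are already available as arrays (the loader and variable names are hypothetical), and simply computes the median and 16th/84th percentiles in impact-parameter bins.

```python
# Sketch: build a median column-density profile (as in Figure 7) from mock
# sightlines. Assumes per-sightline impact parameters b (in units of R_vir) and
# column densities N (cm^-2) are already available; names are illustrative.
import numpy as np

def profile(b, N, edges):
    """Median and 16th/84th percentiles of N in bins of impact parameter b."""
    med, lo, hi = [], [], []
    for b0, b1 in zip(edges[:-1], edges[1:]):
        sel = (b >= b0) & (b < b1)
        if not sel.any():                       # guard against empty bins
            med.append(np.nan); lo.append(np.nan); hi.append(np.nan)
            continue
        vals = N[sel]
        med.append(np.median(vals))
        lo.append(np.percentile(vals, 16))
        hi.append(np.percentile(vals, 84))
    return np.array(med), np.array(lo), np.array(hi)

edges = np.linspace(0.1, 1.0, 10)               # impact-parameter bins in R_vir
# b, N_tot, N_pi, N_ci = load_sightlines(...)   # hypothetical loader
# for label, N in [("total", N_tot), ("PI", N_pi), ("CI", N_ci)]:
#     med, lo, hi = profile(b, N, edges)
#     print(label, np.log10(med))
```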
The main result of this exercise is that the sightlines become dominated by PI gas at impact parameters outside ∼ 0.15R vir . While CI gas is significant inside this radius, it falls off quickly to undetectable levels in the outer halo. The predicted CI columns of ∼ 10 12−13 cm −2 roughly agree with the OVI columns of Ji et al. (2019), who also found that CI OVI is located primarily in an interface layer, though unlike them we find that PI OVI is also significant in sightlines. It is also significant that the PI column density is approximately constant out to high impact parameters, only falling by approximately a factor of 2 at r ⊥ = R vir , while the CI column density falls by a factor of almost 1000 over the same distance.
However, it is possible that the median values do not accurately convey the distribution, which, due to the low filling
factor of PI gas, could be very nonuniform. We therefore also created a projection through the full simulation volume using yt. In Figure 8, we show the CI fraction of the gas along projected sightlines. This projection has the same horizontal (y) and vertical (z) coordinates as Figure 6, and the black circles continue to indicate 0.1 and 1.0 R vir . However, each pixel in this image is the integral of all slices along the x axis, which has the effect of making each pixel a sightline orthogonal to the image. So, a blue pixel in this image is 100 percent PI, a red pixel is 100 percent CI, and intermediate values are indicated by the color bar. We added to this image a black mask which sets all pixels with a total OVI column density less than 10 13 cm −2 to black, representing nondetections. This limit was chosen according to the presence of several systems with 10 13 < N OVI < 10 13.5 found in the CASBaH survey (Prochaska et al. 2019). If we were to adopt a threshold of 10 13.5 , the picture would broadly stay the same, though slightly more of the picture would be blacked out.
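The projection and masking step can be sketched as follows, assuming the PI and CI OVI number densities have already been deposited onto a uniform grid (for example with yt's covering-grid machinery, which is not shown here); the threshold and variable names are illustrative. The same arrays also yield the covering fraction discussed below.

```python
# Sketch: per-pixel CI fraction of projected OVI, masking pixels below the
# 10^13 cm^-2 detection threshold (cf. Figure 8). Assumes PI and CI OVI number
# densities (cm^-3) were already deposited onto a uniform 3D grid with cell
# size dx_cm (cm); the deposition step (e.g. via yt) is not shown.
import numpy as np

def ci_fraction_map(n_ovi_pi, n_ovi_ci, dx_cm, threshold=1e13):
    N_pi = n_ovi_pi.sum(axis=0) * dx_cm              # integrate along the x axis
    N_ci = n_ovi_ci.sum(axis=0) * dx_cm
    N_tot = N_pi + N_ci
    frac_ci = N_ci / np.maximum(N_tot, 1e-30)        # 0 = pure PI, 1 = pure CI
    frac_ci = np.ma.masked_where(N_tot < threshold, frac_ci)   # "blacked out" pixels
    covering = (N_tot >= threshold).mean()           # fraction of detectable pixels
    return frac_ci, covering
```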
We see broadly the same phenomena in this image. In the inner halo (up to about 0.2R vir ), OVI is uniformly CI (red). Then, with a fairly small transitionary r ⊥ band (white) it switches to being nearly 100 percent PI (blue). We can see that while the detectable gas is PI outside some minimal radius, the covering fraction of all sightlines (defined here as the fraction with N OVI > 10 13 ) is not 100 percent. Over all six selected VELA galaxies, the covering fraction remains ∼ 70 per cent out to r ⊥ = R vir .
So the situation is fairly complex. The volume (at least in these relatively large galaxies which are still star-forming) is overwhelmingly dominated by CI gas, but the density of OVI within this 'bulk' region is so low that it contributes almost nothing to the sightline's OVI column density. This is shown by the projection fraction (Figure 8) being PI-dominated everywhere outside 0.3R vir whenever the projection isn't empty. Since we have established that a strong majority of CI gas is in fact an interface layer on PI clouds, this result is unsurprising. Wherever there is a significant amount of CI OVI, a PI gas cloud must be nearby. While these clouds may be small, their 3D nature makes them dominant over the essentially 2D surfaces of CI gas in the interface regions.

Figure 8. Fraction of gas within a projected sightline which is collisionally ionized. Each pixel represents a sightline in the x direction, orthogonal to the plane of the image. A blue pixel intersects only PI gas, a red pixel intersects only CI gas. As in Figure 6, the circles represent 0.1R vir and R vir . All pixels which have an overall OVI column density < 10 13 cm −2 are blacked out, since OVI column densities < 10 13 cm −2 are not observable with COS (Prochaska et al. 2019).
Sightlines which only pass through the CI bulk region, and not through the interfaces, are visible here as the blacked out nondetections. These two images imply that almost all of the OVI which would be observed in absorption spectra is PI.
It remains an open question whether the CI-dominated region in the inner halo (0.1-0.3R vir ) follows from the same interface structure, or if the character of the CGM in this region is substantially different. The inner halo was found to be highly irregular and different from the outer halo in cosmological simulations similar to VELA in Danovich et al. (2015). This could mean that in this part of the CGM specifically, the model of CI OVI as primarily a skin on PI clouds breaks down. It is also possible that in this inner halo region, the fixed size (1 or 2 kpc) of the interface might be larger than strictly necessary, and could sample gas which is not dynamically connected to the PI gas it happens to be near. Within this region, there is a lot of warm-hot, metal-rich gas outflowing due to stellar feedback, and its effect on the overall gas distribution in Table 2 is substantial. However, since sightlines preferentially sample the outer halo, this should not change our overall conclusions.
Halo mass and redshift dependence
Now we will compare how the effects shown in Figure 7 change with mass and redshift. In RF19 the mass and redshift dependence of the ionization mechanism of OVI in the CGM was as follows (see RF19, figure 14). All galaxies start out with their OVI population entirely CI-dominated. This is a function of three effects: first, the low ionizing background at high redshift (z > 2.5); second, the fact that at high redshift the cold inflows are almost metal-free; and third, that at higher redshift the streams are denser and more self-shielded (Mandelker et al. 2020b). The galaxies then experience a decrease in their CI fraction with time as the ionizing background becomes more significant and the streams become less dense. As galaxies approach redshift zero, their OVI ionization mechanisms diverge according to their mass. Low-mass galaxies end up completely PI at late times, while high-mass galaxies become mostly CI again, following the formation of a virial shock which heats up most of the CGM. We can see some of the same effects in the time-series sightline projections (Figure 9). In this figure we repeat the procedure of Figure 7, including showing the profile of the median sightline with error bars representing the 40th and 60th percentiles, except that we bin the galaxies into mass bins of 0.5 dex at specific snapshots instead of combining all sightlines together into a single 'overall' curve. The narrower error bars here compared to Figure 7 are used to avoid overlapping lines. All data points are offset slightly in r ⊥ so the error bars are visible, but should be read as vertically aligned in the apparent groups. The substantial bias of impact parameter profiles towards outer-CGM gas, as discussed in Section 4.2, means PI gas still dominates for most redshifts and masses. However, the trend from RF19 is evident in the form of the decreasing 'transition impact parameter' where PI gas becomes dominant. At z = 3, we see that only the smallest galaxies (blue line) have such a transition, and the larger galaxies (orange and green lines) remain CI-dominated out to r ⊥ = R vir . Moving on to redshift 2, we see that both of the available mass bins have roughly the same crossing point at ∼ 0.6R vir , with CI dominating inside that impact parameter and PI dominating outside. At low redshift (z = 1), we see that this transition from CI to PI-dominated sightlines happens at ∼ 0.2R vir , as shown before. It is also worth noting that the CI gas drops much more dramatically with impact parameter at redshift 1, and the OVI column density generally drops below 10 13.5 cm −2 in the outer halo for the first time.
We do not see a significant mass dependence of either the CI or the PI columns. Unlike in RF19, at z = 3 the mass bins diverge mostly for sightlines which pass through the galaxy itself, and at z = 1 and z = 2 there is little change in column density with mass between the available bins.
In this set of simulations, we do not study any galaxies with a virial mass smaller than M vir = 10 11 M . As presented in RF19, such small galaxies allow inflows to reach all the way to the disc, so this model would suggest that their OVI is entirely PI. Studying whether this is generally true in smaller galaxies will be the subject of future work.
Comparison with Observations
Using a phenomenological analysis of the COS-Halos data, S16 proposed that cool and relatively low density clouds produce the observed OVI columns of ∼ 10 14.5 cm −2 and a comparable amount of neutral hydrogen (N HI /N OVI ∼ 3), while higher density clouds embedded in or at smaller scales than the OVI clouds produce low ions and larger HI columns. This density structure suggests that sightlines with N HI ≲ 10 15 cm −2 intersect the OVI clouds but not the low-ion clouds, and hence N HI and N OVI should be correlated in these sightlines, while sightlines with N HI ≳ 10 15 cm −2 intersect both the OVI clouds and the low-ion clouds, and hence N OVI should be independent of N HI . In Figure 10 we show that sightlines through the VELA simulations follow the same pattern. This indicates that the 'global' version of the S16 model, where OVI and N HI ∼ 10 15 columns originate from the outer halo and low ions and N HI ≳ 10 15 columns originate from the inner halo, replicates the behavior in VELA.
It should be noted that simulated HI distributions in the CGM are resolution-dependent (Hummels et al. 2019), and so it is possible that N HI is not converged. This however should only have an effect at the highest N HI , where the HI-OVI curve shown in Figure 10 has already flattened out. Also, we find that N OVI in the VELA simulations is a factor of ∼ 3 lower than in COS-Halos, potentially due to the higher redshift of z ∼ 1 analyzed in VELA relative to z ∼ 0.2 in COS-Halos.
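One simple way to quantify the S16-style behaviour seen in Figure 10 is to measure the rank correlation of N OVI with N HI separately below and above the 10 15 cm −2 split; the sketch below assumes per-sightline column-density arrays are available and uses scipy's Spearman coefficient. It is only an illustration of the test, not the fitting procedure of S16.

```python
# Sketch: test the S16-style expectation that N_OVI tracks N_HI only for
# sightlines with N_HI below ~10^15 cm^-2. Assumes per-sightline column
# densities (cm^-2) are available as arrays; names are illustrative.
import numpy as np
from scipy.stats import spearmanr

def hi_ovi_correlation(N_HI, N_OVI, split=1e15):
    low, high = N_HI < split, N_HI >= split
    r_low, p_low = spearmanr(np.log10(N_HI[low]), np.log10(N_OVI[low]))
    r_high, p_high = spearmanr(np.log10(N_HI[high]), np.log10(N_OVI[high]))
    # expectation: strong correlation below the split, weak/none above it
    return (r_low, p_low), (r_high, p_high)
```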
Figure 10. Comparison of the HI vs OVI column densities of all of the sightlines through the different VELA simulations. The black curve shows the theoretical prediction from the phenomenological model presented in S16, fitted to the COS-Halos data (Werk et al. 2013, blue).

Figure 11. Using the data from Figure 7, an inverse Abel transformation is performed on the mock sightlines through the stacked VELA simulations to determine an approximate mass of OVI within the CGM, and a half-mass radius R OVI (dotted lines). This is compared to the actual distribution and half-mass radius from integrating over the simulation directly (solid lines).

Figure 12. The model suggested by the inflow patterns for the CGM. While it seems that there is a coincidence between the inflows and the PI-CI cutoff for OVI, other ions, especially CIV, also appear to be regulated by these structures, as in Figure 16.

We can also check whether the sightlines allow us to infer the 3D distribution of OVI correctly. S18 showed that the column densities observed with COS could, under an assumption of spherical symmetry, be used to extrapolate the total OVI mass in the CGM as a cumulative function of radius. Assuming that all galactic CGMs from COS-Halos were broadly similar, one can use an inverse Abel transformation on the OVI column densities to predict the total mass of OVI in the CGM of an average galaxy, relative to R vir (Mathews & Prochaska 2017; Stern et al. 2018). In S18, the purpose of this was to argue that the median OVI ion's radius, R OVI , actually lies outside half of the virial radius, and so is more emblematic of the outer CGM than the inner part, even in sightlines with impact parameters less than R OVI . This, however, assumes that the CGM is spherically symmetric. In Figure 7, we showed the median OVI column densities for a set of mock sightlines through the simulations. We will assume those median results are spherical and then apply the same inverse Abel transformation algorithm to them as in S18. This can be compared to the real distribution of OVI gas. We find in Figure 11 that the inverse Abel transform indeed recovers the actual mass of OVI within the virial radius to within ∼ 20 per cent. The interesting distinction between the two curves is rather their shape. We find that the deprojected curve is concave down, so it would be overrepresented in the inner CGM and underrepresented in the outer CGM, while the real OVI gas distribution is approximately linear out to the virial radius. Its different concavity (compare Figure 11 with S18, Figure 1, top) leads to our placing the (deprojected) R OVI closer to the
inner CGM than the actual R OVI . This suggests that in S18 itself, the prediction for R OVI was likely an underestimate, because their deprojection was indeed concave down, and we have shown that a linear radial profile in real space will lead to a concave-down profile in deprojection space. This means that most of the OVI in real observed sightlines might be near the edge of the virial radius, and there may even be a significant component in the IGM, if the virial radius is taken to be the boundary of the CGM.
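A minimal numerical version of the deprojection exercise is sketched below: it applies a discrete inverse Abel transform to a column-density profile and locates the radius enclosing half of the ions. The input profile here is an illustrative analytic curve, not the VELA medians, and spherical symmetry is assumed throughout, as in S18.

```python
# Sketch: deproject a median N_OVI(b) profile into a 3D number-density profile
# n(r) via a discrete inverse Abel transform, then locate the half-ion radius
# R_OVI (cf. S18). Spherical symmetry is assumed; the input profile is illustrative.
import numpy as np

def inverse_abel(b, N):
    """n(r) = -(1/pi) * integral_r^inf (dN/db) / sqrt(b^2 - r^2) db."""
    dNdb = np.gradient(N, b)
    n = np.zeros_like(b)
    for i, r in enumerate(b[:-1]):
        bb, dd = b[i + 1:], dNdb[i + 1:]        # start one bin out to avoid the singularity
        n[i] = -np.trapz(dd / np.sqrt(bb**2 - r**2), bb) / np.pi
    return n

def half_ion_radius(r, n, r_max):
    dM = 4.0 * np.pi * r**2 * n                 # ions per unit radius (shape only matters)
    cum = np.insert(np.cumsum(0.5 * (dM[1:] + dM[:-1]) * np.diff(r)), 0, 0.0)
    total = np.interp(r_max, r, cum)            # total within r_max
    return np.interp(0.5 * total, cum, r)       # radius enclosing half of it

b = np.linspace(0.05, 2.0, 200)                 # impact parameter in units of R_vir
N = 10**14.0 * np.exp(-b / 0.9)                 # illustrative column-density profile
n = inverse_abel(b, N)
print("R_OVI / R_vir ~", round(half_ion_radius(b, n, r_max=1.0), 2))
```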
In addition, our results that OVI traces cool inflows, combined with the Tumlinson et al. (2011) result that OVI is absent around quenched galaxies (albeit at lower redshift), may be evidence that the feedback mechanism which quenches galaxies also directly affects the cool inflows. We plan to study this effect in future work, using simulations which reach lower redshifts and higher masses.
PHYSICAL INTERPRETATION OF THE INTERFACE LAYER
There is existing literature regarding how the structure of galaxy formation is strongly regulated by inflows from the cosmic web into the galaxy through the CGM (e.g. Kereš et al. 2005; Dekel & Birnboim 2006; Dekel et al. 2009; Fox & Davé 2017, and references therein). We suggest that the metal distribution of the CGM might be governed by the same structures, and propose a three-phase model for future observers to fit absorption spectrum data to. A cartoon picture is shown in Figure 12. There are three regions of the CGM: the inside of the cool-inflow cones (hereafter the cold component, or cold streams), the outside (hereafter the hot component, or hot CGM), and the interface between these two components. The interaction between the inflowing cold streams and the ambient hot CGM induces Kelvin-Helmholtz instabilities (KHI) and thermal instabilities at the interface, causing hot gas to become entrained in the flow through a strongly cooling turbulent mixing layer of intermediate densities and temperatures (Mandelker et al. 2020a, hereafter M20a). We posit that the CI interface layer we find in our simulations represents precisely such a mixing layer. The general properties of radiatively cooling interface layers induced by shear flows were studied in Ji et al.
(2019) and Fielding et al. (2020). The conditions of the cold streams and hot CGM, which set the boundary conditions for the interface region, as a function of halo mass, redshift, and position within the halo were studied in Mandelker et al. (2020b) (hereafter M20b), based on M20a. In this section, we combine the insights of these studies to explain the physical origin and the properties of the multiphase structure seen in our simulations. We begin in Section 5.1 by summarizing our current theoretical understanding of the evolution of cold streams in the CGM of massive high-z galaxies, as they interact with the ambient hot gaseous halo. In Section 5.2 we examine the properties of the different CGM phases identified in our simulations, in light of this theoretical framework. Finally, in Section 5.3 we use these insights to model the distribution of OVI and other ions in the CGM of massive z ∼ 1 galaxies.
KHI in Radiatively Cooling Streams
Using analytical models and high-resolution idealized simulations, recent studies of KHI in such streams have focused on pure hydrodynamics in the linear regime (Mandelker et al. 2016) and the non-linear regime in two dimensions (Padnos et al. 2018; Vossberg et al. 2019) and three dimensions (Mandelker et al. 2019a). Others have incorporated self-gravity (Aung et al. 2019), idealized MHD (Berlok & Pfrommer 2019), radiative cooling (M20a), and the gravitational potential of the host dark matter halo (M20b). We begin by summarizing the main findings of M20a regarding KHI in radiatively cooling streams. There, we considered a cylindrical stream with radius R s , density ρ s , and temperature T s , flowing with velocity V s through a static background (V b = 0) with density ρ b and temperature T b . The stream and the background are assumed to be in pressure equilibrium, so χ ≡ ρ s /ρ b = T b /T s , where we have neglected differences in the mean molecular weight in the stream and the background. The Mach number of the flow with respect to the sound speed c b in the background is M b = V s /c b . The shear between the stream and the background induces KHI, which leads to a turbulent mixing region forming at the stream-background interface. The characteristic density and temperature in this region are approximately the geometric means of the stream and background values, ρ mix ≈ √(ρ s ρ b ) and T mix ≈ √(T s T b ) (eqs. 1-2; Begelman & Fabian 1990; Gronke & Oh 2018). In the absence of radiative cooling, the shear region engulfs the entire stream in a timescale of order R s /(αV s ), where α ∼ (0.05 − 0.1) is a dimensionless parameter that depends on the ratio of stream velocity to the sum of sound speeds in the stream and background, M tot = V s /(c s + c b ) (Padnos et al. 2018; Mandelker et al. 2019a). When radiative cooling is considered, the non-linear evolution is determined by the ratio of t shear to the cooling time in the mixing region, t cool, mix , which is set by γ = 5/3, the adiabatic index of the gas, k B , Boltzmann's constant, n mix , the particle number density in the mixing region, and Λ(T mix ), the cooling function evaluated at T mix . If t shear < t cool, mix , then KHI proceeds similarly to the non-radiative case, shredding the stream on a timescale of t shear (Mandelker et al. 2019a). However, if t cool, mix < t shear , hot gas in the mixing region cools, condenses, and becomes entrained in the stream (M20a). In this case, KHI does not destroy the stream. Rather, it remains cold, dense and collimated until it reaches the central galaxy. Similar behaviour is found in studies of spherical clouds (Gronke & Oh 2018, 2020; Li et al. 2020) and planar shear layers (Ji et al. 2019; Fielding et al. 2020). Streams with t cool, mix < t shear grow in mass by entraining gas from the hot CGM as they travel towards the central galaxy. The stream mass-per-unit-length (hereafter line-mass) grows with time on a characteristic entrainment timescale t ent (M20a), where m 0 = πR s 2 ρ s is the initial stream line-mass; t ent is set by t sc = 2R s /c s , the stream sound crossing time, and by t cool , the minimal cooling time of material in the mixing layer, which in practice has a distribution of densities and temperatures rather than being a single phase described by eqs.
(1)-(2). If the stream is initially in thermal equilibrium with the UV background, the minimal cooling time occurs approximately at T = 1.5T s , but any temperature in the range ∼ (1.2 − 2)T s works equally well (M20a). The density is given by assuming pressure equilibrium. This mass entrainment causes the stream to decelerate, due to momentum conservation. A large fraction of the kinetic and thermal energy dissipated by the stream-CGM interaction is emitted in Lyα, which may explain the extended Lyα blobs observed around massive high-z galaxies (Goerdt et al. 2010, M20b).
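As a rough illustration of why t cool, mix < t shear for such streams, the back-of-the-envelope estimate below compares the two timescales for conditions typical of the halos studied here. It assumes t shear ∼ R s /(αV s ), an isochoric cooling time ∼ (3/2)k B T/(nΛ), and an illustrative value of the cooling function; these are simplified stand-ins for, not reproductions of, the full M20a expressions.

```python
# Sketch: order-of-magnitude check that t_cool,mix < t_shear for typical
# stream/CGM conditions quoted in the text. Assumes t_shear ~ R_s/(alpha*V_s)
# and an isochoric cooling time ~ (3/2) k_B T / (n Lambda); the Lambda value is
# illustrative, not the exact M20a expressions.
import numpy as np

kpc, Myr, k_B = 3.086e21, 3.156e13, 1.381e-16   # cm, s, erg/K

R_s   = 40 * kpc          # stream radius
V_s   = 200e5             # stream velocity, cm/s (~virial velocity)
alpha = 0.05              # shear-layer growth parameter
n_s, T_s = 1e-3, 2e4      # stream density (cm^-3) and temperature (K)
chi   = 70                # stream/background density contrast

n_mix = n_s / np.sqrt(chi)            # geometric-mean mixing conditions (eqs. 1-2)
T_mix = T_s * np.sqrt(chi)
Lam   = 1e-22                         # illustrative cooling function, erg cm^3 s^-1

t_shear = R_s / (alpha * V_s)
t_cool  = 1.5 * k_B * T_mix / (n_mix * Lam)
print(f"t_shear ~ {t_shear/Myr:.0f} Myr, t_cool,mix ~ {t_cool/Myr:.0f} Myr")
```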
Stream Evolution in Dark Matter Halos
In order to address the evolution of streams in dark matter halos, M20b, following earlier attempts (Dekel & Birnboim 2006; Dekel et al. 2009), developed an analytical model for the properties of streams as a function of halo mass and redshift. We here focus on ∼ 10 12 M halos at z ∼ 1 (Table 1), and refer readers to M20b for more general expressions. Near the halo edge, at R vir , the streams are assumed to be in approximate thermal equilibrium with the UV background, yielding temperatures of T cold ∼ 2 × 10 4 K. The temperature in the hot CGM is assumed to be of order the virial temperature, which scales with M 12 ≡ M vir /10 12 M and (1 + z) 2 ≡ (1 + z)/2. The stream and the hot CGM are assumed to be in approximate hydrostatic equilibrium. Accounting for order-unity uncertainties in the above quantities, the density contrast between the stream and the hot CGM is predicted to be in the range χ ∼ (20 − 200), with a typical value of ∼ 70. The density of the hot gas is constrained by the dark matter halo density in the halo outskirts, the universal baryon fraction, and the fraction of baryonic matter in the hot CGM component, which has constraints from observations and cosmological simulations. Together with the χ values quoted above, this gives the density in streams as they enter R vir . This is predicted to be n s ∼ (3 × 10 −4 − 0.01) cm −3 , with a typical value of 10 −3 cm −3 .
In M20b, the stream is assumed to enter the halo on a radial orbit, with a velocity comparable to the virial velocity. The mass flux entering the halo along the stream is given by the total baryonic mass flux entering the halo and the fraction of this mass flux found along streams, where one dominant stream typically carries ∼ half the inflow, while three streams carry ≳ 90% (Danovich et al. 2012). The stream density, velocity, and mass flux can together be used to constrain the stream radius. This is predicted to be R s /R vir ∼ (0.03 − 0.5), with a typical value of ∼ 0.2, where the virial radius follows from the virial mass and redshift.
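For orientation, the standard virial scalings below give the characteristic radius, velocity, and temperature of a 10 12 M halo at z = 1, using an overdensity of 200 times the critical density and illustrative cosmological parameters; these are generic relations, not the exact expressions of M20b.

```python
# Sketch: characteristic virial radius, velocity, and temperature of a
# 10^12 M_sun halo at z = 1, using standard scalings (overdensity ~200 times
# the critical density, flat LambdaCDM with illustrative parameters).
import numpy as np

G, k_B, m_p, kpc, Msun = 6.674e-8, 1.381e-16, 1.673e-24, 3.086e21, 1.989e33
H0, Om, OL = 70 * 1e5 / (1e3 * kpc), 0.3, 0.7       # H0 = 70 km/s/Mpc in s^-1
mu = 0.6                                            # mean molecular weight

def virial(M_vir, z, Delta=200.0):
    Hz2 = H0**2 * (Om * (1 + z)**3 + OL)            # H(z)^2 for flat LCDM
    rho_c = 3 * Hz2 / (8 * np.pi * G)               # critical density at z
    R = (3 * M_vir / (4 * np.pi * Delta * rho_c))**(1.0 / 3.0)
    V = np.sqrt(G * M_vir / R)
    T = mu * m_p * V**2 / (2 * k_B)
    return R / kpc, V / 1e5, T                      # kpc, km/s, K

R, V, T = virial(1e12 * Msun, z=1.0)
print(f"R_vir ~ {R:.0f} kpc, V_vir ~ {V:.0f} km/s, T_vir ~ {T:.1e} K")
```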
Inserting the above constraints for the stream and hot CGM properties into eqs. (1)-(4) leads to the conclusion that t cool, mix < t shear in virtually all cases, even if the streams are nearly metal-free (M20b). Streams are thus expected to survive until they reach the central galaxy, and grow in mass along the way.

Figure 13. Radial profiles of physical properties in the three OVI phases in the CGM of VELA07 at z = 1. Blue lines represent PI gas, associated with cold streams, orange lines represent bulk CI gas, associated with the hot CGM, and green lines represent CI interface gas, associated with the mixing layer between the cold streams and hot CGM. Dotted lines represent best-fit power law relations for the radial profile of the same color and type, fit in the radial range r = (0.5 − 1)R vir , and the fits themselves are listed in the panel legends. We also show profiles for all CI gas in black. Left: Temperature profiles, showing the mass-weighted average temperature in each radial bin. Centre: Gas density profiles within each radial bin. Right: Cumulative volume occupied by each phase at radii ≤ r. Only the stream volume power law is shown.
Within the halo, at 0.1 < r/R vir < 1, M20b assumed both the stream and the background to be isothermal, and to have a density profile described by a power law, n ∝ x −β (eq. 10), with x ≡ r/R vir and 1 < β < 3. The stream and halo thus maintain pressure equilibrium at each halocentric radius, with a constant density contrast χ. The stream is assumed to be flowing towards the halo centre, growing narrower along the way. The stream radius at halocentric radius r is a(r) = R s (m(r)/m 0 ) 1/2 x β/2 (eq. 11), with m(r) the stream line-mass at halocentric radius r, m 0 the line-mass at R vir , and R s the stream radius at R vir . In general, m(r) > m 0 due to the mass entrainment discussed above. However, in practice, the line-mass of streams on radial orbits in 10 12 M halos at z ∼ 1 grows by only ∼ (5 − 40) percent by the time the stream reaches 0.1R vir (M20b). We can thus approximate a ∝ x β/2 . In this case, it is straightforward to show that the cumulative volume occupied by the stream interior to a halocentric radius r scales as V s (< r) ∝ x β+1 (eq. 12). M20b assumed that the mass entrainment rates derived by M20a (eqs. 5-6) could be applied locally at each halocentric radius. When doing so, they used the scaling t cool ∝ n −1 ∝ x β and t sc ∝ a ∝ x β/2 µ 1/2 , with µ ≡ m(r)/m 0 .
They then derived equations of motion for the stream within the halo, where the deceleration induced by mass entrainment counteracts the acceleration due to the halo potential well. These equations were solved simultaneously for the radial velocity and the line-mass of streams as a function of halocentric radius. For 10 12 M halos at z ∼ 1, the line-mass at 0.1R vir was found to be ∼ (5 − 40) percent larger than at R vir , while the radial velocity was ∼ (75 − 98) percent of the free-fall velocity.
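The cumulative-volume scaling quoted above is easy to verify numerically: for a stream whose cross-sectional radius grows as a ∝ x β/2 (with µ ≈ 1), the enclosed volume integrates to ∝ x β+1 , as the short check below shows.

```python
# Sketch: numerical check that a stream with cross-section radius a(x) ~ x^(beta/2)
# (x = r/R_vir, mu ~ 1) encloses a cumulative volume V(<x) ~ x^(beta+1).
import numpy as np

beta = 1.5
x = np.linspace(1e-3, 1.0, 2000)
a = x**(beta / 2.0)                          # stream radius in units of R_s, a(1) = 1
dV = np.pi * a**2                            # dV/dx in units of R_s^2 * R_vir
V = np.concatenate(([0.0], np.cumsum(0.5 * (dV[1:] + dV[:-1]) * np.diff(x))))

# fit the log-log slope over the outer half of the halo
slope = np.polyfit(np.log10(x[x > 0.5]), np.log10(V[x > 0.5]), 1)[0]
print(f"measured slope = {slope:.2f}, expected beta + 1 = {beta + 1:.2f}")
```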
Turbulent Mixing Layer Thickness
Several recent studies have examined the detailed physics behind the growth of turbulent mixing layers and the flux of mass, momentum, and energy through them (Padnos et al. 2018; Ji et al. 2019; Fielding et al. 2020). Using idealized numerical simulations and analytical modeling, these works considered a simple planar shear layer between two semi-infinite domains, without (Padnos et al. 2018) and with (Ji et al. 2019; Fielding et al. 2020) radiative cooling. While this is different from the cylindrical geometry we have thus far considered, the physics of shear layer growth are expected to be similar in the two cases.
By equating the timescale for shear-driven turbulence to bring hot gas into the mixing layer with the minimal cooling time of gas in the mixing layer, Fielding et al. (2020) obtain an expression for the mixing layer thickness, δ (eq. 13).

Figure 14. Radial profiles of dynamical properties in the three OVI phases in the CGM of VELA07 at z = 1. Line colours are as in Figure 13. Left: Total velocity in each component, normalized by the halo virial velocity, V vir . Centre: Radial velocity, normalized to total velocity in each radial bin. Solid (dashed) lines represent outflowing (inflowing) gas. Right: Mass-per-unit-length (line-mass) within each state.
They find that V turb ∼ (0.1 − 0.2)V s independent of other parameters, such as the density contrast. A similar result was found for the turbulent velocities in mixing layers around cylindrical streams in the absence of radiative cooling (Mandelker et al. 2019a). In the context of the M20b model described above, if we assume that eq. (13) can be applied locally at every halocentric radius, this implies that δ/a ∝ (x β y/µ) 3/4 , where y = (V s (r)/V s (R vir )) 2 is the stream velocity at radius r normalized by its velocity at R vir , squared. In practice, for 10 12 M halos at z ∼ 1, δ/a ∼ (0.01 − 0.1) throughout the halo. For R s ∼ 0.25R vir ∼ 40 kpc, this implies δ ∼ 0.4 − 4 kpc near the outer halo, and slightly narrower towards the halo centre. This is comparable to our assumed values of δ ∼ (1 − 2) kpc for defining the CI interface gas in the CGM of our simulations, and can serve as a post-facto justification of this ad-hoc choice. The simulations of Ji et al. (2019) have different resolution, initial perturbation spectrum, and cooling curve than those of Fielding et al. (2020). They also explore a different range of parameter space, and differ in their analysis methods. All these lead them to propose a different expression for the mixing layer thickness, based on their simulations. The main difference in their modeling is that they assume that pressure fluctuations induced by rapid cooling are what drive the turbulence in the mixing region, rather than the shear velocity. Their suggested expression for the mixing layer thickness (eq. 14) is one in which we have normalized the cooling rate, density, and temperature by typical values found in our simulations (see Section 5.2). In the context of the M20b model described above, where the stream is isothermal with density and radius following eqs. (10)-(11), this implies δ/a ∝ µ −1/2 , which is nearly constant throughout the halo. This is comparable to our assumed interface thickness of ∼ (1 − 2) kpc in the outer halo, but predicts a narrower interface layer closer to the halo centre, where the stream becomes narrower as well.
Importantly, even if the mixing layer thickness itself is unresolved, the mass entrainment rate and the associated stream deceleration and energy dissipation, are found to be converged at relatively low spatial resolution of ∼ 30 cells per stream diameter, which is the scale of the largest turbulent eddies (M20a; see also Ji et al. 2019;Gronke & Oh 2020;Fielding et al. 2020). This is comparable to what is achieved in the VELA simulations.
Comparison to Simulation Results
Table 3. Properties of the cold streams and the hot CGM at the halo virial radius, as inferred from our model, in the six VELA simulations examined in this work. From left to right we list the VELA index, the ratio of hot CGM temperature to the halo virial temperature, the density ratio between the cold stream and hot CGM, the volume density in the cold streams, the ratio of stream radius to halo virial radius assuming one stream in the halo, the average ratio of stream radius to halo virial radius assuming three streams, the ratio of stream velocity to the halo virial velocity, the ratio of stream radial velocity to total velocity, the metallicity in the cold streams, and the metallicity in the hot CGM. The range of these parameters found in the simulations is consistent with the predictions of M20b.

We now analyze the overall properties of the three identified states of gas in the VELA simulations. Each cell is assigned to one of the states (PI, CI-interface, or CI-bulk). PI gas is defined as in Section 3. The 'interface CI' gas cells are defined via the following two criteria (besides being CI): (1) they are within 2 kpc of a PI cell, as described in Section 4.1, and (2) they have an OVI number density above 10 −13 cm −3 , to allow the interface to become smaller than 2 kpc as the
resolution improves in the inner halo. Any CI cell not classified as 'interface CI' is classified as 'bulk CI' instead. The first criterion is justified based on eqs. (13) and (14), and the second will be discussed in Section 5.3. In Figure 13 we show the temperature, density and volume of each phase from 0.1 to 1.0R vir . For each component, we fit a power-law relation to the profiles at r > 0.5R vir , and list the best-fit relation in the legends. We restrict ourselves to the outer half of the halo when fitting the profiles in order to minimize the effects of galactic feedback and of the non-radial orbit of the stream (see below), neither of which is accounted for in the analytic model of M20b described in Section 5.1. Indeed, in many cases, the profiles noticeably change around r ∼ (0.4 − 0.5)R vir , when these effects likely become important.
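The power-law fits listed in the legends of Figure 13 amount to a least-squares fit in log-log space over r = (0.5 − 1)R vir ; a minimal sketch, assuming the binned radial profile is already in hand, is given below.

```python
# Sketch: fit a power law prof ~ r^gamma over r = (0.5-1) R_vir in log-log
# space, as done for the dotted lines in Figure 13. Assumes a binned radial
# profile (r in units of R_vir) is already computed; values are illustrative.
import numpy as np

def powerlaw_fit(r, prof, rmin=0.5, rmax=1.0):
    sel = (r >= rmin) & (r <= rmax)
    slope, norm = np.polyfit(np.log10(r[sel]), np.log10(prof[sel]), 1)
    return slope, 10**norm              # prof ~ (10**norm) * r**slope

# illustrative profile with slope -2.3, like the PI gas density
r = np.linspace(0.1, 1.0, 30)
prof = 3e-4 * r**-2.3
print(powerlaw_fit(r, prof))            # ~(-2.3, 3e-4)
```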
In Figure 13, we see that the temperature of PI gas is ≳ 3 × 10 4 K at R vir , and decreases roughly as r 0.8 to a temperature of ≲ 10 4 K at 0.1R vir . Nonetheless, at r > 0.4R vir , this gas is close to isothermal at 3 × 10 4 K. The drop in temperature towards lower radii is due to increasing density (centre panel), shortening the cooling time and reducing the heating by the UV background. The bulk CI gas has a temperature of ≲ 3T vir at R vir , increasing roughly as T ∝ r −0.5 towards 0.4R vir ∼ 60 kpc. At smaller radii, the temperature increases sharply as hot outflowing gas from the galaxy becomes more prominent and the pressure rises (see Figure 5b and Figure 5d). This also corresponds to the radius where the OVI CI fraction sharply increases (Figure 8). At 0.1R vir , the bulk CI gas reaches temperatures of ∼ 20T vir . These extremely large temperatures are likely dominated by hot feedback-induced outflows from the galaxy. The CI interface, which contains the vast majority of the total CI gas mass (Table 2), has temperatures much closer to T vir throughout the CGM. The average temperature of all CI gas is nearly isothermal at ∼ 2T vir . All in all, we find the temperature profiles of the PI gas and CI gas consistent with the expected behaviour for cold streams and the hot CGM, respectively, as described in Section 5.1.2.
The density in the PI gas near R vir is ∼ 3 × 10 −4 cm −3 . This is consistent with the predicted densities of cold streams near R vir of 10 12 M halos at z ∼ 1, albeit towards the low end of the expected range. The density increases towards the halo centre roughly as r −2.3 . This is much steeper than the density profile in the CI bulk, which scales as r −1.5 outside of ∼ 0.4R vir , and has an even shallower slope at smaller radii. The steeper increase of the PI gas density towards the halo centre compared to the CI bulk allows the two phases to maintain approximate, though not perfect, pressure equilibrium throughout the halo despite the decrease (increase) in the temperature of PI (CI bulk) gas towards the halo centre (see also Figure 5d). At R vir , the PI gas is ∼ 85 times denser than the CI bulk, consistent with the predicted density contrast between cold streams and the hot CGM (M20b). The CI interface also maintains approximate pressure equilibrium with the PI gas and the CI bulk throughout the halo, with density and temperature values roughly the geometric mean between those two phases, as expected for turbulent mixing zones (eqs. 1-2).
The volume occupied by the PI gas interior to radius r scales as r 2.54 , in agreement with eq. (12) given the slope of the CI bulk density profile. Assuming that the total volume of the PI gas is composed of n streams, we can infer the typical stream radius by equating the right-hand-side of eq. (12) with V 0 /n, where V 0 ∼ 1.5 × 10 6 kpc 3 is the total volume of PI gas at R vir shown in Figure 13. The result is R s /R vir ∼ 0.25, 0.30, and 0.40 for n = 3, 2, and 1 respectively. Most massive high-z galaxies are predicted to be fed by 3 streams (Dekel et al. 2009;Danovich et al. 2012), with a single 'dominant' stream containing most of the mass and volume. Visual inspection of VELA07 at z = 1 reveals that n = 2 is likely the best value (see Figure 2, and the top-right and bottom-right of Figure 5a). We also note that if the stream is not radial, but rather spirals around the central galaxy, as in Figure 2, the total stream volume will be larger than inferred from eq. (12), and this can also be included by an effective n > 1 for a single stream. Regardless, the inferred values of R s /R vir for n = (1 − 3) are consistent with expectations (M20b).
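The inversion from total PI volume to stream radius can be written out explicitly; the sketch below assumes the simple V (< r) ∝ x β+1 form with µ ≈ 1 and an illustrative virial radius of 200 kpc, and recovers values close to the R s /R vir ∼ 0.2 − 0.4 quoted above.

```python
# Sketch: estimate the stream radius from the total PI volume at R_vir,
# assuming n identical streams with a(r) = R_s x^(beta/2) and mu ~ 1, so that
# V_0/n ~ pi R_s^2 R_vir / (beta + 1). R_vir here is illustrative.
import numpy as np

V0 = 1.5e6          # total PI volume at R_vir, kpc^3 (Figure 13)
R_vir = 200.0       # kpc, illustrative for a ~10^12 M_sun halo at z ~ 1
beta = 1.5

for n in (1, 2, 3):
    R_s = np.sqrt(V0 * (beta + 1) / (n * np.pi * R_vir))
    print(f"n = {n}: R_s/R_vir ~ {R_s / R_vir:.2f}")
```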
These results for the temperature, density, and volume of the three CGM phases lead us to conclude that we can associate the PI gas with cold streams, the CI bulk gas with a background hot halo, and the CI interface gas with a turbulent mixing layer forming between the two as a result of KHI (M20a). While we have focused our discussion on VELA07, the other galaxies examined in this work exhibit very similar properties, and are all consistent with this association. We list their properties in Table 3, all of which are consistent with the predictions of M20b. To further solidify this point, we now examine the profiles of velocity and line-mass of the PI gas, and compare to predictions for the evolution of cold streams flowing through a hot CGM (M20b).
The left-hand panel of Figure 14 shows radial profiles of the total velocity magnitude for the three CGM components. The PI and CI interface gas both have velocities of ∼ V vir at R vir . While the velocity at r > 0.5R vir is nearly constant, their velocity at 0.1R vir is ∼ 1.6V vir , slightly less than the free fall velocity at this radius, which is ∼ 2.5V vir assuming an NFW halo with a concentration parameter of c ∼ 10. The CI bulk has velocities of order ∼ 2V vir at R vir , and increases by a similar factor between R vir and 0.1R vir . These super-virial velocities are due to strong winds (see Figure 5b), and are consistent with the super virial temperatures in this component seen in Figure 13.
In the middle panel of Figure 14 we show profiles of the radial component of the velocity normalized to the total velocity at that radius, for the three CGM phases. The CI bulk gas is outflowing almost purely radially from 0.1R vir to R vir . The PI gas, on the other hand, is inflowing from R vir to 0.1R vir , but with a significant tangential component. This is consistent with models for angular momentum transport from the cosmic web to growing galactic disks via cold streams (Danovich et al. 2015). These tangential orbits can be inferred from Figure 2, where a stream can be seen spiralling in towards the central galaxy. Such orbits were not considered by M20b, who only considered purely radial orbits for the streams. We therefore cannot strictly apply the predictions of their model to the stream dynamics within the halo. However, we expect that the model should work reasonably well in the region r > ∼ 0.5R vir , where the orbit is mostly along a straight line before the final inspiral begins. The magnitude of the radial component of the CI interface gas velocity is comparable to that of the PI gas. However, this component experiences both net inflow and outflow intermittently, likely depending on the orientation of the inflowing stream with respect to the outflowing bulk gas.
In the right-hand panel of Figure 14 we show the line-mass (mass-per-unit-length) of the three CGM components as a function of halocentric radius. The line-mass of the PI gas increases by ∼ (5 − 10) percent from R vir to 0.4R vir , comparable to the predictions from the model of M20b. It then proceeds to increase rapidly, growing by more than a factor of 5 during the inspiral phase at r < 0.4R vir . We also note that at all radii, the line-mass of the CI interface gas is ∼ 5 percent of the line-mass of the photo-ionized gas. This implies that the mass flux of hot gas being entrained in the stream is proportional to the stream mass, which is indeed predicted to be the case (eq. 5). This strengthens our association of the PI gas and CI-interface gas with cold streams and the turbulent mixing layers that surround them, respectively.
Suggested 'Inflowing Streams' Model for OVI
Since both substantial components of OVI (PI gas and CI interface gas) are closely linked to the physical phenomenon of inflowing cold streams, as discussed above, we suggest that OVI absorption sightlines in the CGM, and possibly metal absorption spectra more broadly, should be modeled as a three-phase structure following Figure 12. There are three phases to the CGM: the inside of the cool-inflow cones, their interface, and the outside bulk region. While we do not expect the exact temperatures or densities from the VELA simulations to be followed in the real Universe, considering the still relatively unconstrained feedback mechanisms, the structure given here could be used with detections of a variety of ions in a survey such as COS-Halos or CASBaH (Burchett et al. 2019; Prochaska et al. 2019) to fit these physical properties. These streams, which narrow as they approach the galaxy, can be characterized geometrically as 'spiraling cones', with a fit to their number n, their average cross-sectional radius a(r), and their interface size δ(r). Internally, these streams would have a temperature, density, and metallicity which depend on r as well. The properties of these streams will change with redshift, and so could explain some of the differences between the z ∼ 1 data here and the lower-redshift COS-Halos results, including that the streams are expected to get wider as z approaches 0 (Dekel et al. 2009).
Within each phase, we would suggest that each ion density should be fit to a power law with radius, with some exceptions as we will describe. An example of this is shown in Figure 15. Here we see that the total oxygen within the streams, interface and bulk all increase as they approach r = 0, reflecting the increase in both density and metallicity there. In the streams, this increased density ends up lowering the OVI ion fraction so much that the total density of OVI within streams remains nearly constant throughout the CGM. At the same time, the interface layer gas maintains a constant ion fraction as its density increases, since in CI this fraction is nearly density-independent. Therefore, its OVI density increases to become higher than that of the PI gas in the inner halo. Finally, the bulk gas has both a low oxygen density and a low OVI fraction, and is irrelevant throughout the halo. Combining this plot with the volume plot in the right panel of Figure 13, which showed that in the inner halo the interface tends to fill approximately the same volume as the stream it envelops, leads to the conclusions from this paper and RF19 that PI gas is more significant in the outer halo, and that sightlines mostly intersect PI gas outside 0.3 R vir (see Section 4.2). The second threshold (besides the requirement to be within 2 kpc of a PI cell) of n OVI > 10 −13 cm −3 for interface gas is shown here not to be an overly restrictive threshold, as the OVI number density for CI bulk gas is actually generally still higher than 10 −13 cm −3 and that of the interface is significantly higher, so its properties do not come primarily from selection bias. A higher threshold would decrease the cumulative volume in the interface, but otherwise would not significantly change its properties.
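One compact way to encode the proposed three-phase description is to give each phase its own power-law OVI number-density profile; the sketch below does exactly that, with normalisations and slopes that are placeholders (loosely motivated by the behaviour in Figure 15) to be fitted to data, not measured values.

```python
# Sketch: a minimal encoding of the proposed three-phase model, where each
# phase carries its own power-law OVI number-density profile n_OVI(r).
# Normalisations and slopes are placeholders to be fitted to data, loosely
# motivated by Figure 15; they are not measured values.
import numpy as np

PHASES = {
    #  phase      (n_OVI at R_vir [cm^-3],  slope)
    "stream":     (1e-11,  0.0),   # nearly flat within the streams
    "interface":  (1e-11, -1.5),   # rises towards the centre
    "bulk":       (1e-13, -0.5),   # low everywhere
}

def n_ovi(phase, x):
    """OVI number density of a phase at x = r/R_vir."""
    norm, slope = PHASES[phase]
    return norm * np.asarray(x, dtype=float)**slope

x = np.array([0.2, 0.5, 1.0])
for phase in PHASES:
    print(phase, n_ovi(phase, x))
```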
We briefly describe the procedure to fit other ions to this model. Since the temperature change with radius in each phase is significantly smaller than the density change (Figure 13), we would begin by assuming constant temperatures for the bulk, interface, and stream, with characteristic values as determined by Begelman & Fabian (1990); Gronke & Oh (2018); M20a; M20b, and other alternatives. Given those temperatures, we determine, using the procedure of Section 3, whether the ion will be PI, CI, or transitionary. If the ion is determined to be CI, its fraction (under the assumption of constant temperature) will be constant, and its density will therefore be a constant times the phase density, unless this is lower than the critical CI density, in which case we will instead assume the CI contribution is zero (see Figure 8). If the ion is determined to be PI, its ionization fraction f X i (i.e. the fraction of the ith state of atom X) can be simplified by assuming a broken power law, with a positive power in f − n space below some density, an approximately flat region, and then a negative power in f − n space above some density. This decomposition is justified by examining the PI ions in Figure 3, and images like it at other temperatures. If the breaks between these multiple power laws do not overlap with the density ranges within the three phases (≈ 2 dex, depending on which phase is under discussion), the ion number density itself will follow a power law of the phase density and metallicity, where n X i is the number density of that ion state in that phase, Z is the metallicity, and this relation will apply in any of the stream, interface, or bulk phases of the CGM.

Figure 15. Radial profiles of OVI properties in the three OVI phases in the CGM of VELA07 at z = 1. Line colours are as in Figure 13. Left: Total oxygen number density, determined by total density, total metallicity, and overall oxygen abundance. Centre: OVI ion fraction within each phase. Right: OVI number density within each phase, which is in effect the product of the previous two panels.

Figure 16. CIV and NeVIII in the same slice as Figure 6, but with a slightly lower dynamic range, reflecting carbon and neon's lower abundances compared to oxygen. The same three phases, including the thin interfaces, are visible in these other ions.
On the other hand, if the breaks between the power laws do overlap with the density ranges, the function will be much more complicated and probably cannot be well modeled. If the ion is at a 'transitionary' temperature, it can be modeled as a broken power law with four segments, adding an additional flat curve at high density. This does not change the procedure, except to increase the likelihood that the model will break down due to the additional power law break. In this picture, OVI is unique only in that its line of distinction between PI and CI mechanisms, as we defined in Section 3, happens to coincide with the temperature distinction between the streams and their interfaces. We have shown in Sections 5.1 and 5.2, by comparing the phases defined by the OVI ionization mechanisms to theoretical studies of cold streams and their properties, that these inflowing streams are identifiable with the regions of PI OVI. There is no reason to believe that other commonly-observed ions should have a meaningful CI boundary layer on the edge of PI clouds, or indeed that they are PI within the cold streams, and CI outside of them. We show two other ions in Figure 16 as an example, one of which (CIV) has a lower ionization energy than OVI while the other (NeVIII) has a higher ionization energy. CIV appears here to be even more negligible outside the cold streams, and within the streams falls off more strongly with radius (see the top right and bottom right clouds within the slice). The interface layers have lower CIV density (green, instead of blue), as opposed to comparable or higher OVI density. On the other hand, NeVIII is not localized to the streams at all, but rather has a higher density in the bulk material, and is highlighted in the interface layer in particular, which has a higher NeVIII density than either the bulk or the stream. The fact that the same streams and interface layers identified in OVI are also visible in NeVIII, though with totally different relative ion densities, is further evidence that the stream interface layers are a real phenomenon in the simulation, even though they were detected using the definition of the OVI CI-PI cutoff and not their other physical properties. This dependence is summarized in the following list, which shows how this model can lead to vastly different distributions throughout the CGM for similar (e.g. lithium-like, or containing three electrons) ions.
• For medium-ion states (e.g. CIV), we have n stream ≫ n interface ∼ n bulk .

• For mid-to-high ion states (e.g. OVI), we have n stream ∼ n interface ≫ n bulk .

• For high-ion states (e.g. NeVIII), we have n interface ≫ n bulk ∼ n stream .

We do not here include a prediction for low ion states. While these could be fit to this model, they are likely subject to resolution limits (Hummels et al. 2019), so small clouds which are not produced in VELA could form a substantial contribution. The testable predictions of this model are that gas in mid-level ions resides in the inflows and can be detected all the way to the outer halo and beyond, while high ions (NeVIII) are significant throughout the bulk, and not strongly correlated with HI.
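To make the fitting recipe above concrete, the sketch below approximates a PI ion fraction as a broken power law in density and combines it with the phase density and metallicity; the break densities, slopes, and abundance factor are placeholders, not fitted values.

```python
# Sketch of the fitting recipe above: approximate a PI ion fraction f_Xi(n) as a
# broken power law in density (rising, flat, falling), so that within a phase
# n_Xi = A_X * Z * n * f_Xi(n). Break densities, slopes, and the abundance
# factor A_X are placeholders, not fitted values.
import numpy as np

def f_broken(n, n_lo=1e-5, n_hi=1e-3, f_flat=0.1, p_lo=1.0, p_hi=-2.0):
    """Broken power-law ion fraction: ~n^p_lo below n_lo, flat, ~n^p_hi above n_hi."""
    n = np.asarray(n, dtype=float)
    f = np.full_like(n, f_flat)
    f[n < n_lo] = f_flat * (n[n < n_lo] / n_lo)**p_lo
    f[n > n_hi] = f_flat * (n[n > n_hi] / n_hi)**p_hi
    return f

def n_ion(n, Z, A_X=5e-4):
    """Ion number density within a phase of gas density n and metallicity Z."""
    return A_X * Z * n * f_broken(n)

print(n_ion(n=np.array([1e-6, 1e-4, 1e-2]), Z=0.1))
```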
SUMMARY AND CONCLUSIONS
In this work we study the properties of OVI in the CGM of ∼ 10 12 M halos at z ∼ 1 from the VELA simulations. We introduce a procedure for identifying all ions as photoionized or collisionally ionized, depending on the density, temperature, redshift, and assumed ionizing background, with negligible 'overlap'. In this scheme, low ions convert from PI to CI at lower temperatures than high ions, resulting in large regions where some ions can be PI and others CI simultaneously. We run mock sightlines through the simulations and compare the results with data from observations, and suggest a toy model for use in future work.
The main results of our analysis can be summarized as follows:

• Photoionized cool inflows: PI OVI is found entirely within filamentary cool inflows from outside the CGM. While they fill only a tiny fraction of the CGM volume, most of the OVI in the CGM is located inside them.

• Collisionally Ionized Interface Layer: The cool inflows have a warm-hot thin interface layer, which is the primary source of CI OVI.
• Low-density Collisionally Ionized Bulk: By volume, the bulk of the CGM is CI. However, this phase is too hot to have a significant OVI component at all, and is negligible in terms of total OVI mass. This results in undetectably low column densities outside of the inner halo.

• OVI sightlines are mostly PI in the outer halo of massive galaxies at z ∼ 1: Since sightlines naturally represent the outer halo more than the inner, the cool-inflows structure above leads to OVI column densities being dominated by PI gas for all impact parameters outside 0.2 − 0.3R vir .
• Assumptions of spherical symmetry underestimate OVI median radius: The non-spherical nature of the 'cool inflows' model leads inverse Abel transformations (as in S18) to predict that the median OVI particle is located at around 0.6R vir . However, this is an underestimate compared to the actual distribution of gas. Most OVI is therefore likely located very near, or beyond, the virial radius.
• Inflows characterize the OVI structure of the CGM of massive galaxies at z ∼ 1: We propose a model in which metal absorbers are characterized by their number densities in three distinct phases: inside cool inflows, outside the inflows in the bulk CGM volume, and in an interface layer between these two phases. This geometrical structure is characterized by the characteristic radius of the inflowing stream (which is itself a function of halocentric radius), a(r), and the thickness of the interface layer, δ. This model appears consistent with analytical predictions about the gas distribution from the interaction between cold streams and the hot CGM (M20b).
• OVI is unique in tracing both the stream and interface: While the three-phase cool streams structure we describe here is a general prediction for observations of the CGM, OVI has a PI-CI cutoff which matches the difference between the stream and interface conditions. Future work will apply the same framework for distinguishing PI and CI gas to other ions. We are especially interested in CIV and NeVIII, which are included in the Steidel et al. (2010); Bordoloi et al. (2014) and CASBAH (Burchett et al. 2019; Prochaska et al. 2019) surveys, among others, at redshifts z 1, so their actual column density values could be compared to those in VELA. This would let us further develop this three-density CGM model, n_stream, n_interface, and n_bulk. We will also follow the same idea in other simulations that reach lower redshifts and different mass ranges, but which have good enough resolution to show these cool inflows in the CGM. Finally, a comparison with the new generation of the same VELA galaxies (Ceverino et al. in prep.) will allow us to directly compare the effects of increased feedback on the CGM with the same initial conditions. | 2020-08-28T01:01:17.065Z | 2020-08-27T00:00:00.000 | {
"year": 2020,
"sha1": "09b2d80c982d8ba1502f4381f5068a1d32a86de2",
"oa_license": null,
"oa_url": "https://eprints.ucm.es/id/eprint/67995/1/rocafabrega01preprint.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "09b2d80c982d8ba1502f4381f5068a1d32a86de2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
271483124 | pes2o/s2orc | v3-fos-license | Pretraining of 3D image segmentation models for retinal OCT using denoising-based self-supervised learning
Deep learning algorithms have allowed the automation of segmentation for many biomarkers in retinal OCTs, enabling comprehensive clinical research and precise patient monitoring. These segmentation algorithms predominantly rely on supervised training and specialised segmentation networks, such as U-Nets. However, they require segmentation annotations, which are challenging to collect and require specialized expertise. In this paper, we explore leveraging 3D self-supervised learning based on image restoration techniques that allow pretraining 3D networks with the aim of improving segmentation performance. We test two methods, based on image restoration and denoising. After pretraining on a large 3D OCT dataset, we evaluate our weights by fine-tuning them on two challenging fluid segmentation datasets utilising different amounts of training data. The chosen methods are easy to set up while providing large improvements for fluid segmentation, enabling a reduction in the amount of required annotation or an increase in performance. Overall, the best results were obtained for denoising-based SSL methods, with higher performance on both fluid segmentation datasets as well as faster pretraining durations.
Introduction
Modern imaging modalities such as fundus imaging and optical coherence tomography (OCT) are essential in the field of Ophthalmology, as they allow to precisely study and follow disease progression. OCT in particular enables imaging the cross-section of the retina and obtaining 3D high-resolution volumetric scans. This helps to precisely monitor a series of relevant clinical biomarkers, such as retinal fluid volume or retinal layer thickness, which are crucial for the surveillance of retinal diseases and for the adjustment of patient treatment [1][2][3]. Nevertheless, manual measurements of these biomarkers are particularly cumbersome and automation is required.
The current state-of-the-art solutions for automated biomarker quantification are based on Deep Learning (DL), which has already been shown to achieve high performance in the context of OCT analysis [4,5].These models have high modelling capacity and are able to process complex data such as entire 3D OCTs, and they are already used to automatically monitor retinal biomarkers such as retinal fluid [6] or to facilitate the diagnosis of pathologies by detecting subclinical biomarkers [7,8].For instance, for patients suffering from neovascular Age-related macular degeneration (nAMD), DL-based algorithms allow for reliable quantification of different fluid types, and enable improved tracking of the fluid activity [9].Similarly, in diabetic macular edema (DME) patients [10], automated segmentation allowed the investigation of predictive biomarkers in patients undergoing anti-VEGF treatment against nAMD.
However, current segmentation often relies on supervised learning with 2D U-shaped DL networks [11] (U-Nets), whose training requires large annotated datasets. These annotations are difficult to collect because they require expert knowledge and are very costly and cumbersome to produce.
This relative data-inefficiency limits the performance and hinders the implementation of automated segmentation systems for new biomarkers, new imaging devices or new populations.For classification problems, it is easier to find large pretrained networks exploiting extensive public supervised datasets (for instance ImageNet).However, in our case, two limitations arise: the available datasets are far from the target domain (retinal OCTs) and no pretrained weights are available for segmentation networks.A solution to overcome this problem revolves around self-supervised learning (SSL) [12].SSL provides training mechanisms, which do not require annotations, and allow to pretrain networks or learn relevant representations.Indeed, a lot of unannotated OCTs are accumulated in daily clinical care or during clinical studies, and this data can be capitalised on to increase the final performance of segmentation models.
Related work
SSL allows to exploit unlabelled data to pretrain deep learning models or to learn representations and there are multiple strategies to implement it.Namely, a lot of effort has been made recently in the field of contrastive learning.These methods exploit multiple views of the same samples to build robust representations and excel at learning discriminating features.SimCLR [13], MoCo [14] or VICReg [15] are examples of relevant state-of-the-art contrastive-learning methods.However, as they often only pretrain an encoder network and rely on Siamese or more complex structures, which have an important computational overhead, they are less suitable to pretrain models for 3D segmentation.
On the other hand, information-restoration SSL consists in retrieving some information from transformed input samples.The whole input or just some parameters associated with the transformations can be restored.When the whole image or volume input is restored, it allows to train a U-Net, which weights can then be transferred (by fine-tuning) to segmentation tasks.Typical transformations include inpainting, rotation, solving jigsaw or retrieving colourisation [16][17][18][19].Moreover, denoising autoencoders [20] have recently regained attention thanks to diffusion generative models [21], which can be implemented as a set of denoising tasks.The fundamental objective of these methods is to implicitly approximate the underlying distribution generating a dataset.Although interesting, these generative models are usually difficult to train, and require large computing capability to be applied on complex 3D images.In a simplified setting, denoising can also be used to extend pretrained "off the shelf" encoders into full segmentation networks [22].
The rarity and complexity of annotations in medical imaging have encouraged the application of SSL, and it has shown great potential.For instance, with a "Rubik's cube" pretext task in [23], the authors were able to improve brain CT classification and brain MRI segmentation tasks.A transformer-based model, trained as a masked autoencoder [24], was able to improve a large set of classification tasks in [25].In [26], authors used an adapted contrastive method, to pretrain a model, which exhibited improved performance on Dermatology and Chest X-ray classification tasks.Other works have focused on longitudinal modeling to improve prediction tasks, as for instance [27], in which a modified temporally-informed non-contrastive loss was applied.
In addition to the applications focused on classification tasks, some works have specifically targeted medical segmentation with self-supervised learning. Most of these methods rely on various forms of image restoration tasks, which allow pretraining U-Net networks. For instance, in [28], a network is pretrained to restore images corrupted by multiple patch swaps and evaluated on brain tumor segmentation, where this pretraining improves performance. Similarly, Model-Genesis [29] extended the image restoration idea by employing multiple corruption methods targeting local and global patterns as well as intensity distributions before performing image restoration, which allowed the improvement of prediction and segmentation tasks on multiple 3D image modalities. They demonstrated that SSL pretraining allows to accelerate training or to reduce the amount of annotated data for downstream tasks. With the goal of performing few-shot segmentation of CT and MRI images, the work of [30] successfully devised a self-supervised technique based on superpixel pseudo-labels. Another approach, mixing contrastive learning and restoration SSL [31], allows to pretrain a 3D transformer network to improve segmentation performance on CT and MRI images. However, the contrastive task, the rotation prediction and the inpainting tasks make it a very memory-heavy architecture, which can make experimentation and hyperparameter tuning difficult. Although these works do not cover the retinal OCT modality, they indicate promising applicability for 3D OCT segmentation.
Contribution In this work, we focus on SSL and pretraining models on 3D OCTs to improve downstream segmentation tasks.Specifically, we explore two image-restoration methods based on different corruption tasks.An advantage of these methods is that they provide pretrained weights for segmentation networks (U-Net or encoder-decoder type) and use a single pretraining loss, facilitating the hyperparameter search.Image-restoration also allows us to work on patches which simplifies the processing of 3D OCTs.Therefore, we rely on two SSL methodologies (Fig. 1), the first one, Denoise, is based on an image denoising task and the second one, MultiTask, is based on a set of image reconstruction tasks.Additionally, we explore hybrid approaches where a pretrained encoder is already available, from "off-the-shelf" weights such as from the supervised video dataset Kinetics [32].
We tested these methods on a large in-house 3D OCT dataset (more than 70'000 volumetric OCTs), and for each method, we explored two alternatives: pretraining the entire U-Net, i.e., both the encoder and the decoder, or only the decoder paired with an encoder pretrained with off-the-shelf weights.These four models were evaluated on two challenging datasets from two typical population for retinal fluid segmentation: Choroidal Neovascularization (CNV) and DME groups with an in-house dataset for CNV (Fluid-CNV dataset) and a public one for DME (UMN-DME dataset).
Methods
Our goal is to effectively pretrain deep learning models using SSL before fine-tuning their weights to excel at segmentation tasks. We rely on two SSL methodologies (Fig. 1): Denoise and MultiTask. In this section, we describe both methods, how their training loss is computed, and the different transformation functions used during training.
Denoise pretraining
The first approach to pretraining is inspired by [22], and consists of a denoising-based self-supervised method. Unlike the original paper, where the authors focus on extending a pretrained ImageNet encoder to a U-Net by adding a decoder and pretraining it with SSL, we pretrain the whole network with the SSL method. The task consists of separating the added Gaussian noise from the original input. By performing this task, the goal is to learn features that can separate the relevant patterns or structures of the OCT from the added noise.
In this approach, the original input OCT x is modified by adding Gaussian noise ϵ, whose scale is controlled by the scalar parameter σ, and the model tries to extract the noise ϵ from the input image by minimizing the L_2 loss between the added noise and the predicted noise, in contrast to predicting the original input as in classical denoising.
We predict noise instead of the original input following a modification inspired by denoising diffusion models, which is shown to improve the results [22].
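A minimal sketch of one such pretraining step is shown below (our own illustration using PyTorch; the network, patch shapes, and optimizer are placeholders rather than the exact configuration used here). The key point is that the regression target is the added noise ϵ, not the clean volume.

```python
import torch
import torch.nn as nn

def denoise_pretrain_step(model: nn.Module, x: torch.Tensor,
                          optimizer: torch.optim.Optimizer, sigma: float = 0.22):
    """One self-supervised step: corrupt x with Gaussian noise and predict the noise.

    x is a batch of OCT patches, e.g. of shape (B, 1, D, H, W). The loss is the
    L2 distance between the added and the predicted noise, rather than between
    a denoised output and the original input.
    """
    noise = sigma * torch.randn_like(x)   # epsilon, scaled by sigma
    noisy_x = x + noise                   # corrupted input volume
    pred_noise = model(noisy_x)           # network predicts the added noise
    loss = nn.functional.mse_loss(pred_noise, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```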
MultiTask pretraining
This approach is a generalized form of image restoration SSL, based on Model-Genesis [29]. It consists of learning to restore an image corrupted with a series of transforms carefully chosen to target different properties in the training images. The general pipeline is the following: let x be the input image from the unlabelled dataset; x is transformed through a series of corruption methods T_i. This corrupted image x_i is fed as input to an image-to-image network (U-Net-like) f_θ, which tries to reconstruct the original input with x_r. The network f_θ is trained by minimizing the L_2 loss between the original and the reconstructed image, L(θ) = ‖x_r − x‖²_2 with x_r = f_θ(x_i). The chosen corruption transforms are applied in this order: non-linear intensity change, local voxel shuffling, inpainting and outpainting (Fig. 2). All these transformations allow to learn relevant representations, which are both robust to a wide range of transformations but also capture low-level information, through the ability to perform local reconstruction, and high-level information by restoring the general structure of the retina. The exact values of the parameters are given in the next section (4.4).
Inpaint: Multiple 3D boxes from the volume are filled with a random value. The size of the box is chosen randomly within a predefined range and the filling value is selected from a uniform distribution within the image intensity range. Inpainting allows to break large structures present in the retina such as retinal layers, allowing to learn features capturing the general structure of the retina by restoring structures such as retinal layers.
Outpaint: The external areas of the OCT volume are masked, by using a superposition of large volume masks which are inverted to only keep the internal area. The size of the masks is selected randomly within a predefined range. Since the volumes are cropped during pretraining, the transformation aims at removing external context, forcing the representations to capture it from local information.
Local voxel shuffle: The voxels are shuffled within a small volume. The size of the volume and its locations are selected randomly within a specified range. The transformation disturbs small structures and degrades fine-grained details; this allows the features to encode texture information more robustly, which is helpful to distinguish anomalous areas such as fluid pockets.
Non-linear intensity change: The distribution of intensities is remapped using cubic Bézier curves. In our experiments, we limit the shape of the transformation to avoid inverted intensities by fixing two out of four control points (endpoints). The two remaining control parameters are randomly selected within the intensity range ([I_min, I_max]²) for each volume. This transformation increases the invariance to image intensity variability.
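As an illustration of how one of these corruptions can be implemented, the sketch below shuffles voxels inside small random boxes of a 3D volume; the block count and size range are assumed values, not the exact parameters used in the experiments, and the volume is assumed to be larger than the maximum block size.

```python
import numpy as np

def local_voxel_shuffle(volume: np.ndarray, n_blocks: int = 20,
                        max_block: int = 8, rng=None) -> np.ndarray:
    """Shuffle voxels inside small random sub-volumes (sketch of one corruption)."""
    rng = np.random.default_rng() if rng is None else rng
    out = volume.copy()
    d, h, w = volume.shape
    for _ in range(n_blocks):
        bd, bh, bw = rng.integers(2, max_block + 1, size=3)   # random block size
        z = rng.integers(0, d - bd + 1)                       # random block corner
        y = rng.integers(0, h - bh + 1)
        x = rng.integers(0, w - bw + 1)
        block = out[z:z + bd, y:y + bh, x:x + bw]
        # Permute the voxels inside the block, destroying fine texture locally.
        out[z:z + bd, y:y + bh, x:x + bw] = rng.permutation(
            block.reshape(-1)).reshape(block.shape)
    return out
```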
Decoder-only pretraining
For each SSL method (Denoise and MultiTask), we explored an alternative: loading pretrained weights (from the Kinetics dataset) into the encoder and freezing it, resulting in pretraining only the decoder on OCTs. Two questions arise: why freeze the encoder, and why use non-domain-specific Kinetics weights? First, this solution is inspired by [22] and it can be of value since it allows to accelerate the pretraining and to benefit from supervised classification datasets if they are available. Second, an ideal solution would be utilizing an encoder from the same domain as the training data, but in our case, large supervised 3D OCT datasets were not available, so we adopted encoders pretrained on the Kinetics dataset [32], which were available for our encoder architecture. Such models are denoted with the suffix -D in our experiments, and the models which are fully pretrained have the -ED suffix.
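A sketch of this hybrid setting is given below; the attribute name `encoder` and the way the off-the-shelf weights are provided are assumptions about the implementation, not its actual interface.

```python
import torch.nn as nn

def load_and_freeze_encoder(unet: nn.Module, encoder_state: dict) -> nn.Module:
    """Load off-the-shelf encoder weights (e.g. Kinetics-pretrained) and freeze them,
    so that SSL pretraining only updates the decoder (the '-D' setting).

    Assumes the U-Net exposes its encoder as `unet.encoder`; this attribute name
    is illustrative and depends on the actual model definition.
    """
    unet.encoder.load_state_dict(encoder_state)
    for param in unet.encoder.parameters():
        param.requires_grad = False   # no gradients are computed for the encoder
    return unet
```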
Downstream tasks
To evaluate the pretraining strategies, we focus on the 3D OCT segmentation of different types of retinal fluids, which is both challenging and practically useful for clinical research and management of patients with exudative retinal diseases. Indeed, retinal fluid can appear in different pathologies, namely in the following prevalent diseases: Choroidal Neovascularization (CNV), Diabetic Macular Edema (DME), or Retinal Vein Occlusion (RVO). CNV is a late stage of Age-related Macular Degeneration (AMD), where leakage from new vessels leads to fluid pockets in the retina. On the other hand, in DME patients, leakages are caused by damage to the blood-retinal barrier and are a consequence of diabetes. Fluid pockets are further classified depending on their position, and we focus on the two main ones: Intraretinal Cystoid Fluid (IRC) and Subretinal Fluid (SRF). We selected two datasets to perform this segmentation of IRC and SRF fluids.
Datasets
The overview of the datasets used in the experiments is provided in Table 1.
OCT-SSL:
The self-supervised dataset, denoted OCT-SSL, consists of 71680 OCTs from 10156 eyes imaged with the Spectralis scanner (Heidelberg Engineering, DE). The dataset is built from data coming from the imaging repository of OPTIMA Lab, consisting of scans from clinical studies or routine clinical care. The vast majority of imaged patients suffered from AMD, 58% with CNV and 29% with geographic atrophy (GA), as well as a small number of patients with DME and RVO.
All patients gave informed consent prior to inclusion in the respective studies.This retrospective analysis was approved by the Ethics Committee at MedUni Wien (EK Nr: 1246/2016).All study procedures were conducted in accordance with the Declaration of Helsinki, and all the patient data were pseudonymized.The dataset is split into three subsets for training, validation and test, with ratios of 90/5/5%.It was assured that the pretraining dataset does not overlap with the datasets used for the downstream target segmentation task, which are explained next.
Fluid-CNV: This in-house dataset comprises 84 OCT scans from 84 eyes of patients diagnosed with CNV. OCTs have 49 BScans of size (512 x 1024 px). IRC and SRF subtypes were manually annotated separately. The dataset is split into 5 folds at a patient level to perform cross-validation. All OCTs were acquired with a Spectralis scanner (Heidelberg Engineering, DE).
Fluid-CNV reduced dataset: To evaluate the performance of the models with limited annotations, we created a reduced version of the dataset, where the training and validation sets are randomly reduced to 20, 40, 60 and 80% of the original size. The original five test sets from cross-validation are kept unchanged to be able to compare between different settings.
UMN-DME dataset:
We tested our pretrained models on a public dataset: the DME OCT dataset, presented in [33] and published by the University of Minnesota. The dataset consists of 29 OCT scans from 29 patients with DME, where the SRF fluid was annotated on each scan by two expert clinicians. Each scan has 25 BScans of size (496×1024 px). The scans were acquired with a Spectralis scanner (Heidelberg Engineering, DE). We also performed a 5-fold cross-validation at patient level.
UMN-DME reduced dataset: Additionally, we create a version of the dataset with reduced training data, where we kept only 50% of the training and validation folds while the test folds remained unchanged. Given the small number of patients, this reduced version of the dataset is especially difficult to segment in 3D.
Segmentation evaluation metrics
For our downstream segmentation experiments, we evaluated the results using two metrics. The first metric is the Dice Score, which measures twice the intersection of the ground truth and the prediction over the sum of the areas of both masks. The second metric is the Absolute Volume Difference (AVD), which is relevant in our case, where the fluid volume is a clinically important biomarker.
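For reference, both metrics can be computed on binary masks as in the following sketch (our own illustration; the voxel-to-volume conversion factor depends on the scan spacing and is a placeholder here).

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2*|pred ∩ gt| / (|pred| + |gt|), computed on binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return float((2.0 * np.logical_and(pred, gt).sum() + eps)
                 / (pred.sum() + gt.sum() + eps))

def absolute_volume_difference(pred: np.ndarray, gt: np.ndarray,
                               voxel_volume_mm3: float = 1.0) -> float:
    """AVD: absolute difference between predicted and reference fluid volumes.

    `voxel_volume_mm3` converts voxel counts into physical volume; its value
    depends on the scan spacing and 1.0 is only a placeholder.
    """
    return abs(float(pred.sum()) - float(gt.sum())) * voxel_volume_mm3
```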
Deep learning setup and preprocessing
Preprocessing and OCT flattening: All OCTs, for pretraining or downstream tasks, were flattened along Bruch's membrane using automated segmentation methods [34][35][36] and [37].The vertical dimension is cropped to a height of 224, and the horizontal dimension resized to 512 and all the BScans are kept for pretraining.The vertical crop is extended for segmentation with a window of 256 x 512 pixels in order to capture the whole retina in extreme cases, e.g., retinal swelling due to the presence of large fluid pockets.The cropping window is selected with a fixed offset with respect to the Bruch's membrane.Lastly, the voxel intensities were normalized to a zero mean and unit standard deviation using the statistics of the training set of the OCT-SSL dataset.
Self-supervised learning: the network is optimized using Mean Squared Error (MSE) loss for both MultiTask and Denoise setups.We used a custom network combining a 3D U-Net [38], with a 3D ResNet18 encoder [39].The epoch with the lowest validation MSE is saved.The OCT volumes are randomly cropped for all pretraining methods to the volumetric shape (128x128) and 48 BScans to allow a reasonable batch size during training.We used the optimizer AdamW with a batch size of 24.We trained with a fixed learning rate of 0.0001.The pretraining experiments are run on a single Nvidia A100 GPU.
Downstream segmentation: the pretrained U-Net is fine-tuned on the downstream segmentation task. The final convolutional layer is replaced with a new convolutional layer with one or two output feature maps followed by a sigmoid activation layer. For both datasets, the network is trained with a mix (equal contribution) of binary cross-entropy and Dice loss; for the Dice loss, the background was ignored. When there are two fluid classes, they are treated as two binary problems instead of considering a multi-class setting, which was giving lower performance. For the first 8 epochs, only the last convolutional layer is trained, in order to initialize it; then the whole network is fine-tuned for 512 epochs. We tested different learning rates ranging from 0.001 to 0.00001 and the best setting was selected using the validation Dice score.
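A sketch of the combined loss is given below (our own illustration); the equal weighting follows the description above, while the reduction details are assumptions. In the two-stage schedule, only the new output layer would be optimized for the first 8 epochs before unfreezing the rest of the network.

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """Equal-weight combination of binary cross-entropy and foreground Dice loss."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.bce = nn.BCELoss()
        self.eps = eps

    def forward(self, prob: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # prob: sigmoid outputs in [0, 1], same shape as the binary target mask.
        bce = self.bce(prob, target)
        inter = (prob * target).sum()
        dice = 1.0 - (2.0 * inter + self.eps) / (prob.sum() + target.sum() + self.eps)
        return 0.5 * bce + 0.5 * dice
```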
Denoise transform parameters: Following [22] and our initial experiments, Denoise models are trained with a noise scale σ of 0.22.
Pretraining performance
We present here the results from the pretraining using two different pretraining methods: MultiTask and Denoise.For both methods, under the Encoder-Decoder (ED) and Decoder-only (D) setting, training losses decreased rapidly and saturated after a few dozen epochs.Overall, Denoise pretraining was approximately 50% shorter than MultiTask and hybrid pretraining reduced the pretraining time by 13% for Denoise and by 8% for MultiTask.However, hybrid models (-D) still reached a higher MSE compared to their fully pretrained counterparts (-ED).This was expected since hybrid methods have less trainable parameters.The training curves are displayed in Fig. 3. Examples of reconstruction performed by different approaches are shown in Fig. 4. In this example for the MultiTask-ED method (Fig. 4 second column), we can notice an improvement in the reconstruction of the external limiting membrane (ELM) (see yellow arrow), from an area that was completely masked in the input, compared to its MultiTask-D counterpart when the encoder was already pretrained and frozen.Similarly, for Denoise pretraining, the ED version demonstrates better noise separation around the thin structure of the ELM (blue arrows), which appears sharper after denoising.
Downstream performance on fluid segmentation in CNV
Full dataset: We evaluate four different pretrained models on fluid segmentation for IRC and SRF fluid compartments against a model trained from scratch.Results are displayed in Table 2 and in Fig. 5.We observe that pretraining always improves results for both metrics, with the best results obtained by Denoise-ED pretraining.The overall best Dice for IRC was 0.754 ± 0.024 and 0.819 ± 0.049 for SRF fluid.For IRC fluid, all pretrained models obtained significantly higher Dice score compared to trained from scratch.In terms of AVD, significant improvement is observed for IRC fluid for all models except MultiTask-ED.
For SRF fluid, all pretrained models (MultiTask models and Denoise-ED) except Denoise-D reached a significantly higher (p<0.05) Dice score, and AVD was also lowered. In general, the hybrid model MultiTask-ED gave the smallest gain compared to training from scratch.
Reduced dataset: We evaluated the same models with reduced amounts of fine-tuning data, ranging from 20% to 80% of the original training data.The performance for each metric with respect to the amount of training data is shown in Fig. 6.First, as expected, we can observe that performance increases when more training data is available, however for all data settings, improvement from pretraining was substantial.Overall, the best results are obtained with Denoise-ED method, sometimes paralleled by the hybrid version (Denoise-D).Under the most data-constrained setting (20%), pretraining allowed an improvement of Dice score of 22% for IRC and 6% for SRF.Moreover, as shown in the figures (Fig. 6 dashed line), the best performance from a model trained from scratch can already be achieved with 40 to 60% of the training data, depending on the evaluation metric.For this dataset, pretraining allows to effectively reduce the amount of annotation or to increase the maximal performance.Detailed numerical results are provided in the Annex (Table 4), together with qualitative results of segmentations (Fig. 7 and Fig. 9).
Downstream performance on fluid segmentation in DME
In this dataset, only SRF is considered, and we included the experiment with the limited training data (50%). With the entire dataset, we can observe a clear benefit from pretraining the models, with a significant gain in Dice score (p<0.01) compared to the model trained from scratch. Pretrained models achieved similar results, with the best performance by Denoise-ED pretraining for Dice score and AVD. All pretrained models improved the SRF Dice score significantly over training from scratch, but the drop in AVD was more limited. Results are listed in Table 3 and displayed in Fig. 8. Reduced dataset: For the reduced dataset, the gain in Dice score obtained with pretraining becomes even larger. Similarly as with the whole training dataset, the best performing model was Denoise-ED. This model obtains a higher Dice score (0.792 ± 0.078) than the model trained from scratch on the entire dataset (0.744 ± 0.096). Although AVD scores are better with pretraining, the differences are not significant, probably because of a higher sensitivity of the metric to outliers. Results are displayed in Table 3 and Fig. 8.
Conclusion
Segmentation of 3D retinal OCTs has become an important tool in ophthalmology; however, current methods suffer from data inefficiency, as segmentation annotations are difficult and costly to produce. To overcome this problem, we explored denoising- and restoration-based SSL pretraining: we pretrained with these methods on a large unlabelled dataset, then transferred the weights with fine-tuning on two fluid segmentation tasks. In our experiments, we could observe that denoising-based SSL was the better strategy. Indeed, while being faster to pretrain and requiring less hyperparameter tuning (a single parameter), it achieved the best results in the majority of evaluations, outperforming other methods.
For both datasets, we repeated the segmentation experiments in a limited data setting.In these experiments, pretraining showed an even greater improvement in performance over training from scratch.For the segmentation of Fluid-CNV and the UMN-DME datasets, around 50% of the training samples allowed to reach the same performance as when using the full training set.Overall, Denoise-based pretraining enabled increasing the maximal segmentation performance or alternatively, reducing the amount of required manual annotation for a certain level of performance.
Contrary to Denoise pretraining, we could observe in our experiments that MultiTask methods had several limitations.The pretraining duration was significantly extended, and involved more hyperparameters (transformations) compared to Denoise (single parameter).Finally, although MultiTask pretraining enhanced performance over scratch training, gains were limited relative to Denoise models.It seemed denoising was a more relevant corruption task than the mix of tasks in MultiTask.The latter could suffer from a wrong balance between the different tasks, which is difficult to correct given the number of hyperparameters.Moreover, denoising is at the heart of other successful methods, such as denoising diffusion models [21] or regularisation problems [40], which proves its versatility and the ability to learn powerful representations with this simple task.
We tested a form of hybrid pretraining, where we used pretrained encoders and extended them to fully pretrained segmentation networks.These models only slightly under-performed their fully-pretrained counterparts on both tasks, despite a reduction in the pretraining time.This could constitute a convenient trade-off if training resources are limited.Moreover, our hybrid models were based on Kinetics encoders, which are expected to be suboptimal for retinal analysis, therefore, for future experiments, it would be relevant to include encoders trained directly on OCTs.
During our study, we limited the size of our main network (3D U-net with a 3D ResNet18 encoder) to facilitate the heavy computational pretraining, which could have limited the final performance.Indeed, to fully exploit 3D pretraining on our large dataset, as future work it would be of interest to repeat these extensive experiments with larger 3D segmentation networks, or even with Transformer-based networks.Although 3D U-Nets are still relevant and keep being the state of the art for medical image segmentation [41], this could also allow us to confirm that the observed gains are present with other types of network architectures.
In conclusion, image restoration SSL, especially based on denoising, allows to effectively pretrain 3D segmentation networks for OCT segmentation. While being easy to set up, it leads to improved downstream performance or enables reducing the amount of required annotation work.
Fig. 1 .
Fig. 1. General pretraining architectures and setup for Denoise and MultiTask self-supervised learning settings, with a transfer to the downstream segmentation task.
Fig. 2 .
Fig. 2. Example of the transformations applied to OCT volumes in MultiTask pretraining. The original OCT volume (left) is transformed randomly through four operations: inpainting, outpainting, local voxel shuffling and non-linear intensity shifts (NLIS).
Fig. 3 .
Fig. 3. Validation MSE during the pretraining of Denoise (left) and MultiTask (right) methods. Each method has the ED (encoder-decoder) version, where the whole network is pretrained, and the hybrid version D, where the encoder is frozen. The total training time is also displayed for each setting (single Nvidia A100 GPU).
Fig. 4 .
Fig. 4. Examples of MultiTask or Denoise pretraining cases: the upper row represents the input volume and the lower row the reconstructed images. (*) For denoising, as the noise is predicted, we display the input with the predicted noise subtracted to get the denoised image for the sake of visualization.
Table 2 .
Segmentation results on the Fluid-CNV dataset, with Dice Score and Absolute Volume Difference (AVD) for IRC and SRF classes.We report mean value ± standard deviation across folds.Pretrained models are compared against the ones without pretraining and the statistical significance is represented with asterisks (* p-value < 0.05, ** p-value < 0.01, *** p-value < 0.001).
Fig. 5 .
Fig. 5. Segmentation performance on Fluid-CNV dataset, with Absolute Volume Difference (AVD) (first row) and Dice Score (second row) of the four models for two fluid classes SRF and IRC.
Fig. 6 .
Fig. 6. Segmentation performance on the Fluid-CNV dataset, with Dice Score and Absolute Volume Difference (AVD) of the four models for two fluid classes, IRC and SRF. The models are fine-tuned with the amount of training data ranging from 20% to 100%. The dotted line represents the performance obtained with no pretraining and 100% of the data.
Fig. 7 .Table 3 .
Fig. 7. Segmentation example on the Fluid-CNV datasets.Each line corresponds to a certain amount of training data from 20-80%, with the segmentations from the five different models.True positives are displayed in green, false positives in blue, and false negatives in red.Table 3. Segmentation results on the UMN-DME dataset, with Dice Score and Absolute Volume Difference (AVD) for SRF, for the entire dataset and its reduced version (50%).We report mean value ± standard deviation across folds.Pretrained models are compared against the ones without pretraining and the statistical significance is represented with asterisks (* p-value < 0.05, ** p-value < 0.01, *** p-value < 0.001).
Fig. 8 .
Fig. 8. Segmentation performance on the UMN-DME dataset, with Dice Score and Absolute Volume Difference (AVD) for five models for SRF fluid segmentation. The models are fine-tuned with 50% and 100% of the data.
Fig. 9 .
Fig. 9. Example of a segmentation on the Fluid-CNV and UMN-DME datasets. The ground truth (yellow) is displayed first, then the segmentation from the different models. True positives are displayed in green, false positives in blue, and false negatives in red. | 2024-07-27T15:23:00.662Z | 2024-07-25T00:00:00.000 | {
"year": 2024,
"sha1": "32da13007a10e7e87b55ca16f13e8ec7a52f2d5f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.524603",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9cee68860bcf1ba2c43968317b459414b898e6e4",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237386406 | pes2o/s2orc | v3-fos-license | Global Convolutional Neural Processes
The ability to deal with uncertainty in machine learning models has become equally, if not more, crucial to their predictive ability itself. For instance, during the pandemic, governmental policies and personal decisions are constantly made around uncertainties. Targeting this, Neural Process Families (NPFs) have recently shone a light on prediction with uncertainties by bridging Gaussian processes and neural networks. Latent neural process, a member of NPF, is believed to be capable of modelling the uncertainty on certain points (local uncertainty) as well as the general function priors (global uncertainties). Nonetheless, some critical questions remain unresolved, such as a formal definition of global uncertainties, the causality behind global uncertainties, and the manipulation of global uncertainties for generative models. Regarding this, we build a member GloBal Convolutional Neural Process(GBCoNP) that achieves the SOTA log-likelihood in latent NPFs. It designs a global uncertainty representation p(z), which is an aggregation on a discretized input space. The causal effect between the degree of global uncertainty and the intra-task diversity is discussed. The learnt prior is analyzed on a variety of scenarios, including 1D, 2D, and a newly proposed spatial-temporal COVID dataset. Our manipulation of the global uncertainty not only achieves generating the desired samples to tackle few-shot learning, but also enables the probability evaluation on the functional priors.
I. INTRODUCTION
In recent years, machine learning, especially deep learning, has shown massive success on a range of prediction tasks, such as time-series forecasting [1], geographical and spatialtemporal inference in medical science, engineering, and finance domains [2]. Nonetheless, the uncertainty of machine learning models is less considered than model predictions themselves. When the existing knowledge of a task is not abundant to present a deterministic prediction, uncertainty provides a reasonable guess interval that includes major possibilities of the predictions. In fact, uncertainty can be as equally important as the prediction capability of models, considering that it increases as the expansion of underlying factors. For example, when estimating the life expectancy of a mechanical part, it is more feasible to estimate an approximate timerange than to predict an exact time stamp at which to discard the part as there are more uncontrollable variants in the long term. Besides, modelling the uncertainty helps increase models' tolerance to task diversities in datasets. For instance, when exploring substances from satellite images of a planet, modelling the uncertainty helps incorporate heterogeneous substance candidates that can not be confidently distinguished from the ground truth. Uncertainty is crucially important for predictions related to the pandemic-it determines how much we could trust the classification results of COVID-19 cases [3], the anticipated virus spreading trends [4], and assessment of lockdown policies [5], all of which have a vital impact on the daily lives of many people.
Neural process families (NPF) recently shone a light on predictions with uncertainties by bridging Gaussian processes (GPs) and neural networks. Inherited from Gaussian processes, they make predictions under a joint of correlated normal distributions and present each prediction along with a confidence interval,i.e, uncertainty (See Fig 1). As neural networks show competence in deep feature representation, NPFs advance GPs in modelling complicated functions efficiently. In addition, NPFs are suitable for solving meta-learning tasks, where each task is sampled from a distribution of functions instead of a single function. We illustrate predictions with uncertainties using the examples shown in Fig 1. Given a context set with a cluster of observable data (x C , y C ) := (x i , y i ) i∈C , an encoder network infers a function f that can generate the context set. Then, a decoder network uses the function to make predictions with uncertainties on a target set (x T , y T ) := (x i , y i ) i∈T , where only the locations x T are unveiled. The resulting outputs follow a normal distribution p(y T |x T , f ) = N (µ y , σ 2 y ). We call the standard deviation on local target points σ y T "local uncertainty" in contrast to the global uncertainty to be introduced in latent neural processes later. Eventually, NPF optimizes the parameters in neural nets by maximizing the likelihood of actual target values y T . Latent neural processes [6] hypothesize that the encoded function should not come from a deterministic vector but rather a distribution f ∼ p(f ). As shown in Fig 1, two descent function samples f 1 and f 2 with periodic differences can be generated using the same context set. They both represent the local uncertainty on the target points but obviously have different priors, meaning there exists another uncertainty that determines this general prior, which we call "global uncertainty". Despite some previous efforts [7] We propose a GloBal Convolutional Neural Processes (GB-CoNP) that make predictions with uncertainties to address the above challenges. GBCoNP defines global uncertainty as the learnt posterior of a functional distribution conditioned on a small context set q(z|C). This formalization enables comparisons among datasets and different latent models, including the causal-effect between the intra-task diversity and global uncertainty. It further enables us to discover and edit insightful semantic features with regards to global uncertainty during sample generations. Finally, we evaluate the log-likelihood of GBCoNP with peers on extensive 1D and 2D datasets, and propose a COVID dataset using spatial-temporal uncertainty prediction. This case study is expected to enhance our understanding of the patterns in virus spread and benefit the research community. Our major contributions are three folds: • A new discretized space for global uncertainty projection that is suitable for out-of-range prediction while maintaining the shared global prior. • A causal-effect analysis of the global uncertainty, which is seldom discussed in previous research. Our analysis reveals dataset characteristics, such as intra-task diversity, can depict the stochasticity. • Manipulation of the global uncertainty that empowers sampling with priors. We novelly generalize the applications to a high-dimensional spatial-temporal scenario.
A. Global Uncertainty in Latent Neural Processes
We follow the notations of [7] and denote a context set by (x_C, y_C) := (x_i, y_i)_{i∈C}, where both the inputs and predictions are given in a meta-regression task. The context set defines a sample from a functional distribution. The objective is to predict on the target inputs x_T using this function sample and maximize the likelihood of y_T if the target set (x_T, y_T) := (x_i, y_i)_{i∈T} comes from the same function sample. Similar to conditional variational autoencoders (CVAE), a latent NP models the prediction with a conditional distribution (1):
p(y_T | x_T, x_C, y_C) = ∫ p(y_T | x_T, r_C, z) p(z) dz, (1)
where r_C := r(x_C, y_C) is a neural network that captures the deterministic part of the functional sample. The prior latent distribution p(z) := N(µ_z, σ_z²) captures the global uncertainty of the functional sample. As shown in Fig 2(a), r_C and z form the encoder. The decoder network takes the encoded functional condition (r_C, z) and the target inputs x_T to make predictions of y_T. Depending on the inductive biases imposed over the relationships within the context set and between context and target sets, different neural network structures can be adopted for encoders, such as equally weighted (neural processes [6]) and attentively weighted aggregation (attentive neural processes [7]).
Since this prior knowledge of the functional distribution, i.e., the global uncertainty p(z), is intractable, previous studies turn to an amortised variational posterior p(z) = N(µ_z, σ_z²) ≈ q(z|r_C) = q(z|x_C, y_C) for inference. They typically pass r_C through an MLP to get the distributional parameters and then optimize (1) with a differentiable network:
p(y_T | x_T, x_C, y_C) ≈ ∫ p(y_T | x_T, r_C, z) q(z | x_C, y_C) dz. (2)
A predictive evidence-lower-bound (ELBO) is given based on the variational inference of p(z):
log p(y_T | x_T, x_C, y_C) ≥ E_{q(z|x_T, y_T)}[log p(y_T | x_T, r_C, z)] − KL(q(z | x_T, y_T) ‖ q(z | x_C, y_C)). (3)
Now, the training process is to maximize the log-likelihood log p(y_T | x_T, x_C, y_C), which equals maximizing its lower bound comprised of a conditional log-likelihood based on the latent z, i.e., log p(y_T | x_T, r_C, z), minus a non-negative KL divergence. While CVAEs regularize the posterior with a standard normal distribution, KL(q(z)||N(0, I)), latent NPs differ in minimizing the divergence from the posterior (obtained from the target set) to the context set during training when the target sets are accessible.
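For concreteness, the negative ELBO in (3) can be computed with a reparameterized latent sample as in the sketch below; the encoder and decoder interfaces assumed here (Normal posteriors and a decoder returning a predictive Normal) are illustrative, not the exact implementation.

```python
import torch
from torch.distributions import Normal, kl_divergence

def latent_np_loss(q_context: Normal, q_target: Normal,
                   decoder, r_c: torch.Tensor, x_t: torch.Tensor, y_t: torch.Tensor):
    """Negative ELBO for a latent NP (sketch).

    q_context = q(z | x_C, y_C) and q_target = q(z | x_T, y_T); during training z
    is sampled from the target posterior and the KL term pulls it towards the
    context posterior. `decoder(x_t, r_c, z)` is assumed to return a Normal over y_T.
    """
    z = q_target.rsample()                      # reparameterized sample of z
    p_y = decoder(x_t, r_c, z)                  # predictive Normal over the targets
    log_lik = p_y.log_prob(y_t).sum(dim=-1).mean()
    kl = kl_divergence(q_target, q_context).sum(dim=-1).mean()
    return -(log_lik - kl)                      # minimize the negative ELBO
```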
During inference, the prior p(z) = N(µ_z, σ_z²) is replaced by q(z|x_C, y_C). We formalize the global uncertainty as:
p(z) ≈ q(z | x_C, y_C) = N(µ_z, σ_z²), (4)
where C comprises a very small proportion of the index set. We did not set the condition C = ∅ since, in real-world datasets, the empty set barely carries any function prior, but |C| cannot be too large either, otherwise it would leave no space for uncertainty. The mean µ_z in (4) determines the sensitivity of final predictions affected by the global uncertainty, whereas the variance σ_z² implies the diversity of function priors on a certain task.
B. Out-of-range Predictions with Convolution
Out-of-range prediction is an essential characteristic when scaling the NPF to real-world tasks. It requires the model to generalize predictions when the testing task is out of the training range. The recently proposed Convolutional Conditional Neural Processes (ConvCNP) [9] tackle this issue with convolutions and outperform peer NPF members. ConvCNPs assume "translational invariance", i.e., the prediction pattern near a local context data point is transferable to the rest of the input space, which can be well addressed by convolution. They omit the latent representation and only decode r and x_T. However, instead of directly encoding r from the raw context inputs x_C, they introduce a discretization of the indefinite input space, x_S, that incorporates context and target inputs x_C, x_T, and map the deterministic representation onto x_S:
p(y_T | x_T, x_C, y_C) = p(y_T | x_T, ψ(r_S)), (5)
where r_S := r(x_C, x_S, y_C) is the deterministic representation.
r(x_C, x_S, y_C) is a DeepConvSet that resembles a simplified attention. Given a query of a discretized set x_S, DeepConvSet projects the value y_C to the space S based on the similarity between the query x_S and the key x_C. Then, r_S is passed through a convolutional neural network ψ(·), which achieves the inductive bias of translational invariance (shown in Fig 2(b)). This guarantees that the function description only focuses on a local range of inputs, so that the model is still effective even when the testing inputs are out of the training range. The DeepConvSet : C → S and the CNN module constitute the encoder. The decoder is another DeepConvSet whose keys and values are x_S and r_S, with queries x_T. Without the latent representation, the training and inference share the same workflow.
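A minimal version of this projection onto the discretized space S can be written as a kernel-weighted set convolution, as sketched below; the Gaussian kernel, the fixed length scale, and the extra density channel are common choices and are assumptions here rather than the exact DeepConvSet.

```python
import torch

def set_conv_to_grid(x_c: torch.Tensor, y_c: torch.Tensor,
                     x_s: torch.Tensor, length_scale: float = 0.1) -> torch.Tensor:
    """Project an off-the-grid context set (x_c, y_c) onto grid locations x_s.

    x_c: (n_context, d_x), y_c: (n_context, d_y), x_s: (n_grid, d_x).
    Returns (n_grid, d_y + 1): a density channel plus kernel-smoothed values.
    Summing over context points keeps the representation permutation invariant.
    """
    dist2 = torch.cdist(x_s, x_c).pow(2)                 # (n_grid, n_context)
    weights = torch.exp(-0.5 * dist2 / length_scale**2)  # RBF similarity to the grid
    density = weights.sum(dim=1, keepdim=True)           # how much context is nearby
    signal = weights @ y_c / density.clamp(min=1e-8)     # normalized weighted values
    return torch.cat([density, signal], dim=-1)          # r_S on the grid
```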
C. GloBal Convolutional Neural Process(GBCoNP)
While ConvCNPs outperform NPF members on many scenarios, we believe the latent distribution p(z) has a great impact on maintaining the stochasticity for NPs, particularly on the global uncertainty. If the function sample representation r S is deterministic, the resulting prediction will be precise yet "dull" with single distribution parameters µ y and σ 2 y ; In contrast, latent NPs can sample different z values which correspond to diverse priors over the functions. Each prior is able to generate a cluster of (µ y , σ 2 y ). ConvCNP can potentially be tailored to a latent NP, given that the mapping function to the space S can be latent and shared between the context and target set. Therefore, we introduce a member named GloBal Convolutional Neural Process (GBCoNP) that also adopts amortised variational inference on the global uncertainty (shown in Fig 3).
The predictions of GBCoNP follow a conditional distribution with a latent path:
p(y_T | x_T, x_C, y_C) = ∫ p(y_T | x_T, ψ(r_S, z)) p(z) dz, (6)
where ψ(·) is the convolution module applied on both the deterministic and the latent representation. r_S := r(x_C, x_S, y_C) is a DeepConvSet. Given the convolved functional condition ψ(r_S, z) and the target inputs x_T, a decoder DeepConvSet maps the condition to the prediction distribution. We use a new space x_S instead of the original space x_C to obtain the variational inference of the intractable prior p(z).
Fig. 3. The model structure for GBCoNP.
Thus, (6) can be optimized with another differentiable network (ψ(·) omitted for better clarification):
p(y_T | x_T, x_C, y_C) ≈ ∫ p(y_T | x_T, r_S, z) q(z | r_S) dz. (7)
According to [6], the joint distribution p(y_T) must satisfy exchangeability to be a stochastic process:
p(y_{π(1:n)} | x_{π(1:n)}, x_{π′(1:m)}, y_{π′(1:m)}) = p(y_{1:n} | x_{1:n}, x_{1:m}, y_{1:m}), (8)
where π and π′ are permutations of the target index set {1, ..., n} and the context index set {1, ..., m}, meaning the permutation of the context and the target points cannot change the prediction outcome. The previous latent NPs need an aggregation module (mean, sum, etc.) on the context and target space to ensure q(z|x_T, y_T) and q(z|x_C, y_C) are permutation invariant and have identical dimensions for the divergence.
Similar to attention, the DeepConvSet already satisfies exchangeability by calculating the inner product between the key (x_C or x_T) and the query x_S. This operation is insensitive to the input order, and thus the context and target are projected to the same space S regardless of the order. Presumably, r_S can be directly passed through an MLP for z without aggregation. However, we discovered that the aggregation on z can mitigate the coherence deficiency in the samples, a drawback caused by the independent prediction assumption in NPFs [8]. It may be attributed to the diminishing effect on the divergence between q(z|r_S) := q(z|x_S, x_C, y_C) and q(z|r′_S) := q(z|x_S, x_T, y_T) after the fusion.
An observation that supports the aggregation is the nontranslational invariance of the pre-aggregated z. Normally, ψ(·) in (6) comprises several cascaded convolutional nets; when we directly convolve on r S concatenated with the pre-aggregated z, the local pattern near a context point (e.g., a fluctuation) is propagated to the entire space S. Besides, similar patterns are obtained when z is derived from an intermediate convolution state of ψ(r S ) and get aggregated afterwards, meaning r S is convolved by a subset of the cascaded nets. Both cases above imply that local patterns on z cannot be as transferable to the whole space as r S can. Therefore, we add an aggregation module to z to ensure every point in the space S gets the same latent prior.
Adopting variational inference, the objective function for GBCoNP now becomes (9):
log p(y_T | x_T, x_C, y_C) ≥ E_{q(z|r′_S)}[log p(y_T | x_T, r_S, z)] − KL(q(z | r′_S) ‖ q(z | r_S)). (9)
Prop 1. Evidence-Lower-Bound for GBCoNP.
Proof. The conditional probability of the prediction can be built on the marginalization of a joint distribution with a latent variable z:
p(y_T | x_T, r_S) = ∫ p(y_T | x_T, r_S, z) p(z) dz. (10)
Since the prior z is intractable, it is replaced by the conditional posterior q(z|r_S) on the space S. Using Jensen's Inequality and the concavity of the log function, we get the lower bound of (10):
log p(y_T | x_T, r_S) ≥ E_{q(z|r′_S)}[log p(y_T | x_T, r_S, z)] − KL(q(z | r′_S) ‖ q(z | r_S)). (11)
Computational complexity. An attention layer takes O(nmd), where n and m are the key and query lengths, and d implies the weight dimension [10]. A convolutional layer costs O(nkdf), where k and f refer to the kernel size and channel depth. Therefore, the two DeepConvSets in the encoder and decoder cost O(msd) and O(nsd), where m and n refer to the context and target set sizes, and s refers to the grid length of the discretization (s ≫ m, n). The computationally expensive part ψ(·) costs O(skdf) for convolutions on the S space. Compared with ConvCNP, the latent path in GBCoNP brings in two MLP modules: the latent MLP from r_S to z and the merger MLP (which compresses the concatenation of (r_S, z) to a low dimension before convolution). Both modules cost O(s). Such modifications preserve the total number of parameters in convolution and are thus affordable.
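The latent path and its aggregation over S can be sketched as follows (our own illustration; the module names, the aggregation by mean, and the bounded scale parameterization are assumptions): r_S is mapped to per-point latent parameters, aggregated over the grid so that every point shares the same prior, and a sampled z is tiled and merged back onto r_S before the convolutions ψ(·).

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class LatentPath(nn.Module):
    """Sketch of the aggregated latent variable z in GBCoNP (1D grid case)."""
    def __init__(self, r_dim: int, z_dim: int):
        super().__init__()
        self.to_latent = nn.Linear(r_dim, 2 * z_dim)   # per-grid-point mu, log-sigma
        self.merge = nn.Linear(r_dim + z_dim, r_dim)   # compress (r_S, z) before psi

    def forward(self, r_s: torch.Tensor):
        # r_s: (batch, n_grid, r_dim)
        stats = self.to_latent(r_s).mean(dim=1)        # aggregate over the grid S
        mu, log_sigma = stats.chunk(2, dim=-1)
        q_z = Normal(mu, 0.1 + 0.9 * torch.sigmoid(log_sigma))  # bounded scale
        z = q_z.rsample()                              # one functional prior sample
        z_tiled = z.unsqueeze(1).expand(-1, r_s.size(1), -1)    # same z at every point
        return self.merge(torch.cat([r_s, z_tiled], dim=-1)), q_z
```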
III. EXPERIMENTS
We evaluate our proposed model on three groups of datasets covering 1D, 2D and spatial temporal scenarios across a broad range of domains. All the models are built with Python 3.6.8 and Pytorch 1.4.0. Two TITAN RTX GPUs are used for training. Wall-clock time for training one epoch of GBCoNP and ConvCNP is shown in Appendix B.
A. 1D Datasets
In 1D scenarios, each task deals with a series of context points. The objective is to predict the values and uncertainty for the unseen target set (the datasets are detailed in Appendix A). For each task, x values are normalized to [-1, 1] and y values are standard normalized before training. The number of context points ranges within U(1, 50), and the target set comprises all the data points in the task. Each dataset is trained for 100 epochs and tested for 6 runs. Table I shows the log-likelihood of the latent neural process families along with their conditional members on 1D datasets. The poor results of a neural network (NN) that ignores the context set and directly fits y_T = f(x_T) reflect that non-NP models are probably unsuitable for the meta-setting. ConvCNP and ANP achieve the best baseline log-likelihood in their category; their predictions are compared further with the proposed GBCoNP in Fig 4. Almost all the ground truths lie in the predicted uncertainties for the three methods (except for the periodic kernel), validating that NP families are capable of modeling uncertainty. The highest uncertainty is normally achieved at the furthest point away from all the context points. The latent NP members generate more diverse samples when compared with their counterpart conditional members at the expense of a slight performance drop. This may be attributed to the inference of the intractable latent variables z using context data: the resulting distributional gap KL(q(z|r′_S) || q(z|r_S)) causes information loss in exchange for global uncertainty. GBCoNP and ConvCNP produce smoother mean values whereas ANP achieves more diverse samples on real-world datasets. The most fluctuating predictive means tend to occur in the middle of two context points (see RBF, Matern, and HousePricing in Fig 4), where the target points are sensitive to both context points. By virtue of their convolution filters, GBCoNP and ConvCNP are able to mitigate those fluctuations.
With regard to global uncertainty causal-effect analysis, we present the values of µ z and σ z in (4) after training (Table II). Considering that z is a high dimensional distribution and each task batch presents a pair of (µ z , σ z ), we therefore averaged the results on both the dimension and batch levels. Table II indicates that the global uncertainty depends largely on model representation capacity and the intra-task diversity. Model representation capacity refers to constraints a model impose on the global uncertainty. For instance, NP constrains equally loose for all the points, ANP only constrains stricter to the points closer to the context while GBCoNP constrain almost all the points differently with the convolution on S. As the constraints grow (NP < ANP < GBCoNP), the global uncertainty decreases (σ z : NP > ANP > GBCoNP). Intratask diversity implies the meta-setting characteristic of the dataset -how many possibilities there are in this dataset given the same context. For example, although Periodic and SmartMeter are both seasonal datasets, the possible target sets, given a certain context set for Periodic, are not unique; therefore, GBCoNP can predict seasonal means with different amplitudes. In contrast, in SmartMeter, there is only one possible target set corresponding to a context set; causing a smaller σ z in this case( < Periodic).
To manipulate global uncertainty for increasing intra-task diversity, we reduce the latent dimension of z from 128 to 4 and display µ y T with different priors in Fig 5. The variance bound in the prior(4) is relaxed from σ z to 40 σ z for Stock50 and Periodic to amplify the effects. The results show that a group of different functions can be sampled meanwhile fitting well with context data. As highlighted in Fig 5 (a), some dimension controls the trend of the curve after a context point (up/down) while some others control the amplitude of the trend. Fig 5 (b) shows the controlling factors become the amplitude and the phase of a wave, and the resulting positions of context data in the prediction curves shift from a "crest" to a "trough".
B. 2D Datasets
MNIST, SVHN, CelebA32. In 2D scenarios, we aim to inpaint the whole image with uncertainties given a set of context pixels, and we select three image datasets (MNIST, SVHN, CelebA32) for the task. For convolutional-based methods, the context values (x) are built using a binary mask ∈ R^{W×H}, where U(0, 0.3(W × H)) pixels are unveiled. For non-convolutional baselines, the mask is transformed into a list of 2d inputs representing the relative pixel locations (pixel index/image size). Y values are either 1d or 3d implying the pixel intensity. Each dataset is trained for 50 epochs and tested for 6 runs. Table III shows the log-likelihood of the latent and the conditional neural process families on the 2D datasets. GBCoNP achieves the SOTA performance among the latent members. ConvCNP performs the best conditional result on MNIST, and ACNP outperforms on SVHN and CelebA32. Overall, all the three methods give reasonable results, even with only 5% of the total pixels. The local uncertainty occurs on the strokes of a digit for MNIST and SVHN, while in CelebA, the variances lie in the profile of a face, including face shape, hair, eyes, nose and mouth. The variance on SVHN and CelebA plummets as the amount of context data increases. With 30% of unveiled pixels, all the models can recover the whole image with little local uncertainty. GBCoNP and ConvCNP can generate smoother predictive means (see Fig 6, MNIST digits "2" and "5", and the plate numbers "42" and "25" in SVHN). ANP achieves more coherent local uncertainties regarding variances, while GBCoNP and ConvCNP tend to reduce the variances around context points to zero. As shown in Table II, when the intra-task diversity is large enough, the global uncertainties achieved by several methods are quite similar (NP ≈ ANP ≈ GBCoNP in MNIST). For instance, given a small set of pixels indicating a number "6", there are plenty of possibilities in this dataset to finish the inpainting, therefore producing a large uncertainty σ_z. To manipulate this global uncertainty, we generate images with respect to different priors in Fig 7. Similar to 1D, we relax the prior standard deviation to 12 σ_z. For MNIST, the global uncertainty in ANP determines the thickness and extension of a stroke, e.g., a "1" can be transformed to a "9" then to a "4" if the upper stroke of "1" is gradually extended and thickened. Besides, extension towards different directions can result in variants of orientations of a digit (see digit "6" in Fig 7). The results of GBCoNP comply with this extension pattern yet with limited variation due to the model representation capacity as discussed in 1D. For CelebA32, global uncertainty depicts the background and appearance. Changing values across the prior can transition the background color from completely black to white with different shades of brightness. Appearance variations range from the size of the eyes, noses, face shape, to the hair volume and color.
[Table values (log-likelihood, mean ± std):
-1.08 ± 0.03, -1.30 ± 2E-3, -1.04 ± 0.01, -0.79 ± 0.08, -0.47 ± 0.02, -1.20 ± 0.39;
ANP [7]: 0.74 ± 0.12, -1.06 ± 8E-3, 0.19 ± 0.03, -0.19 ± 0.15, -0.33 ± 0.05, 0.80 ± 0.37;
GBCoNP: 1.24 ± 0.14, 0.25 ± 0.04, 0.37 ± 0.03, 0.03 ± 0.13, 0.02 ± 0.05, 1.11 ± 0.24]
C. Spatial-Temporal Dataset
COVID. In spatial temporal scenarios, we use a dataset that contains daily total confirmed coronavirus cases in all counties of the US from 21/01/2020 to 20/04/2021 (data: https://www.kaggle.com/fireballbyedimyrnmom/us-counties-covid-19-dataset).
C. Spatial-Temporal Dataset

COVID. In spatial-temporal scenarios, we use a dataset of daily total confirmed coronavirus cases in all US counties from 21/01/2020 to 20/04/2021 (https://www.kaggle.com/fireballbyedimyrnmom/us-counties-covid-19-dataset).

We use a window of 14 days of daily total confirmed cases in each task to build temporal features that depict the trend. Each element is normalized by log(x − x_min + 1), where x_min is the lowest case count in the window. According to our observation, three points are sufficient to describe the curve, i.e., the last 7 days, the last 3 days, and today; these context points are used to predict the logged relative growth 7 days later. (The context sizes for the prior (4) are 1 point for the 1D datasets and 5% of total pixels for the 2D datasets, whereas the COVID dataset provides all previous records as the context set.) To fully utilize spatial information, the data for each county are projected onto a geographic map (shown in the first row of Fig 8), where spatial relationships between counties are displayed while temporal information is preserved with color intensity.
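A hedged sketch of this temporal feature construction is given below, assuming `daily_cases` is a cumulative case series for one county. The window length, context offsets, and 7-day-ahead target follow the description above, but the exact indexing in the paper's code may differ.

```python
# Sketch of the per-task temporal feature construction for the COVID data.
import numpy as np

def build_task(daily_cases: np.ndarray, start: int, window: int = 14):
    # Requires len(daily_cases) >= start + window + 7.
    win = daily_cases[start:start + window].astype(np.float64)
    x_min = win.min()
    feat = np.log(win - x_min + 1.0)          # log-normalized relative growth

    # Context points: 7 days ago, 3 days ago, and today (last day of window).
    ctx_days = np.array([window - 8, window - 4, window - 1])
    x_ctx, y_ctx = ctx_days, feat[ctx_days]

    # Target: logged relative growth 7 days after the end of the window.
    future = daily_cases[start + window + 6]
    y_target = np.log(future - x_min + 1.0)
    return (x_ctx, y_ctx), y_target
```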
The whole map is segmented into a grid of 104 cells of 40×40, and a model only processes one cell at a time to minimize the computational cost. For convolutional methods, the context values x are built using a binary mask ∈ R^{T×W×H}, where only the last time channel is set to zero. For non-convolutional baselines, the mask is transformed into a list of 3d inputs with the relative spatial and temporal locations, and the Y values correspond to the log-relative growth. The dataset is trained for 100 epochs and tested for 6 runs.
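A minimal sketch of this spatio-temporal context mask follows; the shapes and names are assumptions, and T is set to 3 time channels here only for illustration.

```python
# Sketch of the spatio-temporal context construction for one 40x40 grid cell.
import numpy as np

T, W, H = 3, 40, 40
cell = np.random.rand(T, W, H).astype(np.float32)   # log-relative growth per pixel

mask = np.ones((T, W, H), dtype=np.float32)
mask[-1] = 0.0                                       # last time channel is the target

# Convolutional models take the mask and the masked cell directly.
ctx_conv = (mask, mask * cell)

# Non-convolutional baselines take relative 3d locations (t/T, w/W, h/H)
# together with the observed y values.
ts, ws, hs = np.nonzero(mask)
x_ctx = np.stack([ts / T, ws / W, hs / H], axis=-1)
y_ctx = cell[ts, ws, hs]
```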
The last column of Table III compares the log-likelihood metrics among the NP families. GBCoNP and ACNP achieve the SOTA performance in their respective categories, and GBCoNP outperforms ConvCNP in stability. Most models have high log-likelihood variances due to the random sampling of cells.
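For reference, the per-point Gaussian log-likelihood that such comparisons typically report can be computed as below. This is a standard NP evaluation sketch; the paper's exact reduction over points, tasks, and runs may differ.

```python
# Average Gaussian log-likelihood of targets under the predictive distribution.
import torch
from torch.distributions import Normal

def avg_log_likelihood(mu: torch.Tensor, sigma: torch.Tensor,
                       y_target: torch.Tensor) -> torch.Tensor:
    """Mean log-density of the targets under the predictive Gaussians."""
    return Normal(mu, sigma).log_prob(y_target).mean()

# Toy usage: predictions for 100 target points of a 3-channel task.
mu = torch.zeros(100, 3)
sigma = 0.1 * torch.ones(100, 3)
y = 0.05 * torch.randn(100, 3)
print(avg_log_likelihood(mu, sigma, y))
```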
In contrast with the non-convolutional neural processes, which can only handle a maximum of 40×40×3 = 4800 context points (one cell of the grid) at a time, GBCoNP is the only model that can process the whole grid (104 cells) through convolutions, which results in a more stable metric. Interestingly, locations with low mean values tend to have high uncertainty (regions with whiter means have redder variances), as shown in Fig 8(b) and (c): the model believes that these "safer" regions surrounded by high-risk neighbourhoods have a higher tendency to be infected in the future. Besides, GBCoNP yields more precise mean values (purple line in Fig 8(c)) with higher standard deviations compared with ANP (see Fig 8(b)).
Our supplementary experiments on global uncertainty reveal that the COVID dataset has the highest intra-task diversity among all the datasets, as the spread of COVID admits the most diverse possible trends. Manipulating different priors shows a region's predicted risk shifting from fairly low to high.
IV. RELATED WORKS
Neural Processes. To enhance uncertainty modelling with Gaussian Processes (GPs), earlier efforts [12] [13] focused on representing deep kernels in GPs before Neural Processes (NPs) were proposed. Although they inherited the powerful feature representation of neural networks (NNs), they still suffer from the computationally expensive O(n^3) matrix inversion. NPs instead represent stochastic processes with neural networks under two constraints, exchangeability and consistency, which frees the model from matrix inversion, allows full backpropagation, and induces the whole neural process family [14]. There are two major branches in the family: conditional NPs (CNPs) [11] and latent NPs. Other members add different inductive biases on the context and target relationships; well-known examples include Attentive NPs (ANPs) [7], Sequential NPs (SNPs) [15], and Convolutional CNPs (ConvCNPs) [11]. A recent work, ConvNP [8], adds a latent path to ConvCNP; instead of using variational inference, it adopts Monte Carlo estimation for z and focuses on local coherence. Inspired by this, our work further elaborates on the global uncertainty and its effects.
Generative models with priors. Generative models help increase the intra-task diversity of data distributions and mitigate few-shot learning issues. They generally generate synthetic samples conditioned on priors. For deep-learning-based time series generation, [16] applies LSTMs to capture context information and fill in missing values, and [17] uses adversarial nets to generate sequences with temporal dynamics across time. In image inpainting with priors, conditional VAEs [18] and GANs [19], [20] manipulate the semantic features in an image (e.g., "smile", "eyeglasses", "age"). However, very few existing studies adopt meta-learning settings over a distribution of sampled functions, and even fewer consider model uncertainty. [21] employs a model-agnostic meta-learning structure for reinforcement learning but uses GPs rather than NPs.
V. CONCLUSION
In this paper, we answered three important questions about the global uncertainty in latent neural processes: How can global uncertainty be formalized? What causes and affects global uncertainty? How can global uncertainty be manipulated for data generation? We define the global uncertainty as a prior over z from a latent functional distribution given a small set of context data. We find that global uncertainty is affected by the model representation capacity and the intra-task diversity of the data. Manipulating the global uncertainty not only generates the desired samples to tackle few-shot learning, but also enables probability evaluation of the functional priors.

APPENDIX A DATASET DETAILS

For the 1d datasets, we use synthetic Gaussian processes with 3 kernels to sample values; a sampling sketch is given at the end of this appendix. The x values lie within x ∈ [-2, 2], and the function values y are sampled from y ∼ GP(0, K(x, x')). Training, validating, and testing data sizes are set to 50,000, 10,000, and 5,000.

Stock50 (https://www.kaggle.com/rohanrao/nifty50-stock-market-data) contains daily trading data for 50 stocks from the National Stock Exchange of India. In each sampling task, 200 days of data are sampled, starting from a random date between 11/2016 and 11/2017. X values represent the day count from the first day, and y values represent the corresponding volume-weighted average price. The training, validating, and testing sizes are 36, 10, and 4 stocks, respectively.

SmartMeter includes half-hourly average energy consumption readings from 5,567 London households during 03/12/2011 to 28/02/2014 [22]. For each task, 100 hours of data are sampled from a random timestamp. X values refer to the relative time gaps measured in days, and y values refer to the consumption in kWh per half-hour. The training, validating, and testing sets are split 7:2:1 along the timeline.
HousePricing comprises monthly average house prices in 9,473 American cities from 01/1996 to 03/2020. One hundred months of data are sampled per task. X values are the time differences measured in months, and y values are prices in dollars. The training, validating, and testing sets are split 7:2:1 by city.
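As an illustration of how the synthetic 1D tasks described at the top of this appendix can be generated, here is a minimal sketch that draws one task from a zero-mean GP. The RBF kernel and its hyperparameters are placeholder assumptions; the paper's three kernel definitions are not reproduced here.

```python
# Sketch: sample one synthetic 1D task, y ~ GP(0, K(x, x')) on x in [-2, 2].
import numpy as np

def rbf_kernel(x1, x2, length_scale=0.4, variance=1.0):
    # Squared-exponential kernel; an illustrative choice, not the paper's.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def sample_gp_task(n_points=128, jitter=1e-6, rng=np.random):
    x = rng.uniform(-2.0, 2.0, size=n_points)
    x.sort()
    K = rbf_kernel(x, x) + jitter * np.eye(n_points)   # jitter for numerical stability
    y = rng.multivariate_normal(np.zeros(n_points), K)
    return x, y

x, y = sample_gp_task()
```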
APPENDIX B TRAINING DETAILS
The running time costs of GBCoNP and ConvCNP are compared in Table IV. The total number of training epochs for the 1D, 2D, and COVID datasets is 100, 50, and 100, respectively.
"year": 2021,
"sha1": "f2b7bb1a8225444d975066e4a8421cd1bfdf55fe",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f2b7bb1a8225444d975066e4a8421cd1bfdf55fe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.