Lot quality assurance sampling survey for water, sanitation and hygiene monitoring and evidence-based advocacy in Bentiu IDP camp, South Sudan

Background: Every year, 60% of deaths from diarrhoeal disease occur in low- and middle-income countries due to inadequate water, sanitation, and hygiene. In these countries, diarrhoeal diseases are the second leading cause of death in children under five, excluding neonatal deaths. The approximately 100,000 people residing in the Bentiu Internally Displaced Population (IDP) camp in South Sudan have previously experienced outbreaks linked to inadequate water, sanitation, and hygiene, including an ongoing hepatitis E outbreak in 2021. This study aimed to assess the gaps in Water, Sanitation, and Hygiene (WASH), prioritise areas for intervention, and advocate for the improvement of WASH services based on the findings.

Methods: A cross-sectional lot quality assurance sampling (LQAS) survey was conducted in ninety-five households to collect data on WASH coverage performance across five sectors. Nineteen households were allocated to each sector, referred to as supervision areas in LQAS surveys. Probability proportional to size sampling was used to determine the number of households to sample in each sector block, and households were selected using a geographic positioning system. One adult respondent familiar with the household was chosen to answer WASH-related questions, and one child under the age of five was selected through a lottery method to assess the prevalence of WASH-related disease morbidities in the previous two weeks. The data were collected using the KoBoCollect mobile application. Data analysis was conducted using R statistical software and a generic LQAS Excel analyser. Crude values, weighted averages, and 95% confidence intervals were calculated for each indicator. Target coverage benchmarks set by program managers and WASH guidelines were used to classify the performance of each indicator.

Results: The LQAS survey revealed that five out of 13 clean water supply indicators, eight out of 10 hygiene and sanitation indicators, and two out of four health indicators did not meet the target coverage. Regarding the clean water supply indicators, 68.9% (95% CI 60.8%-77.1%) of households reported having water available six days a week, while 37% (95% CI 27%-46%) had water containers in adequate condition. For the hygiene and sanitation indicators, 17.9% (95% CI 10.9%-24.8%) of households had handwashing points in their living area, 66.8% (95% CI 49%-84.6%) had their own jug for cleansing after defaecation, and 26.4% (95% CI 17.4%-35.3%) of households had one piece of soap. More than 40% of households washed dead bodies at funerals and washed their hands in a shared bowl. Households with sanitary facilities at an acceptable level were 22.8% (95% CI 15.6%-30.1%), while 13.2% (95% CI 6.6%-19.9%) of households had functioning handwashing points at the latrines. Over the previous two weeks, 59.7% (95% CI 49.6%-69.7%) of households reported no diarrhoea, and 71.3% (95% CI 62.1%-80.6%) reported no eye infections among children under five.

Conclusion: The camp's hygiene and sanitation situation necessitated immediate intervention to halt the hepatitis E outbreak and prevent further WASH-related outbreaks and health issues. The LQAS findings were employed to advocate for interventions addressing the WASH gaps, resulting in WASH and health actors stepping in.
Background

The United Nations adopted the Human Right to Safe Drinking Water and Sanitation (HRTWS), which calls for universal access to safe, affordable, acceptable, available, and accessible water, sanitation and hygiene (WASH) services by 2030 [1]. WASH initiatives are crucial in reducing poverty, promoting equality, and supporting socioeconomic development [2]. These services were targets under the Millennium Development Goals (MDGs) for 2015 and are now part of the Sustainable Development Goals (SDGs) for the post-2015 period [3]. However, vulnerable populations, particularly in developing countries and refugee settings, have limited access to WASH services, impacting individuals' and societies' health and social wellbeing [4]. According to the United Nations (UN), nearly half of the global population lacks safe sanitation, and over 2.2 billion people do not have access to safe drinking water. Approximately 2 billion individuals worldwide do not have access to handwashing facilities with soap, and around half a billion still practice open defecation [5].

Globally, approximately 88% of deaths due to diarrhoeal diseases are attributed to inadequate safe water and poor hygiene and sanitation, of which 60% occur in low- and middle-income countries (LMIC) [6,7]. These diarrhoeal diseases (including cholera) kill more children than AIDS, malaria, and measles combined, making diarrhoeal diseases the second leading cause of infectious disease death after pneumonia among children under five, excluding neonatal deaths [7,8]. In 2016, inadequate WASH contributed to 60% of diarrhoeal deaths, which could have been prevented by improving water and sanitation services [9]. Furthermore, improved hygiene, sanitation, and safe water access can reduce morbidity from neglected tropical diseases (such as schistosomiasis and Guinea worm disease) by almost 80% [10]. Access to safe water, improved hygiene, and sanitation has the potential to prevent at least 9.1% of the global disease burden and 6.3% of all global deaths [11].

Populations in displacement camps and refugee settings are usually in precarious situations and particularly prone to hygiene- and sanitation-related diseases. Bentiu Internally Displaced Population (IDP) camp in South Sudan was established in 2013 as a Protection of Civilians camp under the UN for people who sought safety and protection following the civil war between government and opposition forces. The camp is home to approximately one hundred thousand people, a figure that varies slightly between seasons (e.g. due to flooding) and with ongoing conflicts in the surrounding area [12,13]. The nature of the camp presents distinct health challenges due to ongoing insecurity, population movement, seasonal weather variation, climate impact, food insecurity, insufficient water, insufficient hygiene and sanitation services, and the presence of endemic infectious diseases. Primary health risks include diarrhoeal disease (acute watery and acute bloody diarrhoea, including cholera), hepatitis E infection, seasonal malaria, and other vector- and waterborne illnesses.
The provision of WASH services in the Bentiu IDP camp was shared among several WASH actors, who are heavily dependent on external funding. Due to the coronavirus disease (COVID-19) pandemic, budget and effort shifted towards COVID-19 treatment and prevention activities. As primarily a health actor, Médecins Sans Frontières (MSF) is not responsible for the WASH activities in Bentiu IDP camp but responds to the high levels of WASH-related diseases faced by the community. In addition, MSF is a direct witness of the impact of WASH gaps on the community's health and is thus usually in a good position to advocate for appropriate WASH interventions.

Different organisations and institutions have used different methods to assess the WASH situation in camp settings. The United Nations High Commissioner for Refugees (UNHCR) WASH in emergencies handbook and Sphere guidelines, annual knowledge, attitude, and practice (KAP) surveys, and monthly reports are mentioned as means of assessment, monitoring, and evaluation [14]. Lot quality assurance sampling (LQAS) is an alternative approach for assessing the WASH situation in camp settings [15]. The method has been used to assess a variety of health services, including vaccination coverage, disease prevalence, and health care monitoring [16-18]. It has shown valid outputs at a lower cost and is quicker and easier to implement than extensive surveys [16-19]. Moreover, its findings have been used for programmatic monitoring, evaluation, and improvement. This study used the LQAS methodology to quantify the WASH gaps in the Bentiu IDP camp, identify priority intervention areas, mobilise resources for action, and support advocacy efforts.

Study setting and population

Bentiu IDP camp is located in Unity State, South Sudan. It hosts a population of approximately 107,000 people, of which 50.6% are female. The camp is divided into five sectors, which are further divided into 64 blocks housing from approximately 1,000 to more than 3,200 individuals each [13]. The target population comprised all households in all five sectors of the Bentiu IDP camp.

Study design

A cross-sectional lot quality assurance sampling (LQAS) survey was conducted from August 2nd to 6th, 2021, in Bentiu IDP camp, South Sudan. This survey methodology uses small sample sizes and allows the classification and prioritisation of needs at the level of a smaller geographic management unit (called the supervision area [SA]) [20].

Study sample

A sample size is established at the SA level, and a decision rule is selected (which depends on the sample size); this is the cut-off below which an area is classified as low-performing for an indicator [20]. Nineteen households are typically sampled per SA, which ensures that the α error (the probability of misclassifying an area with high coverage as low) and the β error (the probability of misclassifying an area with low coverage as high) are both maintained at 10% (S1 Table) [21].
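To illustrate how a sample of 19 keeps both misclassification errors at or below 10%, here is a minimal R sketch (R being the analysis software used in this study; this is not the LSTM analyser itself). It scans candidate cut-offs until both binomial error probabilities fall within bounds. The 80%/50% and 70%/40% coverage pairs are assumed illustrations, not the paper's per-indicator benchmarks.

```r
# Derive an LQAS decision rule for n = 19: the smallest cut-off d such
# that alpha (high-coverage SA called low-performing) and beta
# (low-coverage SA called high-performing) are both <= 10%.
lqas_decision_rule <- function(n = 19, p_high = 0.80, p_low = 0.50,
                               max_alpha = 0.10, max_beta = 0.10) {
  for (d in 0:n) {
    alpha <- pbinom(d - 1, n, p_high)     # P(fewer than d successes | high coverage)
    beta  <- 1 - pbinom(d - 1, n, p_low)  # P(d or more successes | low coverage)
    if (alpha <= max_alpha && beta <= max_beta) {
      return(c(decision_rule = d, alpha = round(alpha, 3), beta = round(beta, 3)))
    }
  }
  stop("no decision rule satisfies both error bounds for this n")
}

lqas_decision_rule()                             # 80% benchmark: rule of 13/19
lqas_decision_rule(p_high = 0.70, p_low = 0.40)  # 70% benchmark: rule of 11/19
```

The second call reproduces the correspondence used in the Methods below, where a weighted average of 70% equates to a score of 11 out of 19.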
In this survey, two client populations or 'universes' were sampled. The first client population comprised households in the entire camp; the second comprised parents/guardians of children aged under five years in the households selected for the first population. Nineteen households were sampled from each SA, giving a sample size of 95 for each client population. Probability proportional to size (PPS) sampling was used to identify the blocks from which the 19 households in each SA were selected. Households were selected by random sampling using a Geographic Positioning System (GPS) in Quantum Geographic Information System (QGIS). The selected households were imported into the OpenStreetMap Automated Navigation Directions (OsmAnd) mobile map application to navigate to each household during data collection. A respondent older than 18 years who was most knowledgeable about the household or had spent the most time in it was selected. One child under five from the same household was included to assess the prevalence of WASH-related diseases over the previous two weeks. For households with more than one child under five, a lottery method was used to select one of them.

Data collection

Two questionnaires (one for each target client population) were used to collect data. One set of questions covered WASH-related indicators, and the other covered the prevalence of WASH-related disease morbidities in children under five, based on previously used and tested indicators and question sets (S3 File). The data collectors were trained for three days, and a pilot test was conducted for one day. Data were collected on smartphones using KoBoCollect (https://kobo.msf.org) [22].

Data analysis

Data were imported into R software version 4.1.2 for descriptive statistical analysis, and a generic Microsoft Excel LQAS analyser from the Liverpool School of Tropical Medicine (LSTM) was used to identify the priorities based on the decision rule table [23]. To determine the prevalence and coverage of the indicators at a camp-wide level, responses from all 19 respondents in all five SAs were combined for a total sample size of 95. This allowed the calculation of the crude average coverage or prevalence for the entire IDP camp. Using population data from each supervision area, a weighted average coverage or prevalence was calculated for the entire IDP camp and each supervision area.

Responses were tallied and compared with the decision rules per indicator and with the crude and weighted averages using the LQAS table. Data from all SAs were aggregated, and coverage indicators for the whole camp were determined based on indicators from the LQAS Generic Health Results Excel Sheet [24].
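Two steps of the design above lend themselves to brief illustration in R. First, a minimal sketch of systematic PPS allocation of one SA's 19 households across its blocks; the block names and populations below are invented for illustration and are not the camp's actual figures.

```r
# Systematic PPS: allocate 19 households across one SA's blocks
# in proportion to block population (illustrative figures only).
set.seed(42)
blocks <- data.frame(block = paste0("B", 1:6),
                     pop   = c(1200, 3100, 1800, 2500, 1000, 2900))  # assumed
n_hh     <- 19
interval <- sum(blocks$pop) / n_hh            # one draw per household
start    <- runif(1, 0, interval)             # random start within first interval
hits     <- start + interval * (0:(n_hh - 1))
picked   <- blocks$block[findInterval(hits, c(0, cumsum(blocks$pop)),
                                      rightmost.closed = TRUE)]
table(picked)                                 # households to sample per block
```

Second, a sketch of the crude versus population-weighted camp-wide estimates described above, with a simple Wald 95% confidence interval; the paper does not state which interval method its analyser uses, so that choice, like the per-SA populations and 'yes' counts, is an assumption.

```r
# Crude (pooled) and population-weighted coverage for one indicator.
sa <- data.frame(pop = c(25000, 23000, 21000, 19000, 19000),  # assumed SA sizes
                 yes = c(15, 13, 14, 11, 12),                 # 'yes' out of 19
                 n   = 19)
crude    <- sum(sa$yes) / sum(sa$n)                        # pooled over all 95
weighted <- sum((sa$pop / sum(sa$pop)) * (sa$yes / sa$n))  # weighted by SA size
se       <- sqrt(weighted * (1 - weighted) / sum(sa$n))    # simple binomial SE
round(c(crude = crude, weighted = weighted,
        ci_low = weighted - 1.96 * se, ci_high = weighted + 1.96 * se), 3)
```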
The target coverage (%) and target coverage decision rule, DR (n), are established performance benchmarks for coverage per indicator, set by program managers in previous LQAS surveys and drawn from targets available in the UNHCR, United States Agency for International Development (USAID), and Sphere guidelines; they are used to determine whether an indicator has met its target. First, the aggregated weighted average coverage for an indicator for the entire camp was calculated and compared with the LQAS table (S1 Table). For example, if the weighted average was 70% for a specific indicator, this equated to a score of 11 out of 19 in the LQAS table (S1 Table); this is the average DR (n) referred to in this study. The target and average DRs were then used to classify each indicator into high-priority, medium-priority, and low-priority areas. If the SA's performance on a specific indicator was less than both the target DR and the average DR, the SA was classified as high priority. If performance was above the average DR but less than the target DR, it was classified as medium priority. If performance was equal to or higher than the target DR, it was classified as low/no priority (a worked sketch of this rule follows the water supply results below). In addition, the performance of indicators from this LQAS survey was compared to that in the most recent previous LQAS survey (2019).

Ethics

This study was conducted in accordance with the World Medical Association Declaration of Helsinki Ethical Principles for Medical Research Involving Human Subjects (2013) and the 2016 International Ethical Guidelines for Health-Related Research Involving Human Subjects (CIOMS) [25,26]. Ethical approval was obtained from the MSF Ethical Review Board and the Ministry of Health South Sudan Research Ethical Review Board. Community and stakeholder engagement was conducted before the study through camp management and meetings with community leaders. The data collectors administered an information sheet and provided the participants with a copy of the translated information sheet. Verbal informed consent was obtained from all participants.

Results

The survey included 95 household representatives and 95 children under five. The average household size was eight people. Most respondents (81%) were female (Table 1).

Water supply indicators: prevalence and coverage

Eight out of thirteen indicators of clean water supply met the target coverage. Most households (95.0%, 95% CI 90.5%-99.5%) reported that they obtained water for drinking, handwashing, and dishwashing from a potable water source or tap stand during both the dry and rainy seasons, while 46% (95% CI 36%-57%) of households had at least 15 litres of water per person per day. In addition, 68.9% (95% CI 60.8%-77.1%) of households reported water available from the source at least six days a week, and 51.4% (95% CI 44.1%-58.7%) reported that they could fill their water containers before the water tap stopped running. 36.6% (95% CI 26.4%-46.8%) of households could store drinking water for less than a day. Households with acceptable water storage containers (at least narrow-mouthed, clean, and with a lid) accounted for 36.8% (95% CI 26.5%-46.9%) (Table 2).
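Before turning to the remaining indicator groups, the priority classification described in the Methods can be made concrete with a hedged sketch, applied here to hypothetical per-SA scores for an indicator with a target DR of 13 (an 80% target) and an average DR of 11 (the 70% example above). How a score exactly equal to the average DR should be handled is not stated in the text, so the boundary handling below is an assumption.

```r
# Three-way priority classification of one SA's score (out of 19)
# against the target and average decision rules.
classify_sa <- function(score, target_dr, average_dr) {
  if (score >= target_dr)      "low/no priority"   # met the target DR
  else if (score > average_dr) "medium priority"   # above average, below target
  else                         "high priority"     # below both decision rules
}

sapply(c(9, 12, 14), classify_sa, target_dr = 13, average_dr = 11)
# "high priority"  "medium priority"  "low/no priority"
```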
Coverage of hygiene indicators

None of the six hygiene indicators met the target coverage. The proportion of households that had their own water jugs for cleansing after defaecation was 66.8% (95% CI 49%-84.6%), and 17.9% (95% CI 10.9%-24.8%) of households had handwashing setups within their living area. A quarter of households, 26.4% (95% CI 17.4%-35.3%), could show one piece of soap, while 57.6% (95% CI 48.4%-66.7%) of households did not wash a dead body and did not wash their hands in a shared bowl after a funeral. As is common in Nuer culture, nearly all households reported eating from a shared plate (Table 3) [27].

Coverage of sanitation indicators

Two of the four sanitation indicators met the threshold for target coverage. While 90.5% (95% CI 84.7%-96.4%) of households reported using a pit latrine facility for toilet purposes, on observation only 22.8% (95% CI 15.6%-30.1%) of households had a sanitation facility at an acceptable level (i.e. it at least had a door, was not full, and the slab was not collapsing). Almost all (97.4%, 95% CI 94.5%-100%) menstruating women in the households reported using acceptable dignity kits (either sanitary pads or disposable/reusable cloth), which slightly exceeded the specified target (Table 4).

Health indicators

Almost two-thirds (59.7%, 95% CI 49.6%-69.7%) of parents/guardians reported that their under-five children had not had diarrhoea in the two weeks preceding the survey. In addition, 71.3% (95% CI 62.1%-80.6%) of parents reported no eye infections in children under five over the preceding two weeks (Table 5).

Comparison to the previous LQAS surveys in Bentiu IDP camp (monitoring and evaluation)

The last LQAS survey in Bentiu IDP camp was conducted in 2019 (S1 File). We therefore compared the results of that survey with the present one to identify whether the WASH situation in the camp had improved or deteriorated. All water supply indicators in the 2021 survey remained comparable to the findings of the 2019 LQAS survey and met the target coverage set by program managers and guidelines. However, most hygiene and sanitation indicators had deteriorated compared with 2019: soap availability per household, hygiene promotion activities, and use of improved sanitation facilities all deteriorated relative to the 2019 findings, while all other indicators remained comparable. The prevalence of hygiene-related diseases had also increased, but remained comparable to that found in the 2019 LQAS survey (S1 File).
Discussion

The survey provided an overview of the WASH situation in Bentiu IDP camp. Fifteen out of twenty-seven indicators did not meet the target coverage levels set by the program managers. Hygiene and sanitation indicators showed the largest gap relative to the expected targets/benchmarks. The UNHCR and MSF targets for clean drinking and cooking water were met in the survey. However, only 68.9% (95% CI 60.8%-77.1%) of households had access to water for at least six out of seven days, and only half of the households could fill their containers before the water tap stopped running. According to the minimum standards of the Sphere and UNHCR guidelines, water should be accessible at least eight hours daily, with at most 250 people per tap stand flowing at 7.5 litres/minute, or 500 people per tap stand at 17 litres/minute [22,23] (see the arithmetic sketch below). This standard was not met in the camp, where 98% of respondents reported that water was available at most twice a day according to the schedule. In addition, only 46% of households met the minimum water availability per household (15 L per person per day in an emergency and 20 L per person per day in a post-emergency setting), indicating that the water supply in the IDP camp was not sufficient to meet the population's needs [14]. The proportion of households that used tap water to wash their clothes was 69.8% (95% CI 60.6%-79.0%). Water from the oxidation point (where the sewage of the whole camp is collected) was also used to wash clothes, a potential source of many waterborne diseases, including hepatitis E, cholera, and typhoid. This was consistent with the concurrent hepatitis E virus outbreak in the Bentiu IDP camp and an increase in diarrhoeal diseases. There was a large gap between the number of latrines in acceptable condition and the number of inhabitants in the camp, with more than 70 people per latrine; according to the Sphere guideline, the maximum is 20 people per latrine [28]. In addition, unsafe and poorly maintained latrines are risky for women and girls and are therefore used less, with women and girls preferring to go to the bush instead [28].

Many factors may explain the continued poor hygiene and sanitation, and the deterioration of some services, compared with the findings of the 2019 LQAS survey. The impact of COVID-19 was one of the major contributing factors. While most WASH and health actors and funds were focused on the prevention and management of COVID-19, there was also a shift of budget and funding from the WASH sector to COVID-19 treatment and prevention activities. In addition, due to COVID-19, mass community mobilisation and hygiene promotion activities were restricted or limited to small numbers. Furthermore, recommendations from the 2019 LQAS survey were not thoroughly advocated and enacted because of the emergence of the COVID-19 pandemic and the deprioritisation of related activities. There was also a major funding cut starting at the beginning of 2021, following the government's declaration of the camp's transition from a Protection of Civilians site to an IDP camp. The population of the camp is also dynamic, with regular influxes due to conflicts and natural disasters, according to the IOM displacement tracking tool [13]. This increased the demand for additional WASH facilities and may have overloaded existing ones, directly affecting WASH conditions in the camp.
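As a back-of-envelope check of the benchmarks cited above (a sketch assuming the approximately 107,000 camp population given in the Methods), the 250-person tap-stand standard delivers roughly the 15 L per person per day emergency minimum and implies on the order of 430 tap stands for the whole camp:

```r
# Rough arithmetic on the Sphere tap-stand standard for the Bentiu camp.
pop        <- 107000                 # approximate camp population (Methods)
per_tap    <- 7.5 * 60 * 8           # 3,600 L/day per tap (7.5 L/min for 8 h)
per_person <- per_tap / 250          # 14.4 L/person/day, near the 15 L minimum
tap_stands <- ceiling(pop / 250)     # 428 tap stands needed for the whole camp
litres_day <- pop * 15               # 1,605,000 L/day at the emergency minimum
c(per_person = per_person, tap_stands = tap_stands, litres_per_day = litres_day)
```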
In this survey, 40% of the children had experienced at least one waterborne disease, mainly diarrhoea, followed by eye infection, within the two weeks preceding the survey. This parallels the deteriorated hygiene and sanitation situation in the IDP camp and its health consequences, including the hepatitis E virus outbreak occurring in the camp in 2021. Throughout 2021, there was also an increase in diarrhoeal disease presentations in the outpatient department of the MSF hospital, exceeding the seasonal increases seen in previous years; cases in July 2021 were tenfold the monthly average for April-June of the same year. In addition, the number of cases of acute jaundice syndrome had doubled since April 2021, with a total of 469 RDT-positive hepatitis E cases at Bentiu MSF hospital and five deaths, including one pregnant woman, in 2021. Finally, there were 415 diarrhoeal disease-associated admissions and 15 deaths at Bentiu Hospital in 2021.

While the LQAS survey was implemented to evaluate the extent of the WASH gaps, it also demonstrated that the findings can be used for quicker and more effective advocacy, and that an LQAS survey can be more than just a monitoring and evaluation tool. We conducted extensive communication and advocacy on the identified gaps, which gained attention from international donors and national health and WASH actors, resulting in a site visit and, ultimately, funding to support activities in the camp. Consequently, the identified gaps were addressed by the different actors in the camp, and funds were secured from donors. Activities including the construction of hundreds of replacement latrines, dry waste management, and the establishment of a faecal waste management centre were commenced immediately by different actors.

Three weeks after the survey, unexpected flooding occurred in the area, affecting the camp both directly and through an influx of additional people displaced from the surrounding area. The survey results helped to justify the additional burden on the camp and to advocate for upgrading the water and sanitation response to an emergency level, after which MSF and other WASH actors initiated interventions.
This study had a few limitations. First, it focused mainly on coverage of the WASH situation in the Bentiu IDP camp; it did not assess knowledge, attitudes, and utilisation practices, except via a small number of indicators. Measuring access to services without knowledge of how they are used limits the full picture of WASH services in the camp. In addition, most of the questions relied on the self-report of the household representative interviewed, which might have introduced social desirability bias. The survey also did not include a qualitative component (i.e. focus group discussions, key informant interviews, etc.), which could have provided a more in-depth explanation of some of the hygiene and sanitation practices reported by the community. Moreover, the small sample sizes of LQAS surveys mean that they are not highly powered, and the α and β errors of 10% are still relatively high. This can result in incorrect classification of poorly performing areas as having higher or acceptable performance. It also produces wide confidence intervals, making inference from them less precise. While acknowledging the limitations of LQAS, we also note our confidence in the findings because 1) they were consistent over time with previous surveys, and 2) they were in line with health facility surveillance data showing high levels of acute watery diarrhoea (AWD) and hepatitis E.

In conclusion, the LQAS survey identified key gaps in WASH service provision to the internally displaced Bentiu population that needed a rapid response to mitigate and prevent further WASH-related outbreaks. Moreover, this study highlighted that LQAS surveys, combined with a strong advocacy component, can be used for quick evidence synthesis to inform timely interventions for WASH-related gaps in humanitarian camp settings.

Table 2. Water supply and quality coverage indicators in Bentiu IDP camp, August 2021. https://doi.org/10.1371/journal.pone.0302712.t002

Table 3. Hygiene practice and coverage indicators, Bentiu IDP camp, August 2021. https://doi.org/10.1371/journal.pone.0302712.t003

Table 4. Sanitation indicators practice and coverage in the Bentiu IDP camp in August 2021. https://doi.org/10.1371/journal.pone.0302712.t004

Table 5. Prevalence of water, sanitation, and hygiene-related disease indicators in the Bentiu IDP camp, August 2021. https://doi.org/10.1371/journal.pone.0302712.t005

Footnote to Tables 2-5: † Established performance benchmarks for coverage per indicator, set by program managers. †† LQAS value for the weighted average (the cut-off between medium and high priority). ** High priority at camp level (indicator did not meet the target for the average coverage of the SAs). * Medium priority (indicator did not meet the target coverage set by program managers and guidelines).
Learning from the SARS outbreak

The appearance in south China and subsequent global epidemic of Sars in 2003, and its re-emergence this year, is focusing attention on the growing threat of animal diseases making the jump from their normal hosts to human populations. Nigel Williams reports.

Many new human diseases of animal origin are expected to appear over the coming decades as a result of environmental disruption, global warming and behavioural change, scientists at a meeting at the Royal Society in London warned last month, as a growing number of scientists and other bodies contemplate future threats from infections presently occurring in animals that may spread to humans. The reappearance of severe acute respiratory syndrome (Sars) in China, the potentially lethal pneumonia-like disease that killed 800 people in 27 countries last year, is a signal to the world of the threat from emerging infections that appear to have jumped from animals, researchers said. The first case of Sars since last year's outbreak was confirmed last month in a television journalist from Guangzhou, the southern Chinese city in the province of Guangdong.

Malik Peiris, professor of microbiology at Hong Kong University and one of the first scientists to identify the Sars virus, said there was an urgent need to monitor viruses that jumped from animals to humans. Many did not cause disease in their original animal hosts so were not picked up by conventional veterinary checks, he said. "We were fortunate. Sars was a global health problem solved with a global effort. It will not be the last, and a global effort to solve future health problems will be very important," he said.

Professor Nan Shan Zhong of the Guangzhou Respiratory Disease Research Institute said there were clues that suggested a link between Sars and the civet cat, bred for food in China. A coronavirus similar to that linked with the Sars outbreak (Sars-CoV) was highly concentrated in the cats' faeces, and the first cases last year occurred in animal traders. Thousands of the cats were killed last month after the Chinese government ordered a cull following the latest cases of Sars and banned their sale in markets.

All countries have been asked by the WHO to maintain their ability to detect and respond to the re-emergence of the disease. But clinicians and researchers are hampered by a number of factors. These include the fact that there are no specific clinical features of Sars, the lack of a rapid diagnostic test that can reliably detect Sars-CoV in the first few days after the onset of clinical symptoms, and the seasonal occurrence of other respiratory diseases, including influenza. "The major problem in the coming year will be the management of false alarms triggered by suspect clinical cases," says Roy Anderson, professor of infectious disease epidemiology at Imperial College London. "Detection of the presence of the virus soon after onset of clinical symptoms remains difficult, with the most sophisticated tests still only providing a sensitivity of 60-70 per cent in samples taken at various times after the onset of fever," he says.
Sensitivity can be improved if samples from the lower respiratory tract can be obtained in the second week after the clinical symptoms have appeared, "but the procedures to obtain material for analysis are invasive and not without risk to the patient," he says. Tests based on the detection of immune system markers in blood have appeared to be highly sensitive, but only after at least 21 days since the onset of illness. "It is difficult to escape the conclusion that the world community was very lucky this time round, given the low transmissibility of the agent and its biology, where clinical symptoms appear well before peak infectiousness, plus the fact that fairly draconian public health measures could be put in place with great efficiency in the Asian regions where the epidemic originated," Anderson says.

Many other bodies and researchers worldwide are increasingly focusing on animal-to-human diseases in the light of last year's Sars outbreak. These species-hopping diseases include influenza, all the more dangerous because the virus can be passed from farm animals to human beings. The bird influenza ravaging several Asian countries, which has been blamed for the deaths of at least three Vietnamese people, could precipitate a more serious global health crisis than Sars if it goes on to spread by human contact, the WHO has warned. The alarm came as Vietnam reported several more suspected cases and suggested that pigs could be involved in the transmission of the virus from birds to humans. Millions of chickens and ducks have died or are being killed in Vietnam, Japan and South Korea in efforts to contain the outbreak. Another recent influenza epidemic led to the loss of tens of millions of poultry in Italy, the Netherlands, Germany, Belgium and other European countries, either from the sickness directly or as a result of preventative slaughter measures. Researchers believe some cases of human conjunctivitis and serious influenza resulted from this avian outbreak. And in recent years, researchers have also recorded growing genetic and immunological variability in swine influenza in the face of widespread vaccine use.

Influenza remains a constant threat to humans because of the potential of the human strain to acquire major new antigens from avian strains. The terrible Spanish flu pandemic, which resulted in the death of 20-40 million people after the First World War, was attributed to a major antigenic change. Evidence is growing that HIV passed from monkeys to people through the consumption of bushmeat. And other recent viruses are thought to be of animal origin. The Marburg virus, closely related to Ebola, which has caused lethal haemorrhagic fever in tropical Africa, was discovered more than 30 years ago when green monkeys captured in Uganda were brought to a laboratory in the German city of the same name. Seven people died from violent haemorrhagic fever.

A research consortium of German, Finnish, Lithuanian and Swedish laboratories has recently described the various forms of hantaviruses, which cause respiratory diseases, that infect small rodents in eastern Europe. Hantaviruses can pass to humans, particularly forest workers and farmers whose work brings them into contact with these animals. A particularly virulent form, found in the Balkan states in Europe, has proved lethal in 12 per cent of human cases. The new analysis of the hantavirus strains should help the development of analytical and diagnostic tests.
Global warming is allowing certain species, in particular insects, to colonise new regions where they can further propagate pathogens. Tropical deforestation is bringing humans into contact with animals they have not encountered before. The Hendra and Nipah viruses, discovered in the 1990s and deadly in some 50 per cent of cases, appear to originate in fruit-eating bats from south-east Asian forests. The increasing contact between humans and animal species is not matched by knowledge of the biochemical mechanisms that create a species barrier to infection. "Pathogen-host interactions are right now the subject of intense research. A better understanding of the pathogenesis mechanisms, facilitated by growing knowledge of genomes, should help us see more clearly," says Alistair MacMillan of the UK's Central Veterinary Laboratory.

The USA-EU Biotechnology Research Task Force concluded at its meeting last summer that there was an urgent need to organise a meeting on this topic. This workshop will focus more particularly on experimental hypotheses and models that can potentially verify the molecular and environmental mechanisms governing the transmission of infectious agents from one species to another, says Etienne Magnien, director for biotechnologies, food and agriculture at the European Commission's research directorate.

There is no doubt Sars has focused thinking on the challenges presented by the emergence of new human diseases arising from animals. And many are pleased that the WHO is strengthening its abilities. "The quick and effective response of WHO to the Sars crisis did much to restore faith amongst the many critics of the effectiveness of international agencies with large bureaucracies and limited resources for action," says Anderson. "This strengthened approach to outbreak response will be reflected in the revised international health regulations, which set out roles and responsibilities for preventing transboundary spread of infectious diseases both on a routine basis and in crisis situations," says David Heymann of the WHO.
Animal Models of Calcific Aortic Valve Disease

Calcific aortic valve disease (CAVD), once thought to be a degenerative disease, is now recognized to be an active pathobiological process, with chronic inflammation emerging as a predominant, and possibly driving, factor. However, many details of the pathobiological mechanisms of CAVD remain to be described, and new approaches to treat CAVD need to be identified. Animal models are emerging as vital tools to this end, facilitated by the advent of new models and improved understanding of the utility of existing models. In this paper, we summarize and critically appraise current small and large animal models of CAVD, discuss the utility of animal models for priority CAVD research areas, and provide recommendations for future animal model studies of CAVD.

Introduction

Valvular heart diseases account for over 23,000 deaths annually in the United States, with the aortic valve being the most frequently affected [1]. The aortic valve is composed of three semilunar cusps or leaflets that passively open and close over 25 million times per year to maintain unidirectional blood flow from the left ventricle to the systemic and coronary circulations. To meet its functional requirements under these demanding conditions, the thin, pliable leaflet tissue is organized into three distinct layers: (1) the fibrosa on the aortic side of the leaflet, composed primarily of collagen; (2) the spongiosa in the middle, composed mainly of proteoglycans; and (3) the ventricularis on the ventricular side, composed of collagen and elastin [2]. The cellular components of the aortic valve include a monolayer of valvular endothelial cells (VECs) on the outer surface of the leaflets and valvular interstitial cells (VICs), which populate each of the three layers of the leaflet. VICs are a heterogeneous population of mostly fibroblasts [3], a subpopulation of which are mesenchymal progenitor cells [4]. VICs play the critical role of remodelling and organizing the valve extracellular matrix (ECM) to maintain valve integrity [3,5]. In disease, ECM structure and organization are disturbed, resulting in valve dysfunction.

The most common valvular disease is calcific aortic valve disease (CAVD) [1]. CAVD encompasses early sclerosis, characterized by leaflet thickening without left ventricular outflow obstruction, to late stenosis, in which leaflets stiffen, flow is obstructed, and cardiac function is compromised. The consequences of CAVD are significant: sclerosis is associated with a 50% increased risk of cardiovascular death and myocardial infarction [6], and the prognosis for patients with stenosis is very poor [7]. Because CAVD is a slowly progressing disease taking several decades to develop, it was once thought to be a passive "degenerative" or "senile" process. However, CAVD is now recognized to be an active pathobiological process that shares many risk factors with atherosclerosis, including hypercholesterolemia, smoking, hypertension, diabetes, chronic renal disease, and male gender [8-10]. While many features of human CAVD are well described (particularly for late-stage disease), specific pathobiological mechanisms are not fully understood. Some insight may come from similarities with atherosclerosis, but less than 40% of patients with CAVD have clinically significant coronary atherosclerosis [31], suggesting distinct processes. Thus there is an unmet scientific need to determine pathobiological mechanisms of CAVD and identify new approaches to treat CAVD.
Animal models are emerging as vital tools to this end, facilitated by the advent of new models and improved understanding of the utility of existing models. In this paper, we summarize and critically appraise current small and large animal models of CAVD, discuss the utility of animal models for priority CAVD research areas, and provide recommendations for future animal model studies of CAVD.

Animal Models of CAVD

Animal models are an important platform for studying the initiation and progression of CAVD in vivo, as well as for judging the effectiveness of therapeutic interventions. To be most effective, models should mimic human disease, or at least important facets thereof, and the conditions in which human CAVD develops. The most common species used to model CAVD are mouse, rabbit, and swine. Of these, only swine develop CAVD naturally with age, but this process is slow and is usually accelerated by diet-induced hypercholesterolemia. Rabbits are not naturally susceptible to CAVD but are responsive to diet-induced hypercholesterolemia, and mice require a genetic predisposition to promote advanced disease.

Mouse Models. In the study of CAVD, a large proportion of contemporary animal work is performed in mouse models, as mice are uniquely suited to mechanistic studies of the root causes of aortic valve pathology. Mouse models of CAVD offer a number of advantages, including their small size, easy husbandry, and cost effectiveness. In addition, short generation time, the resultant ease of genetic manipulations, and the availability of clonal samples allow specific investigation of key molecular mediators of CAVD [32,33]. A summary of the critical elements of mouse models of CAVD is presented in Table 1.

Despite these advantages, mouse models do suffer from several important limitations. Firstly, the anatomical structure of mouse aortic valves differs drastically from that of humans. Mice do not have the trilayer aortic valve tissue morphology characteristic of human, pig, or rabbit valves; rather, their leaflets are usually only ∼5-10 cells thick and do not exhibit segregated layers [62]. Secondly, and most importantly, wild-type (WT) mice on standard diets do not exhibit spontaneous calcification [32], and consequently the study of CAVD in mouse models requires dietary [34] and/or genetic [34,38,42,43] or other interventions [46,47] to induce a valvular calcium burden. Furthermore, the development of atherosclerosis in inbred mice depends on the background strain in use. When comparable hyperlipidemia is induced in C57BL/6J mice, this strain exhibits considerably greater atherosclerotic lesion formation than the FVB/N or C3H strains [63-65]. While strain-specific susceptibility to CAVD has not been explored in detail, similarities between atherosclerosis and valve disease imply that comparable strain-specific susceptibility to CAVD is present as well. In a related issue, while the C57BL/6J mouse is the most atheroprone strain and the most commonly used background in mouse models of CAVD, their valve leaflets contain black pigmentation that may be melanocytes [66] or lipofuscin-containing granules [35]. Regardless of their identity, these dark particulates can appear in the valve interstitium and are easily mistaken for positive von Kossa calcification staining, thus requiring the use of alizarin red to accurately determine the extent of calcification in C57BL/6J models of CAVD [62].

Typically, the Harlan Teklad TD.88137, or "Western", diet is used in models with a dietary induction component.
This adjusted diet derives 21.2% of its weight and 42% of its total calories from fat, with 0.2% [36,42-44,46] or 0.25% [39,40] cholesterol (mouse diet fat content is typically expressed as percent of total calories from fat). Towler and colleagues first reported extreme hyperlipidemia (∼1040 mg/dL), hyperinsulinemia/hyperglycemia, and mineral deposition in Ldlr−/− mice fed TD.88137 for 16 weeks [36]. Ldlr−/− mice fed a similar high-cholesterol diet (0.15%) develop increases in valve thickness, macrophage accumulation, superoxide production, activated myofibroblasts and osteoblasts, and mineralization [37]. When mice are regularly exercised while the high-cholesterol diet is fed, these pathological symptoms are significantly reversed [37]. The Ldlr−/−;Apob100-only mouse has also been modified to incorporate a conditional knockout of the microsomal triglyceride transfer protein (Mttp) under the control of an inducible Cre transgene [39]. In conjunction with the TD.88137 diet, this "Reversa" mouse develops robust calcific aortic stenosis. Serum cholesterol rises to 800-1000 mg/dL in 6-12 months, and lipid deposition and macrophage infiltration are significantly increased. Profibrotic signalling and myofibroblast activation, as measured by pSmad2 and α-smooth muscle actin (αSMA), are elevated, as are procalcific signalling (pSmad1/5/8, Msx2, β-catenin, Runx2, osterix) and superoxide levels, leading to oxidative stress, valvular mineralization, and positive von Kossa/alizarin red staining [39,40]. Importantly, Cre-mediated loss of Mttp activity at six months ("reversal") normalizes serum cholesterol levels, decreases valvular lipid deposition and macrophage infiltration, prevents further calcification, lowers pSmad2/αSMA, lowers pSmad1/5/8/Msx2/Runx2, attenuates oxidative stress, and results in functional improvements in cusp separation [39]. Interestingly, reversal after 12 months does not lower pSmad2 or superoxide levels and does not improve leaflet cusp separation distance [40]. The on-demand "switching" of cholesterol levels in this model allows, for the first time, regression studies in dietary-induced CAVD independent of pharmacological intervention.

Diets with substantially elevated cholesterol levels are, however, not always employed as initiators of mouse CAVD. One such study employed a high-fat, high-carbohydrate diet, in which 35.5% of weight and 59% of calories were derived from fat, but no cholesterol was added (<0.1% present) [34]. In WT C57BL/6J mice on this diet, total cholesterol levels are mildly elevated to 166 mg/dL, while Ldlr−/− mice exhibit drastic hypercholesterolemia at 722 mg/dL and develop overt diabetes mellitus. Most interestingly, the high-fat, low-cholesterol diet is able to induce early markers of CAVD even in WT mice: thickened leaflets, black particulates that may represent von Kossa-positive calcification, decreased aortic valve opening area, and increased transvalvular blood velocities are reported, along with CD68-positive macrophage and T-lymphocyte infiltration into the valvular interstitium. These macrophages infiltrate primarily on the high-shear ventricular side of the mouse valve [35]. The development of these disease hallmarks in WT mice with only mild hypercholesterolemia may prove to be more relevant to human CAVD.

In addition to the Ldlr−/− model, a second common genetically manipulated model is the endogenously hyperlipidemic [63] ApoE-deficient (Apoe−/−) mouse [41-44,46,47]. ApoE allows receptor-mediated removal of very-low-density lipoprotein (VLDL) from the circulation.
However, ApoE also regulates T-cell proliferation and macrophage function and modulates lipid antigen presentation, as well as general levels of inflammation and oxidation [67]. In this way, deletion of Apoe may significantly impact the inflammatory response in CAVD in a manner distinctly unrelated to hypercholesterolemia and/or the pathogenesis of the human disease. This potential for differential disease progression remains to be studied in detail. Without dietary intervention, Apoe−/− mice develop hypercholesterolemia (∼490 mg/dL) [68], and increasing age up to 2.5 years is correlated with increases in transvalvular velocity, mild aortic regurgitation, αSMA and osteocalcin (OCN) expression, macrophage and T-cell infiltration, and nodular calcification [41]. Administration of the TD.88137 diet to Apoe−/− mice for 4-5 months induces accelerated early disease with a substantial increase in serum cholesterol to ∼588 mg/dL, thickened leaflets, activated endothelial cells, and subendothelial lesions rich in macrophages (colocalized with MMP-2/9, cathepsin B, αSMA, ALP, Runx2, and OCN expression) [42,43]. Importantly, there is no evidence of von Kossa or alizarin red staining at this early time point, though a bisphosphonate-conjugated imaging particle shows signs of early microcalcification and colocalizes with cathepsin K [42,43]. Another version of the Apoe−/− model includes coadministration of the adipocytokine leptin, a known cardiovascular risk factor [44]. Leptin treatment does not induce hypercholesterolemia, nor does atherosclerotic lesion size change, but von Kossa- and ALP-positive staining is significantly increased in leptin-treated Apoe−/− mice, and osteopontin (OPN) and OCN expression is also increased. While on a normal diet, oral exposure of Apoe−/− mice to acrolein, a dietary aldehyde generated during inflammation and oxidative stress, induces hypercholesterolemia, macrophage and lipid infiltration, and platelet and endothelial activation [45]. Interestingly, such treatment does not induce fibrosis, nor does it provoke a systemic oxidative stress response.

Along with these two common hyperlipidemic models, there exist other interesting genetic models that recapitulate some aspects of CAVD. Knockout of the mineral-binding ECM protein matrix Gla protein (Mgp−/−) produces spontaneous ectopic apatite formation in arterial collagen fibrils and von Kossa-positive calcification in the aortic valve [48]. An insufficiency of elastin (Eln+/−) produces proliferation of VICs and aortic regurgitation [50]. Hypomorphic expression of fibulin-4, an ECM-stabilizing protein, results in thickened leaflets, significant functional impairments, and positive pSmad2, pSmad1/5/8, and von Kossa staining [49]. Mutant tissue displays increased transforming growth factor-β (TGF-β) and bone morphogenetic protein (BMP) signalling, while an Affymetrix microarray showed differential expression of a number of immune response genes between the WT and fibulin-4-deficient animals. Of specific interest to the role of inflammation in spurring the onset of CAVD are mice deficient in the anti-inflammatory cytokine interleukin-1 receptor antagonist (Il1rn−/−). These animals develop thickened aortic valves infiltrated by macrophages and containing differentiated myofibroblasts, while aged Il1rn−/− mice develop calcified lesions with functional impacts on transvalvular blood velocity [60].
Circulating levels of TNF-α rise dramatically in this model, and the importance of this cytokine to CAVD pathogenesis is underscored by double-knockout TNF-α−/−;Il1rn−/− mice, which do not develop CAVD [60].

Congenital and Developmental Mouse Models. The presence of a congenital bicuspid aortic valve is associated with drastically increased risk of CAVD [69]. Mutations in the transcriptional regulator NOTCH1 have been shown to cause bicuspid valves and CAVD development in humans. Notch1 levels are higher in the normally developing mouse valve than during postnatal growth [70]. Mice heterozygous for Notch1 (Notch1+/−) fed a Western diet with 0.2% cholesterol for 10 months exhibit fivefold greater aortic valve calcification than WT controls, but do not exhibit bicuspid valves [51]. Recently, mice haploinsufficient for the primary nuclear Notch effector protein, recombining binding protein suppressor of hairless (RBPJk), were challenged with a high-cholesterol/cholate diet for a shorter 4-month period [52]. These mice present normal trileaflet aortic valves and develop thickened and calcified leaflets, macrophage infiltration, collagen deposition, and profibrotic/osteogenic signalling. Interestingly, Notch1+/− mice display relatively little functional impairment when compared to RBPJk+/− mice, implying that other Notch effectors contribute to valvular homeostasis [52]. Periostin is highly expressed in the endocardial cushion during embryogenesis, and its deletion (Postn−/−) induces overexpression of delta-like 1 homolog (Dlk1), a negative regulator of Notch1 [53]. By 10 months of age, the valves of Postn−/− mice exhibit a severely deformed, bicuspid-like morphology displaying expression of Runx2, OPN, and OCN, along with significant valvular calcification (von Kossa). Paradoxically, when Postn−/− mice are fed a high-fat diet for four months, they display decreased valve thickness, macrophage infiltration, myofibroblast differentiation, annular fibrosis, and MMP-2/13 expression when compared with WT mice fed the same diet, possibly reflecting a reduced ability of myofibroblasts and macrophages to adhere to and infiltrate the ECM [54]. Periostin expression is mutually exclusive with that of chondromodulin-I (ChmI), an antiangiogenic factor. Aged ChmI−/− mice display increases in valve thickness, lipid deposition, calcification, VEGF-A expression, and angiogenesis [55]. In humans, expression of endothelial nitric oxide synthase (eNOS) in the valvular endothelium is drastically reduced in bicuspid valves [71], and mice lacking eNOS (Nos3−/−) display a high incidence (∼50%) of bicuspid aortic valve [56]. The susceptibility of this mouse model to CAVD and inflammatory processes is as yet unexplored. Paradoxically, however, Nos3−/− mice do not develop atherosclerosis when fed a high-cholesterol atherogenic diet [72], a phenomenon which may be the result of reductions in eNOS-driven LDL oxidation in the vasculature [73].

Signalling pathways normally associated with embryonic development of the valve have recently been implicated in CAVD pathogenesis and have become the focus of several mouse models of the disease [5]. While epidermal growth factor receptor (EGFR) has been implicated in the development of cancer, and targeted for inhibition by cancer therapeutics, mice carrying a hypomorphic allele of Egfr (Egfr wa2/wa2) exhibit congenitally enlarged valves, and mature mice display valvular OPN expression, macrophage infiltration, and extensive von Kossa-positive calcification [59].
Interestingly, these results imply that cancer patients with congenital valve defects or other cardiovascular risk factors should avoid EGFR inhibition. The Smad6 inhibitory protein regulates TGF-β signalling and mediates endocardial cushion transformation, and mutation of the Smad6 gene (Madh6−/−) in mice produces valvular hyperplasia, outflow tract septation defects, elevated blood pressure, and aortic ossification [57]. Endocardial cushion development is also promoted by the Twist1 transcription factor, which is upregulated in human CAVD. Mice engineered with persistent Twist1 expression (CAG-CAT-Twist1;Tie2Cre mice) develop two- to threefold increases in area, length, and thickness of the aortic valve [58]. Expression of collagen-II, periostin, and the matrix remodelling enzymes MMP-2/13 is also elevated [58].

Rabbit Models. On the other hand, there are a number of dissimilarities with human disease. Rabbits do not form spontaneous atherosclerotic lesions and therefore require very high cholesterol levels to induce more advanced disease [78-80,84,85], unless very long-term studies [74,83,86] or VitD2 supplementation [80,82] are used. Rabbits also have significant differences from humans in their lipid metabolism [87-89], which can result in their developing cholesterol storage syndrome while on high-cholesterol diets (0.5-3%), with cholesterol deposited in regions such as the liver, adrenal cortex, and reticuloendothelial and genitourinary systems. Rabbits have also been reported to form atherosclerotic lesions that do not resemble those in humans [74,88,102]. A summary of the critical elements of rabbit models of CAVD is presented in Table 2.

VitD2-only treatment has been used in some models to induce advanced CAVD [82,98]. By 8-10 weeks, functional changes indicative of mild stenosis occur, including increased peak transvalvular velocity and pressure gradients, increased echogenicity of the valve [82,98], and decreased aortic valve area [98]. Histologically, these valves display an increased burden of lipids, leukocytes, macrophages [98], myofibroblasts, T cells [82], and calcification [82,98]. Serum levels of cholesterol, calcium, phosphate, creatinine [98], and calcium-phosphate product [82] also increase. There is also an indication that VitD2 might increase oxidative stress (increased thioredoxin-interacting protein) and impair VEC function (increased plasma asymmetric dimethylarginine, an NO inhibitor) [98]. This indicates that VitD2 alone is capable of inducing CAVD that results in functional impairment of the valve [98]. However, use of a hypercholesterolemic diet in combination with VitD2 at 12 weeks produces greater valvular functional impairment than VitD2 alone [82].

Genetic Rabbit Models. Rabbit models utilizing spontaneous mutations as well as transgenic manipulations are available. These models primarily have alterations in the LDLR and/or apolipoproteins that result in hypercholesterolemia on a cholesterol-free, limited-fat diet. Such models include (1) Watanabe heritable hyperlipidemic (WHHL) rabbits, which have a spontaneous LDLR mutation [84,93,104,105]; (2) St. Thomas Hospital rabbits, which acquire hypertriglyceridemia as well as hypercholesterolemia [94]; and (3) rabbits with altered lipid profiles, such as those expressing human ApoB100 [90,94] or Apo(a) [91]. Of these, only the WHHL rabbit has been used in a study of CAVD, showing that hypercholesterolemia-induced calcification may be mediated in part by the LRP5/β-catenin pathway [84].
Comprehensive reviews of genetically altered rabbit models of atherosclerosis have been provided by Brousseau and Hoeg [87] and Fan and Watanabe [92].

Porcine Models.
Porcine models are regarded as excellent animals for atherosclerosis research [88, 89, 106-108] and more recently for the study of CAVD [32]. Swine have many similarities to humans, including similar systemic hemodynamic variables and heart anatomy [89], with trilayered aortic valve leaflets. They also have lipid profiles [61] and lipoprotein metabolism [106,108] similar to humans, though their high-density lipoprotein (HDL) level does rise on hypercholesterolemic diets. The porcine genome is of a comparable size to that of humans and is homologous in both sequence and chromosomal structure [32]. Swine naturally develop atherosclerotic lesions, which are accelerated by high-cholesterol/high-fat diets and result in human-type lesions [108-111]. The size of swine also makes them ideal for studies that characterize leaflet mechanical properties and for studies requiring blood analysis. Size is also the primary limitation of swine, as there is increased complexity and expense in maintaining them. In many cases, this has led to the use of mini- [106,112,113] and micro- [114] swine breeds for atherosclerosis studies, instead of full-sized Yorkshire swine. Standard weights at six months are around 33 kg for Yucatan mini, 24 kg for Sinclair mini, and 20 kg for Yucatan microswine; these are significantly smaller than Yorkshire swine (approximately 115 kg at six months). Smaller breeds also develop human-type atherosclerotic lesions, ranging from early (3-4 months) [106,113] to advanced lesions (8 months) with a necrotic core, fibrous cap, haemorrhage, calcification, and medial thinning [112]. They may also be very good models for CAVD investigations, but have not been used for this purpose to date. A summary of the critical elements of porcine models of CAVD is presented in Table 2.

Diet-Induced Porcine Models.
Though porcine models have been principally used in atherosclerosis research, they have recently been employed to study CAVD. Typical hypercholesterolemic swine diets consist of a standard corn/soybean diet with an additional 1.5-2% cholesterol and 10-20% fat, sometimes with 0.7-1.5% sodium cholate (porcine diet additions are expressed as additional weight percentage). There is some indication that diets started prior to sexual maturity may be more effective at producing advanced disease [108]. Standard lipid profiles for studies ranging from 2 weeks to 12 months in length show total cholesterol of ∼300-500 mg/dL (2- to 8-fold increase), LDL of ∼200-500 mg/dL (4- to 11-fold increase), HDL increases of 1.5- to 4-fold, and heterogeneous triglyceride (TG) levels ranging from a twofold increase to a twofold decrease. Initial CAVD studies show evidence that swine on hypercholesterolemic diets develop human-type disease. Macroscopic focal areas of increased opacity are seen by six months on a hypercholesterolemic diet with [100] or without diabetes [101]. Small early calcific nodules are also seen histologically at 6-7 months of age, either with two weeks or six months of diet [101] or with a six-month diet plus diabetes [100]. Subendothelial lipid infiltration is seen only within the fibrosa layer, increasing with diet duration [100,101]; however, there is no frank inflammation seen after two weeks or six months of diet [101].
In normal valves it has been found that the VECs on the aortic side of the valve have an antioxidative, noninflammatory, and calcification-permissive phenotype [100]. After treatment with a hypercholesterolemic diet, the aortic VECs display a protective phenotype described as antiinflammatory, antiapoptotic, and anticalcific [101]. Notably, VECs on the aortic side are more responsive to the diet. More investigation is needed to explain the mechanistic foundations of side-specific VEC phenotypes. Atherosclerotic swine exhibit a standard human-type inflammatory response in the vasculature [108], so it is notable that the same is not observed in the valve. However, early human CAVD often does not have many inflammatory cells present [11]. In leaflets with mild disease changes, macrophages were found in only 20% of the lesions and T cells in 55% [13], with greater amounts appearing in stenotic valves [12,13,15], ranging from 59% [15] to 75% [13] of valves analyzed.

Genetic Porcine Models.
Naturally occurring mutations have also been exploited in swine to develop models of nondiet-induced hypercholesterolemia for atherosclerosis research; they may also be suitable as models of CAVD. These models have mutations in the LDLR and/or apolipoprotein genes. Some common models include (1) inherited elevation of low-density lipoprotein and hypercholesterolemia (∼250 mg/dL) from the mutant alleles Lpb5, Lpr1, and Lpu1 for ApoB and ApoU with normal LDLR [115,116]; (2) familial hypercholesterolemia (130-490 mg/dL) due to an LDLR mutation with altered lipid profiles [117,118]; or (3) familial hypercholesterolemia due to mutations relating to ApoB and LDLR (<300 mg/dL) [119]. These models are capable of achieving complex atherosclerotic lesions by two to three years of age without diet induction [115].

Other Animal Models.
Many other animals have been used historically or in certain niche areas as models of atherosclerosis. They include rats, hamsters, pigeons, guinea pigs, cats, dogs, and nonhuman primates and have been reviewed by Moghadasian et al. [88,107]. Occasionally these animals have been used for CAVD studies [75,120,121], and some may be suitable for studying certain comorbidities that directly relate to CAVD [122,123]. However, they are not currently standard models for CAVD.

Emerging CAVD Research Themes and the Role of Animal Models
Increased appreciation and understanding of CAVD as an active, cell-mediated process has renewed interest in valve (patho)biology and the possibility for therapeutic intervention. To this end, priority research areas were recently identified by the National Heart, Lung, and Blood Institute Aortic Stenosis Working Group [124] and include (1) improving understanding of the basic biology of CAVD, including signalling pathways and the roles of inflammation and biomechanics in disease initiation and progression; (2) determining the unique contributions of comorbidities to disease development; (3) developing highly sensitive imaging modalities to identify early and subclinical CAVD; and (4) determining the feasibility of earlier pharmacological intervention. Research in each of these areas currently benefits from animal models and will continue to require the use of appropriate models in the future, as discussed below.

Basic Biology of CAVD.
The biology of CAVD is complex, involving multiple cellular and molecular regulators, genetic and environmental cues, and interacting signalling cascades [125].
Clearly, animal models, notably transgenic mice, are important for dissecting specific signalling pathways and their functional consequences; this topic was recently reviewed [126]. Here we highlight inflammation and biomechanics as two factors that are associated with CAVD and likely influence signalling pathways, but have not been well studied. Animal models are well suited to the study of inflammation and biomechanics and thus will be critical tools to this end.

Inflammation.
Despite the strong association of CAVD with inflammation, the effects of inflammatory cytokines and related factors on valve pathobiology have not been thoroughly studied. Many animal models of CAVD demonstrate inflammation and may therefore be well suited to dissecting the role of inflammation in CAVD pathogenesis. Evidence of inflammation in mice includes significant macrophage [34, 37, 39, 41-43, 46, 47, 59] and T-cell [34,41] infiltration in some models. Others have seen increased expression of immune response genes [49], MMPs [43,58], and cathepsins [42,43,46]. Lastly, oxidative stress, as measured by superoxide production, has been exhibited in a number of models [37-40]. In rabbits, macrophages [74, 75, 77-79, 82, 83, 95], foam cells [75,79], T cells [82,83], and increased MMP-3 expression [103] are reported in valves, along with increased hsCRP levels in the blood [78,85]. In porcine leaflets, VECs on the disease-prone aortic surface progress from a noninflammatory phenotype in normal valves [100] to an antiinflammatory and antiapoptotic phenotype with early disease [101]. While inflammation has been clearly demonstrated in both animal and human histopathological data, studies are typically limited to characterization of the inflammatory response and not its mechanistic causes. Mouse models of CAVD are well suited for studying pathological mechanisms, as conditional knockouts and other genetic manipulations in this species are relatively straightforward. The Il1rn−/− mouse exemplifies this approach, as studies with this model have demonstrated a protective role for the interleukin-1 receptor antagonist in preventing CAVD onset [60]. However, mouse leaflets lack the trilayer morphology of human, rabbit, and swine leaflets, which may limit the relevance of findings in mice to human disease. In addition, care must be taken when employing the common Apoe−/− mouse model to specifically study inflammatory CAVD mechanisms, as recent work has implicated ApoE as a regulator of inflammation [67]. Rabbit and porcine valves have three layers and are large enough to allow study of biomechanical forces and their impact on valvular inflammation. For example, altered fluid flow-induced shear stress induces endothelial expression of VCAM-1, ICAM-1, BMP-4, and TGF-β1 [127], and elevated mechanical stretch upregulates MMP and cathepsin activity in porcine leaflets ex vivo [128]. Future animal studies will need to better characterize inflammatory processes at various stages of disease development and begin to dissect the regulatory mechanisms that link inflammation to ECM remodelling and valve dysfunction. The validity and utility of various animal models for studying inflammation in CAVD will be further clarified by improved temporal resolution of the pathogenesis of human valvular disease.

Biomechanics.
The aortic valve exists in a highly dynamic mechanical environment where it is exposed to significant blood flow-induced shear stresses, pressure loads, flexural deformations, and mechanical resistance from the ECM.
Each of these mechanical stimuli regulates valve cell biology and therefore likely contributes to both homeostasis and disease [129,130]. However, much of what is known about mechanoregulation of valve biology is based on in vitro or ex vivo studies and the use of normal, nondiseased tissue sources. Animal models are an important, but largely unexploited, source of diseased tissue for ex vivo and in vivo studies of valve biomechanics and mechanobiology in CAVD. To date, the mechanical properties of normal porcine valves have been extensively characterized at multiple length scales: whole leaflets [131-137], individual layers [138,139], or, at higher spatial resolution, focal regions within individual layers [140]. Normal porcine aortic valves have also been used to study the effects of aberrant mechanical forces on valve pathobiology ex vivo [127, 128, 131, 136, 141-143]. The similarity of the trilayer structure, size, and anatomy of the porcine valve to the human valve makes it an excellent model for structural and biomechanical studies. To date, the mechanical properties of diseased aortic valve tissue from humans, swine, or other species have not been reported, likely due in part to the limited availability of tissue samples and the lack of well-characterized large animal models of CAVD. Such information is critical to defining the role of ECM mechanics in disease regulation [144] and should be a focus of future research. The suitability of porcine valves for biomechanical testing lends promise to the use of porcine CAVD models for the study of biomechanical changes with disease progression. Rabbit models may also be suitable, but their smaller size in comparison to porcine valves makes them less desirable. Mouse valves lack the trilayer humanlike leaflet morphology and are difficult to test mechanically due to their small size (∼500 μm long and 50 μm thick) [62]. However, the speed at which advanced disease can be obtained in mice and the feasibility of studying the effect of genetic knockouts on valve mechanics do provide significant benefit. To this end, micromechanical test methods, like micropipette aspiration (MA), have recently been adapted to characterize mouse leaflet properties [145]. MA also shows promise for its ability to measure focal, microscale tissue properties. Recently, this technique was used to demonstrate that there is significant spatial heterogeneity in the local elastic modulus within individual layers of normal intact porcine aortic valves, but that, on average, the fibrosa layer is stiffer than the ventricularis layer, with distinctly stiff and soft regions in the fibrosa and ventricularis, respectively [140]. The use of MA with animal models of CAVD promises to enable high-spatial-resolution biomechanical testing that is able to address the focal nature of CAVD (a sketch of one common MA analysis is given below). Ultimately, the gap between in vitro/ex vivo biomechanics studies and human disease will need to be bridged by in vivo biomechanical models of CAVD. The valvular mechanical environment and the forces experienced by valve cells are defined in part by the external hemodynamic forces that shear and deform the valve leaflets throughout the cardiac cycle. Manipulation of these forces to test direct causal effects of mechanics on valve biology is challenging because of confounding factors that result from invasive manipulation of the valve or perivalvular tissues (e.g., hypertension). As a result, the causal effects of hemodynamic forces on CAVD have yet to be investigated in vivo.
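Returning to the MA measurements discussed above: aspiration data are commonly converted to an effective modulus with an elastic half-space model. The sketch below is a minimal illustration under that assumption, using the half-space relation often attributed to Theret and colleagues with a wall function of ~2.1; the studies cited above may use a different analysis, and all numbers are hypothetical.

```python
import math

# Illustrative conversion of a micropipette aspiration (MA) measurement
# to an effective Young's modulus using an elastic half-space model
# (E = 3 * a * dP * phi / (2 * pi * L), with wall function phi ~ 2.1).
# The model choice, phi value, and all numbers are assumptions for
# illustration; the cited studies may use a different analysis.

def half_space_modulus(pipette_radius_um: float,
                       suction_pa: float,
                       aspirated_length_um: float,
                       wall_function: float = 2.1) -> float:
    """Effective Young's modulus in Pa from MA geometry and pressure."""
    a = pipette_radius_um * 1e-6         # pipette inner radius, m
    length = aspirated_length_um * 1e-6  # aspirated tongue length, m
    return 3.0 * a * suction_pa * wall_function / (2.0 * math.pi * length)

# Hypothetical measurement: 25 um pipette, 500 Pa suction, 10 um aspiration.
print(f"E ~ {half_space_modulus(25.0, 500.0, 10.0):.0f} Pa")  # ~1.3 kPa
```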
Some advances in this area have been made in the study of the effects of hemodynamics on valve development, which were elegantly studied in the zebrafish by using microbeads to impair blood flow [146]. It is believed that aspects of adult valve disease may recapitulate developmental processes [147], and therefore the zebrafish and other developmental models may in fact have some utility for studying the link between hemodynamics and adult valve biology in vivo.

Contributions of Comorbidities to CAVD.
CAVD often coexists with other conditions, including hypercholesterolemia, diabetes mellitus, chronic renal disease, hypertension, metabolic syndrome, and disorders of calcium or phosphate metabolism [148-150]. It is likely that these comorbidities uniquely contribute to the initiation and/or progression of CAVD [124]. A variety of animal models have been used to investigate comorbidities and CAVD, most coupled with a diet that induces hypercholesterolemia to accelerate disease. A frequently studied comorbidity is familial hypercholesterolemia, an inherited form of hypercholesterolemia that can be caused in humans by more than 800 different mutations [151]. Familial hypercholesterolemic models do not require dietary induction to initiate and accelerate disease, and they provide a medium for deeper investigation into specific elements of the disease, especially lipoprotein metabolism. In mice, the Apoe−/− and Ldlr−/−;Apob100-only models contain mutations which mimic familial hypercholesterolemia [38,41]. As discussed in earlier sections, the WHHL rabbit [84,93,104,105], the St. Thomas Hospital rabbit [94], and a number of porcine models [115-119] also mimic familial hypercholesterolemia. Of these, only the Apoe−/− and Ldlr−/−;Apob100-only mouse models [38,41] and the WHHL rabbit [84] have been used to study CAVD to date. The impact of a variety of other comorbidities on vascular disease has been studied in various species in which hypercholesterolemia was induced by diet. The unique imbalances caused by diseases such as diabetes, chronic renal disease, hypertension, metabolic syndrome, and high serum minerals likely also influence CAVD progression, and their study could yield unique insights into disease pathways and inciting factors. To date, however, these models have only been applied to study CAVD in mice. For example, the Apoe−/− mouse model has been combined with surgically induced chronic renal disease (CRD) to recapitulate the accelerated CAVD which accompanies human CRD [152]. This 5/6 surgical nephrectomy model with a normal diet induces serum hypercholesterolemia after six months (∼400 mg/dL), as well as characteristically high serum phosphate, creatinine, and cystatin C levels. Valve leaflets stain strongly for microcalcification and macrophages, as detected by near-infrared imaging of targeted nanoparticles [47]. The Apoe−/−;5/6 nephrectomy model has also been combined with the TD.88137 diet and a knockout of the elastolytic proteinase cathepsin S (Ctss−/−). Knockout of cathepsin S reduced valvular calcification, macrophage accumulation, osteogenic and elastolytic activity, and elastin fragmentation [46]. Metabolic syndrome has been recapitulated in WT C57BL/6J mice with the use of a high-fat, high-carbohydrate diet which includes no added cholesterol and produces mild hypercholesterolemia [34], while exercise has been shown to mitigate the effects of the TD.88137 diet in Ldlr−/− mice [37].
The development of diabetes is common in many mouse models of aortic valve disease [34,39], but it is typically a side effect of the studied insult and rarely investigated as a primary initiator of disease [36]. CAVD comorbidities have not been studied in other species, although appropriate putative models do exist. For example, rabbit comorbidity models include diabetes [153], hypertension [154], metabolic syndrome [155,156], and high serum calcium [80,82], along with hypercholesterolemia. Porcine models have also combined diabetes [106, 108, 110-112] or hypertension [157] with diet-induced hypercholesterolemia. All of these models have been applied to study atherosclerosis and presumably would be appropriate and effective for studying CAVD, perhaps offering some advantages over murine models.

Development of Sensitive Imaging Modalities for Early CAVD Detection.
The ability to arrest or slow CAVD progression will likely require early disease detection, before significant calcium burden and hemodynamic dysfunction have occurred. For this reason, there is significant interest in developing sensitive imaging modalities for early-stage detection. Targeted fluorescent nanoparticles are one promising strategy. Commercially available nanoparticles specifically targeting cathepsin S, cathepsin K, cathepsin B, macrophages, VCAM-1, MMP-2/9, and hydroxyapatite have been imaged with intravital dual-channel fluorescence imaging in a number of animal studies of CAVD [42,43,46,47]. These and other molecular imaging techniques are promising approaches to detect early human CAVD, and clearly preclinical animal models will be critical to their validation. Beyond their application in clinical medicine, sensitive noninvasive imaging methods are also important for animal studies, as they enable tracking of disease progression and measurement of cardiac function in vivo over the duration of an experiment. Due to their relatively large size, imaging of rabbits and pigs is typically performed with clinical machines [80,82,114]. A possible future trend is to combine nanoparticles with standard imaging modalities for targeted detection. For example, iron oxide nanoparticles (MION-47) and MRI were used to detect early CAVD lesions, by targeting invading macrophages, in rabbits fed a low-cholesterol diet. However, in these experiments the MION-47 was also taken up by myofibroblasts in control and cholesterol-fed animals, and therefore more specific targeting may be needed [86]. Many mouse studies continue to employ clinical echocardiography using frequency ranges of ∼10-15 MHz to noninvasively examine the functional state of the mouse aortic valve [34, 37-41, 49]. Measurements such as transvalvular blood flow velocity, valve opening time, cusp separation distance, and valve opening area are routinely performed. However, due to the high heart rate of an adult mouse, these frequency ranges only capture 10-20 frames per cardiac cycle [158] (see the back-of-the-envelope check below). More recently, a high-frequency ultrasound system better suited to imaging the mouse heart has been developed [159]. This system operates at 20-40 MHz, offers ∼fourfold higher spatial resolution and ∼twofold higher temporal resolution than clinical echo machines [160], and has been validated by magnetic resonance imaging (MRI) as an accurate means to image functional and anatomical parameters of the mouse heart [161].
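To make the frame-rate limitation concrete, the following back-of-the-envelope sketch relates imaging frame rate to frames per cardiac cycle. The assumed mouse heart rate (~600 beats per minute) and the frame rates are illustrative assumptions, not values taken from the cited studies.

```python
# Back-of-the-envelope check of the frames-per-cardiac-cycle figures
# quoted above. The mouse heart rate (~600 bpm) and the frame rates
# are illustrative assumptions, not values from the cited studies.

def frames_per_cycle(frame_rate_hz: float, heart_rate_bpm: float) -> float:
    return frame_rate_hz / (heart_rate_bpm / 60.0)

mouse_hr_bpm = 600.0  # assumed adult mouse heart rate
for fps in (100.0, 200.0, 400.0):
    print(f"{fps:.0f} fps -> {frames_per_cycle(fps, mouse_hr_bpm):.0f} frames/cycle")
# ~100-200 fps yields the 10-20 frames/cycle noted above; a roughly
# twofold faster high-frequency system roughly doubles this figure.
```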
Increasingly, investigators are taking advantage of this improved technology to delve into the impacts on cardiac function in mouse models of CAVD [50,53,59]. Of note, however, is that the higher frequency range of this technique impairs penetration depth and can cause difficulties in successfully imaging aged and obese mice. An alternative is to use high-field (5-15 T) MRI to image mouse cardiac valves directly [39,43].

Evaluation of Pharmacological Interventions.
Currently, there is no medical treatment for CAVD, and clinical trials have yielded mixed, but generally discouraging, results that motivate the identification and development of new pharmacological interventions to slow or stop the calcific process in CAVD [162]. Clearly, animal models of CAVD are and will continue to be essential to evaluating therapeutics. As is clear from the summary below, the efficacy of various pharmacological interventions depends largely on the stage of the disease at the time of drug administration. Thus, translation of results from animal models to humans will be challenging (as CAVD is intentionally accelerated in most models), but must remain a priority.

HMG-CoA Reductase Inhibitors.
HMG-CoA reductase inhibitors (statins) were one of the first classes of drugs to be directly tested for the treatment of CAVD, with mixed results in both clinical studies [163-166] and NZW rabbit models [78,83,85]. In the rabbit studies, statins were administered either concomitantly with diet initiation [78,85] or 15 months after diet initiation, by which point there was established sclerosis but no calcification or functional abnormalities [83]. In the former case, eight weeks of simultaneous atherogenic diet and statin treatment resulted in decreased total cholesterol, leaflet thickness, OPN, macrophage infiltration, cell proliferation, and Runx2 expression compared with diet-only samples [78]. However, the high-cholesterol diet used in this study induced lesions throughout the leaflet, which does not mimic the fibrosa specificity of human disease. In a subsequent study using a lower-cholesterol diet, three months of statin treatment (initiated at the same time as the dietary insult) decreased the amount of calcification and increased eNOS expression [85]. In WHHL rabbits, atorvastatin attenuated hypercholesterolemia-induced calcification when administered concomitantly with diet, in part through the LRP5/β-catenin pathway [84]. In contrast, when statins were administered after disease had already been established in NZW rabbits (15 months after diet initiation), treatment decreased inflammatory cell infiltration and OPN expression but did not prevent calcification or collagen deposition, reduce lipid burden, or prevent the functional impairment that occurred without statin treatment [83]. Statins have not been extensively tested in mouse models of CAVD, though preliminary studies with pitavastatin in Il1rn−/− mice are reported to show no therapeutic benefit [60]. However, aggressive lipid lowering has been more directly achieved with the Reversa mouse model, which significantly normalizes plasma cholesterol levels with the use of a genetic switch (∼800 → ∼200 mg/dL) [39,40]. Early intervention after six months on the TD.88137 diet was able to reverse nearly all pathological signs of CAVD, but reversal at 12 months, after significant calcium burden had already developed, did not produce measurable improvements in aortic valve function.
In total, preclinical and clinical trials to date do not support statin therapy as a primary treatment to slow the progression of valvular heart disease [124]. The inconsistent results among the clinical trials likely reflect many differences, including enrolment criteria and timing of therapy [124], which are mirrored in the animal studies by the extent of disease at the time of treatment. Thus, it is important that animal models mimic the progression and severity of human disease and that experimental designs match the clinical situation as closely as possible, so that preclinical data can be translated to inform the design and interpretation of future clinical trials of statins or other pharmacological interventions.

Renin-Angiotensin-Aldosterone System Inhibitors.
The renin-angiotensin-aldosterone system (RAAS) controls blood pressure and fluid and electrolyte balances, and members of this family may have roles in normal physiological repair of the valve. Dysregulation of RAAS components can have proatherothrombotic effects, resulting in pathologic fibrosis and calcification of the aortic valve [167]. The major members of this family implicated in CAVD are ACE, AngII, and AT1R [20,23]. Aldosterone receptors have also been shown to be present in rabbit aortic valves [96]. AngII, produced by ACE and acting via the AT1R, may contribute to CAVD by promoting macrophage lipid accumulation and inflammation, increasing oxidative stress, impairing fibrinolysis, and stimulating production of the proteoglycan biglycan, which can retain lipoproteins [125]. ACE inhibitors, angiotensin receptor blockers (ARBs), and aldosterone receptor antagonists (ARAs) have begun to be investigated for the treatment of CAVD. ACE inhibitors have shown mixed results in retrospective clinical studies, with one showing a strong association between the use of ACE inhibitors and decreased valve calcification [168] and the other finding no effect [169]. In NZWR, treatment with the ACE inhibitor ramipril, administered concomitantly with induction of CAVD by VitD2 at 25,000 IU (four days per week) for eight weeks, retarded the development of CAVD [99]. The ARB olmesartan decreased macrophage and myofibroblast accumulation, decreased OPN, ACE, and Runx2 mRNA expression, and maintained endothelial integrity when administered to NZWR during the last four weeks of an eight-week 1% cholesterol diet treatment [95]. The ARA eplerenone showed no effect in clinical trials of moderate-to-severe stenosis [170]. However, administration of eplerenone to NZWR for the last four weeks of an eight-week 1% cholesterol diet elevated aldosterone levels without altering blood pressure and decreased macrophage accumulation, ACE expression, and calcification within the leaflets [96].

Antioxidants.
Reactive oxygen species (ROS) are present in both human [20,23] and rabbit [19] valves during CAVD. The antioxidative compounds tempol and lipoic acid (LA) were tested individually by concomitant administration with a CAVD-inducing diet of 0.5% cholesterol and VitD2 in NZWR. Tempol decreased superoxide presence but led to an increase in hydrogen peroxide, NAD(P)H oxidase subunits, and calcification. LA decreased both superoxide and hydrogen peroxide and led to a decrease in calcification and echogenicity of the valve. These preliminary results showed that ROS may potentiate calcification, particularly in relation to hydrogen peroxide and possibly NAD(P)H oxidase activity, and that pharmacological prevention of ROS may prevent or decrease calcific burden [19].
Conclusions and Recommendations
When properly chosen and implemented, animal models are a vital tool for the investigation of pathobiological mechanisms of CAVD and potential therapeutic interventions. Many animal models have been shown to recapitulate important aspects of human CAVD pathobiology, thus enabling detailed investigations that are otherwise unfeasible or impossible to conduct in humans. The development of new models and our improved understanding of the utility of existing models have quickly advanced the impact that animal model-based studies are making within this field. Mouse, rabbit, and swine models each offer species-specific advantages for the study of CAVD and represent a fundamental step forward in the translation of basic scientific research into clinically relevant and impactful knowledge. Moving forward, there is a need for improved model characterization to enable direct comparisons between multiple studies and to aid in the interpretation of findings as they relate to human CAVD. At a minimum, the following model parameters necessary for interstudy comparisons should be provided: (1) animal starting age and weight; (2) full diet composition, including % kcal contributions; (3) feeding regime (ad libitum, weight adjusted, or fixed amount) and growth rate; (4) full plasma lipid profiles (TC, HDL, TG, LDL); (5) background strain(s) used in generating knockout mice; and (6) age and weight at sacrifice (a minimal structured record mirroring these six parameters is sketched below). Models should have full histological characterization of lesion progression, including lesion composition and anatomical location, and valvular function should be impaired in advanced disease models. Some mouse and rabbit models acquire hemodynamically significant CAVD, but the same has yet to be shown in swine. However, some swine models develop advanced atherosclerotic lesions and vascular stiffening after eight months of a high-cholesterol diet [112], and therefore valvular stiffening and dysfunction may simply take longer to develop than has been studied to date [101]. In order to truly understand the relevance of animal models to human valvular disease, a better understanding of human CAVD pathogenesis is required. As new insights into human disease are revealed, animal models and the data they generate need to be critically reinterpreted. A priority is to determine whether the extreme hypercholesterolemia induced in many standard CAVD models faithfully represents the full progression of human disease. In the future, ageing models of disease may prove to be the best at inducing human-type disease, although time and expense remain prohibitive. To date, there are two aged mouse models [38,41] and one aged rabbit model [83] that maintain moderate cholesterol levels and induce functional disease. Swine models of familial or mild diet-induced hypercholesterolemia could provide the best match with cholesterol levels in humans and, when aged, may produce advanced disease. Perhaps the most promising application of well-characterized animal models is to investigate CAVD-initiating events and the mechanisms of its progression to late-stage functional impairment. Animal models clearly offer a significant advantage in this context, as it is difficult to clearly identify disease stage or control for confounding factors in human autopsy or transplant samples.
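As one way to make the minimum reporting list above actionable, the sketch below encodes the six parameters as a structured record. The field names, types, and example values are hypothetical, chosen only to mirror the enumerated items.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical structured record mirroring the six minimum reporting
# parameters enumerated above. Field names, types, and example values
# are illustrative only.

@dataclass
class CavdModelReport:
    species: str                                  # e.g., "mouse", "rabbit", "swine"
    start_age_weeks: float                        # (1) starting age ...
    start_weight_g: float                         #     ... and weight
    diet_composition: str                         # (2) full diet incl. % kcal
    feeding_regime: str                           # (3) ad libitum / adjusted / fixed
    lipid_profile_mg_dl: dict = field(default_factory=dict)  # (4) TC, HDL, TG, LDL
    background_strains: Optional[str] = None      # (5) knockout background strain(s)
    sacrifice_age_weeks: Optional[float] = None   # (6) age ...
    sacrifice_weight_g: Optional[float] = None    #     ... and weight at sacrifice

example = CavdModelReport(
    species="mouse",
    start_age_weeks=8.0,
    start_weight_g=22.0,
    diet_composition="TD.88137 Western diet (assumed ~42% kcal from fat)",
    feeding_regime="ad libitum",
    lipid_profile_mg_dl={"TC": 800, "HDL": 40, "TG": 120, "LDL": 600},
    background_strains="C57BL/6J",
)
print(example.species, example.lipid_profile_mg_dl["TC"])
```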
CAVD initiation and early progression have largely been ignored to date, but would provide the richest insight into therapeutic targets to arrest calcification before the putative "point of no return" when calcific burden cannot be reversed. Inflammation, ECM adaptation, and early procalcific signalling likely play important roles, but have not yet been studied in detail. There has been little focus on, for example, the mechanisms that regulate macrophage/T-cell infiltration, inflammatory signalling cascades, or layer-specific ECM changes and their role in modulating VIC phenotype. It is prospective investigations of molecular regulators of valve homeostasis and disease progression like these that will benefit from well-characterized and validated animal models and that are most likely to lead to the discovery of novel treatments for CAVD.
Evaluation of suspected malignant hyperthermia events during anesthesia

Background
Malignant hyperthermia (MH), a metabolic myopathy triggered by volatile anesthetics and depolarizing muscle relaxants, is a potentially lethal complication of general anesthesia in susceptible patients. The implementation of modern inhalation anesthetics that research indicates are less potent trigger substances, together with the recommended limitations on succinylcholine use, suggests there may be a considerable decline in fulminant MH cases. In the presented study, the authors analyzed suspected MH episodes during general anesthesia in patients referred to the Wuerzburg MH unit between 2007 and 2011, assuming that MH is still a relevant anesthetic problem in our days.

Methods
With approval of the local ethics committee, data from patients who underwent muscle biopsy and in vitro contracture testing (IVCT) between 2007 and 2011 were analyzed. Only patients with a history of a suspected MH crisis were included in the study. The incidents were evaluated retrospectively using anesthetic documentation and medical records.

Results
Between 2007 and 2011, a total of 124 patients were tested; 19 of them were referred because of suspected MH events. By IVCT, 7 patients were diagnosed MH-susceptible, 4 MH-equivocal, and 8 MH-non-susceptible. In a majority of cases, masseter spasm after succinylcholine had been the primary symptom. Cardiac arrhythmias and hypercapnia frequently occurred early in the course of events. Interestingly, dantrolene treatment was initiated in only a few cases.

Conclusions
MH is still an important anesthetic complication. Every anesthetist must be aware of this life-threatening syndrome at any time. The rapid onset of adequate therapy is crucial to avoid major harm and a possibly lethal outcome. Dantrolene must be readily available wherever MH-triggering agents are used for anesthesia.

Background
Malignant hyperthermia (MH) is mostly an inherited subclinical myopathy triggered by volatile anesthetics and depolarizing muscle relaxants in susceptible individuals, leading to a potentially lethal hypermetabolic reaction of skeletal muscle due to a disturbance of myoplasmic calcium homeostasis. Characteristic clinical signs of MH during general anesthesia include hypoxemia, hypercapnia, tachycardia, muscular rigidity, acidosis, hyperkalemia, and hyperthermia [1]. While the expected genetic predisposition for MH is stated to be as frequent as 1:2,000, the prevalence of MH episodes varies regionally from 1:10,000 to 1:220,000 [2,3]. In contrast to fulminant MH episodes, abortive courses might occur more frequently, but are difficult to diagnose due to their attenuated symptoms. Recent developments in anesthesiology apparently have led to a decrease in severe MH crises over the last years: halothane, a potent MH-triggering agent, is no longer used in clinical routine in western countries [4], and currently applied volatile anesthetics, e.g., isoflurane, sevoflurane, or desflurane, in some cases significantly decelerate the onset of an MH reaction compared to halothane [5,6] and are more likely to lead to abortive MH with attenuated symptoms. Furthermore, the recommended indications for succinylcholine, another possible MH-triggering agent, have been limited by international anesthesia societies [7]. Considering all these facts, the aim of the present study was to investigate whether MH is still a relevant anesthetic problem in our days.
Methods
With approval of the local ethics committee (application number: 263/11, ethics committee of the University of Wuerzburg), data from patients who were referred to the MH unit of the Department of Anesthesia and Critical Care of the University of Wuerzburg for diagnostic muscle biopsy and subsequent in vitro contracture testing (IVCT) between 2007 and 2011 were evaluated. Based on available patient documents and medical records, the intraoperative events were examined. To confirm the suspicion of an MH crisis, applied triggering agents, clinical symptoms (e.g., cardiac arrhythmia, increase of end-tidal carbon dioxide ≥ 45 mmHg, rise of the patient's body temperature ≥ 38.5°C), and possible use of dantrolene were analyzed. Besides that, the medical records were reviewed for severe postoperative complications, e.g., neurological deficits, disseminated intravascular coagulation (DIC), acute renal failure, or signs of rhabdomyolysis according to maximum creatine kinase (CK) levels. If blood gas analysis was performed, pH ≤ 7.2, base excess ≤ -5 mmol/l, and PaCO2 ≥ 50 mmHg defined a severe metabolic response. Only patients with a suspected MH episode during general anesthesia according to the estimation of the responsible anesthesiologist, a completed IVCT, and genetic analysis of the ryanodine receptor gene were included in the investigation. In referred patients, a diagnostic IVCT with increasing caffeine and halothane concentrations in separate tissue baths was performed according to the guidelines of the European MH Group [8]. A contracture ≥ 2 mN at caffeine 2 mM and halothane 0.44 mM led to the diagnosis MH susceptible (MHS). If significant contractures occurred after only one of the drugs, patients were classified as MH equivocal (MHE; MHEh for halothane, MHEc for caffeine). If no significant contracture was observed, the patients were rated MH non-susceptible (MHN). In addition, for each patient the clinical grading scale (CGS) by Larach and colleagues, which includes metabolic and muscular parameters as well as changes in cardiac rhythm and body temperature, was applied retrospectively. According to the grading scale, 3 to 15 points were assigned for each parameter and summed to obtain a score. This score allowed allocation to individual MH ranks (0 = MH almost never, 3-9 = MH unlikely, 10-19 = MH somewhat less than likely, 20-34 = MH somewhat greater than likely, 35-49 = MH very likely, > 50 = MH almost certain) [9].

Results
Between 2007 and 2011, a total of 124 patients underwent a muscle biopsy followed by IVCT at the MH lab of the University of Wuerzburg. Overall, 19 of these patients had been referred to the MH unit because of a suspected MH event during general anesthesia, based on the estimation of the attending anesthesiologists. In the remaining patients, MH diagnostics were initiated due to MH susceptibility in the family history, an unexplained rhabdomyolysis, or to exclude a myopathic disorder in association with persistently elevated CK levels.

Diagnostic findings
Muscle biopsy and IVCT detected MH susceptibility in 7 (37%; 7 male) of the 19 patients. In 8 patients (42%; 3 male, 5 female), MH susceptibility could be excluded. Muscle bundles of 4 patients (21%; 1 male, 3 female) developed a pathologic contracture only after exposure to halothane but not after caffeine (MHEh).
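The IVCT classification and the CGS rank allocation described in the Methods are simple threshold rules; the following minimal Python sketch encodes them directly from the thresholds given above. The function names and the handling of score boundaries left open by the text's ranges (scores of 1-2 and exactly 50) are illustrative assumptions, not part of any published software.

```python
# Minimal sketch of the IVCT classification and CGS rank allocation
# described in the Methods. Thresholds are taken from the text; the
# helper names and the boundary handling for scores the text's ranges
# leave unassigned (1-2 and exactly 50) are illustrative assumptions.

def classify_ivct(caffeine_contracture_mn: float,
                  halothane_contracture_mn: float,
                  threshold_mn: float = 2.0) -> str:
    """Classify an IVCT result at caffeine 2 mM / halothane 0.44 mM."""
    caffeine_pos = caffeine_contracture_mn >= threshold_mn
    halothane_pos = halothane_contracture_mn >= threshold_mn
    if caffeine_pos and halothane_pos:
        return "MHS"   # contracture with both drugs: susceptible
    if halothane_pos:
        return "MHEh"  # equivocal, halothane only
    if caffeine_pos:
        return "MHEc"  # equivocal, caffeine only
    return "MHN"       # no significant contracture: non-susceptible

def cgs_rank(score: int) -> str:
    """Map a raw clinical grading scale score to an MH likelihood rank."""
    if score >= 50:
        return "MH almost certain"
    if score >= 35:
        return "MH very likely"
    if score >= 20:
        return "MH somewhat greater than likely"
    if score >= 10:
        return "MH somewhat less than likely"
    if score >= 3:
        return "MH unlikely"
    return "MH almost never"

print(classify_ivct(2.4, 3.1))  # -> MHS
print(cgs_rank(42))             # -> MH very likely
```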
Interestingly, the initially applied CGS rated the probability of an MH crisis as "almost certain" (> 50 points) in 2 MHS patients and as "very likely" (35-49 points) in 5 MHS and 1 MHEh patient, while in 3 MHEh patients the likelihood was classified as "less than likely" (10-19 points). Of note, by CGS, MH was rated "greater than likely" (20-34 points) in 6 MHN patients and "less than likely" or "almost never" in 1 MHN patient each. Genetic screening detected mutations in the ryanodine receptor gene (Gly4037Alafs, Glu2174Ala, Val4234Leu) in 3 MHS patients. In 16 (84%) patients, the suspected MH event occurred between 2006 and 2010 (6 MHS, 3 MHEh, 7 MHN). The remaining 3 patients had been 10 years old or younger at the time of the incident (1992, 1995, 1998), and therefore muscle biopsy in these patients was delayed until the age of 16 years in accordance with our hospital standard operating procedures. Since the MH diagnostics were performed within the study period, these 3 patients were included in the evaluation, even though the applied triggers were halothane or enflurane, respectively. Interestingly, 2 MHS individuals with suspected MH in their history had undergone at least one uneventful general anesthesia in the past. The histopathological examinations revealed a myopathic tissue syndrome in combination with cell clumps, indicating a possibly neurogenic component, in 1 MHEh patient who had received succinylcholine as the sole trigger agent. In the other patients there was no evidence of a muscular pathology (Table 1).

Trigger agents and clinical presentations
Cardiac arrhythmias were reported in 42% of the 19 cases. Of these, an unexplained sinus tachycardia with heart rates between 90 and 135 beats per minute was documented in 38% of the patients (3 MHS), while in 62% (2 MHS, 1 MHEh, 2 MHN) tachyarrhythmias were observed. In the 11% of patients who received sevoflurane (1 MHS) or succinylcholine (1 MHEh) alone, no arrhythmias were seen. In the remaining suspected cases, the cardiac rhythm was not documented in the patients' medical records. An increase of end-tidal carbon dioxide > 45 mmHg during the course of anesthesia was noticed in 42% (5 MHS, 3 MHN). However, body temperature increases ≥ 38.5°C were reported in only 11% of the analyzed cases (1 MHS, 1 MHN). In 47% of the MH-suspected cases (7 MHS, 1 MHEh, 1 MHN), an increase of CK levels > 10,000 U/L following MH trigger application was observed. Despite the suspected MH diagnosis, dantrolene was administered in only 37% (5 MHS, 1 MHEh, 1 MHN) for treatment of the observed symptoms (Table 2). According to the medical records of the referred patients, no persistent or temporary complications (e.g., DIC, acute renal failure, or neurological deficits) were reported during recovery after the suspected MH episode.

Blood gas analysis
Interestingly, an arterial blood gas analysis to verify the assumed MH diagnosis was documented in only 37% of the patients with a suspected MH event. However, a relevant metabolic acidosis with pH ≤ 7.2, base excess ≤ -5 mmol/l, and PaCO2 ≥ 50 mmHg was observed in 21% (3 MHS, 1 MHN). Besides that, serum potassium levels were remarkably elevated (≥ 5 mmol/l) in 16% of the cases (2 MHS, 1 MHN) (Table 3).

Discussion
Even though MH is a rare complication of general anesthesia, the presented cases clearly demonstrate that this life-threatening muscular hypermetabolism is still a relevant risk requiring immediate and rigorous treatment by the responsible anesthesiologist to avoid serious harm to the patient.
After the first description of MH by Denborough, numerous cases of fulminant MH as well as in vitro investigations were published in the following years, identifying halothane and succinylcholine as potential MH-triggering agents [10]. While the metabolic deterioration in the course of an MH crisis induced by halothane seems to be a direct consequence of an interaction with the sarcoplasmic ryanodine receptor, the pathophysiological mode of action of succinylcholine has remained unknown. For instance, in vitro, succinylcholine increased halothane-induced muscular contractures of MHS patients, but no contracture could be observed after exposure to succinylcholine alone [11]. Even systemic application of succinylcholine could not reproducibly elicit an MH episode in susceptible swine [12,13]. In humans, according to an evaluation of the North American MH Registry and a recently performed European multicenter study, succinylcholine triggered MH in the absence of an inhalation anesthetic in only 0.7% and 1%, respectively, of the investigated cases [14,15]. Since the definitive underlying mode of action of succinylcholine in eliciting MH remains unclear so far, the pharmacological characteristics of this agent may offer a possible explanation of its role in inducing MH. Following intravenous application, succinylcholine activates the nicotinic acetylcholine receptor and provokes a local depolarization of the cell membrane. The transient depolarization of voltage-gated receptors, in combination with an influx of extracellular calcium via acetylcholine receptors, could lead to a significant increase in intracellular calcium concentrations, and after a certain threshold is exceeded, MH may occur in affected individuals. In this context, muscular fasciculation and rigidity caused by succinylcholine were considered to be causal for MH. Consequently, a masseter spasm following succinylcholine was postulated to be an early sign of an imminent MH episode. However, the specificity of this clinical sign is limited due to its subjective appraisal and the fact that jaw tightness is a common side effect of succinylcholine but is associated with MH susceptibility in only half of the patients [16]. Similar results were obtained in our investigation: MH susceptibility was confirmed in only 50% of the suspected MH cases in which a succinylcholine-induced masseter spasm was noticed. Interestingly, histological examination of 1 MHEh patient who had received succinylcholine alone revealed a suspicious myopathological finding. Although neuromuscular disorders are common in MHE patients [17], it remains unclear whether these muscular alterations were responsible for the increased sensitivity to succinylcholine in this patient. Generally, the likelihood of succinylcholine-induced MH seems to be extremely low; however, there is little doubt that combination with a volatile anesthetic potentiates the onset and the clinical symptoms of an MH event [18]. Remarkably, despite the possibly serious side effects like MH, hyperkalemia, or cardiac arrest, succinylcholine was actually applied to secure the airway in 79% of the referred patients. In part, this approach was reasonable due to the higher risk of aspiration in cases of trauma or abdominal surgery. However, according to published guidelines, use of the non-depolarizing muscle relaxant rocuronium, if needed followed by application of sugammadex to reverse the neuromuscular blockade, might be an adequate alternative to avoid succinylcholine-associated adverse effects [7,19].
In contrast to succinylcholine, the role of all inhalation anesthetics used in daily clinical routine in the development of an MH crisis is beyond dispute. However, depending on the applied volatile anesthetic, the time interval between induction of anesthesia and clinical symptoms of an MH episode seems to vary. For instance, Hopkins and colleagues reported that in susceptible patients the onset of MH was significantly faster after halothane exposure compared to enflurane or sevoflurane [5]. Equally, fulminant MH episodes after isoflurane, sevoflurane, or desflurane seem to occur with temporal delay [20,21], while halothane may induce MH within minutes [5]. In MHS animals, similar results were seen after intramuscular injection of halothane or sevoflurane: the induced local hypermetabolic responses, measured by increases in local muscular lactate and carbon dioxide pressure, were more pronounced after halothane than after sevoflurane application [22,23]. Furthermore, in vitro, the effect on muscular contractures of MHS muscle bundles varies between halothane and modern volatile anesthetics at equivalent concentrations [24]. These different clinical presentations of MH following volatile anesthetic application might be due to differences in the calcium-releasing potency of these agents. For example, sarcoplasmic calcium release at the cellular level was significantly smaller after sevoflurane or desflurane exposure compared to equimolar halothane concentrations [25,26]. In the anesthetic events analyzed in the present evaluation, MH episodes were induced by established MH triggers like halothane or isoflurane as well as by modern volatile anesthetics, e.g., sevoflurane or desflurane. Although in the majority of cases inhalation anesthetics were combined with succinylcholine, and in only one case was sevoflurane applied alone, our findings emphasize the MH trigger potency of newer volatile anesthetics. Besides masseter spasm, cardiac arrhythmias are a further early symptom of imminent MH. Similar to a retrospective analysis from the United States, where the incidence was estimated at 40% [14], in the presented investigation the occurrence of unexplained cardiac alterations was 42%. On closer examination, the incidence of cardiac symptoms was even higher in the MHS group, with either sinus tachycardia or tachyarrhythmia as the leading signs. The low incidence of documented metabolic acidosis might be attributed to the failure to obtain arterial blood gas analyses in the acute phase of the MH reaction or to dantrolene pretreatment. For example, one patient's blood gas analysis was not performed until arrival in the intensive care unit and after treatment with dantrolene, showing an unremarkable acid-base status, while the intraoperative end-tidal carbon dioxide in this patient had risen to a relevant 56 mmHg. Overall, in only 37% of the MH-suspected cases was a blood gas analysis conducted to verify the suspected diagnosis. This course of action is remarkable, since the presence of an acidosis supports the reasonable suspicion of MH in these cases. Hyperthermia is a dramatic but often late sign of MH, reflecting the progressing metabolic breakdown in affected individuals. Hence, temperature monitoring during general anesthesia is recommended if MH triggers are used, since in a number of cases hyperthermia was the only sign of MH [14]. Fulminant MH episodes may be marked by a rapid increase in body temperature at a rate of 1-2°C every five minutes [27].
Strikingly, a remarkable hyperthermia with an increase in core temperature ≥ 38.5°C was noticed in only 11% of the suspected MH cases (1 MHS and 1 MHN). The overall low incidence of core temperature rises in the presented study might be attributable to the initiated dantrolene treatment or to the possible absence of temperature monitoring. The pathological changes during an MH crisis are based on an uncontrolled increase of myoplasmic calcium, resulting in an ongoing skeletal muscle contracture and loss of cellular integrity, leading to hyperkalemia and rhabdomyolysis [28]. Although the surgical trauma itself might cause a significant increase in CK levels, postoperative unexplained excessive hyperCKemia should lead to a diagnostic workup to exclude MH susceptibility as the underlying pathology. The reason for the remarkable CK increase up to 24,732 U/L in one of the MHN patients following succinylcholine remains unclear. A not yet diagnosed myopathy could not definitely be excluded, but based on the advanced age of the patient and the inconspicuous histological findings, it seems very unlikely. In contrast to the estimate that nearly 70% of MH families carry mutations in the ryanodine receptor gene [29], the genetic prevalence of 27% in the analyzed MHS cases was overall low. Of note, even though the Val4234Leu variant of one MHS patient has recently been mentioned in the context of a novel exome sequencing method for MH-relevant mutations [30], none of the detected genetic variants has been accepted as causative for MH according to the European MH Group database, which so far includes 31 approved mutations out of the more than 200 identified ryanodine receptor gene variants [31]. However, it is important to mention that the absence of a causative mutation does not reliably exclude MH susceptibility. To confirm or exclude MH, a muscle biopsy followed by an IVCT must be carried out in these patients [32]. Since the introduction of dantrolene into clinical use in the late 1970s, a causal treatment of MH has been available. The mode of action of this drug is based on inhibition of sarcoplasmic reticulum calcium release without increasing the reuptake of calcium ions into the sarcoplasmic reticulum [33]. According to current guidelines, application of dantrolene is an essential part of the treatment of an MH crisis [34,35]. However, only 37% of the patients in the presented investigation received dantrolene for causal MH therapy. Nevertheless, the importance of consistent dantrolene treatment is absolutely clear [36], even if the hypermetabolic state in some of the presented cases was already terminated by discontinuation of the MH trigger substances. Even in patients surviving fulminant MH episodes, several reports have documented severe complications, e.g., acute renal failure from rhabdomyolysis, DIC, congestive heart failure, or intestinal ischemia due to the uncontrolled metabolic reaction and myocyte death [27]. Fortunately, the review of the medical records of the referred patients did not detect any serious harm to the patients, or importantly delayed recovery, after the MH episodes. To draw conclusions about the likelihood of MH among the suspected incidents, the "Clinical Grading Scale" (CGS) established by Larach and colleagues was used to assess clinical and metabolic parameters, e.g., muscle rigidity, rhabdomyolysis, acidosis, increases in body temperature, and cardiac arrhythmias [9].
The validity of the CGS may be reduced due to the limited availability of complete data sets, and hence it often does not satisfactorily correlate with the IVCT results [37]. The false-negative as well as false-positive diagnoses obtained by CGS calculation in our analysis are likely a result of the fragmentary medical records available. Thus, sole evaluation of the CGS does not seem adequate to prove MH susceptibility. Finally, anesthesiologists must be aware that uneventful previous general anesthesia does not exclude MH susceptibility [14]. For instance, two of the MHS patients reported a history of uneventful general anesthesia with trigger exposure in the past. The reason why some patients develop MH after the first exposure to MH-triggering agents, while others do not, still remains unclear and might be explained by an individual cellular compensation mechanism lowering myoplasmic calcium concentrations.

Conclusions
The analysis of the presented data might be limited by the partly incomplete documentation as well as by individual interpretation. Nevertheless, in conclusion, MH is still a relevant complication these days, and every anesthesiologist must be prepared to recognize the symptoms of an MH crisis and to start sufficient treatment. While fulminant courses of MH are easy to diagnose, abortive presentations with solitary or attenuated symptoms are more difficult to detect and pose an enormous challenge to the attending anesthesiologist. The initiation of adequate and consistent treatment, including the application of dantrolene and the termination of MH trigger application, is essential for the patient's prognosis and survival. Besides that, every patient with a suspected MH event should be referred to an MH center for further counseling.
Application of Modern Clinical Risk Scores in the Global Assessment of Risks Related to the Diagnosis and Treatment of Acute Coronary Syndromes in Everyday Medical Practice

This article presents an overview of contemporary risk assessment systems used in patients with myocardial infarction. The full range of risk scales, both those recommended by the European Society of Cardiology and others published in recent years, is presented. Scales for assessing the risk of ischemia/death as well as for assessing the risk of bleeding are presented. A separate section is devoted to systems assessing the integrated risk associated with both ischemia and bleeding. In the first part of the work, each of the risk scales is described in detail, including the clinical trials/registries on the basis of which they were created, the statistical methods used to develop them, and the specification of their individual parameters. The next chapter presents the practical application of a given scale in the patient risk assessment process, the timing of its application on the timeline of myocardial infarction, and a critical assessment of its potential advantages and limitations. The last part of the work is devoted to potential directions for the development of risk assessment systems in the future.

Introduction
The 21st century is a time of tremendous development in medicine and a period of implementing newer pharmacological and technological solutions in treatment. Miniaturization and advances in electronics and technology have allowed the use of interventional treatment methods in modern medicine on an unprecedented scale. Cardiology has been a leader in this field for many years. There has been a huge evolution in the treatment of heart disease over the past decades. The best example of this is myocardial infarction (MI). The process of treating this disease has gone from the use of simple drugs and an obligatory several-day bed-rest regimen, through targeted fibrinolytic drugs, to modern, technologically advanced methods of interventional cardiology allowing the patient to leave the hospital after a few days [1]. The simultaneous rapid development of cardiology and technology resulted in the creation of special logistic systems creating a network of medical units focused on optimizing the treatment process of patients with MI [2]. All the above-mentioned achievements make the modern methods and results of treating patients with MI radically different from those used and achieved at the beginning of the 21st century, which has directly translated into a decrease in mortality rates due to MI over the last decades [3,4]. Unfortunately, despite the undoubted benefits of using modern methods of treatment, the other side of the coin should also be seen. The development of medical technology, the implementation of new devices, and an increasingly aggressive approach to the treatment of MI are inevitably associated with the appearance of a completely new type of adverse events and complications [5]. Of course, in the final analysis of the population, the sum of benefits exceeds the potential threats; however, when analyzing individual cases, complications and adverse events may be the reason not only for prolonged hospitalization but also for the patient's death. Another noteworthy aspect is the suddenness of events and the complete unpredictability in the treatment of MI.
A clear distinction should be made between the risk of elective procedures (performed as part of the treatment of stable coronary artery disease) and the risk of procedures performed in the setting of MI. The latter are burdened from the outset with a much higher risk of death and adverse events, resulting from the severe clinical condition of the patient as well as from aspects not related to the patient's condition (the time of day when procedures are performed, the experience of the staff on duty, staff fatigue). In some cases, the use of interventional cardiology methods, despite indisputably proven indications and benefits, may paradoxically result in serious complications and actually shorten the patient's life. Currently, there is a lack of reliable tools that could be used by a physician making the sometimes very difficult decision to apply or withdraw from interventional treatment [6]. The creation of a simple tool enabling a realistic assessment of the risk of complications and death would be very helpful for the physician during the clinical evaluation of the patient. The above facts were also noticed by the authors of the guidelines of the European Society of Cardiology (ESC) regarding the diagnosis and treatment of both ST-elevation myocardial infarction (STEMI) [7] and myocardial infarction without persistent ST-segment elevation (NSTEMI) [8]. In these documents, increasing importance is attached to the assessment of the risk of death of the patient, both during hospitalization and in the period after discharge from the hospital. The body of a patient suffering from a heart attack treated with modern pharmacotherapy is a field of struggle between two opposing forces. On the one hand, there is a state of local excessive coagulation in the coronary artery associated with the release of pro-inflammatory and pro-thrombotic factors from the ruptured plaque, causing the formation of a blood clot blocking flow in the coronary artery [9]. On the other hand, there is an opposing force associated with the pharmacological treatment used. The final result and effectiveness of treatment depend on the balance of the above factors. Of course, modern treatment of MI involves not only pharmacotherapy but also interventional cardiology methods, allowing for the implantation of a stent into the coronary artery as well as mechanical removal of the thrombus from the artery [10]. By analogy with the two competing forces described above, it is possible to distinguish two main threats to the life of a patient with MI: myocardial ischemic events (primary on admission or recurrent during hospitalization) and massive bleeding. Modern risk assessment systems for patients with MI were constructed on a similar basis. The aim of this study was to present contemporary models of risk assessment in patients treated for MI, followed by a critical analysis of the possibility of using these risk scales in everyday medical practice, pointing to their benefits and gaps as well as showing directions of development for new ones.

Material and Methods

This paper is a systematic review of the literature on clinical scores related to both the risk of an ischemic event/death and the risk of bleeding during the treatment of all forms of acute coronary syndrome (ACS). First, the scores recommended by the ESC in the guidelines for the treatment of MI will be discussed.
Second, we will present the latest systems for assessing the risk of death or major adverse cardiovascular events (MACE), which are not recommended by the ESC and are based on large multicenter clinical trials conducted exclusively in the last decade. Data on the scales recommended by the ESC were obtained from the recent STEMI/NSTEMI treatment guidelines [7,8]. In order to find other risk scores, PubMed (https://pubmed.gov, accessed on 26 August 2021) was used to search for English-language risk models developed in the last decade using data from patients with MI. This paper does not describe risk assessment models based on small, single-center studies or risk scales used in the assessment of patients with stable coronary artery disease (CAD) undergoing invasive procedures.

Results

The first attempts to create a model for assessing the risk of death in patients treated for MI began in the 1960s [11][12][13]. These models were developed at a time when reperfusion therapy was not yet used. Subsequent scores were developed with the introduction of widespread use of fibrinolytic drugs [14][15][16][17]. Some of these scores took the form of simple risk stratification schemes that could be applied directly at the patient's bedside without the use of a computer [14,15]. Some of these systems were developed using general measures of severity of illness, such as the Acute Physiology and Chronic Health Evaluation [16], whereas others were based on expert opinion and prior investigation [14]. The 21st century is the era of medicine based on the principles of evidence-based medicine (EBM) and comprehensive guidelines indicating methods of diagnosis and treatment of particular diseases. Contemporary clinical scores applicable to risk assessment in a patient with ACS are presented in the ESC guidelines for the treatment of STEMI [7], NSTEMI [8], and the guidelines for revascularization [18]. The 2017 STEMI treatment guidelines emphasize the importance of introducing scores to assess the risk of death or recurrence of ischemic events into everyday clinical practice. The suggested scales are the TIMI score and the GRACE score. However, the guidelines do not give these suggestions an official class of recommendation.

Use of the TIMI Scale in the Risk Assessment of Patients with STEMI

The TIMI (thrombolysis in myocardial infarction) score is a clinical score developed 21 years ago to assess the short-term risk of patients with STEMI at hospital presentation [19]. The score assesses the risk of 30-day mortality after a heart attack. The TIMI risk score is simple and quick to use, convenient in everyday practice, and possible to use at the patient's bedside. It does not require the use of computer devices or internet access. The TIMI score was based on a subsequent analysis of the Intravenous nPA for Treatment of Infarcting Myocardium Early II (InTIME II) trial [20], a multicenter, worldwide trial which enrolled patients with STEMI within 6 h of symptom onset. Patients meeting the inclusion criteria were randomly assigned to fibrinolytic therapy with either lanoteplase or alteplase. Vital status was assessed through 30 days and every 6 months. Using the multivariate regression analysis method, 16 independent predictors of mortality were identified. The discriminatory capacity of the full 16-variable regression model was assessed using the area under the receiver operating characteristic (ROC) curve (c statistic) as an index of model performance (a c statistic of 1 indicating perfect discrimination) [21].
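Since nearly every score discussed below is judged by its c statistic, a minimal, purely illustrative sketch (in Python, with invented data) of how this index is computed may be useful; for a binary outcome, the c statistic is simply the area under the ROC curve.

```python
# Illustrative only: computing a c statistic (area under the ROC curve)
# for a mortality model. All data below are invented for demonstration.
from sklearn.metrics import roc_auc_score

observed = [0, 0, 1, 0, 1, 1, 0, 1]        # 1 = died, 0 = survived
predicted_risk = [0.05, 0.10, 0.60, 0.20,  # model-predicted mortality risks
                  0.80, 0.45, 0.15, 0.70]

c_statistic = roc_auc_score(observed, predicted_risk)
print(f"c statistic: {c_statistic:.3f}")  # 0.5 = chance, 1.0 = perfect discrimination
```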
The above-described multivariate analysis model demonstrated a strong discriminatory capacity (c statistic 0.784). Ten variables, accounting for 97% of the predictive capacity of the multivariate model, constituted the TIMI risk score. The TIMI score is a simple arithmetic sum of the point values assigned to the independent risk factors: STEMI of the anterior wall of the heart or left bundle branch block (LBBB) in the electrocardiography (ECG) recording (1 point), age 65-74 years (2 points), age ≥75 years (3 points), systolic blood pressure (SBP) <100 mmHg (3 points), heart rate >100 bpm (2 points), Killip class II-IV (2 points), diabetes or history of hypertension or angina (1 point), weight <67 kg (1 point), and time to treatment >4 h (1 point).

Use of the TIMI Risk Score in the Risk Assessment of Patients with NSTEMI

The paper introducing the clinical application of the TIMI score in the short-term risk assessment of patients with NSTEMI was published in 2000 [24]. The methodology, statistical analysis, and the way of creating the score were analogous to the TIMI score described above, used to assess the risk of patients with STEMI. The score was based on the analysis of two phase 3, international, randomized, double-blind trials: the Thrombolysis in Myocardial Infarction (TIMI) 11B trial [25] and the Efficacy and Safety of Subcutaneous Enoxaparin in Unstable Angina and Non-Q-Wave MI trial (ESSENCE) [26]. There were four study cohorts: one test cohort and three validation cohorts. A total of 1957 patients with NSTEMI/UA (unstable angina) were assigned to receive unfractionated heparin (test cohort) and 1953 to receive enoxaparin in TIMI 11B; 1564 and 1607 were assigned, respectively, in ESSENCE. The three validation cohorts were the unfractionated heparin group from ESSENCE and both enoxaparin groups. In both studies, for the purpose of creating the TIMI score, the following composite endpoint was adopted: all-cause mortality, new or recurrent MI, or severe recurrent ischemia prompting urgent revascularization. Endpoint incidence was assessed in the study cohort population within 14 days of randomization. Initially, 12 variables were selected as predictors of endpoint occurrence. In the second stage, each factor was tested independently in a univariate logistic regression model; those that achieved a significance level of p < 0.20 were selected for testing in a multivariate stepwise logistic regression model. The final TIMI score consists of seven factors: age 65 years or older, at least three risk factors for coronary artery disease, prior coronary stenosis of 50% or more, ST-segment deviation on the ECG at presentation, at least two anginal events in the prior 24 h, use of aspirin in the prior 7 days, and elevated serum cardiac markers. The TIMI risk score is an arithmetic sum of the point values assigned to particular risk factors, where a value of 1 is given when a factor is present and 0 when it is absent. For the point on coronary stenosis, a value of 0 was assigned if no cardiac catheterization had been previously performed or if a prior cardiac catheterization revealed no coronary stenoses of 50% or more; a value of 1 was assigned if a prior cardiac catheterization revealed at least one coronary stenosis of 50% or more. The frequency of the assumed composite endpoint increased significantly as the TIMI risk score increased in the test cohort in TIMI 11B: 4.7% for a score of 0/1; 8.3% for 2; 13.2% for 3; 19.9% for 4; 26.2% for 5; and 40.9% for 6/7. The c statistic for the model in the test cohort was 0.65. The pattern of increasing event rates with increasing TIMI risk score was confirmed in all three validation groups. The use of the TIMI scale in the risk assessment of patients with NSTEMI is currently not recommended.
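For illustration, the sketch below (Python; the variable names are invented for readability) encodes the seven-factor TIMI score for NSTEMI/UA described above, together with the 14-day event rates reported for the TIMI 11B test cohort.

```python
# A minimal sketch of the TIMI risk score for NSTEMI/UA as described above.
# Each of the seven factors contributes 1 point when present.

TIMI_NSTEMI_FACTORS = [
    "age_65_or_older",
    "three_or_more_cad_risk_factors",
    "prior_coronary_stenosis_50_percent_or_more",
    "st_deviation_on_ecg",
    "two_or_more_anginal_events_in_24h",
    "aspirin_in_prior_7_days",
    "elevated_cardiac_markers",
]

# 14-day rates (%) of death/MI/urgent revascularization, TIMI 11B test cohort.
EVENT_RATE_BY_SCORE = {0: 4.7, 1: 4.7, 2: 8.3, 3: 13.2,
                       4: 19.9, 5: 26.2, 6: 40.9, 7: 40.9}

def timi_nstemi_score(patient: dict) -> int:
    """Sum one point for each factor that is present (True)."""
    return sum(1 for factor in TIMI_NSTEMI_FACTORS if patient.get(factor, False))

score = timi_nstemi_score({
    "age_65_or_older": True,
    "st_deviation_on_ecg": True,
    "elevated_cardiac_markers": True,
})
print(score, EVENT_RATE_BY_SCORE[score])  # 3 13.2
```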
The 2015 guidelines for the treatment of NSTEMI also mention the TIMI score; however, the latest 2020 guidelines [8] explicitly recommend other scales to assess the risk of these patients.

Use of the GRACE Risk Score in Assessing the Risk of Patients with MI

Another scale recommended by the ESC for risk assessment in patients with MI is the GRACE risk score. This score applies to both STEMI and NSTEMI patients. Currently, due to its very good discriminative performance, the GRACE score is the basic score recommended in the assessment of a patient with NSTEMI. In the ESC guidelines, it received a class IIa, level of evidence (LOE): B recommendation. By design, it assesses the risk of myocardial ischemia and is based on the analysis of the GRACE registry [27]. GRACE (the Global Registry of Acute Coronary Events) is a large, prospective, multinational observational study of patients hospitalized with ACS. A total of 18 cluster sites in 14 countries in North America, South America, Europe, Australia, and New Zealand collaborated in GRACE. Patients were followed up at 6 months after hospital discharge to identify recurrent coronary events, use of various medications, and mortality. Importantly, this registry included unselected patients with all forms of MI, which made it a very good reflection of the general population. Over time, several successive versions of the GRACE score have been developed to assess both the short-term and long-term risk of death in patients with MI. Initially, these scores were used as a printed nomogram; later, internet calculators and mobile applications were developed. In fact, all subsequent models of the GRACE score are based on the same or a similar group of risk factors assessed on admission to the hospital; however, the weight of these variables differs depending on the version of the score. The original version of the GRACE risk score was used to assess the in-hospital risk of death in a patient with MI [28]. The score was based on the analysis of data from the GRACE registry (11,389 patients with MI, both STEMI and NSTEMI, enrolled in the study in the period from 1 April 1999 to 31 March 2001). The score was created based on a model of a linear relationship between a given predictor and the risk of death. These relationships were assessed using the methods of multivariate logistic regression analysis. Finally, eight variables accounting for 89.9% of the prognostic information were identified: four continuous variables (age, SBP, heart rate, and serum creatinine); three binary variables (cardiac arrest on admission, increased concentrations of cardiac biomarkers, and ST-segment deviations on admission); and one categorical variable (Killip class on admission). Each risk factor is assigned an appropriate number of points; for the continuous variables, the measured value falls within a specific range, and the corresponding number of points is assigned. The sum of the individual values of the risk factors allows the risk of in-hospital death to be read as a percentage. For example, a sum of points below 60 is related to a risk of ≤0.2%, a value of 140 points to 2.9%, 200 points to 18%, and ≥250 points to a risk of ≥52%. The c statistic of this model is 0.84, indicating excellent discrimination. This model performed well in all major subgroups.
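As a rough illustration of how a total GRACE point value maps to in-hospital mortality, the sketch below linearly interpolates between only the four example values quoted above; the published nomogram is far more granular and non-linear, so this is in no way a substitute for it.

```python
# Crude, illustrative mapping of a GRACE point total to approximate in-hospital
# mortality, using only the four example anchors quoted in the text above.
# The real nomogram must be used for any genuine risk estimate.
import numpy as np

anchor_points = [60, 140, 200, 250]
anchor_risk_pct = [0.2, 2.9, 18.0, 52.0]

def approx_inhospital_mortality(total_points: float) -> float:
    """Linear interpolation between published anchor values (illustrative only).
    Totals outside 60-250 are clamped to the endpoint risks by np.interp."""
    return float(np.interp(total_points, anchor_points, anchor_risk_pct))

print(f"{approx_inhospital_mortality(170):.1f}%")  # between the 140- and 200-point anchors
```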
The c statistic was 0.83 for patients with STEMI compared with 0.82 for NSTEMI patients, 0.81 for patients with and 0.83 for patients without elevated cardiac markers at presentation, and 0.78 for patients 65 years or younger compared with 0.82 for patients older than 65 years. External validation of the model was performed on a subsequent sample of 3972 patients from the GRACE registry (c statistic 0.85) and on a data set from the GUSTO IIb study (c statistic 0.79), which confirmed excellent discrimination. This scale was presented in the form of a printed nomogram, which allows for quick calculation of the risk of in-hospital death in patients with MI. The next version of the GRACE score allows for the assessment of the patient's risk of death within 6 months of hospital discharge [29]. This scale, like the previous one, was based on the analysis of data from the GRACE registry (15,007 patients with MI, both STEMI and NSTEMI, enrolled in the study in the period from 1 April 1999 to 31 March 2002). As the endpoint of the study, the incidence of all-cause death within 6 months of hospital discharge following an MI was assessed. The model was created using a multivariable Cox proportional hazards regression backward elimination technique. The model was validated on the consecutive 7638 patients enrolled in GRACE between 1 April 2002 and 31 December 2003. The c statistics for the study cohort and the validation cohort were 0.81 and 0.75, respectively, which confirmed the good discriminating value of the scale. The c statistics for different types of MI (STEMI, NSTEMI, UA) were also similar. The finally developed risk prediction tool for all forms of ACS included nine variables: age, history of MI, history of congestive heart failure (CHF), pulse rate, SBP, serum creatinine concentration, cardiac biomarker concentration, ST-segment depression, and not having PCI performed in hospital. The continuous variables were grouped into ranges; each range of a continuous variable and each binary variable was assigned an appropriate number of points. The score value is calculated by summing up the values of the individual variables. The accompanying reference nomogram shows the risk of death corresponding to the total risk score. It should be noted that this scale can be used only when the patient is discharged from the hospital (only then can the occurrence of all risk factors be assessed), while the point values of the score for the continuous variables are calculated from the parameters or laboratory results on admission to the hospital (the first laboratory tests performed). The next version of the GRACE scale was created only in electronic form (a calculator on a website, an application for a mobile phone) [30]. It is identified as GRACE 1.0. This scale is a combination and extension of the existing models of the GRACE scale, because it allows for the calculation of the risk of death, and of death or non-fatal MI, (1) in the in-hospital period, (2) in the period from admission to 6 months after discharge, and (3) in the period from hospital discharge to the 6th month of follow-up. In total, it makes it possible to calculate six different risk values. This scale, like the previous one, was based on the analysis of data from the GRACE registry (21,688 patients with MI, both STEMI and NSTEMI, enrolled in the study in the period from April 1999 to September 2005). The study adopted two main endpoints: all-cause death, and the composite measure of death or non-fatal MI, during admission to hospital or after discharge.
A Cox regression model was used to calculate hazard ratios and 95% confidence intervals to examine the individual relations between a particular predictor and death, and death or MI, during follow-up. The final models for the risk of death, and of death or MI, were constructed using multiple Cox regression with backward analysis. The c statistic for the final model was 0.70 (for the composite endpoint of death or MI from hospital admission to 6-month follow-up) and 0.82 (for the endpoint of all-cause death from hospital admission to 6-month follow-up). The model was subjected to internal validation on the consecutive 22,122 patients enrolled in the GRACE cohort, as well as external validation using the GUSTO IIb data set of 12,142 patients with ACS, confirming the very good discriminating value of the scale. The c statistics for internal validation were 0.81 and 0.73 (for all-cause death and for death or MI, from hospital admission to 6-month follow-up, respectively). Finally, the prepared risk calculator (in electronic or online form) includes the following individual risk factors: (1) for the risk of death, and of death or MI, in the in-hospital period and in the period from admission to the sixth month of observation: age, heart rate, SBP, serum creatinine concentration, Killip class, ST-segment deviation, cardiac arrest at admission, and elevated cardiac enzymes/markers; (2) for the risk of death, and of death or MI, in the period from discharge to 6 months of follow-up: age, heart rate, systolic blood pressure, serum creatinine concentration, CHF, in-hospital percutaneous coronary intervention (PCI), in-hospital coronary artery bypass grafting (CABG), past history of MI, ST-segment depression, and elevated cardiac enzymes/markers. Another version of the GRACE scale, the last in the series and published in 2014, is version 2.0 [31]. This scale allows for the assessment of a total of five different risk values: those typical of the previous GRACE scale (in-hospital death risk and risk of death within 6 months of admission) and new ones (risk of death within 1 year after admission to hospital, risk of death or recurrent MI within one year of MI, and risk of death within 3 years of admission to hospital). The GRACE 2.0 scale, like its predecessor, is only available in the form of an electronic or online risk calculator. The scale was based on the analysis of data from the GRACE registry (32,037 patients with MI, both STEMI and NSTEMI, enrolled in the study from January 2002 to December 2007; additionally, for the purpose of analyzing the risk of death within 3 years from admission to the hospital, a separate group of 1274 patients was analyzed). The final model included the eight variables classic for the previous versions of the GRACE scale: age, heart rate, SBP, serum creatinine concentration, Killip class, ST-segment deviation, cardiac arrest at admission, and elevated troponin or other cardiac necrosis biomarkers. A novelty of the GRACE 2.0 risk score is the possibility of an alternative replacement of the serum creatinine concentration value (in the absence of data) with information about renal failure in the patient (binary: 0/1), and of the Killip class with information about the patient's previous use of diuretics (binary: 0/1). This solution was introduced earlier in the mini-GRACE scale. The above-mentioned scale is based on the analysis of 64,312 patients from the MINAP database (Myocardial Ischaemia National Audit Project).
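The input substitution described above can be sketched as simple fallback logic (Python; the function and field names are invented for illustration and are not part of the published calculator):

```python
# Sketch of the GRACE 2.0 input-substitution idea: when serum creatinine or
# Killip class is unavailable, a binary surrogate may be used instead.
# Names below are invented for illustration only.

def grace2_renal_input(serum_creatinine=None, history_of_renal_failure=None):
    """Prefer the measured creatinine; fall back to the binary surrogate."""
    if serum_creatinine is not None:
        return ("creatinine", serum_creatinine)
    return ("renal_failure_flag", bool(history_of_renal_failure))

def grace2_heart_failure_input(killip_class=None, on_diuretics=None):
    """Prefer the Killip class; fall back to prior diuretic use (binary)."""
    if killip_class is not None:
        return ("killip_class", killip_class)
    return ("diuretic_use_flag", bool(on_diuretics))

print(grace2_renal_input(history_of_renal_failure=True))  # ('renal_failure_flag', True)
print(grace2_heart_failure_input(killip_class=2))         # ('killip_class', 2)
```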
It has been proven that this approach also demonstrated good performance (c statistic 0.825) [32]. Another novelty of the GRACE 2.0 scale is the possibility of assessing risk over a longer time range, namely 1 and 3 years after a heart attack. For the first time in the history of the development of the GRACE scale, it became possible to assess the risk of death in the long term. A further novelty is the introduction into the analysis of non-linear methods of assessing the relationship between risk and a given predictor, which significantly improved the discriminating value of the obtained model. The c statistics for the GRACE 2.0 models (the model assessing the risk of death within 1 year of admission to hospital, the model assessing the risk of death or MI within 1 year of admission, and the model assessing the risk of death within 3 years of admission to hospital) were 0.82, 0.74, and 0.78, respectively. The scale was externally validated on a population of 3059 patients from the FAST-MI registry (French Registry of Acute ST-elevation and non-ST-elevation Myocardial Infarction), confirming its good performance.

Other Risk Scores Assessing the Risk of Ischemia/Death in Patients with MI (Not Included in the ESC Guidelines)

Over the last 20 years, many risk scores have been developed to assess the risk of death in patients with MI. Some of them are completely inapplicable nowadays due to the dynamic development of methods of treating MI. The beginning of the 21st century was a period of breakthrough in cardiology: the end of the thrombolytic treatment era and the rapid development of interventional cardiology methods. Risk assessment systems that were created at the beginning of the century (2000-2010) do not reflect the current reality, in terms of the methods of treatment used, the scale of implementation of invasive treatment, pharmacotherapy (new antiplatelet and anticoagulant drugs), population characteristics, and technological development in the field of devices and stents used in the everyday practice of the catheterization laboratory. A new generation of drug-eluting stents (DES) was introduced for treatment, and the use of bare-metal stents (BMS) has practically ceased. Some scales, such as the GRACE risk score described above, were systematically updated, which allowed them to remain credible today. At the beginning of the 21st century, the following risk assessment systems were created: PAMI [33], the SIMPLE risk index [34], CADILLAC [35], ZWOLLE [36], and RISK-PCI [37]. In the years since their creation, these scales have been described in numerous detailed publications; their mutual comparisons and comparisons with the GRACE scale have also been made [38,39]. In this paper, we will present the latest systems for assessing the risk of death or major adverse cardiovascular events (MACE), which are based on large multicenter clinical trials conducted in the last decade.

TIMI DYNAMIC Risk Score

The TIMI DYNAMIC risk score [40] was published in 2013 and is a kind of extension and supplementation of the original TIMI scale [19]. It was intended to be a simple, bedside clinical scale allowing assessment of the risk of death of a patient with STEMI within 1 year of discharge. The TIMI DYNAMIC scale is a prospectively validated scale for the reclassification of patients with STEMI based on in-hospital events.
It consists of the classic risk factors of the TIMI scale, which are based on the parameters collected on admission to the hospital, and new variables, which were major clinical events occurring during the hospitalization (in-hospital events). The scale is based on the analysis of data from the ExTRACT-TIMI 25 study (Enoxaparin and Thrombolysis Reperfusion for Acute Myocardial Infarction Treatment), a double-blind, international study which randomly assigned 20,506 patients with STEMI to either enoxaparin or unfractionated heparin as adjunctive therapy to fibrinolysis [41]. The study endpoint was death or recurrence of MI within one year of discharge from the hospital. Each new variable (in-hospital event) was first subjected to univariate Cox analysis (to confirm significance) and then tested in the multivariate Cox analysis to assess its effect on the risk of death over a 1-year follow-up period. Ultimately, the TIMI DYNAMIC scale included six new variables. The baseline variables taken from the original TIMI scale were retested to reconfirm their significance. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The TIMI DYNAMIC scale consisted of eight variables assessed on admission to hospital: age (65-74 years-2 points, >75 years-3 points), SBP (<100 mmHg-3 points), heart rate (>100 bpm-2 points), Killip class II-IV-2 points, STEMI of the anterior wall of the heart or LBBB in the ECG recording-1 point, diabetes or history of hypertension or angina-1 point, weight of the patient <67 kg-1 point, and time to treatment >4 h-1 point; and another six variables assessed at discharge: recurrent MI-1 point, stroke-5 points, major bleed-1 point, arrhythmia (atrial fibrillation, ventricular fibrillation, ventricular tachycardia)-2 points, renal failure-3 points, and CHF or cardiogenic shock-3 points. The maximum number of points that can be obtained is 29. The absolute risk of death during 1 year from discharge in relation to the TIMI DYNAMIC score was assessed as follows: 0-1 point-1.3%; 2 points-2.3%; 3 points-3.6%; 4 points-5.5%; 5 points-7.8%; 6-7 points-13.5%; ≥8 points-24.8%. The value of the c statistic for the TIMI DYNAMIC scale is 0.76, which confirms the good discriminating value of this scale. The scale was externally validated on a cohort of 3454 patients with STEMI included in the TRITON TIMI 38 study [42]. The predictive capacity of the TIMI DYNAMIC risk score remained consistent for 1-year mortality, with a c statistic of 0.81.

EPICOR Risk Score

The EPICOR web-based risk calculator [45] is used to assess the risk of death within 2 years of hospital discharge after MI and is a continuation and development of the previous version of this system for assessing the risk of death within 1 year of MI [46]. The scale was developed based on the analysis of the EPICOR study (long-term follow-up of antithrombotic management patterns in acute coronary syndrome patients) [47] and EPICOR Asia [48]. Both of these studies are prospective, international, observational studies which together enrolled 23,489 patients from 28 countries across Europe, Latin America, and Asia, who were hospitalized for MI (STEMI, NSTEMI, and UA) within either 24 h (EPICOR) or 48 h (EPICOR Asia) of symptom onset, and who survived to hospital discharge. The risk score model was created using identified predictive variables and forward stepwise Cox regression. The model was internally validated using a bootstrap method.
In the original model, 17 independent mortality predictors were determined: age, low ejection fraction (EF), no coronary revascularization/thrombolysis, elevated serum creatinine concentration, poor EQ-5D score, low hemoglobin, previous cardiac or chronic obstructive pulmonary disease, elevated blood glucose, being on diuretics or an aldosterone inhibitor at discharge, male sex, low educational level, in-hospital cardiac complications, low body mass index (BMI), STEMI diagnosis, and Killip class. The EuroQoL (EQ-5D) is a generic quality-of-life questionnaire which grades each of five parameters (mobility, self-care, ability to perform usual activities, pain/discomfort, and anxiety/depression) as 'no problem' (zero points), 'moderate' (one point), or 'a severe limitation' (two points). A risk score is calculated from the risk coefficients of the linear predictors for the overall model. A simplified model was also created to facilitate the practical application of the scale, which contained only 11 variables (six variables with a somewhat lesser impact on patient risk were removed: use of diuretics at discharge, use of an aldosterone inhibitor at discharge, education level, in-hospital complications, BMI, and Killip class). The EPICOR risk-scoring system provided excellent discrimination capacity (c statistic 0.80, 95% CI 0.79-0.82). The simplified risk model with 11 predictors gave only slightly weaker discrimination (c statistic 0.79, 95% CI 0.78-0.81).

ACEF Risk Score

Originally, the ACEF scale (age, creatinine, and EF) was designed to assess the risk of elective cardiac surgery [49]. Due to its simplicity and the possibility of quick application at the patient's bedside, a number of attempts have been made to adapt this scale to the assessment of the risk associated with other groups of patients and other clinical situations: patients undergoing stent implantation [50], and particularly challenging patient subgroups such as those with left main disease [51], bifurcation lesions [52], heavily calcified lesions undergoing rotational atherectomy [53], chronic total occlusions [54], and patients with severe aortic stenosis undergoing transcatheter aortic valve replacement [55]. The scale value is calculated using the following formula: age/left ventricular ejection fraction (LVEF) + 1 (if creatinine >176 µmol/L). The use of the ACEF scale in the assessment of patients with MI was confirmed on the basis of the prospective, multicenter Swiss ACS cohort, which consecutively enrolled (between December 2009 and October 2012 at four university hospitals in Switzerland) 2168 patients undergoing coronary angiography for ACS (STEMI or NSTEMI/UA) [56]. Coronary revascularization by either PCI or CABG was performed according to current guidelines and recommendations. The primary endpoint of the study was all-cause mortality. The secondary endpoint was major adverse cardiac and cerebrovascular events (MACCE), defined as a composite of all-cause mortality, nonfatal MI, clinically indicated repeat coronary revascularization, definite stent thrombosis, and transient ischemic attack/stroke [57]. Optimal ACEF score cut-off values were calculated by decision tree analysis, and patients were divided into low-risk (≤1.45), intermediate-risk (>1.45 and ≤2.0), and high-risk (>2.0) groups. Thus, the score result does not indicate the absolute risk of death, but only classifies the patient into one of three groups with increasing risk.
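Because the ACEF formula and the Swiss ACS cohort cut-offs are fully specified above, they can be sketched in a few lines (Python; illustrative only, not a clinical tool):

```python
# A minimal sketch of the ACEF score and the risk groups defined above.

def acef_score(age_years: float, lvef_percent: float, creatinine_umol_l: float) -> float:
    """ACEF = age / LVEF (%), plus 1 point if serum creatinine > 176 umol/L."""
    score = age_years / lvef_percent
    if creatinine_umol_l > 176:
        score += 1.0
    return score

def acef_risk_group(score: float) -> str:
    """Swiss ACS cohort cut-offs from decision tree analysis."""
    if score <= 1.45:
        return "low"
    if score <= 2.0:
        return "intermediate"
    return "high"

s = acef_score(age_years=72, lvef_percent=40, creatinine_umol_l=95)
print(f"ACEF {s:.2f} -> {acef_risk_group(s)} risk")  # ACEF 1.80 -> intermediate risk
```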
Multivariate Cox proportional hazards regression models were calculated for 30-day and 1-year rates of mortality and MACCE, both for continuous ACEF score values and for ACEF score groups. The bootstrap re-sampling technique was used for internal validation. Cumulative incidence rates of 1-year mortality and MACCE according to ACEF score groups were estimated with the Kaplan-Meier method. In multivariate analysis, the ACEF score emerged as an independent predictor of 30-day all-cause mortality (adjusted HR 3.35, p ≤ 0.001). The analysis of the study also confirmed that the ACEF score achieved a predictive performance similar to the GRACE and CRUSADE scores. It was finally confirmed that the ACEF score independently predicts short- and long-term survival and adverse events in patients presenting with ACS referred for coronary revascularization.

Clinical Scales to Assess the Risk of Bleeding during the Treatment of MI

Undoubtedly, the use of dual antiplatelet therapy (DAPT) reduces the number of ischemic episodes in patients undergoing coronary stent implantation, but at the same time it increases the risk of bleeding [58]. The latest ESC guidelines recommend a 12-month duration of DAPT in patients after both STEMI and NSTEMI (class I, LOE: A). Bleeding is the most common complication after stent implantation; it shortens the life span, extends hospitalization, and lowers the quality of life [59]. Bleeding is also a common problem complicating the treatment of MI. Clinical trials involving patients with NSTEMI demonstrate that major bleeding is associated with a 5-fold increase in 30-day mortality [60]. Assessing the risk of bleeding in an MI patient is therefore an important part of the treatment process and a major challenge for the modern physician. The ESC guidelines for the treatment of NSTEMI [8] suggest the use of appropriate risk scores to assess the risk of bleeding (class IIb, LOE: B). The profile and characteristics of bleeding in a patient with MI change over time: in the in-hospital period there is a predominance of access-site bleeding, related to the coronary intervention, while in the post-discharge period there is a predominance of gastrointestinal bleeding, related to antiplatelet therapy [61]. The risk factors for bleeding also change over time after the implantation of stents. A similar division can also be seen in the bleeding risk scores: scores assessing short-term risk (in-hospital, or 30 days from admission to hospital) and scores assessing long-term risk (approximately 1 or 2 years after MI). The latter are most often scores integrating the assessment of risk related to ischemia and bleeding and are used to assess the optimal duration of DAPT.

The CRUSADE Risk Score

The CRUSADE [62] score is a bleeding risk assessment system recommended by the ESC guidelines for the treatment of NSTEMI [8]. The scale is based on eight parameters (clinical data, laboratory test results, history data) and is intended to be used during the admission of a patient with MI to the hospital to assess the risk of major bleeding. The scale was based on the analysis of the Can Rapid risk stratification of Unstable angina patients Suppress ADverse outcomes with Early implementation of the ACC/AHA guidelines (CRUSADE) Quality Improvement Initiative database of high-risk NSTEMI patients admitted to U.S. hospitals [63]. The analysis population consisted of 89,134 patients enrolled across 485 U.S. sites from February 2003 through December 2006.
The study population was then divided randomly into a derivation cohort (80%, n = 71,277) and a validation cohort (20%, n = 17,857). The main aim of the study was to assess the relationship between individual covariates and the occurrence of major bleeding. Intracranial hemorrhage, documented retroperitoneal bleed, hematocrit (HCT) drop ≥12% (baseline to minimum value), any red blood cell (RBC) transfusion when baseline HCT was ≥28%, or any RBC transfusion when baseline HCT was <28% with a witnessed bleed was considered major bleeding in the clinical analysis. Potential variables with clinically and statistically significant univariate relationships with major bleeding were included in the multivariate model. It should be noted that the scale does not include information on treatment (conservative vs. invasive), information on the number of antithrombotic drugs taken (antiplatelet, anticoagulant, or glycoprotein IIb/IIIa inhibitors (GPI)), or the patient's age. The CRUSADE bleeding score demonstrated a moderate discriminatory capacity in the derivation (c statistic 0.71) and validation cohorts (c statistic 0.70). The discriminative ability of the CRUSADE score was also confirmed in particular subgroups of patients: patients receiving ≥2 antithrombotic medications and those receiving <2 antithrombotic medications (c statistics 0.72 and 0.73, respectively), and patients receiving ≥2 antithrombotic medications treated with a conservative approach (no catheterization) versus an invasive approach (catheterization) (c statistic 0.68 vs. 0.73). It is also worth noting that in-hospital mortality increases with the occurrence of major bleeding during hospitalization, as well as with the CRUSADE score.

ACUITY Risk Score

The ACUITY risk score [64] is another of the scales recommended in the ESC guidelines for the treatment of NSTEMI. The scale assesses the risk of major bleeding within 30 days of admission to hospital. The scale is based on the analysis of databases from the ACUITY (acute catheterization and urgent intervention triage strategy) and HORIZONS-AMI (harmonizing outcomes with revascularization and stents in acute myocardial infarction) trials. The ACUITY trial [65] enrolled 13,819 patients with moderate- and high-risk ACS (NSTEMI or UA) and was conducted to evaluate the safety of using various antithrombotic regimens before cardiac catheterization (heparin (unfractionated or enoxaparin) plus a GPI, bivalirudin plus a GPI, or bivalirudin monotherapy were administered). All patients enrolled in the study underwent coronary angiography within 72 h of admission and were then qualified for PCI, CABG, or conservative treatment. The HORIZONS-AMI trial [66] enrolled 3602 STEMI patients who presented within 12 h after symptom onset and were treated invasively. Patients were assigned to treatment with unfractionated heparin plus a GPI or to bivalirudin monotherapy. Antiplatelet therapy (aspirin and clopidogrel) was also used in both studies. Major bleeding was defined in both trials as intracranial or intraocular bleeding, access-site hemorrhage requiring intervention, reduction in hemoglobin of ≥4 g/dL without or ≥3 g/dL with an overt bleeding source, reoperation for bleeding, or blood product transfusion. Bleeding was classified as related or not related to CABG. The endpoints were major bleeding within 30 days and death within 1 year of follow-up.
The integer risk score derived from the multivariate logistic regression model consists of the summation of integer values for six baseline variables: gender (male-0 points, female-8 points); age (<50 years-0 points, 50-59-3 points, 60-69-6 points, 70-79-9 points, ≥80-12 points); serum creatinine concentration (mg/dL) (<1.0-0 points, 1.0-1.19-2 points, 1.2-1.39-3 points, 1.4-1.59-5 points, 1.6-1.79-6 points, 1.8-1.99-8 points, ≥2.0-10 points); white blood cell count (giga/L) (<10-0 points, 10-11.9-2 points, 12-13.9-3 points, 14-15.9-5 points, 16-17.9-6 points, 18-19.9-8 points, ≥20-10 points); anemia (yes-6 points, no-0 points); and clinical presentation (STEMI-6 points, NSTEMI-2 points, UA-0 points), representing the individual risk of bleeding if the patient received heparin plus a GPI. If bivalirudin is administered instead, 5 points are subtracted from the integer score. Four risk ranges of bleeding were defined: low, moderate, high, and very high, corresponding to integer scores of <10, 10-14, 15-19, and ≥20, respectively (with 30-day non-CABG-related bleeding rates of 1.9%, 3.3%, 6.9%, and 12.4%, respectively, in patients treated with heparin plus a GPI, and 0.7%, 2.0%, 3.7%, and 8.4%, respectively, in patients treated with bivalirudin monotherapy). The c statistic of the model was 0.74, confirming a good discriminative ability of this risk score. The relationship between bleeding and the risk of death within 1 year of follow-up was also analyzed. A model of nine independent predictors of 1-year mortality was identified using the multivariable Cox model (advanced age, elevated white blood cell count and serum creatinine concentration, diabetes mellitus (DM), reduced hemoglobin, smoking, sex, previous MI, and clinical presentation (STEMI, NSTEMI)). The antithrombotic treatment regimen was not an independent predictor of mortality. Both the occurrence of non-CABG-related major bleeding and MI within 30 days were independent predictors of subsequent mortality when added to this multivariate model. The relationship between the severity of bleeding and the consequent risk of death has also been proven. It is worth noting that the development of an isolated large hematoma ≥5 cm without more severe bleeding was not a statistically significant predictor of subsequent mortality. The relationship between CABG-related major bleeding and subsequent mortality was analyzed separately, showing no statistically significant relationship between them (HR: 1.21, 95% CI: 0.81 to 1.80, p = 0.34).

BleeMACS Risk Score

The BleeMACS score is a simple clinical tool for bedside estimation of the risk of 1-year post-discharge serious bleeding in patients with MI [67]. A BleeMACS score calculator is available as a mobile app. The scale was based on the BleeMACS (bleeding complications in a multicenter registry of patients discharged with diagnosis of acute coronary syndrome) analysis [68]. BleeMACS was a retrospective, observational, multicenter study involving 15,401 consecutive patients from 15 hospitals (in 10 countries located in North and South America, Europe, and Asia) with a diagnosis of ACS who underwent in-hospital PCI and had follow-up data for at least 1 year. Recruitment lasted from November 2003 through June 2014. The study population was divided into a derivation cohort (70% of patients) and an internal validation cohort (30% of patients). The primary endpoint of the study was serious spontaneous bleeding within the first year after hospital discharge.
Serious spontaneous bleeding was defined as any intracranial bleeding or any other bleeding leading to hospitalization and/or RBC transfusion (≥1 unit). Bleeding and/or RBC transfusions related to procedures or surgeries were not considered spontaneous bleeding. Potential predictors of bleeding risk were assessed by Fine-Gray proportional hazards regression analysis. A point score was calculated by summing the weighted integers (range 0 to 80 points). Patients were classified into quartiles of the BleeMACS risk score: very low risk (≤7 points), low risk (8 to 16 points), moderate risk (17 to 24 points), and high risk (≥25 points). The final value of the cumulative incidence of bleeding during 1 year of observation can be obtained using an electronic calculator (an application on a mobile device). The BleeMACS risk score has been thoroughly validated, both within the internal validation cohort and externally. An external validation was performed using data from the SWEDEHEART registry (Swedish Web-system for Enhancement and Development of Evidence-Based care in Heart Disease Evaluated According to Recommended Therapies) [72].

PARIS Risk Score

The PARIS scale is another risk assessment model that has not been included in the ESC guidelines. The purpose of the PARIS scale [73] was to create two separate models assessing both the risk of a bleeding event and the risk of a thrombotic event within 2 years of PCI with DES implantation. The system of these two separate scales (more precisely, the balance between the scores achieved on both scales) can help the clinician decide on the duration of DAPT. A risk score for major bleeding (MB) and a risk score for coronary thrombotic events (CTE) were developed. The risk model was developed on the basis of the analysis of the patient population from the PARIS registry (Patterns of Non-Adherence to Anti-Platelet Regimen in Stented Patients) [74]. This registry was a prospective, multicenter, observational study conducted in the US and Europe between July 2009 and December 2010. The inclusion criteria were successful stent implantation in at least one native coronary artery and discharge on DAPT. The registry was designed to examine the impact of different modes of DAPT cessation on the incidence of clinical adverse events. The endpoints of the study during the 2-year follow-up were any DAPT cessation and/or the occurrence of any ischemic or bleeding adverse events. Coronary thrombotic events were defined as the occurrence of a stent thrombosis (ST) or a non-stent-related coronary thrombotic complication (spontaneous myocardial infarction). MB was defined as the occurrence of a Bleeding Academic Research Consortium type 3 or 5 bleed [71]. Anemia was classified as a hemoglobin level <12 g/dL in men and <11 g/dL in women. The integer risk score for CTE events consisted of the following covariates: DM (none-0 points, non-insulin-dependent-1 point, insulin-dependent-3 points); ACS (no-0 points, yes with negative troponin-1 point, yes with positive troponin-2 points); current smoking (yes-1 point, no-0 points); CrCl <60 mL/min (present-2 points, absent-0 points); prior PCI (yes-2 points, no-0 points); and prior CABG (yes-2 points, no-0 points). For CTEs, the scores ranged from 0 to 10, and patients were grouped according to low (0 to 2), intermediate (3 or 4), and high (≥5) thrombotic risk. The absolute risk differences in CTE and MB for each patient could be treated as an indirect marker of a patient's overall ischemic and bleeding risk.
Differences greater than 0 indicate that the risks from thrombosis exceed those of bleeding (and for these patients a prolonged duration of DAPT should be considered), whereas risk differences less than 0 indicate the opposite. The PARIS risk score has been thoroughly validated within an external validation cohort. The external validation was performed using data from ADAPT-DES (Assessment of Dual Antiplatelet Therapy With Drug-Eluting Stents) [75]. The final risk score models exhibited discriminative ability: the c statistic for the CTE model was 0.70 for the entire cohort, and for the MB model in the overall population the c statistic was 0.72. The final risk score models exhibited moderate performance in the external validation cohort (c statistics of 0.65 and 0.64 for the CTE and MB risk scores, respectively).

PRECISE DAPT Risk Score

The PRECISE DAPT score [76] is another system for assessing the risk of bleeding complications in patients undergoing DES implantation procedures (both in stable coronary disease and in ACS). This scale, apart from assessing the risk of bleeding within 1 year after discharge from the hospital, is also a valuable tool for the clinician, allowing assessment of the optimal duration of DAPT at the start of therapy. DAPT (aspirin and a P2Y12 inhibitor) significantly reduces the risk of ischemic recurrences in patients after coronary stent implantation. On the other hand, this benefit is counterbalanced by a higher bleeding risk, which is linearly related to the treatment duration. Both the risk of ischemia and the risk of bleeding negatively affect the survival rate in this group of patients. As standard, according to the ESC guidelines, the duration of DAPT after stent implantation in ACS is 12 months. This time, depending on clinical indications, disease burden, and the patient's individual risk profile, can be shortened or extended. On the one hand, shortening the DAPT duration from 12 months to 6 or 3 months significantly reduces bleeding incidence; on the other hand, prolonging treatment beyond 12 months reduces the incidence of both stent-related and non-stent-related ischemic events. The scale was based on the analysis of eight multicenter, contemporary, randomized clinical trials (RCTs) (14,963 patients enrolled at 139 different clinical sites in 12 countries worldwide) concerning patients treated with DAPT after coronary stenting. DAPT consisted of an association of aspirin plus a P2Y12 inhibitor, most commonly clopidogrel (88%). The primary endpoint of this analysis was out-of-hospital bleeding, defined according to the TIMI definition [69], occurring 7 days or later after the initial invasive procedure. Using Cox proportional hazards regression, a multivariable model of bleeding risk assessment was created. A final five-item bleeding risk score was developed from this model. Selected predictor values were scaled and rounded to a score with integer values between 0 and 100. The PRECISE DAPT scale includes age, CrCl, hemoglobin, white blood cell count at baseline, and previous spontaneous bleeding. The ability to identify patients at high bleeding risk was visualized by Kaplan-Meier cumulative bleeding incidence curves. The scale value and the corresponding bleeding rate can be obtained using the web calculator or mobile app. Kaplan-Meier bleeding rates were consistently separated by score quartiles (very low risk: ≤10; low risk: 11-17; moderate risk: 18-24; and high risk: ≥25).
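The quartile boundaries quoted above translate directly into a simple classifier (Python sketch; the total score itself must come from the published calculator):

```python
# Classifying a PRECISE-DAPT total (integer, 0-100) into the quartile-based
# risk categories quoted above. Illustrative only.

def precise_dapt_category(score: int) -> str:
    if score <= 10:
        return "very low"
    if score <= 17:
        return "low"
    if score <= 24:
        return "moderate"
    return "high"  # >= 25

for s in (8, 15, 20, 30):
    print(s, "->", precise_dapt_category(s))
```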
In five of the eight analyzed studies, patients were randomly assigned to one of two possible DAPT duration patterns: 12 or 24 months (5050 patients) and 3 or 6 months (5031 patients). The effect of DAPT duration on the risk of a bleeding or ischemic episode across bleeding risk score quartiles was also analyzed. A significant increase in bleeding with a long (12-24 months) rather than short (3-6 months) duration of treatment was observed in patients at high bleeding risk, but not in those without a high bleeding risk profile (very low, low, and moderate risk). Concurrently, a longer DAPT duration reduced the composite ischemic endpoint (MI, definite ST, stroke, target vessel revascularization) in those at non-high bleeding risk, but not in those at high bleeding risk. A scale value of 25 is indicated as the cut-off point; in patients who scored ≥25 points, the risk of bleeding with a longer duration of DAPT (over 12 months) significantly exceeded the benefits of reducing the incidence of ischemic events in this group of patients. A similar relationship was observed in the group of patients with MI: a PRECISE-DAPT score ≥25 was associated with a significant increase in TIMI bleeding incidence with longer than 12 months of DAPT, whereas a non-high PRECISE-DAPT score (<25) was not. At the same time, a longer DAPT duration reduced the composite ischemic endpoint at a non-high PRECISE-DAPT score, but not in those with a PRECISE-DAPT score ≥25. The PRECISE-DAPT score showed a good discriminatory capacity: c statistic 0.73 (95% CI 0.61-0.85) for out-of-hospital TIMI major or minor bleeding and 0.71 (95% CI 0.57-0.85) for TIMI major bleeding within 12 months after stent implantation. The score discrimination was consistent regardless of the clinical subgroup of patients (stable coronary artery disease vs. MI) or treatment with clopidogrel vs. ticagrelor. The PRECISE-DAPT score has been thoroughly validated in the context of two independent PCI-treated populations. An external validation was performed using data from the Platelet Inhibition and Patient Outcomes (PLATO) trial (8595 patients) [77] and from the BernPCI registry (6172 patients). The PLATO trial included patients with STEMI or NSTEMI randomly assigned to receive DAPT with either clopidogrel or ticagrelor in addition to aspirin for up to 12 months. The BernPCI registry included all patients undergoing PCI at Bern University Hospital, Switzerland, between February 2009 and December 2014. The c-indices for TIMI major or minor bleeding were 0.70 (95% CI 0.65-0.74) in the PLATO trial and 0.66 (95% CI 0.61-0.71) in the BernPCI registry.

DAPT Risk Score

The DAPT risk score [78] is one of several scales for assessing the risk of post-discharge bleeding in patients after stent implantation treated with DAPT. The purpose of using the DAPT risk score in patients after coronary stent implantation was to attempt to identify patients for whom the expected benefits related to the reduction of ischemic events resulting from DAPT prolongation beyond 12 months would outweigh the increased risk of bleeding. The DAPT score is based on a secondary analysis of the DAPT study [79], which was conducted from August 2009 to May 2014 in 11 countries and enrolled 25,682 patients after PCI with DES or BMS treated with a thienopyridine plus aspirin for 12 months.
At 12 months, eligible patients who were free from major bleeding and ischemic events were randomized to continued thienopyridine + aspirin therapy (5862 patients) or to aspirin + placebo (5786 patients) for the next 18 months. The primary ischemic endpoint was a composite of MI or Academic Research Consortium definite or probable ST [80], and the primary bleeding endpoint was moderate or severe bleeding, as defined by the GUSTO criteria [70]. It is assumed that prolongation of DAPT reduces the risk of ischemia at the expense of increasing the risk of bleeding. When creating the DAPT score, however, it was assumed that some heterogeneity in the effect of DAPT prolongation is possible; namely, in some patients, prolongation of DAPT will reduce the risk of ischemia without significantly increasing the risk of bleeding. In order to create a model to identify these patients, separate scales were created using Cox regression methods for the risk of ischemia and the risk of bleeding. For each patient after randomization, a "benefit-risk difference" was determined, the value of which was the absolute difference between the predicted ischemic reduction and the predicted bleeding increase (resulting from randomizing patients to the group treated with thienopyridine plus aspirin). A linear regression model was created, using the benefit-risk difference as the outcome and all predictors from the ischemia and bleeding models. Variables that contributed more than 1% of the observed variation in the estimated benefit-risk difference were included in a final clinical score. All variables were assigned an integer score of 1 or 2 (or −1 to −2) based on the beta coefficient. The range of scores was between −2 and 10, with points assigned as follows: 0 for age <65, −1 for age 65-<75, −2 for age ≥75, 2 for vein graft PCI, 1 for current cigarette smoking or smoking within the past year, 1 for DM, 1 for MI at presentation, 1 for stent diameter <3 mm, 2 for history of CHF or LVEF <30%, 1 for prior PCI or prior MI, and 1 for a paclitaxel-eluting stent. In the DAPT study derivation cohort, higher score quartiles were associated with higher rates of ischemic events, whereas lower score quartiles were associated with higher rates of bleeding events. Furthermore, patients randomized to aspirin + thienopyridine therapy who were assigned to higher score quartiles presented larger observed risk reductions in the incidence of ischemic events, while patients assigned to lower score quartiles presented greater observed risk increases in bleeding. The median score was adopted as the cut-off value of the scale. In the group of patients with predictive scores ≥2 (n = 5917), randomization to continued thienopyridine was associated with larger reductions in the risk of ischemia compared with those with scores <2 (n = 5731). Conversely, randomization to aspirin + thienopyridine was associated with smaller increases in bleeding among high-score patients compared with low-score patients. The DAPT risk score was externally validated within the PROTECT Trial [81], which enrolled patients undergoing PCI randomized to receive sirolimus-eluting stents (SES) vs. zotarolimus-eluting stents (ZES). The study was conducted from June 2007 to July 2014 in 36 countries. Patients without an ischemic or hemorrhagic event within the first 12 months of observation were enrolled in the validation cohort (n = 1836). Both the primary risk models (ischemic and bleeding risks) and the final prediction score performance in stratifying the risks of ischemic and bleeding events were validated.
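Because the point assignments listed above are complete, the DAPT score lends itself to a short sketch (Python; the dictionary keys are invented for illustration). A total of ≥2 identified patients in whom prolonged thienopyridine produced larger ischemic-risk reductions.

```python
# A minimal sketch of the DAPT score point assignments described above.
# Field names are invented; missing fields default to False. Illustrative only.

def dapt_score(p: dict) -> int:
    score = 0
    age = p["age"]
    if age >= 75:
        score -= 2
    elif age >= 65:
        score -= 1
    score += 2 if p.get("vein_graft_pci") else 0
    score += 1 if p.get("current_or_recent_smoker") else 0
    score += 1 if p.get("diabetes") else 0
    score += 1 if p.get("mi_at_presentation") else 0
    score += 1 if p.get("stent_diameter_below_3mm") else 0
    score += 2 if p.get("chf_or_lvef_below_30") else 0
    score += 1 if p.get("prior_pci_or_mi") else 0
    score += 1 if p.get("paclitaxel_eluting_stent") else 0
    return score  # possible range: -2 to 10

patient = {"age": 58, "diabetes": True, "mi_at_presentation": True,
           "stent_diameter_below_3mm": True}
s = dapt_score(patient)
print(s, "- prolonged DAPT favored" if s >= 2 else "- prolonged DAPT not favored")  # 3
```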
In the PROTECT Trial, patients were not randomized to different durations of DAPT (DAPT duration was determined by treatment indication). The models used to derive the predictive score showed modest discrimination ability in the validation cohort (ischemic model c statistic 0.64, 95% CI 0.58-0.70; bleeding model c statistic 0.64, 95% CI 0.55-0.73). The ability of the clinical prediction score to stratify ischemic and bleeding risk was evaluated by comparing overall rates of ischemic and bleeding events among patients with high vs. low scores in the validation cohort. The rate of ST or MI between 12 and 30 months after PCI was higher among high-score patients compared with low-score patients (1.5% vs. 0.7%, respectively). Rates of moderate or severe bleeding were not significantly different by score group (0.36% among high-score patients and 0.52% among low-score patients).

Discussion

The utility of modern risk assessment systems in patients treated for MI is indisputable. This is confirmed both by the official guidelines issued by the ESC and by daily clinical practice. The benchmark risk score should be simple to use, accessible at the bedside, and easy to interpret. Risk assessment complements the clinical assessment of the patient and should be repeated at different moments along the timeline of ACS (from admission, through treatment and discharge from hospital, to long-term post-hospital care). The risk assessment scores are valuable tools for the physician, supporting the process of making clinical decisions, pointing to the optimal pharmacotherapy, helping to determine the duration of hospitalization, and facilitating the selection of post-hospital management strategies. When analyzing the increasing number of emerging risk assessment systems, several groups can be distinguished according to the following criteria:

1. assessed endpoint: risk of ischemia/death, risk of bleeding, or risk of a composite endpoint (death and MACE);
2. time perspective of the risk assessment: short-term assessment of risk (in the in-hospital period or within 30 days) or long-term assessment of risk (with different durations of the follow-up period);
3. spectrum of analyzed ACS: assessment of the risk in patients with STEMI, in patients with NSTEMI/UA, or in patients with the full spectrum of ACS;
4. the moment at which the risk analysis is performed: assessment based on the basic clinical and laboratory parameters collected on admission to the hospital, or risk assessment based on the analysis of other data obtained during hospitalization or performed at the end of hospitalization;
5. how the scale is used: simple bedside nomograms or electronic calculators (via the internet or a mobile app).

In the course of daily clinical practice, the physician must select the appropriate risk scale for the appropriate patient at the appropriate time point of treatment. The areas of application of individual risk scales often overlap, which makes it even more difficult to choose the appropriate scale. When using a risk score, it is worth being aware of the size and clinical characteristics of the cohort on the basis of which the given risk assessment system was created, the criteria that excluded patients from the derivation cohort, the methods of treatment used (conservative vs. invasive), the selection of pharmacotherapy (including the use of new antiplatelet drugs), and the clinical presentation (STEMI vs. NSTEMI/UA). The closer the characteristics of the research cohort are to the clinical scenario being assessed, the greater the predictive value of the scale.
Proper, extensive external validation of the scale is also very important. Given the rapid development of interventional cardiology methods and devices in recent years, it is also worth paying attention to the period in which patients were enrolled in the derivation cohort. It is impossible to create one perfect scale applicable in every situation and every clinical scenario; at the same time, in everyday practice it seems impractical to use separate scales for each of the possible risks. The physician using a given risk assessment system should know its limitations as well as its strengths. The first objection to many scales created on the basis of randomized clinical trials concerns the exclusion criteria used when recruiting for the trial; as a result, the derivation cohort does not reflect the real-world population of patients. A brief description of the potential advantages and limitations of each risk score is presented below. The TIMI scale is recommended by the ESC to assess the risk of death within 30 days in STEMI patients. The risk assessment is made on admission to the hospital. Despite its undoubted advantages (a large derivation cohort; a simple scale, easy to apply at the bedside, that does not require electronic devices; validation in a large group of patients), it should be remembered that the scale was created over 20 years ago and was based on the analysis of patients with STEMI treated exclusively by fibrinolysis. Nowadays, the vast majority of patients are treated invasively, and the introduction of new antiplatelet drugs and stents significantly changes patient outcomes. Nevertheless, a paper published in 2014 [82] examined the use of the TIMI score in patients treated with PCI (the cohort consisted of 8073 PCI-treated STEMI patients enrolled in the prospective, observational Belgian STEMI registry from January 2007 to February 2011). The TIMI risk score still provided acceptable discrimination for the prediction of 1-year mortality (c statistic 0.72). The version of the TIMI scale dedicated to patients with NSTEMI/UA (currently not recommended for use by the ESC guidelines) should also be mentioned here. Like the previous one, this scale is applied on admission to hospital and is designed to assess the risk of death, MI, or urgent revascularization within 14 days of admission. In this case, as before, the main limitations of the scale are the mismatch of the derivation cohort with contemporary practice (patients enrolled in clinical trials in 1996-1998 and not treated invasively) and its moderate discriminative value (c statistic 0.65). The last scale from the TIMI family is the TIMI dynamic risk score, a simple-to-use clinical scale for assessing the risk of death within 1 year of discharge in a patient with STEMI. This scale is used to reclassify patients with STEMI on the basis of in-hospital events. It is worth noting that the dynamic TIMI scale is one of the few that takes into account the occurrence of in-hospital events (recurrent MI, stroke, major bleeding, arrhythmia, renal failure, CHF, or cardiogenic shock) in the long-term risk assessment process. One should also be aware that the ExTRACT-TIMI 25 study had quite numerous exclusion criteria (e.g., cardiogenic shock, renal failure) and was based on fibrinolytic therapy. The scale was validated externally on a small cohort of 1829 patients enrolled in the TRITON-TIMI study, in which 99% of patients underwent PCI and were treated with prasugrel/clopidogrel.
Despite differences in the characteristics of the studied cohorts, the predictive capacity of the TIMI dynamic risk score remained consistent for 1-year mortality, with a c statistic of 0.81. The GRACE risk score is currently the most widely used and best validated scale for assessing the risk of MI patients. The ESC recommends it as the basic scale for assessing the risk of patients with NSTEMI (class IIa, LOE B). It should also be mentioned that the GRACE scale was included in the decision-making process regarding the treatment of patients with NSTEMI: a GRACE score above 140 points assigns the patient to the high-risk group, which carries a recommendation for an early routine invasive strategy within 24 h of admission (class I, LOE A). Over the last 20 years, new versions of the GRACE scale have been developed on the basis of subsequent analyses of patients from the GRACE registry. The advantages of this scale include its simplicity, which allows use at the patient's bedside. Initially, the score was read from a dedicated nomogram; an internet calculator and mobile phone application were developed for subsequent versions of the scale. The scale was created on the basis of a very large international registry of patients with the entire spectrum of ACS, which favors a faithful reflection of the real population of patients with MI (the scale can be used to assess risk in both STEMI and NSTEMI patients). The subsequent versions of the scale were based on new cohorts drawn in later years from the GRACE registry, so the newer versions take into account progress in pharmacotherapy and in the invasive treatment of MI. The subsequent revisions of the GRACE scale give the physician a unique opportunity to assess both short-term and long-term risk, of death alone as well as of the composite endpoint of death or non-fatal recurrent MI. Risk assessment using the GRACE scale can be performed both on admission (assessing in-hospital risk) and at discharge from the hospital (assessing the risk over a period of 6 months, 1 year, and 3 years). The scale also allows, when data are missing, the serum creatinine concentration to be replaced by information on the presence of renal failure, and the Killip class to be replaced by information on the patient's prior diuretic use. This allows wider, easier, and more common use of the GRACE scale in assessing the risk of patients with MI. The subsequent versions of the GRACE scale have very good predictive value, assessed both in the derivation cohort and in the validation cohorts. The most serious criticisms of the GRACE scale include (despite subsequent updates) the mismatch between the derivation cohort and the contemporary population (GRACE 2.0 was based on patients enrolled in the registry between January 2002 and December 2007). Doubts are also raised by the small size of the cohort (1274 patients) on which the model for the risk of death within 3 years of admission was developed. Another risk assessment system is the ACTION Registry-GWTG risk model [43]. In principle, it is a simple scale based on clinical, electrocardiographic, and laboratory parameters collected on admission to hospital, used to assess the in-hospital risk of patients with both STEMI and NSTEMI undergoing modern invasive treatment.
Its advantages include ease of use, a very large derivation cohort, good discriminative value (c statistic 0.83), and applicability to the entire spectrum of currently treated ACS. The limitations of the scale include: it was based on a voluntary registry; patients transferred between participating hospitals raise a question of outcome attribution; the risk of death can be assessed only for the period of hospitalization; and there was no external validation. The EPICOR risk calculator [45] assesses the risk of death within 2 years of hospital discharge following MI. Its advantages include the possibility of risk assessment in patients with the entire spectrum of ACS; a very large, heterogeneous derivation cohort from Europe, Latin America, and Asia; and high predictive value (c statistic 0.8). The disadvantages undoubtedly include the complexity of the scale (17 parameters); the need to assess subjective factors (the EuroQoL generic quality-of-life questionnaire); the fact that some in-hospital complications (bleeding, stroke, infection) were not included in the model; the lack of external validation of the risk model; and the inability to assess the risk of endpoints other than death (e.g., non-fatal ischemic events). The ACEF scale is another system for assessing the risk of patients after MI [56]. Its advantages include the simplicity of the scale; the ability to assess the risk of patients with the entire ACS spectrum; the ability to assess both short-term (30 days) and long-term (1 year) risk; a derivation cohort treated with modern methods of invasive cardiology; and the fact that the ACEF score predicted not only all-cause mortality but also MACCE and transient ischemic attack/stroke. The limitations of this scale include a small derivation cohort and the lack of external validation. The CRUSADE scale [62] is the first of the scales dedicated to bleeding risk; this risk score is officially recommended by the ESC in the guidelines for the treatment of NSTEMI (class IIb, LOE B). The scale is based on eight simple clinical parameters collected on admission to hospital and assesses the risk of major bleeding during hospitalization. The advantages of this system include a very large derivation cohort; a relatively easy mode of use in everyday practice (it does not require electronic devices); varied treatment of patients in the derivation cohort (invasive, conservative, and cardiosurgical); and the fact that the scale is based on CrCl rather than serum creatinine concentration (CrCl better reflects kidney function). On the disadvantage side, it should be noted that the study took place in 2003-2006; the cohort included only patients with NSTEMI (patients with UA were excluded); the cohort also excluded patients taking warfarin, patients transferred to other hospitals, and patients who died within 48 h of admission; the analysis of bleeding in patients undergoing CABG was limited to the preoperative period; only hematocrit levels (not hemoglobin) were analyzed; the scale does not include information about a history of prior bleeding or bleeding diathesis; and patients enrolled in the cohort were not treated with the new antiplatelet drugs. The discriminatory capacity of the CRUSADE score was moderate (c statistic 0.72). The ACUITY risk score [64] is another of the scales recommended in the ESC guidelines for the treatment of NSTEMI.
The scale is based on seven clinical parameters collected on admission to hospital and assesses the risk of major bleeding within 30 days of hospital admission (a nine-component version was also developed to assess mortality 12 months after discharge). The advantages of this system include a relatively easy mode of application in everyday practice (it does not require electronic devices), varied treatment of patients in the study cohort (invasive, conservative, and cardiosurgical), the possibility of assessing short- and long-term risk, and a study cohort composed of patients with both NSTEMI/UA and STEMI. The limitations of this scale include the anticoagulant regimens used during PCI in the study cohort, which are now rarely used (administration of, inter alia, bivalirudin plus a GPI, or bivalirudin monotherapy); the lack of treatment with new antiplatelet drugs (clopidogrel was used, with ticlopidine in case of allergy); no external validation of the scale; and its moderate discriminative value (c statistic 0.74). Neither of the above-mentioned risk scales (CRUSADE and ACUITY) takes into account changes in interventional practice in the form of the now very frequently used radial approach, a proven factor reducing bleeding complications [83]. The BleeMACS risk score [67] is a clinical tool for estimating the risk of serious bleeding within 1 year after discharge in patients with MI. The advantages of this risk score include the possibility of risk assessment across the entire spectrum of ACS (the study cohort included patients with all forms of ACS); a very large, heterogeneous study cohort from North and South America, Europe, and Asia, which guarantees a true representation of the population of patients with MI; a simple seven-factor risk model; partial treatment of the derivation and validation cohorts with new antiplatelet drugs; and internal as well as external validation in a large group of patients. The c statistic of the scale in the derivation cohort was 0.71, a moderate value. It is noteworthy that, unlike the risk scores discussed above, BleeMACS has been positively validated for use in patients receiving oral anticoagulation. The disadvantages of this risk assessment system include the need to use an electronic calculator (a mobile application) to determine the final cumulative incidence of bleeding during 1 year of observation, as well as the omission of potential changes in antithrombotic therapy, such as switches between antiplatelet drugs or DAPT discontinuation. The antithrombotic treatment regimen was intentionally omitted during construction of the risk score because, in daily clinical practice, the prescription of antithrombotic drugs is itself conditioned by the individual risk of bleeding. However, the performance of the BleeMACS score across different antithrombotic regimens was assessed in the SWEDEHEART population. The score's discriminative capacity was similar across the different DAPT regimens (c statistic 0.65 for aspirin plus clopidogrel and for aspirin plus ticagrelor, 0.63 for aspirin plus prasugrel, 0.69 for oral anticoagulation plus a single antiplatelet drug, and 0.60 for triple therapy, i.e., oral anticoagulation plus DAPT), as well as in the non-PCI patient population.
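As noted in the CRUSADE discussion above, CrCl reflects kidney function better than serum creatinine alone. A standard bedside estimate is the Cockcroft-Gault formula, sketched below; this is general clinical knowledge, not code taken from the CRUSADE publication.

```python
# Minimal sketch of the Cockcroft-Gault creatinine clearance estimate,
# the kind of CrCl value the CRUSADE scale relies on instead of raw
# serum creatinine. Standard formula; example values are invented.

def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float,
                         female: bool) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction for women

# Example: a 70-year-old, 60 kg woman with serum creatinine 1.2 mg/dL.
print(round(cockcroft_gault_crcl(70, 60, 1.2, female=True), 1))  # ~41.3
```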
The PARIS risk score [73] is actually a composite of two separate scales and is an interesting example of a system that allows assessment of the overall major risks of a patient with MI. The scale assesses the risk of bleeding and thrombotic events within 2 years of PCI with DES implantation. The advantages of this system include the possibility of assessing the balance between the risk of ischemia and the risk of bleeding in a given patient, which may be helpful in determining the duration of dual antiplatelet therapy (including possible extension beyond the standard period). The authors of the study suggested that, in the case of a coronary thrombotic event (CTE) sub-score ≥5, extension of DAPT duration should be considered regardless of the major bleeding (MB) sub-score; this proposal requires further research. The derivation cohort was based on a large international observational study conducted in 2009-2010, and patients treated with oral anticoagulants were not excluded, all of which contributes to a good reflection of the actual patient population. The scale is based on simple clinical and laboratory parameters assessed at discharge from the hospital (it can be used at the patient's bedside). The PARIS scale focuses its long-term risk assessment on adverse events mainly related to the duration of DAPT (ST and bleeding), rightly paying little attention to PCI-related periprocedural bleeding (which has a greater impact on short-term, in-hospital mortality). The undoubted disadvantages of this risk assessment system include the fact that the vast majority of the study cohort consisted of patients with stable angina undergoing elective PCI (ACS constituted only 37.8% of all patients). Another aspect limiting the use of the PARIS scale is that patients were mainly treated with clopidogrel, which does not reflect current trends in PCI-related pharmacotherapy. Also worth mentioning is the moderate discriminative capacity of the scale in the external validation cohort (c statistics of 0.65 and 0.64 for the thrombotic and bleeding risk scores, respectively). The DAPT risk score [78] is used to assess the balance of ischemic and bleeding risk after discharge in patients who underwent stent implantation and are treated with DAPT. Assessment with this scale should take place after 12 months of DAPT. The derivation cohort consisted of patients enrolled in DAPT, a large international clinical trial conducted in 2009-2014 [79]; unfortunately, only 73.8% of these patients were ACS patients. It is a relatively simple scale, containing clinical and PCI-related parameters as well as an echocardiographic one. The disadvantages of this scale include the exclusion from the study cohort of patients undergoing long-term anticoagulant therapy, patients with elective surgery requiring discontinuation of antiplatelet therapy for >14 days, and patients with a life expectancy <3 years (owing to the inclusion criteria of the DAPT study). Most of the patients enrolled in the DAPT study received clopidogrel plus aspirin, which differs from modern standards. Some patients had an implanted BMS (14.4%), and most of the implanted DES were of the first generation (which may overestimate the frequency of ST/ischemic events). The ischemic and bleeding models showed similar, moderate discrimination (c statistics 0.70 and 0.68, respectively). It is also worth noting that the statistical test for interaction did not show a difference in the effect of continued long-term DAPT on mortality between the high and low prediction score groups.
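Returning briefly to the PARIS two-score design described above: the only decision rule stated in the text is the authors' suggestion that a CTE sub-score ≥5 may justify considering extended DAPT regardless of the MB sub-score. The sketch below encodes just that rule; how the two sub-scores are computed from patient data is not reproduced here and would require the original publication.

```python
# Minimal sketch of the PARIS-style CTE >= 5 suggestion quoted above.
# The sub-score values passed in are assumed to come from the published
# PARIS calculators, which this sketch does not reproduce.

def paris_dapt_suggestion(cte_score: int, mb_score: int) -> str:
    """Textual suggestion based solely on the published CTE >= 5 rule."""
    if cte_score >= 5:
        # Per the study authors' suggestion (still requiring further
        # research): high thrombotic risk dominates, irrespective of MB.
        return "consider extending DAPT duration"
    return "no extension suggested by the CTE rule alone"

print(paris_dapt_suggestion(cte_score=6, mb_score=8))
print(paris_dapt_suggestion(cte_score=3, mb_score=2))
```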
As noted above, both the primary risk models (ischemic and bleeding) and the final prediction score were validated in the PROTECT cohort, where the models showed only modest discrimination (c statistic 0.64 for both the ischemic and the bleeding model) [81]. The final risk score itself was validated only on a limited basis, in a retrospective analysis of a cohort of 1970 patients, with the score calculated at a different time point (6 vs. 12 months) than in the cohort used to develop the scale [84]. The latest ESC guidelines recommend a 12-month duration of DAPT in patients after both STEMI and NSTEMI (class I, LOE A) [7,8]; this period may be changed in the presence of important contraindications. The ESC guidelines for the treatment of NSTEMI recommend the use of risk scores designed to assess the benefits and risks associated with different durations of DAPT (class IIb, LOE A). The two risk assessment systems described above (DAPT and PARIS) have the same goal: to assess the possibility of extending DAPT beyond 12 months. The two systems complement each other. It is worth noting that one was developed on the basis of a clinical trial (DAPT) and the other on an observational registry (PARIS), each with its attendant advantages and limitations. For both scales, patients in the derivation cohorts were treated mainly with clopidogrel. The PARIS score should be calculated after PCI, and the DAPT score after 12 months of DAPT. The latest ESC guidelines for the treatment of NSTEMI discuss the issue of prolonging DAPT beyond 12 months quite thoroughly, suggesting ticagrelor plus aspirin first, and then prasugrel plus aspirin or clopidogrel plus aspirin if the patient is not eligible for ticagrelor. This regimen should be used in patients at high risk of ischemic events and without increased risk of major bleeding (class IIa, LOE A), and may be used in patients at moderate risk of ischemic events and without increased risk of major bleeding (class IIb, LOE A). The guidelines provide precise definitions of high and moderate risk of ischemic events, as well as of increased risk of major bleeding. In the guidelines for the treatment of STEMI, administration of DAPT for >12 months (ticagrelor 60 mg twice a day) may be considered in patients at high risk of ischemic events who have tolerated DAPT without bleeding complications (class IIb, LOE B). As can be seen, the ESC guidelines do not provide for the use of the PARIS or DAPT scale to assess the possibility of extending the duration of DAPT beyond 12 months. The PRECISE-DAPT score [76] is another system for assessing the risk of bleeding complications in patients undergoing PCI with DES implantation and receiving DAPT. The scale assesses the risk of a bleeding event within 1 year of the procedure, in patients with stable angina as well as MI. One of the advantages of this score is undoubtedly its simplicity: it consists of five parameters (two clinical and three laboratory) and can be applied as soon as the patient is admitted to the hospital. Determining the predicted incidence of bleeding within 1 year of DAPT does require an internet calculator or mobile app; at the bedside, however, a physician can still classify the patient into one of the bleeding risk groups.
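That bedside classification can be sketched as a one-rule function. The ≥25-point high-bleeding-risk cut-off and the associated duration guidance are taken from the guideline discussion that follows; computing the PRECISE-DAPT score itself requires the published nomogram or calculator and is not reproduced here.

```python
# Minimal sketch of the PRECISE-DAPT decision rule discussed in this
# section: >= 25 points marks high bleeding risk, where shortened DAPT
# (P2Y12 inhibitor discontinuation after 3 months) may be considered;
# below 25, longer DAPT reduced ischemic events without a significant
# bleeding penalty. The score value is assumed to come from the
# published calculator.

HIGH_BLEEDING_RISK_CUTOFF = 25  # points, per the guideline text

def dapt_duration_hint(precise_dapt_score: float) -> str:
    if precise_dapt_score >= HIGH_BLEEDING_RISK_CUTOFF:
        return ("high bleeding risk: consider short DAPT "
                "(P2Y12 inhibitor discontinuation after 3 months)")
    return ("not high bleeding risk: standard or prolonged DAPT "
            "may be considered")

print(dapt_duration_hint(31))
print(dapt_duration_hint(12))
```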
The score is recommended by the ESC in the NSTEMI treatment guidelines as a tool for assessing the possibility of shortening the duration of DAPT (in patients with a score ≥25, discontinuation of P2Y12 inhibitor treatment after 3 months may be considered) (class IIa, LOE B). In patients at high risk of bleeding on the PRECISE-DAPT score (i.e., ≥25 points), prolonged DAPT was associated with no benefit for ischemic events and a significant increase in the risk of bleeding complications. On the other hand, longer therapy in patients not at high risk (i.e., with PRECISE-DAPT scores <25) did not increase the risk of bleeding events and was associated with a significant reduction in the incidence of ischemic events. The advantages of this scale also include a very large study cohort based on participants of eight multicenter, international, contemporary RCTs, which reflects well the actual population of patients treated with DAPT. Patients with MI (the entire ACS spectrum) accounted for 55.6%. Unfortunately, randomized trials have their limitations, one of which was the exclusion of patients taking oral anticoagulants from the study cohort. The PRECISE-DAPT score showed moderate discriminatory capacity: c statistic 0.73 for out-of-hospital TIMI major or minor bleeding and 0.71 for TIMI major bleeding. Another limitation of the scale is that the majority of patients in the study cohort were treated with aspirin plus clopidogrel (88%). The score has undergone extensive external validation in two independent PCI-treated populations (patients with stable coronary disease and ACS). The PRECISE-DAPT score showed average discrimination ability for TIMI major or minor bleeding (c statistics ranging from 0.66 to 0.7 depending on the validation cohort). ACS patients from the validation cohorts were treated with all three P2Y12 inhibitors: clopidogrel, prasugrel, and ticagrelor. When only ACS patients who underwent PCI and were treated with prasugrel or ticagrelor (n = 4424) were included in the validation cohort, the scale showed little predictive value for major bleeding over a mean follow-up of 14 months (c statistic 0.653). Because the PRECISE-DAPT scale has not been prospectively assessed in an RCT, its value in improving patient outcomes remains unclear.

Conclusions

Over the past decade, a wide variety of risk assessment systems have been developed for patients with myocardial infarction. This article describes only the more important scales; the literature contains many reports of new risk scales that are beyond its scope. Despite the multitude of these scales, none of them seems to be perfect. The question is what criteria risk assessment systems should meet in the future and how they should be built. It seems obvious that the discriminative value of a scale in identifying patients at high risk of death will depend on the size and diversity of the cohort on the basis of which the score is created. The larger and more diverse the cohort, the greater the chance of a more complete reflection of the real population of patients with MI. Enrolling patients from different countries and continents will allow for significant diversity of the population, as well as account for differences in the organization of local health systems. Risk assessment systems are based either on large international randomized trials or on observational ACS registries.
Both approaches have advantages and disadvantages, as described above in the consideration of the practical limitations of the risk scores. It has been confirmed that the relationships between individual risk factors and outcomes may be less apparent in randomized than in observational studies [85]. The ideal solution seems to be to create a study cohort based on the largest possible observational ACS registry and then validate the developed risk score in a prospective RCT. When analyzing individual risk assessment systems, it can be concluded that there is a certain group of parameters (clinical, laboratory, procedural, etc.) that should always be taken into account when constructing any risk score for a patient with MI. The predictive value of the same parameter may differ depending on the profile of the studied population and the invasive and pharmacological treatment methods used, and it may also change within the same population depending on the time frame adopted for the analysis (short-term vs. long-term assessment). Therefore, the continuous development of cardiology will force the creation of new risk assessment scores, which must be constantly updated with changes in the characteristics of the population of patients with MI, treatment methods, and current pharmacotherapy. The wide application of modern medical technologies has resulted in the emergence of new, previously unknown, unintended complications related to the applied pharmacological and surgical treatment: the so-called serious adverse events (SAE), which go far beyond the definition of MACCE or the definitions of major bleeding. SAEs are defined according to the Harvard Medical Practice Study definition: "an unintended injury or complication that results in disability at the time of discharge, death or prolonged hospital stay caused by health care management rather than by the patient's underlying disease process" [86]. Examples of SAEs include complications related to PCI (coronary artery dissection, aortic dissection, coronary artery rupture, contrast-induced nephropathy) as well as complications of intensive medical therapy, which is increasingly required in patients with MI (pneumothorax, infectious complications). Although SAEs are not always associated with patient death, they significantly affect the duration of hospitalization and the long-term prognosis. Another issue not discussed more broadly in the context of risk assessment in MI is the risk arising from the overall logistics of the treatment process. This is not about time delays in the infarction treatment pathway, whose effect on treatment effectiveness is well described [87], but about improper organization of the workplace; the influence of the time of day or night and the day of the week (weekday, weekend, holiday) on which the patient is admitted to hospital (or undergoes an invasive procedure); and, finally, the experience of the staff and their workload on a given day. The literature contains only isolated reports on the importance of these issues for the final results of treatment in patients with MI [88]. Also worth mentioning is the phenomenon of the "treatment-risk paradox", i.e., the unintentionally less aggressive treatment of high-risk patients. It has been proven that as the GRACE score increases, less aggressive treatment is used (e.g., patients undergo coronary angiography less frequently) [89].
The reason may be presumed to lie in the physician's reliance solely on their own clinical assessment of the patient with MI. Patients at high risk of dying from MI are elderly patients with more comorbidities and more advanced CAD (multivessel disease), which undoubtedly translates into a significant increase in the risk of SAEs during invasive procedures and follow-up. This may, to some extent, explain the treatment-risk paradox. It should be remembered that the currently recommended risk scores in MI patients assess only the risk of classic ischemic or bleeding events; at present, there are no risk scores assessing adverse events of invasive treatment other than MACCE. Only the assessment of the balance between the risk of traditional ischemic/bleeding events (and the associated increased risk of death) and the risk of an SAE during invasive treatment, and the translation of this balance into final patient mortality, would allow clinicians to identify the group of patients with MI who will benefit from the implemented invasive treatment. Looking to the future, the need to use risk assessment systems in the management of patients with MI appears unquestionable. Modern medicine, and above all cardiology, cannot exist without integral risk assessment systems. On the one hand, there are treatment standards set by the ESC guidelines (which are mainly based on the results of RCTs); on the other, there is the assessment of their practical implementation in the form of mortality rates. The link between the two is provided by risk assessment systems. The future of risk assessment systems seems to be multi-directional. The first tendency is to create a comprehensive scale that takes into account all classic risks, as well as new types of risks related to the logistics of the treatment process. The second direction is the creation of risk scales assessing the risk of SAEs, clinical scales assessing the interrelationships between logistic factors and the incidence of SAEs, as well as scales that take both of the above-mentioned elements into account in assessing short- and long-term prognosis. Another direction seems to be a return to the evaluation of parameters related to the PCI procedure; in recent years, no scales have been developed that take into account the parameters of invasive procedures (the most recent scales incorporating these parameters, ZWOLLE and SYNTAX, were published at the beginning of the 21st century). A solution worth attention also seems to be the creation of a scale combining classic clinical parameters with hemodynamic parameters related to PCI, based on a population of patients treated with new antiplatelet drugs and with new-generation DES.
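As a closing illustration of the derive-then-validate workflow recommended in the discussion above (fit a survival model on a registry-style derivation cohort, then check discrimination on held-out data), here is a minimal, self-contained sketch on synthetic data using the lifelines library. It reproduces no published score; it only sketches the method.

```python
# Derive a Cox proportional-hazards risk model on a synthetic "registry"
# cohort, then measure its discrimination (c index) on a held-out cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "diabetes": rng.integers(0, 2, n),
})
# Simulate event times whose hazard rises with age and diabetes.
linpred = 0.04 * (df["age"] - 65) + 0.5 * df["diabetes"]
df["T"] = rng.exponential(1.0 / np.exp(linpred))
df["E"] = (df["T"] < 3.0).astype(int)   # event observed before censoring
df["T"] = df["T"].clip(upper=3.0)       # administrative censoring at 3 years

derive, validate = df.iloc[:1500], df.iloc[1500:]

cph = CoxPHFitter()
cph.fit(derive, duration_col="T", event_col="E")
print("derivation c index:", round(cph.concordance_index_, 3))

# External-style validation: score the held-out cohort and compute c index
# (negated partial hazard, since higher hazard means shorter survival).
risk = cph.predict_partial_hazard(validate)
print("validation c index:",
      round(concordance_index(validate["T"], -risk, validate["E"]), 3))
```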
Trends of Device Utilization Ratios in Intensive Care Units During 10 Years in South Korea: Results from the Korean National Healthcare-Associated Infections Surveillance System Abstract Background Device-associated healthcare-associated infection (DA-HAI) is an important patient-safety issue, and it is important to reduce unnecessary device utilization in order to decrease DA-HAI rates. We therefore investigated the time trends of device utilization (DU) ratios and DA-HAI rates by analyzing 10 years of data collected through the Korean National Healthcare-associated Infections Surveillance System (KONIS), in which hospitals participate voluntarily. Methods We investigated the time trends of DU ratios and DA-HAI rates from 2006 through 2015 in KONIS-participating intensive care units (ICUs). DA-HAI rates were calculated as the number of infections per 1,000 device-days, and DU ratios as the ratio of device-days to patient-days. Pooled incidences of DA-HAIs and DU ratios were calculated for each year of participation. Results Data were collected on 5,325,176 catheter-days and 6,358,829 patient-days in the 190 participating ICUs between July 2006 and June 2016. From 2006 to 2015, the year-wise ventilator utilization ratio (V-UR) increased significantly from 0.40 to 0.454 (F = 6.27, P < 0.0001), the year-wise urinary catheter utilization ratio (UC-UR) showed a gradual but non-significant increase from 0.83 to 0.84 (F = 1.66, P = 0.0951), and the year-wise central line utilization ratio (CL-UR) decreased gradually and non-significantly from 0.55 to 0.52 (F = 1.62, P = 0.1059). In subgroup analysis, medical ICUs (F = 2.79, P = 0.0034) and hospitals with more than 900 beds (F = 3.07, P = 0.0015) were significantly associated with increased V-UR. The rate of ventilator-associated pneumonia decreased significantly from 3.48 in 2006 to 1.00 in 2015 (per 1,000 ventilator-days; F = 27.62, P < 0.0001). Rates of catheter-associated UTI and central line-associated bloodstream infection also decreased significantly, from 1.85 to 0.88 (per 1,000 catheter-days; F = 10.14, P < 0.0001) and from 3.40 to 2.20 (per 1,000 catheter-days; F = 14.17, P < 0.0001), respectively. Conclusion In Korea, all DA-HAI rates have shown a significant reduction over the last 10 years; however, the V-UR has shown a significant year-wise increasing trend over the same period, and the UC-UR and CL-UR have not decreased significantly. Efforts are needed to reduce device utilization ratios. Disclosures E. J. Kim, Korean Nosocomial Infections Surveillance System (KONIS): Investigator, Research support; Y. H. Choi, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; H. Y. Kim, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; Y. G. Kwak, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; T. H. Kim, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; H. B. Kim, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; S. H. Park, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; M. Lee, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; S. O. Lee, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; J. Y. Choi, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; P. G.
Choe, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; S. K. Lim, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research grant; S. R. Kim, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; 
 M. J. Shin, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; S. Y. Yoo, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; H. Yoo, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; 
J. Y. Choi, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support; S. H. Han, Korean Nosocomial Infections Surveillance System (KONIS): Board Member, Research support. Of the patients with pneumonia, 11 had a respiratory panel (RP) positive for virus (7 HMV); 4 had co-infection with positive bacteria, with a mean PCT of 0.62 and a mean duration of antibiotics (ABX) of 6 days after the test result; of the 7 with no bacterial co-infection, the mean PCT was 0.12, with a mean ABX duration of 0.28 days after the test result. Conclusion. The results of the RP led to a decrease in ABX duration, which was most profound in the patients for whom AST intervened. LOS was also reduced. Utilization of RP and PCT facilitated better ABX use. Background. Rapid diagnostic tests can reduce time to organism identification and susceptibility results, allowing more rapid optimization of antibiotic therapy. We sought to determine whether a qualitative immunochromatographic assay (Alere™ PBP2a Culture Colony test) that differentiates methicillin-susceptible S. aureus (MSSA) from methicillin-resistant S. aureus (MRSA) could optimize time to appropriate therapy for patients with skin and soft-tissue infections (SSTIs) and nosocomial pneumonia caused by S. aureus. Methods. Adult patients admitted to The Johns Hopkins Hospital with a respiratory or wound culture growing S. aureus between July-October 2015 (baseline period) and July-October 2016 (intervention period) were included. The primary outcome was time to optimal antibiotic therapy from specimen collection before and after implementation of the PBP2a assay. Secondary outcomes were (1) time to antibiotic de-escalation from specimen collection, (2) length of hospital stay, and (3) number of vancomycin levels. An unadjusted analysis was conducted using the Chi-square or Fisher's exact test for categorical variables and the Wilcoxon rank-sum test for continuous variables. Results. 189 patients met eligibility criteria (119 baseline, 70 intervention). There were no significant differences in patient characteristics between periods. Overall time to optimal therapy decreased during the intervention period compared with baseline (IQR 0-24.7 hours vs. 0-64.2 hours, P = 0.02). In the subset of patients with SSTIs, time to optimal therapy and to antibiotic de-escalation was reduced during the intervention period compared with baseline (IQR 0-6.6 vs. 0-70.8, P = 0.02 and IQR 0-26.5 vs. 0-65.5, P = 0.05, respectively), but not in patients with pneumonia. Length of hospital stay (median 6 days in each period, P = 0.60) and number of vancomycin levels (median 0 vs. 1, P = 0.33) were similar before and after assay implementation. Conclusion. There was a reduction in time to optimal antibiotic therapy after implementation of the PBP2a assay, driven by changes in SSTI regimens but not pneumonia regimens. Incorporation of a rapid test to differentiate MSSA from MRSA may be a useful addition to antibiotic stewardship initiatives to optimize therapy for patients with MSSA infection. Disclosures.
Background. S. aureus device-related infections (DRIs) cause substantial pediatric morbidity and hospitalization costs. Biofilm formation is postulated to play an important role in the pathogenesis of DRIs. We hypothesized that S. aureus isolates from DRIs would differ in antibiotic susceptibility, genotype, and ability to form biofilm in vitro compared with skin and soft-tissue infection (SSTI) isolates. Methods. Patients at Texas Children's Hospital (TCH) and their isolates were identified from a prospective S. aureus study database from 2008 to 2016. Age- and date-of-infection-matched SSTI control isolates were selected 4:1. Demographic and clinical data were collected retrospectively. Isolates were genotyped by pulsed-field gel electrophoresis. In vitro biofilm formation was assessed for DRI and a subset of SSTI isolates using a modified microtiter plate assay and scoring system (APMIS 2007;115:891-9). Data were analyzed with Fisher's exact or Wilcoxon rank-sum test. Conclusion. S. aureus DRI isolates were significantly more likely to be MSSA and non-USA300 compared with SSTI isolates. Using this biofilm model, isolates from DRIs were less likely to be associated with very strong biofilm production than SSTI isolates. We speculate that biofilm phenotype (ica-dependent vs. surface proteins) may play a more important role than level of production in the pathogenesis of pediatric S. aureus DRIs.
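Returning to the KONIS abstract above: the two surveillance quantities it defines, the device utilization ratio (device-days divided by patient-days) and the DA-HAI rate (infections per 1,000 device-days), are simple enough to compute directly. The example numbers below are invented for illustration.

```python
# Minimal sketch of the two KONIS surveillance formulas defined above.

def du_ratio(device_days: int, patient_days: int) -> float:
    """Device utilization ratio = device-days / patient-days."""
    return device_days / patient_days

def dahai_rate(infections: int, device_days: int) -> float:
    """DA-HAI rate = infections per 1,000 device-days."""
    return 1000.0 * infections / device_days

# Hypothetical ICU-year: 4,500 ventilator-days over 10,000 patient-days,
# with 9 episodes of ventilator-associated pneumonia (VAP).
print(f"V-UR     = {du_ratio(4500, 10000):.2f}")                       # 0.45
print(f"VAP rate = {dahai_rate(9, 4500):.2f} per 1,000 ventilator-days")  # 2.00
```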
Primary omental ectopic pregnancy: A rare case report Key Clinical Message Primary abdominal pregnancy is the rarest form of ectopic pregnancy, in which the fertilized ovum is directly implanted into the peritoneal cavity. This condition poses a significantly high risk of maternal morbidity and mortality. Here, we present a case of a 5-week primary omental pregnancy managed by laparotomy. | CASE PRESENTATION A 21-year-old unmarried woman presented to the emergency department with a complaint of dull aching, non-radiating lower abdominal pain and per vaginal bleeding for 1 day. It was associated with five episodes of non-projectile vomiting, on a background history of amenorrhea for the past 4 weeks. There was no history of a urine pregnancy test (UPT) done either at home or at a health center. She was a non-smoker and had no history of previous gynecological or abdominal surgery, pelvic inflammatory disease, or insertion of intrauterine devices. On presentation to the emergency department, blood pressure was 100/60 mm Hg, heart rate was 108 beats per minute, and respiratory rate was 20 breaths per minute. On examination, she had mild pallor; on abdominal examination, the abdomen was mildly distended and tenderness was noted in the lower abdomen. On per speculum examination of the genitalia, the cervix was smeared with blood, and a bulky uterus was appreciated on bimanual examination. In the emergency department, crystalloids were started through an intravenous line. Blood was drawn, a urine sample was collected, and investigations were sent, which were reported as: urine pregnancy test "positive"; hemoglobin 8.6 gm%; total leukocyte count 12,800 per mm3 with 84% polymorphonuclear cells. Blood grouping was done and reported as O positive, following which the blood bank was contacted, two pints of packed red cells were arranged, and cross-matching was done. Emergency ultrasonography of the abdomen and pelvis showed moderate free fluid in the abdomen, which on needle aspiration revealed hemorrhagic fluid (Figure 1). The patient was shifted to the operation theater and emergency laparotomy was performed, which confirmed hemoperitoneum with approximately 500 mL of free fluid and approximately 200 g of blood clots (Figures 2 and 3). Product-of-conception-like material measuring 6 cm × 4 cm was found attached to the greater omentum. On intra-abdominal inspection, the uterus, bilateral fallopian tubes, and bilateral ovaries were found to be intact. The free fluid was drained by suction, the clots were removed, and the POC-like material was resected completely and sent for histopathological examination. Histopathological examination of the collected tissue revealed extensive areas of hemorrhage, necrosis, neutrophilic infiltration, and sclerosed villi (Figure 4). It was negative for hydatidiform mole and malignancy. The diagnosis of abdominal pregnancy in the omentum was thus confirmed. | CASE DISCUSSION Ectopic pregnancy is a condition in which the ovum fertilized by spermatozoa is implanted and develops outside the endometrial cavity. Common implantation sites are the uterine tubes, ovaries, and intra-abdominal cavity. Its incidence is nearly 1% of all pregnancies in developed countries and 1 to 3% in African and Asian developing countries. [6] Approximately 95% of ectopic pregnancies occur in the fallopian tube (70% in the ampulla, 12% in the isthmus, 11% in the fimbriae, and 2%-3% in the interstitium), 3% in the ovaries, and 1% in the abdomen.
[7] Abdominal pregnancy, albeit rare, is a life-threatening condition in which implantation of the fertilized ovum occurs in the peritoneum or peritoneal organs. Abdominal pregnancy is further classified as primary or secondary. Secondary abdominal pregnancy occurs after a defect in the passage of the fertilized egg (rupture of the uterus, tubes, or ovaries) leads to implantation in the abdomen, whereas primary abdominal pregnancy is exceptional and occurs by direct implantation. [8] Diagnosis is made by taking Studdiford's criteria into consideration. Studdiford's criteria were established in 1942 for the diagnosis of abdominal pregnancy. The presence of intact fallopian tubes and ovaries, the absence of any utero-peritoneal fistula, and a pregnancy early enough in gestation to eliminate the possibility of secondary implantation after primary nidation in the tube prove the diagnosis of primary abdominal pregnancy. [3,5] Various risk factors have been established over the years for ectopic pregnancy. Prior tubal surgery, prior ectopic pregnancy, pelvic inflammatory disease, sexually transmitted disease, current smoking, in utero diethylstilbestrol exposure, low socioeconomic status, recent use of progesterone-only pills, intrauterine devices, myomata, and a history of allergy all increase the risk of ectopic pregnancy. [9,10] Ectopic pregnancy is more common in developing countries because of the high prevalence of these risk factors. In our case, however, none of the aforementioned risk factors were identified. The clinical features of abdominal pregnancy are highly variable and can differ from those of tubal pregnancy. These include abdominal pain and scanty per vaginal bleeding on a background of amenorrhea. Severe abdominal pain is one of the consistent findings in abdominal pregnancy. [10,11] In this case, our patient presented with severe lower abdominal pain with PV bleeding. Although ultrasound examination is the preferred diagnostic method, only 50% of early abdominal pregnancy cases can be detected through this procedure. In this particular case, abdominal ultrasonography revealed moderate free fluid in the abdomen, which was confirmed to be hemorrhagic fluid upon aspiration. After laparotomy, intact fallopian tubes and ovaries were found, with no utero-peritoneal fistulae. Since a gestational age of less than 20 weeks is considered early, we could rule out the possibility of secondary implantation, as the gestational age in this case was only 5 weeks. Furthermore, for the evaluation of ectopic pregnancy, non-contrast MRI using T2-weighted imaging is considered to be sensitive, specific, and accurate. [12] However, owing to the poor hospital setting and the patient's deranged vitals at the time of presentation, MRI could not be done. The management of abdominal pregnancy depends on the estimated gestational age at presentation and the clinical presentation. There are three modalities of treatment of ectopic pregnancy: expectant management, medical management, and surgical intervention. Expectant management is used mainly in tubal ectopic pregnancy. [13] Medical management is recommended specifically for abdominal pregnancies located in the liver or spleen, as surgical intervention in such cases can lead to significant bleeding.
[14] The most common treatment modality of choice for abdominal pregnancy is surgery, where laparotomy and laparoscopic surgery are the available options. Laparotomy was considered safer than the laparoscopic procedure due to the risk of uncontrollable perioperative hemorrhage from the implantation site, but in cases where the implantation site allows non-surgical excision and in cases of early diagnosis of abdominal pregnancy (less than 12 weeks), laparoscopic surgery is preferred. [15,16] Even though our case was diagnosed earlier than 12 weeks of gestation, laparotomy was preferred to laparoscopy because the site of pregnancy was not known precisely before the procedure. Thus, laparotomy served as both a diagnostic and a therapeutic procedure in our case. Many cases can have a negative laparotomy, but in our case we could identify the gestational sac intraoperatively and manage it properly. The risk of torrential perioperative bleeding always persists. Thus, complete removal of the placenta should be done only after identification of its blood supply and its ligation. [17] Since our patient was at 5 weeks of gestation (early abdominal pregnancy), the risk of bleeding was significantly reduced. Establishing an early diagnosis of ectopic pregnancy in poor-resource settings in rural areas of underdeveloped countries like Nepal is a big challenge. It can be improved by the education of healthcare workers and an early referral system with a low threshold, following recognition of risk factors and awareness of atypical presentations.
Figure 1. USG of the abdomen showing moderate fluid collection in the peritoneal cavity (white arrow).
Figure 2. Laparotomy showing product of conception with approximate dimensions of 6 cm × 4 cm.
Figure 3. Intra-abdominal clots with attached omentum.
Figure 4. Histologic examination (hematoxylin and eosin staining) showing chorionic villi lined by trophoblastic cells with extensive areas of hemorrhage and necrosis.
Epidural Naloxone Attenuates Fentanyl Induced PONV in Patients Undergoing Lower Limb Orthopaedic Surgeries: A Prospective Randomized Double-Blind Comparative Study Abstract Background and aim Epidural administration of opioids with local anaesthetics is a popular choice for perioperative pain relief, but opioid-induced side effects limit their use for postoperative analgesia. Hence, this study was designed to evaluate the effectiveness of epidural naloxone, an opioid receptor antagonist, in reducing PONV in patients receiving epidural fentanyl. Methods After obtaining Institutional Ethics Committee approval and written informed consent, 46 patients, between 18 and 80 years, of either sex, with ASA physical status 1-3, undergoing lower limb orthopaedic surgeries were enlisted for this prospective, randomized, double-blind comparative study. Subjects were allocated to one of two groups and received epidurally either fentanyl with bupivacaine (Group C, n = 23) or fentanyl with bupivacaine and naloxone 2 mcg (Group N, n = 23), for reducing postoperative pain. The PONV score and Wong-Baker Scale (WBS) pain score were recorded at 6, 12 and 18 hrs postoperatively. Results All patients were comparable with respect to age, gender, ASA PS, height, body weight, and duration of surgery. A statistically significant decrease in PONV score was observed in Group N at 6 and 12 hours postoperatively. The number of patients who required a rescue antiemetic was also significantly lower in Group N at 6 and 12 hours. The mean WBS pain score also showed a significant reduction in Group N at 6 hours postoperatively. Conclusion Concomitant use of low-dose epidural naloxone and fentanyl is effective in attenuating PONV, besides enhancing analgesia in the early postoperative period. INTRODUCTION The practice of anaesthesiology has now become remarkably safe, with rare occurrences of mortality and morbidity. Hence, the less severe adverse events of anaesthesia have gained more significance recently. Postoperative nausea and vomiting (PONV) is still the most troublesome sequela encountered in the recovery room, in spite of new advances in its prevention and treatment. [1] Even though it is a minor complication, it not only causes significant agony and annoyance to the patient but also results in profound patient dissatisfaction with the overall quality of anaesthesia. [2] Besides, the recent heightened interest in ambulatory procedures has shifted the focus further onto PONV, as its occurrence may prolong discharge [3] or cause undesirable hospital stay. [4] Lower limb orthopaedic procedures are commonly associated with severe postoperative pain; therefore, adequate alleviation of pain is essential during this period. [5,6] An infusion of a local anaesthetic-narcotic mixture given epidurally is a commonly used method for analgesia after lower limb orthopaedic surgeries, [7] and previously, epidural morphine was widely used. [8] However, it had been associated with a higher incidence of PONV, pruritus, and respiratory depression. [9,10] Fentanyl, a highly lipophilic synthetic opioid, is used instead of morphine, as it causes a lower incidence of delayed respiratory depression owing to its rapid absorption and clearance from CSF. [11] However, the incidence of PONV remains high even in patients given epidural fentanyl.
[12] Numerous studies in the past had reported the efficacy of small doses of opioid antagonist naloxone, administered intravenously or epidurally, for the maintenance of analgesia with marked reduction in morphine, buprenorphine and sufentanil associated PONV. [13][14][15][16] But no similar studies on epidural fentanyl have been found till date. Hence, this study was formulated to assess the effectiveness of epidural naloxone in attenuating PONV in patients receiving epidural fentanyl for pain relief after lower limb orthopaedic procedures. 50 mcg and naloxone 2mcg. Epidural bolus was repeated at 6, 12and18hours following surgery. PONV and WBS pain scores were monitored by the staff nurse in PACU, who was unaware of the patient group allocation. Patients were also blinded as both the preparations were colourless. PONV and pain intensity was recorded at 6, 12 and 18 hours, post operatively.PONV was evaluated using a PONV score: 0= no nausea or vomiting, 1= nausea only, 2= vomiting once, 3=vomiting more than once. Rescue antiemetic ondansetron 4 mg IV was given to all patients with PONV score ≥ 1. Pain intensity was assessed using Wongbakers FACES pain scale (WBS). [18] Sample size was calculated from the study, [16] with a power of 80% and a significance level of 5% and the minimum sample size needed was calculated to be 23 for each group. The statistical calculations were performed using the software SPSS (Statistical Presentation System Software, SPSS Inc.) version 15.0.Categorical data was represented in the form of frequencies and proportions.Continuous data was represented as mean and standard deviation.Chi square test or Fisher exact t-test was used as test of significance for qualitative data.Independent t test or Mann Whitney U test was used as test of significance to identify the mean difference between two quantitative variables.Repeated measure ANOVA was used as test of significance to assess the pain score.Level of significance was set at 0.05. RESULTS We screened 55 patients for this prospective, parallel-group, double blind, randomized comparative study.Nine patients were exempted from our study for not satisfying the inclusion criteria; four patients had significant myocardial impairment, three patients had contraindication for administering central neuraxial blockade, two patients were not willing for spinal anaesthesia. Finally, a total of 46 patients were enlisted, randomized and assigned into two groups of 23 each.All the patients of both groups finished the study and were followed up and evaluated ( Figure 1). Both groups were comparable with respect to the distribution of age, gender, ASA PS, body weight, height and BMI.No statistically significant difference was noted in maximum sensory level achieved as well as duration of surgery (Table 1). PONV scores of Group N were significantly lower when compared to Group C at 6 and 12 hours, post operatively ( Table 2). A statistically significant decrease in mean PONV score was also observed in Group N at 6 and 12 hours ( Table 3). The rescue antiemetic consumption was also significantly lesser in Group N at 6 and 12 hours in the post-operative period( Figure 2). 
MATERIALS AND METHODS

After procuring Institutional Ethics Committee approval and written informed consent, 46 patients aged 18-80 years, of either sex, with ASA physical status 1-3, undergoing lower limb orthopaedic surgeries were enrolled in this prospective, randomized, double-blind comparative study. Patients with allergy to the study drugs, contraindications to neuraxial anaesthesia, chronic opioid use, severe myocardial, renal or hepatic impairment, psychiatric illness, or nausea and vomiting during the operation were excluded from the study.

A preanaesthetic examination was conducted on the preoperative day, and a written file with all the details of the anaesthesia technique to be performed was provided to all enrolled patients before consent was taken. On arrival in the operating room on the day of surgery, all standard monitors were connected and the baseline Heart Rate (HR), SpO2, Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP) and Mean Arterial Blood Pressure (MABP) of all patients were recorded. Premedication with midazolam 0.5-1 mg intravenous (IV) and ondansetron 50 mcg/kg IV was given to all patients.

All patients were positioned in the left lateral decubitus position for combined spinal-epidural anaesthesia, and local anaesthesia was infiltrated into the skin at the L2-L3 space, following which an 18-gauge Tuohy needle was introduced via the midline approach. The epidural space was identified using a loss-of-resistance technique and an epidural catheter was introduced. Later, lumbar puncture was performed at the L3-L4 space with a 25-gauge Quincke needle. After confirming the subarachnoid space by free flow of CSF, spinal anaesthesia was given using 3 ml of 0.5% heavy bupivacaine. Electrocardiogram (ECG), HR and SpO2 were monitored continuously, and blood pressure was recorded non-invasively every 5 min until the end of the procedure. The onset of analgesia and the level of sensory block were also noted. The sensory block level was assessed as the maximal level of cold sensation at the midclavicular line, using an alcohol swab bilaterally. The intensity of motor block was evaluated with the modified Bromage scale: [17] 0 = no motor block; 1 = inability to raise the extended leg, but able to move knees and feet; 2 = inability to raise the extended leg and move the knee, but able to move the feet; 3 = complete motor block of the lower limb. In the post-anaesthesia care unit (PACU), the sensory block level was checked every 15 min, and when the sensory level regressed below the T10 dermatome, epidural analgesia was administered.

Using a computer-generated randomization list, the 46 enrolled patients were assigned into two groups of 23 each using opaque, sealed and serially numbered envelopes. For epidural analgesia, Group C received 5 ml of bupivacaine 0.125% with fentanyl 50 mcg, and Group N received 5 ml of bupivacaine 0.125% with fentanyl 50 mcg and naloxone 2 mcg. The epidural bolus was repeated at 6, 12 and 18 hours after surgery. PONV and WBS pain scores were monitored by the staff nurse in the PACU, who was unaware of the patient group allocation. Patients were also blinded, as both preparations were colourless. PONV and pain intensity were recorded at 6, 12 and 18 hours postoperatively. PONV was evaluated using a PONV score: 0 = no nausea or vomiting, 1 = nausea only, 2 = vomiting once, 3 = vomiting more than once. Rescue antiemetic ondansetron 4 mg IV was given to all patients with a PONV score >= 1. Pain intensity was assessed using the Wong-Baker FACES pain scale (WBS). [18]

The sample size was calculated from the study of reference [16], with a power of 80% and a significance level of 5%; the minimum sample size needed was calculated to be 23 per group. The statistical calculations were performed using SPSS (Statistical Presentation System Software, SPSS Inc.) version 15.0. Categorical data were represented as frequencies and proportions, and continuous data as mean and standard deviation. The chi-square test or Fisher's exact test was used as the test of significance for qualitative data. The independent t-test or Mann-Whitney U test was used to identify mean differences between two quantitative variables, and repeated-measures ANOVA was used to assess the pain score. The level of significance was set at 0.05.
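The arithmetic behind this sample size calculation can be sketched as below. Since the text does not report the PONV incidences taken from reference [16], the two proportions used here are hypothetical placeholders, chosen only so that the standard two-proportion formula reproduces the stated figure of 23 patients per group.

# Sketch: minimum sample size per group for comparing two proportions
# (normal approximation) at 80% power and two-sided alpha = 0.05.
# p1 and p2 are hypothetical PONV incidences, not values from the paper.
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided critical value
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)   # sum of binomial variances
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(p1=0.60, p2=0.22))           # -> 23, matching the text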
RESULTS

We screened 55 patients for this prospective, parallel-group, double-blind, randomized comparative study. Nine patients were excluded for not satisfying the inclusion criteria: four had significant myocardial impairment, three had contraindications to central neuraxial blockade, and two were not willing to undergo spinal anaesthesia. Finally, a total of 46 patients were enrolled, randomized and assigned into two groups of 23 each. All patients in both groups completed the study and were followed up and evaluated (Figure 1).

Both groups were comparable with respect to the distribution of age, gender, ASA PS, body weight, height and BMI. No statistically significant difference was noted in the maximum sensory level achieved or in the duration of surgery (Table 1).

PONV scores in Group N were significantly lower than in Group C at 6 and 12 hours postoperatively (Table 2). A statistically significant decrease in mean PONV score was also observed in Group N at 6 and 12 hours (Table 3). Rescue antiemetic consumption was likewise significantly lower in Group N at 6 and 12 hours in the postoperative period (Figure 2).

WBS pain scores showed a statistically significant reduction in Group N at 6 hours postoperatively (Table 2). The mean pain score in Group N was also significantly lower at the 6th hour postoperatively when compared to Group C (Table 4).

DISCUSSION

Concomitant epidural administration of a local anaesthetic with opioids is a popular choice for perioperative analgesia because of their synergistic effect. However, opioid-induced side effects such as nausea and vomiting, pruritus, respiratory depression and urinary retention can cause severe distress and dissatisfaction with the overall surgical and anaesthesia experience, which may limit the use of opioids for postoperative pain relief. PONV, in particular, is of major concern with the use of neuraxial opioids, as it is considered the most undesirable and troublesome complication, with an incidence as high as 20-30%. [14] Hence, avoidance of PONV has always been a high priority among patients, [19] sometimes even more than avoidance of postoperative pain. The commonly recognized risk factors for PONV are age, female gender, non-smoking status, history of PONV or motion sickness, postoperative opioid use and extended duration of anaesthesia. In our study, both groups were comparable with respect to age, gender, ASA PS and BMI, and no significant difference was noticed between the groups in either the maximum sensory level achieved or the duration of surgery.

Many past studies have aimed at investigating or reducing the side effects of neuraxially administered lipophilic as well as hydrophilic opioids. Among them, trials on epidural naloxone and its effect in decreasing the side effects of epidural opioids are very few, [13][14][15][16] and no studies had so far investigated the effect of epidural naloxone on the lipophilic opioid fentanyl, which also significantly increases PONV when administered epidurally.

Compared with hydrophilic morphine, a significantly lower incidence of PONV has been observed with epidural fentanyl, [20] but fentanyl showed a similar incidence of PONV when compared with another lipophilic opioid, sufentanil. [21] The increase in PONV seen with epidural morphine may be due to its longer duration of action, as it remains in the CSF and spinal tissues for a longer period, while more lipid-soluble opioids are rapidly absorbed from the epidural space. Even though the exact mechanism of opioid-induced PONV is not fully known, a possible mechanism is stimulation of the chemoreceptor trigger zone via the mu-opioid receptor, resulting in vomiting. Hence, the opioid receptor antagonist naloxone may play a vital role in attenuating PONV.

In our study, statistically significantly lower mean PONV scores were observed in the naloxone group at the 6th and 12th hours postoperatively. Rescue antiemetic consumption was also markedly lower in the naloxone group during the early postoperative period. These results are similar to the findings of Choi et al. [14] and Kim et al. [16] Choi et al., in their study, concluded that epidurally administered naloxone significantly decreased morphine-induced side effects such as pruritus and nausea in a dose-dependent fashion without affecting analgesia. [14] Kim et al. evaluated the effect of epidural naloxone in decreasing PONV in patients receiving epidural sufentanil for postoperative analgesia and found that concomitant epidural infusion of sufentanil and low-dose naloxone not only reduced PONV but also enhanced the analgesic effect of sufentanil.
[16] Similarly, in this study, a statistically significant reduction in pain scores was noticed in the naloxone group, especially in the early postoperative period; the mean pain scores were significantly lower at the 6th hour postoperatively. These findings also support the evidence from some previous studies. [14][15][16] The potential explanation for these effects of low-dose naloxone may be that: a) low-dose naloxone enhances the release of endogenous opioid peptides by blocking presynaptic auto-inhibition of enkephalin release, [22] and b) the hyperalgesia often noticed with opioid administration, mediated by Gs protein-coupled excitatory opioid receptors, is directly and competitively antagonized by low-dose naloxone, without attenuation of the analgesia mediated by the inhibitory Gi/Go-coupled opioid receptors. [23]

There were a few limitations to our study. The doses of naloxone, fentanyl and bupivacaine were not weight-based, although they were in agreement with previous studies on opioids and local anaesthetics. We also did not take into consideration the agents used for premedication and intraoperative sedation, which might have had an impact on the incidence of PONV. Finally, postoperative monitoring was not extended beyond 24 hours in the PACU. Further studies are advocated in these areas.

CONCLUSION

Low-dose epidural naloxone significantly attenuates PONV induced by epidural fentanyl, besides enhancing the analgesic effect during the early postoperative period.

Acknowledgement: We thank Mr. Kevin Suresh, who conducted the statistical analysis of the data of our study. We also express our sincere gratitude to all the patients who participated in the study and to the staff of the Department of Anaesthesiology.
Qualification of Fin-Type Heat Exchangers for the ITER Current Leads

Abstract

The ITER current leads will transfer large currents of up to 68 kA into the biggest superconducting magnets ever built. Following the development of prototypes and targeted trials of specific manufacturing processes through mock-ups, ASIPP (the Chinese Institute of Plasma Physics) is preparing for the series fabrication. A key component of the ITER HTS current leads is the resistive heat exchanger. Special R&D was conducted for these components at CERN and ASIPP in support of their design. In particular, several mock-ups were built and tested in room-temperature gas to measure the dynamic pressure drop and compare it to 3D CFD models.

Introduction

The Current Leads (CL) are key components of the ITER superconducting magnet system. The CLs provide the cold/warm transitions for the large currents fed to the ITER superconducting coils from the warm power supply and distribution system. By using hybrid High-Temperature Superconductor (HTS) / resistive copper leads, the heat load into the cryogenic system at the cold end of the leads can be reduced by a factor of up to five with respect to fully resistive leads. The resulting savings in the cost of operation more than compensate for the extra cost of manufacture. ITER requires a total of 60 current leads: 18 for the TF coil system, 12 for the CS coil system, 12 for the PF coil system and 18 for the correction coil system, with a total current capacity of approximately 2.6 MA.

Following the first successful large-scale application of HTS current leads using Bi-2223 superconductor in CERN's LHC project [1], a "demonstrator prototype" for these current leads was built and tested at KIT (Karlsruhe Institute of Technology) in 2002-2006 [2]. The demonstrator verified the basic design concepts discussed below and confirmed the choice of 50 K as the supply temperature for the cooling of the resistive heat exchanger. After the decision to entrust the Chinese ITER party with the in-kind contribution of the ITER current leads, ASIPP (the Institute for Plasma Physics) undertook the manufacturing and testing of the first pre-prototypes in 2009 and 2010 [3], [4]. ITER then released the final designs of the three types of CLs in 2011 [5].

The hybrid design of the ITER current leads consists of a copper heat exchanger section, covering the temperature range from 300 K to 65 K, and a superconducting section, covering the 65 K to 4.5 K range. The resistive heat exchanger design is scaled from that of the LHC HTS current leads: it is of the fin type, cooled with a zig-zag counter-flow of GHe, which enters at 50 K and 0.4 MPa and exits at the warm terminal at about 300 K. The HTS section between 5 K and 65 K, which uses Bi-2223 superconductor tapes with a gold-doped silver matrix, is conduction-cooled from the bottom end, which in turn is cooled by a flow of supercritical helium at ~5-6 K and 0.5-0.6 MPa.

Following the development of the CL designs, a multi-step qualification program was launched at ASIPP, including mock-ups [6,7] and prototypes (now being tested), each of which had to be completed successfully before fabrication of the leads proper could commence. Also included in the qualification program were two mock-ups of the resistive heat exchanger, one of the CC type (10 kA) and one of the TF type (68 kA). The heat exchanger is the most complex and thus the key component of the HTS leads.
In this paper we discuss in detail the experience gained during the qualification process for these heat exchanger mock-ups, including the testing at ASIPP and CERN, and the cross-check against computer models.

Heat Exchanger for the ITER Current Leads

The optimization of the HX aims at minimizing the helium mass flow rate required for its cooling in nominal operating conditions. For the modelling, this corresponds to imposing a zero temperature gradient at the room-temperature end of the temperature profile, in which case all the heat generated by Joule heating along the copper rod is transferred into the helium flow. At the same time, the geometry of the heat exchanger must assure the required heat exchange surface and heat exchange efficiency. The strong temperature dependence of the material properties must be taken into account in the optimization. The following five aspects of the HX affect the cooling mass flow and the safety requirements: 1) high efficiency, 2) sufficient time before thermal runaway in the so-called LOFA ("Loss of Flow Accident"), 3) optimum dimensions (length-to-cross-section ratio) for minimizing conductive heat inflow, 4) reasonably low pressure drop (<= 1.5 bar), and 5) low resistance of the connection to the HTS module (and to the warm terminal). The design of the heat exchangers is discussed in the following.

Heat Exchanger Design

The HX for the 68 kA TF-type current lead is illustrated in figure 1. The cooling helium gas is confined to follow the zig-zag path along the HX by the enveloping stainless steel tube that is tightly fitted over the HX core. An advantage of this fin-type design, especially for a larger production, is that the quality of assembly, and therefore the performance, can be controlled via a detailed geometrical survey (as opposed to the extensive testing performed here in the frame of the qualification program). Although most of the leads are of the pulsed type (the only exception is the TF lead), it was decided to optimize the heat exchanger designs for DC operation at the peak current of the electrical cycle. This is a conservative and thus safe approach, but it is justified given the large stored energy of the ITER magnet system transported through these leads. The design current and the main geometrical parameters are summarized in table 1. The fin thickness is 3 mm, as is the spacing between fins. The length is the total length of the heat exchanger measured between the end fins; the cut dimension in the table is the depth of the segment that is removed from alternating sides to permit zig-zag flow. As for the LHC leads, the central hole of diameter 16 mm (10 mm in the CC type) is a channel serving as a conduit for the instrumentation wires. Following the method first developed for the LHC current leads, electron beam welding is chosen for joining the HX to the room-temperature terminal at the warm end, and to the transition section and HTS module at the cold end. This ensures an excellent electrical and mechanical connection of the components, and uniform quality can be ensured by industrial QC.

Table 1. Dimensions of the heat exchangers of the ITER leads. The cut is the segment cut from the edge of the fin to allow helium to pass (see figure 1). Cuts are on alternating sides of the HX.

Manufacturing Approach

The defined assembly tolerance H7/g6 between the HX core and the external sleeve is important for the current lead performance, as it minimizes the amount of gas bypassing the HX core. Another critical point is maintaining the straightness of the HX core during machining. The CC HX core, for example, sags under its own weight by about 0.2 mm. The TF-type HX is machined in less than a week on a 5-axis CNC machine with a straightness deviation of less than 70 microns over 1 m. The complete dimensional survey of the HX core in a Coordinate Measurement Machine (CMM) takes about one day.

The diameter of the sleeve in the TF-type CL is required to be less than 70 µm greater than that of the 188 mm diameter HX core. The sleeve is honed to achieve these tight tolerances on diameter, shape and surface finish. Special tooling was prepared to facilitate the precision checking of its final dimensions. Special tooling was also developed to slide the sleeve over the HX. Before assembly, the honed tube is heated to about 180 °C, increasing its diameter by 0.25 mm for the CC-type HX and by 0.43 mm for the TF-type HX. The tube is then slid, essentially by hand, over the HX core; this operation usually takes no more than 10 seconds. The tube is then welded in place.
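The quoted diameter growth can be checked with the textbook linear-expansion estimate. This is a sketch only: the expansion coefficient (a typical value for austenitic stainless steel) and the 20 °C shop temperature are assumptions, not values from the paper.

# Sketch: sleeve diameter growth during shrink-fit assembly,
# dD = alpha * D * dT. ALPHA_SS and T_AMBIENT are assumed values.
ALPHA_SS = 16e-6     # 1/K, typical austenitic stainless steel (assumed)
T_AMBIENT = 20.0     # degC, assumed shop temperature
T_HEATED = 180.0     # degC, heating temperature from the text

def diameter_growth(d_mm):
    """Diameter increase (mm) when heated from ambient to 180 degC."""
    return ALPHA_SS * d_mm * (T_HEATED - T_AMBIENT)

print(f"{diameter_growth(188.0):.2f} mm")  # ~0.48 mm for the 188 mm TF core

The result is of the same order as the quoted 0.43 mm; the small difference is consistent with a somewhat lower expansion coefficient or final temperature than assumed here.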
Heat Exchanger Mock-ups

Two different HX mock-ups were manufactured by two Chinese suppliers, Juneng (TF type, figure 2) and Keye (CC type), under the supervision of ASIPP. The fabrication and testing of the mock-ups was the first step in the qualification of the manufacturing procedures, which was completed with the fabrication of the HX for the prototype leads.

Pressure Drop Tests

Pressure drop tests with nitrogen gas (GN2) at room temperature were performed at ASIPP on the mock-up copper HX of the current leads. In combination with calibrated CFD models, the measured pressure drops can give information about the amount of bypass flow. Pressure taps were mounted at the opposite ends and along the HX (with the capillaries perforating the honed tube). The taps were distributed uniformly along the length so that each pair of adjacent taps covers about a quarter of the HX length. Differential pressure gauges of several different full scales (FS) were purchased to cover a range from 0.1 kPa to 100 kPa. The mass flow rate was measured downstream of the outlet using a fully opened, calibrated flow controller from Hastings (for measurements below 3 g/s); the ASIPP flow controller of 5 g/s FS had to be used for a few points above 3.5 g/s, despite doubts about its calibration.

The test consisted of measurements taken at different inlet pressures and nitrogen flow rates. The quality of the data was checked in-line by comparing the sum of the pressure drops of the subsections with the drop measured by the differential gauge covering the entire mock-up. Especially for low flow rates at the limit of the resolution of the differential pressure gauges, it was useful to plot the data on a log scale (as shown in figure 3) to follow the transition from the laminar to the turbulent regime. In some cases the gauges had to be exchanged for higher-resolution, smaller-FS gauges to measure these points correctly. Collapsing the data for different inlet pressures using P_in·ΔP in log-log scales (figure 3) reveals that the pressure drops correctly scale onto a common line. In the turbulent regime, P_in·ΔP showed a consistent scaling for both the CC- and TF-type HX, with the transition from laminar to turbulent flow at the critical mass flow rates discussed below. Similar measurements, with matching results, were also performed at CERN, further confirming the quality of these experimental data. CERN also performed tests using helium (GHe); the data for these tests are also included in figure 3. As expected, the GHe and GN2 ΔP data for a given type of HX at the same mass flow rate ṁ differ by the inverse ratio of the densities ρ (here a factor of 7).
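The factor of 7 follows directly from the molar masses of the two gases. A one-line check, assuming both gases are ideal at the same temperature and pressure and that the turbulent pressure drop scales with the dynamic head (an assumed form, since the paper's own equation did not survive extraction):

$$ \Delta P \propto \frac{\dot m^{2}}{2\,\rho A^{2}} \quad\Rightarrow\quad \left.\frac{\Delta P_{\mathrm{He}}}{\Delta P_{\mathrm{N_2}}}\right|_{\dot m} = \frac{\rho_{\mathrm{N_2}}}{\rho_{\mathrm{He}}} = \frac{M_{\mathrm{N_2}}}{M_{\mathrm{He}}} = \frac{28}{4} = 7 . $$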
Model

To support the HX design effort, a 3D numerical model was prepared using the Comsol Multiphysics® isothermal flow module. Reynolds-averaged Navier-Stokes (RANS) equations together with the k-ε turbulence model are used to represent the turbulent flow. The model couples complex, mutually interacting phenomena such as Joule heating, fluid flow, and heat transfer by conduction and convection, and uses temperature-dependent material and gas properties [8]. The model can now be cross-checked against the experimental data from the HX mock-ups.

CFD Modelling of Pressure Drop

To develop a complex 3D FEM model and optimize the mesh quality, a reduced-length model was built, consisting of a 6 cm long section (5 meanders of the HX), which represents 1/15th of a 10 kA HX and 1/16th of a 68 kA HX, respectively. Additionally, the inlet and outlet gas regions were in both cases extended by 3 cm to avoid entrance effects causing convergence problems. The geometry of the 10 kA (CC) short model is represented in figure 4. The simulations were carried out for different mesh configurations. The best results were obtained by using the physics-controlled mesh; this type of mesh is characterized by 5 boundary layers, corner refinement, and surface meshes that are more accurate than the volume meshes. Such a mesh requires approximately 5 million elements and 8 million degrees of freedom (for the 1/16th model of the HX). Reasonably good results (3.58% relative error) with a limited number of degrees of freedom (0.93 million) were obtained by using a physics-controlled mesh with a predefined 'normal' level of accuracy and reducing the number of boundary layers to 3. However, the most accurate results were given by physics-controlled meshes with an increased number of elements. By comparing the results of such meshes with varying numbers of elements, we can reasonably conclude that the results would not change significantly with a further increase in the number of elements.

Pressure Drop Calculation

Based on the above considerations, the physics-controlled normal mesh with 3 boundary layers was chosen to simulate the flow of room-temperature N2 gas in the full-length isothermal model of the 10 kA HX (see figure 5). The model has 8.6 million degrees of freedom, which is almost at the limit of the available computing power. From the short-model study, the expected error from this limited mesh quality is 4.3%. The simulation results are nevertheless in reasonably good agreement with the experimental data, indicating that the manufactured HX mock-ups have no significant bypass flow.

Discussion of the Data

Although the main purpose of the experiments is to check the quality of the manufacturing by comparing the measured with the calculated pressure drops, further data analysis is of interest for improving the understanding of the pressure drop mechanism in zig-zag type HXs. The established practice for pressure drop analysis is the representation of the data in terms of a "friction" coefficient C_f (equ. 1), where A is the cross-section of the flow channel responsible for the majority of the pressure drop. While A is yet to be ascertained, C_f·A^-2 can be directly obtained from the experimental data according to (equ. 1). When plotted against Re·P, where P is the wetted perimeter and Re the corresponding Reynolds number (equ. 2), the laminar and turbulent flow regimes can be clearly identified, with distinct Re scalings of Re^-1 and Re^-α, α ≈ 0.1-0.3. In addition, pressure drops not due to viscosity (µ), such as those from surface roughness, orifices and sharp bends, add a Re-independent constant friction coefficient which becomes dominant at very high Re.
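Because the bodies of equations (1) and (2) did not survive extraction, the reduction of a measured point to these coordinates is sketched below using conventional, assumed forms: ΔP = C_f·ṁ²/(2ρA²) for equation (1) and Re = ṁ·D_h/(A·µ) with D_h = 4A/P for equation (2). A useful consequence of the latter is that Re·P = 4ṁ/µ depends only on the mass flow rate and the viscosity, which makes it a geometry-free abscissa for comparing the CC and TF data.

# Sketch: one measured (mass flow, pressure drop) point reduced to the
# Cf*A^-2 vs Re*P coordinates. The equation forms are assumptions (see
# the lead-in); the example point is hypothetical, not measured data.
def cf_over_A2(dp_pa, mdot_kg_s, rho_kg_m3):
    """Cf * A^-2 (1/m^4) from assumed eq. (1): dP = Cf*mdot^2/(2*rho*A^2)."""
    return 2.0 * rho_kg_m3 * dp_pa / mdot_kg_s ** 2

def re_times_p(mdot_kg_s, mu_pa_s):
    """Re * P (m) from the identity Re*P = 4*mdot/mu (assumed eq. (2))."""
    return 4.0 * mdot_kg_s / mu_pa_s

rho_n2 = 1.16    # kg/m^3, N2 at ~20 degC and 1 bar
mu_n2 = 1.76e-5  # Pa*s, N2 at ~20 degC
print(cf_over_A2(dp_pa=5e3, mdot_kg_s=2e-3, rho_kg_m3=rho_n2))
print(re_times_p(mdot_kg_s=2e-3, mu_pa_s=mu_n2))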
In figure 5, the experimental pressure drop data of the TF and CC HX mock-ups (from figure 3) are presented as C_f·A^-2 vs Re·P. It is immediately evident that the CC and TF HX have similar flow characteristics; in addition, the transition from laminar to turbulent flow is clearly identifiable. The nominal friction factors of the CC and TF HX differ by a factor of 1.8 at the same nominal Reynolds number.

At present, the experimental data lead us to the following observations: 1 - a satisfactory scaling of measurements at different inlet pressures and for different gases; 2 - the expected change in the friction factor scaling with Reynolds number in the transition from laminar to turbulent flow; 3 - a consistent and similar behaviour shown by both the CC and TF HX; 4 - a similar friction coefficient scaling from the CFD model, which is more likely to be significant than coincidental.

The reliability and validity of the experimental data would be considerably strengthened if the similar friction coefficient scalings of CC and TF could be unified into a single correlation C_f(Re) with a plausible definition of the flow channel geometries (A, P and D_h, the hydraulic diameter). The underlying flow geometry for a unified friction factor correlation, if it does indeed exist, must be contained within the flow passage of a pair of zig-zag fins, which repeats along the HX. The candidate geometries are the U-bend of the zig-zag flow passage, the narrow channels between the fins, and the flow around the central copper core, the latter with a form drag similar to the flow across a cylinder. For unifying the nominal friction coefficient scalings of CC and TF into a common C_f(Re) correlation, the underlying flow passage geometry should yield the same critical Reynolds number for the transition from laminar to turbulent flow. As shown in figure 5, the experimental data indicate critical mass flow rates of ṁ_c,CC = 0.3 g/s and ṁ_c,TF = 0.4 g/s. Therefore the ratio of the flow passage perimeters between TF and CC should be P_TF/P_CC = 4/3. Since the overall diameter D of the TF HX is about 1.7 times that of the CC HX, the perimeter ratio of plane 2 for the U-bend zig-zag flow passage between TF and CC is ~1.6, significantly larger than the required 4/3. Similarly, the relevance of the flow around the central copper core can also be eliminated, as the perimeter ratio in this case is more than 2.4. In contrast, the perimeter ratio for the narrow channel flow passage is 1.3 and fits almost perfectly with the required 4/3. Could the narrow channels between the fins indeed be the only possible flow passage underlying a common C_f(Re) correlation? Further experiments and calculations are needed to verify this hypothesis, to explain the friction factor ratio of 1.8, and thus to improve the understanding of the pressure drop in the ITER zig-zag HX designs.
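Restated symbolically, under the conventional Reynolds-number definition assumed above, equality of the critical Reynolds numbers fixes the perimeter ratio:

$$ \mathrm{Re}_{c} = \frac{4\,\dot m_{c}}{\mu P}\ \text{(equal for both designs)} \;\Rightarrow\; \frac{P_{\mathrm{TF}}}{P_{\mathrm{CC}}} = \frac{\dot m_{c,\mathrm{TF}}}{\dot m_{c,\mathrm{CC}}} = \frac{0.4}{0.3} = \frac{4}{3} . $$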
Summary

This paper reports on the development of the heat exchangers for the ITER current leads via mock-ups. Experiments to assess the quality and performance of these heat exchangers were conducted and compared to large 3D numerical models. This comparison confirms the absence of significant bypass flow in the mock-ups, thus validating the chosen manufacturing approach. The heat exchangers for the ITER current leads are therefore now sufficiently verified to launch series production. To complete the theoretical understanding of the flow in the described heat exchangers, the unified pressure drop mechanism shared by these different types of heat exchangers of common design should be investigated further in the future.

Disclaimer

The views and opinions expressed herein do not necessarily reflect those of the ITER organization.
Ecology of Pollen Storage in Honey Bees: Sugar Tolerant Yeast and the Aerobic Social Microbiota

Simple Summary

Historically, the storage of collected pollen by honey bees was thought to rely on microbes to enhance pollen nutrition. However, this hypothesis has found little empirical support. More recent experiments that quantified pollen storage time, microbial load relative to pollen mass, and variation in the microbiota clearly indicate that honey bees do not rely on microbial enzymes to alter the nutritional quality of collected pollen. Here, we quantified abiotic factors that suppress microbial growth in stored pollen and determined microbial abundance relative to pollen mass using both culturing and molecular assays. We found that microbial growth is quickly suppressed by added honey- and host-supplied enzymes, but that sugar-tolerant yeasts subsist longer than bacteria in stored pollen. This work contributes to our understanding of host-microbial interactions in the honey bee and highlights the aerobic social microbiota, a symbiotic and omnipresent collection of native bacteria and yeasts that dominate the social resource space of the honey bee colony and hive.

Abstract

Honey bee colonies are resource rich and densely populated, generating a constant battle to control microbial growth. Honey is relatively sterile in comparison with beebread: a food storage medium comprising pollen mixed with honey and worker head-gland secretions. Within colonies, the microbes that dominate aerobic niches are abundant throughout social resource space, including stored pollen, honey, royal jelly, and the anterior gut segments and mouthparts of both queens and workers. Here, we identify and discuss the microbial load in stored pollen associated with non-Nosema fungi (primarily yeast) and bacteria. We also measured abiotic changes associated with pollen storage and used culturing and qPCR of both fungi and bacteria to investigate changes in stored pollen microbiology by both storage time and season. Over the first week of pollen storage, pH and water availability decreased significantly. Following an initial drop in microbial abundance at day one, both yeasts and bacteria multiply rapidly during day two. Both types of microbes then decline at 3-7 days, but the highly osmotolerant yeasts persist longer than the bacteria. Based on measures of absolute abundance, bacteria and yeast are controlled by similar factors during pollen storage. This work contributes to our understanding of host-microbial interactions in the honey bee gut and colony and the effect of pollen storage on microbial growth, nutrition, and bee health.

Introduction

Although the vast majority of bee species live solitary lives, social species such as the honey bee dominate the pollination landscape. The life history transition of corbiculate bees from solitary/annual to social/perennial involved highly selective fine-tuning of microbial control and social immunity [1][2][3][4]. Two of the four primary features of eusociality, overlapping generations and cooperative brood care, are tied to complex behavioral traits that evolved under continuous pressure from both opportunistic and infectious disease [5]. However, in both solitary and social bees, microbes have been co-opted from the floral environment.

As the reservoir for both aerobic and anaerobic microbiota, beebread is 50% honey by weight [15]. Much of the preservative property of beebread is provided by the abiotic properties of honey and the addition of host enzymes.
When added to nectar, glucose oxidase secreted by the workers drives a chemical reaction that produces hydrogen peroxide and gluconic acid (glucono-lactone), increasing the free hydrogen ion concentration and lowering the pH of honey below pH 4 [31,32]. This chemical reaction requires H2O and O2 and is associated with a mechanical dehydration of nectar, a worker behavior known as bubbling: the repeated transfer of nectar from the social stomach to the mouthparts, mixing in host enzymes and exposing the viscous liquid to the atmosphere. This mechanical dehydration and chemical reaction happen quickly, before the forager even returns to the hive [33]. The most common bacteria in beebread and honey have evolved to tolerate this osmotic and oxidative stress, and grow very quickly, producing acid via fermentation of glucose and fructose. Within a week, the beebread environment becomes overly toxic and microbes either die or enter a state of enzymatic stasis [34]. Consistent with this model, a couple of studies using next-generation sequencing have revealed a wealth of microbial diversity in beebread, but microscopy and molecular work demonstrate that microbes are sparse and fungal hyphae are absent in 1-week-old beebread [8]. However, the addition of a small amount of water to beebread causes a rapid bloom of various microorganisms, primarily yeasts [35].

Monitoring Pollen Deposition and Age

Colonies maintained at apiaries in Tucson, AZ, USA were provided with additional top boxes containing empty drawn comb and wax foundations to allow beebread deposition. Based on experience, we designed our sampling effort to procure sufficient pollen cells of known age and to restrict most bees from consuming the known-age pollen before it could be sampled. Thirteen colonies were selected for monitoring of stored pollen that was to be collected and packed over the next 24 h. A frame from each colony was selected that was part of, or adjacent to, the brood area, had open brood, already contained some stored pollen, but had sufficient open space for new pollen deposition. The pattern of stored pollen present when the experiment began was scored by overlaying a transparent acrylic sheet and circling cells with pollen present. A cell was considered filled when the bottom of the cell was completely covered with pollen. On subsequent days of monitoring, a cell was considered empty if the bottom of a marked cell was visible. Colonies were scored and sampled over the course of a week. Having identified the newly deposited pollen, we used push-in cages made from hardware cloth (2 × 3 inches in size) to sequester the newly deposited pollen from further deposition. Ten or so young nurse bees were allowed to remain with the cells under the cages during this period.

Abiotic Factors Affecting Pollen Preservation

To provide environmental context for the culturing and microbial enumeration, we determined the pH and water content associated with naturally collected pollen as a function of storage time. As detailed above, we tracked stored pollen "beebread" by known storage age, measuring the pH with an electrode designed to quantify semi-solid samples (pH spear from Eutech Industries) and the water content by desiccation and subtraction. We first weighed the beebread sample and then desiccated it in a drying oven (Precision Scientific), determining the difference in weight attributable to water.
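A minimal sketch of the desiccation-and-subtraction arithmetic follows; the masses are hypothetical examples, not measured values.

# Sketch: water fraction of a beebread sample by desiccation.
def water_fraction(wet_g, dry_g):
    """Proportion of sample mass attributable to water."""
    return (wet_g - dry_g) / wet_g

print(water_fraction(wet_g=0.250, dry_g=0.195))  # -> 0.22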
We then assessed the relationship of beebread age with both pH and water content using regression analysis of log-transformed data, performed in either Sigma Plot or SAS [36].

Culturing Beebread in Spring and Summer Dearth

We examined the change in beebread microbiotas, differentiating yeasts, molds, and bacteria in beebread by number and type. To estimate the population density of each major type, we combined high-resolution light microscopy (ZEISS, Dublin, CA, USA) with standard plate counts, examining variously aged beebread. Beebread was monitored and sampled according to Anderson et al. [8], prior to packing (corbicular pollen removed from the legs of foragers) and following packing at two and six days of age, in both August of 2014 and April of 2015. In August and April, we monitored eight and thirteen colonies, respectively, to quantify microbe type and absolute abundance in beebread aged 0-7 days. To determine beebread age, the pattern of beebread on a chosen frame was recorded by overlaying a transparent acrylic sheet and circling cells with pollen present using a colored marker, as in Anderson et al. [8]. We then tracked frames daily for newly filled cells, and beebread age was marked using a variety of colors.

To quantify microbial colony-forming units (CFUs), we plated replicate samples on both plate count agar (PCA) and Sabouraud dextrose agar (SDA), media with neutral and acidic pH that support a broad spectrum of fungal growth [11]. In addition to fungi, these media support the growth of three of the most prominent co-evolved hive bacteria typically found in beebread, Apilactobacillus kunkeei, Bombella apis (previously Parasaccharibacter apium), and Fructobacillus fructosus, the major contributors to the fermentation of hive food stores, beebread, and honey [11]. We quantified microbial growth via plate counts in spring 2015 using PCA and SDA with and without two added antibiotics (chloramphenicol (12.5 µL/mL) and ceftazidime (5 µL/mL)). Beebread samples of various known ages were cored with straws and suspended in 600 µL of physiological saline (0.9% NaCl, 0.1% Tween 80, 0.1% Peptone). Triplicate plates containing antibiotics or not were produced from this initial suspension, and we used the remaining 300 µL to produce triplicate serial dilutions. As the source of beebread cultures, corbicular pollen pellets (one from each forager) were suspended in 400 µL of physiological saline, vortexed for 5 min on medium speed, and plated without dilution. After three days of growth, we scored and counted the plates. Following log transformation of the data, we used t-tests or ANOVA to compare CFU abundance across time periods, performed in either Sigma Plot or SAS [36].
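The CFU densities behind these comparisons follow the standard dilution-plating arithmetic, sketched here with hypothetical counts and volumes:

# Sketch: back-calculating microbial density from a plate count and a
# ten-fold serial dilution. The count and volumes are hypothetical.
def cfu_per_ml(colonies, plated_ml, dilution):
    """CFU per mL of the original suspension.

    dilution -- fraction of the original concentration on the plate,
    e.g. 1e-2 for the second step of a 10-fold series.
    """
    return colonies / (plated_ml * dilution)

# 84 colonies counted on 0.1 mL of a 10^-2 dilution:
print(cfu_per_ml(84, plated_ml=0.1, dilution=1e-2))  # -> 84000.0 CFU/mL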
Microscopic Identification

Four replicate plates were examined separately for each of the time periods. Following culturing, microbial colonies were picked from the petri dish and visually designated as bacteria or fungi according to their morphology under light microscopy at 1000×. Each plate was first bisected four times to create eight equal-sized areas, and 10 microbial colonies were picked and examined from each of these eight areas (n = 80 per plate). CFUs were confirmed as bacterial, mycelial, or yeast based on shape and size. Yeasts were of discernible shape, showing budding characteristics and observable nuclei under the greatest magnification (1000×). CFUs were scored as mold if mycelial structure was present under low magnification and/or if the mycelia present had characteristics such as obvious branching or septa. Bacterial CFUs were discernible only at the greatest magnification, were coccoid or rod shaped, had no observable nuclei, no budding characteristics, and were non-branching. Actinobacteria were assumed from the following collection of characteristics: mycelial-like structures observable at the greatest magnification, no nuclei or sporulation characteristics, and cells roughly the same width as typical rod-shaped bacteria. When plates contained >300 CFUs, the proportion of colonies in the same section exhibiting the same morphology was classified as the same organism based on the law of large numbers: if 30 of 300 CFUs chosen at random are all type A, this reflects a strong probability that the remaining CFUs are also type A.

Sampling Long-Term Pollen Storage

In a second, related experiment, we quantified the ratio of fungi to bacteria in beebread following eight weeks of storage over the winter months. As source material, we sampled eight healthy 8-10 frame colonies maintained just south of Tucson, AZ, USA near Santa Rita, AZ, USA. We used a sterile straw to remove six beebread cores from each of the eight colonies just prior to December 7, repeating this sampling eight weeks later (February 8). Beebread cores were sampled to represent a variety of frame locations within the hive relative to the expected center of the overwintering cluster (high/low/inside/outside). There were no available sources of pollen during this time, and the beebread present within the hive was caged with hardware cloth to prohibit worker consumption while allowing the ambient temperature and humidity produced by colony respiration.

2.5.1. Isolating Microbial DNA

DNA was isolated from pooled beebread cores following methods to concentrate and separate microbial DNA from stored pollen grains. We pooled 6-10 core samples prior to DNA extraction, then normalized to 0.25 g of stored pollen for the DNA extraction procedure. We isolated DNA according to Anderson et al. 2014 [8] with the following changes: following the addition of lysis buffer, the samples were bead-beaten with a Mini-Beadbeater-16 (BioSpec #607, Bartlesville, OK, USA) for 30 s, then moved to an ice bath for 30 s; this cycle was repeated two additional times for a total of 90 s of mechanical disruption by bead-beating. We then extracted DNA using the GeneJET Genomic DNA Purification Kit (Thermo Scientific #K0722, Waltham, MA, USA) following the protocol for Gram-positive bacteria.

Estimating Microbial Load of Long-Term Pollen Storage

We estimated the size of both the bacterial and fungal communities in beebread using degenerate primers and qPCR accompanied by a dilution series of known plasmid standards. To quantify bacteria, we first extracted total genomic DNA from non-transformed DH5α™ cells (E. coli). The 16S gene template was amplified using forward primer 27F (5'-AGAGTTTGATCCCTCAG-3') and reverse primer 1522R (5'-AAGGAGGTGATCCAGCCGCA-3'). For fungal quantification, total genomic DNA was extracted from S. cerevisiae cells. The 18S gene template was amplified using forward primer PanFungal_18S_F (5'-GGRAAACTCACCAGGTCCAG-3') and reverse primer PanFungal_18S_R (5'-GSWCTATCCCCAKCACGA-3'). This primer set does not amplify the ubiquitous microsporidian Nosema. We created plasmid vectors using Invitrogen's pCR™ 2.1 TOPO™ cloning vectors per the manufacturer's specifications. Ligated vectors were then transformed into DH5α™ cells per the manufacturer's specifications. Transformed colonies were selected and grown overnight in broth.
Cells were then pelleted, and the plasmid DNA was purified using the Thermo Scientific GeneJET Plasmid Miniprep Kit (#K0503). The mass of a single plasmid molecule was calculated per the formula provided by Applied Biosystems. An Implen NanoPhotometer P300 was used to assess the DNA concentration of the purified plasmid solution, and subsequent 10-fold serial dilutions were made. These dilutions were then used as the standards for the qPCR quantification. See Liu et al. (2012a) and Liu et al. (2012b) for additional information on the BactQuant and FungiQuant molecular assays [37,38]. Following log transformation, we performed t-tests comparing time periods and regression analysis examining the ratio of fungi to bacteria, using either Sigma Plot or SAS [36].
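The plasmid-standard arithmetic referenced above (the widely used Applied Biosystems convention) converts a measured DNA concentration into copies per microlitre; a sketch with a hypothetical concentration and construct size:

# Sketch: copies/uL of a plasmid standard from its measured concentration.
# Mass of one copy = length_bp * 650 g/mol per bp / Avogadro's number.
# The concentration and construct size below are hypothetical.
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # g/mol per base pair of double-stranded DNA

def copies_per_ul(conc_ng_ul, plasmid_bp):
    grams_per_ul = conc_ng_ul * 1e-9
    grams_per_copy = plasmid_bp * BP_MASS / AVOGADRO
    return grams_per_ul / grams_per_copy

stock = copies_per_ul(conc_ng_ul=35.0, plasmid_bp=5400)  # vector + insert
series = [stock / 10 ** i for i in range(8)]             # 10-fold standards
print(f"{stock:.2e} copies/uL")                          # ~6.0e9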
Abiotic Factors of Early Beebread Storage

We measured the pH and water content of stored pollen or "beebread" over the first seven days of storage. Beebread pH decreased significantly over the seven-day period following pollen collection by foragers (t36 = 5.9, p < 0.00001). The proportion of water available in the beebread also decreased significantly over the assessed period as a function of storage time and decreasing pH (Figure 1). We found a significant negative association of pH with water availability, explaining half of the variation in the model (adjusted R2 = 0.51, F2,10 = 12.6, p < 0.005) and indicating that the progressive water loss is significantly associated with an increase in hydrogen ion concentration.

Figure 1. Abiotic factors associated with pollen storage that mitigate microbial growth. The panel on the left represents the variation in pH of stored pollen "beebread" from seven hives tracked by beebread age. The red line is the mean, black the median, boxes are at 25% and 75%, whiskers at 5% and 95%, and dots are outliers. Beebread pH decreased significantly over the seven-day period following pollen collection by foragers. The panel on the right is a subset of beebread samples that represent variation in pH and its relationship to water availability. The proportion of water available in beebread decreased significantly as a function of storage time and decreasing pH.

Microbial Growth in Beebread

Our culturing results show an initial steep decline in abundance from corbicular pollen to 1-day-old beebread (Figure 2). From one to two days, both bacteria and fungi grew steeply and significantly to their highest levels, and then leveled off, with both fungal and bacterial numbers declining steadily from 3 to 6 days. Based on an ANOVA of log-transformed values, abundance differs significantly by time (F5,66 = 2.98, p < 0.01). Microbial load decreased significantly when comparing the peak of growth (2 days) to the values obtained after six days of storage (t22 = 2.1, p < 0.05).

Figure 2. Total microbial growth in August (primarily yeast and bacteria) from fresh corbicular pollen (CP) and beebread sampled daily for 5 days at 24 h increments. Identical samples were plated in triplicate on standard plate count agar (PCA) and Sabouraud dextrose agar (SDA). Each box plot displays the sampled variation from eight colonies. The red line is the mean, black the median, boxes are at 25% and 75%, whiskers at 5% and 95%, and dots are outliers. Abundance differs significantly by time (F5,66 = 2.98, p < 0.01).
Distinguishing Microbial Type

Despite the use of media tailored to fungal versus bacterial organisms, we observed similar growth of both bacteria and fungi on both plate count agar and Sabouraud dextrose agar, including bacterial growth on plates spiked with antibiotics (Figure 3). When the growth medium contained no antibiotics, bacterial colonies outnumbered fungal colonies in corbicular pollen and fresh beebread, but not in six-day-old beebread. Again, the abundance of both fungi and bacteria increased by an order of magnitude from corbicular pollen to 2 days of age, and then decreased significantly from 2 to 6 days of age.

Figure 3. Culture-dependent results from colony build-up during spring bloom. Using a dilution series and Sabouraud dextrose agar, we cultured fungal and bacterial growth from fresh corbicular pollen (0 days old) and pollen stored for 2 or 6 days. Box plots display variation in microbial growth across 13 colonies sampled over 6 days. The red line is the mean, black the median, boxes are at 25% and 75%, whiskers at 5% and 95%, and the dots are outliers. Samples were cultured with (+) and without (−) the addition of the antibiotics chloramphenicol and ceftazidime. Following growth, microorganisms (CFUs) were exhaustively identified as bacteria (grey) or yeast (white) using light microscopy.
Initially, bacterial blooms outnumbered fungal colonies; then, from two to six days of age, yeasts were cultured in significantly greater quantities than were bacteria (Figure 4). Yeast colonies accounted for the vast majority (>99%) of total fungal counts overall.

Long-Term Pollen Storage: FungiQuant and BactQuant

Fungi were detected at greater copy numbers than bacteria in both December and February (Figure 5), but following the transformation of copy number to cell number (CFUs), our qPCR values were roughly consistent with our culturing results. Budding yeasts are known to have 100-150 copies of rRNA genes per cell, while bacteria have far fewer (4.2 on average). Based on regression analysis of log-transformed cell count estimates (Figure 6), bacterial and fungal (yeast) loads in beebread were highly correlated both in December (R2 = 0.41, F = 31.2, p < 0.0001) and at the height of the winter forage dearth just prior to spring bloom in February (R2 = 0.39, F = 20.7, p < 0.0001).

Figure 5. Microbial abundance in beebread sampled in early December (Dec) and again on the eighth of February (Feb). Box plots display the variation in microbial abundance. The red line is the mean, black the median, boxes are at 25% and 75%, whiskers at 5% and 95%, and dots are outliers. Comparing log-transformed 16S rRNA gene copy number estimates, bacterial load decreased significantly overwinter (t80 = 4.9, p < 0.00001), but fungal load remained similar (t80 = 1.5, p = 0.14).
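The copy-to-cell conversion used above can be made explicit. A minimal sketch using the per-cell copy numbers stated in the text; the input copy counts are hypothetical:

# Sketch: converting rRNA gene copy estimates to rough cell counts.
YEAST_RRNA_COPIES = 125.0  # midpoint of the 100-150 copies/cell in the text
BACT_16S_COPIES = 4.2      # average 16S copies/cell stated in the text

def cells_from_copies(gene_copies, copies_per_cell):
    return gene_copies / copies_per_cell

# e.g. 1e7 18S copies vs 1e6 16S copies from the same beebread core:
print(f"yeast: {cells_from_copies(1e7, YEAST_RRNA_COPIES):.2e} cells")
print(f"bacteria: {cells_from_copies(1e6, BACT_16S_COPIES):.2e} cells")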
Discussion

Social resource space in a honey bee colony includes nutrient processing: stored food and larval feeding. The honey bee has evolved a variety of specialized mechanisms to cope with pathogen challenge throughout social resource space [1,39]. Our results confirm that beebread storage is associated with the rapid formation of an extremely acidophilic and xerophilic microenvironment evolved to preserve nutrition by limiting microbial growth [8,33]. Both pH and water availability drop steadily and rapidly for up to seven days post pollen packing. Quickly after collection, beebread becomes highly acidic at the interface with oxygen, partly as a result of GOX producing hydrogen peroxide and gluconic acid [40], but also through the production of organic acids by lactic acid bacteria and sugar-tolerant yeasts. Over the first seven days of storage, water availability in beebread decreases significantly below the level required by most microbial life [41].

Microbes found with prevalence and abundance throughout the nutrition-processing network include the aerobic social microbiota, pathosphere bacteria at low prevalence and abundance, and the core gut bacteria of workers and queens, all of which survive the beebread medium with variable success to be transmitted to new adult generations [42]. Based on past results and lab experience sequencing isolates from stored pollen, these bacteria are mostly Bombella apis, Fructobacillus fructosus, and Apilactobacillus kunkeei, with lesser amounts of Enterobacteriaceae and the core gut bacteria Lactobacillus firm5, Frischella, Gilliamella, and Bifidobacterium [11,43].
Consistent with previous results characterizing beebread [34,44], we found that yeasts are the dominant microbial cell type and subsist longer than lactic acid bacteria during beebread storage, a testament to their co-evolved nature. This indicates that the explosion of CFUs in the first few days of pollen storage, as recorded in previous work [8], is due in large part to the growth of yeasts, but not mold, in the pollen. This process of preservation may continue past seven days, as our trendline suggests. We identified very few filamentous fungi in early beebread with our limited culture time, but results indicate that a variety of filamentous and mycotoxin-producing fungi can be found in beebread with some regularity [45]. We found strong correspondence between our two methods of quantification, with culture-dependent results returning similar cell counts to molecular results. The vast majority of fungi cultured from beebread were yeasts, but we found a greater ratio of fungi to bacteria using the DNA-based approach. Critically, 99% of fungal CFUs that grew on SDA and PCA media were identified as yeast when examined under a microscope, suggesting that the majority of rRNA genes identified using the FungiQuant assay were also yeast. The sugar-tolerant yeasts that occur in beebread are highly specialized commensal fungi [44,46,47], and some use polyols to counteract the osmotic pressure generated by high sugar concentrations [48]. Consistent with other systems, culture-dependent methods often fail to confirm various fungi implicated by PCR of the 18S rRNA gene [49]. All recorded estimates of microbial abundance in beebread are consistent with very low microbial biomass relative to available surface area [8]. Other estimates are similar for bacterial abundance [50], but fungal (yeast) abundance throughout social resource space remains to be verified. The yeasts enumerated in this study [28,44,51] belong to the native and aerobic social microbiome [52], ubiquitous throughout social resource space including mouthparts, crops, midguts, larvae, and food stores [3,28,42,53,54]. Our findings agree with the conclusions of Tauber et al. [46,55] that yeasts (perhaps many different species) are constitutive functional members of the honey bee microbiota recycled by food storage and processing. Based on positive correlations with the core ileum bacteria, we hypothesize a system wherein symbiotic commensal yeasts are niche specialists similar to the bacterial gut symbionts [56]. Consistent with our findings, the efficient and fast-growing aerobic microbiota of beebread inhibits the growth of other less favorable fungi and bacteria, many of which are vectored from floral or water sources. Following an initial drop in microbial abundance after corbicular pollen was packed into the wax cell, we saw a rapid bloom of honey-tolerant native bacteria and yeasts, primarily aerobes. This aerobic interface represents a hostile and highly selective xerophilic and acidophilic microenvironment wherein the availability of moisture, atmospheric oxygen, and bee-supplied enzymes results in a layer of oxidative activity that kills or inhibits the growth of most microbes.
This effect is analogous to the "respiratory burst", a cellular-level immune response of mammals and invertebrates that mitigates microbial growth using a targeted release of reactive oxygen species [57].
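To make the copy-number-to-cell-number transformation described in the Results above concrete, here is a minimal Python sketch. The inputs are synthetic and the variable names are illustrative; the per-cell rRNA copy numbers are the figures quoted in the text (100-150 for budding yeasts, 4.2 on average for bacteria), with the yeast midpoint used here as an assumption.

```python
# Convert qPCR rRNA gene copy estimates into approximate cell counts.
# Assumed conversion factors from the text: budding yeasts carry roughly
# 100-150 rRNA gene copies per cell (midpoint 125 used here); bacteria 4.2.

YEAST_COPIES_PER_CELL = 125.0   # midpoint of the 100-150 range (assumption)
BACT_COPIES_PER_CELL = 4.2      # average bacterial rRNA operon copy number

def copies_to_cells(copy_number: float, copies_per_cell: float) -> float:
    """Estimate cell count from a qPCR gene-copy estimate."""
    return copy_number / copies_per_cell

# Synthetic example: fungal copies exceed bacterial copies, yet the
# inferred cell counts land on a similar scale once corrected.
fungal_copies = 2.5e6
bacterial_copies = 1.0e5

print(f"yeast cells ~ {copies_to_cells(fungal_copies, YEAST_COPIES_PER_CELL):.2e}")
print(f"bacterial cells ~ {copies_to_cells(bacterial_copies, BACT_COPIES_PER_CELL):.2e}")
```

This correction is why a large fungal-to-bacterial gap in raw copy numbers can still be "roughly consistent" with culture-based CFU counts.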
Disparities in anterior cervical discectomy and fusion provision and outcomes for cervical stenosis

Highlights
• Cervical stenosis represents a progressive pathology with great variation in trajectory.
• Anterior cervical discectomy and fusion (ACDF) may modify the course of one's disease.
• There are significant disparities in accessibility and outcomes of ACDF.
• Research and policymaking should prioritize more equitable application of ACDF.

Introduction
Degenerative cervical spine disease is the most common cause of cervical stenosis (CS), the primary contributor to cervical spinal cord and nerve root disorders [1,2]. Approximately 80-90% of adults older than 50 years demonstrate evidence of degenerative disc disease on magnetic resonance imaging (MRI) [3,4]. Patients with mild disease are often asymptomatic, but those with chronic compression-induced damage to the adjacent cord (i.e., myelopathy) or nerve roots (i.e., radiculopathy) may experience progressive deterioration of function and sensation, loss of hand dexterity, balance issues, paresthesias, lower extremity fatigue, and neck pain [2,5]. Treatment for CS varies with disease severity. The cost-effectiveness, relative safety, and validated effectiveness of anterior cervical discectomy and fusion (ACDF) have led to widespread use in symptomatic patients [6-8]. Early intervention may be valuable, as studies have found healthcare resource utilization (HRU), in the forms of pain management and outpatient charges, to be extremely high in the years leading up to ACDF [9]. Racial and sociodemographic disparities have become increasingly salient in the discussion of equitable health care and innovation that benefits all [10,11]. Socioeconomic status (SES) disparities in health care and their impact on outcomes are well established [12-14]. Multiple studies surveying the field of neurosurgery have repeatedly identified associations between SES, race, or insurance status and rates of complications, readmission, and underutilization of treatments at public hospitals [15-18]. Using the National Inpatient Sample (NIS) database, this retrospective study explores associations between sociodemographic factors and outcomes to identify the impact of social and economic determinants on access to ACDF for CS.

Data source and patient selection
The Healthcare Cost and Utilization Project's NIS, the largest publicly accessible database of all-payer inpatient data, reports data on millions of U.S. hospitalizations per year [19]. Using the International Classification of Diseases, 10th Revision (ICD-10), the NIS was queried from 2016 to 2019 for patients who were diagnosed with CS (M48.02). Of those patients, we compared those who received an ACDF (0RB3, 0RT3, 0RG1070, 0RG10A0, 0RG10J0, 0RG10K0, 0RG13170, 0RG13A0, 0RG13J0, 0RG13K0, 0RG1470, 0RG14A0, 0RG14J0) and those who did not. All patient information was anonymized to minimize any sources of potential bias and to protect confidentiality.

Data characteristics and outcome measures
Baseline demographics, including age, sex, race, insurance status, and comorbidities, were extracted from the dataset and explored in patients with myelopathy, plegia, and bowel-bladder dysfunction. Race was stratified as White, Hispanic, Black, and Asian/Pacific in concordance with patient characteristic categories in the NIS.
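The cohort selection just described can be summarized in a short sketch. This is not the authors' code: it assumes a hypothetical pandas DataFrame `nis` whose `dx_codes` and `pr_codes` columns hold each record's ICD-10 diagnosis and procedure codes (claims data typically omit the decimal point, so M48.02 appears as "M4802"), and it uses only the codes listed in the Methods above.

```python
import pandas as pd

# ICD-10-PCS codes and code prefixes for ACDF listed in the Methods.
ACDF_CODES = {
    "0RG1070", "0RG10A0", "0RG10J0", "0RG10K0", "0RG13170", "0RG13A0",
    "0RG13J0", "0RG13K0", "0RG1470", "0RG14A0", "0RG14J0",
}
ACDF_PREFIXES = ("0RB3", "0RT3")  # excision/resection code families

def is_acdf(procedure_codes: list[str]) -> bool:
    """Flag a record whose procedure list contains an ACDF-related code."""
    return any(
        code in ACDF_CODES or code.startswith(ACDF_PREFIXES)
        for code in procedure_codes
    )

# Hypothetical NIS extract: one row per hospitalization.
nis = pd.DataFrame({
    "dx_codes": [["M4802", "I10"], ["M4802"], ["M5416"]],
    "pr_codes": [["0RG1070"], [], ["0RB30ZZ"]],
})

# Keep CS diagnoses, then split into ACDF vs non-ACDF groups.
cs = nis[nis["dx_codes"].apply(lambda dx: "M4802" in dx)].copy()
cs["acdf"] = cs["pr_codes"].apply(is_acdf)
print(cs)
```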
Quartile (Q) median household income categories were extracted and standardized according to each patient's zip code. As per the Healthcare Cost and Utilization Project, Q1 is defined as the 0th to 25th percentile; Q2 is the 26th to 50th percentile; Q3 is the 51st to 75th percentile; and Q4 is the 76th to 100th median income percentile. Sociodemographic status herein encompassed payer status, median household income, and insurance coverage. Complications assessed included tracheostomy, pneumonia (PNA), acute kidney injury (AKI), sepsis, pulmonary embolism (PE), and deep vein thrombosis (DVT). Outcomes measured were total hospital charges, length of stay (LOS), and complications. Discharge disposition and prolonged LOS (> 10 days) were used to measure HRU. Frailty was derived from the 11-factor modified frailty index (mFI-11), which summates functional status, diabetes mellitus, chronic obstructive pulmonary disease, congestive heart failure, myocardial infarction, cardiac interventions, hypertension with medication, peripheral vascular disease, impaired sensorium, transient ischemic attacks without deficit, and cerebrovascular accident with neurological impairments. The mFI-11 has been previously validated in the context of spine surgery [20]; a minimal computational sketch is given below. All ICD-10 codes and indices utilized for the analysis can be found in the Supplementary Materials. Furthermore, the Elixhauser Comorbidity Index (ECI) was applied to assess comorbidities in the population and account for variation in the health baselines of various subgroups. Comorbidities within the ECI were selected for evaluation when they were present in at least 3% of the population. Elevated ECI was defined as an ECI greater than the 75th percentile [21].

Statistical analysis
Statistical analysis was conducted using SPSS Statistical Software (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 28.0. Armonk, NY: IBM Corp). Baseline characteristics were analyzed using descriptive statistical analysis. Pearson's chi-square test was used to measure odds ratios for categorical variables. Multivariate analysis was performed to analyze predictors of complications and outcomes.

Characteristics of patients with advanced disease
Of the 155,300 patients undergoing ACDF, 5,660 (3.6%) reported myelopathy and 1,605 (1%) reported plegia. 660 (0.4%) patients undergoing ACDF had bowel or bladder dysfunction. Patients greater than age 65 were more likely to present with myelopathy. When assessing the impact of socioeconomic factors, patients in the highest income quartile (Q4) demonstrated fewer associations with myelopathy and bowel-bladder dysfunction; these patients were significantly less likely to experience plegia (OR 0.515, 95% CI 0.445-0.595, p < .001). Those in the lowest quartile of median income were more likely to experience myelopathy.

Discussion
Lower SES demonstrated consistent associations with more advanced degenerative spine disease, less favorable outcomes, and increased HRU in our cohort of patients with CS undergoing anterior surgery. There were also significant disparities across races in terms of disease severity and incidence of complications. The persistence of these trends across race, SES, and insurance status suggests concerning disparities that merit attention in the continued effort towards equitable distribution of spine care.
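Picking up the forward reference in the Methods above, here is a minimal sketch of how an mFI-11 score can be computed from binary comorbidity flags. This is not the authors' code: the flag names are hypothetical stand-ins for the eleven factors listed in the text, each assumed to be coded 0/1 from a patient's records.

```python
# The mFI-11 sums 11 binary comorbidity/functional-status flags; scores are
# often reported either as the raw count (0-11) or normalized to 0-1.
MFI11_FLAGS = [
    "functional_status_dependent", "diabetes_mellitus", "copd",
    "congestive_heart_failure", "myocardial_infarction",
    "cardiac_intervention", "hypertension_on_medication",
    "peripheral_vascular_disease", "impaired_sensorium",
    "tia_without_deficit", "cva_with_deficit",
]

def mfi11(patient: dict, normalized: bool = False) -> float:
    """Sum the 11 binary flags; optionally divide by 11."""
    score = sum(int(bool(patient.get(flag, 0))) for flag in MFI11_FLAGS)
    return score / 11 if normalized else score

example = {"diabetes_mellitus": 1, "hypertension_on_medication": 1}
print(mfi11(example))                              # 2
print(round(mfi11(example, normalized=True), 3))   # 0.182
```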
While the efficacy of surgical treatment for CS has been well established, the true benefit is difficult to measure when social factors affect the trajectory of pathology and access to the procedure [22,23]. We found that African American and Hispanic patients displayed evidence of more advanced disease with progressive neurologic dysfunction. With regard to CS, one study demonstrated that African American patients routinely endured longer times to diagnosis and treatment of CS compared to Caucasian patients [24]. Similar trends have been documented in these populations with regard to other neurological pathologies [23,25]. One study found greater disease severity of multiple sclerosis in African Americans, while another study on primary spinal cord tumors found that African Americans were less likely to receive surgical intervention for the same diagnosis as non-African American patients [25,26]. Alosh et al. [27] found an approximately three-fold difference in incidence of anterior cervical spinal surgeries between white women and Hispanic women. Furthermore, our study found increased rates of complications and mortality in patients of non-white races who underwent ACDF. While these differences may be related to increased preoperative disease severity, Khan et al. [28] suggest that similar findings in African American patients undergoing spine surgery could be attributed to an increased likelihood of being treated at low-volume hospitals with fewer resources, specialists, and integrated technology. Some studies suggest such racial disparities reflect deeply ingrained societal and cultural beliefs that percolate into and manifest as differences in inter-group experiences with the health-care system [24,29]. SES also appeared to significantly impact the progression of patients' degenerative disease process, with those in the lowest income quartile often subject to the most debilitating manifestations and the most damaging medical and financial consequences. CS patients in lower SES groups may not have access to care until they have progressive symptoms. Additionally, while some surgeons may perform ACDF on mildly symptomatic patients with severe CS based on imaging, patients of lower SES, given limited resources or access to health care, may be less likely to undergo imaging that leads to detection or to receive early surgery that could slow, minimize, or reverse the advancement of the disease [30]. Studies utilizing the Distressed Community Index as a measure of SES found SES to be an independent predictor of postoperative complications, and that risk-stratifying patients based on SES improved postoperative outcomes and complication rates [31-33]. Analysis of 10,030 ACDF patients yielded positive associations between low SES, postoperative complications, and LOS [28]. Increased rates of complications, longer lengths of stay, and lower rates of routine discharge may incite a harmful feedback loop, as the potential loss of wages from inability to work may further compromise the SES of those already in an unstable position. Furthermore, the additional financial burden on those with strained resources has been associated with lower adherence to systemic therapies and poorer patient outcomes [34,35]. Relatedly, Medicaid insurance status was associated with increased total charges and overall healthcare costs.
Medicaid's relationship to SES imparts similar obstacles, such as longer times to diagnosis, increased rates of complications and neuropathies, higher surgical costs, and longer intensive care unit stays [36-38]. These obstacles may be the result of differences in referral patterns by providers based on patient race or geographic aggregation of high-quality hospitals in predominantly white neighborhoods [39]. The difficulty obtaining preoperative and postoperative care may complicate the course of treatment for ACDF [37]. Any factor that impacts access to routine health care may contribute to the higher rate of severe disease in patients within the lower SES. Lastly, one study implicates the financial repercussions as a source of implicit bias, stating that physicians who treat patients within the lower SES receive less reimbursement and face a financial disadvantage due to the higher cost of treatment [31]. This highlights the ethically complex intersection of accessible, subsidized health care and provider compensation. Altogether, it is critical to consider how a patient's outcome may be compromised by the multiplicative effects of their identities as part of multiple at-risk groups. Patients who fall within multiple subgroups associated with worse outcomes may incur a disproportionately greater burden. As such, the synergistic effects of the sociodemographic factors that affect postoperative outcomes should be integrated into risk-based assessments. For example, African American patients and Hispanic patients were found to live in the most resource-deprived areas with predominantly low-volume pituitary surgery hospitals and to be insured by Medicaid, all factors lending themselves to worse postoperative outcomes [40]. Our findings validate the layered nature of these barriers, which may deter patients within any of these populations from actively pursuing health maintenance measures and from agreeing to interventions. It has been established that the best operative outcomes are attained in patients with a symptom history of a year or less; early intervention is key [2]. As such, untimely treatment of patients creates an aging population with unchecked, worsening disease, potentially making them poorer surgical candidates. In general, older and frail patients were more likely to experience the more severe manifestations of the disease process and were less able to tolerate the stress induced by ACDF surgeries. Therefore, the value of preventive neurosurgery becomes paramount not only for optimizing patients' outcomes but also for minimizing unnecessary healthcare expenditure, as proactive care may curtail the economic burden of complicated inpatient postoperative recoveries [41]. Risk-based assessments should follow a synergistic model to provide individualized care within a biopsychosocial model.

Limitations
As is the case with most retrospective studies, it is only possible to derive correlations between the findings as opposed to definitive causal relationships. Temporality is difficult to establish, and the NIS does not capture the arc of future follow-up interactions for patients. Furthermore, database analyses are subject to inter-hospital variation due to individual differences in coding and identification of patients. It is also difficult to account for the nuanced subtleties and biases present on an individual provider level that may contribute to the perceived phenomena.
Likely, there are many pre-existing conditions and sociodemographic factors not captured in our analyses that may combine to impact patients' overall health. Also, some variables may serve only as limited proxies for the metric of interest; for example, median income based on a patient's zip code may not completely reflect a patient's financial status. Nonetheless, we believe our analyses capture several factors of prime importance of which providers should be aware when considering patients as candidates for ACDF.

Conclusion
Due to the variety of environmental and sociodemographic factors that influence health, accessibility, and affordability of health care, certain subpopulations are more likely to benefit optimally from ACDF, while others may experience a significant delay in care that markedly impacts their quality of life. Certain populations face a greater risk of serious adverse outcomes such as sepsis, tracheostomy, and AKI that not only affect the trajectory of recovery but also greatly increase HRU and decrease patient quality of life and satisfaction. These disparities call for active systemic solutions that may increase the equitable distribution of neurosurgical care and promote early intervention for prevention of disease progression.

Declarations of Competing Interests
None of the authors report any relevant disclosures or conflicts of interest.

Funding
No funding was necessary for the completion of this work.

Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.xnsj.2023.100217.
Trends in Diabetes Mortality in Urban and Rural China, 1987–2019: A Joinpoint Regression Analysis

Purpose
Diabetes mellitus is emerging as an epidemic worldwide, and the incidence and prevalence of diabetes have drastically changed in China over the past 30 years, but data on its mortality rate are scarce. This study aimed to analyze the time trends of mortality rates among patients with diabetes in the rural and urban population in China between 1987 and 2019.

Methods
The research data come from China's annual report on national health statistics and the Chinese Health Statistics Yearbook. Age-standardized mortality rates were calculated by using the direct method based on the World Standard Population from the WHO. Joinpoint regression analysis was employed to estimate the annual percent change and average annual percentage changes of mortality rates of diabetes mellitus.

Results
An overall trend toward increasing diabetes mortality was observed. The crude mortality rates and age-standardized mortality rates of diabetes for urban and rural residents in China showed a significant increasing trend between 1987 and 2019. Mortality due to diabetes in urban areas has been higher than in rural areas for 30 years. However, due to the rapid increase of rural diabetes mortality in the past decade, the gap between the two gradually narrowed. The age-standardized mortality rates of diabetes increased by about 38.5% in urban areas and 254.9% in rural areas over the whole study period. In addition, the age-standardized mortality rate of females with diabetes was higher than that of males, but this pattern began to change in urban areas in 2012. Finally, the age-standardized mortality rates in the elderly population in China are higher and growing faster, especially in rural areas.

Conclusion
The mortality rate of diabetes is on the rise in China. The rapid growth of the mortality rate of diabetes in rural areas leads to the reduction of the urban–rural gap. Male mortality rates in urban areas have surpassed those of women. At the same time, diabetes mortality is markedly concentrated in older age groups. As China's population ages, the burden of death and disability caused by diabetes and its complications will continue to increase. These results indicate that diabetes has become a significant public health problem in China. Such an effect increases the demand for strategies aimed at the prevention and treatment of diabetes mellitus. In addition to the prevention and intervention of diabetes in high-risk groups, it is also necessary to establish diabetes screening networks to identify patients with mild symptoms. Early detection and timely intervention can effectively reduce the incidence and mortality of diabetes.

INTRODUCTION
Diabetes mellitus (DM) is a severe chronic disease that occurs either when the pancreas does not produce enough insulin (a hormone that regulates blood glucose) or when the body cannot effectively use the insulin it produces (1). Persistent hyperglycemia and long-term metabolic disorder may lead to systemic organ damage, dysfunction, and failure (2). DM is also a known risk factor for blindness, cerebrovascular diseases, renal failure, and limb amputations (3). Diabetes of all types can lead to complications in many parts of the human body and increase the overall risk of premature death. DM is emerging as an epidemic all over the world (4). In 2019, the International Diabetes Federation (IDF) reported that 463 million adults worldwide suffer from diabetes.
If effective preventive measures are not taken, that number is expected to increase to 700 million by 2045 (5). According to the latest data released by the IDF, about 6.7 million adults would die of diabetes and its complications in 2021, equivalent to one death every 5 s. The epidemic of diabetes is one of the most alarming public health issues of the 21st century, especially for lower-middle-income countries (6). Compared with high-income countries, the prevalence of diabetes in middle-income and low-income countries develops faster (7). It was predicted that from 2010 to 2030, there would be a 67% increase in the prevalence of diabetes in these countries (8). Although from a global perspective the overall health level has improved and life expectancy has increased, diabetes is still the second largest factor affecting life expectancy (9). There was also an increase in both type 1 and type 2 diabetes prevalence among children and adolescents (10). As the largest developing country globally, China currently has an enormous number of diabetic patients. With a large population, China has 35.5 million people who are 65 years of age or older, and this number is expected to increase to 54.3 million by 2030 (5). With the deepening of aging in China, the disease burden and problems caused by DM will be further expanded. The primary purpose of this study was to examine the trends of temporal changes in the annual mortality rate of DM in China during 1987-2019, using the joinpoint regression model developed by Kim et al. (11), which is well suited to fitting long-term disease data containing multiple trend segments. We also compared the disparity of mortality rates between the urban and rural areas, which would be useful for preventing and controlling DM in the future.

Data Sources
The DM cases were defined by the International Classification of Diseases (ICD) codes: ICD-9 (249-250) for data collected before 2001 and ICD-10 (E10-E14) for data from 2001 onwards. The mortality data in urban and rural populations were derived from the Chinese Health Statistical Annual Report (1988-2002). Data were available by gender and age groups, from 0, 1-4, 5-9 to 80-84 and up to 85+ years. The trends in age-specific mortality of DM before 15 years old were not analyzed owing to the extremely low mortality.

Trends in Mortality Rates
Mortality and population data were organized into 5-year age groups, up to 85+ years, to correspond with age categories used in the WHO World Standard Population (WSP) as reference (12). We calculated age-standardized mortality rates (ASMRs) per 100,000 with 95% CI for each study year (1987-2019) using the direct method, based on the WSP and annual age-specific crude mortality rates (CMRs). To calculate the annual variation in mortality rates and identify significant change points, we performed joinpoint regression analysis using the Joinpoint Regression Program from the Surveillance Research Program of the National Cancer Institute, Version 4.9.0.0 (Statistical Research and Applications Branch, National Cancer Institute, USA). Joinpoint analysis identifies the best fit for inflection points ("joinpoints") at which there is a significant change in trends using a series of permutation tests, with Bonferroni adjustment for multiple comparisons (13). In this study, joinpoint analysis was used to identify years (as the independent variable) with significant changes in mortality rate over the study period and the size of these changes (as the percentage change in rate per year).
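To make the direct-standardization and percent-change calculations above concrete, here is a minimal Python sketch. All rates and standard-population weights are synthetic illustrations, not the study's data; in particular, the two-group "standard population" stands in for the full WSP age schedule.

```python
import numpy as np

# Direct age standardization: weight age-specific mortality rates by a
# fixed standard population so rates are comparable across years/regions.
def asmr(age_specific_rates, standard_pop):
    """Age-standardized mortality rate per 100,000 (direct method)."""
    weights = np.asarray(standard_pop) / np.sum(standard_pop)
    return float(np.sum(np.asarray(age_specific_rates) * weights))

# Synthetic example: two age groups, rates per 100,000.
rates_1987 = [5.0, 60.0]       # younger, older
rates_2019 = [4.0, 110.0]
std_pop = [70_000, 30_000]     # placeholder standard-population counts

a87, a19 = asmr(rates_1987, std_pop), asmr(rates_2019, std_pop)

# Joinpoint-style annual percent change within one trend segment comes from
# a log-linear fit: log(rate) = a + b*year, with APC = 100*(exp(b) - 1).
years = np.arange(1987, 2020)
log_rates = np.log(np.linspace(a87, a19, years.size))  # synthetic series
b = np.polyfit(years, log_rates, 1)[0]
print(f"ASMR 1987: {a87:.1f}, ASMR 2019: {a19:.1f}, APC: {100*(np.exp(b)-1):.2f}%/yr")
```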
Using a natural log-linear model enables the analysis of a constant percentage change in rate over time. We allowed up to five joinpoints by utilizing a Monte Carlo permutation method and evaluated whether each segment differed from no change using a z-test, with a p-value of less than 0.05 considered statistically significant.

Trend of Crude Mortality and Age-Standardized Mortality of Diabetes Mellitus in Urban and Rural Areas of China
As shown in Figure 1, the CMRs of DM (DM-CMRs) in urban and rural areas of China show an overall upward trend during the period of interest, but there are certain fluctuations in urban mortality rates. The DM-CMR in urban areas is higher than that in rural areas, and that in females is higher than in males. From the perspective of gender differences, the gap in DM-CMR in urban areas has been decreasing in recent years, while that in rural areas is expanding. The ASMRs of DM (DM-ASMRs) for each year in urban males and females, and rural males and females, between 1987 and 2019 are shown in Figure 2. Continuously increasing trends in ASMR were observed in both rural males and females, while fluctuations occurred in urban areas. The ASMR of DM in the urban population is generally higher than that in the rural population during the whole study period, but the gap is gradually narrowing. Most of the time, the ASMR of females in urban areas was higher than that of males, but this situation changed after 2012. This indicates that the mortality rate of DM among men in urban areas of China has increased rapidly in the past decade. At the same time, the gender gap in ASMR in rural areas has become smaller. The CMR and ASMR of DM at the beginning (1987) and the end (2019) of the study period are shown in Table 1, followed by the average annual percentage changes (AAPCs) during the 33 years and the annual percentage change (APC) for each sub-period. The joinpoint analysis indicated that from 1987 to 2019, the CMR of DM in both urban and rural areas of China showed a significant growth trend, with average annual growth rates of 3.5% (95% CI 1.8-5.2) and 6.4% (95% CI 4.1-8.7), respectively. The yearly mortality of diabetes significantly increased since 1987 by +4.3% (95% CI 2.6-6.0) in the urban male population and by +2.8% (95% CI 1.2-4.4) in urban females. Meanwhile, the average annual growth rate of DM mortality in males and females in rural China is about 6.4% (95% CI 4.1-8.7), much higher than that in urban areas. The ASMR of diabetes significantly increased since 1987 by +4.3% (95% CI 1.9-6.8) yearly in rural areas, and two joinpoints were identified for the rural population (a significant increase of 6.40% (p < 0.05) from 1997 to 2002 and an insignificant decrease from 2002 to 2012, followed by a considerable increase of 3.94% (p < 0.05) from 2012 onwards).

Trends in Age-Specific Mortality Rate
Across the entire period, the DM-ASMR for adults aged 40 to 45 years showed no significant change in rural and urban China but varied among other age groups. Further analysis found that the DM-ASMR for females decreased in the age groups 30-35 and 35-40, by −3.2% (95% CI −4.8, −1.5) and −2.9% (95% CI −3.9, −1.9) per year, respectively. However, in those aged 55 years or older, the ASMR of DM has shown a significant increasing trend.
The ASMR of urban males increased by +3.6% (95% CI 2.5, 4.6) yearly in the age group 60-65, +4.5% (95% CI 3.1, 6.0) yearly in the age group 65-70, +4.8% (95% CI 3.5, 6.0) yearly in the age group 70-75, +4.5% (95% CI 3.5, 5.5) yearly in the age group 75-80, +5.4% (95% CI 4.0, 6.7) yearly in the age group 80-85, and +5.6% (95% CI 5.3, 7.8) yearly in the age group 85+. The pattern of the DM mortality trend for women in rural areas is similar to that of men; however, the significant increase in mortality rate occurred earlier and proceeded faster (Table 2). As shown in Figure 3, over the study period the elderly group (75+) showed a significant increase in DM-ASMR, and in rural areas this increase extended to ages as young as 55. This indicates that the mortality rate of diabetes in the Chinese elderly population has been increasing over the past several decades, and the situation in rural areas is more severe.

DISCUSSION
In this study, we observed a significant increase in diabetes mortality in China over the past three decades. The DM-CMR increased by about 170% in urban areas and 560% in rural areas between 1987 and 2019. At the same time, the DM-ASMR increased by 38.5% and 254.9%, respectively. Second, mortality due to diabetes in urban areas has been higher than in rural areas for 30 years. However, in the past decade, the gap between the two gradually narrowed due to the rapid increase of rural diabetes mortality. In addition, the mortality rate of females with diabetes is higher than that of males, but this pattern began to change slightly in 2007 in urban areas. In other words, the male mortality rate has overtaken the female rate, and this change gradually became significant in 2012. Finally, China's elderly DM mortality has shown a clear upward trend over the whole period of interest. The overall increasing trend of diabetes mortality in China is consistent with international comparative studies on diabetes mortality patterns. Most developed countries have shown a decline in diabetes mortality in the last three decades: the United Kingdom, Germany, Nordic countries, France, and Malta (14). In contrast, a rise in diabetes mortality was recorded in most developing countries: Armenia, Bosnia and Herzegovina, Georgia, and Estonia (15). The reduction in diabetes mortality in western countries may be attributable to the availability and use of improved treatment and management, as well as reductions in major risk factors, while the increased mortality in developing countries probably reflects lifestyle changes that accompany industrialization, including increased consumption of animal fat, obesity, and physical inactivity (16)(17)(18)(19). With the development of the economy and the improvement of living standards, the population's spectrum of disease and death has undergone tremendous changes. The cost of diabetes surpasses that of other chronic diseases, posing a heavy economic burden to the current healthcare system as well as the general welfare of each family (20). The occurrence of diabetes results from many factors such as heredity and social environment. Hypertension, hyperlipidemia, overweight, obesity, smoking and drinking, and changes in social environment factors will increase the probability of diabetes (15,(21)(22)(23)(24). Diabetes has been implicated in all-cause mortality, especially in deaths related to cardiovascular and cerebrovascular disease (25,26).
Additionally, diabetes is often associated with premature deaths from non-communicable diseases (27) as well as communicable diseases, including SARS (28), H1N1 (29), and COVID-19 (30). Among adults in China, diabetes was associated with increased mortality from a range of cardiovascular and non-cardiovascular diseases (31). The obesity epidemic caused by changes in people's lifestyle and diet is considered to be an important reason for the aggravation of the diabetes epidemic in China, and it has been proven to be one of the critical factors leading to the onset of diabetes (32)(33)(34)(35). Over the past few decades, China's rapid social and economic development has brought changes in lifestyle, reduced exercise time, longer sitting time, poor dietary structure, and increased life and work pressure, driving an increasing prevalence of obesity. According to data from the China Health and Nutrition Survey (CHNS), the obesity rate rose from 4.0% in 1993 to 10.7% in 2009 (36), and the latest national prevalence estimates for 2015-2019, based on Chinese criteria, were 34.3% for overweight and 16.4% for obesity in adults (≥18 years) (37). Dietary and healthcare interventions for lowering the prevalence of diabetes are needed in both urban and rural China. Intensive glycemic control is beneficial to the treatment of diabetic microvascular complications and can reduce mortality (38). Therefore, it is necessary to strengthen the individual and social management of diabetes. Compared with the mortality rate in rural areas of China, the urban mortality rate has been significantly higher since 1987. This may be related to higher socioeconomic status, higher living standards, better medical resources, and a higher diagnosis rate (20). However, the annual growth rate of diabetes mortality in rural residents was higher than that in urban areas in recent years, and the gap between the two gradually narrowed. Similarly, a study of the trends in diabetes mortality in China from 2003 to 2012 indicated that the diabetes mortality rate was generally higher in urban areas than in rural areas, but the gap was narrowing (39). Studies have pointed out that the level and accessibility of regional medical services have a substantial impact on diabetes mortality (40). Although the medical service and medical insurance systems in rural areas of China have been established in parallel with their urban counterparts (41), their capacity is still substantially weaker than that in urban areas (42), leading to poorer corresponding treatment and management of diabetes. Furthermore, socioeconomic status is also an essential factor affecting the prevalence of diabetes and its complications (43). People in rural areas have relatively low socioeconomic status, which is a potential risk factor. Accelerating urbanization could also be one reason (39), as it could lead to changes in lifestyle and reduced physical activity among rural residents. In addition, studies have shown that education affects health awareness, with higher levels of education associated with higher levels of health awareness, which is related to greater access to health services and a lower risk of mortality (20). The lower level of education in rural areas may lead to poorer awareness of diabetes prevention and control. Our study also found gender differences in the pattern of time trends in DM mortality.
In general, whether in rural or urban areas, the mortality rate of DM among females is higher than that of males, but the pattern began to reverse in urban areas in 2007 and became significant in 2012. Studies on gender comparisons of diabetes mortality in China found that the mortality rate of females was generally higher than that of males, which may be related to the influence of sex hormones or other physiological and biochemical factors in females (44). Increased social and competitive pressures may also exacerbate unhealthy lifestyles such as smoking and alcohol consumption among men, thus increasing the risk of death due to diabetes (20). The results of this study are also supported by many relevant domestic research results. An analysis of the DM mortality rate in Gusu District, Suzhou, Jiangsu province, from 2011 to 2019, showed that males had higher crude and age-standardized diabetes mortality rates than females, which may be related to the prevalence of chronic disease risk factors such as overweight, obesity, alcohol consumption, smoking, and insufficient intake of vegetables and fruits in males. The average annual growth rate of the probability of premature death due to diabetes in males is 2.6 times that in females, suggesting that the risk of death due to diabetes in young males is higher than that in females (44). A study on the disease burden of DM in Guangzhou, China, from 2017 to 2019 shows that the loss from DM among male and female residents is 3.92 and 3.12 disability-adjusted life years (DALYs) per 1,000 population, respectively (45), and some studies attributed the gender difference in diabetes mortality to the difference in the age structure of the male and female populations (46). A gender-disaggregated analysis of DM mortality in Wujin District of Changzhou city from 2009 to 2019 showed that non-demographic factors contributed more to diabetes mortality in males, while demographic factors contributed more to diabetes mortality in females (46). From the perspective of age-specific diabetes mortality, China's DM mortality in the elderly population has shown a clear upward trend over the whole period of interest. Diabetes is a typical disease of the elderly, and the age effect has been identified as a very important risk factor for diabetes mortality (39). A study on the prevalence of diabetes among residents aged 15 and above in Chongqing, China, in 2018, shows that the prevalence of diabetes increases with age (47), which leads to a corresponding increase in the death rate from DM. One of the main reasons why diabetes is more common in the elderly is that the ability to use glucose is reduced due to functional deterioration. As the aging of the Chinese population further deepens, the problem will become more prominent. Diabetes is one of the leading causes of blindness, amputation, heart disease, renal failure, and premature death in the elderly (48)(49)(50)(51)(52). For older adults with DM, preventing diabetic complications is the top priority. This requires comprehensive management of elderly diabetic patients, including the control of multiple risk factors such as hyperglycemia, hypertension, dyslipidemia, overweight and obesity, and hypercoagulability, and necessary drug treatment based on lifestyle intervention. At the same time, it is also necessary to set individualized control goals according to the patient's age, disease severity, life expectancy, the severity of complications or comorbidities, etc.
In addition, we should focus on strengthening the diabetes management of older women to control the rapid increase in their diabetes mortality.

Strengths and Limitations
A strength of the present study is that the available data span an exceptionally long period. Although the government has publicly published relevant data after 2003, the data before 2003 are not publicly available; relevant government departments authorized this research to use data from earlier years for analysis. To our knowledge, this study covers the longest period used to quantify trends in DM mortality, which better reflects the long-term pattern of diabetes deaths in China. In addition, we also analyzed the heterogeneity of diabetes deaths in China between urban and rural areas and between different genders. The potential limitations of this study include the following. First, during the study period, the classification system for coding the cause of death changed from ICD-9 to ICD-10. However, previous studies of comparability between ICD-9 and ICD-10 observed only slight differences in coding definitions, which did not materially distort the number of deaths due to DM (53). China first adopted the ICD-10 international disease classification statistical standard in 2002, and some urban and rural cause-of-death statistics stations did not submit ICD-10 cause-of-death annual reports at that time, so reported mortality was relatively low and the classification of some causes of death was inaccurate. This situation gradually improved, and the mortality rate became more accurate around 2007. The decline in overall mortality in the years after 2002 is related to this, but it does not affect the overall trend analysis or the main conclusions of this study. Second, the joinpoint regression model cannot account for covariates or influencing factors of the outcome variables. In addition, the micro-data of DM cases are inaccessible, limiting analytical flexibility, such as calculating an overall mortality rate across urban and rural areas. Despite these limitations, this study helped to elucidate diabetes death trends in China, which still need to be clarified in analytical epidemiological studies in the future.

CONCLUSIONS
The mortality rate of diabetes is on the rise in China. The rapid growth of the mortality rate of diabetes in rural areas leads to a shrinking urban-rural gap. Male mortality rates in urban areas have surpassed those of women. The elderly group sees a higher mortality rate of diabetes. As China's population ages, the burden of death and disability caused by diabetes and its complications will continue to increase. More prevention and intervention policies are needed, including establishing a monitoring system for identifying early DM patients, strengthening intervention and rehabilitation for diabetic patients, and advocating a healthier lifestyle.

DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. These data can be found here: https://data.cnki.net/area/Yearbook/Single/N2020020200?z=D09.

Copyright © 2022 Su, Wang, Dong, Hu, Xu, Peng, Wang and Zheng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Effect of Oxidative Stress on Diaphragm Dysfunction and Exercise Intervention in Chronic Obstructive Pulmonary Disease

Chronic obstructive pulmonary disease (COPD) can cause extrapulmonary injury such as diaphragm dysfunction. Oxidative stress is one of the main factors causing diaphragm dysfunction in COPD. Exercise plays a positive role in the prevention and treatment of diaphragm dysfunction in COPD, and the changes in diaphragm structure and function induced by exercise are closely related to the regulation of oxidative stress. Therefore, on the basis of a review of oxidative stress and the changes in diaphragm structure and function in COPD, this article analyzed the effects of exercise on oxidative stress and diaphragm dysfunction in COPD and explored the possible mechanisms by which exercise improves oxidative stress. Studies have found that diaphragm dysfunction in COPD includes declines in muscle strength, endurance, and activity. Oxidative stress mainly affects the structure and function of the diaphragm in COPD through protein oxidation, protease activation, and reduced calcium sensitivity. The effects of exercise on oxidative stress levels and diaphragm dysfunction may differ depending on the intensity, duration, and style of exercise. The mechanisms by which exercise modulates oxidative stress in the COPD diaphragm may include improved antioxidant capacity, reduced oxidase activity, and improved mitochondrial function.

INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is a chronic respiratory disease that progresses slowly. It is characterized by largely irreversible airflow obstruction, is often related to smoking, and can lead to chronic respiratory failure. With the aging of the world population and the high rates of smoking in Asia, COPD has become an increasingly serious problem in the twenty-first century (Raherison and Girodet, 2009; López-Campos et al., 2016). Respiratory muscle dysfunction is the main feature of acute and chronic respiratory failure in COPD. Dysfunction of the diaphragm, the main respiratory muscle, exists in all stages of COPD (Donaldson et al., 2012). Studies have shown that diaphragm dysfunction is associated with increased risk of hospitalization due to acute exacerbation of COPD, and inspiratory muscle weakness due to diaphragm dysfunction is associated with dyspnea, hypercapnia, respiratory failure, and premature death (Caron et al., 2009; Vilaró et al., 2010). Diaphragm dysfunction in COPD manifests in both structure and function, including sarcomere disruption, a loss of strength, and an increased susceptibility to failure in the face of a particular load (Orozco-Levi et al., 2001; Orozco-Levi, 2003). It is caused by different local and systemic factors. Pulmonary hyperinflation and the increased work of breathing that occur in COPD are the main contributing factors to diaphragm dysfunction. In addition, systemic factors include inflammation, oxidative stress, hypoxia, and exacerbations. These factors can also influence the function of the diaphragm by inducing modifications in its local microenvironment. Oxidative stress affects the contractility and fatigability of the diaphragm and is one of the important mechanisms of diaphragm dysfunction in COPD (Langen et al., 2003; Gayan-Ramirez and Decramer, 2013; Gea et al., 2013).
The structural changes caused by oxidative stress can weaken the strength of the diaphragm and promote muscle injury, and the increased production of free radicals in the diaphragm is related to respiratory failure caused by resistive breathing (Borzone et al., 1994; Klimathianaki et al., 2011). Therefore, studying the effects of oxidative stress on diaphragm function in COPD patients is beneficial for reducing dysfunction and improving the quality of life and survival rate of these patients. Exercise training can improve diaphragm function, possibly by improving the oxidative capacity and efficiency of skeletal muscle, which decreases alveolar ventilation and dynamic hyperinflation and thus reduces diaphragm load (Nici et al., 2006). COPD is a systemic disease (Singh et al., 2019). Previous studies have found that inspiratory muscle training as a supplement to pulmonary rehabilitation can improve inspiratory muscle strength, but it does not provide additional benefits in exercise ability, quality of life, or dyspnea (Schultz et al., 2018). Therefore, compared with respiratory training, exercise training may have a better effect on the symptoms and function of COPD (Emtner and Wadell, 2016). Exercise training is undoubtedly the best choice to improve the muscle quality, function, and quality of life of patients with COPD (Emtner and Wadell, 2016; Paneroni et al., 2017). Previous studies have found that exercise training exerts a certain regulatory effect on the oxidative stress level of the COPD diaphragm (Bowen et al., 2017a). However, the regulatory mechanism of exercise intervention on diaphragm dysfunction and oxidative stress in COPD patients remains unclear. Therefore, this study explored the relationship between oxidative stress and diaphragm dysfunction in COPD and reviewed the effects of exercise on oxidative stress in the COPD diaphragm and its possible mechanisms, so as to provide theoretical guidance and new ideas for exercise rehabilitation of diaphragm dysfunction in COPD.

DIAPHRAGM DYSFUNCTION IN COPD
In COPD, many different factors have beneficial or harmful effects on the structure and function of the diaphragm, and the positive adaptive changes of the diaphragm at least partially offset the harmful influences (Gea et al., 2013). The following sections discuss the diaphragm in COPD in terms of structure and function.

Structural Changes of the Diaphragm in COPD
Among the positive adaptive changes of the diaphragm in COPD, the fiber type shifts toward type I fibers, which have a higher oxidative capacity. In previous studies, the proportion of slow-twitch fibers in the diaphragm was significantly higher than that in the control group, especially in moderate to severe patients (Levine et al., 1997; Ottenheijm et al., 2008; Barreiro et al., 2019). Endurance training increases the mitochondrial capacity of the diaphragm in COPD, and capillary proliferation is a late process in the adaptation (Ribera et al., 2003; Doucet et al., 2004). A shortening in the length of sarcomeres is another part of the positive adaptive change of the muscle (Orozco-Levi et al., 1999). Some of these adaptations appear to be proportional to the severity of airway obstruction, pulmonary hyperinflation, or air trapping in COPD. Thus, with increasing severity of COPD, these alterations may help the diaphragm cope with a higher workload associated with increased airflow limitation (Orozco-Levi et al., 1999; Wijnhoven et al., 2006).
Among the harmful effects of COPD on the diaphragm, muscle atrophy is one of the causes of diaphragm dysfunction. Studies have shown that in patients with severe COPD, the cross-sectional area of all types of muscle fibers is reduced by 40-60% compared with non-COPD controls and that apoptosis and atrophy of the diaphragm induced by myostatin overexpression can lead to diaphragm weakness and respiratory muscle dysfunction (Levine et al., 1997; Zhou et al., 2018). In addition, changes in lung morphology in COPD patients lead to diaphragm dysfunction. Chronic and dynamic hyperinflation shortens the diaphragm far from its optimal length, which significantly affects its ability to produce force (Klimathianaki et al., 2011; Gayan-Ramirez and Decramer, 2013). The dome-like shape of the diaphragm is also crucial to its function. However, in patients with acute exacerbation of COPD, the lung is significantly affected biomechanically and bears an increased load, lowering the diaphragm until it is almost flat between the chest and the abdomen (Bruells and Marx, 2018), which may change its shape. Finally, type I muscle fibers produce less force than type II muscle fibers, which may result in lower diaphragm muscle strength in mild to severe COPD patients than in healthy controls (Levine et al., 2003; Ottenheijm et al., 2005).

Functional Changes of the Diaphragm in COPD
In terms of function, the increase of type I fibers, mitochondrial volume, and capillary number changes the energy metabolism of the diaphragm from glycolysis to oxidation (Gosker et al., 2000), resulting in increased aerobic capacity. Changes in the length of sarcomeres would partially counterbalance the negative effects induced by the displacement of the diaphragm length-tension curve on diaphragmatic force in patients with COPD (Gea et al., 2013). The adaptive mechanism mainly involves long-term exposure of the diaphragm to inspiratory load, making it necessary to remain active throughout the patient's life, which mimics muscle training (Barreiro and Gea, 2015). Therefore, the adaptive change of the diaphragm positively affects its strength and endurance. However, in COPD patients, the strength and endurance of the diaphragm often decrease despite positive adaptive changes that make it more resistant to fatigue. Atrophy, length, and shape changes decrease the strength and endurance of the diaphragm. In addition, diaphragm dysfunction in COPD also includes reduced diaphragmatic mobility (Gonçalves et al., 2018). Studies have shown that diaphragmatic mobility seems to be associated with airway obstruction, pulmonary hyperinflation, ventilatory capacity, and feeling of breathlessness (Rocha et al., 2017). Another study found that the reduction in diaphragmatic mobility in COPD patients was not influenced by respiratory muscle strength or pulmonary hyperinflation (Dos Santos Yamaguti et al., 2008). This discrepancy may reflect different sample sizes and different patient positions during testing. The loss of diaphragmatic mobility appears to be a decisive factor in decreased exercise tolerance and increased dyspnea in COPD patients (Paulin et al., 2007). In stable COPD, diaphragmatic injury and adaptation achieve a special balance (Barreiro and Gea, 2015). From a clinical point of view, this special balance seems to provide sufficient lung ventilation for the survival of patients.
However, the positive adaptive changes of the diaphragm are not sufficient to restore normal muscle strength and endurance; thus, this special balance leads to reduced diaphragm function, loss of muscle mass, and increased susceptibility to fatigue and injury (Orozco-Levi, 2003). Similowski et al. (1991) found that at the same lung volume, the diaphragm function of patients with stable COPD was equivalent to that of normal individuals, likely because compensatory adaptations counterbalance the harmful effects of hyperinflation on diaphragm contractility and inspiratory function. The predominance of factors such as exacerbations, nutritional abnormalities, and aging would tip the balance toward diaphragm damage. For example, in advanced COPD, the harmful effects caused by injury, oxidative stress, and enhanced proteolysis will prevail over the beneficial effects, thus leading to respiratory failure and eventual death of the patients (Barreiro and Gea, 2015). During periods of acute-on-chronic hyperinflation in COPD, the ability of the diaphragm to generate transdiaphragmatic pressure is reduced, and these changes are exaggerated (Polkey et al., 1996). In conclusion, there is a special balance between the damage and adaptive changes of diaphragm structure and function in COPD. Although the positive adaptive changes of the diaphragm in COPD can partially offset its dysfunction, the muscle strength, endurance, and activity of the diaphragm still show a downward trend. The mechanism of diaphragm injury is very complex, and oxidative stress is one of the important mechanisms.

OXIDATIVE STRESS PROMOTES DIAPHRAGM DYSFUNCTION IN COPD
The production of reactive oxygen species (ROS) is critical to diaphragm function. However, oxidative stress occurs when antioxidants are insufficient to resist the formation of free radicals. The mechanism by which oxidative stress affects diaphragm structure and function in COPD is very complex. Oxidative stress can activate diaphragmatic proteinases, increase protein oxidation, and decrease calcium sensitivity in COPD.

Oxidative Stress Activates Diaphragmatic Protease in COPD
Oxidative stress is one of the factors that activate the protease pathway. The ubiquitin-proteasome system (UPS) is the main pathway of intracellular protein degradation. UPS plays a crucial role in the cascade that leads to the degradation of contractile proteins, thereby promoting the development of muscle atrophy, which constitutes the first step in the pathogenesis of diaphragmatic weakness (Ottenheijm et al., 2007; Debigaré et al., 2010). The activity of the UPS in the COPD diaphragm is enhanced (Ottenheijm et al., 2006), and oxidative stress may promote the activation of this pathway. Pomiès et al. (2016) showed that oxidative stress was involved in the in vitro atrophy of COPD skeletal muscle cells through the UPS signaling pathway. Mild oxidative stress can also increase protein degradation in the diaphragm by increasing the expression of the major components of the UPS (Gomes-Marcondes and Tisdale, 2002). However, this pathway can only hydrolyze peptide chains, not intact myofibrils. The calpain system can hydrolyze the skeletal proteins in the myofibril, releasing myofilaments that the UPS then degrades (Liu et al., 2017). In the COPD diaphragm, the expression of the calpain system is increased.
Oxidative stress may increase the expression of calpain in the diaphragm, induce calpain activation, and then lead to atrophy of diaphragm fibers and contractile dysfunction (Whidden et al., 2010). In conclusion, oxidative stress activates diaphragmatic proteinases in COPD, resulting in increased protein degradation, diaphragmatic atrophy, and dysfunction.

Oxidative Stress Increases Diaphragmatic Protein Oxidation in COPD

Oxidatively damaged proteins may misfold, and these misfolded proteins may form insoluble aggregates, posing a serious threat to cells (Kriegenburg et al., 2011). Barreiro et al. provided the first evidence that ROS and reactive nitrogen species oxidatively modify diaphragmatic proteins in smokers and in animals exposed to cigarette smoke for a long time. Oxidative damage to diaphragmatic proteins may lead to muscle loss and dysfunction in smokers and COPD patients, and the level of diaphragmatic protein oxidation is significantly negatively correlated with muscle strength (Ottenheijm et al., 2008; Barreiro et al., 2010). In severe COPD patients, oxidation of diaphragmatic proteins involved in energy production [creatine kinase (CK)] and contractile function [myosin heavy chain (MyHC)] may partly explain the decrease in diaphragmatic strength and diaphragm dysfunction (Marin-Corral et al., 2009). CK has been shown to be a main target of ROS exposure in vitro and in vivo. Oxidative modification of this muscle protein may have a negative impact on CK activity, leading to enzyme inactivation; moreover, the oxidized protein is more readily degraded, which may lead to muscle loss and dysfunction in smokers and COPD patients (Marin-Corral et al., 2009; Barreiro et al., 2010). MyHC is the basic unit of myosin, which plays an important role in ensuring the normal work of muscle cells. Carbonylation is a key trigger of the oxidation pathway (Carlos et al., 2014). A previous study found that the degree of MyHC carbonylation in the diaphragm of severe COPD patients was five times that of the healthy group, and the carbonylated MyHC underwent rapid degradation. The decrease in non-carbonylated MyHC could explain the decrease in maximum transdiaphragmatic pressure and maximum inspiratory pressure in patients with severe COPD (Levine et al., 2013). In conclusion, oxidative stress in the COPD diaphragm causes protein oxidation; the activity of oxidatively modified proteins is decreased, and oxidized proteins are easily degraded by proteases, especially proteins with energy-production and contractile functions, which leads to dysfunction of the COPD diaphragm.

Oxidative Stress Reduces Diaphragmatic Calcium Sensitivity in COPD

In COPD patients, single muscle fibers show reduced calcium sensitivity of force generation and slowed cross-bridge cycling kinetics, which may lead to muscle weakness during submaximal activation (Ottenheijm et al., 2005). Studies have found that excessive production of free radicals is related to impaired contractility. Long-term oxidative stress can increase the cytoplasmic calcium level yet reduce diaphragmatic force production, indicating that oxidative stress reduces the sensitivity of diaphragmatic muscle fibers to Ca2+. As a reactive nitrogen species, nitric oxide can reduce Ca2+ sensitivity by reducing the number of cross bridges in the strongly bound state and can slow cross-bridge cycling kinetics during submaximal activation (Heunks et al., 2001).
Ca2+ binding to troponin induces a series of protein structural changes and ATP hydrolysis to release energy, which plays a key role in muscle contraction. Therefore, a decrease in Ca2+ sensitivity affects the contraction process of the diaphragm. In addition, decreased Ca2+ sensitivity requires higher cytosolic calcium to maintain equivalent force generation, resulting in increased ATP consumption by the Ca2+-ATPase. At a set submaximal activation, decreased Ca2+ sensitivity may reduce diaphragmatic force production in vivo (Ottenheijm et al., 2007). These results suggest that oxidative stress may affect diaphragmatic muscle strength in COPD by reducing Ca2+ sensitivity. In conclusion, oxidative stress makes diaphragm proteins more easily degraded by activating proteases and oxidizing proteins, and oxidative damage itself also impairs protein function. In addition, oxidative stress can reduce the sensitivity of diaphragm fibers to Ca2+, with important consequences for the structure and function of the diaphragm in COPD patients. Therefore, regulation of the oxidative stress level in the COPD diaphragm may be an important target for improving diaphragm structure and function and preventing further progression of the disease.

Effect of Exercise on Oxidative Stress of the Diaphragm in COPD

Exercise training is an important part of pulmonary rehabilitation (Spruit et al., 2013; Gimeno-Santos et al., 2014) and can regulate the oxidative stress level of the diaphragm in COPD; however, the effects of exercise may differ depending on the intensity, duration, and style of exercise.

Effect of Different Exercise Intensities on Oxidative Stress of the Diaphragm in COPD

Exercise intensity is the core component of an exercise prescription, and appropriate exercise intensity is very important for the recovery of COPD patients. Vieira Ramos et al. (2018) conducted moderate-intensity (50% of maximal speed) aerobic training in mice with COPD for 24 weeks and found that it upregulated antioxidant genes in the diaphragm, limited the loss of diaphragm muscle mass, and improved the diaphragm atrophy caused by cigarette smoke exposure. However, the 2007 American Pulmonary Rehabilitation Guidelines indicated that high-intensity training may produce better physiological training effects, including reduced minute ventilation and heart rate, thereby reducing dyspnea during submaximal exercise (Ries et al., 2007). High-intensity training here refers to intensity close to the individual's peak level; it is relative rather than absolute high intensity, operationally defined as achieving at least 60-80% of peak working speed during an incremental maximal exercise test. By contrast, a single session of strenuous exercise may increase the oxidative stress level of the diaphragm in COPD patients, leading to diaphragm injury, dysfunction, or fatigue, because COPD patients may be in a state of calcium deficiency, and strenuous exercise enhances calcium-restriction-induced oxidative stress (Itoh et al., 2004). Studies have also found that moderate-intensity (50-80% VO2max) rather than high-intensity training has a beneficial effect on oxidative stress in elderly patients (Bouzid et al., 2015).
This result may be due to reduced ventilatory capacity, abnormal gas exchange, and skeletal muscle dysfunction in elderly patients with COPD, which affect exercise performance and reduce exercise capacity and maximum work rate (Lopes et al., 2018). Moreover, strenuous exercise depletes muscle antioxidant vitamin levels, which may impair overall antioxidant protection (Ji et al., 1998). Therefore, in exercise training for COPD, relatively high-intensity aerobic exercise below the individual's peak level may be more conducive than moderate-intensity exercise to regulating the oxidative stress level and restoring diaphragm function; however, elderly patients may be better suited to moderate-intensity than high-intensity exercise, and strenuous exercise should be avoided.

Effects of Different Exercise Durations on Oxidative Stress of the Diaphragm in COPD

A previous study found that 24 weeks of treadmill aerobic training improved the oxidative stress level of the diaphragm in mice with COPD (Vieira Ramos et al., 2018). Six weeks of high-intensity intermittent exercise (HIIT) attenuated oxidative stress in the diaphragm of smoke-exposed mice (Bowen et al., 2017a). In addition, many studies have found that long-term (≥4 weeks) aerobic training can improve the activity of antioxidant enzymes and the resistance of the diaphragm to intracellular ROS, reduce the oxidative modification of key proteins and enzymes, improve the oxidative stress level of the diaphragm, and prevent diaphragm dysfunction (Oh-ishi et al., 1997; Mangner et al., 2013, 2016). However, other studies have found that short-term exercise training can also improve oxidative stress of the diaphragm. Vincent et al. (2000) found that 5 days of short-term endurance exercise training at 65% VO2max protected the rat diaphragm from contraction-induced oxidative stress and reduced oxidation-induced lipid damage after prolonged contraction. Smuder et al. (2012) found that 10 days of endurance exercise at 70% of maximal oxygen consumption protected the rat diaphragm from mitochondrial oxidative damage induced by mechanical ventilation. In addition, Bowen et al. (2017b) found that 2 weeks of HIIT reduced oxidative stress in the diaphragm and directly prevented oxidant-mediated diaphragm dysfunction in hypertensive mice. In conclusion, long-term (≥4 weeks) aerobic exercise seems to have a better regulatory effect on the oxidative stress level of the diaphragm in COPD. As a more time-saving option, short-term exercise training can also improve the oxidative stress level of the diaphragm, and its effect on the oxidative stress level and function of the diaphragm in COPD is worthy of further study. Future studies should investigate the effects of different exercise durations on oxidative stress and function of the diaphragm in COPD, so as to identify the exercise prescription with the best effect at the least cost.
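As a small worked example of the relative-intensity prescriptions discussed above, the sketch below converts an individual's peak values from an incremental maximal test into the training ranges mentioned (60-80% of peak working speed for relatively high-intensity aerobic training; 50-80% of VO2max for moderate intensity). The patient values and the function are hypothetical, for illustration only.

```python
def training_ranges(peak_speed_kmh: float, vo2max_ml_kg_min: float) -> dict:
    """Illustrative relative-intensity targets from incremental maximal test results.

    Thresholds follow the ranges discussed above: 60-80% of peak working speed
    for relatively high-intensity aerobic training, and 50-80% of VO2max for
    moderate-intensity training.
    """
    return {
        "high_intensity_speed_kmh": (round(0.60 * peak_speed_kmh, 2),
                                     round(0.80 * peak_speed_kmh, 2)),
        "moderate_vo2_ml_kg_min": (round(0.50 * vo2max_ml_kg_min, 2),
                                   round(0.80 * vo2max_ml_kg_min, 2)),
    }

# Hypothetical peak values for one patient.
print(training_ranges(peak_speed_kmh=6.0, vo2max_ml_kg_min=20.0))
# {'high_intensity_speed_kmh': (3.6, 4.8), 'moderate_vo2_ml_kg_min': (10.0, 16.0)}
```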
Effects of Different Exercise Styles on Oxidative Stress of the Diaphragm in COPD

Many studies have found that aerobic exercise can improve the oxidative stress level of the diaphragm in COPD. Vieira Ramos et al. (2018) conducted treadmill aerobic training in mice with COPD and found that it improved the antioxidant capacity of the diaphragm. Li et al. (2020) found that after aerobic exercise in water, nicotinic acid metabolism in the diaphragm of COPD rats was upregulated, protecting the diaphragm and ventilatory muscles from oxidative damage and improving diaphragm muscle strength and ventilatory function; however, that experiment did not directly measure oxidative stress indicators. HIIT is a form of aerobic exercise. Bowen et al. (2017a) found that HIIT attenuated oxidative stress of the diaphragm in smoke-exposed mice and increased diaphragm muscle strength. In addition, some studies found that resistance exercise (a form of anaerobic exercise) can reduce serum lipid peroxidation and provide protection against oxidants (Vincent et al., 2002). Alcazar et al. (2019) found that the combination of HIIT and strength training can improve blood protein carbonylation and systemic oxidative stress in elderly COPD patients. In conclusion, different forms of aerobic exercise (in water, on land, and HIIT) have a good regulatory effect on the oxidative stress level of the COPD diaphragm. Resistance exercise and combined exercise (resistance plus endurance exercise) can also affect systemic oxidative stress in COPD, but their effects on diaphragm function and oxidative stress level are still unknown and need further study. The literature on exercise interventions targeting oxidative stress of the diaphragm is summarized in Table 1.

Exercise Improves Antioxidant Capacity

Exercise training can accelerate the clearance of free radicals, reduce oxidative modification and injury, regulate the level of oxidative stress, and improve diaphragm function through the activation of antioxidants in the diaphragm. Antioxidants in the body include enzymatic and non-enzymatic substances. The antioxidant enzymes, including heme oxygenase-1 (HO-1), quinone oxidoreductase 1, catalase (CAT), glutathione peroxidase (GPX), and superoxide dismutase (SOD), are regulated by the nuclear factor-erythroid 2-related factor 2 (NRF2) pathway (Li and Duan, 2011). Studies have shown that exercise can upregulate the expression of HO-1 via the NRF2/HO-1 antioxidant pathway (Vieira Ramos et al., 2018), which can regulate the balance between oxidants and antioxidants, protect the diaphragm from atrophy caused by cigarette smoke exposure, and improve diaphragm dysfunction. SOD, GPX, and CAT are considered the first line of defense against oxidative stress in vivo (Powers and Jackson, 2008). Studies have found that exercise can induce oxidative stress, but endurance training improves the activities of antioxidant enzymes (Mn-SOD, Cu,Zn-SOD, GPX, and CAT), which may improve the resistance of rats to intracellular ROS and thus protect the diaphragm from oxidative damage (Oh-ishi et al., 1997). However, the effect of exercise on the antioxidant capacity of the diaphragm remains to be fully elucidated. Nakatani et al. (2005) found that habitual exercise increased SOD activity in the rat diaphragm, but no evidence showed that it increased the activity of CAT or GPX. Conversely, acute exercise can increase the activities of GPX and CAT in the diaphragm of untrained rats, but has no significant effect on the activities of Mn-SOD and Cu,Zn-SOD (Oh-ishi et al., 1997). Another study showed that exercise training increased the protein abundance of SOD1, GPX1, and CAT in the cytosol of rat diaphragm fibers, whereas only SOD2 and GPX1 were increased in the mitochondria of the diaphragm (Smuder et al., 2012).
The differences among these results may be caused by different exercise interventions, different diseases and physiological states of the rats, or different diaphragm test sites. In addition, glutathione has an antioxidant effect; however, endurance training had no significant effect on diaphragmatic glutathione in hamsters with emphysema. In conclusion, exercise training may reduce oxidative damage by improving the antioxidant capacity of the diaphragm and thereby improve the atrophy and dysfunction of the diaphragm in COPD, but the specific mechanism still needs further study.

Exercise Reduces Oxidase Activity

Exercise can regulate the oxidative stress level of the diaphragm in COPD by reducing the activity of oxidases. NADPH oxidase (NOX), a membrane-bound enzyme complex found in phagocytes and epithelial cells, is the main oxidase and the main enzyme generating reactive oxygen. Compared with non-smokers, the number of neutrophils and macrophages migrating to smokers' lungs is increased, and ROS can be produced through the NOX system (Macnee, 2001), leading to oxidative stress of the diaphragm in COPD patients. Bowen et al. (2017a) found that exercise training can reduce NOX activity in the diaphragm of mice with COPD, regulate the oxidative stress level, weaken protein degradation, reverse the extrapulmonary damage to the diaphragm caused by smoking, enhance diaphragm muscle strength, and restore diaphragm function. In addition, studies have found that NOX-generated oxidative stress may be closely related to the expression of pro-inflammatory factors such as tumor necrosis factor (TNF), and that the antioxidant effect was partly responsible for the decreased expression of pro-inflammatory factors (Wieczfinska et al., 2019). Blockade of TNF-α can significantly reduce the number of inflammatory cells and the production of superoxide anion in the diaphragm and improve diaphragm structure and function (Domínguez-Álvarez et al., 2014). Therefore, exercise can not only reduce ROS production directly, but can also reduce it by lowering the expression of pro-inflammatory factors, thereby regulating the oxidative stress level and improving diaphragm structure and dysfunction in COPD.

Exercise Improves Mitochondrial Function

Exercise can regulate the oxidative stress level of the diaphragm in COPD by improving mitochondrial function. Mitochondrial dysfunction is a trigger factor in COPD and other diseases related to respiratory aging. The mitochondrion is one of the main endogenous sources of ROS, and its damage may increase electron leakage from the electron transport chain and the formation of ROS, causing oxidative stress (Białas et al., 2016; Fang et al., 2019). Mitochondria in the diaphragm of rats exposed to cigarette smoke for a long time are damaged, showing fuzzy Z disks, unclear structure, and focal myofilament rupture and dissolution (Sheng et al., 2020), which lead to increased ROS production. AMP-activated protein kinase (AMPK) specifically regulates many aspects of mitochondrial biology and homeostasis, and exercise training can improve mitochondrial function by activating the AMPK pathway (Herzig and Shaw, 2018). Recent studies have found that the production of mitochondrial ROS is closely related to the AMPK pathway, which can limit the
production of ROS, participate in the clearance of mitochondrial ROS, and increase the quality and function of the mitochondria by upregulating the expression of mitochondrial uncoupling protein (UCP) (Xie et al., 2008; Rubattu et al., 2016; Hasan-Olive et al., 2019). Peroxisome proliferator-activated receptor γ coactivator 1 (PGC1) is a downstream effector of AMPK, and most genes involved in mitochondrial metabolism are controlled by the PGC1 family (Herzig and Shaw, 2018). UCP2 is a cation carrier protein of the mitochondrial inner membrane that can inhibit ROS production and protect mitochondrial function; it is also considered a direct target of PGC-1α transcriptional regulation (Jia, 2011; Gu et al., 2014). Studies have found that exercise training can upregulate the expression of AMPK in skeletal muscle and reduce mitochondrial ROS production by upregulating the AMPK/PGC-1α/UCP2 pathway (Gu et al., 2014; Lin et al., 2020). Therefore, exercise may improve mitochondrial function and reduce ROS production by upregulating the AMPK/PGC-1α/UCP2 pathway, thereby regulating oxidative stress and improving dysfunction in the COPD diaphragm. The mechanism of exercise on oxidative stress of the diaphragm in COPD is shown in Figure 1.

Figure 1 | The mechanism of exercise on oxidative stress of the diaphragm in COPD. Exercise may reduce ROS production, regulate the diaphragm oxidative stress level, and improve diaphragm structure and dysfunction in COPD through: (1) activating the NRF2 signaling pathway and upregulating HO-1 and other antioxidant enzymes; (2) inhibiting the activity of NOX and reducing the expression of TNF; and (3) activating the mitochondria-related AMPK/PGC-1α/UCP2 pathway. HO-1, heme oxygenase-1; NOX, NADPH oxidase; TNF, tumor necrosis factor; ROS, reactive oxygen species; AMPK, AMP-activated protein kinase; PGC-1α, peroxisome proliferator-activated receptor gamma coactivator-1α; UCP2, uncoupling protein 2.

CONCLUSION

The structure and function of the diaphragm in COPD are both impaired and adaptive. In the stable period, the positive adaptive changes of the diaphragm can partially offset its dysfunction and achieve a special balance that meets the patient's ventilation needs. Deterioration and acute exacerbation of the disease disrupt this balance, further damaging the diaphragm and increasing its dysfunction, which in severe cases may lead to respiratory failure and death. Diaphragm dysfunction mainly comprises limitations of muscle strength, endurance, and mobility caused by many factors. Oxidative stress may affect diaphragm function in COPD through protein oxidation, protease activation, and decreased Ca2+ sensitivity. Exercise training can improve diaphragm dysfunction by regulating oxidative stress in COPD. Long-term (≥4 weeks) relatively high-intensity aerobic exercise seems to have a good effect on improving oxidative stress and dysfunction of the diaphragm in COPD. Compared with high-intensity exercise training, moderate-intensity (50-80% VO2max) exercise training is more suitable for elderly patients. The mechanisms by which exercise influences oxidative stress in the diaphragm of patients with COPD may include increasing antioxidant activity, decreasing oxidase activity, and improving mitochondrial function. However, few studies have investigated the effect and mechanism of exercise on the oxidative stress level of the COPD diaphragm.
Thus, future studies should explore the effect and mechanism of different exercise prescriptions, including exercise style, intensity, and duration, on the oxidative stress level in the diaphragm of COPD. Furthermore, because the ultimate function of the COPD diaphragm depends on a special balance between injury and adaptation, more research on the effects of exercise on this special balance should be conducted in the future.

AUTHOR CONTRIBUTIONS

WBW, XDL, and BZZ contributed to the design of this review. BZZ, JL, and PJL conducted the literature search and selection and completed data extraction. BZZ completed data analysis and wrote the initial draft of the manuscript. PJL and JL contributed important intellectual content and put forward suggestions for revision. BZZ and PJL revised the article. WBW, XDL, and PJL supervised the quality of the article. All authors have read and approved the submitted version.

FUNDING

This review was supported by the National Natural Science Foundation of China (Nos. 81902307 and 82072551).
Norcantharidin-Encapsulated C60-Modified Nanomicelles: A Potential Approach to Mitigate Cytotoxicity in Renal Cells and Simultaneously Enhance Anti-Tumor Activity in Hepatocellular Carcinoma Cells

Objective: The objective of this study was to examine the preparation process of DSPE-PEG-C60/NCTD micelles and assess the impact of fullerenol (C60)-modified micelles on the nephrotoxicity and antitumor activity of NCTD. Method: The micelles containing NCTD were prepared using the ultrasonic method and subsequently optimized and characterized. The cytotoxicity of micelles loaded with NCTD was assessed using the CCK-8 method on the human hepatoma cell lines HepG2 and BEL-7402, as well as the normal cell lines HK-2 and L02. Acridine orange/ethidium bromide (AO/EB) double staining and flow cytometry were employed to assess the impact of NCTD-loaded micelles on apoptosis of the HK-2 and HepG2 cells. Additionally, JC-1 fluorescence was utilized to quantify alterations in mitochondrial membrane potential. The generation of reactive oxygen species (ROS) following micelle treatment was determined through 2′,7′-dichlorofluorescein diacetate (DCFDA) staining. Results: The mean particle size of the DSPE-PEG-C60/NCTD micelles was 91.57 nm (PDI = 0.231). The zeta potential of the micelles was −13.8 mV, and the encapsulation efficiency was 91.9%. The in vitro release behavior of the micelles followed the Higuchi equation. Cellular experiments demonstrated a notable decrease in the toxicity of the C60-modified micelles against the HK-2 cells, accompanied by an augmented inhibitory effect on cancer cells. Compared to the free NCTD group, the DSPE-PEG-C60 micelles exhibited a decreased apoptosis rate (12%) for the HK-2 cell line, lower than the apoptosis rate observed in the NCTD group (36%) at an NCTD concentration of 75 µM. The rate of apoptosis in the HepG2 cells exhibited a significant increase (49%), surpassing the apoptosis rate observed in the NCTD group (24%) at a concentration of 150 µM NCTD. The HK-2 cells exhibited a reduction in intracellular ROS and an increase in mitochondrial membrane potential (ΔψM) upon exposure to the C60-modified micelles compared to the NCTD group. Conclusions: The DSPE-PEG-C60/NCTD micelles, as prepared in this study, demonstrated the ability to decrease cytotoxicity and ROS levels in normal renal cells (HK-2) in vitro. Additionally, these micelles showed enhanced antitumor activity against human hepatocellular carcinoma cells (HepG2, BEL-7402).

Introduction

Norcantharidin (NCTD) is a derivative of the Chinese medicine cantharidin (CTD) obtained by removing the two methyl groups at the 1 and 2 positions of cantharidin [1]. NCTD has been shown to inhibit the growth of solid tumors, such as liver cancer, esophageal cancer, and gastric cancer, with the advantages of increasing white blood cell counts, regulating immunity, and causing no bone marrow suppression [2-4]. NCTD has been used for many years in China to treat liver cancer and hepatitis through oral administration and injection. It is available in the form of demethylcantharidin tablets and sodium demethylcantharidate for injection.
However, despite its lower toxicity compared to cantharidin, NCTD still causes nephrotoxicity in clinical settings [5]. Following oral administration of a high dose, glomerular epithelial cells exhibited turbidity and swelling. A toxicology study of NCTD in mice found that ROS levels in kidney tissues were significantly increased after NCTD administration, with attributable injuries such as tissue congestion and vasodilation [6]. Its significant toxic effects on normal human cells indicate that NCTD does not selectively inhibit tumors. In addition, the poor solubility, short half-life and low LD50 of NCTD are disadvantages that limit its clinical application [7].

In order to mitigate the toxic damage to normal cells, we hypothesized that antioxidants could be potential candidates for reducing oxidative stress in cells. This is particularly important because of the high levels of ROS induced by NCTD in normal tissues, which can result in cell injury [6]. It has been reported that fullerene, one of the allotropes of the carbon nanomaterial family, has powerful antioxidant properties with long-lasting activity; however, its poor water solubility limits its biological application. Fullerenol, a water-soluble derivative of fullerene, shows high ROS-scavenging activity both in vivo and in vitro [8-11]. It has been reported that fullerenol exhibits protective effects against doxorubicin-induced cytotoxicity in the lungs, kidneys and testes of rats [12-16]. In previous experimental studies, we employed fullerenol-modified micelles in a drug delivery system containing doxorubicin hydrochloride (DOX). The findings indicated that the fullerenol (C60)-modified micelles demonstrated reduced cytotoxicity in normal cell lines (L02, H9c2, GES-1) in comparison to free DOX in vitro. The results also showed that the C60-modified micelles reduced intracellular ROS in the H9c2 cells in comparison to free DOX [17]. However, it is still uncertain whether fullerenol can counteract the oxidative stress induced by other anticancer drugs in normal tissues.

Polymer micelles have emerged as a promising approach for drug delivery, offering a potential solution to the challenges associated with the low bioavailability and poor solubility of free drugs [18]. Polymer micelles are generated through the self-assembly of amphiphilic polymers. Drugs can be either grafted onto polymers to produce pharmacologically active polymer systems or encapsulated in nanoscale micelles by polymer self-assembly, which provides opportunities to enhance the solubility and stability of the drugs [19,20]. The micelles' appropriate diameter, typically ranging from 100 to 200 nm, facilitates their accumulation within the tumor microenvironment by leveraging the enhanced permeability and retention (EPR) effect [21-23]. Polymeric micelles have demonstrated enhanced pharmacokinetic properties in preclinical animal models, as well as improved therapeutic effectiveness and superior safety. Several polymeric micellar formulations have advanced to the clinical stage, either undergoing clinical trials or having obtained approval for human use [24-26]. For instance, regulatory authorities in Korea, China and other countries have granted approval for the use of Genexol® PM, Nanoxel® M, Zicheng® and other drugs as effective cancer treatments [27,28].
In the present investigation, distearoyl phosphatidylethanolamine-polyethylene glycol (DSPE-PEG 2000) was used as the skeletal material of the micelles, which was further modified with C60(OH)22. DSPE-PEG, which has been approved by the US FDA, is frequently employed for the encapsulation of proteins, peptides and other pharmaceuticals, with the aim of extending the half-life of these substances in the bloodstream and enhancing their stability. It has also been reported that the utilization of DSPE-PEG as a carrier in a nanodrug delivery system enhances the cellular uptake and cytotoxicity of insoluble anticancer drugs [29,30]. Hence, this study focused on the preparation of DSPE-PEG-C60 micelles loaded with NCTD and aimed to investigate their cytotoxicity in both tumor cells and normal cells.

Determination of NCTD by HPLC

As demonstrated in Figure 1, the retention time of NCTD was approximately 4.1 min (Figure 1A), and DSPE-PEG-C60 did not interfere with NCTD detection (Figure 1B,C). The NCTD regression curve exhibited a solid linear relationship between 59.47 µM and 2973.54 µM (Figure 1D), and the correlation coefficient (R²) was larger than 0.9996.

Preparation and Characterization of Micelles

The results are presented in Table 1. The sizes of the micelles decreased progressively as the proportion of carrier increased. The particle sizes were measured to be 117.8 nm (carrier:NCTD = 10:1, w/w), 91.57 nm (carrier:NCTD = 15:1, w/w) and 45.74 nm (carrier:NCTD = 20:1, w/w) (Table 1, Batches 1, 2 and 3). The findings also demonstrated that the micellar sizes were influenced by the ultrasound time and power. The micelles achieved the desired particle size of about 100 nm in diameter when subjected to ultrasonic treatment for 10 min at an ultrasound power of 200 W; the particle sizes decreased with both an increase and a decrease in the ultrasound time and power (Table 1, Batches 2, 4, 5, 6, 7). Finally, the processing parameters adopted included an NCTD-to-carrier weight ratio of 1:15, an ultrasound time of 10 min and an ultrasound power of 200 W. The micelles loaded with NCTD
exhibited a high encapsulation efficiency (EE) of 91.90% and a drug loading content (LC) of 5.92%. These micelles also demonstrated an ideal particle size of 91.6 nm and a negative charge of −13.8 mV. The unmodified DSPE-PEG micelles were also prepared using the same process, with a similar particle size of 96.1 nm, as shown in Table 2 and Figure 2. TEM images depicting the micelles are presented in Figure 3.

Compared to the results obtained from DLS, the particle size appeared smaller in TEM detection. This is because the micelles undergo shrinkage during the drying process, leading to a reduction in particle size. In dynamic light scattering (DLS), by contrast, micelles form hydration layers with surrounding water molecules, which leads to larger particle sizes in DLS detection.

Stability of Micellar Solution

The impact of temperature on the stability of the micellar solution is demonstrated in Figure 4. The DSPE-PEG-C60/NCTD micellar solution exhibited relatively stable behavior, with minimal variation in particle size over a period of 7 days at 4 °C. However, turbidity of the DSPE-PEG-C60/NCTD micellar solution was observed at room temperature during the experiment.

Drug Release Assay

As depicted in Figure 5, the release of the DSPE-PEG-C60/NCTD and DSPE-PEG/NCTD micelles in PBS buffers (including 1% SDS) followed the Higuchi equation. After 6 h, the cumulative release rates of NCTD from the DSPE-PEG-C60/NCTD micelles and the DSPE-PEG/NCTD micelles were 56% and 58%, respectively, in pH 7.4 PBS buffers. The in vitro release experiments demonstrated a sustained release of approximately 90% of NCTD over a period of 48 h, with no significant burst release observed (Figure 5A). In line with our earlier research, these findings showed that the introduction of C60(OH)22 had no appreciable effect on the micelles' release kinetics. Since the pH of tumor tissues is much lower than that of normal tissues, the release profile of NCTD from the micelles was also evaluated at pH 5.5 and 6.5 (Figure 5C,E).
Approximately 94.3% of the loaded NCTD was released from the micelles at pH 6.5, and about 97.4% at pH 5.5, within 48 h. There was no significant difference in the release of NCTD from the micelles in the different pH environments. This result suggests that the release of NCTD from the micelles is pH-independent.
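To illustrate how a release profile can be checked against the Higuchi model mentioned above (cumulative release proportional to the square root of time), the sketch below fits Q = kH·√t + c by least squares to a hypothetical release series. The data points are invented for demonstration and are not the measured values behind Figure 5.

```python
import numpy as np

# Hypothetical cumulative-release series (time in h, cumulative release in %),
# invented for illustration only.
t = np.array([1.0, 2.0, 4.0, 6.0, 12.0, 24.0, 48.0])
q = np.array([20.0, 25.0, 33.0, 38.0, 50.0, 67.0, 90.0])

# Higuchi model: Q = kH * sqrt(t) (here fitted with an intercept, as is common).
sqrt_t = np.sqrt(t)
k_h, c = np.polyfit(sqrt_t, q, 1)

# Goodness of fit (coefficient of determination).
q_hat = k_h * sqrt_t + c
r2 = 1.0 - np.sum((q - q_hat) ** 2) / np.sum((q - q.mean()) ** 2)
print(f"kH = {k_h:.2f} %/h^0.5, intercept = {c:.2f} %, R^2 = {r2:.3f}")
```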
In Vitro Cytotoxicity Examination

Cell activity was assessed using the CCK-8 method. The cytotoxicity of the DSPE-PEG-C60/NCTD micelles was evaluated in comparison to free NCTD and the DSPE-PEG/NCTD micelles in the L02, HK-2, HepG2 and BEL-7402 cell lines. As depicted in Figure 6, the encapsulation of NCTD into the micelles resulted in a substantial increase in the inhibitory effect on the HepG2 and BEL-7402 tumor cells. The IC50 of the DSPE-PEG-C60/NCTD micelles was 90.07 µM in the HepG2 cells and 63.20 µM in the BEL-7402 cells, lower than the IC50 of the free NCTD group, which was 117.50 µM in the HepG2 cells and 102.62 µM in the BEL-7402 cells. This phenomenon can be attributed to the surface-active properties of the carriers (DSPE-PEG and DSPE-PEG-C60), which can penetrate the cell membrane, increase cell membrane fluidity and accelerate the transmembrane transport of NCTD, contributing to the increased cellular uptake of NCTD and the enhanced cytotoxicity of the NCTD micelles [31-34].

Interestingly, the micelle groups did not exhibit an elevated cytotoxic effect on the HK-2 cells in comparison to the free NCTD group. The DSPE-PEG-C60/NCTD micelles exhibited less toxicity (IC50 = 36.59 µM) than the free NCTD group (IC50 = 23.05 µM) on the HK-2 cells. A potential explanation for this observation is that the HK-2 cell line was more susceptible to NCTD than the other cell lines, which may have masked the cytotoxicity-enhancing capacity of the DSPE-PEG/DSPE-PEG-C60 carriers.
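IC50 values such as those reported above are typically obtained by fitting a sigmoidal model to the concentration-inhibition data from the CCK-8 assay. The sketch below uses one common choice, a four-parameter logistic fit; the concentration-inhibition pairs are hypothetical and the model is illustrative rather than the exact fitting procedure used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Hypothetical NCTD concentrations (uM) and inhibition rates (%).
conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
inhibition = np.array([8.0, 18.0, 35.0, 55.0, 74.0, 86.0])

# Initial guesses: 0-100% span, IC50 near the mid-range, Hill slope of 1.
popt, _ = curve_fit(four_pl, conc, inhibition,
                    p0=[0.0, 100.0, 100.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.1f} uM, Hill slope = {popt[3]:.2f}")
```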
Cell Apoptosis Assay Using Acridine Orange/Ethidium Bromide (AO/EB) Staining

Given that hepatocellular carcinoma is the primary clinical context in which NCTD is applied, and considering its significant association with nephrotoxicity, the HepG2 and HK-2 cells were chosen as the staining subjects for this study. The effect of the DSPE-PEG-C60/NCTD micelles on cell apoptosis was detected using AO/EB staining. As depicted in Figure 7, the representative fluorescence microscopic images of the double-stained cells reveal that significant apoptosis was detected in the free NCTD group, whose cells were stained with EB and appear orange-red (Figure 7B,F). The DSPE-PEG-C60/NCTD micelle group showed less orange-red fluorescence than the free NCTD group, indicating a decrease in the cytotoxicity of the DSPE-PEG-C60/NCTD micelles in the HK-2 cell line (Figure 7D). In the case of the HepG2 cells, both the DSPE-PEG/NCTD micelles and the DSPE-PEG-C60/NCTD micelles induced more apoptotic cells with orange-red fluorescence than the free NCTD group (Figure 7G,H).

Mitochondrial Membrane Potential Assay

The functional integrity of mitochondria was evaluated by JC-1 staining. The representative fluorescence microscopic images demonstrate that the administration of free NCTD resulted in a reduction in the number of JC-1 aggregates (red fluorescence) and an increase in the quantity of JC-1 monomers (green fluorescence) in the HK-2 cells (Figure 8B), when compared to the DSPE-PEG-C60/NCTD micelle treatment group (Figure 8D). This observation suggests significant mitochondrial damage in the HK-2 cells exposed to free NCTD.

With respect to the HepG2 cells, the diminished orange fluorescence and the increased green fluorescence intensity of the DSPE-PEG-C60/NCTD micelle group, in comparison with the control group and the NCTD-treated group, indicated the collapse of the mitochondrial membrane potential, as shown in Figure 8H.
Cell Apoptosis by Flow Cytometry

The impact of the DSPE-PEG-C60/NCTD micelles on cellular apoptosis was also assessed using flow cytometry with the Annexin V-FITC/PI double-staining method. As shown in the representative pseudo-color plots and the apoptosis ratios of the cells (Figure 9A-E), the average apoptosis rates of the HK-2 cells were 5% (control), 36% (NCTD group), 21% (DSPE-PEG/NCTD group) and 12% (DSPE-PEG-C60/NCTD group) at a 75 µM NCTD concentration. The DSPE-PEG-C60 micelles demonstrated a significant inhibitory effect on the toxicity induced by NCTD, leading to a notable decrease in the cell apoptosis ratio. This provided additional evidence that incorporating C60(OH)22 into the drug delivery system notably decreased the toxicity of NCTD in the HK-2 normal cell line.

The HepG2 cells were also subjected to apoptosis detection to examine whether the cytotoxicity of the DSPE-PEG-C60/NCTD micelles against tumor cells was increased, as observed in the cytotoxicity examination. The representative pseudo-color plots and the apoptosis ratios of the cells indicated that the NCTD-loaded DSPE-PEG-C60 micelles produced an increased apoptosis ratio of approximately 48% at a concentration of 150 µM NCTD, compared with 24% in the free NCTD group (Figure 9F-J).
Intracellular ROS Level Evaluation

The ROS levels in the cell lines treated with the micelles or free NCTD were assessed by flow cytometry with DCFDA staining. NCTD-induced oxidative stress in the HK-2 cells resulted in an increased ROS level, with a 1.9-fold increase in DCF fluorescence compared with the untreated cells at a 75 µM NCTD concentration. The DCF level of the DSPE-PEG-C60 micelle group was 1.14-fold that of the control group, lower than that of the free NCTD group and similar to that of the control group (Figure 10A,B). These results indicated the strong capability of C60(OH)22 to reduce oxidative stress in the HK-2 cells.

On the contrary, the DCF fluorescence level of the DSPE-PEG-C60/NCTD micelle group was 1.6 times that of the control group, exceeding that of the free NCTD group, in the HepG2 cells at a 150 µM NCTD concentration (Figure 10C,D). This result confirmed the increased cytotoxicity of the NCTD-loaded micelles against the HepG2 cells.

Discussion

The physicochemical characteristics of micelles, including particle size, shape and surface charge, are important for their fate in vivo. Nanoparticles within the range of 100-200 nm in diameter have the ability to infiltrate and accumulate at tumor sites. In this study, micelles were prepared with an approximate size of 100 nm. These micelles exhibited a negative charge in the physiological environment due to the introduction of C60(OH)22. This negative charge has the potential to enhance stability by inhibiting the interaction between the micelles and negatively charged vascular endothelial cells or plasma components in vivo [35]. PEG was utilized to create a compact coating on the micelles' surface, with the purpose of prolonging the circulation time of the micelles and delaying their phagocytic clearance [36].
Although the EPR effect of nanomedicines on solid tumors is widely acknowledged, the increase in nanomedicine exposure in tumor tissue is only around 20% to 30% compared to normal tissues [37,38]. The difficulty of achieving complete drug enrichment in tumors makes it necessary to study ways of reducing the severe side effects of anticancer drugs. C60(OH)n has been reported as an antioxidant protector against cytotoxicity induced by chemotherapeutic drugs, as well as in other protective applications, such as a radical scavenger shielding cells from radiation and an antagonist of glutamate receptors [12-16]. Many studies have reported that fullerenols exhibit protective effects against doxorubicin (DOX)-induced cardiotoxicity, hepatotoxicity and nephrotoxicity in rats subjected to high doses of DOX in vivo. The cytoprotective effects of fullerenols against DOX-induced damage in normal cells in vitro were also observed in our previous studies [17]. In the present study, an investigation was conducted to examine the antioxidant properties of C60(OH)22 and its amphiphilic derivative DSPE-PEG-C60 in protecting normal HK-2 and L02 cells against cytotoxicity induced by NCTD. The observed decrease in cytotoxicity against the HK-2 and L02 cell lines aligns with the findings of our previous investigations. The NCTD-loaded DSPE-PEG-C60 micelles demonstrated a significant protective effect on normal cells, specifically the HK-2 cells, in comparison to the free NCTD group (at a mole ratio of NCTD:DSPE-PEG-C60 = 1:0.7). However, in vitro experiments did not reveal any protective effects of free C60(OH)22 on the cells treated with NCTD (at mole ratios of NCTD:C60(OH)22 = 1:0.7-2). One possible reason is that DSPE-PEG-C60 is more readily taken up by cells than C60(OH)22.

It was also found in this study that both carriers (DSPE-PEG-C60 and DSPE-PEG) significantly enhanced the activity of NCTD against tumor cells. After drugs or drug-loaded micelles reach the tumor tissue, the internalization of the drugs into the tumor cells is one of the key steps in exerting antitumor activity. The inefficient cellular uptake of NCTD by tumor cells was improved by the amphiphilic carrier (DSPE-PEG or DSPE-PEG-C60).
Interestingly, in the HK-2 cells, the carriers did not increase the cytotoxicity of NCTD but decreased it (Figure 6). The fluorescence microscopic images of the AO/EB staining show that treatment with NCTD caused significant cell apoptosis compared to the control group. The DSPE-PEG-C60 micelles protected against NCTD, with reduced cell apoptosis compared to the NCTD group in the HK-2 cell line (Figure 7). The loss of mitochondrial membrane potential (ΔψM) in cells signifies the early stage of apoptosis. The fluorescence microscopic images of JC-1 staining show that NCTD treatment caused severe cell apoptosis, indicated by increased green fluorescence and decreased red fluorescence, compared to the control group, whose healthy cells showed red fluorescence with high mitochondrial membrane potential (ΔψM). The DSPE-PEG-C60/NCTD micelles were less potent than NCTD in damaging mitochondria, as evidenced by a relatively high ratio of red to green fluorescence in the HK-2 cell line (Figure 8). The apoptosis examination was further validated by flow cytometry: the percentages of apoptotic cells were 36% and 12% in the NCTD and DSPE-PEG-C60/NCTD micelle groups, respectively (Figure 9). The decreased cell apoptosis of the DSPE-PEG-C60/NCTD micelles was mainly attributed to the antioxidative effect of DSPE-PEG-C60. NCTD treatment led to a 1.9-fold increase in ROS levels, as evidenced by the increased DCF fluorescence, in comparison to the control group. In contrast, the C60-modified micelles exhibited ROS levels comparable to the control group, significantly lower than those observed in the free NCTD group when tested on the HK-2 cell line (Figure 10).

Concurrently, the micelles loaded with NCTD demonstrated enhanced cytotoxicity against the HepG2 tumor cells. The DSPE-PEG-C60/NCTD micelles induced greater mitochondrial damage compared to the NCTD group, as demonstrated by a reduction in red fluorescence and an elevation in green fluorescence in the HepG2 cells. In the flow cytometry detection, the NCTD-loaded DSPE-PEG-C60 micelles exhibited a significantly higher apoptosis ratio (48%) than the free NCTD group (24%). The elevated apoptotic ratio observed in the HepG2 cells and the decreased apoptotic ratio observed in the HK-2 cells following treatment with the C60-modified micelles can be attributed to the combined effects of increased intracellular uptake facilitated by the carrier (DSPE-PEG-C60) and the interference of the antioxidant capacity of the C60 derivative with the oxidative stress induced by NCTD. Considering that hepatocellular carcinoma is the primary clinical indication for NCTD, whose main toxicity is nephrotoxicity, the DSPE-PEG-C60/NCTD micelles could potentially provide a favorable safety profile, thus enhancing the clinical utility of NCTD.
Preparation of Micelles

The NCTD-loaded micelles were prepared using the ultrasonication method. NCTD (10 mg) and DSPE-PEG-C60 (or DSPE-PEG) (150 mg) were carefully weighed and placed into a 50 mL centrifuge tube. Then, DMSO (1 mL) and deionized water (10 mL) were added to the centrifuge tube. After the complete dissolution of NCTD and the carriers, DMSO was removed by dialysis (MW = 100). The NCTD-loaded micelles were obtained by ultrasonication of the dialysate followed by filtration (0.22 µm). The optimization of the preparation method, including the feed ratio, ultrasound time and ultrasound power, was investigated (Table 1).

Characterization of Micelles

4.5.1. Morphology

The diameter and morphology of the NCTD-loaded micelles were examined using transmission electron microscopy (TEM, H-7650, Hitachi, Tokyo, Japan). The micellar solution was dropped onto a carbon-coated copper grid and allowed to dry naturally in the air. The micellar morphology was observed immediately using the transmission electron microscope. The micellar size distribution, zeta potential and PDI were determined using dynamic light scattering (DLS) with a Zetasizer Nano ZS-90 instrument (Malvern, UK).

Determination of Entrapment Efficiency

The drug loading capacity and encapsulation efficiency were determined through ultrafiltration centrifugation. NCTD micellar solution (2 mL) was added to the upper centrifuge tube and centrifuged at 4 °C at 12,000 r/min for 20 min. The concentration of NCTD in the filtrate was determined by HPLC, yielding the content of free NCTD in the micellar solution (W1). Afterwards, the concentration of NCTD in the micellar solution was determined by HPLC, yielding the total content of NCTD in the micelles (W2). The mass of the carrier in the solution was W3. The drug loading content (LC) and the encapsulation efficiency (EE) were calculated using the following formulas: EE (%) = (W2 − W1)/W2 × 100%; LC (%) = (W2 − W1)/(W2 − W1 + W3) × 100%.

Stability Evaluation

NCTD-loaded micellar solution (10 mL) was placed in centrifuge tubes either at room temperature or at 4 °C for 0, 7, 14, 21 and 28 days. The stability of the micellar system was investigated by measuring the micelle particle size distribution (PSD), PDI and zeta potential.

In Vitro Drug Release Assay

In order to meet the requirements of the "sink condition", SDS was added to the PBS buffer solution to improve the solubility of NCTD in the buffer. NCTD-loaded micelle solution (1 mL) was placed into a dialysis bag (MW = 2000), which was immersed in 30 mL PBS buffer solution (containing 1% SDS) and stirred at 37 °C for 48 h at 100 r/min. The released NCTD concentration was determined at predetermined time points using the HPLC method. The release rates of NCTD from the micelles were determined by dividing the amount of NCTD released within a certain period of time by the initial NCTD content of the micelles.
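A minimal sketch of the EE/LC calculation described above, assuming the mass-balance definitions given (W1, free NCTD in the ultrafiltrate; W2, total NCTD in the micellar solution; W3, carrier mass); the input masses are hypothetical and do not correspond to a specific batch in Table 2.

```python
def encapsulation_metrics(w1_free_mg: float, w2_total_mg: float,
                          w3_carrier_mg: float) -> tuple[float, float]:
    """Encapsulation efficiency and drug loading content from mass balance.

    EE (%) = encapsulated drug / total drug * 100
    LC (%) = encapsulated drug / (encapsulated drug + carrier) * 100
    """
    encapsulated = w2_total_mg - w1_free_mg
    ee = encapsulated / w2_total_mg * 100.0
    lc = encapsulated / (encapsulated + w3_carrier_mg) * 100.0
    return ee, lc

# Hypothetical batch: 10 mg NCTD fed, 150 mg carrier,
# 0.8 mg free NCTD found in the ultrafiltrate.
ee, lc = encapsulation_metrics(w1_free_mg=0.8, w2_total_mg=10.0,
                               w3_carrier_mg=150.0)
print(f"EE = {ee:.1f}%, LC = {lc:.2f}%")  # EE = 92.0%, LC = 5.78%
```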
Cell Lines and Cell Culture

Human hepatocellular carcinoma (HCC) cell lines (BEL-7402, HepG2) and immortalized normal human hepatocytes (L02) were obtained from the Cell Bank of the Chinese Academy of Sciences in Shanghai, China. Immortalized normal human kidney cells (HK-2) were purchased from Pricella Life Technology Co., Ltd. (Wuhan, China). HepG2 cells and L02 cells were cultured in DMEM medium supplemented with 10% FBS and 1% antibiotics (100 U/mL penicillin G and 0.1 mg/mL streptomycin). BEL-7402 cells were maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS) and 1% antibiotics. HK-2 cells were maintained in DMEM/F12 medium supplemented with 10% FBS and 1% antibiotics. Cells were maintained at 37 °C in a humidified environment with 5% CO2.

Cell Viability Assay

Cell viability was assessed by the CCK-8 method. Cells in the logarithmic growth phase were seeded onto 96-well plates (3 × 10^3/well) for 24 h and then treated either with free NCTD or with NCTD-loaded micelles at the equivalent NCTD dosage for 72 h. CCK-8 solution (10 µL) was added to each well, and the cells were incubated for another 30-60 min. The absorbance (OD value) of each well was measured at 450 nm on an MK-3 microplate reader (Thermo Fisher Scientific, United States). The untreated control samples were assigned a viability of 100%. The cell inhibition rate was calculated using the following formula: cell inhibition rate = [(OD450 control well − OD450 administration well)/OD450 control well] × 100%.

4.6.2. Acridine Orange/Ethidium Bromide (AO/EB) Staining

Cell apoptosis was detected by dual acridine orange/ethidium bromide (AO/EB) staining. Cells in the logarithmic growth phase were seeded into 24-well plates (5 × 10^4/well), and the plates were incubated in a CO2 incubator (37 °C, 95% humidity and 5% CO2) for 24 h. Cells were treated either with free NCTD or with NCTD-loaded micelles at the same concentration of NCTD for another 24 h, followed by digestion with trypsin and collection. Dual fluorescent staining solution (12 µL) containing acridine orange and ethidium bromide was added to each group. After a half-hour incubation, the morphology of apoptotic cells was examined using a fluorescence microscope (Nikon Eclipse Ti-S, Tokyo, Japan) at 10× magnification.

JC-1 Staining

To measure mitochondrial membrane potential, cells at the logarithmic growth stage were incubated in 24-well plates (5 × 10^4/well) for 24 h and treated with free NCTD or NCTD-loaded micelles at the same concentration of NCTD for another 24 h. Then, the treated cells were washed and incubated with DMEM containing 10 µg/mL JC-1 in the dark at 37 °C for 30 min. The cells were washed with the staining buffer, and JC-1 fluorescence was visualized using a Nikon Ti-S microscope at 20× magnification.
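The inhibition-rate formula above is a direct ratio of OD450 readings; the short sketch below applies it to illustrative triplicate values (not data from this study).

```python
import numpy as np

def inhibition_rate(od_control, od_treated):
    """Cell inhibition rate (%) from CCK-8 OD450 readings:
    [(OD_control - OD_treated) / OD_control] * 100."""
    return (np.mean(od_control) - np.mean(od_treated)) / np.mean(od_control) * 100.0

# Illustrative triplicate OD450 values.
control = [1.02, 0.98, 1.05]
nctd    = [0.41, 0.44, 0.39]
print(f"Inhibition: {inhibition_rate(control, nctd):.1f}%")
```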
Apoptosis by Flow Cytometry

The effect of the NCTD-loaded micelles on cell apoptosis was also evaluated by flow cytometry. Cells in the logarithmic growth stage were cultured on 6-well plates (3 × 10^5/well) for 24 h and then treated either with free NCTD or with NCTD-loaded micelles at the same concentration of NCTD for an additional 24 h. The cells were then collected, treated using the Annexin V-FITC/PI apoptosis assay kit and incubated in the dark at room temperature for 15 min. The stained cells were examined by flow cytometry and analyzed using the CytExpert software (Version 2.4).

4.6.5. Intracellular ROS Detection

ROS production upon micelle treatment was detected by 2′,7′-dichlorofluorescein diacetate (DCFDA) staining using a DCFDA-based assay kit (Biosharp). Cells at the logarithmic growth stage were cultured on 6-well plates (3 × 10^5/well) for 24 h and treated with free NCTD or NCTD-loaded micelles at the same concentration of NCTD for another 24 h. The cells were collected, stained with DCFDA in the dark at 37 °C for 30 min, detected by flow cytometry and analyzed using the CytExpert software.

Statistical Analysis

All data were generated from three independent experiments and are presented as means ± SD. The data were analyzed using Student's t-test in Prism software version 8.0 (GraphPad Software Inc., San Diego, CA, United States), and the critical level of significance was set at p < 0.05.

Conclusions

In this study, we prepared DSPE-PEG-C60/NCTD micelles with an optimal particle size of 91.57 nm (PDI = 0.231) and a negative zeta potential of −13.8 mV. This study showed that the DSPE-PEG-C60/NCTD micelles decreased cytotoxicity in normal renal cells (HK-2) while enhancing cytotoxicity against hepatoma cells (HepG2).

Figure 5. Cumulative release kinetics and Higuchi equation of NCTD micelles at different pH: pH = 7.4 (A,B), pH = 6.5 (C,D) and pH = 5.5 (E,F).

Figure 7. Fluorescence microscopic images showing the live and dead cell assay of NCTD and NCTD-loaded micelles in HK-2 cells and HepG2 cells: control (A), free NCTD (B), DSPE-PEG/NCTD (C), and DSPE-PEG-C60/NCTD (D). Cells were treated with micelles or free NCTD at equivalent concentrations of 75 µM NCTD (for HK-2 cells) or 150 µM NCTD (for HepG2 cells) for 24 h. The cells were then stained with acridine orange/ethidium bromide and observed under a fluorescence microscope. (A-D) Fluorescence microscopic images of HK-2 cells; (E-H) fluorescence microscopic images of HepG2 cells. The scale is 100 nm.

Figure 9. Effects of micelles and free NCTD on apoptosis of HK-2 cells and HepG2 cells: control (A), free NCTD (B), DSPE-PEG/NCTD (C) and DSPE-PEG-C60/NCTD (D). Cells were treated with micelles or free NCTD at equivalent concentrations of 75 µM NCTD (for HK-2 cells) or 150 µM NCTD (for HepG2 cells) for 24 h. Afterward, Annexin V/PI staining and flow cytometry detection were performed. (A-D) Representative pseudo-color plots of Annexin V/PI staining of HK-2 cells. (E) Apoptosis ratio of the HK-2 cells. (F-I) Representative pseudo-color plots of Annexin V/PI staining of HepG2 cells. (J) Apoptosis ratio of the HepG2 cells. Values are means ± standard deviations of three independent experiments (** p < 0.01, *** p < 0.001 and **** p < 0.0001). Different colored dots represent the number of cells.
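The group comparisons behind the flow cytometry results rest on Student's t-test over three independent experiments (see Statistical Analysis above). A minimal sketch of that comparison, using scipy rather than Prism and hypothetical per-experiment apoptosis ratios chosen to echo the reported group means, is:

```python
from scipy import stats

# Illustrative per-experiment apoptosis ratios (%); the reported group
# means were ~36% (free NCTD) vs ~12% (DSPE-PEG-C60/NCTD) in HK-2 cells.
free_nctd   = [34.8, 36.5, 36.7]
c60_micelle = [11.2, 12.4, 12.5]

t, p = stats.ttest_ind(free_nctd, c60_micelle)  # two-sided Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}")              # p < 0.05 -> significant
```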
2023-11-18T16:07:33.814Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "eb0140aea25303c2e7e8242f29ab01b380975f9e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/22/7609/pdf?version=1700049991", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f50a10910c15cfccccc85641a7349da6619dcc1e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
6638467
pes2o/s2orc
v3-fos-license
Angiostrongylus vasorum: Experimental Infection and Larval Development in Omalonyx matheroni

The susceptibility and suitability of Omalonyx matheroni as an intermediate host of Angiostrongylus vasorum and the characteristics of larval recovery and development were investigated. Mollusks were infected, and from the 3rd to the 25th day after infection, larvae were recovered from groups of 50 individuals. The first observation of L2 was on the 5th day, and the first observation of L3 was on the 10th day. From the 22nd day on, all larvae were at the L3 stadium. Larval recovery varied from 78.2% to 95.2%. We found larval development to be faster in O. matheroni than in Biomphalaria glabrata. Our findings indicate that this mollusk is highly susceptible to A. vasorum. Infective L3 were orally inoculated into a dog, and the prepatent period was 39 days. This is the first study to focus on O. matheroni as an intermediate host of A. vasorum.

Introduction

The nematode Angiostrongylus vasorum is a parasite of wild and domestic canids. Adult worms are found in the right ventricle, pulmonary artery, and its branches, where sexual reproduction and oviposition take place. The first-stage larvae (L1) hatch in the alveoli, migrate up the bronchial tree, and are swallowed and then excreted into the environment along with the host feces. Infection frequently leads to pneumonia, loss of racing performance, coughing, and anemia [1]. Severely infected dogs may develop cardiac insufficiency and pulmonary fibrosis, followed by weight loss, hemorrhagic diatheses, and death [2,3]. Several terrestrial and aquatic mollusks may act as intermediate hosts [4][5][6][7]. The genus Omalonyx (Pulmonata: Stylommatophora) belongs to the family Succineidae, which is composed of hermaphroditic terrestrial pulmonates that are morphologically diverse. Omalonyx sp. have a reduced flat shell and slug-like body, and they can be found in humid soil and on macrophytes [8][9][10]. They have a broad geographical distribution east of the Andes in South America and in the Lesser Antilles Islands [9], including localities where A. vasorum is known to occur [11,12]. These mollusks are important intermediate hosts of the trematode Leucochloridium [13][14][15] and are able to support the life cycle of Angiostrongylus costaricensis in the laboratory [12]. There is no record of Angiostrongylus naturally infecting Omalonyx. This investigation aimed to evaluate the susceptibility and suitability of Omalonyx matheroni as an intermediate host of A. vasorum and to analyze the parasite's larval development from L1 to L3. Studies on the development of A. vasorum in different hosts contribute to the understanding of the parasite's biology and of the host-parasite relationship.

2.1. Mollusks. Young individuals (25 to 30 days old) of O. matheroni (n = 1150), measuring 9 to 14 mm in length, raised under laboratory conditions from parental specimens collected at Pampulha Lake in Belo Horizonte, Minas Gerais State, Brazil, were employed in this trial.

Larval Development. The number of larvae recovered each day is presented in Table 1. Mean L3 recovery is reported as the number of L3 recovered divided by the number of L1 that penetrated the host; these proportions were between 78.2% and 95.2%.

Dog Infection. Larvae were first detected in the feces on the 39th day (512 per g of feces) and increased until the 60th day (3320 per g of feces).
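The mean L3 recovery defined above is a simple ratio; a one-function sketch (with illustrative counts, not the study's raw data) is:

```python
def l3_recovery(n_l3_recovered, n_l1_penetrated):
    """Mean L3 recovery: L3 recovered / L1 that penetrated the host (%)."""
    return n_l3_recovered / n_l1_penetrated * 100.0

# Illustrative counts only; the study reports recoveries of 78.2-95.2%.
print(f"{l3_recovery(476, 500):.1f}%")  # -> 95.2%
```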
This increase was followed by a gradual decrease that reached 1120 larvae per gram of feces on the 100th day (Figure 1). During this 100-day period, the amount of larvae released varied, but larvae were never absent.

Discussion

Nematodes of the genus Angiostrongylus, including the species A. vasorum, can infect a wide spectrum of intermediate hosts of the class Gastropoda [18]. This system thus represents an interesting experimental model for the study of the host-parasite relationship. The susceptibility of a mollusk to a protostrongylid has been defined in terms of L1 penetration capability, the possibility of L3 development, and the time required to complete larval development [19,20]. The present investigation demonstrates the susceptibility and suitability of O. matheroni as an intermediate host of A. vasorum. The percentage of L3 recovery in O. matheroni varied from 78.2% to 95.2%. This high percentage of larval recovery indicates that this mollusk is highly susceptible to A. vasorum. Infective L3 recovered from these mollusks developed into fertile adults, and L1 were observed in the feces of the infected dog. Several factors influence the larval development of protostrongylids in the intermediate host, such as environmental conditions (i.e., temperature) and biological conditions (i.e., host species and age) [21][22][23][24]. Geritcher [21] emphasized that among the environmental factors affecting the development of protostrongylid larvae in snails, the most important is temperature. Low temperatures (18 to 20 °C) increase the time of larval development, whereas higher temperatures (25 to 28 °C) accelerate it, as observed for the genus Angiostrongylus [17,25,26]. In this work, we observed that larval development of A. vasorum is faster in O. matheroni than in other known intermediate hosts [17,27]. This conclusion is based on comparisons with data available in the literature. Experimental infection of several species of terrestrial mollusks (maintained at 18 to 23 °C) allowed the first observations of L3 on the 16th and 17th day after infection [27]. Such low temperatures increase the time of larval development, so we focus our discussion on works performed at higher temperatures (25 to 28 °C). In a trial where B. glabrata was maintained at 25 to 27 °C, L2 were recovered between the 7th and 8th day after infection and L3 on the 14th and 15th [17]. Our results for O. matheroni demonstrated that L2 can be observed for the first time on the 5th day after infection and L3 on the 10th day. Furthermore, after 21 days, almost all larvae recovered were L3. The exploitation of the host's immune response by the parasite was discussed by Damian [28], and the encapsulation of A. costaricensis in veronicellid slugs has been considered an example of such a process [29]. Larvae were observed in the feces of the experimentally infected dog 39 days after infection. These results corroborate those of Bessa et al. [7], Oliveira-Júnior et al. [30], and Barçante et al. [16], who observed prepatent periods varying from 28 to 108 days after infection. In view of the high reproductive rate of O. matheroni and the feasibility of laboratory rearing (accelerated larval development, efficient larval recovery, and larval viability), we consider this mollusk very useful for maintaining the A. vasorum cycle in the laboratory.
Moreover, this mollusk is also an interesting experimental model for studies on the host-parasite relationship between A. vasorum and its intermediate hosts.
2014-10-01T00:00:00.000Z
2011-05-31T00:00:00.000
{ "year": 2011, "sha1": "9955b5f67c7ecc0337861e1a59c77209ad1a56b6", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jpr/2011/178748.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2638783ecd81eab9bbb09e0ed881ab07327dea2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258843374
pes2o/s2orc
v3-fos-license
Assessing coastline recession for adaptation planning: sea level rise versus storm erosion

The Sixth Assessment report (AR6) of the Intergovernmental Panel on Climate Change (IPCC) states with high confidence that most sandy coasts around the world will experience an increase in coastal erosion over the twenty-first century. An increase in long term coastal erosion (coastline recession) along sandy coasts can translate into massive socio-economic impacts, unless appropriate adaptation measures are implemented in the next few decades. To adequately inform adaptation measures, it is necessary to have a good understanding of the relative importance of the physical processes driving coastline recession, as well as of linkages between consideration (or not) of certain processes and the level of risk tolerance; understandings that are hitherto lacking. Here, we apply the multi-scale Probabilistic Coastline Recession (PCR) model to two end-member sandy coastal types (swell dominated and storm dominated), to investigate where and when coastline recession projections are dominated by the differential contributions from Sea Level Rise (SLR) and storm erosion. Results show that SLR substantially increases the projected end-century recession at both types of coasts and that projected changes in the wave climate have only a marginal impact. An analysis of the Process Dominance Ratio (PDR), introduced here, shows that the dominance of storm erosion over SLR (and vice versa) on total recession by 2100 depends on both the type of the beach and the risk tolerance levels. For moderately risk-averse decisions (i.e. decisions accounting only for high exceedance probability recessions and hence do not account for very high amounts of potential recession—for example, the placement of temporary summer beach cabins), additional erosion due to SLR can be considered as the dominant driver of end-century recession at both types of beaches. However, for more risk-averse decisions that would typically account for higher potential recession (i.e. lower exceedance probability recessions), such as the placement of coastal infrastructure, multi-storey apartment buildings etc., storm erosion becomes the dominant process. The results of this study provide new insights on which physical processes need to be considered when and where in terms of numerical modelling efforts needed for supporting different management decisions, potentially enabling more streamlined and comprehensive assessments of the efficacy of coastal adaptation measures.

A recent global assessment projects that a substantial proportion of the world's sandy beaches will experience severe erosion (defined therein as a shoreline retreat exceeding 100 m relative to the present-day shoreline position) by midcentury, increasing to about two-thirds by the turn of the twenty-first century. In terms of economic consequences of coastline recession, Hinkel et al. 5 have estimated that the cost of migration over the twenty-first century due to coastal erosion could reach US$ 1 trillion. To avoid or minimize socio-economic impacts that are certain to arise from coastline recession [5][6][7][8], especially in heavily inhabited and utilized locations, adaptation strategies and measures that mitigate coastal risks are needed, particularly considering the lucrative economic rewards often provided by investments in the coastal zone. Furthermore, as adaptation planning needs to account for the fairly long lead-times associated with strategies such as managed retreat 9, action is needed sooner rather than later.
While there is now broad awareness that climate change is very likely to result in coastline recession along open sandy coasts (in the absence of additional sources of sand supply or physical barriers to recession) 4,10-13, it is still generally presumed that this recession will be driven by sea level rise. This simplistic view, however, does not hold in all situations, because, in addition to SLR, there is also a hysteresis effect of the storm erosion-beach recovery cycle that contributes to the long-term position of the coastline, and hence to the amount of coastline recession [14][15][16]. Could this storm erosion leave a larger signal on total coastline recession at some types of coasts? Or could the effect of storms be largely ignored (thus greatly reducing the complexity of the modelling effort required) at some coastal types? Does the inclusion of storm erosion in computing coastline recession become more important for increasingly risk-averse decision making? These questions are all highly relevant for decision making at the local scale (i.e. at a given site), yet to date they remain unanswered. This study attempts to answer some of these questions through the application of the Probabilistic Coastline Recession (PCR) model, under the high-emission SSP5-8.5 scenario over the twenty-first century, at two distinctly different study sites representing two main categories of sandy coastal types around the world. The PCR model, fully described in Ranasinghe et al. 14, is a physics-based, multi-scale, probabilistic model that provides probabilistic projections of coastline position change (generally over a 100 year period) due to the combined effect of SLR and storm erosion. The model is multi-scale in that it concurrently and seamlessly accounts for coastline position change due to both SLR and storm erosion (due to the combined effects of storm surge and storm waves), while allowing beach recovery between storm events. The PCR model uses data-fitted distributions of storm wave conditions and water levels to generate a 100-year synthetic storm time series that is then superimposed on a chosen SLR trajectory to calculate the associated movements of the coastline over ~ 100 years within a Monte Carlo framework. The ~ 100-year simulation is repeated multiple times (~ 1000) until convergence is obtained for the projected coastline recession at low (e.g. 0.05) exceedance probabilities (see "Methods" for a more detailed description of the model). The model and its derivatives have been applied at several locations around the world (Australia, France, Japan, Sri Lanka) [15][16][17][18][19][20][21]. Here we apply the PCR model to two beaches that are representative of two main categories of open coasts (in terms of hydrodynamic forcing) present around the world: swell dominated (Narrabeen beach, Australia) and storm dominated (Noordwijk aan Zee strand, The Netherlands) (see "Methods" for study site descriptions). In terms of governing morphodynamic processes, too, these two beaches are diametrically opposite, with Narrabeen being swash dominated and Noordwijk aan Zee being drift dominated, albeit without any noteworthy alongshore gradient in longshore sediment transport rate for the most part. The model implementation for the two sites is identical to that described in Ranasinghe et al. 14 and Wainwright et al. 15 (for Narrabeen, Profile #4) and Li et al. 17 (for Noordwijk aan Zee, JARKUS profile #8250).
In both cases, the toe of the dune is taken as the coastline position indicator, and the model is run for the entirety of the twenty-first century with 3 different wave climates, with and without SLR (SSP5-8.5), as summarized in Table 1. Note that, as indicated in Table 1, here we only changed the storm wave heights to account for climate change driven variations in storms, while all other storm parameters (e.g. wave period, direction, post-storm profile recovery) were kept unchanged at present-day values. As storm surge projections for both sites indicate negligible changes over the twenty-first century 22,23, the present-day storm surge climate was assumed to remain unchanged over the twenty-first century.

Results

The projected coastline recessions by the end of the twenty-first century (relative to the end of the twentieth century) for the two sites under different forcing conditions are shown in terms of exceedance probabilities ranging from 0.01 to 0.5 (expected value) in Fig. 1. Comparison of Simulations #1 and #2 immediately shows that SLR substantially increases the projected end-century recession at both sites (by about 25 m and 15 m at Narrabeen and Noordwijk aan Zee, respectively). However, the consideration of projected changes in the wave climate results in rather small differences in the recession projected by the end of the century at both sites (Simulations #2, #3 and #4). Note that the difference in the shapes of the recession distributions (i.e. shape of the curves in Fig. 1 left vs right) for the two study locations is due to the semi-log axis plotting done here; when the curves are plotted on a linear scale, their shapes are in fact similar. However, a noteworthy difference in the response characteristics of the two sites is that the rate of recession (i.e. the first derivative) slowly increases for lower exceedance probabilities at Narrabeen, as indicated by the slightly concave shape of the curve from about 0.1 to 0.01 probability (Fig. 1 left), while at Noordwijk aan Zee, the rate of recession slowly decreases in the same low probability range, as indicated by the slightly convex shape of the curve (Fig. 1 right). This is because the storm wave height hardly increases for lower exceedance probabilities at Noordwijk aan Zee 26, while at Narrabeen it keeps increasing over the full exceedance probability range considered here 29. The contribution of SLR to the total recession by the end of the century can be approximated by subtracting the results of the simulation with storms only (Simulation #1) from those of the simulations that account for both storms and SLR (Simulations #2 to #4). The SLR-alone recession computed in this way includes the effect of non-linear interactions between SLR and storms, which is in fact a unique feature of the PCR model. Here, we introduce the Process Dominance Ratio (PDR), defined as PDR = R_storm / R_SLR, where R_storm is the recession obtained from the storms-only simulation and R_SLR is the SLR contribution obtained by the subtraction described above; the PDR thus compares the relative contributions from storm erosion and SLR to total recession. A PDR value of greater than 1 indicates storm dominance, while a PDR value less than 1 indicates the dominance of SLR on total recession. Figure 2 shows the Process Dominance Ratio (PDR) for the two sites considered here.

Process dominance and implications. Inspection of Fig. 2 indicates that the question of whether SLR or storm erosion dominates twenty-first century coastline recession does not have a simple answer.
Rather, the answer depends on both the coastal type and the exceedance probability of interest (representing the risk tolerance level of decision makers). Figure 2 shows that SLR can be considered to dominate the amount of total recession at both coastal types when making moderately risk-averse decisions, which can be based on the expected value of recession (i.e. 0.5 exceedance probability). Such decisions could, for example, be related to the placement of temporary coastal structures (e.g. summer beach cabins/restaurants) or low-cost recreational facilities. At the other extreme of very highly risk-averse decisions (e.g. construction of multi-storey apartments and tourist hotels, coastal infrastructure placement) that would typically accommodate the possibility of larger recessions with much lower exceedance probabilities, storm erosion can be considered to dominate over the SLR effect at both coastal types. Figure 2 shows that, while storm erosion starts to dominate over the SLR effect only for recessions with exceedance probabilities lower than about 0.05 at the swell dominated beach, the transition from SLR dominance to storm dominance occurs at a recession with a much higher exceedance probability (0.3) at the storm dominated beach. These results also have implications in terms of the modelling effort required to support different management decisions. For example, at both coastal types, management decisions that are willing to accept a 0.5 probability of an investment being subjected to erosion damage by 2100 may well be sufficiently supported by modelling SLR driven coastline recession in detail. However, a decision at a storm dominated beach concerning an investment that can tolerate only a much lower probability of erosion damage would also require detailed modelling of storm erosion.

Methods

Study sites. Narrabeen beach, Sydney, Australia. Located about 20 km north of Sydney, the Narrabeen-Collaroy embayment comprises a 3.6 km sandy system bounded by Narrabeen headland to the north and Long Reef Point headland to the south (Fig. 3a,b). The sandy beach is composed of quartz sand with a median grain diameter D50 ≈ 0.3 mm 27 and is backed by dunes. Tides in the region are semi-diurnal and microtidal 28, and storm surges in the area are small 23,29. All year round, Narrabeen is exposed to swells propagating from the Southern Ocean 28, and as a result the wave climate at Narrabeen is swell dominated. The average significant wave height (Hs) in the area is 1.6 m, with a peak wave period of 10 s, and the dominant wave direction is from the south-south-east (SSE). Although storms are more common during winter, there is no great seasonal variation in the wave climate 29,30. Water level data is available since 1914 from the nearby Fort Denison tide gauge, while continuous wave data is available since 1971 from the Botany Bay (non-directional) and Long Reef (directional) gauges. Beach profiles have been surveyed at Narrabeen beach at a monthly frequency since 1976 31. The results presented in this study pertain to the central profile, known as Profile #4, shown in Fig. 3b.

Noordwijk aan Zee strand, The Netherlands. Noordwijk aan Zee strand is a sandy beach located along the ~ 120 km long central Dutch coast, commonly known as the "Holland coast" (Fig. 3c,d). The beach is backed by dunes 17,32, and the median grain diameter is 0.15-0.25 mm 33. Tides in the region are semi-diurnal and microtidal 34.

To illustrate the SLR-storm interaction simulated by the PCR model, consider two successive 1 in 10 year storms: if the second storm occurs after 10 years, SLR over this 10 year period would have resulted in the elevation of the MWL by 10 × (SLR/year).
However, due to the slow nature of post-storm dune recovery, it is very unlikely that the dune would have completely recovered to its original position in the 10 year period between the two 1 in 10 year storms (i.e. the hysteresis effect). Due to this hysteresis effect, say the dune only advanced 5 m seawards from its eroded position during the 10 year period between the two storms. When the second 1 in 10 year storm occurs under this situation of an elevated MWL and a dune that is already 5 m landward of its present-day position, at least an additional 10 m of dune retreat can reasonably be expected due to the second storm. The net dune retreat over the 10 year period would then be 15 m (10 − 5 + 10). As the MWL keeps increasing due to SLR, this process will repeat many times, leading to a net retreat of the coastline. The PCR model simulates this physical process using the workflow described below.

1. Generate a long (typically ~ 100 years) time series of storms using Callaghan et al.'s 29 JPM synthetic storm generator (see below for a brief description of the JPM approach).
2. Using IPCC sea-level rise projections, estimate the sea-level rise, relative to a benchmark value (typically the beginning of the twenty-first century), at the time that each storm in the synthetic time series occurs.
3. For each storm in the generated storm time series, estimate coastline recession due to the combined effect of the storm and SLR at the time of each storm, while allowing for profile recovery between storms, using a physics-based storm erosion model that is appropriate for the selected coastline indicator (in this application, Larson et al.'s 36 dune impact model was used for Narrabeen beach (see Ranasinghe et al. 14), while a combination of DUNERULE 37 and XBeach 38 was used for Noordwijk aan Zee strand (see Li et al. 17); the erosion models are calibrated for the respective sites using measured historical storm erosion data; for more details please see Ranasinghe et al. 14 and Li et al. 17). The combined effect of SLR and storm erosion is simulated in the PCR model by running the storm erosion model for each storm with the SLR-induced elevation of the MWL at the time of that storm.
4. Track and store the position of the coastline indicator throughout the simulation period.
5. Subtract the initial position of the coastline indicator from its final computed position (averaged over the last 2-3 years of the simulation) to estimate coastline recession during the simulation period (~ 100 years).
6. Repeat steps 1-5 until the computed low exceedance probability (e.g. 0.05) recessions converge (i.e. bootstrapping).

The JPM (Joint Probability Model), adopted in step 1 of the PCR workflow above to generate the synthetic storm time series, is fully described in Callaghan et al. 29. This approach fits marginal, dependency and conditional distributions to long time series of forcing parameters (i.e. storm wave height, storm duration, storm wave period, storm wave direction, storm spacing, and storm surge), which are then used within a Monte Carlo simulation to derive a time series of storms and their associated characteristics.

Limitations. This study concentrated only on sandy coasts, and as such the potential applicability of the results presented here is likely limited to the ~ 31% of the ice-free global coastline that is sandy 39. At other types of coasts (e.g.
muddy coasts, cliffed coasts), the relative dominance characteristics of storm erosion versus sea level rise on total coastline recession might be different from what is reported here. While the two study sites considered here can be taken to represent end-members of sandy coast types that are more or less diametrically opposite with respect to wave and surge climate (Narrabeen: swell dominated, negligible surge; Noordwijk: storm dominated, high surge) as well as nearshore morphodynamics (Narrabeen: swash dominated; Noordwijk: drift dominated), there are many other intermediate types of sandy coasts within the full spectrum of the wide range of sandy coasts present around the world. Furthermore, different geologic controls and shelf dynamics may be present at different sites around the world. The PCR model applications here do not consider alongshore gradients in longshore sediment transport or fluvial sediment supply to coasts. While these omissions do not affect the results at the two study sites, at other sites such processes may need to be included in the model as sediment sources or sinks. Post-storm profile recovery is an important process that needs to be represented in PCR model simulations. Being a reduced complexity model, by design, the PCR model does not account for the full range of coastal forcing conditions that can be experienced (e.g. tides, waves, winds, ocean currents etc.) over the simulation period, but operates by running sequences of individual storms (together with SLR), with simplified inter-storm profile recovery parameterizations. Thus, the model does not account for potentially important long-term beach and dune migration processes that may be occurring. Although observed post-storm profile recovery rates were available (and used in the PCR applications) for the two case study sites described in the present study, this may not be the case at most other locations. Dastgheib et al. 16,20 present a "reverse-engineering" solution that could be used in such situations to derive an estimate of the average profile recovery rate via a few exploratory PCR simulations combined with observed coastline position change rates. However, what would be ideal is a process-based, yet simple formulation that could estimate the post-storm profile recovery rate at any location based on easily available parameters. To the authors' knowledge, such a formulation doesn't exist yet. Another limitation of the present version of the PCR model is its assumption that the profile shape remains unchanged while it moves landwards (during storms) and seawards (during inter-storm recovery periods). While this does not account for increased erosion when a storm occurs on an accreted profile and decreased erosion when storms occur in quick succession on an already eroded profile, the net effect of such storm-by-storm differences in erosion would likely be small over a long simulation period, especially considering that a typical PCR simulation performs a 100-year simulation about 1000 times within the model's Monte Carlo framework. In terms of profile adjustment to SLR, PCR model applications do account for the raising of the dune toe and MWL in response to SLR, although this is done in different ways for different applications, as appropriate for the erosion model used in the PCR application (e.g. in the Narrabeen beach PCR application, the term that represents the vertical elevation difference between the dune toe and the start of swash in the Larson et al.
(2004) dune impact model is increased by the amount of SLR that occurs between storm (i − 1) and storm (i) when computing the dune erosion for storm (i)). The PCR model results presented in this study are limited, by design, to end-of-twenty-first-century projections under the high-emission SSP5-8.5 climate scenario. Consideration of such a high-end scenario enables a clear separation of the different system response characteristics to forcing that were the focus of this study (i.e. determining when storm erosion dominates over sea level rise, and vice versa, in terms of total coastline recession). The dominance of one process over the other may not be as clear for lower emission scenarios. In contrast, the relative dominance of SLR over storm erosion in total coastline recession would become even clearer for high-end SLR projections, such as, for example, the low likelihood high impact (LLHI) SLR storyline presented in IPCC AR6.
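The PCR workflow and the PDR defined above can be illustrated with a heavily simplified, self-contained toy model: Poisson storm arrivals, exponential per-storm dune retreat amplified by an SLR-raised mean water level, and linear inter-storm recovery. This is a sketch of the general Monte Carlo logic only; the storm-frequency, erosion and recovery parameters are invented, and the actual PCR model uses calibrated, site-specific erosion models and the JPM storm generator.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_recession(years=100, slr_per_year=0.008, with_slr=True,
                       storms_per_year=5.0, recovery_m_per_year=4.0):
    """One toy realization: Poisson storm arrivals, exponential per-storm
    dune retreat amplified by the SLR-raised MWL, and linear inter-storm
    recovery of the dune toe toward its initial position."""
    t = 0.0
    retreat = 0.0  # m landward of the initial dune-toe position
    while True:
        gap = rng.exponential(1.0 / storms_per_year)  # years to next storm
        t += gap
        if t >= years:
            break
        retreat = max(0.0, retreat - recovery_m_per_year * gap)  # recovery
        mwl = slr_per_year * t if with_slr else 0.0               # raised MWL
        retreat += rng.exponential(3.0) * (1.0 + 10.0 * mwl)      # storm erosion
    return retreat

def recession_at(p_exc, with_slr, n=2000):
    """Recession exceeded with probability p_exc across n realizations."""
    runs = np.array([simulate_recession(with_slr=with_slr) for _ in range(n)])
    return np.quantile(runs, 1.0 - p_exc)

for p_exc in (0.5, 0.1, 0.01):
    r_storm = recession_at(p_exc, with_slr=False)   # storms-only (cf. Sim. #1)
    r_total = recession_at(p_exc, with_slr=True)    # storms + SLR
    r_slr = max(r_total - r_storm, 1e-9)            # SLR contribution
    print(f"P={p_exc:<5} storms={r_storm:6.1f} m  total={r_total:6.1f} m  "
          f"PDR={r_storm / r_slr:5.2f}")
```

As in the paper's PDR analysis, dominance in this toy setup flips with the exceedance probability: the storm term grows toward the tail of the distribution, while the SLR term dominates near the expected value.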
2023-05-24T06:17:49.382Z
2023-05-22T00:00:00.000
{ "year": 2023, "sha1": "2a81128ad9c3dcae082c03744991ab1db4af97b5", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7fbd66bfd0afd7935199cb8a8f02f6dde42be485", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
220634296
pes2o/s2orc
v3-fos-license
Salpingectomy may decrease antral follicle count but not live birth rate for IVF-ET patients aged 35–39 years: a retrospective study

Purpose
Problems with the fallopian tubes are one of the main reasons for women to undergo in vitro fertilization-embryo transfer (IVF-ET). A large proportion of women with ectopic pregnancy, fallopian tube obstruction or hydrosalpinx have had one or both fallopian tubes removed by salpingectomy. With increasing age, ovarian reserve deteriorates, the numbers of retrieved oocytes, available embryos and high-quality embryos are reduced, and the live birth rate for women undergoing IVF treatment is affected. It is therefore important to understand how salpingectomy affects live birth rates for IVF patients of different ages. This study analyzed how patients' age and salpingectomy influenced ovarian reserve, ovarian response and pregnancy outcomes for infertile women undergoing IVF-ET.

Methods
A total of 1922 patients who underwent IVF-ET treatment from January 1, 2012, to December 31, 2018, were included in this retrospective study. The patients were divided into two groups according to whether or not they had a previous history of salpingectomy. The salpingectomy (group A, 534 patients) and control groups (group B, 1388 patients) were then further divided into two subgroups according to patient age (age < 35 years, and age 35-39 years). Ovarian reserve, ovarian response, and IVF outcomes were investigated for each subgroup. A logistic regression model was used to estimate the relationship between clinical pregnancy and live births and patients' baseline characteristics.

Results
In the salpingectomy group, antral follicle counts (AFC) were significantly lower for the subgroup aged 35 to 39 years compared with the control group, but this difference did not appear in women younger than 35 years. In addition, there were no significant differences in levels of basal follicle stimulation hormone (FSH), basal luteinizing hormone (LH), basal estradiol (E2), total gonadotropin (Gn) dose, duration of Gn, numbers of retrieved oocytes, fertilization rates, numbers of available embryos, live birth rates, clinical pregnancy rates, miscarriage rates, ectopic pregnancy rates, or multiple pregnancy rates between the salpingectomy group and the control group (P > 0.05). Age was a risk factor for clinical pregnancy and live birth.

Conclusion
Salpingectomy may decrease antral follicle count but not live birth rate for IVF-ET patients aged 35-39 years. Increased female age was negatively associated with clinical pregnancy and live birth.

Introduction

Infertility is estimated to affect 8-12% of couples of child-bearing age worldwide [1]. However, with the rapid development of assisted reproductive technology, an increasing number of infertile women may still achieve a successful pregnancy through in vitro fertilization-embryo transfer (IVF-ET). Infertility may be caused by ovulation disorders, fallopian tube issues, endometriosis, male disorders, and other factors [2]. Tubal infertility caused by fallopian tube obstruction, hydrosalpinx, tubal ligation, or salpingectomy accounts for 25-35% of female infertility [3]. Many women undergo salpingectomy (removal of one or both fallopian tubes) due to ectopic pregnancy, hydrosalpinx or fallopian tube obstruction.
According to the guideline issued by the American College of Obstetricians and Gynecologists (ACOG), salpingectomy is a routine therapy for ectopic pregnancy (EP) cases when the patient is exhibiting signs of intraperitoneal bleeding or ongoing pelvic pain, or when patients have contraindications to more conservative medical treatments [4]. The American Society for Reproductive Medicine (ASRM) also recommends salpingectomy as a therapy for patients with extensive dense peritubal adhesions, a surgically irreparable hydrosalpinx, or a fallopian tube that is damaged beyond repair by infection or endometriosis [5]. IVF was first used to treat cases of tubal factor infertility, and tubal infertility remains one of the chief indications for IVF. The effect of salpingectomy on ovarian reserve and ovarian response in women undergoing IVF treatment is still a controversial issue. Some studies have found that basal follicle-stimulating hormone (bFSH) levels were significantly increased, and serum anti-Müllerian hormone (AMH) levels were decreased, in women who had a salpingectomy compared to women with tubal factor infertility but no salpingectomy [6,7]. Pereira et al. suggested that IVF-ET patients who had a previous unilateral salpingectomy required higher gonadotropin (Gn) doses and longer stimulation durations during controlled ovarian stimulation (COS) compared to patients who had not undergone salpingectomy [8]. However, some investigators showed that there was no significant influence on ovarian reserve, ovarian response, or the number of retrieved oocytes before and after salpingectomy [9][10][11][12][13][14][15][16]. With advancing female age, declines in the number and quality of oocytes are the primary reasons for poorer IVF outcomes, especially in terms of live birth rates [17]. The effect of female age and previous salpingectomy history on live birth rates during the IVF cycle is the main focus of this investigation. Some recently published studies demonstrated that the clinical pregnancy rate for women undergoing IVF after salpingectomy was comparable to that for women with no previous salpingectomy [8,9,16,18]. We investigated whether salpingectomy affected the ovarian reserve, ovarian response and pregnancy outcomes for infertile women of different ages undergoing IVF. In our present study, 1922 women who underwent IVF treatment during this period were included and divided into the salpingectomy group and control group as described above. We found that bFSH levels, basal luteinizing hormone (bLH) levels, basal estradiol (bE2) levels, antral follicle counts (AFC), total Gn dose, and duration of Gn were relatively the same for both groups. There were also no significant differences between the two groups in live birth rates, clinical pregnancy rates, spontaneous abortion rates, ectopic pregnancy rates or multiple pregnancy rates. Within the salpingectomy group, AFCs were significantly lower for the patients aged 35-39 years (group A2) compared to the control group (group B2).

Study population

This retrospective cohort study included patients receiving IVF treatment in the Department of Human Reproductive Medicine, Beijing Obstetrics and Gynecology Hospital, Capital Medical University, during the period from January 1, 2012 to December 31, 2018. A total of 1922 patients were recruited for this study. For all patients, controlled ovarian stimulation data are included from the first IVF cycle only.
This study was approved by the ethics committee of Beijing Obstetrics and Gynecology Hospital, Capital Medical University. Patients were divided into the salpingectomy group and the control group according to whether they had previously undergone salpingectomy. In each group, patients were then divided into two subgroups according to age (less than 35, and 35-39 years). Inclusion criteria were age < 40 years, infertility duration > 12 months and a regular menstrual cycle (21-35 days). For the salpingectomy group, patients were chosen if they had undergone unilateral or bilateral salpingectomy to treat tubal pregnancy or hydrosalpinx and the interval between salpingectomy and IVF was 1-3 years. For the control group, patients were selected if they were diagnosed with unilateral or bilateral fallopian tube obstruction but had not undergone salpingectomy. Patients diagnosed with endometriosis, adenomyosis, polycystic ovarian syndrome, uterine fibroids, genital organ deformity, autoimmune disease, or metabolic disorders, with infertility caused by male factors, or who had been treated with steroids or immune inhibitors in the 6 months prior to IVF were excluded from the study.

IVF treatment

Patients were treated according to one of four controlled ovarian stimulation protocols based on age and ovarian reserve: the GnRH-agonist long protocol (GnRH agonist administration in the luteal phase of the previous cycle), the prolonged GnRH-agonist protocol (prolonged GnRH-agonist administration on Day 2-3 of the previous cycle), the GnRH-agonist short protocol (daily GnRH agonist administration starting on menstrual day 1-2 of the IVF cycle), and the GnRH antagonist protocol (daily GnRH antagonist administration from Day 6-7). Exogenous Gn (human menopausal gonadotrophin, 75 IU, Livzon Pharmaceutical Group, China; or Menopur, 75 IU, Ferring GmbH, Germany; or Gonal-F, 75 IU, Merck Serono, Germany) was administered at a dose ranging from 150 to 375 IU/d based on the patient's age, BMI and ovarian reserve. Patients had serial transvaginal ultrasound scans, and serum LH, E2 and P were measured until the follicles reached maturity. The dosage of Gn was adjusted depending on the ovarian response. Human chorionic gonadotrophin (hCG) (250 µg, Merck Serono Inc., Geneva, Switzerland) was injected subcutaneously at night when the largest follicle reached 18 mm or at least two follicles reached 17 mm in diameter on transvaginal ultrasound. Oocyte retrieval was conducted by transvaginal ultrasound-guided follicular puncture 36-38 h after the hCG trigger. All patients received embryo transfer after oocytes were retrieved; embryo transfer was performed on day 2 or day 3 after oocyte retrieval. The luteal phase was supported by daily progesterone (progesterone capsules, 100 mg, bid, Xianju Pharma, China, and progesterone soft capsules, 0.2 g, tid, Besins Manufacturing Belgium, France, or progesterone sustained-release vaginal gel, 90 mg, qd, Merck Serono, Germany) starting on the day of oocyte retrieval. Pregnancy was diagnosed using a positive serum hCG test 14-16 days after embryo transfer, and clinical pregnancy was confirmed if a gestational sac could be observed by transabdominal ultrasound 28-35 days after embryo transfer. Progesterone support was maintained until the 10th week of pregnancy.

Outcome measures

We analyzed the number of retrieved oocytes, fertilization rate, clinical pregnancy rate, spontaneous abortion rate, ectopic pregnancy rate, live birth rate, and multiple pregnancy rate.
Live birth was defined as one or more newborns with vital signs after 28 completed weeks of gestation. Ectopic pregnancy was defined as a pregnancy occurring outside the uterine cavity. Multiple pregnancy was defined as more than one fetus in a pregnancy. The main measure of interest was the live birth rate (per cycle and per transfer).

Statistical analysis

SPSS 23.0 software (IBM Corporation, New York, USA) was used for data analyses. In the comparisons of groups and subgroups (group A vs. group B, group A1 vs. group A2, group B1 vs. group B2, group A1 vs. group B1, group A2 vs. group B2), statistical differences for non-normally distributed continuous variables were evaluated using the Mann-Whitney U-test. Categorical variables and rates were evaluated using the chi-square test and Fisher's exact test. Logistic regression analysis was performed for clinical pregnancy and live birth, including the following covariates: age, BMI, infertility duration, basal hormone levels, and antral follicle count. Two-sided P values < 0.05 were considered statistically significant. Results are presented as mean ± standard deviation.

Patients' baseline characteristics

A total of 1922 patients in their first IVF cycle were recruited for the present study. Patients who had a previous history of salpingectomy due to ectopic pregnancy or hydrosalpinx were placed in the salpingectomy group (group A, 534 patients), and patients receiving IVF treatment due to fallopian tube obstruction were included in the control group (group B, 1388 patients). The patients' baseline characteristics are summarized in Table 1. All patients were similar in age, infertility duration, BMI, basal FSH levels, basal LH levels, basal E2 levels and AFC; no significant differences were found between the patients in the two groups (P > 0.05). Table 2 summarizes the COS parameters and pregnancy outcomes for the salpingectomy group and the control group. Durations and total dosages of Gn, endometrial thicknesses and hormone levels on hCG day, numbers of oocytes retrieved, fertilization rates, numbers of available embryos, and pregnancy outcomes were not significantly different between the two groups (P > 0.05).

Ovarian response markers and pregnancy outcomes for different subgroups

To clarify whether salpingectomy affects ovarian response and live birth rates for infertile women of different ages, patients in the salpingectomy and control groups were each divided into two subgroups according to age: age less than 35 years (salpingectomy group: 376 patients, namely group A1; control group: 995 patients, namely group B1) and age 35-39 years (salpingectomy group: 158 patients, namely group A2; control group: 393 patients, namely group B2). Table 3 shows the ovarian response markers and pregnancy outcomes in each subgroup. No statistical differences were found in bFSH and bE2 levels, clinical pregnancy rate, spontaneous abortion rate, live birth rate, ectopic pregnancy rate or multiple pregnancy rate between any two subgroups (group A1 vs. group A2, group B1 vs. group B2, group A1 vs. group B1, group A2 vs. group B2, P > 0.05). In the salpingectomy group, AFCs were significantly lower for the 35-39 year age subgroup compared to the control group (group A2 vs. group B2, P < 0.05). However, this trend was not found in patients < 35 years between the salpingectomy group and the control group (group A1 vs.
group B1, P > 0.05). We also found that AFCs were significantly lower in group A2 (35-39 year age subgroup) than in group A1 (< 35 year age subgroup, P < 0.05). Nevertheless, this change was not found between the < 35 year and 35-39 year age subgroups in patients who had not undergone salpingectomy (group B1 vs. group B2, P > 0.05). Within the same age subgroups, patients with a history of salpingectomy had lower clinical pregnancy rates and live birth rates and higher miscarriage rates compared to the control group, but these differences were not statistically significant (group A1 vs. group B1, group A2 vs. group B2, P > 0.05). The clinical pregnancy rate and live birth rate were lower and the miscarriage rate was higher with increasing age in both the salpingectomy group and the control group (group A1 vs. group A2, group B1 vs. group B2). Nevertheless, there were no significant differences among the subgroups (P > 0.05).

Logistic regression of pregnancy outcomes in patients after IVF treatment

The logistic regression model (Table 4) showed that age was an independent predictor of clinical pregnancy and live birth. Other baseline characteristics such as BMI, infertility duration, basal hormone levels, and antral follicle count were not associated with pregnancy outcomes in the model.

Discussion

IVF-ET has become the primary method for women with a history of salpingectomy to achieve a successful clinical pregnancy and live birth. In the present study, we found that salpingectomy may decrease AFCs but does not affect the live birth rate for infertile women aged 35-39 years. Moreover, age is a risk factor for clinical pregnancy and live birth. Ovarian reserve is an indicator of the number and quality of follicles in the ovaries and is frequently assessed during IVF treatment with AFC and serum markers such as FSH, E2, AMH, or inhibin-B [19,20]. Based on previous studies, the effects of salpingectomy on ovarian reserve and response remain unclear. Dar et al. compared the duration and quantity of Gn administered during IVF treatment, and the clinical pregnancy rate, in 26 patients who underwent unilateral salpingectomy; there was no significant difference in any of the measured parameters [15]. A retrospective study reported that salpingectomy did not influence the ovarian response in 36 women who had a previous salpingectomy to treat hydrosalpinx or ectopic pregnancy [21]. In another retrospective analysis of 94 women with tubal factor infertility who underwent IVF-ET, there were no differences in ovarian response between women who had a previous salpingectomy and women who were diagnosed with tubal-factor infertility but had not undergone salpingectomy [10]. However, one published retrospective study that investigated 198 women who underwent unilateral or bilateral salpingectomy versus an infertility group without salpingectomy showed that serum AMH was lower, and FSH was significantly higher, in women with bilateral salpingectomy compared with those without salpingectomy [7]. Chan et al. compared ovarian volume, AFC, and 3D power Doppler indices in 32 patients who underwent unilateral salpingectomy due to ectopic pregnancy; AFC and 3D power Doppler indices showed a significant decrease on the operated side compared with the non-operated side in the laparoscopy group [22]. These results suggest that ovarian reserve may be affected by salpingectomy.
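The categorical outcome comparisons reported here (clinical pregnancy, live birth, miscarriage rates between groups) were made with chi-square and Fisher's exact tests in SPSS. A minimal scipy sketch of such a comparison, using a hypothetical 2 × 2 table whose row totals match the group sizes (534 and 1388) but whose cell counts are invented, is:

```python
from scipy import stats

# Hypothetical 2x2 table: live birth vs. no live birth, by group.
table = [[210, 324],    # salpingectomy group (n = 534)
         [580, 808]]    # control group      (n = 1388)
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 -> NS
```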
Our study showed that there were no statistical differences in bFSH levels, bLH levels, bE2 levels, AFC, or total dosage and duration of Gn between patients who had a history of salpingectomy and the control group. Methodological variations and sample sizes across studies may account for the differences in results. It is well known that ovarian reserve is closely related to a woman's age. To further explore how age and salpingectomy affect women undergoing IVF treatment, we divided women with or without salpingectomy into two subgroups according to age (age < 35 years and 35-39 years) and investigated the differences in ovarian reserve markers for women of different ages who had undergone salpingectomy. The findings of our study show that women 35-39 years of age with a salpingectomy history had significantly decreased AFCs; however, the live birth rate after IVF treatment did not change significantly compared to women of the same age without salpingectomy. There were no significant differences in ovarian reserve indicators or pregnancy outcomes between patients < 35 years old with and without a salpingectomy history. A prospective cross-sectional study included 381 women aged 20-50 years with regular menstrual cycles and counted antral follicles by transvaginal ultrasound in the early follicular phase; it showed no significant differences in AFC among age groups under 35 years, whereas AFC decreased significantly after 35 years [23]. Bishop et al. analyzed the relationship between live birth and ovarian reserve in 9489 cycles among 8214 patients aged 21-44 years. Younger patients with diminished ovarian reserve (age < 35 years), as indicated by bFSH and AFC, had the same live birth rate as patients with normal ovarian reserve; likewise, diminished ovarian reserve correlated poorly with oocyte quality in this age group [24]. The results of the present study support the hypothesis that younger women may partially compensate for declining ovarian reserve after salpingectomy, whereas this compensatory effect diminishes with increasing female age. However, there were no statistical differences in live birth rates (the main parameter of interest) or other IVF outcomes, such as fertilization rates, clinical pregnancy rates, spontaneous abortion rates, ectopic pregnancy rates and multiple pregnancy rates, between the salpingectomy group and the control group, regardless of the age of the patient. It was also found that, regardless of whether the patient had undergone salpingectomy or not, the live birth and clinical pregnancy rates of younger patients were higher than those of older patients, although there were no significant differences among the subgroups. Age is one of the strongest predictors of success for IVF treatment [25]. With increasing age, the decreased number of follicles and the reduced quality of oocytes are the reasons for increased spontaneous abortion rates and decreased clinical pregnancy and live birth rates. The reduced oocyte quality is usually due to abnormalities of the spindle in the oocyte nucleus, an increased incidence of aneuploidy, and decreased proliferation and increased apoptosis of granulosa cells [26]. Moreover, age-related reductions in ovarian reserve and oocyte quality may be due to decreased androgen levels [27], oxidative stress [28], dysfunctional cohesins [29], shortening of telomeres [30], and impaired mitochondrial metabolic activity [31]. Our study has some limitations.
First, patients were not subgrouped according to unilateral or bilateral salpingectomy, which may affect the overall outcomes; however, some studies have shown that neither unilateral nor bilateral salpingectomy affects ovarian reserve function or ovarian response [32,33]. Second, we did not analyze the effect of the different underlying indications for salpingectomy on ovarian reserve, ovarian response, and pregnancy outcome; however, one study suggested that there were no significant differences in the number of oocytes retrieved, oocyte fertilization rates, or clinical pregnancy rates among patients with different salpingectomy indications [8]. Finally, we measured basal hormone levels and AFC to assess ovarian reserve, but did not measure AMH levels. A well-designed randomized controlled trial with a large sample size may overcome the drawbacks of a retrospective analysis and further elucidate whether salpingectomy affects ovarian reserve, ovarian response, and IVF outcomes.

Conclusions
Our investigation suggests that in women aged 35 to 39 years, salpingectomy may significantly decrease AFC, which may indicate a declined ovarian reserve. However, live birth rates after IVF treatment did not differ significantly between patients 35-39 years old and those < 35 years old. Moreover, bFSH, bLH, and bE2 levels, total Gn doses, duration of Gn, fertilization rates, numbers of available embryos, and other pregnancy outcomes were similar between the salpingectomy group and the control group, regardless of patient age. Increased female age was significantly negatively associated with clinical pregnancy and live birth. To allow a more comprehensive analysis, further studies should use a larger sample size and include all relevant data.
Quasi-radial nodal solutions for the Lane-Emden problem in the ball

We consider the semilinear elliptic problem \begin{equation}\label{problemAbstract} \left\{\begin{array}{lr}-\Delta u= |u|^{p-1}u\qquad \mbox{ in }B\\ u=0\qquad\qquad\qquad\mbox{ on }\partial B \end{array}\right.\tag{$\mathcal E_p$} \end{equation} where $B$ is the unit ball of $\mathbb R^2$ centered at the origin and $p\in (1,+\infty)$. We prove the existence of non-radial sign-changing solutions to \eqref{problemAbstract} which are \emph{quasi-radial}, namely solutions whose nodal line is the union of a finite number of disjoint simple closed curves, which are the boundary of nested domains contained in $B$. In particular the nodal line of these solutions doesn't touch $\partial B$. \\ The result is obtained with two different approaches: via nonradial bifurcation from the least energy sign-changing radial solution $u_p$ of \eqref{problemAbstract} at certain values of $p$ and by investigating the qualitative properties, for $p$ large, of the least energy nodal solutions in spaces of functions invariant by the action of the dihedral group generated by the reflection with respect to the $x$-axis and the rotation about the origin of angle $\frac{2\pi}{k}$ for suitable integers $k$.\\ We also prove that for certain integers $k$ the least energy nodal solutions in these spaces of symmetric functions are instead radial, showing in particular a breaking of symmetry phenomenon in dependence on the exponent $p$.

Introduction
We consider the semilinear Lane-Emden problem
$$\left\{\begin{array}{lr}-\Delta u= |u|^{p-1}u & \mbox{ in }B\\ u=0 & \mbox{ on }\partial B\end{array}\right.\tag{1.1}$$
where $B\subset\mathbb R^2$ is the unit ball centered at the origin and $p>1$. It is well known that (1.1) admits a unique positive ground state solution, which is radially symmetric. Observe that the oddness of the nonlinearity implies that $u$ is a solution of (1.1) if and only if $-u$ is a solution, so there is also a unique negative solution to (1.1). Moreover, due to the oddness of the nonlinear term, standard variational methods give the existence of infinitely many sign-changing solutions. While the ground state solution of (1.1) has been widely investigated, not much is known for nodal ones. Among these one can select the least energy nodal solutions, which can be obtained by minimizing the associated energy functional on the nodal Nehari set in the Sobolev space $H^1_0(B)$ (see [CCN] for details). We denote a least energy sign-changing solution by $\tilde u_p$. In [BW] it has been shown that
$$\nu(\tilde u_p)=2 \quad\mbox{and}\quad m(\tilde u_p)=2,\tag{1.3}$$
where $\nu(u)$ denotes the number of nodal regions of $u$ and $m(u)$ is the Morse index of the solution $u$ (see Section 3 for the definition). Moreover in [BWW] it has been proved that $\tilde u_p$ partially inherits the symmetries of the domain, being foliated Schwarz symmetric, namely axially symmetric with respect to an axis passing through the origin and nonincreasing in the polar angle from this axis (see also [PW]). Since the domain $B$ is radially symmetric one can restrict to the Sobolev space of radial functions $H^1_{0,rad}(B)$ and prove the existence of infinitely many sign-changing radial solutions of (1.1). More precisely, it can be proved that for every $m\in\mathbb N_0:=\mathbb N\setminus\{0\}$ there exists a unique radial solution to (1.1) that satisfies
$$u(0)>0\tag{1.4}$$
and such that $\nu(u)=m$ (see [NN], [K1]). We denote by $u_p$ the unique radial least energy sign-changing solution to (1.1) which satisfies (1.4); clearly
$$\nu(u_p)=2.\tag{1.5}$$
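For the reader's convenience, here is the standard variational framework behind the notion of least energy nodal solution; this is a sketch following the usual conventions (the paper refers to [CCN] for the precise construction), and the displayed formulas are the standard ones rather than the paper's own numbered definitions:
$$E_p(u)=\frac12\int_B|\nabla u|^2\,dx-\frac{1}{p+1}\int_B|u|^{p+1}\,dx,\qquad u\in H^1_0(B),$$
with the nodal Nehari set
$$\mathcal N_p=\big\{u\in H^1_0(B)\,:\,u^\pm\not\equiv0,\ E_p'(u)[u^+]=E_p'(u)[u^-]=0\big\},$$
so that a least energy nodal solution is a minimizer of $E_p$ on $\mathcal N_p$.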
Moreover it has been proved in [AP] that
$$m(u_p)\ge 4\tag{1.6}$$
(see also [DIP3], where $m(u_p)$ has been explicitly computed for $p$ large, and [DIP4], where the previous estimate on the Morse index has been generalized to any radial solution with $m$ nodal regions, with bound given by the number $3m-2$). Comparing the information on the Morse index in (1.3) with the one in (1.6), one gets that the radial solution $u_p$ is not the least energy sign-changing solution in the whole space $H^1_0(B)$, namely that $u_p\neq\tilde u_p$. As a consequence the monotonicity of $\tilde u_p$ with respect to the polar angle (as recalled above, $\tilde u_p$ is foliated Schwarz symmetric) must be strict in some region, and in [PW] it is actually proved that, for $p>2$, the monotonicity is always strict. Moreover in [AP] it has also been proved that the nodal set of $\tilde u_p$ touches the boundary of $B$. One can also restrict to the Sobolev space $H^1_{0,k}(B)$ of the functions in $H^1_0(B)$ which are even and $\frac{2\pi}{k}$-periodic in the angular variable, for $k\in\mathbb N_0$, and similarly show the existence of infinitely many sign-changing symmetric solutions in $H^1_{0,k}(B)$, among which we denote by $u^k_p$ the least energy ones. Anyway, a priori it is not clear whether this procedure produces new solutions or not. Indeed, clearly $u^1_p=\tilde u_p$ (since $\tilde u_p$ is axially symmetric) and, even though $u^k_p\neq\tilde u_p$ for $k\ge2$ (since, if they coincided, $\tilde u_p$ would be $\frac{2\pi}{k}$-periodic in the angular variable and so necessarily radial by the Schwarz symmetry, giving a contradiction), $u^k_p$ could be radial. In particular it would be interesting to show the existence of sign-changing solutions to (1.1) which belong to $H^1_{0,k}(B)$ but are not radially symmetric, having nevertheless a quasi-radial shape, in the sense of the following definition:

Definition 1.1. We say that a solution of (1.1) is quasi-radial if its nodal set is the union of a finite number of disjoint simple closed curves which are the boundary of nested domains contained in $B$.

Observe that the nodal line of a quasi-radial solution does not touch the boundary of the ball $B$. Clearly any radial solution is quasi-radial. By the asymptotic estimates for the energy of the solutions of (1.1) in [RW], the obvious inequality $E_p(u^k_p)\le E_p(u_p)$ and the upper bound $pE_p(u_p)\le\alpha\cdot4\pi e$, for $p$ large, proved in [GGP] for a certain value $\alpha\in(4.5,5)$, one derives the following upper bound on the number of nodal regions of $u^k_p$:
$$\nu(u^k_p)\le4\qquad\forall k\in\mathbb N_0,\ \mbox{for } p \mbox{ large}.$$
Combining this bound with the results in [DIP1] (which hold in symmetric and simply connected domains, more general than the ball) it then follows that
$$\mbox{the least energy symmetric solution } u^k_p \mbox{ is quasi-radial when } k\ge4 \mbox{ and } p \mbox{ is large},\tag{1.7}$$
from which in particular one also derives
$$\nu(u^k_p)=2\quad\mbox{and}\quad m(u^k_p)\ge4,\qquad\mbox{for } k\ge4,\ \mbox{for } p \mbox{ large}.\tag{1.8}$$
Observe that the properties in (1.7), (1.8) are satisfied also by $u_p$ (see (1.5), (1.6)); hence the question of the existence of symmetric but non-radial solutions of (1.1) which are quasi-radial is still open. Moreover, as $p\in(1,+\infty)$ and $k\in\mathbb N_0$ vary, one would like to investigate whether $u^k_p$ coincides with the radial least energy nodal solution $u_p$ or not. We start by giving a positive answer to the first question, showing the existence of three distinct solutions to (1.1) which belong respectively to $H^1_{0,k}(B)\setminus H^1_{0,rad}(B)$, for $k=3,4,5$. Each solution bifurcates from the least energy radial nodal solution $u_p$ at certain values of $p$, and close to the bifurcation point it is quasi-radial.
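Since the spaces $H^1_{0,k}(B)$ recur throughout, it may help to record them in polar coordinates; this compact form anticipates the formal definition via the dihedral group $G_k$ given in Section 8 and is our restatement of the invariances listed there:
$$H^1_{0,k}(B)=\Big\{v\in H^1_0(B)\,:\,v(r,\theta)=v(r,-\theta)=v\big(r,\theta+\tfrac{2\pi}{k}\big)\Big\}.$$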
The result is the following, where $X_k:=H^1_{0,k}(B)\cap C^{1,\alpha}(\overline B)$ (and $C^{1,\alpha}(\overline B)$ denotes the space of $C^1(\overline B)$ functions with Hölder continuous derivatives):

Theorem 1.2. For any $k=3,4,5$ there exists at least one exponent $p_k\in(1,+\infty)$ such that $(p_k,u_{p_k})$ is a nonradial bifurcation point for problem (1.1). The bifurcating solutions are sign-changing, belong to $X_k$, and, close to the bifurcation point, they have two nodal domains and are quasi-radial. Moreover the bifurcation is global and, letting $\mathcal C_k$ be the continuum that branches out of $(p_k,u_{p_k})$, either $\mathcal C_k$ is unbounded in $(1,+\infty)\times X_k$ or it intersects $\{1\}\times X_k$. Finally, at any point along each branch $\mathcal C_k$, either the solution belongs to $X_k\setminus X_j$ for all $j>k$, or it is radial; in particular the continua bifurcating from different values of $k$ can intersect only at radial solutions.

Our second goal is to understand whether the least energy symmetric solution $u^k_p$, $k\in\mathbb N_0$, $p\in(1,+\infty)$, coincides with the radial least energy nodal solution $u_p$ or not, and this is analyzed in the next result:

Theorem 1.3. Let $u^k_p$ be the least energy sign-changing solution of (1.1) in the space $H^1_{0,k}(B)$, $k\in\mathbb N_0$. Then there exist $\delta>0$ and $p^\star>1$ such that:
i) for $k=2$: $u^k_p$ is non-radial both for $p\in(1,1+\delta)$ and for $p\ge p^\star$;
ii) for $k=3,4,5$: $u^k_p$ is radial for $p\in(1,1+\delta)$ and non-radial when $p\ge p^\star$;
iii) for $k\ge6$: $u^k_p$ is radial for $p\in(1,1+\delta)$.

Clearly, when $u^k_p$ is radial it coincides with $u_p$ (up to the sign). The fact that the symmetry of the domain is not totally caught by these least energy solutions is reasonable, since we are dealing with sign-changing solutions; nevertheless, the symmetry breaking phenomenon when $k=3,4,5$ (case ii)) and its dependence on the value of the exponent $p$ were totally unexpected. It is also interesting that we can identify the symmetries of the solutions at which this symmetry breaking phenomenon occurs. Theorem 1.3-ii), combined with (1.7) and (1.8), provides another example of nonradial symmetric sign-changing solutions of (1.1) which are quasi-radial in the sense of Definition 1.1. Differently from Theorem 1.2, this result now holds for any $p$ large enough:

Corollary 1.4. Let $k=4,5$. Then $u^k_p$ is not radial but it is quasi-radial for $p$ large enough. In particular $u^k_p\neq u_p$. Moreover $u^k_p\neq\tilde u_p$ and (1.8) holds.

We conjecture that the bifurcating solution in $X_k$ found in Theorem 1.2 not only exists for any $p\ge p_k$ but also coincides with $u^k_p$, when $k=4,5$ and even $3$. Differently from the higher-symmetry cases considered in Corollary 1.4, when $k=3$ we do not expect $u^k_p$ to keep the quasi-radial shape for large $p$. For $k=2$ we believe that $u^k_p$ is not radial for all $p$ and also not quasi-radial (when $p$ is close to 1 this could be proved rigorously, see Remark 10.6); for $k=1$ we recall that $u^k_p=\tilde u_p$ for any $p\in(1,+\infty)$. The case $k\ge6$ and $p$ large is not covered by the previous result; we conjecture that $u^k_p$ is radial, and observe that this is not in contrast with (1.8). The asymptotic behavior, as $p\to+\infty$, of the least energy sign-changing solutions $u^k_p$ of (1.1) in the spaces $H^1_{0,k}(B)$ will be the object of a subsequent paper [GIP]. Next we briefly explain the main ideas behind Theorem 1.2 and Theorem 1.3. The bifurcation in Theorem 1.2 is with respect to the exponent $p$ of the nonlinearity; previous results in this direction can be found for instance in [GGPS] and [G].
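As a reading aid: the bifurcation exponents $p_k$ of Theorem 1.2 arise in Section 7 as degeneracy points of $u_p$ along specific symmetric directions. In the notation introduced below, a sketch of this characterization (our paraphrase of the sets $P_j$ appearing in Lemma 7.1) is
$$p_k\in P_k=\big\{p\in(1,+\infty)\,:\,\beta_{1,rad}(p)+k^2=0\big\},\qquad k=3,4,5,$$
where $\beta_{1,rad}(p)$ denotes the first radial eigenvalue of the singular weighted problem of Section 4.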
Observe that the bifurcation can occur only at values of $p$ at which the least energy nodal radial solution $u_p$ is degenerate, and that a sufficient condition to identify degeneracy points is a change in the Morse index of $u_p$. This paper then starts from the recent results in [DIP3], where the Morse index of the radial least energy sign-changing solution $u_p$ is computed for large values of $p$, proving the existence of an exponent $\bar p>1$ such that
$$m(u_p)=12\qquad\forall\,p\ge\bar p.\tag{1.9}$$
This result is only for large $p$ and it strongly relies on the asymptotic behavior of $u_p$ as $p\to+\infty$, which has been described in [GGP]. On the other hand, an asymptotic analysis of the behavior of the solution $u_p$ as $p\to1$ shows that a suitable re-normalization of $u_p$ converges to the second radial eigenfunction of the Laplace operator with Dirichlet boundary conditions (see Lemma 6.4), and this allows one to compute the Morse index of $u_p$ for $p$ close to 1, showing that it has a different value in this range. More precisely, in Proposition 6.1 we get the existence of $\delta>0$ such that
$$m(u_p)=6\qquad\forall\,p\in(1,1+\delta).\tag{1.10}$$
Hence (1.9) and (1.10) prove that along the branch of radial solutions $(p,u_p)$ of (1.1) there must be points at which the Morse index increases, and this change of the Morse index of $u_p$ in the interval $(1,+\infty)$ suggests bifurcation from $u_p$. We underline that in the convex domain $B$ this phenomenon is specific to sign-changing solutions, since the positive solution in $B$ is unique and non-degenerate (for the uniqueness in more general convex domains see [DGIP]). This is the first time that a non-radial bifurcation result from sign-changing solutions in convex domains has been observed, and there was no chance to get it before the study of the Morse index in [DIP3]. To prove the result in Theorem 1.2 we first need to analyze the degeneracies of the solution $u_p$. This is the goal of Sections 4, 5 and 6. We first consider, in Section 4.1, an auxiliary singular weighted eigenvalue problem (1.11), which has the same kernel and the same number of negative eigenvalues as the linearized operator at $u_p$ (see Lemma 4.2) and whose main advantage relies on the fact that, in addition, a classical spectral decomposition into radial and angular part may be applied to it (Lemma 4.4). The weighted eigenvalue problem (1.11) belongs to the class of eigenvalue problems studied in [GGN], where such eigenvalues have been variationally characterized in the case when they are negative. Since $u_p$ is the radial least energy nodal solution, in the space of radial functions its Morse index is 2; in Section 4.2, in view of the spectral decomposition, we estimate the two negative radial eigenvalues of problem (1.11) from above and from below by certain consecutive eigenvalues of $-\Delta_{S^1}$. The proof is based on the approximation of the negative eigenvalues of problem (1.11) by the negative eigenvalues of a family of weighted eigenvalue problems in annuli already studied in [DIP3]; in particular we can extend some previous estimates of [DIP3], related to the negative radial eigenvalues in annuli, to the negative eigenvalues of the singular problem (1.11). As a consequence of our estimates we get information about the Morse index of the solution $u_p$ (Lemma 4.5) and a general characterization of its degeneracy (Proposition 4.7), for any $p>1$. Finally, thanks to (1.9) and (1.10), we get more specific results both in the case $p$ large and $p$ close to 1 (see Sections 5 and 6).
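The spectral decomposition just invoked is used constantly in what follows, so we record its shape here; this is a summary, in our own words, of Lemma 4.4 below:
$$\beta_i(p)=\beta_{j,rad}(p)+k^2\ \mbox{ for some }(j,k)\in\{1,2\}\times\mathbb N,$$
with associated eigenfunctions
$$\psi(r,\theta)=\phi_j(r)\cos(k\theta)\quad\mbox{and}\quad\psi(r,\theta)=\phi_j(r)\sin(k\theta),$$
where the $k^2$ are the eigenvalues of $-\Delta_{S^1}$ and $\phi_j$ is the radial eigenfunction associated to $\beta_{j,rad}(p)$.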
Observe that, thanks to the spectral decomposition, we can decompose any solution of the linearized equation at $u_p$ (and more generally each solution of the eigenvalue problem (1.11)) along spherical harmonics, which in $\mathbb R^2$ are the functions $\cos(j\theta)$, $\sin(j\theta)$ with $j\in\mathbb N$, getting in particular an explicit representation of the solutions of the linearized equation when they are nontrivial (and more generally of the eigenfunctions of (1.11) associated with negative eigenvalues). As a consequence we can identify the symmetries of those functions which are responsible for the degeneracy of $u_p$ (or which give rise to negative eigenvalues of the linearized operator at $u_p$). This aspect is investigated in Section 8, where the symmetric spaces ($H^1_{0,k}(B)$ and) $X_k$ are introduced and the degeneracy and Morse index of $u_p$ in these spaces are studied (see Propositions 8.5, 8.6 and 8.7). The reason for restricting to the spaces $X_k$ is to isolate a unique function in the kernel of the linearized operator; more precisely, on the one hand it allows one to select one suitable spherical harmonic (between $\sin$ and $\cos$) that produces degeneracy and, on the other hand, it avoids a possible double degeneracy due to the simultaneous vanishing of two eigenvalues, a possibility that cannot be ruled out and which is specific to sign-changing solutions. Since we do not know the solution $u_p$ explicitly, it is not clear whether the transversality condition of the well-known Crandall-Rabinowitz theorem (for one-dimensional kernels) is satisfied or not. The bifurcation result is instead obtained here using a degree argument. The separation of the branches is obtained by defining suitable cones $\mathcal K_k\subset X_k$ of monotone functions, introduced by Dancer in [D2], and using the degree in cones, see [A] (see Section 9 for the definitions of the cones). The quasi-radiality is inherited from the radial least energy solution $u_p$, since near the bifurcation point the bifurcating solution is a small perturbation of it (see Remark 9.5). Along the branch, instead, the number of nodal regions and the shape of the solutions may change; the characterization of the behavior of branches of non-radial solutions may be a very difficult task to investigate, and we conjecture that the branches exist for every $p\ge p_k$. In this paper we have focused on the radial least energy sign-changing solution $u_p$ of (1.1). A bifurcation result similar to Theorem 1.2 could be obtained from any nodal radial solution $u^m_p$ of (1.1) with $m>2$ nodal regions, provided information about its Morse index for $p$ large is available. In this case we expect that the symmetries which cause the degeneracy, and hence produce branches of bifurcating solutions, should be of the same type as those for functions in $X_k$ (which derive from the symmetry groups of spherical harmonics), but with different values of $k$, probably $k\ge6$. Moreover one could think of extending the bifurcation result in Theorem 1.2 to higher dimensions $N\ge3$, for $p\in(1,\frac{N+2}{N-2})$. Indeed the behavior of all the radial sign-changing solutions of (1.1) has been studied in [DIP4], and in particular their Morse index has been explicitly computed when $p$ is sufficiently close to $\frac{N+2}{N-2}$, giving for instance, for the radial least energy sign-changing solution $u_p$:
$$m(u_p)=2+N,\qquad\mbox{for } p \mbox{ close to } \frac{N+2}{N-2}.$$
Similarly to the two-dimensional case, we expect a change in the Morse index of $u_p$ as $p$ varies from 1 to $\frac{N+2}{N-2}$.
Indeed $u_p$ should converge, as $p\to1$, to the radial Dirichlet eigenfunction with 2 nodal regions of the Laplace operator in $B$, and this would imply
$$m(u_p)=2+N+\frac{(N+2)(N-1)}{2},\qquad\mbox{for } p \mbox{ close to } 1$$
(a quick numerical check for $N=3$ is given at the end of this discussion). Again, a change in the Morse index should give a nonradial bifurcation result. An extra difficulty in dimension $N\ge3$ would be to identify the symmetry groups of the spherical harmonics, which are much more involved than those of the two-dimensional spherical harmonics; see for instance [AG]. Next we discuss the main ideas behind the proof of Theorem 1.3, which is contained in Section 10. The non-radial part is a byproduct of the study of the symmetry groups that cause the degeneracy and the bifurcation from $u_p$. Indeed, in order to prove that $u_p$ and $u^k_p$ do not coincide, one would like to compare their Morse indices and show that they are different. However, the computation of $m(u^k_p)$ may be very difficult; but if we restrict to the symmetric spaces $H^1_{0,k}(B)$ then the Morse index of $u^k_p$ is always 2 (see Lemma 10.1). On the other hand, we are able to compute the symmetric Morse index also for the radial solution $u_p$ (Propositions 8.5 and 8.6). Observe that it is computed only for $p$ close to 1 and for $p$ large, since it is deduced, among other things, from the asymptotic analysis of $u_p$ as $p\to1$ and as $p\to+\infty$ respectively. The proof of the radial part of Theorem 1.3 is more involved. It relies on a careful blow-up procedure in the spirit of [GS] for showing $L^\infty$ bounds for the solutions $u^k_p$ (see Proposition 10.4). Once an $L^\infty$ bound is available, one can deduce the result by studying the asymptotic behavior of the solutions $u^k_p$ as $p\to1$ (see the proof of Proposition 10.5). In particular a delicate expansion of $\|u^k_p\|_\infty$ at $p=1$, up to the second order, is needed. Getting a uniform $L^\infty$ bound is somewhat standard for solutions with uniformly bounded Morse index: one shows that the bound on the Morse index is preserved as $p\to1$, while the blow-up analysis of solutions unbounded in $L^\infty$-norm leads to solutions of limit problems in unbounded domains whose Morse index is not finite, thus reaching a contradiction. The main problem here is that for the least energy symmetric solutions $u^k_p$ we do not have a bound for the full Morse index, but only for the $k$-Morse index (see Lemma 10.1), while the rescaling procedure does not preserve the symmetries. To overcome this technical difficulty we exploit the symmetry of $u^k_p$ and reduce problem (1.1) to the circular sector $S_k$ of the ball of amplitude $\frac{\pi}{k}$, for $k\in\mathbb N_0$. In particular we are able to convert the bound on the $k$-Morse index into a bound on the full Morse index of $u^k_p$ in the sector $S_k$ (the Morse index for a mixed Dirichlet-Neumann problem, see Lemma 10.2), and finally we perform the blow-up argument in $S_k$. The blow-up procedure in $S_k$ also requires special care, since we have to deal with mixed boundary conditions and, above all, with the corner points of $S_k$. For these reasons the analysis of the rescaled solutions includes several different cases, depending upon the location of the maximum points in the sector. In all cases, however, we end up with solutions of a limit linear problem in unbounded domains with either Dirichlet, Neumann, or mixed boundary conditions, whose Morse index is finite. Finally, studying the Morse index of the solutions of these limit problems (Proposition 10.3), we reach a contradiction.
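Referring back to the higher-dimensional discussion above, a quick arithmetic check (ours) of the two Morse index values for $N=3$ illustrates the expected jump:
$$m(u_p)=2+N=5\ \mbox{ for } p \mbox{ close to } \frac{N+2}{N-2}=5,\qquad m(u_p)=2+N+\frac{(N+2)(N-1)}{2}=2+3+\frac{5\cdot2}{2}=10\ \mbox{ for } p \mbox{ close to } 1,$$
so, just as in the two-dimensional case (6 near $p=1$ versus 12 for $p$ large), the Morse index must change along the branch, which is what suggests nonradial bifurcation.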
Finally we state a proposition which provides the behavior, at the singularity, of solutions to a singular ordinary differential equation. This result is partially contained in [GGN, Lemma 2.4], although one implication is new and proved here. Using an estimate which holds along a sequence $r_n\to0$, we get, as $n\to\infty$,
$$r_n^{\beta+1}\psi'(r_n)=o(1).$$
Observe now that the function $v(r)=r^\beta$ satisfies (2.4). We multiply (2.2) by $v$, we multiply (2.4) by $\psi$, we integrate on $(r_n,R)$, with $R\in(0,1)$, we subtract the two equations, and we get
$$\int_{r_n}^{R}r^{\beta+1}h(r)\psi(r)\,dr=r_n^{\beta+1}\psi'(r_n)-\beta r_n^{\beta}\psi(r_n)-R^{\beta+1}\psi'(R)+\beta R^{\beta}\psi(R)$$
and, passing to the limit as $n\to\infty$, we obtain an identity which implies, for any $t\in(0,1)$, the estimate (2.5). The boundedness of $h(s)$ and $\psi(s)$ then gives a bound which, together with (2.5), implies the thesis in the case $\beta<2$. When $\beta\ge2$, instead, we have $|\psi(t)|\le Ct^2$ for $\beta>2$ and $|\psi(t)|\le Ct^{\beta-\varepsilon}$ for $\beta=2$, where $0<\varepsilon\ll1$. Inserting these estimates into (2.6) we obtain a bound which, together with (2.5), implies the thesis when $\beta<4$. We can repeat the procedure: at each step the set of values of $\beta$ at which (2.3) is satisfied increases by 2. Then for every value of $\beta$ the thesis follows after a finite number of steps.

Linearized operator
It is well known that $L_p$ admits a sequence of eigenvalues which, counting them according to their multiplicity, we denote by
$$\mu_1(p)<\mu_2(p)\le\mu_3(p)\le\dots,$$
where the first inequality is strict because it is known that $\mu_1(p)$ is simple. We also recall their min-max characterization, where $Q_p:H^1_0(B)\to\mathbb R$ denotes the quadratic form associated to $L_p$, as in (3.4). Since $u_p$ is a radial solution to (1.1) we can also consider the subsequence of $(\mu_i(p))_{i\in\mathbb N_0}$ of the radial eigenvalues of $L_p$ (i.e. eigenvalues which are associated to a radial eigenfunction), which we denote by $(\mu_{i,rad}(p))_{i\in\mathbb N_0}$ and which are all simple in the space of radial functions. For the eigenvalues $\mu_{i,rad}(p)$ an analogous characterization holds, where $R_p$ is as in (3.3) and $H^1_{0,rad}(B)$ is the subspace of the radial functions of $H^1_0(B)$. Moreover it is known that $\mu_{1,rad}(p)=\mu_1(p)$. The Morse index of $u_p$, denoted by $m(u_p)$, is the maximal dimension of a subspace $X\subseteq H^1_0(B)$ such that $Q_p(v)<0$ for all $v\in X\setminus\{0\}$. Since $B$ is a bounded domain, this is equivalent to saying that $m(u_p)$ is the number of the negative eigenvalues of $L_p$ counted with their multiplicity. The radial Morse index of $u_p$, denoted by $m_{rad}(u_p)$, is instead the number of the negative radial eigenvalues $\mu_{i,rad}(p)$ of $L_p$.

Proof. Given a solution $w_\alpha$ of the problem (3.8), where $T>0$, it is not difficult to see (see [SW]) that $w_\alpha$ is differentiable with respect to $\alpha$ and that it is radially non-degenerate in $(0,T)$ if and only if $\frac{\partial w_\alpha}{\partial\alpha}\big|_{r=T}\neq0$. Observe that $u_p$ solves (3.8) with $\alpha=u_p(0)>0$ and $T=1$. Moreover for any $\alpha>0$ (3.8) has a unique solution $w_\alpha$, which is obtained by scaling $u_p$. Hence it is immediate to check that $\frac{\partial w_\alpha}{\partial\alpha}\big|_{r=T}(\alpha)\neq0$, from which it follows that $u_p$ is radially non-degenerate.

Morse index and degeneracy of $u_p$
The section is organized as follows: we first consider an auxiliary weighted eigenvalue problem (problem (4.1) below), whose main advantage, as we will see, relies on the fact that it shares with $L_p$ the same spectral properties (see Lemma 4.2) and that, in addition, a classical spectral decomposition into radial and angular part may be applied to it (Lemma 4.4 in this section).
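For orientation we sketch the operators involved. The forms of $L_p$ and $Q_p$ are the standard ones for the linearization of (1.1) at $u_p$, while the explicit form of the weighted problem (4.1) written below is our reconstruction, inferred from the norm of the space $H$ introduced in Section 4.1 and from the decomposition $\beta_{j,rad}(p)+k^2$ used throughout, and should be read as an assumption about the precise statement:
$$L_p v:=-\Delta v-p|u_p|^{p-1}v,\qquad Q_p(v):=\int_B\big(|\nabla v|^2-p|u_p|^{p-1}v^2\big)\,dx,$$
and, for the auxiliary singular weighted eigenvalue problem,
$$-\Delta\psi-p|u_p|^{p-1}\psi=\frac{\beta}{|x|^2}\,\psi\ \mbox{ in } B\setminus\{0\},\qquad\psi=0\ \mbox{ on }\partial B.$$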
The study of the auxiliary problem is carried out for any $p>1$, getting information about the Morse index of the solution $u_p$ (Lemma 4.5) and a general characterization of its degeneracy (Proposition 4.7).

4.1. An auxiliary weighted eigenvalue problem. We consider the auxiliary eigenvalue problem (4.1), where $\beta\in\mathbb R$ and $p>1$. Observe that, since $p|u_p|^{p-1}\in L^\infty(B)$, (4.1) belongs to the class of eigenvalue problems which has been studied in [GGN], where the eigenvalues of (4.1) have been variationally characterized in the case when they are negative. In the following we recall the variational characterization obtained in [GGN]. In particular, it was observed there that when the associated Rayleigh quotient is greater than or equal to zero there is a compactness problem, but as long as the quotient is strictly negative, the eigenvalues and eigenfunctions retain the usual properties of the classical ones. Let us denote by $H$ the closure of $C^\infty_0(B)$ with respect to the norm
$$\|v\|^2_H=\int_B\Big(|\nabla v|^2+\frac{v^2}{|x|^2}\Big)\,dx.$$
Notice that $H\subset H^1_0(B)$ and the inclusion is strict (consider for instance the function $w(x)=1-|x|^2$). For $\eta,\xi\in H$ we write $\eta\perp_H\xi$ when they are orthogonal in the sense of (4.2). Observe that if $\psi,\tilde\psi\in H$ are weak solutions to (4.1) related respectively to the eigenvalues $\beta$ and $\tilde\beta$, with $\beta\neq\tilde\beta$, then
$$\psi\perp_H\tilde\psi\tag{4.3}$$
(just multiply (4.1) by $\tilde\psi$, multiply the equation (4.1) for the eigenvalue $\tilde\beta$ by $\psi$, integrate and subtract). We define $\beta_1(p)$ as the infimum in (4.4), where $Q_p$ is as in (3.4). From [GGN, Proposition 2.1] we know that when $\beta_1(p)<0$ this infimum is achieved at a radial function $\psi_1\in H$, $\psi_1>0$ in $B\setminus\{0\}$, which solves (4.1). Moreover $\beta_1(p)$ is simple (in $H$). In this case we can then define $\beta_2(p)$, which again is achieved when it is negative (see [GGN, Proposition 2.3]), and any function $\psi_2\in H$ at which $\beta_2(p)$ is achieved solves (4.8); since by definition $\psi_1\perp_H\psi_2$, $\psi_2$ must change sign. More generally, by iterating, if $\beta_j(p)<0$ and $\psi_j\in H$ is a function where it is achieved, for $j=1,\dots,i-1$, we can define $\beta_i(p)$, which (again by [GGN, Proposition 2.3]) is achieved if it is negative and, in such a case, any function $\psi_i\in H$ at which $\beta_i(p)$ is achieved solves (4.1) and changes sign. The following relation holds between the Morse index of $u_p$ and the number of negative eigenvalues of the weighted problem (4.1):

Lemma 4.2 ([GGN], Lemma 2.6). The Morse index (resp. radial Morse index) of $u_p$ coincides with the number of negative eigenvalues (resp. negative radial eigenvalues) of problem (4.1), counted according to their multiplicity.

Lemma 4.3. For any $p>1$ problem (4.1) admits exactly two negative radial eigenvalues, $\beta_{1,rad}(p)<\beta_{2,rad}(p)<0$; moreover $\beta_{3,rad}(p)>0$.

Proof. The first statement is a consequence of Lemma 3.2 and Lemma 4.2. Observe that the value $\beta_{3,rad}(p)$ is well defined by (4.12), both $\beta_{1,rad}(p)$ and $\beta_{2,rad}(p)$ being negative; moreover $\beta_{3,rad}(p)\ge0$ by Lemma 4.1 and Lemma 4.2, since $m_{rad}(u_p)=2$ by Lemma 3.2. In particular, even if $\beta_{3,rad}(p)=0$, it cannot be an eigenvalue for (4.1), because $H\subset H^1_0(B)$ and $u_p$ is radially non-degenerate by Lemma 3.3. To show that $\beta_{3,rad}(p)\neq0$ we let $\phi_j\in H_{rad}$ be the function where $\beta_{j,rad}(p)$ is achieved, for $j=1,2$, we choose suitable test functions supported where $|x|\le\varepsilon$, defined for $0<\varepsilon<1$, and we let $\eta_\varepsilon$ be the corresponding combination, where $a_\varepsilon,b_\varepsilon\in\mathbb R$ are chosen so that $\eta_\varepsilon$ is orthogonal, in the sense of (4.2), to $\phi_j$, $j=1,2$, for any $\varepsilon\in(0,1)$. Moreover, by our choice of the test functions $\eta_\varepsilon$, there exists $C=C_p>0$ bounding the corresponding quantities for any $\varepsilon\in(0,1)$. Since $\beta_{j,rad}(p)<0$ for $j=1,2$, by Proposition 2.3 we have a decay estimate for $\phi_j$ near the origin. This last estimate, together with the definition of $\eta_\varepsilon$, then implies that $a_\varepsilon$ and $b_\varepsilon$ are uniformly bounded.
From (4.12) and the orthogonality between $\eta_\varepsilon$ and $\phi_j$, $j=1,2$, we then get the bound (4.16) for $\beta_{3,rad}(p)$. An easy computation, using that $\phi_j$, $j=1,2$, solves (4.13), that $\phi_1\perp_H\phi_2$, and recalling the definition of $a_\varepsilon,b_\varepsilon$, then gives an identity which, together with (4.14) and the boundedness of $a_\varepsilon,b_\varepsilon$, yields a uniform bound for any $\varepsilon\in(0,1)$. Finally, using again the definition of $a_\varepsilon,b_\varepsilon$, the conclusion follows from (4.16).

Here and in the following we denote by $\alpha_k$, $k\in\mathbb N$, the spherical harmonics in dimension 2, namely the homogeneous harmonic polynomials of degree $k$ considered on the unit sphere $S^1\subset\mathbb R^2$. They can be written explicitly, using the polar coordinates $x=(r\cos\theta,r\sin\theta)$:
$$\alpha_0=c,\qquad\alpha_k\in\{\cos(k\theta),\sin(k\theta)\}\ \mbox{ for }k\ge1.\tag{4.17}$$
Recall that the set $(\alpha_k)_{k\in\mathbb N}$ is a complete orthogonal system for $L^2(S^1)$, hence any function $v\in L^2(B)$ can be written as
$$v(r,\theta)=\sum_{k=0}^{+\infty}h_k(r)\alpha_k(\theta).\tag{4.18}$$
Moreover, if $v(r,\theta)$ is continuous at the origin, then
$$2\pi c\,v(0)=h_0(0)\ \mbox{ (where $c$ is the constant in (4.17))}\quad\mbox{and}\quad h_k(0)=0,\ \forall k\ge1.\tag{4.20}$$
Recall also that the eigenvalues of the Laplace-Beltrami operator $-\Delta_{S^1}$ on the unit sphere $S^1$ are the numbers $k^2$, $k\in\mathbb N$, that they have multiplicity 1 if $k=0$ and multiplicity 2 if $k\ge1$, and that the spherical harmonics $\alpha_k$ are the eigenfunctions associated to the eigenvalue $k^2$. For the negative eigenvalues of (4.1) we then have the following spectral decomposition into radial and angular part, where the angular part is given by the eigenvalues of $-\Delta_{S^1}$:

Lemma 4.4. For every $i\in\{1,\dots,m(u_p)\}$ there exists $(j,k)\in\{1,2\}\times\mathbb N$ such that
$$\beta_i(p)=\beta_{j,rad}(p)+k^2.\tag{4.21}$$
Conversely, for every $(j,k)\in\{1,2\}\times\mathbb N$ such that $\beta_{j,rad}(p)+k^2<0$ there exists $i\in\{1,\dots,m(u_p)\}$ ($i$ depending also on $p$) for which (4.21) holds.

Proof. Step 1. We show the first statement. By Lemma 4.1 and Lemma 4.2 the value $\beta_i(p)$, for any $i=1,\dots,m(u_p)$, is a (negative) eigenvalue for problem (4.1), and so there exists a function $\psi\neq0$ which satisfies (4.1) with $\beta=\beta_i(p)$. Decomposing $\psi$ along spherical harmonics (see (4.18), (4.19)), we write
$$\psi(r,\theta)=\sum_{k=0}^{+\infty}h_k(r)\alpha_k(\theta).$$
Then, since $\psi\neq0$ and $(\alpha_k)_k$ is a complete orthogonal system for $L^2(S^1)$, it follows that $h_k\neq0$ for some $k\in\mathbb N$; moreover $h_k$ satisfies the radial problem (4.24). Integrating the last term by parts, we get an equation with eigenvalue $\beta_i(p)-k^2\le\beta_i(p)<0$. Next we show that $h_k$ satisfies also the condition (4.25). Indeed, using (4.23), we get a first integral bound, where the last estimate follows from (4.1); in the same way we obtain the second one, showing (4.25). By Lemma 4.1, Lemma 4.2 and Lemma 4.3, problem (4.24)-(4.25) admits only two negative eigenvalues, which coincide with $\beta_{1,rad}(p)$ and $\beta_{2,rad}(p)$. Then (4.24)-(4.25) has a nontrivial solution $h_k$ (related to a negative eigenvalue) if and only if $\beta_{j,rad}(p)=\beta_i(p)-k^2$ for some $j=1,2$. This ends the proof of the existence of $(j,k)\in\{1,2\}\times\mathbb N$ which satisfies (4.21).

Step 2. We show the converse statement. Let $(j,k)\in\{1,2\}\times\mathbb N$ be such that $\beta_{j,rad}(p)+k^2<0$, let $\phi_j$ be an eigenfunction associated to the radial eigenvalue $\beta_{j,rad}(p)$ (which is simple in the space of the radial functions), and let $\alpha_k$ be an eigenfunction of $-\Delta_{S^1}$ associated to the eigenvalue $k^2$ (see (4.17)). Then an easy computation shows that the number $\beta_{j,rad}(p)+k^2$ is a negative eigenvalue for the weighted problem (4.1) with eigenfunction given by
$$\psi(r,\theta)=\phi_j(r)\alpha_k(\theta).\tag{4.28}$$
As a consequence, by Lemma 4.1 and Lemma 4.2, there exists $i\in\{1,\dots,m(u_p)\}$ for which (4.21) holds.

Step 3. We prove that the eigenspace of a negative eigenvalue $\beta(p)$ of problem (4.1) is spanned by the functions in (4.22). Let $m\in\mathbb N_0$ be the multiplicity of $\beta(p)$, so that there exists an index $\ell\in\mathbb N$, $\ell\ge1$, with $\beta(p)=\beta_\ell(p)=\dots=\beta_{\ell+m-1}(p)$ ($m$ is the number of subsequent indices $i$ in our notation).
By Step 1, for every $i=\ell,\dots,\ell+m-1$ there exists a couple $(j,k)\in\{1,2\}\times\mathbb N$ for which (4.21) holds (some of the couples may coincide). Then, considering the set
$$I:=\big\{(j,k)\in\{1,2\}\times\mathbb N\,:\,\beta(p)=\beta_{j,rad}(p)+k^2\big\},$$
as seen in Step 2 all the functions in (4.28) with $(j,k)\in I$ are eigenfunctions for (4.1). Observe that, since $\beta_{j,rad}(p)$ is simple in the space of radial functions and the $\alpha_k$ are the functions in (4.17), one obtains all the functions in (4.22), which are linearly independent. Last, we prove by contradiction that the eigenspace of $\beta(p)$ consists only of the functions in (4.22). So let us assume the existence of another eigenfunction $\psi\neq0$; then, similarly as in Step 1, we can write the decomposition (4.30). Since $\psi\neq0$, there exists $s\in\mathbb N$ such that $h_s\neq0$. Then, as in Step 1, one proves that for any $s$ such that $h_s\neq0$ there exists $t_s\in\{1,2\}$ such that
$$\beta(p)=\beta_{t_s,rad}(p)+s^2\quad\mbox{and}\quad h_s=\phi_{t_s}.\tag{4.31}$$
As a consequence (4.30) becomes a combination of the functions $\phi_{t_s}(r)\alpha_s(\theta)$, and the orthogonality condition (4.29) gives that, for any $(j,k)\in I$, either $s\neq k$ or, if $s=k$, then necessarily $t_s\neq j$; namely the couple $(t_s,s)\notin I$. Since (4.31) holds, this contradicts the definition of the set $I$.

Morse index and characterization of the degeneracy of $u_p$. In the next result we estimate the two negative radial eigenvalues of the auxiliary weighted eigenvalue problem (4.1) from above and from below by consecutive eigenvalues of $-\Delta_{S^1}$. The proof is based on the approximation of problem (4.1) by a family of weighted eigenvalue problems in annuli already studied in [DIP3]; in particular we exploit some previous estimates in [DIP3] related to the negative radial eigenvalues of this family of approximating auxiliary problems. As a consequence of our estimates we also get that the Morse index of $u_p$ is even for any $p>1$ and uniformly bounded in $p$. Moreover, the estimate of the two negative radial eigenvalues of (4.1) is the starting point for characterizing the degeneracy of $u_p$; this last result is contained in Proposition 4.7 at the end of the section.

Lemma 4.5. For any $p>1$ there exists $j=j(p)\in\mathbb N$, $j\ge2$, such that
$$-1\le\beta_{2,rad}(p)<0,\tag{4.32}$$
$$-j^2\le\beta_{1,rad}(p)<-(j-1)^2\tag{4.33}$$
and
$$m(u_p)=2j.\tag{4.34}$$
Moreover $j(p)\le C$ for any $p>1$, where the constant $C>0$ does not depend on $p$.

Proof. Let us consider an increasing family of annuli $A_n\subset B$, obtained by removing from $B$ a shrinking ball centered at the origin, and the weighted eigenvalue problem (4.36) on $A_n$; let us denote by $\beta^n_i(p)$, $i\in\mathbb N_0$, its eigenvalues, counted with their multiplicity, and by $\beta^n_{i,rad}(p)$ the radial eigenvalues, which are simple in the space of radial functions. Clearly the analogous variational characterizations (4.37)-(4.39) hold.

Step 1. We show that for any $p>1$ there exist a unique $j=j(p)\in\mathbb N$, $j\ge2$, and $\bar n_p\in\mathbb N_0$ such that
$$m(u_p)=2j\tag{4.40}$$
and, for $n\ge\bar n_p$, (4.41) and (4.42) hold. As already proved in [DIP3, Proposition 4.3], there exists $\bar n_p\in\mathbb N_0$ such that
$$2\overset{\text{Lemma 3.2}}{=}m_{rad}(u_p)=\#\{\mbox{negative eigenvalues }\beta^n_{i,rad}(p)\},\qquad n\ge\bar n_p,\tag{4.43}$$
namely
$$\beta^n_{1,rad}(p)<\beta^n_{2,rad}(p)<0\le\beta^n_{3,rad}(p)<\dots,\qquad n\ge\bar n_p,\tag{4.44}$$
where the strict inequalities are due to the fact that the radial eigenvalues are simple in the space of the radial functions. From [DIP3, Proposition 4.5] we also know that for any $p>1$ there exists $\tilde n_p\in\mathbb N_0$ such that
$$\beta^n_{2,rad}(p)>-1,\qquad\mbox{for any } n\ge\tilde n_p.\tag{4.45}$$
Hence (4.42) follows immediately from (4.44) and (4.45). In order to conclude the proof, observe that from [DIP3, Proposition 4.3] we also know (4.46). We show that, as a consequence of (4.46), using (4.44)-(4.45) and a spectral decomposition for the eigenvalues $\beta^n_i(p)$ in $A_n$, necessarily (4.40) and (4.41) hold as well.
Indeed, recall that for the eigenvalues in $A_n$ the spectral decomposition
$$\beta^n_i(p)=\beta^n_{j,rad}(p)+k^2\tag{4.47}$$
holds, where as before the $k^2$ are the eigenvalues of the Laplace-Beltrami operator on the unit sphere $S^1$, and the eigenfunctions $\psi$ associated to the eigenvalue $\beta^n_i(p)$ may be obtained by multiplying the spherical harmonics $\alpha_k$ associated to $k^2$ (given in (4.17)) by the radial $j$-th eigenfunction $\phi^n_j$ of problem (4.36), similarly as we already did in (4.28):
$$\psi(r,\theta)=\phi^n_j(r)\alpha_k(\theta).\tag{4.48}$$
By (4.47) and (4.44) it follows that the modes $k$ that contribute to the Morse index of $u_p$ are those satisfying (4.49). The case $j=2$ in (4.49) is possible only when $k=0$, by (4.45); hence by (4.48), and recalling that there is only one spherical harmonic for $k=0$ (see (4.17)), we get only one contribution to the Morse index in this case. The case $j=1$ gives instead one contribution (for $k=0$) and moreover, by (4.46), it must also give other contributions ($k\ge1$); as a consequence (4.41) holds. Hence by (4.48), and recalling that there are two spherical harmonics for $k\ge1$ and only one for $k=0$ (see (4.17)), we get in this case that the total contribution of $\beta^n_{1,rad}(p)$ to the Morse index is $2(j-1)+1$. Summing up all the contributions from both $j=1$ and $j=2$ we get (4.40).

Step 2. We show that for any $p>1$ the sequence $(\beta^n_{i,rad}(p))_n$ is monotone non-increasing and converges to $\beta_{i,rad}(p)$. The monotonicity of $(\beta^n_{i,rad}(p))_n$ follows from the variational characterization (4.38) of these eigenvalues and from the canonical embeddings of the corresponding spaces, for all $n\in\mathbb N_0$. By the monotonicity of $(\beta^n_{i,rad}(p))_n$ we can define the values
$$\delta_i(p):=\lim_{n\to+\infty}\beta^n_{i,rad}(p)\overset{\text{Step 1}}{<}0,\qquad i=1,2.\tag{4.50}$$
Then the proof of Step 2 consists in proving that
$$\delta_i(p)=\beta_{i,rad}(p),\qquad i=1,2.\tag{4.51}$$
Let $\phi^n_i$ be the radial eigenfunction of problem (4.36) corresponding to the radial eigenvalue $\beta^n_{i,rad}(p)$, normalized as in (4.53). By Step 1 we know in particular that $\beta^n_{i,rad}(p)<0$, so from (4.36) we get a uniform bound, from which it follows that the sequence $(\phi^n_i)_n$ is bounded in $H^1_0(B)$. Hence, up to a subsequence, $\phi^n_i\to\bar\phi_i$ as $n\to+\infty$, weakly in $H^1_0(B)$, strongly in $L^2(B)$ and pointwise a.e. in $B$; as a consequence (4.57) holds, $\bar\phi_i$ is radial and moreover, by (4.58), $\bar\phi_i\neq0$. Moreover there exists $C>0$, independent of $n\in\mathbb N$, such that
$$\int_B\frac{(\phi^n_i)^2}{|x|^2}\,dx\le C;\tag{4.60}$$
indeed, if by contradiction $\int_B\frac{(\phi^n_i)^2}{|x|^2}\,dx\to+\infty$, then by (4.55) and (4.57) we would reach a contradiction with (4.50). Observe that by the bounds in (4.56) and (4.57) and the estimate in (4.53) we also get that $\delta_i(p)>-\infty$. By (4.56) and (4.60) it follows that
$$\bar\phi_i\in H_{rad}.\tag{4.61}$$
Moreover (4.60) implies that, up to a subsequence, $\phi^n_i$ converges to $\bar\phi_i$ also weakly in $L^2_{\frac1{|x|^2}}(B)$. Multiplying (4.36) by $\varphi\in H$ and integrating on $A_n$, we obtain (4.62) (where the only boundary term is the one on the interior part of $\partial A_n$, since $\varphi(x)=0$ when $|x|=1$). Now, by the weak convergence of $\phi^n_i\to\bar\phi_i$ in $H^1_0(B)$ and in $L^2_{\frac1{|x|^2}}(B)$, and by (4.63), (4.64), (4.65), (4.66) and (4.67), passing to the limit in (4.62) we get (4.68); in particular, by (4.61) we can choose $\varphi=\bar\phi_i\neq0$ (by (4.58)), so, since $\delta_i(p)<0$, we deduce from (4.68) that the quadratic form evaluated at $\bar\phi_i$ is negative. As a consequence, since $m_{rad}(u_p)=2$, $\delta_i(p)$ must coincide with one of the two negative radial eigenvalues. Recall that $\phi_j$ is the function where the negative weighted radial eigenvalue $\beta_{j,rad}(p)$ is achieved for $j=1,2$; it satisfies (4.13) (so that $Q_p(\phi_j)<0$), with $\phi_1\ge0$ and $\phi_1\perp_H\phi_2$ (so that $\phi_2$ changes sign). Choosing the test function $\varphi=\phi_j$, $j=1,2$, in (4.68), and using the equation (4.13) for $\phi_j$, we also get that the only possibility is that there exists $\alpha\in\mathbb R$ such that $\bar\phi_i$ is a multiple of $\phi_1$ or of $\phi_2$, for $i=1,2$ (see (4.69)).
By (4.59) it follows that necessarily there exists $\alpha\in\mathbb R$ such that $\bar\phi_1=\alpha\phi_1$ and $\delta_1(p)=\beta_{1,rad}(p)$. Moreover (4.41) and (4.42), proved in Step 1, and the definition of $\delta_i(p)$ in (4.50) imply that $\delta_1(p)<-1\le\delta_2(p)$, hence $\delta_1(p)\neq\delta_2(p)$, and so by (4.69) necessarily there exists $\beta\in\mathbb R$ such that $\bar\phi_2=\beta\phi_2$ and $\delta_2(p)=\beta_{2,rad}(p)$, which concludes the proof of (4.51). (4.34) is the same as (4.40) in Step 1. Moreover, passing to the limit in (4.41) and (4.42), proved in Step 1, and using the results of Step 2, we obtain (4.33) and (4.32) respectively, where the strict inequalities are due to the monotonicity of the sequences $(\beta^n_{1,rad}(p))_n$ and $(\beta^n_{2,rad}(p))_n$. Last, we show that there exists $C>0$, independent of $p$, such that
$$-C\le\beta_{1,rad}(p)\ (<0)\qquad\mbox{for any } p>1,\tag{4.70}$$
from which the uniform bound on $j(p)$ follows; this concludes the proof. Let $\phi_p\in H$ be a function where $\beta_{1,rad}(p)$ is achieved; then by (4.13), choosing $v=\phi_p$, we get (4.71). We recall a pointwise estimate for $u_p$, for a certain $C>0$, which has been proved in [DIP2] (see property $(P^k_3)$ in [DIP2, Proposition 2.2], observing that in the radial case the origin is the only absolute maximum point of $|u_p|$ and that $k=1$ by [DIP2, Proposition 3.6]). The conclusion follows combining (4.72) with (4.71).

Remark 4.6. In the proof of Lemma 4.5 we introduced the auxiliary weighted eigenvalue problems (4.36) in the annuli $A_n$, $n\in\mathbb N_0$, and, as an intermediate step, we showed that for any fixed $p>1$ the sequence $(\beta^n_{i,rad}(p))_n$ of the $i$-th radial eigenvalues of these problems is monotone non-increasing, with limit given in (4.73). We stress that by the spectral decomposition in (4.47) we also get the non-increasing monotonicity of the sequence of the $j$-th eigenvalues $(\beta^n_j(p))_n$ of the problems (4.36). Moreover, combining (4.73), the spectral decomposition in (4.47) and Lemma 4.4, we also know the limit for the negative ones:
$$\beta^n_i(p)\to\beta_i(p)\qquad\mbox{as } n\to+\infty.$$
The auxiliary weighted eigenvalue problems (4.36) in the annuli $A_n$, $n\in\mathbb N_0$, are the same ones already introduced and studied in [DIP3] when computing the Morse index of $u_p$ for large $p$. Next we investigate the degeneracy of the solution $u_p$, for any $p>1$. This result will be useful to characterize the degeneracy of $u_p$ in the case of large $p$; moreover we will need it to identify the possible bifurcation points and to select the eigenfunctions related to them.

Proposition 4.7 (Characterization of degeneracy). For any $p\in(1,+\infty)$ let $j=j(p)\in\mathbb N$, $j\ge2$, be as in Lemma 4.5. The solution $u_p$ is degenerate if and only if
$$\beta_{1,rad}(p)+j^2=0\tag{4.74}$$
and/or
$$\beta_{2,rad}(p)+1=0.\tag{4.75}$$
Moreover, the space of the solutions to the linearized problem (3.7), at a value $p$ that satisfies (4.74) and/or (4.75), is spanned by
$$\phi_1(r)\cos(j\theta),\quad\phi_1(r)\sin(j\theta)\tag{4.76}$$
and/or
$$\phi_2(r)\cos\theta,\quad\phi_2(r)\sin\theta,\tag{4.77}$$
where $\phi_1,\phi_2$ are the eigenfunctions associated to the first and second radial eigenvalues $\beta_{1,rad}(p)$, $\beta_{2,rad}(p)$ respectively. Hence $\mathrm{Ker}(L_p)$ has dimension 0 when neither (4.74) nor (4.75) holds, dimension 2 when exactly one of (4.74), (4.75) holds, and dimension 4 when both (4.74) and (4.75) hold.

Proof. Step 1. We show that if $u_p$ is degenerate then (4.74) or (4.75) holds. If $u_p$ is degenerate, problem (4.78) admits a solution $v\neq0$, which is continuous in $B$ by elliptic regularity. Then we can decompose $v$ along spherical harmonics, namely for $k\in\mathbb N$ we consider the radial function $h_k$ defined as in (4.79), where $\alpha_k$ is an eigenfunction of $-\Delta_{S^1}$ associated to the eigenvalue $k^2$ (see (4.17)-(4.20)). Since $(\alpha_k)_k$ is a complete orthogonal system for $L^2(S^1)$ and $v\neq0$, necessarily $h_k\neq0$ for some $k\in\mathbb N$.
Moreover, similarly as in Step 1 of the proof of Lemma 4.4, it is easy to show that $h_k$, for these values of $k$, is a nontrivial solution to the problem (4.80). Observe that $k\ge1$, since $u_p$ is radially non-degenerate by Lemma 3.3, and (see (4.20)) one also has
$$h_k(0)=0.\tag{4.81}$$
Next we show that $h_k$ satisfies also the condition (4.82). Indeed, since $v\in H^1_0(B)$, we can argue as in the proof of (4.27) to get
$$\int_0^1 r\,(h_k)^2\,dr<+\infty\tag{4.83}$$
and moreover, using Proposition 2.3, we also have that $h_k(r)=O(r^k)$ as $r\to0$, which implies (4.84). By Lemma 4.1, Lemma 4.2 and Lemma 4.3, problem (4.80)-(4.83)-(4.84) admits only two negative eigenvalues, which coincide with $\beta_{1,rad}(p)$ and $\beta_{2,rad}(p)$. Hence we conclude that $h_k$ is nontrivial if and only if $\beta_{i,rad}(p)=-k^2$ for some $i=1,2$ and $k\ge1$. The equalities (4.74) and (4.75) then follow recalling Lemma 4.5.

Step 2. We show that if (4.74) or (4.75) holds then $u_p$ is degenerate. Consider the functions
$$v_{i,k}(r,\theta):=\phi_i(r)\alpha_k(\theta),\tag{4.85}$$
where $\phi_i$ is an eigenfunction associated to the radial eigenvalue $\beta_{i,rad}(p)$ and $\alpha_k$ is an eigenfunction of $-\Delta_{S^1}$ associated to the eigenvalue $k^2$ (see (4.17)). Then an easy computation shows that if (4.74) (resp. (4.75)) holds, then $v_{i,k}$ with $i=1$ and $k=j$ (resp. $i=2$ and $k=1$) solves (4.78). To prove that the space of solutions to (4.78) is spanned by the functions in (4.76) and/or (4.77), recall that $(\alpha_k)_k$ is an orthogonal basis of $L^2(S^1)$; hence any nontrivial solution $v$ of (4.78) may be written in $L^2(B)$ as
$$v(r,\theta)=\sum_{k=0}^{+\infty}h_k(r)\alpha_k(\theta)\tag{4.86}$$
with $h_k$ defined as in (4.79). Then the same arguments used in Step 1 imply that, when only (4.74) holds, $h_k=0$ for any $k\neq j$, and so (4.86) reduces to $v(r,\theta)=h_j(r)\alpha_j(\theta)$, with $h_j$ an eigenfunction associated to the radial eigenvalue $\beta_{1,rad}(p)$, namely $h_j=\phi_1$; similarly, when only (4.75) holds, $h_k=0$ for any $k\neq1$, and so (4.86) reduces to $v(r,\theta)=h_1(r)\alpha_1(\theta)$, where $h_1$ is now the eigenfunction associated to the radial eigenvalue $\beta_{2,rad}(p)$, namely $h_1=\phi_2$.

The case $p$ large
In [DIP3], exploiting the asymptotic analysis of $u_p$ for $p\to+\infty$, it has already been proved that:

Proposition 5.1. There exists $\bar p>1$ such that
$$m(u_p)=12\qquad\forall\,p\ge\bar p.\tag{5.1}$$

Moreover, one can also improve a partial result in [DIP3] about the asymptotic behavior of the first eigenvalues $(\beta^n_{1,rad}(p))_n$ of the auxiliary weighted problems (4.36) (cf. [DIP3, Theorem 6.1]) and deduce the following asymptotic result for the first eigenvalue $\beta_{1,rad}(p)$ in the ball (whose proof is sketched at the end of the section; see also [AG2] for a detailed proof):

Lemma 5.2.

Using the general analysis previously done in Section 4 (Lemma 4.5 and Proposition 4.7), combining it with Proposition 5.1 above and with the asymptotic result in Lemma 5.2, we completely characterize the degeneracy of the solution $u_p$ when $p$ is large. Our result reads as follows:

Proposition 5.3. There exists $p^\star>1$ such that for any $p\ge p^\star$
$$-36<\beta_{1,rad}(p)<-25,\qquad-1\le\beta_{2,rad}(p)<0,$$
and $u_p$ is degenerate if and only if
$$\beta_{2,rad}(p)+1=0,\tag{5.4}$$
in which case the kernel is spanned by $\phi_2(r)\cos\theta$ and $\phi_2(r)\sin\theta$, where $\phi_2$ is the eigenfunction associated to the second radial eigenvalue $\beta_{2,rad}(p)$. Hence $\mathrm{Ker}(L_p)$, for $p\ge p^\star$, has dimension 0 when (5.4) does not hold and dimension 2 when it holds.

Proof. The proof follows from Lemma 4.5 and Proposition 4.7, observing that by Proposition 5.1 $j(p)\equiv6$ for $p\ge\bar p$, and that moreover, by Lemma 5.2, there exists $p^\star\,(\ge\bar p)$ such that the equality $\beta_{1,rad}(p)=-36$ is never attained when $p\ge p^\star$. We conclude the section with:

Sketch of the proof of Lemma 5.2.
In [DIP3, Theorem 6.1] it has been proved that, if $\beta^n_1(p)$ is the first eigenvalue of the auxiliary weighted problem (4.36) in the annulus $A_n$, then there exists a sequence $n_p$, with $n_p\to\infty$ as $p\to\infty$, for which the limit (A.1) holds. Here we want to show that the proof for the annulus $A_{n_p}$ in [DIP3] can be adapted to the ball $B$, so that one gets the same asymptotic result for the first eigenvalue $\beta_1(p)$ in the ball. The proof of the convergence (A.1) in [DIP3] deeply relies on the study of the behavior of the radial solution $u_p$ as $p\to\infty$; it is quite long and involved and requires several steps, which we now retrace. Let us first recall that $u_p$ admits two limit problems: one obtained by rescaling $u_p$ with respect to its maximum point, which is $0$ (the scaling parameter in this case is $\varepsilon^+_p$, defined by $(\varepsilon^+_p)^{-2}=p\,u_p(0)^{p-1}$), and the second one obtained by rescaling $u_p$ with respect to its minimum point $s_p$ (with scaling parameter $\varepsilon^-_p$); see [GGP, Theorem 1] for the rigorous statement of the result. As in [DIP3], the idea is to consider the eigenvalue problem associated to $\beta_1(p)$, to rescale properly the first eigenfunction $\psi_{1,p}$ using the scaling parameters $\varepsilon^\pm_p$, and to pass to the limit in the rescaled equations. More precisely, similarly as in [DIP3], we obtain again that the rescaled eigenfunction $\psi^+_{1,p}(x):=\psi_{1,p}(\varepsilon^+_p x)$ vanishes, while the other rescaled eigenfunction $\psi^-_{1,p}(x):=\psi_{1,p}(\varepsilon^-_p x)$ converges (in some sense) to the first eigenfunction of a limit eigenvalue problem, whose first eigenvalue is exactly the limit value appearing in (A.1). One of the main points, crucial to pass to the limit in the rescaled equation and get the limit eigenvalue problem, is to prove the analogues of Lemmas 6.2, 6.3 and 6.4 of [DIP3], where some estimates on the first eigenvalue $\beta^{n_p}_1(p)$ and on the associated rescaled eigenfunction are obtained. Similar estimates can easily be obtained, exactly in the same way as in [DIP3], directly for the first rescaled eigenfunctions $\psi^\pm_{1,p}$ in the ball (without restricting to $A_{n_p}$), and they imply in turn the convergence
$$\psi^\pm_{1,p}\to C^\pm\psi^\pm_1\qquad\mbox{as } p\to\infty\tag{A.2}$$
in some sense (in particular uniformly on compact sets of $\mathbb R^2$), where $\psi^\pm_1$ are the first eigenfunctions of certain limit eigenvalue problems, associated respectively to eigenvalues $\beta^\pm_1$; in particular we can prove that $\beta^+_1=-1$. The other main point, following the proof of (A.1), is to show that $\psi^-_{1,p}$ does not vanish. This step requires a deep analysis of the behavior of the function $[0,1]\ni r\mapsto p|u_p(r)|^{p-1}r^2$ as $p\to\infty$; luckily this behavior does not depend on the annulus $A_{n_p}$, and this produces estimates in all of the ball $B$. As a consequence, we can follow step by step the proof of Proposition 6.6 in [DIP3], getting analogously a lower bound by some $K>0$. The rest of the proof then follows similarly as in [DIP3]; one can find all the details in [AG2].

6. The case $p$ close to 1
Let us fix some notation. We denote by $(\lambda_i)_i$ the sequence of the Dirichlet eigenvalues of $-\Delta$ in $B$, counted with their multiplicity, and we let $(\varphi_i)_i$ be a basis of eigenfunctions in $L^2(B)$ associated to the $\lambda_i$. We also denote by $(\lambda_{i,rad})_i$ and $(\varphi_{i,rad})_i$ the subsequences of the radial eigenvalues and eigenfunctions respectively (it is well known that the $\lambda_{i,rad}$ are simple in the space of radial functions and that $\varphi_{i,rad}$ has $i-1$ zeros).

Proposition 6.1. There exists $\delta>0$ such that
$$m(u_p)=6\qquad\forall\,p\in(1,1+\delta).\tag{6.1}$$

In order to obtain this result we need to analyze the behavior of the solution $u_p$ as $p$ is close to 1.
We will show that $u_p$ converges, as $p\to1$, to the second radial Dirichlet eigenfunction of $-\Delta$ in the ball $B$ (Lemma 6.4 below). Hence let us recall some useful results for the Dirichlet eigenvalues and for the second radial eigenfunction of $-\Delta$ in $B$.

6.1. Asymptotic behavior of $u_p$ as $p\to1$. We now analyze the asymptotic behavior of $u_p$ as $p\to1$. In particular we obtain an expansion of its $L^\infty$-norm up to the second order, which will be useful for the proof of Theorem 1.3 (see Proposition 10.5). Since this convergence holds for every subsequence, it holds for the whole sequence, concluding the proof.

6.2. Proof of Proposition 6.1. Using Lemma 6.4 and Lemma 6.3 we can finally prove Proposition 6.1.

Proof of Proposition 6.1. The proof of (6.1) consists in showing that, for $p$ sufficiently close to 1,
$$m(u_p)=m(\varphi_{2,rad})+1,\tag{6.24}$$
where $m(\varphi_{2,rad})=5$ by Lemma 6.3. We divide it into three steps. First observe that (6.25) holds for $\bar u_p$ defined from $u_p$ as in (6.12).

Step 1. We show that $m(u_p)\ge m(\varphi_{2,rad})+1$ for $p$ sufficiently close to 1. Let $Q_p:H^1_0(B)\to\mathbb R$ be the quadratic form in (3.4) and let us consider the first 5 Dirichlet eigenfunctions $\varphi_1,\dots,\varphi_5$ of $-\Delta$ in $B$ and the corresponding eigenvalues $\lambda_1,\dots,\lambda_5$. Then by (6.25) we have $Q_p(\varphi_i)<0$ for $i=1,\dots,5$ and $p$ sufficiently close to 1, since $\lambda_i<\lambda_{2,rad}$ by Lemma 6.3, where for the equality in $(\star)$ we have used (6.13) and the Lebesgue dominated convergence theorem, thanks to (6.12). Recalling that the eigenfunctions $\varphi_i$ are orthogonal in $L^2(B)$, and hence in $H^1_0(B)$, this means that the Morse index of $u_p$ is at least 5 for $p$ sufficiently close to 1. But from (4.40) in Lemma 4.5 we already know that $m(u_p)$ must always be even, so the Morse index of $u_p$ is at least 6 for $p$ sufficiently close to 1. Observe that the non-positive eigenvalue $\mu_i(p)$ is bounded for $p$ close to 1; indeed this follows from the standard variational characterization of $\mu_1(p)$ for $p$ close to 1. Let $p_n$ be a sequence converging to 1; then the eigenfunction $v_{i,n}$ satisfies (6.29). Moreover
$$\big\|\,p_n\|u_{p_n}\|_\infty^{p_n-1}|\bar u_{p_n}|^{p_n-1}v_{i,n}+\mu_i(p_n)v_{i,n}\big\|\le C,$$
and then, up to a subsequence, $v_{i,n}\to\bar\phi_i$ in $C(\overline B)$, where $\|\bar\phi_i\|_\infty=1$ by the uniform convergence; using (6.13) and (6.12), it follows that $\bar\phi_i$ solves the limit problem, where $\bar\mu_i=\lim_{n\to+\infty}\mu_i(p_n)\le0$. This means that $\bar\phi_i$ is an eigenfunction of the Laplace operator associated to the eigenvalue $\lambda_{2,rad}+\bar\mu_i$, namely there exists $j=1,2,\dots$ such that $\bar\mu_i=\lambda_j-\lambda_{2,rad}$ and $\bar\phi_i=C_j\varphi_j$, where $C_j=\pm\|\varphi_j\|_\infty^{-1}$. Since $\bar\mu_i\le0$, by Lemma 6.3 we necessarily have $j\in\{1,2,3,4,5,6\}$. Moreover, since the convergence in (6.26) holds for any subsequence, it also holds for the whole sequence. Last, we prove (6.28). Let $l\neq i$ be such that $\mu_l(p)\le0$. We can take $v_{l,p}$ orthogonal in $L^2(B)$ to $v_{i,p}$. The uniform convergence in $\overline B$ then implies that $j(i)\neq j(l)$.

Lemma 7.1. The maps $p\mapsto\beta_{i,rad}(p)$ are analytic in $p$ and the sets of degenerate points in (7.1), when not empty, consist only of isolated points. Moreover $P_j\neq\emptyset$ for $j=3,4,5$, and there exists an odd number $s_j\,(\ge1)$ of isolated values $p^j_1,\dots,p^j_{s_j}\in(1+\delta,p^\star)$ (where $\delta$ and $p^\star$ are as in Proposition 6.1 and Proposition 5.3 respectively) such that
$$P_j=\{p^j_1,\dots,p^j_{s_j}\},\qquad j=3,4,5.$$

Proof. In [D2] it is proved that, for any smooth bounded domain $\Omega\subset\mathbb R^2$ and for any $p>1$ except possibly for isolated values of $p$, the equation $-\Delta u=u^p$ in $\Omega$, $u=0$ on $\partial\Omega$, has a non-degenerate positive solution.
The proof relies on the fact that the map $(u,p)\mapsto(-\Delta)^{-1}(u^p)$ is real analytic when considered in a suitable cone of positive weighted functions. This proof cannot be directly applied to sign-changing solutions, and so we need to adapt the proof of the analyticity for sign-changing radial fast-decay solutions in the exterior of the ball used in [DW], which holds in $\mathbb R^N$ with $N\ge3$. Following [DW] we let $\tilde w_p(s)=r^{\frac{2}{p-1}}u_p(r)$, for $r=e^s$. This function satisfies (7.2). We consider, for $z>0$, the rescaled function $w(t)=\tilde w_p(z^{-1}t)$, which satisfies (7.4) in $(-\infty,0)$ with the boundary conditions in (7.3). We let $s_1$ be the unique zero of $w(t)$ in $(-\infty,0)$ and we consider problem (7.4) in one of the intervals $(-\infty,s_1)$ or $(s_1,0)$ with Dirichlet boundary conditions (also at infinity). Of course, $r_1=e^{z^{-1}s_1}$ is the unique zero of $u_p$. Problem (7.4) is equivalent to (7.5), where $\Omega_1=B(0,e^{z^{-1}s_1})$ or $\Omega_2=B\setminus B(0,e^{z^{-1}s_1})$ and $u$ is radial. The result of Dancer for positive solutions in [D2] then implies that the positive solutions $w^1_{z,p}$ and $w^2_{z,p}$ of (7.4), in $(-\infty,s_1)$ and $(s_1,0)$ respectively, depend analytically on $p$ and $z$. Lastly, following the proof of Lemma 3.2, part c), in [DW], one can show the existence of $z_p$, close to 1 and analytic in $p$, such that the resulting glued function is $C^1$ at $s=z_p^{-1}s_1$. This proves that $p\mapsto u_p$ is analytic. The fact that $u_p$ is analytic with respect to $p$ implies that the eigenvalues $\beta_{1,rad}(p)$, $\beta_{2,rad}(p)$ are analytic [K2]. Moreover, by (5.2) and (6.4) it follows that $p\mapsto\beta_{1,rad}(p)$ is not constant in $(1,+\infty)$, and so the solutions $p\in(1,+\infty)$ of $\beta_{1,rad}(p)=-j^2$ are isolated and can accumulate only at $+\infty$. By (5.3) and (6.5), instead, either $p\mapsto\beta_{2,rad}(p)$ is constant and there are no solutions $p\in(1,+\infty)$ of $\beta_{2,rad}(p)=-1$, or it is not constant in $(1,+\infty)$, and in this case the solutions $p\in(1,+\infty)$ of $\beta_{2,rad}(p)=-1$ are isolated and can accumulate only at $+\infty$. Finally, (5.2) and (6.4) also imply that $\beta_{1,rad}(p)+j^2$ changes sign for some $p\in(1+\delta,p^\star)$ (precisely at an odd number of values of $p$) when $j=3,4,5$.

Morse index and degeneracy in symmetric functions spaces
To prove the bifurcation result in Theorem 1.2, and also to prove Theorem 1.3, we need to introduce some spaces of symmetric functions. To this end we let $O(2)$ be the orthogonal group in $\mathbb R^2$, $O_k\subset O(2)$, for $k\in\mathbb N_0$, be the subgroup of rotations of angle $\frac{2\pi}{k}$, and $\tau\in O(2)$ be the reflection with respect to the $x$-axis, i.e. $\tau(x,y)=(x,-y)$ for any $(x,y)\in\mathbb R^2$. For any $k\in\mathbb N_0$ we denote by $G_k\subset O(2)$ the subgroup generated by the elements of $O_k$ and by $\tau$, and we set
$$H^1_{0,k}(B):=\{v\in H^1_0(B)\,:\,v(g(x))=v(x)\ \mbox{ for any } g\in G_k\}.$$
The functions in the spaces $H^1_{0,k}(B)$ clearly possess the following invariances (in polar coordinates $(x,y)=(r\cos\theta,r\sin\theta)$):
$$v(r,\theta)=v(r,-\theta)=v\Big(r,\theta+\frac{2\pi}{k}\Big)$$
for every $r\in(0,1]$ and every $\theta\in[0,2\pi]$. Note that in general $\theta+\frac{2\pi}{k}\notin[0,2\pi]$; if this occurs we mean that $v(r,\theta)=v(r,\theta+\frac{2\pi}{k}-2\pi)$, and we argue similarly in the analogous cases. Observe that when $k=1$ then $O_1$ is the trivial subgroup of $O(2)$ given by the identity map, and the functions in $H^1_{0,1}(B)$ are only invariant under the reflection $\tau$. Clearly the radial solution $u_p\in H^1_{0,k}(B)$ for every $k\in\mathbb N_0$. As a consequence, letting as before $(\mu_i(p))_{i\in\mathbb N_0}$ be the sequence of the eigenvalues of the linearized operator $L_p$ at $u_p$ (see Section 3), we can consider its subsequence $(\mu_{i,k}(p))_{i\in\mathbb N_0}$ of the $G_k$-symmetric eigenvalues (i.e. eigenvalues associated to an eigenfunction belonging to $H^1_{0,k}(B)$), for any $k\in\mathbb N_0$.
8. Morse index and degeneracy in symmetric function spaces

To prove the bifurcation result in Theorem 1.2, and also to prove Theorem 1.3, we need to introduce some spaces of symmetric functions. To this end we let $O(2)$ be the orthogonal group in $\mathbb{R}^2$, $O_k \subset O(2)$, for $k \in \mathbb{N}_0$, be the subgroup of rotations of angle $\frac{2\pi}{k}$, and $\tau \in O(2)$ be the reflection with respect to the $x$-axis, i.e. $\tau(x, y) = (x, -y)$ for any $(x, y) \in \mathbb{R}^2$. For any $k \in \mathbb{N}_0$, we denote by $G_k \subset O(2)$ the subgroup generated by the elements of $O_k$ and by $\tau$ (8.1), and by $H^1_{0,k}(B)$ the corresponding subspace of $G_k$-invariant functions of $H^1_0(B)$ (8.2). The functions in the spaces $H^1_{0,k}(B)$ clearly possess the following invariances (in polar coordinates $(x, y) = (r\cos\theta, r\sin\theta)$):
\[
v(r, \theta) = v\Big(r, \theta + \tfrac{2\pi}{k}\Big) = v(r, -\theta)
\]
for every $r \in (0, 1]$ and every $\theta \in [0, 2\pi]$. Note that in general $\theta + \frac{2\pi}{k} \notin [0, 2\pi]$; if this occurs we mean that $v(r, \theta) = v(r, \theta + \frac{2\pi}{k} - 2\pi)$, and similarly when $-\theta \notin [0, 2\pi]$. Observe that when $k = 1$, $O_1$ is the trivial subgroup of $O(2)$ given by the identity map, and the functions in $H^1_{0,1}(B)$ are only invariant under the reflection $\tau$. Clearly the radial solution $u_p$ belongs to $H^1_{0,k}(B)$ for every $k \in \mathbb{N}_0$. As a consequence, letting as before $(\mu_i(p))_{i \in \mathbb{N}_0}$ be the sequence of eigenvalues of the linearized operator $L_p$ at $u_p$ (see Section 3), we can consider its subsequence $(\mu_{i,k}(p))_{i \in \mathbb{N}_0}$ of the $G_k$-symmetric eigenvalues (i.e. eigenvalues associated to an eigenfunction belonging to $H^1_{0,k}(B)$) for any $k \in \mathbb{N}_0$, which can be characterized variationally in terms of the usual Rayleigh quotient $R_p$ as in (3.3), restricted to $H^1_{0,k}(B)$. By the principle of symmetric criticality, the functions $v_i$ that attain $\mu_{i,k}(p)$ are indeed solutions of the eigenvalue problem associated to the linearized operator, and they are invariant under the action of $G_k$. It is known that $\mu_{1,k}(p) = \mu_{1,\mathrm{rad}}(p) = \mu_1(p)$ for any $k \in \mathbb{N}_0$, since $v_1$ is a radial function. We then define the $k$-Morse index of $u_p$, denoted by $m_k(u_p)$, as the number of negative $G_k$-symmetric eigenvalues $\mu_{i,k}(p)$ of $L_p$, counted with multiplicity. To compute the $k$-Morse index of $u_p$, the following result, analogous to the one in Lemma 4.2, is useful:

Lemma 8.1. The $k$-Morse index of $u_p$ coincides with the number of negative $G_k$-symmetric eigenvalues of the weighted problem (4.1), counted according to their multiplicity.

The proof of the previous result is an easy adaptation of the arguments in [GGN, Lemma 2.6] and relies on the variational characterization of the negative $G_k$-symmetric eigenvalues of the weighted problem (4.1) (i.e. the eigenvalues whose eigenfunctions belong to $H^1_{0,k}(B)$). Indeed, observe that they are a subsequence of the eigenvalues of the weighted problem (4.1) and that, as we have already seen in Section 4.1, they can be variationally characterized exactly when they are negative. More precisely, by the principle of symmetric criticality, we can now restrict to the subspace $H_k$ of the $G_k$-symmetric functions of $H$ ($H_k \subset H^1_{0,k}(B)$) and define $\beta_{1,k}(p)$ and, if $\beta_{j,k}(p) < 0$ for $j = 1, \dots, i-1$, the successive values $\beta_{i,k}(p)$ as in (8.6)–(8.7), where $\phi_j \in H_k$ is the function at which $\beta_{j,k}(p)$ is achieved, for $j = 1, \dots, i-1$, and solves the weighted problem. Similarly as in Lemma 4.1, one can then prove the following variational characterization, which gives the characterization of the $k$-Morse index in Lemma 8.1 above:

Lemma 8.2. The negative $G_k$-symmetric eigenvalues of problem (4.1) coincide with the negative numbers $\beta_{i,k}(p)$ in (8.6)–(8.7). Moreover the corresponding eigenfunctions, which solve (4.1), are in $H_k$ and can be chosen to be orthogonal in the sense of (4.2).
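Since Remark 8.3 below rests entirely on the spectral decomposition of the weighted problem, let us record it in formulas; the singular weight $|x|^{-2}$ is our reading of problem (4.1), consistent with the decomposition that follows. Separating variables as $\phi = f(r)\cos(j\theta)$ or $\phi = f(r)\sin(j\theta)$ in
\[
-\Delta \phi - p\,|u_p|^{p-1}\phi = \frac{\beta}{|x|^2}\,\phi \ \text{ in } B, \qquad \phi = 0 \ \text{ on } \partial B,
\]
shows that every negative eigenvalue splits as
\[
\beta = \beta_{n,\mathrm{rad}}(p) + j^2, \qquad n \in \{1, 2\}, \ j \ge 0,
\]
with eigenfunctions $\phi_n(r)\cos(j\theta)$ and $\phi_n(r)\sin(j\theta)$, exactly as in (8.9).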
Remark 8.3 ($G_k$-invariance of the eigenfunctions). Recall that, according to the spectral decomposition result in Lemma 4.4 and using Lemma 4.3, we can decompose the negative eigenvalues of the weighted problem (4.1) as
\[
\beta_{n,\mathrm{rad}}(p) + j^2 < 0 \tag{8.9}
\]
for some $n = 1, 2$ and some $j \in \mathbb{N}$, where $\beta_{n,\mathrm{rad}}(p)$ are the negative radial weighted eigenvalues as defined in Section 4.1. Moreover the eigenfunctions associated to each $(n, j) \in \{1, 2\} \times \mathbb{N}$ in the decomposition (8.9) are explicitly known by Lemma 4.4; indeed they are $\phi_n(r)\cos(j\theta)$ and $\phi_n(r)\sin(j\theta)$, where $\phi_n(r)$ is a radial eigenfunction associated to the simple radial eigenvalue $\beta_{n,\mathrm{rad}}(p)$. Recall also that, by (4.22), the eigenspace related to each negative eigenvalue of problem (4.1) is generated by these eigenfunctions, with $(n, j)$ varying among all the possible associated decompositions. Hence the $G_k$-invariance of the eigenfunctions is known; precisely one has that:
a) for $j = 0$, the eigenvalues $\beta_{1,\mathrm{rad}}(p) < \beta_{2,\mathrm{rad}}(p) < 0$ are simple in the space of radial functions and each one produces one radial eigenfunction $\phi_n$ ($n = 1, 2$ respectively) of problem (4.1), which belongs to $H^1_{0,k}(B)$ for every $k \ge 1$;
b) for every $j \ge 1$, the eigenfunction $\phi_n(r)\sin(j\theta)$ does not belong to any space $H^1_{0,k}(B)$, $k \ge 1$ (since the reflection $\tau \in G_k$);
c) for every $j \ge 1$, the eigenfunction $\phi_n(r)\cos(j\theta)$ is in $H^1_{0,j}(B)$;
d) for every $j \ge 2$, the eigenfunction $\phi_n(r)\cos(j\theta)$ also belongs to the spaces $H^1_{0,k}(B)$ such that $k \in \mathbb{N}_0$ is a factor of $j$ (we write $k \mid j$); in particular it always belongs to $H^1_{0,1}(B)$, while it does not belong to the spaces $H^1_{0,k}(B)$ when $k \in \mathbb{N}_0$ is not a factor of $j$.

In the next section we will use the following result:

Lemma 8.4. Let $p \in (1, +\infty)$. The linearized operator $L_p$ has a negative eigenvalue with eigenfunction in $H^1_{0,k}(B) \setminus H^1_{0,\mathrm{rad}}(B)$ if and only if
\[
\beta_{1,\mathrm{rad}}(p) + k^2 < 0 . \tag{8.10}
\]

Proof. Lemma 8.1 implies that $L_p$ has a negative eigenvalue in $H^1_{0,k}(B) \setminus H^1_{0,\mathrm{rad}}(B)$ if and only if the weighted problem (4.1) has a negative eigenvalue in the space $H_k \setminus H_{\mathrm{rad}}$. By the spectral decomposition given in Lemma 4.4, when (8.10) holds, problem (4.1) has the negative eigenvalue $\beta(p) = \beta_{1,\mathrm{rad}}(p) + k^2$ with corresponding eigenfunctions $\phi_1(r)\sin(k\theta)$ and $\phi_1(r)\cos(k\theta)$, the second of which belongs to $H_k \setminus H_{\mathrm{rad}}$. When instead $\beta_{1,\mathrm{rad}}(p) + k^2 \ge 0$, the negative eigenvalues of problem (4.1) are: $\beta_{i,\mathrm{rad}}(p)$, for $i = 1, 2$, with corresponding eigenfunctions $\phi_i(r) \in H_{\mathrm{rad}}$, so that they do not belong to $H_k \setminus H_{\mathrm{rad}}$; and $\beta_{1,\mathrm{rad}}(p) + j^2$ for some $j \in \{1, \dots, k-1\}$, with corresponding eigenfunctions $\phi_1(r)\sin(j\theta)$ and $\phi_1(r)\cos(j\theta)$, neither of which belongs to $H_k$, since $j < k$, by Remark 8.3. This means that when (8.10) is not satisfied the linearized operator does not admit any negative eigenvalue in $H^1_{0,k}(B) \setminus H^1_{0,\mathrm{rad}}(B)$, concluding the proof.

By exploiting the information about the location of the weighted radial eigenvalues $\beta_{n,\mathrm{rad}}(p)$, $n = 1, 2$, obtained in the previous sections, we can also derive information about the $k$-Morse index of the radial solution $u_p$, which will be useful to prove the non-radial part of Theorem 1.3 (see Section 10). Indeed, using the results in Section 5 and Section 6, we can explicitly compute the $k$-Morse index of $u_p$ for $p$ large enough and for $p$ close to $1$, respectively:

Proposition 8.5. Let $p^\star > 1$ be as in Proposition 5.3. Then for any $p \ge p^\star$
\[
m_k(u_p) = \begin{cases}
7 & \text{for } k = 1 \\
4 & \text{for } k = 2 \\
3 & \text{for } k = 3, 4, 5 \\
2 & \text{for } k \ge 6
\end{cases} \tag{8.11}
\]

Proof. By Lemma 8.1, in order to compute $m_k(u_p)$ we have to count the linearly independent eigenfunctions of the weighted problem (4.1) which are associated to a negative eigenvalue and belong to the symmetric space $H^1_{0,k}(B)$. From Proposition 5.3 we know that for $p \ge p^\star$ it holds that $-36 < \beta_{1,\mathrm{rad}}(p) < -25$ and $-1 \le \beta_{2,\mathrm{rad}}(p) < 0$. Then all the negative eigenvalues are given by (8.9) with
\[
j = 0 \ \text{ for } n = 2, \qquad j = 0, 1, \dots, 5 \ \text{ for } n = 1 .
\]
The conclusion follows again by Remark 8.3.
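As a sanity check on (8.11) (this bookkeeping is ours, not part of the original proof), combining the proof above with Remark 8.3, for each $k$ one always has the two radial eigenfunctions $\phi_1, \phi_2$ ($j = 0$), plus one eigenfunction $\phi_1(r)\cos(j\theta)$ for each $1 \le j \le 5$ with $k \mid j$:
\[
m_1(u_p) = 2 + \#\{1,2,3,4,5\} = 7, \qquad
m_2(u_p) = 2 + \#\{2,4\} = 4, \qquad
m_k(u_p) = 2 + \#\{k\} = 3 \ \ (k = 3,4,5), \qquad
m_k(u_p) = 2 \ \ (k \ge 6).
\]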
Finally we can characterize the degeneracy of $u_p$ in the symmetric spaces. We know from Proposition 4.7 that $u_p$ is degenerate if and only if $\beta_{1,\mathrm{rad}}(p) + j^2 = 0$ for some $j = j(p) > 1$, or $\beta_{2,\mathrm{rad}}(p) + 1 = 0$, and these equalities can hold at the same time. As we will see in the next result, the restriction to the symmetric spaces on one side rules out the degeneracy due to the second case, and on the other reduces the kernel of $L_p$ to being one-dimensional.

Proposition 8.7 (Characterization of degeneracy in $H^1_{0,k}(B)$). Let $\delta > 0$ and $p^\star > 1$ be as in Proposition 6.1 and Proposition 5.3, respectively. Let $k \in \mathbb{N}_0$.
i) If $p \in (1, 1+\delta)$ then $u_p$ is non-degenerate in $H^1_{0,k}(B)$ for any $k \ge 1$;
ii) if $p \ge p^\star$ then $u_p$ is non-degenerate in $H^1_{0,k}(B)$ for any $k \ge 2$;
iii) if $p \in (1+\delta, p^\star)$ then $u_p$ is degenerate in $H^1_{0,k}(B)$, for $k \ge 2$, if and only if there exists $j \ge 2$ such that $\beta_{1,\mathrm{rad}}(p) = -j^2$ and $k \mid j$. In this case the kernel of $L_p$ in $H^1_{0,k}(B)$ is one-dimensional and it is spanned by the function $\phi_1(r)\cos(j\theta)$.

Proof. i) is obvious, since $u_p$ is non-degenerate in $H^1_0(B)$ when $p \in (1, 1+\delta)$ (Proposition 6.1). ii) follows from the characterization of the degeneracy of $u_p$ in $H^1_0(B)$ for $p$ large. Indeed, when $u_p$ is degenerate in $H^1_0(B)$, by Proposition 5.3 the kernel of $L_p$ is spanned by the two functions $\phi_2(r)\sin\theta$ and $\phi_2(r)\cos\theta$, and neither of the two belongs to $H^1_{0,k}(B)$ when $k \ge 2$. iii) follows from the characterization of the degeneracy of $u_p$ in $H^1_0(B)$ given in Proposition 4.7. Indeed, observe that the solutions $v$ of the linearized equation in (4.77) do not belong to $H^1_{0,k}(B)$ when $k \ge 2$; hence $\mathrm{Ker}(L_p) \neq \{0\}$ in $H^1_{0,k}(B)$ if and only if $p$ satisfies the equation (4.74). To conclude, let us recall that in this case $\mathrm{Ker}(L_p)$ is spanned by the functions $\phi_1(r)\sin(j\theta)$ and $\phi_1(r)\cos(j\theta)$ (see (4.76)), and that $\phi_1(r)\sin(j\theta) \notin H^1_{0,k}(B)$ for $k \ge 2$, while $\phi_1(r)\cos(j\theta) \in H^1_{0,k}(B)$ for any $k \mid j$.

Remark 8.8 (Odd change in the $k$-Morse index). From Proposition 8.7-iii), Lemma 8.1 and the usual spectral decomposition of the negative eigenvalues of the weighted problem (4.1), it follows that $p \in (1, +\infty)$ is a value at which the $k$-Morse index $m_k(u_p)$, $k \ge 2$, changes if and only if there exists $j \ge 2$ such that $k \mid j$ and $p \in P_j$, where $P_j$ is defined in (7.2). Moreover the change in the $k$-Morse index is always odd (precisely $\pm 1$). In particular, a sufficient condition for $p$ to be a $k$-Morse index odd changing point is that $p \in P_k$.

9. The bifurcation result

In this section we will find nonradial solutions to (1.1) bifurcating from the curve of radial solutions $(p, u_p)$, looking for fixed points of the operator $T(p, u) := (-\Delta)^{-1}(|u|^{p-1}u)$, defined for $u$ in a space of $C^{1,\alpha}(\bar B)$ functions vanishing on $\partial B$ (cf. the proof of Lemma 9.2 below). We will restrict to the $G_k$-invariant functions introduced in Section 8; in particular, let us define the spaces $X_k$ of $G_k$-invariant functions of this ambient space, where $H^1_{0,k}(B)$ is the symmetric space in (8.2). We also set $X_{\mathrm{rad}}$ to be the corresponding subspace of radial functions. Obviously $u_p \in X_{\mathrm{rad}} \subset X_k$, for every $p \in (1, \infty)$ and for every $k \ge 1$. We will look for solutions in $X_k$ which bifurcate at some point $(p_k, u_{p_k})$. Proposition 8.7-iii) characterizes the values of $p$ at which $u_p$ is degenerate in $X_k$; we will show bifurcation for any $p$ in the subset $P_k$ (see (7.2)) of degenerate values, for $k = 3, 4, 5$. Observe that for any fixed $p$ the operator $T(p, \cdot)$ is compact and continuous in $p$, and that its restriction to the subspaces $X_k$, $k \ge 2$, is still compact (and continuous in $p$). In particular, we will prove that the continuum of bifurcating solutions belongs to $X_k \setminus X_j$, for all $j > k$, as long as the solutions are non-radial, thus separating the branches related to different values of $k$. In order to get this property we restrict the operator $T$ to suitable cones $K_k$ in $X_k$, defined, similarly as in [D1], by imposing some angular monotonicity on the $G_k$-symmetric functions, as reconstructed below.
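Following [D1], a plausible form of the cone is the following sketch; the choice of the fundamental sector $(0, \pi/k)$ and the sign of the angular derivative are our assumptions, suggested by the sector used in the proof of Lemma 9.2 below:
\[
K_k := \Big\{ v \in X_k \;:\; v_\theta(r, \theta) \le 0 \ \text{ for } r \in (0, 1), \ \theta \in \big(0, \tfrac{\pi}{k}\big) \Big\}.
\]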
Hence for $k \in \mathbb{N}_0$ we take the cone $K_k$ as above, where $v_\theta$ denotes the derivative with respect to the angle $\theta$ of the polar coordinates. By definition $X_{\mathrm{rad}} \subset K_k \subset X_k$ for any $k \ge 1$, and the monotonicity in the definition implies the following separation property:
\[
K_k \cap K_j \subseteq X_{\mathrm{rad}} \qquad \text{for } k \neq j, \tag{9.5}
\]
which will be crucial in order to separate the branches. The complete statement of our bifurcation result is contained in Theorem 9.1 below, which is the main result of the section; Theorem 1.2 in the introduction follows from it. Let $P_k$, $k \in \mathbb{N}_0$, be the subset of degenerate exponents defined in (7.2). By Lemma 7.1 we know that $\emptyset \neq P_k = \{p^k_1, \dots, p^k_{s_k}\}$ when $k = 3, 4, 5$ (where $s_k \ge 1$ is an odd integer). We then have:

Theorem 9.1. The points $(p^k_h, u_{p^k_h})$, $h \in \{1, \dots, s_k\}$, for $k = 3, 4, 5$, are nonradial bifurcation points from the curve of radial solutions $(p, u_p)$, and the bifurcating solutions belong to the cone $K_k$. The bifurcation is global and the Rabinowitz alternative holds. Moreover, for every $k = 3, 4, 5$ there exists at least one exponent $p_k \in \{p^k_1, \dots, p^k_{s_k}\}$ such that, letting $\mathcal{C}_k$ be the continuum that branches out of $(p_k, u_{p_k})$, either it is unbounded in $(1, +\infty) \times K_k$ or it intersects $\{1\} \times K_k$. Finally, $\mathcal{C}_k \cap \mathcal{C}_j \subset X_{\mathrm{rad}}$ for any $j = 3, 4, 5$, $j \neq k$.

The proof of Theorem 9.1 can be found at the end of the section. The core of the proof consists in getting bifurcation at the degenerate points at which there is a change in the fixed point index of $T(p, \cdot)$ at $u_p$ relative to the cone $K_k$ (an index introduced in [D]). These degenerate points $(p, u_p)$ are given by any $p \in P_k$ (see Proposition 9.4). Observe that at $p \in P_k$ also the $k$-Morse index of $u_p$ has an (odd) change (see Remark 8.8). First we show:

Lemma 9.2. The operator $T(p, \cdot)$ maps $X_k$ into $X_k$ and, in particular, $K_k$ into $K_k$.

Proof. Let $w \in X_k$ and let $z = T(p, w)$. Since $w \in C^{1,\alpha}(\bar B)$, we have $z \in C^{3,\alpha}(\bar B)$ and, by definition of $T$, it is a classical solution to
\[
-\Delta z = |w|^{p-1} w \ \text{ in } B, \qquad z = 0 \ \text{ on } \partial B. \tag{9.6}
\]
Let $\tilde z(x) = z(g(x))$, for $g \in G_k$. Then $\tilde z$ is a solution to (9.6), because $w \in X_k$ and $-\Delta$ is invariant under the action of $G_k$. This implies $\tilde z = z$, so that $z \in X_k$. It remains to show that, when $w \in K_k$, the monotonicity assumption on $w$ is also preserved by $T$. Since $z \in C^{3,\alpha}(\bar B)$, we can compute $z_\theta = \frac{\partial z}{\partial \theta}$ and, letting $w_\theta = \frac{\partial w}{\partial \theta}$, we have that $z_\theta$ is a classical solution to $-\Delta z_\theta = p|w|^{p-1} w_\theta$ in $(0, 1) \times (0, \frac{\pi}{k})$, with $z_\theta(1, \theta) = 0$ on $\partial B$.

When $u_p$ is an isolated fixed point of $T(p, \cdot)$ we can consider its index relative to the cone $K_k$ (see [D]), which we denote by $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$. We can compute $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$ when $u_p$ is non-degenerate in $X_k$. In this case the characterization in Proposition 8.7-iii) implies in particular that $\beta_{1,\mathrm{rad}}(p) + k^2 \neq 0$; we then have:

Lemma 9.3. Let $k \ge 2$ and let $p$ be such that $u_p$ is non-degenerate in $X_k$. Then
\[
\mathrm{ind}_{K_k}(T(p, \cdot), u_p) = \begin{cases} 0 & \text{if } \beta_{1,\mathrm{rad}}(p) + k^2 < 0 \\ 1 & \text{if } \beta_{1,\mathrm{rad}}(p) + k^2 > 0 \end{cases}
\]

Proof. By Lemma 9.2 we can consider the operator $T$ restricted to the space $X_k$, namely $T : (1, +\infty) \times X_k \to X_k$ for some $k \ge 2$. Let us denote by $T_u$ the Fréchet derivative of $T$ with respect to $u$. Since $u_p$ is non-degenerate in $X_k$, then $I - T_u(p, u_p) : X_k \to X_k$ is invertible. We can then apply Theorem 1 in [D], getting that
\[
\mathrm{ind}_{K_k}(T(p, \cdot), u_p) = \begin{cases} 0 & \text{if } T_u \text{ has the property } \alpha \\ \mathrm{ind}_{X_k}(T_u(p, u_p), 0) & \text{if } T_u \text{ does not have the property } \alpha \end{cases} \tag{9.7}
\]
where we refer to [D] for the definition of the property $\alpha$.
Moreover, since $u_p$ is isolated in $X_k$ (again by its non-degeneracy) and since $I - T_u(p, u_p)$ is invertible, we have
\[
\mathrm{ind}_{X_k}(T_u(p, u_p), 0) = \deg(I - T_u(p, u_p), U_r(u_p), 0), \tag{9.8}
\]
where $\deg$ is the usual Leray–Schauder degree in the Banach space $X_k$, $U_r(u_p) := \{w \in X_k : \|u_p - w\| < r\}$, and the last equality follows from standard results on the Leray–Schauder degree of linear, compact, invertible maps (see for instance [AM]). The characterization of the degeneracy in $X_k$ (see Proposition 8.7-iii)) implies in particular that $\beta_{1,\mathrm{rad}}(p) + k^2 \neq 0$ at the non-degenerate point $p$; the rest of the proof is devoted to showing that $T_u$ has the property $\alpha$ if and only if
\[
\beta_{1,\mathrm{rad}}(p) + k^2 < 0 . \tag{9.9}
\]
Indeed, in this case (9.7) and (9.8) imply the result, since by Lemma 8.4 and Lemma 3.2 one has $m_k(u_p) = 2$ when $\beta_{1,\mathrm{rad}}(p) + k^2 > 0$. The property $\alpha$ in (9.7) is stated in [D, Lemma 2]. Following the same notation, we have that the linear map $T_u(p, u_p)$ has the property $\alpha$ if and only if (Lemma 2-(a) of [D]) the spectral radius of $T_u(p, u_p)$ is greater than $1$ when restricted to the orthogonal complement of $X_{\mathrm{rad}}$ in $X_k$, which we denote by $X^\perp_{\mathrm{rad}}$ (observe that in our case the subspace $S_{u_p}$ of [D] is $X_{\mathrm{rad}}$). Equivalently, as observed also in [D1, proof of Theorem 1], $T_u(p, u_p)$ has the property $\alpha$ if and only if there exist $t \in (0, 1)$ and $h \in X^\perp_{\mathrm{rad}}$ such that $h = t\,T_u(p, u_p)h$; namely, recalling the definition of $T$, such that the linear equation
\[
-\Delta h - t\,p\,|u_p|^{p-1} h = 0 \ \text{ in } B, \qquad h = 0 \ \text{ on } \partial B \tag{9.10}
\]
admits a nontrivial solution $h \in X^\perp_{\mathrm{rad}}$ for some $t \in (0, 1)$. This is equivalent to saying that zero is an eigenvalue of the problem, with eigenfunction in $X^\perp_{\mathrm{rad}}$, for some $t \in (0, 1)$. We denote by $\mu_t$ the smallest eigenvalue of this problem in $X^\perp_{\mathrm{rad}}$, which depends on $t$. By the variational characterization of the eigenvalues, $\mu_t$ is decreasing in $t$. Moreover $\mu_0 > 0$, since when $t = 0$, $\mu_0$ is the first Dirichlet eigenvalue in $X^\perp_{\mathrm{rad}}$ of the Laplace operator in $B$, which is strictly positive. When $t = 1$, instead, $\mu_1$ is the smallest eigenvalue in $X^\perp_{\mathrm{rad}}$ of the linearized operator $L_p$. When $\mu_1$ is negative, there exists $t \in (0, 1)$ such that (9.10) has a solution in $X_k \setminus X_{\mathrm{rad}}$. When $\mu_1$ is positive, instead, $\mu_t > \mu_1 > 0$ for any $t \in (0, 1)$ and equation (9.10) does not have a solution in $X_k \setminus X_{\mathrm{rad}}$. Finally, from Lemma 8.4 we have that $\mu_1 < 0$ if and only if $\beta_{1,\mathrm{rad}}(p) + k^2 < 0$, and this concludes the proof of (9.9).

As a consequence, one can characterize the set of points $p$ at which the index $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$ changes:

Proposition 9.4 (Change in the fixed point index relative to $K_k$). $p \in (1, +\infty)$ is a value at which $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$ changes, for $k \ge 2$, if and only if $p \in P_k$, where the set $P_k$ is the one defined in (7.2).

Proof. If $p \in P_k$ then $(p, u_p)$ is an isolated degenerate point (Lemma 7.1); as a consequence, the values $p = p^k_h \pm \delta$ are non-degenerate for any $\delta > 0$ small, and by definition of $P_k$ we also have $[\beta_{1,\mathrm{rad}}(p+\delta)+k^2]\,[\beta_{1,\mathrm{rad}}(p-\delta)+k^2] < 0$. The conclusion then follows by Lemma 9.3 applied at the points $p = p^k_h \pm \delta$. Vice versa, if $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$ changes at $p$ then, by Lemma 9.3, $p$ satisfies $\beta_{1,\mathrm{rad}}(p) = -k^2$ and $\beta_{1,\mathrm{rad}}(p) + k^2$ changes sign at $p$. This implies that necessarily $p \in P_k$.
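Before turning to the proof of Theorem 9.1, let us spell out the degree computation implicit in the proof of Lemma 9.3 above (this unpacking is ours). When $\beta_{1,\mathrm{rad}}(p) + k^2 > 0$, the operator $T_u(p, u_p)$ does not have the property $\alpha$, one has $m_k(u_p) = 2$ by Lemma 8.4 and Lemma 3.2, and the standard degree formula for linear, compact, invertible maps gives
\[
\mathrm{ind}_{K_k}(T(p, \cdot), u_p) = \deg(I - T_u(p, u_p), U_r(u_p), 0) = (-1)^{m_k(u_p)} = (-1)^2 = 1 ,
\]
so that, by Proposition 9.4, crossing a point of $P_k$ the index jumps between the two values $0$ and $1$ computed in Lemma 9.3.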
9.1. Proof of Theorem 9.1.

Proof. Step 1. Non-radial local bifurcation in $K_k$. Let us consider $p^k_h$ for a certain $h \in \{1, \dots, s_k\}$. By Proposition 9.4 we know that $\mathrm{ind}_{K_k}(T(p, \cdot), u_p)$ changes as $p$ crosses $p^k_h$, namely that (9.11) holds for any $\delta > 0$ small. If $(p^k_h, u_{p^k_h})$ is not a bifurcation point in $(1, +\infty) \times K_k$, then we can find $\delta > 0$ and a neighborhood $O$ of $\{(p, u_p) : p \in (p^k_h - \delta, p^k_h + \delta)\}$ in $(p^k_h - \delta, p^k_h + \delta) \times K_k$ such that $u - T(p, u) \neq 0$ for every $(p, u)$ in $O$ different from $(p, u_p)$. We can choose $\delta > 0$ such that (9.11) holds. Letting $O_p := \{v \in K_k : (p, v) \in O\}$, it then follows that there are no solutions to $u - T(p, u) = 0$ on $\cup_{p \in (p^k_h - \delta, p^k_h + \delta)} \{p\} \times \partial O_p$, and there is only the radial solution $(p, u_p)$ inside. By the homotopy invariance of the fixed point index in the cone (see [D]), we then have that $\mathrm{ind}_{K_k}(T(p^k_h - \delta, \cdot), u_{p^k_h - \delta}) = \mathrm{ind}_{K_k}(T(p^k_h + \delta, \cdot), u_{p^k_h + \delta})$, which is in contradiction with (9.11). This proves the local bifurcation. The bifurcating solutions belong to $K_k$, since $T$ maps the cone into itself (Lemma 9.2), and are non-radial for $p$ close to $p^k_h$, since $u_p$ is radially non-degenerate by Lemma 3.3.

Step 2. Global bifurcation and the Rabinowitz alternative. We can adapt the proof of Theorem 3.3 in [G]. One of the main differences is that now, since the cone $K_k$ is not a Banach space, we substitute the Leray–Schauder degree used in [G] with the degree in the convex cone $K_k$, which we denote by $\deg_{K_k}(I - T(p, \cdot), O, 0)$, for any open (in the induced topology) set $O$ in $K_k$. The degree in the convex cone has been introduced in [A] (where it is called index); its definition arises directly from the Leray–Schauder degree (with which it coincides when the cone is a Banach space) and in particular it enjoys the same properties as the Leray–Schauder degree (normalization, additivity, homotopy invariance, permanence, excision, solution property, etc.; see [A, Theorems 11.1 and 11.2]). Following [G], let $S := \{(p, u_p) : p \in (1, +\infty)\} \subseteq (1, +\infty) \times K_k$ be the curve of radial least-energy solutions, let $\Sigma_k$ be the closure of the set $\{(p, v) \in ((1, +\infty) \times K_k) \setminus S : v \text{ solves } (1.1)\}$, and let $\mathcal{C}_k$ be the closed connected component of $\Sigma_k$ bifurcating from $(p^k_h, u_{p^k_h})$. Assume by contradiction that the Rabinowitz alternative, namely one of the following, does not occur: i) $\mathcal{C}_k$ is unbounded in $(1, +\infty) \times K_k$; ii) $\mathcal{C}_k$ intersects $\{1\} \times K_k$. Then, as in Step 2 of the proof of [G, Theorem 3.3], we can construct a suitable bounded open set, and we can follow the proofs of Step 3 and Step 4 in [G, Theorem 3.3], recalling now that the relevant fixed point index is computed at $u_{p^k_h \pm \delta}$ for any $c < c_0$ (with $\Lambda_c$ as in [G]). The fixed point index relative to the cone $K_k$ can then be computed at $p^k_h \pm \delta$, and it takes either the value $0$ or $1$ (Lemma 9.3). The proofs of Steps 3 and 4 of [G, Theorem 3.3] can be repeated, and so we get a contradiction. We can now adapt the proof of [G2, Proposition 2.3], again using the degree in the convex cone $K_k$, which is, as already observed, either $0$ or $1$ in a neighborhood of the isolated (in $X_k$) solution $u_p$. The main difference is that, in the final part of the proof of [G2, Proposition 2.3], we now obtain, following the notation of [G2], the corresponding index identity for every $p^k_l \in P_k$. This implies again that the number of points $p^k_l \in P_k$ which belong to $\mathcal{C}_k$, including $(p^k_h, u_{p^k_h})$, has to be even if $\mathcal{C}_k$ is bounded. Since the total number $s_k$ of points in $P_k$ is odd (see Lemma 7.1), there exists at least one value $p_k \in \{p^k_h\}_{h=1,\dots,s_k}$ at which either i) or ii) holds.

Step 3. Conclusion. Since the bifurcating solutions are not radial for $p$ close to $p^k_h$, the separation property (9.5) implies that, near the bifurcation points, $\mathcal{C}_k \cap \mathcal{C}_j \subset X_{\mathrm{rad}}$ for $j \neq k$; hence the intersection contains only radial solutions.
Remark 9.5 (Shape of the bifurcating solutions). Observe that from the definition of the space $X_h$ and from the separation property (9.5) of $K_k$ it follows that
\[
K_k \cap X_h = X_{\mathrm{rad}}, \quad \forall h > k, \tag{9.12}
\]
and so, as stated in Theorem 1.2 in the introduction, either the bifurcating solution belongs to $X_k \setminus X_j$, for all $j > k$, or it is radial. Moreover, since the kernel of the linearized operator is one-dimensional when restricted to the spaces $X_k$ (Proposition 8.7-iii)), we can get an expansion of the bifurcating solution found in Theorem 9.1 near the bifurcation point $(p_k, u_{p_k})$, even if we cannot apply the Crandall–Rabinowitz result to obtain regularity of the solution set. Indeed, applying Proposition 2.4 in [G2], we know that there exists $\varepsilon_0 > 0$ such that the expansion holds for any $0 < \varepsilon < \varepsilon_0$, where $\alpha_\varepsilon \to 0$ as $\varepsilon \to 0$, $\varphi_1(r) > 0$ is a first eigenfunction of the weighted eigenvalue problem as defined in Proposition 4.7, and $\psi_\varepsilon(r, \theta) \in X_k$ is such that $\|\psi_\varepsilon\|_\infty = o(\alpha_\varepsilon)$ as $\varepsilon \to 0$. As a consequence, near the bifurcation point the solutions we found not only are in $X_k \setminus X_{\mathrm{rad}}$ but, being small perturbations of the radial least energy solution $u_p$, they also inherit from $u_p$ the property of having two nodal domains and of being quasi-radial in the sense of Definition 1.1. We remark that along the branch the number of nodal regions of the solutions may change; moreover, far from the bifurcation point the solutions may also lose the quasi-radial shape, and their nodal line could touch the boundary.

Remark 9.6 (Multiple bifurcation). Observe that we can obtain a solution to (1.1) by rotating the solution $v$ of Theorem 9.1 by an angle $\alpha$. This solution coincides with the one bifurcating from $u_p$ in the direction $w(r, \theta) = \varphi_1(r)(a\sin(k\theta) - b\cos(k\theta)) \in \mathrm{Ker}(L_p)$, with $\alpha = \arctan(-a/b)$, letting $\tilde\tau$ be the reflection with respect to the hyperplane $ax + by = 0$ and restricting to the spaces invariant under the group generated by $O_k$ and by the reflection $\tilde\tau$.

Remark 9.7 (Bifurcation via odd change in the $k$-Morse index of $u_p$). We stress that, in order to get the bifurcation result, one could work directly in the space $X_k$, $k = 3, 4, 5$, without restricting to the cones $K_k \subset X_k$, substituting the degree in the cone $K_k$ with the usual Leray–Schauder degree in $X_k$. However, the bifurcation result obtained in this way is only partial, since a priori different branches of solutions could coincide. The advantage of restricting to the cones $K_k$ in the proof of Theorem 9.1 is that the set $K_k \cap K_j$ contains only radial functions when $k \neq j$, and this allows us to separate the branches.

Here $\nu$ denotes the outer normal vector to the boundary of $S_k$. Hence $u^k_p$ is a classical solution to the mixed boundary value problem $-\Delta u = |u|^{p-1}u$ in the sector $S_k$, with Dirichlet conditions on $\Gamma_1$ and Neumann conditions on $\Gamma_2 \cup \Gamma_3$ (cf. (10.3) below). In the next result we convert the bound on the $k$-Morse index in (10.1) into a bound on the full mixed Morse index of $u^k_p$ in the sector $S_k$.

Lemma 10.2. Let $u^k_p$ be the least energy sign-changing solution to (1.1) in the space $H^1_{0,k}(B)$. Then for any $p \in (1, +\infty)$ the mixed eigenvalue problem (10.3) admits only two negative eigenvalues $\mu$.

Proof. Because of Lemma 10.1, the Dirichlet eigenvalue problem (10.4) admits only two linearly independent eigenfunctions $\tilde\psi_1$ and $\tilde\psi_2$ which are invariant under the action of $G_k$, are regular by elliptic regularity theory, and correspond to a negative eigenvalue, say $\mu^k_1$ and $\mu^k_2$. By the symmetry properties of $\tilde\psi_i$ it is straightforward to see that the restriction of $\tilde\psi_i$ to the sector $S_k$ satisfies (10.3), corresponding to the same eigenvalue $\mu^k_i < 0$, for $i = 1, 2$. This shows that the number of negative eigenvalues of (10.3) is at least two.
Vice versa, if problem (10.3) possesses $m > 2$ negative eigenvalues $\mu_i$, corresponding to the eigenfunctions $\psi_1, \dots, \psi_m$ (which we take orthogonal in $L^2(S_k)$), then, denoting by $\tilde\psi_1, \dots, \tilde\psi_m$ the extensions of $\psi_1, \dots, \psi_m$ to $B$ under the action of $G_k$, it is easy to see that $\tilde\psi_1, \dots, \tilde\psi_m \in H^1_{0,k}(B)$ solve (10.4), corresponding to the eigenvalues $\mu_1 < \dots \le \mu_m < 0$, and are orthogonal in $L^2(B)$, contradicting Lemma 10.1. This shows that the number of negative eigenvalues of problem (10.3) is at most two, concluding the proof.

In order to get a uniform $L^\infty$ bound for the solution $u^k_p$ we want to perform a blow-up argument in the sector $S_k$, exploiting the uniform bound on the mixed Morse index in Lemma 10.2. This blow-up procedure in $S_k$ requires special care, since we have to deal with mixed boundary conditions and, above all, with the angular points of $S_k$. For these reasons the analysis of the rescaled solutions includes several different cases, depending on the location of the maximum points in the sector, which gives different shapes of the limiting domain. In all cases, however, we end up with solutions of a limit linear problem in unbounded domains, with either Dirichlet, Neumann or mixed boundary conditions, whose Morse index (or symmetric Morse index) is finite. In order to rule out this possibility we will need the following symmetric version of a well-known non-existence result (Proposition 10.3), stating that a nontrivial solution of the limit problem (10.5), in $\Sigma = \mathbb{R}^2$ or $\Sigma = \mathbb{R}^2_+$ with the boundary condition (10.6), cannot have finite $G$-Morse index; here $C^\infty_{0,G}(\Sigma)$ denotes the subspace of $C^\infty_0(\Sigma)$ of the functions invariant with respect to the action of $G$.

Proof. Let us consider first the case $\Sigma = \mathbb{R}^2$. Let us denote, as usual, by $\lambda_j$, $j \in \mathbb{N}$, the Dirichlet eigenvalues of $-\Delta$ in $B$; since $G$ preserves $B$, we can consider among them the subsequence $\lambda^G_j$ of the eigenvalues corresponding to $G$-invariant eigenfunctions. Let $\psi^G_j$ be the $G$-invariant eigenfunction associated to $\lambda^G_j$; then it is easy to see that the rescaled function $\psi^G_j(x/R)$ makes the quadratic form of (10.5) negative on $B_R$ as soon as $R$ is large enough (in dimension $2$ the Dirichlet energy is scale invariant), where $B_R$ is the ball centered at the origin with radius $R$. Observe that for any integer $m > 0$ and for any subgroup $G$ of $O(2)$ there exists such an $R$. Since the functions $\psi^G_1, \dots, \psi^G_m \in C^\infty_{0,G}(\Sigma)$ are linearly independent (and orthogonal in $L^2(B_R)$), this means that the $G$-Morse index of any nontrivial solution $u$ to (10.5) is greater than or equal to $m$, for any $m \in \mathbb{N}$, showing the result in the case $\Sigma = \mathbb{R}^2$. When $\Sigma = \mathbb{R}^2_+$ we let $\lambda^+_j$ be the sequence of Dirichlet eigenvalues of $-\Delta$ in $B \cap \mathbb{R}^2_+$ and $(\lambda^+_j)^G$ the subsequence of the eigenvalues invariant with respect to the action of $G$, with associated $G$-invariant eigenfunctions $\psi^G_j$. Then, defining as before the rescaled function $\psi^G_j$, it satisfies the analogous negativity property, and the thesis follows similarly to the previous case.

We are now ready to perform the blow-up analysis in $S_k$ to get a uniform $L^\infty$ bound for the solutions $u^k_p$.

Proposition 10.4. Let $u^k_p$ be a least energy sign-changing solution to (1.1) in the space $H^1_{0,k}(B)$ and let $\delta > 0$. Then there exists $C > 0$ such that $\|u^k_p\|_\infty^{p-1} \le C$ for any $p \in (1, 1+\delta)$.

Proof. Assume by contradiction that there exists a sequence $p_n \to 1$ such that, letting $M_n := \|u_n\|_\infty$ with $u_n := u^k_{p_n}$, we have $M_n^{p_n-1} \to \infty$ as $n \to \infty$. Let $P_n = (x_n, y_n)$ be the points at which $|u_n(P_n)| = M_n$. W.l.o.g. we can assume $u_n(P_n) = M_n$ and, by the symmetry properties of $u_n$, also that $P_n \in S_k \cup \Gamma_2 \cup \Gamma_3 \cup \{O\}$. We may also assume that $P_n \to P_0 := (x_0, y_0) \in \bar S_k$. We restrict the functions $u_n$ to the sector $S_k$ and define the rescaled functions $\tilde u_n$. In the sequel we analyze the asymptotic behavior of the rescaled functions $\tilde u_n$ and reach a contradiction by means of Proposition 10.3.
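The displayed definition of the rescaled functions is a standard blow-up; a plausible reconstruction, consistent with the description of $\Omega_n$ in (10.9)–(10.13) below, is
\[
\tilde u_n(x) := \frac{1}{M_n}\, u_n\Big( P_n + M_n^{-\frac{p_n-1}{2}}\, x \Big), \qquad
x \in \Omega_n := M_n^{\frac{p_n-1}{2}} \big( S_k - P_n \big),
\]
so that $|\tilde u_n| \le 1 = \tilde u_n(0)$ and $-\Delta \tilde u_n = |\tilde u_n|^{p_n-1}\tilde u_n$ in $\Omega_n$, with the boundary conditions inherited from the corresponding pieces of $\partial S_k$.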
We need to consider several cases, depending on the localization of the limit point $P_0$ in $\bar S_k$. The underlying idea of each case is that the sequence of solutions $\tilde u_n$ converges to a nontrivial solution $u$ to (10.5), either in $\mathbb{R}^2$ or in a half-plane with Dirichlet boundary conditions. Moreover, the bound on the Morse index of $u_n$ obtained in Lemma 10.2 is preserved when passing to the limit problem. This last property, together with Proposition 10.3, implies $u \equiv 0$, giving in every case a contradiction. Thus $M_n^{p_n-1}$ is bounded, and this ends the proof.

A point $(\tilde x, \tilde y)$ belongs to $\Omega_n$ if and only if $P_n + M_n^{-\frac{p_n-1}{2}}(\tilde x, \tilde y) = (x, y)$ for some $(x, y) \in S_k$; moreover, a point $(x, y)$ belongs to $S_k$ if and only if
\[
x > 0, \qquad y > 0, \qquad \frac{y}{x} < \tan\frac{\pi}{k} \qquad \text{and} \qquad 0 < x^2 + y^2 < 1. \tag{10.9}
\]
As a consequence we deduce that $(\tilde x, \tilde y) \in \Omega_n$ if and only if the following inequalities are all satisfied: the positivity conditions (10.10)–(10.11) on the two coordinates of $P_n + M_n^{-\frac{p_n-1}{2}}(\tilde x, \tilde y)$, together with
\[
\frac{y_n + M_n^{-\frac{p_n-1}{2}}\tilde y}{\,x_n + M_n^{-\frac{p_n-1}{2}}\tilde x\,} < \tan\frac{\pi}{k} \tag{10.12}
\]
and
\[
0 < x_n^2 + y_n^2 + M_n^{1-p_n}\big(\tilde x^2 + \tilde y^2\big) + 2 M_n^{\frac{1-p_n}{2}}\big(\tilde x\, x_n + \tilde y\, y_n\big) < 1. \tag{10.13}
\]
From now on we denote by $d_n$ the distance between $P_n$ and $\partial S_k$, namely $d_n := \mathrm{dist}(P_n, \partial S_k)$ (10.14).

Step 1. $P_0 \in S_k$. Observe that in this case $d_n M_n^{\frac{p_n-1}{2}} \to +\infty$ as $n \to +\infty$. Indeed, since $P_0 \in S_k$, by (10.9) we have $x_0 > 0$, $y_0 > 0$, $x_0^2 + y_0^2 < 1$ and $\frac{y_0}{x_0} < \tan\frac{\pi}{k}$, so that, since $M_n^{p_n-1} \to \infty$ as $n \to +\infty$, any point $(\tilde x, \tilde y) \in B_R$ satisfies (10.10), (10.11), (10.12) and (10.13) for $n$ large enough; namely, for any $R > 0$, $B_R \subseteq \Omega_n$ for $n$ large enough. Elliptic estimates imply that, up to a subsequence, $\tilde u_n \to u$ uniformly on compact sets of $\mathbb{R}^2$. By the argument in [GS], $u$ is defined in all of $\mathbb{R}^2$, it is a nontrivial weak solution to (10.5) in $\Sigma = \mathbb{R}^2$, and it satisfies $u(0) = 1$. Finally, we show that the Morse index of the limit function $u$ is less than or equal to $2$; this contradicts Proposition 10.3 and proves the thesis in the case $P_0 \in S_k$. Assume, by contradiction, that the Morse index of $u$ as a solution to (10.5) is greater than $2$. Then there exist at least three functions $\psi_1, \psi_2, \psi_3 \in C^\infty_0(\mathbb{R}^2)$ such that the $\psi_i$ are linearly independent (orthogonal in $L^2(\mathbb{R}^2)$) and make the quadratic form $Q$, as defined in (10.7), negative. Since the $\psi_i$ are supported in a ball $B_R$, the uniform convergence of $\tilde u_n \to u$ on compact sets of $\mathbb{R}^2$ implies that the corresponding rescaled test functions $\tilde\psi_i(x, y)$, for $n$ large enough, are orthogonal in $L^2(S_k)$ and satisfy
\[
\int_{S_k} |\nabla \tilde\psi_i|^2 - p_n |u_n|^{p_n-1} \tilde\psi_i^2 \, dx < 0 \qquad \text{for } i = 1, 2, 3.
\]
Then, letting $\hat\psi_i \in C^\infty_0(B)$ be the $G_k$-invariant extension of $\tilde\psi_i$ to the ball $B$, it holds that
\[
\int_B |\nabla \hat\psi_i|^2 - p_n |u_n|^{p_n-1} \hat\psi_i^2 \, dx < 0 \qquad \text{for } i = 1, 2, 3,
\]
contradicting the fact that the $k$-Morse index of $u_n$ is two (Lemma 10.1).

Step 2. $P_0 \in \Gamma_1$. In this case we have to consider the two possibilities: either $d_n M_n^{\frac{p_n-1}{2}} \to \infty$ or $d_n M_n^{\frac{p_n-1}{2}} \to s > 0$, for $d_n$ as in (10.14) (the fact that $s > 0$ is a consequence of the Dirichlet boundary conditions on $\Gamma_1$ and can be deduced exactly as in the paper [GS]). Then, as in the proof in [GS], the rescaled functions $\tilde u_n \to u$ as $n \to \infty$ uniformly on compact sets of $\Sigma$, where $u$ is a nontrivial solution (recall that $u(0) = 1$), either to (10.5) in $\Sigma = \mathbb{R}^2$ in the first case, or in $\Sigma = \mathbb{R}^2_+$ in the second case (up to a rotation and a translation), satisfying (10.6). Moreover, one can prove similarly as in Step 1 that $u$ has finite Morse index, contradicting again Proposition 10.3.

Step 3. $P_0 \in \Gamma_2 \cup \Gamma_3$. We give the details of the proof only in the case $P_0 \in \Gamma_2$, since the case $P_0 \in \Gamma_3$ can be handled in a similar way.
In this case $d_n = y_n \to 0$ ($d_n$ as in (10.14)) and $x_n \to x_0$ as $n \to \infty$, with $0 < x_0 < 1$; hence a point $(\tilde x, \tilde y) \in B_R$ satisfies (10.11), (10.12) and (10.13) for $n$ large enough, and so it belongs to $\Omega_n$ if and only if (10.10) holds, namely when $\tilde y > -y_n M_n^{\frac{p_n-1}{2}}$.

Case 1: $y_n M_n^{\frac{p_n-1}{2}} \to \infty$. In the first case it follows that any ball $B_R \subset \Omega_n$ for $n$ large enough, namely $\Omega_n \to \Sigma = \mathbb{R}^2$, and so, as in Step 1, $\tilde u_n \to u$ uniformly on compact sets of $\Sigma$, where $u$ is a nontrivial solution to (10.5) in $\mathbb{R}^2$ that satisfies $u(0) = 1$ and that has finite Morse index, getting a contradiction.

Case 2: $y_n M_n^{\frac{p_n-1}{2}} \to s \ge 0$. In this case, instead, $\Omega_n \to \Sigma := \{(x, y) \in \mathbb{R}^2 : y > -s\}$ for some $s \ge 0$, and $\tilde u_n \to u$ on compact sets of $\Sigma$, where $u$ is a solution to (10.5) in $\Sigma := \{(x, y) \in \mathbb{R}^2 : y > -s\}$ that satisfies a Neumann boundary condition on $\partial\Sigma$. When $s > 0$, $0 \in \Omega_n$ for $n$ large enough, hence $u$ is nontrivial, since $u(0) = 1$ by the uniform convergence on compact sets. Finally, by translating this limit nontrivial solution in the $y$-direction we then end up, when $s > 0$, with a nontrivial solution $u$ to (10.5) in $\Sigma = \mathbb{R}^2_+$ with Neumann boundary conditions on $\partial\Sigma$. Next we treat the case $s = 0$ and show that again the limit solution $u$ is nontrivial. Observe that $\tilde y = -M_n^{\frac{p_n-1}{2}} y_n \in \partial\Omega_n$ and that, in the case $s = 0$, it belongs to a neighborhood of $0$ for $n$ large. By elliptic regularity up to the boundary (see Lemma 6.18 in [GT]) for the equation $-\Delta \tilde u_n = f_n$, with $f_n = |\tilde u_n|^{p_n-1}\tilde u_n$, we obtain a uniform bound on the gradient of $\tilde u_n$ in $\Omega_n \cap B_\rho$ for $\rho$ sufficiently small (indeed, by definition $|\tilde u_n| \le 1$ on $\partial\Omega_n$, hence $|f_n(x)| \le 1$, and we use the fact that $\tilde u_n \in C^{2,\gamma}(\Gamma_2)$). This implies that $\tilde u_n(F) \ge \tilde u_n(0) - C|F| = 1 - C|F|$, where $C$ is the uniform bound on the gradient. Choosing $F$ in the set $\Sigma = \{(x, y) \in \mathbb{R}^2 : y > 0\}$ and sufficiently close to $0$, and passing to the limit in the previous inequality, one then has $u(F) > 0$, namely $u$ is nontrivial. Summarizing, for any $s \ge 0$ we have obtained a nontrivial solution $u$ to (10.5) in $\Sigma := \mathbb{R}^2_+$ that satisfies a Neumann boundary condition on $\partial\Sigma$. Moreover, as a consequence of Lemma 10.2, similarly as in Step 1, one can easily prove that the maximal number of linearly independent functions $\psi_i$ in the space $C^\infty_0(\mathbb{R}^2_+) \cap \{\frac{\partial \psi_i}{\partial y}\big|_{y=0} = 0\}$ that make the quadratic form $Q$ negative is at most $2$. As a consequence, the even extension of $u$ to the whole $\mathbb{R}^2$ is a nontrivial solution to (10.5) in $\Sigma = \mathbb{R}^2$ which has finite $G$-Morse index, where $G$ here is the group generated by the reflection with respect to the $x$-axis. Again, this is not possible by Proposition 10.3.

Step 4. $P_0 = B$, the corner between $\Gamma_1$ and $\Gamma_2$. Arguing as in the previous steps, in the first cases the limit solution $u$ is nontrivial ($0 \in \Omega_n$ for $n$ large enough, and then $u(0) = 1$); it satisfies Dirichlet boundary conditions on the hyperplane $x = \frac{\beta}{2}$ and has finite Morse index. This (up to a translation) again contradicts Proposition 10.3.

Case 3: (10.18) and (10.19) hold. Now $\Sigma = \{(x, y) \in \mathbb{R}^2 : y > -\alpha\}$, and $u$ satisfies Neumann boundary conditions on the hyperplane $y = -\alpha$. If $\alpha > 0$ then, as before, $u(0) = 1$ and so it is nontrivial. In this case we translate this solution in the $y$-direction, getting a solution to (10.5) in $\mathbb{R}^2_+$ that satisfies Neumann boundary conditions, and we obtain a contradiction as in Step 3-Case 2. In the case $\alpha = 0$ we observe that $d_n = y_n$ (where $d_n$, as usual, is the distance in (10.14)).
Indeed, $P_0 = B$ implies that $d_n = \min\{\mathrm{dist}(P_n, \Gamma_2), \mathrm{dist}(P_n, \Gamma_1)\}$, where $\mathrm{dist}(P_n, \Gamma_2) = y_n$ and $\mathrm{dist}(P_n, \Gamma_1) = 1 - \sqrt{x_n^2 + y_n^2}$; moreover $1 - \sqrt{x_n^2 + y_n^2} \ge y_n$ if and only if (10.21) holds. Since $d_n = y_n$, then $\tilde y = -M_n^{\frac{p_n-1}{2}} y_n \in \partial\Omega_n$ and moreover it belongs to a neighborhood of $0$ for $n$ large; hence we can reason as in Step 3-Case 2 and use elliptic regularity up to the boundary to obtain a uniform estimate on the gradient of $\tilde u_n$ in a neighborhood of $0$, showing that $u$ is nontrivial. Again we obtain a contradiction, as at the end of Step 3-Case 2.

Case 4: (10.18) and (10.20) hold. Now $\Sigma = \{(x, y) \in \mathbb{R}^2 : y > -\alpha,\ x < \frac{\beta}{2}\}$; $u$ satisfies Dirichlet boundary conditions on the hyperplane $x = \frac{\beta}{2}$ and Neumann boundary conditions on the hyperplane $y = -\alpha$. As before, when $\alpha > 0$ we have that $0 \in \Omega_n$ when $n$ is large enough, and then $u(0) = 1$, namely $u$ is nontrivial; we then translate it, ending up with a nontrivial solution $\bar u$ to (10.5) in $\bar\Sigma = \{(x, y) \in \mathbb{R}^2 : y > 0,\ x < 0\}$, with Dirichlet boundary conditions on $x = 0$ and Neumann boundary conditions on $y = 0$. When $\alpha = 0$, one proves (10.21) as in the previous case, so again $d_n = y_n$ for large $n$. Then $\tilde y = -M_n^{\frac{p_n-1}{2}} y_n \in \partial\Omega_n$ and it belongs to a neighborhood of $0$ for large $n$, so we can prove that $u$ is nontrivial, using again elliptic regularity up to the boundary, as in the previous situation. Also in this case we translate $u$, ending up with a nontrivial solution $\bar u$ to (10.5) in $\bar\Sigma = \{(x, y) \in \mathbb{R}^2 : y > 0,\ x < 0\}$, with Dirichlet boundary conditions on $x = 0$ and Neumann boundary conditions on $y = 0$. Finally, observe that, as a consequence of Lemma 10.2 and using arguments similar to the ones in Step 1, one can prove that the maximal number of linearly independent functions $\psi_i \in C^\infty_0(\{(x, y) \in \mathbb{R}^2 : y \ge 0,\ x < 0\}) \cap \{\frac{\partial \psi_i}{\partial y}\big|_{y=0} = 0\}$ that make the quadratic form $Q$ negative is at most $2$. Thus, by extending $\bar u$ to $\Sigma := \{(x, y) \in \mathbb{R}^2 : x < 0\}$ in an even way, we obtain a solution to (10.5) in $\Sigma$ which has finite $G$-Morse index, where $G$ here is the group generated by the reflection with respect to the $x$-axis. This is again in contradiction with Proposition 10.3.

Step 5. $P_0 = O$. In this case we can assume w.l.o.g. that $d_n = y_n$, since $P_0 = O$ implies that $d_n = \min\{\mathrm{dist}(P_n, \Gamma_2), \mathrm{dist}(P_n, \Gamma_3)\}$, $\mathrm{dist}(P_n, \Gamma_2) = y_n$, and w.l.o.g. (up to a rotation) we may consider only the case $\mathrm{dist}(P_n, \Gamma_2) \le \mathrm{dist}(P_n, \Gamma_3)$. We may also assume that $y_n \le x_n$ and $\frac{y_n}{x_n} \le \tan\frac{\pi}{2k}$ (if $x_n \neq 0$). Then a point $(\tilde x, \tilde y) \in B_R(0)$, for some $R > 0$, belongs to $\Omega_n$ if and only if conditions (10.11) and (10.12) are satisfied; indeed, (10.13) is easily verified. We have to distinguish different cases, since either $y_n M_n^{\frac{p_n-1}{2}} \to \infty$ (10.22) or $y_n M_n^{\frac{p_n-1}{2}} \to \alpha \ge 0$ (10.23), and either $x_n M_n^{\frac{p_n-1}{2}} \to \infty$ (10.24) or $x_n M_n^{\frac{p_n-1}{2}} \to \beta \ge 0$ (10.25), where it is obvious that (10.22) implies (10.24) and that (10.25) implies (10.23), with $\alpha \le \beta$ (since $y_n \le x_n$).

Case 1: (10.22) holds. In this case also (10.24) holds and $d_n M_n^{\frac{p_n-1}{2}} \to \infty$; hence (10.11) and (10.12) are satisfied for large $n$, and so $\Omega_n \to \mathbb{R}^2$. Then $\tilde u_n \to u$ uniformly on compact sets of $\mathbb{R}^2$, where $u$ is a nontrivial (since $u(0) = 1$) solution to (10.5) in $\mathbb{R}^2$ of finite Morse index, giving a contradiction with Proposition 10.3.

Case 2: (10.23) and (10.24) hold. (10.12) is satisfied for large $n$, while (10.11) is satisfied for large $n$ if and only if $\tilde y > -\alpha$. Hence the limit domain is $\Sigma = \{(x, y) \in \mathbb{R}^2 : y > -\alpha\}$ and $\tilde u_n \to u$ uniformly on compact sets of $\Sigma$, where $u$ is a solution to (10.5) in $\Sigma$, of finite Morse index in the sense of Step 3, that satisfies a Neumann boundary condition on $y = -\alpha$.
Moreover, when $\alpha > 0$, then $0 \in \Omega_n$, and this implies that $u$ is nontrivial, yielding a contradiction. When $\alpha = 0$ we observe that $\tilde y = -M_n^{\frac{p_n-1}{2}} y_n \in \partial\Omega_n$ and it belongs to a neighborhood of $0$. We can therefore apply elliptic regularity up to the boundary, as in Step 3, getting that $u$ is nontrivial. Thus a contradiction arises as in the previous case.

Case 3: (10.23) and (10.25) hold, with $\beta > 0$. Then the limiting domain $\Sigma$ is a positive cone in $\mathbb{R}^2$ with vertex at $(-\beta, -\alpha)$ and with amplitude $\frac{\pi}{k}$ (the same as $S_k$):
\[
\Sigma = \Big\{ (r\cos\theta - \beta,\ r\sin\theta - \alpha) : r \in (0, +\infty),\ \theta \in \big[0, \tfrac{\pi}{k}\big] \Big\}.
\]
Then $\tilde u_n \to u$ uniformly on compact sets of $\Sigma$, where $u$ is a solution to (10.5) in $\Sigma$ that satisfies a Neumann boundary condition on $\partial\Sigma$. When $\alpha, \beta \neq 0$, then $0 \in \Sigma$ and we can infer that $u$ is nontrivial. The same is true when $\alpha = 0$, since $\beta > 0$; in this case we have that $\tilde y = -M_n^{\frac{p_n-1}{2}} y_n \in \partial\Omega_n$ and belongs to a neighborhood of $0$, so we can reason as in Step 3 and show that $u$ is nontrivial. Moreover, in both cases $u$ has finite Morse index, since the maximal number of linearly independent functions $\psi_i$ in $C^\infty_0(\Sigma) \cap \{\frac{\partial \psi_i}{\partial \nu}\big|_{\partial\Sigma} = 0\}$ ($\nu$ denotes the outer normal to $\partial\Sigma$) that make the quadratic form $Q$ negative is at most two, due to Lemma 10.2. Translating $u$ with respect to one or both axes, we end up with a function $\bar u$ that satisfies (10.5) in $\{(x, y) \in \mathbb{R}^2 : x > 0,\ y > 0,\ \frac{y}{x} < \tan\frac{\pi}{k}\}$ with Neumann boundary conditions. Finally, the $G_k$-extension of $\bar u$ to the whole $\mathbb{R}^2$ (which is well defined thanks to the Neumann boundary conditions) is a nontrivial $k$-symmetric solution to (10.5) in $\mathbb{R}^2$ which has $k$-Morse index at most $2$. This contradicts the result in Proposition 10.3.

Case 4: (10.25) holds with $\beta = 0$. In this case condition (10.23) also holds, with $\alpha = 0$. We consider the solution $u_n$ in the whole ball $B$ (without restricting it to the sector $S_k$), and we define the rescaled functions $\tilde v_n$ on $B_n := M_n^{\frac{p_n-1}{2}} B$, with $|\tilde v_n| \le 1$ as well. The rescaled domain $B_n \to \mathbb{R}^2$ and $\tilde v_n \to v$ uniformly on compact sets of $\mathbb{R}^2$, where $v$ is a solution to (10.5) which has $k$-Morse index at most $2$ (observe that, since we are rescaling with respect to the origin, the symmetries are preserved). To obtain a contradiction via Proposition 10.3 we need to show that $v$ is nontrivial. This easily follows since $\tilde v_n(\tilde P_n) = 1$, where $\tilde P_n = (M_n^{\frac{p_n-1}{2}} x_n,\ M_n^{\frac{p_n-1}{2}} y_n)$, and by assumption $\tilde P_n \to 0$, so that $v(0) = 1$. This ends the proof.

We are now in a position to consider the asymptotic behavior of the nodal least energy solutions $u^k_p$ as $p \to 1$ and to conclude the proof of the radial part of Theorem 1.3.

Proposition 10.5. The least energy nodal solutions $u^k_p$ are radial for any $k \ge 3$ when $p$ is close to $1$.

Proof. Step 1. We show that for any sequence $p_n > 1$ converging to $1$, (10.26) and (10.27) hold, where $c$ is as in (6.14). Let $M_n := \|u^k_{p_n}\|_\infty$. We have shown in Proposition 10.4 that $M_n^{p_n-1}$ is bounded; we can then repeat the proof of Lemma 6.4, proving that $M_n^{p_n-1} \to \lambda$ and $\bar u^k_n \to C\varphi$ in $C(\bar B)$, up to a subsequence, with $C = \pm 1$, where $\lambda$ is an eigenvalue of $-\Delta$ in $B$ with Dirichlet boundary conditions and $\varphi$ is a corresponding eigenfunction with $\|\varphi\|_\infty = 1$. Moreover, $\varphi$ is invariant under the action of $G_k$ (since the $\bar u^k_n$ are, for every $n$) and, following the ideas of Step 1 in the proof of Proposition 6.1, we can show that $m_k(\varphi) \le m_k(u^k_{p_n})$, hence $m_k(\varphi) \le 2$ by Lemma 10.1. Since the $k$-symmetric eigenvalues of $-\Delta$ are known, and since we are assuming $k \ge 3$, this means that necessarily either $\lambda = \lambda_{1,\mathrm{rad}}$ or $\lambda = \lambda_{2,\mathrm{rad}}$. We show that the case $\lambda = \lambda_{1,\mathrm{rad}}$ cannot hold.
Indeed, following ideas similar to those in Step 2 of the proof of Proposition 6.1, since $\varphi_{1,\mathrm{rad}}$ has Morse index $0$, one gets that the two negative $k$-symmetric eigenvalues of the linearized operator at $u^k_{p_n}$ (recall that $m_k(u^k_{p_n}) = 2$ by Lemma 10.1) both converge to $0$, and that the corresponding eigenfunctions (which we can take to be orthogonal in $L^2(B)$) converge to two orthogonal solutions of the limit eigenvalue problem associated with $\lambda_1$. This is not possible, since $\lambda_1$ is simple; hence $\lambda = \lambda_{2,\mathrm{rad}}$. Reasoning exactly as in the proof of Lemma 6.4, we can then prove (10.27). Assuming w.l.o.g. that $\bar u^k_n(0) > 0$ for $n$ large, we also have $\bar u^k_n \to \varphi_{2,\mathrm{rad}} = J_0(\nu_{02}|x|)$ as $n \to \infty$ in $C(\bar B)$, getting (10.26).

Step 2. We show that $u^k_p = u_p$ for $p$ close to $1$, where as usual $u_p$ is the least energy nodal radial solution to (1.1). We can then pass to the limit as $n \to \infty$ in (10.36) and, using (10.37) and (10.30), we get
\[
0 = C \int_B \big( \log|\varphi_{2,\mathrm{rad}}| - c \big)\, \varphi_{2,\mathrm{rad}}^2 \, dx + C \int_B \varphi_{2,\mathrm{rad}}^2 \, dx,
\]
which implies, using the definition of $c$ in (6.14), that
\[
0 = C \int_B \varphi_{2,\mathrm{rad}}^2 \, dx,
\]
namely that $C = 0$, contradicting the definition of $C$ in (10.34) and ending the proof.

Remark 10.6. One could prove, reasoning as in the proof of Proposition 10.5, that $u^2_p \to \varphi$ in $C^1(\bar B)$ as $p \to 1$, where $\varphi$ is an eigenfunction of $-\Delta$ corresponding to the eigenvalue $\lambda_4 = \lambda_5$, which is not quasi-radial. The convergence in $C^1(\bar B)$, by the Hopf lemma, then implies that $u^2_p$ is not quasi-radial for $p$ close to $1$.
Extracellular Superoxide Dismutase Attenuates Hepatic Oxidative Stress in Nonalcoholic Fatty Liver Disease through the Adenosine Monophosphate-Activated Protein Kinase Activation

Oxidative stress is key in type 2 diabetes-associated nonalcoholic fatty liver disease (NAFLD). We explored whether extracellular superoxide dismutase (EC-SOD) activates adenosine monophosphate-activated protein kinase (AMPK) to enhance antioxidant synthesis and lipid metabolism in NAFLD. Human recombinant EC-SOD (hEC-SOD) was administered to 8-week-old male C57BLKS/J db/db mice through intraperitoneal injection once a week for 8 weeks. Target molecules involved in oxidative stress and lipid metabolism were investigated. hEC-SOD improved insulin resistance and systemic and hepatic oxidative stress, as reflected by decreases in urinary 8-hydroxy-deoxyguanosine and 8-isoprostane levels in db/db mice and a decrease in DHE expression in the liver, respectively. The reduction in hepatic SOD3 expression in db/db mice was reversed by hEC-SOD, which improved hepatic steatosis, inflammation with M2 polarization, apoptosis, autophagy, fibrosis and lipid metabolism in db/db mice, as reflected by the changes in serum and hepatic markers, monocyte chemoattractant protein-1, tumor necrosis factor-α, TUNEL-positive cells, the Bcl-2/BAX ratio, Beclin-1 and the LC3-II/LC3-I ratio. At the molecular level, hEC-SOD increased phosphorylated AMPK related to CaMKKβ, activation of peroxisome proliferative-activated receptor-gamma coactivator (PGC)-1α and dephosphorylation of forkhead box O (FoxO)1, and their subsequent downstream signaling. In HepG2 cells, using AMPKα1 and AMPKα2 siRNA, hEC-SOD demonstrated a protective effect via the direct activation of both AMPK-PGC-1α and AMPK-FoxO1. EC-SOD might be a potential therapeutic agent for NAFLD through the activation of AMPK-PGC-1α and AMPK-FoxO1 signaling in hepatocytes, which modulates lipid metabolism, leading to anti-inflammatory, antioxidative and antiapoptotic effects and improving autophagy in the liver.

Introduction

Nonalcoholic fatty liver disease (NAFLD) is characterized by the accumulation of fat in the liver and is intimately intertwined with metabolic syndromes [1]. The "two-hit" model partially explains the varying progression to more severe hepatic inflammation and fibrosis in patients with hepatic steatosis [2]. The first "hit" occurs when insulin resistance (IR) leads to the accumulation of fat in the liver cells. The second "hit" is triggered by the production of reactive oxygen species (ROS) due to the excess fat in the liver. The overproduction of ROS, exceeding the capacity of the antioxidant system to neutralize them, induces oxidative stress. This oxidative stress activates a cascade of inflammatory cytokines that contribute to the progression of necroinflammation and fibrosis. Furthermore, oxidative stress disrupts the delicate balance between antioxidant defenses and pro-oxidant factors in the liver, further promoting the evolution of NAFLD [3]. The molecular mechanism of lipotoxicity is characterized by endoplasmic reticulum stress, inflammation, impaired autophagy and excessive oxidative stress related to mitochondrial dysfunction [4]. Among these, oxidative stress is thought to be the most important causative agent of lipotoxicity-induced organ damage [5,6].
Therefore, enzymatic antioxidants play a crucial role in providing effective protection against oxidative damage due to their capability to break down ROS. Among these enzymatic antioxidants, superoxide dismutase (SOD) holds particular importance for living cells, as the majority of ROS originate from superoxide [7]. SOD catalyzes the conversion of superoxide into oxygen and hydrogen peroxide. Mammals possess three distinct forms of SOD, each characterized by its metal ions and subcellular localization. Copper-zinc SOD (SOD1) resides in the cytosol, manganese SOD (SOD2) is located in the mitochondria, and extracellular SOD (EC-SOD or SOD3) primarily occupies extracellular spaces, playing a unique role in maintaining tissue homeostasis in the extracellular environment [8,9]. In previous studies involving patients with type 2 diabetes and metabolic syndrome, it has been reported that serum EC-SOD activity is negatively correlated with insulin resistance [10,11]. However, the role of EC-SOD as a potential therapeutic target for NAFLD has yet to be elucidated.

Adenosine monophosphate-activated protein kinase (AMPK) is a metabolic sensor and regulator of systemic energy balance [12]. The activation of AMPK and its subsequent downstream signaling pathway of lipid metabolism reduces the accumulation of triglycerides, fatty acids and cholesterol in the liver and attenuates the development of NAFLD [13]. AMPK also plays a central role in controlling energy metabolism by modulating a plethora of other downstream targets, including proliferator-activated receptor gamma coactivator-1α (PGC-1α), forkhead box O transcription factor (FoxO), the mammalian target of rapamycin and silent information regulator 1 [14,15]. PGC-1α is widely recognized as a transcriptional coactivator, being primarily responsible for regulating mitochondrial biogenesis and oxidative metabolism. As another transcriptional factor, FoxO exerts its influence on an array of cellular processes, including stress resistance, oxidative stress and cell survival [16,17].

We previously reported that several AMPK activators mitigate the severity of diabetic kidney disease by improving oxidative stress through the activation of AMPK and its downstream signaling pathways [18-20]. However, the protective effects of EC-SOD via the activation of AMPK in NAFLD remain poorly understood. We propose that human recombinant EC-SOD (hEC-SOD) has the potential to alleviate metabolic-dysfunction-induced oxidative stress, inflammation and apoptosis through the activation of AMPK and associated downstream targets.

Preparation of Recombinant EC-SOD

Recombinant EC-SOD was synthesized following previously described methods [21]. Briefly, 293 cells underwent transient transfection using the SOD3 construct for 48 h. Post-transfection, the supernatant was harvested and underwent purification via a Ni-NTA agarose column (Qiagen, Valencia, CA, USA), followed by dialysis. The activity of the purified SOD3 was quantified using an SOD assay kit (Dojindo, Sunnyvale, CA, USA). Prior to being administered to mice or used in vitro, SOD3 was filtered to remove any endotoxin.
Animals and Treatment

Male C57BLKS/J db/m and db/db mice, aged eight weeks, were sourced from Jackson Laboratories (Bar Harbor, ME, USA). These genetically modified mouse models are widely used in biomedical research, particularly for studying obesity, diabetes and related metabolic disorders. Metabolically, they display insulin resistance, hyperglycemia and dyslipidemia, mirroring human NAFLD. Their liver pathology, including steatosis, inflammation and fibrosis, closely resembles the progression in humans from fatty liver to more severe stages like NASH and cirrhosis. They were allocated into four groups and maintained on a standard chow diet. Db/db and age- and gender-matched db/m mice (each group n = 8) received weekly intraperitoneal injections of hEC-SOD (3500 U/(kg/day), 120 µL) for eight weeks. Concurrently, control groups of db/db and db/m mice (each group n = 6) received equivalent doses of saline. At 16 weeks, the mice were anesthetized with 30 mg/kg tiletamine/zolazepam (Zoletil; Virbac, Carros, France) and 10 mg/kg xylazine hydrochloride (Rompun; Bayer, Leverkusen, Germany) and then euthanized. Liver tissues were promptly excised and preserved in 10% buffered formalin. Blood samples were drawn from the left ventricle, with plasma stored at −70 °C.

Blood and Urine Parameters

Blood samples from the animals were collected following overnight fasting. Fasting blood glucose levels were assessed using an Accu-check meter (Roche Diagnostics, St Louis, MO, USA). Glycosylated hemoglobin (HbA1c) levels were determined using an autoanalyzer (Bayer Corporation, Elkhart, IN, USA). Aspartate aminotransferase and alanine aminotransferase were measured using absorbance assays. Plasma insulin concentrations were measured using a radioimmunoassay (RIA; Alpco, Salem, NH, USA). The homeostatic model assessment for insulin resistance (HOMA-IR) was calculated using the formula fasting glucose (mmol/L) × fasting insulin (mU/L)/22.5. Oxidative DNA damage and lipid peroxidation were evaluated by measuring 24 h urinary levels of 8-hydroxy-deoxyguanosine (8-OH-dG; OXIS Health Products, Portland, OR, USA) and 8-epi-prostaglandin F2α (isoprostane; OXIS Health Products), respectively.
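As a quick illustration of the HOMA-IR formula above, here is a minimal sketch in Python; the numerical values are made-up placeholders, not data from this study:

def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_mu_l: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (mU/L) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_mu_l / 22.5

# Example: 9.0 mmol/L glucose and 20.0 mU/L insulin give HOMA-IR = 8.0
print(homa_ir(9.0, 20.0))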
In Vitro Study

The HepG2 human hepatocellular carcinoma cell line was acquired from the American Type Culture Collection (ATCC, Rockville, MD, USA) and was propagated following the ATCC's guidelines. The growth medium for HepG2 cells consisted of Eagle's Modified Minimum Essential Media (EMEM) enhanced with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin, both sourced from the ATCC. The cells were incubated at a constant temperature of 37 °C with 5% CO2/95% air and maintained at over 85% relative humidity. Cultivation of the cells in T-75 flasks was conducted weekly when they reached approximately 80% confluence, and the medium was replaced an additional time each week. Cells at passages 4-8 were used for all the experiments. HepG2 cells were exposed to low glucose (LG) (5 mmol/L D-glucose), high glucose (HG) (35 mmol/L D-glucose), palmitic acid (PA) (500 µM, Sigma-Aldrich) and HG+PA, with or without the additional 70 h administration of hEC-SOD (0.1 or 0.5 U/mL). Knockdown of AMPKα1 (#5562-1), AMPKα2 (#5563-2) and the negative control (#SN-1003) was performed using predesigned small interfering RNA (siRNA) sequences purchased from Bioneer (Daejeon, Republic of Korea). Transfection of siAMPKα1 and siAMPKα2 was performed using Lipofectamine RNAiMax (Thermo Scientific), according to the manufacturer's instructions. After transfection, cells were treated with hEC-SOD (0.5 U/mL) for 24 h.

Statistical Analysis

The results are presented as the mean and standard deviation. For multiple comparisons, ANOVA along with Bonferroni adjustment was utilized, employing SPSS 21.0 software (IBM, Armonk, NY, USA). A p value of less than 0.05 was deemed to indicate a statistically significant difference.
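To make the procedure concrete, the following minimal Python sketch mirrors the analysis described above (one-way ANOVA followed by Bonferroni-adjusted pairwise comparisons); the group values are invented placeholders, and the study itself used SPSS 21.0 rather than this code:

from itertools import combinations
from scipy import stats

groups = {
    "db/m": [5.1, 5.3, 4.9, 5.2],
    "db/m hEC-SOD": [5.0, 5.2, 5.1, 4.8],
    "db/db": [19.8, 21.2, 20.5, 22.0],
    "db/db hEC-SOD": [20.1, 19.5, 21.0, 20.7],
}

# One-way ANOVA across the four experimental groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Pairwise t-tests with Bonferroni adjustment: multiply each raw p-value
# by the number of comparisons and cap at 1; significance threshold p < 0.05.
pairs = list(combinations(groups, 2))
for name_a, name_b in pairs:
    t_stat, p_raw = stats.ttest_ind(groups[name_a], groups[name_b])
    p_adjusted = min(p_raw * len(pairs), 1.0)
    print(f"{name_a} vs {name_b}: adjusted p = {p_adjusted:.3g}")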
Characteristics of Experimental Mice Groups

The body weight of db/db mice was significantly heavier than that of db/m and db/m hEC-SOD mice (p < 0.001), while a decrease in body weight was observed in db/db hEC-SOD mice (p < 0.001). The liver weight was markedly increased in db/db mice (p < 0.001) and was reduced by hEC-SOD treatment (p = 0.001). Epididymal fat mass also exhibited a significant difference following hEC-SOD administration (p < 0.001). The serum levels of fasting blood glucose and HbA1c were significantly higher in db/db and db/db hEC-SOD mice compared with db/m and db/m hEC-SOD mice (p < 0.001). While hEC-SOD treatment did not have an impact on blood glucose levels in diabetic mice, it notably reduced serum insulin levels and improved HOMA-IR in db/db mice (p < 0.001). Urinary isoprostane and 8-OH-dG and serum MCP-1 and tumor necrosis factor-α (TNF-α) levels decreased significantly following hEC-SOD treatment in db/db mice (p < 0.001). hEC-SOD administration led to a significant reduction in the serum levels of alanine transaminase and aspartate transaminase in db/db mice (p < 0.001) (Table 1).

Effects of hEC-SOD on Intrahepatic Histologic Changes Associated with Fibrosis, Inflammation and Lipid Accumulation

Figure 1 provides compelling evidence of a notable improvement in liver histology resulting from hEC-SOD treatment in db/db mice. The degree of hepatic steatosis, characterized by macrovesicular steatosis with prominent fat droplets, did not differ significantly between db/m and db/m hEC-SOD mice and was further increased in db/db mice. Notably, hEC-SOD treatment in db/db mice led to a marked reduction in hepatic steatosis (Figure 1A). This was corroborated by a substantial decrease in the accumulation of intrahepatic lipid droplets, as evidenced by the reduced levels of perilipin-2 and Oil Red O staining in db/db mice following hEC-SOD treatment (Figure 1A-C, p < 0.001 and p < 0.001, respectively). Furthermore, hEC-SOD treatment resulted in significant improvements in hepatic inflammation and fibrosis, as evidenced by TNF-α, trichrome and TGF-β staining (Figure 1A,D-F, p < 0.001, p < 0.001 and p < 0.001, respectively). In line with this, TUNEL-positive cells in the hepatocytes of all experimental groups were counted and analyzed. The number of TUNEL-positive hepatocytes in db/db mice was significantly attenuated by hEC-SOD treatment (Figure 1A,G, p < 0.001).

Intrahepatic SOD Isoforms and SOD3 Expression in Response to hEC-SOD Treatment

The levels of the intrahepatic SOD isoforms (SOD1, SOD2 and SOD3) were measured, as presented in Figure 2.
Compared with db/m and db/m hEC-SOD mice, liver tissue from db/db mice exhibited significant reductions in the expression of SOD1, SOD2 and SOD3 (Figure 2A). Following hEC-SOD administration, there was a notable increase in the levels of all SOD isoforms (SOD1, 2 and 3) in db/db mice, though these remained lower than in db/m and db/m hEC-SOD mice (Figure 2A-D, p < 0.001, p = 0.015 and p < 0.001, respectively). These findings suggest that hEC-SOD treatment can recover the expression of SOD in the steatotic liver of diabetic mice. In addition to changes in intrahepatic SOD expression, DHE staining was conducted to assess the extent of ROS formation. DHE staining revealed an increase in ROS levels in db/db mice, and administration of hEC-SOD resulted in a significant reduction in ROS (Figure 2E,F, p < 0.001).

Effects of hEC-SOD on Intrahepatic Expression of Phospho-/Total AMPK and Associated Downstream Signaling Pathways Including Phospho-/Total FoxOs and PGC-1α

Insulin resistance and metabolic dysfunction markedly reduced phospho-Thr172 AMPK/total AMPK expression in the liver of db/db mice compared with that in the liver of db/m and db/m hEC-SOD mice (Figure 3A). To further elucidate the action of SOD on the AMPK pathway, the expression of CaMKKβ, an upstream kinase of AMPK, was measured. In db/db mice, a significant decrease in the hepatic expression of CaMKKβ was observed, while there was a notable increase in the expression of this kinase in db/db hEC-SOD mice (Figure 3A,B, p < 0.001). Accordingly, treatment with hEC-SOD restored phospho-Thr172 AMPK/total AMPK levels to those of db/m and db/m hEC-SOD mice, indicating an increase in AMPK activity in db/db mice (Figure 3A,C, p < 0.001). The expression of PGC-1α and FoxO1, which are downstream targets of AMPK, was analyzed to investigate the changes in AMPK signaling. The decreased expression of PGC-1α in the liver of db/db mice was restored with hEC-SOD treatment (Figure 3A,D, p < 0.001). The increased expression of pFoxO1 was attenuated by hEC-SOD treatment in the liver of db/db mice (Figure 3A,E, p < 0.001). These findings suggest that hEC-SOD treatment activated AMPK and brought about changes in the expression of associated downstream signaling targets, including FoxO and PGC-1α.
Effects of EC-SOD on Intrahepatic Expression of Perilipin-2, PPARα/γ, Phospho-ACC and SREBP-1c

The expression of perilipin-2 decreased following hEC-SOD treatment in the liver of db/db mice (Figure 4A,B, p < 0.001). Furthermore, hEC-SOD treatment improved the decreased expression of PPAR-α (Figure 4A,C, p < 0.001). Conversely, PPAR-γ expression increased in db/db mice and decreased after hEC-SOD treatment, indicating that hEC-SOD is involved in the PPAR pathway (Figure 4A,D, p < 0.001). Along with these upstream changes, which involved the activation of PGC-1α and PPARα as well as the attenuation of PPARγ, phosphorylation of ACC was significantly increased (Figure 4A,E, p < 0.001). Additionally, the expression of SREBP-1c and ChREBP was significantly decreased in the liver of db/db mice following hEC-SOD treatment (Figure 4A,F,G, p < 0.001 and p < 0.001, respectively). These findings suggest that a prometabolic effect is anticipated, with fatty acid oxidation and mitochondrial biogenesis increasing in the liver tissue.

Effect of EC-SOD on Intrahepatic Expression of Bcl-2, BAX, Beclin-1 and LC3

The antiapoptotic and pro-autophagy effects of hEC-SOD treatment were determined. The expression of Bcl-2 and BAX, which are involved in the regulation of apoptosis, was altered in the liver of db/db mice (Figure 5A). The antiapoptotic protein Bcl-2 was downregulated, while the proapoptotic protein BAX was upregulated. The increase in the BAX/Bcl-2 ratio was suppressed by the administration of hEC-SOD in the liver of db/db mice (Figure 5A,B, p < 0.001). Subsequently, the expression of the autophagy-related proteins Beclin-1 and LC3, which are regulated by AMPK activation, was determined. The suppressed expression of Beclin-1 and LC3 proteins was recovered by hEC-SOD treatment in the liver of db/db mice (Figure 5A,C-F, p < 0.001, p < 0.001, p < 0.001 and p < 0.001, respectively), suggesting a pro-autophagy effect of hEC-SOD.
Inflammatory Cytokines and Cellular Alterations in the Liver Following EC-SOD Treatment

Intrahepatic expression of MCP-1, TNF-α and IL-6 in the liver of db/db mice was restored by treatment with hEC-SOD to the levels of db/m and db/m hEC-SOD mice (Figure 6A-D, p < 0.001, p < 0.001 and p < 0.001, respectively). These findings suggest that hEC-SOD treatment ameliorated the inflammatory conditions in db/db mice. The increased expression of CD68 and Gr-1 in the liver of db/db mice was reduced by hEC-SOD treatment (Figure 6A,E,F, p < 0.001 and p < 0.001, respectively). In db/db mice, the expression of arginase I was significantly decreased, while the expression of arginase II increased. However, upon treatment with EC-SOD, these changes were reversed, with arginase II expression markedly decreasing and arginase I expression increasing (Figure 6A,G,H, p < 0.001 and p < 0.001, respectively). The level of iNOS, which is an indicator of macrophage M1 polarization, was elevated in db/db mice and significantly reduced by hEC-SOD treatment (Figure 6A,I, p < 0.001). These results indicate that EC-SOD attenuates hepatic inflammation in a steatotic liver by decreasing the production of proinflammatory cytokines from macrophage M1 polarization, without affecting macrophage M2 polarization.

In Vitro Studies (HepG2)

The effect of hEC-SOD treatment on HepG2 cells, a human hepatoma cell line, was evaluated in terms of inflammation, oxidative stress and apoptosis. A significant decrease in the ratio of phospho-Thr172 AMPK to total AMPK was observed in the HG, PA and HG+PA groups compared with the LG-treated group (Figure 7A,B, p < 0.001, p < 0.001 and p < 0.001, respectively). Following the administration of hEC-SOD, the ratios of phospho-Thr172 AMPK to total AMPK were significantly improved across all conditions. Moreover, these improvements exhibited a dose-dependent trend, with higher doses of hEC-SOD (0.1 to 0.5 U/mL) leading to greater increases in the ratios (Figure 7A,B). This dose-dependent effect further substantiates that the observed changes are indeed a response to SOD. To further explore the role of hEC-SOD treatment in activating the AMPK pathway, experiments using siRNAs for AMPKα1 and AMPKα2 were conducted. The expression of both phospho-Thr172 AMPK and total AMPK was successfully suppressed in HepG2 cells transfected with siRNAs for AMPKα1 and AMPKα2 (Figure 8A-D, p < 0.01, p < 0.01 and p < 0.01, respectively).
There were no significant differences observed in the expression levels of SOD1 and SOD2 in HepG2 cells after treatment with siRNAs for AMPKα1 and AMPKα2 (Figure 9A-C). While the expression of CaMKKα did not exhibit significant differences (Figure 9A,D), the expression of CaMKKβ was significantly decreased in the groups treated with siRNAs for AMPKα1 and AMPKα2 (Figure 9A,E, p < 0.001 and p < 0.001, respectively). This suggests that the knockdown of AMPKα1 and AMPKα2 was specifically associated with CaMKKβ. Notably, the reduced expression of CaMKKβ was significantly ameliorated following the administration of hEC-SOD (Figure 9A,E, p < 0.01 and p < 0.01, respectively). The ratio of phosphorylated liver kinase B1 (LKB1) to total LKB1 decreased upon treatment with siRNA for AMPKα1, but this was reversed after the administration of hEC-SOD (Figure 9A,F, p < 0.05 and p < 0.05, respectively). Consistent with the observed changes in CaMKKβ expression, transfection with siRNAs for AMPKα1 and AMPKα2 in HepG2 cells resulted in a significant decrease in the ratio of phospho-Thr172 AMPK to total AMPK and in the expression of PGC-1α (Figure 9A,G,H, p < 0.001 and p < 0.001, respectively). The decreases in the pAMPK/total AMPK ratio and PGC-1α expression were significantly improved in the hEC-SOD-treated group (Figure 9A,G,H, p < 0.001 and p < 0.01, respectively). Phosphorylation of FoxO1 was significantly increased by siRNAs for AMPKα1 and AMPKα2 but ameliorated by hEC-SOD treatment (Figure 9A,I, p < 0.001 and p < 0.01, respectively).

Figure 9. Effects of siRNA for AMPKα1 or AMPKα2 and hEC-SOD on SOD isoforms, CaMKKα/β, phospho-LKB1, total LKB1, phospho-Thr172 AMPK, total AMPK, PGC-1α, phospho-Ser256 FoxO1 and total FoxO1 in HepG2 cells. HepG2 cells were treated with transfection reagent and 50 nM of control siRNA, AMPKα1 siRNA or AMPKα2 siRNA. Then,
the cells were exposed to 0.5 U/mL hEC-SOD. (A) Representative Western blots for SOD1, SOD2, CaMKKα, CaMKKβ, phospho-LKB1, total LKB1, phospho-Thr172 AMPK, total AMPK, PGC-1α, phospho-Ser256 FoxO1 and total FoxO1. The relative protein levels of (B) SOD1/GAPDH, (C) SOD2/GAPDH, (D) CaMKKα/GAPDH, (E) CaMKKβ/GAPDH, (F) phospho-LKB1/total LKB1, (G) phospho-Thr172 AMPK/total AMPK, (H) PGC-1α/GAPDH and (I) phospho-Ser256 FoxO1/total FoxO1 were quantified via densitometry (n = 3). * p < 0.05, ** p < 0.01 and # p < 0.001 compared with other groups. CaMKK, calcium/calmodulin-dependent protein kinase kinase; LKB1, liver kinase B1.

Transfection with siRNAs for AMPKα1 and AMPKα2 resulted in the suppression of PPAR-α expression while enhancing PPAR-γ expression (Figure 10A-C, p < 0.001 and p < 0.001, respectively). These observations were reversed upon administration of hEC-SOD, leading to an increase in PPAR-α and a decrease in PPAR-γ expression (Figure 10A-C, p < 0.01 and p < 0.001, respectively). In the context of the PPAR family's downstream pathway, further examinations were conducted for phospho-ACC, SREBP-1c and ChREBP. The transfection of siRNAs for AMPKα1 and AMPKα2 led to a marked reduction in phospho-ACC levels but an upregulation of SREBP-1c and ChREBP expression (Figure 10A,D-F, p < 0.001, p < 0.001 and p < 0.05, respectively). Administration of hEC-SOD reversed these effects, resulting in elevated phospho-ACC levels and a reduction in the expression of both SREBP-1c and ChREBP (Figure 10A,D-F, p < 0.01, p < 0.01 and p < 0.05, respectively).
Discussion

Our studies showed that EC-SOD might be a potential therapeutic agent for NAFLD through the direct activation of AMPK-PGC-1α and AMPK-FoxO1 signaling in hepatocytes, which modulates lipid metabolism, leading to anti-inflammatory, antioxidative and antiapoptotic effects and improving autophagy in the liver. This study holds significance for unveiling the antioxidant effects and metabolic insights associated with EC-SOD in NAFLD, establishing it as a potential therapeutic target. In a study by Cui et al., the overexpression of SOD3 in C57BL/6 mice subjected to a high-fat diet (HFD) effectively blocked obesity, fatty liver and insulin resistance [11]. Our study provides compelling evidence of the amelioration of steatosis with hEC-SOD administration, demonstrating histological improvements that extend to steatosis and fibrosis through the activation of AMPK and its associated downstream targets involved in lipid metabolism (Figure 1). To the best of our knowledge, this is the first report on the therapeutic impact of hEC-SOD in ameliorating hepatic steatosis.

SOD3, a member of the SOD family, has potential as a biopharmaceutical agent for inflammatory diseases [21]. It can attenuate inflammation by reducing ROS levels and modulating cellular signals. Moreover, SOD3 does not need intracellular translocation, unlike SOD1 and SOD2. Furthermore, SOD3 has a longer circulation half-life of about 20 h, compared with SOD1 and SOD2, which have half-lives of about 20 min and 5-6 h, respectively [22,23]. SOD3 has demonstrated a protective role in various tissues, including the lung, kidney, skin and retina, by mitigating the effects of oxidative stress [20,24-26]. Gao et al. provided evidence that SOD3 serves as a protective factor secreted by adipocytes in response to HFD-induced obesity [10]. Their study revealed a significant upregulation of SOD3 expression in the adipose tissue of HFD-fed mice. Notably, Sod3 knockout mice exhibited an altered phenotype characterized by increased obesity, insulin resistance, enlarged adipose tissue and elevated triglyceride accumulation. This study revealed a significant reduction in hepatic EC-SOD expression in the steatotic liver, which was ameliorated through the administration of hEC-SOD, as confirmed with Western blot and immunohistochemical staining (Figure 2A-D). Our results show improvements in oxidative stress markers, which include 24 h urinary isoprostane, a recognized measure of oxidative stress, and 24 h urinary 8-hydroxy-deoxyguanosine, a marker indicative of DNA damage resulting from oxidative stress (Table 1). Furthermore, quantitative assessment of ROS levels in liver tissue using DHE staining demonstrated a significant reduction in ROS in response to hEC-SOD administration (Figure 2E,F). These improvements extended to a reduction in systemic and hepatic inflammatory markers, including MCP-1 and TNF-α, following hEC-SOD administration (Figure 6).
NAFLD exhibits a multitude of clinical phenotypes and considerable heterogeneity owing to the intricate nature of its pathogenesis and diverse clinical circumstances [27]. This condition has the potential to advance to more severe complications, including liver cirrhosis and hepatocellular carcinoma, driven by inflammation and fibrosis. Although effective treatments have been established for other components of the metabolic syndrome, including diabetes, hypertension, hyperlipidemia and even obesity, there is a notable absence of specific pharmaceutical interventions for NAFLD at present [28]. In a landscape where current treatments have limited effectiveness, antioxidants are gaining attention as a potential new therapeutic target [29,30]. The application of SOD, which has been investigated in various animal models and clinical trials, holds promise across a spectrum of conditions, from hypoxic damage and cardiovascular diseases to neurodegenerative disorders and metabolic diseases [31].

Sun et al. showed that EC-SOD was involved in liver damage through the AMPK pathway, even in the absence of metabolic dysfunction [32]. They reported that Sod3−/− mice exhibited spontaneous liver injury and fibrosis. Notably, their findings suggest that SOD3 deficiency exacerbated liver fibrosis by suppressing AMPK signaling. Furthermore, in a previous study by our research team, the administration of hEC-SOD was shown to improve diabetic nephropathy through the AMPK-PGC-1α-Nrf2 and AMPK-FoxOs signaling pathways [20]. AMPK is a critical protein involved in regulating cellular energy metabolism and becomes active when cellular energy levels decrease. In a recent study, Wang et al. demonstrated that, within the NAFLD model, CaMKKβ functions as the initiating kinase for the AMPK-mediated antioxidant defense mechanism, which protects hepatocytes from lipotoxicity [33]. In the current study, we observed a decrease in the expression of CaMKKβ in db/db mice, which was restored following hEC-SOD administration (Figure 3A,B). This finding provides significant validation of previous research results. PGC-1α is a transcriptional coactivator protein that plays a significant role in the mitochondrial life cycle and the response to ROS. Stimuli that activate AMPK have been reported to inhibit FoxO1-dependent transcription [34]. In the current study, we observed an improvement in the pAMPK/total AMPK ratio and the expression of PGC-1α in diabetic mice after hEC-SOD administration (Figure 3). Additionally, hEC-SOD treatment significantly reduced pFoxO1 expression in db/db mice. These findings suggest that targeting the AMPK-FoxOs signaling pathway through EC-SOD activation could be advantageous in mitigating intracellular oxidative stress and apoptosis in a steatotic liver. Along the same lines, our data strongly indicate that EC-SOD directly contributes to the reduction in oxidative stress by modulating the AMPK pathway.
PPARs are nuclear receptors involved in regulating lipid metabolism and inflammation. Among the various PPAR subtypes, PPARγ plays a pivotal role in promoting hepatic steatosis by increasing the expression of SREBP-1c [35]. This upregulation leads to heightened synthesis of fatty acids and cholesterol in the liver. Additionally, the downregulation of PPARγ leads to the phosphorylation of ACC, a key enzyme in fatty acid synthesis, resulting in a decrease in lipogenesis [36]. In our study, diabetic mice exhibited an increase in the expression of PPARγ, along with reduced phosphorylation of ACC and elevated expression of SREBP-1c and ChREBP (Figure 4). Following hEC-SOD administration, we observed a decrease in PPARγ, along with an attenuation of SREBP-1c and an increase in ACC phosphorylation. This evidence suggests the inhibition of fatty acid synthesis in the liver.

One of the key factors in the mechanism of liver damage due to oxidative stress is an increase in apoptosis. The Bax/Bcl-2 ratio is a crucial indicator of the balance between pro-apoptotic and antiapoptotic proteins in the Bcl-2 family [37]. We found a reduction in apoptosis, as indicated by the decreased Bax/Bcl-2 ratio with SOD3 administration (Figure 5). This indicates that the administration of hEC-SOD improved the apoptotic environment. Furthermore, we observed a decrease in MCP-1, TNF-α, IL-6, CD68 and Gr-1 following hEC-SOD administration (Figure 6). These results indicate that hEC-SOD has both antioxidant and anti-inflammatory effects, locally and systemically. Additionally, a decrease in iNOS and arginase II, along with an increase in arginase I, indicates macrophage polarization from M1 to M2, contributing to inflammation resolution and tissue repair in the liver.

A key discovery in our research was a detailed understanding of the molecular mechanism of SOD3's function in hepatocytes, as determined through rigorous in vitro analyses. Upon the application of AMPKα1 and AMPKα2 siRNA to HepG2 cells, there was a significant reduction in CaMKKβ expression and AMPK phosphorylation. However, these changes were restored with the administration of hEC-SOD (Figure 9). Moreover, the restorative effect of hEC-SOD on pAMPK was dose-dependent, with higher hEC-SOD doses leading to greater increases in the ratio (Figure 7). In alignment with the in vivo results mentioned above, we observed that various components of the AMPK downstream pathway, including PGC-1α, pFoxO1, PPARα/γ, pACC, SREBP-1c and ChREBP, exhibited improvement subsequent to the administration of hEC-SOD.
The use of anti-hyperglycemic drugs such as pioglitazone, glucagon-like peptide 1 analogues and sodium-glucose cotransporter 2 inhibitors had raised expectations for their potential efficacy in improving NAFLD. However, it has been found that these drugs have limited effectiveness in achieving histologic improvements in NAFLD. In contrast, our study demonstrated that the administration of hEC-SOD resulted in an improvement in hepatic steatosis independent of blood glucose levels. In our research, despite the absence of significant differences in blood glucose and HbA1c levels before and after hEC-SOD administration, an improvement in HOMA-IR was observed (Figure 1). EC-SOD was previously reported to protect against adipose tissue inflammation and insulin resistance [11]. Furthermore, serum EC-SOD levels were shown to be negatively correlated with insulin resistance in patients with type 2 diabetes [38-40]. In our study, administration of hEC-SOD not only reduced oxidative stress but also decreased insulin resistance. We hypothesize that these dual effects of EC-SOD contributed to the histological improvement in hepatic steatosis observed in our study. One of the major clinical unmet needs in NAFLD is the absence of biomarkers [41,42]. While some past studies measured SOD levels in the blood of patients with hepatic steatosis, these studies have not gained widespread recognition due to discrepancies in their results [43-45]. Several studies have been published on the utility of EC-SOD as a biomarker for hepatic failure in various liver diseases [46-48]. Considering the strong association observed between EC-SOD and hepatic steatosis in this study, it is conceivable that EC-SOD could be employed as a biomarker. Its use in conjunction with known biomarkers, or measuring differences in its level before and after treatment, provides a potential avenue for assessment.

Conclusions

Our study demonstrated the effectiveness of hEC-SOD treatment in improving steatotic liver disease. This improvement was closely associated with the activation of AMPK and its associated pathways, such as AMPK-PGC-1α and AMPK-FoxO1 phosphorylation, leading to improvements in lipid metabolism, oxidative stress and inflammation. These findings suggest that hEC-SOD might act as a potential therapeutic agent for lipotoxicity in NAFLD via the direct activation of AMPK in hepatocytes.

Figure 1. Effects of hEC-SOD treatment on liver phenotypes in db/m and db/db mice. The figure shows the liver steatosis, inflammation and fibrosis markers in different groups of mice. (A) Representative sections of hematoxylin and eosin stain, Oil Red O stain, Masson's trichrome staining, …

Figure 6. Effects of hEC-SOD treatment on inflammatory markers in db/m and db/db mice. (A) The expression levels of MCP-1, TNF-α, IL-6, CD68, Gr-1, Arginase I/II and iNOS are shown in representative …

Figure 7.
Dose-dependent effects of hEC-SOD treatment on AMPK expression in HepG2 cells exposed to different media. HepG2 cells were cultured with LG, HG or PA media and treated with various doses of hEC-SOD. (A) The expression levels of phospho-Thr172 AMPK and total AMPK are shown in representative Western blots. (B) The relative protein levels of phospho-Thr172 AMPK/total AMPK were measured via densitometry (n = 3). ** p < 0.01 and # p < 0.001 compared with other groups. AMPK, adenosine monophosphate-activated protein kinase; HG, high-glucose; LG, low-glucose; PA, palmitic acid.

Figure 8. Effects of siRNA for AMPKα1 or AMPKα2 on AMPK expression in HepG2 cells. HepG2 cells were treated with transfection reagent and 50 nM of control siRNA, AMPKα1 siRNA or AMPKα2 siRNA. (A) The expression levels of phospho-Thr172 AMPK and total AMPK are shown in representative Western blots. The relative protein levels of (B) phospho-Thr172 AMPK/total AMPK, (C) phospho-Thr172 AMPK/GAPDH and (D) total AMPK/GAPDH were measured via densitometry (n = 3). ** p < 0.01 and # p < 0.001 compared with other groups.

Table 1. Biochemical and physical characteristics of the four groups at the end of the experimental period.
Gradient-Free Structured Pruning with Unlabeled Data

Large Language Models (LLMs) have achieved great success in solving difficult tasks across many domains, but such success comes with a high computation cost and inference latency. As developers and third parties customize these models, the need to provide efficient inference has increased. Many efforts have attempted to reduce inference cost through model compression techniques such as pruning and distillation. However, these techniques either require labeled data or are time-consuming, as they require the compressed model to be retrained to regain accuracy. In this paper, we propose a gradient-free structured pruning framework that uses only unlabeled data. An evaluation on the GLUE and SQuAD benchmarks using BERT_BASE and DistilBERT illustrates the effectiveness of the proposed approach. By only using the weights of the pre-trained model and unlabeled data, in a matter of a few minutes on a single GPU, up to 40% of the original FLOP count can be reduced with less than a 4% accuracy loss across all tasks considered.

Introduction

Large Language Models (LLMs) have made great strides in solving difficult tasks across many domains, but this has come at the cost of high parameter counts and significant computational overhead. Developers and third parties can now employ these trained models and create custom versions tailored to their particular applications. Customization makes these models applicable to a wider variety of use cases, but this highlights, even more, the need for efficient inference models. Among compression techniques, structured pruning shows promising results in reducing model size while also improving inference time, because the resulting model remains compatible with the underlying hardware. However, most existing approaches are quite complex and require significant engineering effort to implement. Moreover, the process of compression is time-consuming and requires retraining the compressed model to regain accuracy. These limitations make effective compression difficult to realize in practice.

Recently, Kwon et al. (2022) proposed a post-training pruning method for Transformers that does not require any retraining of the model. Even though this approach avoids expensive retraining, it requires labeled data in the pruning pipeline. LLMs mainly utilize unlabeled data for training, and with the increased use of pre-trained LLMs by developers and third parties, access to labeled data is questionable. Especially with the popularity of in-context learning, where the user only provides prompts, the purpose of the task is not necessarily known at compression time. In this scenario, none of the existing pruning techniques can be applied for model compression, since they all require labeled data. Even though knowledge distillation (Sun et al., 2020; Jiao et al., 2019; Sanh et al., 2019) trains a student model with unlabeled data, it still requires a large amount of unlabeled data and is expensive to train. This motivates us to investigate whether one can design a structured pruning method that requires neither retraining nor labeled data, while avoiding adverse effects on performance.

In this work, we propose Kernelized Convex Masking (KCM), a gradient-free framework (Figure 1) that only requires the trained model and sampled raw data to compress the model. We introduce R2D2 as the core of this framework, which combines two ranking techniques, Representative Ranking (R2) and Data-Driven (D2), to estimate the importance of individual neurons.
R2 maps our structured pruning goals into a representative selection problem (Huang et al., 2018), where the goal is to find a small subset of data points that can well represent a large dataset. Specifically, R2 considers the filters of a Feed-Forward Network (FFN) in the trained model as data points in a high-dimensional space, and ranks these by how well a filter can be represented by others. D2, on the other hand, ranks the filters based on statistics gathered from layer-wise model outputs using the raw sampled data. KCM decides which filters to remove by merging the R2D2 rankings across all layers. Since removing filters may still affect accuracy, we apply the existing scaling transformation method of Kwon et al. (2022) to mitigate the effect of their removal. Our main contributions are as follows:

• We propose the Kernelized Convex Masking (KCM) pruning framework, a gradient-free structured pruning approach that requires neither labeled data nor retraining.

• As the core of KCM, we propose R2D2, which combines two ranking techniques, Representative Ranking (R2) and Data-Driven (D2). R2D2 only requires the weights of the trained model and sampled raw data to rank the neurons. An ablation study confirms the importance of combining the two proposed ranking techniques.

• Our evaluation on the GLUE and SQuAD benchmarks using BERT_BASE and DistilBERT confirms the effectiveness of the proposed approach. Compared to when labeled data is available, KCM is able to reduce up to 40% of the original FLOPs with less than 4% accuracy loss, in a matter of a few minutes on a single GPU.

Preliminary

In this paper, we focus on pruning the BERT architecture. BERT is a stack of L homogeneous Transformer encoder blocks (Vaswani et al., 2017), each of which consists of a multi-head attention (MHA) layer followed by a Feed-Forward Network (FFN) layer. Because FFN layers have a huge impact on model size and inference latency (Ganesh et al., 2021), we focus on pruning the filters of the FFN layers. In every transformer encoder layer ℓ, the FFN with N filters is parameterized by W(1) ∈ R^(d×N), b(1) ∈ R^N, W(2) ∈ R^(N×d), b(2) ∈ R^d, and an activation function σ:

FFN(x) = σ(x W(1) + b(1)) W(2) + b(2) = H(1) W(2) + b(2). (1)

For example, BERT_BASE has 12 transformer encoder blocks (L = 12), where the number of filters (N) is 3072.

Structured Pruning by Masking

Masking: Given an integer n < N, reducing the number of filters from N to n can be considered as introducing a mask variable m ∈ R^N (with n non-zero elements) associated with the outputs of the filters.

Objective: Transformer pruning can be formulated as a constrained optimization problem on the mask M ∈ R^(L×N) that collects the masks m of all L layers. There are LN filter mask variables, which is much less than the total number of parameters in the model. For example, BERT_BASE with 110M parameters needs only 36k mask variables (0.03%). Optimal structural pruning is usually defined (Kwon et al., 2022) in the supervised setting with respect to minimizing the accuracy loss of the original model:

min over M of L(M) subject to Cost(M) ≤ C,

where Cost(M) is the floating point operations (FLOPs) of the pruned model determined by the mask M. In this work, since only unlabeled data is available, the supervised loss L(M) cannot be evaluated. Similar to distillation, we consider minimizing the Feature Map Loss (Sun et al., 2020), L_FMT(ℓ), for each FFN in layer ℓ.
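To ground the masking formulation, the numpy sketch below applies a filter mask m to the FFN of Equation (1) and measures a layer-wise feature map loss as the mean squared error between the original and masked outputs — one plausible reading of L_FMT; the dimensions and the ReLU activation are simplifying assumptions.

```python
import numpy as np

d, N = 768, 3072          # hidden size and number of FFN filters (BERT_BASE-like)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d, N)) * 0.02, np.zeros(N)
W2, b2 = rng.normal(size=(N, d)) * 0.02, np.zeros(d)

def ffn(x, m):
    """FFN(x) with a per-filter mask m in R^N applied to the filter outputs."""
    h1 = np.maximum(x @ W1 + b1, 0.0)   # H(1) = sigma(x W1 + b1); ReLU for simplicity
    return (h1 * m) @ W2 + b2           # masked filters contribute nothing

x = rng.normal(size=(16, d))            # a batch of token representations
m_full = np.ones(N)
m = m_full.copy()
m[rng.choice(N, size=N // 2, replace=False)] = 0.0   # keep n = N/2 filters

# Layer-wise feature map loss between original and masked outputs (MSE reading).
fm_loss = np.mean((ffn(x, m_full) - ffn(x, m)) ** 2)
print(f"feature map loss with half the filters masked: {fm_loss:.6f}")
```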
The goal is then to find a mask over the filters of all L transformer layers such that Cost(M) is less than C and the feature map loss is minimized. One way to tackle this problem would be to consider it as a version of the distillation problem, where the goal is to find the optimal mask under the sparsity constraint. However, distillation methods require large amounts of unlabeled data and are very expensive to train (Xia et al., 2022).

Proposed Approach

Instead, in this work, we propose a gradient-free approach that only uses the weights of the trained model and statistics on layer-wise outputs computed from the unlabeled data to implicitly minimize the feature map loss in each layer. Figure 1 shows the overview of our framework, called Kernelized Convex Masking (KCM). KCM takes the trained model, a sampled unlabeled dataset D and a cost constraint C, and returns a mask M ∈ R^(L×N) that represents the masks of the N filters of all L layers.

Framework Overview

We introduce R2D2, which combines the ranking techniques Representative Ranking (R2) and Data-Driven (D2) to estimate the importance of the filters. As shown in Figure 1, these two approaches independently rank the N filters based on the weights and on the output of the activation function of the FFNs in all L layers. Then KCM merges the results of R2D2 across all layers. The top k filters are selected, and the rest are masked to zero. Note that, given a FLOPs constraint C, k is the total number of filters that satisfies the constraint C. Finally, we apply the scaling transformation of Kwon et al. (2022) over the selected filters to recover the accuracy drop and reduce the feature map loss in Equation 4. Next, we discuss our framework in more detail.

Kernelized Convex Masking (KCM)

Algorithm 1 illustrates the end-to-end approach. We first present the details of the proposed R2D2. Then, we discuss how we use these rankings for the final masking.

REPRESENTATIVE RANKING (R2)

Representative Ranking assumes H(1) is unknown and only uses the weights W(2) to rank the N filters. From the computational geometry perspective, the filters in W(2) ∈ R^(N×d) can be considered as N data points in a d-dimensional space. The structured pruning goal can then be translated as selecting a subset of data points (filters) to be used as representatives that can describe any data point (filter) in the dataset. There has been a lot of work on finding such a representative set (Kazemi et al., 2022; Killamsetty et al., 2021; You et al., 2020). However, for linear functions, this problem can be reduced to finding a convex hull. The convex hull is a subset of data points that can be used to find the maxima of any linear function. Since the FFN output is, in fact, a linear function of the filter outputs, the convex hull of W(2) can be considered as a representative of the filters that produce the maxima of the FFN regardless of the input H(1). The challenge is that, in a d-dimensional space, finding the exact convex hull takes on the order of O(N^(d/2)) time, which can be very expensive (e.g., in BERT_BASE, d = 768). Moreover, the number of convex hull data points radically increases with the number of dimensions.

Table 1. Comparison of the different structured pruning methods studied in this work. Check and cross marks show whether a method has the specific feature or not; N/A means not applicable. To simplify notation, we show gradient-free with (!∇). Supervision-free indicates not using labeled data.

To address these limitations, we resort to approximation; convex hull approximation is a well-studied area.
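The claim that the maxima of any linear function are attained on the convex hull can be sanity-checked numerically; this illustrative 2-D toy (not the R2 algorithm itself) uses scipy's exact hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 2))        # 200 "filters" in a 2-D toy space
hull_vertices = set(ConvexHull(points).vertices)

for _ in range(1000):
    w = rng.normal(size=2)                # a random linear function f(p) = w . p
    best = int(np.argmax(points @ w))     # the point that maximizes f
    assert best in hull_vertices          # the maximizer is always a hull vertex

print(f"{len(hull_vertices)} of 200 points suffice to realize every linear maximum")
```

In high dimensions, however, computing this exact hull is intractable, which is why the text turns to approximation.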
Among existing methods, Kernelized Convex Hull Approximation (KCHA) (Huang et al., 2018) is one approach that can be applied to our problem. Algorithm 2 shows the proposed Representative Ranking based on KCHA. Specifically, for each layer ℓ, we seek a positive coefficient matrix C whose diagonal elements indicate whether the corresponding data instances are extreme points. Huang et al. (2018) solve this problem as a Semi-NMF problem (Ding et al., 2008), rather than as a Non-negative Least Squares problem, and adopt a multiplicative updating rule as the solver, in which |A| denotes the element-wise absolute value of a matrix A; please refer to Huang et al. (2018) for more detail. The Gram matrix of W(2) can be replaced by the kernel matrix K(W(2), W(2)). In this paper, we use a Gaussian kernel. Since the kernel values are positive and the diagonal entries of K(W(2), W(2)) equal 1, the updating rule of the Semi-NMF algorithm can be modified accordingly (Equation 7). Algorithm 2 illustrates the steps of the Representative Ranking, where for each layer the coefficient matrix C is independently calculated using Equation 7. The algorithm then returns the diagonal of C as the ranking score of the filters. The width of the Gaussian kernel σ and the convergence rate α are hyperparameters. In our experiments, we observe that setting σ = 1.0 and α = 0.01 works for all tasks considered. Moreover, on average, it takes less than 20 iterations to converge.

DATA-DRIVEN RANKING (D2)

Representative Ranking (R2) assumes H(1) is unknown and ranks the filters solely based on the weights W(2). One could imagine using a similar ranking approach over W(1)ᵀ to rank the N filters. However, as mentioned, the convex hull is only a good representative for finding the maxima of a linear function, and the activation function σ makes H(1) nonlinear. Therefore, to incorporate the nonlinearity introduced by the activation function, Data-Driven (D2) performs a forward pass using sampled unlabeled data and gathers statistics on the outputs H(1) of each layer. It then uses the normalized average of these outputs to rank the filters in each layer (Algorithm 1, lines 5-7).

MERGE AND SCALE

Thus far, KCM ranks the N filters of each layer independently via Representative Ranking (R2) and Data-Driven Ranking (D2). In every layer, R2D2 combines the scores of R2 and D2 to capture the importance of filters based on the model weights and the layer outputs on the raw data (Algorithm 1, line 9). In our experiments, we run an ablation study to demonstrate the importance of these rankings. Given a FLOPs constraint C, let k be the total number of filters that satisfies C. In other words, the pruned model should have only k active filters across all layers, and the rest should be removed. As shown in Algorithm 1, KCM merges the R2D2 scores (S_R2D2) across layers, and the top k filters are selected to be active in the pruned model (Algorithm 1, lines 10-12). Since some accuracy drop after masking is inevitable, and existing structured pruning methods have shown that scaling can be helpful, we apply, similar to Kwon et al. (2022) and as shown in Figure 1, a scaling transformation to the selected filters. Such scaling uses only the unlabeled data and, based on the generated mask, aims to reconstruct the layer-wise outputs by scaling the outputs of the active filters. This, in fact, reduces the feature map loss in Equation 4.
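The sketch below strings the pieces together in numpy. The Gaussian kernel and the use of diagonal(C) as the R2 score follow the text; the multiplicative update shown is a generic Lee–Seung-style rule for min ||K − CK||² with C ≥ 0, standing in for the exact Semi-NMF update of Huang et al. (2018) (Equation 7, not reproduced here); and the merge operator — an elementwise product of per-layer-normalized scores — is an assumption, since the text does not specify how the R2 and D2 scores are combined.

```python
import numpy as np

def gaussian_kernel(W, sigma=1.0):
    """K[i, j] = exp(-||w_i - w_j||^2 / (2 sigma^2)); diagonal entries equal 1."""
    sq = np.sum(W**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * W @ W.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def r2_scores(W2, sigma=1.0, alpha=0.01, max_iter=100):
    """Representative Ranking: diagonal of the self-expression coefficients C."""
    K = gaussian_kernel(W2, sigma)
    N = K.shape[0]
    C = np.full((N, N), 1.0 / N)
    KKt = K @ K.T                                        # positive, since K is positive
    for _ in range(max_iter):
        C_new = C * (KKt / np.maximum(C @ KKt, 1e-12))   # multiplicative update
        delta = np.abs(C_new - C).max()
        C = C_new
        if delta <= alpha:                               # convergence criterion
            break
    return np.diag(C)

def d2_scores(H1):
    """Data-Driven Ranking: normalized average activation per filter."""
    avg = np.abs(H1).mean(axis=0)                        # H1: (num_tokens, N)
    return avg / np.maximum(avg.sum(), 1e-12)

def kcm_mask(W2_layers, H1_layers, k, sigma=1.0, alpha=0.01):
    """Merge R2D2 scores across layers and keep the global top-k filters."""
    scores = []
    for W2, H1 in zip(W2_layers, H1_layers):
        r2 = r2_scores(W2, sigma, alpha)
        d2 = d2_scores(H1)
        scores.append((r2 / r2.sum()) * (d2 / d2.sum())) # assumed merge operator
    S = np.stack(scores)                                 # shape (L, N)
    mask = np.zeros_like(S)
    top = np.argsort(S.ravel())[-k:]                     # global top-k selection
    mask.ravel()[top] = 1.0
    return mask

# Toy usage: 2 layers, 64 filters each, keep 40 filters overall.
rng = np.random.default_rng(0)
W2s = [rng.normal(size=(64, 32)) for _ in range(2)]
H1s = [np.maximum(rng.normal(size=(512, 64)), 0) for _ in range(2)]
print(kcm_mask(W2s, H1s, k=40).sum(axis=1))              # filters kept per layer
```

Because the top-k selection is taken over the flattened (L × N) score matrix, the number of surviving filters per layer is decided dynamically, which is exactly the behavior discussed later in the ablation on dynamic neuron selection.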
Table 2. Accuracy degradation of pruning BERT_BASE using our method and the prior structured pruning methods with different relative FLOPs. Note that our method is gradient-free (!∇), does not use labels of the data and does not require retraining (see Table 1 for more detail).

Figure 2. Comparison of KCM with Mask-Tuning (Kwon et al., 2022), Weight-Magnitude (Li et al., 2016) and Weight-Magnitude-Scale (Weight-Magnitude combined with the scaling approach). Mask-Tuning uses labeled data, but KCM and Weight-Magnitude-Scale are gradient-free with unlabeled data (Table 1). KCM outperforms Weight-Magnitude and Weight-Magnitude-Scale, which highlights the effectiveness of our approach in the absence of labeled data. For the 70% and 60% FLOPs constraints, Mask-Tuning, which uses labeled data, performs slightly better than KCM; Table 3 shows this gap more clearly.

Experimental Setup

We implemented our framework with PyTorch (Paszke et al., 2019) using the HuggingFace Transformers library. We evaluate the effectiveness of the proposed approach using BERT_BASE and DistilBERT (Sanh et al., 2019) on the GLUE and SQuAD benchmarks. For the Data-Driven ranking, we use 2K raw examples from the training sets. Note that we only use the raw inputs; no labels are used. Figure 6, in Appendix B, shows how the sample size affects performance. For the Representative Ranking calculation in Algorithm 2, the width of the Gaussian kernel, σ, and the convergence rate, α, are the hyperparameters. In our experiments, we set σ = 1.0 and α = 0.01. Moreover, on average, it takes less than 20 iterations to converge. All results are averaged over runs with 10 different seeds. Please refer to Appendix A for more detail on the experimental setup.

Baselines from structured pruning methods: Table 1 shows the comparison of the different structured pruning methods specialized for Transformers studied in this work. We compare these methods on four important features: gradient-free (no backward pass), retrain/finetune-free (no retraining or finetuning), supervision-free (no use of labeled data) and fast pruning time. We compare our proposed method against prior approaches, including (Sajjad et al., 2023), DynaBERT (Hou et al., 2020) and EBERT (Liu et al., 2021b). All of these techniques require retraining of the pruned model and/or jointly learning the pruning configurations during training, which leads to high training time, and they are not gradient-free. Specifically, as shown in Kwon et al. (2022), these methods require 5 to 33 hours of retraining. Mask-Tuning (Kwon et al., 2022) is a recent work that does not need retraining but still relies on labeled data and uses gradient computation to evaluate the importance of each filter. We also compare our method with Weight-Magnitude (Li et al., 2016), which is a light structured pruning method that is gradient-free, does not retrain the pruned model and does not use the data at all. We introduce Weight-Magnitude-Scale, which combines Weight-Magnitude (Li et al., 2016) with the scaling approach. Note that the scaling step only needs unlabeled data, so Weight-Magnitude-Scale has the exact same problem setup as our method (Table 1). We would like to highlight that our method KCM, as well as Mask-Tuning, Weight-Magnitude and Weight-Magnitude-Scale, finishes in less than 7 minutes across all tasks, which is 2 to 3 orders of magnitude faster than the other baselines. We evaluate the performance of our method against all these baselines by the FLOPs-accuracy trade-off of BERT_BASE on the GLUE and SQuAD benchmarks. In the experimental results, to simplify the notation, we indicate gradient-free with (!∇). Table 2 compares the accuracy drop of KCM against the prior structured pruning methods listed in Table 1.
Since the baseline accuracy differs slightly from paper to paper, we compare the amount of the accuracy drop from the baseline instead of the absolute accuracy. Similar to Kwon et al. (2022), we use the results without knowledge distillation and data augmentation reported in each paper, since these add extra overhead. As one can see, the highest accuracy drop of KCM across all tasks is −3.62, which occurs when 40% of the original FLOPs are reduced. It is worth mentioning that, while all the baselines require labeled data and leverage the backward pass, our proposed method is gradient-free with unlabeled data.

Experimental Results

Next, we perform a more thorough evaluation against Mask-Tuning (Kwon et al., 2022), Weight-Magnitude (Li et al., 2016) and Weight-Magnitude-Scale, since their problem setup is closer to ours (Table 1). Figure 2 shows the results on BERT_BASE as we vary the FLOPs constraint from 90% to 60%, i.e., reducing 10% to 40% of the original FLOPs. Clearly, KCM outperforms Weight-Magnitude and Weight-Magnitude-Scale, highlighting the effectiveness of our approach in the absence of labeled data. For the 70% and 60% FLOPs constraints, Mask-Tuning performs slightly better than ours; Table 3 shows the gap more clearly. This gap can be explained by the fact that, unlike Mask-Tuning, KCM is gradient-free with unlabeled data. We further evaluate the performance of KCM against Mask-Tuning (Kwon et al., 2022) on DistilBERT for the 70% and 60% FLOPs constraints. As shown in Table 4, interestingly, even though Mask-Tuning leverages the backward pass and labeled data, the proposed KCM performs better than Mask-Tuning on the SQuAD 2.0 benchmark. Moreover, the results of both approaches on QQP and STS-B are quite comparable, showing that, even without labeled data and without a backward pass, the accuracy loss of the model pruned by our KCM method can be minimal.

Ablation Studies

Importance of our ranking techniques: R2D2 is the core component of our proposed approach KCM, combining the Representative Ranking (R2) and Data-Driven (D2) rankings (Section 3). We run an ablation study to investigate the importance of these ranking techniques. Recall that R2 ranks the N filters based on the weights of the FFNs, while D2 ranks them by the output of the activation function. Figure 3 illustrates how the performance changes if we only use one of these in our framework. While D2-only is our KCM without R2, R2-only uses only the R2 ranking. As one can see, except on STS-B, where D2-only slightly outperforms KCM, R2-only performs better than D2-only. More importantly, when R2D2 combines them, it allows our KCM to leverage both rankings and demonstrates improvement across all tasks. Note that the results of D2-only also confirm that using only the output of the activation functions is not always sufficient for pruning, and they highlight the impact of using the trained model weights (more results in Figure 5).

Dynamic neuron selection: Another important feature of our KCM is that it dynamically decides how many neurons to prune from each layer. This feature is an outcome of merging the results of R2D2 across all L layers. Figure 4 illustrates how KCM affects different layers of BERT_BASE. Clearly, more pruning occurs over the last three layers, and more than half of the filters in the first two layers are pruned. From the KCM point of view, the middle layers seem to be more important across all tasks.

Discussion

Our KCM is a gradient-free structured pruning framework that requires neither retraining nor labeled data.
Here we would like to discuss the case where limited labeled data is available and how our approach can be extended to leverage it. Recall that R2D2 uses statistics from the unlabeled data to rank filters based on layer-wise outputs. Thus, a simple add-on would be to freeze the trained model, use the limited labeled data, perform only one forward-backward pass, and gather the gradient over the mask variables. Note that, unlike Mask-Tuning (Kwon et al., 2022), we do not calculate the Fisher information, since we just want to use the gradient as a new signal for the pruning. To do so, for example: 1) the gradient information can be used as a new ranking criterion that can be combined into our R2D2, or 2) one can use it to refine the top-k results of our KCM (see the sketch below). Specifically, let f_i be the least important filter in the top-k result of KCM, and f_j be the most important one according to the gradient scores. If f_j is not already in the top-k results, we can switch f_i with f_j if the total gradient of the top-k results increases. We implemented this simple greedy solution as an add-on to our KCM and show that having limited labeled data indeed contributes to improving the accuracy drop. Table 5 shows the result on DistilBERT, where only 512 sampled labeled examples are available. Since this is out of the scope of this work, we leave a more thorough investigation as future work.
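The greedy refinement above admits a very short sketch (hypothetical scores; "total gradient of the top-k results" is read as the sum of the gradient scores of the selected filters, so a swap is accepted exactly when the incoming filter's gradient score exceeds the outgoing one's):

```python
import numpy as np

def greedy_refine(kcm_scores, grad_scores, k):
    """Swap KCM's weakest selected filter for the strongest unselected one
    whenever doing so increases the total gradient score of the selection."""
    selected = set(np.argsort(kcm_scores)[-k:])            # KCM's top-k filters
    while True:
        outside = [j for j in range(len(grad_scores)) if j not in selected]
        f_i = min(selected, key=lambda i: kcm_scores[i])   # least important (KCM)
        f_j = max(outside, key=lambda j: grad_scores[j])   # most important (gradient)
        if grad_scores[f_j] > grad_scores[f_i]:            # total gradient increases
            selected.remove(f_i)
            selected.add(f_j)
        else:
            break
    return sorted(selected)

rng = np.random.default_rng(0)
kcm = rng.random(16)
grad = rng.random(16)
print(greedy_refine(kcm, grad, k=8))
```

Each accepted swap strictly increases the total gradient score of the selection, so the loop is guaranteed to terminate.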
Related Work

Pruning is an important area of research for model sparsity that removes insignificant weights in neural networks. While Kurtic et al. (2022), Sanh et al. (2020), Gale et al. (2019) and Zhang et al. (2022) proposed second-order, first-order and magnitude-based pruning methods for Transformers, Chen et al. (2020b) and Prasanna et al. (2020) explored the lottery ticket hypothesis. These methods can significantly reduce the model size; however, they might not offer a significant inference speedup, since the hardware cannot efficiently utilize unstructured sparse patterns. Even though structured pruning methods can be effective for compression and speedup, they can be difficult to implement in practice due to the high computational cost and complexity of the process. Additional training during or after pruning can be up to 10 times more expensive than the original model training (Lagunas et al., 2021; Xia et al., 2022), and the pruning pipeline often requires rewriting the training code and involves many additional hyperparameters to adjust (Hou et al., 2020; Lan et al., 2019; Liu et al., 2021a; Yao et al., 2021). Pruning in an unsupervised setting has been studied in Guo et al. (2020b) and Browne et al. (2020) for spiking neural networks and fully-connected layers; however, the pruning either happens during training or still requires retraining of the pruned model. In contrast, our structured pruning method requires neither retraining nor labeled data.

Unsupervised: Guo et al. (2020b) proposed an unsupervised online adaptive weight pruning method that dynamically removes non-critical weights from a spiking neural network (SNN). Browne et al. (2020) use unsupervised k-means clustering to detect clusters of similar filters and nodes in fully-connected layers and prune those that are redundant. Aghasi et al. (2020) proposed a convex post-processing module that prunes (sparsifies) a trained network layer by layer while preserving the internal responses.

Representative selection and low-rank decomposition: There has been a lot of work on finding such a representative set (Kazemi et al., 2022; Killamsetty et al., 2021; You et al., 2020), but these approaches require training. However, since the FFN output is a linear function of the filter outputs, this problem can be reduced to finding the convex hull over W(2).

Conclusion

In this work, we studied the problem of structured pruning with unlabeled data and no backward pass. We proposed a gradient-free structured pruning framework that prunes filters with the help of our proposed R2D2, which combines two ranking techniques called Representative Ranking (R2) and Data-Driven (D2). We empirically evaluated our framework on the GLUE and SQuAD benchmarks using BERT_BASE and DistilBERT. Compared to when labeled data is available, our approach achieved up to 40% FLOPs reduction with less than 4% accuracy loss over all tasks considered.

B.3. Speedup

We evaluated the latency on real hardware and obtained the speedup of KCM for BERT_BASE on a single NVIDIA V100 GPU under the 60% FLOPs constraint. We also conducted additional experiments on a larger-scale model, BERT_LARGE, over SQuAD 1.1. The results in Table 7 indicate that our method outperforms the unsupervised baselines, providing further evidence of its efficacy.

D. Train-Test Data Discrepancy

We ran new experiments with a dataset called new-Wiki to further evaluate the effectiveness of the proposed method under train-test data discrepancy. As outlined in Miller et al. (2020), new-Wiki is different from the original SQuAD 1.1 dataset and was generated using the Wikipedia dataset. We explored various scenarios involving the sampling of unlabeled data from datasets that differ from the evaluation dataset. In particular, we sample unlabeled data from 1) SQuAD 1.1-train, 2) SQuAD 1.1-val or 3) new-Wiki, and evaluate on SQuAD 1.1-val. As evident from the results in Table 9, sampling unlabeled data from new-Wiki (and evaluating on SQuAD 1.1-val) yielded improved performance compared to sampling from SQuAD 1.1-train and evaluating on SQuAD 1.1-val. This finding further supports our assertion regarding the applicability and effectiveness of our approach. We further evaluate the effectiveness of our approach by using the model finetuned on SQuAD 1.1 but evaluating on new-Wiki.
The Objectives and Practical Aspects of Quality Assurance System of Higher Education
Each country in the world has its own individual approach to the quality assurance system of higher education, so the quality of educational services differs from country to country. Developing countries should be guided by the standards and recommendations put forward by the world's leading countries in the field of higher education quality assurance in order to improve the quality of educational services. The purpose of this scientific investigation is to formulate the objectives and analyze the practical aspects of the functioning of the quality assurance system of higher education. Within the framework of studying the practical aspects of higher education quality in European Union countries, methods of general analysis have been used, including comparison and grouping; the statistics are also presented by graphical methods. The practical aspects of quality assurance of higher education in European Union countries have been analyzed, as reflected in the dynamics of the number of students who have completed higher education, the structure of higher education degree-seeking applicants, the employment rates of recent graduates of higher education institutions, the World University Rankings, the Europe Teaching Rankings, and the rating of the strength of the higher education system (the QS Higher Education System Strength Rankings). Proposals for ensuring the proper quality of higher education and a high level of educational services have been presented to educational institutions of the European Union. Taking this into consideration, the purpose of the academic paper is to formulate tasks and analyze the practice of the quality assurance system of higher education. The following tasks should be considered and solved in order to achieve this purpose: 1) to investigate the theoretical, methodical and methodological principles of the quality assurance system of higher education; 2) to consider and analyze the features of the formation and development of the quality assurance system of higher education in the member states of the European Union; 3) to provide recommendations to educational institutions of European Union member states on ensuring the proper quality of higher education and a high level of educational services.
Literature Review. Studies show that the topic of quality assurance of higher education is addressed in the works of many leading scholars. A survey conducted in German higher education institutions by Seyfried and Pohlenz (2018) found that the achievement of a high level of quality assurance in higher education is, to a large extent, supported by higher education management and by cooperation with other educational institutions. Moreover, these authors emphasize that "quality managers' role as promoters of quality assurance exhibits significant correlations with perceived effectiveness" (Seyfried & Pohlenz, 2018). At the same time, Filippakou (2011) presents a conceptual approach to the idea of quality in higher education, arguing that the concept of quality in higher education has been ideologically constructed and that the prevailing quality regime permits only a fairly narrow understanding of higher education.
In this context, the researcher suggests using discourse and power to emphasize the interconnection of ideology and quality in higher education. Elken and Stensaker (2018) have highlighted three areas for improving the provision of higher education as part of upgrading the quality of this process. The improvements relate to: the management of the institution, in particular in the sphere of standards compliance; workflow, in the direction of balancing the expected results; and culture, that is, adherence to academic excellence. It should also be noted that, in studying the issue of improving higher education, these researchers state that "for the field of practice, the three perspectives may serve as a reminder that in a well-functioning higher education institution, effective coordination is not only about acknowledging management and culture but also a range of local practices that are not always visible in the formalised systems". Cardoso, Rosa and Stensaker (2016) have found that the quality of higher education in Europe is controlled both by relevant national agencies specializing in higher education quality and directly by institutions providing higher education services. Moreover, in developing a number of literature-based views on the problem of higher education quality, these scientists have determined that quality is characterized by such concepts as culture, responsibility and consistency. However, they propose considering the quality of higher education in the face of a number of basic obstacles to quality. This research may become the basis for developing several other effective mechanisms for ensuring the quality of higher education. Moreover, Leiber, Stensaker and Harvey (2015) have examined basic aspects of assessing the impact of quality assurance in higher education institutions in order to improve the development of the quality of higher education and determine the level of effectiveness of the knowledge provided by higher education institutions. In the context of their investigation, they conducted a survey of the interested parties (students, teachers, employers, agencies specializing in quality assurance in higher education, heads of curricula, heads of higher educational institutions, government, the population in general, and the international community) concerning their points of view on how the modern system of ensuring the quality of higher education has changed. The survey found that the greatest impact on changing the quality assurance system of higher education came from the experience gained in IQA procedures. At the same time, Beerkens (2018) has considered an approach to quality assurance of higher education based on a good deal of identified evidence. The approach, called the "golden standard", demands strict experimental evidence that high-quality tools act as a means of enhancing students' learning. The proposed approach can be technically complex, quite costly and ineffective. Nevertheless, the given research states that an evidence-based approach to ensuring the quality of higher education is a very attractive concept. In addition to this evidence-based approach, the author also highlights an approach based on encouraging others to improve the quality of teaching in higher education institutions.
Therefore, according to the researcher, the management of higher educational institutions should make efforts to critically monitor the educational process in which the material is taught to students, using a system of incentives, as well as collecting information (evidence) on the effectiveness of such an approach to ensuring the quality of higher education. The researcher also considers the consequences of implementing a policy approach to quality assurance in higher education, which is likewise based on evidence. Based on the results of using this approach, it is possible to obtain information (evidence) on the effectiveness of higher education institutions' quality assurance policies. Jarvis (2014) has found that the dominant role in regulating the management processes of higher education institutions lies with quality assurance regimes based on neoliberal management, where political discourse is guided more by belief than by evidence. The same study noted that currently almost half of the countries of the world have an appropriate system for ensuring the quality of higher education. For instance, direct state regulation of the quality assurance process of higher education in European Union member states is carried out in Denmark through Subject Assessments, in Germany through Subject Accreditation, and in Spain (Catalonia) through Performance-based Contracting and the European Qualifications Framework. The scientist also states that the number of controlling bodies and organizations involved in ensuring the quality of higher education is currently increasing significantly. At the same time, Ardi, Hidayatno, Yuri and Zagloel (2012) have analyzed the relationship between quality measures in higher education and determined the impact of each quality measure on students' satisfaction. To detail the quality measures in higher education, the researchers selected such indicators as: dedication of teachers of higher education institutions, commitment of departments of higher education institutions, preparation of courses, quality of services at the university, courtesy, customer improvement, and feedback. The researchers found that the quality of the material's presentation, the dedication of teachers to the educational process, and the suggestions in reviews to improve the quality of higher education had a positive impact on the level of students' satisfaction with higher education. Urbanovic and Wilkins (2013) proposed applying the concept of internationalization as part of a strategy to improve the quality of higher education in small countries with a comprehensive or universal higher education system. Such actions make it possible to identify not only the opportunities but also the challenges facing countries in the higher education system. The authors demonstrated the practical application of the concept of internationalization to ensuring the quality of higher education using the example of Lithuania, a member state of the European Union.
According to the results of the study, the researchers identify a number of strategies (a strategy for setting clear goals; a strategy for implementing a change management program; a strategy for evaluating the effectiveness of internationalization of the quality of higher education; a strategy of control over investment in the process of internationalization of the quality of higher education; a strategy of comparing higher educational institutions with the world's best higher educational institutions; a strategy of investing in staff development; a staff competence development strategy, etc.) that can be implemented in small countries, such as Lithuania, in order to improve the quality of higher education.
Data and Research Methodology. Within the framework of studying the practical aspects of higher education quality in European Union countries, methods of theoretical analysis have been used (to study the theoretical, methodic and methodological principles of the quality assurance system of higher education), as well as methods of grouping, description, comparison and synthesis (to consider and analyze the peculiarities of the formation and development of the quality assurance system of higher education in the member states of the European Union and to provide recommendations to educational institutions of the member states of the European Union on ensuring the proper quality of higher education and a high level of educational services).
Results of the Research. As part of the disclosure of the subject matter of this scientific investigation, one should consider the dynamics of changes in the number of students in European Union countries who have completed higher education. In 2017, compared to 2013, the overall dynamics of the number of students in EU countries as a whole remained almost unchanged, with a slight increase of 0.65% (Table 1). During 2013-2017, the largest numbers of undergraduate students were present in Germany, France, Great Britain, Spain, Italy and Poland; however, in Italy and Poland in 2017, compared to 2013, there was a decrease in the number of students who had completed higher education, by 1.9% and 18.5% respectively. In terms of the structure of degree-seeking applicants of higher education, the largest share, which increased by 0.3% in 2017 compared to 2013, was held by applicants who had received their initial higher education (short-cycle tertiary education) (Figure 1). In order to study the practical aspects of quality assurance in higher education, the rankings of the leading universities in European Union countries should be analyzed. According to the Times Higher Education magazine, which annually determines the World University Rankings, the top 10 most prestigious universities of the European Union include the University of Oxford, the University of Cambridge and Imperial College London, located in Great Britain (Table 2). In addition, according to the World University Rankings, the top 20 most prestigious universities of European Union countries are located in Great Britain, Germany, Sweden, Belgium, France and the Netherlands. At the same time, it should be noted that Times Higher Education magazine also determines the Europe Teaching Rankings.
Accordingly, both in 2018 and in 2019, the University of Oxford, the University of Cambridge, UCL, the University of Warwick and the University of Bristol in Great Britain, together with the University of Navarra in Spain, remained the most prestigious universities in this rating (Table 3). In the context of studying the quality assurance problems of higher education, the strength of the higher education system of each European Union country should be analyzed. According to the British consulting company Quacquarelli Symonds, Great Britain, Germany, France and the Netherlands have strong higher education systems among the countries of the European Union (Figure 3). Spain, Italy, Sweden, Belgium and Finland also score highly on higher education system strength, although outside the top 10 leading countries.
Discussion. Based on the studies conducted, as well as on the analysis of the practical aspects of the functioning of the higher education quality system in the countries of the European Union, it has been established that, in contrast to the generally accepted European standards, the internal quality assurance of higher education in the higher education institutions of each country of the European Union is individual. According to the results of processing the data of the World University Rankings, it has been determined that some of the most prestigious universities in the world, included in the top 10, are universities of the European Union, such as the University of Oxford, the University of Cambridge and Imperial College London, located in the United Kingdom. According to the Europe Teaching Rankings presented by Times Higher Education, it has been determined that the most prestigious universities are the University of Oxford, the University of Cambridge, UCL, the University of Warwick and the University of Bristol in the United Kingdom, and the University of Navarra, located in Spain. Quality assurance should take into account relevant national regulations in the field of higher education and, given that higher education lies at the core of several of the sustainable development goals (SDGs) for 2030, it can also contribute to the success of this global initiative. Concerning effective equality between women and men, SDG5 contemplates the strengthening of gender equality governance at all levels (Benito & Verge, 2020). In the context of ensuring the proper quality of higher education and a high level of educational services, educational institutions of the countries of the European Union are recommended: 1) to review and improve the policy and directions of ensuring the quality of higher education; 2) to approve, review and monitor the results of the implementation of their own curricula, as well as to explore the basic aspects and benefits of training programs implemented by other educational institutions; 3) to improve the quality and level of educational services and to develop the creative potential of teachers; 4) to conduct a proper assessment of students according to approved criteria and rules; 5) to advance the educational process in accordance with the tendencies of modern innovative, scientific and technical development, etc. National guidelines for prosthetics and orthotics programmes have previously been developed in some countries. The UK had the National Health Service and Quality Assurance Agency subject benchmark for prosthetics and orthotics. Other countries, for example Australia, have professional association guidelines for programmes.
Ramstrand and Ramstrand have recently published competency standards for graduates of prosthetics and orthotics in Sweden. Aminian and O'Toole compared international curricula and identified both common and distinct areas. The International Society of Prosthetics and Orthotics has guidelines for the education of prosthetists and orthotists, which were being updated during the initial stages of this project (Hill et al., 2020). Learning outcomes are now common internationally across higher education programmes and a central part of the Bologna Process and the European Qualifications Framework. The intention is to focus higher education on output rather than input. As well as being part of the learning, teaching and assessment process, learning outcomes are also linked to governance and management. Learning outcomes are expressed, from the perspective of the student, as what a student is expected to be able to do at the end of the learning activity. They should include a verb that enables observable and assessable outcomes, with Bloom's taxonomy often being used.
Conclusions. Thus, as part of the disclosure of the subject matter of this scientific paper, it has been established that developing countries should be guided by the standards and recommendations put forward by leading countries in the field of ensuring the quality of higher education in order to improve the quality of educational services. The study has analyzed the practical aspects of quality assurance in higher education in European Union countries through such indicators as: the dynamics of the number of students who have completed higher education; the structure of degree-seeking applicants of higher education; the employment rates of recent graduates of higher education institutions; the World University Rankings; the Europe Teaching Rankings; and the rating of the strength of the higher education system (the QS Higher Education System Strength Rankings). Studies of the dynamics of the number of students enrolled at higher educational institutions showed that during 2013-2017 the largest number of students was present in such European Union member states as Germany, France, Great Britain, Spain, Italy and Poland. It has been revealed that in the structure of applicants for higher education, the largest share, which also increased by 0.3% in 2017 compared to 2013, was occupied by applicants who received initial higher education. It has been determined that the highest employment rates of graduates of higher educational institutions in 2013 and during 2017-2018 were observed in Malta, Germany, the Netherlands, the Czech Republic, Austria, Sweden, Luxembourg, Hungary, the United Kingdom and Denmark, which exceed 85% of the total number of graduates of higher educational institutions (excluding the Czech Republic, Sweden, Luxembourg and the United Kingdom in 2013, and Hungary and Denmark in 2013 and 2017). According to the results of the research conducted, proposals for ensuring the quality of higher education and a high level of provision of educational services have been presented to the educational institutions of the countries of the European Union.
2020-08-06T09:06:12.768Z
2020-08-04T00:00:00.000
{ "year": 2020, "sha1": "8df75966bd9e2e409af8915ceedc85a216eefc23", "oa_license": "CCBY", "oa_url": "http://www.sciedupress.com/journal/index.php/ijhe/article/download/18570/11464", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "07d923accf1214048fe6cd4c44f97411e3f591f9", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Business" ] }
54817246
pes2o/s2orc
v3-fos-license
BACTERIAL SPECTRUM AND PATTERN OF ANTIMICROBIAL SENSITIVITY AMONG OUTPATIENTS WITH PNEUMONIA IN A TERTIARY CARE HOSPITAL
OBJECTIVES: To outline the spectrum of bacteria causing pneumonia and the pattern of antimicrobial sensitivity in outpatients with pneumonia in a tertiary care hospital in Himachal Pradesh. METHODS: Sputum of 108 immunocompetent pneumonia patients attending the outpatient departments of Medicine and Pulmonary Medicine of Dr. R. P. Government Medical College, Kangra at Tanda was sent for Gram staining and culture and sensitivity testing. RESULTS: Commensals were the most frequently detected (32, 29.6%), followed by Staphylococcus aureus in 17 (15.7%) and Streptococcus pneumoniae in 16 (14.8%). These were followed by three Gram-negative organisms, namely E. coli (11, 10.2%), Pseudomonas (10, 9.2%) and Klebsiella (8, 7.2%). No growth was obtained in 7 (6.5%) specimens, and other organisms were isolated in 7 (6.5%) specimens. Staphylococcus aureus was sensitive to vancomycin, clindamycin, cefoxitin, azithromycin and cotrimoxazole. Streptococcus pneumoniae was found to be sensitive to vancomycin, clindamycin, gentamicin, azithromycin, penicillin, cotrimoxazole and amoxicillin+clavulanic acid. Klebsiella was found to be sensitive to imipenem, azithromycin, ciprofloxacin, gentamicin and amoxicillin+clavulanic acid. E. coli was sensitive to imipenem, gentamicin and amoxicillin+clavulanic acid. Pseudomonas aeruginosa was found to be sensitive to gentamicin, ceftazidime, imipenem, ticarcillin and piperacillin. CONCLUSION: Staphylococcus aureus and Streptococcus pneumoniae are the commonest organisms causing pneumonia. Streptococcus pneumoniae is resistant to many antibiotics. Azithromycin can be the first-line therapy for pneumonia.
Identification of Organisms: The isolates obtained on culture were studied and identified by standard bacteriological techniques based on colony characteristics and biochemical tests. [3] Colony Characteristics: the colonies of each organism isolated were studied in respect of size, shape, colour, surface, elevation, opacity, edges, margins, effect of media, pigment production, odour, emulsifiability and consistency. Biochemical Reactions: various biochemical tests were performed for identification. Gram-positive bacteria were identified by catalase, coagulase, bile solubility, CAMP (Christie, Atkins and Munch-Petersen) and hippurate hydrolysis tests. Gram-negative bacteria were identified by catalase, oxidase, motility, IMViC reactions, O-F test, carbohydrate fermentation tests and amino acid decarboxylation tests. DISCUSSION: In the present study the rate of isolation of organisms from sputum was 63%, while in the study by Oberoi et al the rates of isolation from sputum culture and blood culture were 32% and 22% respectively. The reason for the higher sputum positivity in our study may be a better technique for sputum expectoration and rapid transportation of the specimens to the laboratory. In the present study, commensals of the throat were isolated in 40 (38%) specimens, and pathogenic bacteria were isolated in 68 (62%) specimens, while Shah et al in their study found that in 71 (71%) cases no etiological cause was obtained. Bansal et al also found that out of 70 patients, 53 (75.6%) had an identifiable etiology, with 12 of these having evidence of a mixed infection; no organisms could be isolated in 17 (24.4%) patients. [6,7,8] We found that the proportions of Gram-positive and Gram-negative organisms causing pneumonia were almost the same, i.e.
Gram-negative 35 (51%) and Gram-positive 33 (49%), while Shah et al found that Gram-negative organisms were the commonest cause (19, 65.5%), followed by Gram-positive (10, 34.5%) in their study. [8] In the present study, Staphylococcus aureus was isolated in 17 (25%), Streptococcus pneumoniae in 16 (24%), E. coli in 11 (16%), Pseudomonas aeruginosa in 10 (15%), Klebsiella spp. in 8 (12%), NFGO in 3 (4%), Citrobacter spp. in 2 (3%), and Acinetobacter spp. in 1 (1%). This is in contrast to the study by Oberoi et al, according to which Streptococcus pneumoniae (22, 32.8%) was the commonest organism isolated, followed by Pseudomonas aeruginosa (21, 30.9%), E. coli (8, 11.7%) and Klebsiella spp. (7, 10.2%). In addition, they also isolated Acinetobacter, Candida albicans, Aspergillus fumigatus and Staphylococcus aureus in a smaller number of cases (9, 13.1%). Shah et al found that Pseudomonas aeruginosa was the commonest pathogen (10, 34.5%), followed by Staphylococcus aureus (7, 24.1%), Escherichia coli (6, 20.7%), Klebsiella spp. (3, 10.3%), Streptococcus pyogenes (1, 3.5%), Streptococcus pneumoniae (1, 3.5%) and Acinetobacter spp. (1, 3.5%). Other than the fact that they isolated Pseudomonas aeruginosa as the commonest organism, the rest of their culture results were similar to the present study. Bansal et al found that the most frequent pathogen was Streptococcus pneumoniae (19, 35.8%), followed by Klebsiella pneumoniae (12, 22%), Staphylococcus aureus (9, 17%), Mycoplasma pneumoniae (8, 15%), Escherichia coli (6, 11%), β-hemolytic streptococci (4, 7.5%) and other Gram-negative bacilli (5, 9%). [6,7,8] Staphylococcus aureus in pneumonia patients was sensitive to vancomycin in 17 (100%), clindamycin in 16 (94.2%), cefoxitin in 14 (82.3%), azithromycin in 12 (70.6%), gentamicin in 10 (58.8%) and cotrimoxazole in 10 (58.8%). Streptococcus pneumoniae was found to be sensitive to vancomycin in 16 (100%), clindamycin in 12 (75%), azithromycin in 9 (56.2%), ceftriaxone in 9 (56.2%), cotrimoxazole in 7 (43.7%) and amoxicillin+clavulanate in 7 (43.7%) (Fig. 2: sensitivity profile of Streptococcus pneumoniae isolated from sputum). E. coli in pneumonia patients was sensitive to imipenem (9, 81.8%), gentamicin (8, 72.7%) and amoxicillin+clavulanate (4, 36.4%). Pseudomonas aeruginosa was sensitive to gentamicin in 10 (100%), ceftazidime in 9 (90%) and imipenem in 5 (50%). Klebsiella spp. was sensitive to imipenem in 8 (100%), azithromycin in 6 (75%), ciprofloxacin in 6 (75%) and gentamicin in 4 (50%). Therefore, in the present study, organisms were highly susceptible to vancomycin, clindamycin, cefoxitin, azithromycin, gentamicin and cotrimoxazole; however, resistance to penicillin, amoxicillin+clavulanate and fluoroquinolones was also noted in many cases. In the study by Oberoi et al, the antibiotics which showed the best sensitivity were third-generation cephalosporins, fluoroquinolones and aminoglycosides, which reflects the higher incidence of Gram-negative pneumonia in their series. The largest experience to date for the treatment of Pseudomonas aeruginosa pneumonia has been with a combination of a broad-spectrum beta-lactam antibiotic and an aminoglycoside such as gentamicin. The study revealed that CAP can be caused by different bacteriological agents, with a preponderance of Gram-negative over Gram-positive organisms among blood isolates, although from sputum culture a higher number of Streptococcus pneumoniae were isolated.
If antibiotics according to the sensitivity pattern are administered to these patients at an early stage of the disease, morbidity and mortality can be minimized. Iannini et al (2007), in a retrospective multicenter study, found that 87 of 122 patients showed low-level macrolide (erythromycin) resistance, similar to our study. [6,9] In the present study we noted that Streptococcus pneumoniae was resistant to fluoroquinolones (ofloxacin) in 62.5% of cases. Chen et al, against the background that fluoroquinolones are now recommended for the treatment of respiratory tract infections due to Streptococcus pneumoniae, particularly when the isolates are resistant to beta-lactam antibiotics, performed susceptibility testing. They concluded that the prevalence of pneumococci with reduced susceptibility to fluoroquinolones is increasing in Canada, probably as a result of selective pressure from the increased use of fluoroquinolones. Since we also found increasing resistance to fluoroquinolones, the cause may be selective pressure due to overuse of these drugs in our area, particularly due to governmental supply of these drugs, which are given to patients routinely for various infections, including parenteral formulations. Weiss et al conducted a study over a 20-month period in a hospital respiratory ward in which ciprofloxacin was often used as empirical antimicrobial therapy for lower respiratory tract infections (LRTIs) and demonstrated the ability of S. pneumoniae to acquire multiple mutations that result in increasing levels of resistance to the fluoroquinolones and to be transmitted from person to person. Shefet et al, in a Cochrane meta-analysis of 24 trials including 5015 randomized patients, found no benefit of atypical cover (fluoroquinolone monotherapy versus non-atypical monotherapy). Liu et al, in a prospective study of 610 patients, found that the susceptibility of Streptococcus pneumoniae to penicillin and azithromycin was 77.8% and 20.6% respectively, while we found sensitivity rates to penicillin and azithromycin of 18.7% and 56.2% respectively, the opposite of the findings of Liu et al. [10,11,12,13] CONCLUSIONS:
- Staphylococcus aureus and Streptococcus pneumoniae are the commonest organisms causing pneumonia.
- Streptococcus pneumoniae isolated from the sputum of pneumonia cases has shown resistance to drugs like penicillin, amoxicillin/clavulanic acid and fluoroquinolones.
- It was found to be sensitive to clindamycin, macrolides (azithromycin), third-generation cephalosporins (ceftriaxone) and cotrimoxazole.
- For presumptive treatment, the first line may be a macrolide (azithromycin), as it has shown activity against both Gram-negative and Gram-positive organisms; if combination therapy is required, ceftriaxone plus azithromycin may be used in such cases.
2019-03-15T13:12:08.436Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "30b08e99c2e906d86054ec451196616149205c5a", "oa_license": null, "oa_url": "https://doi.org/10.14260/jemds/2015/730", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "513453d0c334baff15de0c5774461d57d0b7113d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213197983
pes2o/s2orc
v3-fos-license
Does 'WOW' translate to an 'A'? Exploring the effects of virtual reality assisted multimodal text on Chinese Grade 8 EFL learners' reading comprehension
Purpose: In recent years, the incorporation of multimedia into linguistic input has opened a new horizon in the field of second language acquisition (SLA). In the reading aspect, the advent of virtual reality (VR) technology extends the landscape of the reading repertoire by engaging learners with auditory, visual and tactile multimodal input. This study aimed to examine the pedagogical potential of VR technology in enhancing learners' reading comprehension. Methods: Three classes comprising 131 Chinese 8th grade EFL students participated in this study. The study adopted a mixed methods methodology and triangulated pre-, post- and retention tests, questionnaires, learning journals and interview data to compare three modes of text input in terms of learners' reading performance and cognitive processing. Results: The results indicated that VR-assisted multimodal input significantly improved learners' macrostructural comprehension in the short term, whereas there was no significant difference in retention performance. The findings revealed that reading multimodal text did not exceed learners' memory capacity or impose extraneous cognitive load. Participants mainly reported favorably on the efficacy of multimodal input in assisting their reading. Conclusion: This study was the first attempt to integrate VR technology with input presentation and cognitive processing, and it offers a new line of theorization of VR-assisted multimodal learning in the cognitive field of SLA.
Introduction. In the field of second language acquisition (SLA), the important role of input has been addressed by many researchers (Ellis, 1994; Gass, 1997; Krashen, 1985; Long, 1996). In the cognitive account of SLA, exposure to input has been regarded as a necessary condition for second language (L2) development to occur (Fotos, 2000; Gass, 1997). Recently, the modality of input has become a focus of inquiry in SLA due to the development of information and communication technology (ICT), which has transformed the way information is recorded, represented, managed and processed. In this study, I used the term "multimodal text" (Walsh, 2007, p. 26) to encompass a broad concept of engaging in, interacting with, and reflecting upon text presented by different multimedia. This study was situated in the cognitive field of SLA to test the efficacy of multimodal input on Chinese EFL beginners' reading comprehension by comparing screen-based multimodal text and print-based monomodal text, specifically how learners decode word meanings and memorize details at the microstructural level and construct coherent mental representations at the macrostructural level. The role of multimodality in SLA has been emphasized by advocates of multimedia learning (Rost, 2002), since multimodality provides learners with multisensory information in diverse semiotic codes (Legros and Crinon, 2002). In the past decade, two-dimensional (2D) visuals such as pictures and videos have been explored extensively in providing multimodal input (Lorenz, 2009; Lan and Sie, 2010). The advent of sophisticated virtual reality (VR) technology extends the scope of 2D multimodal input to a three-dimensional (3D) level and engages learners with auditory, visual and tactile multimodal input. How the affordances provided by VR technology affect language learning, and how individuals learn with the assistance of VR, is not yet well understood.
This research is expected to take the field forward by providing a pedagogical rationale for understanding how students interact with VR-assisted multimodal text and thereby improve their reading performance. This study applied VR technology to examine one facet of SLA: reading, particularly expository reading that situates readers in a content and language integrated learning (CLIL) context. For some students, reading expository text can be an arduous and occasionally frustrating experience (Lightbown & Spada, 2013). One of the underlying reasons for the current study was to examine one potential avenue that could make the reading process more enjoyable, effective and efficient. This study looked at the role of working memory to evaluate whether multimodal input facilitates learners' reading comprehension by exerting the modality effect within limited capacity, or hinders learners' comprehension by imposing a redundancy effect that exceeds memory capacity. Thus, the cognitive load approach was also adopted (Paas et al., 2003) to compare learners' cognitive load in three input contexts and to explore their perceptions of multimodal input, so as to capture a full picture of how multimodal text affects EFL learners' reading development. To my knowledge, there has been little attempt to compare the effects of multimodal input and monomodal input in the reading aspect of SLA, and this study addressed this gap by making comparisons in three aspects: (1) reading comprehension at the macrostructural and microstructural levels in different input conditions, (2) cognitive load imposed by different presentation modes, and (3) learners' perceptions of the multimodal text in the reading treatment. Overall, the study aimed to test the efficacy of multimedia, especially VR technology, in providing multimodal input and enhancing Chinese 8th grade EFL students' reading comprehension, and by doing so, to extend existing theories of multimedia learning and offer valuable insight into multimodality in the scholarship of SLA.
Theoretical background. In general, there are two major strands of theories that link SLA with multimedia learning. The first strand is based on Krashen's established input hypothesis in SLA, while the second perspective draws on a cognitive framework of multimedia learning. In the field of SLA, a substantial number of empirical studies have utilized multimedia to optimize linguistic input, while the cognitive theory has triggered more inquiry into learners' inner mechanisms. In this study, the two theoretical perspectives were brought together to conceptualize a framework for multimodal reading assisted by VR technology. Input hypothesis in SLA. Multimedia learning is related to a number of SLA theories, one of which is Krashen's input hypothesis, indicating that multimedia tools can be incorporated in the process of L2 learning through the combination of different modes of input (Wang, 2012). According to Krashen (1981, p. 104), "the optimal input is slightly above the present level of learners' competence as an 'i + 1' model." However, in Krashen's account, the scope of the 'i + 1' input is still not clear, and several questions can be raised, such as what degree of increase in difficulty is suitable, and whether the 'i + 1' model is applicable to all EFL learners with different levels of English proficiency. In the context of multimedia learning, input is perceived through both auditory and visual channels, and therefore both words and images are selected to create mental models of language and content.
Pictures are connected to build a pictorial model, and words are connected to build a verbal model. To avoid oversimplification, this study moved one step further by utilizing VR technology to provide tactile input as the 'i + 1' on the basis of auditory and visual input, and examined whether it could improve a group of 8th grade Chinese EFL learners' reading comprehension within limited memory capacity, or exceed their capacities and impede learning. Moreover, the acquisition process is not clearly illustrated in this hypothesis. Krashen (1982, p. 21) simply claimed that "a necessary (but not sufficient) condition to move from stage 'i' to stage 'i + 1' is that the acquirer understands the 'i + 1' input." It is arguable whether understanding input alone is enough for acquisition. Hence, the cognitive account in the field of multimedia learning can make up for this deficiency by providing a detailed explanation of the cognitive process. Mayer's (2002) cognitive theory of multimedia learning (see Figure 1) is underpinned by three assumptions from cognitive science: (1) the dual channels assumption, that there are two separate visual and auditory channels for processing different types of information; (2) the limited capacity assumption, that there is limited capacity in each channel to process information; and (3) the active processing assumption, that learning is an active process that filters, selects and organizes new information and integrates it with prior knowledge. The memory system consists of three storage structures: sensory memory, working memory and long-term memory. Sensory memory acts as a buffer for stimuli received from different modes of input, working memory is short-term memory for temporary retrieval of the processed information, and long-term memory keeps a large amount of information over a long period of time. According to Mayer (2009), working memory plays a key role in multimedia learning. Likewise, Sweller et al. (2011) argued that the cognitive load imposed on working memory should be taken into consideration when designing a multimedia learning environment, since the selection, organization and integration of information occur in working memory. Therefore, it is important to examine the cognitive load imposed by different modes of input and evaluate how it affects learners' reading comprehension. There are three types of cognitive load distinguished in the literature: intrinsic, extraneous, and germane cognitive load (Brunken et al., 2003; Sweller et al., 1998). Intrinsic load is attributed to the inherent difficulty level of the learning material without being affected by instructional design, that is, the consistent level of difficulty of the expository text in this study; extraneous load refers to the mental load caused by the presentation format and instructional design, which is the key aspect this study looked into since it concerns input modality; germane load results from appropriate instructional design and helps learners construct and process schemas of input. This study focused on the modality of delivering the information and examined whether multimodal input would incur extraneous load and exert the redundancy effect that affects students' performance negatively, or increase the germane load and exert the modality effect that enhances their learning outcome.
Critics of this theory often question whether cognition is mediated by something other than words and images (Reed, 2010), and this study answered the question by incorporating tactile input into the framework. With the advent of technologies such as 3D modelling and VR platforms, the possibilities of multimedia learning expand exponentially. Moreno (2006) expanded Mayer's (2002) framework to include "media such as virtual reality, agent-based, and case-based learning environments" (p. 313) by adding manipulative input on the presentation end and constructing tactile sensory memory in the memory system (see Figure 2). The haptic feature of VR technology allows learners to interact with the virtual world and reinforce information through the third sensory channel, on the basis of the dual channels. In this light, VR-assisted multimodal input can provide learners with auditory narration, visual presentation and tactile interaction, and promote learners' active processing through the triple memory model in the multimodal learning context. However, this model remains vague about how different multimodal inputs enter working memory and construct mental representations by selecting, connecting and organizing information. Therefore, I reconceptualized the framework for the current study by illustrating the working memory part clearly and integrating it with the input hypothesis in SLA. Integrated model of cognitive theory of learning with VR. The current research combined the aforementioned theoretical perspectives and conceptualized an integrated model of cognitive theory of learning with VR. Figure 3 models the detailed learning process in the VR-assisted multimodal learning environment, which extends the breadth and depth of learners' exposure to the target text. This model also provides detailed explanations of cognitive processing in terms of mental representations and constructions. The central concept of this theory taps into the input hypothesis in SLA, the human cognitive processing system and the cognitive load principles in providing three modes of input for effective learning without exceeding working memory capacity. Based on the integrated model, learners first pay attention to auditory, visual and tactile input attributable to VR affordances, then process the multimodal information actively in working memory and mentally organize it into verbal, pictorial and haptic models respectively. Finally, the multimodal text input is integrated with existing knowledge and stored in long-term memory. It is hypothesized that engaging in such cognitive processes in a VR-assisted multimodal learning environment enables learners to construct "a coherent mental representation that integrates the textual information and relevant background knowledge" (van den Broek, 2010, p. 453) within memory capacity, thereby leading to effective learning. Effects of multimodal input on learners' reading performance. Some studies confirmed the modality effect of multimodal input on learners' reading performance, most of which focused on visual and auditory input. Different types of visual input have been shown to be beneficial to learners' reading comprehension. Son (2003) investigated the effects of three different types of text format on learners' comprehension, and the finding showed that a computer-based hypertext format paved the way for greater comprehension than paper-based and computer-based non-hypertext format texts.
According to Pearman and Lefever-Davis (2006), CD-ROM storybooks improved schoolchildren's overall reading comprehension because students could listen to the vivid narration of the story. In addition, some studies focused on the synergy exerted by the combination of visual and verbal input on learning outcomes. Segers and Hulstijn-Hendrikse (2008) investigated the effects of dual input on EFL beginners' cognitive processes in the multimedia learning context, and the result indicated that students who utilized oral presentation with pictures performed better than their counterparts who used written presentation with pictures. However, few studies have confirmed the facilitative effects of VR technology in assisting L2 reading. One example is Dev, Doyle, and Valente's study (2002), which adopted the Orton-Gillingham technique to provide visual, auditory, and kinesthetic multimodal input to assist special-needs children's reading. The findings showed that the multimodal approach helped children improve their reading abilities out of the special level, and the gains were maintained even after two years (Dev et al., 2002). In contrast, some studies were not able to validate the facilitative effects of multimedia on learners' reading performance. According to Rasch & Schnotz (2009), research findings were not able to show that students learned better from text and pictures than from text alone, calling the multimedia principle and the cognitive theory into question. Furthermore, Mangen, Walgermo, and Bronnick (2013) compared the effects of electronic text reading in PDF and paper text reading on tenth graders' reading comprehension in Norway, showing that students who received paper text achieved better reading outcomes than the electronic group. The mixed experimental results regarding the effects of multimodal input on learners' reading comprehension call for further examination. It is noted that the majority of previous studies were limited to providing auditory and visual input; to date there exists a paucity of studies examining the use of VR-assisted multimodal text in the context of L2 reading, and no study has focused on Chinese EFL beginners' expository reading comprehension in a CLIL context. Therefore, this study addresses these research gaps by testing the efficacy of VR-assisted multimodal input on Chinese 8th grade EFL learners' macrostructural and microstructural reading comprehension. Effects of multimodal input on learners' cognitive load. Some studies confirmed the modality effect of multimedia in lowering learners' cognitive load and improving their learning outcomes. Lin & Yu (2012) carried out an experiment via mobile phones in Taiwan, dividing multimodal input into text mode, text-audio mode, text-picture mode and text-audio-picture mode. The results showed that the text-audio-picture mode imposed a lower cognitive load than the other modes, confirming that the modality effect facilitated language learning. Similarly, McClean et al. (2005) argued that animations in the lecture allowed students to process information using the two channels and reduced their cognitive load, thereby improving their retention of biological text. Conversely, redundancy effects were also found in conditions where there was duplicated information, logically unrelated instructional material, or complex content in the multimedia-assisted learning environment (Kalyuga, Chandler, & Sweller, 2000; Sweller et al., 2011). Kalyuga et al.
(1999) found that the use of simultaneously duplicated information generated additional cognitive load, while information presented in only auditory format rendered performance effective. In a similar vein, Liu and Su (2011) found that simulations loaded with multimedia features increased learners' cognitive load, and learners failed to integrate information properly. Learners' and teachers' perceptions towards multimodal input. Recently, the qualitative strand of research in multimedia learning has enriched the field by providing explanations for, or posing challenges to, quantitative experimental findings. Along these lines, it is worth mentioning that some studies (Ayres, 2002; Heller, 2005; Neo, 2009; Stepp-Greany, 2002; Wiebe & Kabata, 2010) indicated that learners held positive attitudes towards the integration of multimedia with language learning. Neo (2009) investigated Malaysian students' perceptions in a multimedia project, showing students' positive attitudes with respect to their language learning motivation and teamwork abilities. Nair (2012) applied VR technology in an experiment and found that learners held positive attitudes towards its usefulness as a learning tool. As for teachers' perceptions, Al-Seghayer (2016) assessed English instructors' perceptions of the effectiveness of electronic text on learners' L2 reading performance. Results showed that instructors held positive attitudes towards electronic text because it improved accessibility and readers' interaction with the text, and stimulated learners' interest in reading. However, few studies inquired into both learners' and instructors' perceptions of multimodal input, and even fewer revealed negative comments on multimodal learning. Thus, this study is of conceptual value in capturing both learners' and teachers' positive and negative comments on multimodal input in the context of L2 expository reading in a comprehensive manner. To sum up, research so far has yielded conflicting findings regarding the efficacy of multimodal input on learners' reading comprehension, especially when multimedia tools were directly compared with the traditional print medium. This points to the need for further investigation of how multimodal input affects learners' reading comprehension from multiple perspectives. The present study distinguished itself from previous studies in four aspects. Firstly, multiple studies have investigated different multimedia tools such as pictures, audio and video to facilitate learners' L2 acquisition, while VR technology has not been fully explored in language instruction, especially in the expository reading aspect and in the Chinese educational setting. Secondly, most studies assessed learners' overall reading performance without examining different aspects. This study divided reading comprehension into microstructural and macrostructural levels and presented a detailed understanding of multimodal input's role in assisting learners' two levels of text processing. Thirdly, this study was the first attempt to examine the effects of VR-assisted multimodal input on learners' reported levels of cognitive load, including mental load and mental effort, and to evaluate the effectiveness of multimodal input on L2 reading from the perspective of cognitive load.
Lastly, this study probed into learners' and teachers' subjective cognition through semi-structured interviews and learning journals, besides objective performance, to provide an extensive and intensive understanding of the efficacy of multimodal input in enhancing Chinese EFL learners' reading comprehension. Research questions. Based on the integrated framework of cognitive theory of learning with VR, this study attempted to answer the following questions: 1. What are the effects of input modalities (VR-assisted multimodal text, video-assisted multimodal text, print-based monomodal text) on Chinese EFL learners' reading performance? 2. What are the effects of input modalities (VR-assisted multimodal text, video-assisted multimodal text, print-based monomodal text) on Chinese EFL learners' cognitive load? 3. What are learners' and the teacher's perceptions of multimodal text in assisting reading comprehension? Research Design. To address the three research questions, this study adopted a mixed methods research methodology under the guidance of the pragmatist paradigm. The mixed methods research design is briefly summarized in Table 1 in relation to the three research questions. Research paradigm. This study is situated in the pragmatist paradigm, which "is not committed to any one system of philosophy or reality but focuses on the 'what' and 'how' of the research problem" (Creswell, 2003, p. 11). Pragmatism allows independent collection of quantitative and qualitative data and the integration of the two strands at the stage of interpretation and inference (Tashakkori & Teddlie, 1998). As shown in Figure 4, the study utilized a concurrent triangulation strategy by collecting quantitative and qualitative data respectively and synthesizing the findings at the interpretation stage. This strategy is an optimal approach because it takes less time to collect both strands of data in comparison to the sequential method. This study adopted a quasi-experimental research design to fully gauge the efficacy of VR-assisted multimodal text input on learners' reading performance. The study tackled the validity issue by selecting three classes at a similar level of average academic performance and language proficiency, since the target school has streamed students into three levels (above average, average and below average) based on their academic performance. It is also noted that the period of treatment was short, and students' performance could be atypical and unnatural under experimental conditions. Thus, the current research also collected a qualitative strand of data outside the class setting, which was conducive to providing a comprehensive understanding of the effects of multimodal input on Chinese EFL learners' reading comprehension. Research site and access. The research site was a middle school located in Jiangxi Province, China. The provincial government and the Ministry of Industry and Information Technology in Nanchang, Jiangxi Province, are trying to take the lead in building the city into a world-class VR center and provide support for schools to implement the advanced technology. The research site was the first school to apply VR technology in secondary education and to build a VR lab for instructional use. Moreover, this school has incorporated VR lessons in the curriculum to teach biology, geography and history classes in the VR lab on a weekly basis, whereas the English subject has not been incorporated into VR lessons yet.
Participants and sampling. The sample in this study consisted of 8th grade Chinese L1-English L2 learners in the target school. A total of 137 students in grade 8 participated in the study, while six students were excluded from the sample because they did not finish either the immediate post-test or the delayed post-test. As shown in Table 2, the sample was comprised of three classes (N = 131), with 42 students in experimental group A, 46 students in experimental group B, and 43 students in control group C. In the sample, males (n = 64) represented a smaller proportion than females (n = 73). Participants' ages ranged from 12 to 15, with a mean of 14 years (SD = .536). Chinese was their first language, and 96% of the students had learned English for more than five years. A total of 62 (45.3%) participants found reading the most difficult aspect of learning English, especially expository text, due to technical vocabulary and unfamiliar content. In terms of multimedia, teachers often use an interactive whiteboard to present PowerPoint slides or play videos in class, and 72.3% of participants evaluated it as helpful for understanding. There were no significant differences found across the three experimental groups with respect to gender, age or time of learning English. In addition, another important participant is Ms Li, the English teacher of the three classes, who has over 12 years of teaching experience, a master's degree in English education and previous experience of incorporating multimedia tools in English teaching; she also knows how to operate the research apparatus. Ms Li followed the whole experiment and shared her own insights in an individual interview. This study adopted a non-probability purposive sampling technique due to the low feasibility of drawing a random sample from all 8th grade EFL learners in the target school. To address the threat to research validity, three classes at the same level of academic performance were selected and assigned to two experimental groups and one control group. Experimental group A read VR-assisted multimodal text with visual, auditory and tactile input; experimental group B read video-assisted multimodal text with visual and auditory input; and control group C read print-based monomodal text with visual input only. Since the multimodal reading sessions were designed to be learner-centered, Ms Li only led the reading activity with minimal involvement, thereby avoiding interference with learners' reading in the intervention. Research apparatus and treatment materials. The research apparatus used in this study is the zSpace all-in-one computer. zSpace is an interactive hardware and software platform where learners can listen to, watch and feel multimodal input in a way that cannot be achieved in a conventional computer environment. As shown in Figure 5, it mainly consists of three components: an all-in-one computer with a 24-inch 3D stereoscopic display, three pairs of polarized glasses, and a laser-based interactive stylus. The screen has built-in tracking sensors to trace the viewing angles of readers, and it is installed on a stand which tilts it up around 30° for learners to observe. The stylus pen possesses three buttons, one for learners to select the objects shown on the screen, and the other two for learners to zoom in and out to observe the 3D model in full view. The three components can be activated simultaneously to situate readers in an immersive and interactive learning environment.
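As a quick illustration of the baseline-equivalence checks mentioned in the sampling subsection above (no significant group differences in age or gender), a minimal sketch with SciPy might look as follows; the arrays and counts here are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder ages for groups A (n=42), B (n=46), C (n=43); the real values
# come from the demographic questionnaire.
ages = [rng.normal(14, 0.5, n) for n in (42, 46, 43)]
F, p_age = stats.f_oneway(*ages)  # one-way ANOVA across the three groups

# Placeholder male/female counts per group for a chi-square test of independence.
gender = np.array([[20, 22],
                   [22, 24],
                   [22, 21]])
chi2, p_gender, dof, _ = stats.chi2_contingency(gender)
print(f"age: p = {p_age:.3f}; gender: p = {p_gender:.3f}")  # p > .05 suggests comparable groups
```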
This study used an all-in-one VR computer rather than common VR headsets because it allowed students to interact with peers and teachers in the classroom setting rather than becoming completely absorbed in the virtual world. zSpace integrates visual, auditory and tactile elements and offers learners multimodal input. One reading topic, 'water's journey', demonstrates how zSpace was applied in practice. For experimental group A, the VR-assisted multimodal text was composed of visual input, which showed a 3D animation of the water cycle and content-related pictures along with words; auditory input, which narrated the digital content on the screen with acoustic effects such as raindrops and boiling water; and haptic input, which allowed learners to feel the water flow as if they were experiencing it outside the device. For experimental group B, the video-assisted multimodal text was made up of visual and auditory input without haptic feedback. Participants watched video clips presenting the same textual information of the reading topic, with English subtitles as digital text and a cartoon character's narration and sound effects as auditory input, in a classroom equipped with a projector and computer. Students in control group C only received the visual input of print-based text. A glossary of new words was given to all three groups.

The reading materials used in the treatment were extracted from past test papers of the senior high school entrance examination. The justification for choosing these texts is twofold. Firstly, the past exam papers are widely used in 9th grade as sample tests, and students in 8th grade generally do not have access to them, which ensured that participants were at the same baseline without prior knowledge of the reading tasks. Although some reading texts may be demanding for 8th grade students, Chinese annotations of new words in the text were provided to lower the level of difficulty in accordance with students' level of competence. Secondly, passages used in this authoritative examination were carefully selected and reviewed by experienced English teachers and the Ministry of Education; thus, the validity of using these tasks to assess learners' reading comprehension was assured. The selection of treatment materials also took into consideration the availability of the same content on both VR and video platforms. As a result, I prepared six expository texts in English of comparable length (200-250 words), general science topics (e.g. water's journey, butterfly's lifecycle, frog's lifecycle) and similar structure, with four to five paragraphs, the last one being a summary of the main idea. Two of these texts were used in the pre-test, two in the immediate post-test and two in the delayed post-test.

Data collection

The study mainly used reading tests to obtain objective knowledge of learners' reading performance. The reading tests were formatted as multiple-choice and blank-filling questions to evaluate learners' reading performance in an objective manner and minimize the potential threat of subjective grading to research validity. In each reading task, there were five questions in total, with three questions testing microstructural understanding and two examining macrostructural understanding. Microstructural understanding was captured by readers' ability to correctly answer questions based on explicitly stated details in the text (e.g.
the correct explanation of the butterfly's metamorphosis), whereas macrostructural understanding was assessed by questions on text summary and implications (e.g. the possible effect if water is contaminated at the transmission stage). Students completed reading tests three times: a pre-test, an immediate post-test and a delayed post-test. Prior to the intervention, students completed a pre-test consisting of two paper-based expository texts used in the intervention and a total of ten questions, and the average score of each component formed the baseline data on participants' expository reading ability at the macrostructural and microstructural levels. In addition, the pre-test assessed learners' prior knowledge of the reading materials, and the variable of prior knowledge was controlled by removing learners who had sufficient domain-specific knowledge before the intervention. After receiving the different modes of input in the intervention, participants were required to complete the immediate reading test. The average score of the two post-tests following the reading sessions was regarded as the immediate post-test result. The delayed post-test was administered two weeks after the intervention, and students had to finish the test without the aid of multimodal text. The delayed post-test scores reflected learners' retention of textual information and were compared with pre-test and immediate post-test scores to examine the long-term effects of the multimodal reading treatment. All marking was completed blind to group membership by English teachers with no prior knowledge of the experiment, to avoid any bias towards one group or the other.

This study utilized the survey instrument in two ways. Firstly, prior to the treatment, a demographic questionnaire (see Appendix 1) was administered to get a snapshot of participants' background information and give me a general understanding of the sample. The participants' profile showed that there were no significant differences among the three groups with respect to age, gender, time spent learning English and attitudes towards multimedia learning. Secondly, participants were required to complete a cognitive load questionnaire after each reading session to report the mental load and mental effort they invested in reading the expository text. This cognitive load scale has been widely used in the literature, and this study adapted it from the measures of Paas (1992), Sweller, van Merriënboer, and Paas (1998) and Hwang, Yang, and Wang (2013). The questionnaire consisted of eight items across the mental effort and mental load dimensions, rated on a five-point Likert scale (see Appendix 2). Cronbach's alpha was computed to verify the reliability of this instrument. Cognitive load ratings were evaluated on a group basis and compared across the three input conditions to examine the interrelation between input modality and cognitive load.
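To make this reliability check concrete, the following is a minimal sketch of how Cronbach's alpha can be computed for an eight-item, five-point Likert scale. The response matrix is randomly generated purely for illustration and is not the study's data (real responses would be correlated across items, yielding a much higher alpha).

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of Likert ratings
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(131, 8))  # hypothetical: 131 respondents, 8 items
    print(round(cronbach_alpha(ratings), 2))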
This study investigated learners' perceptions of the efficacy of multimodal input extensively through learning journals. The learning journal is a common approach to collecting data in qualitative research (Janesick, 1999) and is considered an effective way to obtain learners' perceptions (Cohen, Manion, & Morrison, 2007). In this study, participants were asked to write a learning journal after each session to provide narrative accounts of their perceptions of multimodal text as part of their learning experience. The journal topics were based on the coding scheme regarding perceptions of different modes of input and overall reading experiences with multimodal text. The purpose of the journals was to gain a contextual understanding of participants' experiences of reading expository texts with multimodal input. Additionally, collecting every participant's journal entries widened the amount of qualitative data beyond the in-depth interviews, and it gave students who preferred writing to talking an alternative way of sharing their thoughts and attitudes towards multimodal input in the reading intervention.

This study also approached the teacher's and learners' perceptions through focus group and individual interviews. The focus group interview method brings a group of 6-8 people together to discuss a shared experience (Creswell, 2003). In the current research, the focus group interview was conducted with six students from the experimental groups in a semi-structured way. Each group interview lasted around 20 minutes, and all interviews were audio-recorded and transcribed for analysis with the interviewees' permission. In addition, I conducted an individual interview with Ms Li after the treatment to explore how an experienced English teacher perceived the effects of multimodal text on students' L2 reading comprehension. Thus, both insider and outsider perspectives on the efficacy were gained by conducting interviews in depth. The interview questions (see Appendix 3) were checked by two English teachers and applied in the pilot study to ensure the reliability of the instrument. Interviews were conducted in the participants' L1, Chinese, according to their own preferences, so that interviewees could share opinions at ease and the reliability of the qualitative findings could be strengthened. Throughout the data collection process, all participants remained anonymous. The data collected by the three instruments were triangulated to test the efficacy of multimodal input on Chinese 8th grade EFL learners' reading comprehension.

Research procedure

The entire research procedure can be divided into three stages: pre-intervention, reading intervention and post-intervention. In the first phase, I introduced the empirical study to participants and obtained their consent to participate by having them sign the consent form (see Appendix 4). In addition, orientations were held to familiarize the experimental groups with the treatment procedures and the use of the research apparatus. Baseline data and background information were obtained by having the three groups of students finish the pre-test and the demographic questionnaire. One week prior to the intervention, I conducted a pilot study to test the reliability of the data collection instruments. Five students from each group were invited to participate in the pilot study and contributed valuable suggestions. Based on their feedback, I made three major alterations to the original plan. Firstly, three texts had been selected for the intervention, but pilot participants found one of them too difficult to understand even with the help of multimedia; this text was therefore removed and the three planned sessions were reduced to two. Secondly, half of the pilot participants found some new words without annotations that might affect their understanding. Thus, after checking with the teacher, new words in the text were annotated in Chinese and a word list was given in the intervention.
Thirdly, participants in experimental group A complained that the text on the screen was too small to read and focus on. Since this was a technical problem and there was no way to enlarge the text box, it was solved by providing a paper copy of the text in all groups.

In the second phase, the reading intervention began in the target school. The treatment was offered in two reading sessions within two weeks. The treatment procedure (see Figure 6) was mainly divided into pre-reading, reading and post-reading. Prior to reading, the teacher introduced the reading task and topic for 5 minutes, and students were then given 25 minutes to read the text and finish a collaborative reading task based on the given materials. During the reading sessions, the teacher took a facilitative role, observing learners' reading and addressing their questions when needed. Afterwards, the teacher gave corrective feedback on the collaborative task, which helped strengthen students' memorization of the text. The reading session utilized a collaborative reading task because of the limited number of research apparatus in the lab: three students had to share one zSpace computer as a group. The same instructional design was applied to group B and group C to ensure the consistency of the treatment.

In the third stage, all materials were collected and the immediate post-test was administered. After finishing the test, students completed the cognitive load questionnaire. After each session, students were asked to write a learning journal based on the reading experience. Six students from the two experimental groups were invited on a voluntary basis to participate in a semi-structured interview on the same day, during which they were encouraged to describe their multimodal reading experience, reflect on the usefulness of multimodal text in comparison to their usual reading practice, and explain how they applied the received multimodal input to answer test questions. Two weeks later, a delayed post-test was administered to the three groups to evaluate the retention effect of multimodal input. Table 3 presents a summary of the data collection procedure.

Reliability and validity

Since reliability and validity are of paramount importance to research findings, the current research invested effort in addressing the potential threats to reliability and validity in both the quantitative and qualitative approaches. Reliability refers to the consistency and replicability of research findings over time (Nunan, 1992). For the quantitative strand of data, reliability concerns the instruments used to measure the effects of multimodal input, namely the reading tests and the cognitive load questionnaire. As for the reading tests, the passages and questions utilized in the treatment were selected from authoritative test papers and checked by two English teachers to ensure that the questions in each reading test could identify macrostructural and microstructural reading comprehension. Moreover, the internal consistency of the cognitive load scale was supported by the Cronbach's alpha value (α = .87), indicating satisfactory reliability of the items. The implementation of the pilot study also reinforced the reliability of the reading tests and cognitive load scale through modifications made in line with learners' English proficiency. For the qualitative strand, several approaches were applied to rule out potential threats to reliability.
Firstly, as for the interview instrument, I prepared open-ended questions to elicit learners' recall of the multimodal reading experience and avoided giving personal opinions, lest participants change their opinions in response to others' answers in a group interview (Creswell, 2006). In addition, participants were given the right to choose the interview language freely, and all of them chose their L1, Chinese, so that they could share opinions at ease without worrying about making grammatical mistakes. In this regard, in-depth and faithful information could be obtained (Bauer, 2000). In terms of journal entries, three leading questions were provided to help learners reflect on the multimodal experience and clarify their individual cognition (Moon, 1999). The journal entries were not assessed or rated against a writing rubric but regarded as a means of understanding all participants' perceptions of the efficacy of multimodal input; they were quantified to generate a coding pattern at the interpretation stage. Peer examination of the categorical matrix was adopted to enhance its reliability. Thus, the reliability of both the quantitative and qualitative data collection and analysis was assured.

Validity means "how appropriately and precisely an operationalization matches a construct's theoretical definition" (Mackey & Gass, 2011, p. 204). This study invested effort in establishing its internal and external validity. For internal validity, the soundness of the research design and measuring instruments holds great importance. The potential threat generated by the lack of random sampling in the quasi-experimental design was addressed by careful selection of three classes with similar average academic backgrounds. It is noted that test-based assessment in multiple-choice format might be criticized because it stands on the behaviorist side, using a relatively simple approach to measure learning outcomes that cannot capture learners' higher-order thinking (Blikstein and Worsley, 2016). However, it is extensively used in educational research due to its high level of objectivity, and the validity of test-based assessment can be strengthened by triangulating data from questionnaires, learning journals and interviews. As for the interview instrument, interviewees' verification and feedback were obtained to ensure respondent validation.

External validity stresses the generalizability and applicability of research findings to a wider population and other learning contexts (Nunan, 1992). Random sampling is the key to generalizing findings to a wider population. However, it was not practical to select the target sample from different schools all around China. Thus, I selected three classes with average academic performance and language proficiency in Grade 8 of the target school, because they shared similar characteristics with the wider population of Chinese EFL beginners. Although this study focused on the efficacy of multimodal input on the reading aspect of SLA, the research findings and the expanded conceptual framework can shed light on more multimodal learning scenarios, so some generalizability of the findings can be achieved.

Ethical considerations

The empirical study strictly followed the Ethical Guidelines for Educational Research (BERA, 2018) throughout the entire research process, from designing the research to conducting fieldwork to reporting findings.
Quantitative data analysis and statistical results

The quantitative datasets, including the demographic questionnaires, the three test scores and the cognitive load ratings, were entered into the Statistical Package for the Social Sciences (SPSS) version 24.0 to derive descriptive and inferential statistics. The numerical data were analyzed on a group basis to capture the overall pattern rather than individual performance. This study set the level of significance at .05 as the criterion for statistical significance, since an alpha level below .05 is regarded as statistically significant in most educational research.

Effects of different input modalities on learners' reading performance

A 3×3 repeated measures MANOVA was conducted to determine the effect of the three input modalities (VR-assisted multimodal text, video-assisted multimodal text, print-based monomodal text) on learners' reading comprehension performance, which was divided into overall, macrostructural and microstructural comprehension, at three times of testing. There were two independent variables: time of assessment as the within-subjects variable and input modality as the between-subjects variable. The pre-test score was used as a covariate to exclude any interference from learners' prior knowledge. Before performing statistical tests, the assumptions of homogeneity of variance, sphericity and normality were validated. The justification for using MANOVA was twofold. Firstly, the different levels of reading comprehension act as multiple continuous dependent variables, and using MANOVA instead of a series of one-at-a-time ANOVAs reduces the experiment-wise level of Type I error without rejecting true but weak null hypotheses. Additionally, MANOVA can test whether the relationships among the variables change over the intervention and reveal differences not discovered by separate ANOVA tests.

Table 4 shows the relevant descriptive statistics, including the number of participants per group (N), reading test mean scores (M) and standard deviations (SD). There were negligible differences in learners' pre-test scores, showing that students were at the same baseline level of expository reading ability. In the immediate post-test, participants in the VR group scored highest on overall (M = 3.1190, SD = .99271), macrostructural (M = 1.3810, SD = .66083) and microstructural reading comprehension (M = 1.7381, SD = .58683). In the delayed post-test, participants who read the VR-assisted multimodal text also performed best on overall (M = 2.7857, SD = .81258), macrostructural (M = 1.1429, SD = .41739) and microstructural reading comprehension (M = 1.6429, SD = .72655). Participants who received the monomodal print-based text obtained the lowest scores in both the immediate and delayed post-tests. The comparison showed that all groups scored higher in the immediate post-test than in the pre-test, suggesting that the treatment improved students' reading performance under all conditions. Notably, experimental group A, with the assistance of VR, achieved higher scores at all three levels of reading comprehension in both post-tests than the other two groups. Despite a slight decrease, the effect was maintained by all three groups at the time of the delayed post-test two weeks later.
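As a side note, the following is a minimal sketch (in Python, using simulated rather than the study's data) of a simplified one-way MANOVA on two immediate post-test subscores; it illustrates the kind of multivariate test reported below, not the full 3×3 repeated measures design run in SPSS.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "group": np.repeat(["VR", "video", "print"], [42, 46, 43]),
        "macro": rng.normal(1.2, 0.6, 131),  # macrostructural subscore (simulated)
        "micro": rng.normal(1.6, 0.6, 131),  # microstructural subscore (simulated)
    })
    fit = MANOVA.from_formula("macro + micro ~ group", data=df)
    print(fit.mv_test())  # reports Wilks' lambda, Pillai's trace, etc. for the group effect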
The results of the MANOVA revealed that the main effect of time within subjects was significant, F(4, 125) = 15.407, p < .01, Wilks' Λ = .670, partial η² = .330. The main effect of input modality between subjects was also significant, F(4, 254) = 2.738, p = .029 < .05, Wilks' Λ = .919, partial η² = .041. However, there was no statistically significant time × modality interaction, F(8, 250) = .833, p = .575 > .05, Wilks' Λ = .949, partial η² = .026. One-way ANOVAs were computed to further examine the main effect of modality on short-term and long-term reading performance at the overall, macrostructural and microstructural levels between groups, and Tukey's post hoc pairwise comparisons were used to identify significant differences between groups. The results showed that at the time of the immediate test, differences existed between the experimental groups and the control group, while there was no significant delayed effect of input modality on learners' reading comprehension. In terms of overall reading comprehension, there was a significant difference in total reading test score between the VR group and the paper group, p = .024 < .05. As for macrostructural reading comprehension, there were significant differences between the VR group and the paper group, p = .033 < .05, and between the video group and the paper group, p = .047 < .05. In terms of microstructural reading comprehension, no statistically significant difference was found.

To answer the first research question, the results indicated that input modality had an immediate effect on overall reading comprehension between group A (VR-assisted multimodal text) and group C (print-based monomodal text). Moreover, input modality had an immediate effect on macrostructural reading comprehension between the multimodal text groups and the monomodal text group. Input modalities did not have differential immediate effects on learners' microstructural reading comprehension, and there was no significant delayed effect of input modality on any aspect of reading comprehension.
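The follow-up comparison step can be sketched as follows, again in Python with simulated scores whose group means loosely echo Table 4; this is an illustration of a one-way ANOVA with Tukey's HSD, not a reproduction of the study's SPSS analysis.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(2)
    vr = rng.normal(3.1, 1.0, 42)     # VR-assisted multimodal text group (simulated)
    video = rng.normal(2.9, 1.0, 46)  # video-assisted multimodal text group (simulated)
    paper = rng.normal(2.5, 1.0, 43)  # print-based monomodal text group (simulated)

    print(stats.f_oneway(vr, video, paper))  # omnibus one-way ANOVA
    scores = np.concatenate([vr, video, paper])
    groups = ["VR"] * 42 + ["video"] * 46 + ["paper"] * 43
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))  # pairwise comparisons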
Effects of different input modalities on learners' cognitive load

To answer the second research question, the cognitive load scale was used to assess learners' mental load and mental effort after each session. This study employed Cronbach's α to test the internal consistency of the cognitive load scale, and the value (α = .87) exceeded .80, demonstrating satisfactory reliability of the items. A one-way ANOVA was performed to compare learners' cognitive load ratings under the three input conditions and examine the effects of multimodal input on learners' mental load and mental effort. As shown in Table 5, the means and standard deviations of the cognitive load ratings were 2.5274 and .57995 for experimental group A with VR-assisted multimodal text, 2.5671 and .62256 for experimental group B with video-assisted multimodal text, and 2.5843 and .72154 for control group C with paper text. There were slight differences in students' cognitive load ratings between groups, with the control group using paper text showing the highest mean. The study further compared the two components of cognitive load: mental load and mental effort. For the mental load dimension, the means and standard deviations were 2.1280 and .78287 for experimental group A, 2.2195 and .69865 for experimental group B, and 2.1570 and .87970 for control group C, indicating that the video-assisted multimodal text imposed a higher mental load on learners than the other input modes. As for the mental effort dimension, the means and standard deviations were 2.9268 and .68293 for experimental group A, 2.9146 and .64374 for experimental group B, and 3.0116 and .72159 for control group C; hence, the control group allocated the most cognitive capacity to reading the paper text compared with the experimental groups. The results of the one-way ANOVA shown in Table 6 indicated that there were no statistically significant effects of input modality on overall cognitive load, mental load or mental effort (p = .918 > .05; p = .867 > .05; p = .777 > .05). This means that participants in the three groups had similar levels of cognitive load after the treatment, and multimodal text did not increase germane load or decrease extraneous load compared with monomodal text input.

To sum up, the quantitative findings indicated that the VR-assisted multimodal text group achieved better overall reading performance than the other two groups, and the multimodal text groups attained better macrostructural reading comprehension than the monomodal text group in the short term, though no statistically significant relationship between input modality and cognitive load was found. This suggests that VR-assisted multimodal text played an important role in fostering L2 learners' overall and macrostructural comprehension in the short term without incurring extraneous cognitive load.

Qualitative data analysis and interpretations

For the qualitative data, content analysis (Garrison, 2006) was performed to probe participants' reading experience and perceptions by drawing on the conceptual framework. The current research utilized content analysis because it combines quantitative (Krippendorff, 2004) and qualitative approaches (Berg, 2001) in alignment with the pragmatist paradigm, and it can be used inductively or deductively. Another reason for performing content analysis is that it is a particularly useful approach for classifying, summarizing, quantifying and tabulating qualitative data prior to detailed explanation. This study used a hybrid process of inductive and deductive approaches to analyze the qualitative data, incorporating both the data-driven inductive approach (Boyatzis, 1998) and the framework-informed deductive approach (Crabtree and Miller, 1999). The qualitative data analysis was twofold: I first used the inductive approach to generate data-driven codes, then applied the deductive approach to generate theory-driven codes, and the two strands of codes were aligned in a systematic way to illustrate learners' and the teacher's perceptions of multimodal input. The further analysis began with quantification of the qualitative data, using frequency and percentage to show the magnitude of individual phenomena (Berg, 2001; Morgan, 1993) and reflect the overall tendency; each coding category was then enriched by in-depth narration. Following this methodology, I first familiarized myself with the qualitative data extracted from the written feedback and narrative accounts in the reflective journals and interviews. The transcription of all interview data was first done by voice-recognition software, and I then checked whether all the information had been transcribed accurately by listening to the recordings again. Afterwards, I translated the interview transcripts from Chinese to English, read through the translated data and obtained a general understanding of the whole pattern.
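As an illustration of the quantification step described above, the following is a minimal sketch of tallying coded comments into frequencies and percentages; the code labels and counts here are made up for demonstration and are not the study's data.

    from collections import Counter

    # Hypothetical list of codes assigned to journal/interview comments
    coded_comments = ["animation", "graphic", "animation", "complexity",
                      "interactivity", "animation", "health concern"]
    counts = Counter(coded_comments)
    total = len(coded_comments)
    for code, freq in counts.most_common():
        print(f"{code}: {freq} ({freq / total:.1%} of coded comments)")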
The text segments that clearly answered the leading questions were highlighted and coded as positive or negative comments, and similar comments were further categorized into the specific aspect of multimodal input that assisted or impeded learners' reading. Then, in the categorization process, I drew on the conceptual framework to aggregate codes with similar meanings into a categorical matrix. The categorical matrix was built on the multimodal input assisted by VR technology in three sensory modalities: graphics and animation (visual), narration and sound effects (auditory), and interactivity and manipulation (tactile). After the units of analysis had been identified, I re-read the original text, especially the unmarked text, to make sure that all text segments related to the categorical matrix had been covered (Burnard, 1991). Finally, I summarized the frequency and percentage of participants' comments and presented them in tables for comparison. Participants' perceptions of VR-assisted multimodal text and video-assisted multimodal text were discussed separately to provide a holistic understanding of the multimodal text utilized in this study.

Learners' and the teacher's perceptions of VR-assisted multimodal text

The analysis of the reflective journals and semi-structured interviews showed that participants' perceptions of VR-assisted multimodal text were mostly positive, particularly in terms of visual input and tactile input. Nevertheless, some participants also made negative comments about the multimodal reading experience regarding time management, complexity of information and the lack of equipment for reading the multimodal text effectively. Table 7 summarizes the categorical matrix of learners' perceptions of VR-assisted multimodal text. Affordances refer to what the multimedia makes possible, while constraints refer to the negative aspects of the multimedia tool that may affect learners' reading comprehension.

Table 7. Learners' perceptions of VR-assisted multimodal text (% of participants; representative comments)

Visual input
  Affordance: Animation (50%). "VR displayed the animated content vividly, and gives us a sense of immersion."
  Constraint: Complexity (26%). "The visual content was complex, and it was difficult to find all the details."
  Affordance: Graphic (43%). "Some pictures presented the cycle clearly and we can observe things intuitively without imagining it in our mind."
  Constraint: Health concern (14%). "It hurt my eyes and made me feel dizzy when reading for long time."

Tactile input
  Affordance: Interactivity (45%). "I can interact with objects in three dimensions to learn more, such as how the chrysalis looks like."
  Constraint: Distraction (17%). "We may focus on playing with 3D models rather than reading the text."
  Affordance: Manipulative (31%). "It gave me a feeling of control, so I can learn at my own pace."
  Constraint: Limited operators (7%). "I was not the operator, so I didn't feel the interaction with butterfly."

In terms of auditory input, more than one fifth of the students found that the narration and sound effects aided their reading comprehension. In the VR-assisted reading treatment, students could hear the recording of the text to learn the pronunciation of target words and had the opportunity to read along with the recording. This made it easier for students to connect the sound with the word and remember it when completing the immediate post-test. Sound effects such as water flowing into the river and a caterpillar eating leaves gave learners a vivid feeling of presence. However, some students commented negatively on the time cost and the fast speed of the audio recording, which represent areas for improvement in the VR-assisted multimodal reading scenario.
One student shared her opinion regarding the auditory input in the focus group interview:

Sara: The sound effects of the reading materials are vivid and attractive, but I don't think it's necessary for our reading because it takes a long time to listen to the recording. Also, my classmate wants to hear some paragraphs twice but I think once is enough since it's a reading task, not a listening task. It would be more efficient and effective if every student were given a pair of earphones so they could listen to whatever they want as many times as they like.

When asked what they liked most about the VR-assisted multimodal text, half of the participants responded that they found the animation of the text content most interesting, such as the growth of the butterfly from caterpillar to chrysalis. This sensory stimulation engaged learners in watching the animation and reading the text. In addition, graphics provided static diagrams to facilitate students' overall understanding of the expository text. In other words, visual cues provided participants with quick information that could be directly perceived by watching the screen, unlike many non-visual cues, such as the sound effects of target objects, which need to be learned from students' prior knowledge and other information sources. One participant expressed her affection for the visual input as follows:

Anne: My favorite thing about the VR reading task is the animation that displays the growth of the butterfly vividly, and I don't have to imagine it in my mind because watching the animation is sufficient for me to identify different stages of the butterfly's lifecycle. It is said that one image is worth more than a thousand words. However, after the VR-assisted reading session, I think one animation is worth more than a thousand images, because an animation is like a thousand pictures displayed at high speed in a series.

Nevertheless, some participants complained about its complexity, which may be explained by the richness of the visual input imposing a relatively high level of mental load on learners. In addition, around a quarter of the students mentioned the motion sickness and sore eyes they had experienced in the reading process. In a similar vein, Ms Li addressed the health concern from the perspective of a recently published educational policy:

Ms Li: Recently a new policy has been introduced in school to limit students' exposure to electronic products, such as mobile phones, computers and tablets. The VR apparatus, though not mentioned in the policy, is still a kind of computer that may pose detrimental effects on learners' health both physically and psychologically. Students may become addicted to it and short-sighted easily. Therefore, it is not frequently used in daily courses and we must be very careful when using it in class.

Tactile input is a unique aspect of VR-assisted multimodal text, giving learners a sense of touch operated via the stylus pen in the air. More than one third of the participants stated that they found the VR-assisted multimodal input helpful because they could interact with the 3D graphics and control their pace of learning. This shows that VR-assisted multimodal input can be tailored to individual needs and interests. One student shared a related experience in the interview:

Charles: It's amazing! I can drag the butterfly out of the screen and observe it closely by turning it around 360 degrees. You know, in real life, when you get close to a butterfly, it will fly away immediately and you can't observe it closely.
However, I can catch a butterfly from the screen by using the stylus pen and it won't fly away. I just feel that I can control everything in the virtual world.

Though many students commented that it was a positive sensory experience, fewer than one fifth of the students were not fully satisfied with it due to distraction and a shortage of equipment. Some students admitted that they spent most of the time playing with the apparatus rather than reading the text. Ms Li, in the individual interview, also mentioned this constraint based on her observation in class:

Ms Li: Due to the high price of the research apparatus, we can only afford a limited number in the VR lab. It is not possible for each student to use one VR apparatus, so group work is necessary in class. I noticed that some group members, if not sitting in the middle to operate the apparatus, sometimes engaged in other irrelevant activities. It is difficult for me to supervise 9 groups of students simultaneously, and the effective implementation of VR-assisted lessons largely depends on their self-discipline. I think tactile input is a key feature of VR, but it needs to be utilized more effectively by students in class. It may take some time because students are currently more interested in the instrument itself than in the knowledge presented.

In summary, participants found reading VR-assisted multimodal text interesting and helpful because the narration and sound effects from auditory input, the animation and graphics from visual input, and the interaction and manipulation from tactile input facilitated their understanding of the expository texts. Despite participants' generally positive attitude, several problems concerning the three modes of input, such as time cost, fast speed and distraction, were pointed out by some students and the teacher, which may help to explain why there was no significant effect of VR-assisted multimodal text on learners' retention after two weeks.

Learners' and the teacher's perceptions of video-assisted multimodal text

Compared with VR-assisted multimodal text, video-assisted multimodal text relies on auditory and visual stimuli. The same categorical matrix for auditory and visual input was applied to analyze participants' perceptions of video-assisted multimodal text (see Table 8).

Table 8. Learners' perceptions of video-assisted multimodal text (% of participants; representative comments)

Auditory input
  Affordance: Narration (28%). "I like watching the video because the character explained the water cycle clearly."
  Constraint: Time cost (6%). "Sometimes I want to switch it back and listen to one part but it took me a lot of time to start it again."
  Affordance: Sound effect (11%). "The sound of butterfly's growth was very vivid, and it helped me remember the process."
  Constraint: Speed (48%). "The character in the video talked too fast, and I could not follow and take notes."

Visual input
  Affordance: Animation (30%). "The video showed the water cycle in four animated steps, and I can remember how water changes."
  Constraint: Complexity (15%). "The video content was a little complicated, and I found it hard to understand in English."
  Affordance: Graphic (4%). "I like the last scene of the video, because it summarized all the information in the text and helped me answer questions."
  Constraint: Health concern (0%). No comment.

As for auditory input, a number of participants found watching the video interesting because of the vivid narration and sound effects. A cartoon character explained each stage of the expository text clearly, and students regarded the character as a peer to learn from. Background music and sound effects also engaged students in reading the multimodal text.
However, nearly half of the students claimed that the narration was too fast and that they could not slow the video down. In addition, it was difficult to skip back to a particular part of the video, and it took a long time to watch it from the beginning again. Ms Li shared a similar opinion of the video-assisted multimodal text:

Ms Li: I often use video in class as an introduction, aiming to stimulate students' interest rather than give them a task. Therefore, when students need to answer questions based on the video, they may pay more attention to what it says and find that they can get a general idea, but it is too fast for them to write down notes in detail. Students like the narrator probably because it is a popular cartoon character, and they are more focused when listening to it than when listening to me.

Regarding visual input, participants found that the colorful, animated display in the video could be a great aid to reading comprehension. The majority of the information was presented as animation, with a summary figure at the end of the video. 30% of the students found the animation helpful because it illustrated the whole process of the butterfly's growth and the water's journey vividly and coherently. However, 15% of the students felt overwhelmed because the video contained too much information and the subtitles were in English rather than Chinese. One student described her confusion in detail:

Lara: I think the video is interesting and visually attractive, but I still find it hard to understand because the cartoon character talks in English and the subtitles are in English, and it takes me a while to translate it in my mind; but when I finally understand one sentence, the video has already moved on to the next stage. Especially since I have no prior knowledge of the water cycle, I think it is too difficult for me to understand the video.

In contrast to VR-assisted multimodal text, there were no comments about health concerns, indicating that video is a widely accepted multimedia tool in the classroom setting and students feel comfortable with it. To sum up, the majority of participants found that video-assisted multimodal text aided their comprehension because of the cartoon character's narration, the vivid sound effects, and the comprehensive animations and graphics, while some students reported problems such as time cost, fast pace and complex content that need to be tackled through careful selection of videos in accordance with students' level of language proficiency.

To answer the third research question, learners mainly held positive attitudes towards multimodal input in the reading sessions; they found the multimodal text assisted by VR and video interesting and effective in helping them understand the expository texts because of the multimedia aids in different modalities. It is also noted that some technical problems constrained students from reading effectively and need to be addressed in future implementations of multimodal text reading.

Discussion

Based on the research findings, this study argues that VR-assisted multimodal input facilitated Chinese 8th grade EFL learners' overall and macrostructural reading comprehension in the short term without incurring extraneous cognitive load.

The effects of input modalities on learners' reading comprehension

The experimental results showed that VR-assisted multimodal input significantly improved Chinese 8th grade EFL learners' overall and macrostructural reading comprehension in the short term.
This supports Jones and Plass's (2002) assumption that "pictorial information provided in addition to text may help support macro-level processing in L2 computer-based reading activities" (p. 548). The findings of the present study corroborate Al-Seghayer's (2007) research showing that the use of structural devices improved learners' ability to identify main ideas and construct appropriate mental representations of an electronic text. The positive effect of multimodal input on learners' macrostructural processing was also reported by Abdi (2012), who demonstrated that exposure to electronic texts supported readers' macrostructural construction and organization. Macrostructural comprehension entails two levels of processing: selecting important textual information from individual units (construction) and connecting the selected information into a coherent mental representation (organization). The positive effects of VR-assisted multimodal input on learners' macrostructural comprehension can be explained by effective support for these two levels of processing. Firstly, the visual support, especially the animated features of the VR-assisted multimodal input, was effective in introducing thematic units, reducing complex concepts to simple visual displays and providing a holistic understanding of discourse organization. Secondly, the tactile input of the VR-assisted multimodal text made it possible for learners to see objects in three dimensions and construct a coherent representation of each stage in the butterfly's growth or the water cycle in a unified form. Thus, participants in the VR-assisted multimodal text group were able to identify individual units, recognize their interrelations and integrate them into a coherent mental model, thereby achieving a high level of macrostructural comprehension.

As for microstructural comprehension, there were no significant differences between multimodal and monomodal input, indicating that participants who read the multimodal text presented in auditory, visual and tactile modes did not remember more words and textual details than participants who read the paper text. Similar findings were reported in Ariew and Ercetin's (2004) research, which found no causal relationship between multimedia-assisted annotations and learners' microstructural reading comprehension. This study went beyond multimedia-assisted annotations to include multimodal presentation of the whole text, which had not been examined in previous literature, and the result can be explained by the constraints of the multimedia technology and the specific reading context in this study. Based on participants' narrative accounts, the complex content shown in limited time diverted their attention from certain details and affected their memorization of textual information, even though the detailed information in the text had been reinforced in different modes of input. In addition, the initial 'wow' effect brought by VR technology may have translated into more attention to the multimedia itself rather than the reading content and language, thereby distracting learners from concentrating on the details illustrated in the text. It should also be noted that the study only focused on expository reading in a CLIL context.
It is also worth noticing that there was no significant effect of input modality on learners' retention of the text, which corroborates previous research findings (Brett, 2001; deHaan, 2010; Moreno, 2002) that certain foreign language multimedia learning environments may not affect learners' language retention in the long term. One possible explanation is that students were engaged in the multimodal reading sessions during the initial learning phase, but with the diminishing 'wow' effect they were less likely to retrieve the newly acquired information to foster long-term learning (Roediger & Karpicke, 2006). Another possible explanation is the lack of incentive for learners to complete the delayed post-test, given that they had already finished two similar print-based tests two weeks earlier, and this negative testing effect may have influenced their reading performance.

However, the results should be interpreted with caution and situated in the specific research context. Simple test-based accountability may not generate accurate estimates of gains in student performance. Although test scores provide useful information about students' reading growth, looking at reading test scores alone would silence other valuable indicators and bias the evaluation of the multimodal reading intervention. In this study, the reading test was formatted as multiple-choice and blank-filling questions for objective grading, but students could answer the questions by lucky guessing or by drawing on problem-solving strategies to complete the reading test without retrieving newly acquired knowledge, and this might be partially responsible for the absence of significant treatment effects on learners' retention. Moreover, standardized exams using a limited number of closed questions leave little space for learners to display higher-order thinking, such as analysis, evaluation and creativity. Given the limited scope of the expository texts, such influence is disproportionate to any intrinsic value they may have for educational outcomes. Furthermore, the conventional paper-based test used in this study was not aligned with the different modalities of text input in the treatment: students who read multimodal text in the treatment did not complete the post-test in the same format, due to technological limitations and the complexity of collecting answers digitally. This mismatch between intervention and assessment is likely to have affected the research findings. In addition, students may have negative feelings towards test-based assessment due to stress, previous failed experiences and frequent testing, which could lead to demotivation in completing the post-tests. In other words, the objective measure of learners' reading performance could not fully capture the effectiveness of multimodal input. It is therefore necessary to combine it with other, subjective measures, such as the qualitative data collection methods used in this study, to fully capture students' reading development in the multimodal learning environment.

The effects of multimodal input on learners' cognitive load

An unexpected finding of the study was that learners' overall, extraneous and germane cognitive load did not show significant differences when reading, watching and interacting with multimodal text compared with traditional print-based text reading. This means that neither the modality effect nor the redundancy effect was observed in learners' self-reported cognitive load ratings in this research.
This result corroborates few existing studies, since most of the literature situates its findings in either the modality effect or the redundancy effect, without considering a third possible outcome such as no effect. One explanation of this surprising 'no-effect' finding is the pervasiveness of multimodal literacy in the digital era: students live in a highly visual world and are exposed to a multimodal environment both in print and on screen. As multimodal text becomes the new norm, students may find that reading screen-based text does not require more attention and processing than print-based text. It is also noted that the complex content and difficult words in the expository reading reported by some participants did not overload learners' cognitive systems. The overwhelming demands of cognitive processing in reading relatively complex expository texts were offset by segmentation, that is, dividing the passage into learner-paced segments and allowing learners to fully understand each part of the presentation before moving to the next by clicking the 'continue' button on the screen. The self-controlled input presentation also reduced students' representational holding at any one time, and they could process information at their own pace. The segmentation and pacing principles underlying the cognitive theory of multimedia learning have been validated by multiple studies (Aly, Elen, & Willems, 2005; Lusk, 2008; Mayer, 1999; Mayer & Chandler, 2001). Thus, the positive effects of text segmentation counteracted the increased mental demand of processing complex content, leading to no statistically significant effect of multimodal input on learners' cognitive load.

Although the present study showed that multimodal input did not increase or decrease learners' cognitive load to a large extent, it is premature to conclude that multimodal text assisted by VR and video has no effect on learners' cognitive load. The rationale is twofold. To begin with, this study approached cognitive load by adapting a self-reported scale (Paas, 1992), the validity of which has been confirmed in multiple studies (Szulewski et al., 2018; van Gog & Paas, 2008). The clarity and sensitivity of this subjective technique have made it an extensively used method, while at the same time its self-reported nature is regarded as its major weakness. Antonenko and Niederhauser (2010) suggested that cognitive load should be regarded as a dynamic process and that EEG-based physiological measures should be used to provide a more comprehensive picture than self-reported scales. Although the present research broke the subjective rating scale down into mental effort and mental load and examined the effects of multimodal input on the two constructs respectively, more objective measures could be applied to strengthen the reliability of the research findings. Furthermore, the EFL learners in the current research were only engaged in two reading sessions, and this exposure to multimodal text was far from enough to draw a definite conclusion regarding the effects of multimodal input on learners' cognitive load. Thus, this study inquired into participants' perceptions of the efficacy of multimodal text to understand learners' cognitive processing from an inner perspective.

Learners' and the teacher's perceptions of the efficacy of multimodal text

In general, learners held positive attitudes towards the effectiveness of multimodal text and found that multimodal input aided their comprehension.
Contextualized images and animation were the most appreciated features of the multimodal text, found useful by half of the learners in both experimental groups. The significant role of visualization in scientific reading has also been addressed by Mason, Tornatora and Pluchino (2013): visualizations make complex processes visible and help readers construct mental representations. Based on the journal entries and interview data, the majority of learners believed that VR-assisted multimodal input was more effective in assisting their L2 reading than the video-assisted multimodal input and the traditional print-based monomodal input they often receive in daily practice. The main reason lies in the unique exposure to tactile input, which gave learners a sense of immersion that cannot be experienced with other text input. Positive perceptions of tactile input were also found in Limniou et al.'s (2008) research, which showed that a 3D immersive VR-assisted learning environment elevated learners' interest and motivation compared with learning in a 2D animated environment. In the VR-assisted reading scenario, learners were exposed to a simulated version of a reality that may not be available or possible in the brick-and-mortar classroom. According to learners' narration, the interactive affordance of VR technology allowed them to experience and establish kinesthetic relationships with the virtual world and to receive feedback contingent on their responses (Moreno et al., 2001), while the manipulative affordance allowed them to take up an active role and learn at their own pace. In this light, the tactile input enabled readers to construct a haptic model of the text and place themselves in a reality that had been brought from outside into the classroom (Evans and Green, 2006).

Nevertheless, some negative aspects of multimodal input were pointed out in learners' journal entries and the teacher's observations; these hindered readers in constructing mental models effectively and prompted practitioners to reflect on the potential disadvantages of the multimodal approach in the CLIL reading context. For both experimental groups with multimodal text input, long periods of watching an electronic screen filled with text, sound, graphics and animation were harmful to learners' eyes; in particular, the immersive nature of VR technology made them feel dizzy after long periods of interacting with the virtual world. Moreover, some students felt lost and overwhelmed in the face of so much information presented at a fast pace. Distraction has also been found to be a factor in technology-enhanced learning environments when students fail to engage in robust learning (Greene, Moos, & Azevedo, 2007). Such negative comments were also reported in previous studies of multimedia-assisted language learning (Lu, 2008; Thornton & Houser, 2005; Wang & Higgins, 2006). These constraints explain some participants' negative comments on the multimodal input, and they should be taken into consideration alongside the positive effects of multimodal input on L2 reading in pedagogical design.

Theoretical implications for SLA research

According to Mayer (2008), theory and practice are actively engaged in dialog, and this dialog can be built when there is a "two-way street between cognitive science and instruction" (p. 760).
This means there is a reciprocal relationship between learning theory and educational practice, in which learning theory lays the theoretical ground for educational practice, while instructional practice is designed on the basis of the theoretical framework and further informs theoretical development. This study is such a dialog, allowing for interaction between theory and practice. Theoretically, the present study synthesized two theoretical perspectives and brought forward an integrated framework of the cognitive theory of learning with VR. Grounded in this integrated framework, the study tested the efficacy of multimodal input on Chinese EFL learners' reading comprehension by collecting and analyzing quantitative experimental results and qualitative interpretations. The overall research findings support the tenets of both the input hypothesis and the cognitive theory of multimedia learning: the provision of comprehensible input through multiple modalities can facilitate learners' information processing and improve their reading performance in the short term.

I use this research as a platform to inform the SLA field by problematizing the underlying assumptions of the conceptual framework. Based on the findings of the multimodal text reading practice, the present research can inform the theoretical underpinnings of multimodal learning in the cognitive account of SLA. The central issue of this framework is how multimodal input helps people process information and achieve better learning outcomes, and this study specifically revealed how VR-assisted multimodal text fostered learners' reading comprehension, thereby validating the framework. From a holistic perspective, the present research focused on multimodal input provision and learners' cognitive processing of that input in generating learning output. This study approached the two theoretical perspectives in SLA and multimedia learning in light of multimodality. Firstly, the current research widened the theoretical lens to encompass multimodal input and examined the way in which VR technology enhances input in a multimedia learning environment. Instead of limiting its scope to providing multimodal input, this study fully exploited the multiple affordances of VR technology in providing auditory, visual and tactile input and explained how the different modes of input entered learners' working memory. It is worth noting that the constraints of multimodal input were also revealed in the current research, suggesting that negative aspects of multimodal input should also be a focus of inquiry in order to obtain a comprehensive understanding of its efficacy. In this light, this research has taken the field beyond Krashen's theory of comprehensible input to an understanding of how learners process multimodal input and integrate different mental models into a coherent understanding. This study also suggests that SLA research with a focus on linguistic input should consider how technology changes linguistic input and how learners' access to different affordances of multimodal input might affect acquisition.

Secondly, this study expanded the three assumptions (the dual-channel assumption, the limited capacity assumption, and the active processing assumption) underlying the cognitive theory of multimedia learning (Mayer, 2005) for further application of this framework in the SLA field.
Incorporating VR technology into the multimodal learning environment, this study challenged the dual-channel assumption and suggested that a third modality could be added to the dual coding hypothesis (Paivio, 1986) and the cognitive theory of multimedia learning (Mayer, 2005), since tactile input belongs to neither the auditory nor the visual modality. Built on this triple-channel hypothesis, the study posited that learners could develop an additional haptic mental model of the reading materials and improve learning outcomes as a result of more thorough processing. The facilitative effect of VR-assisted multimodal input on learners' objective macrostructural comprehension performance validated the triple-channel hypothesis and emphasized the instrumental role of tactile input in helping learners identify different units of information and construct coherent mental representations. The research findings also indicated that the additional dimension of input modality neither imposed nor alleviated cognitive load. Thus, this study added another possible outcome under the limited-capacity assumption, one that falls into neither the redundancy effect nor the modality effect, which requires further evidence from theory-grounded measures of working memory load. Although there was no statistically significant effect of multimodal input on learners' cognitive load, learners reported high levels of interest and motivation and positive attitudes towards the effectiveness of multimodal text. Therefore, affective and motivational aspects of multimedia learning can be added to the active-processing assumption, which then involves not only the integration of prior knowledge with mental models but also the affective domain that may influence acquisition. Together the three extended assumptions form the cognitive basis for multimodal learning and provide a starting point for designing multimodal instruction. In summary, this empirical study expanded the theoretical landscape by situating multimodality in SLA through a discussion of the input hypothesis (Krashen, 1985) as well as the cognitive theory of multimedia learning (Mayer, 2005) in a VR-assisted multimodal learning environment. Guided by the integrated framework, the mixed research findings from objective assessment and subjective narration further extended the scope of comprehensible input and the three assumptions underlying the cognitive theory. The interplay of theory and practice shown in this study provides solid conceptual ground and empirical evidence for future investigations of multimodality in the field of SLA.

Main findings

This study argued that VR-assisted multimodal input significantly improved Chinese 8th-grade EFL learners' overall and macrostructural reading comprehension in the short term without incurring extraneous cognitive load. Situated in the integrated framework of the cognitive theory of multimedia learning with VR, this study examined the effects of VR-assisted multimodal input on Chinese 8th-grade EFL learners' reading comprehension by triangulating objective reading test performance, cognitive load ratings and participants' narrative accounts to obtain an overall understanding of multimodal input in breadth and depth. Based on the research findings, there were statistically significant differences in short-term reading performance at the level of macrostructure, with superior performance by the students receiving the multimodal text treatment.
However, there was no effect of multimodal text on students' short-term microstructural reading comprehension or on long-term retention of the text. In addition, there were no differences in learners' cognitive load, indicating that multimodal text did not incur extraneous load or increase germane load. Participants' positive feedback displayed the affordances of VR in assisting their expository reading, while the negative comments and suggestions call on researchers, teachers and technology designers to further improve multimodal input in accordance with students' cognitive capacity in the multimodal learning environment.

Implications

Implications for teaching with advanced technology fall into the interdependent categories of materials design and student training. In terms of materials design, teachers need to select appropriate 3D models from the database and incorporate flexibility into teaching materials and assignments so that students can choose among a variety of tools or strategies to customize learning in accordance with their levels of language proficiency. Teachers can develop a repertoire of instructional approaches to encourage learners to construct multimodal representations and process information actively. Moreover, teachers need to provide training to orient students to multimodal text input; otherwise readers may wander at random through the multimodal text and be unable to construct coherent mental representations across text units. Therefore, teachers need to acquire technology-supported skills and pedagogical knowledge to integrate advanced technology with language teaching. As for technological implications, the constraints mentioned by participants in this study call for further improvement of the state-of-the-art infrastructure and facilities. Technology providers in the education field need to take pedagogical design into consideration and offer practitioners enough training and technical support to facilitate multimedia-assisted instruction and learning. In addition, this study was conducted in a context of governmental support for promoting VR technology in education. The mixed findings remind policy makers to think twice before implementing VR technology in education at scale and to treat the immediate 'wow' factor with caution. Despite the benefits of VR, there are still challenges that should be addressed before large-scale implementation.

Limitations and future directions

This exploratory research took an important step in integrating VR technology with EFL learners' expository reading, but its generalizability is limited by the non-random sampling of participants, the short time frame, the nature of expository reading and the formal school context. The homogeneous background of the participants, all 8th-grade Chinese EFL learners, narrowed the generalizability of the findings. Moreover, the study was carried out over a short time frame with only two reading sessions. Participants therefore had limited exposure to multimodal text, and this study may have captured only learners' initial excitement towards a new form of technology while missing declines in motivation and affect that influence learning outcomes in the long term. Future studies can extend the treatment over a longer period and observe learners' reading trajectories in dealing with multimodal text. Moreover, this study focused on expository reading and situated the treatment in a CLIL context.
Different types of reading and other facets of language learning can be the focus of inquiry in future studies. Lastly, this study focused on learning in the school context and utilized conventional assessment tools. There has been very little empirical research on the instructional value of multimodal language learning in informal settings, and it is recommended that researchers deploy advanced multimedia technology to assist students' informal language learning.
Transversality of Electromagnetic Waves in the Calculus-Based Introductory Physics Course

Introductory calculus-based physics textbooks state that electromagnetic waves are transverse and list many of their properties, but most such textbooks do not bring forth arguments for why this is so. Both physical and theoretical arguments are at a level appropriate for students of courses based on such books, and could readily be used by instructors of such courses. Here, we discuss two physical arguments (based on polarization experiments and on the lack of monopole electromagnetic radiation), and the full argument for the transversality of (plane) electromagnetic waves based on the integral Maxwell equations. We also show, at a level appropriate for the introductory course, why the electric and magnetic fields in a wave are in phase, and the relation of their magnitudes.

I. INTRODUCTION

Electromagnetic waves in free space constitute an important part of the calculus-based introductory physics course. In addition to other properties, all the textbooks we have surveyed include a discussion of the transverse nature of the waves. Specifically, (plane) electromagnetic waves have the electric field strength E and the magnetic induction B perpendicular to the propagation direction of the wave and to each other, so that (E/E, B/B, S/S) is an even permutation of (x̂, ŷ, ẑ), where S is the Poynting vector. The extent of the coverage and the arguments brought in the textbooks vary widely. While we have not checked all textbooks, none of the ones we have surveyed include a full and detailed theoretical argument for these important properties, even though all textbooks include the infrastructure needed for that purpose, i.e., the (integral) Maxwell equations. Most textbooks derive the wave equations for the transverse components of the E and B fields, but do not show that these wave equations are unique, i.e., that a longitudinal component violates the Maxwell equations. That is, they show that transverse waves are consistent with the Maxwell equations, but not that they are required. We make the case here that the theoretical argument is important for students of the calculus-based introductory course. Much research has shown that understanding and retention of physics are enhanced if important concepts are not presented merely as factoids, but rather derived from theoretical and physical argumentation. In the case of the transversality of (plane) electromagnetic waves our case is further strengthened, as the argument is certainly at the right level for students of such courses. Indeed, a general demonstration without assuming planar symmetry is clearly beyond the scope of the introductory course, and rightfully belongs to more advanced courses. However, the physics and mathematics needed for the demonstration for plane electromagnetic waves are readily available to students of the introductory course, and in fact are already included in most textbooks we surveyed. Therefore, no new physics or mathematics is needed to demonstrate the transversality of (plane) electromagnetic waves in the introductory course. All that is needed is to arrange the arguments that are already in the student's toolkit to achieve a deeper level of understanding. The textbooks we have surveyed (and again, we emphasize our literature search was not exhaustive, as the number of textbooks is indeed very large; we do believe we have found the tendencies of existing textbooks) fall under the following categories.
Category A: textbooks that postulate the transversality (but don't derive it), and then show consistency with the (either all four or just the dynamical) Maxwell equations [1]; category B: textbooks that only use the two dynamical equations (Faraday's law and the Ampère-Maxwell law) to derive the wave equations for the (transverse) components of the E and B fields, but don't show that only transverse fields are allowed [2,3]; and category C: textbooks that add to the treatment of category B also a demonstration that a longitudinal component of the E field is disallowed (but not that a transverse component is consistent with the Maxwell equations) [4]. Interestingly, even introductory books that discuss the differential Maxwell equations do not include a demonstration of the transversality of electromagnetic waves [5,6] (we have not surveyed more advanced books, appropriate for upper-division courses on electromagnetism). None of the textbooks we have surveyed include physical arguments for the transversality of electromagnetic waves, even though all of them discuss polarization. On the other hand, none of the textbooks we have surveyed include a discussion of the impossibility of genuine spherical electromagnetic waves (either a monopole or a combination of multipoles) [9]. Either polarization or hypothetical spherical electromagnetic waves may be used in physical arguments for the transversality of electromagnetic waves. This omission is surprising, as building strong physical intuition is widely considered a primary goal of such courses. Many of the properties of electromagnetic waves can be demonstrated also for the algebra-based introductory course [13], but examination of these arguments and their use in textbooks is beyond the scope of this Paper. Our theoretical argument is by no means original, and surely has been made numerous times. It is however absent in its entirety from all textbooks we have surveyed. Even if it does appear in textbooks we have not surveyed, we believe it is fair to state that most textbooks do not include the full argument, and that similarly most calculus-based introductory physics courses do not discuss it. In advanced courses the transversality of electromagnetic waves is easily shown [10] from the Maxwell equations and the wave equation for the vector potential A_µ in the Lorenz gauge, and the skew symmetry of the Maxwell-Faraday tensor F_{µν} := ∂_µ A_ν − ∂_ν A_µ. Consider, say, a vector potential A_µ = A cos(kz − ωt) δ_µ^x. The only nonvanishing independent components of F_{µν} are F_{tx} = E_x and F_{xz} = B_y, and E_x = (ω/k) B_y. These E and B fields manifestly satisfy the aforementioned properties, although the formality of the demonstration, in addition to being inappropriate for the introductory course, is devoid of physical insight. (One may of course try to impose longitudinality: take A_µ = A cos(kz − ωt) δ_µ^z. One then finds that F_{tz} is non-zero, and one might be tempted to identify it with E_z. However, the postulated vector potential is not a solution of the Maxwell equations in the Lorenz gauge, as can be readily verified.) One can also show the transversality without invoking tensor analysis, using only vector analysis methods [11]. The demonstration using elementary methods is important, as it involves more physical insight than the advanced, more formal proof.
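For readers who want the component computation spelled out, the claim above can be sketched as follows (signs depend on metric and index conventions, so this is a sketch rather than a definitive statement):

$$A_\mu = A\cos(kz-\omega t)\,\delta_\mu^{\,x}
\;\Rightarrow\;
F_{tx} = \partial_t A_x = \omega A \sin(kz-\omega t),
\qquad
F_{xz} = -\partial_z A_x = k A \sin(kz-\omega t),$$

so that identifying F_{tx} with E_x and F_{xz} with B_y gives E_x/B_y = ω/k, i.e., E_x = (ω/k) B_y, with both fields sharing the phase kz − ωt, as stated.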
Notably, electromagnetic waves in matter do not have to be transverse [14]. Longitudinal electromagnetic waves (or combinations of longitudinal and transverse waves) indeed exist in inhomogeneous media and in cavities (because of the cavity's boundary conditions) [10]. One may also have in vacuum electromagnetic waves with E × B = 0, but those are standing waves (with vanishing Poynting vector) [15]. These interesting cases are typically beyond the scope of the introductory course. In this Paper we present a full argument for the transversality of (plane) electromagnetic waves in vacuum. We are hopeful that instructors of calculus-based introductory physics courses will present a full argument in their classes. We use SI units despite their awkwardness (for the description of electrodynamics), as this appears to be the nearly universal practice in textbooks. We use only the integral form of the Maxwell equations, as nearly all introductory textbooks (for calculus-based physics) refrain from introducing differential operators. In Section II we describe physical arguments for the transversality of electromagnetic waves, based on polarization and on the lack of monopole radiation. In Section III we present a full argument at a level appropriate for the introductory calculus-based physics course, as follows: First, we show that E cannot have a longitudinal component, and that E may have a transverse component. We then repeat the arguments for the B field. We then show that the E and B fields are orthogonal to each other and to the direction of propagation, and finally, we show that the E and B fields satisfy the wave equation. We emphasize that the only assumption we make is that of planar symmetry. Except for this assumption (which makes the derivation possible for the introductory course) our argument is general.

A. Polarization

Many introductory courses include a demonstration of polarization that uses two polaroid films, so that one can be rotated relative to the other. When the polarization directions of the two polaroids are orthogonal, no light passes through the polaroids, and the image on the screen is dark. Suppose there were a longitudinal component to the E field. This component could be transmitted through either polaroid, so that a longitudinal component would arrive at the screen through both polaroids even when they are orthogonal. Therefore, with a longitudinal component the screen would never get totally dark when the two polaroids are in a plane perpendicular to the direction of propagation of the light. A variant of this argument was given by Schutz [7]. Schutz considers the two polarizers rotating relative to each other in the plane orthogonal to the direction of propagation. As the brightness on the screen oscillates with the polarizers, one must conclude the electric field acts across its motion, i.e., transversely. In order to make our argument compelling, students need to have at least a qualitative understanding of how simple dichroic devices such as a wire-grid polaroid work (most introductory texts do not discuss in depth other polarizers, such as beam-splitting polarizers, birefringent polarizers, or polarization by reflection).
Such understanding may be expected from students of the relevant course: there are three microscopic mechanisms that filter out the component of the E field in the plane of the polaroid parallel to the direction of the wires (or of the polyvinyl alcohol polymers doped with an iodine solution in a commercial polaroid). First, by the Lorentz force law, conduction electrons are accelerated along the wires and oscillate with the oscillating E field. When colliding with lattice atoms, kinetic energy is transferred to the lattice, thereby transferring energy from the parallel component of the E field to the lattice ("Joule heating"). Second, the oscillating conduction electrons reradiate in all directions, so that half the reradiation is backward (reflection) and the remainder forward. The backward reradiation takes away more energy from the parallel component of the E field. The forward reradiation is not entirely in the direction of the original incident plane wave, so that it scatters and little of it arrives at the detector. Third, and most importantly, the forward reradiation is generally out of phase with the incident E field (a phase difference of π radians), thereby reducing it further by destructive interference [8]. Some textbooks explain the polarization effect by stretching a mechanical model too far. In such texts the electric field is modeled by a tension wave in a string that passes through a fence with parallel beams [3]. The mechanical wave is naturally transmitted only if the string oscillates parallel to the beams. This analogy might imply that the wrong component of the E field is transmitted. For this reason, those texts refer to the polarization direction of the polarizer, stating that perpendicular components are filtered out. But this argument may then persuade some students to believe that a longitudinal component of the E field is also filtered out. This sort of argumentation, in addition to not empowering the students with an understanding of how simple dichroic devices work, might also prevent them from understanding a simple physical argument for why electromagnetic waves are transverse, and gives the students a misleading physical picture of electromagnetic fields oscillating in the space between the wires.

B. No monopole radiation

Consider a hypothetical monopole electromagnetic wave. Then a radially pulsating spherical charge distribution would emit such spherical outgoing waves. As the fundamental equations of physics are symmetric under time reversal (t → −t), an incoming imploding spherical monopole wave would make an otherwise static spherical charge distribution pulsate radially. Each charge pulsates radially because of a radial force acting on it. The form of the Lorentz force law then implies there would be a radial E field acting on the charges. But a radial E field is perpendicular to the imploding spherical wave front and therefore along its direction of motion, so that the E field would have a longitudinal component. This qualitative argument implies that a monopole electromagnetic wave is necessarily longitudinal. (While students of the introductory course generally have not studied multipole expansions of the electromagnetic field, the argument may still be used in reference to a truly spherical wave front.) We do know, however, that outside a spherical charge distribution the electromagnetic field is static, as the monopole piece of the electromagnetic field is non-radiative.
Students are exposed to this argument also in Newtonian gravity: because of the inverse-square force law (common to both Newton's and Coulomb's laws) the field strength of any spherical charge (or mass) distribution is the same as if all the charge (or mass) were concentrated at the center. Specifically, the electric field outside a radially pulsating spherical charge distribution is static. Therefore, the longitudinal component of the E field cannot exist, as such a longitudinal component would have to be radiated by a pulsating spherical charge distribution [7]. This physical argument may be cast in a more mathematical form, using a theorem from vector calculus that states that no continuous unit two-dimensional vector field V may exist on the entire sphere (0 ≤ θ ≤ π, 0 ≤ φ < 2π) [12]. This theorem is normally beyond the scope of the introductory course. In addition, it requires the assumption of transversality (namely, that V is a two-dimensional vector field on the sphere, or V · r̂ = 0). However, it may be used to argue that any such spherical electromagnetic wave would have to be longitudinal.

Show that E does not have a longitudinal component

Consider a plane electromagnetic wave traveling in vacuum in the direction of x̂. (Our coordinates may be rotated to this orientation.) Because of the planar symmetry the most general electric field strength is E = E(t, x). In what follows we omit the explicit time dependence of the fields. We first show there can be no component E_x: construct a gaussian surface enclosing a volume V as in Fig. 1. According to Gauss's law,

∮_∂V E · dA = q/ε₀,

where q is the total charge inside the gaussian surface, and ε₀ is the permittivity of vacuum. Here, dA is a surface element normal to the (orientable) surface ∂V, defined conventionally such that it is positive when pointing outward. (The arrows in Fig. 1 indicate a case in which the fields at the two faces normal to x̂ are not equal.) Were the fields at those two faces different, the surface integral would be nonzero, and proportional to their x derivatives. In the case of E this would imply free charges inside the box, i.e., a continuous charge distribution, which contradicts the vacuum assumption. In the case of B that distribution would consist of magnetic monopoles. Calculate the integral on the LHS: taking the box to have faces of area Δy Δz at x and at x + Δx, one finds [E_x(x + Δx) − E_x(x)] Δy Δz = 0, as there are no charges. The components E_y and E_z identically do not contribute to the LHS integral, as these components are not functions of y or z, respectively, and as they lie in the plane of the faces of the surface whose normals are in the direction of x̂. Notice that there is no flux of E_x through the faces of the gaussian surface in the ŷ or ẑ directions because of the orthogonality of E_x to those surfaces' normals. This immediately implies ∂E_x/∂x = 0, i.e., E_x does not obey a non-trivial wave equation. Therefore, the most general electric field is E = E_y(x) ŷ + E_z(x) ẑ. Without loss of generality, we may rotate our coordinate system so that E = E_y(x) ŷ. Our demonstration depends on the absence of charges: the presence of charges permits a non-vanishing gradient of E_x. Indeed, Gauss's law would not be violated for longitudinal waves without assuming vacuum, as there are charges and inhomogeneities in matter. Therefore, this demonstration does not rule out longitudinal waves in matter or in inhomogeneous media.
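The box computation above can be made fully explicit; in the following sketch the box has faces of area Δy Δz at x and at x + Δx, as assumed above:

$$\oint_{\partial V} \mathbf{E}\cdot d\mathbf{A}
= \left[E_x(x+\Delta x)-E_x(x)\right]\Delta y\,\Delta z
\approx \frac{\partial E_x}{\partial x}\,\Delta x\,\Delta y\,\Delta z
= \frac{q}{\epsilon_0} = 0
\;\Longrightarrow\;
\frac{\partial E_x}{\partial x} = 0 .$$

The identical computation with B and the magnetic Gauss law gives ∂B_x/∂x = 0 in the corresponding step below.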
Show that E may have a transverse component

Consider next an electric field E = E_y(x) ŷ, as in Fig. 2. Applying Gauss's law does not yield a restriction on E_y, because the LHS of the integral vanishes identically, as E_y = E_y(x): the flux of E_y through the two faces normal to ŷ cancels, and there is no flux through the other faces. This step may of course be combined with the preceding one by taking E = E_x(x) x̂ + E_y(x) ŷ + E_z(x) ẑ, and showing that Gauss's law becomes ∂E_x/∂x = 0. In conclusion, a transverse component for E is consistent with Gauss's law. Of course, were there no transverse component, the solution would become trivial (no electromagnetic wave). Therefore, for any electromagnetic wave a transverse E field is necessary.

Show that B does not have a longitudinal component

We next similarly use the magnetic Gauss law,

∮_∂V B · dA = 0.

The same argument as in Step 1 implies that ∂B_x/∂x = 0, so that B also does not have a longitudinal component. Therefore, B = B_y(x) ŷ + B_z(x) ẑ. But we have no more freedom to rotate the coordinate system, a freedom which we have already exhausted in rotating it so that E_z = 0. We may therefore not set any of the components of B equal to zero. As there are no magnetic monopoles, the B field has to be transverse also in matter.

Show that B may have a transverse component

Consider next a field B = B_y(x) ŷ + B_z(x) ẑ. Applying the magnetic Gauss law does not yield a restriction on B_y or on B_z, because the LHS of the integral vanishes identically, as B_y = B_y(x) and B_z = B_z(x).

Show that the E and B fields are orthogonal to each other and to the propagation direction

Consider now an Ampère loop enclosing an area A as in Fig. 3. The Ampère-Maxwell law is

∮_∂A B · dS = μ₀ i + μ₀ ε₀ dΦ_E/dt,

where μ₀ is the permeability of free space, i is the total conduction current through the loop, and Φ_E is the flux of the E field through it. Here, dS is an element of the curve ∂A along the loop, conventionally taken to be positive counterclockwise. Take the loop to lie in the x-y plane, with legs of length Δy at x and at x + Δx. Doing the integral on the LHS gives [B_y(x + Δx) − B_y(x)] Δy, as the B_z component is perpendicular to the entire loop; moreover, there is no electric-field flux through the loop (because the E field is in the direction of ŷ, in the plane of the loop) and there are no free currents. Therefore, ∂B_y/∂x = 0, so that B_y does not obey a non-trivial wave equation, and we find that B = B_z(x) ẑ. We therefore find that E · B = 0, i.e., the fields are orthogonal to each other and also to the direction of propagation x̂. In addition, E × B = E_y(x) B_z(x) ŷ × ẑ = E_y(x) B_z(x) x̂, i.e., the cross product of the fields is in the direction of propagation of the wave.

Show that the E and B fields satisfy the wave equation with speed c

This part of the argument appears in most textbooks we have surveyed [2,3,4]. However, without the preceding parts of the argument it is merely a demonstration of the consistency of the transverse wave equation with the Maxwell equations, not of its unavoidability. We now use the loop in Fig. 4, lying in the x-z plane, to evaluate the Ampère-Maxwell law. On the LHS,

∮_∂A B · dS = [B_z(x) − B_z(x + Δx)] Δz.

The flux of the E field is simply Φ_E = E_y(x) Δx Δz, so that the RHS evaluates to μ₀ ε₀ (∂E_y/∂t) Δx Δz, and the Ampère-Maxwell law yields

−∂B_z/∂x = μ₀ ε₀ ∂E_y/∂t.   (1)

We next consider again the Ampère loop in Fig. 3, but this time apply it to the Faraday law,

∮_∂A E · dS = −dΦ_B/dt.

Here, Φ_B is the flux of the B field through the loop. The LHS is evaluated as [E_y(x + Δx) − E_y(x)] Δy = (∂E_y/∂x) Δx Δy. The RHS is −(∂B_z/∂t) Δx Δy, so that

∂E_y/∂x = −∂B_z/∂t.   (2)

Many introductory texts [2,3,4] show that by differentiating Eqs. (1) and (2) and eliminating the mixed second-derivative terms, one can obtain wave equations. With the identification c² = 1/(ε₀ μ₀) these equations lead to

∂²E_y/∂x² = (1/c²) ∂²E_y/∂t²,   ∂²B_z/∂x² = (1/c²) ∂²B_z/∂t².
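The elimination step referred to above can be sketched explicitly in the notation of Eqs. (1) and (2); the interchange of the partial derivatives assumes smooth fields, which is implicit throughout:

$$\frac{\partial}{\partial x}\left(\frac{\partial E_y}{\partial x}\right)
= -\frac{\partial}{\partial x}\frac{\partial B_z}{\partial t}
= -\frac{\partial}{\partial t}\frac{\partial B_z}{\partial x}
= \mu_0\epsilon_0\,\frac{\partial^2 E_y}{\partial t^2},$$

where the first equality is the x derivative of Eq. (2) and the last equality uses Eq. (1). Differentiating Eq. (1) with respect to x and Eq. (2) with respect to t instead yields the same wave equation for B_z.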
The magnitudes of the E, B fields

To show the relation between the magnitudes of the E and B fields we present an adaptation of an argument originally made by Feynman [6]. Consider an infinite planar current sheet, carrying current density j = −j ŷ. A current sheet is a two-dimensional surface carrying current, and can be approximated by a large number of wires, all carrying current in the same direction, so that the total current per unit length (perpendicular to the wires) is j = i/ℓ, where i is the total current included in the length ℓ. Current sheets are a very useful model in magnetohydrodynamics (MHD) and in heliophysics. We first recall the B field of a stationary current sheet [5]. Consider a current sheet aligned as in Fig. 5. To find the B field outside the current sheet we construct an Ampère loop whose plane is perpendicular to the plane of the current sheet. The B field must be parallel to the plane of the current sheet, and perpendicular to the direction of the current. (One may be convinced of that result by considering the elementary problem of a current wire, and considering the current sheet as the limiting case of infinitely many such current wires.) As this problem is stationary, we may use the original form of the Ampère law (no time-changing flux of an electric field), i.e.,

∮_∂A B · dS = μ₀ i,

where i is the total current going through the loop, whose length in the z-direction is Δz. Because the current sheet is infinite, the magnitude of the field strength |B| := B is constant along the legs of the loop in the z-direction. The LHS of the Ampère law then is ∮_∂A B · dS = −2B Δz, and the RHS is simply μ₀ i = −μ₀ j Δz. Putting the two expressions together, one finds that B = ½ μ₀ j. Notably, B is independent of the distance from the current sheet. Consider next a current sheet that is abruptly turned on at time t = 0. That is, j = −j Θ(t) ŷ, where Θ(t) is the Heaviside step function. Before the current is turned on, the B field is zero everywhere. Following the abrupt turning on of the current, a non-zero B field starts to fill up space, the wavefront propagating at the speed c. Therefore, we have a propagating wavefront, so that in front of the wavefront the B field vanishes, and behind it the field is uniform, as shown above. To find the accompanying E field we use the Faraday law for the Ampère loop in Fig. 6. Only the part of the Ampère loop behind the wavefront has non-vanishing B-field flux. As the wavefront is propagating with speed c, the area of the loop in which there is a B field is increasing, so that there is a time-changing B-field flux through the loop. The E field between the wavefront and the current sheet is parallel to the current sheet, and counter-parallel to the current. That is, E = E ŷ. (See Fig. 6.) Using the Faraday law, one finds that the LHS equals ∮_∂A E · dS = −E Δy, and the flux of the B field through the loop at a time t > t₀ is Φ_B = B c(t − t₀) Δy, so that the RHS of the Faraday law equals −dΦ_B/dt = −cB Δy. Setting the two sides equal, we find E Δy = cB Δy, or E = cB. Although our argument was made only for the configuration of a current sheet that is abruptly turned on, the result that E = cB for an electromagnetic wave is general. The general result, however, is beyond the scope of the introductory course.
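Up to orientation-dependent signs, the two loop computations above can be collected compactly (in this sketch, Δz and Δy are the loop lengths along the sheet for the Ampère and Faraday loops, respectively):

$$\oint_{\partial A}\mathbf{B}\cdot d\mathbf{S} = 2B\,\Delta z = \mu_0 j\,\Delta z
\;\Longrightarrow\; B = \tfrac{1}{2}\mu_0 j,
\qquad
\oint_{\partial A}\mathbf{E}\cdot d\mathbf{S} = E\,\Delta y
= \left|\frac{d\Phi_B}{dt}\right| = cB\,\Delta y
\;\Longrightarrow\; E = cB .$$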
An important consequence is that in an electromagnetic wave the E and B fields are in phase. That is, as at any given time E = cB, when E is maximal so is B, when E vanishes B is zero too, etc. The result that E = cB appears in many introductory texts (e.g., in [3]). However, in such books an extra assumption is typically made, namely that the fields are harmonic and are in phase. Students of the introductory course, who normally are not familiar with Fourier theory, often find it unconvincing to base an important general physical result on the mathematical properties of sinusoidal waves, in addition to having to memorize yet another factoid, namely that the fields are in phase. At this point one is in a position to make the following observation. We found that E = cB. This means that the ratio E/B for an electromagnetic wave equals c, which depends in magnitude on the system of units. E.g., in SI units this ratio is 3 × 10⁸ m s⁻¹, and in cgs units it is 3 × 10¹⁰ cm s⁻¹. It is therefore natural to use units that put the E and B fields on an equal footing, that is, units in which distance is measured in light-seconds. In such units the speed of light is c = 1 (light-)second/second, or simply c = 1. That is, in these units E = B. This choice of units becomes further motivated physically when one considers the energy density ℰ in the electromagnetic field (not necessarily that of an electromagnetic wave), ℰ = ½ ε₀ (E² + c² B²). For an electromagnetic wave E = cB, so that ½ ε₀ E² = ½ ε₀ c² B², i.e., the energy densities stored in the E and B fields are equal. It is therefore suggestive to make the two fields symmetrical by using units in which c = 1. It is often instructive to remind students that SI or cgs units were developed because of their convenience in describing everyday phenomena involving humans, and while all unit systems are in principle equivalent, these are not necessarily the units in which physical phenomena take their simplest or most natural form. We have therefore shown that electromagnetic waves are transverse, and that the electric and magnetic fields are perpendicular to each other. Also, the directions are such that E × B is in the direction of propagation. We have also shown that the fields have the same speed c, that they are in phase, and that the magnitudes of E and B are related by E = cB. This demonstration makes use of only the integral Maxwell equations, is appropriate for the level of most calculus-based physics courses, and includes only arguments already available to the prospective students. We believe its main strength is that it avoids giving the students factoids without deep understanding, and instead empowers the student to gain deeper insight.
Cognitive Impairments in Hashimoto's Encephalopathy: A Case-Control Study

Background/Aims: Hashimoto's encephalopathy is considered a treatable dementia, but it is often misdiagnosed. We investigated cognitive impairment and the MRI pathology of Hashimoto's encephalopathy patients. Methods: The study comprised eight patients with Hashimoto's encephalopathy, 16 patients with mild Alzheimer's disease and 24 healthy subjects. A neuropsychological battery included assessments of memory, language, attention, executive function and visuospatial ability. Cranial MRI was obtained from all Hashimoto's encephalopathy patients. Results: Hashimoto's encephalopathy and mild Alzheimer's disease patients showed cognitive impairments in episodic memory, attention, executive function and visuospatial ability, but naming ability was unaffected in Hashimoto's encephalopathy. The MRI of Hashimoto's encephalopathy showed a leukoencephalopathy-like type or a limbic encephalitis-like type; the lesions did not affect the temporal cortex, which plays a role in naming ability. Conclusion: Except for the retained naming ability, the impairments in cognitive functions for the Hashimoto's encephalopathy patients were similar to those of Alzheimer's disease patients. These results were consistent with the MRI findings.

Introduction

Hashimoto's encephalopathy (HE) is a rare, controversial neurological disorder associated with high titers of antithyroid antibodies [1,2]. Patients are mostly women [2]. Typically, HE is a steroid-responsive, relapsing-remitting and progressive encephalopathy. The clinical manifestations of HE are various, but frequently involve disorders of consciousness (e.g., confusion, coma), stroke-like episodes, cognitive decline, seizures (including focal or generalized seizures), psychiatric manifestations (such as depression, mania or hallucinations), myoclonus and movement disorders [3][4][5]. Although cognitive impairment is commonly described in cases of HE, the exact nature of the impairment and the associated neuroimaging pathology remain unclear. An investigation of the cognitive symptoms and homologous neuroimaging manifestations of HE will contribute to the differential diagnosis of dementia. Furthermore, as the diagnostic criteria of HE are controversial, such research would help to confirm the diagnosis of HE and contribute to the assessment of responses to treatment [6]. This study aims to analyze the cognitive impairment characteristics of HE by comparing HE patients with early-onset Alzheimer's disease (AD) patients and a normal control group. In addition, the magnetic resonance imaging (MRI) data of the HE patients are retrospectively analyzed to investigate the pathological status of HE.

Participants

Participants included 8 patients with HE and 16 patients with mild AD, who were recruited from the Memory Clinic, Department of Neurology, Huashan Hospital, from January 2010 to March 2012.
The HE patients (the case group) were carefully selected according to the following criteria: 1) clinical manifestations of HE in accordance with generally accepted diagnostic criteria [2,7]: cognitive impairment (Clinical Dementia Rating, CDR = 1) with or without neuropsychological symptoms; seizures; focal neurological deficits or movement disorders; 2) elevated titers of antithyroid antibodies in serum, including antithyroid peroxidase antibodies (anti-TPO; normal <60 IU/ml) and antithyroglobulin antibodies (anti-TG; normal <60 IU/ml); 3) negative results for infectious diseases (determined using the Treponema pallidum particle agglutination test and rapid plasma reagin test, anti-HIV antibody, anti-rubella IgG and IgM, anti-measles IgM, and anti-cytomegalovirus IgG and IgM), tumor antigens (CEA, AFP, CA199, CA125, NSE, SCC, etc.), paraneoplastic antibodies (immunoblot anti-Hu, anti-Yo, anti-Ri and anti-CV2 neuronal nuclear antibody IgG, etc.), and other immune antibodies including ANA, ENA, ANCA and dsDNA, as well as serum voltage-gated potassium channel (VGKC) antibodies and anti-NMDA receptor antibodies; 4) cognitive assessment performed prior to steroid treatment; 5) to make the neuropsychological tests feasible, we excluded patients with disorders of consciousness, visual or auditory deficits, or obvious symptoms of medical or psychiatric dysfunction (including mania and depression) within the previous month; 6) EEG showed no triphasic waves. The mild AD patients (patient control group) met the following criteria [8]: 1) diagnosis of AD according to the DSM-IV [9] (CDR = 1); 2) to make the neuropsychological tests feasible, we excluded patients with disorders of consciousness, obvious medical diseases, psychiatric/psychological dysfunction (including anxiety, mania and depression) within the previous month, or visual or auditory deficits. Twenty-four healthy elderly subjects from urban communities in Shanghai were chosen as the normal control group using cluster sampling. All participants were native Chinese speakers. There were no statistical differences among the three groups in terms of sex or educational level (P > 0.05; Table 1). Informed written consent was obtained directly from patients or family members. The ethics committee at Huashan Hospital, Fudan University approved the protocol.

Neuropsychological Assessment

Participants were given neuropsychological tests by a trained rater who was unaware of the study aims or the patient diagnoses. A comprehensive neuropsychological battery was used, which included assessments of memory, language, attention, executive function and visuospatial ability. All tests have been shown to have good reliability and validity when used in Chinese populations [10]. The specific tests employed were: the Center for Epidemiologic Studies Depression Scale [11]; the Auditory Verbal Learning Test (AVLT) [12,13]; the Rey-Osterrieth Complex Figure Test (CFT) [13]; the event-based prospective memory test (EBPM) and the time-based prospective memory test (TBPM) [14,15]; the Boston Naming Test (BNT, 30-item version) [16]; the verbal fluency test (VFT) [17][18][19]; the Trail Making Test, parts A and B (TMT-A, TMT-B) [20]; the Symbol Digit Modalities Test (SDMT) [21]; the Stroop Color-Word Test (SCWT) [22]; the similarity test [23]; the stick test [24]; and the clock drawing test (CDT) [25,26].
(Please refer to Dementia and Geriatric Cognitive Disorders, 2011; 31:284-290 for details.)

MRI Examination

All MR images were obtained using a 3.0-T system (Signa VH/i, GE Healthcare, Milwaukee, WI, USA) equipped with a standard head coil. The subject's head was immobilized in the head coil with foam padding. The conventional MRI sequences included axial T1-weighted imaging (T1WI), T2-weighted imaging (T2WI) and fluid-attenuated inversion recovery (FLAIR). Coronal images were used because the hippocampus is more easily identified in this acquisition plane.

Statistical Analysis

The ages of the subjects were significantly different between the three groups. Therefore, the overall differences between the three groups were assessed using ANCOVA to remove the effect of age. Post-hoc pair-wise comparisons between the groups were made using the least significant differences test. The level of significance (α) was set at 0.05.

Results

The average age of the HE group was 38.88 ± 11.57 years (3 males, 5 females; Table 1). The educational levels included elementary school (n = 1), junior high school (n = 2), senior high school (n = 3) and college (n = 2). Clinical examination revealed that symptoms/signs of HE included seizures (n = 4), psychological or psychiatric disorders (n = 3), stroke-like episodes (n = 2) and corticospinal abnormalities (n = 1). The clinical features of the HE cases are summarized in Table 2. The tests for antithyroid antibodies (conducted to satisfy the diagnostic criteria) revealed that anti-TPO levels were greater than 1,300 IU/ml in five HE patients; the other three patients had anti-TPO levels greater than 400 IU/ml (i.e., 1253.63 IU/ml, 1113.2 IU/ml and 495.3 IU/ml). Anti-TG levels were greater than 90 IU/ml in all eight cases. Three of the patients were hyperthyroid. Cerebral MRI showed that manifestations of HE varied from white matter alterations to hippocampus swelling to medial temporal lobe/hippocampus lesions (Figure 1(A, B), Figure 2(A, B)). The means ± SD of the neuropsychological test scores are summarized in Table 3. As expected, HE patients had greater cognitive impairment than the control subjects in most tests (P < 0.05), except EBPM (2; non-language response), TBPM (2), BNT, VFT (fruit), stick test (copy) and CDT. Apart from BNT, there were no significant differences between mild AD and HE patients.

Discussion

Since the first description of Hashimoto's encephalopathy (HE) by Brain et al. [27] in 1966, around a hundred cases have been reported worldwide. Most of the published case reports comprise small samples, and there is no comprehensive statistical analysis of the clinical features associated with HE. In fact, HE remains problematic in terms of its pathophysiology, diagnosis and treatment. The diagnostic criteria proposed by Peschen-Rosin et al. in 1999 [28] encompassed seizures, psychiatric disorders and focal neurological deficits, with elevated thyroid antibodies and an excellent response to steroids. However, this classification of HE underestimates the significance of memory loss and the pathological differences observed on MRI, and overestimates the response to steroids. More and more studies are showing that cognitive impairment is one of the main manifestations of HE, but this aspect has been overlooked due to the multiple and protracted neurocognitive manifestations associated with this condition [27][28][29]. In addition, only 50% of patients with HE are responsive to corticosteroids [29].
The current diagnostic criteria confirm that Hashimoto's encephalopathy is a diagnosis of exclusion [30]. In the current study, patients were diagnosed with HE on the basis of typical clinical manifestations and a high titer of antithyroid antibodies (especially anti-TPO) in serum, and after the exclusion of other causes. We tested for most infectious and immune antibodies, and the results were negative. In addition, we tested for anti-NMDA receptor antibodies and voltage-gated potassium channel (VGKC) antibodies. A serum screening for these two antibodies was routinely done for the differential diagnosis of anti-NMDAR encephalitis and limbic encephalitis, which show manifestations similar to those of HE and are therefore easily confused with it. In our opinion, patients with anti-NMDAR encephalitis tend to have more protracted courses and more severe outcomes. Autonomic instability such as hypoventilation and low-grade fever is usually noted in anti-NMDAR encephalitis, but scarcely occurs in HE. Many anti-NMDAR encephalitis cases are also typically women with ovarian teratomas; our female patients underwent routine imaging examinations, and no teratomas were found. All our patients had negative anti-NMDA receptor antibodies and showed a better prognosis. We could therefore exclude anti-NMDAR encephalitis on the basis of both the clinical manifestations and the laboratory examinations. Moreover, most cases of limbic encephalitis are paraneoplastic; since our patients had negative tumor antigen and neuronal nuclear antibody screening (including anti-Hu IgG, anti-Yo IgG, anti-Ri IgG and anti-CV2 IgG), negative serum VGKC antibodies, and elevated titers of antithyroid antibodies (anti-TPO levels greater than 1,300 IU/ml in five HE patients and greater than 400 IU/ml in the other three; anti-TG levels greater than 90 IU/ml in all eight cases), limbic encephalitis was not our first consideration. It has been reported that the disease may present in two clinical types: a sudden vasculitic type presenting with stroke-like episodes, and a progressive subacute type associated with cognitive dysfunction, confusion and memory loss [30]. Our study focused on the cognitive impairments of the second type; the vasculitic type of HE was scarcely included. Some reports have suggested that there are elevated CSF protein levels in HE. Lumbar punctures were not performed in this study, but future studies should consider including them for laboratory diagnosis. As there is no gold-standard diagnostic test for HE, neuropsychological tests are an important tool that can help to confirm the diagnosis and assess the response to corticosteroid treatment [6]. We carried out a review of international papers investigating cognitive impairment in HE; the relevant research was limited. Most of the literature consists of case reports, using only Mini-Mental State Examination (MMSE) scores for a rough assessment [31,32]. There are few case reports that entail a comprehensive examination of cognitive impairment patterns to estimate the steroid response. Cummings [33] reports that pretreatment testing revealed global cognitive impairment, with deficits in: overall mental status; simple and complex attention; nonverbal reasoning; line orientation; and list, story and figure learning and recall.
In addition, Mazzu [34] and Fukunaga [35] showed a diffuse pattern of cognitive impairment that eventually progressed toward a selective deficit in executive functions and procedural memory. They also suggested that this pattern of cognitive impairment, characterized by widespread brain involvement, primarily implicated the frontal lobe [34,35]. To investigate the cognitive impairment pattern of HE in a systematic manner, we recruited HE patients with mild, pretreatment cognitive impairment, and compared the results with those of mild AD patients and normal elderly controls. To the best of our knowledge, this is the first statistical analysis of cognitive impairment associated with Hashimoto's encephalopathy. The results showed that the scores of the HE group were worse than those of the normal control group in most tests. However, there was no difference between these two groups in terms of BNT. This suggests that there was an overall impairment in cognitive function for the HE patients, but that long-term semantic memory (naming ability) remained unaffected. This pattern of cognitive impairment was associated with diffuse brain involvement, but rarely implicated the temporal cortex. However, when comparisons were made between the HE group and the mild AD group (who were matched for education, gender and MMSE scores), there was no significant difference in performance, indicating that HE might show a pattern of cognitive impairment similar to that of AD. The patients' cognitive function improved to some extent during or after steroid treatment; MMSE scores rose to 28-30/30. This is consistent with the generally accepted viewpoint that treatment with steroids promotes marked clinical improvement and usually implies a good prognosis [27][28][29][30]34,35]. This is of considerable importance for patients with rare but treatable causes of encephalopathy presenting with acute or subacute cognitive decline. Unfortunately, a comprehensive follow-up neuropsychological battery was not carried out. Further studies are needed to verify the detailed pattern of improvement following treatment. MRI is widely used as a non-invasive means of studying pathological changes in the CNS, and it can provide a unique insight into the pathological status of HE. Among the eight cases of HE in this study, MRI revealed two types of manifestation associated with HE: the more frequently reported leukoencephalopathy-like type, and a limbic encephalitis-like type. Two of the patients (cases 3 and 7) showed the former manifestation type, with widespread periventricular hyperintense signals in the cerebral white matter (Figure 1(A, B)). Similar pathologies were reported in two other case reports. One was a 65-year-old woman with subacute deterioration of cognitive function, whose brain MRI revealed diffuse high intensity in the white matter on diffusion and T2-weighted images, which mimicked leukoencephalopathy [36]. The other was a 50-year-old man presenting with lower motor neuron symptoms that had evolved over three years, and changes in behavior associated with attentional and cognitive impairment. This patient's cranial MRI also revealed multiple subcortical white matter lesions [37]. In the current study, limbic encephalitis pathology was observed in cases 2 and 4. The foci of hypersignal on FLAIR for these patients were the bilateral/unilateral medial temporal lobe and neighboring structures (i.e., hippocampus and amygdala; Figure 2(A, B)). Similar findings have been reported previously. Shindo et al.
reported an HE case whose brain FLAIR MRI demonstrated near-symmetrical high signal intensity areas in the bilateral mesial temporal lobes [38]. Lesions in the neighboring structures of the limbic lobe have occasionally been reported, such as localized symmetrical lesions extending from the bilateral pallidum to the genu of the internal capsule [35]. The remaining four cases in this study showed mild or severe hippocampus swelling but no abnormal signal intensity. On the basis of the neurological/psychiatric impairment, we believe that a lesion should be present in these cases, which may be discovered using functional MRI (fMRI), e.g., blood oxygen level dependent (BOLD) fMRI, or diffusion tensor imaging (DTI). On the whole, the pathology revealed by MRI showed no involvement of the temporal cortex, which is consistent with the cognitive impairment findings showing no deterioration in long-term semantic memory function. However, it should be noted that this article only describes the observed phenomena at a specific time point rather than over a period of time. We are currently performing a longer-term follow-up of patients after treatment to evaluate any residual lesions and treatment efficacy. Therefore, more studies are needed to confirm the conclusions of our study and to extrapolate these patients' results to larger cohorts.

Conclusions

Our findings suggest that the impairments in cognitive functions for the HE patients were similar to those of AD patients, except for the retained naming ability. The MRI of HE showed a leukoencephalopathy-like or limbic encephalitis-like type; the lesions did not affect the temporal cortex, which plays a role in naming ability.

Author Contributions

Conceived and designed the experiments: JW JZ XW. Performed the experiments: JW JZ LX YS. Analyzed the data: JW QG. Contributed reagents/materials/analysis tools: LX YS. Wrote the paper: JW.
Cerebral venous sinus thrombosis and thrombocytopenia due to heparin-independent anti-PF4 antibodies after adenovirus infection

Cerebral venous sinus thrombosis (CVST) is a rare cerebrovascular disorder predominantly observed in adults, but it can pose unique challenges in pediatric patients.1 The underlying etiologies of CVST are diverse, encompassing hypercoagulable states, infections, and trauma. While concomitant thrombocytopenia is uncommon in CVST cases,2 instances of CVST and thrombocytopenia have been reported following administration of the adenoviral vector-based vaccines ChAdOx1 nCoV-19 and Ad26.COV2.S.3-5 This condition has been termed vaccine-induced immune thrombotic thrombocytopenia (VITT), and it is caused by antibodies against platelet factor 4 (PF4). In this paper, we report on a pediatric case of severe CVST and thrombocytopenia emerging one week after an adenovirus infection. Recurrent occlusion and profound thrombocytopenia, which responded to IVIG therapy, were observed. Platelet-activating anti-PF4/heparin antibodies were identified in the patient's serum. Informed signed consent to publish the case was obtained from the parents of the patient. Our patient is a 7-year-old girl who presented to the family physician with a sudden onset of severe headache and vomiting. One week prior, she had had fever and conjunctivitis. There was no cranial trauma in the medical history. She was initially hospitalized in a regional hospital, where computed tomography (CT) detected bilateral frontal subdural hematoma. Laboratory investigation showed isolated thrombocytopenia (11 × 10^9/L). After 2 days of hospitalization, she was transferred to the pediatric intensive care unit at our university hospital. At admission, D-dimer was 54 µg/ml fibrinogen equivalent units (FEU), and fibrinogen (40 mg/dl) as well as factor XIII (33%) were decreased. Infection-associated immune thrombocytopenia was suspected. Anti-platelet antibodies, tested using the monoclonal antibody immobilization of platelet antigens assay, and anti-phospholipid antibodies were not detected. Adenovirus infection was detected in the throat swab by PCR. PCR tests for the following viruses were negative: SARS-CoV-2, Influenza A, Influenza B, RSV, HMPV, Rhinovirus, HSV-1, HSV-2, Enterovirus, CMV, and EBV. Bacterial and fungal cultures were also negative. Magnetic resonance imaging with angiography revealed an extensive thrombosis in the superior sagittal sinus, partially affecting the sigmoid sinus and cortical veins. On day 4, interventional thrombectomy was successfully performed, restoring orthograde drainage from the anterior through the middle and distal segments (Figure 1A-B). After the procedure, the patient experienced retroperitoneal bleeding and subsequently hemorrhagic shock, and received platelet and erythrocyte concentrates. Interventional coiling of the inferior epigastric artery was required to stop the bleeding. After thrombectomy, D-dimer decreased to 3.3 µg/ml. Cranial CT was conducted due to the deterioration of neurological symptoms, revealing rethrombosis of a similar extent to the initial presentation, along with congestive bleeding. Thrombocytopenia, elevated D-dimer and decreased fibrinogen levels indicated uncontrolled hypercoagulation. Therapeutic anticoagulation with unfractionated heparin (UFH) was started, and a second mechanical thrombectomy was performed on day 5.
The platelet count remained below 50 × 10⁹/L despite several platelet transfusions. Given the challenging course with bleeding complications and the suspicion of immune thrombocytopenia, the patient received high-dose intravenous immunoglobulin (a total of 45 g IVIG) on days 9 and 10, leading to a swift normalization of the platelet count (>100 × 10⁹/L) in the following days (Figure 1C). On day 12, hematoma evacuation and decompressive craniectomy were performed due to progressive subdural hematoma. In the following course, the patient showed gradual improvement. Sonographic monitoring showed a stable retroperitoneal hematoma. Coagulation parameters upon IVIG and under anticoagulation showed significant improvement.

Discharged on day 33, the patient demonstrated a cooperative demeanor, mild word-finding difficulties, rest tremor, and normal muscle tone and strength. However, concentration and perseverance were still clearly reduced. Anticoagulation was switched to low molecular weight heparin at discharge. The rare co-occurrence of CVST with thrombocytopenia prompted our consideration of a rare variant of HIT, known as spontaneous HIT, which can manifest without prior heparin exposure. Additionally, heightened awareness in our laboratory, driven by the emergence of VITT cases 2 years ago, which also often present with thrombocytopenia and CVST, led us to consider an anti-PF4 antibody related disorder. The retrospective investigation of the sample from day 5 of hospitalization showed a strong IgG PF4/heparin EIA reaction (Zymutest HIA IgG, Hyphen Biomed, Neuville-sur-Oise, France; optical density (OD) 2.8, normal range 0-0.5). A modified heparin-induced platelet activation (HIPA) assay was performed with the addition of exogenous PF4, as previously described. 6 The HIPA assay was negative with a low concentration of heparin, making heparin-induced thrombocytopenia (HIT) very unlikely (Figure 2A). In contrast, the modified HIPA was positive (platelet aggregation within 5 min in the presence of PF4 and within 15-35 min without exogenous PF4), a serological pattern that mimics VITT (Figure 2A). 3

Antibody-induced procoagulant platelets, determined by expression of P-selectin and phosphatidylserine (PS) externalisation on the platelet surface, were analyzed using a flow cytometer as previously described. 3 Antibody-mediated procoagulant platelet formation was observed in the absence of heparin, was decreased in the presence of a low concentration of heparin, and was completely inhibited by a high concentration of heparin (Figure 2B). Additionally, the serum-induced procoagulant phenotype was inhibited by IV.3 (Fcγ receptor IIa blocking monoclonal antibody anti-CD32 [moAb IV.3; Stemcell Technologies, Vancouver, Canada], 20 µg/ml) or immunoglobulin (IVIG, 30 µg/ml), indicating FcγRIIA dependency (Figure 2B). Increased PS externalization can initiate plasmatic coagulation and subsequently increased thrombin generation on the platelet surface. 8 The ability of the patient's serum to induce thrombosis was investigated using a novel ex vivo model for antibody-mediated thrombosis (utilizing the BioFlux 200 system from Fluxion Biosciences, Alameda, USA). 7 The patient's serum induced significant thrombus formation with increased fibrin deposition compared to healthy control (mean cumulative area of thrombus (%SAC) ± standard error of the mean (SEM): 2.3 ± 0.36 vs. 0.9 ± 0.2, p < 0.01; Figure 3). Thrombus formation was markedly inhibited by IV.3 (%SAC ± SEM: 0.02 ± 0.02) and IVIG (%SAC ± SEM: 1.29 ± 0.21).
Our data suggest that anti-PF4 antibodies can be generated after adenovirus infection even without previous aberrant exposure to heparin or a COVID-19 vaccine. These antibodies appear to harbor the ability to induce thrombosis in a mechanistically similar way to VITT antibodies. Most importantly, our ex vivo data emphasize the potential of IVIG to prevent antibody-mediated thrombosis and thrombocytopenia.

Antibodies against PF4 lead to HIT and VITT, in which the immune tolerance to PF4 is disrupted, resulting in clonal expansion of B cells and subsequent secretion of anti-PF4 antibodies. 8 Recently, platelet-activating anti-PF4 antibodies were detected in an unvaccinated patient with monoclonal gammopathy and multiple thrombotic complications. 9 Similar to our case, Warkentin et al. very recently published 2 cases (a pediatric and an adult patient) developing anti-PF4 antibodies about one week after adenovirus infection. 10 Both patients had thrombocytopenia and thrombotic events (fatal CVST in the child and multiple arterial and venous thromboses in the adult). Our patient did not have previous heparin or COVID-19 vaccine exposure. HIT could also be ruled out. No additional risk factors for CVST were identified. The trigger for the development of anti-PF4/heparin antibodies in our patient is not clear. It has been suggested that adenovirus or components of the vaccine might be responsible for the development of anti-PF4 antibodies in VITT patients. 11 Anti-PF4 antibodies from VITT patients recognize complexes of adenovirus hexon proteins and PF4. 11 Furthermore, ChAdOx1 can bind to PF4 as well as the coxsackievirus and adenovirus receptor (CAR), which also supports a procoagulant state of the platelets in the case of an adenovirus infection, analogous to VITT. 12 Concurrent pro-inflammatory factors may be the link to an enhanced immune response to PF4 and thus the formation of anti-PF4 antibodies in adenovirus infection.

Anticoagulation in CVST is challenging due to bleeding risk, with 40% of patients presenting with hemorrhagic infarct at diagnosis, making immediate heparinization difficult. In our patient, the presence of intracranial hemorrhage and severe thrombocytopenia led to a cautious approach and delayed initiation of heparin. Due to the highly procoagulant state, patients with HIT and VITT require therapeutic anticoagulation. Heparin is, however, contraindicated in HIT. Nonetheless, successful heparin use has been reported in VITT, 13 and therapeutic-dose heparin can disrupt the interaction between VITT antibodies and PF4 in vitro. 14 Importantly, our patient, despite high anti-PF4/heparin antibodies, did not develop new thrombosis after heparin treatment, and the platelet count remained stable.

IVIG is a well-established first-line treatment for patients with ITP, but more recently it has been of increasing interest in the treatment of HIT and VITT. 15 IVIG mitigates platelet activation via competitive FcγRIIA binding. 16 In VITT patients with CVST, IVIG therapy correlated with lower mortality. 13 We observed a rapid increase of the platelet count after IVIG therapy (Figure 1C), supporting an immune-mediated platelet activation as the underlying mechanism of the thrombocytopenia. IVIG can be considered in patients with antibody-mediated platelet activation and thrombocytopenia.
Our current study confirms the recent findings 10 that anti-PF4 antibodies may be responsible for a severe thromboembolic complication such as CVST with thrombocytopenia after adenovirus infection in the absence of prior exposure to heparin or a COVID-19 vaccine. This case underscores the importance of studying the role of anti-PF4 antibodies in thrombotic events beyond HIT and VITT. Further research is required to elucidate the underlying mechanisms, which could potentially impact the management of patients with unexplained thrombosis and thrombocytopenia.

Platelet-rich plasma (PRP) obtained from healthy individuals in a volume of 37.5 µL was subjected to a 60-minute incubation with serum (5 µl) from the patient, all while under rotation. Following this incubation period, the samples were labelled using 3,3′-dihexyloxacarbocyanine iodide (DiOC6, 2.5 μM; Sigma Aldrich, Saint Louis, USA), Alexa Fluor (AF) 647-Annexin A5 (at a 1:200 dilution), AF 546-labeled human fibrinogen (at a concentration of 8.5 μg/mL), and Hoechst 33342 (at a concentration of 3 μg/mL; Thermo Scientific, Carlsbad, USA). When specified, the PRP was preincubated with IV.3 (20 µg/ml) or IVIG (30 µg/ml). After the labelling procedure, the samples were reconstituted into autologous whole blood. Subsequently, the samples were recalcified and subjected to perfusion through microfluidic channels (BioFlux 200, Fluxion Biosciences, Alameda, USA) at a venous shear rate set at 250 s⁻¹ (equivalent to 10 dyn/cm²) for a duration of 10 minutes. Images were acquired at ×40 magnification in different fluorescence channels using a Zeiss Axio Observer 7 microscope. The acquired images were uniformly processed using adjusted threshold settings and exclusion of image artefacts with the Fiji image processing software.

Figure 1. Radiological images and course of the platelet counts throughout the hospitalization.

Figure 2. Patient serum induces platelet activation and procoagulant platelets. (B) Flow cytometry: the procoagulant platelet phenotype, determined by co-expression of P-selectin and phosphatidylserine (PS) on the platelet surface, was analysed after incubation with the patient's sera. Where indicated, platelets were pre-treated with IV.3 (Fcγ receptor IIA blocking monoclonal antibody) or immunoglobulin (IVIG). The patient's serum was tested with washed platelets from four healthy donors. Historical samples of patients with HIT (n = 3) and VITT (n = 3) are also shown in the figure. Abbreviations: HC, healthy control; **p < 0.01, ****p < 0.0001.
Energy and Utility Optimization in Wireless Networks with Random Access

Energy consumption is a main issue of concern in wireless networks. Energy minimization increases the time that network nodes work properly without recharging or replacing batteries. Another criterion for network performance is the data transmission rate, which is usually quantified by a network utility function. There is an inherent tradeoff between these criteria, and enhancing one of them can degrade the other. In this paper, we consider both Network Utility Maximization (NUM) and energy minimization in a bi-criterion optimization problem. The problem is formulated for Random Access (RA) Medium Access Control (MAC) in ad-hoc networks. First, we optimize the performance of the MAC and define utility as a monotonically increasing function of link throughputs. We investigate the optimal tradeoff between energy and utility in this part. In the second part, we define utility as a function of end-to-end rates and optimize the MAC and transport layers simultaneously. We calculate optimal persistence probabilities and end-to-end rates. Finally, by means of the duality theorem, we decompose the problem into smaller subproblems, which are solved at the node and network layers separately. This decomposition avoids the need for a central unit while sustaining the benefits of layering.

INTRODUCTION

In this paper, an ad-hoc network is considered where there is no infrastructure and intermediate nodes forward packets toward their destinations. The use of random access is common in such networks, since random access algorithms are inherently distributed: nodes themselves decide when to access the channel [1]. The main characteristic of random access is independent node transmissions. This characteristic results in both an advantage and a disadvantage. The advantage is that there is no need for a central controller, and the disadvantage is the possibility of collision. Collisions occur because, in the absence of a central controller, two or more nodes may transmit simultaneously so that their packets collide. Such collisions result in waste of both energy and bandwidth; thus, network parameters such as the persistence probabilities of the nodes should be adjusted in order to optimize bandwidth and energy consumption.

The importance of energy efficiency in ad-hoc networks stems from the multi-hop nature of the network. If nodes of an ad-hoc network run out of energy, some routes may become disconnected [2]; therefore the available energy of the nodes should be used to transmit as much information as possible. Another criterion for good performance of a network is network utility, which is a function of the channel share or rate allocated to each node. Network Utility Maximization (NUM) has recently received much attention in the literature [5], [6], [7]. It was first proposed by Kelly [5] in order to optimize end-to-end rates in wired networks. It has also been used in optimizing the transport layer of wireless networks [6], [7]. Nandagopal et al. [8] used a similar approach for proportionally fair channel allocation, and [9] developed the idea of optimizing persistence probabilities in random access wireless networks and designing MAC protocols.

Energy efficiency and utility maximization are important objectives in Wireless Sensor Networks (WSNs). A WSN collects information from different points of the field, and it is a performance criterion for a WSN to maximize the information collected from all regions of the network [3].
Minimizing energy is also important in WSNs, because it is usually impossible to recharge the batteries of WSN nodes, and when the batteries run out of energy so do the nodes [4]. Thus, WSNs need both utility maximization and energy minimization. In this paper, we investigate both energy minimization and NUM in a bi-criterion optimization problem. We propose distributed algorithms that can be used to trade off energy and utility. Energy minimization and lifetime maximization for wireless ad-hoc networks have been the focal point of many research activities [10], [11]. However, to the best of the authors' knowledge, our work is the first that considers energy minimization in random access networks. [6] and [9] have formulated and solved NUM for random access, but they have not considered energy consumption. The tradeoff between utility and network lifetime is investigated in [12], but it does not consider random access either.

The rest of the paper is organized as follows. In the next section the network model is presented. Then, in Section III we concentrate on MAC optimization and define utility as a function of link throughputs. The tradeoff between utility and energy consumption is also found. Cross-layer optimization of the MAC and transport layers is described in Section IV, where we optimize both layers in order to minimize energy and provide maximum utility at the transport layer. Section V contains numerical results and discusses the advantages of cross-layer optimization. We conclude the paper and review its contributions in Section VI.

II. NETWORK MODEL

Suppose a set of nodes, N, want to transmit their packets through their neighbors using the set of links L. Each node selects one of its links and transmits with probability $p_{ij}$, where $i$ is the transmitter index and $j$ is the receiver index. We denote the transmission probability of node $i$ by $P_i$, which is the sum of the persistence probabilities of its output links. The set of nodes that receive power from node $i$ is denoted by $N_i^{out}$, and the set of nodes from which node $i$ receives power is denoted by $N_i^{in}$. It is evident that if nodes hear each other symmetrically, then $N_i^{out} = N_i^{in}$; however, this assumption is not required in this paper. We also denote the set of nodes to which $i$ transmits by $O_i$ and the set of nodes that transmit to $i$ by $I_i$. We define the connectivity factor as the ratio of the communication range to the network dimension. Thus, as a node's power increases, its connectivity factor and number of out-neighbors, $|N_i^{out}|$, will increase. The network links are used by a set S of information sources. Source $s \in S$ uses a subset of links, $L(s) \subseteq L$, as a route to transmit its data; the set of sources that share link $(i,j)$ is denoted by $S(i,j)$. We suppose that link $(i,j)$ has a fixed capacity $c_{ij}$ and that the transmission rate of source $s$ is $y_s$.

III. OPTIMIZATION OF MAC

In this section we optimize the persistence probabilities in order to maximize the network utility function and minimize energy consumption. A common method for solving such a bi-criterion optimization problem and achieving Pareto optimal points is scalarization [13]. Using this method, we set the objective function to a linear combination of energy and utility with $\lambda_1 > 0$ and $-\lambda_2 < 0$ as coefficients:

$$\min_{p}\; f = \lambda_1 E - \lambda_2 U \quad \text{s.t.}\quad 0 \le p_{ij} \le 1,\; (i,j) \in L; \qquad P_i = \sum_{j \in O_i} p_{ij} \le 1,\; i \in N. \tag{1}$$

A negative coefficient is used for the utility function since minimizing $-U$ is equivalent to maximizing $U$. The constraints of (1) ensure that the optimal persistence probabilities have valid values and that the sum of the persistence probabilities of the output links of each node equals the transmission probability of the node.
The utility function, U, is defined as the sum of link utilities, and to achieve proportional fairness between links we use the same approach as [8] and [9]: we define the utility of a link as a logarithmic function of the link throughput,

$$U = \sum_{(i,j) \in L} \log x_{ij}. \tag{2}$$

In order to calculate the throughput of the links, $x_{ij}$, we suppose that successful packet reception at each node depends only on the transmissions of its in-neighbors. Therefore, a packet is received successfully if and only if neither the receiver nor any of the receiver's in-neighbors except the transmitter has sent a packet at the same time. Thus, the throughput of a link is the product of the successful reception probability and the link capacity, and is given by

$$x_{ij} = c_{ij}\, p_{ij}\, (1 - P_j) \prod_{k \in N_j^{in} \setminus \{i\}} (1 - P_k). \tag{3}$$

If the energy required to transmit a packet by node $i$ is $e_i$, the average energy consumption of node $i$ in one timeslot is $E_i = e_i P_i$. Thus the total energy consumption of the network is

$$E = \sum_{i \in N} e_i P_i. \tag{4}$$

Applying (2)-(4) in (1) and reordering terms, we have

$$f = \lambda_1 \sum_{i \in N} e_i P_i - \lambda_2 \sum_{(i,j) \in L} \Big[ \log c_{ij} + \log p_{ij} + \log(1 - P_j) + \sum_{k \in N_j^{in} \setminus \{i\}} \log(1 - P_k) \Big]. \tag{5}$$

For further simplification we prove the following lemma.

Lemma 1: The optimal link and node probabilities are related by $p_{ij}^* = P_i^*/|O_i|$ for all $j \in O_i$, $i \in N$.

Proof: First we show that the optimal link probabilities of node $i$ should be equal to each other. Suppose they are not, and replace each of them by their average $P_i/|O_i|$; this will not affect $P_i$, and therefore it will neither change $E$ nor violate the constraints. This conversion can only increase utility because, by concavity of the logarithm, $\sum_{j \in O_i} \log p_{ij} \le |O_i| \log(P_i/|O_i|)$, with equality when the link probabilities are equal; and since the number of output links of $i$ is $|O_i|$, we obtain $p_{ij}^* = P_i^*/|O_i|$ for all $j \in O_i$. ∎

If we apply Lemma 1 in (5), the objective becomes a function of the node transmission probabilities alone. With some algebraic manipulation, the first derivative of the objective with respect to $P_i$ is

$$\frac{\partial f}{\partial P_i} = \lambda_1 e_i - \frac{\lambda_2 |O_i|}{P_i} + \frac{\lambda_2 D_i}{1 - P_i}, \tag{7}$$

where $D_i$ denotes the number of links whose successful reception requires node $i$ to be silent. A degenerate case arises when no link other than node $i$'s own requires its silence ($D_i = 0$); in this special case, which we ignore hereafter, $P_i^* = \min(1, B_i/A_i)$ with $A_i = \lambda_1 e_i$ and $B_i = \lambda_2 |O_i|$. Otherwise, $\partial^2 f / \partial P_i^2 > 0$ and the Hessian $\nabla^2_P f$ is positive definite. Thus, problem (1) is a convex optimization problem with a unique solution, which is the stationary point of the problem and can be found by setting (7) equal to zero. According to the following lemma, this stationary point satisfies the constraints.

Lemma 2: $\partial f / \partial P_i = 0$ has a unique solution in the interval (0, 1).

Proof: With respect to (7), $\partial f / \partial P_i \to -\infty$ as $P_i \to 0^+$ and $\partial f / \partial P_i \to +\infty$ as $P_i \to 1^-$. Since $\partial f / \partial P_i$ is a continuous and increasing function of $P_i$, it becomes zero at exactly one point of the interval (0, 1). ∎

If the values of $\lambda_1$ and $\lambda_2$ are specified, then each node can solve $\partial f / \partial P_i = 0$ and obtain the optimal solution. In order to do so, each node requires only local information, such as the number of its incoming and outgoing links and the number of incoming links of its out-neighbors. It should be noted that $\lambda_1$ and $\lambda_2$ can be used to trade off energy and utility. For example, if we set $\lambda_1 = 0$ then energy is ignored and the problem becomes pure utility maximization, with solution $P_i^* = |O_i|/(|O_i| + D_i)$. If we set $\lambda_2 = 0$, the trivial solution $P_i = 0$ is obtained.
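As a concrete illustration of this per-node computation, the sketch below solves $\partial f / \partial P_i = 0$ by bisection, which Lemma 2 guarantees to succeed. Because the paper's closed-form expression for (7) was not fully legible, the derivative here is a reconstruction consistent with the throughput model (3), with $D_i$ (the number of links requiring node $i$'s silence) supplied as local information; it is a sketch under those assumptions, not a verbatim implementation of the original.

```python
def dfdP(P, lam1, lam2, e_i, n_out, n_silent):
    """Derivative (7) of the scalarized objective w.r.t. P_i.

    n_out    = |O_i|: number of outgoing links of node i
    n_silent = D_i  : number of links whose success requires i's silence
    (assumed reconstruction, consistent with the throughput model (3))
    """
    return lam1 * e_i - lam2 * n_out / P + lam2 * n_silent / (1.0 - P)

def solve_node_probability(lam1, lam2, e_i, n_out, n_silent, tol=1e-10):
    """Bisection on (0, 1): by Lemma 2 the derivative is continuous,
    increasing, negative near 0 and positive near 1, so a unique root
    exists and bisection converges to it."""
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dfdP(mid, lam1, lam2, e_i, n_out, n_silent) < 0.0:
            lo = mid   # derivative still negative: root lies to the right
        else:
            hi = mid   # derivative non-negative: root lies to the left
    return 0.5 * (lo + hi)

# Example: a node with 3 outgoing links whose silence 5 links depend on.
P_star = solve_node_probability(lam1=5.0, lam2=1.0, e_i=1.0,
                                n_out=3, n_silent=5)
print(f"optimal transmission probability P_i* = {P_star:.4f}")
```

Setting lam1 = 0 in this sketch reproduces the closed-form utility-maximizing probability $|O_i|/(|O_i| + D_i)$ noted above, which provides a simple sanity check.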
IV. CROSS-LAYER OPTIMIZATION OF MAC AND TRANSPORT

In this section we define the network utility as a function of the end-to-end rates, i.e., $U = \sum_{s \in S} \log y_s$, and optimize the MAC and transport layers jointly:

$$\min_{p, y}\; \lambda_1 E - \lambda_2 \sum_{s \in S} \log y_s \quad \text{s.t.}\quad \sum_{s \in S(i,j)} y_s \le x_{ij},\; (i,j) \in L; \qquad 0 \le p_{ij} \le 1; \qquad P_i = \sum_{j \in O_i} p_{ij} \le 1. \tag{9}$$

The first constraint limits the sum of the source rates that pass through link $(i,j)$ to be less than the link throughput. The remaining constraints ensure the validity of the link and node transmission probabilities. In this problem, the objective function is a convex function of the node transmission probabilities and source rates, and constraints 2 and 3 are linear. However, the first constraint is a nonlinear and non-convex function.

In order to formulate the problem as a convex problem, we change variables to $z_s = \log(y_s)$ and apply the logarithmic function to the first constraint (such a transformation does not affect the constraint, since log(·) is an increasing function). It is shown in [13] that $\log \sum_i e^{\delta_i}$ is a convex function of $\delta$. Also, $\log(x_{ij})$ can be computed using (3) and is a logarithmic function of the link transmission probabilities. Therefore, the transformed problem (10) is a convex problem and we can achieve its global optimum using algorithms such as the interior point method or Sequential Quadratic Programming (SQP) [14]. However, these algorithms require a central unit which collects information on the network topology, solves (10), and finally sends the results to the nodes. In the next subsection, we propose a distributed solution in which the nodes reach the optimal point through an iterative process.

A. A Distributed Algorithm

We use the dual decomposition approach to obtain a distributed algorithm. First, we write down the Lagrangian function associated with problem (9), where $\mu_{ij}$ is the Lagrange multiplier on link $(i,j)$:

$$L(p, z, \mu) = \lambda_1 E - \lambda_2 \sum_{s \in S} z_s + \sum_{(i,j) \in L} \mu_{ij} \Big( \log \sum_{s \in S(i,j)} e^{z_s} - \log x_{ij} \Big).$$

Note that in this Lagrangian we do not relax the transmission probability constraints. The Lagrange dual function can be decomposed into two parts:

$$g_1(\mu) = \min_{p}\; \lambda_1 E - \sum_{(i,j) \in L} \mu_{ij} \log x_{ij}, \tag{12}$$

$$g_2(\mu) = \min_{z}\; -\lambda_2 \sum_{s \in S} z_s + \sum_{(i,j) \in L} \mu_{ij} \log \sum_{s \in S(i,j)} e^{z_s}, \tag{13}$$

where the minimization in (12) is subject to the transmission probability constraints. The master dual problem is as follows:

$$\max_{\mu \ge 0}\; g_1(\mu) + g_2(\mu). \tag{14}$$

It is apparent that problem (12) is a function of the MAC layer parameters and (13) is a function of the transport parameters. Thus, we have decomposed the dual problem into MAC and transport layer problems. The Lagrange multipliers $\mu_{ij}$ are messages that exchange information between the MAC and transport layers. Since there is no compact solution for (12), we use the projected gradient method to solve it. Although the gradient method is not very fast (in comparison with second-order algorithms such as Newton's method), its main benefit is that it requires only local information. Consequently, we update $p_{ij}$ as follows:

$$p_{ij}(t+1) = \Big[ p_{ij}(t) - \alpha \frac{\partial L}{\partial p_{ij}} \Big]_{\mathcal{P}},$$

where $\alpha$ is the step size and $[\cdot]_{\mathcal{P}}$ denotes projection onto the feasible probability set. In order to solve (13), we first rearrange its terms and set the derivative with respect to $z_s$ to zero; since $\lambda_2 > 0$ and $\mu_{ij} \ge 0$, the resulting rates satisfy $y_s > 0$. In order to find $\mu$ we should solve (14). This is similar to the approach of [7] for the optimization of the transport layer; although [7] considers utility maximization, only a few changes are required in our case. Thus, we apply the projected gradient method and update the Lagrange multipliers as follows:

$$\mu_{ij}(t+1) = \Big[ \mu_{ij}(t) + \kappa \Big( \log \sum_{s \in S(i,j)} e^{z_s(t)} - \log x_{ij}(t) \Big) \Big]^{+},$$

where $\kappa$ is the step size and $[\cdot]^{+}$ denotes projection onto the nonnegative reals.
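To make the message flow of this decomposition concrete, the following toy sketch iterates the price-based coordination on a single link with fixed throughput. For brevity it uses the untransformed constraint $\sum_s y_s \le x$ and a log utility, so the transport subproblem has the closed-form response $y_s = \lambda_2/\mu$; the step size and all numerical values are illustrative assumptions, not the paper's simulation settings.

```python
# Toy single-link dual decomposition: sources respond to the link
# price mu; the link adjusts its price by dual (sub)gradient ascent
# until demand matches the throughput delivered by the MAC layer.

LAM2 = 1.0        # utility weight lambda_2
X_LINK = 2.0      # link throughput delivered by the MAC layer (fixed here)
N_SOURCES = 4
KAPPA = 0.05      # dual step size (illustrative)

mu = 1.0          # initial link price
rates = []
for t in range(500):
    # Transport subproblem: each source maximizes lam2*log(y) - mu*y,
    # whose first-order condition gives y = lam2 / mu.
    rates = [LAM2 / mu] * N_SOURCES
    # Dual ascent: raise the price when the link is over-subscribed,
    # lower it (but keep it positive) when capacity is slack.
    mu = max(1e-6, mu + KAPPA * (sum(rates) - X_LINK))

print(f"price mu = {mu:.3f}, per-source rate = {rates[0]:.3f}")
# Settles at mu = N*lam2/x = 2.0 and y_s = 0.5, i.e. the four sources
# share the link's throughput equally.
```

In the full algorithm the same price exchange runs on every link, while the MAC layer simultaneously adjusts its persistence probabilities by the projected-gradient step above.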
V. NUMERICAL RESULTS

A. MAC Optimization

We suppose a network with 100 nodes and calculate the optimal transmission probabilities from (7) for specified values of $\lambda_1$ and $\lambda_2$. As shown in Fig. 1(a), the optimal transmission probability is small in dense regions, in order to avoid packet collisions. By solving problem (1) for different values of $\lambda_1$ and $\lambda_2$ with SQP, we can find the optimal tradeoff curve of energy and utility. These Pareto optimal points are sketched in Fig. 1(b), where we show the utility relative to its maximum possible value. In this simulation, we have supposed that the connectivity factor of each node is a random variable with uniform distribution over [0.15, 0.25]. The results are also averaged over 20 networks. Comparison of the optimal point with the uniform node transmission probability case shows that the optimal solution is about 12% more energy efficient. In addition, it is about 25% more energy efficient than the uniform link transmission probability scenario.

B. MAC and Transport Optimization

We consider the network shown in Fig. 2(a) for cross-layer optimization of the MAC and transport layers and compute the optimal tradeoff curve by setting $\lambda_2 = 1$ and varying $\lambda_1$ over [0, 30]. By setting $\lambda_1 = 0$, $\lambda_2 = 1$, the problem becomes a utility maximization problem. Therefore, this point shows the maximum achievable network utility. We have also compared layer-by-layer and cross-layer optimization. In layer-by-layer optimization, we first computed the transmission probabilities by MAC utility and energy optimization, and then, with the achieved link throughputs, we optimized the source rates. It can be seen that cross-layer optimization is about 30% to 50% more energy efficient. It can also be seen that the maximum achievable utility with cross-layer optimization is 1 unit greater than with layer-by-layer optimization. We have also simulated the distributed algorithm given in Section IV.A. We set $\lambda_1 = 5$, $\lambda_2 = 1$, the transport optimization step size to 2, and the MAC optimization step size to $\alpha = 10^{-4}$. It is shown in Fig. 3 that the link transmission probabilities and source rates converge after 300 iterations.

VI. CONCLUDING REMARKS

It is shown in this paper that mathematical programming can be applied to optimize the parameters of random access networks. We also showed that, after formulating the problem and making some changes of variables, the problem can be decomposed between network layers. One of the main contributions of this paper is the computation of the optimal tradeoff curve of energy and utility. These curves not only can be used in network design, but also specify the achievable utility and energy. In this paper we first optimized the MAC layer of wireless random access ad-hoc networks, where nodes calculate optimal transmission probabilities from local information. Energy minimization and utility maximization are special cases of the solved problem. We also considered cross-layer optimization of the MAC and transport layers, where utility is a function of the source rates. We proposed a distributed algorithm that decomposes the problem between layers and nodes. Our numerical results show that the algorithm assigns resources to links that carry a large amount of information.
An Instrument for Assessing E-Commerce Web Object Designs against Guidelines for Older Adults

Older adults are often reported to have difficulties with navigating websites. In a previous study, we found that one of the difficulties encountered by older adults while navigating an online grocery shopping site was an inability to recognise the 'add to cart' button. We propose to conduct a study to assess the existing button designs applied in e-commerce websites against principles and guidelines for designing for older adults. In this paper we focus on the development of the instrument to be used in the assessment. This instrument includes assessment criteria relating to button visibility, readability, understandability and navigability, which are drawn from existing principles and guidelines on web design for older adults.

INTRODUCTION

The older population is increasing, and the number of people using the Internet within this population is also increasing. Online shopping is among the top ten computer activities of older adults when they are online (Vroman et al. 2015). Online shopping is seen as an attractive alternative when physical disabilities hinder older adults from doing traditional in-store shopping. These include difficulties in driving, lifting heavy loads (Morganosky & Cude 2000) and being able to move and walk at a desirable pace (Meneely et al. 2009).

For e-commerce websites, successful web navigation has potential monetary value, while frustration due to navigation difficulties (e.g. disorientation) could lead to abandonment of the site and hence lost revenue. Although websites employ a variety of web navigation features, such as core navigation (i.e. the main menu), links, breadcrumbs and others, to provide access to information or indicate location, older adults still often experience navigation difficulties (Murata & Moriwaka 2008; Sjölinder & Höök 2000). Consistent with these previous reports, we also found, in an earlier exploratory study, that older users experienced navigation difficulties in an online grocery shopping site. Difficulty in identifying and selecting 'add to cart' buttons was found to be one of these difficulties. Yet the 'add to cart' button plays an important role in e-commerce websites, as it is a crucial element in enabling actual sales to start.

A combination of ever-advancing technology, an ageing population, and older adults' experience of age-related decline in abilities suggests that a lag in technology adoption among the older population may persist for some time. Although this gap in older adults' abilities to use the latest technologies may never be completely eliminated, minimizing it is possible by providing better design guidelines and tools that cater for older adults' needs and capabilities (Charness & Boot 2009). For web design in particular, more appropriate design could improve ease of use and help older users to interact with websites (Lin 2004). A better understanding of the designs practised on existing websites can provide a good starting point for improvements to take place. This paper describes the design of an instrument for systematically assessing current practices in 'add to cart' button design, with a view to highlighting areas that need improvement in respect of designing for older adults.
The instrument draws on available principles, guidelines and recommendations for general web design for older people, but focuses particularly on button design for its relevance to 'add to cart' button design in e-commerce sites.

DESIGNING THE EVALUATION INSTRUMENT

The design of the evaluation instrument involved three stages: determining the main areas to include in the evaluation, identifying relevant design guidelines relating to those areas, and refining the evaluation instrument.

Determining the main areas for evaluation

Becker (2004) notes that vision, cognition, and motor skills play an important role in helping older users to use a website. However, these abilities deteriorate with age. Human visual ability is said to start declining between 30 and 40 years of age, and to worsen significantly around the age of 65. A person may experience a decline in their ability to adapt to darkness, in illumination sensitivity and visual acuity, and may also experience hypersensitivity to glare as well as a reduction in the size of their visual field (Fisk et al. 2004). To accommodate changes in visual ability, the aspects to be considered when designing systems relate primarily to visibility and readability. For example, Fisk et al. (2004) suggest that text should have appropriate font size, style, spacing, and contrast ratio.

Cognitive ability also decreases with age (Biswas & Langdon 2013), and short-term memory problems are clearly seen with ageing (Arch 2008; Saldaño et al. 2014). As information is processed more slowly, older adults' response times may increase (Raza & Sahar 2013), and this is possibly the reason why older adults' navigation time is twice that of younger people (Sjölinder & Höök 2000). Problems relating to cognitive abilities can also be seen when a technology with a complex interface design is presented to older adults (Harte et al. 2014). Complex interface designs, leading to difficulty or inability to decipher the meaning of the interface, are one of the barriers to technology adoption for older adults. Therefore, designs for older users should be easy to interpret and understand.

Ageing may also affect a person's physical ability, for example in terms of their control of movement. Older adults experience declines in the ability to control body position or movement, contributing to their being less precise, having slower responses and being more error-prone (Fisk et al. 2004). These affect task performance, and consequently older adults need more time to perform tasks compared to younger people (Sjölinder et al. 2005; Sjölinder & Höök 2000). For example, a study by Raza and Sahar (2013) that investigated the usability and functionality of mobile phones for older adults found that small buttons and small displays caused difficulties in using the technology. Thus, objects should be big enough for older users to click on and navigate.

Taking into consideration the deterioration in visual, cognitive and physical abilities that people experience in older age, visibility, readability, understandability and navigability were therefore identified as the main areas to be included in the evaluation instrument.

Identifying relevant design guidelines

After the main areas to be included in the evaluation instrument were determined, the next step was to identify relevant design guidelines relating to each area. The guidelines were to be drawn from existing sources, and hence a review of relevant sources was conducted.
As mentioned by Kurniawan and Zaphiris (2005), sources of web design guidelines can be derived from two main streams, that is, academia and industry, and these formed the basis for the selection of sources. The sources used in this study are (i) relevant principles and guidelines on designing websites for older adults (Arch & Abou-Zahra 2010; Hodes & Lindberg 2002), (ii) academic research discussing design for older users, design of e-commerce websites, and design of web buttons (Zaphiris et al. 2007; Najjar 2011; Burt & Gibbons 2011; Wells 2003), and (iii) recommendations by practitioners (designers/developers) on the design of 'add to cart' buttons (Bustos 2007; Chaparro 2002; Grath 2013; Messmer 2015; Naidu & Chaparro 2007).

The Web Accessibility Initiative (WAI) develops guidelines for Web accessibility, and its Web Content Accessibility Guidelines WCAG 2.0 (Caldwell et al. 2008) are widely accepted in web design. WCAG also includes guidelines and techniques for designs that work better for older users with accessibility needs due to ageing (Arch & Abou-Zahra 2010). Guidelines for designing websites targeting older users have also been developed by the National Institute on Aging and the National Library of Medicine (NIA/NLM) (Hodes & Lindberg 2002). These guidelines have been cited in many articles, for example in Hart et al. (2008), which examined the adherence of 40 websites designed for older adults to the guidelines and found that higher success rates on tasks performed were associated with the websites that were most compliant with the 'senior-friendly' guidelines. Furthermore, Zaphiris et al. (2007) have published the SilverWeb Guidelines, which extend their previous work (Kurniawan & Zaphiris 2005).

Najjar (2011) offers guidance on designing e-commerce websites, covering major sections such as registration, catalogue, and checkout. The paper includes a discussion of the design of 'add to cart' buttons. Academic articles discussing button designs specifically were also included in the review of sources of guidelines. Burt and Gibbons (2011) discussed the effects of donation button designs on trust. Through website surveys, several design features including shape, location, the word used on the button, and the icons used were reviewed. The results showed that the donation button designs were used to explain functions and to capture attention, rather than to facilitate trust. Only when appropriate information was associated with a button was there an increase in trust. Wells (2003) examined the location of the chat request button for academic chat reference services available in online academic libraries with regard to the usage of the services. Based on data collected through a longitudinal study, it was found that buttons, when placed at appropriate locations, could increase service usage.

Articles from the industry perspective that have discussed 'add to cart' button designs were also examined. An evaluation of the 'add to cart' button designs practised on e-commerce websites in 2006 was documented (Bustos 2007), and the same websites were re-evaluated in 2014 (Messmer 2015). Among the aspects evaluated were the text, icons, size, shape, and colour used for buttons. Some of these sources provide overall web design recommendations, while others specifically discuss button designs and 'add to cart' button designs.
While the general web design recommendations are not specifically about buttons, in many cases they still apply to button designs. For example, 'use high contrast between text and background' applies equally to buttons as it does to overall web design. From the identified sources, recommendations relevant to button designs were extracted and grouped into the main criteria of evaluation defined in 2.1. For example, 'use large buttons' was extracted from the NIA/NLM guidelines and grouped under navigability, as large buttons, which are easier to click on, can ease navigation. The first version of the evaluation instrument consisted of the derived list of recommendations, which were scored on a 'yes/no' basis in terms of whether or not button designs met the recommendations.

Refining the evaluation instrument

The instrument was refined by reducing redundant guideline recommendations and by revising the instrument through a pilot study. To reduce redundancy, any recommendations that carried similar meanings were merged. For instance, one suggestion was to 'include concise text labels with icons' (Hodes & Lindberg 2002). The two components of this suggestion, (i) use concise text, and (ii) use text and icons together, were merged with two suggestions from other sources that carried similar meanings: 'provide descriptive labels' (Arch & Abou-Zahra 2010) and 'combine text with graphic/icon (e.g. shopping cart)' (Chaparro 2002; Naidu & Chaparro 2007; Burt & Gibbons 2011).

The evaluation instrument was piloted on five websites. The results from the pilot assessment highlighted further revisions to be made to the instrument. One of the revisions involved the visibility criterion 'use a different colour from the surrounding (background)' (Hodes & Lindberg 2002). This was revised such that the evaluation instrument would specifically take note of the button colour and the surrounding colour. The guideline that suggested 'any link should be visually distinct' (Arch & Abou-Zahra 2010; Bustos 2007) was further itemised as two criteria relating to buttons being 'visually distinct': 'use a different shape from other elements' and 'use a different colour from other elements', as both shape and colour can influence visibility, since these attributes can help with attention (Wolfe & Horowitz 2004). The assessment of the button shape criterion, to use a 'rectangle with rounded corners' (Messmer 2015), was also revised from being a 'yes/no' response to taking note of the shape used and whether or not the corners were rounded (if rectangular).

Regarding button location, the published guidelines only suggest locating the button appropriately (Wells 2003); however, this is wide open to interpretation. The evaluation instrument was revised to include an item for matching the design layout of the evaluated page to one of the five design layouts described in Bryant and Jones (2012). In the case that no suitable layout matched the page being evaluated, a new layout could be added to the design layouts list. One font guideline was to 'use non-condensed typeface' (Hodes & Lindberg 2002). Since it was difficult to assess whether fonts were condensed or not, the instrument was revised to note letter spacing. Instead of providing a 'yes/no' answer to the criterion 'provide visual feedback when an item has been added to the cart' (Chaparro 2002; Naidu & Chaparro 2007; Najjar 2011), the instrument was revised such that the exact feedback used is noted. The guideline to 'combine text with graphic/icon (e.g. shopping cart)' was revised so that a note was made of the actual graphic/icon used.
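Several of the revised criteria record raw values (such as the button colour and the surrounding colour) rather than yes/no judgments, leaving the scoring to a later analysis step. As an illustration of how such recorded colours could be scored against the high-contrast guideline, the sketch below computes the WCAG 2.0 contrast ratio between two sRGB colours; the example colour pair is hypothetical and not drawn from any surveyed website.

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an sRGB colour given as 0-255 channels."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    """WCAG 2.0 contrast ratio, a value between 1:1 and 21:1."""
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)),
        reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical example: white button label on a mid-green button fill.
ratio = contrast_ratio((255, 255, 255), (46, 139, 87))
print(f"contrast ratio {ratio:.2f}:1; WCAG AA requires 4.5:1 for normal text")
```

Automating a check like this one would let the instrument flag candidate visibility problems consistently across many sites, while leaving the more subjective criteria to the human assessor.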
As mentioned by Harte et al. (2014), too many clicks may imply a negative user experience. Thus, the assessment of the click criterion 'use single clicks or screen taps to access information' (Hodes & Lindberg 2002) was also modified such that the 'number of clicks to add an item to the cart from the first page on which the product image is viewed' is also recorded. This is significant as it dictates the number of steps users need to take in order to complete the task of adding an item to the cart. Large buttons were recommended by the published guidelines (Hodes & Lindberg 2002); however, no exact measurements were provided for a button to be considered large. Thus the assessment instrument was revised to include a space for recording the width and height of the button, as this data could be analysed to get an indication of the button sizes used on current websites.

RESULTS

The resulting evaluation instrument for assessing button designs in e-commerce websites comprises criteria grouped under the four areas of visibility, readability, understandability and navigability.

FUTURE WORK

This paper presents a new evaluation instrument for assessing 'add to cart' buttons. The instrument includes criteria for four main areas, which are visibility, readability, understandability and navigability. A future study will use the evaluation instrument in a survey of the 'add to cart' button designs of existing e-commerce websites. The designs will be analysed for their adherence to the principles and guidelines, thus enabling us to uncover areas that may need improvement for easier navigation by older users. Identifying recurring 'unfriendly' design practices can be a starting point for making e-commerce sites more accessible for older adults.
Modern possibilities for early laboratory diagnosis of periprosthetic osteolysis predating aseptic loosening in total hip arthroplasty (literature review)

Implant survival is a very important outcome measure of the surgical treatment of patients with severe degenerative joint disease of the hip. The aim of this review is to summarize the present knowledge on the possibilities for earlier laboratory diagnosis of osteolysis and prognostic approaches to prevent aseptic loosening of prosthetic implants.

Results. Periprosthetic osteolysis is often seen as an early sign of an adverse event associated with the development of an unstable total hip arthroplasty (THA). A lot of data support the concept of osteolysis as a condition caused by biomechanical stresses, surgery-specific factors, preoperative decrease and postoperative loss of bone mineral density, vascular impairment and chronic inflammation. Hemostasiological, biochemical and immunological parameters of patients were explored before and after THA. Surgical intervention was treated as the cause of secondary immunodeficiency, and the results of the recovery period were evaluated with regard to the extent to which the immunodeficiency appeared to be compensated. Dynamics in stress-related bone remodeling around the implant was found to be a marker for early detection of osteolysis and prediction of aseptic loosening of THA, as well as for control over the "target" of drug exposure.

Conclusion. The literature review suggests that there is a common understanding of the pathogenesis of osteolysis and the development of aseptic loosening of THA, and there are scanty data on laboratory markers for early diagnosis and prediction of the complication that require further study.

Since the time Sir John Charnley designed a hip prosthesis, total hip arthroplasty (THA) has evolved into one of the most successful orthopedic procedures performed today. There has been an increase in the number of primary and revision THAs performed worldwide [1,2]. Aseptic loosening is a major complication of joint replacement, and it is important to identify factors potentially associated with this adverse event [3][4][5][6][7][8]. Despite the large number of publications on complications following arthroplasty and extensive discussions in orthopaedic forums, controversies exist regarding the potential prognosis and prophylaxis of adverse events of THA [9][10][11][12][13][14][15]. The aim of this review is to summarize the present knowledge on the possibilities for earlier laboratory diagnosis of osteolysis and prognosis to prevent aseptic loosening of prosthetic implants.

RESULTS

While global medical research efforts focus on different aspects of osteoarthritis, orthopaedic and trauma surgeons continue to perform radical procedures replacing the native joint with an endoprosthesis for severe conditions. Total joint arthroplasties have revolutionized the care of patients with end-stage joint disease, leading to pain relief, functional recovery, and substantial improvement in quality of life. The longer patients use an endoprosthesis, the higher is the risk of implant loosening [16][17][18][19][20]. J.B. Meding et al. reviewed 8331 primary THAs to determine the greatest risk of failure across time. The average time to failure was 9.2 years, and 75% of failures occurred by 13 years. The most common failure mechanisms were due to the cup (5.0%), cup and stem (1.7%) and the stem (0.4%).
Based on the most common failure mechanisms, the authors recommended evaluating patients at 6 months, 1 year, 3 years, 7 years, 10 years, 12 years, 18 years, and 25 years postoperatively [21]. Aseptic loosening develops dynamically over the long term: the implant may remain stable and osseointegrated for a protracted period, which can be followed by bone resorption at the periprosthetic site, with the bone being replaced by spongious connective tissue infiltrated with macrophages and implant-derived wear particles. Aseptic loosening secondary to periprosthetic osteolysis has been accepted as one of the leading causes of revision procedures in two-thirds of patients with previous joint arthroplasty [22][23][24]. The impact of periprosthetic osteolysis on THA ranges between 1% and 40% of all THA revisions [25][26][27]. A major concern in periprosthetic osteolysis is that patients may have no clinical manifestations and no sign of infection, effectively remaining completely asymptomatic [28][29][30][31]. Although Sir John Charnley suggested that aseptic loosening could be caused by subclinical infection, it has recently been recognized that aseptic joint replacement loosening cannot be driven by bacterial infection, and the underlying mechanisms are being searched for [32].

Aseptic loosening may occur due to the biological response of the bone to fluctuating intraarticular fluid pressure, stress shielding and micromotion at the bone-implant interfaces [33]. The process, referred to as particle disease, often leads to joint loosening and implant failure [23,28,30]. Wear of endoprosthetic components gradually sets in due to mechanical surface interactions between the bearing surfaces of the implants and the bone, with many implant-derived wear particles migrating into the pseudosynovial fluid and the surrounding tissues. The characterization of wear particles (size, shape, chemical composition) varies depending on their origin and the individual response of the body. These are mostly ultra-high molecular weight polyethylene (UHMWPE) wear particles generated from the bearing implant surfaces, with the metal prosthetic femoral head and a polyethylene liner being involved in the pathways; at a mean linear wear rate of 0.1 mm/year, numerous UHMWPE particles are formed [29]. Metal-on-metal implants have less wear than metal-on-polyethylene implants but still release numerous nano-sized metal particles [34]. Additional sources of wear include increased shattering and greater fragmentation of polymethylmethacrylate particles, and metallic or ceramic particles released from bearing surfaces or modified implant surfaces [28,30,35].

The particles released into the pseudosynovial fluid accumulate in the surrounding tissues under the influence of the hydrodynamic forces generated in the fluid with every step, and the environment becomes densely packed with biomaterials of different wear particles. UHMWPE particles tend to exhibit many different morphologies over a number of size ranges. Particles of UHMWPE are assumed to be spheroids with diameters of 0.1 to 1.0 μm (mean diameter ranging from 0.5 to 0.7 μm) [36,37]. It has been recognized that wear debris stimulates an innate host immune response leading to chronic low-grade inflammation and, finally, to osteolysis [28][29][30][31][34]. A foreign body response with macrophages and foreign body giant cells is identified, eventually leading to a predominance of osteoclasts at the bone-soft tissue interface.
Biomechanical and tribological aspects are considered to be crucial in the pathogenesis of implant failure. These include functional overloading, surgery-specific failures, preoperative decrease and postoperative loss of bone mineral density, vascular impairment and slow blood flow, hypercoagulation, injury to the vascular wall secondary to vasoconstriction that deteriorates in the operated limb after surgery, synovitis, and the generation of wear debris in the tissues, which has a key role in the progression of the disease through the secretion of numerous proinflammatory cytokines [28,29]. The aggressiveness of the surgical approach in arthroplasty, including the volume of intervention, its traumatic profile and blood loss, can cause secondary immunodeficiency and/or aggravate the patient's condition [38].

Biological interactions with an implant's integration in the human body are explored in addition to the aspects of mechanical wear of endoprosthetic components, and the biochemical reactions of this symbiosis can be unpredictable. An endoprosthesis is placed into an aggressive and dynamic physiological environment and introduces mechanical loading, causing non-specific reactions and launching specific immune mechanisms [34]. Immunopathological features and changes in immune function during the perioperative period are responsible for postoperative rehabilitation and the outcome. E.V. Gladkova et al. and I.V. Chebotar focused on hemostasiological, biochemical and immunological tests, examining the peripheral blood of patients preoperatively and at 4 to 5 months after surgery, analyzing three leukocyte subpopulations (lymphocytes, monocytes, granulocytes) and immunophenotyping lymphocytes. Preoperative and postoperative blood test results indicated pronounced immune disorders in patients with osteoarthrosis of the major joints of the lower limbs. Postoperative changes in the blood tests exhibited humoral and cell-mediated immune deficiencies that were shown to interfere with an adequate protective response to the aggressive operative treatment of arthroplasty [39,40].

E.V. Koryakina et al. explored the preoperative immune status of patients and detected activation of proinflammatory cytokines (TNF-α, IL-1β, IL-6) associated with changes in the concentrations of anti-inflammatory cytokines (IL-4, IL-10). The authors suggested that a preoperative lack of functional activity of T-helper cells led to immune deficiency in patients with osteoarthrosis [41]. Low phagocytic activity of segmented neutrophils and high levels of T lymphocytes, B lymphocytes and immunoglobulins were reported in revision THA cases [42][43][44]. L.A. Dmitrieva reported an increased serum concentration of IgA and a high level of proinflammatory cytokines produced in the peripheral blood cells of patients with severe dysplastic coxarthrosis, which necessitated grouping dysplastic coxarthrosis cases depending on the severity and type of immunopathological reactions (conventionally, compensated and subcompensated immunodeficiency). A correlation was observed between the outcomes of surgical treatment and rehabilitation and the extent to which the immunodeficiency was compensated. The differences in the immune status and the pituitary-thyroid link of the endocrine system noted in patients with compensated immunodeficiency facilitated a favorable restorative period and a minimal risk of postoperative adverse events. Conversely, patients with subcompensated immunodeficiency lacked the capacity to mount protective mechanisms in response to surgical intervention and were identified as high-risk patients at different follow-up terms [45].
E.A. Volokitina et al. explored the immune response to surgical intervention in patients with hypoplastic coxarthrosis following THA and reported a slight increase in natural killer T cells (CD3+/CD16+/CD56+) during the first postoperative month. The authors detected an absence of profound disorders in the functioning of the immune system in the favorable scenario, with major humoral and cell-mediated immune parameters returning to baseline values at 18 to 21 days following THA. A moderate decrease in the T cell count, an imbalance of lymphocyte subpopulations and dysimmunoglobulinemia were noted with the increase in weight-bearing on the operated limb at the 3-to-6-month follow-up. Major cell-mediated immune parameters normalized, and the absolute numbers of T-helper cells and B-lymphocytes decreased, at the 7-to-12-month follow-up, with no history of early or delayed postoperative complications. Normal levels of serum immunoglobulins of the primary classes and of circulating immune complexes were observed during the first year following THA [18].

Postoperative clinical manifestations of pain, limping and disturbed function of the operated joint without evident radiological signs of implant loosening are indications for exploring biochemical parameters for diagnostic purposes. Biochemical criteria have been offered for the prognosis and early diagnosis of aseptic loosening prior to clinical manifestations [46]. E.A. Persova found that changes in blood serum biochemical markers identified after THA anticipated alterations in bone mineral density, with stress-induced remodeling being detected at 1.5 to 3 months [47]. S.Yu. Istomin reported a comparative analysis of clinical and radiological findings and lipid peroxidation parameters, establishing a correlation between instability of endoprosthetic components and an increased concentration of isopropanol-soluble products of lipid peroxidation with decreased ascorbate-induced lipid peroxidation, and found the metalloprotein concentration to be useful for monitoring the postoperative period [48].

Specific bone metabolism markers have been widely used for early diagnosis and identification of risk factors leading to aseptic stem loosening in THA. A.E. Kearns et al. suggest that polypeptide growth factors and cytokines are involved in osteogenesis and bone resorption; receptor activator of nuclear factor kappa-B ligand (RANKL), a member of the tumor necrosis factor cytokine family, promotes terminal differentiation of osteoclast precursor cells and stimulates the bone-resorbing activity of mature osteoclasts [49]. Cytokines and molecular factors such as the interleukins (IL) IL-1β, IL-6, IL-8, IL-11 and IL-17, tumor necrosis factor-alpha (TNF-α), macrophage inflammatory protein-1α, prostaglandin E and others can induce the synthesis of RANKL by bone marrow stromal cells, including osteoblasts, as well as by T lymphocytes and B cells [50][51][52][53][54][55][56].

DISCUSSION

In this review we sought to summarize the present knowledge on the pathogenesis of periprosthetic osteolysis followed by the development of aseptic loosening of THA, and on the possibilities for earlier laboratory diagnosis and prognosis of this adverse event. Understanding the causes of aseptic loosening of THA as a chain of biomechanical reactions in the implant-host system, together with tribological implant characteristics and surgery-specific failures, specialists direct their efforts towards increasing implant longevity. In addition, other factors can be involved in the pathogenesis of aseptic loosening in a particular case.
There is an ongoing search for markers that would enable prediction of this threatening complication prior to THA, or diagnosis of osteolysis as early as possible, in order to prevent considerable bone loss. Hemostasiological, biochemical and immunological parameters of patients are explored before and after THA. There is an interest in studying specific features of the immune status among phenotypical groups of patients with dysplastic and hypoplastic coxarthrosis. Surgical intervention is treated as a cause of secondary immunodeficiency, and the results of the recovery period are evaluated with regard to the extent to which the immunodeficiency appears to be compensated. Potentially critical differences exist between the biological mechanisms of the primary, age-associated, post-traumatic and metabolic phenotypes of osteoarthritis. Dynamics in the stress-related remodeling of periprosthetic bone tissue can be a marker for early detection of osteolysis and prediction of aseptic loosening of THA, as well as for control over the "target" of drug exposure. Prospective clinical observation for the anticipated development of the adverse event allows for timely detection of aseptic loosening. Further research on the sensitivity and specificity of the above parameters is required since, with the current knowledge of the pathogenesis of chronic inflammation, they can be observed in a variety of nosologies, including osteoarthrosis/osteoarthritis, systemic rheumatic diseases, metabolic syndromes, malignancies and other conditions.

CONCLUSION

The literature review suggests that there is a common understanding of the pathogenetic reactions at the bone-implant interface and of the effects of particular biomechanical and tribological factors on the development of periprosthetic osteolysis followed by aseptic loosening of THA. The scanty data on the possibilities for early diagnosis and prediction of the complication call for multidisciplinary research towards a systemic approach to the range of conditions in which identical markers are involved in pathological reactions.
2020-06-18T09:03:08.731Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "c65a0e2334bdfa98f1665abb1d8a83c08025e362", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18019/1028-4427-2020-26-2-261-265", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fd1f4343c30ea3964eb4f5217889cd7666c6c453", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
60442876
pes2o/s2orc
v3-fos-license
A Multisensor Approach to Satellite Monitoring of Trends in Lake Area, Water Level, and Volume

Lakes in arid regions play an important role in regional water cycles and are a vital economic resource, but can fluctuate widely in area and volume. This study demonstrates the use of a multisensor satellite remote sensing method for the comprehensive monitoring of lake surface areas, water levels, and volume for the Toshka Lakes in southern Egypt, from lake formation in 1998 to mid-2017. Two spectral water indices were used to construct a daily time-series of surface area from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), validated by higher-resolution Landsat images. Water levels were obtained from analysis of digital elevation models from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), validated with ICESat Geoscience Laser Altimeter System (GLAS) laser altimetry. Total lake volume peaked at 26.54 × 10⁹ m³ in December 2001, and declined to 0.76 × 10⁹ m³ by August 2017. Evaporation accounted for approximately 86% of the loss, and groundwater recharge accounted for 14%. Without additional inflows, the last remaining lake will likely disappear between 2020 and 2022. The Enhanced Lake Index, a water index equivalent to the Enhanced Vegetation Index, was found to have lower noise levels than the Normalized Difference Lake Index. The results show that multi-platform satellite remote sensing provides an efficient method for monitoring the hydrology of lakes.

Background: Remote Sensing of Lake Hydrology

The world's ~117 million lakes and reservoirs [1] have an important role in the water and carbon cycles at regional scales, and provide a wide range of ecosystem services and economic benefits [2][3][4]. In some regions, lakes are known to be declining or disappearing due to disparate causes, ranging from climate change to human water withdrawals for agriculture, industry, and domestic consumption [5,6]. Lakes in arid regions are particularly sensitive to changes in local to regional hydrology [6][7][8], which is a troubling fact, given their disproportionate importance in the ecology and economic systems of these water-limited areas. Other anthropogenic impacts on lakes may also have severe consequences, such as the introduction of invasive species [9,10], overfishing [10,11], and pollution [12,13].
Remote sensing is widely used in limnology and lake management for identifying, measuring, and characterizing the properties of lakes [1,[14][15][16]. Lake surface area has been successfully mapped using optical and microwave sensors in environments from the tropics to the poles [1,14,[17][18][19][20][21][22][23], often using spectral indices such as the normalized difference water index (NDWI; [20,21]), the normalized difference lake index (NDLI; [18,23]), or the automated water extraction index (AWEI; [22]). Much early work on mapping the distribution and extent of lakes used optical sensors carried on the Landsat and SPOT ("Système probatoire d'observation de la Terre", also rendered as "Satellite pour l'observation de la Terre") satellites. In Section 1.3, the dual objectives of this work are presented: first, to demonstrate and evaluate a set of methodological improvements that enable the construction of a fully-automated system for monitoring changes in surface area, water level, and volume in lakes; and second, to use this system to document the history of the Toshka Lakes at a higher temporal resolution and more rigorously and completely than previous studies.

Study Region

The Toshka Lakes basin is located in the Western Desert of southern Egypt, west of the Nile's Lake Nasser reservoir (Figure 1a). Although there is evidence for river channels and lakes in the region during the middle Pleistocene [48], the region is currently a hyper-arid desert (Figure 2) and was devoid of surface water prior to the development of an artificial spillway and channel (22 km length) from Lake Nasser in 1978 to protect the dam during times of high flow on the Nile [49]. Beginning in September 1998, increasing water levels in the Nile brought Lake Nasser to the elevation of the spillway, leading to the formation of the first Toshka lake (Lake 1). As waters continued to rise during the late-1998/1999 flood season, and subsequent flood seasons up to 2001/2002, a series of additional lakes were formed by waters spilling out of Lake 1 into downstream basins to the west (Figure 1b). After 2002, water levels in the Toshka Lakes began dropping, and by mid-2017, only Lake 4 remained extant (Figure 3). During the early 2000s, a short-lived fishery developed on the lakes, with over 7000 tonnes produced in 2004 [50], but its growth was quickly halted and reversed by the salinization and eventual disappearance of the lakes. The formation and subsequent desiccation of the lakes also had a significant impact on local climate [51].
Due to the isolation of the region and the broad spatial scale of the lake basins, satellite remote sensing has been a primary source of information on the dynamic hydrology of the Toshka Lakes. The specific approaches have been varied, and are summarized in Table 1. What sets the present study apart from these earlier works is its combination of high-temporal-frequency observation of lake surface area, water level, and volume, for the entire life cycle of the lakes, with comparisons among different methodological choices and data sources. The first of these four prior studies, by Chipman and Lillesand [7], used relatively coarse-resolution optical imagery from the Advanced Very High Resolution Radiometer (AVHRR) and MODIS, in an adaptive, sub-pixel analysis algorithm to estimate the surface area of each lake on 145 dates, from lake formation in 1998 to mid-2006. The MODIS imagery used in this study was Level-1B (geometrically corrected top-of-atmosphere spectral radiance). Surface area measurements were validated by comparison to higher-resolution Landsat images. Water levels for four of the six lakes were measured using ICESat-1 GLAS, although Lake 3 had only a single date of GLAS measurements and thus could not be used to estimate rates of change in water level. For Lake 5, a pre-inundation digital elevation model (DEM) was obtained from the Shuttle Radar Topography Mission (SRTM) and was used to infer changes in water level as its basin filled and then began drying. Lake surface area reached a maximum of 1740 km² in or around December 2001. The volume of Lake 5 (from the SRTM DEM) reached a maximum of 3.13 × 10⁹ m³ in February 2002. During the declining period, water levels in Lake 5 dropped by 7.41 [95% CI 7.00 to 7.82] mm/day (SRTM DEM) or 7.13 [6.77 to 7.49] mm/day (ICESat-1 GLAS). This 2007 study can be seen as a pilot project for the current study, which greatly expands and improves upon its data sources and methodologies. Bastawesy et al. [52] used Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery from 2002 and SPOT-4 imagery from 2006 to measure the extent of the lakes in those two years. Lake areas were derived from a manual thresholding process in the near-infrared band of each sensor, resulting in a binary land/water image. A DEM was also digitized from pre-inundation topographic maps, and used to estimate water level and volume in each lake on the dates of the ASTER and SPOT-4 images. The 2002 total surface area was reported as 1591 km², and was 937 km² in 2006. The volume of all lakes was 25.26 × 10⁹ m³ in 2002 and 12.67 × 10⁹ m³ in 2006. The rate of decline in water level during the four-year interval was reported as approximately 7 mm/day. Bastawesy et al. also predicted that, in the absence of additional inflows, the larger lakes would disappear in 2012 (Lake 3), 2014 (Lake 1), and 2020 (Lake 4). Abdelsalam et al. [53] used a one-class parallelepiped image classification approach to map lake extent on approximately 14 dates from 1998-2007. They reported a maximum area of ~1586 km² in August 2001, the closest of their image dates to the actual maximum. The projected date for the disappearance of the lakes was given as March 2011. The difference between this and the 2020 projection from [52] was due to the fact that [52] projected each lake's evolution individually, while [53] projected only the aggregated total area of all lakes, using a linear model.
Finally, Hereher [23] analyzed 14 MODIS images, one per year from 2000-2013 during the month of March, to estimate lake surface area. Atmospherically corrected (surface reflectance) versions of the MODIS red, near-infrared, and shortwave-infrared bands were extracted from the MOD13Q1 vegetation index product, and used in two spectral indices: the land surface water index (LSWI; [54]) and the NDLI [18]. A threshold in each index was used to separate land and water, in a binary fashion as in [52]. The largest surface area reported (in March of 2002) was 1664 km², three to four months after the actual maximum area was reached in December 2001. In addition to reporting the surface area of the lakes, [23] also examined the extent of irrigated agricultural development between the Toshka Lakes and Lake Nasser, a large undertaking supported by water withdrawn from the Nile.

Objectives

This study had two overarching objectives. The first was to develop and test methodological improvements in the use of remote sensing for monitoring water storage in lakes. This contributes to the development of an automated system that integrates a wide range of remotely sensed data sources and analytical methods to simultaneously measure trends in lake surface area, water level, and volume, building upon and improving the methods previously described in [7]. The second objective was to apply these data and methods for a comprehensive assessment of the evolution of the Toshka Lakes over the two decades of their existence, producing daily estimates of each lake's surface area, water level, and volume. To meet these dual objectives, the following tasks were performed:

Lake surface area assessment:
• Quantitative comparison of two remotely sensed spectral lake indices, NDLI and the enhanced lake index (ELI), for lake surface area estimation, using an adaptive algorithm for sub-pixel analysis.
• Construction of a daily time-series of lake surface area over two decades (1998-2017) from AVHRR and MODIS imagery.
• Validation of the AVHRR/MODIS lake surface area dataset in comparison to higher-resolution image sources.

Water-level assessment:
• Quantitative comparison of multiple algorithms and data sources for estimating water levels from integration of DEMs and lake surface area datasets.
• Construction of a daily time-series of water levels and depth statistics for each lake.
• Validation of the water level dataset by comparison to laser altimetry.

Lake volume assessment:
• Construction of hypsographic curves for each lake, representing the relationship between lake surface area and water level.
• Derivation of a daily time-series of lake volume, by cross-referencing the lake surface area dataset and each lake's hypsographic curve.

Note that, while the term "validation" is used here for the lake surface area and water level assessments, no direct, in-situ measurements of area or water level were available. Instead, in both cases the validation process involved comparison to another remotely sensed data source. For surface area, the measurements made here from coarse-resolution sensors (AVHRR and MODIS) were compared to equivalent measurements from much higher-resolution Landsat images. For water level, the results from different algorithms and DEMs were compared to ICESat-1 GLAS laser altimetry, a completely independent and previously-validated source of water level data [7,25,27].
In the course of addressing the first objective, the following hypotheses were tested (stated here informally, rather than in formal null-and-alternative hypothesis language):

Hypothesis 1: The use of sub-pixel analytical methods allows for coarser-resolution imagery (e.g., AVHRR and MODIS) to produce lake surface area estimates that are compatible with each other and equivalent to those from much higher-resolution imagery (Landsat).

Hypothesis 2: A new numerical lake index for multispectral optical imagery (ELI) will yield better (i.e., less noisy) estimates of lake surface area than the existing index, NDLI.

Hypothesis 3: DEMs obtained when lakes are at low water levels, or dry, can be combined with maps of surface area to provide estimates of water level that are similar to those from laser altimetry.

Rather than being framed in terms of specific hypotheses to test, the second objective involves production of new datasets, providing a comprehensive, daily record of the dynamic expansion and contraction of the Toshka Lakes during the two decades from 1998-2017, encompassing the formation of all six lakes, and the subsequent disappearance of five of the six. Comparison to measured evaporation rates (from nearby Lake Nasser) supports inferences about the relative proportion of water lost to evaporation vs. groundwater recharge. Finally, the results of this study will provide guidance for similar remote-sensing-based assessments of lake hydrology in other regions.

This study goes beyond the prior work on the Toshka Lakes in a variety of ways. Unlike [23] and [53], it includes analysis of lake water level and volume, and provides a much higher temporal resolution analysis of surface area (812 dates of imagery, vs. 14 and 27, respectively, in those prior studies). While [7] and [52] included water level and volume, this new study again provides a much higher temporal resolution and more than a decade of additional data. Additionally, in contrast to [7], it provides water levels and volumes for all lakes, and in contrast to [52], it provides a comparison between water level estimates derived from DEM analysis and from satellite laser altimetry. It also defines and evaluates a new lake index (ELI), compares two different remotely-sensed sources of DEMs, and compares two different algorithms for obtaining water levels from DEMs.

Data Sources

The data used in this study are listed in Table 2, with references. For lake surface area, the primary source was the complete time series of Terra and Aqua MODIS 16-day composites (2000-2017), with AVHRR images from 1998-2000, and higher resolution Landsat-5, -7, and -8 images and SPOT-5 images used for validation (Landsat) and threshold selection (SPOT). Water level, mean depth, and lake volume were derived from DEMs from SRTM in 2000 and from stereoscopic ASTER band 3 (NIR) imagery from 2014-2017. ICESat-1 GLAS laser altimetry data (2003-2009) were used for validation of water levels. The methods used to analyze these data and derive time-series of lake surface area, water level, and volume are summarized in Figure 4, and are discussed in detail in the following subsections.
Lake Surface Area

The process used to estimate lake surface area was based upon that described in [7]. For this study, the MODIS MOD13Q1/MYD13Q1 16-day composite product (hereafter "MOD13") was used, rather than the Level-1B imagery of [7]. MOD13 includes two vegetation indices, the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI; [61]), as well as surface reflectance in selected MODIS bands, data quality flags, and other data. Two lake indices were derived from MOD13, the aforementioned NDLI [18,23] and the new ELI.

NDLI is given by Equation (1):

$$\mathrm{NDLI} = \frac{R_{RED} - R_{NIR}}{R_{RED} + R_{NIR}} \tag{1}$$

where $R_{RED}$ and $R_{NIR}$ are the spectral reflectance or spectral radiance in red and near-infrared wavelength bands, respectively. Note that NDLI is equivalent to a sign-reversed version of NDVI; i.e., values of NDLI > 0 are increasingly likely to be water, and are equivalent to NDVI < 0. Due to the high correlation between visible wavelength spectral bands, NDLI is generally very similar to the NDWI [20,21] but does not require the use of a green-wavelength band.

Similarly, the new index ELI (Equation (2)) is a sign-reversed version of EVI:

$$\mathrm{ELI} = G \cdot \frac{R_{NIR} - R_{RED}}{R_{NIR} + C_1 \cdot R_{RED} - C_2 \cdot R_{BLUE} + 1} \tag{2}$$

where $R_{NIR}$, $R_{RED}$, and $R_{BLUE}$ are the near-infrared, red, and blue wavelength bands, $C_1$ and $C_2$ are wavelength-specific aerosol resistance terms (6 and 7.5, respectively, for MODIS, [61]), and $G$ is a gain factor of −2.5 (versus +2.5 for EVI). High values of ELI (= low values of EVI) are increasingly likely to represent water.

For this study, NDLI and ELI were each used to map lake surface area, using the same methodology. Firstly, the data were reprojected to the Universal Transverse Mercator (UTM) zone 36 North coordinate system and subset to the study region. Secondly, a constant threshold value (0 for NDLI, −0.04 for ELI) was used to create a binary land/water mask for each date. Thirdly, the binary-mask time series was smoothed by comparing each pixel on date t to its value on dates t−8 and t+8 days (i.e., the previous and subsequent MOD13 dataset; prior to the first Aqua image in mid-2002, with only Terra imagery available, this was t−16 and t+16). If the pixel's land/water value differed on both the comparison dates, it was switched. Finally, the fractional extent of water for mixed pixels along shorelines (i.e., the boundaries between land and water) was estimated using a linear mixture model [7].

This produced two time series (nFrac and eFrac, derived from NDLI and ELI, respectively) of maps showing the fractional (0.0 to 1.0) extent of water within each pixel. Summing up the fractions for each pixel of a lake and multiplying by the pixel area (based on the sensor's spatial resolution) yielded the surface area for each lake on each date.

For the period prior to the first Terra MODIS image (February 2000), the coarser-resolution AVHRR sensor was used in a similar process. Due to the lack of a blue-wavelength band on AVHRR, only an NDLI-based AVHRR dataset could be constructed, and it was used with both the NDLI and ELI versions of the MODIS time series. The same process was used to derive much finer-scale maps of lake area from a series of Landsat images. These data (30 m resolution, or nearly two orders of magnitude finer in resolution than MODIS) were used to validate the coarser-resolution lake area measurements. Likewise, the same process was used with two dates of even higher resolution (10 m) SPOT-5 multispectral images (2006-11-07 and 2008-04-20) to determine the appropriate threshold (−0.04) for the MODIS ELI land/water mask, by comparing MODIS land/water masks with a variety of different thresholds to area measurements from the two SPOT-5 images.
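As a concrete illustration of the index-and-threshold pipeline described above, here is a minimal NumPy sketch of the two indices, the constant-threshold masking, the despiking rule against the previous and subsequent composites, and the final fraction-summing step. It is a sketch under stated assumptions rather than the authors' code: the function and array names are invented, the inputs are assumed to be per-band reflectance arrays, and the linear mixture model for shoreline pixels is omitted.

```python
import numpy as np

def ndli(red, nir):
    # Sign-reversed NDVI; values > 0 are increasingly likely to be water.
    return (red - nir) / (red + nir)

def eli(blue, red, nir, c1=6.0, c2=7.5, gain=-2.5):
    # Sign-reversed EVI (gain = -2.5); high values suggest water.
    return gain * (nir - red) / (nir + c1 * red - c2 * blue + 1.0)

def water_mask(index, threshold):
    # Constant threshold: 0 for NDLI, -0.04 for ELI in this study.
    return index > threshold

def despike(masks):
    # masks: (time, rows, cols) boolean stack of consecutive composites.
    # Flip a pixel on date t only if it disagrees with BOTH the previous
    # and the subsequent composite (dates t-8 and t+8 days in the merged
    # Terra/Aqua MOD13 series).
    out = masks.copy()
    flip = (masks[1:-1] != masks[:-2]) & (masks[1:-1] != masks[2:])
    out[1:-1] = np.where(flip, ~masks[1:-1], masks[1:-1])
    return out

def lake_area_km2(water_fraction, pixel_area_km2):
    # water_fraction: per-pixel fractional water extent (0.0-1.0) after
    # shoreline unmixing; summing fractions x pixel area gives lake area.
    return float(water_fraction.sum() * pixel_area_km2)
```

For 250 m MODIS pixels, `pixel_area_km2` would be 0.0625; for 30 m Landsat pixels, 0.0009, which is where the roughly 1/69th pixel-area ratio mentioned below comes from.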
The AVHRR (1998-2000) and MODIS (2000-2017) lake surface area time-series were merged using a nonparametric locally estimated scatterplot smoothing (LOESS) model, interpolating area to a daily timestep using the minimal value of α = 5 data points (nominally 40 days in the post-2002 era when both Terra and Aqua were available) to minimize smoothing. Figure 5 shows this LOESS model for the period of the transition from AVHRR to MODIS (1999-01-01 to 2001-01-01). While the AVHRR data are coarser-resolution and noisier, the alignment between it and MODIS is very close.

The NDLI- and ELI-derived lake area time series (nFrac and eFrac) were compared based on their detrended residuals, i.e., the individual observations minus the LOESS model, which are here considered to represent noise. Because the magnitude of this noise increases as the lake circumference (and zone of mixed pixels along the shoreline) increases, the residuals were normalized by dividing them by the square root of lake area, giving units of km²/km. These normalized residuals were used to evaluate the relative performance of NDLI and ELI as sources in the lake area algorithm. The results are presented in more detail in Section 3, but are shown and discussed briefly here because the choice between NDLI and ELI affected the subsequent methods.
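The paper does not name a specific LOESS implementation, so the following NumPy-only stand-in is an assumption: a tricube-weighted local linear fit to the α = 5 nearest observations, evaluated at a daily timestep, followed by the square-root-of-area residual normalization described above. Names are illustrative.

```python
import numpy as np

def loess_daily(t_obs, area_obs, t_daily, k=5):
    """LOESS-style interpolation of lake area to a daily timestep:
    local linear fit to the k nearest observations, tricube-weighted.
    Assumes t_obs has at least k entries."""
    k = min(k, t_obs.size)
    out = np.empty(t_daily.size)
    for i, t in enumerate(t_daily):
        d = np.abs(t_obs - t)
        idx = np.argsort(d)[:k]
        h = max(float(d[idx].max()), 1e-9)            # local bandwidth
        w = np.sqrt((1.0 - (d[idx] / h) ** 3) ** 3)   # sqrt of tricube, for WLS
        A = np.column_stack([np.ones(k), t_obs[idx] - t])
        beta, *_ = np.linalg.lstsq(A * w[:, None], area_obs[idx] * w, rcond=None)
        out[i] = beta[0]                              # fitted value at time t
    return out

def normalized_residuals(area_obs, area_fit):
    """Detrended residuals divided by sqrt(area), giving km^2/km, so that
    shoreline-related noise is comparable across lakes of different sizes."""
    return (area_obs - area_fit) / np.sqrt(area_fit)
```

The nFrac-versus-eFrac noise comparison then reduces to comparing the standard deviation of these normalized residuals for the two series (0.20 vs. 0.12 km²/km in the results below).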
Examination of the residuals showed three extreme outliers among the 750 MODIS image dates: Julian days 161 and 169 in 2012, and day 353 of 2007. Visual examination of these images confirmed that these should be excluded from the analysis due to artifacts in the MOD13 source data. In addition, the lake area values from both NDLI and ELI for Aqua MODIS day 361 were anomalously noisy, yielding larger residuals in many years than other dates (Figure 6), potentially as a result of the 16-day compositing process. (The NASA Land Processes Distributed Active Archive Center (LP DAAC) reported a known issue with the MODIS MYD13Q1 v6.0 16-day composite product, resulting in "unexpected missing data in the last cycles of each year". The issue is being addressed in the reprocessed v6.1.) All the day 361 images were thus removed as well, and the nFrac and eFrac time series were recalculated.

The final step in the lake surface area process involved validation of area measurements by comparison to higher-resolution Landsat images. Two Landsat path/row combinations were needed to cover the area (paths 175 and 176, row 44), with Lakes 1 and 2 in path 175 and the remaining lakes in path 176. For each Landsat image, the areas of each lake were compared to the corresponding daily LOESS-interpolated AVHRR/MODIS eFrac lake area measurement. Because each Landsat pixel covers an area only 1/69th as large as a 250 m resolution MODIS pixel, the Landsat images provided a more precise estimate of lake area. As shown in Figure 6, the eFrac time series (derived from ELI rather than NDLI) had generally smaller residuals relative to its LOESS model. Thus, eFrac was used as the definitive lake surface area dataset for all subsequent analyses.

Water Level, Mean Depth, and Volume

Water level was derived by cross-referencing the lake surface area dataset (Section 2.2 above) with a digital elevation model (DEM). Based on the water level, mean and maximum depths were calculated on each date, hypsographic curves were computed for each lake, and lake volumes were estimated. Two different methods were used for the water level analysis, and the results were validated using ICESat-1 GLAS laser altimetry data, which were completely independent of the DEM-based methods.

The first DEM for the study region was obtained from the SRTM [62] dataset. This DEM was produced using interferometric analysis of synthetic aperture radar (SAR) data from 11-22 February 2000. The basins for Lakes 1-4 had already filled by this date, so the bathymetry of those lakes could not be determined from SRTM.
A second DEM was developed from photogrammetric analysis of stereoscopic ASTER bands 3N and 3B. Twelve individual ASTER DEMs were used, from images in 2014-2017 when all lakes except Lake 4 had dried up, thus allowing the bathymetry of each lake basin (except Lake 4) to be mapped directly. The ASTER DEMs had large vertical offsets relative to SRTM (and to each other), and various other artifacts. The cause of these offsets is undetermined, but they are reflective of the fact that the relative accuracy of these DEMs (i.e., the reported difference in elevation between any two points within a single DEM, compared to their true difference in elevation) is better than their absolute accuracy (the absolute elevations of points in the DEM compared to their true elevations); the latter are expected to be better than 25 m root mean squared error (RMSE) [59]. To prepare these DEMs for use, the following process was followed:

1. Calculate each ASTER tile's mean and standard deviation of elevation differences from the SRTM DEM, using a subset of the area with no visible artifacts.
2. Adjust the ASTER DEMs to match the SRTM DEM by adding each tile's mean elevation difference, excluding areas that are more than three standard deviations from the SRTM elevation and were not covered by water at the time of the SRTM mission.
3. In the ASTER tile covering the (small) remaining water in the Lake 4 basin, exclude ASTER DEM pixels falling within the extent of the water area.
4. Create an initial mosaic of all ASTER DEM tiles, averaging elevation values in areas of overlap.
5. Aggregate to 150 m spatial resolution, and convert grid cells to points.
6. Delete points in no-data areas (areas where the ASTER DEM was more than three standard deviations from SRTM, or within the remaining Lake 4 water area).
7. Perform a "tension" spline interpolation (weight = 0.1, 100 points per region for local approximation) to fill in the gaps.

The result was an ASTER DEM mosaic for the entire area, with artifacts removed and the small residual area covered by water in Lake 4 modeled by spline interpolation. This ASTER DEM was used to calculate elevation, depth, and volume for all lakes on all dates, while the SRTM DEM was used as a comparison on Lake 5 only. Table 3 lists the ASTER DEM tiles used in the DEM mosaic.

Two different methods were used to estimate water levels from the lake surface area maps and the DEMs. In Method 1, the shoreline (where pixel fractional water extent crossed 0.5) was identified from the eFrac surface area maps, and the mean elevation of this line was found in the DEM(s). In Method 2, the area of pixels at or below each increment of elevation was calculated directly from the ASTER DEM, and then the observed area on a given date (from eFrac) was used to look up the corresponding water level that would produce that observed area.
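Method 2 amounts to inverting the basin's cumulative area-elevation (hypsometric) curve. A compact sketch under stated assumptions follows; the function name, the pre-clipped basin DEM input, and the 0.1 m elevation increment are illustrative choices, not details from the paper.

```python
import numpy as np

def water_level_from_area(basin_dem, cell_area_km2, observed_area_km2, dz=0.1):
    """Method 2: tabulate the area flooded at each candidate water level
    from the basin DEM, then look up the level whose flooded area matches
    the observed lake area (from the eFrac maps).

    basin_dem: 2-D array of elevations (m) clipped to one lake basin,
               with NaN outside the basin.
    """
    z = np.arange(np.nanmin(basin_dem), np.nanmax(basin_dem) + dz, dz)
    # Cumulative hypsometry: area at or below each elevation increment
    # (NaN cells compare False and are therefore never counted).
    flooded_km2 = np.array([(basin_dem <= zi).sum() * cell_area_km2 for zi in z])
    # Invert the monotonic area-elevation curve at the observed area.
    return float(np.interp(observed_area_km2, flooded_km2, z))

# Hypothetical usage: 150 m cells (0.0225 km^2 each) and an observed
# lake area of 120 km^2 on some date.
# level_m = water_level_from_area(basin_dem, 0.0225, 120.0)
```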
For each lake, the water levels derived from these two methods were compared to the ICESat-1 GLAS laser altimeter measurements of water level on the corresponding dates, averaging all 70 m-diameter GLAS footprints that fell on each lake surface on each orbit (Figure 7). A total of 2390 laser spots on 54 separate orbit tracks were used. Some orbit tracks crossed more than one lake, resulting in 76 combinations of lake and orbit track. An additional 644 laser spots on 16 orbit tracks were not used due to abnormally high noise levels in the GLAS data; these noisy tracks (also discussed in [7]) had standard deviations of the GLAS measurements for individual lakes from 0.1 to 0.7 m, and were identified and removed on that basis.

Hypsographic curves [63,64] for each lake were constructed, for use in estimating lake volumes on each date. These curves relate lake area (on the X axis) to depth (Y axis). Using the lake area vs. water level data from Method 2, points representing the measured values of these two variables on each date were plotted, and then a LOESS model was used to interpolate values for water level at 0.1 km² increments of lake area, from 0.1 km² to the maximum areal extent of each lake. This hypsographic curve was then used as a look-up table, to determine the mean depth and total volume for each lake on each date, based on its surface area, in increments of 0.1 km².

Lake Surface Area

Both the nFrac and eFrac processes yielded time series of lake surface area from MODIS, from February 2000 through August 2017. These were merged with coarser-resolution AVHRR data, and interpolated to a daily timestep. As discussed in Section 2.2, comparison of the residuals suggested that eFrac had a lower noise level than nFrac: after removal of all MODIS day 361 images and three other outliers, the standard deviation of the normalized residuals was 0.20 km²/km for nFrac and 0.12 km²/km for eFrac. Thus, the eFrac version was used as the definitive lake area dataset.

Figure 8 shows the time series of lake surface area for all six individual lakes, along with 206 validation points (black dots) obtained from comparison of the MODIS and AVHRR data to higher-resolution Landsat images (194 during the MODIS era and 12 during the AVHRR era). The lake system as a whole reached a maximum area on or around 1 December 2001, at 1722 km². Individual lakes reached their maxima on different dates (Table 4).
Using the 206 Landsat-derived lake surface area measurements as "truth", the mean absolute error (MAE) in the AVHRR/MODIS eFrac time series was 18.7% of lake area, but this was dominated by dates on which lake areas were very small: excluding those with lake areas below 5 km² (n = 42), the MAE for the remaining dates (n = 164) was 4.8% of the lake area. A scatterplot of lake area estimates from Landsat vs. from AVHRR/MODIS is shown in Figure 9.
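The validation metric can be read as a per-date relative error averaged over validation dates; that reading is an assumption, since the text does not spell out the formula. A sketch with illustrative names:

```python
import numpy as np

def mae_percent(area_coarse, area_landsat, min_area_km2=0.0):
    """Mean absolute error of AVHRR/MODIS areas relative to Landsat
    'truth', expressed as a percentage of the Landsat area; dates with
    Landsat area below min_area_km2 are excluded."""
    keep = area_landsat >= min_area_km2
    rel = np.abs(area_coarse[keep] - area_landsat[keep]) / area_landsat[keep]
    return 100.0 * rel.mean()

# Under this reading: mae_percent(a_modis, a_landsat) would reproduce the
# 18.7% figure, and mae_percent(a_modis, a_landsat, 5.0) the 4.8% figure.
```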
Water Level, Mean Depth, and Volume

As described in Section 2.3, two methods were used to estimate water level by combining lake surface area maps with the ASTER DEM mosaic. Method 1 involved measuring the average elevation of the observed shoreline of the lake, while Method 2 involved constructing a histogram of elevations within each lake basin from the DEM (i.e., the count of pixels at each increment of elevation) and finding the elevation at which the cumulative histogram matched the observed lake area in km².

Figure 10 shows a comparison of the results of these two methods for all six lakes (for Lake 5, whose basin was not filled until after the date of the SRTM DEM, Method 1 was also used with the SRTM data as an additional comparison). ICESat-1 GLAS laser altimetry points are also shown for each lake crossed by one of the ICESat orbit tracks (see map in Figure 7). Because each GLAS orbit track across a given lake may include many individual laser measurements, each lake's GLAS water level estimate can be described in terms of both a mean and a standard deviation. The mean values (for combinations of lake and orbit track) are used in Figure 10 and Table 5; the standard deviations ranged from 0.016 to 0.091 m, with a mean standard deviation of 0.041 m. In general, water levels increase and decrease prior to 2003, due to seasonal inflows from the Nile, and then decrease continuously and approximately linearly from 2003 onward, indicating a consistent net water loss due to evaporation and groundwater recharge. Table 6 shows the date on which each lake reached its maximum average depth. In principle, the date of maximum average depth could differ from the date of maximum surface area, but in this case the dates are the same for all lakes except Lake 5 (cf. Table 4). The rates of change in water level during the 2003-2009 operational period of the ICESat-1 GLAS mission are given in Table 5; these results are less reliable for Lakes 2 and 6 due to their small size and short duration of existence during this time period.

Hypsographic curves for all six lakes are shown in Figure 12. The lowermost range of the curve for Lake 4 is uncertain, because it is based on the basin's bathymetric ASTER DEM with spline interpolation to estimate elevations within its late-2017 water extent (Section 2.3 above). In each case, zero on the depth axis is set based on the maximum areal extent of the lake during the entire 1998-2017 (AVHRR and MODIS) time series. Figure 11 shows the time series of inferred volumes for each lake, in 10⁹ m³ (or 1 km³), as well as the total for all lakes. The peak volume (26.5 × 10⁹ m³) was reached in early December 2001. Maximum volumes and the dates of those maxima are listed in Table 7.
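The volume and mean-depth look-up from the hypsographic curve (Section 2.3) can equivalently be written as a layer-cake integral of water depth over area. The sketch below assumes the curve has already been tabulated at 0.1 km² increments of area, as described in the methods; the names and the trapezoidal integration are illustrative.

```python
import numpy as np

def volume_and_mean_depth(area_grid_km2, level_grid_m, observed_area_km2):
    """Layer-cake volume from a hypsographic curve: V = integral over
    area of (surface level - level(a)) da, which equals the usual
    integral of A(z) dz over the water column.

    area_grid_km2: ascending lake areas (km^2) at 0.1 km^2 increments.
    level_grid_m:  water level (m) at which each tabulated area is reached.
    """
    sel = area_grid_km2 <= observed_area_km2
    a = area_grid_km2[sel]
    z = level_grid_m[sel]
    z_surface = np.interp(observed_area_km2, area_grid_km2, level_grid_m)
    depth = z_surface - z                          # water column depth (m)
    # Trapezoidal integration of depth over area: units of km^2 * m.
    v_km2_m = float(((depth[1:] + depth[:-1]) / 2.0 * np.diff(a)).sum())
    volume_1e9_m3 = v_km2_m / 1000.0               # 1000 km^2*m = 1e9 m^3
    mean_depth_m = v_km2_m / observed_area_km2
    return volume_1e9_m3, mean_depth_m
```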
Discussion

Both of the dual objectives of this study were achieved: (1) various methodological improvements in the remote sensing of water storage in lakes were described and implemented in an automated system for measurement of lake surface area, water level, and volume; and (2) this system was used to develop new, comprehensive, highly detailed datasets on the life cycle of the Toshka Lakes since their formation 20 years ago.

All three of the informally-stated hypotheses from Section 1.3 were borne out. For Hypothesis 1, the adaptive sub-pixel analysis algorithm for estimation of lake surface area worked well across a wide range of spatial scales; the output from the 1000 m resolution AVHRR imagery merged smoothly with the 250 m MODIS output (Figure 5), and both were generally consistent with the 30 m Landsat (Figures 8 and 9) and 10 m SPOT imagery. As noted in [22], the widely used process of setting a "hard" land/water threshold in a near-infrared band or spectral index can cause problems due to slight variations in scene illumination, sensor calibration, or imperfectly corrected atmospheric conditions. The use of an adaptive algorithm for this threshold is conceptually preferable, as is the modeling of sub-pixel water fraction for shoreline pixels, rather than the use of a simple binary classification.

For Hypothesis 2, the version of the lake area algorithm using ELI had a lower noise level than the version with NDLI (Figure 6). However, like its equivalent vegetation index EVI, ELI does require the presence of a blue spectral band, in addition to the red and near-infrared bands used in NDLI.

For Hypothesis 3, the two methods for estimating water levels by combining lake area maps and DEMs gave generally similar results. The reported trends in water level (Table 5) were virtually identical for Lakes 1 and 4; for Lakes 3 and 5, Method 2 gave rates of decline that were 14% and 10% faster than Method 1. The observed rates of decline from ICESat-1 GLAS were 3% and 11% faster than the DEM-based methods for Lakes 1 and 4, respectively; for Lake 5, ICESat's rate fell between those from the two DEM-based methods. The recent launch of ICESat-2 should provide an excellent source for lake water measurements at a finer scale than those obtainable by satellite radar altimetry.

The data produced in this study can be used to address other questions about the past and future of the Toshka Lakes, such as the partitioning of water losses between evaporation and recharge, and the anticipated expiration of the last remaining lake. With no outlet for surface water discharge from this closed depression, the decline in water levels is attributable to two factors: evaporation and groundwater recharge. Elsawwaf et al. [65] used data from floating rafts on Lake Nasser, just east of the Toshka Lakes, to estimate evaporation rates from the reservoir. The mean daily evaporation rates at the two stations closest to the Toshka Lakes spillway were 5.78 mm/day (Allaqi) and 5.90 mm/day (Abusembel). Comparing these rates to the mean of Methods 1 and 2 for water level measurement in this study suggests that approximately 86% of the water loss observed here was via evaporation, with 14% via groundwater recharge. The groundwater recharge fraction was highest for Lake 3 and lowest for Lake 4.
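As a worked illustration of this evaporation/recharge partition (the denominator here is an assumed representative net decline rate of roughly 6.8 mm/day, consistent with the approximately 7 mm/day rates reported in the text; the study's actual calculation uses the full water-level record):

$$f_{\mathrm{evap}} \approx \frac{\bar{E}}{|dh/dt|} = \frac{(5.78 + 5.90)/2}{6.8} \approx \frac{5.84}{6.8} \approx 0.86, \qquad f_{\mathrm{recharge}} = 1 - f_{\mathrm{evap}} \approx 0.14$$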
As noted in Section 1.2, Bastawesy et al. [52] reported a rate of decline in water level from 2002-2006 of approximately 7 mm/day. During the same time period, this study finds similar rates (the average for the four largest lakes is 6.9 mm/day with Method 1, and 7.0 mm/day with Method 2). Bastawesy et al. also predicted that, in the absence of additional inflows, the larger lakes would disappear in 2012 (Lake 3), 2014 (Lake 1), and 2020 (Lake 4). Those predictions were accurate for Lakes 3 and 1. In this study, Lake 4 is projected to disappear between 2020 and 2022, barring any further inputs of surface water from Lake Nasser.

While this study has applied its methods to one particular set of lakes with their own history and circumstances, the methods should be broadly applicable to many other lakes and reservoirs that have experienced declining water levels due to evaporation. Globally, the problem of water loss from artificial reservoirs is of increasing concern [66,67], and the use of automated, remote-sensing-based methods to characterize these losses and to assess variability in these systems is very promising [33], though [33] focuses on water levels in only the largest lakes worldwide.

Conclusions

Both of the overarching objectives of this study were successfully addressed. An automated system (objective 1) was demonstrated for monitoring changes in the surface area, water level, and volume of lakes, using data from multiple earth-observation satellites. Surface area measurements were produced using an adaptive algorithm that estimates fractional (sub-pixel) water extent along lake shorelines, using a newly proposed Enhanced Lake Index (ELI), parallel to the Enhanced Vegetation Index (EVI). The results are consistent across two orders of magnitude in spatial scale. Water-level measurements and trends can be derived from satellite laser altimetry or from analysis of digital elevation models (DEMs) using two different methods. Changes in volume can be inferred from surface area trends and hypsographic curves, which in turn can be derived from DEMs during periods of low water. While presenting several methodological improvements, e.g., the use of ELI for mapping water extent and the comparison of two algorithms and two DEM data sources for water level estimation, this study also shows how these methods could be implemented in an end-to-end automated system for lake monitoring.

Applied to the Toshka Lakes in southern Egypt (objective 2), these methods have provided the most comprehensive daily record of the formation, expansion, and decline of the lakes over the past two decades, and have enabled the inference of the amount of groundwater recharge from each lake basin. With the development of new sources of wide-scale, high-temporal-frequency imagery from constellations such as Sentinel and Planet Labs, and the recent launch of a new laser altimeter on ICESat-2, the next decade should see a rapid expansion in remotely sensed assessments of lake hydrology.

Figure 1. Maps of the study area. (a) Location of the Toshka Lakes region in southern Egypt; (b) Lakes 1 through 6 superimposed on shaded relief (lake extents from February 2002).

Figure 2. The Western Desert near Lake Nasser and the Toshka region (photo by the author).
Figure 3. Formation, expansion, and desiccation of the Toshka Lakes. (a) Landsat image mosaic from the 1980s, prior to lake formation; (b-c) Landsat images of the eastern half of the Toshka Lakes region, 1998-1999; (d-l) Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) images from odd-numbered years 2001-2017.

Figure 4. Schematic diagram of the data and methods used in this study.

Figure 5. Combination of lake surface area measurements from the Advanced Very High Resolution Radiometer (AVHRR; orange squares) and MODIS (blue diamonds) with interpolation using a locally estimated scatterplot smoothing (LOESS) regression model (black line) at a daily timestep. Area (in km²) is the total for all six lakes.

Figure 6. Comparison of residuals from lake surface area time-series nFrac and eFrac, as a function of the seasonal cycle (Julian Day). (a) Normalized residuals (km²/km) for all years, by day; (b) standard deviation of normalized residuals (km²/km) by day. Excluded data from Aqua MODIS Day 361 are shown as open circles.

Figure 8. Lake surface area (km²) for each lake, 1998-2017. Colored lines are merged AVHRR/MODIS; black dots are individual lake measurements from Landsat, for validation.

Figure 9. Scatterplot of lake surface area (km²) with coarse-resolution sensor area (MODIS or AVHRR) on the X axis and fine-resolution area (Landsat) on the Y axis.

Figure 10. Comparison of methods for estimating water level (colored lines) for all six lakes, with validation data from ICESat-1 GLAS laser altimetry (black diamonds).

Figure 11. Time series of inferred volume (10⁹ m³) for each lake, and the total for all lakes.

Figure 12. Hypsographic curves for each lake, relating lake area to depth (from AST14 DEMs).

Table 1. Prior remote sensing studies of the hydrology of the Toshka Lakes.

Table 2. Remote sensing data sources.

Table 3. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) DEM tiles (AST14) and their measured offsets from the SRTM DEM.

Table 4. Maximum surface area and date of maximum for each lake.
Table 5. Observed rates of change in water level (mm/day) from 2003-2009.

Table 6. Maximum average depth and date of maximum for each lake.

Table 7. Maximum volumes and dates of maximum for each lake.
2019-02-14T15:49:08.145Z
2019-01-16T00:00:00.000
{ "year": 2019, "sha1": "a7411e1f37483a98994cf013d140b3ef81f0a2eb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/11/2/158/pdf?version=1547630905", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a7411e1f37483a98994cf013d140b3ef81f0a2eb", "s2fieldsofstudy": [ "Environmental Science", "Mathematics" ], "extfieldsofstudy": [ "Environmental Science", "Computer Science" ] }
254919450
pes2o/s2orc
v3-fos-license
Polyamine Oxidase-Generated Reactive Oxygen Species in Plant Development and Adaptation: The Polyamine Oxidase—NADPH Oxidase Nexus

Metabolism and regulation of cellular polyamine levels are crucial for living cells to maintain their homeostasis and function. Polyamine oxidases (PAOs) terminally catabolize polyamines or catalyse the back-conversion reactions, in which spermine (Spm) is converted to spermidine (Spd) and Spd to putrescine. Hydrogen peroxide (H2O2) is a by-product of both the catabolic and back-conversion processes. Pharmacological and genetic approaches have started to uncover the roles of PAO-generated H2O2 in various plant developmental and adaptation processes such as cell differentiation, senescence, programmed cell death, and abiotic and biotic stress responses. Many of these studies have revealed that the superoxide-generating Respiratory Burst Oxidase Homolog (RBOH) NADPH oxidases control the same processes either upstream or downstream of PAO action. Therefore, it is reasonable to suppose that the two enzymes co-ordinately control the cellular homeostasis of reactive oxygen species. The intricate relationship between PAOs and RBOHs is also discussed, posing the hypothesis that these enzymes indirectly control each other's abundance/function via H2O2.

Introduction

Polyamines (PAs) are small, positively-charged organic molecules that are present in all living organisms. PAs show tissue- and organ-specific distribution patterns [1,2]. The most relevant PAs in plant cells are putrescine (Put), spermidine (Spd), and spermine (Spm). In plants, PA biosynthesis produces Put from arginine, catalysed by the arginine decarboxylase enzyme (ADC), but in several plants ornithine decarboxylase (ODC) can also synthesise Put from ornithine [2][3][4]. Put can be converted to Spd by spermidine synthase (SPDS), and Spd can be further converted to Spm by spermine synthase (SPMS). Thermospermine (t-Spm) is a specially modified polyamine synthesized by thermospermine synthase (named ACAULIS5 or ACL5 in Arabidopsis), which transfers an aminopropyl residue to the N-terminal amino group of Spd [5]. Some species also have cadaverine (Cad), which is synthesized from lysine through a fully independent pathway by ornithine/lysine decarboxylases (O/LDCs) [6]. PAs can be found in plant cells in different forms, such as free, covalently conjugated, or non-covalently conjugated ones. The covalently conjugated PAs can be further classified as perchloric acid-soluble or insoluble [2]. PAs are involved in cell division, organ development, leaf senescence, fruit development and ripening, and abiotic stress responses [2,7,8]. The involvement of PAs in stress tolerance has many aspects. They directly interact with and protect macromolecules and organelle membranes, acting as compatible solutes. Further, they (in)directly scavenge oxygen and hydroxyl radicals, and promote the production of H2O2 acting as a signal molecule.

PAOs in Plant Development

There is accumulating evidence suggesting that PAs interfere with various biological processes through the generation of H2O2 during their catabolism [4,7]. In agreement, the participation of PAOs has been reported in many developmental processes where ROS are involved (Table 1).

Table 1. Mutations in plant polyamine oxidase-coding genes affecting reactive oxygen species homeostasis and polyamine levels under various experimental conditions. The cell compartments where the mutations prevented the accumulation of the given polyamine oxidases are indicated.
At-Arabidopsis thaliana; Cs-Cucumis sativus; Zm-Zea mays; Spd-spermidine; Spm-spermine; t-Spm-thermospermine; Put-putrescine; pao/PAO-polyamine oxidase; RBOH-Respiratory Burst Oxidase Homolog; PRX-peroxidase; CAT-catalase; APX-ascorbate peroxidase; SOD-superoxide dismutase; GABA-gamma-aminobutyric acid; NA-not applicable. (Table columns include the change in ROS-related enzyme activities or gene expression, and the reference.)

PAOs in Cell Differentiation

Overexpression of the maize (Zea mays) ZmPAO1 resulted in early xylem differentiation and strongly affected root development in transgenic tobacco plants, in correlation with augmented H2O2 production and an increased rate of cell death [44]. This PAO was shown to accumulate in the cell wall of xylem precursors in parallel with secondary wall deposition [30]. The AtPAO5 enzyme, which specifically accumulates in the vascular system, has also been reported to participate in xylem differentiation [45,46]. Although most PAOs generate ROS as a by-product of their activity, AtPAO5 primarily controls PA levels, especially those of t-Spm [27,45,46]. Since AtPAO5 acts as a dehydrogenase rather than an oxidase, it does not produce excess H2O2 [45]. In this way, AtPAO5 indirectly controls xylem differentiation via maintaining the t-Spm homeostasis required for normal growth [37,45]. AtPAO5, via controlling the t-Spm level, was hypothesized to contribute to the tightly controlled interplay between auxins and cytokinins during the xylem differentiation process [46]. Nevertheless, there is ample evidence suggesting that the production of H2O2 by PA catabolism contributes to the cross-linking of cell wall polysaccharides during cell wall maturation [30,44]. For example, the apoplastic maize ZmPAO enzyme was shown to provide ROS for peroxidase-mediated wall stiffening during wound healing [47,48]. Moreover, rice OsPAO7 was hypothesized to control lignin synthesis in anther cell walls [49]. The polar growth of pollen tubes correlates with ROS accumulation at their tip region, controlling hyperpolarization-activated Ca2+ channels and cell wall stiffening [50]. Exogenous polyamines modulate pollen tube growth dependent on ROS generation [51]. In agreement, Spd treatment was reported to promote the opening of Ca2+ channels in pollen tubes [52]. Mutations in the AtPAO3 gene blocked the effect of Spd on Ca2+ channels and pollen tube growth, indicating that the effect was dependent on AtPAO3-mediated Spd degradation. Untreated pollen tubes of the AtPAO3 mutant also exhibited retarded growth, supporting the view that AtPAO3-generated ROS contributes to pollen tube growth [52]. The observation that the Arabidopsis polyamine transporter ABCG28 is required for the apical accumulation of ROS in growing pollen tubes and root hairs [53] further strengthens this hypothesis. PAs and their catabolism play a role in the induction of Ca2+ and K+ fluxes also in roots during stress adaptation [54], supporting the general significance of the PA-PAO-ROS-Ca2+ signalling connection. PAs are well known to be required for and to promote in vitro plant regeneration, although the mechanism is largely unknown (reviewed by [55]). It was suggested that metabolic degradation products of PAs, such as t-Spm and/or H2O2, can be at least partly responsible for the observed effects [56]. The expression of AtPAO5, unlike other AtPAO-coding genes, increased in parallel with the conversion of lateral root primordia to shoot meristem during direct in vitro organogenesis [56].
Furthermore, the ectopic expression of AtPAO5, but not AtPAO2, promoted the process. It was hypothesized that AtPAO5 exerted its effect via the modulation of t-Spm homeostasis rather than H2O2 production [56], since AtPAO5, unlike the other AtPAO enzymes, is known to have a stronger dehydrogenase than oxidase activity and has a high affinity for t-Spm as substrate [45]. Interestingly, AtPAO5 had been reported to have a negative effect on indirect (via auxin-induced callus formation) [57] and not direct (via cytokinin-induced meristem conversion) [56] shoot regeneration from Arabidopsis roots. This strengthens the view that maintaining t-Spm homeostasis is the primary function of AtPAO5 [27], since t-Spm was shown to suppress auxin signalling [58], which plays a different role in direct and indirect shoot regeneration from Arabidopsis roots [59]. Furthermore, H2O2 as a metabolic product of PAs was found to be essential for the maintenance and propagation of embryogenic calli and their conversion into somatic embryos in cotton [60], indicating a more general role of PA catabolism in plant regeneration in vitro.

PAOs in Senescence and Programmed Cell Death

The link among PAs, ROS, and leaf senescence has long been established (reviewed in [61]). Increased transcription and activity of PA catabolic enzymes have been demonstrated during dark-induced senescence of barley leaves [61,62]. Inhibiting PAO activity delayed the senescence process in parallel with Spm accumulation and reduced ROS production. In agreement, the Arabidopsis atpao4 mutant exhibited delayed senescence in correlation with high Spm levels but reduced ROS accumulation [38]. Altogether, the observations indicate that PAO-generated H2O2 is involved in leaf senescence. PA catabolism has also been associated with fruit ripening, a senescence-like developmental process, in grapes, tomatoes, and peaches [63][64][65]. Fruit ripening is associated with the increased expression of genes coding for apoplastic PAO enzymes, catalysing the terminal oxidation of PAs. Inhibition of PAO activity reduced ethylene production and flesh softening of peach fruits and the expression of ripening-related genes, while PA contents were dramatically increased. The role of PAO-generated H2O2 as a ripening-promoting signal molecule was hypothesized as one of the potential mechanisms [22]. PAO-generated H2O2 was shown to contribute to developmental PCD during xylem differentiation [44,66]. Polyamine oxidases were also found to be key elements in the oxidative burst leading to programmed cell death in cryptogein-treated cultured tobacco cells [67]. Moreover, tobacco plants overexpressing the transgene coding for the same maize PAO enzyme had high H2O2 levels, which in some cases led to programmed cell death (PCD) [68].

PAOs and Abiotic Stress Responses

There is overwhelming evidence that increasing the polyamine content contributes to cell protection under environmental stress conditions. PAs take part in osmoprotection, stabilisation of macromolecular complexes, maintenance of ion homeostasis, scavenging of ROS, and stress and hormone signalling [2,69]. Not only biosynthesis, but PA catabolism has also been shown to have a significant role in various abiotic stress responses [25,70]. These roles can at least partly be attributed to the products of PA catabolism catalysed by DAO and PAO enzymes, such as gamma-aminobutyric acid (GABA) and/or H2O2 [70].
GABA, which is mainly synthesized via PA-independent pathways but can also be produced from PA-derived 4-aminobutanal, is an important plant metabolite with various protective functions in stress tolerance [71]. Under hypoxic conditions, PA catabolism by CuAO and PAO enzymes contributed approximately 30% of the GABA content in Vicia faba [72]. Exogenous GABA, however, was shown to inhibit the breakdown of PAs, indicating negative feedback [71]. Besides GABA, the significance of PAO-dependent H2O2 generation has also been described in drought adaptation, namely during ABA- as well as ethylene-mediated stomatal closure in Vitis vinifera and Arabidopsis thaliana, respectively [73,74]. Fine-tuning of PA catabolism during stress conditions might be required to control H2O2 generation. ROS produced by PA-decomposing enzymes can serve as important signalling molecules to boost antioxidative defence reactions, but above a certain level they can augment stress-associated cellular damage or even lead to PCD [25,75]. For example, tobacco cells were found to secrete Spd into the apoplast, where it was oxidized by PAO, thereby generating H2O2 at a level that promoted PCD [32,68]. In citrus (Citrus sinensis), the apoplastic CsPAO4 was shown to produce H2O2 and cause oxidative damage under salt stress [41]. In tomato, PA catabolism (both DAO and PAO enzymes) responded more strongly to sublethal than to lethal doses of the stress hormone salicylic acid in salinity tolerance signalling [76]. Downregulated expression of PAO-coding genes increased the thermotolerance of tobacco, likely due to reduced heat-induced H2O2 generation [77]. PAO activity was shown to contribute to aluminium- or selenium-induced oxidative stress, further strengthening its pro-oxidant role during severe stresses [78,79]. However, the significance of PA catabolism in antioxidant defence signalling contributing to the salt tolerance of PA-overproducing transgenic tobacco plants was also demonstrated by different groups [68,80]. Furthermore, the contrasting salt stress tolerance of maize genotypes was found to be correlated with PA catabolism-dependent H2O2 production during salt stress, but this was DAO rather than PAO activity-dependent [81]. In the leaf blade elongation zone of salinized maize plants, the PAO activity was found to be strongly increased (approx. 20-fold) [82]. Together with increased apoplastic PA secretion, the PAO activity resulted in increased apoplastic ROS accumulation, contributing to leaf blade elongation under salt stress. Polyamine oxidase 5 loss-of-function atpao5 mutants of Arabidopsis are salt-stress tolerant; however, their salt tolerance did not correlate with diminished ROS production but rather with an increased level of t-Spm [37]. However, in the salt-tolerant pao1 pao5 double mutant, which lacks cytoplasmic PAOs, reduced ROS production was observed under NaCl stress [36]. Interestingly, the pao2 pao4 double mutant, carrying simultaneous mutations in two genes coding for peroxisomal PAOs, was salt-sensitive, while the pao2 pao3 pao4 triple mutant with no peroxisomal PAO enzymes was not viable [36]. The above observations highlight the differential contribution to stress tolerance of the various PAOs, with their different activities, by-products, and intracellular localisations.

PAOs in Host-Pathogen Interactions

PAO-generated H2O2 may also contribute to pathogen defence.
It may directly act as an anti-microbial agent in the apoplast or serve as a signalling molecule inducing the activation of defence genes [83,84]. PA levels and the activity of PA metabolic enzymes were found to be induced by various (biotrophic as well as necrotrophic) pathogens infecting plant tissues [85-87]. For example, in response to the biotrophic pathogen Pseudomonas syringae, PAO activity was found to be increased in tobacco [88]. The infection also induced Spm secretion that, together with the elevated PAO activity, resulted in strong H2O2 accumulation in the apoplast. The apoplastic Spm-mediated disease resistance could be compromised by PAO inhibitors [86,88]. Therefore, in biotrophic plant-pathogen interactions, PAO activity-related H2O2 generation might contribute to the hypersensitive response (HR), as described for tobacco mosaic virus- or Pseudomonas cichorii-infected tobacco [89,90] and powdery mildew (Blumeria graminis)-infected barley [89]. The oomycete Phytophthora cryptogea secretes cryptogein, a 10-kDa protein that induces HR in tobacco. Inhibiting the expression of the gene coding for apoplastic tobacco PAO prevented PA degradation, cryptogein-induced apoplastic H2O2 generation, and cell death [67]. The observation that cryptogein-induced kinase signalling was also compromised in these plants highlighted that, besides its cytotoxic effect, PAO-generated H2O2 also has a signalling role in the HR. A signalling role of Spm degradation-derived H2O2 was also hypothesized in the transcriptional responses of Arabidopsis to the HR-inducing cucumber mosaic virus [91]. While PA degradation and H2O2 production might beneficially control the HR response in biotrophic host-pathogen interactions, they might be detrimental in the case of infection by necrotrophic pathogens. In agreement, increased polyamine levels were reported to promote leaf necrosis during fungal infection in a PAO activity-dependent manner [86].

The PAO and NADPH-Oxidase Regulation Nexus

NADPH oxidases, the RBOHs, are key enzymes regulating the controlled production of reactive oxygen species (ROS) during plant development and adaptation [92]. PAO and RBOH enzymes are involved in many of the same cellular phenomena, raising the possibility that they are functionally interlinked in the control of ROS homeostasis. Both PAO and RBOH enzymes have a role in pollen tube growth [52,93,94]. The mechanism of acclimation to aluminium stress requires the operation of apoplastic PAO as well as RBOH enzymes [78]. In Solanum lycopersicum, melatonin acts as a signalling molecule to regulate the SlPAO1 and SlRboh3/4 genes during lateral root development, implying that both PAO and RBOH act downstream of melatonin in this process [95]. Longer uncommon polyamines (LUPAs) activate PAO and induce the expression of RBOH genes [96]. The convergent action of PAOs and RBOHs can be well exemplified by their control of the stomatal aperture. Stress factors, as well as plant hormones, induce stomatal closure via the production of H2O2 [40,97]. This H2O2 production activates ROS-dependent Ca2+ channels, thus increasing cytosolic Ca2+, triggering the signal transduction cascade and leading to the closure of stomata [98]. The H2O2 mainly arises from the O2•− generated by RBOHs [73,92,99,100]. In maize leaf cells, peroxidase and PAO activities also contributed to ABA-induced H2O2 generation, although to a lower degree than RBOH [101].
Exogenous PAs Put, Spd, and Spm increase the level of ROS in guard cells and promote stomatal closure in Arabidopsis [40]. Application of either diphenyleneiodonium (DPI), an inhibitor of NADPH oxidase, 2-bromoethylamine (BEA), an inhibitor of copper amine oxidase, or 1,12-diaminododecane (DADD), an inhibitor of polyamine oxidase, could only partially reverse the stomatal closure. DPI in combination with BEA/DADD, however, completely reversed the closure brought about by PAs. Therefore, the production of ROS during PA-mediated stomatal closure is controlled by both RBOH and amine oxidases. Stomatal closure in response to ethylene was shown to be dependent on AtRbohF-mediated H2O2 production [73]. Nevertheless, the use of PAO inhibitors on Arabidopsis epidermal peels hindered ethylene's ability to stimulate H2O2 production and stomatal closure [102]. In agreement, ethylene induces AtPAO2 and AtPAO4 gene transcription and PAO activity. Furthermore, the over-expression of AtPAO2 and AtPAO4 in Arabidopsis plants led to increased production of H2O2 and a higher sensitivity of stomatal movement to ethylene [102]. Other factors that induce stomatal closure, such as dehydration and high salinity, enhanced the expression of AtPAO2 and AtPAO4 to different degrees, indicating a general role of PAO-generated H2O2 production in the stress-induced stomatal response [102]. The above observations support the view that, although the majority of H2O2 is produced by RBOH enzymes in response to stomata-closing conditions, the contribution of PAOs cannot be neglected. A similar conclusion was drawn when investigating the oxidative burst associated with hyperhydricity in in vitro cultures of garlic, where RBOH activity was more prominent than that of PAO [103]. RBOHs are also key players in the wound and jasmonic acid responses of plants [104-106]. The inhibition of MeJA-induced ROS production by treatment with DPI was observed in tomato, rice, and pea plants [107-109]. Pre-treatment with DPI or a lack of AtRbohD or AtRbohF almost entirely prevented the accumulation of H2O2 in Arabidopsis [110]. However, in maize, apoplastic polyamine oxidase (ZmPAO) was reported as the main producer of ROS in response to MeJA and wounding [47]. The researchers used N-prenylagmatine (G3), a specific and selective ZmPAO inhibitor, to study its effects on wound-induced cell wall lignification and suberinization in vivo. In addition, they examined transgenic tobacco plants that constitutively express high levels of ZmPAO in their cell walls. G3 significantly inhibited lignin and suberin deposition in the wound periderm of maize mesocotyls. Furthermore, ZmPAO overexpression accelerated the same process in wounded tobacco stems, especially if the plants were treated with the ZmPAO substrate spermidine. Spd enhanced lignosuberized deposition in the cell walls of wild-type tobacco as well, suggesting that an endogenous amine oxidase might be involved in wound-healing processes not only in maize but also in tobacco plants. Further experimental evidence indicates that CuAOs also participate in the wound response [47]. Therefore, the degree of contribution of the various enzymes to wound-induced H2O2 generation might be species-specific [47]. The above examples highlight the correlated action of PAO and RBOH enzymes in several plant responses. However, many studies support the hypothesis that PAO and RBOH activities are not simply correlated but are interconnected and can impact each other.
There is ample evidence suggesting that exogenous PAs alter the transcription of RBOH genes and/or the activity of the RBOH enzymes (Table 2). In tobacco leaf protoplasts, exogenous PAs were shown to reduce the accumulation of superoxide anions (O2•−), likely generated by microsomal NADPH oxidase during tissue maceration [111]. Andronis et al. [39] discovered that exogenous PAs, especially Spd, increased oxygen consumption through an NADPH oxidase-dependent mechanism. The NADPH oxidase blocker DPI attenuated this increase. The loss of function of the AtPAO3 gene resulted in increased production of O2•− through NADPH oxidase, which in turn activated the mitochondrial alternative oxidase (AOX) pathway. Overexpression of AtPAO3 led to an increased but balanced production of both H2O2 and O2•−. These observations indicate that the ratio of O2•− to H2O2 controls the respiratory chain in mitochondria, and PAO-dependent production of O2•− by NADPH oxidase alters this ratio in favour of the AOX pathway of the electron transfer chain. Seo et al. [80] found that the expression of the NtRbohD and NtRbohF genes was reduced under NaCl stress conditions in S-adenosyl-L-methionine decarboxylase (SAMDC)-overexpressing Nicotiana tabacum plants with upregulated PA content. Thus, they concluded that polyamines interfere with the production of ROS through RBOH enzymes. Treatment of cucumber (Cucumis sativus L.) plants with Spd decreased the activity of NADPH oxidases and NADPH-dependent O2•− generation in microsomes, alleviating H2O2 generation and injury under chilling stress [112]. Inhibiting PA biosynthesis enhanced microsomal NADPH oxidase activity and chilling injury in stressed plants. In agreement, the direct inhibitory effect of Spd and Spm on the activity of a Lotus glaber NADPH oxidase in vitro and on O2•− generation in vivo was also demonstrated [113]. There are also examples where polyamines or polyamine degradation enhanced RBOH-dependent ROS generation. The salinity-alkalinity stress tolerance of tomato seedlings could be increased by exogenous Spd via RBOH1-dependent H2O2 generation [117]. Exogenous polyamines increased the expression of RBOH-coding genes and the NADPH oxidase activity in apricot fruits, limiting the oxidative damage caused by Alternaria alternata. Gémes et al. [42] hypothesized that apoplastic PAO activity controls that of RBOH to amplify ROS generation in a positive feedback loop. These observations implicate polyamine metabolism in controlling the activation of RBOH gene transcription, enzyme activity, and thus RBOH-mediated ROS generation. There are, however, observations supporting the hypothesis that RBOH activity in turn plays a role in regulating the metabolism of polyamines. Demiralay et al. [118] studied polyamine metabolism in detail after the application of H2O2 or an RBOH inhibitor to drought-stressed maize plants. It was found that inhibition of the RBOH enzyme by DPI enhanced polyamine degradation, while exogenous H2O2 promoted their synthesis, and RBOH played a key role in this regulation [118]. The observation that exogenous H2O2 increased, while DPI decreased, the expression of the arginine decarboxylase (ADC) and agmatine aminohydrolase (AIH) genes, which encode enzymes involved in Put synthesis, suggests that H2O2 produced by RBOH may contribute to the regulation of polyamine biosynthesis [118].
The expression level of DAO- and PAO-coding genes was higher in DPI-treated and lower in H2O2-treated plants than in control plants, supporting the idea that RBOH also controls polyamine degradation. The RBOH-PAO crosstalk was also demonstrated in the Arabidopsis thaliana-Pseudomonas syringae pathosystem [35]. Pseudomonas infection upregulates the transcription of the AtPAO1 and AtPAO2 genes, and the double mutant atpao1-1 × atpao2-1 has increased susceptibility to the pathogen. The polyamine oxidase double mutant showed not only disturbed H2O2 but also disturbed O2•− generation, which could be associated, among others, with the increased activity of RBOH enzymes. Lower expression levels of the AtRbohD and AtRbohF genes in the mutant background were also reported. It was, therefore, hypothesized that peroxisomal Spm oxidation by PAOs negatively regulates RBOH activity in Arabidopsis by an unknown mechanism that could involve H2O2 signalling [35]. Yoda et al. studied ROS generation during cryptogein-induced cell death in tobacco cell culture [67]. Co-treatment of the cells with cryptogein and α-difluoromethylornithine (DFMO), an irreversible inhibitor of polyamine synthesis via the ornithine decarboxylase enzyme, effectively suppressed H2O2 production and prevented cell death. However, DFMO hardly had any influence on cryptogein-induced O2•− production during the first 4 h of the elicitation of cells. The results suggested that at least two systems are involved in cryptogein-induced programmed cell death, featuring RBOH in the early and polyamine degradation in the late stage. The findings presented by Gémes et al. [42] suggest that a feed-forward loop involving apoplastic PAO and RBOH controls ROS accumulation in response to salt stress in tobacco. RBOH was found to be required for ROS production induced by NaCl exposure in the early stages, while PAO was dispensable at this stage. The subsequent activation of apoplastic PAO was hypothesized to amplify ROS accumulation, thereby enhancing RBOH activity. At deleterious salt concentrations, this apoplastic PAO-fed ROS amplification loop causes the accumulation of ROS to surpass a toxicity threshold, resulting in PCD.

Coordination of PAO and NADPH-Oxidase Activities

It is unclear how the activities of PAO and RBOH might be interconnected, since the experimental data derived from various species and experimental systems are not fully consistent. However, the available data allow us to hypothesize some ways for their potential interaction (Figure 1).
Figure 1. Hypothesized ways of interaction between the PAO and RBOH enzymes (caption partly recovered): PAO-generated H2O2 can also open Ca2+ channels, and the increased intracellular Ca2+ level augments the activity of the RBOH. Note that the various interactions are context-dependent, and several other, more indirect interactions may exist between the enzymes (e.g., via the activation of ROS-detoxifying mechanisms). PAO-polyamine oxidase; PAs-polyamines; MAPK-mitogen-activated protein kinase; RBOH-respiratory burst oxidase homolog (NADPH oxidase). Pointed arrows indicate activation; round-pointed ones indicate inhibition.

Yoda et al. [67] hypothesized, based on their observations with cryptogein-elicited tobacco cell cultures, that H2O2 produced during the early phase of elicitation by RBOH activates a MAPK cascade, including SIPK, that is required to provoke the second phase of H2O2 generation by PAO [67]. Similar timing of RBOH and PAO activities has been observed in other experimental systems, suggesting that this is a rather common scenario (see earlier). However, the effect of RBOH-generated H2O2 on PAO gene expression is controversial, since H2O2 application was shown to decrease, while DPI increased, PAO gene expression in maize seedlings [118]. RBOH activity can enhance PA synthesis [118], and the elevated PA level may contribute to increased PAO activity [40,118]. The other direction, the effect of PAO activity on RBOH-mediated ROS generation, is more complex. A feed-forward loop linking RBOH and PAO activities was hypothesized to be established during lethal salt stress, leading to uncontrolled H2O2 generation and PCD in tobacco [42]. In this scenario, PAO-generated H2O2 could open Ca2+ channels and thereby increase the activity of Ca2+-regulated RBOH enzymes. PAO-mediated regulation of Ca2+ channels was demonstrated in other systems, such as in pollen tubes, where PAO and RBOH enzymes are both required for tip growth [52,93,119]. There are reports, however, where mutations in the Arabidopsis AtPAO1- and AtPAO2-coding genes resulted in enhanced O2•− generation and RBOH activity [35,39], indicating a negative effect of PAO action on that of RBOH. This can partly be explained by the AtPAO-dependent altered expression of some of the AtRboh genes. Interestingly, however, while the expression of the investigated AtRbohD and AtRbohF genes was increased in the atpao2-1 mutant, it was decreased in the double atpao1-1 atpao2-1 mutant. Gémes et al. studied the effect of an apoplastic PAO enzyme in tobacco [42], while the AtPAO1 and AtPAO2 [35] and AtPAO3 [39] enzymes are cytosolic and peroxisomal, respectively. Thus, PAO enzymes might have an intracellular localisation-dependent effect on RBOH gene expression and/or activity. Again, further studies dissecting the specific roles of the PAO isoenzymes are required to clarify the picture. The existence and the exact nature of the hypothesized RBOH-PAO regulatory loops still need further verification, as their clarification can lead to a better understanding of how plant cells fine-tune H2O2 generation to ensure their proper functioning, survival, or programmed death depending on the developmental/environmental context.

Conclusions and Future Perspectives

Although PAO enzymes can differ in expression, biochemical activity, and intracellular localisation and may exhibit species-specific differences in these parameters, they all contribute to the generation of H2O2. Even though H2O2 is only a by-product of the polyamine degradation activity of PAOs, accumulating evidence shows that PAOs control various cellular processes via H2O2-mediated pathways. This review attempted to summarize the current knowledge about these pathways.
It must be emphasized, however, that PAO action is more diverse. Besides H2O2 generation, PAO activity alters polyamine levels and ratios and can contribute to the generation of regulatory metabolites/signalling molecules such as t-Spm, GABA, and NO. Since the interrelation of these PAO activities and their contribution to cellular functions is rather complex, we restricted our focus to H2O2 generation-dependent mechanisms. PAO-generated H2O2 is a double-edged sword; it is required for normal development and can enhance stress adaptation, but it can also be harmful and lead to cell death if it exceeds a threshold level. Stress-induced or developmentally regulated augmented biosynthesis, or the exogenous application of higher polyamines (Spm or Spd), often induces PAO activity. PAs can thus exert a concentration-dependent positive or negative effect on cellular functions via their degradation, during which H2O2 is produced. H2O2 may serve as a signalling molecule to alter Ca2+ homeostasis, MAPK signalling, and gene expression, modifying cellular processes such as growth, division, differentiation, and adaptation. The contribution of PAO-generated H2O2 to these processes is hard to define, since the actual level of H2O2 is controlled by many other enzymes, including producers (RBOH, superoxide dismutase, CuAO) and removers (catalase, peroxidases), as well as non-enzymatic pro- and anti-oxidants. Future research should unravel the details of how PAO activity is integrated into the cellular machinery that controls ROS homeostasis. The isoenzyme-specific spatial (apoplast, cytoplasm, peroxisome) and temporal (sequence of events) contexts of this integration deserve special attention.

Conflicts of Interest: The authors declare no conflict of interest.
On subtensors of high partition rank

We prove that for every positive integer $d \ge 2$ there exist polynomial functions $F_d, G_d: \mathbb{N} \to \mathbb{N}$ such that for each positive integer $r$, every order-$d$ tensor $T$ over an arbitrary field and with partition rank at least $G_d(r)$ contains a $F_d(r) \times \cdots \times F_d(r)$ subtensor with partition rank at least $r$. We then deduce analogous results on the Schmidt rank of polynomials in zero or high characteristic.

Introduction and main result

Let $K$ be a field, let $n_1, n_2, r$ be positive integers and let $A \in K^{n_1} \otimes K^{n_2}$ be a matrix of rank $r$. Then $A$ contains an $r \times r$ submatrix of rank equal to $r$. We want to prove a similar statement for order-$d$ tensors and partition rank. Let $n_1, \dots, n_d$ be positive integers, and let $T \in K^{n_1} \otimes \cdots \otimes K^{n_d}$ be a tensor. Then the partition rank of $T$, introduced by Naslund [Nas20] and denoted $\mathrm{pr}(T)$, is the smallest nonnegative integer $r$ such that $T$ can be written as a sum of $r$ terms of the form $A \otimes B$, where $A \in \bigotimes_{i \in I} K^{n_i}$ and $B \in \bigotimes_{i \notin I} K^{n_i}$ and $I$ is a proper subset of $[d] := \{1, \dots, d\}$ containing 1. We stress that $I$ may vary among the $r$ terms. For $d = 2$, though, $\{1\}$ is the only possible value for $I$, and it follows that then $\mathrm{pr}(T)$ is the matrix rank of $T$. Our paper will be concerned mostly with $d \ge 3$. If $T$ is an order-$d$ tensor and $X_1, \dots, X_d$ are subsets of $[n_1], \dots, [n_d]$ respectively, then we write $T[X_1, \dots, X_d]$ for the subtensor of $T$ obtained by restricting the entries of $T$ to the product $X_1 \times \cdots \times X_d$. Set $r := \mathrm{pr}(T)$. We want to show that $T$ contains a subtensor of size some (not too large) function of $r$ and partition rank at least some (not too small) function of $r$. The contrapositive says that if all small subtensors of $T$ have bounded partition rank, then $T$ itself has bounded partition rank. This is what we will prove.

Theorem 1.1. Let $d \ge 2$ be a positive integer. There exist functions $F_d, G_d: \mathbb{N} \to \mathbb{N}$ such that if $r \ge 1$ is a positive integer and $T$ is an order-$d$ tensor over an arbitrary field such that every $F_d(r) \times \cdots \times F_d(r)$ subtensor of $T$ has partition rank at most $r$, then $T$ has partition rank at most $G_d(r)$. Furthermore, we may take the bounds $F_d(r) \le (2^{d+3} r)^{2d}$ and $G_d(r) \le (2^{d+3} r)^{2d^2}$ for all $r$.

We remark that the bounds on the quantities $F_d(r)$ and $G_d(r)$ in the theorem hold for any field $K$, though it is conceivable that "optimal" functions $F_d$ and $G_d$ do depend on $K$. Furthermore, we believe that it is likely that Theorem 1.1 could still be true with both bounds $F_d(r)$ and $G_d(r)$ taken to grow linearly in $r$ for every fixed $d$, although we do not have a proof of that. In Section 4 we shall prove that we may take $F_3(r) = O(r^{3/2})$ and $G_3(r) = O(r^3)$. The proof is inspired by an attempt to extend the following matrix argument to higher-dimensional tensors. If $A$ is a matrix and $r$ is the largest nonnegative integer such that there exists an $r \times r$ submatrix $A[X, Y]$ of $A$ with rank $r$, then for every $x \in X^c$ and $y \in Y^c$ we have that $\det A[X, Y] \ne 0$ but $\det A[X \cup \{x\}, Y \cup \{y\}] = 0$, so we can express all coefficients $A(x, y)$ with $x \in X^c$ and $y \in Y^c$ in the simple way
$$A(x, y) = A[\{x\}, Y] \; A[X, Y]^{-1} \; A[X, \{y\}]. \qquad (1)$$
As with the matrix argument, because every order-$(d-1)$ slice of $T$ has partition rank at most 1, a positive answer to Question 1.2 implies Theorem 1.1 with $F_d(r) = r$ and $G_d(r) = C_d(r) + dr$. In particular, it would suffice that $C_d$ be linear in $r$ for $G_d$ to be linear in $r$. It is however not obvious to us that Question 1.2 has a positive answer.
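To see the identity (1) in action, here is a minimal numpy sketch: it builds a random rank-$r$ matrix and checks that an entry outside a full-rank $r \times r$ submatrix is recovered by (1). The concrete sizes and the choice $X = Y = \{1, \dots, r\}$ (which is generic for this random construction) are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n1, n2 = 3, 6, 7
A = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # a rank-r matrix
X, Y = list(range(r)), list(range(r))  # generically a full-rank r x r submatrix
assert np.linalg.matrix_rank(A[np.ix_(X, Y)]) == r
x, y = 4, 5                            # an entry outside X x Y
pred = A[x, Y] @ np.linalg.inv(A[np.ix_(X, Y)]) @ A[X, y]
assert np.isclose(A[x, y], pred)       # identity (1) holds
```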
Unlike in the case of matrices, where all smallest-length rank decompositions of a full-rank matrix can be deduced from one another via a change of basis, the set of partition rank decompositions of a given full-rank tensor is richer in general, already in the $d = 3$ case; this makes it harder to obtain an analogue of the expression (1). In the absence of such a direct expression for $T(x_1, \dots, x_d)$, we may ask for a weaker description: an equation of which $T(x_1, \dots, x_d)$ is a solution, which leads the way to the arguments that our proof will involve. A second difficulty which we have to circumvent is that we do not know that the set of tensors in $K^{n_1} \otimes \cdots \otimes K^{n_d}$ with partition rank at most $r$ is Zariski-closed in general: although this is true for algebraically closed fields $K$ and $d = 3$ (as shown by Sawin and Tao [TS16]), it is likely false in general, even over algebraically closed $K$. (Indeed, the corresponding notion of bounded strength for quartics is not closed [BBOV22].) This will nonetheless not interfere with our argument, as it will suffice for us to find a polynomial whose zero-set merely contains the set of tensors in $K^{n_1} \otimes \cdots \otimes K^{n_d}$ with partition rank at most $r$, rather than being equal to it. Indeed, our main stepping stone towards proving Theorem 1.1 will be the following statement.

Theorem 1.3. Let $r$ and $m$ be positive integers and suppose that, for some positive integers $N_1, \dots, N_d$, there exists a nonzero polynomial of degree at most $m$ on $\mathbb{C}^{N_1} \otimes \cdots \otimes \mathbb{C}^{N_d}$ that vanishes on all tensors in $\mathbb{C}^{N_1} \otimes \cdots \otimes \mathbb{C}^{N_d}$ of partition rank at most $r$. Then for any field $K$, any positive integers $n_1, \dots, n_d \ge m$, and any tensor $T \in K^{n_1} \otimes \cdots \otimes K^{n_d}$, if all $m \times \cdots \times m$ subtensors of $T$ have partition rank at most $r$, then $T$ has partition rank at most
$$d(m-1) + \sum_{\substack{I \subseteq [d] \\ 1 \le |I| \le \lfloor d/2 \rfloor}} (m-1)^{d-|I|}.$$

We note that in the case of matrices, the determinant of the top-left submatrix is such a polynomial, and we may hence take $m = r + 1$. This yields $4r$, almost recovering the bound $3r$ discussed earlier. In the case of high-characteristic fields, we may deduce from Theorem 1.1 an analogue for polynomials. If $P$ is a homogeneous polynomial in several variables over a field $K$ and with degree at least 2, then we let $\mathrm{rk}\, P$ be the smallest positive integer $k$ such that we may write $P = \sum_{i=1}^{k} Q_i R_i$ for some homogeneous polynomials $Q_i, R_i$ of positive degrees. This notion of rank is known as the Schmidt rank or strength of $P$. For every subset $U \subseteq [n]$, we write $P[U]$ for the polynomial in $K[x_u \mid u \in U]$ obtained by substituting in the polynomial $P$ the value 0 for all variables in $[n] \setminus U$.

Theorem 1.4. Let $K$ be a field, let $d \ge 2$, $r \ge 1$ be positive integers and set $D := \binom{d}{\lfloor d/2 \rfloor} \le 2^d$. Assume that char $K = 0$ or char $K > d$. If $P$ is a homogeneous polynomial in variables $x_1, \dots, x_n$ over $K$ with $\deg P = d$ that satisfies $\mathrm{rk}\, P[U] \le r$ for every subset $U \subseteq [n]$ with size at most $d F_d(r \cdot D)$, then $\mathrm{rk}\, P \le G_d(r \cdot D)$.

1.1. Relations to the literature.

The main results and techniques of the present paper may be contrasted with those of existing works in the literature. A general framework for studying restriction-closed properties was started in [Dra19], but the techniques in that paper assume that the property is Zariski-closed, which, as we have explained, does not appear to be the case for the set of tensors in $K^{n_1} \otimes \cdots \otimes K^{n_d}$ with partition rank at most $r$. The more recent work [BDR22] assumes that the field is finite.
High-rank subtensors were also studied in [Kar22], using very different arguments: Theorem 4.1 there is similar to (our) Theorem 1.1, but assumes that the field is finite, an assumption which is heavily used in the proof, through a connection between the partition rank and the analytic rank; although the qualitative part of Theorem 1.1 follows as a special case of the first main theorem stated in the introduction of that paper, polynomial bounds on the functions $F_d$ and $G_d$ are a novelty of the present paper. Again leaving aside the matter of the bounds, one can deduce from a universality theorem of Kazhdan and Ziegler [KZ20] a qualitative version of Theorem 1.4 where the assumption is replaced by the requirement that every image of $T$ under any $d$-tuple of linear transformations has partition rank at most $r$. This condition is much stronger than our requirement on subtensors. A general method for passing from a result about linear maps to a result about subtensors, which involves fundamental results about finitely generated FI-modules, is described in [BDR22]. Let us finally mention a paper of Briët and Castro-Silva [BCS22] on random restrictions of tensors and polynomials, which provides an additional motivation for the present line of work. Although none of their results imply ours or the other way around, they identify linear bounds in Theorem 1.1 and in Theorem 1.4 as respectively providing a natural route to recover a random restriction theorem for tensors and for polynomials. Our proof of Theorem 1.4 shows that linear bounds in Theorem 1.1 would suffice to obtain linear bounds in Theorem 1.4.

1.2. Organisation of this paper.

In Section 2 we prove Theorem 1.3, and in Section 3 we find a bound on $m$ in Theorem 1.3 in terms of $r$ and derive Theorem 1.1. In Section 4 we use classical invariant theory to derive a slightly better bound on $m$ in the special case $d = 3$, and in Section 5 we deduce Theorem 1.4 from Theorem 1.1.

2. Proof of Theorem 1.3

Lemma 2.1. Suppose that there exists a nonzero polynomial of degree at most $m$ on $\mathbb{C}^{n_1} \otimes \cdots \otimes \mathbb{C}^{n_d}$ that vanishes on all tensors of partition rank at most $r$. Then, after possibly decreasing $m$, there exists such a nonzero polynomial $f$ on $\mathbb{C}^{m} \otimes \cdots \otimes \mathbb{C}^{m}$ that has integer coefficients with gcd 1 and is a weight vector of weight $(1^m, \dots, 1^m)$.

Proof. Since tensors of partition rank at most $r$ are the image of a map defined over $\mathbb{Z}$, we may assume that $f$ has integer coefficients. Since they are preserved by coordinate scalings, we may further assume that $f$ is a weight vector, i.e., $f$ gets scaled by $t_1^{\alpha_1} \cdots t_d^{\alpha_d}$, for certain $\alpha_i \in \mathbb{Z}_{\ge 0}^{n_i}$, when the tensor gets acted upon by $(\mathrm{diag}(t_1), \dots, \mathrm{diag}(t_d)) \in \prod_{i=1}^{d} \mathrm{GL}_{n_i}(\mathbb{C})$. Now if the $j$-th entry of $\alpha_i$ is strictly greater than 1, then acting with the Lie algebra element $E_{j, n_i+1}$ on $f$ we get another polynomial that vanishes on tensors of partition rank at most $r$ and which has weight $(\alpha_1, \dots, \alpha'_i, \dots, \alpha_d)$, where $\alpha'_i := \alpha_i - e_j + e_{n_i+1}$. We replace $n_i$ by $n_i + 1$ and $\alpha_i$ by $\alpha'_i$. Continue in this manner until all $\alpha_i$ only have 1s and 0s as entries. The 0s correspond to slices of variables that do not occur in $f$. After removing these, $f$ has weight $(1^{n_1}, \dots, 1^{n_d})$, where we note that the $n_i$ may have changed. Each variable has weight $(e_{j_1}, \dots, e_{j_d})$ for some $j_i \in [n_i]$, and it follows that all $n_i$ are equal to a common number $m$. Finally, divide the resulting $f$ by an integer to ensure that the coefficients have gcd 1.

Now consider the image of $f$ under the coefficient-wise homomorphism $\mathbb{Z} \to K$, i.e., in $K[x_{i_1, \dots, i_d} \mid i_j \in [m]]$. This is nonzero, since the coefficients of $f$ have gcd 1, and it is still a weight vector of weight $(1^m, \dots, 1^m)$. From now on, we write $f$ for the image. Note that $f$ vanishes on tensors in $K^m \otimes \cdots \otimes K^m$ of partition rank at most $r$.
After suitable permutations of the indices, we may write $f =: h_0 = x_{m,\dots,m}\, h_1 + r_1$. Similarly, after further permutations on the first $(m-1)$ indices, for $k = 1, \dots, m-1$ we have
$$h_k = x_{m-k,\dots,m-k}\, h_{k+1} + r_{k+1},$$
where $h_{k+1}$ is a nonzero weight vector of weight $(1^{m-k-1} 0^{k+1}, \dots, 1^{m-k-1} 0^{k+1})$ and $r_{k+1}$ a weight vector of weight $(1^{m-k} 0^{k}, \dots, 1^{m-k} 0^{k})$ that does not involve $x_{m-k,\dots,m-k}$. Note that $h_m$ is a nonzero constant and that $r_m = 0$. Furthermore, for each $i = 1, \dots, d$, every term of $r_{k+1}$ contains precisely one variable that has an index $m-k$ on position $i$. These $d$ indices $m-k$ are distributed over at least two and at most $d$ of the variables in the term. So $r_{k+1}$ is a linear combination of terms of the following form (illustrated for $d = 4$):
$$x_{m-k,\, j_2,\, j_3,\, m-k} \cdot x_{j_1,\, m-k,\, m-k,\, j_4} \cdot q,$$
where $j_1, j_2, j_3, j_4 \in [m-k-1]$ and $q$ is a monomial in variables all of whose indices lie in $[m-k-1]$.

Proposition 2.2. Let $n_1, \dots, n_d \ge m$ be positive integers and let $T \in K^{n_1} \otimes \cdots \otimes K^{n_d}$. Suppose that, for some $k = 0, \dots, m-1$, the whole $G := \prod_i \mathrm{Sym}([n_i])$-orbit of $h_k$ vanishes at $T$, but not the whole $G$-orbit of $h_{k+1}$ vanishes at $T$. Then $T$ has partition rank at most
$$d(m-k-1) + \sum_{\substack{I \subseteq [d] \\ 1 \le |I| \le \lfloor d/2 \rfloor}} (m-k-1)^{d-|I|}. \qquad (2)$$

Proof. Without loss of generality, we may assume that $h_{k+1}(T)$ is nonzero and the whole $G$-orbit of $h_k$ is zero on $T$. We then have
$$0 = h_k(T) = t_{m-k,\dots,m-k}\, h_{k+1}(T) + r_{k+1}(T), \quad \text{so} \quad t_{m-k,\dots,m-k} = -\frac{r_{k+1}(T)}{h_{k+1}(T)}.$$
For instance, if $d = 4$, then $t_{m-k,\dots,m-k}$ is a linear combination of terms such as
$$\frac{1}{h_{k+1}(T)}\; t_{m-k,\, j_2,\, j_3,\, m-k}\; t_{j_1,\, m-k,\, m-k,\, j_4}\; q(T),$$
where $q(T)$ is a monomial in entries of $T$ all of whose indices lie in $[m-k-1]$. Note that this fixes all indices up to $m-k-1$. As a consequence, we find that the subtensor of $T$ admits a decomposition as a sum of tensor products of tensors in which every term is divisible by some tensor like (for $d = 4$)
$$T[\{j_1\}, \ast, \ast, \{j_4\}]$$
for some choice of $(j_1, j_4) \in [m-k-1]^2$. In each term of this decomposition, there is at least one factor which is a tensor of order at most $\lfloor d/2 \rfloor$. The number of these tensors equals
$$\sum_{\substack{I \subseteq [d] \\ 1 \le |I| \le \lfloor d/2 \rfloor}} (m-k-1)^{d-|I|},$$
the second term in (2). Finally, the remainder of $T$ admits a slice rank decomposition where each term is divisible by some standard basis vector in $K^{m-k-1}$, in one of the $d$ factors. This is accounted for by the first term in (2).

Proof of Theorem 1.3. By construction of $f = h_0$, its entire $G$-orbit vanishes on the given tensor $T$. On the other hand, $h_m$ is a nonzero constant. Hence there exists a $k \in \{0, \dots, m-1\}$ such that the entire $G$-orbit of $h_k$ vanishes at $T$ but not the entire $G$-orbit of $h_{k+1}$ vanishes at $T$. Hence Proposition 2.2 applies with this $k$. Now the bound in Theorem 1.3 follows from the bound in (2) by taking the worst $k$, namely, $k = 0$.

3. A degree bound for f and proof of Theorem 1.1

Theorem 3.1. Let $d \ge 2$ be a positive integer. Then for every positive integer $r \ge 1$ and for $n = 2^{d+3} r$, there exists a polynomial $f$ of degree at most $m \le (2^{d+3} r)^{2d}$ in $\mathbb{C}[x_{i_1,\dots,i_d} \mid i_j \in [n]]$ that vanishes on all tensors in $\mathbb{C}^n \otimes \cdots \otimes \mathbb{C}^n$ ($d$ factors) with partition rank at most $r$.

Proof. We write $X_{n,r}$ for the set of order-$d$ tensors in $\mathbb{C}^n \otimes \cdots \otimes \mathbb{C}^n$ with partition rank at most $r$. The set $X_{n,r}$ is contained in the set $X'_{n,r}$ of order-$d$ tensors $T$ that have a partition rank decomposition of the type
$$T = \sum_I \sum_{i=1}^{r} A_{I,i} \otimes B_{I^c,i} \qquad (4)$$
for certain tensors $A_{I,i} \in \bigotimes_{j \in I} \mathbb{C}^n$ and $B_{I^c,i} \in \bigotimes_{j \in I^c} \mathbb{C}^n$. Note that we could take the summation over half of these $I$, but for simplicity we will not do so here. From now on, the range of $I$ in summations, indexations etc. will always be taken to be the set of proper nonempty subsets of $[d]$. Let $\pi_r = \prod_I \big( (\bigotimes_{j \in I} \mathbb{C}^n) \times (\bigotimes_{j \in I^c} \mathbb{C}^n) \big)^r$ be the parameter space for decompositions as in (4). We let $P_{2m}(\pi_r)$ be the vector space of homogeneous polynomials of degree $2m$ in the entries of the $A_{I,i}$ and $B_{I^c,i}$, and let $P_m(\mathbb{C}^{[n]^d})$ be the linear space of homogeneous polynomials of degree $m$ in the $n^d$ entries of tensors of $\mathbb{C}^{[n]^d}$. Letting $\varphi : \pi_r \to \mathbb{C}^{[n]^d}$ be the parametrisation defined by taking $\varphi\big((A_{I,i}, B_{I^c,i})_{I,i}\big)$ to be the right-hand side of (4), the pull-back $\varphi^{\#} : P_m(\mathbb{C}^{[n]^d}) \to P_{2m}(\pi_r)$, $f \mapsto f \circ \varphi$, is a linear map. If $f$ is an element of $\ker \varphi^{\#}$, then $f$ vanishes on the image $X'_{n,r}$ of $\varphi$, so it now suffices to check that $\ker \varphi^{\#} \ne \{0\}$. In turn, to show this, it suffices to show that $\dim P_{2m}(\pi_r) < \dim P_m(\mathbb{C}^{[n]^d})$, that is,
$$\binom{n^d + m - 1}{m} > \binom{S_{n,r} + 2m - 1}{2m}, \qquad (5)$$
where $S_{n,r} = r \sum_I (n^{|I|} + n^{d-|I|})$ is the number of variables on $\pi_r$. For every $1 \le |I| \le d-1$ we have $n^{|I|} + n^{d-|I|} \le 2n^{d-1}$, so $S_{n,r} \le 2^{d+1} r n^{d-1}$. Assuming that $m \ge n^d/4 \ge 3$ and $n = 2^{d+3} r$, the left-hand side of (5) is therefore at least
$$\left(\frac{m}{n^d}\right)^{n^d}.$$
Meanwhile, written as the fraction $\frac{(2m+1)(2m+2)\cdots(2m+S_{n,r}-1)}{(S_{n,r}-1)!}$, the right-hand side of (5) is at most its numerator, and hence at most $(n^d)^{n^d}$. Therefore, for (5) to hold, it suffices that
$$\left(\frac{m}{n^d}\right)^{n^d} \ge (n^d)^{n^d},$$
which simplifies to $m \ge n^{2d}$. Since $n = 2^{d+3} r$, it suffices that $m \ge (2^{d+3} r)^{2d}$ for there to exist a polynomial $f$ which is zero on $X'_{n,r}$ and hence on $X_{n,r}$.
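As a rough numerical sanity check of the dimension comparison (5) — not part of the proof — one can evaluate both binomial coefficients in log scale via lgamma, since the exact counts are astronomically large. The formulas and the bound $S_{n,r} \le 2^{d+1} r n^{d-1}$ are taken from the proof above (using the upper bound for $S_{n,r}$ only makes the right-hand side larger, so the check is conservative); the floating-point treatment is my own device.

```python
from math import lgamma

def log_binom(a: int, b: int) -> float:
    """log of the binomial coefficient C(a, b) via lgamma."""
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

d, r = 3, 2
n = 2 ** (d + 3) * r
m = n ** (2 * d)
S = 2 ** (d + 1) * r * n ** (d - 1)     # upper bound on S_{n,r}
lhs = log_binom(n ** d + m - 1, m)      # log dim P_m(C^{[n]^d})
rhs = log_binom(S + 2 * m - 1, 2 * m)   # log dim P_{2m}(pi_r), overestimated
assert lhs > rhs                        # so ker(phi^#) is nonzero and f exists
```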
Proof of Theorem 1.1. Take $F_d(r) = m := (2^{d+3} r)^{2d}$. By Theorem 3.1, there exists a nonzero polynomial $f$ of degree at most $m$ that vanishes on order-$d$ tensors of partition rank at most $r$, and by Theorem 1.3, any $n_1 \times \cdots \times n_d$ tensor all of whose $m \times \cdots \times m$ subtensors have partition rank at most $r$ has itself partition rank at most $m^d = (2^{d+3} r)^{2d^2} =: G_d(r)$, as desired. Here we used $d \ge 3$ in the last step, but as discussed in the beginning of the paper, for $d = 2$ even much better bounds work. Finally, note that if some $n_i$ happens to be smaller than $m$, then the tensor has partition rank at most $m < G_d(r)$.

4. Order-3 tensors and invariant theory

In this section, we focus on $d = 3$ and follow a construction suggested to us by Harm Derksen. In this case, partition rank equals slice rank. The following is well known, but we include a quick proof.

Lemma 4.1. The tensors $T \in \mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$ of slice rank strictly less than $n$ are contained in the nullcone for the action of the group $G := \mathrm{SL}_n \times \mathrm{SL}_n \times \mathrm{SL}_n$. Here the nullcone is the set of all vectors on which all $G$-invariant polynomials vanish.

Proof. For such a tensor there exist linear subspaces $V_1, V_2, V_3 \subseteq \mathbb{C}^n$ with $\dim(V_1) + \dim(V_2) + \dim(V_3) < n$ such that
$$T \in V_1 \otimes \mathbb{C}^n \otimes \mathbb{C}^n + \mathbb{C}^n \otimes V_2 \otimes \mathbb{C}^n + \mathbb{C}^n \otimes \mathbb{C}^n \otimes V_3.$$
After linear coordinate changes in the individual tensor factors, we may assume that $V_i$ is spanned by the first $n_i$ basis vectors. Now consider the triple $\lambda := (\lambda_1, \lambda_2, \lambda_3)$ of 1-parameter subgroups in $\mathrm{SL}_n$ defined by
$$\lambda_i(t) := \mathrm{diag}(t^{\,n-n_i}, \dots, t^{\,n-n_i}, t^{-n_i}, \dots, t^{-n_i}),$$
where there are $n_i$ copies of $t^{\,n-n_i}$ and $n - n_i$ copies of $t^{-n_i}$, so that $\det(\lambda_i(t)) = 1$ as desired. Then one sees that $\lambda(t)$ acts by some power $t^a$ on each standard basis vector in $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$, and that $a \ge n - n_1 - n_2 - n_3 > 0$ for all basis vectors that have a nonzero coefficient in $T$. Hence $\lambda(t)T \to 0$ for $t \to 0$ and all $G$-invariant polynomials vanish on $T$.

For the following result we refer to [BI17], where this invariant is called $F_k$.

Proposition 4.2. If $n = k^2$, then there exists a nonzero, homogeneous, $G$-invariant polynomial on $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$ of degree $k^3$.

Corollary 4.3. For every positive integer $r$ there exist an $n = O(r)$ and a nonzero polynomial of degree $O(r^{3/2})$ on $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$ that vanishes on all tensors of slice rank at most $r$.

Proof. Let $k$ be the smallest positive integer satisfying $k^2 \ge r + 1$ and set $n := k^2$. By Proposition 4.2, there exists a nonzero $G$-invariant polynomial $f$ of degree $k^3$ on $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$. By Lemma 4.1, this polynomial vanishes on tensors of slice rank at most $n - 1$, hence in particular on tensors of slice rank at most $r$. Clearly, $\deg(f) = O(r\sqrt{r})$.
5. Deduction of the restriction result for the rank of polynomials

In this section we deduce Theorem 1.4 from Theorem 1.1. We begin by establishing that we can deduce bounds on the partition rank of a tensor from bounds on the Schmidt rank of a polynomial and conversely. This correspondence is well known from the work of Kazhdan and Ziegler [KZ20]; we include it here for the convenience of the reader. As in the statement of Theorem 1.4, we assume that char $K = 0$ or char $K > d$. Recall that the space of homogeneous polynomials of degree $d$ in $x_1, \dots, x_n$ has a natural isomorphism to the symmetric power $S^d K^n$, and consider the natural linear map determined by
$$\varphi : K^n \otimes \cdots \otimes K^n \to S^d K^n, \quad v_1 \otimes \cdots \otimes v_d \mapsto v_1 \cdots v_d.$$
It clearly maps any tensor of partition rank at most 1 to a polynomial of Schmidt rank at most 1, and hence, by linearity, a tensor of partition rank at most $r$ to a polynomial of Schmidt rank at most $r$. Conversely, we have a linear map determined by
$$\psi : S^d K^n \to K^n \otimes \cdots \otimes K^n, \quad v_1 \cdots v_d \mapsto \sum_{\pi \in S_d} v_{\pi(1)} \otimes \cdots \otimes v_{\pi(d)},$$
which is a linear isomorphism onto the space of symmetric tensors in $K^n \otimes \cdots \otimes K^n$, with inverse $\varphi/d!$ restricted to that space of symmetric tensors. This maps a polynomial of Schmidt rank at most 1, given as $Q \cdot R$ with $Q$ of degree $e$ and $R$ of degree $d - e$, to a tensor of partition rank at most $\binom{d}{e} \le \binom{d}{\lfloor d/2 \rfloor} =: D$. Again by linearity, this map sends a polynomial of Schmidt rank at most $r$ to a tensor of partition rank at most $r \cdot D$.

Proof of Theorem 1.4. Suppose that $P$ is a homogeneous polynomial of degree $d$ in $x_1, \dots, x_n$ and $\mathrm{rk}(P[U]) \le r$ for all $U \subseteq [n]$ of size at most $d F_d(rD)$. Then $\psi(P) =: T$ is a (symmetric) tensor. Consider any $d$-tuple $U_1, \dots, U_d$ of subsets of $[n]$, each of size at most $F_d(rD)$, and take $U := U_1 \cup \cdots \cup U_d$, a set of size at most $d F_d(rD)$. Then by our assumption $\varphi(T[U, \dots, U]) = d!\, P[U]$ has Schmidt rank at most $r$. It follows that $T[U, \dots, U] = \psi(P[U])$ has partition rank at most $rD$, and hence a fortiori so does $T[U_1, \dots, U_d]$. We conclude that $T$ itself has partition rank at most $G_d(rD)$, and therefore $P = \varphi(T)/d!$ has Schmidt rank at most $G_d(rD)$.
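For $d = 3$, the interplay of $\varphi$ and $\psi$ can be checked numerically: contracting the symmetrized tensor $\psi(v_1 v_2 v_3)$ with $x \otimes x \otimes x$ evaluates the polynomial $d!\,(v_1 \cdot x)(v_2 \cdot x)(v_3 \cdot x)$. The following numpy sketch, with arbitrary small sizes chosen for illustration, verifies this.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 4
vs = [rng.standard_normal(n) for _ in range(d)]
# psi(v1 v2 v3): sum over all permutations of v_{pi(1)} (x) v_{pi(2)} (x) v_{pi(3)}
T = sum(np.einsum('i,j,k->ijk', *(vs[p] for p in perm))
        for perm in itertools.permutations(range(d)))
x = rng.standard_normal(n)
lhs = np.einsum('ijk,i,j,k->', T, x, x, x)              # evaluate psi(v1 v2 v3) at (x, x, x)
rhs = math.factorial(d) * np.prod([v @ x for v in vs])  # d! (v1.x)(v2.x)(v3.x)
assert np.isclose(lhs, rhs)
```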
Link Prediction on N-ary Relational Data Based on Relatedness Evaluation

With the overwhelming popularity of Knowledge Graphs (KGs), researchers have long devoted attention to link prediction to fill in missing facts. However, they mainly focus on link prediction on binary relational data, where facts are usually represented as triples in the form of (head entity, relation, tail entity). In practice, n-ary relational facts are also ubiquitous. When encountering such facts, existing studies usually decompose them into triples by introducing a multitude of auxiliary virtual entities and additional triples. These conversions result in the complexity of carrying out link prediction on n-ary relational data. It has even been proven that they may cause loss of structure information. To overcome these problems, in this paper, we represent each n-ary relational fact as a set of its role and role-value pairs. We then propose a method called NaLP to conduct link prediction on n-ary relational data, which explicitly models the relatedness of all the role and role-value pairs in an n-ary relational fact. We further extend NaLP by introducing type constraints of roles and role-values without any external type-specific supervision, and by proposing a more reasonable negative sampling mechanism. Experimental results validate the effectiveness and merits of the proposed methods.

Since Google announced Knowledge Graph (KG) in 2012, KG has become increasingly popular. Link prediction on KGs has accordingly caught much attention, as it completes KGs and further promotes KG-based applications. Usually, a KG is represented as a set of triples in the form of (head entity, relation, tail entity), where n-ary relational facts are decomposed into multiple triples via introducing virtual entities, such as the Compound Value Type (CVT) entities in Freebase [1]. Actually, n-ary relational facts are not in the minority. As an example, in Freebase, more than 1/3 of its entities are involved in n-ary relational facts [2]. Based on the public KGs of the above binary form, researchers have developed numerous methods for link prediction [3], [4], [5], including the tasks of head entity prediction, relation prediction, and tail entity prediction. However, manipulating n-ary relational facts into triples has some deficiencies. Firstly, it forces link prediction on n-ary relational data to consider many triples. Since existing link prediction methods on triples, like the above methods, usually model triples one by one individually and correspondingly conduct link prediction on only a single triple each time, it is intricate to carry out link prediction involving more than one triple. Secondly, there is a loss of structure information in some of the conversions from n-ary relational facts to triples [2], which may lead to inaccurate prediction. Thirdly, the added virtual entities, along with the corresponding triples, bring in more parameters to be learned, and there is a data sparsity problem. With these considerations in mind, in this paper, each n-ary relational fact is represented as a set of its role and role-value pairs, denoted as r:v pairs for short, instead of being converted into multiple triples. Hence, no additional entities or triples are introduced, and all the structure information is retained. Then, we propose a method to handle link prediction on n-ary relational data.
Specifically, link prediction on n-ary relational data is to predict the missing role/role-value of an n-ary relational fact. Recently, there have also been a few studies on link prediction on n-ary relational data. In m-TransH [2], an n-ary relation is defined by the mappings from a sequence of roles to their role-values. Each specific mapping is an n-ary relational fact. Then, a translation based method is proposed to model these facts. RAE [6] improves m-TransH with the modeling of the likelihood that two role-values co-participate in a common n-ary relational fact. Although RAE achieves favorable performance, it does not consider the roles explicitly when evaluating the above likelihood. Actually, under different sequences of roles (corresponding to different relations), the relatedness of two role-values is greatly different. For example, under the role sequence (person, award, point in time, together with) (in this paper, the examples are all derived from Wikidata), Marie Curie and Henri Becquerel are more related than under the role sequence (person, spouse, start time, end time, place of marriage), since they won the Nobel Prize in Physics in 1903 together. To address these problems, our method explicitly models the relatedness of the r:v pairs involved in an n-ary relational fact. The above example also elucidates its necessity. Some newly proposed methods [7], [8], [9], [10] further apply position-specific convolution [7], tensor decomposition techniques [8], or structure-aware methods [9], [10]. They do not consider any type constraint. Since, for a valid n-ary relational fact, the expected type of the role-value in the view of the role should be compatible with the actual type of the role-value in each r:v pair, we combine these signals without explicit external supervision from a type catalog. Thus, NaLP is extended to tNaLP (type-NaLP). Furthermore, we equip NaLP and tNaLP with a more reasonable negative sampling mechanism and thus extend them to NaLP+ and tNaLP+, respectively. In general, the main contributions of this paper are:
• We advocate a representation form for n-ary relational facts, which represents each n-ary relational fact as a set of its r:v pairs.
• We propose a link prediction method NaLP on n-ary relational data, which captures the relatedness of all the r:v pairs in an n-ary relational fact explicitly.
• We further introduce type constraints of roles and role-values in an unsupervised manner, and propose a more reasonable negative sampling mechanism. Thus, NaLP is extended to tNaLP, NaLP+, and then tNaLP+, respectively.
• Experimental results and analyses testify to the effectiveness and superiority of the proposed methods.
In addition, as publicly available n-ary relational datasets are limited, we build a practical one, WikiPeople, based on Wikidata [11]. We further derive some new ones, WikiPeople-n, WikiPeople-0bi, WikiPeople-50bi, and WikiPeople-100bi, from WikiPeople with different percentages of binary relational facts. They have been made available on GitHub for further research. We have also open-sourced the codes of the proposed methods.

Link Prediction on Binary Relational Data

The rapid development of KGs has triggered the emergence of a great many methods, including tensor/matrix based methods, translation based ones, and neural network based ones, for link prediction on binary relational data.
Among tensor/matrix based methods [12], [13], [14], [15], [16], [17], [18], [19], RESCAL [12] views a KG as a three-way tensor, where the ways correspond to head entities, relations, and tail entities, respectively. Each entry corresponds to the score indicating the validity of the triple formed by the three ways. Through minimizing the reconstruction error of the tensor, the embeddings of entities and relations are learned. Then, the tensor reconstructed from the learned embeddings is used to conduct link prediction, with triples corresponding to entries of high scores taken as valid ones. Based on RESCAL, type constraints are further integrated in [14] via indexing, for each relation, only those embeddings of entities that agree with its specific type constraints. Similar to RESCAL, ComplEx [15] constructs a matrix for each relation, which is factorized and optimized via minimizing the reconstruction error to learn the embeddings. Differently, complex values are used to define the embeddings, so as to deal with antisymmetric relations effectively. Upon ComplEx, TypeComplex [19] enhances each base factorization with two type compatibility terms between entity-relation pairs (one for the head entity and the other for the tail entity). Translation based methods are derived from TransE [20]. Its translation assumption advocates that valid triples can be viewed as relational translations from head entities to tail entities. Based on this assumption, different types of translations, and accordingly different score functions measuring the similarity between the relational translation results of the head entities and the target tail entities, are defined [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34]. The embeddings of entities and relations are then learned via minimizing a score function based loss, which encourages valid triples to have much larger scores than invalid ones. Among these methods, TransE performs translations in a shared space of entities and relations. TransH [21] further posits that translations should be conducted in relation-specific hyperplanes and associates each relation with a hyperplane. Then, entities are projected into the hyperplanes of relations before translation. TKRL [28] takes advantage of hierarchical entity types in the construction of projection matrices for entities. In negative sampling, it tends to select entities having the same types; in evaluation, it removes the candidates which do not follow type constraints. TransT [29] constructs relation types from entity types and utilizes the type-based semantic similarity of the related entities and relations to capture prior distributions of entities and relations. With these prior distributions, it generates semantic vectors for entities in different contexts, which are learned following TransE. Then, it estimates the posterior probability of entity and relation predictions. TAPR [34] uses types to enrich the embeddings of entities and relations, and describes a type-level attention mechanism to select the most relevant type of the given entity in a specific triple. Inspired by the excellent performance of neural networks in various applications, they have been introduced into link prediction on binary relational data and have achieved extremely promising results [35], [36], [37], [38], [39], [40], [41], [42], [43].
Among them, the R-GCN [37] method introduces relational Graph Convolutional Networks (GCNs) as its encoder to obtain the embeddings of entities, and uses the existing method [44] as its decoder to compute the scores of triples with the entity embeddings from the encoder and randomly initialized relation embeddings. CompGCN [43] further learns the embeddings of relations in the GCN encoder. Differently, SENN [39] integrates the prediction tasks of head entities, relations, and tail entities into a unified Fully Connected Network (FCN) framework based on shared embeddings. ConvKB [40] represents each triple as a three-column matrix. This matrix is fed into a convolution layer to obtain feature maps, which are then concatenated into a single feature vector representing the triple. After passing it to a fully connected layer, ConvKB obtains a score indicating the validity of the triple.

Link Prediction on N-ary Relational Data

As n-ary relational facts are complicated and usually tricky to deal with, most existing studies convert them into triples. Studies that directly perform link prediction on n-ary relational data are relatively scarce and new. The m-TransH [2] method generalizes TransH [21] from binary relational data to n-ary relational data. It presents a mathematical definition of n-ary relations as the mappings from sequences of roles to their role-values. Each n-ary relational fact is a specific mapping of the corresponding relation. Then, the score function of an n-ary relational fact is defined by the weighted sum of the projection results from its role-values to its relation hyperplane, where the weights are the real numbers projected from its roles. Based on m-TransH, RAE [6] introduces the relatedness of role-values, i.e., the likelihood that two role-values co-participate in a common n-ary relational fact, and adds this relatedness loss with a weight hyper-parameter to the embedding loss of m-TransH. Although, with the additional modeling of the relatedness of role-values, RAE outperforms m-TransH, it does not look into the roles when computing the relatedness. Since roles are also a fundamental part of n-ary relational facts, taking them into consideration may make a difference, as illustrated before. To better capture the role that an entity plays in a given relation, HypE [7] further convolves each entity appearing at a specific position in a given tuple with a set of position-specific filters. GETD [8] is the first tensor based method for n-ary relational link prediction. However, GETD only focuses on handling n-ary relational facts of the same arity. HINGE [9] and NeuInfer [10] consider principal and subordinate structure information and represent each n-ary relational fact as a primary triple coupled with a set of its auxiliary descriptive r:v pair(s). Actually, not all n-ary relational facts have a primary triple. In a word, none of these methods consider any type constraint.

PROBLEM STATEMENT

We represent each n-ary relational fact as a set of its r:v pairs. As an example, the representation of the fact that Marie Curie received the Nobel Prize in Physics in 1903 together with Henri Becquerel and Pierre Curie is: {person : Marie Curie, award : Nobel Prize in Physics, point in time : 1903, together with : Henri Becquerel, together with : Pierre Curie}.
Formally, given an n-ary relational fact with n roles, each role r_i having m_i role-values (i = 1, 2, …, n), its representation is the following set of r:v pairs:

{r_1 : v_{1,1}, …, r_1 : v_{1,m_1}, r_2 : v_{2,1}, …, r_2 : v_{2,m_2}, …, r_n : v_{n,1}, …, r_n : v_{n,m_n}}.

It is an n-ary relational fact of arity (m_1 + m_2 + ⋯ + m_n). In what follows, without loss of generality, we exemplify one n-ary relational fact Rel in the following form:

Rel = {r_1 : v_1, r_2 : v_2, …, r_m : v_m},

where m is the arity of Rel. The role set of Rel is denoted as R_Rel = {r_1, r_2, …, r_m}, and r_i may be the same as r_j (i, j = 1, 2, …, m, i ≠ j), as aforementioned; the role-value set of Rel is denoted as V_Rel = {v_1, v_2, …, v_m}. In this paper, we handle link prediction on n-ary relational data.

Definition 1. Link prediction on n-ary relational data is to predict the missing role/role-value in an n-ary relational fact, i.e., role prediction and role-value prediction. Formally, for the n-ary relational fact Rel, it predicts r_i/v_i (i = 1, 2, …, m) given the remaining elements.

Take the above 5-ary relational fact as an example: the task may be to predict the role of the role-value Pierre Curie or to predict the role-value of the role award, given the remaining information. In the following, the task is transformed into estimating whether an n-ary relational fact is valid and is then cast into a classification task. Actually, as each n-ary relational fact is represented as a set of its r:v pairs, the above classification task operates on sets [45], [46], [47].

The Framework of NaLP

We propose a link prediction method NaLP on n-ary relational data. It is designed based on two considerations. On the one hand, a role and the corresponding role-value are tightly linked to each other and thus should be bound together. On the other hand, as mentioned in Section 3, we aim to judge the validity of an n-ary relational fact, i.e., a set of r:v pairs. That is, for a set of r:v pairs, we need to determine whether it is able to form a valid n-ary relational fact. Intuitively, all the r:v pairs of a valid relational fact are closely related. And if all the r:v pairs from a set are closely related, then we assume that this set of r:v pairs has a high probability of constituting a valid n-ary relational fact. With these considerations in mind, the framework of NaLP is designed as illustrated in the upper part of Fig. 1; it consists of two key components, r:v pair embedding and relatedness evaluation, to generate the evaluation score of the input fact. For clarity, in the figure, we only exemplify the n-ary relational fact Rel, and in what follows, we illustrate its learning details. For Rel, the embeddings of its roles in R_Rel and role-values in V_Rel are looked up from the embedding matrices M_R ∈ R^{|R|×k} of roles and M_V ∈ R^{|V|×k} of role-values, respectively, where R is the set of all the roles in the dataset, V is the set of all the role-values, and k is the dimension of the embeddings. Following convention, in what follows, embeddings are denoted with the same letters but in boldface. As depicted in Fig. 1, after passing these embeddings to the r:v pair embedding component, we get the embedding matrix of the m r:v pairs of Rel (see Section 4.2). This resulting embedding matrix is fed into the relatedness evaluation component to compute the relatedness between the r:v pairs and then estimate the overall relatedness of all the r:v pairs, which is used to obtain the evaluation score (see Section 4.3). Particularly, this framework equips NaLP with the ability to be permutation-invariant to the input order of the r:v pairs and to deal with relational facts of different arities (see Section 4.4).
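As a small illustration of this representation (a sketch, not taken from the authors' released code), the 5-ary fact about Marie Curie can be stored as a set of (role, role-value) pairs; repeated roles such as "together with" are unproblematic because the pairs, not the roles, are the set elements:

```python
# One n-ary relational fact stored as a set of (role, role-value) pairs.
fact = {
    ("person", "Marie Curie"),
    ("award", "Nobel Prize in Physics"),
    ("point in time", "1903"),
    ("together with", "Henri Becquerel"),
    ("together with", "Pierre Curie"),
}
m = len(fact)                        # arity of the fact: m = 5
roles = [r for r, _ in fact]         # the multiset underlying R_Rel
values = [v for _, v in fact]        # the multiset underlying V_Rel
```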
The R:V Pair Embedding Component

This component obtains the embeddings of the input r:v pairs in two sub-processes: feature learning of the r:v pairs (convolution is adopted, as it is a good way to learn features) and assembling the embedding matrix of the r:v pairs, corresponding to "convolute" and "concat" in the upper part of Fig. 1, respectively. For each r:v pair, the embeddings of the role and its role-value are first concatenated into a row vector of dimension $1 \times 2k$; stacking the $m$ rows yields the concatenated matrix of dimension $m \times 2k$. This concatenated matrix is then passed to the convolution with the filter set $\Omega$ of $n_f$ filters. Since each concatenated r:v pair embedding has dimension $1 \times 2k$, the filters are given the same dimension, $1 \times 2k$, to capture their features. After convolution, we get $n_f$ results of dimension $m$. The Rectified Linear Unit (ReLU) function [48] is further applied to obtain $n_f$ feature vectors.

The Embedding Matrix of the R:V Pairs

We concatenate the $n_f$ feature vectors to form a matrix of dimension $m \times n_f$. This matrix can be treated as the embeddings of the $m$ r:v pairs, each row corresponding to the embedding of one r:v pair, and each entry of a row encoding the feature of that dimension. Formally, the embedding matrix $M_{Rel} \in \mathbb{R}^{m \times n_f}$ of the r:v pairs of $Rel$ is defined as:

$M_{Rel} = \Upsilon\big(\mathrm{concat}(R_{Rel}, V_{Rel}) * \Omega\big),$ (1)

where $\Upsilon$ is the ReLU function, i.e., $\Upsilon(x) = \max(0, x)$; $*$ denotes the convolution operator; and $\mathrm{concat}(R_{Rel}, V_{Rel})$ is the aforementioned embedding concatenation of the r:v pairs before convolution.

The Relatedness Evaluation Component

This component computes the relatedness between the r:v pairs, estimates the overall relatedness of all the r:v pairs, and obtains the evaluation score, corresponding to "g-FCN", "min" and "f-FCN" in the upper part of Fig. 1, respectively. Before introducing these sub-processes, let us look at the principle behind this component.

The Principle of Relatedness Evaluation

The principle concerns how to determine whether a set of r:v pairs can form a valid n-ary relational fact. As demonstrated in Section 4.1, this reduces to computing the overall relatedness of all the r:v pairs in the set. However, the overall relatedness of a set with more than two objects is intricate to measure, and when the number of objects is large, corresponding to a fact of high arity, the evaluation becomes even more complicated. In contrast, computing the relatedness between two objects is much simpler. How, then, can the intricate overall relatedness be determined from the simple pairwise relatedness of the r:v pairs? Straightforwardly, if all the r:v pairs form a closely related set, i.e., a valid n-ary relational fact, then every two pairs from the set are strongly related. Thus, for any two pairs, the values of their relatedness feature vector, which measures relatedness from many different views, are all expected to be sufficiently large. That is, for each feature dimension, the minimum over that dimension across every two pairs should not be too small. Based on this observation, we apply element-wise minimization over the relatedness feature vectors of every two r:v pairs to approximate the overall relatedness feature vector of all the r:v pairs. Accordingly, the relatedness evaluation component first estimates the relatedness between the r:v pairs and the overall relatedness of all the r:v pairs, and then computes the evaluation score.
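Before moving on to the relatedness computation itself, here is a minimal sketch of the r:v pair embedding component just described, continuing the toy example above (the filter values are random placeholders; the paper initializes them from a truncated normal distribution, a plain normal is used here for brevity):

import numpy as np

def rv_pair_embedding(R_Rel, V_Rel, Omega):
    """concat -> 1 x 2k convolution -> ReLU, yielding the m x n_f matrix M_Rel.
    R_Rel, V_Rel: m x k role / role-value embeddings of one fact.
    Omega:        n_f x 2k filter bank; applying a 1 x 2k filter to a 1 x 2k
                  row is a dot product, so the "convolution" over the m rows
                  reduces to a matrix product here.
    """
    pairs = np.concatenate([R_Rel, V_Rel], axis=1)   # m x 2k
    return np.maximum(pairs @ Omega.T, 0.0)          # m x n_f (ReLU)

rng = np.random.default_rng(1)
m, k, n_f = 5, 8, 16
Omega = rng.normal(0.0, 0.1, size=(n_f, 2 * k))
M_Rel = rv_pair_embedding(rng.normal(size=(m, k)), rng.normal(size=(m, k)), Omega)
print(M_Rel.shape)   # (5, 16): one n_f-dimensional embedding per r:v pair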
Relatedness Evaluation Between the R:V Pairs

The relatedness between the r:v pairs is captured via an FCN with ReLU as its activation function. The widely used FCN has also been adopted to infer how two objects are related, achieving excellent performance in computer vision [45], [47]. In this sub-process, a one-layer FCN already works very well, so only a single fully connected layer is adopted. Concretely, the embeddings of any two r:v pairs are concatenated and then fed into a fully connected layer with $n_{gFCN}$ nodes. The resulting vector is the relatedness feature vector, where each dimension indicates the relatedness of the two input r:v pairs in a certain respect.

Overall Relatedness Evaluation

As discussed in Section 4.3.1, the overall relatedness feature vector of all the r:v pairs is approximated by element-wise minimization over the relatedness feature vectors of every two r:v pairs. Thus, the overall relatedness feature vector $R_{Rel}$ of $Rel$ is defined as:

$R_{Rel} = \min_{1 \le i, j \le m} \Upsilon\big(\mathrm{concat}([M_{Rel}]_i, [M_{Rel}]_j)\, W_{gFCN} + b_{gFCN}\big),$ (2)

where $\min(\cdot)$ is the element-wise minimizing function; $[x]_i$ is the $i$-th row of $x$; and $W_{gFCN}$ of dimension $2n_f \times n_{gFCN}$ and $b_{gFCN}$ of dimension $n_{gFCN}$ are the weight matrix and bias vector of g-FCN, respectively. Each entry of the resulting $R_{Rel}$ indicates the degree of overall relatedness in terms of the feature corresponding to that dimension.

The Evaluation Score

$R_{Rel}$ is then used to generate the evaluation score. A one-layer FCN is again found to be sufficient and is applied. The evaluation score $s_0(Rel)$ of $Rel$ is defined as:

$s_0(Rel) = R_{Rel}\, W_{fFCN} + b_{fFCN},$ (3)

where $W_{fFCN}$ of dimension $n_{gFCN} \times 1$ and $b_{fFCN}$ are the weight matrix and bias variable of f-FCN, respectively.

The Characteristics of NaLP

Note that, in m-TransH [2], RAE [6], HypE [7], and GETD [8], all relations along with their role sequences are predefined, and the inputs of the models are instances of these relations, i.e., sequences of ordered role-values. Since the $i$-th role-value of an input instance corresponds to the $i$-th role of its relation, the order of the input role-values usually cannot be changed at will. In contrast, each input to the proposed NaLP method is a set of r:v pairs, corresponding to an n-ary relational fact. Thus, the order of the objects in each input matters in m-TransH, RAE, HypE, and GETD, while it has no impact on NaLP. Indeed, NaLP is permutation-invariant to the input order of the r:v pairs and is able to cope with facts of different arities.

Permutation Invariance of the R:V Pairs

Suppose that the r:v pairs of $Rel$ are input into NaLP in the original order with indexes from 1 to $m$, and that a permutation $\rho$ of the indexes yields a new order with indexes $\rho(1)$ to $\rho(m)$. The permutation invariance can then be explained as follows:

• In the r:v pair embedding process, the embedding matrix $M_{\rho(Rel)}$ of the permuted r:v pairs $\rho(Rel)$ is a permutation of the original $M_{Rel}$, with the $\rho(i)$-th row of $M_{Rel}$ placed in the $i$-th row of $M_{\rho(Rel)}$, i.e., $[M_{\rho(Rel)}]_i = [M_{Rel}]_{\rho(i)}$.

• In the relatedness evaluation process, we obtain $m \times m$ relatedness feature vectors, the same as those in the original version but with their order changed. Since the element-wise $\min(\cdot)$ over this collection is insensitive to its ordering, we have $R_{\rho(Rel)} = R_{Rel}$.

• Then, the evaluation score of $\rho(Rel)$ satisfies $s_0(\rho(Rel)) = s_0(Rel)$.

• Therefore, NaLP is permutation-invariant to the input order of the r:v pairs.
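The two sub-processes above, together with the f-FCN, can be sketched in a few lines; the weights are random placeholders, and the assert at the end numerically confirms the permutation-invariance argument just given:

import itertools
import numpy as np

def nalp_score(M_Rel, W_g, b_g, W_f, b_f):
    m = M_Rel.shape[0]
    # g-FCN: relatedness feature vector of every ordered pair of r:v pairs.
    feats = [np.maximum(np.concatenate([M_Rel[i], M_Rel[j]]) @ W_g + b_g, 0.0)
             for i, j in itertools.product(range(m), range(m))]
    R_Rel = np.min(np.stack(feats), axis=0)   # element-wise min, Equation (2)
    return float(R_Rel @ W_f + b_f)           # f-FCN, Equation (3)

rng = np.random.default_rng(2)
m, n_f, n_g = 5, 16, 32
M_Rel = rng.normal(size=(m, n_f))
W_g, b_g = rng.normal(size=(2 * n_f, n_g)), np.zeros(n_g)
W_f, b_f = rng.normal(size=n_g), 0.0

s = nalp_score(M_Rel, W_g, b_g, W_f, b_f)
perm = rng.permutation(m)                     # feed the pairs in another order
assert np.isclose(s, nalp_score(M_Rel[perm], W_g, b_g, W_f, b_f))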
Permutation invariance is also studied in DeepSets [46], which argues that a function acting on sets must be permutation-invariant. However, DeepSets only sums up the features of each object in a set; [45], [47] further consider the pairwise interactions among objects. Similar to [45], [47], we study these pairwise features, but on more complicated objects, each consisting of a role and its role-value.

Feasibility for Different Arities

Suppose that two batches of facts of arities $m$ and $m'$ are input into NaLP successively. In the r:v pair embedding process, we get embedding matrices of the r:v pairs with $m$ and $m'$ rows, respectively. In the relatedness evaluation process, we correspondingly obtain $m \times m$ and $m' \times m'$ relatedness feature vectors. As $\min(\cdot)$ in Equation (2) returns a single vector regardless of whether it is taken over $m \times m$ or $m' \times m'$ results, the number of results makes no difference. Thus, NaLP is able to deal with facts of different arities.

The Loss Function

As described above, we obtain the evaluation score $s_0(Rel)$ of $Rel$. Following ComplEx [15], the loss of $Rel$ is:

$\mathcal{L}(Rel) = \log\big(1 + \exp(-y_{Rel}\, s_0(Rel))\big),$

where $y_{Rel} = +1$ if $Rel$ is a fact in the dataset and $y_{Rel} = -1$ if it is a negative sample. It is straightforward to optimize NaLP with standard backpropagation. The stochastic optimization method Adam [49] with learning rate $\lambda$ is used as the optimizer.

The Training Process

Algorithm 1 presents the training process of NaLP (its input consists of the training set $T$, the role set $R$, the role-value set $V$, the maximum number of epochs $n_{epoch}$, the embedding dimension $k$, the batch size $\beta$, the number of filters $n_f$ in $\Omega$, and the hyper-parameter $n_{gFCN}$ of $W_{gFCN}$; its output is $M_R$ and $M_V$, as well as the other parameters of NaLP). Before training, the training set is reorganized into several groups, each of which keeps the facts of the same arity (Line 1). To guarantee that NaLP masters basic facts of low arities before learning facts of high arities, we sort the resulting groups by their arities in ascending order (Line 2); similar operations are used in [50] to train on path queries of length one before all path queries. Subsequently, the embedding matrices $M_R$ and $M_V$ are randomly initialized from the uniform distribution on $[-1/\sqrt{k}, 1/\sqrt{k}]$; a truncated normal distribution with mean 0.0 and standard deviation 0.1 is adopted to initialize all the filters in $\Omega$ [40]; $W_{gFCN}$ and $W_{fFCN}$ are randomly initialized via the Xavier initializer [51]; and $b_{gFCN}$ and $b_{fFCN}$ are initialized with zeros (Line 3). During training, Lines 5-21 are repeated until the result on the validation set converges. In each epoch, $\lceil |T'|/\beta \rceil$ batches of training facts are sampled from each training group $T'$, where $\lceil \cdot \rceil$ is the ceiling function. For each selected training fact, similar to the negative sampling method adopted in TransE [20], we randomly replace one of its role-values with a random role-value that the corresponding role holds, with probability $|V|/(|V| + |R|)$, or one of its roles with a random role, with probability $|R|/(|V| + |R|)$, to obtain a negative sample not contained in the dataset (Line 8). After that, for each sampled n-ary relational fact or negative sample $Rel$, the embeddings of its roles and role-values are looked up from $M_R$ and $M_V$, respectively (Lines 11 and 12); they are then fed into the r:v pair embedding and relatedness evaluation components to compute the evaluation scores and the loss, which drives the parameter updates.
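The sampling step of Line 8 can be sketched as follows (illustrative names; for simplicity the sampled role-value is drawn from all of $V$ rather than only from the role-values the corresponding role actually holds):

import random

def corrupt(fact, roles, values, train_set, max_tries=10):
    """fact: list of (role, value) pairs; train_set: set of frozensets of pairs."""
    for _ in range(max_tries):
        neg = list(fact)
        i = random.randrange(len(neg))
        r, v = neg[i]
        # Replace a role-value with probability |V|/(|V|+|R|), else a role.
        if random.random() < len(values) / (len(values) + len(roles)):
            neg[i] = (r, random.choice(values))
        else:
            neg[i] = (random.choice(roles), v)
        if frozenset(neg) not in train_set:   # keep only facts unseen in the data
            return neg
    return neg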
tNaLP

NaLP does not consider any type constraint when estimating the validity of an n-ary relational fact. Yet, for a valid n-ary relational fact, the type that each role expects of its role-value should be compatible with the actual type of that role-value. Taking the fact in Section 3 as an example, the roles person, award, point in time, and together with expect role-values of type human, award, time, and human, respectively, and the actual types of the corresponding role-values in that fact do meet this requirement. With these considerations in mind, we introduce such type constraints and extend NaLP to tNaLP. Without relying on any explicit external type supervision, tNaLP learns type embeddings during training. The framework of tNaLP is illustrated in Fig. 1: the upper part obtains the evaluation score of the input fact, corresponding to NaLP, and the lower part obtains its type compatibility score; the two are finally combined into the type-constrained evaluation score. Analogously to the relatedness evaluation component in Section 4.3, before obtaining the type compatibility score we compute the type compatibility of each r:v pair and estimate the overall type compatibility of all the r:v pairs.

Type Compatibility Evaluation of Each R:V Pair

Following Section 4.3.2, we adopt a one-layer FCN with ReLU as its activation function to estimate the type compatibility, denoted "t-FCN" in Fig. 1. Specifically, for each r:v pair of $Rel$, the expected type embedding of the role-value in the view of its role and the actual type embedding of the role-value are looked up from the expected type embedding matrix $M_{R'} \in \mathbb{R}^{|R| \times k'}$ and the type embedding matrix $M_{V'} \in \mathbb{R}^{|V| \times k'}$, respectively, where $k'$ is the dimension of the type embeddings. They form the expected type embedding matrix $R'_{Rel}$ and the actual type embedding matrix $V'_{Rel}$, respectively. Then, each row of $R'_{Rel}$ and the corresponding row of $V'_{Rel}$ are concatenated into a vector of dimension $2k'$ and fed into a fully connected layer with $n_{tFCN}$ nodes. Each resulting type compatibility feature vector indicates the type compatibility of the corresponding role and role-value.

Overall Type Compatibility Evaluation

As in Section 4.3.3, the overall type compatibility feature vector $C_{Rel}$ of $Rel$ is approximated by element-wise minimization over the type compatibility feature vectors of all the r:v pairs. Therefore, $C_{Rel}$ is defined as:

$C_{Rel} = \min_{1 \le i \le m} \Upsilon\big(\mathrm{concat}([R'_{Rel}]_i, [V'_{Rel}]_i)\, W_{tFCN} + b_{tFCN}\big),$

where $W_{tFCN}$ of dimension $2k' \times n_{tFCN}$ and $b_{tFCN}$ of dimension $n_{tFCN}$ are the weight matrix and bias vector of t-FCN, respectively.

The Type Compatibility Score and Type-constrained Evaluation Score

Subsequently, $C_{Rel}$ is fed into a one-layer FCN, denoted "y-FCN" in Fig. 1, to obtain the type compatibility score $s_t(Rel)$ of $Rel$:

$s_t(Rel) = C_{Rel}\, W_{yFCN} + b_{yFCN},$

where $W_{yFCN}$ of dimension $n_{tFCN} \times 1$ and $b_{yFCN}$ are the weight matrix and bias variable of y-FCN, respectively. If $Rel$ is valid, then both its evaluation score $s_0(Rel)$ from NaLP and its type compatibility score $s_t(Rel)$ are expected to be large; that is, the minimum of the two is encouraged to be large. We therefore adopt the minimizing operation to obtain the type-constrained evaluation score $s(Rel)$ of tNaLP:

$s(Rel) = \min(s_0(Rel), s_t(Rel)).$ (9)
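The type branch of tNaLP can be sketched analogously to the relatedness branch (weights are placeholders; Rt_Rel and Vt_Rel stand for rows looked up from the expected- and actual-type embedding matrices introduced above):

import numpy as np

def type_constrained_score(Rt_Rel, Vt_Rel, W_t, b_t, W_y, b_y, s0):
    """Rt_Rel: m x k' expected-type embeddings, one row per r:v pair (from M_R').
    Vt_Rel: m x k' actual-type embeddings of the role-values (from M_V').
    s0:     the NaLP evaluation score of the same fact."""
    pairs = np.concatenate([Rt_Rel, Vt_Rel], axis=1)   # m x 2k'
    feats = np.maximum(pairs @ W_t + b_t, 0.0)         # t-FCN, one vector per pair
    C_Rel = feats.min(axis=0)                          # overall type compatibility
    s_t = float(C_Rel @ W_y + b_y)                     # y-FCN
    return min(s0, s_t)                                # Equation (9)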
NaLP+ and tNaLP+

In NaLP and tNaLP, the conventional negative sampling method (see Section 4.5.2), similar to that in TransE [20], is adopted. Although this negative sampling mechanism, which replaces only one element of a fact, is effective on binary relational datasets [20], this is not the case on n-ary relational datasets: it only pushes the model to distinguish a valid element from the other elements in the dataset, but it does not encourage the model to distinguish the r:v pairs of one n-ary relational fact from those of others. Therefore, in this paper, we propose a more reasonable negative sampling mechanism that takes the distinction among n-ary relational facts into consideration. For each n-ary relational fact of arity $m$, the negative sampling process under this new mechanism is as follows (a code sketch follows the list):

(1) Randomly decide whether to replace one role-value/role or a number of r:v pairs. In the former case, go to Step (2); otherwise, go to Step (3);

(2) Randomly replace one of the role-values or roles in the fact, following Section 4.5.2;

(3) Draw a random number $n_{neg} \in (0, m)$ of r:v pairs to be replaced, then choose $n_{neg}$ r:v pairs at random from the training set, which may belong to different facts, to replace $n_{neg}$ random r:v pairs of the given fact.

Under this more reasonable negative sampling mechanism, NaLP and tNaLP are extended to NaLP+ and tNaLP+, respectively. Note that, since the element-wise minimizing function in the overall type compatibility evaluation is insensitive to the input order and the new mechanism only affects the negative sampling process, tNaLP, NaLP+, and tNaLP+ remain permutation-invariant to the input order of the r:v pairs and able to handle facts of different arities.
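A sketch of this mechanism, reusing the corrupt() helper sketched earlier (the 50/50 branching probability in Step (1) is an illustrative assumption, as the text does not pin it down here):

import random

def corrupt_plus(fact, roles, values, train_set, pair_pool):
    """pair_pool: list of all (role, value) pairs occurring in the training set."""
    if random.random() < 0.5:                  # Step (1) -> Step (2): one element
        return corrupt(fact, roles, values, train_set)
    m = len(fact)                              # Step (1) -> Step (3): whole pairs
    n_neg = random.randrange(1, m)             # random n_neg in (0, m)
    neg = list(fact)
    for i in random.sample(range(m), n_neg):
        neg[i] = random.choice(pair_pool)      # pairs may come from different facts
    return neg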
Datasets

We conduct experiments on two datasets. The first is the public n-ary relational dataset JF17K [2], derived from the popular KG Freebase [1] (as the original version has no validation set, we randomly select 20% of the facts in the training set to form one). For all its facts, the role sequences are equal to the predefined standard ones, and the corresponding role-values all exist and are known. This is usually not the case in practice. Thus, we build a relatively more practical dataset, WikiPeople, which serves as the second experimental dataset. It is constructed as follows:

• We download the Wikidata dump (https://archive.org/details/wikibase-wikidatawiki-20171120) and extract the facts concerning entities of type human.

• These facts are then denoised. For example, facts containing elements related to images are filtered out, and facts containing elements in {unknown value, no values} are removed.

• Subsequently, we select the subset of elements that have at least 30 mentions; the facts related to these elements are kept. Each fact is further parsed into the set of its r:v pairs.

• The remaining facts are randomly split into training, validation, and test sets in proportions of 80%, 10%, and 10%, respectively.

WikiPeople is relatively more flexible, in that data incompleteness, insertion, and update are all commonplace. Taking "Marie Curie received the Nobel Prize in Chemistry in 1911" as an example, its role sequence is [person, award, point in time]. From the viewpoint of this fact, the following similar facts in Wikidata (https://www.wikidata.org/wiki/Q7186) correspond to the three cases of data incompleteness, insertion, and update, respectively:

• Marie Curie received the Willard Gibbs Award.

• Marie Curie received the Matteucci Medal in 1904 with Pierre Curie. Marie Curie received the Nobel Prize in Physics in 1903 together with Henri Becquerel and Pierre Curie.

• Marie Curie received the Davy Medal with Pierre Curie.

In the first example, the role point in time and its role-value are missing; the fact is thus incomplete. In the second examples, one or two new roles together with have to be inserted into [person, award, point in time]. In the third, the role point in time is missing while a new role together with is inserted; we call such a case an update.

Fortunately, our representation of n-ary relational facts, i.e., of each fact as the set of its r:v pairs, copes with all these cases elegantly. In real scenarios, since knowledge extraction is lossy, and knowledge grows and updates rapidly with much new content flowing in, the above phenomena of data incompleteness, insertion, and update are inevitable in a dataset derived from a single snapshot. The developed WikiPeople is thus practical. Notably, role-values from continuous domains (e.g., role-values corresponding to point in time, date of birth, date of death, etc.) are treated like those from discrete domains. The detailed statistics of the two experimental datasets are displayed in Table 1, where #Relation, #Train, #Valid, and #Test are the sizes of the relation set, training set, validation set, and test set, respectively. Here, #Relation is counted according to RAE [6] to give a better understanding of the two datasets.

Metrics and Experimental Settings

As metrics, we adopt the standard Mean Reciprocal Rank (MRR) and Hits@N, computed in the same manner as for binary relational datasets [20]. For each test fact, one of its roles/role-values is removed and replaced by every other role/role-value in $R$/$V$. These corrupted facts, together with the test fact, are fed into the proposed methods to obtain their (type-constrained) evaluation scores, and are then sorted by these scores in descending order. The rank of the test fact is stored, and the whole procedure is repeated for all the other roles/role-values of the test fact. MRR is the average of these reciprocal ranks, and Hits@N is the percentage of ranks less than or equal to $N$. The traditional mean rank (the average of the ranks) is not adopted as a metric, since it is sensitive to outliers [53]. For the chosen metrics, the higher the value of MRR/Hits@N, the better the performance. In the test process, corrupted facts that may themselves be valid, i.e., that exist in the training/validation/test set, are discarded before sorting (the "filtered" setting, sketched below). For the proposed methods, the reported results are obtained under the best set of hyper-parameters on the validation set after a grid search over the following ranges, with reference to ConvKB [40]: the embedding dimension $k \in \{50, 100\}$, the batch size $\beta \in \{128, 256\}$, and the learning rate $\lambda \in \{5e{-}6, 1e{-}5, 5e{-}5, \ldots\}$.
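The filtered evaluation protocol just described can be sketched as follows (score_fn stands for any of the scoring functions above; known is the set of all facts in the training/validation/test sets):

import numpy as np

def filtered_rank(fact, i, candidates, score_fn, known):
    """Rank of the true role-value at position i among all candidate replacements."""
    r, v = fact[i]
    target = score_fn(fact)
    rank = 1                                   # the test fact itself
    for c in candidates:
        if c == v:
            continue
        corrupted = list(fact)
        corrupted[i] = (r, c)
        if frozenset(corrupted) in known:      # discard corrupted facts that are valid
            continue
        if score_fn(corrupted) > target:
            rank += 1
    return rank

def mrr_hits(ranks, Ns=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    return float((1.0 / ranks).mean()), {n: float((ranks <= n).mean()) for n in Ns}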
The Effectiveness of tNaLP, NaLP+, and tNaLP+

In this section, we analyze the effectiveness of the extended methods, in order to select which methods to compare against the baselines in the next section. As role prediction is much simpler owing to the much smaller role set, this decision is based on the more difficult role-value prediction task. Table 2 demonstrates the experimental results. Specifically, on JF17K, tNaLP+ improves on NaLP by 0.063, 5.3%, 7.1%, and 8.1% on MRR and Hits@{1, 3, 10}, respectively. This verifies the benefit of combining unsupervised type constraints with the newly proposed negative sampling mechanism. The new mechanism, which replaces a random number of r:v pairs, encourages NaLP+ and tNaLP+ to distinguish not only a valid element from other elements in the dataset but also the r:v pairs of one n-ary relational fact from those of others, and results in much better performance. On WikiPeople, tNaLP performs better than NaLP, which indicates the necessity of type constraints. Although we introduce the type constraints in an unsupervised manner, they do work. However, from the results of NaLP+ and tNaLP+, we observe that the new negative sampling mechanism does not show its effectiveness there, probably because of the much higher percentage of binary relational facts in WikiPeople. To verify this conjecture, we derive a new dataset, WikiPeople-n, from WikiPeople: we keep all the n-ary relational facts and randomly remove binary relational facts until the proportions of the binary and n-ary categories match those of the JF17K training set. The experimental results on WikiPeople-n are presented in Table 3. We observe that NaLP+ and tNaLP+ are much better than NaLP and tNaLP, respectively, which confirms the conjecture. However, the overall performance on WikiPeople-n is worse than on JF17K and WikiPeople. WikiPeople, where data incompleteness, insertion, and update are commonplace, is more practical and more difficult than JF17K (see Section 6.1); with many binary relational facts removed, these phenomena become even more severe, and WikiPeople-n is thus harder to deal with than either JF17K or WikiPeople.

What happens if the percentage of binary relational facts is varied further? To investigate, we vary the percentage (0%, 50%, and 100%) of binary relational facts in WikiPeople to obtain three datasets: WikiPeople-0bi, WikiPeople-50bi, and WikiPeople-100bi. The experimental results are demonstrated in Table 4. tNaLP+ performs best on WikiPeople-0bi, NaLP+ on WikiPeople-50bi, and tNaLP on WikiPeople-100bi. From these results and those in Tables 2-3, we find that on datasets dominated by binary relational facts (e.g., WikiPeople and WikiPeople-100bi) the new negative sampling mechanism is not particularly effective, while otherwise it is superior. Since WikiPeople-100bi is a KG dataset, we also compare the proposed tNaLP method with the representative tensor/matrix-based KG method ComplEx [15], the translation-based TransE [20], and the neural-network-based CompGCN [43] in terms of fine-grained MRR and Hits@1 in Fig. 2. tNaLP outperforms these representative KG methods significantly and is thus also superior on a KG dataset.

Notably, on all the datasets, tNaLP does not gain significant performance improvement over NaLP, especially on JF17K and WikiPeople-50bi. To see the reason, we analyze the type information; we present the analysis on WikiPeople, and the other datasets give similar results. The visualization of the type embeddings of the role-values on WikiPeople is illustrated in Fig. 3, where role-values of the same type share the same label and color. Here, the widely used t-SNE [54] is adopted to reduce the dimension of the type embeddings. The statistics of the role-value types on WikiPeople (in logarithm) are illustrated in Fig. 4, where the X-axis indicates the size $n_{type}$ of a role-value type, i.e., the number of role-values of that type, and the Y-axis indicates the number of types of size $n_{type}$.
Although the type embeddings of the role-values in tNaLP capture useful information (see Fig. 3), i.e., those of the same type are close to each other, the type distribution is highly imbalanced (see Fig. 4), and tNaLP thus does not demonstrate its superiority. This imbalance of the type distribution is even more serious on JF17K and WikiPeople-50bi, which is why tNaLP performs much worse there. In a word, from Tables 2, 3, and 4 we conclude that the new negative sampling mechanism is a good choice on datasets with many n-ary relational facts, while introducing the type constraints is preferred when the type distribution of the dataset is not too imbalanced.

Baselines

The representative and state-of-the-art methods for link prediction directly on n-ary relational data are RAE [6], HypE [7], GETD [8], HINGE [9], and NeuInfer [10]. As the tensor-based method GETD inherently requires all facts to have the same arity, it is not applicable to JF17K and WikiPeople, which have mixed arities. For fairness, HINGE and NeuInfer are not adopted as baselines either, since they introduce additional principal-subordinate structure information. We therefore adopt RAE and HypE as baselines.

Experimental Results

Link prediction on n-ary relational data comprises two tasks: role prediction and role-value prediction. As RAE and HypE are deliberately designed only for role-value prediction, we conduct role prediction only with the proposed tNaLP+ method. To elaborate the performance, we group the test set into binary and n-ary (n > 2) categories according to the arities of the facts. Table 5 reports the experimental results in terms of all four metrics on the binary category, the n-ary category, and the whole dataset. The results illustrate the power of tNaLP+, which achieves high performance on all the metrics; we attribute this to the reasonable modeling of r:v pairs.

For role-value prediction, we select tNaLP and tNaLP+ for comparison, based on the results in Table 2, and conduct the experiments on the larger and more difficult WikiPeople owing to space limitations. The results compared to RAE and HypE are reported in Table 6. Clearly, tNaLP and tNaLP+ perform better than the baselines, which testifies to the strength of considering the relatedness of r:v pairs, introducing type constraints, and adopting the more reasonable negative sampling mechanism. tNaLP and tNaLP+ even improve the overall performance by 14.0% and 12.1% on Hits@1, respectively, compared to the best baseline, HypE, showing that the proposed methods are more effective at picking out exactly the valid role-values. RAE performs much worse than the other methods. This is not surprising: tNaLP and tNaLP+ cope better with diverse data. Data incompleteness/insertion/update, which is ubiquitous in WikiPeople, is handled elegantly by tNaLP and tNaLP+, whereas RAE and HypE must define a new relation for each such case, which may lead to data sparsity when the instances of the new relation are few. Although position-specific convolution further improves the performance of HypE, it still underperforms the proposed methods. To evaluate tNaLP and tNaLP+ on role-value prediction more comprehensively, we select the best baseline from Table 6, i.e., HypE, and present the detailed experimental results on each arity (> 2) in Table 7 (arities with fewer than 10 facts are not reported). Table 7 shows that the proposed methods maintain their advantage on every arity.
From Tables 6 and 7, we observe that tNaLP+ performs better than tNaLP on n-ary relational facts, but worse on binary relational facts. This corroborates the conclusion of Section 6.3 that the new negative sampling mechanism benefits n-ary relational facts.

Overall Relatedness Analysis

In the proposed methods, the overall relatedness feature vector of a set of r:v pairs, i.e., of an n-ary relational fact, is the crucial intermediate result. We conduct further analyses to dig into what it has learned.

Distinguishability Metric

According to Section 4.1, a valid n-ary relational fact is expected to have an overall relatedness feature vector with large values, while an invalid n-ary relational fact is expected to have the opposite. How should this "largeness" be evaluated? In practice, the relative magnitude between a valid n-ary relational fact and its corrupted facts is what matters: a valid n-ary relational fact is expected to have larger values than its corrupted facts in a majority of the dimensions of the overall relatedness feature vector, which makes the valid fact distinguishable from its corrupted facts. We therefore propose the following metric to measure this type of distinguishability:

$D(Rel^+, Rel^-) = \mathrm{sum}\big(\mathrm{sgn}(R_{Rel^+} - R_{Rel^-})\big) - \mathrm{sum}\big(\mathrm{sgn}(R_{Rel^-} - R_{Rel^+})\big),$ (10)

where $Rel^+$ is a valid n-ary relational fact and $Rel^-$ is one of its corrupted facts; $R_{Rel^+}$ and $R_{Rel^-}$ are the overall relatedness feature vectors of $Rel^+$ and $Rel^-$, respectively; $\mathrm{sgn}(x)$ is the function that returns 1 if $x > 0$ and 0 otherwise; and $\mathrm{sum}(\cdot)$ is the element-wise sum function. In Equation (10), the left part of the minus sign counts the number of dimensions in which $R_{Rel^+}$ is larger than $R_{Rel^-}$, and the right part counts the number of dimensions in which $R_{Rel^-}$ is larger than $R_{Rel^+}$.
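A sketch of Equation (10) (the toy vectors are made-up placeholders):

import numpy as np

def distinguishability(R_pos, R_neg):
    """Overall relatedness vectors of a valid fact (R_pos) and a corrupted one (R_neg)."""
    sgn = lambda x: (x > 0).astype(int)        # sgn(x) = 1 if x > 0, else 0
    return int(sgn(R_pos - R_neg).sum() - sgn(R_neg - R_pos).sum())

# Positive values mean the valid fact dominates in most feature dimensions.
print(distinguishability(np.array([0.9, 0.8, 0.2]),
                         np.array([0.1, 0.3, 0.4])))   # 2 - 1 = 1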
Similar to the evaluation procedure, we replace Michael Douglas in Fact-1 and Nobel Prize in Physics in Fact-2 with all the other role-values in $V$; these corrupted facts, as well as Fact-1 and Fact-2 themselves, are fed into the proposed methods to obtain their overall relatedness feature vectors, and the distinguishability metric is then computed via Equation (10). Fig. 5 depicts the distinguishability results on NaLP; the extended methods give similar results. As exhibited in Fig. 5, most distinguishability results lie above 0, which visually corroborates that the overall relatedness feature vector of a fact captures many discriminative features for estimating its validity. Our conjecture on relatedness (see Section 4.1) thus makes sense to a fair degree. Due to the diversity of n-ary relational facts, the dimension of the overall relatedness feature vectors should be large, so that there are sufficient dimensions to encode the various features; this is consistent with the large optimal value of $n_{gFCN}$ (e.g., 800 on JF17K and 1200 on WikiPeople). There are also some corrupted facts that obtain extremely small distinguishability results and are difficult to distinguish from the valid facts. Taking Case-2 as an example, Table 8 presents the replaced role-values in its top 10 corrupted facts on NaLP and the extended methods. On NaLP, the corrupted facts whose replaced role-values participate in other roles of Fact-2, i.e., Marie Curie, Henri Becquerel, and 1903, obtain the top 3 smallest distinguishability results; NaLP is unable to filter out such cases. Some corrupted facts whose replaced role-values are of type time, club, or institution also achieve relatively small distinguishability results. This is acceptable, since NaLP does not take role-value types into consideration. By introducing type constraints or the new negative sampling mechanism, tNaLP and NaLP+ learn more reasonable embeddings, and corrupted facts whose replaced role-values are of type award obtain the smallest distinguishability results. By combining type constraints with the new negative sampling mechanism, tNaLP+ makes even more corrupted facts with replaced role-values of type award (6 out of the top 10) obtain the smallest distinguishability results. These role-values are of the same type as the correct role-value Nobel Prize in Physics and are difficult to filter out, especially Wolf Prize in Physics, the Centenary Prize of the Royal Society of Chemistry, the Nobel Prize in Chemistry, and the Lilienfeld Prize (an award administered by the American Physical Society), since Marie Curie was a physicist and chemist and the correct role-value, the Nobel Prize in Physics, is a Nobel Prize. This demonstrates that the extended methods tNaLP, NaLP+, and tNaLP+ learn comparatively more useful information.

Ablation Study

To probe the reasonability of the design choices in Section 4, we perform an ablation study on NaLP. Without loss of generality, these experiments are conducted on WikiPeople on the more difficult role-value prediction task. In detail, the convolution in the r:v pair embedding component is replaced by addition and by multiplication, denoted NaLP(plus) and NaLP(mul), respectively; in addition, the element-wise minimization in the relatedness evaluation component is replaced by element-wise maximization and by the element-wise mean, denoted NaLP(emax) and NaLP(emean), respectively. The experimental comparison between NaLP and the ablation methods is presented in Table 9. NaLP outperforms all the ablation methods significantly, which suggests that convolution is a much better way to learn features than addition or multiplication, and that element-wise minimization is experimentally more reasonable than element-wise maximization or the mean.

Complexity Analysis

To assess the efficiency of the proposed methods, we compare them with the representative and state-of-the-art link prediction methods on n-ary relational data in terms of FLOPs (floating point operations) and parameter size on the larger WikiPeople. The comparison is illustrated in Fig. 6. As GETD [8] requires all facts to have the same arity, which is not reasonable here, we do not compare with it. NaLP and tNaLP+ have similar FLOPs; although they require more FLOPs than RAE [6], HINGE [9], and NeuInfer [10], they are much less costly than HypE [7]. As for parameter size, all the methods have more than 5M parameters; the proposed methods are comparable to NeuInfer and have far fewer parameters than HypE and HINGE.

CONCLUSIONS AND FUTURE WORKS

In this paper, we represented each n-ary relational fact as the set of its r:v pairs and proposed NaLP and its extensions to cope with link prediction on n-ary relational data. The design of the frameworks equips the proposed methods with permutation invariance to the input order of the r:v pairs and the ability to handle facts of different arities. By evaluating the relatedness of every two r:v pairs in an n-ary relational fact, NaLP approximately estimates the overall relatedness of all its r:v pairs.
The resulting overall relatedness feature vectors further enable NaLP to pick up many discriminative features for deciding the validity of the input facts. We further extended NaLP to tNaLP, NaLP+, and tNaLP+ by introducing unsupervised type constraints on roles and role-values, a more reasonable negative sampling mechanism, or both, respectively. Furthermore, since publicly available n-ary relational datasets are limited, we developed a practical one, WikiPeople, and further derived WikiPeople-n, WikiPeople-0bi, WikiPeople-50bi, and WikiPeople-100bi from it, with different percentages of binary relational facts; all have been published online for further research. Experimental results on the public dataset JF17K and the newly developed datasets demonstrate the merits and superiority of the proposed methods. Specifically, on role-value prediction, compared to the state-of-the-art method, tNaLP and tNaLP+ improve the performance by as much as 14.0% and 12.1% in terms of Hits@1, respectively. As for future work, on the one hand, we will explore more expressive neural network models to capture more favorable features for relatedness evaluation. On the other hand, in this paper we used only the facts in the datasets to conduct link prediction on n-ary relational data; in the future, we will introduce additional information, such as rules and external texts, to further improve the developed methods. Moreover, the unsupervised type constraints on roles and role-values and the more reasonable negative sampling mechanism can be adopted to improve other state-of-the-art link prediction methods on n-ary relational data, such as the newly published NeuInfer [10].
The phase transitions between $Z_n\times Z_n$ bosonic topological phases in 1+1 D, and a constraint on the central charge for the critical points between bosonic symmetry protected topological phases

The study of continuous phase transitions triggered by spontaneous symmetry breaking has brought revolutionary ideas to physics. Recently, through the discovery of symmetry protected topological phases, it has been realized that continuous quantum phase transitions can also occur between states with the same symmetry but different topology. Here we study a specific class of such phase transitions in 1+1 dimensions -- the phase transitions between bosonic topological phases protected by $Z_n\times Z_n$. We find that in all cases the critical point possesses two gap opening relevant operators: one leads to a Landau-forbidden symmetry breaking phase transition, and the other to the topological phase transition. We also obtain a constraint on the central charge for general phase transitions between symmetry protected bosonic topological phases in 1+1D.

Introduction and the outline

The last five years witnessed fast progress in the understanding of a new type of quantum disordered states -- symmetry protected topological states (SPTs) [1][2][3]. These states exhibit a full energy gap in closed (i.e., boundary-free) geometry and exhibit the full symmetry of the Hamiltonian. However, they are grouped into different "topological classes" such that it is not possible to cross from one topological class to another without closing the energy gap while preserving the symmetry. Our goal is to understand the difference (if any) between the traditional Landau type and this new kind of "topological" phase transition. Because Landau-type phase transitions are triggered by the fluctuations of bosonic order parameters over space-time, to minimize the obvious differences we focus on the phase transitions between bosonic SPT phases [3]; hence we do not address the phase transitions between fermionic topological insulators or superconductors [1,2]. Moreover, to make everything as concrete as possible, we focus on one space dimension and on topological phase transitions with dynamical exponent equal to one (which can hence be described by conformal field theories (CFTs)). We spend most of the space studying a specific class of such phase transitions -- the phase transition between bosonic SPTs protected by $Z_n \times Z_n$. Here we use a blend of analytic and numerical methods to arrive at a rather complete picture of such critical points. From studying these phase transitions we observe an interesting fact: whenever the transition is direct (i.e., when there are no intervening phases) and continuous, the central charge $c$ of the CFT is always greater than or equal to one. Near the end of the paper, we obtain a constraint on the central charge for CFTs describing bosonic SPT phase transitions: namely, $c \geq 1$. Therefore, none of the well-known "minimal models" [4] can be the CFT of a bosonic SPT phase transition! According to the group cohomology classification [3], in one space dimension the group $Z_n \times Z_n$ protects $n$ different topological classes of SPTs.
If we "stack" a pair of SPTs (which can belong to either the same or different topological class) on top of each other and turn on all symmetry allowed interactions, a new SPT will emerge to describe the combined system. An abelian (cohomology) group H 2 (Z n × Z n , U(1)) = Z n (here the superscript "2" refers to the space-time dimension) classifies the SPT phases and describes the stacking operation. Here each topological class is represented by an element (i.e., 0, ..., n − 1) of H 2 (Z n × Z n , U(1)) = Z n and the "stacking" operation is isomorphic to the mod(n) addition of these elements. To understand the phase transitions between different classes of SPTs it is sufficient to focus on the transition between the trivial state (which corresponds to the "0" of Z n ) and the non-trivial SPT corresponding to the "1" of Z n . The transition between phases correspond to other adjacent elements of Z n , e.g., (m, m + 1), will be in the same universality class as that between (0, 1). Transitions between "non-adjacent" topological classes will generically spit into successive transitions between adjacent classes. There are 11 sections in the main text. In these sections we restrain from heavy mathematics, i.e., we simply state the main results and provide simple arguments. There are 6 appendices where mathematical details can be found. The outline of this paper is the follows. In section 2 we present the exactly solvable fixed point hamiltonians for the trivial and non-trivial Z n × Z n protected SPT phases. In section 3 we present a hamiltonian that interpolates between the fixed point hamiltonians in section 2. A single parameter tunes this hamiltonian through the SPT phase transition. Section 4 introduces a non-local transformation that maps the hamiltonian in section 3 to that of two n-state clock models with spatially twisted boundary condition and Hilbert space constraint. In particular, at criticality, we show that the partition function of the transformed hamiltonian corresponds to an "orbifolded" Z n × Z n clock model. In section 5 we discuss the effects of orbifolding on the phases of the clock model and show the results are consistent with what one expect for the SPT phases. Section 6 gives the phase diagram of the hamiltonian given in section 3. In section 7 we show that from the point of view of the orbifolded clock model the SPT transition corresponds to a Landau forbidden transition. In section 8 we present the conformal field theories for the SPT phase transitions discussed up to that point. Section 9 presents our numerical density matrix renormalization group results. We compare these results with the prediction of section 8. Section 10 presents the argument that the central charge of the CFTs that describe SPT phase transitions must be greater or equal to one. Finally, section 11 is the conclusion. In Appendix A, we provide a brief review of the key ingredients of the 1 + 1D group cohomology, namely, the notions of cocycles and projective representations. After that, we show how to use cocycles to construct solvable fixed point SPT hamiltonians. Appendix B summarizes the non-local transformation that maps the hamiltonian in section 3 of the main text to that of two n-state clock models with spatially twisted boundary condition and Hilbert space constraint. In Appendix C we show that the partition function associated with the hamiltonian in Appendix B (and section 3 of the main text) corresponds to that of "orbifolded" Z n × Z n clock model. 
Appendices D, E, and F present the modular invariant partition functions of the orbifold $Z_2 \times Z_2$, $Z_3 \times Z_3$, and $Z_4 \times Z_4$ clock models, respectively. In these appendices, we examine the primary scaling operator content of the modular invariant conformal field theory. In addition, we study the symmetry transformation properties of various Verma modules and the scaling dimensions of the primary scaling operators, particularly that of the gap opening operator. Appendix G summarizes the details of the density matrix renormalization group calculation. Finally, in Appendix H we briefly review the symmetry of the minimal model conformal field theories.

Exactly solvable "fixed point" Hamiltonians for the SPTs

Each SPT phase is characterized by an exactly solvable "fixed point" Hamiltonian. In Appendix A we briefly review the construction of these Hamiltonians using the "cocycles" associated with the cohomology group [5,6]. For the case relevant to our discussion, lattice Hamiltonians $H_0$ and $H_1$ can be derived (Equation (1)) [7] whose ground states belong to the "0" and "1" topological classes of $H^2(Z_n \times Z_n, U(1)) = Z_n$, respectively. These Hamiltonians are defined on 1D rings consisting of $N$ sites. For each site labeled by $i$, the local Hilbert space is spanned by $|g_{2i-1}, g_{2i}\rangle := |g_{2i-1}\rangle \otimes |g_{2i}\rangle$, where $(g_{2i-1}, g_{2i}) \in Z_n \times Z_n$ with $g_{2i-1}, g_{2i} = 0, 1, \ldots, n-1$. The total Hilbert space is the tensor product of the local Hilbert spaces of all sites. For the convenience of later discussions, from now on we refer to $(2i-1, 2i)$ as defining a "cell", and call $|g_{2i-1}\rangle$ and $|g_{2i}\rangle$ the basis states of "site" $2i-1$ and $2i$. The operators $M_j$ and $R_j$ in Equation (1) are defined by

$M_j |g_j\rangle := |g_j + 1 \bmod n\rangle, \quad R_j |g_j\rangle := \eta_n^{g_j} |g_j\rangle,$ (2)

where $\eta_n = e^{i 2\pi/n}$. From Equation (2) we deduce the following commutation relation between $M$ and $R$:

$R_j M_j = \eta_n\, M_j R_j.$ (3)

Due to this commutation relation, it can be checked that the $n \times n$ matrices associated with $M_j$ and $R_j$ form a projective representation of the $Z_n \times Z_n$ group multiplication law (see Appendix A.2 for the definition of projective representations). Finally, a periodic boundary condition is imposed on Equation (1), which requires the identification of site $2N + j$ with site $j$ (Equation (4)), so that, e.g., $M_{2N+1} = M_1$ and $M_{2N+2} = M_2$. Under these definitions, Equation (1) is invariant under the global $Z_n \times Z_n$ group generated by the operators in Equation (5).

The form of the Hamiltonians in Equation (1) is quite asymmetric between $M$ and $R$. We can make it more symmetric by performing the unitary transformation of Equation (6) on the local cell basis, which induces corresponding transformations of the operators in Equation (1). It is straightforward to show that after these transformations the new operators obey the same commutation relation as Equation (3). Moreover, it can also be shown that $R$ obeys the same boundary condition, namely $R_{2N+1} = R_1$ and $R_{2N+2} = R_2$. In addition, under Equation (6) the generators of the $Z_n \times Z_n$ group become the operators in Equation (7); thus alternating "sites" carry the projective and anti-projective representations of $Z_n \times Z_n$. Under Equation (6) the Hamiltonians $H_0$ and $H_1$ take the form of Equation (8), depicted pictorially in Fig. 1(a,b). Note that while $H_0$ (Fig. 1(a)) couples sites within the same cell, $H_1$ couples sites belonging to adjacent cells (Fig. 1(b)). Because both $H_0$ and $H_1$ consist of decoupled pairs of sites (the coupling terms associated with different pairs commute with one another), they can be exactly diagonalized. The result is a unique ground state with a fully gapped spectrum for both $H_0$ and $H_1$.
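As a quick numerical check of the algebra above (assuming nothing beyond the definitions in Equation (2)), the $n \times n$ clock matrices can be built explicitly; the last two asserts verify that $M$ and $R$ separately represent $Z_n$, while the first exhibits the projective phase $\eta_n$ in their commutation relation, Equation (3):

import numpy as np

n = 3
eta = np.exp(2j * np.pi / n)
M = np.roll(np.eye(n), 1, axis=0)          # M|g> = |g + 1 mod n>
R = np.diag(eta ** np.arange(n))           # R|g> = eta^g |g>

assert np.allclose(R @ M, eta * M @ R)     # R M = eta M R, Equation (3)
assert np.allclose(np.linalg.matrix_power(M, n), np.eye(n))   # M^n = 1
assert np.allclose(np.linalg.matrix_power(R, n), np.eye(n))   # R^n = 1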
Using Equation (7) it is simple to show that the ground states are invariant under $Z_n \times Z_n$. The fact that $H_0$ and $H_1$ describe inequivalent SPTs can be inferred by forming an interface between $H_0$ and $H_1$, as shown in Fig. 1(c). A decoupled site (red) emerges; localized on this site are degenerate gapless excitations carrying a projective representation of $Z_n \times Z_n$ [23]. The fact that gapless excitations must exist at the interface between the ground states of $H_0$ and $H_1$ attests to the fact that these states belong to inequivalent topological classes of $H^2(Z_n \times Z_n, U(1)) = Z_n$.

An interpolating Hamiltonian describing the phase transition between $Z_n \times Z_n$ SPTs

To study the phase transition between the ground states of $H_0$ and $H_1$ we construct the following Hamiltonian, which interpolates between them:

$H(\lambda) = (1 - \lambda) H_0 + \lambda H_1.$ (9)

With both $H_0$ and $H_1$ present, the Hamiltonian in Equation (9) is no longer easily solvable. However, in the following we present analytic results establishing (1) that for $2 \le n \le 4$ the phase transition occurs at $\lambda = 1/2$, and (2) the central charge, the conformal field theory, and its associated primary scaling operators at the phase transition. For $n \ge 5$ there is a gapless phase centered around $\lambda = 1/2$, hence the phase transition is not direct. Moreover, for the interesting case of $n = 3$ we present numerical density matrix renormalization group results which confirm our analytic solution.

Mapping to "orbifold" $Z_n \times Z_n$ clock chains

In Appendix B we show that Equation (9) can be mapped onto a $Z_n \times Z_n$ clock model with a spatially twisted boundary condition and a Hilbert space constraint. In Appendix C we further show that these amount to "orbifolding". The mapping, reminiscent of the duality transformation of a single $Z_n$ clock model, is achieved via the non-local change of variables in Equation (10). After the mapping, the Hamiltonian in Equation (9) is transformed to the Hamiltonian of Equation (11), where the new operators $\tilde{M}$ and $\tilde{R}$ obey the same commutation relations as $M$ and $R$ in Equation (3). Equation (11) is the quantum Hamiltonian of two $Z_n$ clock models [8], one defined on the even and one on the odd sites. However, as generated by the mapping, Equation (11) is supplemented with a twisted spatial boundary condition (Equation (12)) and a constraint (Equation (13)). Here $B$ is an operator that commutes with all the $\tilde{R}$s and $\tilde{M}$s; the eigenvalues of $B$ are $b = 1, \eta_n, \ldots, \eta_n^{n-1}$ (recall that $\eta_n = e^{i 2\pi/n}$). In terms of the transformed variables, the generators of the original $Z_n \times Z_n$ group are given by Equation (14). The spatially twisted boundary condition, Equation (12), and the constraint, Equation (13) (which turns into a time-direction boundary condition twist in the path integral representation of the partition function), execute the "orbifolding" (see later). Equation (11) exhibits a duality under swapping of the even and odd chains, which implies that the self-dual point $\lambda = 1/2$ is special: in particular, if there is a single critical point as a function of $\lambda$, it must occur at $\lambda = 1/2$. Incidentally, setting aside Equation (12) and Equation (13), $\lambda = 1/2$ is where each of the clock chains in Equation (11) becomes critical. As we show later, the effect of Equation (12) and Equation (13) (i.e., of the orbifold) is to change the primary scaling operator content of the critical CFT from that of the direct product of two $Z_n$ clock models; it jeopardizes neither the criticality nor the central charge. We return to these more technical points later. In the meantime, let us first study the effects of Equation (12) and Equation (13) on the phases.
At the meantime let's first study the effects of Equation (12) and Equation (13) on the phases. The effect of orbifold on the phases Knowing the behavior of the single Z n clock chain, Equation (11) suggests for λ < 1/2 the odd-site chain will spontaneously break the Z n symmetry while the even chain remains disordered. The ground state will lie in the b (the eigenvalue of B ) = 1 sector on account of the twisted boundary condition. For λ > 1/2 the behaviors of the even and odd chains exchange, and the ground state remains in the b = 1 sector. On the surface, such symmetry breaking should lead to ground state degeneracy which is inconsistent with the fact that both SPTs (for λ < 1/2 and λ > 1/2) should have unique groundstate. This paradox is resolved if we take into account of the constraint in Equation (13). For simplicity let's look at the limiting cases. For λ = 0 the ground state of Equation (11) is |g, g, ..., g odd ⊗ |p, p, ..., p even ⊗ |b = 1 (15) where g = 0, ..., n − 1. Here the "paramagnet state" |p for each site is defined as As expected, such ground state is n-fold degenerate and it does not satisfy the constraint of Equation (13). However, if we form the symmetric superposition of the odd-site symmetry breaking states ⎛ ⎝ 1 √ n n−1 g=0 |g, g, ..., g odd ⎞ ⎠ ⊗ |p, p, ..., p even ⊗ |b = 1 (17) the constraint is satisfied and the state is non-degenerate. Obviously, Equation (17) is invariant under the Z n × Z n generated by Equation (14). Although Equation (17) is non-degenerate, the two-point correlation function R 2j +1 R † 2k+1 still shows long-range order. Almost exactly the same arguments, with odd and even switched, apply to the λ = 1 limit. The only difference is instead of observing |p, p, ..., p even being invariant under the action of N j =1 M 2j we need to observe that 1 √ n n−1 g=0 |g, g, ..., g even is invariant. As λ deviates from the limiting values, so long as it does not cross any phase transition the above argument should remain qualitatively unchanged. In this way we understand the effects of Equation (12) and Equation (13) on the phases. The phase diagram Since upon orbifolding the phases of the decoupled Z n × Z n clock models seamlessly evolve into the SPT phases we shall construct that phase diagram using what's know about the phase structure of the clock model. It is known that a single Z n clock chain shows an order-disorder phase transition at a single critical point for n ≤ 4, while there is an intermediate gapless phase for n ≥ 5 we conclude the phase diagram is shown in Fig. 2(a,b). Since our goal is to study the continuous phase transition between SPTs we focus on n ≤ 4. SPT transitions as "Landau-forbidden" phase transitions According to Landau's rule, transitions between phases whose symmetry groups do not have subgroup relationship should generically be first order. Continuous phase transitions between such phases are regarded as "Landau forbidden" in the literature. As discussed earlier, in terms of the orbifolded Z n × Z n clock chains, the two phases on either side of the SPT phase transition correspond to the breaking of the Z n symmetry in one of the clock chain but not the other. In the following, we elaborate on this statement. For λ < 1/2 although the ground state in Equation (17) is non-degenerate, the two-point correlation function R 2j +1 R † 2k+1 shows long-range order. When the odd and even chains are switched the same argument applies to the λ > 1/2 limit. 
If we define the two $Z_n$ generators of Equation (18), one acting on the odd chain and one on the even chain, it is easy to show that Equations (11), (12), and (13) commute with them; hence the $\tilde{Z}_n \times \tilde{Z}_n$ group they generate is also a symmetry of the problem. It is important, however, not to confuse $\tilde{Z}_n \times \tilde{Z}_n$ with the original $Z_n \times Z_n$ group (which is generated by Equation (14)). With respect to the $\tilde{Z}_n \times \tilde{Z}_n$ symmetry, the two phases (realized for $\lambda < 1/2$ and $\lambda > 1/2$) break two different $\tilde{Z}_n$ factors; hence the symmetry groups of the two phases have no subgroup relationship, and a continuous phase transition between them, if it exists, is a Landau forbidden transition. In fact, it is the original $Z_n \times Z_n$ symmetry that "fine tunes" the system to realize such a non-generic continuous phase transition.

The CFT at the SPT phase transition for n = 2, 3, 4

It is known that the central charge of the CFT describing the criticality of a single $Z_n$ clock chain is $c = 1/2, 4/5, 1$ for $n = 2, 3, 4$. Thus the central charge of the CFT describing the simultaneous criticality of two decoupled $Z_n$ clock chains is $c = 1, 8/5, 2$ for $Z_2 \times Z_2$, $Z_3 \times Z_3$, and $Z_4 \times Z_4$. This is summarized in Table 1.

Table 1. The central charges associated with the critical points of the $Z_n \times Z_n$ SPT phase transitions for $n = 2, 3, 4$.

Symmetry group | Central charge
$Z_2 \times Z_2$ | 1
$Z_3 \times Z_3$ | 8/5
$Z_4 \times Z_4$ | 2

Of course, we do not actually have two decoupled clock chains: the spatial boundary condition twist (Equation (12)) and the constraint (Equation (13)), namely the orbifolding, couple the two chains together. The purpose of this section is to address the effects of orbifolding on the criticality of the two decoupled chains. Let us start with the conformal field theory of a single $Z_n$ clock chain. The partition function of such a CFT on a torus is given by

$Z = \sum_{a,b} M_{ab}\, \chi_a(q)\, \bar{\chi}_b(\bar{q}).$ (19)

Here the indices $a, b$ label the Verma modules. Each Verma module is spanned by the states associated with a primary scaling operator and its descendants through the operator-state correspondence, and carries an irreducible representation of the conformal group. The parameter $q$ in Equation (19) equals $e^{2\pi i \tau}$, where $\tau$ is the modular parameter of the space-time torus (see Fig. D.8). $\chi_a(q)$ and $\bar{\chi}_b(\bar{q})$ are, respectively, the partition functions associated with the holomorphic Verma module $a$ and the antiholomorphic Verma module $b$. The matrix $M_{ab}$ has non-negative integer entries. The partition function of two decoupled, simultaneously critical $Z_n$ clock chains is then

$Z = \sum_{a,b,c,d} M_{ab} M_{cd}\, \chi_a(q) \chi_c(q)\, \bar{\chi}_b(\bar{q}) \bar{\chi}_d(\bar{q}).$ (20)

It turns out that the effect of the orbifold is to replace the coefficients $M_{ab} M_{cd}$ by new non-negative integers $N_{(a,c);(b,d)}$ (Equation (21)). In particular, $N_{(1,1);(1,1)} = 1$, i.e., the tensor product of the ground states of the two clock chains is also the ground state of the orbifold model. Moreover, for those $N_{(a,c);(b,d)} > 0$, the scaling dimension of the holomorphic primary operator $(a, c)$ is $h_{(a,c)} = h_a + h_c$, and similarly for the antiholomorphic primary operator $(b, d)$. The fact that the ground state of the orbifold model remains the tensor product of the ground states of the decoupled clock chains implies that the central charge is unchanged, i.e., equal to the sum of the central charges of the two chains (Equation (22)). The latter identity can be seen from the fact that the central charge can be computed from the entanglement entropy, which is a pure ground state property. Thus, after the orbifold, the system is still conformally invariant (i.e., quantum critical), and the central charge is unaffected by the orbifold. This argument allows us to conclude that the central charge of the $Z_n \times Z_n$ ($n = 2, 3, 4$) SPT phase transitions is indeed as given in Table 1. In Appendices D, E, and F we go through the details of obtaining the modular invariant partition functions of the orbifold $Z_n \times Z_n$ ($n = 2, 3, 4$) clock chains.
We examine the primary scaling operator content of the modular invariant conformal field theory. In addition, we study the symmetry transformation properties of the various Verma modules and the scaling dimensions of the primary scaling operators, in particular that of the gap-opening operator. In Table 2 we list the first few most relevant scaling operators and their scaling dimensions for $n = 2, 3, 4$. Entries in blue are invariant under $Z_n \times Z_n$.

Numerical DMRG study of the Z3 × Z3 SPT phase transition

In this section, we report the results of a numerical density matrix renormalization group calculation for the $Z_3 \times Z_3$ transition. The purpose is to check the analytic predictions of the last section. The details of the numerical calculations are presented in Appendix G. First, we demonstrate that $\lambda = 1/2$ in Equation (9) is indeed a critical point. Consider the second derivative of the ground-state energy with respect to $\lambda$ for both open and periodic boundary conditions with different system sizes (Fig. 3). The results clearly suggest a second-order phase transition at $\lambda_c = 1/2$, where the second derivative of the energy diverges. Next, we compute the central charge at $\lambda = 1/2$. This is done by computing the entanglement entropy, which is calculated from the reduced density matrix obtained by tracing out the degrees of freedom associated with $N - l$ sites in a system with $N$ sites in total. In Fig. 4 we plot the von Neumann entanglement entropy $S$ against $x = \frac{N}{\pi} \sin(\pi l / N)$, where $l$ is the number of sites that are not traced out. CFT predicts $S = \frac{c}{6} \ln(x) + \mathrm{const}$ for open boundary conditions and $S = \frac{c}{3} \ln(x) + \mathrm{const}$ for periodic boundary conditions [9]. From the numerics we find $c = 1.599(9)$. This result is in nearly perfect agreement with our analytic prediction $c = 8/5$. In addition to the above results, we have also calculated the gap as a function of $\lambda$. Fitting the result to
$$\Delta(\lambda) \propto |\lambda - \lambda_c|^{\alpha},$$
we estimate the gap exponent to be $\alpha = 0.855(1)$ for open boundary conditions (Fig. 5) and $\alpha = 0.847(1)$ for periodic boundary conditions (Fig. 6). These results are in good agreement with the analytic prediction $\alpha = 5/6$ (see Appendix E.4).

The constraint on the central charge

An examination of Table 1 shows that $c \ge 1$ for all $Z_n \times Z_n$ SPT phase transitions. Moreover, for all the cases we know, including SPTs protected by continuous groups, all 1D ($z = 1$) bosonic SPT phase transitions are described by CFTs with $c \ge 1$. In the following we present an argument that the CFT of any 1D bosonic SPT phase transition must have $c \ge 1$. We proceed by showing that the $c < 1$ CFTs cannot be the critical theory of bosonic SPT transitions. The 1D CFTs that are unitary and have $c < 1$ are the so-called minimal models. In Appendix H we summarize the argument of Ref. [10], where it is shown that the maximal on-site internal symmetry ("on-site" symmetries are those consisting of a product of local transformations acting on the local, e.g. site or group-of-sites, Hilbert space) that these CFTs can possess is either $Z_2$ or $S_3$. Since the critical point of a bosonic SPT phase transition must possess the same on-site symmetry as the phases on either side, and neither $Z_2$ nor $S_3$ can protect non-trivial bosonic SPTs in 1D (i.e., $H^2(Z_2, U(1)) = H^2(S_3, U(1)) = Z_1$), we conclude that the CFTs corresponding to the minimal models cannot possibly be the critical theory of bosonic SPT phase transitions. This leaves the $c \ge 1$ CFTs as the only possible candidates for the critical theory of bosonic SPT phase transitions.
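Returning to the numerical analysis above: to make the central-charge extraction concrete, here is a minimal sketch of the entanglement-entropy fit, using synthetic data in place of real DMRG output (the chain length N, the offset and the input value c = 8/5 are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical DMRG output: entanglement entropies S(l) for an open chain
# of N sites; here synthesized so the script is self-contained.
N = 96
l = np.arange(4, N - 3)
x = (N / np.pi) * np.sin(np.pi * l / N)      # the chord length used in Fig. 4
S = (8 / 5) / 6 * np.log(x) + 0.7            # synthetic data with c = 8/5

def cardy_obc(x, c, const):
    # CFT prediction for open boundary conditions: S = (c/6) ln(x) + const
    # (for periodic boundary conditions replace c/6 by c/3)
    return (c / 6) * np.log(x) + const

(c_fit, const_fit), _ = curve_fit(cardy_obc, x, S)
print(f"fitted central charge: c = {c_fit:.3f}")   # -> 1.600
```

On real data the fit is restricted to cuts far from the boundaries, as done above by excluding the smallest and largest values of l.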
Conclusions

In this paper, we present an analytic theory for the phase transition between symmetry-protected topological states protected by the $Z_n \times Z_n$ symmetry group. We have shown that for $2 \le n \le 4$ a direct, continuous, topological phase transition exists. In contrast, for $n \ge 5$ the transition from the topologically trivial to the non-trivial SPT is intervened by an intermediate gapless phase. Our theory predicts that for $n = 2, 3, 4$ the central charges of the CFTs describing the SPT phase transitions are $c = 1, 8/5$ and $2$, respectively. We perform explicit numerical density matrix renormalization group calculations for the interesting case of $n = 3$ to confirm our analytic predictions. We expect that a treatment analogous to the one outlined in this paper can be generalized to phase transitions between SPTs protected by the symmetry group $Z_{n_1} \times Z_{n_2} \times \cdots$. In addition, we provide a proof of a conjecture put forward in a previous unpublished preprint [11] that the central charge of the CFTs describing bosonic SPT transitions must be greater than or equal to 1. Thus all $c < 1$ CFTs cannot be the critical theory of bosonic SPT phase transitions. However, we have not yet answered the question "are all CFTs with $c \ge 1$ capable of describing topological phase transitions?" Of course, upon a non-local transformation the $c < 1$ minimal models can be viewed as the critical theories of parafermion SPT transitions. Indeed, the $c = 1/2$ Ising conformal field theory describes the critical Majorana chain, and the $c = 4/5$ three-state Potts CFT describes the critical point of the $Z_3$ parafermion chain. We suspect that the parafermion models escape the classification of either K-theory or the cohomology group because of their non-local commutation relations. In space dimension greater than one, we do not know a model that definitively exhibits a continuous phase transition between bosonic SPTs. This is due partly to the likelihood of spontaneous breaking of the discrete protecting symmetry when $d \ge 2$. In addition, even if a continuous phase transition exists, it is more difficult to study such transitions, even numerically. However, a "holographic theory" was developed for phase transitions between SPT phases that satisfy the "no double-stacking constraint" [6]. That theory predicts that the critical point should exhibit "delocalized boundary excitations" of the non-trivial SPT, which are extended "string"- or "membrane"-like objects with gapless excitations residing on them. We expect this kind of critical point to be fundamentally different from a Landau-like critical point. Clearly, many future studies are warranted for the understanding of these interesting phase transitions.

Appendix A. Zn × Zn SPT Hamiltonian in 1D

We briefly review the definition of cocycles in group cohomology and describe a procedure [3,5] to construct the fixed-point SPT Hamiltonians (1) that are relevant to this paper.

A.1. Cocycle

In 1D, a cocycle associated with a group $G$ is a $U(1)$-valued function $\nu(g_0, g_1, g_2)$, with arguments $g_i \in G$, which satisfies the homogeneity condition $\nu(g g_0, g g_1, g g_2) = \nu(g_0, g_1, g_2)$. Here we only consider groups realized by unitary representations. Moreover, $\nu$ satisfies the following cocycle condition:
$$\frac{\nu(g_1, g_2, g_3)\, \nu(g_0, g_1, g_3)}{\nu(g_0, g_2, g_3)\, \nu(g_0, g_1, g_2)} = 1. \qquad (A.1)$$
If
$$\nu(g_0, g_1, g_2) = \frac{c(g_1, g_2)\, c(g_0, g_1)}{c(g_0, g_2)}$$
for a certain $c(g_0, g_1)$ satisfying $c(g g_0, g g_1) = c(g_0, g_1)$, we say it is a coboundary. It may be checked that a coboundary automatically satisfies the cocycle condition Equation (A.1). Two cocycles related by multiplication with a coboundary are viewed as equivalent.
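As a concrete illustration of the last statement, the following short numerical check verifies that a coboundary always satisfies Equation (A.1); the choice $G = Z_4$ and the random 1-cochain are hypothetical inputs, used only to exercise the identity:

```python
import numpy as np
from itertools import product

# Check (for G = Z_4) that a coboundary automatically satisfies the
# homogeneous 2-cocycle condition. Homogeneity c(g g0, g g1) = c(g0, g1)
# holds because c depends only on the difference g1 - g0 (mod n).
n = 4
rng = np.random.default_rng(0)
f = np.exp(2j * np.pi * rng.random(n))      # arbitrary U(1) phases

def c(g0, g1):
    return f[(g1 - g0) % n]

def nu(g0, g1, g2):                          # coboundary built from c
    return c(g1, g2) * c(g0, g1) / c(g0, g2)

for g0, g1, g2, g3 in product(range(n), repeat=4):
    lhs = nu(g1, g2, g3) * nu(g0, g1, g3) / (nu(g0, g2, g3) * nu(g0, g1, g2))
    assert np.isclose(lhs, 1.0)
print("coboundary satisfies the cocycle condition (A.1)")
```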
The equivalence classes of cocycles form $H^2(G, U(1))$, the second cohomology group of $G$ with $U(1)$ coefficients. Bosonic $G$-symmetric SPTs in one space dimension are "classified" by $H^2(G, U(1))$, i.e., each equivalence class of SPTs is in one-to-one correspondence with an element of the abelian group $H^2(G, U(1))$. The binary operation of the abelian group corresponds to the "stacking" operation, i.e., laying two SPTs on top of each other and turning on all symmetry-allowed interactions.

A.2. Projective representation

In quantum mechanics, symmetry operations are usually realized as matrices $R(g)$ acting on the Hilbert space. Usually these matrices form a linear representation of the symmetry group, namely,
$$R(g_1)\, R(g_2) = R(g_1 g_2). \qquad (A.4)$$
However, two quantum states differing by a $U(1)$ phase are regarded as the same quantum mechanically. Thus, one should relax Equation (A.4) by allowing a phase ambiguity $\omega$, namely,
$$R(g_1)\, R(g_2) = \omega(g_1, g_2)\, R(g_1 g_2). \qquad (A.5)$$
When Equation (A.5) is satisfied we say that the $R(g)$ form a projective representation of the original symmetry group. Obviously, a linear representation, where $\omega(g_1, g_2) = 1$, is a special case of a projective representation; in the literature, linear representations are usually viewed as "trivial" projective representations. Associativity under group multiplication requires
$$\omega(g_1, g_2)\, \omega(g_1 g_2, g_3) = \omega(g_1, g_2 g_3)\, \omega(g_2, g_3).$$
In addition, the phase ambiguity of quantum states obviously allows one to multiply all $R(g)$ by a $U(1)$ phase $\phi(g)$, namely,
$$R(g) \;\longrightarrow\; \phi(g)\, R(g).$$
This phase transformation results in
$$\omega(g_1, g_2) \;\longrightarrow\; \omega(g_1, g_2)\, \frac{\phi(g_1)\, \phi(g_2)}{\phi(g_1 g_2)}. \qquad (A.8)$$
Consequently, $\omega$s related by Equation (A.8) should be regarded as equivalent. It turns out that in 1D, cocycles of group cohomology can be interpreted as projective representations. The easiest way to see this is by defining $\omega(g_1, g_2)$ and $\phi(g_1)$ in terms of the cocycle $\nu$ and the coboundary $c$ defined in the last subsection, namely,
$$\omega(g_1, g_2) = \nu(e, g_1, g_1 g_2), \qquad \phi(g_1) = c(e, g_1),$$
where $e$ is the identity element of $G$. In terms of $\omega$ the cocycle condition becomes the associativity condition above, since $(\partial \omega)(g_1, g_2, g_3) := (\partial \nu)(e, g_1, g_1 g_2, g_1 g_2 g_3)$; likewise $(\partial \phi)(g_1, g_2) := (\partial c)(e, g_1, g_1 g_2)$, which is exactly the factor appearing in Equation (A.8).

A.3. Construction of Hamiltonian

Here we describe how to construct solvable Hamiltonians, one for each equivalence class of SPTs. Consider a 1D ring consisting of $N$ lattice sites. The Hilbert space of each site $i$ is spanned by $\{|g_i\rangle\}$, where $g_i \in G$, and the total Hilbert space is spanned by the tensor products of the site bases, i.e., $|\{g_i\}\rangle = \bigotimes_i |g_i\rangle$. For each class of SPTs (or each element of $H^2(G, U(1))$), pick a representative cocycle $\nu(g_0, g_1, g_2)$. The "fixed point" ground state, a particular representative of a whole equivalence class of SPTs, associated with the cocycle $\nu$ is (Ref. [3], Section IX)
$$|\Psi\rangle \propto \sum_{\{g_i\}} \prod_{i=1}^{N} \nu^{\sigma(i, i+1)}(e, g_i, g_{i+1})\; |\{g_i\}\rangle. \qquad (A.12)$$
Here $e$ represents the identity element of $G$; it is attached to the "0" site at the center of the ring, as shown in Fig. A.7. $\sigma(i, i+1) = \pm 1$ depending on the orientation of the triangle $(0, i, i+1)$. The orientation of each link in the triangle is represented by an arrow pointing from the site with the smaller index to the site with the bigger index; from the link orientations we determine the triangle orientation by following the majority of the link orientations and the right-hand rule. Finally, the periodic boundary condition requires $g_{N+1} = g_1$. The Hamiltonian whose exact ground state is Equation (A.12) is
$$H = -J \sum_{i} B_i, \qquad (A.13)$$
where $J > 0$. The operator $B_i$ only changes the basis state on site $i$, and its matrix elements are fixed by the cocycle $\nu$ (Equation (A.14)). For $G = Z_n \times Z_n$ there are $n$ inequivalent SPT classes, and $H^2(Z_n \times Z_n, U(1)) = Z_n$.
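For intuition, here is a minimal numerical example (not from the paper) of a non-trivial projective representation: the Pauli matrices realizing $Z_2 \times Z_2$, which is precisely the non-trivial class of $H^2(Z_2 \times Z_2, U(1)) = Z_2$:

```python
import numpy as np

# Z2 x Z2 realized projectively by Pauli matrices: R(1,0) = X, R(0,1) = Z.
# Since XZ = -ZX, R(g1) R(g2) = omega(g1,g2) R(g1 g2) with omega = +/-1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

R = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): X @ Z}

def mul(g, h):                      # group law of Z2 x Z2
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

for g in R:
    for h in R:
        lhs, rhs = R[g] @ R[h], R[mul(g, h)]
        omega = (lhs @ np.linalg.inv(rhs)).trace() / 2   # proportionality phase
        assert np.allclose(lhs, omega * rhs)             # projective rep holds
print("Pauli matrices form a projective representation of Z2 x Z2")
```

The gauge-invariant ratio $\omega(g_1, g_2)/\omega(g_2, g_1) = -1$ for the two generators cannot be removed by any phase redefinition of the form (A.8), which is why this projective class is non-trivial.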
Each equivalence class of $H^2(Z_n \times Z_n, U(1))$ is represented by a cocycle
$$\nu\big((e, e), (g_1, g_2), (g_3, g_4)\big) = \eta_n^{k g_2 g_3}, \qquad \text{where } \eta_n = e^{i 2\pi/n}.$$
In the above, $(g_{2i-1}, g_{2i}) \in Z_n \times Z_n$ are the $Z_n$ elements associated with site $i$, and each $k \in \{0, 1, \ldots, n-1\}$ corresponds to a different element of $Z_n \cong H^2(Z_n \times Z_n, U(1))$. In the main text, we refer to $|g_{2i-1}, g_{2i}\rangle$ as the "cell basis", the tensor product of the "site basis" states $|g_{2i-1}\rangle$ and $|g_{2i}\rangle$. The fixed-point Hamiltonian constructed using the procedure discussed above is a sum of commuting terms $B_{2i-1}$ and $B_{2i}$, where $B_{2i-1}$ changes the state $|g_{2i-1}\rangle$ and $B_{2i}$ changes the state $|g_{2i}\rangle$. Explicitly calculating the matrix elements (Equation (A.14)) for the cases relevant to our consideration, i.e., $k = 0$ and $k = 1$ (recall that we are interested in the quantum phase transition between the SPTs corresponding to the "0" and "1" elements of $Z_n$), one can show that the resulting operators can be written in terms of $M_j$ and $R_j$, defined in Equation (2) of the main text.

Appendix B. The mapping to Zn × Zn clock models with a spatially twisted boundary condition and a Hilbert space constraint

In this section, we show that Equation (8) and Equation (9) of the main text can be mapped onto "orbifolded" $Z_n \times Z_n$ clock models. The mapping is similar to the "duality transformation" of the $Z_n$ clock model; its explicit form is given in Equation (B.1), where the tilde operators obey the same commutation relations as the un-tilde ones. Due to the periodic boundary condition on $R$, namely $R_{2N+1} = R_1$ and $R_{2N+2} = R_2$, the constraint of Equation (B.2) is obtained. Moreover, if we also imposed a periodic boundary condition on $\tilde{R}_j$, a similar constraint on $M_i$ would follow. Since there is no such constraint on $M_i$ in the original problem, we instead impose a "twisted" boundary condition on $\tilde{R}_j$, in which the twist operator $B$ commutes with all $\tilde{R}_j$ and $\tilde{M}_j$. Moreover, $B$ has eigenvalues $b = 1, \eta_n, \ldots, \eta_n^{n-1}$, i.e., $B|b\rangle = b|b\rangle$. Substituting Equation (B.1) into Equation (8) and Equation (9) of the main text, we obtain the transformed Hamiltonian, Equation (B.5). On the surface, Equation (B.5) describes two decoupled $Z_n$ clock models living on the even and odd sites, respectively. However, the notion of "decoupled chains" is deceptive, because the constraint in Equation (B.2) couples them together.

Appendix C. The notion of "orbifold"

A useful way to implement the constraint Equation (B.2) is to apply the projection operator onto the constraint-satisfying subspace, i.e., to average over the $Z_n$ boundary-condition twists. This yields the orbifolded partition function
$$Z_{\mathrm{orbifold}} = \frac{1}{n} \sum_{q_s, q_\tau = 0}^{n-1} Z^{n\text{-clock}}_{q_s, q_\tau}\; Z^{n\text{-clock}}_{q_s, q_\tau}, \qquad (C.5)$$
where $Z^{n\text{-clock}}_{q_s, q_\tau}$ represents the clock-model partition function under the space and time twisted boundary conditions characterized by $q_s$ and $q_\tau$. In Equation (C.5), $Z^{n\text{-clock}}_{q_s, q_\tau}$ appears twice on the right-hand side because without the orbifold (i.e., the summing over space and time twisted boundary conditions) we have two independent $n$-state clock models. Averaging the partition function over the space and time boundary-condition twists is the "orbifolding" [12]. Note that here the spatial boundary-condition twist is generated by one of the $Z_n$ generators, namely $B$, in Equation (B.7). However, the time twist is generated by $Q = \prod_{j=1}^{2N} M_j$, which is a symmetry of the $Z_n \times Z_n$ clock Hamiltonian, Equation (B.5), but is not the generator of the other $Z_n$ in Equation (B.7).

D.1. Review of modular invariant partition function for the Ising model

The Ising model shows an order-disorder phase transition. At the critical point, the Hamiltonian is given by
$$H = -\sum_i \left( M_i + R_i R_{i+1} \right),$$
where $M_i$ and $R_i$ are the Pauli matrices $\sigma^x$ and $\sigma^z$, respectively (we use $M, R$ rather than $\sigma^x, \sigma^z$ for consistency of notation).
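Since everything below is phrased in terms of the clock operators $M_j$ and $R_j$, a small self-contained check of their algebra may be useful; this is a sketch assuming the conventions $R|g\rangle = \eta_n^g |g\rangle$ and $M|g\rangle = |g+1 \bmod n\rangle$ (for $n = 2$ these reduce to the Pauli matrices used in this appendix):

```python
import numpy as np

# Explicit n x n clock and shift matrices realizing R M = eta_n M R
# (single-site version of the algebra used throughout).
n = 3
eta = np.exp(2j * np.pi / n)
R = np.diag(eta ** np.arange(n))          # clock: R|g> = eta^g |g>
M = np.roll(np.eye(n), 1, axis=0)         # shift: M|g> = |g+1 mod n>

assert np.allclose(R @ M, eta * M @ R)
assert np.allclose(np.linalg.matrix_power(M, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(R, n), np.eye(n))
print("clock algebra verified: R M = eta M R, M^n = R^n = 1")
```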
The central charge of a single critical Ising chain is $c = \frac{1}{2}$. Its conformal field theory is the $M(4,3)$ minimal model. The primary scaling operators are labeled by two pairs of indices, $(r, s)$ and $(r', s')$, which label the "holomorphic" and the "anti-holomorphic" parts of the operator, respectively. The ranges of these indices are $1 \le s \le r \le 2$ and $1 \le s' \le r' \le 2$. The scaling dimensions of the holomorphic and anti-holomorphic parts of these operators are given by the Kac formula
$$h_{r,s} = \frac{(4r - 3s)^2 - 1}{48}, \qquad \bar{h}_{r',s'} = \frac{(4r' - 3s')^2 - 1}{48}.$$
Through the operator-state correspondence, each of these primary fields, together with its associated "descendants", forms the basis of a Hilbert space (the "Verma module") which carries an irreducible representation of the conformal group. Now consider the partition function of the CFT on a spacetime torus (see Fig. D.8). The prototype torus is obtained by identifying opposite edges of the parallelogram having $(0, 1, 1+\tau, \tau)$ as the complex coordinates of its four vertices ($\tau \in$ upper half complex plane). On such a torus, the partition function is given by
$$Z = \sum_{(r,s),(r',s')} M_{(r,s),(r',s')}\; \chi_{r,s}(q)\, \bar{\chi}_{r',s'}(\bar{q}),$$
with $q = e^{i 2\pi \tau}$ and $\bar{q} = e^{-i 2\pi \bar{\tau}}$. Here the traces defining the characters $\chi_{r,s}$ and $\bar{\chi}_{r',s'}$ are taken within the Verma modules labeled by $(r,s)$ and $(r',s')$. For the CFT to be consistent, its partition function must be "modular invariant" [13]. The modular group consists of the discrete coordinate transformations that leave invariant the lattice whose fundamental domain is shown in Fig. D.8. This group is generated by the $T$ ($\tau \to \tau + 1$) and the $S$ ($\tau \to -1/\tau$) transformations. When acted upon by these transformations, the characters transform according to
$$\chi_{r,s}(\tau + 1) = \sum_{(r'',s'')} T_{(r,s),(r'',s'')}\; \chi_{r'',s''}(\tau), \qquad \chi_{r,s}(-1/\tau) = \sum_{(r'',s'')} S_{(r,s),(r'',s'')}\; \chi_{r'',s''}(\tau),$$
with similar expressions for $\bar{\chi}_{r',s'}$. Here $S, T$ are known matrices, and the transformation matrices of the anti-holomorphic $\bar{\chi}$ are the complex conjugates of those of the holomorphic ones. The requirement of modular invariance, namely
$$S^{\dagger} M S = T^{\dagger} M T = M, \qquad (D.4)$$
fixes $M_{(r,s),(r',s')} = \delta_{(r,s),(r',s')}$ for the Ising model. The corresponding partition function is given by
$$Z_{\mathrm{Ising}} = |\chi_{1,1}|^2 + |\chi_{2,1}|^2 + |\chi_{2,2}|^2.$$
The explicit form of $\chi_{(r,s)}$ is given by equation (8.15) of Ref. [14]. The conformal dimensions of the primary fields and their eigenvalues under the action of the $Z_2$ generator are summarized in Table D.3 [14].

D.2. Constructing the orbifold partition function for the Z2 × Z2 critical theory

With this brief review of the modular invariant partition function of the critical Ising model, we are ready to construct the partition function of the orbifolded $Z_2 \times Z_2$ model defined by Equation (C.5). The twisted Ising partition functions $Z^{\mathrm{Ising}}_{q_s, q_\tau}$ ($q_s, q_\tau = 0, 1$) are given in Ref. [14], where their behavior under the modular transformations is also derived. Substituting them into Equation (C.5) yields the orbifolded partition function, Equation (D.8), where the $\tau$ dependence is suppressed. When expanded in terms of $\chi_{r,s} \bar{\chi}_{r',s'}$, the first term yields 4 terms (henceforth referred to as group I terms), the second term yields 2 terms (group II terms), and the third term yields 4 terms (group III terms). Due to the prefactor 2 of the second term on the right-hand side of Equation (D.8), the terms in group II appear with multiplicity 2. It turns out that this partition function is the same as that of the XY model. The first few energy levels with $h + \bar{h} < 2$ and their quantum numbers are listed in Table D.4 (the quantum numbers of the first few primary operators of the orbifold model).

D.3. Transformation properties under the action of Z2 × Z2

To see how the contributing Verma modules of Equation (D.8) transform under the action of $Z_2 \times Z_2$, we construct operators that project the Hilbert space into the subspaces carrying the various irreducible representations of $Z_2 \times Z_2$. Let $G_A = B$ and $G_B = \prod_{i \in \mathrm{even}} M_i$ be the generators of $Z_2 \times Z_2$.
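Before proceeding, here is a small numerical sanity check of the representation-filtering projectors used next; the 4-dimensional realization of the two commuting generators is a toy assumption (in the text, $G_A = B$ is the boundary-twist generator and $G_B = \prod_{i \in \mathrm{even}} M_i$):

```python
import numpy as np
from itertools import product

# Character projectors onto the irreducible representations of Z2 x Z2,
# built from two commuting order-2 unitaries G_A, G_B.
Zm = np.diag([1.0, -1.0])
I2 = np.eye(2)
GA = np.kron(Zm, I2)
GB = np.kron(I2, Zm)

def projector(a, b):
    # P_ab = (1/4) sum_{j,k} (-1)^(a j + b k) G_A^j G_B^k
    return sum((-1) ** (a * j + b * k)
               * np.linalg.matrix_power(GA, j) @ np.linalg.matrix_power(GB, k)
               for j, k in product(range(2), repeat=2)) / 4

P = {(a, b): projector(a, b) for a, b in product(range(2), repeat=2)}
assert np.allclose(sum(P.values()), np.eye(4))       # completeness
for (a, b), Pab in P.items():
    assert np.allclose(Pab @ Pab, Pab)               # idempotent
    assert np.allclose(GA @ Pab, (-1) ** a * Pab)    # advertised G_A eigenvalue
print("Z2 x Z2 character projectors verified")
```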
The operator that projects into the subspace with eigenvalues $(\eta_2^a, \eta_2^b)$ (here $\eta_2 = -1$ and $a, b = 0, 1$) under the action of $G_A$ and $G_B$ is given by
$$P_{ab} = \frac{1}{4} \sum_{j,k=0}^{1} \eta_2^{-aj - bk}\; G_A^j\, G_B^k.$$
To filter out the Verma modules that transform according to a particular irreducible representation, we compute
$$\mathrm{Tr}\!\left[ P_{ab}\; Q^{q_\tau}\; e^{-\beta (H_{\mathrm{even}} + H_{\mathrm{odd}})} \right]. \qquad (D.10)$$
For example, for $P_{00}$ only the group I terms survive, which means that only group I transforms as the identity representation of $Z_2 \times Z_2$. For the other $P_{ab}$, the results are summarized in Table D.5 (transformation properties of the contributing Verma modules in Equation (D.8) under the action of $G_A$ and $G_B$; for group II, the doublet records the transformation properties of the multiplicity-two Verma modules in Equation (D.8)).

D.4. Scaling dimension for the operator driving the Z2 × Z2 SPT transition

The operator that drives the SPT phase transition must be (1) relevant, (2) translationally invariant and (3) invariant under $Z_2 \times Z_2$. In Equation (D.8), the only term containing operators that satisfy these conditions is $2 |\chi_I \chi_{\epsilon}|^2$ (there are two such operators due to the multiplicity 2). The scaling dimension of $(\epsilon I)(\bar{\epsilon} \bar{I})$ is $h + \bar{h} = 1 < 2$, hence it is relevant. The momentum of this operator is $h - \bar{h} = 0$, hence it is translationally invariant. Moreover, according to Table D.5, these operators are invariant under $Z_2 \times Z_2$. It turns out that one of these two relevant operators drives a symmetry-breaking transition while the other drives the SPT transition (see Fig. D.10). From the scaling dimension $h + \bar{h} = 1$ we predict the gap exponent to be $\frac{1}{2-1} = 1$.

E.1. Review of modular invariant partition function for the 3-state Potts model

The construction of the orbifold partition function for the $Z_3 \times Z_3$ case closely mirrors the $Z_2 \times Z_2$ case, but instead of two critical Ising chains we now have two critical Potts chains. We first review the known results for the modular invariant $Z_3$ clock model (equivalent to the 3-state Potts model). The 3-state Potts model shows an order-disorder phase transition. At the critical point, the Hamiltonian is given by
$$H = -\sum_j \left( M_j + R_j^{\dagger} R_{j+1} + \mathrm{h.c.} \right),$$
where $R_j$ has eigenvalues $1, \eta_3, \eta_3^2$ ($\eta_3 = e^{i 2\pi/3}$) and $R_j M_k = \eta_3^{\delta_{jk}} M_k R_j$. The conformal field theory of the critical 3-state Potts model belongs to the well-known "minimal" model $M(6,5)$ [14,15]. The central charge is $c = 4/5$, and the primary scaling operators are labeled by two pairs of indices, $(r, s)$ and $(r', s')$, which label the "holomorphic" and the "anti-holomorphic" parts of the operator. The ranges of these indices are $1 \le s \le r \le 4$ and $1 \le s' \le r' \le 4$. The scaling dimensions of the holomorphic and anti-holomorphic parts of these operators are given by
$$h_{r,s} = \frac{(6r - 5s)^2 - 1}{120}, \qquad \bar{h}_{r',s'} = \frac{(6r' - 5s')^2 - 1}{120}.$$
It is easy to check that $h_{r,s} = h_{5-r, 6-s}$ and $\bar{h}_{r',s'} = \bar{h}_{5-r', 6-s'}$; hence there are 10 distinct primary fields in each of the holomorphic and anti-holomorphic sectors. Requiring modular invariance (D.4) for $c = 4/5$ yields two possible such $M$'s: one with $M_{(r,s),(r',s')} = \delta_{(r,s),(r',s')}$, describing the "tetra-critical Ising model", and the other corresponding to the 3-state Potts model, described by the following partition function [14]:
$$Z_{\mathrm{Potts}} = |\chi_I|^2 + |\chi_{\epsilon}|^2 + 2 |\chi_{\psi}|^2 + 2 |\chi_{\sigma}|^2,$$
where
$$\chi_I := \chi_{1,1} + \chi_{4,1}, \quad \chi_{\epsilon} := \chi_{2,1} + \chi_{3,1}, \quad \chi_{\psi} := \chi_{4,3}, \quad \chi_{\sigma} := \chi_{3,3}. \qquad (E.4)$$
Note that of the 10 possible primary operators in each holomorphic/anti-holomorphic sector, only six contribute to the partition function. In addition, the diagonal combinations of the $(3,3)$ and $(4,3)$ operators from the two sectors appear twice. The explicit form of $\chi_{(r,s)}$ is given by equation (8.15) of Ref. [14].
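The dimensions quoted in this appendix follow mechanically from the Kac formula; a short sketch (using exact rational arithmetic) that reproduces them for both $M(4,3)$ and $M(6,5)$:

```python
from fractions import Fraction

# Kac formula h_{r,s} = ((p r - p' s)^2 - (p - p')^2) / (4 p p')
# for the minimal model M(p, p').
def h(p, pp, r, s):
    return Fraction((p * r - pp * s) ** 2 - (p - pp) ** 2, 4 * p * pp)

# Ising = M(4,3): identity, epsilon, sigma
print([h(4, 3, r, s) for r, s in [(1, 1), (2, 1), (2, 2)]])  # [0, 1/2, 1/16]

# 3-state Potts = M(6,5): the six fields entering Equation (E.4)
potts = {"I": [(1, 1), (4, 1)], "eps": [(2, 1), (3, 1)],
         "psi": [(4, 3)], "sigma": [(3, 3)]}
for name, rs in potts.items():
    print(name, [h(6, 5, r, s) for r, s in rs])
# I -> [0, 3]; eps -> [2/5, 7/5]; psi -> [2/3]; sigma -> [1/15]
```

In particular, the $\epsilon$ dimension $h = 2/5$ is the value behind the gap-exponent prediction $1/(2 - 4/5) = 5/6$ quoted in E.4 below.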
The conformal dimensions of the primary fields and their eigenvalues under the action of the $Z_3$ generator are summarized in Table E.6 [14].

E.2. Constructing the orbifold partition function for the Z3 × Z3 critical theory

With this brief review of the modular invariant partition function of the critical 3-state Potts model, we are ready to construct the partition function of the orbifolded $Z_3 \times Z_3$ model defined by Equation (C.5). The twisted partition functions $Z^{3\text{-Potts}}_{q_s, q_\tau}$ are given in Ref. [14]. Using the behavior of $Z^{3\text{-Potts}}_{q_s, q_\tau}(\tau)$ under the modular $T$ and $S$ transformations, together with the $S, T$ matrices of the 3-state Potts model, we can compute all these terms. Substituting the results into Equation (E.5), we obtain the orbifolded $Z_3 \times Z_3$ partition function, Equation (E.10), where the $\tau$ dependence is suppressed. When expanded in terms of $\chi_{r,s} \bar{\chi}_{r',s'}$, the first term yields 64 terms (henceforth referred to as group I terms); the remaining terms are grouped analogously.

E.3. Transformation properties under the action of Z3 × Z3

Following the same procedure as in D.3, we construct operators that project the Hilbert space into subspaces carrying the various irreducible representations of $Z_3 \times Z_3$, which is generated by $G_A = B$ and $G_B = \prod_{i \in \mathrm{even}} M_i$. The projector into the subspace with eigenvalues $(\eta_3^a, \eta_3^b)$ (here $\eta_3 = e^{i 2\pi/3}$ and $a, b = 0, 1, 2$) under the action of $G_A$ and $G_B$ is given by
$$P_{ab} = \frac{1}{9} \sum_{j,k=0}^{2} \eta_3^{-aj - bk}\; G_A^j\, G_B^k$$
(see Table E.7). Analogously to Equation (D.10), we filter out the Verma modules that transform according to a particular irreducible representation by computing $\mathrm{Tr}[P_{ab}\, Q^{q_\tau} e^{-\beta (H_{\mathrm{even}} + H_{\mathrm{odd}})}]$. For example, for $P_{00}$ only the group I terms survive, which means that only group I transforms as the identity representation of $Z_3 \times Z_3$. For the other $P_{ab}$, the results are summarized in Table E.8.

E.4. Scaling dimension for the operator driving the Z3 × Z3 SPT transition

From Table E.7 and Equation (E.10), it is seen that the translationally invariant (i.e., $h - \bar{h} = 0$), relevant (i.e., $h + \bar{h} < 2$), $Z_3 \times Z_3$-invariant operators have scaling dimension either $4/5$ or $8/5$. Through a comparison with the numerical result for the gap exponent in section 9 of the main text, we identify one of the operators with scaling dimension $4/5$ as responsible for the opening of the energy gap at the SPT phase transition. The predicted gap exponent is $\frac{1}{2 - 4/5} = 5/6$, which agrees reasonably well with the numerical gap exponent. Moreover, similarly to the $Z_2 \times Z_2$ case, there are two operators with the same scaling dimension ($4/5$). Again, one of these operators drives a symmetry-breaking transition while the other drives the SPT transition; hence the phase diagram is similar to Fig. D.10.

The construction for the $Z_4 \times Z_4$ case proceeds as follows. For the $Z_4$ clock model, the Hilbert space of each site $j$ is 4-dimensional. In the following, we shall regard this 4-dimensional Hilbert space as the tensor product of two 2-dimensional Hilbert spaces associated with sites $2j - 1$ and $2j$, and view each 2-dimensional space as the Hilbert space of an Ising spin. In this way, the $Z_4$ clock model with $N$ sites can be viewed as an Ising model with $2N$ sites. More explicitly, this is achieved by a unitary transformation $U = \prod_i U_i$ that maps the clock operators onto the $2 \times 2$ Pauli matrices $X_i = \sigma^x_i$ and $Z_i = \sigma^z_i$. Thus, the partition function of the $Z_4$ clock model under the periodic boundary condition is given by that of the corresponding Ising model, which decouples into two chains (one on each sublattice of the doubled lattice). The fact that the Ising model has central charge $c = 1/2$ implies that the central charge of the critical $Z_4$ clock model is $1/2 + 1/2 = 1$. A CFT with $c = 1$ has infinitely many Verma modules [17]. The scaling dimension of the primary fields, which can take any non-negative value, is parametrized by $h = x^2/4$, where $x$ is a non-negative real number.
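Each such Verma module is spanned by a primary state and its descendants; for a generic $c = 1$ module the number of descendants at level $N$ is the partition number $p(N)$, which is exactly what the characters quoted next count. A minimal sketch of this counting (the truncation level is an illustrative choice):

```python
# Level degeneracies of a generic Verma module: the coefficient of
# q^(h - 1/24 + N) in q^(h - 1/24) / prod_k (1 - q^k) is the partition
# number p(N), computed here by expanding one Euler factor at a time.
def partitions(nmax):
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):        # multiply in the factor 1/(1 - q^k)
        for m in range(k, nmax + 1):
            p[m] += p[m - k]
    return p

print(partitions(8))   # [1, 1, 2, 3, 5, 7, 11, 15, 22]
```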
The characters associated with these Verma modules are given [18] by
$$\chi_h(q) = \frac{q^{h - 1/24}}{\prod_{k=1}^{\infty} (1 - q^k)}. \qquad (F.3)$$
Because later on we shall perform the orbifolding, it is necessary to consider the $Z_4$ clock model under twisted spatial boundary conditions. With the spatial boundary condition twisted by the $Z_4$ generator, i.e., $R_{N+1} = \eta_4 R_1$, the last two terms, namely $Z_{2N} Z_2 + Z_{2N-1} Z_1$ in Equation (F.2), are replaced by twisted counterparts. In the language of the Ising model, this replacement creates an overpass connecting the even chain to the odd chain, together with a sign change of one bond (the red bond in Fig. F.11(b)). Thus we arrive at a single Ising chain twice as long, with the spatial boundary condition twisted by the $Z_2$ generator. As a result, the twisted $Z_4$ clock partition function equals an Ising partition function with a $Z_2$ spatial twist, evaluated at modular parameter $\tau/2$; the modular parameter of the Ising partition function is half that of the $Z_4$ clock partition function because the Ising chain has twice the length in the spatial direction. The same argument applies if the boundary is instead twisted by the inverse of the $Z_4$ generator ($R_{N+1} = \eta_4^3 R_1$). Similarly, when the spatial boundary condition is $R_{N+1} = \eta_4^2 R_1$, the Hamiltonian of the Ising model becomes that of two decoupled Ising chains, each having one sign-flipped bond, equivalent to the $Z_2$ twisted boundary condition (see Fig. F.11(c)); the resulting partition function is the product of two spatially $Z_2$-twisted Ising partition functions. Using the known $S$ and $T$ matrices of the Ising model, the remaining $Z^{Z_4\text{-clock}}_{(q_s, q_t)}(\tau)$ can be determined; the $\chi_h$ appearing in the resulting expressions are given by Equation (F.3). The scaling dimensions of the highest-weight states associated with the Verma modules that generate these $\chi_h$ are summarized in Table F.9. Let us refer to the six terms in Equation (F.8) as Groups I, II, III, IV, V and VI, respectively. Due to the prefactor of 2, Group III elements appear in doublets. Due to the prefactor of 4, Group IV, V and VI elements appear with multiplicity 2. In Table F.10 we list the first few primary fields with scaling dimension $h + \bar{h} < 2$ and their quantum numbers. The results are summarized in Table F.11. One doublet term in Equation (F.8) yields two primary fields with scaling dimension $h + \bar{h} = 1$ (hence relevant) that are invariant under $Z_4 \times Z_4$ and translation; hence they qualify as the gap-generating operator. The gap exponent is $\frac{1}{2-1} = 1$. As in the $Z_2 \times Z_2$ and $Z_3 \times Z_3$ cases, there are two operators with the same scaling dimension (1); one of them drives a symmetry-breaking transition while the other drives the SPT transition, hence the phase diagram is similar to Fig. D.10.

Fig. D.8. (Color online) The space-time torus with spatial and temporal boundary conditions twisted by the group elements $g_s$ and $g_\tau$. The path in red picks up the group element $g_\tau g_s$, while the path in blue picks up the group element $g_s g_\tau$. Since the path in red can be deformed into the path in blue, $g_s$ and $g_\tau$ need to commute for the boundary conditions to be self-consistent.

For the orbifold construction to be consistent, the twisted partition functions must assemble into something modular invariant. Therefore, to detect whether an on-site symmetry group contains $Z_m$ as an abelian subgroup, we just need to see whether it is possible to assign $Z_m$ irreducible representations to the Verma modules so that, after orbifolding, the partition function is modular invariant. For discrete groups, after knowing all abelian subgroups we can reconstruct the total group $G$. This is essentially the strategy followed by Ref. [10].
More explicitly, let the Hilbert space consistent with a spatial boundary condition twisted by $\rho^{g_s}$ ($\rho$ is the generator of a certain abelian subgroup $Z_m$, and $g_s = 0, \ldots, m-1$) be
$$\mathcal{H}^{(g_s)} = \bigoplus_{i,j} M^{(g_s)}_{ij}\; V_i \otimes \bar{V}_j, \qquad (H.1)$$
where $V_i$ ($\bar{V}_j$) is the $i$-th ($j$-th) Verma module in the holomorphic (anti-holomorphic) sector, and $M^{(g_s)}_{ij}$ is a non-negative integer labeling the multiplicity of the $V_i \otimes \bar{V}_j$ modules. Moreover, for the CFT to have a unique ground state, we require that the vacuum module ($i = 1$) show up only once in the periodic sector, i.e., $M^{(g_s)}_{11} = \delta_{0, g_s}$. Next we assign irreducible representations to the Verma modules: the symmetry generator acts on the $k$-th copy of $V_i \otimes \bar{V}_j$ in $\mathcal{H}^{(g_s)}$ as
$$\rho^{g_\tau} = \eta_m^{Q(g_\tau;\, g_s, i, j, k)}, \qquad (H.2)$$
where $g_\tau = 0, \ldots, m-1$, $\eta_m = e^{\frac{i 2\pi}{m}}$ and $Q(g_\tau; g_s, i, j, k) \in \{0, \ldots, m-1\}$ is called the "symmetry charge" in Ref. [10]. Combining Equation (H.1) and Equation (H.2), we obtain the following spacetime boundary-twisted partition function on a torus with modular parameter $\tau$:
$$Z_{g_s, g_\tau}(\tau) = \mathrm{Tr}_{\mathcal{H}^{(g_s)}}\!\left( \rho^{g_\tau}\; q^{L_0 - c/24}\; \bar{q}^{\bar{L}_0 - c/24} \right) = \sum_{i,j} \sum_{k=1}^{M^{(g_s)}_{ij}} \eta_m^{Q(g_\tau;\, g_s, i, j, k)}\; \chi_i(q)\, \bar{\chi}_j(\bar{q}).$$

H.1. The consistency conditions

So far the abelian subgroup $Z_m$, as well as $M^{(g_s)}_{ij}$ and $Q(g_\tau; g_s, i, j, k)$, are unknown. They need to be determined subject to the following consistency conditions. (1) When there is no spatial boundary-condition twist, the Hilbert space in Equation (H.1) must return to that of the periodic boundary condition; moreover, in the case where there is also no time boundary-condition twist, the partition function must agree with the modular invariant partition function $Z_{0,0}(\tau)$. Conditions (1)-(3) pose strong constraints on the possible abelian subgroups $Z_m$ and on the allowed assignments of irreducible representations (i.e., of $Q(g_\tau; g_s, i, j, k)$) to each Verma module.

H.2. The on-site symmetry of minimal models

Under constraints (1)-(3) of the previous subsection, Ref. [10] solved for the possible abelian subgroups and their symmetry representations for all minimal models. By patching these abelian subgroups together, the author reached the following conclusion: the on-site symmetries of the unitary minimal models are exactly the same as those predicted by the lattice RSOS models [20]; hence there is no emergent symmetry. Thus, for most of the unitary minimal models the symmetry is $Z_2$. The only exceptions are the 3-state Potts and tri-critical 3-state Potts models, where the symmetry is $S_3$, and the minimal models labeled by $E_7$ and $E_8$, where there is no symmetry.
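For orientation, the remaining consistency conditions are typically the standard modular-covariance rules for twisted partition functions; we quote them here in their generic orbifold form (up to convention-dependent signs), not necessarily as the authors' exact list:
$$Z_{g_s, g_\tau}(\tau + 1) = Z_{g_s,\; g_\tau + g_s}(\tau), \qquad Z_{g_s, g_\tau}(-1/\tau) = Z_{g_\tau,\; -g_s}(\tau),$$
so that the full set $\{Z_{g_s, g_\tau}\}$ closes under the modular group and the orbifold average is modular invariant.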
2017-04-06T00:23:23.000Z
2017-01-03T00:00:00.000
{ "year": 2017, "sha1": "4f2eb07b94fbe88db83ee9e6ce8b3db37a3900c0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.nuclphysb.2017.03.021", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a42c92dfae8ef34622eef6863319ad10ecaa13d7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
269136972
pes2o/s2orc
v3-fos-license
Impulse Control Disorders in Parkinson's Disease: An Overview of Risk Factors, Pathogenesis and Pharmacological Management

Impulse control disorders in Parkinson's disease are relatively common drug-induced addictive behaviours that are usually triggered by the dopamine agonists pramipexole, ropinirole and rotigotine. This narrative review aimed to provide a comprehensive overview of the current knowledge of impulse control disorders in Parkinson's disease. We summarised the prevalence, clinical features, risk factors and potential underlying mechanisms of impulse control disorders in Parkinson's disease. Moreover, recent advances in behavioural and imaging characteristics and management strategies are discussed. Early detection as well as a tailored multidisciplinary approach, which typically includes careful adjustment of the dopaminergic therapy and the treatment of associated neuropsychiatric symptoms, are necessary. In some cases, a continuous delivery of levodopa via a pump or the dopamine D1 receptor agonist, apomorphine, can be considered. In selected patients without cognitive or speech impairment, deep brain stimulation of the subthalamic nucleus can also improve addictions. Finding the right balance of tapering the dopaminergic dose (usually dopamine agonists) without worsening motor symptoms is essential for a beneficial long-term outcome.

Introduction

Impulse control disorders (ICDs) are defined as a "failure to resist an impulse, temptation, or drive to perform an act that is harmful to the person or others" [1]. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) includes oppositional defiant disorder, intermittent explosive disorder, conduct disorder, kleptomania and pyromania as ICDs [2]. The DSM-V also lists nine types of substance addictions that include alcohol, caffeine, cannabis, hallucinogens, inhalants (such as nitrous oxide, amyl nitrite and volatile solvents including paint removers and cleaning products), opioids, sedatives, hypnotics, anxiolytics, stimulants and tobacco. Moreover, gambling disorder is now included in the chapter on Substance-Related and Addictive Disorders [2]. This change was made to highlight the similarities between gambling disorder and drug addiction: in both conditions, an anticipatory craving, a decrease of anxiety, and a feeling of euphoria following gambling or intake of the drug may occur. Additionally, gambling disorder and drug addiction frequently co-occur [3]. According to the DSM-V criteria, ICDs occur in five stages. Typically, ICDs begin with an increased sense of tension, followed by a failure to resist an urge to act. During the act, the arousal peaks, and as the act is completed a sense of relief or release is felt. Finally, patients may feel remorse or guilt for their behaviour [2].
Impulse control disorders and related disorders are seen as comorbidities in neurodegenerative diseases, such as progressive supranuclear palsy [4,5], multiple system atrophy [6,7] and frontotemporal dementia [8], and are most common in patients with idiopathic Parkinson's disease (PD) [9]. Moreover, addictive behaviours can also occur in patients without clear evidence of neuronal/nigrostriatal degeneration as a direct consequence of dopamine agonist therapy, namely in patients with fibromyalgia [10], patients with restless legs syndrome (particularly in those who additionally have augmentation) [11] and patients with endocrine diseases (such as pituitary adenomas) [12]. Furthermore, ICDs and related disorders have been described in patients with frontal lobe dysfunction such as Gilles de la Tourette syndrome [13], and in patients with attention-deficit hyperactivity syndrome [14]. In the majority of patients diagnosed with PD, these addictive behaviours emerge following the start of dopaminergic therapy, mainly dopamine agonists. Regardless of the underlying comorbidity, patients with ICDs and related disorders typically continue their addiction despite negative consequences. Any attempt to discontinue the behaviour frequently leads to dysphoria, anxiety and depression, similar to withdrawal symptoms after drug abuse [15]. Compulsive sexual disorder (see Table 1), gambling disorder (see Table 2), compulsive shopping (see Table 3) and compulsive eating are the most commonly described ICDs in PD [9]. Other related addictions in patients with PD include dopamine dysregulation syndrome (DDS, sometimes also called Lees syndrome), in which patients hoard drugs and self-medicate with a larger amount of levodopa, against the physician's advice, to avoid off-periods (for diagnostic criteria, see Table 4), and punding, the urge to perform senseless activities repeatedly (such as assembling and disassembling, or collecting and sorting objects) [16-18]. Other phenomena include hobbyism (a pathological pursuit of common hobbies, such as excessive fishing, writing or Internet use), reckless generosity [19], excessive hoarding [20], walkabouts [21] and drug addiction [22]. Although the name ICD implies an inability to resist an urge, these heterogeneous behaviours are sometimes complex, sometimes habitual, non-goal oriented and stereotyped. Therefore, ICDs have both impulsive and compulsive aspects, as noted in several studies [23-25]. As in the general population, it is believed that the impulsive component, together with the feeling of joy and gratification, may be responsible for the initiation of the addiction, while a more habitual and compulsive component may be the culprit of persistence [26]. In line with this, patients with PD with ICDs and DDS often report a feeling of euphoria, mania or pleasure, while punding is a more peculiar addictive behaviour in PD, not driven by pleasure [27]. Previously, it was thought that gambling disorder was the most frequent ICD and that increased libido occurred less frequently [9], but the results of several studies suggest that compulsive sexual behaviour (for proposed diagnostic criteria, see Table 1) is one of the most common, if not the most common, addictions in male patients with PD [28,29]. Multiple addictions are also common once an ICD or related disorder has been detected [9], particularly in those with compulsive sexual disorder [29].
While ICDs in PD have been described more thoroughly within the last few decades, these side effects of dopaminergic medication are not new and had been reported in the 1960s and 1970s, a few years after the introduction of levodopa [30-32]. The true prevalence of these behaviours in PD is unknown, as patients likely conceal or under-report these side effects because of shame or denial. The general consensus is that ICDs and related addictions occur in somewhere between 14% and 30% of patients [9,28]; the frequency is likely much higher in patients with a younger disease onset [33], with a 5-year cumulative incidence of 46% [34]. Because of increased awareness and changes in the prescribing of dopaminergic therapy, ICDs and related addictions are currently possibly declining again, although some suggest that the COVID-19-induced lockdowns, and thus an increase in environmental stress, may have caused a renewed rise of these addictive behaviours [35].

The objective of this narrative review is to provide a comprehensive overview of risk factors, potential mechanisms, diagnosis and the management of ICDs and related disorders in patients with PD.

Behavioural Aspects of Patients with PD with ICDs and Related Disorders

Not surprisingly, studies found that patients with PD with addictive disorders report higher impulsivity scores, higher levels of neuroticism, lower levels of agreeableness and conscientiousness, as well as lower working memory capacity than patients with PD without ICDs and related disorders [38-40]. Furthermore, these patients have higher schizotypy scores (a measure of psychosis risk) compared with controls [41].

Table 2. Diagnostic criteria for gambling disorder
(A) Persistent and recurrent problematic gambling behaviour leading to clinically significant impairment or distress, as indicated by the individual exhibiting four (or more) of the following in a 12-month period:
Needs to gamble with increasing amounts of money in order to achieve the desired excitement;
Is restless or irritable when attempting to cut down or stop gambling;
Has made repeated unsuccessful efforts to control, cut back or stop gambling;
Is often preoccupied with gambling (e.g. having persistent thoughts of reliving past gambling experiences, handicapping or planning the next venture, thinking of ways to get money with which to gamble);
Often gambles when feeling distressed (e.g. helpless, guilty, anxious, depressed);
After losing money gambling, often returns another day to get even ("chasing" one's losses);
Lies to conceal the extent of involvement with gambling;
Has jeopardised or lost a significant relationship, job, or educational or career opportunity because of gambling;
Relies on others to provide money to relieve desperate financial situations caused by gambling.
(B) The gambling behaviour is not better accounted for by a manic episode.

Table 3. Diagnostic criteria for compulsive shopping [37]
(A) Maladaptive preoccupation with buying or shopping that is manifested as impulses or behaviours that:
1. Are experienced as irresistible, intrusive and/or senseless;
2. Result in frequent buying of more than can be afforded, of items that are not needed, or for a longer period of time than intended.
(B) The impulses or behaviours cause marked distress, are time consuming, significantly interfere with social and occupational functioning, or result in financial problems.
(C) Not occurring exclusively during (hypo)manic episodes.

Table 4. Diagnostic criteria for dopamine dysregulation syndrome [21]
DRT, dopamine replacement therapy
To date, several studies have assessed the acute behavioural changes following dopaminergic administration in PD. Unmedicated patients with PD showed enhanced learning from negative feedback, while medicated patients learned better from positive feedback [42-44]. This has been shown in drug-naïve patients with PD (n = 26) who were treated for 12 weeks with an oral dopamine agonist (pramipexole n = 14, ropinirole n = 12) and tested on feedback learning. This was done using a computer-based probabilistic classification task, in which a reward-learning task, a punishment-learning task and a no-feedback outcome were intermixed. Untreated patients had intact negative feedback learning but impaired positive feedback learning, whereas after 12 weeks of dopamine agonist therapy this pattern changed to impaired learning from negative outcomes with normal reward learning [45]. These studies led to the hypothesis that patients with PD with ICDs have increased positive feedback learning and/or diminished negative feedback learning, which may then facilitate the development of addictive behaviours in PD. However, several studies conducted prior to and following dopaminergic therapy did not show differences in feedback learning in a two-choice probabilistic discrimination task in patients with PD with ICDs and related disorders compared to healthy volunteers [40,41,46]. In contrast, one other study with a two-choice probabilistic discrimination task with three conditions (gain, loss, neutral) showed that patients with PD with ICDs and related disorders (n = 14) had better reward learning [47]; in another study, patients with PD with ICDs and related disorders (n = 16) were worse in negative feedback learning [48]. Furthermore, patients with PD with ICDs and related disorders had a heightened sensitivity to reward-related cues, measured by pupillary dilatation, both in the "off" and in the "on" state, whereas patients with PD without ICDs and related disorders only had this reward sensitivity after dopaminergic medication [49]. Other factors likely to contribute to the development of ICDs and related disorders include risk taking. Two studies observed patients with PD prior to and after dopaminergic therapy; patients were tested with a forward and backward digit span test, an instrumental learning task and a gambling task in one study [40], and with the Balloon Analogue Risk Task (a computerised decision-making task used to assess risk-taking behaviour) in the other study. Both studies found increased risk-taking behaviour in patients with PD with ICDs following medication intake [40], particularly dopamine agonists (either pramipexole or ropinirole) [50].
Mixed results have also been reported on inhibitory control. In one study, patients with PD (n = 52) were worse than healthy controls in the Stroop task prior to dopaminergic medication intake, with no difference between patients with PD with addictive behaviours (n = 28) and those without (n = 24). After dopaminergic medication, both patient groups performed as well as healthy volunteers, with no group differences [51]. In line with this, patients with PD with addictions did not perform worse on the Simon task (a task to assess impulsive choice) than PD controls following dopamine agonist intake. In fact, those with ICDs and related behaviours made fewer fast impulsive response errors than PD controls, which suggests that addictive behaviours in PD are less related to motor impulsivity [52]. However, preliminary results from an eye-tracking study showed increased error rates on the anti-saccade task [53]. In the anti-saccade task, participants are asked to fixate on a central cross on a screen. As soon as it disappears, a peripheral cue appears on the horizontal plane, randomly on the right or left; participants are asked not to perform a saccade towards the cue, but rather in the opposite direction. Successful inhibitory control depends on intact frontal cortical function as well as an intact frontal eye field and normal function within the thalamo-cortico-cerebellar network [54]. In line with this, a functional magnetic resonance imaging (MRI) study with a double-blind, randomised, crossover design in male volunteers (n = 16) receiving placebo and pramipexole has shown that pramipexole reduces striatal interaction with the prefrontal cortex [55]. These data further dovetail with the hypothesis of a dysfunction of prefrontal cortical inhibition in patients with PD with ICDs and related disorders. Impulsivity has several facets, and it is likely that, in contrast to motor impulsivity, temporal discounting (the preference for a smaller immediate reward over a larger delayed reward) and reflection impulsivity (the tendency to make decisions without considering available information) play a bigger role in the development of addictive behaviours in PD. Patients with PD with ICDs and related disorders (n = 35) had a steeper discounting of future rewards on medication compared with their off state [56] and had increased temporal discounting in their on-medication state compared with non-impulsive patients with PD (n = 55) [41,43,57]. In particular, patients with PD with a gambling disorder and those with compulsive shopping seem to have greater temporal discounting than patients with PD with other ICDs, such as compulsive sexual behaviour and binge eating disorder [38]. The orbitofrontal cortex seems to play a critical role in encoding temporal discounting. For example, lesions to its medial part due to a stroke caused increased discounting for money, suggesting that the orbitofrontal cortex is necessary for optimal weighting of future outcomes during decision making [58]. Furthermore, patients with PD with ICDs and related disorders made premature decisions and jumped to conclusions with little evidence in a study comparing patients with PD with ICDs (n = 6), patients with PD without ICDs (n = 27), patients with a gambling disorder (n = 23) and patients with substance abuse (n = 13) using the bead task [59]. This poor information sampling is sometimes also called "reflection impulsivity" and is likely caused by dopamine agonist medication, but not by levodopa or deep brain stimulation (DBS) [60].
Another typical feature of patients with PD with ICDs and related disorders is enhanced novelty seeking [38], which has also been demonstrated in a probabilistic learning task [61]. Taken together, these studies show that ICDs and related disorders likely affect impulsivity in the decisional domain, with impairment in temporal discounting, poor information sampling, novelty seeking and increased risk taking, and less difficulty in the motor domain, such as response inhibition.

Differences Between ICDs and Related Disorders Within Patients with PD

Comparative studies, and large studies differentiating between the individual addictive behaviours, are rare. This is likely because of the multiple addictions that frequently co-occur [9]. One large study, however, compared patients with PD with gambling disorders (n = 54), compulsive sexual behaviour disorders (n = 47), compulsive shopping (n = 54) and binge eating disorders (n = 42). All of these patients had only one addictive behaviour. As expected, all patients with PD with ICDs and related disorders had greater depression, and all but the binge eating group had higher anxiety scores, compared with PD controls (n = 282). Interestingly, only patients with PD with compulsive shopping had increased temporal discounting (assessed with the delayed discounting task, a self-report scale used to observe choice impulsivity) compared with PD controls. Novelty seeking was significantly higher than in PD controls (18.7 on the self-report Temperament and Character Inventory) in patients with compulsive shopping (25.1), with a trend in those with a gambling disorder (22.3), but not in patients with PD with a compulsive sexual disorder (19.1) or a binge eating disorder (18.9) [38]. Moreover, patients with PD with single or multiple ICDs had higher levodopa doses (679.9 vs 544 mg/day), were functionally more impaired, and had higher scores on depression (Geriatric Depression Scale-15, 4.9 vs 2.8), anxiety (State Trait Anxiety Inventory, 39.9 vs 33.6), obsessive-compulsive symptoms (Obsessive Compulsive Inventory, 13.7 vs 8.8), novelty seeking (Temperament and Character Inventory, 21.8 vs 18.7) and impulsivity (Barratt Impulsiveness Scale, 66.6 vs 57.5) compared with PD controls. However, there was no difference between patients with PD with single and with multiple ICDs [38]. There are some characteristics that are probably more commonly seen in patients with PD with compulsive sexual disorders than in patients with PD with other types of addictions. For example, one, albeit small, study (n = 111) reported that multiple ICDs are particularly common in young male patients with PD with a compulsive sexual disorder [29]. Furthermore, psychotic symptoms such as paranoid delusional jealousy (Othello syndrome) have also been more commonly described in those with a compulsive sexual disorder; these patients hold the false conviction that their partners are unfaithful [62]. In line with this, one study also found that patients with PD with a compulsive sexual disorder are less agreeable than patients with PD with other ICDs or PD controls, using the Neuroticism-Extroversion-Openness Five Factor Inventory [63]. In the Parkinson Progression Markers Initiative cohort, punding behaviours could be predicted by current or antecedent attentional dysfunction in de novo patients with PD and by impairments in activities of daily living [64].
Burden of ICDs and Related Disorders in PD

Patients with PD with ICDs and related disorders experience more non-motor symptoms (particularly neuropsychiatric problems) than patients with PD without addictive behaviours. More specifically, depression, a poorer quality of life [28], a reduction in social well-being [65], apathy [66], worse sleep, more anxiety as well as higher mania scores [67], psychosis [68] and a higher frequency of rapid eye movement sleep behaviour disturbances [69] are frequently seen in these patients. Higher aggressiveness, irritability, disinhibition, poorer insight and denial also occur regularly [70]. Moreover, urinary dysfunction, fatigue, cardiovascular problems [71] as well as poorer working memory [40] negatively impact the quality of life of patients with PD and ICDs. In addition, patients with PD who develop addictive behaviours have a longer disease duration (i.e., data from the National Danish Patient Registry show a mean disease duration of 9.3 years in patients with ICDs compared with 7.5 years in those without), have more motor complications, and take larger amounts of dopaminergic medication than those without ICDs and related disorders [38,39,68,72,73]. Apart from the patients' personal disease burden, the burden of relatives caring for patients with PD is already high because of mental, physical and socioeconomic problems [74]. Carers of patients with PD without ICDs and related problems report a far greater burden from mental rather than physical stress, which significantly reduces their quality of life [75]; this strain on the quality of life is even more pronounced in carers of patients with PD with ICDs and related disorders [76]. More specifically, depressive symptoms, apathy and disinhibition in patients with PD with ICDs result in the high caregiver burden [77].

Non-Pharmacological Risk Factors for ICDs and Related Disorders in PD

It is unclear why some patients with PD develop addictive disorders and others do not. It is therefore unlikely that a single mechanism is causative for the development of ICDs. However, it has now been widely accepted that the use of dopaminergic medications (particularly dopamine agonists) in susceptible patients is responsible for the development of an addiction in PD [78,79]. Several non-pharmacological risk factors have been identified in recent years. The individual vulnerability may consist of striatal density or genetic factors, or a combination of both [72]. In line with this, a recent genome-wide association study identified four loci (DAB1, PRKAG2, MEFV and PRKCE) associated with ICDs in a large cohort of 5770 patients with PD, which can distinguish patients with PD at high versus low risk for developing addictions [80]. Other factors include a younger age, younger onset of PD, being single and experiencing more non-motor symptoms than patients with PD without addictions (see Table 5) [38,68,72,73]. Furthermore, higher anxiety scores as well as autonomic and cognitive dysfunction also seem to be risk factors [81]. Sex differences also play a role but are not specific for PD. Compulsive sexual behaviour has been more frequently reported in male patients with PD (n = 3090, 5.2% prevalence in men vs 0.5% prevalence in women [9]), while compulsive shopping and binge eating disorders (same study cohort, respectively, 4.5% in men vs 7.8% in women and 3.4% in men vs 5.8% in women) seem to occur more often in female patients with PD [9,36].
Other risk factors include a higher novelty-seeking personality trait, a history of alcohol use or smoking, depression, anxiety, insomnia, higher caffeine consumption, and a personal or family history of addictive behaviour [23,82-84]. Depression and anxiety seem to play an important role, as both of these symptoms occur significantly more often in the off-state and on-state in patients with ICDs compared with those without (depression, 23% vs 13%; anxiety, 9% vs 4%). Moreover, larger changes in depressive symptoms from the off to the on state (identified as the change in the Hamilton Depression Rating Scale) were also observed in the ICD group compared with the PD control group; this was assessed in a cross-sectional study including 159 patients without ICDs and 41 patients with ICDs [85]. Alexithymia, the difficulty to express, define or identify emotions, has also been linked with increased impulsivity in drug-naïve patients with PD [86] and has been proposed as a risk factor for ICDs [86,87]. In line with this, apathy, a reduction in emotions, interests and motivation that is common in PD, frequently also co-occurs in patients with ICDs [88]. It has therefore been speculated that the hypodopaminergic behaviours, such as depression, anxiety and apathy, which lie at the opposite end of the spectrum from the hyperdopaminergic behaviours (ICDs) [89], may share a common behavioural continuum with them [90,91].

Pharmacological Risk Factors for ICDs and Related Disorders in PD

It is currently accepted that dopaminergic medication can trigger addiction in PD, as PD itself is not associated with an increased prevalence of ICDs and related disorders. In fact, a case-control study in drug-naïve patients with PD (n = 168) showed a similar frequency of ICDs and related disorders (18.5% PD vs 20.3% controls) compared to healthy controls (n = 143) [92]. By far the biggest risk factor for developing compulsive sexual behaviour, compulsive shopping and gambling disorder in PD is the use of dopamine agonist therapy [23]. Gambling disorder in patients with PD has almost always been triggered by dopamine agonists and has only rarely been associated with levodopa monotherapy [93]. Although craving for sweets is common in PD, particularly in those who have ICDs [94], the association of binge eating with dopamine agonist therapy remains unclear. Counterintuitively, dopamine agonist use seems not to be associated with binge eating and food addiction in PD [95]. In fact, a small cross-sectional study (n = 96 patients with PD) identified eight patients with binge eating, with DBS being the only predictor of overeating [96]. There have been conflicting reports on whether ICDs correlate with the dopamine agonist dose, but dopamine agonist plasma concentrations were similar between those with and those without ICDs [97]. However, the lifetime average dose as well as the duration of dopamine agonist therapy do seem to be associated with ICDs [34]. Moreover, the combination of a dopamine agonist with levodopa seems to increase the risk of ICDs and related disorders even further, possibly owing to an increase of mesolimbic dopamine levels and a synergistic effect on dopamine receptors [9,25,98-100]. In line with this, dyskinesias (resulting from higher dopaminergic therapy) are seen significantly more often in patients with PD with ICDs and related disorders than in those without [101].
Although addictive behaviours can be triggered by all available dopamine agonists, they are less often seen in patients with PD treated with the transdermal dopamine agonist rotigotine compared with pramipexole or ropinirole [102]. These results have recently been confirmed in a meta-analysis including more than 650 patients with PD: the rotigotine patch was three times less likely to induce addictive behaviours than pramipexole and ropinirole [103]. While rotigotine has high affinities for the dopamine D1, D2, D3, D4 and D5 receptors, ropinirole and pramipexole only have high affinities for the D2, D3 and D4 receptors. While these pharmacodynamics may play a role in triggering addictions, it is more likely that the mode of drug delivery (oral vs transdermal) is the more relevant factor. Transdermal drug application may lead to a more continuous drug delivery, avoiding peaks and troughs. While oral dopamine agonist plasma concentrations eventually drop after 6-12 h, plasma concentrations during rotigotine therapy remain stable for up to 24 h. Moreover, transdermal application of rotigotine provides direct access to the bloodstream, avoiding the hepatic first-pass effect seen with oral dopamine agonists [103] (see Table 6). The role of D3 agonism in inducing impulsivity has been further confirmed in a recent pharmacovigilance-pharmacodynamic study, presenting around 3000 reports of impulsivity under dopamine agonists (pramipexole and pergolide), with receptor-occupancy data supporting the role of D3-induced ICDs [104]. Interestingly, however, there seems to be no difference between the extended-release and standard oral dopamine agonist formulations (pramipexole and ropinirole) [34]. Although much rarer, ICDs and related disorders have also been described with the use of monoamine oxidase B inhibitors [102] and amantadine [9]. Impulse control disorders under therapy with catechol-O-methyltransferase inhibitors are rare. Their exact frequency is unknown, mainly because most studies report levodopa equivalent daily doses. A post-hoc analysis of the pooled data from two large randomised, double-blind, placebo-controlled trials of opicapone (n = 517) shows a low incidence of addictive behaviours (between 0.2% and 0.5%) [105]; importantly, the risk of ICDs does not appear to increase with long-term use of opicapone [106]. More recently, ICDs have also been observed with aripiprazole (n = 97), which acts as a partial D3 agonist; bupropion (n = 56), a dopaminergic antidepressant; and the psychostimulant methylphenidate (n = 40) [107-109].
Experimental Drugs Currently Under Investigation

Naltrexone, an opioid receptor antagonist that is effective in alcohol addiction, failed to improve ICDs in PD [133]. However, it has been argued that some ICDs, such as hobbyism, may be more responsive to naltrexone than others, but further studies are warranted [134]. Clonidine, an α2-adrenergic agonist, has been shown to significantly reduce impulsivity in a gambling task in abstinent heroin addicts (n = 53) [135]. A recent randomised, controlled, double-blind, phase IIb trial in patients with PD with ICDs (n = 39) showed, however, that administration of clonidine for 8 weeks resulted only in a non-significant reduction of impulsivity compared with placebo [136]. Although clonidine (75 µg twice daily) was well tolerated in this study, common side effects include low blood pressure as well as dizziness and depression, which may further reduce quality of life in PD. Nevertheless, the results of this study warrant a longer treatment duration and a larger sample size in a further phase III trial. A crossover, double-blind, placebo-controlled study using atomoxetine (40 mg orally), a noradrenaline reuptake inhibitor, showed reduced motor and reflection impulsivity as well as reduced risk taking. Although this study is promising, the sample size was rather small and none of the patients with PD had ICDs (n = 33) [137]. However, evidence from functional MRI shows that atomoxetine may enhance prefrontal cortex connectivity and possibly have a restoring effect on executive functions; this may be of interest for future trials [138]. Currently, a randomised, placebo-controlled, phase II trial (NCT03947216) assessing the effect of pimavanserin, a selective serotonin 5-HT2A inverse agonist, on ICDs is underway, with results expected in 2025. In this trial, patients with PD will be treated with pimavanserin 17 mg or placebo daily for 8 weeks, with the primary outcome measure being the change in ICDs (measured with the Questionnaire for Impulsive-Compulsive Disorders in Parkinson's disease [QUIP]) after treatment.
Management of ICDs and related disorders in PD is challenging, and there are no consensus guidelines available because of the paucity of randomised controlled trials; the phrase "prevention is better than cure" is therefore particularly important. All patients with PD should be advised about the potential risk of developing behavioural addictions, especially following dopamine agonist therapy. This consultation should ideally take place together with family members, carers or close friends who are in regular contact with the patient. Long-term vigilance is required, especially in younger patients, those who have a personal or family history of addictive behaviours, those who are single, and those who experience more motor symptoms, such as dyskinesias, as well as non-motor symptoms [23]. It is also important to highlight that ICDs and related behaviours in PD almost always build up gradually, and any change in behaviour, particularly increased irritability, disturbed night-time sleep or increased spending, may be a harbinger. In line with this, it has been reported that 24% of patients with subsyndromal ICDs (defined as subthreshold behaviours not reaching the formal diagnostic criteria) developed clinically significant ICDs after 1 year [88]. The severity of the addiction must be taken into consideration, and sometimes an immediate hospital admission may be required. The QUIP [139] and the QUIP rating scale, which includes the severity of the addiction [24], can be useful to detect an ICD early on.

In contrast, there are rare circumstances where no change of treatment is required in patients with PD with addictive behaviours, depending on the patient's disability and financial and social circumstances. Usually, however, if an ICD or a related disorder is left untreated or ignored, it may have devastating financial and psychological consequences for the lives of patients and their families (see illustrative case).

Non-pharmacological approaches such as physical exercise, cognitive behavioural therapy or limiting access to credit cards, the Internet or gambling venues should be implemented but are usually not enough on their own [16,23,140]. Dopamine agonists should be reduced in patients with gambling disorder, compulsive sexual disorder or compulsive shopping and, if possible, completely weaned off. Patients are sometimes reluctant to reduce the dopamine agonist because of low insight, but switching from a dopamine agonist to levodopa can improve impulsive behaviour within a few months [141]. However, patients must be informed that anxiety, panic attacks, depression, dysphoria, fatigue, pain and the feeling of being undertreated may occur. These symptoms are known as dopamine agonist withdrawal syndrome and may cause significant psychological distress that can be refractory to levodopa or any other PD medication [142]. Hospital admission may be necessary in these patients to alleviate dopamine agonist withdrawal syndrome.

In patients with DDS, a reduction in levodopa or in fast-acting apomorphine pen injections is necessary, but these patients often do not tolerate the reduction because of worsening of motor fluctuations, 'off' dystonia or withdrawal symptoms. These heterogeneous non-motor as well as motor symptoms usually subside within a few days or weeks but can also last several months [72]. Again, in these patients, hospital admission and a multidisciplinary approach including a psychiatrist and a psychologist may be necessary.
Treatment of neuropsychiatric comorbidities such as depression, anxiety and panic attacks, as well as improvement of potential sleep disturbances, may frequently be required regardless of the underlying addictive behaviour [23,67,143]. Trazodone and the alpha-2 adrenoreceptor antagonist mirtazapine may help to improve some neuropsychiatric symptoms as well as nocturnal sleep [144]. Additionally, considering that the pathophysiology of depression in PD likely involves several neurotransmitter systems (dopaminergic, serotoninergic, noradrenergic), depression should be treated with selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, or a tricyclic antidepressant. Although there are no official guidelines to direct the therapeutic choice, there is some evidence in favour of the aforementioned drugs, as well as for cognitive-behavioural therapy [145,146]. If patients have additional psychosis, quetiapine or clozapine may be administered; however, the need for regular blood counts because of the potential risk of agranulocytosis is a limitation in those treated with clozapine [146].

The role of DBS of the subthalamic nucleus in patients with PD with ICDs and related disorders is controversial. However, in selected patients with PD who do not have cognitive impairment or any other contraindication for functional surgery, DBS of the subthalamic nucleus can result in improvement of ICDs and related symptoms because of the ensuing reduction in dopaminergic therapy [147]. In some patients, however, de novo ICDs can occur, possibly owing to misplacement of the electrode or failure to reduce dopaminergic drugs [16]. Both pre-operative and postoperative psychiatric monitoring is mandatory in patients with PD who undergo DBS, given reports of an increased risk of post-surgical suicide attempts [143].

Dyskinesias have been linked with ICDs in PD [72], and thus a reduction in dyskinesias achieved by decreasing the overall dopaminergic therapy will often also lead to an improvement in addictive behaviours. In line with this, there is preliminary evidence that continuous delivery of levodopa/carbidopa or of the D1 receptor agonist apomorphine can improve ICDs [148,149].

Overall, remission of ICDs and related disorders can be achieved in about 40-80% of patients. Not surprisingly, several studies have shown that a reduction in the dopamine agonist dose, or ideally complete discontinuation, is linked with better outcomes [34, 150–152].

Potential Underlying Mechanisms

In PD, the dorsal striatum is primarily affected, and neurodegeneration there is more severe than in mesolimbic neurons, which are relatively unaffected [111]. Therefore, one hypothesis is that in patients with PD with ICDs and related behaviours, the nucleus accumbens may still be relatively intact and the extra dopaminergic medication leads to a local dopamine overdose in the ventral striatum [112]. Importantly, the nucleus accumbens shell has strong connections to limbic structures and is therefore believed to have an important role in motivation and addiction. Stimulation of the nucleus accumbens is believed to play a pivotal role in drug addiction, as the iatrogenic dopamine release in this nucleus shares similarities with natural rewards (such as food) but lacks the physiological adaptation (habituation and inhibition by predictive stimuli) [113,114].
This "overdose hypothesis" has been recently confirmed in a post-mortem immunohistochemistry study in patients with PD with various addictive behaviours (n = 31) who were matched to patients with PD without addictions (n = 29).Patients with PD with ICDs and related disorders had significantly less alpha-synuclein pathology in the ventral striatum than patients without addictions.This further strengthens the hypothesis that the ventral striatum is indeed better preserved in these patients.Furthermore, and on the surface counterintuitively, patients with ICDs had also lower D 3 receptors [115].This may be due to downregulation of the receptors leading to a supersensitivity of the remaining D 3 receptors or a premorbid personality trait making these patients more vulnerable for addictive behaviours [115,116].Alternatively, the lower D 3 receptors could also reflect a smaller motor response to dopaminergic medication in patients, which would then lead to higher doses to achieve symptomatic control, causing a dopamine overdose of the ventral striatum [115].However, as D 1 and D 2 but not D 3 receptors are responsible for the overall best motor response [114], this hypothesis remains speculative. Dopamine agonists may directly affect the corticostriatal network.A study with 16 healthy male volunteers shows that pramipexole increases mesolimbic dopamine levels during anticipation of monetary rewards, but at the same time reduces the striatal interaction to the prefrontal cortex [55].This dopamine agonist induced reduction in "top down control" in addition to the mesolimbic dopamine "overdose" is currently thought to play a key role for developing ICDs and related disorders in susceptible patients [110]. Structural MRI The role of structural imaging in patients with PD with ICDs is inconclusive with some studies showing cortical thinning of the orbitofrontal cortex [117], while others reported an increased cortical thickness of the orbitofrontal cortex [118,119], and others did not find structural differences compared to PD controls [81,120].There are only a few of these studies and they vary in the number of participants observed and their demographics; orbitofrontal cortex thinning has also been associated with other conditions, which may work as confounders when interpreting these results (depression, alcohol dependence).Thus, there is no clear evidence on whether cortical thickness does play a major role in patients with PD with ICDs and related behaviours. Functional MRI Resting-state MRI revealed that patients with PD with ICDs have an increased connectivity within the salience network (anterior insula and dorsal anterior cingulate cortex) and a decreased connectivity within the central executive network (dorsolateral prefrontal and lateral posterior parietal cortex).This altered connectivity of the neurocognitive networks, which is also found in patients with other addiction disorders, may be one neural correlate of ICDs in PD [121]. 
Positron Emission Tomography

One of the first positron emission tomography (PET) studies, using [11C]raclopride, assessed patients with PD with DDS (n = 8) and PD controls (n = 8) prior to and following the first levodopa dose. Patients with PD with DDS, but not PD controls, had elevated levodopa-induced ventral striatal dopamine release. This sensitised ventral striatal dopamine release was associated with self-reported compulsive drug "wanting" but not "liking" [122]. Sensitisation (an enhanced response to a stimulus) is, like tolerance, withdrawal and dependence, a hallmark of addiction [123]. In line with this, another PET study using [11C]raclopride showed a higher ventral striatal dopamine release during gambling, following dopamine agonist therapy (pramipexole n = 5, ropinirole n = 2), in patients with PD with a gambling disorder but not in PD controls [124]. Moreover, patients with PD with a variety of different ICDs, but not PD controls, also exhibited an increased ventral striatal dopamine release following reward-related visual cues after levodopa intake (200/50 mg, with scanning acquired 45 minutes after intake) [125]. Another [15O]H2O PET study revealed reduced activity in the lateral orbitofrontal cortex, as well as in the amygdala and the rostral cingulum, during a card selection game following apomorphine administration, only in patients with PD with a gambling disorder (n = 7) [126].

A study using the PET radiotracer [11C]FLB-457, which has high affinity for extra-striatal D2/D3 receptors, found decreased binding in the midbrain during a gambling task in patients with PD with ICDs (n = 7) compared with PD controls (n = 7). These results hint towards a wider dopaminergic dysfunction with altered striatal and cortical dopamine homeostasis in patients with PD with ICDs [127]. In line with this, a study using cerebral 18F-fluorodeoxyglucose PET showed that patients with PD with ICDs (n = 18) had dysfunction of a large network including the mesocorticolimbic system, the caudate, the parahippocampus and the orbitofrontal cortex, as well as increased metabolism of the right middle and inferior temporal gyri [128]. It is therefore possible that these temporal regions are involved in establishing the mnemonic component of addiction [128].

Several studies have used the [123I]FP-CIT radioligand, which showed a reduction in dopamine transporter (DAT) levels in the ventral striatum of patients with PD with a gambling disorder (n = 8) [129] and of patients with PD with a variety of different ICDs (n = 282) [130]. It is possible that the lower DAT binding reflects lower membrane DAT expression on presynaptic terminals, resulting in a functional reduction of presynaptic reuptake and thus increased dopamine levels within the ventral striatum [129]. In line with these results, a small preliminary study (n = 31) [131] and a large longitudinal study using data acquired in the Parkinson's Progression Marker Initiative (n = 320 at baseline; n = 284 at year 1, n = 217 at year 2, n = 96 at year 3) found an association between lower striatal DAT binding and an increased risk of developing ICDs [132]. Thus, these PET studies, combined with the neuropathological results, imply that increased and abnormal mesolimbic dopamine release, due to a relatively intact ventral striatum, in combination with prefrontal cortex dysfunction, may trigger behavioural addictions [115, 124–126].
Conclusions

Impulse control disorders are relatively common non-motor symptoms that arise in patients with PD treated with dopaminergic drugs, most commonly with dopamine agonist therapy. The proportion of patients who develop ICDs, and the type of ICD that arises, depend on several risk factors, including younger age, higher anxiety traits and a history of addictive behaviours. As ICDs may have devastating social and financial consequences for patients' lives, patients starting dopaminergic drugs should be properly informed of the possibility of ICDs arising, ideally in the presence of a family member or close friend. If an ICD is reported, early treatment is of paramount importance, as the patient's cognition may already be impaired. Management of ICDs requires a reduction and, if possible, a complete discontinuation of dopamine agonist therapy. In patients with DDS, a reduction in fast-acting dopaminergic drugs is necessary. Patients with PD often have to be admitted to hospital to alleviate dopamine agonist withdrawal syndrome. New trials exploring additional therapeutic strategies need to take into account the diverse nature of the disorders falling under the term ICDs and, if necessary, tailor a therapy to each disorder.

Declarations

Funding Open access funding provided by University of Innsbruck and Medical University of Innsbruck.

(A) Parkinson's disease with documented levodopa responsiveness. (B) Need for increasing doses of DRT in excess of those normally required to relieve parkinsonian symptoms and signs. (C) Pattern of pathological use: expressed need for increased DRT in the presence of excessive and significant dyskinesias despite being "on", drug hoarding, drug-seeking behaviour, unwillingness to reduce DRT, absence of painful dystonias. (D) Impairment in social or occupational functioning: fights, violent behaviour, loss of friends, absence from work, loss of job, legal difficulties, arguments or difficulties with family. (E) Development of hypomanic, manic or cyclothymic affective syndrome in relation to DRT. (F) Development of a withdrawal state characterised by dysphoria, depression, irritability and anxiety on reducing the level of DRT. (G) Duration of disturbance for at least 6 months.

Table 1 Proposed criteria for compulsive sexual behaviours in Parkinson's disease (adapted from Voon et al.) [36]. (A) The sexual thoughts or behaviours are excessive or an atypical change from baseline, marked by ≥1 of the following: If all criteria except C are fulfilled, the disorder is subsyndromal.

Table 2 Diagnostic and Statistical Manual of Mental Disorders, fifth edition, diagnostic criteria for gambling disorder [2]

Table 5 Main risk factors for impulse control disorders and related disorders in Parkinson's disease [9,68,69,80]. For each risk factor, the OR for developing impulse control disorders and related disorders has been reported. OR odds ratio, REM rapid eye movement

Table 6 Synopsis for the management of ICDs and DDS in PD. CBT cognitive behavioural therapy, DBS deep brain stimulation, DDS dopamine dysregulation syndrome, ICDs impulse control disorders, PD Parkinson's disease
Prevention: Inform patients and relatives of potential ICDs prior to the start of dopamine replacement therapy; continue screening for ICDs by asking patients and family members during each follow-up visit
Management of ICDs and DDS in PD
2024-04-15T06:17:10.929Z
2024-04-13T00:00:00.000
{ "year": 2024, "sha1": "e317cd3f3012165205b04ae7e5891db031d4093c", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40263-024-01087-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "307426a7dfe0150fa4aa269547226211fa762d68", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55322094
pes2o/s2orc
v3-fos-license
The Effect of Public Participation in Elections on Improving the Living Conditions of People in the Islamic Republic of Iran

The purpose of this research is to investigate the impact of public participation in elections on the improvement of people's lives in the Islamic Republic of Iran. This is a descriptive-analytical study performed using field methods. The authors elaborate on the theoretical foundations of the subject and the mechanisms of elections in Iran; in later sections, the results of the study are examined. On the basis of these analyses, it can be stated that in every democratic system it is necessary to hold elections in order to determine the destiny of the people and the country. Indeed, the people's vote, as the source of the system's legitimacy, can result in the development of society and the improvement of people's lives. With respect to the results, it can be said that holding elections is crucially necessary in every country; in fact, a prerequisite for a prosperous and democratic society is the holding of elections.

Introduction

One of the easiest and least expensive ways for people to take part in systems of political governance is political participation, whose foremost symbol is the election. Elections are among the highest manifestations of the people's presence in the political arena and the main realization of the sovereignty of the people. Democracy is a system of popular sovereignty over the people's own destiny, i.e., a system of government of the people that is accountable to its people. Competitive elections and peaceful participation increase rulers' sense of responsibility towards the demands of society. The public nature of this participation, together with its free and selective character, is the main element giving people influence over decisions made within the framework of the legislature. A popular system, in other words, is direct rule based on the principles of democracy. Today, thanks to the popularity of an Islam-based system that aims to meet the demands of the people, and given its value-based foundation, people hold a special position in political participation in all its aspects. Adopting the guidelines of the election law provides an opportunity for people to participate in this great task with discretion, on the basis of a duty within the system, and to choose the right representative. This means participation of the people based on Islamic principles.

Statement of the Problem

Elections are considered one of the manifestations of religious democracy in the Islamic Republic of Iran. Public participation, one of the main characteristics of a developed democratic system, is the most common, easiest and lowest-cost type of political participation. In the Islamic Republic of Iran, voluntary participation is considered the norm, in the sense that people decide either to participate in elections or to influence policy through non-participation. Participation in elections is just one of the mechanisms of political participation, through which people later see their demands reflected in public policy. Evaluating the effectiveness and role of the people is therefore very important.
Historically, before the popular revolution in Iran, people were unaware of government performance, whereas today, at the beginning of the twenty-first century and in the shadow of the communications revolution, people are more or less aware of their role vis-à-vis the government and able to affect its behavior. Today, people are encouraged to intervene and participate in the political process. One reason for the increase in people's participation in politics is the principle of public governance as the basis for legitimate political power. According to this principle, the people have the right to rule, and those who govern in fact derive their authority from the people. General acceptance of the principle of popular sovereignty supports political participation by the masses as a norm. So if we accept the principle that political power comes from the people, then in principle people should be involved in political affairs and exercise their sovereignty. What role can citizens have in politics? Are citizens able to influence the political elite and thereby influence policy? In other words, is there a link between turnout and government policy?

In recent years, in line with public policy, the government's top priority has been to solve the problems of the people. In this regard, people can exert influence through their participation in politics. When people participate in elections and vote for a candidate's program, they are in fact prioritizing among the various candidates' policies and programs. The successful candidate then pursues programs and policies in response to the people's demands when shaping public policy. This indicates that there is a fundamental link between people's participation, elections and public policy. Public participation has many effects at all levels of the political process. Now the question arises: if popular participation affects the political process in Iran, does it also affect the improvement of the living conditions of people in the Islamic Republic of Iran? In other words, what is the effect of public participation in the electoral arena on the improvement of the living conditions of people in the Islamic Republic of Iran?

Public Participation

In the Persian dictionary, the term participation means "to take part" and, further, "doing business with one another and sharing its benefits" (Pishgahifard, 2007). Participation, in general, is an agreement between two or more people in a business, in which each of the partners involved is personally responsible for any legal action or debt. Public participation usually arises as a result of solidarity and national unity and includes the informed and active involvement of the community in achieving a specific goal, marked by interaction, collaboration and cooperation, and drawing on the desire, willingness and enthusiasm of all actual and potential capacities. Accordingly, in the social system, public participation refers to mutual assistance between the people and the state in the implementation of political, economic and social development programs and projects (Shiri and Khodadadpour, 2014).

In addition, public participation, as a contributing factor, plays an important role in development by constantly expanding the fields of assistance and cooperation, supporting physical development, and meeting the needs of the public and the country's infrastructure (Gholami, 2012).
The Concept of Elections

The word "election", from the Arabic root "nokhab", means selection and choice. Its European equivalent, election, goes back to the Latin verb "eligere", meaning to pick out, single out and choose. In terms of semantics, this term has undergone wide changes in meaning. In common usage, however, it generally means the method for choosing a certain number of people, from among the many who have been nominated, to fill a position or office (Haghighi, 2001). In a more general definition, an election is an integrated and ongoing operation, within a certain area and time limit, that leads to the choice of a person, persons or purpose by the majority of the people (Ashuri, 2008).

From this perspective, the election is the means by which the will of the citizens can be brought to bear on the development of political institutions and on the political authority in charge (Lazar Esfeld, 2003). Voters thereby participate in the political affairs of their community, which is a political and legal action. Elections, on the one hand, reveal the social bases of political power and, on the other hand, provide a good criterion for evaluating the distribution of power in society (Haghighi, 2001).

The Nature of Popular Participation in the Electoral Arena

The election is the arena of public participation and represents the people's power and national support for the system. The greater the participation in an election, the stronger the system's backing, and the more strongly and powerfully it can move in its chosen direction. The nature of public participation in this arena can be interpreted in two ways (Razi, 2002):

A) Rights-based participation. Some people regard participation in elections as a right and, in order to have a sustainable society, they choose to exercise it and consider it a legal right of their own.

B) Participation based on a practical and Islamic duty. Others regard participation in elections not only as their own legal right but as a divine duty towards the system, grounded in Islamic and religious principles. According to this view, the role of such people in Islamic Iran is greater than that of the first group, and their widespread presence demonstrates their political vitality in all fields. In this type of participation, which amounts to an all-people political and religious referendum, the choice of representatives who have expertise and commitment, rooted in religious and national values, helps to secure the system and neutralize the enemies' conspiracies and threats (Mehregan and Ezzati, 2006).
Factors Affecting Participation in Iran's Presidential Elections

Iran's topography has shaped the pattern of its population structure and natural environment. This has affected the formation of the nation and its peoples, producing a hybrid pattern in which ethnic minorities are located on the margins. In this hybrid model, the great majority of the population share a three-layer combination of religion (95%) and ethnicity and language (75%), while heterogeneity in geographical space surrounds the Central Plateau of Iran; this has facilitated the establishment of centralized government in Iran. This pattern of rule, which has not distributed power spatially across the country, has limited people's political and administrative involvement in their own fate at the national and local levels. Against this background, the following are the most important factors affecting the Iranian people's participation and electoral behavior.

The Location and Spatial Differences

The geographical distribution of natural diversity in Iran means that people with different characteristics live in different regions of the country. This creates spatial diversity and specific social behaviors in each of the geographic regions of Iran. An electoral district is, in principle, a geographic area with known legal boundaries and a certain number of representatives; together such districts form the territory of the country (Haghighi, 1991). In the creation of electoral districts, in addition to geographical features and public acceptance, electoral laws, public laws and the constitution play a key role (Pishgahifard, 2007). The difference in turnout at polling stations between central and border parts of Iran across nine presidential elections shows geographical variation in Iranian people's participation from place to place. For example, statistical analysis of Isfahan and Yazd provinces, as central provinces of Iran, and Kurdistan and Sistan-Baluchestan provinces, as border provinces, across nine presidential elections shows that participation rates in the border provinces have always been lower than in the central provinces.

The Human Factor and Human Differences

The structure of society is also a major determinant of social behavior, including electoral behavior and participation in elections. Age, gender and religion are known to be the most important human factors in voting behavior. Political-sociological studies show that, owing to their discipline and conservatism, elderly people tend to be much more moderate and inclined towards conservative parties, in contrast to young people, who incline towards radical left or right parties (Dodarzheh, 1990). Gender is also an important and influential factor in elections. In Iran, owing to traditional values and beliefs regarding women, women's participation in the presidential elections was not significant until the mid-70s (KordKarimi, 2002). In terms of religion, the majority of Iranians are Shiites and live in the central regions of the country, while Sunni minorities live in the border areas and marginal provinces.

Socio-Economic Factors

Iran's political structure and economic functions operate independently of the country's geographical regions, and the government relies financially on the sale of natural resources. The resulting disproportionate infusion of wealth into the country has led to the formation of distinct social classes in different geographical areas of Iran in terms of economic factors (Pishgahifard, 2007).
The Advertising and Political Decision Factor

Government and election officials prepare the ground for the extensive presence of people at the polls before each election. Much depends on political decisions and on the amount and type of electoral campaigning by the government and political parties. In addition to political decisions, advertising is an important factor affecting public participation in elections. One of the most important tasks of political geographers in the electoral process is to determine which advertising model is most applicable in which geographic regions. In addition to substantially reducing the cost of advertising, this would effectively help to increase women's participation in the presidential elections (ibid: 98).

The Effects of Public Participation in Elections on Improving the Living Conditions of People in the Islamic Republic of Iran

Increasing participation in elections can be regarded as one of the most important features of elections in any country. The very foundations of Islam-based democracy are institutionalized in the Islamic Republic of Iran. Turnout in elections immunizes the system and provides protection for the country. In recent years, one goal of colonizers in seeking the secularization of Muslim societies has been to make them indifferent to their own social fate and to prepare for easy control of these communities, since Islam is the union of religious, political and spiritual affairs. Secularism, by contrast, does not join religion with politics; following this idea, Muslims would only pray and remain unaware of social and political affairs. In the Shiite school, therefore, religious leadership is inseparable from political leadership, and the two have been combined (Mehregan and Ezzati, 2006). An epic arises from the passion, determination and unity of a nation, all of which are the characteristics that create one. By participating with a maximum presence, people secure the system against arrogant enemies; they then open the way to progress and move in a direction in line with the horizon and the prospects of the 2025 vision. Second, with the participation of the people, the people of the world, whether friend or enemy, realize that the people support the revolution under any circumstances. Enemies would have to conclude that this revolution has popular support and can implement its plans and programs (Razi, 2002). By preventing the domination of monopolistic groups over the political and economic foundations of society and by maintaining social justice, the motivation and desire of the public to participate in political, economic and social life can be strengthened. For if all things are in the hands of a particular party, and government units and institutions move according to its demands, there will be no room for the expression of popular participation. Culture is among the most important variables of political and social participation: participation is extensive in a community that has a participatory political culture (Sayyed Imami and Abdulmutallab, 2009).
Arousing public trust, institutionalizing political behavior, changing attitudes, and transforming people's mentality and political sensitivity are among the desired results of people's participation. Accordingly, arousing public participation in political formations brings with it the dynamics of social order and the stability of sovereignty. Social and economic justice and poverty are major issues that play a decisive role in the development of political and social participation. The Quran regards social and economic justice as the social purpose behind the sending of God's prophets (Hadid, 25). Imam Khomeini (RA) described religious ordinances as the meeting point of justice and religious government (Imam Khomeini, 1998).

One of the functions of the media is to create social cohesion and to affect public participation. The media in a society can, despite its differences and contradictions, either help to destroy or help to unify that society. The media invite public participation in different ways, including advertising, encouragement and motivation (Koel, 2006).

In addition, the extensive presence of people in the presidential election affects the policies of the international community. Were it not for its geopolitical position, Iran's presidential election might be like the election campaigns held in dozens of other countries and hold little importance for the international community; but Iran's position and status make the matter important. Iran can play a role in various regional issues, and this focuses the world's attention on the presidential election.

Regional and trans-regional countries often await the results of the Iranian presidential election; if the people move towards cooperation while maintaining their principles, other countries will treat Iran differently. Of course, the more people participate in the election, the larger the influence on politics and international calculations. The greater the presence of people in the presidential election, the more the international community factors Iran's role into its calculations in the global arena. But if the countries of the region and beyond see people engaging reluctantly with the election, the conduct of other countries towards Iran will be destructive. Therefore, the presence of people in the presidential election has a considerable effect on other countries' policies towards Iran (Mehregan and Ezzati, 2006).

In addition, the maximum presence of people in the election is the most important factor in increasing Iran's bargaining power. Therefore, efforts to improve the people's internal affairs and a strong presence on the ground are essential. One of the manifestations of popular support, beyond support itself, is the presence of people on special days and at events of the campaign for self-determination. Maximum turnout in elections, which can act as a symbol abroad, can be effective in solving international problems in which Iran is involved (Razi, 2002).

Voting is also a way for people to defend and protect themselves against the enemy. In every country, the most important issue is national security. National security is a shield within which you live and have endless freedom and power. Each vote is like a brick in the building of national security. If a brick is not in its place, the building will undoubtedly be permeable and will kindle the greed of the enemy. In fact, voting is a defence of oneself and one's own interests (Seyed-Emami and Abdulmutallab, 2009).
Conclusion

The importance of this phenomenon lies in the creation and representation of political legitimacy: the breadth and quality of participation and the responsiveness of the political system extend open spaces, expand the scope of social power and the power of intermediary institutions and, ultimately, safeguard the political system. Adoption of the electoral model enshrined in the constitution, as a way of managing the country, requires, like any other pattern, certain preconditions. For this reason, grounding and explaining the electoral law, in which the electoral system and public participation are defined, is essential. It is clear that the practical impact of public participation must be considered when reviewing the results of elections for their effect on improving the living conditions of people in any democratic country, because such feedback is required for every law in the process of completing legislation and, thus, for the development of a country on the path to happiness.
2018-12-05T19:23:40.588Z
2016-11-30T00:00:00.000
{ "year": 2016, "sha1": "c18a0da89b87597863242c91b81b51881e650885", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/jpl/article/download/62589/34932", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c18a0da89b87597863242c91b81b51881e650885", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Sociology" ] }
236209688
pes2o/s2orc
v3-fos-license
Real-World Evidence on Palliative Gemcitabine and Oxaliplatin (GemOx) Combination Chemotherapy in Advanced Biliary Tract Cancer

Simple Summary

Cancers of the biliary tract are rare but severe, with high mortality rates. Randomised controlled trials suggest that chemotherapy regimens such as gemcitabine and oxaliplatin (GemOx) may relieve symptoms and prolong life, but less is known about the efficacy and safety of such regimens in real life. The current paper assessed the real-world outcome of GemOx in all patients with advanced biliary tract cancer treated at any cancer centre in the South East Region of Sweden over a period of nine years. The median overall survival was nine months and the time to disease progression five months. Prognostic factors such as performance status and gallbladder (rather than bile duct) localisation of the primary tumour were identified. Most patients received a lower dose of oxaliplatin than proposed by previous studies, which seemed feasible, as few patients had severe adverse events. This study supports further use of GemOx as standard of care.

Abstract

Background: Gemcitabine and oxaliplatin (GemOx) is a standard combination regimen in advanced biliary tract cancer (BTC). There is limited evidence on its efficacy and safety in real life. Methods: A retrospective multicentre cohort study in the South East Region of Sweden, covering nine years (2011–2020) and three hospitals where GemOx was the treatment of choice, was designed. Clinicopathological prognostic parameters were explored. Results: One hundred and twenty-one patients with advanced BTC were identified. Median overall and progression-free survival (OS and PFS) were 8.9 (95% CI = 7.2–10.6) and 5.3 (95% CI = 3.8–6.7) months. Performance status (PS) according to the Eastern Cooperative Oncology Group (ECOG) of 1–2 and primary gallbladder carcinoma were independent predictors of poor OS. PS and derived neutrophil/lymphocyte ratio were predictive for PFS. The most common severe type of myelosuppression was grade 3 neutropenia, which was recorded in 8%. Fifty-three (43.8%) patients experienced at least one episode of unplanned hospitalisation. One hundred and seventeen (97%) received oxaliplatin at a lower dosage than was utilized in previous phase III trials (80–85 vs. 100 mg/m2), and a majority received further dose reductions of oxaliplatin and/or gemcitabine. Conclusion: The outcome of GemOx in advanced BTC appears comparable in controlled trials and real-world contexts. A lower dose of oxaliplatin seems more tolerable without compromising the outcome.

Introduction

Biliary tract cancer (BTC) is an entity comprising a group of rare cancers with high mortality rates, including intrahepatic cholangiocarcinoma, perihilar cholangiocarcinoma (Klatskin tumour), extrahepatic cholangiocarcinoma (distal biliary tract carcinoma) and gallbladder carcinoma [1]. In general, the incidence of cholangiocarcinoma varies between 0.3 and 6 cases per 100,000, but in China, South Korea and Thailand, incidence rates of >7 are seen [2]. For gallbladder carcinoma, the worldwide age-standardized incidence rate (per 100,000) is 0.9 for men and 1.4 for women [3]. Five-year survival varies between 10 and 40% and is highly dependent on the location of the primary tumour, the disease stage at diagnosis and access/eligibility for multimodal treatment in terms of surgery and/or chemotherapy [4,5].
The majority of patients with BTC have locally advanced and/or metastatic disease at the time of diagnosis, both of which preclude curative intent surgery. Over the last twenty years, gemcitabine (Gem) has been a cornerstone of palliative systemic treatment of advanced BTC, either alone or in combination with platinum compounds [6,7]. The additional value of the gemcitabine/cisplatin (GemCis) combination compared with Gem monotherapy was confirmed in the milestone phase III trial ABC-02 by Valle et al. from 2010 [8], which displayed an improvement of almost four months in median overall survival (OS) in the combination arm (11.7 vs. 8.1 months for GemCis and Gem, respectively). Not surprisingly, the improved survival with GemCis came at the expense of increased toxicity. To avoid adverse events and complications specifically associated with cisplatin (e.g., ototoxicity, acute kidney failure and nausea/vomiting), alternative combinations of Gem with other platinum compounds such as oxaliplatin (GemOx) have been introduced in parallel. The efficacies of the two separate regimens GemCis and GemOx were described in a systematic review [9]. While OS appeared to be marginally better with the GemCis combination (11.7 vs. 9.7 months for GemCis and GemOx, respectively), the toxicity profile and tolerability rather favoured the GemOx regimen. On the other hand, a recent phase III trial in advanced gallbladder cancer failed to show superiority for either of the two regimens, although OS was numerically slightly better in the GemOx arm [10]. A Cochrane review on standard BTC treatments such as GemCis and GemOx concluded that there was little evidence for superiority of either regimen [11].

Due to the favourable toxicity profile, many centres, including all oncology departments in the South East Region of Sweden, have advocated GemOx (rather than GemCis) as standard first-line treatment in advanced BTC. So far, there is limited published evidence on the feasibility, efficacy and safety of the GemOx regimen in the real-life setting. As health care in Sweden is publicly funded and available to all citizens regardless of socioeconomic status, the conditions for evaluating real-world outcomes are optimal. A population-based multicentre retrospective cohort study was therefore designed to assess the real-world outcome and safety profile of GemOx in advanced BTC, covering all eligible patients in the geographical area over a period of nine years. Besides providing real-world data on overall and progression-free survival (OS and PFS), potentially prognostic parameters in terms of clinical, pathological and biochemical characteristics were explored. In addition, key palliative parameters such as access to specialised palliative care and chemotherapy at the end of life were analysed.

Patients

A retrospective multicentre cohort study was conducted in the South East Region of Sweden, including the oncology departments of Linköping, Jönköping and Kalmar. The area has a population of approximately 1.1 million citizens. All included patients were identified using the digital software CSAM Cytodose (CSAM Health AS, Oslo, Norway), which was the software used for prescribing chemotherapy at all participating centres.
Inclusion criteria were as follows: administration of at least one dose of palliative first-line GemOx for biliary tract carcinoma (intrahepatic, perihilar and extrahepatic/distal cholangiocarcinoma) or gallbladder carcinoma (ICD codes C23.x, C24.x and C22.1) at any of the participating centres between November 2011 and September 2020. Patients who only received GemOx in the neoadjuvant or adjuvant setting were excluded. Otherwise, and to reflect the real-world situation, no exclusion criteria were applied.

Treatment

The GemOx chemotherapy regimens used differed slightly between the participating sites and over the study period. While gemcitabine was consistently prescribed at 1000 mg/m2 on day one, the oxaliplatin dose was 80, 85 or (in a very limited number of patients) 100 mg/m2 and was given either on day one or day two of every 14-day cycle. If oxaliplatin was omitted due to toxicity, subsequent gemcitabine monotherapy cycles were still considered part of the first-line GemOx regimen. To equalize the registration of the total number of cycles, any 28-day cycles in which gemcitabine was prescribed following the omission of oxaliplatin were registered as two 14-day cycles. Slight progression after a planned treatment break followed by restart of the same regimen did not count as true progression. If treatment was initiated with a dose reduction merely for 'tolerance testing' but then escalated to full dose within two cycles, and the patient received more than one cycle at full dose, the higher dose was considered the starting dose.

Patient Data

Clinical data were collected from medical records with a structured case report form and included baseline patient data, tumour characteristics, history of previous neoadjuvant/adjuvant chemotherapy and curative intent surgery, treatment intensity and duration, toxicity, blood samples at baseline, haematological toxicity (graded according to Common Terminology Criteria for Adverse Events (CTCAE) version 5.0), unplanned hospitalisations, admission to specialist palliative care and any further chemotherapy after GemOx. All patients were followed until death or until 31 December 2020, whichever came first. As lymphocyte counts were not routinely analysed, the derived neutrophil/lymphocyte ratio (dNLR) [12] was utilized as a surrogate for NLR and was calculated as dNLR = neutrophils / (white blood cell count − neutrophils). The cutoff value used was adopted from Grenader et al. and has been shown to predict OS and PFS in advanced BTC [13].

Statistics

Statistical analyses were performed using SPSS Statistics v25 (IBM, Armonk, NY, USA). The primary outcome was overall survival (OS), counted from start of treatment until death or last follow-up date. Secondary outcomes included progression-free survival (PFS), counted from start of treatment until progression. Progression was determined from radiology reports or as determined by the managing clinician. Other relevant outcomes were unplanned hospitalisations, infections requiring antibiotics and/or hospitalisation, haematological toxicity, peripheral neurotoxicity, number of treatment cycles, dose intensity, treatment within the last 30 days of life and inclusion into specialist palliative care. Median OS and PFS were estimated using Kaplan-Meier survival analysis, and the significance of differences between factors was calculated using the Mantel-Cox log-rank test. To evaluate hazard ratios for potential prognostic factors, Cox regression analysis was performed. p-values < 0.05 were considered significant. Statistically significant prognostic factors in the univariate analysis were further analysed in a multivariate Cox regression analysis.
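To make the calculations above concrete, the sketch below shows the dNLR formula and the kind of Kaplan-Meier/log-rank analysis described, in Python. The numbers are synthetic, the dNLR cutoff of 3.0 is an assumption for illustration (the paper adopts its cutoff from Grenader et al. without restating it here), and the survival part uses the third-party lifelines package rather than SPSS, which the study actually used.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def dnlr(neutrophils, wbc):
    """Derived neutrophil/lymphocyte ratio: neutrophils / (WBC - neutrophils)."""
    return neutrophils / (wbc - neutrophils)

ASSUMED_CUTOFF = 3.0  # placeholder; the study takes its cutoff from Grenader et al.

# Example: absolute neutrophil count 5.0 and WBC 7.0 (x10^9/L) -> dNLR = 2.5 ("low")
print(dnlr(5.0, 7.0), "high" if dnlr(5.0, 7.0) >= ASSUMED_CUTOFF else "low")

# Synthetic survival data (months from treatment start; 1 = death observed)
months_low, events_low = [12.1, 9.4, 15.0, 7.2, 22.8], [1, 1, 0, 1, 0]
months_high, events_high = [4.1, 6.5, 3.2, 8.8, 5.0], [1, 1, 1, 1, 1]

km = KaplanMeierFitter()
km.fit(months_low, event_observed=events_low, label="dNLR low")
print(km.median_survival_time_)  # Kaplan-Meier median survival estimate for one group

# Mantel-Cox log-rank comparison between the two dNLR groups
result = logrank_test(months_low, months_high,
                      event_observed_A=events_low, event_observed_B=events_high)
print(result.p_value)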
p-values < 0.05 were considered significant. Statistically significant prognostic factors in the univariate analysis were further analysed in a multivariate Cox regression analysis. Ethics Ethical approval for this study was granted by the Regional Ethics Review board in Linköping (diary number 2018/139-31). Due to the retrospective non-interventional design and the fact that the majority of patients were not expected to be alive at time of data collection, the Ethics board waived the requirement for informed consent. Patients Over the study period (2011-2020), a total of 171 patients treated with at least one cycle of GemOx, at any of the covered oncology centres in the South East Region of Sweden, were identified. Fifty of these were excluded due to exclusion criteria, leaving a total cohort of 121 patients receiving first-line palliative intent GemOx for advanced BTC (Figure 1). Patient and treatment characteristics are displayed in Table 1. A minority of patients (n = 35, 29%) had previously been treated with curative intent resection, and 17 (14%) had received previous adjuvant chemotherapy, corresponding to approximately half of the patients (n = 17, 49%) that underwent surgery. The most commonly used adjuvant chemotherapy was capecitabine (n = 11, 65%). Most patients had one organ with metastasis at date of incurable disease (n = 59, 49%). The most common site of metastasis was the liver (n = 36, 30%) followed by peritoneum (n = 27, 22%) and lymph nodes (n = 24, 20%). A majority of patients (n = 101, 84%) had the diagnosis verified with histology or cytology. Median number of treatment cycles was six if counting only combination therapy and seven if counting total number of treatment cycles, including subsequent gemcitabine monotherapy following discontinuation of oxaliplatin. The main reason for termination of treatment was progression (n = 72, 60%) and impaired performance status (n = 18, 15%). Sixty-two (53%) patients received additional line(s) of chemotherapy after GemOx (Table 1). At least one episode of infection requiring antibiotics and/or hospitalisation was recorded in 51 (42%) patients during the GemOx treatment period. Three cases (2.5%) of febrile neutropenia were observed. Peripheral neurotoxicity affecting activities of daily life (corresponding to chemotherapy-induced peripheral neurotoxicity grades 3-4 according to the CTCAE scale) was recorded in 33 (27%) of patients. Fifty-three (44%) patients had at least one episode of unplanned (inward) hospitalisation. Grade 3 haematotoxicity was evident in five patients (4.2%) regarding anaemia, four (3.4%) regarding thrombocytopenia and leukopenia and six (8.2%) regarding neutropenia ( Table 2). No cases of grade 4 myelosuppression were observed. Survival The median OS for the whole cohort was 8.9 months (95% CI 7.2-10.6) and median PFS was 5.3 months (95% CI 3.8-6.7). The median follow-up time was 8.7 months and at end of follow up, 116 patients had evidence of progressive disease (95%) and 108 patients had died (89%). Treatment in End of Life and Palliative Care Admissions Median number of days from last dose of gemcitabine until death was 123.5 days (range 2-1183) in the total cohort. When excluding patients who received other treatments beyond progression on GemOx, the median number of days was 61.5 (range 2-455). In this latter subgroup of patients, 10 (19%) received their last cycle of GemOx within the last 30 days of life. 
Seventy-four (61%) of the patients were admitted to a specialised hospital-based palliative care team. The median number of days from diagnosis of incurable disease until inclusion in palliative care was 260.5 (range 7-1491). The median number of days from inclusion in palliative care until death was 36 (range 0-1059, Table 5).

Discussion

To our knowledge, this is the first published multicentre real-world cohort study of first-line GemOx combination chemotherapy in advanced BTC. Median OS and PFS were 8.9 and 5.3 months, respectively. The outcomes closely mirror what was previously reported in the phase III trial by Sharma et al. [10], which compared GemOx with GemCis in patients with gallbladder cancer and reported a median OS of 9 and 8.3 months in the GemOx and GemCis arms, respectively. They appear slightly worse than the corresponding results of the 'Gemcitabine and oxaliplatin with or without cetuximab in advanced biliary-tract cancer' (BINGO) trial [14], where survival in the GemOx comparator arm was 12.4 months (vs. 11 months in the experimental GemOx/cetuximab arm). This minor difference is not surprising, as the BINGO trial only enrolled patients with PS 0-1, whereas the present cohort included PS 2 as well. Notably, a similarly conducted real-world study of Taiwanese patients [15], albeit treated with GemCis, reported almost identical data to ours.

In the present study, patients with gallbladder cancer had a significantly worse OS than patients with primary tumours in the intra- or extrahepatic bile ducts. Previous studies have shown diverse results regarding tumour site: some are in line with ours [16,17], while others rather suggest that intrahepatic tumours carry a particularly poor prognosis (e.g., Andre et al. [7,18]). The milestone ABC-02 trial, however, reported no prognostic impact of the primary tumour site [8].

Female gender was associated with poor OS in univariate analysis but did not reach significance in the multivariate analysis. Notably, there was a slight male predominance in all diagnoses but gallbladder cancer, which in contrast was characterised by a marked female overrepresentation. This is in line with previous data showing a female-to-male ratio of 3-6:1 [19,20]. The sex difference in OS could therefore likely be explained by the presence of gallbladder cancer and not by gender per se.

Not surprisingly, poor performance status was associated with both worse OS and shorter PFS. This has been shown in multiple previous studies [18, 21-23], which emphasise the poor prognosis in frail patients and cast doubt on the value of palliative chemotherapy in patients with impaired performance status.

The present study reports a low incidence of grade 3-4 bone marrow toxicity. While no patient experienced grade 4 toxicity, only 8% of patients had the most common type of grade 3 myelosuppression, neutropenia. The incidence of grade 3 anaemia and thrombocytopenia was even lower. These figures are generally lower than observed in other studies. In the study by Andre et al., 14% of the population experienced grade 3-4 neutropenia, whereas 9% had grade 3-4 thrombocytopenia and/or anaemia. The BINGO trial [14] reported grade 4 neutropenia in 3% and grade 3 in 13% of the patients in the GemOx comparator arm. Thrombocytopenia grade 3 was recorded in 19% in the same group, but with no grade 4 toxicity.
It is reasonable to believe that the less frequent and less severe bone marrow toxicity observed in the present cohort may depend on the more cautious dosing of oxaliplatin. In our study, only four patients received oxaliplatin 100 mg/m2 (the standard dosing in the Andre and BINGO trials), while the vast majority (117 of 121) received oxaliplatin at 80 or 85 mg/m2. In addition, about one third of patients received an upfront dose reduction of gemcitabine and/or oxaliplatin. Given the closely resembling survival estimates in the different studies, it therefore seems that a slightly lower starting dose of oxaliplatin is more feasible in terms of tolerance but similarly potent compared with the higher dosage utilised in earlier trials.

Apart from myelosuppression, chemotherapy-induced peripheral neuropathy (CIPN) is a common complication of platinum compounds. In the present cohort, 27% of patients experienced CIPN with a negative impact on activities of daily life and, after myelosuppression, it was the second most common reason for dose reduction during the treatment course. In 31% of patients, oxaliplatin was at some point discontinued completely and replaced with single-drug gemcitabine. However, while the main reason for ultimate termination of treatment was progression, followed by impaired performance status, toxicity was the reason for termination in just a few (6%) of the patients.

There was quite a high proportion (44%) of patients with at least one episode of unplanned inpatient hospitalisation in the present population. Forty-two percent of the patients had at least one infection that required antibiotics and/or hospitalisation. As febrile neutropenia was very rare, it appears that these events were related to the disease rather than to the chemotherapy per se, although the retrospective nature of the study and the lack of an untreated control group preclude firm conclusions on the matter. Nevertheless, this underlines the frailty of these patients and implies that the treating oncologist should be alert for signs of infection and/or general deterioration of performance status.

The medical centres covered by this study did not routinely analyse full differential white blood cell counts, making it impossible to perform calculations of NLR, which has previously been shown to be a prognostic marker for both OS and PFS [24-26]. Some studies have, however, investigated the use of dNLR and found it to be a reasonable surrogate for NLR [13]. As dNLR builds on the absolute neutrophil count and the total white blood cell count, it can be estimated even without lymphocyte counts available. Notably, dNLR was found to be a significant prognostic factor for both OS and PFS in univariate analyses of the present population. Following multivariate analysis, the statistical significance remained for PFS but not OS. This is still in line with a previous study by Grenader et al. [13], who reported dNLR to be a strong independent prognostic factor, in terms of both OS and PFS, for patients with advanced BTC in the compiled populations of the ABC-02 trial [8] and one additional study [27].

Of those patients who did not receive further lines of treatment after GemOx, more than 80% did not receive any chemotherapy in the last 30 days of life. Notably, this is a somewhat higher proportion than was reported in a study by Randén et al. [28], which focused on near end-of-life chemotherapy in patients with advanced cancer in Stockholm.
This indicates that few patients were 'over-treated' with GemOx in the end-of-life situation. While 63% of the patients received specialised hospital-based palliative care, the median number of days from inclusion into specialised palliative care to death was limited to 36 days. Although these data are difficult to interpret, as 'simpler' forms of palliative care might have been provided by other caregivers not covered by the present application, such as general practitioners, it still indicates that early admission to a palliative care provider is advisable, as the disease course may accelerate rapidly in these patients. Clinical experience as well as evidence-based guidelines [29,30] suggest that access to a skilled palliative care provider is essential for preserving quality of life in late-stage cancer.

The present study has some essential limitations. There was no standard regimen or overall guidance on starting doses or protocol-based dose reductions, although the vast majority received a lower dose of oxaliplatin than was initially stipulated in clinical trials. Still, we believe it is of considerable interest to evaluate the outcome and safety in an unselected patient cohort receiving treatment according to the oncologist's discretion rather than according to a strict protocol. To compensate for the low incidence of this disease, the inclusion period spanned nine years in order to achieve a reasonably large cohort. The most obvious strength is the true real-world approach, with a population-based inclusion covering all patients, regardless of socioeconomic status, who received this treatment in a large geographical area. While formal assessment of quality of life was not possible given the retrospective design, significant efforts were undertaken to collect detailed data on toxicity, unplanned hospitalisations and admissions to palliative care providers, which may all be considered surrogates for symptom burden and quality of life. Together with other clinical trials and real-world studies on the topic, it is obvious that novel and more potent treatment strategies are needed in order to obtain considerably improved survival and disease control in advanced BTC.

Conclusions

This study provides novel and robust real-world data on GemOx combination chemotherapy in advanced BTC. Survival estimates are similar to the outcomes in comparable clinical trials, which supports further utilisation of GemOx in the palliative treatment of BTC. Lower starting doses of oxaliplatin (80-85 mg/m² rather than 100 mg/m²) seem to be associated with a lower frequency of severe bone marrow toxicity without apparently impairing the prognosis. Good performance status is a strong and independent prognostic factor, both in terms of OS and PFS. In addition, primary tumour location in the gallbladder and dNLR were recognised as independent parameters predictive of OS and PFS, respectively. Frequent infections and unplanned hospitalisations, as well as the short expected survival in these patients, indicate that early admission to a specialised palliative care provider is advisable. Novel treatment strategies are necessary to improve the long-term prognosis in advanced BTC.

Institutional Review Board Statement: This study was performed according to the Declaration of Helsinki. Ethical approval was granted by the Regional Ethics Review Board in Linköping (diary number 2018/139-31).
Informed Consent Statement: Due to the retrospective non-interventional design and the fact that the majority of patients were not expected to be alive at the time of data collection, the Ethics board waived the need for informed consent. Data Availability Statement: Additional data are available upon reasonable request from the corresponding author.
Scientific Endeavors of A.M. Mathai: An Appraisal on the Occasion of his Eightieth Birthday, April 2015

A.M. Mathai is Emeritus Professor of Mathematics and Statistics at McGill University, Canada, and Director of the Centre for Mathematical and Statistical Sciences, India. He has published over 300 research papers and more than 25 books on topics in mathematics, statistics, physics, astrophysics, chemistry, and biology. He is a Fellow of the Institute of Mathematical Statistics and of the National Academy of Sciences of India, President of the Mathematical Society of India, and a Member of the International Statistical Institute. He is the founder of the Canadian Journal of Statistics and the Statistical Society of Canada. He is instrumental in the implementation of the United Nations Basic Space Science Initiative. The paper is an attempt to capture the broad spectrum of scientific endeavors of Professor A.M. Mathai on the occasion of his anniversary.

Early Work in Design of Experiment and Related Problems

A.M. Mathai's first paper was in the area of Design of Experiments and Analysis of Variance in Statistics. This work was done after he finished his M.A. in Mathematics at the University of Toronto, while waiting to register for the Ph.D., during July-August 1962. It became his first publication, appearing in the journal Biometrics in 1965, Mathai (1965) [Biometrics, 21(1965), 376-385]. The problem was suggested by Professor Ralph Wormleighton of the University of Toronto. In a two-way classification with multiple observations per cell, the analysis becomes complicated due to the lack of orthogonality in the design. If two factors, such as the amount of fertilizer used and planting methods in an agricultural experiment to study the yield of corn, are to be tried, and if the experiment is planned to replicate $n$ times, it may happen that some observations in some replicate get lost, and as a result, instead of $n$ observations per cell one may have $n_{ij}$ observations in the $(i,j)$th cell. When analysing the data, for estimating the effects of fertilizers, say $\alpha_1, \ldots, \alpha_p$, one has to solve a singular system of linear equations of the type $(I - A)\alpha = G$, where $G$ is known, $I - A$ is singular, and the unknown quantity $\alpha' = (\alpha_1, \ldots, \alpha_p)$ is to be evaluated. Due to the singularity, one cannot write $\alpha = (I - A)^{-1}G$. Here $A = (a_{ij})$ is the incidence matrix and has the property that all elements are positive and $\sum_{j=1}^{p} a_{ij} = 1$ for each $i = 1, \ldots, p$. Mathai observed that this property means that a norm of $A$, namely $\|A\| = \max_i \sum_{j=1}^{p} |a_{ij}|$, equals 1, and further, since the design takes care of a general effect, one can impose a condition on $\alpha_1, \ldots, \alpha_p$ such as $\alpha_1 + \cdots + \alpha_p = 0$. Now, rewrite $A$ as $A = (A - C) + C$, where $C$ is the matrix whose first-row elements all equal the median $a_1$ of the first-row elements of $A$, whose second-row elements all equal the median $a_2$ of the second-row elements of $A$, and so on. By using the condition on the $\alpha_i$'s, $C\alpha = O$ (null), so the system becomes $(I - B)\alpha = G$, where $B = (b_{ij})$, $b_{ij} = a_{ij} - a_i$, $j = 1, \ldots, p$. Then $\sum_{j=1}^{p} |b_{ij}| = \sum_{j=1}^{p} |a_{ij} - a_i|$ is the sum of absolute deviations from the median $a_i$, which is the least possible. Hence the norm $\|B\| = \max_i \sum_{j=1}^{p} |a_{ij} - a_i|$ is the least possible and evidently $< 1$, so that $I - B$ is nonsingular and $\alpha = (I - B)^{-1}G$ can be evaluated, for instance through the convergent series $(I + B + B^2 + \cdots)G$.
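As an illustration of this device, here is a minimal numerical sketch (with a hypothetical positive matrix with unit row sums standing in for an actual incidence matrix; this is an illustration, not code from the original paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

# A positive matrix with unit row sums: ||A||_inf = 1 and (I - A) is singular,
# since (I - A) applied to the vector of ones gives the zero vector.
A = rng.random((p, p))
A /= A.sum(axis=1, keepdims=True)

# Subtract each row's median: row sums of |b_ij| are minimised, so ||B||_inf < 1.
B = A - np.median(A, axis=1, keepdims=True)
print(np.abs(B).sum(axis=1).max())        # strictly below 1

# A "true" effect vector obeying the side condition alpha_1 + ... + alpha_p = 0,
# which makes C*alpha = 0 and hence (I - A)alpha = (I - B)alpha.
alpha_true = rng.random(p)
alpha_true -= alpha_true.mean()

G = (np.eye(p) - A) @ alpha_true           # right-hand side of the singular system
alpha = np.linalg.solve(np.eye(p) - B, G)  # the non-singular system recovers alpha
print(np.allclose(alpha, alpha_true))      # True
```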
These and other related results were later put together in a monograph on characterizations; see A.M. Mathai and G. Pederzoli, Characterizations of the Normal Probability Law, Wiley Eastern, New Delhi, and Wiley Halsted, New York, 1977.

Work in Multivariate Analysis

Mathai had already noted that the densities of several structures could be written in terms of G- and H-functions. Consider $x_1, x_2, \ldots, x_r, x_{r+1}, \ldots, x_k$ mutually independently distributed positive random variables, such as exponential variables, type-1 or type-2 beta variables, gamma variables, generalized gamma variables, etc. Consider structures of the type
$$u = \frac{x_1 x_2 \cdots x_r}{x_{r+1} \cdots x_k}, \qquad v = \frac{x_1^{\delta_1} \cdots x_r^{\delta_r}}{x_{r+1}^{\delta_{r+1}} \cdots x_k^{\delta_k}},$$
where $\delta_1, \ldots, \delta_k$ are some arbitrary real powers. Then, taking the Mellin transforms or the $(s-1)$th moments of $u$ and $v$ and then taking the inverse Mellin transform, one can write the density of $u$ as a G-function in most cases, or as an H-function, and that of $v$ as an H-function. A product of independently distributed type-1 beta random variables has the same structure of general moments as the likelihood ratio criterion, or $\lambda$-criterion, or a one-to-one function of it, in many hypothesis-testing problems connected with one or more multivariate Gaussian populations and exponential populations. This showed that one could write the exact densities in the general cases as G-functions in most of the cases. Mathai was searching for computable representations in the general cases.

Development of 11-digit accurate percentage points for multivariate test statistics

Even after giving explicit computable series forms for the various exact distributions of test statistics in the null (when the hypothesis is true) and non-null (under the alternative hypothesis) cases for general parameters, the series forms were complicated and exact percentage points could not be computed. When Mathai visited the University of Campinas in Brazil, he met the physicist R.S. Katiyar. After six months of joint work simplifying the complicated gamma products, psi and zeta functions, Katiyar was able to come up with a computer program. The first paper in the series, giving exact percentage points accurate to 11 digits, was produced. This paper made all the complicated theory usable in practical situations of testing of hypotheses in multivariate statistical analysis. The paper appeared in Biometrika and other papers followed, see Mathai and Katiyar (Biometrika, 66(1979), 353-356; Annals of the Institute of Statistical Mathematics, 31(1979), 215-224; Sankhya Series B, 42(1980), 333-341), Mathai (Journal of Statistical Computation and Simulation, 9(1979), 169-182).

Development of a computer algorithm for nonlinear least squares

After developing a computer program for computing exact 11-digit accurate percentage points from the complicated series forms of the exact densities of $\lambda$-criteria for almost all multivariate test statistics, the problem of developing a computer program for non-linear least squares was re-examined. Starting with Marquardt's methods, there were a number of algorithms available in the literature, but all of these algorithms had deficiencies. There are a few (around 11) standard test problems used to test the efficiency of a computer program. The efficiency is measured by checking two items: in how many test functions the computer program fails, and how many function evaluations are needed to reach the final solution. These are the usual two criteria used in the field to test a new algorithm.
A new algorithm for non-linear least squares was developed by Mathai and Katiyar which did not fail in any of the test functions and needed the fewest function evaluations compared with all other algorithms available in the literature. The paper was published in a Russian journal, see Mathai and Katiyar (Researches in Mathematical Statistics (Russian), 207(10)(1993), 143-157). This paper was later translated into English by the American Mathematical Society.

Integer programming problem

The usual optimization problems, such as optimizing a quadratic form or quadratic expression subject to linear or quadratic constraints, or optimizing a linear form subject to linear constraints (the linear programming problem) or quadratic constraints, deal with continuous variables. When the variables are continuous, these optimization problems can be handled by using calculus or related techniques. Suppose that the variables can only take integer values, such as the positive integers 1, 2, 3, ...; then the problem becomes complicated. Many of the standard results available when the variables are continuous are no longer true when the variables are integer-valued. One such problem was brought to the attention of Mathai by S. Kounias. This was solved and a joint paper was published, see Kounias and Mathai (Optimization, 19(1988), 123-131).

Work on Information Theory

While the exact distributions for the test statistics were being worked out, work on information theory was progressing side by side. Characterizations of information measures and of statistical concepts were attempted as a joint venture by Mathai and Rathie. Several characterization theorems were established for various information measures and for statistical concepts such as covariance, variance, correlation, etc; see, for example, Mathai and Rathie (Sankhya Series A, 34(1972)). One such measure is the $\alpha$-generalized entropy of the Havrda-Charvát type,
$$H_{\alpha}(f) = \frac{\int_{-\infty}^{\infty} [f(x)]^{\alpha}\, dx - 1}{2^{1-\alpha} - 1}, \quad \alpha \neq 1, \tag{4.1}$$
where $f(x)$ is a density function. This is the continuous version; there is also a discrete analogue. The denominator is put into the form of an exponent of 2 for ready applications to binary systems. When $\alpha \to 1$, $H$ in (4.1) goes to the Shannon entropy $S = -\int_{-\infty}^{\infty} f(x) \ln f(x)\, dx$ (in binary units, given the base-2 denominator), and hence (4.1) is called an $\alpha$-generalized entropy. There are several $\alpha$-generalized entropies in the literature, including the one given by Mathai. This (4.1), in a modified form with the denominator replaced by $1 - \alpha$, was later developed by C. Tsallis as the basis for the whole area of non-extensive statistical mechanics. The Mathai-Rathie (1975) book can be considered the first book on characterizations. As a side result, as an application of functional equations, Mathai and Rathie solved a problem in graph theory, see Journal of Combinatorial Theory, 13(1972), 83-90. Other applications of information theory concepts in social sciences, population studies etc may be seen from Kaufman and Mathai (Journal of Multivariate Analysis, 3(1973), 236-242), Kaufman, Mathai, Rathie (Sankhya Series A, (1972), 441-442), Mathai (Transactions of the 7th Prague Conference on Information Theory, pp. 353-357).

Applications to real-life problems

Applications of the concepts of information measures ('entropy' or the measure of 'uncertainty', directed divergence (a concept of pseudo-distance), 'affinity' or closeness between populations, the concept of 'distance between social groups', etc) were made to solve problems in social statistics, population studies, etc. Mathai developed a generalized measure of 'affinity' as well as of 'distance between social groups'.
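For orientation, a small sketch of the basic discrete versions of these quantities in Python (standard textbook forms; the generalized affinity and group-distance measures developed by Mathai are not reproduced here):

```python
import numpy as np

def shannon(p):
    """Shannon entropy, in bits, of a discrete distribution p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def alpha_entropy(p, alpha):
    """Havrda-Charvat type alpha-generalized entropy, denominator 2**(1-alpha) - 1."""
    return (np.sum(p ** alpha) - 1) / (2 ** (1 - alpha) - 1)

def directed_divergence(p, q):
    """Kullback-Leibler directed divergence D(p||q), a pseudo-distance."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(shannon(p))                 # about 1.485 bits
print(alpha_entropy(p, 0.999))    # approaches the Shannon value as alpha -> 1
print(directed_divergence(p, q))  # zero if and only if p equals q
```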
On the application side, dealing with applications of information-theory-type measures, see George and Mathai (Canadian Studies in Population, 2(1975), 91-100; 7(1980), 1-7; Journal of Biosocial Sciences (UK), 6(1975), 347-356; The Manpower Journal, 14(1978), 69-78).

Work on Biological Modeling

During one of Mathai's visits to the Indian Statistical Institute in Calcutta, India, he came across the biologist T.A. Davis. Davis had a number of problems for which he needed answers. He had a huge collection of data on the number of petals in flowers of one species of plant. He noted that there were usually 4 petals in each flower, but sometimes the number of petals was 5. He wanted to know whether the occurrence of 5-petaled flowers showed any pattern. His data were insufficient to come up with any pattern. Patterns, if any, would be connected to genetic factors. He then had a question about how various patterns arise in nature, in the growth of leaves and flowers, in the arrangements of petals and seeds in flowers, etc, and whether any mathematical theory could be developed to explain these. Then he brought in the observations on the sunflower. When we look at flowers, certain ones, such as the rose or the sunflower, look more beautiful than others. This appeal is due to the arrangements of petals, florets, and color combinations. When we look at a sunflower, at the florets or at the seed formations after the florets dry up, we see patterns in the arrangements of these seeds on the flower disk, called the capitulum. The seeds appear to be arranged along spirals coming from the periphery and going to the center. Let us call these radial spirals. If one marks a point on the periphery and then looks to the left of the mark, one sees one set of radial spirals, and if one looks to the right, one sees a different set of radial spirals going in the opposite direction. The numbers in these two sets are always two successive numbers from the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, 21, ... (the sum of two successive numbers is the next number). Another observation is that if one looks along a radial spiral, the spiral does not go all the way to the center but becomes fuzzy after a while. At that stage, if one draws a concentric circle and then looks inside this circle, one will see that if one started with the pair (13, 21), the pair has shifted to (8, 13), then to (5, 8), and so on. The same sort of arrangement can be seen in the pineapple, in the arrangement of leaves on a coconut tree crown, and at many other places. If one takes a coconut crown and projects it onto a circle, the positions of the leaves on the crown form a replica of the seed arrangement in a sunflower. In a coconut crown, if the oldest leaf is in a certain direction, call it the 0-th direction, then the next older leaf is not the one next to the oldest, but is about $\theta$ degrees either to the right or to the left, where $\theta$ is such that
$$\frac{\theta}{2\pi - \theta} = \text{golden ratio} = \frac{\sqrt{5} - 1}{2}.$$
This golden ratio appears at many places in nature, and the above $\theta \approx 137.5^{\circ}$. Davis wanted mathematical explanations for these and related observations. These observations had been made by biologists over centuries, and many theories were available on the subject. All the theories were trying to explain the appearance of radial spirals. Mathematicians try with differential equations, and others from other fields try with their own tools.
Mathai figured out that the radial spirals one sees may be the aftermath of something else, and that radial spirals are not generated per se. Also, the philosophy is that nature must be working on very simple principles. If one buys sunflower seeds from a shop, or looks at sunflower seeds on a capitulum, the seeds are all of the same dimensions, whether one takes one from the periphery or from any other spot on the capitulum. Such growth can happen if something is growing along an Archimedes' spiral, which has the equation $r = k\theta$ in polar coordinates, after one leaves the center. Davis' artist was asked to mark points on an Archimedes' spiral, differing from point to point by $\theta \approx 137.5^{\circ}$: something like a point moving along the Archimedes' spiral at a constant speed, so that when the first point reaches the $\theta$ mark a second point starts, both moving at the same speed, whatever that speed may be. When the second point comes to the mark $\theta$, a third point starts, and so on. After creating a certain number of points, maybe 200 points, remove the Archimedes' spiral from the paper and fill up the space with any symmetrical object, such as circles or diamonds, with those points as centers. Then, if one looks from the periphery, the two types of radial spirals can be seen. No such spirals are actually there; it is one's vision that creates the radial spirals. Thus a sunflower pattern was recreated from this theory, and Mathai and Davis proposed a theory of growth and forms (a small simulation in this spirit is sketched below). Consider a capillary, a very thin tube with built-in chambers, and a viscous fluid being continuously pumped in from the bottom. The liquid enters the first chamber. When a certain pressure is built up, an in-built valve opens and the fluid moves into the second chamber, and so on. Suppose that the tube opens into the center part of a pan (with a hole at the center), and that the pan is fully sealed so that the only force acting on the liquid is Earth's gravity. The flow of the liquid will then be governed by the functional equation $f(\theta_1) + f(\theta_2) = f(\theta_1 + \theta_2)$, whose continuous solution is the linear function $f(\theta) = k\theta$. This is Archimedes' spiral. The paper was sent for publication to the journal Mathematical Biosciences, and the editor 'enthusiastically accepted' it for publication. In this paper, Mathai and Davis (Mathematical Biosciences, 20(1974), 117-133), a theory of growth and form is proposed. This theory still stands; since then there have been many papers in physics, chemistry and other areas supporting various aspects of the theory, and none has disputed it so far. In 1976 the journal took the Mathai-Davis sunflower head as its cover design, and it is still the cover design.
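The visual effect is easy to reproduce. The sketch below uses the closely related golden-angle placement with $r \propto \sqrt{n}$ (Vogel's variant, chosen here because it keeps the point density uniform), rather than the exact constant-speed Archimedean construction described above; the two families of apparent radial spirals emerge either way:

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(1, 400)
golden_angle = np.pi * (3 - np.sqrt(5))   # about 137.5 degrees, in radians
theta = n * golden_angle                   # successive points differ by the golden angle
r = np.sqrt(n)                             # sqrt spacing keeps the "seed" density uniform

plt.scatter(r * np.cos(theta), r * np.sin(theta), s=10)
plt.gca().set_aspect('equal')
plt.title('Golden-angle point placement: apparent radial spirals')
plt.show()
```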
Work on coconut tree crown

The coconut crown was also examined from many mathematical points of view and found to be an ideal crown. This paper may be seen in Mathai and Davis (Proceedings of the National Academy of Sciences, India, 39(1973), 160-169).

Engineering wonder of Bayya bird's nest and other biological problems

Further problems looked into by Mathai and Davis are the following: (1) the engineering aspect of the egg chamber of the bayya bird's nest. The nest hangs from the tips of tree branches, the mother bird goes into the egg chamber through the tail opening of the nest, and the nest oscillates violently during heavy winds or storms, yet no egg comes out of the egg chamber and falls through the tail opening, even though the tail opening is naturally bigger than the diameter of the eggs because the mother bird goes through that opening. This shape, being an engineering marvel, was examined by Mathai and Davis. (2) Thermometer birds in the Andaman and Nicobar Islands; (3) the transfer of Canadian maple syrup technology to the production of palm sugar and jaggery in Tamilnadu, India; (4) Nipa palms to prevent sea erosion along the Kannyakumari sea coast; (5) the rejuvenation of the Western Ghats in the Kannyakumari region. All these projects were undertaken jointly by the Centre for Mathematical Sciences, Trivandrum Campus (CMS), where A.M. Mathai was the Honorary Director, and the Haldane Research Institute of Nagarcoil, Tamilnadu (HRI), where T.A. Davis was the Director and A.M. Mathai was the Honorary Chairman. Prior to these studies, George and Mathai had done work on population problems, especially the study of inter-live-birth intervals, that is, the interval between two live births among women in the child-bearing age group, see George and Mathai (Sankhya Series B, 37(1975), 332-342; Demography of India, 5(1976), 163-180; The Manpower Journal, 14(1978), 69-78). Here, Mathai introduced the concepts of affinity and distance between social groups.

Introducing the phrase 'statistical sciences'

By 1970 Mathai was working to establish a Canadian statistical society and a Canadian journal of statistics. The phrase 'statistical sciences' was framed and defined as the systematic and scientific study of random phenomena, so that the theoretical developments of probability and statistics, and their applications in all branches of knowledge, would come under the heading 'statistical sciences', with random variables as an extension of mathematical variables, or mathematical variables as degenerate random variables. After the launching of the Statistical Science Association of Canada, the term 'statistical science' became a standard phrase. Journals and organizations started using the name 'statistical science'. Mathai was responsible for introducing these terms into the scientific literature. When G.P.H. Styan, a colleague of Mathai, was editing the news bulletin of the Institute of Mathematical Statistics, he posed the question of whether the phrase 'statistical science' had ever been used before the launching of the Statistical Science Association of Canada. There was a response from a Japanese scientist claiming that he had used the term 'statistical science' before. Incidentally, the Institute of Mathematical Statistics later changed the name of the Annals of Mathematical Statistics to the Annals of Statistics, and hence that name was no longer available when the Statistical Science Association of Canada changed its name back to the originally proposed name, Statistical Society of Canada.

Work on Probability and Geometrical Probabilities

Work in mathematical statistics and special functions continued. As a continuation of the investigation of structural properties of densities, Mathai came across the distributions of lengths, areas and volume contents of random geometrical configurations, such as random distances, random areas, random volumes and random hyper-volumes. All the theory of G- and H-functions, products and ratios of positive random variables, etc, could be used in examining the distributional aspects of the volumes of random parallelotopes and simplices. By analyzing the structure of the general moments, Mathai noted that these could be generated by products of independently distributed (1) gamma distributed points, (2) uniformly distributed points, (3) type-1 beta distributed points, or (4) type-2 beta distributed points.
Out of these, (1) fell into the category of $G^{p,0}_{0,p}$, the second and third fell into the $G^{p,0}_{p,p}$ category, and (4) fell into the $G^{p,p}_{p,p}$ category, for all of which the necessary theory had already been developed by Mathai and his team. Papers were published on the distributional aspects, see Mathai (Sankhya Series A, 45(1983), 313-323), Mathai and Tracy (Communications in Statistics A, 12(15)(1983), 1727-1736), Mathai and Pederzoli (American Journal of Mathematical and Management Sciences, 9(1989), 113-139; Rendiconti del Circolo Matematico di Palermo, Serie II, Suppl., 50(1997), 235-258).

A conjecture in geometric probabilities

Then Mathai came across a conjecture, posed by the Australian scientist R.E. Miles, regarding the asymptotic normality of a certain random volume coming from uniformly distributed random points. This was proved to be true by H. Ruben; in fact, Ruben brought this area to the attention of Mathai. The structure of the random geometric configuration was known to Mathai, namely a G-function of the type $G^{p,0}_{p,p}$, and Mathai realized that a very simple proof of the conjecture could be given by using the asymptotic formula for gamma functions, of which Stirling's formula is the first approximation. This was worked out, and it was shown that the conjecture could be proved very easily. The paper appeared in the Annals of Probability, see Mathai (Annals of Probability, 10(1982), 247-251). Incidentally, there is a mistake there: the final representation is given in terms of a confluent hypergeometric function $_1F_1$, but it should be a Gauss hypergeometric function $_2F_1$; one parameter is missing in the final form as written. Then Mathai noted that the same conjecture could be formulated in terms of type-1 beta distributed random points, and that similar conjectures could be formulated for type-2 beta distributed and gamma distributed random points. These conjectures were formulated and solved, see Mathai (Sankhya Series A, 45(1983), 313-323; American Journal of Mathematical and Management Sciences, 9(1989), 113-139); Mathai and Tracy (Communications in Statistics A, 12(15)(1983), 1727-1736; Metron, 44(1986), 101-110).

Random volumes and Jacobians of matrix transformations

Side by side, Mathai was developing functions of matrix argument. The work in this area will be described later, but its connection to geometrical probabilities is mentioned here. The area of stochastic geometry, or geometrical probabilities, is a fusion of geometry and measure theory. When measure theory is mixed with geometry, the standard axiomatic definition of a probability measure is not sufficient. It is quite evident that an additional property of invariance is needed, because a geometrical object can be moved around in a plane or in space and the probability statements must remain the same. The famous Bertrand paradoxes, or Russell's paradoxes, come from the lack of invariance conditions there. The details are discussed in the book: A.M. Mathai, Introduction to Geometrical Probability: Distributional Aspects and Applications, Gordon and Breach, New York, 1999. Consider a circle of radius $r$. Take two points A and B at random and independently on the circumference of this circle. Here, 'at random' could mean that the probability of finding a point, such as A, in an arc of length $\delta$ is $\delta/(2\pi r)$. Consider the chord AB. Then AB is a random chord. Let P be the midpoint of this chord and O the center of the circle. Then OP is fixed when AB is fixed, and OP is perpendicular to AB.
Consider another situation: selecting a point P at random inside the circle. This can be done by assigning the probability of finding P in a region R inside the circle as (area of R)/$(\pi r^2)$. If P is fixed, and if P is the midpoint of a chord, then the chord is automatically fixed. In many ways one can geometrically uniquely determine a chord, and the chord can be made 'random' by assigning probabilities in many ways; two such ways are described above. If one asks the question, 'what is the probability that the length of this random chord is less than a specified number?', the answer will be different for the different ways of assigning probabilities. This is the paradox. Note that all steps in the derivations of the answers are correct and valid steps as per the usual axioms of probability. In the stochastic geometry area, the methods used are methods from differential and integral geometry, and they are usually very difficult. Even if one only wishes to obtain the distribution of the random volume of a parallelotope through differential or integral geometry, the process is very involved. Mathai noted that such problems could be easily answered through Jacobians of matrix transformations. A paper was published in Advances in Applied Probability, see Mathai (Advances in Applied Probability, 31(2)(1999)).

Applications in transportation problems

As an application of geometrical probability problems, Mathai explored the travel distance from the suburbs to the city core for circular and rectangular grid cities. Many European cities are designed with a city center and circular and radial streets from the center, whereas in North America most cities are designed in rectangular grids. Travel distances, times taken and associated expenses are random quantities related to the nature of the city design. Some problems of this type were analyzed by Mathai (Environmetrics, 9(1998), 617-628); Mathai and Moschopoulos (Environmetrics, 10(1999), 791-802).

Work in Astrophysics

After the publication of the two books, on generalized hypergeometric functions in 1973 and on the H-function in 1978, physicists were interested in using those results in their work. A number of people from different parts of Germany were using these results. The German group working on astrophysics problems was trying to solve some problems connected with reaction-rate theory. Then H.J. Haubold came to McGill University with open problems where help from special function theory was needed. After converting their problems into integral equations, Mathai noted that the basic integral to be evaluated was of the form
$$\int_0^{\infty} x^{\gamma}\, e^{-ax - bx^{-1/2}}\, dx \tag{7.1}$$
and generalizations of this integral. Note that if $a$ or $b$ is zero, then the integral can be evaluated by using a gamma integral. Mathematically, it would not make any difference whether the nonlinear exponent is of the form $x^{-1/2}$ or of the form $x^{-\rho}$, $\rho > 0$. Mathai could not find any such integrals in any of the books of tables of integrals. He noted that the integrand consisted of integrable functions, and therefore one could make statistical densities out of them. For example, $f_1(x) = c_1 x^{\gamma} e^{-ax}$, $0 \le x < \infty$, is a density, where $c_1$ is the normalizing constant. Similarly, $f_2(x) = c_2 e^{-x^{\rho}}$, $\rho > 0$, $0 \le x < \infty$, is a density, where $c_2$ is the normalizing constant. Then the structure in (7.1) can be written in the form
$$g(u) = \int_0^{\infty} \frac{1}{x}\, f_1(x)\, f_2\!\left(\frac{u}{x}\right) dx, \tag{7.2}$$
where $g(u)$ can represent the density of $u = x_1 x_2$, with $x_1$ and $x_2$ independently distributed positive real scalar random variables having the densities $f_1(x_1)$ and $f_2(x_2)$, respectively. Once the structure in (7.1) is identified with that in (7.2), then, since the density is unique, it is only a matter of finding the density $g(u)$ by some other means. We can easily use the properties of arbitrary moments: due to the statistical independence of $x_1$ and $x_2$,
$$E(u^{s-1}) = E(x_1^{s-1})\, E(x_2^{s-1}),$$
where $E$ denotes the expected value. Note that $E(x_1^{s-1})$ is available from $f_1(x_1)$ and $E(x_2^{s-1})$ from $f_2(x_2)$. Then $g(u)$ is available from the inverse Mellin transform, that is,
$$g(u) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} E(u^{s-1})\, u^{-s}\, ds,$$
where $i = \sqrt{-1}$ and $c$ is determined from the poles of $E(u^{s-1})$. Thus, by using statistical techniques, the integral in (7.1) was evaluated. After working out many results, it was realized that one could also use the Mellin convolution of a product to solve integrals of the type in (7.1); this was not seen when the method through statistical distribution theory was devised.
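A numerical sketch of this product-density identity, with hypothetical parameter values (here $f_1$ is a gamma density and $f_2$ is sampled as a power of a gamma variable; the closed-form G- and H-function representations are not reproduced):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

a, gam, rho = 2.0, 1.5, 0.5                  # hypothetical parameter choices
c1 = a ** (gam + 1) / Gamma(gam + 1)         # normalises f1(x) = c1 x^gam exp(-a x)
c2 = rho / Gamma(1 / rho)                    # normalises f2(x) = c2 exp(-x^rho)

f1 = lambda x: c1 * x ** gam * np.exp(-a * x)
f2 = lambda x: c2 * np.exp(-x ** rho)

def g(u):
    """Mellin convolution of a product: density of u = x1 * x2."""
    return quad(lambda v: f1(v) * f2(u / v) / v, 0, np.inf)[0]

# Simulation check: x1 ~ f1 is a gamma law; x2 ~ f2 is a 1/rho power of a gamma law.
rng = np.random.default_rng(1)
x1 = rng.gamma(shape=gam + 1, scale=1 / a, size=200_000)
x2 = rng.gamma(shape=1 / rho, scale=1.0, size=200_000) ** (1 / rho)
u = x1 * x2

hist, edges = np.histogram(u, bins=200, range=(0, 10), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for point in (0.5, 1.0, 2.0, 4.0):
    print(point, g(point), np.interp(point, centres, hist))   # should agree closely
```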
Various types of thermonuclear reactions (resonant, non-resonant, the depleted case, the high-energy cut-off case, etc) were investigated. The work also went into exploring exact analytic solar models, gravitational instability problems, solar neutrino problems, reaction rates, nuclear energy generation, etc. The work until 1988 was summarized in the monograph Mathai and Haubold (Modern Problems in Nuclear and Neutrino Astrophysics, Akademie-Verlag, Berlin, 1988). Since then a lot of work has been done, some of it in Haubold and Mathai (Annalen der Physik, 44(1987), 103-116; Astronomische Nachrichten, 308(5)(1987)).

Work on Differential Equations

One of the problems investigated in connection with problems in astrophysics was the gravitational instability problem, brought to the attention of Mathai by Haubold. Papers by Russian researchers existed on the problem of mixing two types of cosmic dust. Mathai looked at it and found that, by making a transformation in the dependent variable and by changing the operator to $t\frac{d}{dt}$ instead of the integer-order differential operator $D = \frac{d}{dt}$, one could identify the differential equation as a particular case of the differential equation satisfied by a G-function. Then G-function theory could be used to solve the problem of mixing $k$ different cosmic dusts. Thus the first paper on an integer-order differential equation was written and published in the MIT journal, see Mathai (Studies in Applied Mathematics, 80(1989), 75-93). Two follow-up papers were written developing the differential equation and applying it to physics problems, see Haubold and Mathai (Astronomische Nachrichten, 312(1)(1991), 1-6; Astrophysics and Space Science, 214(1&2)(1994), 139-149).

The Idea of Laplacianness of Bilinear Forms and Work on Quadratic and Bilinear Forms

In the 1980s two students of Mathai, S.B. Provost and D. Morin-Wahhab, finished their Ph.D.s in the area of quadratic forms. Mathai had also published a number of papers on quadratic and bilinear forms by this time. It was then decided to bring out a book on quadratic forms in random variables. On the mathematical side there were books on quadratic forms, but there was none in the area of quadratic forms in random variables. Only real random variables and samples coming from a Gaussian population were considered. Later, in 2005, Mathai extended the theory to cover very general classes of populations; this aspect will be considered later when pathway models are discussed.
Only when I. Olkin pointed out to Mathai the many applications of the complex Gaussian case in communication theory, after the book had appeared in print, did Mathai and Provost realize that an equal amount of material had been missed: A.M. Mathai and S.B. Provost, Quadratic Forms in Random Variables: Theory and Applications, Marcel Dekker, New York, 1992. Work on quadratic forms and related topics may be seen in Mathai (Communications in Statistics A, 20(10)).

Chisquaredness of quadratic forms and Laplacianness of bilinear forms

Consider the quadratic form and bilinear form
$$Q = X'AX, \qquad Q_1 = X'BY, \quad X' = (x_1, \ldots, x_p),\ Y' = (y_1, \ldots, y_q),$$
where $x_1, \ldots, x_p, y_1, \ldots, y_q$ are real scalar random variables, $A$ is a $p \times p$ matrix and $B$ is a $p \times q$ matrix, where $p \le q$ or $p \ge q$. When $X$ is distributed as $N_p(O, I)$, that is, a $p$-variate Gaussian or normal population with mean value null and covariance matrix an identity matrix, there is a theorem which says that $Q$ is distributed as a chisquare with $r$ degrees of freedom if and only if $A$ is idempotent and of rank $r$. This result is frequently used, especially in design of experiments and analysis of variance problems. In fact, this result and its companion result on the independence of two quadratic forms are the backbones of the areas of analysis of variance, analysis of covariance, regression, model building and many others. What is the concept corresponding to chisquaredness of a quadratic form in the bilinear form case? It was shown by Mathai that the concept is Laplacianness: the corresponding distribution is the Laplace density instead of the chisquare density, see Mathai (Journal of Multivariate Analysis, 45(1993), 239-246). Apart from introducing the concept of Laplacianness, this paper also throws light on covariance structures.

When Mathai was taking his M.A. degree in mathematics, one of the professors in a course on multivariate analysis asked a simple-looking question in 1962. If one has a simple random sample from a bivariate real normal population $N_2(\mu, \Sigma)$, $\Sigma > O$ (positive definite; standard notation), consider the sample correlation coefficient, denoted by $r$, where
$$r = \frac{\sum_{j=1}^{n} (x_j - \bar{x})(y_j - \bar{y})}{\left\{\sum_{j=1}^{n} (x_j - \bar{x})^2\right\}^{1/2} \left\{\sum_{j=1}^{n} (y_j - \bar{y})^2\right\}^{1/2}}.$$
The question was: what is the density of the sample covariance $\sum_{j=1}^{n} (x_j - \bar{x})(y_j - \bar{y})/n$? The density of $r$ in the bivariate normal case, and the corresponding density for the sample multiple correlation in the multivariate case, were already available in the literature. The answer looked trivial, because the sample covariance is directly connected to the sample correlation. Nobody had the answer, including the professor who posed the question. In 1990-1991, when Mathai was writing on Laplacianness, he realized that a covariance structure is nothing but a bilinear form, and hence the density of the sample covariance must be available from that of the bilinear form. Thus, the 1962 question was answered in the above-mentioned 1993 paper. The corresponding matrix-variate case should also be obtainable, but nobody has worked it out yet.

Bilinear form book

After the publication of the quadratic form book in 1992, a lot of work was done on bilinear forms. Even though a bilinear form can be written as a quadratic form, there are many properties enjoyed by bilinear forms that are not enjoyed by quadratic forms; quadratic forms do not have covariance structures. Then T. Hayakawa of Japan contacted Mathai asking why not bring out a book on bilinear forms, parallel to the one on quadratic forms, including chapters on zonal polynomials. This book on bilinear forms and zonal polynomials was brought out in 1995: A.M. Mathai, S.B. Provost and T. Hayakawa, Bilinear Forms and Zonal Polynomials, Springer, New York, 1995, in the lecture notes series.
Additional papers may be seen in Mathai and Pederzoli (Journal of the Indian Statistical Society, 3(1995), 345-356; Statistica, LVI(4)(1996), 407-441).

Functions of Matrix Argument

Meanwhile, Mathai's work on functions of matrix argument was progressing. These are real-valued scalar functions where the argument is a real or complex matrix. The theory is well developed when the argument matrix is real positive definite or hermitian positive definite. Note that when $A$ is a square or rectangular matrix, we do not have a uniquely defined concept corresponding to the square root of a scalar quantity. But if the matrix $A$ is real positive definite or hermitian positive definite, written as $A > O$, operations such as the square root can be uniquely defined. Hence the theory is developed basically for real positive definite or hermitian positive definite matrices. Gordon and Mathai tried to develop a matrix series and a pseudo-analytic function involving general matrices; the attempt was not fully successful, but some characterization theorems for the multivariate normal population could be established, see Gordon and Mathai (Annals of Mathematical Statistics, 43(1972), 205-229). Gordon has two more papers in the area, one in the Annals of Statistics and the other in the Annals of the Institute of Statistical Mathematics. The theory of real-valued scalar functions of matrix argument is thus developed for the case when the matrix is real or hermitian positive definite.

There are three approaches available in the literature. One is through the matrix-variate Laplace transform and inverse Laplace transform, developed by C. Herz and others, see for example Herz (Annals of Mathematics, 61(3)(1955), 474-523). Here one basic assumption is functional commutativity, $f(AB) = f(BA)$ even if $AB \ne BA$, where $A$ and $B$ are $p \times p$ matrices. Under functional commutativity we have the following result. When $A$ is symmetric there exists an orthonormal matrix $P$, $PP' = I$, $P'P = I$, such that $P'AP = D$, where $D$ is a diagonal matrix whose diagonal elements are the eigenvalues of $A$; then
$$f(A) = f(PDP') = f(DP'P) = f(D).$$
Thus the original function of $p(p+1)/2$ real scalar variables can be reduced to a function of $p$ variables, the eigenvalues of $A$.

Another approach is through zonal polynomials, developed by Constantine, James and others, see for example James (Annals of Mathematics, 74(1961), 456-469) and Constantine (Annals of Mathematical Statistics, 34(1963), 1270-1285). In this approach, a general hypergeometric function with $r$ upper parameters and $s$ lower parameters is defined as
$${}_rF_s(a_1, \ldots, a_r;\, b_1, \ldots, b_s;\, X) = \sum_{k=0}^{\infty} \sum_{K} \frac{(a_1)_K \cdots (a_r)_K}{(b_1)_K \cdots (b_s)_K}\, \frac{C_K(X)}{k!},$$
where $C_K(X)$ is the zonal polynomial of order $k$, $K = (k_1, \ldots, k_p)$, $k_1 + \cdots + k_p = k$, and $(a)_K$ denotes a generalized Pochhammer symbol; such functions can also be defined, for example, through the definition of a Laplace and inverse Laplace transform pair.

The third approach is due to Mathai, and it is defined in terms of a general matrix transform, or M-transform. The M-transform of $f(-X)$ is defined by the equation
$$M_f(\rho) = \int_{X > O} |X|^{\rho - \frac{p+1}{2}}\, f(-X)\, dX, \quad \Re(\rho) > \frac{p-1}{2},$$
where $\Re(\cdot)$ means the real part of $(\cdot)$.
Under functional commutativity, $f(-X)$ is uniquely determined by its M-transform. In this definition, a general hypergeometric function with $r$ upper and $s$ lower parameters is defined as that class of functions for which the M-transform is
$$M_f(\rho) = \Gamma_p(\rho)\, \left\{\prod_{j=1}^{r} \frac{\Gamma_p(a_j - \rho)}{\Gamma_p(a_j)}\right\} \left\{\prod_{j=1}^{s} \frac{\Gamma_p(b_j)}{\Gamma_p(b_j - \rho)}\right\},$$
where $\Gamma_p(a)$ is the real matrix-variate gamma, given by
$$\Gamma_p(a) = \pi^{\frac{p(p-1)}{4}} \prod_{j=1}^{p} \Gamma\!\left(a - \frac{j-1}{2}\right), \quad \Re(a) > \frac{p-1}{2}.$$
That class of functions $f(-X)$ is then given by the corresponding inverse equation; see Mathai (Rendiconti del Circolo Matematico di Palermo, Serie II, Suppl., 65(2000), 219-232; Linear Algebra and Its Applications, 183(1993), 202-221; in Probability and Statistical Methods with Applications, pp. 293-316, Chapman and Hall, 2001), Mathai and Saxena (Journal de Matematica e Estatistica, 1(1979), 91-106), Mathai and Rathie (Statistica, XL(1980), 93-99; Sankhya Series A, 42(1980), 78-87), Mathai and Tracy (Communications in Statistics A, 12(15)(1983), 1727-1736; Metron, 44(1986), 101-110), Mathai and Pederzoli (Metron, LI(3-4)(1993), 3-24; Indian Journal of Pure and Applied Mathematics, 27(3)(1996), 7-32; Linear Algebra and Its Applications, 253(1997), 209-226; 269(1998), 91-103). The important publication in this area is the book on Jacobians of matrix transformations: A.M. Mathai, Jacobians of Matrix Transformations and Functions of Matrix Argument, World Scientific Publishing, New York, 1997. The work on functions of matrix argument is continuing in the form of applications in pathway models, fractional calculus and so on; these will be mentioned later.

In connection with matrix-variate integrals, a very often asked question is whether such integrals can be evaluated by treating them as multiple integrals and using standard techniques of calculus. Mathai explored the possibility of explicitly evaluating matrix-variate gamma and beta integrals as multiple integrals in calculus. The basic matrix-variate integrals are the gamma integral and the beta integrals, where $X$ is a $p \times p$ real positive definite matrix or hermitian positive definite matrix. For example, when $X$ is real and $X > O$ (positive definite), the gamma integral is
$$\Gamma_p(\alpha) = \int_{X > O} |X|^{\alpha - \frac{p+1}{2}}\, e^{-\mathrm{tr}(X)}\, dX.$$
The corresponding integrals also exist in the complex-variate case. It is shown that this evaluation can be done explicitly for $p = 2$, and a recurrence relation can be obtained so that, step by step, the integrals can be evaluated; but for $p > 2$ this method of treating them as multiple integrals is not feasible.

Power transformation and exponentiation

Another problem explored is the nature of the models obtained by power transformations and exponentiation of standard probability models. Such a study is useful when looking for an appropriate model for a given data set. These explorations are done in Mathai (Journal of the Society for Probability and Statistics (ISPS), 13(2012), 1-19).

Symmetric and asymmetric models

A model symmetric at $x = a$, where $a$ could also be zero, means that for $x < a$ the behavior, or the shape, of the function is the same as its behavior for $x > a$. In many practical situations symmetry may not be present: the behavior for $x < a$ may be different from that for $x > a$. Many authors have considered asymmetric models where asymmetry is introduced by giving different weighting factors for $x < a$ and for $x > a$, so that the total probability under the curve remains 1. But the shape of the curve itself may change between $x < a$ and $x > a$. A method is proposed in the paper referred to above (Mathai, 2012) where asymmetry is introduced through a scaling parameter, so that the shape itself is different in the $x < a$ and $x > a$ cases while the total probability remains 1, which may have more practical relevance.
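The text does not spell out the exact construction, but one standard way to realize this idea is the two-piece device sketched below: a single scale parameter on each side of $x = a$ changes the shape on that side, while side-dependent weights keep the total probability equal to 1 (a sketch under these assumptions, not necessarily Mathai's own formulation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

a, s1, s2 = 0.0, 1.0, 2.5        # hypothetical location and side-specific scales

def f(x):
    """Two-piece density: different shapes for x < a and x >= a, total mass 1."""
    s = np.where(x < a, s1, s2)
    return (2.0 / (s1 + s2)) * norm.pdf((x - a) / s)

print(quad(f, -np.inf, np.inf)[0])   # ~1.0, so it is still a proper density
```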
The Pathway Model

The basic idea was already present in a paper from the 1970s in the area of population studies, where it was shown that by a limiting process one can go from one class of functions to another class of functions, the property basically coming from the theory of hypergeometric functions, from the aspect of getting rid of a numerator or a denominator parameter. This idea was revived and written up as a paper on functions of matrix argument where the variable matrix is a rectangular one, see Mathai (Linear Algebra and Its Applications, 396(2005), 317-328). Let $X$ be a real $m \times n$ matrix variable, $m \le n$, of rank $m$. Let $A$ be $m \times m$ and $B$ be $n \times n$ constant nonsingular matrices. Consider the function
$$f(X) = C\, \left|I - (1-\alpha)AXBX'\right|^{\frac{\eta}{1-\alpha}}, \tag{12.1}$$
where $\alpha, \eta, C$ are scalar constants. This $C$ can act as a normalizing constant if we wish to create a statistical density out of (12.1). Consider the case $m = 1$, $n = 1$ and $x > 0$. Then one can also take powers of $x$, and the model in (12.1) can be written as
$$f_1(x) = c_1\, x^{\gamma}\, [1 - a(1-\alpha)x^{\delta}]^{\frac{\eta}{1-\alpha}}, \tag{12.2}$$
where $a > 0$, $\delta > 0$, $\eta > 0$, $x \ge 0$. In the matrix-variate case in (12.1), arbitrary powers of matrices are not feasible, even though $AXBX'$ is positive definite, because even for a positive definite matrix $Y$ an arbitrary power such as $Y^{\delta}$ may not be uniquely defined. Even when uniquely defined, a transformation such as $Z = Y^{\delta}$ creates problems when computing the Jacobians. The types of difficulties that can arise may be seen for the case $\delta = 2$, described in the book A.M. Mathai, Jacobians of Matrix Transformations and Functions of Matrix Argument, World Scientific Publishing, New York, 1997. Hence for the matrix case we consider only $\delta = 1$. Consider the case $-\infty < \alpha < 1$. Then (12.2) remains as given, which is a generalized type-1 beta form. But if $\alpha > 1$, then, writing $1 - \alpha = -(\alpha - 1)$, the form in (12.2) changes to
$$f_2(x) = c_2\, x^{\gamma}\, [1 + a(\alpha-1)x^{\delta}]^{-\frac{\eta}{\alpha-1}} \tag{12.3}$$
for $a > 0$, $\alpha > 1$, $\eta > 0$, $\delta > 0$, $x \ge 0$. This model is a generalized type-2 beta model. When $\alpha \to 1$ in (12.2) and (12.3), $f_1(x)$ and $f_2(x)$ reduce to the form
$$f_3(x) = c_3\, x^{\gamma}\, e^{-a\eta x^{\delta}}. \tag{12.4}$$
This is a generalized gamma model. Thus three functional forms $f_1(x), f_2(x), f_3(x)$ are available, for $\alpha < 1$, $\alpha > 1$ and $\alpha \to 1$. This parameter $\alpha$ is called the pathway parameter, a pathway leading through three different families of functions. The practical utility of the model is that if (12.4) is the stable or ideal situation in a physical system, then the unstable neighborhoods, or the functions leading to (12.4), are given in (12.2) and (12.3). In a model-building situation, if the underlying data show a gamma-type behavior, then a best-fitting model can be constructed for some values of the parameters, or for some value of $\alpha$ the ideal model can be determined. Most of the statistical models in practical use in the areas of statistics, physics and engineering can be seen to be members, or products of members, from $f_1, f_2, f_3$ above. Note that for the $\alpha > 1$ and $\alpha \to 1$ situations we can take $\delta > 0$ or $\delta < 0$, and both these situations can create statistical densities. Note that $f_1$ is a family of finite-range models, whereas $f_2$ and $f_3$ are families of infinite-range models. Extended models are available by replacing $x$ by $|x|$, so that the whole real line is covered. In this case the nonzero part of model (12.2) is in the range $\pm[a(1-\alpha)]^{-1/\delta}$, and for the others $-\infty < x < \infty$. Note that in (12.1) all the individual variables $x_{ij}$ are allowed to vary over the whole real line, subject to the condition $I - (1-\alpha)AXBX' > O$ (positive definite).
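A quick numerical check of the pathway limit, using the unnormalized kernels of (12.2)-(12.4) with hypothetical parameter values:

```python
import numpy as np

a, eta, gam, delta = 1.0, 1.0, 2.0, 1.0   # hypothetical parameter values

def f1_kernel(x, alpha):   # alpha < 1: generalized type-1 beta form (finite range)
    base = 1 - a * (1 - alpha) * x ** delta
    return np.where(base > 0, x ** gam * base ** (eta / (1 - alpha)), 0.0)

def f2_kernel(x, alpha):   # alpha > 1: generalized type-2 beta form
    return x ** gam * (1 + a * (alpha - 1) * x ** delta) ** (-eta / (alpha - 1))

def f3_kernel(x):          # alpha -> 1: generalized gamma form
    return x ** gam * np.exp(-a * eta * x ** delta)

x = np.linspace(0.1, 3.0, 5)
print(np.max(np.abs(f1_kernel(x, 0.999) - f3_kernel(x))))   # tiny: alpha -> 1 from below
print(np.max(np.abs(f2_kernel(x, 1.001) - f3_kernel(x))))   # tiny: alpha -> 1 from above
```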
This model has also been extended to the complex rectangular matrix-variate case, see Mathai and Provost (Linear Algebra and Its Applications, 410(2005), 198-216). Note that (12.2) with $\gamma = 0$, $\delta = 1$, $a = 1$, $\eta = 1$ is Tsallis statistics in nonextensive statistical mechanics. The function, without the normalizing constant $c_1$, is then
$$g(x) = [1 - (1-\alpha)x]^{\frac{1}{1-\alpha}}, \tag{12.5}$$
which is Tsallis statistics. This can be generated by optimizing the Tsallis entropy, or the Havrda-Charvát entropy with the denominator factor $1 - \alpha$ instead of $2^{1-\alpha} - 1$, subject to the constraint that the first moment is fixed, a condition that can be connected to the principle of conservation of total energy. Note that (12.5) is also a power function model. Mathai's students have introduced a pathway fractional integral operator based on (12.2), and a pathway transform based on (12.2) and (12.3). Models (12.2) and (12.3) can also be obtained by optimizing Mathai's entropy subject to two moment-type constraints, and the pathway parameter $\alpha$ can be derived in terms of the moments of $f_1(x)$ or $f_2(x)$. Thus, in terms of entropies one can establish an entropic pathway; in terms of distributions, as explained above, one can create a distributional pathway; and one can also look into the corresponding differential equations and create a differential pathway. In connection with fractional calculus, the Mittag-Leffler functions in current use are the following:
$$E_{\alpha}(x) = \sum_{k=0}^{\infty} \frac{x^k}{\Gamma(1 + \alpha k)}, \quad E_{\alpha,\beta}(x) = \sum_{k=0}^{\infty} \frac{x^k}{\Gamma(\beta + \alpha k)}, \quad E^{\gamma}_{\alpha,\beta}(x) = \sum_{k=0}^{\infty} \frac{(\gamma)_k\, x^k}{k!\, \Gamma(\beta + \alpha k)}.$$
There is no condition on the parameter $\gamma$. If these are to be written in terms of H-functions, then $\alpha$ and $\gamma$ have to be real and positive. A generalization can be made by introducing a general hypergeometric-type function, which may be written as
$$E^{a_1, \ldots, a_r}_{\alpha, \beta, b_1, \ldots, b_s}(x^{\delta}) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_r)_k\, (x^{\delta})^k}{k!\, \Gamma(\beta + \alpha k)\, (b_1)_k \cdots (b_s)_k},$$
where $(a_j)_k$ and $(b_j)_k$ are Pochhammer symbols. Convergence conditions can be worked out for this general form. A problem of interest in this case is a general Mittag-Leffler density, because such a density is needed in the areas of non-Gaussian stochastic processes and time series. Such a density was introduced based on $E^{\gamma}_{\alpha,\beta}(x^{\delta})$, and it is shown that such a model is connected to fat-tailed models and to the Lévy and Linnik models. Structural properties and asymptotic behavior are also studied, and it is shown that such models are not attracted to Gaussian models, see Mathai (Fractional Calculus & Applied Analysis, 13(1)(2010), 113-132), Mathai and Haubold (Integral Transforms and Special Functions, 21(11)(2011), 867-875).

Work on Krätzel Function and Krätzel Densities

Another area explored is the Krätzel function, the Krätzel transform and Krätzel densities. Since the Krätzel transform is important in the applied analysis area, a general density was introduced based on the Krätzel integral. The basic Krätzel integral is of the form
$$\int_0^{\infty} x^{\gamma}\, e^{-ax - \frac{y}{x}}\, dx, \quad a > 0,\ y > 0, \tag{14.1}$$
which can be generalized to the form
$$\int_0^{\infty} x^{\gamma}\, e^{-ax^{\alpha} - y x^{-\beta}}\, dx \tag{14.2}$$
for $a > 0$, $y > 0$, $\alpha > 0$, $\beta > 0$ or $\beta < 0$. The integrand in (14.1), normalized, is the inverse Gaussian density. The integral itself can be interpreted as a Mellin convolution of a product, the marginal density in a bivariate case, etc. The integral in (14.2) is connected to the general reaction-rate probability integral in reaction-rate theory ($\beta = 1/2$, $\alpha = 1$ gives the basic integral in reaction-rate theory), unconditional densities in Bayesian analysis, marginal densities in a
A Comparative Study of Retrorectus Mesh Placement Versus Properitoneal Mesh Placement in Open Repairs of Ventral Hernias

Background

Ventral hernias affect millions of patients each year. Surgery is the main line of management, and various techniques have been advocated; however, mesh repair has become the norm, and different approaches have been described regarding the plane of mesh fixation, none of which are standardized. Open repair is commonly practiced, and the two most commonly performed methods are retrorectus and properitoneal mesh placement.

Objectives

To compare the postoperative outcomes between the retrorectus plane and the properitoneal plane of mesh fixation in open ventral hernia repair.

Methods

Between September 2018 and August 2020, 56 patients with midline ventral hernia admitted to Ramaiah Hospital, Bengaluru were chosen for this prospective comparative study. Group A comprised 28 patients who underwent open retrorectus mesh repair, and the 28 patients in Group B underwent open properitoneal mesh repair. The postoperative outcomes were studied in terms of operating time, postoperative complications, and early recurrence at six months and 24 months post-surgery.

Results

The operative time for retrorectus mesh placement was significantly lower than for properitoneal mesh placement. The latter had a higher overall complication rate, with an incidence of 18%, seroma being the most common complication; however, the difference in complication rates was not statistically significant. Skin necrosis was identical in both groups, and no cases in either group had SSI or mesh infection. Three patients (10.71%) in the retrorectus group and two patients (7.14%) in the properitoneal group developed recurrence at 24 months of follow-up.

Conclusion

Retrorectus mesh repair and properitoneal mesh repair in open ventral abdominal hernias have equally good postoperative outcomes.

Introduction

Hernias of the anterior abdominal wall, or ventral hernias, may be congenital or acquired, and represent defects in the parietal abdominal wall fascia and muscle through which intra-abdominal or preperitoneal contents can protrude [1]. The most common ventral hernias are the incisional and paraumbilical hernias, which constitute about 85% of all ventral hernias [2]. Surgery is the main line of management. There is a higher chance of recurrence following ventral hernia surgery than following inguinal hernia surgery. Over the years, surgeons all over the world have tried various techniques for the repair of ventral hernias; while each procedure had its own advantages and disadvantages, meshplasty was considered to be the best. Subsequently, it was realized that the placement and fixation of the mesh were crucial in determining the outcome of the repair [3].

Laparoscopic hernia repair has become the standard of care for ventral hernia. It has the advantages of less pain, small wounds, lower chances of surgical site infection (SSI), shorter hospital stays, and early recovery. However, laparoscopic hernia repair has a steep learning curve, with the higher cost of equipment as well as mesh and fixation devices being the major disadvantages. In a resource-restricted setting, open hernia repair is preferred over laparoscopic hernia surgery.
Several mesh repair techniques have been developed, including the placement of onlay mesh (over the fascia), inlay mesh (a bridging repair of the fascial defect with little or no fascial overlap), retrorectus (sublay) mesh, properitoneal mesh, and underlay mesh. Recurrence rates after open repair of up to 20% have been reported and are influenced by mesh size and fixation type [4].

Properitoneal meshplasty is one of the most common open procedures advocated for ventral hernias, but numerous studies support the retrorectus approach because of its easier approach and better outcomes. Hence, a study comparing retrorectus meshplasty and properitoneal meshplasty was done.

Objective of the study

To study and compare the postoperative outcome between retrorectus mesh placement and properitoneal mesh placement in ventral midline hernias.

Study design

This prospective comparative study included all patients diagnosed with midline ventral hernia undergoing open mesh repair who met the inclusion and exclusion criteria, from the start of the study until the sample size was achieved. To achieve results that can be statistically analyzed, assuming an effect size of 0.8 (a mean difference of 0.4 between the groups), a power of 80% and an alpha error of 5%, the sample size was calculated to be a minimum of 28 in each group, for a total sample size of 56 (a worked sketch of this calculation is given at the end of this section).

Inclusion and exclusion criteria

Patients of both sexes aged 18 years and above with a diagnosed midline ventral hernia who were planned for open mesh hernioplasty were included in the study and followed up as per protocol for 24 months. Patients presenting with complications of ventral hernia (intestinal obstruction, incarceration, strangulation), pregnant patients, patients with recurrent hernias, and patients who were not willing to participate in the study were excluded.
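For reference, the stated numbers can be checked with a standard two-sample power calculation; a minimal sketch in Python (the authors do not report which software they used, and the reported minimum of 28 per group presumably also allows for rounding and possible attrition):

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test power analysis with the parameters stated in the text.
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(n_per_group)   # about 26 per group before any adjustment
```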
Inclusion and exclusion criteria

Patients of both sexes aged 18 years and above with a diagnosed midline ventral hernia who were planned for open mesh hernioplasty were included in the study and followed up as per protocol for 24 months.

Patients presenting with complications of ventral hernia (intestinal obstruction, incarceration, strangulation), pregnant patients, patients with recurrent hernias, and patients who were not willing to participate in the study were excluded.

Methodology

A total of 56 patients diagnosed with midline ventral hernia were included in the study. The included patients underwent complete history taking of preoperative symptoms, and the mode of presentation was recorded after clinical examination. After confirmation of the diagnosis, all patients had a hemogram, coagulation profile, liver function tests, and renal function tests, after which they underwent pre-anesthetic checkups. Of these, the 28 patients in Group A underwent retrorectus meshplasty, in which the polypropylene mesh was fixed posterior to the rectus muscle and anterior to the posterior rectus sheath with Prolene 2-0 sutures, and the 28 patients in Group B underwent properitoneal meshplasty, in which the polypropylene mesh was fixed posterior to the posterior rectus sheath and anterior to the peritoneum with Prolene 2-0 sutures; the size of the defect, the polypropylene mesh size, and the operating time were recorded in each case. In all cases a vertical midline incision was made. The hernia sac was identified and dissected free from the neck and subcutaneous tissue, the contents were reduced, and the sac was either excised or reduced. Vicryl No. 1 suture material was used for closure of the peritoneum and posterior rectus sheath with a continuous running suture. A mesh of appropriate size was placed in the properitoneal or retrorectus plane. The polypropylene mesh was sized to exceed the edge of the defect by at least 5 cm in all directions. The mesh was fixed with Prolene 2-0 transfascial sutures. The anterior rectus sheath was closed with a running suture using Prolene No. 1. A negative suction drain was placed in the subcutaneous plane when wide subcutaneous dissection had been done, and not in all cases. The postoperative outcome of all patients was assessed using the following parameters: postoperative pain according to a visual analog scale (VAS) ranging from 1 to 10, presence of a drain, duration for which the drain was kept, and postoperative complications such as seroma, skin necrosis, SSI, and mesh infection.

The patients were followed up in the short term after discharge from the hospital for suture removal between postoperative Days 8 and 14, and the status of the wound and any complications were recorded. Long-term follow-ups were conducted at three-, six-, and 24-month intervals, and any recurrence was recorded. If recurrence was suspected, the patient was asked to undergo soft tissue scans for confirmation.

Statistical analysis

Data were entered into a Microsoft Excel spreadsheet and analyzed using SPSS v.22 software (IBM Corp., Armonk, NY). Categorical data were presented as frequencies and proportions. The chi-square test or Fisher's exact test (only for 2x2 tables) was used as the test of significance for qualitative data. Continuous data were presented as mean and standard deviation. The independent t-test was used to test the significance of the mean difference between two quantitative variables. MS Excel and MS Word were used to produce the charts. A p-value of <0.05 was considered statistically significant, with all assumptions of the statistical tests respected.
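To make the Statistical analysis section concrete, here is a minimal sketch of the two significance tests it names; scipy stands in for SPSS, and the example data are hypothetical, not taken from the study tables.

```python
# Hypothetical illustration of the tests described above; the numbers are
# invented and scipy stands in for the SPSS procedures the authors used.
import numpy as np
from scipy import stats

# Independent t-test on a continuous outcome (e.g., operating time, minutes)
group_a = np.array([90, 85, 100, 95, 110, 88, 92, 105])     # retrorectus (hypothetical)
group_b = np.array([105, 120, 98, 115, 130, 102, 110, 125]) # properitoneal (hypothetical)
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

# Fisher's exact test on a 2x2 table of a categorical outcome
# (e.g., seroma yes/no by group; counts are hypothetical)
table = [[2, 26],   # retrorectus: seroma, no seroma
         [4, 24]]   # properitoneal: seroma, no seroma
odds_ratio, p_val = stats.fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_val:.3f}")
```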
Results

A total of 56 patients were included in the study. Each group had 28 patients, and the demographic data in the two groups were comparable. Most of the cases were in the age groups 31-40 years and 51-60 years, with an increase in the incidence of ventral hernias from the third to the sixth decade of life.

The female-to-male ratio was 0.9 (a male preponderance), with 29 (51.78%) males and 27 (48.21%) females in total. Among males, the 51-60-year age group had the highest representation, while among females the most common age group was 31-40 years.

The most common comorbidity was hypertension, in 14 (25%) patients, compared with diabetes, ischemic heart disease, asthma, and other comorbidities such as hypothyroidism, COPD, and chronic kidney disease; 32 (57.14%) patients had no comorbidities (Table 1). The mean BMI was 27.047 ± 4.182 kg/m² among subjects undergoing retrorectus meshplasty and 27.241 ± 5.640 kg/m² among subjects undergoing properitoneal meshplasty (Table 1). No statistically significant difference was found between the two groups with respect to BMI.

Paraumbilical hernias were the most common type of ventral hernia, accounting for 59% (N=33) of cases, and incisional hernia was the second most common, with 29% (N=16) of cases.

Paraumbilical hernias were the most common hernia among both males and females (Table 1). Incisional hernia was more common among females, with a female-to-male ratio of 3:1. Epigastric hernia was seen only in males, while spigelian hernia was noted only in females. Paraumbilical hernia showed a male preponderance, with a ratio of 1.35:1. A statistically significant difference was found between the two groups with respect to the type of hernia.

The mean duration of surgery was 95.89 ± 23.335 minutes for the retrorectus mesh hernioplasty group and 110.89 ± 29.252 minutes for the properitoneal mesh hernioplasty group (Table 2). A statistically significant difference was found between the two groups with respect to the duration of surgery. The majority of cases, 39 (69.64%), had a defect size between 2 and 4 cm, with a comparable number of cases undergoing retrorectus and properitoneal repair; the maximum defect size was 6 cm (Table 3). As depicted in Tables 4-5, as the size of the hernia defect increased there was a proportional increase in the duration of surgery for properitoneal hernia repair. The pain scores were comparable in both groups, with no significant pain requiring injectable analgesics by the end of Day 4, as shown in Figure 1. There was no statistically significant difference between the two groups with respect to the pain VAS on postoperative Days 1-5. Around 7% (N=2) of retrorectus cases and 14% (N=4) of properitoneal repair cases developed seromas, which were treated conservatively with aspiration. Skin necrosis was seen equally in both groups, with a 4% (N=1) incidence (Table 2).

Drain insertion was decided based on the amount of subcutaneous dissection and the size of the defect. Around 75% (N=21) of cases in the retrorectus group and 68% (N=19) of cases in the properitoneal group had a drain in situ, which in the majority of cases was removed between the third and fourth postoperative day in the retrorectus group and between the fifth and sixth day in the properitoneal group, once the drain output was less than 20 ml. One case (3.57%) in the retrorectus group and two cases (7.14%) in the properitoneal group had a prolonged drain in situ owing to extensive dissection and patient obesity.
Around 80% (N=45) of cases were discharged less than five days after surgery, of which 53% (N=24) were in the retrorectus group and 47% (N=21) in the properitoneal group (Table 2). Two cases in the properitoneal group had seromas, because of which these patients were discharged between the ninth and tenth postoperative days, and one case in the retrorectus group was discharged on the 20th postoperative day owing to an intestinal obstruction that resolved with conservative management.

No recurrence was noted at the end of three months in either group. Two patients (7.10%) in the retrorectus group and one patient (3.57%) in the properitoneal group developed recurrence at six months. Three patients (10.71%) in the retrorectus group and two patients (7.10%) in the properitoneal group developed recurrence at 24 months of follow-up (Table 2).

Discussion

Ventral hernias are among the common cases faced by general surgeons in regular practice. They belong to a broad group of hernias occurring in the anterior abdominal wall, which includes epigastric, paraumbilical, spigelian, and incisional hernias. The standard approach to the management of ventral hernia is fascial defect repair with mesh reinforcement of the abdominal wall.

In the last two decades, ventral hernia repair has undergone significant changes with respect to the approach, the choice of mesh, and the selection of the mesh plane. The focus of this study was to compare outcomes with regard to the selection of the mesh plane: retromuscular placement in the retrorectus plane versus retrofascial placement in the properitoneal plane.

The aim of mesh hernia repair surgery is to provide a strong, mobile, and physiologically dynamic barrier wall to prevent fascial weakness and recurrence.

The Rives-Stoppa (RS) hernia repair is a worldwide accepted open technique wherein the prosthetic mesh is placed in a preperitoneal-sublay fashion with >5 cm of mesh overlap beyond the hernia defect. It has the advantage of being a tension-free repair as well as providing maximum surface area for tissue ingrowth through the mesh. The original description of the Stoppa repair was for inguinal hernias, with prosthetic mesh placement in the intraparietal plane below the arcuate line, superficial to the peritoneum and deep to the transversalis fascia. A later modification of this technique for ventral hernias fixes the prosthesis posterior to the rectus abdominis muscles and anterior to the posterior rectus sheath, and is known as retrorectus mesh repair. These techniques, popularized more recently by Stoppa and colleagues, achieve three major goals of herniorrhaphy: (1) extensive overlap between the prosthesis and the fascial edges allows a tension-free closure as well as a large surface area for tissue incorporation; (2) the mechanical strength of the synthetic prosthesis reinforces the abdominal wall, especially when intra-abdominal pressure is increased; and (3) placement of the prosthesis adjacent to the vascular-rich rectus muscles facilitates tissue incorporation, promotes resistance to mesh infection, and allows interposition of autologous tissue between the prosthesis and the skin/subcutaneous tissues anteriorly and the peritoneum posteriorly [5].
Preperitoneal mesh repair has also been described in the literature as a modification of the RS technique, but here the mesh is placed between the peritoneum and the posterior rectus fascia instead of between the rectus muscle and the posterior rectus fascia. Developing the preperitoneal plane is often challenging when the abdominal wall layers have been entered previously in a prior laparotomy, but it can usually be dissected sufficiently to fully protect the underlying bowel from the mesh. It has the advantage of allowing a wide mesh overlap of the defect of 8 to 10 cm or more, with dissection deep into the pelvis and the space of Retzius down to the pubis; the preperitoneal plane is usually devoid of major vessels and allows a bloodless dissection. Retrorectus mesh repair is limited at the linea semilunaris, where the fasciae of the lateral abdominal muscles condense to re-form the anterior and posterior rectus sheaths. Preperitoneal plane dissection allows the surgeon to position and safely secure the mesh to the pubis and Cooper's ligaments, overlying the wings of the iliac bones, posterolaterally to the psoas muscles, and beyond the costal margins superiorly. Compared with the retrorectus plane, however, the properitoneal plane is much thinner and more fragile, leading to multiple tears if not handled with due care.

The overall sex distribution of ventral hernias in this study was 52% (N=29) males and 48% (N=27) females, a female-to-male ratio of 0.9 showing male preponderance. Male dominance was reported in similar studies conducted by Sultan et al. [6] and Malik et al. [7].

A study by Salameh et al. [8] showed that epigastric hernias are more frequent in men, with a male-to-female ratio of 3:1, and that the commonly involved age group is the third to fifth decade of life with a decreasing trend after the sixth decade, which corresponds to our findings.

In our study, the age of presentation, sex distribution, BMI, and comorbidities of the retrorectus meshplasty and properitoneal meshplasty groups showed no statistically significant differences that could affect the outcome of surgery. The average defect size was 2.714 ± 1.205 cm in the retrorectus group and 3.018 ± 1.364 cm in the properitoneal group.

The time taken to execute retrorectus mesh repair was 95.89 ± 23.33 minutes, while properitoneal mesh repair took 110.89 ± 29.25 minutes, a statistically significant difference between the groups with a p-value of 0.039. This could be attributed to the retrorectus plane being a virgin plane that is easier to develop, as the dissection is bloodless, while the properitoneal plane is challenging to develop because of the possibility of peritoneal tears in a violated abdomen (i.e., in incisional hernias with adhesions). The average defect size, 2.714 ± 1.205 cm in the retrorectus group and 3.018 ± 1.364 cm in the properitoneal group, showed no statistically significant difference, signifying that defect size did not influence the choice of mesh fixation plane.
The postoperative outcomes were studied in both groups, and there were no significant differences in pain scores between the two groups, with mean pain scores on postoperative Day 3 of 2.0 ± 0.9 in the retrorectus group and 2.18 ± 0.98 in the properitoneal group. This can be explained by the fact that both are similar kinds of surgery involving similar fascial dissection, differing only in mesh placement, which did not affect pain outcomes in the immediate or late postoperative periods. The presence of drains depended on the amount of subcutaneous fat and the extent of dissection in the subcutaneous plane, which did not affect the surgical postoperative outcomes of patients in either group.

The overall complication rate was 11% (N=3) in the retrorectus group, comprising seroma at 7% (N=2) and skin necrosis at 3.6% (N=1), and 18% (N=5) in the properitoneal group, comprising seroma at 14% (N=4) and skin necrosis at 3.6% (N=1). A systematic review conducted by Frank et al. [9], which involved 743 cases of retrorectus mesh placement, reported an overall complication rate of 17% and a seroma rate of 3%, comparable to our study. A decade-long prospective observational study on properitoneal ventral hernia repair conducted by Heniford et al. [10] reported an overall wound complication rate of 27.3% with an average postoperative stay of five days, while Novitsky et al. [11] reported a rate of 12.5%. The mean duration of postoperative stay was 5 ± 3 days in the retrorectus group and 5 ± 2 days in the properitoneal group; this difference was not statistically significant. In our study, SSI and mesh infection were not observed in either group, suggesting that both techniques are equally effective, as the mesh in both techniques lies below the muscular component, which prevents transmission of infection from the subcutaneous tissues to the mesh.

Early recurrence within three months was not noted in either group. Two patients (7.10%) in the retrorectus group and one patient (3.57%) in the properitoneal group developed recurrence at six months. Three patients (10.71%) in the retrorectus group and two patients (7.10%) in the properitoneal group developed recurrence at 24 months of follow-up. Although the recurrence rate was higher in the retrorectus group than in the properitoneal group, the difference was not statistically significant.

FIGURE 1: Graph showing the comparison of the pain VAS score between the two groups. VAS: visual analogue scale; POD: postoperative day

This study was conducted at Ramaiah Medical College and Hospitals after receiving ethical approval from the Ethics Review Board, Ramaiah Medical College and Hospitals (ERB registration number ECR/215/Inst/KA/2013/RR-22), with approval number EC/PG-33/2018.

TABLE 5: Duration of surgery according to type of repair and defect size. N: number
Effect of Community Based Rehabilitation on the Quality of Life of People with Locomotor Disabilities in Vadakkupalayam Village of Cuddalore District

Introduction: It is estimated that around 20% of the total disabled population is locomotor disabled, of whom 57% are from rural areas. These disabilities affect their quality of life. Rehabilitation measures extended to the affected population aim at improving their quality of life and economic independence. Such interventions have a better outcome when provided at the community level rather than to individual persons, and expanding Community Based Rehabilitation (CBR) is the need of the hour. In India, there are very few studies on CBR and the Quality of Life (QOL) of people with locomotor disability. Our study was intended as a pilot study for CBR. It aims to find out the effect of CBR on the quality of life of people with locomotor disabilities in Vadakkupalayam village of Cuddalore district in Tamilnadu, as this can give a sample picture for the implementation of future CBR programmes. Aim: 1. To find out the prevalence of locomotor disability in Vadakkupalayam village of Cuddalore district. 2. To find out the effectiveness of Community Based Rehabilitation (CBR) on the Quality of Life (QOL) of people with locomotor disabilities in the area. Materials and Methods: This study was conducted as a quasi-experimental study. Secondary data were collected and a questionnaire survey was done to identify people with locomotor disabilities (52), after which rehabilitation interventions were given. The WHODAS 2.0 scale was administered before and after the rehabilitation intervention, and the t-test was used for statistical analysis of the data obtained. Results: As the mean WHODAS score decreases, the quality of life of a person with locomotor disability increases. Comparison of pre- and post-intervention WHODAS scores showed a statistically significant difference (t=15.084, p<0.001). It can be inferred that CBR improved the quality of life of persons affected with locomotor disability among the participants selected in this study. Conclusion: Our study shows that Community Based Rehabilitation improved the quality of life of persons affected with locomotor disability among the study population. Creating public awareness through information booklets and mobile therapy vans will enable people, especially in rural areas, to benefit from the rehabilitative measures provided by the government and non-governmental organisations.

Introduction

In India, out of the 121 Crore (Cr) population, 2.68 Cr persons are 'disabled', which is 2.21% of the total population. According to the census of 2011 [1], 20% of the total disabled population is locomotor disabled. Among disabled males, 22% have a disability in movement. Among disabled females, 18% have a disability in movement and 8% have multiple disabilities. Among disabled non-workers with a disability in movement, 49.8% are dependents and 19.7% are students [2].

Locomotor disability includes a person with loss or lack of the normal ability to execute distinctive activities associated with the movement of self and objects from place to place. In general, the conditions may include paralysis of a limb or the body, deformity of a limb, loss of a limb, dysfunction of a limb, deformity of the joints of a limb, and deformity of the body other than the limbs.
It may be due to congenital and developmental causes like Cerebral Palsy (CP) and Congenital Talipes Equinovarus (CTEV), among other causes. Community Based Rehabilitation (CBR) consists of five key components: health, education, livelihood, social and empowerment. This study particularly concentrates on the health component.

The World Health Organisation (WHO) defines Quality of Life [5] as an individual's perception of their position in life in the context of the culture and value systems in which they live, and in relation to their goals, expectations, standards and concerns. It is a broad-ranging concept affected in a complex way by the person's physical health, psychological state, personal beliefs, social relationships and their relationship to salient features of their environment. All of these are significantly affected by a person's ability to move.

There have been many studies on community based rehabilitation in general, but studies on the effectiveness of CBR in locomotor disability, particularly in increasing quality of life, are much needed. As locomotor disability due to trauma is on the rise, this study will help in designing the rehabilitation approach and providing a better quality of life to the affected population.

This study was conducted in Vadakkupalayam village of Cuddalore district in Tamilnadu. It was done through regular outreach camps by the Department of Physical Medicine & Rehabilitation (PMR), Rajah Muthiah Medical College, with the help of The Leprosy Mission Trust India, which has been doing CBR work in Cuddalore and throughout India.

The WHODAS 2.0 scale [7] used in our study is a generic instrument developed by the WHO to provide a standardized, simple-to-administer method for measuring health and disability across cultures. Assessing a person's ability to move in this way reflects the person's quality of life, which is the motto of this study.

Aims and Objectives

1. To find out the prevalence of locomotor disability in Vadakkupalayam village of Cuddalore District, Tamilnadu.
2. To find out the effectiveness of Community Based Rehabilitation (CBR) on the Quality of Life (QOL) of people with locomotor disabilities in the area.

Materials and Methods

This quasi-experimental study was conducted in Vadakkupalayam village of Cuddalore district of Tamilnadu in three phases, from September 2017 to October 2018. The selection criteria included subjects affected by locomotor impairment due to any cause in the age group of 17 to 60 years. Informed consent was obtained from every participant. Secondary data were collected and a survey was done to identify people affected by locomotor disability. Fifty-two people were identified with locomotor disability and were provided with the questionnaire after a brief explanation of the study and the questionnaire. All 52 completed questionnaires were taken for data analysis. The number of days on which each patient's ability to work was affected was also obtained. WHODAS 2.0 scores were calculated by summing the item scores.

In phase 2, awareness was created about the latest disability act and the benefits given by the government through distribution of an information booklet. Participants were given exercises for mobility and occupational therapy to improve Activities of Daily Living (ADL) functions, and were provided with orthotic devices. They were also given instructions to carry out home-based training activities. Those who required tertiary care were evaluated and referred to higher centres.
Outreach camps were conducted using the therapy van for assessment, and physical modalities were provided for those with pain. In phase 3, the study population was assessed by an interviewer using the WHODAS 2.0 questionnaire after the rehabilitation interventions. The data obtained were statistically analysed using the t-test.

Statistical Analysis

Comparison of pre- and post-intervention WHODAS scores showed a statistically significant difference (t=15.084, p<0.001). Comparison of pre- and post-intervention WHODAS among males showed a statistically significant difference (t=12.471, p<0.001), while pre- and post-intervention work-affected days among males showed a statistically significant difference (t=7.005, p<0.001). Comparison of pre- and post-intervention WHODAS among females showed a statistically significant difference (t=8.243, p<0.001), while pre- and post-intervention work-affected days among females showed a statistically significant difference (t=7.415, p<0.001).
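A minimal sketch of the paired pre/post comparison reported above is shown below; the WHODAS 2.0 scores used are hypothetical stand-ins, and scipy is assumed in place of whatever statistical package was actually used.

```python
# Paired t-test on pre- vs post-intervention WHODAS 2.0 scores.
# The score arrays below are hypothetical stand-ins, not the study data.
import numpy as np
from scipy import stats

pre = np.array([38, 45, 52, 30, 41, 48, 36, 55, 43, 40])   # pre-intervention scores
post = np.array([30, 36, 44, 25, 34, 40, 29, 47, 35, 33])  # post-intervention scores

t_stat, p_val = stats.ttest_rel(pre, post)  # paired (related-samples) t-test
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")
# A significant positive t with lower post scores mirrors the paper's finding
# that mean WHODAS fell after CBR (lower WHODAS = better quality of life).
```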
Results

The total number of locomotor disabled people identified was 52 out of 152 disabled persons, accounting for 34%. Among them, 59.6% were male and 40.4% female. The age groups of participants diagnosed with locomotor disability were 26.9% (21-30 years), 25% (31-40 years), 21.2% (41-50 years) and 17.3% (more than 50 years). In this study, degenerative causes were the reason behind 51.92% of locomotor disability, followed by vascular causes (13.46%) and acquired infective causes (11.53%). Comparison between pre- and post-intervention showed that as the mean WHODAS 2.0 score decreases, the quality of life of a person with locomotor disability increases. The number of work-affected days decreased after the rehabilitation intervention.

Discussion

Early identification and treatment can reduce the severity of locomotor disability. People in rural areas and of low socioeconomic status are deprived of these rehabilitation interventions due to lack of awareness and financial constraints. Regular outreach camps serve as a valuable tool in identifying weakness at an early stage so that corrective measures can be taken to prevent worsening of the condition. In this study, we took Vadakkupalayam village in Cuddalore district of Tamilnadu as a model village for Community Based Rehabilitation. Our study shows that many people with locomotor disability in rural areas remain unidentified and can benefit from a CBR program. A study by Amaritchavaran et al. [8] reveals that there are many persons with untreated disabling conditions in rural communities, and that a significant number of them can benefit from medical treatment and rehabilitation. In our study, we were able to identify 52 people with locomotor disabilities in a population of 4,961, and eight of them had no history of previous treatment and were referred to higher centres after counselling.

Sustainability of the program is another important requirement determining its success. Jay Kumar et al. [9] showed in their study the sustainability of interventions through activities involving community structures. In our study, we were able to identify people with locomotor disabilities and train them to develop in-house orthotic devices using the resources available to them. We also taught them home-based self-training exercises. Establishing and promoting community structures to support CBR sustainability is necessary for the success of any CBR program.

CBR initiatives appear to be most beneficial to those with mild physical disability. In our study, among the causes of locomotor disability, trauma accounted for the locomotor disabilities of the most severe nature; 3 out of 52 participants were severely affected on several parameters of quality of life. Similar findings were obtained by Sandip Dhole et al. [10], with traumatic locomotor permanent disability constituting 14.97%.
Influencing mechanism of an external magnetic field on fluid flow, heat transfer and microstructure in aluminum resistance spot welding

ABSTRACT The magnetically assisted resistance spot welding (MA-RSW) method provides a promising approach for improving weld performance. However, the MA-RSW process involves complex multiphysics fields that make numerical modeling very challenging. Thus, most resistance spot welding (RSW) models adopt electrothermal-mechanical coupling and ignore the influence of fluid flows caused by electromagnetic forces on nugget growth. In this paper, a magnetohydrodynamic (MHD) model, which couples the electric, thermal, magnetic and flow fields, is developed to explore the influencing mechanism of an external magnetic field (EMF) on the flow patterns and microstructures of aluminum resistance spot welds. Numerical results show that the application of an EMF leads to the formation of a Lorentz force in the liquid nugget, which transforms the flow pattern from an in-plane flow to a combined in-plane and out-of-plane flow. The flow velocity of molten metal in the MA-RSW process is 5 times higher than that of the RSW process, and the maximum temperature of the MA-RSW model is reduced from 774°C in the MHD RSW model to 725°C. Therefore, the MA-RSW joint exhibits a 25.5% larger nugget diameter, a more than 20% finer grain structure, and fewer defects than the RSW joint in the experimental results.

Introduction

Resistance spot welding (RSW) is an important joining method in the automobile, locomotive and aerospace industries (Wei & Wu, 2012) because of its high efficiency, low cost and high degree of automation (Florea et al., 2012). As lightweight materials that promote energy savings and emission reductions, aluminum alloys have been widely used because of their low density, high specific strength and good corrosion resistance (Kang et al., 2016). However, aluminum alloy has a lower electrical resistivity and a higher thermal conductivity than steel, which makes it necessary to use 3-5 times the welding current compared with steel RSW. The highly concentrated instantaneous heat input increases the risk of early expulsion, hot cracks, shrinkage cavities, and insufficient nugget diameters, resulting in low weld performance.

To improve weld quality in an energy-efficient manner, magnetically assisted resistance spot welding (MA-RSW) technology was proposed by Shen et al. (2011). Experimental studies on dual-phase steel, stainless steel (Li et al., 2018), aluminum alloy (Huang et al., 2020), magnesium alloy (Yao et al., 2014) and Al/Ti dissimilar materials (Li et al., 2015) have shown that an external magnetic field (EMF) can enlarge the nugget size, reduce shrinkage defects, and improve the mechanical performance of weld joints. However, these studies were based on experimental methods, and the control mechanism of an EMF on the flow behavior of the molten metal could not be observed because the weld formation process is completely invisible, which limits further improvement of weld performance. Numerical modeling is a mainstream method for exploring the multiphysics mechanism inside an enclosed weld nugget.
However, most of the existing RSW numerical models have neglected the fluid flow caused by the induced magnetic field (IMF) because of the extreme complexity of the multiphysics model, and they could not truly reveal the nature of the temperature evolution during nugget growth (Bi et al., 2016; Deng et al., 2020; Moshayedi & Sattarifar, 2012; Wan et al., 2016). To understand the magnetohydrodynamic (MHD) behavior in liquid nuggets, Wei et al. (1996) pioneered the study of unsteady transport phenomena in RSW by using a 2D finite-difference model. Li et al. (2007) proposed a multiphysical finite element model to study the MHD behaviors in the mild steel RSW process. Both models for the RSW process showed that the Lorentz force resulting from the interaction of the welding current and its IMF has an important influence on nugget growth and temperature distribution. However, the abovementioned models were for the steel RSW process, and there is no research on the effect of an IMF on the flow and heat transfer behavior of aluminum alloy materials in the RSW process. Furthermore, the effect of the EMF on the MA-RSW process remains unclear.

In the MA-RSW process, the existing 2D models are no longer applicable because the external electromagnetic field is three-dimensional in nature. In addition, differing from the thermo-electro-structural coupling mechanism of most previous studies, the complexity of the coupling mechanism of the MA-RSW model increases greatly when the EMF and the flow field are considered. In recent years, computational fluid dynamics (CFD) has become an important means of modeling various complex phenomena (Abadi et al., 2020; Granados-Ortiz et al., 2021). For example, Ramezanizadeh et al. (2019) used CFD to investigate the heat transfer of nanofluidic thermosyphon heat exchangers. Therefore, how to couple the electric and magnetic fields in a CFD model is the major difficulty and challenge of this research. To this end, a more complex model is needed to investigate the MHD behaviors in the MA-RSW process.

In this study, a 3D numerical model is established using ANSYS software to simulate the MA-RSW process of an aluminum alloy. The model constructs a four-field coupling mechanism of the electrical, thermal, magnetic and flow fields. Physical measurements of the EMF and the weld profile are used to validate the model. Contrastive analyses throughout the RSW and MA-RSW processes are performed in terms of the magnetic field, magnetic force distribution, flow pattern, and temperature field. Finally, the influencing mechanism of the EMF on the weld macromorphology and microstructure is discussed based on experiments and simulations.

Governing equations

The MHD behaviors in MA-RSW are very complex due to the combined action of both an IMF and an EMF. To reduce the model complexity and improve the calculation efficiency, the following assumptions are made:

(1) During the heating and melting process, the density of the workpieces does not change with temperature and pressure (Li et al., 2007).
(2) The charge density ρ_q of the molten metal in the nugget is equal to zero.
(3) Gravity effects on the fluid flow in nuggets can be ignored because of the short welding times in the MA-RSW process (Khan et al., 2000).
(4) The density of the workpieces does not change during the phase transformation process.
(5) Viscous stress exists between the liquid fluid particles, and there is no slip velocity at the solid-liquid phase boundary.
(6) The solute diffusion effect in the solid is ignored because of the small solute diffusivity coefficient (Wei and Yeh, 1991).

The electromagnetic field can be described by the Maxwell equations, the continuity equation and the constitutive relations of the media as follows:

\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad \nabla \times \mathbf{H} = \mathbf{J}, \quad \nabla \cdot \mathbf{B} = 0, \quad \mathbf{B} = \mu_r \mu_0 \mathbf{H}, \quad \mathbf{B} = \nabla \times \mathbf{A}

where E is the intensity vector of the electrical field, H is the intensity vector of the magnetic field, B is the magnetic flux density vector, J is the current density vector, μ_r is the relative permeability, μ_0 is the absolute permeability in a vacuum environment, and A is the magnetic vector potential.

The current density can be expressed as:

\mathbf{J} = -\frac{1}{\rho_e} \nabla \phi

where J is the current density, φ is the electrical potential, and ρ_e is the electrical resistivity of the material.

The flow field continuity equation can be simplified to:

\nabla \cdot \mathbf{V} = 0

where V is the velocity vector. The Navier-Stokes equation describing the conservation of MHD momentum can be expressed as:

\rho \left( \frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V} \cdot \nabla)\mathbf{V} \right) = -\nabla p + \mu_e \nabla^2 \mathbf{V} + \mathbf{f}_b

where p is the pressure, f_b is the body force, and μ_e is the viscosity coefficient.

Based on the assumption of an incompressible fluid, the energy equation can be simplified as follows:

\rho C_M \left( \frac{\partial T}{\partial t} + \mathbf{V} \cdot \nabla T \right) = \nabla \cdot (k \nabla T) + S, \quad C_M = C_P - L \frac{\partial f_s}{\partial T}

where C_M and C_P are the modified specific heat capacity and the specific heat capacity, respectively, k is the heat conductivity, L is the latent heat, and f_s is the solid phase fraction. The f_s can be expressed as:

f_s = \begin{cases} 1, & T \le T_s \\ \dfrac{T_l - T}{T_l - T_s}, & T_s < T < T_l \\ 0, & T \ge T_l \end{cases}

where T_s is the solidus temperature and T_l is the liquidus temperature.

The thermal power density S can be further described as:

S = \frac{|\mathbf{J}|^2}{\sigma}

with σ taken as σ_b, σ_w/e or σ_w/w, where σ_b, σ_w/e, and σ_w/w are the conductivities of the bulk material, the workpiece-electrode interface and the workpiece-workpiece interface, respectively.
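The solid-fraction and modified-heat-capacity relations above translate directly into code. The following is a minimal sketch assuming a linear f_s and illustrative AA6061-like values for the solidus, specific heat and latent heat (only the 652°C liquidus appears later in the paper; the other constants are assumptions).

```python
# Minimal sketch of the apparent-heat-capacity (latent heat) treatment above.
# T_S, C_P and LATENT are illustrative assumptions for an Al alloy; T_L = 652
# matches the melting temperature quoted later in the paper.
import numpy as np

T_S, T_L = 582.0, 652.0      # solidus / liquidus, degC (T_S assumed)
C_P = 900.0                  # specific heat, J/(kg*K) (assumed)
LATENT = 3.9e5               # latent heat of fusion, J/kg (assumed)

def solid_fraction(T):
    """Linear solid fraction f_s between solidus and liquidus."""
    return np.clip((T_L - T) / (T_L - T_S), 0.0, 1.0)

def modified_heat_capacity(T):
    """C_M = C_P - L * df_s/dT; df_s/dT = -1/(T_L - T_S) in the mushy zone."""
    in_mushy = (T > T_S) & (T < T_L)
    return np.where(in_mushy, C_P + LATENT / (T_L - T_S), C_P)

T = np.array([500.0, 600.0, 652.0, 700.0])
print(solid_fraction(T))          # [1.0, ~0.74, 0.0, 0.0]
print(modified_heat_capacity(T))  # latent-heat spike only inside the mushy zone
```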
Simulation model

During the MA-RSW process, the liquid nugget undergoes not only in-plane flow but also out-of-plane flow, as reported by Qi et al. (2020). To improve computational efficiency, an 18° three-dimensional (3D) sector model is used for the electro-thermal and electro-magnetic field analyses. As shown in Figure 1(a), for the electro-thermal field model the welding current is applied on the top surface of the upper electrode as the loading condition. The boundary conditions are as follows: the electrical potential at the lower electrode arm is fixed to zero; heat flux conditions are applied to the electrode inner wall for water cooling and to the outer surfaces of the electrode and workpieces for air cooling, with corresponding convection coefficients of 3800 W/(m²·°C) and 19.4 W/(m²·°C), respectively; and the environment temperature is set at 21°C. Figure 1(b) shows the electro-magnetic field model. The current density is applied to the electrodes and workpieces as the loading condition. The magnetic coercive forces of the upper and lower permanent magnets are set as 955000 A/m and −955000 A/m, respectively. The parallel boundary condition AZ = 0 is applied to the air layer, and the boundary conditions AX = 0 and AY = 0 are applied to the symmetry axis for the centrally axisymmetric magnetic field. Because of the multidirectional fluid flow in the MA-RSW process, a full 360-degree 3D thermo-flow field model is needed, as shown in Figure 1(c). The loading conditions consist of Joule heating in the workpiece and heat flux through the contact pairs. The boundary conditions consist of zero-velocity constraints on the outer surface of the workpiece, radial zero-velocity constraints on the axis of the model, and axial zero-velocity constraints on the workpiece-workpiece interface.

Material properties

The RSW process is a typical transient multiphysics coupling process. To improve the accuracy of the simulation results, the material properties of the workpiece are defined as temperature-dependent. The properties of the Al sheet and the copper electrode caps, such as electrical resistivity, thermal conductivity, and specific heat, are exhibited in Figure 2 and Table 1 (Riahi & Nazari, 2011; Wan et al., 2016; Wang et al., 2015). The electrical contact conductance (ECC) and thermal contact conductance (TCC) are also essential for the calculations in the RSW process. According to published data (Wang et al., 2015; Tuchtfeld et al., 2019), the ECC for the Cu/Al and Al/Al interfaces can be calculated and is given in Figure 3. In addition, the effect of TCC on nugget growth can be ignored, since there is no structural field in this model and the electrode tip and workpiece are in an as-received condition. The TCC of the Cu/Al and Al/Al contact surfaces is set to a value of 2 × 10^5 W/(m²·°C).

In the analysis of the thermo-flow field, both the liquid phase and the solid phase exist in the workpiece during the MA-RSW process. To handle the changing liquid-solid interface, this paper introduces an artificial viscosity method that treats the entire workpiece as a fluid. As shown in Table 2 (Jeng and Chen, 1997; Noshadi et al., 1998), when the temperature of the workpiece is lower than its melting point, a very high viscosity (10^9 kg/(m·s)) is assigned to ensure that the solid metal does not flow; when the temperature reaches or exceeds the melting point, the true viscosity (1.38 × 10^−3 kg/(m·s)) is assigned to allow the flow and heat transfer of the liquid metal under the action of the electromagnetic force. Moreover, the EMF is produced by a pair of ring-shaped Nd-Fe-B permanent magnets with a relative permeability of μ_r = 1.05. The detailed parameters are listed in Table 3.

Solution procedure

The solution procedures of the RSW and MA-RSW processes are given in Figure 4. Sequential coupling is conducted at each time increment. At the beginning of the welding stage, the electro-thermal field is first calculated to obtain the initial current density and temperature distribution in the workpieces. The current density and temperature distribution are then introduced as loads into the electro-magnetic field model and the thermo-flow field model, respectively. Next, the electromagnetic force distribution under the action of the EMF and the IMF is calculated in the electro-magnetic field model and output to the thermo-flow field model. After that, the new temperature distribution is calculated in the thermo-flow field under the loads of Joule heat and electromagnetic force, and output back to the electro-thermal field model for the next loop.
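The sequential four-field coupling loop just described can be summarized in the following sketch; the "solver" functions are trivial scalar stand-ins for the ANSYS field solutions and show only the data flow between the fields, not a real implementation.

```python
# Sketch of the sequential four-field coupling loop described under
# "Solution procedure". The "solvers" below are trivial scalar stand-ins
# for the ANSYS field solutions, used only to show the data flow.

def solve_electro_thermal(T, weld_current):
    # Joule heating raises the temperature; J here is a scalar stand-in.
    J = weld_current / 1.0e-4          # current / nominal contact area
    return J, T + 0.5                  # provisional temperature rise

def solve_electro_magnetic(J, coercive_force):
    # Lorentz body force from the induced + external fields (stand-in).
    return 1.0e-7 * J + 1.0e-9 * coercive_force

def solve_thermo_flow(T, f_em):
    # Convection redistributes heat; stronger stirring lowers the peak.
    V = 1.0e-8 * f_em                  # flow velocity stand-in
    return V, T - 0.1 * V              # convection-corrected temperature

def run_marsw(T0=21.0, weld_current=30e3, coercive=955e3, n_steps=100):
    T = T0
    for _ in range(n_steps):
        J, T = solve_electro_thermal(T, weld_current)   # step 1: E-T field
        f_em = solve_electro_magnetic(J, coercive)      # step 2: E-M field
        V, T = solve_thermo_flow(T, f_em)               # step 3: thermo-flow
        # step 4: updated T feeds back into the next time increment
    return T, V

print(run_marsw())
```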
Experimental procedures

In this study, two identical 3.0-mm-thick AA6061-T6 sheets were used as the research material for the numerical model and the experiments. The workpieces were sheared into 38 × 130 mm coupons. The oxide film of the aluminum alloy was removed by mechanical grinding. As shown in Figure 5(a), a medium-frequency direct current (MFDC) welding machine integrated with a FANUC R-2000i robot was used to weld the workpieces. A pair of C15000 copper alloy electrodes with a tip diameter of 12 mm was adopted. The MA-RSW apparatus consists of a pair of mutually repelling circular permanent magnets with rubber covers, as shown in Figure 5(b). The detailed parameters of the permanent magnets are listed in Table 3. The distance between the permanent magnet and the workpiece is kept at 1 mm to prevent demagnetization of the permanent magnet at high temperature. Based on the characteristics of the aluminum workpiece, the optimized welding parameters used in the numerical calculation and model validation were determined as shown in Figure 6. Figure 7 shows the procedure for measuring the magnitude of the EMF in the nugget region. The distance between the upper and lower permanent magnets is fixed at 8 mm, consistent with the experimental process and the simulation model. The Hall probe of the Gauss meter is placed vertically on the central axis to measure the radial magnetic field component, and is then traversed along the x-axis.

For the preparation of metallographic samples, the cross-sectioned samples were first mounted, polished with silicon carbide papers and etched with Keller's reagent (95 ml H2O + 2.5 ml HNO3 + 1.5 ml HCl + 1.0 ml HF). The macro- and microstructures of the weld nuggets were photographed with Leica optical microscope systems. The electron backscatter diffraction (EBSD) tests were carried out with a Tescan Mira3 scanning electron microscope.

Mesh independence verification

To eliminate the effect of the mesh count on the accuracy of the simulated results, Table 4 exhibits the mesh independence verification of the RSW and MA-RSW models with five schemes of different mesh numbers. Once the mesh count exceeds that of mesh scheme B, the change in the maximum temperature in the traditional RSW and MA-RSW nuggets at 0.3 s is less than 1%. Therefore, balancing accuracy and computational efficiency, mesh scheme B is selected as the computational grid.

Verification of the intensity of the external magnetic field

To ensure the accuracy of the electro-magnetic field, this study compares the intensity of the EMF on the workpiece/workpiece contact surface between the simulated and experimental results, as shown in Figure 8. The radial magnetic field intensity of the simulated result is extracted along the same measuring path as the experimental results. The calculated result agrees with the changing trend of the measured result, and the simulation error of each measuring point is relatively stable. Since the probe of the Gauss meter measures the magnetic flux over a small square area rather than at a point, there is a ±2% systematic error in the measurement.

Figure 9 and Table 5 show the experimental and calculated results for the RSW and MA-RSW nuggets. The calculated nugget morphology and size are basically consistent with the experimental results, with an average simulation error of less than 10%. There are two main reasons for the errors between the calculated and experimental results: 1) this model mainly focuses on the MHD behaviors in the RSW and MA-RSW processes and ignores the influence of workpiece indentation and deformation on nugget formation, which leads to errors in nugget thickness; and 2) this model uses the idealized artificial viscosity method in the thermo-flow field, so the analysis of the viscoelastic flow behavior in the mixed solid-liquid zone during melting is insufficient, which affects the calculated thermo-flow field.

Distribution of the electromagnetic force under the effect of a magnetic field

To reveal the mechanism by which an EMF influences the flow pattern of the liquid nugget, this paper compares the magnetic field and magnetic force distributions during RSW and MA-RSW.
As shown in Figure 10(a), in the RSW process the IMF is produced by the vertical welding current and exists in the region of the nugget (refer to Figure 11). It is distributed uniformly along the circumferential direction on the horizontal plane A-A. In the MA-RSW process, however, the magnetic field in the nugget consists of two components, as shown in Figure 10(b). The first is the circumferential IMF produced by the welding current. The second is the external radial magnetic field generated by the permanent magnets. The intensity of the magnetic field during both RSW and MA-RSW first increases and then decreases in the radial direction from the symmetry axis to the workpiece edge. The intensity of the IMF reaches a maximum at the brim of the nugget because the current density is highest at the edge of the electrode (refer to Figure 11). In the notch region, by contrast, the air gap makes it difficult for a current path to form, and consequently the current density and the IMF intensity decrease rapidly. On the other hand, although an EMF is introduced in the MA-RSW process, the maximum intensity of the MA-RSW combined magnetic field is only 0.03 T higher than that of RSW, because the intensity of the EMF is far less than that of the IMF. Therefore, the direction of the MA-RSW magnetic field rotates from the tangential direction of the IMF slightly toward the radial direction of the EMF (see sectional drawing B-B of Figure 10(b)).

As shown in Figure 12(a), according to Fleming's left-hand rule, the electromagnetic force generated by the IMF exists only in the x-z planes in the RSW process. The in-plane induced electromagnetic force points toward the central axis and increases gradually from the Al/Al contact surface to the Al/Cu contact surface, as illustrated in sectional drawing A-A of Figure 12(a). As shown in Figure 12(b), both the in-plane (x-z plane) induced magnetic force and the out-of-plane (x-y plane) external magnetic force exist in the welding region of the MA-RSW process. The direction of the combined electromagnetic force in the nugget of MA-RSW basically points toward the central axis of the electrode, but the vector direction rotates slightly in the circumferential direction, as shown in sectional drawing B-B of Figure 12(b). In addition, the magnitude of the external electromagnetic force decreases gradually from approximately 0.002 N/m³ at the nugget edge to 0 N/m³ at the nugget center, which is much lower than that of the induced electromagnetic force in the nugget region. However, this small contribution produces a considerable impact on the flow pattern inside the nugget.
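The direction arguments in this section reduce to the Lorentz body force f = J × B. The sketch below uses illustrative magnitudes (not values from the paper) to show that crossing the axial current with the circumferential induced field yields a radially inward, in-plane force, while crossing it with the radial external field yields a circumferential, out-of-plane force.

```python
# Lorentz body force f = J x B at a point on the +x side of the nugget.
# Cartesian frame at that point: x = radial, y = circumferential, z = axial.
# Magnitudes are illustrative, not values from the paper.
import numpy as np

J = np.array([0.0, 0.0, -5.0e8])     # axial welding current density, A/m^2 (downward)
B_ind = np.array([0.0, -0.05, 0.0])  # circumferential IMF of the -z current, T
B_ext = np.array([0.02, 0.0, 0.0])   # radial field from the permanent magnets, T

f_in = np.cross(J, B_ind)   # [-2.5e7, 0, 0]: radially inward -> in-plane flow
f_out = np.cross(J, B_ext)  # [0, -1.0e7, 0]: circumferential -> stirring flow

print("induced-field force :", f_in)
print("external-field force:", f_out)
```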
Fluid flow pattern and evolution in the weld nugget

The flow pattern of the liquid metal plays an important role in the growth process of the nugget. During the welding stage of the RSW process, as shown in Figure 12(a), the induced magnetic force gradually strengthens from the faying surface to the nugget brim along the thickest section of the nugget. This nonuniform magnetic force gradient means that the liquid metal at the edge of the nugget brim experiences a stronger force, which causes the liquid nugget to form a counter-clockwise flow pattern, as illustrated in Figure 13(a1). As the nugget grows, as shown in Figure 13(b1) and (c1), the rotation cores in each quarter gradually move from the center of the nugget to the edge, because the nugget grows far more slowly in the thickness direction than in the diameter direction.

In addition, the molten metal exhibits in-plane flow solely in the radial plane, with no circumferential velocity component, as shown in Figure 14. The maximum flow velocity of the liquid nugget in the aluminum alloy RSW process is 2.35 m/s, approximately 5 times faster than the 0.48 m/s of the liquid nugget in the mild steel RSW process calculated by Li et al. (2011). This difference is mainly attributed to two factors. First, the welding current of aluminum alloy RSW is usually 2-6 times higher than that of the steel RSW process, so the IMF and the electromagnetic force are significantly enhanced. Second, the viscosity of the AA6061 aluminum alloy is 0.00138 kg/(m·s), much lower than the 0.006 kg/(m·s) of mild steel, so the loss caused by the internal friction of the fluid flow is significantly reduced.

In the MA-RSW process, the molten metal has not only a radial velocity component but also a circumferential velocity component, as shown in Figure 14. At the beginning of the MA-RSW process, the circumferential flow velocity is relatively low, and the molten metal in the weld nugget mainly exhibits in-plane flow, as in the RSW process (Figure 13(a2)). During the middle and late welding stages, as illustrated in Figure 13(b2) and (c2), the circumferential stirring effect of the external magnetic force increases gradually with the growth of the nugget. Thus, the circumferential flow gradually replaces the in-plane flow as the dominant flow pattern. As shown in Figure 15, the maximum compound flow velocity reaches 11.43 m/s, more than five times faster than that of RSW. Furthermore, under the action of the out-of-plane high-speed flow in the MA-RSW nugget, the in-plane flow velocity of MA-RSW is significantly enhanced, reaching approximately twice the velocity of RSW. This occurs mainly because the in-plane flows in the four quadrants of the RSW process cancel each other out near the symmetry-axis region, whereas the molten metal of the MA-RSW nugget accelerates continuously in the circumferential direction under the continuous stirring effect of the electromagnetic force, without this cancellation.

Figure 13(d1)-(f1) shows the holding stage of the RSW process. The molten metal still undergoes in-plane flow under the action of inertial forces, but the flow pattern becomes more complex and multiple rotation cores form. The flow velocity drops sharply to below 0.01 m/s, as shown in Figure 15. In the holding stage of MA-RSW, by contrast, as shown in Figure 13(d2)-(f2), the molten metal continues to undergo high-speed circumferential flow due to the inertia effect. This high-speed flow transforms the solidification mode of the MA-RSW nugget, which is discussed in the following sections.

Distribution of the temperature field and evolution in the weld nugget

The stirring effect of the EMF changes the flow pattern in the liquid nugget, which causes the temperature field of the MA-RSW weld to differ significantly from that of the RSW weld. Previous numerical models did not include the MHD behaviors of the RSW process. This section compares the simulation results of three kinds of models, i.e. the traditional RSW model (only the electro-thermal field is considered), the MHD-RSW model (four-field coupling), and the MA-RSW model (four-field coupling), to reveal the MHD behaviors during the RSW and MA-RSW processes.
Figures 16 and 17 show the evolution of the temperature field and the temperature histories of four special points in the weld nugget calculated using the three types of numerical models, respectively. To reveal the temperature gradient of the liquid nugget, the unmelted zone below the melting temperature of 652°C is colored gray. Before melting, as shown in Figure 17(a), the temperature of the workpiece increases rapidly under the action of the contact and bulk resistances, then rises slowly when it reaches the phase transformation region. The three types of models have the same temperature history in this period, in which fluid flow has no effect. Once the temperature exceeds the liquidus, as shown in Figure 16(a1)-(c1), the traditional RSW model predicts that the temperature of the nugget continues to rise rapidly until the last welding moment, with the high-temperature region concentrated at the center of the nugget, consistent with most existing research results. In contrast, when the IMF and the flow field are considered in the MHD-RSW model, as shown in Figure 16(a2)-(c2), the high-temperature region in the nugget gradually shifts from the center to the edge under the effect of the in-plane flow (refer to Figure 13(a1)-(c1)). Owing to the convection between the high-temperature and low-temperature metal in the nugget, the maximum temperature calculated by the MHD-RSW model is reduced to 774°C, from the 952°C calculated by the traditional RSW model.

During the MA-RSW process, the temperature distribution is affected by both the in-plane flow and the out-of-plane high-speed flow. Figure 17(c) and (d) shows that at the beginning of nugget growth, the rate of temperature increase at points P3 and P4 in the MA-RSW process is much higher than in the MHD-RSW process, which means that the introduction of the EMF can effectively increase the growth rate of the nugget diameter. Furthermore, as shown in Figure 16(a3)-(c3), the compound high-speed flow further decreases the maximum temperature and the temperature gradient in the weld nugget. The maximum temperature is reduced to 725°C, and the energy efficiency is further improved.

During the holding stage, as shown in Figure 16(c1)-(f1), the molten metal solidifies rapidly from the nugget edge to the center because of the water and air cooling, and the high-temperature region remains at the nugget center. In the MHD-RSW process, however, as illustrated in Figure 16(c2)-(f2), the high-temperature region contracts from both sides of the nugget edge to the center along the faying surface as the flow velocity decreases rapidly (refer to Figure 15). As shown in Figure 16(c3)-(f3), during the holding stage of the MA-RSW process, the inertial flow maintains a high flow rate after the welding current has been disconnected (refer to Figure 13(c2)-(f2)). Thus, the temperature gradient and solidification rate of the nugget change significantly during solidification, and their influence on the microstructure is discussed in the next section.

Figure 18 shows the variation in nugget diameter and thickness with welding time calculated using the three types of models. The nugget sizes at 0.25 and 0.3 s were verified in Section 4.2. Under the effect of the EMF, the nugget diameter of an MA-RSW weld is significantly increased, being approximately 19.8% and 25.5% larger than that of the RSW welds at 0.25 and 0.3 s, respectively.
In addition, the RSW welds are approximately elliptical in shape, while both ends of the MA-RSW nugget are sharper and thinner than those of the RSW welds.

Influencing mechanism of the magnetic field on weld macro- and microstructures

The main reason for the above phenomena is the high-speed combined flow. On the one hand, under the influence of the enhanced in-plane flow, the liquid metal continuously transfers heat from the nugget center to the edge. On the other hand, under the action of the out-of-plane flow, the high-velocity flow scours the nugget brim, which enhances the heat transfer and greatly increases the nugget diameter. In addition, because the total heat input does not change during the MA-RSW process, the heat transferred from the center to the edge of the nugget slows its growth rate in the thickness direction. A decrease in nugget thickness is beneficial for avoiding an excessive penetration rate and for delaying electrode pitting.

Furthermore, the microstructure of the MA-RSW nugget also differs from that of the RSW nugget. Figure 19(a) and (d) shows the notch region of the RSW and MA-RSW nuggets. The columnar grain zone (CGZ) in the MA-RSW nugget is wider than in the RSW nugget. At the end of the welding stage, the maximum velocities in the RSW and MA-RSW nuggets are 2.35 and 11.43 m/s, respectively. Thus, the molten metal maintains a high-speed inertial flow at the beginning of solidification (refer to Figure 15), which reduces the solidification rate (refer to Figure 16(c3)-(d3)) along the solidification front at the nugget notch region compared with the RSW process. The slower solidification rate promotes the growth of the columnar grains and leads to the wider CGZ of the MA-RSW nuggets, consistent with the columnar-to-equiaxed transition (CET) theory proposed by Hunt (1984).

Figure 19(b) and (e) shows the upper region of the RSW and MA-RSW nuggets, close to the electrode. The growth direction of the CGZ in the RSW nugget is perpendicular to the fusion line. However, the growth direction of the CGZ in the MA-RSW nugget is deflected by approximately 45 degrees because of the deflection of the heat-extraction direction caused by the circumferential inertial flow. Furthermore, a double-layered grain structure with different contrast is observed at the upper edge of the MA-RSW nugget. To further examine the difference between the layers, the EBSD inverse pole figure map of this area is plotted in Figure 20. The results show that both CGZ-I and CGZ-II consist of columnar grains with no obvious structural difference between them. Huang et al. (2020) and Li et al. (2016) reported a similar phenomenon. The main reason could be the effect of segregation. Under the forced water cooling of the electrode, both the RSW and MA-RSW nuggets cool rapidly. Because of the lower temperature of the upper region in the MA-RSW nugget (Figure 16(d3)) and the forced water-cooling effect, the solidification rate at the upper region of the MA-RSW nugget is higher than in the RSW nugget. Therefore, as shown in Figure 19(e), the secondary phase has no time to precipitate, resulting in the formation of the high-contrast CGZ-I. Furthermore, the solidification rate of the nugget gradually decreases closer to the nugget center because of the high-speed inertial flow (refer to Figure 15).
Thus, the synergistic effects of the concentrated solute elements and the low solidification rate lead to the formation of the low-contrast CGZ-II. In addition, the relatively large temperature gradient (Figure 16(e2)) and high cooling rate (Figure 17) result in insufficient flow and heat transfer inside the nugget, which inhibits heterogeneous growth and results in a high volume fraction of interdendritic regions (Tian et al., 2019), as shown in Figure 19(c). The liquid metal solidifies gradually from the edge of the nugget to its center. This is a typical sequential solidification mode and is prone to produce shrinkage defects in the center of the nugget. In contrast, the high-speed flow in the MA-RSW nugget is conducive to breaking up dendrites during solidification. Furthermore, owing to the relatively small temperature gradient (Figure 16(e3)) and low cooling rate (Figure 17(a)) in the center of the MA-RSW nugget, dendrites with smaller arm spacing can remelt and disappear (Terzi et al., 2010), and the solidification mode is transformed from the sequential mode to the simultaneous mode. As a result, the shrinkage defects and the volume fraction of interdendritic regions are greatly reduced, as shown in Figure 19(f). The average grain size of the EGZ of the MA-RSW nugget is 19.8 μm, which is 21.7% finer than that of the RSW nugget, as shown in Figure 20(b).

Conclusions

In this paper, an MHD model coupling the electrical, thermal, magnetic, and fluid fields was established. The magnetic field, fluid flow pattern, temperature evolution, and weld macro- and microstructures in the RSW and MA-RSW processes of aluminum alloys were studied. The following conclusions can be drawn:

(1) The flow pattern and the temperature field distribution of the liquid nugget in the aluminum alloy RSW process are similar to those of the steel RSW process. However, because of the stronger induced electromagnetic force generated by the larger welding current, the maximum flow velocity of the liquid nugget driven by the induced magnetic field increases from approximately 0.48 m/s for mild steel (calculated by Li et al. (2011)) to approximately 2.35 m/s for the aluminum alloy.

(2) The radially induced electromagnetic force in the RSW process can produce only a single in-plane flow. In comparison, the circumferential electromagnetic force generated by the external magnetic field in the MA-RSW process transforms the in-plane flow into a three-dimensional combined flow. This change raises the maximum flow velocity to about five times that of the RSW process, reaching 11.43 m/s.

(3) The high-temperature zone is concentrated at both sides of the nugget in the RSW process, and the temperature gradient is relatively large. Under the effect of the combined flow in MA-RSW, the temperature gradient inside the nugget decreases significantly, and the maximum temperature drops from 774°C for RSW to 725°C.

(4) The increase in nugget diameter is mainly in the range of 6%-15% for MA-RSW of steels, as reported by Shen et al. (2011) and Qi et al. (2020). In the aluminum alloy MA-RSW process, however, owing to the higher flow velocity and heat transfer efficiency, the increase reaches 25.5%.

(5) During the MA-RSW solidification process, the solidification mode of the nugget center transforms from the sequential mode to the simultaneous mode because of the small temperature gradient.
Compared with the RSW nugget, the shrinkage defects inside the MA-RSW nugget are greatly reduced, and the average grain size of the EGZ is refined by 21.7%.

This numerical model provides an effective method to reveal the influencing mechanism of the EMF on the flow pattern and microstructure of welds. However, the model also has limitations that need to be addressed. For instance, it ignores the influence of the structural field, which affects the accuracy of the computed heat generation and nugget size. In the future, the model will incorporate the structural field to realize five-field coupling. Furthermore, it could be coupled with a computational grain-growth model to predict the evolution of grain morphology and orientation.
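For readers who want the driving force made explicit, the following is a minimal schematic of the Lorentz body force in cylindrical coordinates (r, θ, z); it is a standard MHD decomposition consistent with the description above, not an equation reproduced from the model itself.

```latex
% Lorentz body force on the liquid nugget
\mathbf{f} = \mathbf{J} \times \mathbf{B}, \qquad
\mathbf{B} = \mathbf{B}_{\mathrm{ind}} + \mathbf{B}_{\mathrm{ext}}

% Conventional RSW: the mainly axial current J_z and its self-induced
% azimuthal field B_theta yield a radially inward (in-plane) force
J_z \hat{\mathbf{e}}_z \times B_\theta \hat{\mathbf{e}}_\theta
  = -\, J_z B_\theta \, \hat{\mathbf{e}}_r

% MA-RSW: an externally applied axial field B_z acting on the radial
% current component J_r adds a circumferential (out-of-plane) force
J_r \hat{\mathbf{e}}_r \times B_z \hat{\mathbf{e}}_z
  = -\, J_r B_z \, \hat{\mathbf{e}}_\theta
```

Read this way, the induced field alone can only stir the melt within the r-z plane, while the external axial field supplies the azimuthal component that turns the in-plane circulation into the three-dimensional combined flow of conclusion (2).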
2021-08-03T00:06:16.208Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "7426817e7cfdfe159ca8bfc648ecef7940749e5a", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19942060.2021.1938684?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "0343eab33d9d910b2669c2ee08f3ed2025b36ede", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
261375232
pes2o/s2orc
v3-fos-license
Reduced tubuloglomerular feedback activity and absence of its synchronization in a connexin40 knockout rat

Introduction: Tubuloglomerular feedback (TGF) is the negative feedback component of renal blood flow (RBF) autoregulation. Neighbouring nephrons often exhibit spontaneous TGF oscillation and synchronization mediated by endothelial communication, largely via connexin40 (Cx40). Methods: We had a knockout (KO) rat made that lacks Cx40. One base pair was altered to create a stop codon in exon 1 of Gja5, the gene that encodes Cx40 (the strain is WKY-Gja5em1Mcwi). Blood pressure (BP)-RBF transfer functions probed RBF dynamics, and laser speckle imaging interrogated the dynamics of multiple efferent arterioles that reach the surface (star vessels). Results: The distribution of wild type (WT), heterozygote, and KO pups at weaning approximated the Mendelian ratio of 1:2:1; growth did not differ among the three strains. The KO rats were hypertensive. BP-RBF transfer functions showed low gain of the myogenic mechanism and a smaller TGF resonance peak in KO than in WT rats. Laser speckle imaging showed that the myogenic mechanism had a higher frequency in KO than in WT rats, but similar maximum spectral power. In contrast, the TGF frequency was similar, while the peak power of its oscillation was much smaller in KO than in WT rats. In WT rats, plots of instantaneous TGF phase revealed BP-independent TGF synchronization among star vessels. The synchronization could be both prolonged and widespread. In KO rats TGF synchronization was not seen, although BP transients could elicit short-lived TGF entrainment. Discussion: Despite the reduced TGF spectral power in KO rats, there was sufficient TGF gain to induce oscillations and therefore enough gain to be effective locally. We conclude that the failure to synchronize is dependent, at least in part, on impaired conducted vasomotor responses.

Introduction

Regulation of renal blood flow (RBF) is dominated by autoregulation, the self-stabilization of perfusion that occurs when blood pressure (BP) varies. In turn, autoregulation is mediated by the myogenic response (MR), which is found in all vascular smooth muscle cells, and the kidney-specific tubuloglomerular feedback (TGF), which operates at the level of individual nephrons. It is TGF that provides the negative feedback that stabilizes RBF, and thus renal function, when BP varies (Cupples and Braam, 2007). Because the TGF sensor in the distal nephron is the most downstream component of the combined system, the interaction between TGF and the MR is nonlinear.
When acting alone in single nephrons, TGF is an autonomous oscillator (Holstein-Rathlou and Leyssac, 1986). Synchronization of TGF dynamics in nephron pairs that share close arterial connections was first demonstrated in 1987. The uniformly small phase difference between nephrons was consistent only with rapid, vascular signal transmission (Holstein-Rathlou, 1987; Wagner et al., 1997). While the potential explanatory power of synchronization was immediately apparent, its extent and importance could not be assessed until the development of laser speckle imagers permitted imaging of perfusion dynamics in multiple star vessels (efferent arterioles that reach the surface). Initial studies using laser speckle imaging confirmed that blood flow, not just tubular pressure, was synchronized in multiple nephrons. While the authors of those studies drew divergent conclusions about the scale of the interaction (Holstein-Rathlou et al., 2011; Brazhe et al., 2014; Mitrou et al., 2015), a recent study has emphasized large temporal and spatial scales of TGF synchronization (Postnov et al., 2022). Collectively, these studies informed two recent reviews (Marsh et al., 2019; Zehra et al., 2021).

Early studies of TGF synchronization established that TGF is a conducted vasomotor response. It acts remotely (Chen et al., 1995; Wagner et al., 1997), enabled by endothelial gap junction transmission of electrical signals (Wagner et al., 1997). We showed that a gap junction blocker, carbenoxolone, impaired TGF synchronization as well as steady-state and dynamic autoregulation (Mitrou et al., 2016). This result indicates that TGF synchronization contributes to renal autoregulation. However, gap junction blockers are notoriously pleiotropic drugs, and more specific interference with gap junction conductance was needed.

The major endothelial connexin in the kidney, as elsewhere, is connexin40 (Cx40), which provides strong axial conductance to the endothelium (de Wit et al., 2000; Diep et al., 2005; Segal, 2005; Jobs et al., 2012). Absence of Cx40 impairs, but normally does not block, conducted vascular responses in other vascular beds (de Wit et al., 2006; Jobs et al., 2012; Zechariah et al., 2020). Cx40 is also located within the mesangium, where it plays significant roles in transmission of TGF signals from the macula densa to the afferent arteriole, affecting vascular resistance, and to the juxtaglomerular apparatus, where it is important to structural organization and tubular regulation of renin secretion (Kurtz et al., 2007; Kurtz et al., 2010). A Cx40 knockout (KO) mouse was generated 25 years ago and has become a mainstay of microvascular research (Simon et al., 1998).

There is little published information concerning TGF in the Cx40 KO mouse. Assessment of TGF by microperfusion in KO mice revealed that the dynamic range of TGF is reduced by 30%-50% from that in wild-type (WT) mice (Oppermann et al., 2013). A study using the isolated juxtamedullary nephron preparation reported that kidneys from KO mice showed no autoregulatory response and essentially no conducted vasomotor response of afferent arterioles (Sorensen et al., 2012), suggesting absence of TGF. No information is available concerning TGF kinetics or dynamics in Cx40 KO mice.
We attempted to perform in vivo studies in a Cx40 knockout mouse. It proved impossible to achieve stable BP within the autoregulatory range for long enough to perform relevant studies. Accordingly, we had a Cx40 KO rat made by the Gene Editing Rat Resource Centre at the Medical College of Wisconsin. In this first study we provide an initial characterization of the Cx40 KO rat and address two questions. Is TGF active, and does it contribute to stability of RBF in the Cx40 KO rat? Does TGF in the KO rat exhibit synchronization? Based on the literature, we predicted that TGF would be active (Just et al., 2009; Oppermann et al., 2013). Also based on the literature, impaired synchronization was expected (Jobs et al., 2012; Zechariah et al., 2020), although the extent of impairment was unclear (Sorensen et al., 2012).

Methods

Experiments were conducted in accordance with the guidelines of the Canadian Council on Animal Care and received prior approval by the Animal Care Committee of Simon Fraser University, with continuing evaluation of the new knockout strain. Rats were housed in groups of 2-6 on a 12:12 h light:dark cycle and had free access to standard rat chow (LabDiet 5001) and water.

The knockout rat was created at the Gene Editing Rat Resource Centre at the Medical College of Wisconsin (https://rgd.mcw.edu/wg/gerrc). A single base change by Crispr introduced a stop codon in exon 1 of Gja5 (WKY-Gja5em1Mcwi). The Wistar Kyoto (WKY) rat was chosen as host because it is a widely used, normotensive, and fecund inbred strain. We received heterozygote (HZ) breeders and continued to breed from HZ parents. Survival of offspring to weaning approximated the Mendelian ratio (21 WT, 50 HZ, 16 KO; χ2 = 2.547, p > 0.2), including 43 males and 44 females. Growth curves differed between sexes as expected, but not among genotypes; the WT, HZ, and KO strains had identical growth curves for both males and females, as shown in Figure 1. Because retired breeders were used for experiments, the HZ rats tended to be older and therefore larger than the WT and KO rats.

The surgical preparation was described previously (Mitrou et al., 2016). Briefly, rats received buprenorphine (Temgesic®, Reckitt & Benckiser, Inc.) 0.01 mg/kg, i.p.
20 min before initiating procedures. Anaesthesia was induced with 4% isoflurane (Baxter, Mississauga, ON, Canada) in inspired air supplemented to 45% O2. The anesthetic concentration was reduced to ≈2% during surgery and to 1.25% thereafter. The trachea was cannulated and the animal was ventilated by a small animal respirator (TOPO, Kent Scientific, United States) that operated in timed respiration mode and was adjusted to match the rat's breathing rate (≈1 Hz). Isoflurane has recently received attention as a vasodilator, particularly in brain, for example (Lambers et al., 2023). Comparison of data from rats anesthetized by isoflurane, halothane, or Inactin (a barbiturate) reveals no systematic differences in BP, RBF, or conductance in multiple strains, suggesting that vasodilatation by isoflurane is not a major issue in the kidney (Cupples et al., 1988; Cupples, 1993; Naguib et al., 1994; Ajikobi et al., 1996; Cupples et al., 1996). In direct comparisons, BP and RBF dynamics are more complex under isoflurane anaesthesia than under halothane anaesthesia (Naguib et al., 1994; Ajikobi et al., 1996; Cupples et al., 1996), and they more closely resemble the dynamics seen in conscious rats (Lessard et al., 1999). The left femoral vein was cannulated (PE-50) for infusion of 2% charcoal-washed BSA (Sigma-Aldrich, Oakville, ON, Canada) in saline (1% body weight/h). The left femoral artery was cannulated (PE-90 with narrowed tip) and connected to a pressure transducer driven by a Kent Scientific TRN005 amplifier. The left kidney was exposed by a subcostal flank incision, liberated from fat and fascia, immobilized in a plastic holder anchored to the table, and covered by a coverslip. After the renal artery was stripped of fat and fascia, a transit-time ultrasound flow probe (PRB-001, Transonic Systems, United States) was mounted and connected to a flowmeter (TS420, Transonic). The flow probe was secured with acoustic coupling gel (NALCO 1181 mixed with surgical lubricant). One hour elapsed between the end of surgery and the start of data acquisition.

Data records had a nominal length of 25 min, during which BP and RBF were acquired at 500 Hz, low-pass filtered, and resampled to 2 Hz. The relationship between BP and RBF was assessed by transfer and coherence functions. The output of the transfer function consists of gain and phase vectors. The gain vector shows the frequencies at which, and the extent to which, BP fluctuation is attenuated in RBF. The phase vector reports the temporal relationship between BP and RBF. A first-order system that responds only to the level of the input variable, BP, has a slope of gain reduction of ≈20 dB/decade and a phase peak of π/4 radians. A second-order system responds to both the level of the input and its rate of change; it has a slope of ≈40 dB/decade and a phase peak of π/2 radians (Milhorn, 1966; Just et al., 1999). The coherence function offers quality control: high coherence at any frequency indicates that the input and output signals are closely and linearly related; reduced coherence can indicate the presence of increased dynamic complexity, noise, or unrelated signals.

Laser speckle images were acquired simultaneously with the BP-RBF records. The acquisition hardware was specified by Dr.
DD Postnov, University of Aarhus, who also wrote the drivers and donated them to us. A chosen field of view on the kidney surface was illuminated by a 785 nm, 100 mW laser (Thorlabs, LB875-SF100). Data were acquired by a Basler ACE 2000-165umNIR camera in arrays of 1,000 × 1,000 pixels at 70 Hz. The camera was mounted on an Edmund VZM_300 zoom imaging lens that was in turn mounted on a heavy microscope base that facilitated focusing. This assembly was used at ×2 magnification to give a field of view of 2.21 × 2.21 mm (4.88 mm2). The blood flow index (BFI) was estimated as the mean light intensity at each pixel over 25-frame intervals divided by its standard deviation, resulting in a 2.84 Hz record (Postnov et al., 2022). The default analyzed segment was full length (1,480 s), but was reduced if necessary.

Regions of interest (ROIs) were extracted automatically from the BFI image using morphological image processing implemented in custom Matlab code, then checked manually. Briefly, we obtained the mean perfusion image, usually over the first 2 min of recording, denoised the image by subtracting the row-wise median and applying a 5-pixel median filter, and adjusted for large-scale differences in lighting with a top-hat transformation. Next we used a modified peak-to-sidelobe ratio to emphasize areas with high perfusion surrounded by areas of low perfusion, and considered pixels with the highest resulting intensity (usually the top 2%) as candidates for star vessels. Morphological opening removed small unconnected areas, after which we kept only areas with the highest intensities (usually the upper 60%) and which were above a size threshold (usually 100 pixels). We added surrounding pixels to the identified areas if they exceeded a threshold intensity (usually 0.5). Finally, we smoothed vessels and filled any holes. Subsequent manual editing of machine-identified ROIs was based on capturing high blood velocity in an efferent arteriole while minimizing capture of much slower blood velocity in neighbouring capillaries and static speckle in proximal tubules (Postnov et al., 2020; Zehra et al., 2021). In WT and HZ rats the TGF peak in ROI spectra is consistent and was used to further differentiate actual from factitious ROIs. In KO rats there was no a priori expectation of clear TGF signals, so use of ROI spectra was not appropriate. For KO rats we used the same machine identification of ROIs as in WT and HZ rats, with intensity criteria at least as stringent, and with subsequent manual selection informed by the experience gained from WT and HZ rats. An example from a WT rat is shown in Figure 2. The median (range) of included ROIs was 27 (18-36) per rat and did not differ by sex or strain. The BFI vector was constructed from the average over all pixels in a ROI at each time point and was used for further analysis.
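As a rough illustration of the BFI computation described above (mean over standard deviation within non-overlapping 25-frame windows, turning a 70 Hz frame stream into a ≈2.8 Hz record), here is a minimal sketch; the array names and the use of NumPy rather than the authors' acquisition drivers are assumptions.

```python
import numpy as np

def blood_flow_index(frames: np.ndarray, window: int = 25) -> np.ndarray:
    """Temporal blood flow index from a raw speckle stack.

    frames: (n_frames, height, width) raw intensity images at 70 Hz.
    Returns a (n_frames // window, height, width) BFI stack at ~2.8 Hz,
    computed per pixel as mean / standard deviation over each window.
    """
    n = (frames.shape[0] // window) * window          # drop trailing partial window
    stack = frames[:n].reshape(-1, window, *frames.shape[1:])
    mean = stack.mean(axis=1)
    std = stack.std(axis=1)
    return mean / np.maximum(std, 1e-12)              # guard against zero variance

# Synthetic demo: 175 frames of 64 x 64 pixels yield a 7-sample BFI record
# per pixel; real acquisitions used 1,000 x 1,000 pixel arrays.
rng = np.random.default_rng(0)
demo = rng.poisson(lam=100, size=(175, 64, 64)).astype(float)
bfi = blood_flow_index(demo)
print(bfi.shape)  # (7, 64, 64)
```

The per-ROI time series used for the spectra would then be the spatial average of this BFI stack over the ROI's pixels at each time point.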
Initial analysis generated figures showing the instantaneous BFI of all ROIs and their power spectra. As previously, frequencies of the myogenic mechanism (fMR) and TGF (fTGF) were machine-extracted from each ROI as the frequency with maximum power within the MR and TGF bands (0.09-0.25 Hz and 0.015-0.05 Hz, respectively) (Scully et al., 2013; Mitrou et al., 2015). This analysis found a surprising number of ROIs (307 of 960) with fTGF at the lower limit of the TGF band. The incidence was similar in all groups (30% in WT, 33% in HZ, and 33% in KO rats) and was not sex-related. This suggested a systematic measurement issue unrelated to Cx40. The primary determinant of fTGF is the length of the tubule from the glomerulus to the macula densa in the early distal tubule; a secondary determinant is the transmission time required to send a signal from the macula densa to the afferent arteriole (Holstein-Rathlou and Marsh, 1990). Thus, it is plausible that the KO rat would have a lower fTGF than WT and HZ rats, but it is implausible that all strains would have the same high incidence of fTGF <0.02 Hz. We are aware of two potential confounds that could generate this very low frequency signal. First, there is an autonomous oscillator that operates in rat kidneys at ≈0.01 Hz (Siu et al., 2009). Second, this region of the power spectrum often displays 1/f characteristics from which it is difficult or impossible to extract a discrete peak. We were concerned that carte blanche acceptance of the very slow peak power might capture something other than TGF.

Inspection of ROI spectra showed that most ROIs having dominant 0.0176 Hz peaks also had one or more distinct oscillations within the TGF band. We therefore defined criteria for acceptance of fTGF: 1) a peak at the lower, or upper, limit of the TGF detection band must be distinct and not part of a 1/f sequence or of a peak outside the band; 2) a distinct peak within the TGF band, defined as twice the noise power, always replaced the peak at or beyond the limit. Noise power is the average power within 0.3-0.5 Hz. If neither of those conditions was met then the ROI was excluded. After applying the criteria, only 28 of 960 ROIs had fTGF = 0.0176 Hz, while 6 were excluded. The same criteria, with relevant lower and upper limits, were applied to assigning fMR, where
they had much less impact. For this analysis the MR band was set at 0.09-0.21 Hz to minimize interference by irrelevant BP power >0.2 Hz, which is common in these rats. Because the distributions of spectral powers in the myogenic and TGF bands appeared to be log-normal, they were log-transformed to normalize variance prior to statistical analysis. We computed the instantaneous phase of TGF in each ROI using the Hilbert transform applied to the BFI vector (a minimal numerical sketch of these computations is given at the end of this section). To assess synchronization of TGF we examined plots of instantaneous TGF phase in all ROIs. We considered TGF to be synchronized if the plots satisfied 3 criteria: 1) continuing aligned TGF cycles within part or all of the field of view, 2) consistent TGF cycle duration through time, and 3) insensitivity to initiating or driving events in BP. Consistency of TGF cycle duration is included because if ROIs are synchronized then they are phase-locked and will have the same cycle duration, even if they are synchronized in anti-phase.

TABLE 2 notes: Data are presented as mean of rats ± standard deviation. ANOVA with 3 groups × 2 sexes found a sex difference in ROI area only between male and female HZ rats (*, p = 0.0446, confirmed by t-test). ANOVA with 2 groups × 2 sexes did not find any significant differences between WT and KO rats or between sexes. Results from HZ rats are included for completeness and reference. They are not discussed in the text.

FIGURE 3 Absence of Cx40 in kidney of KO rat. Immunofluorescence images of renal cortical tissue labelled to show smooth muscle actin (green), a marker of vascular smooth muscle; Cx40 (gold); and CD31 (red), a marker of endothelial cells; plus their merged image. Tissue from a WT rat is shown in the upper panels and from a KO rat in the lower panels. In the WT rat, Cx40 is clearly present in endothelium and in the glomerulus, presumably in the mesangium within the glomerulus. Tissue from the KO rat conspicuously lacks Cx40 in both arterial endothelium and within the glomerulus (at 12 o'clock).

Samples of kidney cortex were excised, blotted, then immersed in 30% sucrose and OCT for storage at −80°C. They were processed in Braam's laboratory at the University of Alberta. Frozen 5 µm sections were fixed in acetone for 10 min at −20°C. Slides were blocked in 5% fish gelatin + 3.5% BSA + 20% donkey serum for 1 hour. They were then exposed to primary antibodies at 4°C overnight. The antibodies used were: anti-Cx40 (Invitrogen 364900), at 1:100 dilution; anti-smooth muscle actin (Novus Biologicals NB300-978), at 1:300 dilution; and anti-CD31 (Thermo-Fisher MA1-81051), at 1:100 dilution. The respective secondary antibodies were Alexa 568, Alexa 488, and Alexa 647, which were applied for 1 h. Slides were stored at 4°C in the dark. They were then examined by confocal microscopy.

Data are reported as mean ± standard deviation (SD). Data were first analysed by 2-way analysis of variance (ANOVA) in Matlab R2022B. Both 3 × 2 (WT, HZ, KO × sex) and 2 × 2 (WT, KO × sex) designs were used to identify sex-related variables and interactions. Because the limited number of replicates was not optimal for these designs, WT × KO (pooled sexes) were also tested by two-tailed t-tests with equal variance. Where there could be ambiguity, the test is reported alongside P. P < 0.05 was considered to indicate a significant difference.
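As flagged above, here is a minimal numerical sketch of the band-peak extraction and instantaneous-phase computation, assuming a per-ROI BFI series sampled at 2.84 Hz. The band limits and the twice-noise-power acceptance rule follow the text, while the Welch parameters and function names are illustrative choices, not the authors' custom Matlab code.

```python
import numpy as np
from scipy.signal import welch, hilbert, detrend

FS = 2.84  # Hz, BFI sampling rate

def band_peak(bfi: np.ndarray, band=(0.015, 0.05), noise_band=(0.3, 0.5)):
    """Peak frequency and power within a band, with a 2x-noise acceptance rule.

    Returns (f_peak, p_peak), or None if no peak in the band reaches twice
    the average noise power, approximating the exclusion criterion above.
    """
    f, pxx = welch(detrend(bfi), fs=FS, nperseg=min(len(bfi), 1024))
    noise = pxx[(f >= noise_band[0]) & (f <= noise_band[1])].mean()
    sel = (f >= band[0]) & (f <= band[1])
    i = np.argmax(pxx[sel])
    f_peak, p_peak = f[sel][i], pxx[sel][i]
    return (f_peak, p_peak) if p_peak >= 2 * noise else None

def instantaneous_phase(bfi: np.ndarray) -> np.ndarray:
    """Instantaneous TGF phase (radians) via the Hilbert transform."""
    return np.angle(hilbert(detrend(bfi)))

# Demo on a synthetic ROI: a 0.03 Hz 'TGF' oscillation plus noise.
t = np.arange(0, 1480, 1 / FS)
rng = np.random.default_rng(1)
roi = np.sin(2 * np.pi * 0.03 * t) + 0.3 * rng.standard_normal(t.size)
print(band_peak(roi))                 # peak near 0.03 Hz
phase = instantaneous_phase(roi)      # one phase value per sample
```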
Results

Table 1 shows that, as expected from the Cx40 KO mouse (Rummery and Hill, 2004; Wagner et al., 2010), BP was higher in the KO rats (p = 5.3 × 10^−8) than in the WT and HZ rats, which were both normotensive. Although BP appeared lower in the female than in the male KO rats, the difference was not significant (p = 0.0638). RBF is presented normalized to body weight because there is a tight relationship between body weight and kidney weight in male Wistar rats (left kidney weight (g) = 0.0035 × body weight (g) + 0.148; r2 = 0.849, N = 107, body weight range 124-519 g, unpublished data). RBF did not differ among strains, although it may have differed between sexes, as both HZ and KO females appeared to have lower RBF than WT females (p = 0.0703). Renal vascular conductance, normalized to body weight, differed among the three groups (p = 0.0133; Table 1).

FIGURE 5 ROI spectra. The figure illustrates ROI spectra extracted from a WT rat (A, upper) and a KO rat (B, lower). In each panel the spectral powers of 4 ROIs are plotted, including the median, the minimum, the maximum, and the "mean". The "mean" is the ROI that had peak TGF power closest to the true mean. The figure highlights both the lower TGF power in the KO rat and the substantial variability of ROI TGF power that is seen in all rats. Neither rat showed significant power in the frequency band 0.09-0.2 Hz in which the myogenic mechanism operates.

Figure 3 shows immunofluorescence images of renal cortex from a WT rat and from a KO rat. Labelling of Cx40 is clear in the WT sample, with punctate labelling of endothelium in 2 muscular vessels and within the glomerulus. In the KO rat 3 muscular vessels and the glomerulus (at 12 o'clock) are evident, but there is no Cx40 labelling.

Figure 4 presents BP spectral power and RBF dynamics shown in the coherence and transfer functions, with the latter containing gain and phase plots. All strains have similar BP power, i.e., patterns of BP fluctuation. The higher-frequency BP power peaks seen in all strains are irrelevant since they do not appear in the coherence or gain plots. In all groups the MR operates between 0.15-0.2 Hz, with progressive gain reduction to <0 dB at lower frequencies, demonstrating attenuation in RBF of BP fluctuation (autoregulation). All rats display a phase peak centred at 0.07-0.1 Hz that is the temporal signature of the MR. Operation of TGF is shown in WT and HZ rats by the resonance peak in gain centred at ~0.023 Hz, and in the KO rats by a gain shoulder at that frequency. It is also shown temporally in the phase plots, with their peak values <0.02 Hz. There is no reduction of coherence by the MR in any of the WT, HZ, or KO rats, indicating that operation of the MR was not sufficient to induce nonlinearity in the BP-RBF relationship. Even the stronger TGF, judged by gain reduction, caused only a modest reduction of coherence at frequencies <0.02 Hz.
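The gain, phase, and coherence just described can be estimated from paired BP-RBF records with standard cross-spectral methods. The following is a minimal sketch of one such estimator, assuming 2 Hz resampled signals as in the Methods; the segment length and the synthetic demo are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.signal import csd, welch, coherence, lfilter

FS = 2.0  # Hz, BP and RBF after resampling

def bp_rbf_transfer(bp, rbf, nperseg=512):
    """Estimate the BP -> RBF transfer function H(f) = S_xy(f) / S_xx(f).

    Returns frequency (Hz), gain (dB), phase (radians), and coherence.
    Gain is often normalized (e.g., to admittance at high frequency)
    before conversion to dB; that normalization is omitted here.
    """
    f, s_xy = csd(bp, rbf, fs=FS, nperseg=nperseg)   # cross-spectrum
    _, s_xx = welch(bp, fs=FS, nperseg=nperseg)      # input auto-spectrum
    h = s_xy / s_xx
    _, coh = coherence(bp, rbf, fs=FS, nperseg=nperseg)
    return f, 20 * np.log10(np.abs(h)), np.angle(h), coh

# Demo: RBF modeled as a first-order low-pass response to BP-like noise;
# gain then falls with frequency, and coherence is near 1 because the
# simulated relationship is noiseless.
rng = np.random.default_rng(2)
bp = rng.standard_normal(3000)              # 25 min at 2 Hz
rbf = lfilter([0.1], [1.0, -0.9], bp)       # simple first-order dynamics
f, gain_db, phase, coh = bp_rbf_transfer(bp, rbf)
```

With real data, gain below 0 dB at low frequencies would indicate active attenuation of BP fluctuations, as in Figure 4.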
Table 2 presents average ROI areas and numbers by strain and sex. The only significant difference was between male and female HZ rats. As shown in Table 3, TGF power (0.015-0.05 Hz) was greater in WT than in KO rats (p = 0.0164). Figure 5 shows the median, minimum, maximum, and "mean" ROI power spectra taken from a WT rat and from a KO rat (the "mean" is the ROI that had peak TGF power closest to the true mean for that rat). The figure illustrates the large variability of TGF power seen in all rats and the lack of discrete MR power. It is worth noting that spectral power in the MR band (0.09-0.21 Hz) was minimal in both rats, with little evidence of a peak or peaks. We commonly observed this lack of MR power in all groups, and the lack of power is reflected in the MR dynamics seen in the transfer functions (Figure 4). Table 3 reports the peak powers and frequencies of both MR and TGF and the average noise power. Peak MR power in the band 0.09-0.21 Hz was only about three-fold higher than the noise power shown in Table 3, and peak MR power did not differ between groups or sexes. Instead, fMR differed between WT and KO rats (p = 0.0297), with the KO rats having a higher frequency (Table 3).

Two plots of instantaneous TGF phase are shown in Figure 6, one from a KO rat and the other from a WT rat. Three further examples are shown in the Supplementary Material. They illustrate several aspects that separate TGF entrainment from TGF synchronization. TGF entrainment is seen in both WT and KO rats and is a response to BP transients, particularly BP reductions. These alignments were brief, typically decaying over one to two TGF cycles, so they did not differentiate between WT and KO rats. TGF synchronization was observed in 6 of 9 WT rats. The plots of instantaneous TGF phase show episodes of TGF synchronization ranging in duration from 3 TGF cycles to the full record length, and in size from a few ROIs to the entire field of view. Here the alignment of TGF phase among ROIs is clearly independent of BP and often robust to BP transients (Figure 6, Supplementary Data). Episodes of synchronization started, grew or shrank, and stopped independently from any observable change in BP or RBF. Of the remaining 3 WT rats, BP-induced TGF activation accounted for all entrainment in two of them, and the third displayed 21 transient BP reductions in an 800 s record (≈0.026 Hz), making it impossible to determine the source of TGF entrainment. Synchronization was not observed in any KO rat. Instead, half of the KO rats showed short-lived TGF entrainment, typically one to two cycles, which was temporally associated with transient events in BP. The difference in observed synchronization between 6 of 9 in WT and 0 of 12 in KO was significant (χ2 = 13, p < 0.005); an illustrative index for quantifying such phase alignment is sketched below.

Discussion

The Cx40 KO rat is a new model that was created to facilitate studies of TGF synchronization and is relevant to studies of conducted vasomotor responses throughout the body. It offers advantages for integrative studies that address function and its regulation on scales that are larger than are possible in the mouse. The dimensions of the synchronized regions reported here and in (Postnov et al., 2022) appear to approach the renal surface area of a mouse kidney.
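The authors assessed synchronization visually against the three criteria stated in the Methods. As a complementary, purely illustrative quantification, the phase alignment across ROIs at each time point could be summarized with the Kuramoto order parameter; this is a standard index from network physiology, not a measure used in this paper.

```python
import numpy as np

def kuramoto_order(phases: np.ndarray) -> np.ndarray:
    """Phase-alignment index across ROIs at each time point.

    phases: (n_rois, n_samples) instantaneous TGF phases in radians.
    Returns R(t) in [0, 1]; R near 1 means the ROIs' TGF cycles are
    aligned, R near 0 means their phases are spread around the circle.
    """
    return np.abs(np.exp(1j * phases).mean(axis=0))

# Demo: 20 ROIs sharing a 0.03 Hz rhythm, half aligned and half with
# random constant phase offsets; the offsets lower R below 1.
rng = np.random.default_rng(3)
t = np.linspace(0, 600, 1704)                    # ~2.84 Hz for 10 min
base = 2 * np.pi * 0.03 * t                      # common TGF rhythm
aligned = np.tile(base, (10, 1))
scattered = base + rng.uniform(0, 2 * np.pi, (10, 1))
R = kuramoto_order(np.vstack([aligned, scattered]))
```

Note that R alone cannot distinguish in-phase synchronization from the anti-phase locking the authors also observed, which is one reason their criteria emphasize consistent cycle duration rather than zero phase difference.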
Most of what we know or surmise about the actions of Cx40 in the kidney was acquired in the Cx40 knockout mouse. Hypertension in the Cx40 KO rat was expected, since Cx40 has been shown in the mouse to be necessary for both the organization and the function of the juxtaglomerular apparatus (Gerl et al., 2015). Other studies of mice have shown that the Cx40 knockout is also a Cx37 knockdown, Cx37 being the other major endothelial connexin (Kurtz et al., 2009; Jobs et al., 2012). It is unlikely that the dominant vascular smooth muscle connexin, Cx45, could substitute for knockout of the endothelial Cx40. This was shown in two studies that used a mouse with the Cx40 coding region replaced by Cx45: Cx45 could only partly restore regulation of renin secretion, without restoring the structure of the juxtaglomerular apparatus (Schweda et al., 2009), and it only partly restored autoregulation (Just et al., 2009). The HZ rat is normotensive, indicating that there is adequate communication from the macula densa to the juxtaglomerular apparatus. While we bred from HZ parents, others keep separate WT and KO colonies (DG Welsh, personal communication). This leads to an interesting consideration for the interpretation of Cx40 knockout studies in any species. ANG II augments TGF (Ploth and Roy, 1982; Braam and Koomans, 1995), the MR (Cupples, 1993; Kirton and Loutzenhiser, 1998), and autoregulation in conscious rats (Polichnowski et al., 2015), acting at multiple sites within the negative feedback loops. Thus, any effect of elevated [ANG II] in KO rats will tend to minimize differences between WT and KO rats in the context of RBF regulation.

Because BP differed between WT and KO while RBF did not, renal vascular conductance is lower in the hypertensive KO than in WT or HZ rats. There is an apparent sex difference in conductance between WT on one hand and HZ and KO on the other that becomes nonsignificant when conductance is normalized to body weight. Reduced conductance in KO males is consistent with increased BP and autoregulation. This is not the case for KO females or for HZ females, in which BP is not the issue. The female-to-male ratios of body weight are similar, being 0.65 in WT, 0.63 in HZ, and 0.63 in KO rats, whereas the female-to-male ratios of weight-normalized RBF are 1.06 in WT, 0.75 in HZ, and 0.71 in KO rats. Statistical testing may have failed to detect lower RBF in female HZ and KO rats owing to an insufficient number of replicates. Assuming that the relationship between body weight and kidney weight in female rats is at least similar to that in males, then we are missing something.
The most important point shown by the BP-RBF transfer functions is that KO rats display reasonably effective dynamic autoregulation, albeit with a smaller contribution from TGF than in WT rats. The second point is that coherence declines only slightly at frequencies < fTGF, indicating that the system is remarkably linear in these rats. This is intriguing because the TGF sensor at the macula densa is the most downstream component of the combined autoregulation system, while both TGF and the MR operate on the same vascular smooth muscle. Thus, nonlinearity is to be expected when both systems are operating, even if the MR were not being modulated by TGF. Such modulation has been demonstrated repeatedly by investigators using a variety of experimental designs (Chon et al., 2005; Sosnovtseva et al., 2005; Shi et al., 2006; Sosnovtseva et al., 2007; Siu et al., 2009).

One point that we find perplexing is that all these WKY rats display slower TGF-mediated autoregulation than we have reported previously. Over 25 years we used both outbred (Sprague-Dawley, Wistar, Long-Evans) and inbred strains (SHR, Wistar Kyoto, Brown Norway). Almost without exception they exhibited TGF-mediated gain roll-off at ≈0.03 Hz, as opposed to the ≈0.02 Hz roll-off seen in the current study. We do not have a satisfactory explanation for this difference, but we feel it is noteworthy.

Dynamics of the MR and TGF in ROIs differed between WT and KO rats. The MR in KO rats had a higher fMR but similar maximum power as in WT rats, although it was often difficult to separate the MR from noise. Interestingly, we were able to identify by eye the MR peaks in ROI spectra from 7 of 12 KO rats, but only 1 of 9 WT rats, suggesting that the MR was more organized in the KO rats. That, plus the higher fMR associated with reduced TGF power, suggests release of the MR from TGF modulation in KO rats, as has been suggested to explain the increased fMR in isolated, perfused hydronephrotic kidneys that have only the MR (Marsh et al., 2019). Another possibility would be BP-dependency of fMR (Cupples and Loutzenhiser, 1998).

The lower peak TGF power seen in KO rats indicates reduced TGF gain although, to a considerable extent, the oscillations are still present, as illustrated in Figure 5. The bulk of TGF studies in rats were performed using barbiturate anaesthetics. Those studies routinely demonstrated TGF gain sufficient to regulate pre-glomerular resistance and contribute to autoregulation (Schnermann and Briggs, 1989; Casellas and Moore, 1990; Braam et al., 1993), even in Cx40 KO mice (Oppermann et al., 2013). In addition, Kallskog and Marsh demonstrated additive TGF interactions in nephron pairs of rats anesthetized by Inactin (Kallskog and Marsh, 1990). Barbiturates, and particularly Inactin, reduce TGF gain to below the critical level that causes bifurcation to oscillating dynamics (Pitman et al., 2004). In turn, this suggests that the residual gain in KO rats in the current experiment is above that critical level, at least in some nephrons, and sufficient to initiate conducted vasomotor responses if all components of such responses are present and active.
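The gain-dependent bifurcation invoked here, in which TGF gain above a critical level converts a stable operating point into a sustained oscillation, can be illustrated with a toy delayed negative-feedback loop. The sketch below is a didactic caricature with assumed parameters (delay, sigmoid steepness), not any of the nephron models cited in the text.

```python
import numpy as np

def simulate_tgf_toy(gain: float, tau: float = 15.0, dt: float = 0.05,
                     t_end: float = 600.0) -> np.ndarray:
    """Toy delayed negative feedback: x'(t) = -x(t) + sigmoid(-gain * x(t - tau)).

    With a fixed delay tau (seconds), raising the loop gain past a
    critical value turns the stable steady state into a sustained
    oscillation with a period of a few times the delay -- the qualitative
    behaviour attributed to TGF above. Euler integration for simplicity.
    """
    n = int(t_end / dt)
    lag = int(tau / dt)
    x = np.zeros(n)
    x[: lag + 1] = 0.1                                   # constant history
    for i in range(lag, n - 1):
        feedback = 1.0 / (1.0 + np.exp(gain * x[i - lag]))  # inhibitory sigmoid
        x[i + 1] = x[i] + dt * (-x[i] + feedback)
    return x

low = simulate_tgf_toy(gain=1.0)    # settles to a steady state
high = simulate_tgf_toy(gain=8.0)   # sustained oscillation
```

In this caricature the low-gain loop damps back to its operating point, as barbiturate-suppressed TGF is described as doing, while the high-gain loop oscillates indefinitely, as TGF does in these rats.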
We used examination of instantaneous phase plots to identify synchronization according to the criteria stated in the Methods. Gap junction-mediated communication is certainly not required for BP-dependent entrainment of TGF, which was seen in both WT and KO rats. It is, however, the only known communication pathway that enables TGF synchronization. Synchronization was not observed in any KO rat. Instead, TGF entrainment was observed in temporal association with BP transients, usually BP reductions, and was always short-lived, decaying within 1 or 2 TGF cycles. Parenthetically, the presence of widespread BP-induced TGF entrainment is strong evidence that TGF is active, or can be induced, in most or all nephrons. In contrast to the absence of synchronization in KO rats, 6 of 9 WT rats showed episodic or sustained TGF synchronization that involved either a portion of the field of view or the entire field of view. Both initiation and termination of widespread synchronization were observed in the absence of any BP or RBF perturbation. The duration of synchronization was often in the range of 5-7 min, and in one case 25 min (≥35 TGF cycles), results that are consistent with those of (Postnov et al., 2022). Most of the synchronized ROIs oscillated in phase, while a few were clearly synchronized but out of phase.

Summary and conclusions

We had a Cx40 knockout rat made in which a single base change created a stop codon in exon 1 of Gja5, the gene that encodes Cx40. The KO rats breed and grow normally. They have relatively normal RBF dynamics, although the power in TGF oscillations is much lower than in WT rats. Since TGF oscillations are present in KO rats, TGF is undoubtedly operating in individual nephrons. Synchronization of TGF dynamics is seen in most of the WT rats, but synchronization could not be differentiated from BP-dependent entrainment in any of the KO rats. We conclude that the Cx40 KO rat has TGF that is active in individual nephrons, indicating that Cx40 is not essential for transmission of the TGF signal across the extraglomerular mesangium to the afferent arteriole. Cx40 does appear to be essential for vascular transmission beyond the afferent arteriole, which would otherwise enable conducted vascular responses and synchronization. Overall, these results argue for large radii of TGF synchronization. The absence of TGF synchronization in Cx40 KO rats is consistent with a reduced radius of interaction due to reduced transmission along afferent arterioles and small arteries.

FIGURE 1 Growth curves. The figure shows the relationship between weight and age for male (m) and female (f) WT, HZ, and KO rats. The numbers of animals for each genotype and sex are reported in the figure. Growth curves differed between sexes as expected, but not among genotypes. The HZ rats tended to be older and therefore larger, as some were retired breeders.

FIGURE 2 ROI selection. The figure illustrates ROI selection in a WT rat. The left image shows the blood flow index (BFI) image of 1,000 × 1,000 pixels averaged over 2 minutes. In the processed image in the centre, identified ROIs are outlined and the walls of tubules are visible. The image on the right shows the final ROI map.
FIGURE 4 BP-RBF dynamics. The figure reports RBF dynamics in WT, HZ, and KO rats. The figure legend in panel (C) includes the numbers of animals used. The top panel (A) shows the BP spectral power that is the input to the coherence and transfer functions. The second panel (B) shows the expected high coherence at >0.15 Hz, reflecting a tight linear relationship between BP and RBF. At lower frequencies coherence declines slightly. The third panel (C) shows the gain vectors. The bottom panel (D) shows the phase vectors of the transfer function. The gain reduction that occurs from ≈0.15 to 0.04 Hz and its associated phase peak centred at ≈0.07 Hz are the signature of the MR. Gain <0 dB indicates that RBF is actively attenuating fluctuations from BP. The resonance peak in gain at ≈0.025 Hz and the subsequent roll-off at lower frequencies, together with the associated phase peaks at <0.02 Hz, are the signature of TGF. The MR dynamics indicate weak myogenic autoregulation in all groups. The TGF dynamics indicate adequate autoregulatory capacity in WT, but less so in the HZ and KO groups.

FIGURE 6 This figure depicts records of BP, RBF, and instantaneous TGF phase from two rats. The upper set of panels (A) is from a WT rat and the lower set of panels (B) is from a KO rat. In both records BP is stable throughout, both showing episodes of 0.25-0.5 Hz input from the autonomic nervous system. (A) The WT rat shows widespread entrainment of instantaneous phase in almost all of the 20 ROIs from the start of the record to ≈500 s. Thereafter the TGF signature becomes disordered until ≈700 s, when synchronization begins again, particularly in ROIs 1-9. The synchronization spreads throughout the field of view before the initiating ROIs become disorganized shortly after 900 s. By 1,100 s the field of view is again fully synchronized. (B) In the KO rat there are no BP transients to drive TGF entrainment, and the lack of synchronization throughout the record is evident. There is one TGF cycle of full entrainment at ≈230 s, but the entrainment decays within <2 TGF cycles. The rest of the instantaneous phase record shows little evidence of entrainment and a high incidence of variable TGF cycle duration.

TABLE 1 Basic comparison of WT, HZ, and KO strains. Notes: N = 17 for weight, age, and blood pressure; N = 16 for renal blood flow and conductance. ANOVA with 3 groups × 2 sexes found differences a-e; ANOVA 2 × 2 found difference f. a) Body weight differed by sex, p = 1.25 × 10^−16. b) Body weight differed by strain, HZ being larger, p = 0.00036. c) HZ are older than WT and KO, p = 0.00025. d) Blood pressure differed among groups because it was elevated only in KO rats, p = 5.31 × 10^−8. e) Weight-normalized conductance differed among the three groups, p = 0.0133. f) Weight-normalized conductance differed between the WT and KO groups, p = 0.004.

TABLE 3 MR and TGF variables. Notes: Data are presented as mean ± standard deviation. Maximum powers (MP) for MR and TGF were acquired in the bands 0.09-0.21 Hz and 0.015-0.05 Hz, respectively. fMR and fTGF are the dominant frequencies in those bands at maximum power. Noise power is the average power in the band 0.3-0.5 Hz and did not differ among groups or between WT and KO rats. ANOVA (2 × 2) found that log(MP_MR) did not differ between WT and KO or among groups; fMR was higher in KO than in WT rats (*, p = 0.0496). Log(MP_TGF) was lower in KO than in WT rats (**, p = 0.0235). Results from HZ rats are included for completeness and reference. They are not discussed in the text.
2023-08-31T15:21:21.521Z
2023-08-29T00:00:00.000
{ "year": 2023, "sha1": "95c368b15edc8a9eb5c98885ddb2aab27060485b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnetp.2023.1208303/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b5967cdc8afd1ca5c28b1f230c57eb5a5c01c70d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
254203350
pes2o/s2orc
v3-fos-license
A 14-Year-Old Saudi Boy with Gynecomastia, Cushing Syndrome, Large-Cell Calcifying Sertoli Cell Tumor of the Testis, and Carney Complex

Patient: Male, 14-year-old Final Diagnosis: Carney complex Symptoms: Gynecomastia • testicular pain Medication: — Clinical Procedure: Invasive adrenal venous sampling Specialty: Endocrinology and Metabolic Objective: Rare disease

Background: Carney complex (CNC) is a rare multiple neoplasia syndrome with autosomal dominant inheritance. CNC is frequently misdiagnosed owing to its diverse clinical characteristics. We report the case of a 14-year-old Saudi boy with a history of gynecomastia, Cushing syndrome, large-cell calcifying Sertoli cell tumor of the testis, and CNC.

Case Report: The patient was referred to the pediatric endocrine clinic for evaluation of bilateral, slowly progressing gynecomastia of 1 year's duration. His clinical examination revealed lentigines, bilateral diffuse breast enlargement (consistent with Tanner stage III), and asymmetrical testicular enlargement, more marked on the left side. Other systemic examinations were unremarkable. The initial blood workup showed an elevated estradiol level with unsuppressed cortisol after an overnight 1-mg dexamethasone suppression test. Breast ultrasound (US) confirmed true gynecomastia. Testicular US revealed microcalcification, and the testicular biopsy confirmed a diagnosis of large-cell calcifying Sertoli cell tumor (LCCSCT). A 2-step dexamethasone suppression test showed a paradoxical rise in serum and urine cortisol levels, which is characteristic of PPNAD. LCCSCT and PPNAD are 2 major criteria fulfilling a diagnosis of CNC. The gene test showed a heterozygous mutation in the PRKAR1A gene, which is diagnostic for CNC. The patient underwent bilateral mastoplasty and was planned for radical left orchiectomy.

Conclusions: Gynecomastia and LCCSCT can be presenting features of CNC, which mandates careful, thorough clinical examination and tailored investigation to reach a diagnosis.

Background

Carney complex (CNC) is a rare genetic disorder involving a multiple neoplasia syndrome, first described in 1985 [1]. More than 50% of cases are familial owing to an autosomal dominant inheritance pattern; the remainder appear sporadically as a result of a de novo genetic defect [2]. The pathogenesis of primary pigmented nodular adrenocortical disease (PPNAD) is unclear, but it appears to be associated with pathogenic variants of the PRKAR1A gene [3]. CNC is inherited in an autosomal dominant pattern with high penetrance but heterogeneous expression [4]. Approximately one-quarter of cases are due to de novo mutations. Two genetic loci have been associated with CNC. Inactivating mutations in the protein kinase A type I-alpha regulatory subunit (PRKAR1A) gene on chromosome 17q22-24 are found in most CNC patients [5]. A second locus on chromosome 2p16 is also associated with CNC [6]. Cushing syndrome is the most frequently observed endocrine tumor in CNC, occurring in approximately 25% of affected individuals. Large-cell calcifying Sertoli cell tumors (LCCSCTs) are observed in one-third of affected males within the first decade of life and in most adult males. Up to 75% of individuals with CNC have multiple thyroid nodules, most of which are nonfunctioning thyroid follicular adenomas. Clinically evident acromegaly from a growth hormone (GH)-producing adenoma occurs in approximately 10% of adult patients. Psammomatous melanotic schwannoma (PMS), a rare tumor of the nerve sheath, occurs in an estimated 10% of affected individuals. The median age at diagnosis is 20 years.
The diagnosis of CNC is established in a proband with 2 or more major diagnostic criteria and/or by identification of a heterozygous germline pathogenic variant in PRKAR1A on molecular genetic testing [7,8]. It has been observed that most CNC patients (>80%) have spotty skin pigmentation or skin growths, which usually emerge early in life and can occur anywhere on the body, classically on the mucosa, lips, face, and genital region [2]. The most common non-cutaneous lesions found in CNC are cardiac myxomas, which account for over 50% of CNC-related deaths [9-11]. Sertoli cell tumor is a rare subtype of sex cord-stromal tumors and contributes to less than 1% of all testicular tumors [12,13]. There are 3 subtypes of Sertoli cell tumors: not otherwise specified, and 2 variants (sclerosing Sertoli cell tumors and LCCSCTs) [13]. LCCSCT is an uncommon variant of Sertoli cell tumor [14]. LCCSCT is typically identified by its distinguishing histology: an intratubular growth pattern, with hyalinization of the tubular basement membrane and cells arranged in circumscribed nodules. Large round Sertoli cells with abundant cytoplasm are characteristic, in addition to massive calcifications, which are essential for the diagnosis of LCCSCT [14]. To the best of our knowledge, our case report is the first to describe LCCSCT in a Saudi boy associated with CNC, for which conservative treatment was initially provided along with regular follow-up. Conservative treatment is initially indicated if there is no sign suggestive of malignancy. Physical and ultrasonographic examinations are necessary once a year as part of the follow-up [15]. Aromatase inhibitors can be an efficacious treatment option for patients with LCCSCT [16], but data from clinical trials are needed. The present case is comparable to that of an adolescent who presented with a testicular mass at age 9 years, followed by PPNAD diagnosed at age 22 years [1]. CNC is a rare condition, but over 750 patients have been described in different ethnic groups [7]. Given the significant variability in the clinical manifestations, even within a given family, careful clinical evaluation of first-degree family members is warranted in presumed "sporadic" cases [17]. Familial transmission has been reported more frequently via the affected mother, suggesting non-Mendelian inheritance or impaired male fertility of affected individuals [17]. Early diagnosis allows vigilant continuing observation of tumors and complications, thus improving disease prognosis through early treatment. Herein, we report the case of a 14-year-old boy who had CNC with large-cell calcifying Sertoli cell tumor (LCCSCT) of the testis and primary pigmented nodular adrenocortical disease (PPNAD).

Case Report

A 14-year-old boy was referred to our Endocrinology Department at King Fahd Medical City (KFMC) in 2017 for evaluation of bilateral breast enlargement and hyperprolactinemia. The bilateral breast enlargement was noticed by the family 1 year prior to presentation. It was slowly progressive but not painful, with no discharge from the nipples or galactorrhea, and no skin changes over the breast area. There was no report of headache, vomiting, visual impairment symptoms, or acne. He had no history of intracranial tumor or head irradiation. The parents of the patient were second-degree cousins. The patient was the eldest child among 3 healthy siblings and was developmentally normal, with average school performance.
The examination revealed a healthy-appearing patient who was not dysmorphic, pale, or jaundiced. His weight was 46 kg (90th percentile), height was 156 cm (90th percentile), and body mass index was 18.5 kg/m2 (80th percentile). Lentigines were found on the membrane lining the eyes and on the sclera (Figure 1). There was hyperpigmentation of the gingiva. Bilateral diffuse breast enlargement was consistent with Tanner stage III. He had neither a focal breast lesion nor palpable axillary lymph nodes. Pubic hair was Tanner stage II. The physical examination also revealed enlargement of the left testis. The clinical laboratory findings revealed a serum prolactin level of 576 ng/mL (normal is less than 20 ng/mL) and serum estradiol of 178 pg/mL (normal is 10-50 pg/mL). Other laboratory findings, such as the complete blood count, liver function test, and kidney function test, were normal. The serum sodium and potassium levels were 136 mEq/L and 3.9 mEq/L, respectively. Lactate dehydrogenase, luteinizing hormone, follicle-stimulating hormone, testosterone, and dehydroepiandrosterone sulfate were in normal ranges. The β-human chorionic gonadotropin and α-fetoprotein serum levels showed a normal profile. An ultrasonography evaluation of the breast revealed prominent fibro-fatty tissue bilaterally, likely a manifestation of physiological gynecomastia; there was no evidence of suspicious solid or cystic mass lesions. An ultrasonography evaluation of the testis revealed multiple microcalcifications (Figure 2). A computed tomography (CT) scan of the pelvis revealed bilateral testicular calcification, larger on the left side (Figure 3). Based on the findings of ultrasonography, a testicular biopsy was performed, which revealed a large-cell calcifying Sertoli cell tumor (Figure 4). The case was discussed in the Tumor Board meeting, and conservative treatment with regular follow-up was initiated. High-dose adrenocorticotropic hormone (ACTH) stimulation testing revealed a normal profile and did not suggest a diagnosis of adrenal insufficiency. Similarly, molecular genetics reports on congenital adrenal hyperplasia were also negative. The low-dose (1-mg) overnight dexamethasone suppression test showed a high cortisol level of 139 nmol/L (a normal response to the dexamethasone suppression test should be less than 50 nmol/L). The 6-day (2-step) standard dexamethasone suppression test (Table 1) showed a low serum ACTH level of 0.8 pg/mL (normal range: 5-50 pg/mL), a paradoxical rise in the serum cortisol level to 503 nmol/L, and 24-h urine free cortisol of more than 1000 nmol/L on day 6 (normally 15-224 nmol/L), which is characteristic of primary pigmented nodular adrenocortical disease (PPNAD). Therefore, the diagnosis of ACTH-independent Cushing syndrome (CS) was confirmed. The left testicular mass biopsy report (Figure 4) described a sex cord-stromal tumor, favoring large-cell calcifying Sertoli cell tumor. Thus, because of the large-cell calcifying Sertoli cell tumor of the testis and PPNAD, the diagnosis of CNC was suspected. A blood sample was sent to Centogene Lab (Germany), where a gene panel for LCCSCT using next-generation sequencing (NGS) was done and showed a heterozygous, nonsense mutation in the PRKAR1A gene, a likely pathogenic variant, consistent with the genetic diagnosis of Carney complex type 1. Regarding other possible comorbidities of CNC, a wide examination was performed to exclude the involvement of other organs.
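The diagnostic logic of the 2-step (Liddle's) dexamethasone test described above reduces to a simple comparison of urinary free cortisol before and after high-dose dexamethasone: suppression is the normal response, whereas a paradoxical rise suggests PPNAD. The sketch below illustrates that comparison; the 50% rise threshold is a commonly cited criterion from the literature rather than a value taken from this report, and the variable names are ours.

```python
def liddle_paradoxical_rise(ufc_baseline: float, ufc_day6: float,
                            rise_threshold: float = 0.50) -> bool:
    """Flag a paradoxical UFC rise during the 6-day dexamethasone test.

    ufc_baseline, ufc_day6: 24-h urinary free cortisol (same units, e.g. nmol/L).
    A rise of at least `rise_threshold` (50% by a commonly cited criterion)
    above baseline, instead of suppression, is suggestive of PPNAD.
    """
    return ufc_day6 >= ufc_baseline * (1.0 + rise_threshold)

# Illustrative numbers only: a baseline within the stated reference range
# (15-224 nmol/L) rising to >1000 nmol/L on day 6, as in this patient,
# is unambiguously paradoxical.
print(liddle_paradoxical_rise(200.0, 1000.0))  # True
```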
Ultrasonography of the thyroid and pelvis, as well as echocardiography, did not reveal abnormalities. Brain and pituitary gland MRI were normal. Invasive bilateral adrenal venous sampling was done (Table 3) and showed almost double the normal amount of cortisol secretion from the left adrenal gland, which confirmed left adrenal lateralization (predominance) as the source of cortisol secretion. After confirming the diagnosis, the family members were offered genetic testing for the same mutation (targeted gene mutation test); the mother and siblings did not carry the mutation (wild-type gene). His father was unable to undergo genetic testing because of special social circumstances. The patient underwent bilateral mastoplasty because of distressing gynecomastia. For the Cushing syndrome, conservative management vs laparoscopic adrenalectomy was initially discussed with the family, and as the patient was not showing any apparent clinical manifestation of Cushing syndrome, conservative management with regular follow-up was chosen.

The patient is 16 years old at present. During regular follow-up visits in the urology clinic, he has been reporting left testicular pain, for which testicular US was repeated and showed significant progression of the left testicular calcification, and a radical left orchiectomy was planned. In the endocrine clinic, he started to show intermittent elevation of blood pressure, a higher BMI (29-31 kg/m2), and the new appearance of pink striae on the abdominal wall. His repeated HbA1c, thyroid function test, and IGF-1 levels were normal. For the Cushing features, a 24-h urine collection for urinary free cortisol (UFC) excretion showed a level of 34.7 mcg/m2/day (normal is less than 70 mcg/m2/day), with follow-up for reevaluation.

Discussion

This case report shows that gynecomastia can be the presenting feature of an underlying multisystem disorder, which mandates thorough examination and tailored investigation to reach a diagnosis. We presented a patient with classic clinical characteristics of CNC, including spotty skin pigmentation on the membrane lining the eyes and on the sclera, LCCSCT, and ACTH-independent Cushing syndrome (PPNAD). The molecular genetic analysis identified a known pathogenic mutation of PRKAR1A. Taking all this together, our patient met the diagnostic criteria of CNC (Table 4) [7]: (1) spotty skin pigmentation with the distinctive presence on the face, lip, and sclera; (2) LCCSCT, as confirmed by the testicular biopsy result; and (3) ACTH-independent Cushing syndrome caused by PPNAD. The patient thus manifested primary pigmented nodular adrenal disease (PPNAD) in association with ACTH-independent Cushing syndrome (CS). The diagnosis of ACTH-independent CS related to PPNAD can be challenging, and the best screening test for CS is measurement of urine free cortisol excretion [18]. The current case report found more than 1000 nmol/L free cortisol in a 24-h urine sample on day 6 of the dexamethasone suppression test. Moreover, a low serum ACTH level of 0.8 pg/mL (normal range: 5-50 pg/mL) and a serum cortisol level of 503 nmol/L were noted. Our patient had a less aggressive course of CNC in comparison to a previous case report by Rosenblum et al in 2017 [1], in which the patient had LCCSCT, pancreatic cancer, and Cushing syndrome due to adrenocortical carcinoma.
On CT examination, the adrenal glands in PPNAD are often interpreted as normal; occasionally they show a "string of beads" appearance due to the presence of multiple small nodules in an otherwise atrophic adrenal cortex [19]. However, it is worth emphasizing that, in cases like this one, because of the patient's low ACTH levels and normal adrenal imaging, the hypothesis of exogenous glucocorticoid use must also be carefully investigated and ruled out. Furthermore, it is important to ensure that the sampling and assaying of ACTH have been performed adequately; otherwise, the ACTH values might be falsely low, inducing diagnostic error [20]. PPNAD is a rare form of ACTH-independent CS that can appear alone, but 90% of cases are linked to CNC [21]. The symptoms are often mild, and patients typically present long after their onset. The disease commonly appears during the second decade of life, but an age range of 4-44 years has been recorded in the literature [17,22]. CS in PPNAD can present as overt CS, subclinical CS, cyclic CS, or atypical (asthenic) CS, of which overt CS is the most common [23]; our patient was classified as having subclinical CS.

[Table 4 (recovered fragment). Major diagnostic criteria of CNC [7]: (1) spotty skin pigmentation with typical distribution (lips, conjunctiva and inner or outer canthi, vaginal and penile mucosa); (2) myxoma (cutaneous and mucosal) or cardiac myxoma; (3) breast myxomatosis or fat-suppressed magnetic resonance imaging findings suggestive of this diagnosis; (4) primary pigmented nodular adrenocortical disease or a paradoxical positive response of urinary glucocorticosteroid excretion to dexamethasone administration during Liddle's test; (5) acromegaly as a result of a growth hormone-producing adenoma; (6) large-cell calcifying Sertoli cell tumor or characteristic calcification on testicular ultrasound; (7) thyroid carcinoma (at any age) or multiple hypoechoic nodules on thyroid ultrasound in a prepubertal child; (8) psammomatous melanotic schwannomas; (9) blue nevus, epithelioid blue nevus (multiple); (10) breast ductal adenoma (multiple); (11) osteochondromyxoma.]

The preferred treatment option for PPNAD with overt CS is bilateral total adrenalectomy [24]. A laparoscopic approach has a lower rate of morbidity than the open technique and involves less postoperative pain, a shorter hospital stay, and lower overall cost [19]. In case of surgical failure, or before adrenalectomy, pharmacotherapy with ketoconazole, metyrapone, mitotane, and trilostane, alone or in combination, can control hypercortisolism by inhibiting steroidogenesis. Fluconazole has recently been suggested as a safer alternative to ketoconazole [25].

Conclusions

We presented a case of CNC with gynecomastia, large-cell calcifying Sertoli cell tumor (LCCSCT) of the testis, and primary pigmented nodular adrenocortical disease (PPNAD). PPNAD is a rare cause of ACTH-independent CS in childhood and can indicate underlying CNC. Moreover, LCCSCTs are very rare tumors, and most of them are benign. It is highly recommended that clinical assessment and genetic testing, including PRKAR1A mutation analysis, be conducted promptly in suspected CNC patients. The account of this patient's clinical manifestations adds to the knowledge about CNC and PRKAR1A mutations, and the ability to recognize CNC is important for the prevention of severe complications.
Declaration of Figures' Authenticity

All figures submitted have been created by the authors, who confirm that the images are original, with no duplication, and have not been previously published in whole or in part.
A Comparison of the Efficacy and Safety of Intravenous Followed by Oral Delafloxacin With Vancomycin Plus Aztreonam for the Treatment of Acute Bacterial Skin and Skin Structure Infections: A Phase 3, Multinational, Double-Blind, Randomized Study

This phase 3 trial in acute bacterial skin and skin structure infections (ABSSSIs) showed IV followed by oral delafloxacin to be noninferior to IV vancomycin/aztreonam combination therapy and well tolerated. Oral delafloxacin appears to maintain the initial response achieved with IV delafloxacin.

Skin and soft tissue infections (SSTIs) represent a heterogeneous range of diseases and severity and are among the most common infections seen in hospital settings [1][2][3][4][5]. While gram-positive pathogens are most common, gram-negative pathogens play a role in both polymicrobial and monomicrobial SSTIs, and patients with mixed or gram-negative infections have a longer length of stay, greater mortality, and higher total costs [3,4,6]. Though early appropriate therapy is associated with better clinical outcomes and lower healthcare resource utilization, many patients with complicated SSTIs (cSSTIs) do not receive adequate first-line treatment [7,8]. Inadequate empiric therapy (odds ratio, 9.25) is one of the leading risk factors for cSSTI treatment failure [9]. Delafloxacin is an anionic fluoroquinolone antibiotic with a number of unique properties that may make it useful in the treatment of severe infections, including ABSSSIs. Compared to other quinolones, delafloxacin demonstrates excellent in vitro activity against gram-positive pathogens, including methicillin-resistant Staphylococcus aureus (MRSA), while retaining good activity against gram-negative organisms [10]. Delafloxacin is more active in vitro than levofloxacin against most gram-positive pathogens, including levofloxacin-nonsusceptible isolates, and is 32-fold more active than levofloxacin against MRSA isolates [11]. This phase 3 study compared the efficacy and safety of intravenous (IV) followed by oral delafloxacin monotherapy to IV vancomycin plus aztreonam combination therapy in adult patients with ABSSSIs caused by either gram-positive or gram-negative pathogens.

Trial Design

This multicenter, multinational, stratified, randomized, double-blind trial enrolled patients at 76 study centers in 16 countries between May 2014 and January 2016. Written informed consent was obtained from each patient, and the study protocol was approved by an independent ethics committee or institutional review board at each site. The trial was conducted in accordance with the Declaration of Helsinki and International Conference on Harmonisation Good Clinical Practice.

Setting and Participants

Eligibility criteria included age ≥18 years and a diagnosis of ABSSSI defined as cellulitis/erysipelas, wound infection, major cutaneous abscess, or burn infection characterized by ≥75 cm² of erythema and ≥2 signs of systemic infection. Patients could receive study drug as inpatients or outpatients, provided that all study drug infusions and oral tablets were administered by blinded study site staff. Complete inclusion and exclusion criteria are presented in the Supplementary Materials.

Interventions

Study treatments are summarized in Table 1.
Outcomes

The US Food and Drug Administration (FDA)-defined primary efficacy endpoint [12] was the objective response assessed 48-72 hours after initiation of treatment, based on a ≥20% decrease in lesion size measured by digital planimetry of the leading edge in the absence of clinical failure. Clinical failure was defined as (1) <20% reduction in the spread of erythema of the ABSSSI lesion; (2) administration of rescue antibacterial drug therapy, or of nonstudy antibacterial drug therapy for treatment of the ABSSSI, before the primary efficacy endpoint assessment; (3) unplanned surgical intervention, excluding limited bedside debridement and standard wound care, before the primary efficacy endpoint assessment; or (4) death within 72 hours after initiation of study drug. If digital planimetry was not available within the 48- to 72-hour window, patients were classified as missing and counted as clinical failures in the intent-to-treat (ITT) analysis. All sites had to have an approved vancomycin blinding plan, with vancomycin dosing managed by designated unblinded personnel.

[Table 1 fragments recovered from the text: aztreonam 2 g Q12h was administered as a 30-min infusion until baseline cultures were confirmed negative for gram-negative pathogens; after the first 6 doses, patients randomized to vancomycin also received oral placebo BID to maintain blinding; for creatinine clearance 15-29 mL/min, renal adjustment of vancomycin was allowed as part of each site's approved dosing plan to maintain the trough target, and aztreonam was reduced to 1 g Q12h, administered as a 30-min infusion until baseline cultures were confirmed negative for gram-negative pathogens.]

The European Medicines Agency (EMA)-defined primary efficacy measure [13] was the investigator assessment of clinical response at the follow-up (FU) visit in the ITT population, defined as all randomized patients. Clinical response was based on ABSSSI signs and symptoms and was categorized as cure (complete resolution); improved (some symptoms but no additional need for antibiotics); failure (additional nonstudy antibiotics required); or indeterminate (incomplete assessment). Patients with missing follow-up data and those with improved outcomes were combined with failures in the primary ITT analysis. An additional secondary endpoint was investigator-assessed success (cure plus improved with no further antibiotic needed) at the FU visit. Other antibiotic studies in skin infections have defined a successful outcome as resolution or near resolution of signs and symptoms that no longer require antibiotic therapy; this definition aligns with the definition of success in this study. Patients' subjective assessment of pain was recorded on a numerical rating scale ranging from 0 (no pain) to 10 (pain as bad as imaginable) at baseline, during treatment, and at end of therapy (EOT), FU, and late follow-up (LFU).

Microbiological Assessments

Microbiological response was categorized as documented eradicated (baseline pathogen absent in follow-up cultures); presumed eradicated (no follow-up material available for culture, but the patient had a clinical response of success); documented persisted (baseline pathogen present in follow-up cultures); or presumed persisted (no follow-up material available for culture, but the patient had a clinical response of failure).

Safety and Tolerability Assessments

Safety assessments included all adverse events (AEs), physical examinations, vital sign measurements, 12-lead electrocardiograms at baseline and as clinically indicated thereafter, and clinical laboratory tests.
Patients were contacted by telephone 30 days after the final dose of study drug to assess the occurrence of long-term AEs. Treatment-emergent adverse events (TEAEs) were defined as events that occurred or worsened following administration of the first dose of the study drug through the 30-day telephone follow-up.

Randomization and Blinding

Patients were randomized (1:1), with randomization stratified by infection category and baseline body mass index (BMI). Infection categories were limited to ≤25% of patients with a major cutaneous abscess and ≤30% with wound infections. Patients were characterized by BMI as <30 kg/m² or ≥30 kg/m² (obese); patients with a BMI ≥30 kg/m² were limited to no more than 50% of enrolled patients.

Statistical Methods

Separate statistical analysis plans for the FDA and the EMA were prospectively developed prior to database lock and unblinding. Clinical efficacy outcomes were analyzed for the ITT population, while the safety analysis population included all enrolled patients administered at least 1 dose of study drug. Analysis of microbiological outcomes was based on the microbiological intent-to-treat (MITT) population. There were multiple microbiologically evaluable (ME) and clinically evaluable (CE) analysis sets, each based on the type of assessment (investigator-assessed or objective) and the timing of the assessment (48-72 hours, EOT, FU, LFU). Descriptive statistics (mean, median, minimum, and maximum) described continuous variables, while counts and percentages were calculated for categorical data. The rate for the primary FDA-defined efficacy endpoint was the sample responder rate, defined as responder / (responder + nonresponder). The rate for the primary EMA-defined efficacy endpoint was the sample cure rate, defined as cure / (cure + failure). A 2-sided 95% confidence interval (CI) for noninferiority testing was computed based on the difference in sample responder rates and investigator-assessed response rates between delafloxacin and vancomycin/aztreonam at 48-72 hours following initiation of treatment, using a nonstratified method [14]. Noninferiority was concluded if the lower limit of the 2-sided 95% CI exceeded -10%. Mean differences between treatments were expressed as delafloxacin minus vancomycin/aztreonam. All analyses were performed with SAS software version 9.2 or higher (SAS Institute, Cary, North Carolina).

Patient Disposition and Analysis Sets

Eight hundred fifty patients were randomized (ITT population), including 423 to delafloxacin and 427 to vancomycin/aztreonam (Figure 1). Overall, 766 (90.1%) enrolled patients completed the study through the FU visit. A total of 842 randomized patients received at least 1 dose of study drug (safety population), and 552 patients had an identified baseline pathogen known to cause ABSSSIs (MITT population).

Baseline Characteristics

Baseline characteristics were similar between treatment groups (Table 2). Overall, 46.8% of patients were from North America, with an additional 39.8% from Europe; 23.5% had received antibacterial therapy in the 14 days prior to enrollment. ABSSSI categories were similarly distributed between the treatment groups (Table 3).

Objective Response

The percentage of patients classified as responders in the ITT analysis population at 48-72 hours after initiation of study drug was similar between the 2 groups, at 83.7% and 80.6% for delafloxacin and vancomycin/aztreonam, respectively.
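This comparison can be checked with a short calculation. The following is a minimal R sketch of the unstratified Wald-type 95% CI described in Statistical Methods, not the trial's exact analysis; the responder counts are back-calculated from the reported ITT percentages (83.7% of 423 and 80.6% of 427), so they are approximate.

```r
# Difference in responder rates with a Wald 95% CI; noninferiority is
# declared if the lower CI bound exceeds the -10% margin.
ni_test <- function(x1, n1, x2, n2, margin = -0.10, conf = 0.95) {
  p1 <- x1 / n1                      # delafloxacin responder rate
  p2 <- x2 / n2                      # vancomycin/aztreonam responder rate
  diff <- p1 - p2                    # expressed as delafloxacin minus comparator
  se <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
  z  <- qnorm(1 - (1 - conf) / 2)
  ci <- diff + c(-1, 1) * z * se
  list(difference = diff, ci = ci, noninferior = ci[1] > margin)
}

# Counts reconstructed from the reported percentages (approximate)
ni_test(354, 423, 344, 427)
# difference ~ 0.031, CI ~ (-0.020, 0.083); lower bound > -0.10 -> noninferior
```

Under these assumed counts, the sketch reproduces the difference and CI reported next.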
The difference in responder rates was 3.1% (95% CI, −2.0% to 8.3%) (Figure 2), demonstrating noninferiority of delafloxacin compared to vancomycin/aztreonam. Noninferiority of delafloxacin was also demonstrated in the CE, ME, and MITT analyses. The percentage reduction from baseline in digital measurements of erythema was similar between the 2 groups at each visit (48-72 hours, EOT, FU, and LFU) in the ITT analysis group.

Investigator-Assessed Response

The primary efficacy endpoint for the EMA submission and a secondary efficacy endpoint for the FDA submission was the investigator-assessed response of signs and symptoms of infection at FU in the ITT population. The cure rate of 57.7% for the delafloxacin group was noninferior to the rate of 59.7% observed in the vancomycin/aztreonam group, with a difference of -2.0% (95% CI, -8.6% to 4.6%). Noninferiority of delafloxacin compared to vancomycin/aztreonam was also confirmed for the MITT, CE, and ME analysis sets (Figure 2). A sensitivity analysis in which success was defined as cure plus improved and failure as indeterminate plus failure revealed similar success rates between the 2 groups in the ITT analysis set (87.2% and 84.8% [difference, 2.5%; 95% CI, -2.2% to 7.2%] for the delafloxacin and vancomycin/aztreonam groups, respectively), as well as for the CE, MITT, and ME analysis sets (Figure 2). In patients with bacteremia, 8 of 11 (72.7%) delafloxacin-treated patients and 5 of 8 (62.5%) vancomycin/aztreonam-treated patients had successful outcomes (investigator assessment of success, ITT). This included 1 patient with Pseudomonas aeruginosa bacteremia who was a clinical success in the delafloxacin arm. The mean baseline patient-reported pain scores were 7.4 and 7.2 for delafloxacin and vancomycin/aztreonam, respectively. In the ITT analysis set, mean pain scores were similar between the delafloxacin and vancomycin/aztreonam groups at the FU assessment, at 0.5 and 0.6, respectively. Mean pain scores were the same for both groups at EOT and LFU, at 1.2 and 0.3, respectively. The most common pathogen was S. aureus; rates of microbiological success approached 100.0% in both groups at FU. Per-pathogen early objective response at 48-72 hours and microbiological response rates were similar between the 2 treatment groups (Tables 4 and 5). The microbiological response was similar for patients with MRSA infections, at 96.0% and 97.0% for delafloxacin and vancomycin/aztreonam, respectively. Additionally, there was 97% documented or presumed eradication of the levofloxacin-nonsusceptible S. aureus isolates in the delafloxacin group. No isolates showed an increase in delafloxacin MIC values during the course of therapy; emergence of resistance was not seen.

Safety

In the safety analysis set, 43.6% of those administered delafloxacin and 39.3% of the vancomycin/aztreonam group reported 1 or more TEAEs, and the percentage of TEAEs considered related to study drug was comparable between the 2 groups (Table 6). The percentages of TEAEs resulting in early discontinuation of study drug and of serious AEs were similar between treatment groups. Most TEAEs were considered mild in severity in both treatment groups, with 30 patients experiencing ≥1 severe TEAE, including 16 (3.8%) patients in the delafloxacin group and 14 (3.3%) patients in the vancomycin/aztreonam group. Two deaths occurred in the vancomycin/aztreonam group and none in the delafloxacin arm; both deaths were considered not related to treatment.
One (0.2%) patient treated with delafloxacin developed Clostridium difficile diarrhea (after prior treatment failure on trimethoprim/sulfamethoxazole and clindamycin), with no cases reported in the vancomycin/aztreonam arm. There were no cases of tendinitis, tendon rupture, or myopathy, and there was 1 case of paresthesia in each treatment group that was thought to be potentially related to treatment. There were no reports of treatment-related hypo- or hyperglycemia during the trial in either group. There were no significant differences between the 2 treatments in changes from baseline in hematology and/or chemistry parameters. There were no increases in hepatic AEs in the delafloxacin treatment group compared to vancomycin/aztreonam. A lower percentage of patients in the delafloxacin group than in the vancomycin/aztreonam group had an alanine aminotransferase value at least once postbaseline that was >5 times the upper limit of normal (ULN) (1.2% [5/417] delafloxacin vs 1.9% [8/425] vancomycin/aztreonam). Only 4 and 2 patients in the delafloxacin and vancomycin/aztreonam groups, respectively, had aspartate aminotransferase >5 times the ULN at any time during the trial. No patient in either treatment group met potential Hy's law criteria. Serum creatinine >2 times the ULN was seen in 7 vancomycin-treated patients at any time during the trial, compared to no reports in delafloxacin patients.

DISCUSSION

This phase 3 trial of IV followed by oral delafloxacin showed that, in patients with ABSSSI, IV/oral delafloxacin monotherapy was noninferior to IV vancomycin/aztreonam combination therapy. The addition of oral delafloxacin appears to maintain the initial clinical response seen with IV delafloxacin, providing an option of switching from IV to oral therapy as soon as patients are clinically stable. Delafloxacin patients had comparable per-pathogen microbiological response rates vs vancomycin/aztreonam patients against important pathogens that cause ABSSSIs, including MRSA and gram-negative bacteria. The eradication rate in patients with methicillin-susceptible S. aureus (MSSA) was higher in the delafloxacin group than in the vancomycin/aztreonam group. Although the numbers were small, IV/oral delafloxacin monotherapy had eradication rates of 100% for gram-negative pathogens such as Escherichia coli, P. aeruginosa, and Klebsiella pneumoniae, including isolates from monomicrobial infections. Emergence of resistance was not seen in the study. In a resistance selection study, delafloxacin demonstrated a low probability for the selection of resistant mutants in MRSA [15,16]. In addition, analysis of phase 3 data shows high eradication of fluoroquinolone-nonsusceptible isolates. Delafloxacin appeared to be well tolerated in this study, with a lower rate of discontinuation due to related TEAEs than vancomycin/aztreonam: 5 of 417 (1.2%) and 10 of 425 (2.4%), respectively. Previous studies have documented that delafloxacin was not associated with QT prolongation [17] or phototoxicity [18], and it has minimal potential for drug interactions [19].
Similar results were seen in a recently published phase 3 study of 660 patients that compared delafloxacin 300 mg with vancomycin 15 mg/kg plus aztreonam 2 g, each administered twice daily intravenously for 5-14 days, in the treatment of ABSSSIs; delafloxacin was found to have clinical activity comparable to vancomycin and was well tolerated [20]. While gram-positive pathogens are the most common bacteria identified in ABSSSI, gram-negative pathogens are increasing in prevalence and must be considered when selecting initial empiric therapy [3]. A large prospective observational study found that nearly 25% of patients hospitalized for cSSTIs received initial inappropriate therapy, a finding similar to those of other studies [8,[21][22][23][24][25]. The most common independent risk factor for initial inappropriate therapy was infection involving gram-negative pathogens. Clinicians must be attuned to the specific patient risk factors that warrant consideration of gram-negative coverage [3,9,25]. The current FDA definition of ABSSSI used in this study excludes infection types such as diabetic foot infection, osteomyelitis, and decubitus ulcer from ABSSSI studies and limits the ability to investigate patients with gram-negative infections. Even given this study limitation, 20.7% of patients in this study had a gram-negative pathogen identified. Though further study in patients most likely to have a gram-negative infection would be important, delafloxacin may offer a monotherapy option for the treatment of ABSSSI in these patient types. Because a signal was seen in previous studies indicating that delafloxacin may provide benefit in obese patients, randomization in this study was stratified for obesity [20,26]. Rates of cure and success in obese patients were similar between the delafloxacin and vancomycin/aztreonam groups. Delafloxacin administered at the standard doses of 300 mg every 12 hours IV and 450 mg every 12 hours orally was found to provide good outcomes in obese patients (BMI ≥30 kg/m²), potentially simplifying dosing in this patient population. Because of limitations on vancomycin dosing and infusion time, and thus on blinding, patient weight was limited to a maximum of 200 kg. Additional limitations of this study include a low number of burn and surgical wound infections and, relative to the general population, lower numbers of older adults and nonwhite patients. Patients in this study were enrolled based upon the most recent FDA guidance for industry for the study of drugs for the treatment of ABSSSI. This guidance was developed to support an indication for the treatment of ABSSSI and to exclude less serious skin infections such as impetigo and minor cutaneous abscess [27]. Use of this guidance has made the results of recent trials more consistent by defining the infection types and lesion sizes that should be included, as well as the exclusion criteria. Extensive use of fluoroquinolones has led to recognition of a variety of rare AEs and to acknowledgement that the class should be used more appropriately, as noted in FDA guidance [28]. Delafloxacin product labeling includes these risks associated with the fluoroquinolones. Based on currently available data, delafloxacin does not appear to be associated with an increased risk of the AEs associated with other fluoroquinolones [29]. Future observation as the drug is more widely used in the clinic will be prudent.
In this study of patients with ABSSSI, IV/oral delafloxacin monotherapy was noninferior to IV vancomycin/aztreonam combination therapy for both the objective and the investigator-assessed response rates and was well tolerated. With both an IV and an oral formulation, delafloxacin offers a potential treatment option for ABSSSI due to gram-positive (including MRSA) and gram-negative bacteria.
Identification of diagnostic mRNA biomarkers in whole blood for ankylosing spondylitis using WGCNA and machine learning feature selection

Ankylosing spondylitis (AS) is a common inflammatory spondyloarthritis affecting the spine and sacroiliac joint that finally results in sclerosis of the axial skeleton. Aside from human leukocyte antigen B27, transcriptomic biomarkers in blood for AS diagnosis remain unknown. Hence, this study aimed to investigate credible AS-specific mRNA biomarkers in the whole blood of AS patients by analyzing an mRNA expression profile (GSE73754) downloaded from the Gene Expression Omnibus, which includes AS and healthy control blood samples. Weighted gene co-expression network analysis was performed and revealed three mRNA modules associated with AS. Gene set enrichment analysis of these modules provided functional annotations revealing immune biological processes that occur in AS. Several feature mRNAs were identified by analyzing the hubs of the protein-protein interaction network, which was based on the intersection between differentially expressed mRNAs and the mRNA modules. A machine learning-based feature selection method, SVM-RFE, was used to further screen out 13 key feature mRNAs. After verification by qPCR, IL17RA, Sqstm1, Picalm, Eif4e, Srrt, Lrrfip1, Synj1 and Cxcr6 were found to be significant for AS diagnosis. Among them, Cxcr6, IL17RA and Lrrfip1 were correlated with the severity of AS symptoms. In conclusion, our findings provide a framework for identifying key mRNAs in the whole blood of AS patients that is conducive to the development of novel diagnostic markers for AS.

Introduction

As a kind of chronic axial spondyloarthritis, ankylosing spondylitis (AS) is characterized by aseptic sacroiliitis, spinal stiffness and deformity, ultimately leading to severe disability in patients. Due to its undefined etiology and the paucity of effective early detection methods, the diagnosis of AS is delayed by an average of 8 years (1)(2)(3). To date, human leukocyte antigen B27 (HLA-B27), C-reactive protein (CRP) and matrix metalloproteinase 3 (MMP-3) have been found to be associated with AS, with positivity in 85-95% of patients with AS (4,5). However, they are also significantly positive in most patients with other immunologic disorders (6)(7)(8)(9)(10), indicating their insufficient diagnostic value for assessing AS activity and predicting therapeutic effectiveness. Therefore, to facilitate early diagnosis and assess AS activity, finding novel biomarkers with satisfactory sensitivity and specificity by exploring the molecular mechanisms of AS is crucial. With the rise of high-throughput transcriptomic techniques such as microarray and sequencing, multiple bioinformatic methods have been developed and applied to construct gene correlation networks on a large scale, shedding new light on the screening of key RNAs in terms of molecular interactions and on the exploration of candidate biomarkers for diseases (11). Compared with other network analytical methods, weighted gene co-expression network analysis (WGCNA) is a novel systematic biological method that describes the correlation between the expression levels of genes with a weighted value rather than an all-or-none dichotomy (12).
Compared with analyzing single differentially expressed genes, WGCNA clusters mRNAs into modules that are more stable and comprehensive in reflecting the underlying pathological mechanisms of transcriptomic alterations by calculating the topological parameters of gene correlations. Moreover, WGCNA reveals the correlation of each mRNA module with different clinical traits of interest, which provides more clues for identifying specific biomarkers or therapeutic targets (13). Generally, using traditional experimental methods to validate the function of genes filtered by microarray and sequencing is a long process because of the large amount of data (14). Furthermore, the redundancy and collinearity of high-throughput data severely disrupt the accuracy of bioinformatic analyses. To solve this problem, many gene selection algorithms based on machine learning have been proposed to remove irrelevant or redundant information or features. Among these algorithms, recursive feature elimination based on support vector machines (SVM-RFE) is an effective tool for gene selection (15). As a backward elimination method, SVM-RFE ranks genes or features based on the squared sum of the feature coefficients and selects the top-ranked genes that significantly influence the classification or identification of different clinical traits (16). Hence, applying SVM-RFE to identify key mRNAs or biomarkers from transcriptomic data is promising. To identify novel biomarkers for AS from whole blood, we utilized a microarray dataset to perform WGCNA. After generating the modules of mRNAs specific to AS, we performed gene set enrichment analysis (GSEA) with Gene Ontology (GO) on the mRNAs of these modules and then overlapped them with differentially expressed mRNAs to screen out more specific feature mRNAs for constructing a protein-protein interaction (PPI) network. Based on this network, we found hub mRNAs through Cytoscape calculations. We then applied SVM-RFE to these hub mRNAs and screened out 13 feature mRNAs. After verification through qRT-PCR and correlation analysis, 8 key mRNAs were finally identified as candidate biomarkers for AS diagnosis.

AS patients and control group

The Ethics Committee of Shanghai Changzheng Hospital approved this study. All included AS patients and control donors provided informed consent covering the details of the present study. According to the modified New York criteria (17), 40 AS patients were included in this study; in addition, 40 healthy donors were recruited as the control group. The general information (age and gender), symptoms, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) and Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) of the patients were recorded (Table 1).

Acquisition of microarray data and processing

The microarray dataset GSE73754 by Eric Gracey et al (18) was downloaded from the Gene Expression Omnibus (GEO) database for analysis. This dataset comprises whole blood mRNA expression data from 72 subjects (52 AS patients and 20 healthy controls). The raw data of GSE73754 were preprocessed using the "affy" and "limma" packages available from Bioconductor in R. Missing values were imputed using the k-nearest neighbor algorithm (19). Normalization of the raw data was performed using the robust multiarray average algorithm (20). The batch effect was eliminated using the "sva" package of R, based on the ComBat method.
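For readers who want to retrace this preprocessing, the pipeline described above can be sketched in a few lines of R. This is a minimal illustration, not the authors' exact script; `cel_dir` (the directory of raw CEL files) and `batch` (a per-sample batch factor) are placeholder names.

```r
# Sketch of the preprocessing pipeline: RMA normalization, k-nearest-neighbour
# imputation of missing values, and ComBat batch-effect correction.
library(affy)    # reading and RMA-normalizing Affymetrix CEL files
library(impute)  # impute.knn() for k-nearest-neighbour imputation
library(sva)     # ComBat() for batch-effect removal

raw  <- ReadAffy(celfile.path = cel_dir)   # cel_dir: placeholder path
eset <- rma(raw)                           # robust multiarray average
expr <- exprs(eset)                        # probes x samples expression matrix

expr <- impute.knn(expr)$data              # replace missing values via kNN
expr <- ComBat(dat = expr, batch = batch)  # batch: placeholder sample factor
```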
Because these data are publicly available, approval from a local ethics committee was not required for their analysis.

WGCNA

The "WGCNA" package of R was used for clustering modules and constructing a co-expression network. To eliminate noise and speed up the computation, the mRNAs whose variance in expression was in the top 25% of all the expression profiles were selected. The soft-thresholding power β was determined based on the scale-free topology fit index. Based on weighted Pearson correlation coefficients, an adjacency matrix was constructed to reveal unsupervised co-expression relationships between mRNAs. To simplify this step, the function "blockwiseConsensusModules" was run with a minimum module size of 30 to construct a network and detect consensus modules. The conservation of each module was assessed using the "modulePreservation" function, which yields a preservation Z-score. Module-trait correlations were calculated using "modTraitCor" to detect the modules correlated with AS.

GSEA

GSEA of GO is an effective computational method that assesses whether an a priori-defined set of genes is enriched in specific biological states (21). GSEA was performed on the modules selected from WGCNA with the GO gene sets database (c5.all.v6.2.symbols.gmt). The cutoff criterion for the P-value was set at < 0.05.

Identification of differentially expressed mRNAs

The screening of differentially expressed mRNAs was performed using the "limma" package of R software (version 3.6.2), and Benjamini-Hochberg adjusted P-values < 0.01 and |fold change| > 1 were set as the cutoff criteria. The heatmap was visualized using the "pheatmap" package of R.

PPI network construction and hub gene identification

The online analysis tool Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) was used to evaluate the interactions between each of the selected mRNAs. Afterwards, a PPI network was constructed using Cytoscape. The node scores of each mRNA in the PPI network were obtained with the cytoHubba plugin of Cytoscape and were used as the criterion for further mRNA selection.

Support vector machine-based recursive feature elimination

As a powerful machine learning model, SVM has been widely applied in the functional prediction of biological molecules (22). In this study, SVM modeling was performed using the "e1071" package of R, with the radial basis function as the kernel function. SVM-RFE is a backward feature deletion method built around repeated SVM fits (22). First, all of the original features are used to build the SVM learning model to obtain the absolute coefficient |w| of each input feature. Second, the features are ranked based on the square of |w|, and the bottom-ranked features are discarded. Then, the remaining features are subjected to a new loop of SVM model building and ranking with the same procedures as before. These procedures are repeated until all features are removed. The order in which features are removed represents their level of importance (23): features that are discarded later are deemed more informative than those discarded earlier. In this study, the features correspond to mRNAs. To determine how many top-ranked mRNAs should be selected, 5-fold cross-validation was performed on the dataset. This method randomly divides the dataset into 5 sections, of which 4 are selected as the training set, with the last section as the testing set.
Using these sets, SVMs are built with different numbers of top mRNAs to calculate the generalized prediction error. These procedures are repeated 5 times. Finally, the number of top-ranked mRNAs corresponding to the minimum error is taken as the optimal number of selected mRNAs. Using the "pROC" package of R, receiver operating characteristic (ROC) curve analysis was performed to calculate the area under the curve (AUC) value for each selected feature mRNA, to evaluate its predictive capability for the diagnosis of AS.

Validation of mRNA expression

Five milliliters of whole blood were drawn into an EDTA tube from each AS patient before any medical intervention. Ficoll was used to separate mononuclear cells from whole blood. Total RNA was isolated from the mononuclear cells using TRIzol LS reagent (Ambion), and the extracted RNA was used to synthesize cDNA with a Reverse Transcription kit (Takara). RNA expression was first assessed by 1.5% agarose gel electrophoresis, performed at a constant voltage of 100 V for 30 min in TBE running buffer, and the RNA was visualized under UV light. Quantitative real-time PCR (qRT-PCR) was performed using SYBR Green qPCR Master Mix (Takara) in a CFX96 thermocycler system (Bio-Rad). The primers for each selected mRNA are listed in Supplementary Table S1. The reactions were run under the following conditions: initial hold at 95°C for 10 min, followed by 40 cycles of amplification at 95°C for 15 s and annealing at 60°C for 60 s, with melting curves drawn by increasing the temperature from 60°C to 95°C (0.3°C per second). All expression values were normalized to the expression of GAPDH, and relative expression levels were obtained by calculating 2^-ΔΔCt.

Statistical analysis

Statistical analysis was performed with R software (version 3.6.2). Continuous variables are presented as mean ± SD, while categorical variables are presented as quartiles. The expression values of mRNAs were compared between the AS and control groups using one-way analysis of variance (ANOVA). The correlation between mRNA expression and BASDAI was evaluated using Pearson's correlation coefficient test. P < 0.05 was selected as the cutoff for statistical significance.

Results

Generation of key modules associated with AS by WGCNA

The initial step was to generate consensus modules of mRNA expression by constructing a weighted gene co-expression network. Hierarchical clustering (hclust), with a height of 45 as the cutoff, revealed no outliers among the included samples (Supplementary Figure S1). Determining the soft-thresholding power β entails raising the Pearson correlation matrix to that power to obtain the network (24). According to the criterion of approximate scale-free topology, in which the scale-free topology model fit index exceeds 0.9 and the mean connectivity degree is close to 0, the optimal power β was chosen to be 14 (Figure 1A). Afterwards, the weighted co-expression networks were constructed, and consensus modules with similar expression trends were clustered and labeled with different colors, as shown in a dendrogram (Figure 1B). Then, the correlation matrices between consensus modules and clinical traits (AS and healthy control [HC]) were calculated (Figure 1C). Based on a correlation cutoff of 0.3, the Blue, Yellow and Gray modules, which were specifically related to AS, were selected for further investigation.
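The module-detection workflow just applied can be condensed into a short R sketch. This is an illustration only: it uses the single-dataset `blockwiseModules` call (the analog of the consensus variant named in Methods), and `datExpr` (samples × genes, the top-25%-variance mRNAs) and `as_status` (1 = AS, 0 = HC) are placeholder names.

```r
# Minimal WGCNA sketch: soft-threshold selection, module detection, and
# module-trait correlation against AS status.
library(WGCNA)

# Pick the lowest power whose scale-free topology fit index exceeds 0.9
sft <- pickSoftThreshold(datExpr, powerVector = 1:20)

# Network construction and module detection (minimum module size 30);
# power = 14 matches the value chosen above.
net <- blockwiseModules(datExpr, power = 14, minModuleSize = 30)

# Correlate module eigengenes with the binary AS trait; modules with
# |correlation| above 0.3 would be retained, as in the text.
MEs <- moduleEigengenes(datExpr, colors = net$colors)$eigengenes
moduleTraitCor <- cor(MEs, as_status, use = "p")
print(moduleTraitCor)
```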
There were 463 mRNAs in the Blue module, 318 mRNAs in the Yellow module, and 404 mRNAs in the Gray module; details of the network are presented in Supplementary Table S2. In addition, we performed a correlation analysis of Module Membership vs. Gene Significance and found significant correlation coefficients of 0.28 in the Blue module, 0.44 in the Gray module, and 0.38 in the Yellow module (Supplementary Figure S2).

GSEA with GO on selected modules

To further investigate the role of the selected mRNA modules and the pathological processes in white blood cells, we performed GSEA with GO terms on the mRNAs of the Blue, Yellow and Gray modules. As shown in Figure 2, mRNAs in the Blue module were enriched in the top 10 GO terms with the lowest normalized P-values, including "leukocyte chemotaxis", "leukocyte migration", "cell chemotaxis" and "regulation of inflammatory response", implicating active inflammatory and immune responses in the blood of AS patients. In contrast to the Blue module, most GO terms enriched by the mRNAs in the Yellow and Gray modules were nonspecific to AS activity, except for "leukocyte cell adhesion", suggesting that these two modules may represent secondary pathological processes of AS. Therefore, it can be inferred that mRNAs in the Blue module exert more important effects than those in the Yellow and Gray modules and reflect the immune dysregulation associated with AS activity.

[Figure 2. GSEA with GO terms on the Blue, Yellow and Gray modules. The size and color intensity of each circle represent the number of mRNAs and the −log10(P-value) of enrichment for each module.]

Screening of differentially expressed mRNAs

To further investigate the discrepancy in whole blood between AS and HC, we filtered differentially expressed mRNAs. A total of 1116 mRNAs were differentially expressed, among which 491 were upregulated and 625 were downregulated (Supplementary Table S3). Next, we constructed a heatmap of the top 100 most differentially expressed mRNAs to show the consistencies and discrepancies in mRNA expression among the samples. As shown in Supplementary Figure S3, most AS blood samples clustered together with similar expression tendencies, meaning that their expression patterns differ from those of the HC samples.

Selection of feature mRNAs from modules and differentially expressed mRNAs

To obtain comprehensive information from the whole blood mRNA expression of AS, finding a balance between the WGCNA modules and the differentially expressed mRNAs is critical. Accordingly, we overlapped the 1185 mRNAs from the Blue, Yellow and Gray modules with the 1116 differentially expressed mRNAs and screened out 296 feature mRNAs for AS. The intersection of each module with the differentially expressed mRNAs is shown in Supplementary Table S4.

Construction of a PPI network based on feature mRNAs

Given the interactions between key genes in various pathological processes, performing interaction network analysis on groups of mRNAs is effective for identifying candidate biomarkers. To this end, we constructed a PPI network of the 296 feature mRNAs with STRING (Supplementary Figure S4). A total of 427 protein interactions and 280 gene nodes were identified in this network, with an enrichment P-value of 5.26e-07. In an expression network, hub genes are key genes with great topological connectivity to their neighboring genes. To distinguish the hub genes in a network, Closeness Centrality (CC) and Betweenness Centrality (BC), which are based on the concept of moving along optimal and shortest paths throughout a network, are widely used in network analysis (25).
Because there are no clear-cut principles for choosing among the 12 centrality parameters offered by cytoHubba, we applied all of them simultaneously to measure the connectivity of the mRNAs in the PPI network. After inputting the PPI network data into Cytoscape and calculating each node's scores through cytoHubba, we sorted the feature mRNAs by each of the 12 node scores in descending order and generated 12 ranked sequences of mRNAs. Then, we selected the top 25% of mRNAs from these 12 sequences and pooled the selected mRNAs together. Finally, according to the occurrence of mRNAs across the sequences, 63 mRNAs appearing more than 4 times were retained as the hub genes (Supplementary Table S5). The interaction network of these feature mRNAs is shown in Supplementary Figure S5.

Identification of key mRNAs by SVM-RFE

Although the 63 selected feature mRNAs could serve as biomarkers for AS, they still contain much redundant information, resulting in poor feasibility for practical applications. To solve this problem, we applied SVM-RFE, using the feature ranking of the coefficients, to eliminate relatively unspecific feature mRNAs and preserve the key mRNAs. To determine the optimal number of feature mRNAs with the greatest accuracy in the SVM model, 5-fold cross-validation was introduced into the SVM classifier step, and the error rates for different numbers of mRNAs were captured. We plotted the change in the 5-fold cross-validation error rate at each recursive step (Supplementary Figure S6). The error rate fluctuated with increasing numbers of mRNAs until it reached a minimum with 14 feature mRNAs, suggesting that discrimination between AS and HC reached almost 90% accuracy. ROC curve analysis was further carried out, and the AUC values of the 14 key mRNAs were calculated to reveal their predictive power (Figure 3). Accordingly, MAP3K11 was discarded because of its nonsignificant predictive power in distinguishing between AS and HC. The 13 remaining feature mRNAs, Sqstm1, Srrt, Cxcr6, Eif4e, Ppid, H2afy, Card11, IL17ra, Picalm, Lrrfip1, Polr2a, Mapk8ip3 and Synj1, were screened out as the key mRNAs of AS for further analysis.

[Figure 3. ROC curve analysis of the 14 key mRNAs for diagnostic specificity in AS.]
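The ranking loop at the heart of the SVM-RFE procedure used here can be written compactly in R with the "e1071" package named in Methods. The sketch below is illustrative, not the authors' code: it uses a linear kernel so that the weight vector w is explicit (the study itself used a radial basis function kernel), and `X` (samples × 63 hub mRNAs) and `y` (AS/HC labels) are placeholder names.

```r
# Minimal SVM-RFE sketch: fit an SVM, rank features by squared weight,
# drop the lowest-ranked feature, and repeat until none remain.
library(e1071)

svm_rfe <- function(X, y) {
  ranking <- character(0)
  feats <- colnames(X)
  while (length(feats) > 0) {
    fit <- svm(X[, feats, drop = FALSE], y, kernel = "linear")
    w <- t(fit$coefs) %*% fit$SV            # weight vector over current features
    drop_idx <- which.min(w^2)              # least informative feature
    ranking <- c(feats[drop_idx], ranking)  # removed later = more informative
    feats <- feats[-drop_idx]
  }
  ranking  # most informative features listed first
}
```

Pairing this ranking with the 5-fold cross-validation described in Methods, over increasing numbers of top-ranked features, identifies the feature count with the minimum prediction error.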
Validation of key mRNA expression

To verify the predictions of the bioinformatic and SVM analyses, we performed qRT-PCR and agarose gel electrophoresis to test the expression levels of these 13 key mRNAs in the whole blood of the AS and control groups. As shown in Figure 4, the expression of Sqstm1, Srrt, Cxcr6, and Eif4e was significantly downregulated in AS patients, while the expression of IL17ra, Picalm, Lrrfip1 and Synj1 was significantly upregulated compared with the control group. In addition, there were no significant differences in the expression of Ppid, H2afy, Card11, Mapk8ip3 and Polr2a between the two groups. These results indicated that the expression patterns of the 8 significant key mRNAs in the included patients were consistent with the bioinformatic analysis and SVM prediction.

[Figure 4. Differences in the relative expression levels of the 13 key mRNAs between the AS and control groups. Agarose gel electrophoresis (A) and qRT-PCR quantification (B) for Sqstm1, Srrt, Cxcr6, Eif4e, Ppid, H2afy, Card11, IL17ra, Picalm, Lrrfip1, Polr2a, Synj1 and Mapk8ip3. ** P-value < 0.01.]

Correlation analysis between BASDAI and the expression of key mRNAs

To further examine the predictive strength of the 8 significant key mRNAs, we analyzed the correlation between their expression levels and the BASDAI of AS patients. Across the 40 blood samples from the AS group, a significant correlation between BASDAI and expression level was revealed for three key mRNAs (Cxcr6, IL17ra, Lrrfip1), while the remaining 5 mRNAs showed no significant correlation with BASDAI (Figure 5). Therefore, Cxcr6, IL17ra and Lrrfip1 were proposed as potential biomarkers for AS.

Discussion

While HLA-B27 has been demonstrated to account for most of the genetic effects in AS, other, as-yet-undefined markers may be associated with this immunologic disease (4,26,27). People positive for HLA-B27 have a significantly higher risk of developing AS than those who are negative; however, most of the former remain healthy, implying that, in addition to HLA-B27, other factors may contribute to the onset of AS (28,29). Hence, elucidating AS pathogenesis from the perspective of immune regulation, especially as it relates to blood karyocytes, is a promising direction for finding diagnostic biomarkers with reliable specificity and sensitivity beyond HLA-B27. In the present study, we explored the microarray dataset GSE73754 by WGCNA and PPI network construction, and identified 3 modules (Blue, Yellow and Gray) and 63 hub mRNAs. Several studies have demonstrated the pivotal role of adaptive immune responses in AS pathogenesis (30). The interaction between CD4+ T cells and HLA-B27 triggers a cascade of various chemokines and cytokines, contributing to inflammatory damage and bone erosion in AS (31). In addition to the adaptive immune response, innate immune abnormalities also contribute to the initiation of AS (32). In AS, tumor necrosis factor (TNF) mediates the destabilization of bone morphogenetic signaling proteins in osteoblasts and inhibits the expression of insulin-like growth factor-1, osterix and Runx2, resulting in poor osteoblastogenesis (33)(34)(35). Consistent with these findings, the GSEA results of this study for GO terms in the Blue module showed the involvement of inflammatory and immune responses in AS, further verifying the important role of immune dysregulation in AS progression. However, the GO enrichment results for the Yellow and Gray modules revealed a negative relationship with the immune response. Although the mRNAs in the Yellow and Gray modules seem to reflect effects uncorrelated with immune responses, the possibility of their synergism with the immune response cannot be ruled out and needs to be further explored. When analyzing thousands of gene expression values with bioinformatic methods, the "curse of dimensionality" cannot be ignored, as it severely impairs the accuracy of classification and prediction. To reduce dimensionality, wrapper methods have been developed that incorporate a machine learning algorithm and evaluate the value of different features according to pre-estimated errors (36). SVM-RFE, a well-established wrapper method for feature selection, can refine the optimal feature set by ranking the coefficients of the different features obtained by the SVM (23). This is because the rank of each coefficient indirectly reflects the degree of orthogonality between the feature and the hyperplane generated by the SVM, and the orthogonality of a feature to the hyperplane signifies that the feature is more informative than others (23). In this study, we used a PPI network to identify 63 hub mRNAs that are already highly correlated with AS.
However, to some extent, using these 63 mRNAs as biomarkers for further prediction is still a kind of high-dimensional modeling, which likewise encounters overfitting and other high-dimensional challenges. Therefore, to address these problems, we utilized SVM-RFE and optimally selected 13 of the 63 feature mRNAs based on the 5-fold cross-validation error rate. Moreover, ROC curves were subsequently plotted and reflected the significant specificity of these 13 key mRNAs for recognizing AS. Then, 8 of the 13 key mRNAs (Sqstm1, Srrt, Cxcr6, Eif4e, IL17ra, Picalm, Synj1 and Lrrfip1) showed significant consistency between the qRT-PCR validation in AS blood samples and the microarray data, and 3 of them (Cxcr6, IL17ra, Lrrfip1) were correlated with the symptomatic severity of AS, indicating the efficacy of SVM screening combined with bioinformatics. IL-17ra is one of five well-known receptor subtypes for IL-17 ligands. When bound by IL-17, this receptor upregulates the expression of various cytokines and chemokines to exert a proinflammatory role in host defense. In whole blood, IL-17 and its receptors are mainly expressed in Th17 cells and neutrophils and have been demonstrated to play a pivotal role in AS patients (37)(38)(39). Evidence suggests that the binding of IL-17 to its receptor triggers several feedback-loop mechanisms in spondyloarthritis, resulting in the proliferation of Th17 cells and thereby increased production of IL-17 (40). This is further highlighted by the significant remission of AS symptoms after the application of inhibitory medications targeting IL-17 pathways (41,42). In addition to IL-17RA, the downregulation of Sqstm1 in whole blood may be related to AS. As a ubiquitin-binding protein, Sqstm1 is reduced when autophagy is activated, which subsequently increases the level of IL-23 at the intestinal mucosal surfaces of AS patients (43). Intriguingly, thus far there is no robust proof of the direct involvement of the other significant feature mRNAs (Cxcr6, eIF4E, Lrrfip1, Srrt, Synj1 and Picalm) in AS pathogenesis. Cxcr6, eIF4E, and Lrrfip1 have been found to be related to innate or adaptive immune processes. C-X-C motif chemokine receptor 6 (CXCR6), a chemokine receptor, is mainly expressed on the CD4+ T cell surface and mediates a series of immune cell activation and chemotaxis events (44). Eukaryotic translation initiation factor 4E (eIF4E) is mainly expressed in macrophages and is activated following LPS stimulation, leading to the upregulation of IκBα, which inhibits the expression of inflammatory cytokines and genes (45). LRR binding FLII interacting protein 1 (LRRFIP1) has been found to be involved in innate defense against pathogenic organisms and in the regulation of autoimmune disorders (46). In our study, upregulated IL-17RA and Cxcr6 were found to be positively correlated with BASDAI, while downregulated Lrrfip1 was negatively correlated, implying the potential of IL-17RA, Cxcr6 and Lrrfip1 for predicting AS symptoms. In addition, the biological functions of Srrt, Synj1 and Picalm have not been shown to be specific to AS, even though they are significantly differentially expressed in AS patients; however, this does not mean they are unqualified to serve as biomarkers.
Their correlations with AS need further investigation in the future. Undeniably, our study has an inevitable limitation: because of the shortage of suitable microarray datasets for the whole blood of AS patients, there were not enough samples to randomly establish separate training and testing sets for machine learning, so we were unable to further verify the efficacy of an SVM classifier built from the feature mRNAs. Further studies are expected to include more available datasets and verify the accuracy of prediction. In summary, this study reveals that IL17RA, Sqstm1, Picalm, Eif4e, Srrt, Lrrfip1, Synj1 and Cxcr6 can be seen as potential predictors for AS. These mRNAs may function via involvement in various pathways of AS, especially immune-related pathways. Exploration of their function in AS pathology may be beneficial for the diagnosis of AS.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Shanghai Changzheng Hospital. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Epidemiology of Dupuytren Disease and Patients Undergoing Selective Fasciectomy

Objective  To describe the epidemiological and clinical profile of patients with Dupuytren disease treated by selective fasciectomy and the factors associated with the severity of the disease.

Methods  Retrospective descriptive observational study involving 247 patients with Dupuytren disease, from 2013 to 2019. Multivariate logistic regression was performed for data analysis.

Results  Most patients were male (83.8%), self-declared white (65.2%), and reported alcohol use (59.6%); 49% were smokers. The mean age was 66 ± 9 years, with 77.2% presenting symptoms of the disease after the age of 51 years. Approximately 51.9%, 29.6% and 17.3% of patients had arterial hypertension, diabetes mellitus and dyslipidemia, respectively. Bilateral involvement of the hands was observed in 73.3% of the patients. The rates of intra- and postoperative complications of selective fasciectomy were 0.6% and 24.3%, respectively, with 5.2% of the patients needing reintervention after 1 year of follow-up. After multivariate analysis, male gender was associated with bilateral involvement of the hands (odds ratio [OR] = 2.10; 95% confidence interval [CI]: 1.03-4.31) and with a greater number of affected rays (OR = 3.41; 95% CI: 1.66-7.03). Dyslipidemia was associated with reintervention (OR = 5.7; 95% CI: 1.03-31.4), and bilaterality with a higher number of complications (35.7 versus 19.7%).

Conclusion  A low rate of reintervention and operative complications was observed in patients with Dupuytren disease treated by selective fasciectomy. Male gender was associated with severe disease (bilaterality and more than two affected rays), and dyslipidemia with reintervention.

Dupuytren disease, or contracture, figures among the most common human connective tissue disorders. 1 It is characterized by a progressive, irreversible fibroblast proliferation affecting the palmar fascia, resulting in its gradual thickening, which causes a flexion contracture of the fingers. 2 Initially, it presents as subcutaneous nodules, followed by fibrous cords, which account for the contractures. 3 The incidence of Dupuytren disease ranges from 3 to 40%. It often affects males > 50 years old, 4 with a higher prevalence in Caucasians. 5 In addition, its incidence is high among patients with a history of smoking and alcohol use, metabolic disorders, or treatment with antiretrovirals or anticonvulsants. 5 Therefore, its etiology is multifactorial, associated with both intrinsic and extrinsic factors. 3 Today, several treatment options are available, including percutaneous fasciotomy, fasciotomy using collagenase (from Clostridium histolyticum), partial or selective fasciectomy, total fasciectomy, and dermofasciectomy. 6 However, the recurrence rate of the disease is high. 7 Surgical intervention is indicated after functional impairment, 8 and it is typically recommended for patients with at least 30° of metacarpophalangeal joint contracture and/or proximal interphalangeal joint contracture associated with functional impairment. 9 Selective fasciectomy is the most frequently performed surgical procedure for Dupuytren disease and is deemed the gold standard for the primary release of flexion contracture. 10 The literature is inconsistent in defining the percentage of contracture correction and the recurrence rate, making it difficult to assess the effectiveness and safety of surgical interventions for Dupuytren disease. 11
11 In addition, an important aspect of surgical treatment evaluation is patient satisfaction after the intervention, which is not always related to a greater flexion contracture correction, hindering the comparison of therapeutic options. 12 The present study aimed to describe the epidemiological and clinical profile of patients with Dupuytren disease surgically treated with selective fasciectomy. In addition, the present study attempted to determine the frequency of surgical complications, the need for new surgical procedures, and potential factors associated with the clinical characteristics of this condition.

Materials and Methods This is a retrospective observational descriptive study of a sample of patients with Dupuytren disease who were seen and surgically treated by the Hand Surgery Service of a tertiary hospital from the Brazilian Unified Health System. Selective fasciectomy was the main treatment option for Dupuytren disease during the study period, corresponding to 96.6% of the procedures performed in these patients. To homogenize the sample, only patients with a Dupuytren disease diagnosis confirmed by a hand surgeon and treated with an isolated primary selective fasciectomy (►Figure 1) were included (n = 247). This was the first surgical approach for disease treatment, and no other procedures were performed, including capsulotomies or tenotomies. Patients previously submitted to surgical treatment and those with unavailable information in medical records were excluded.

Demographics were obtained through an active search in medical records, and the following information was recorded in a data collection instrument: (i) sociodemographic features, such as gender, age, self-declared skin color, weight, height, smoking and alcohol intake; (ii) positive history for diabetes mellitus, dyslipidemia, epilepsy, cardiovascular diseases, hypertension, HIV seropositivity, Ledderhose disease, Peyronie disease, adhesive shoulder capsulitis or hand trauma; and (iii) clinical presentation of Dupuytren disease: unilateral or bilateral involvement, affected rays, presence of Garrod dorsal nodules, age at onset of symptoms, positive family history for the disease, intra- and postoperative complications, length of follow-up after primary selective fasciectomy, and the need for a new surgical approach to treat a complication associated with the initial selective fasciectomy or due to recurrence of flexion contracture in the fingers, which is deemed a reintervention. Postoperative complications included an extension deficit with no report of any apparent cause, such as scar retraction, pain, or other intercurrence, in the medical record. Surgery was indicated for functional impairment, 8 metacarpophalangeal joint flexion contracture > 30° and/or proximal interphalangeal joint contracture (►Figure 2A). Selective fasciectomy was performed with limb exsanguination using a pneumatic cuff. The volar face of the hand was incised as popularized by Bruner, extending to the affected ray (►Figure 2B). After mobilization of skin flaps, the neurovascular bundles and the thickened palmar fascia were dissected, identified, and protected. Next, the affected fascia was excised (►Figure 2C).
This specimen (►Figure 2D) was sent for histopathological study, confirming the characteristic changes associated with Dupuytren disease. A dressing and a volar plaster cast were applied to maintain extension of the fingers. The plaster immobilization was removed when the first dressing was changed, around the end of the 1st week. Then, passive and/or active finger mobility was initiated by the team of occupational therapists specialized in hand surgery. None of the included patients used an orthosis during the postoperative period. Continuous variables were presented as mean ± standard deviation (SD). Factors associated with clinical characteristics of Dupuytren disease were determined based on odds ratios (ORs) and their respective 95% confidence intervals (CIs), estimated using the multivariate logistic regression method. The preceding univariate analysis considered both the biological importance and the degree of statistical significance; p-values ≤ 0.25 and ≤ 0.10, respectively, were required for regression model input and maintenance. The analyses were performed using IBM SPSS Statistics for Windows, version 20.0 (IBM Corp., Armonk, NY, USA), and statistical significance was set at p < 0.05.

Results The majority of the 247 patients with Dupuytren disease treated with selective fasciectomy were male, self-declared white (►Table 1), reporting alcohol use (59.6%) and/or smoking (49.0%). Their mean age was 66.3 ± 9.2 years old (minimum, 31; maximum, 94). Most subjects (84.4%) had a body mass index (BMI) ranging from 18.5 to 29.9 kg/m2. As for comorbidities associated with Dupuytren disease, 29.6% of the patients had diabetes mellitus, 17.3% dyslipidemia, 6.6% epilepsy, and 2.5% cardiovascular diseases; less than 1% were HIV-positive. In addition, 51.9% of the subjects reported high blood pressure. Bilateral hand involvement was observed in 73.3% of the patients. The little (35.0%) and ring (34.7%) fingers were the most affected rays, followed by the thumb, middle and index fingers, with 15.1, 12.1 and 3.1% of involvement, respectively. In addition, 37.2 and 38.9% of the patients had, respectively, 1 and 2 affected rays. The remaining patients had 3 (18.4%) or more (5.5%) affected rays. About 11.1% of the patients also had a previous history of hand trauma, 19.7% of Ledderhose disease, 7.5% of Peyronie disease (considering the 207 affected men), 7.2% of adhesive shoulder capsulitis, and 27.8% of Garrod dorsal nodules, affecting mostly the index finger (33.4%), followed by the middle and ring fingers (25.9% each), the little finger (11.1%) and the thumb (3.7%). A family history of Dupuytren disease was reported in 18.5% of the cases, and 3.7% of the patients had ≥ 3 affected relatives. In total, 300 selective fasciectomies were performed as the first surgical approach in these 247 patients with Dupuytren disease. Among patients with bilateral involvement (n = 181), some underwent primary selective fasciectomy in each hand (n = 53), at different times (►Figure 1). Considering patients with a minimum follow-up period of 1 year, the frequency of reintervention was 5.2%. The following reinterventions were performed during the study: surgical dressing due to a surgical wound complication; selective fasciectomy associated with volar capsulotomy at the proximal interphalangeal joint and z-plasty for scar contracture; selective fasciectomy associated only with volar capsulotomy of the proximal interphalangeal joint; 5th finger amputation; and isolated selective fasciectomies.
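As an aside on methodology: odds ratios and 95% CIs like those reported in this study are typically derived from a fitted logistic regression model, as described in the statistical analysis above. The following is a minimal Python sketch using statsmodels on simulated data; all variable names and values are hypothetical placeholders, not the study's actual dataset.

# Sketch: multivariate logistic regression yielding odds ratios (OR) and
# 95% CIs, as in the analysis described above. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
male = rng.integers(0, 2, n)           # hypothetical predictor
dyslipidemia = rng.integers(0, 2, n)   # hypothetical predictor
logit_p = -0.5 + 0.7 * male + 0.3 * dyslipidemia
bilateral = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # outcome

X = sm.add_constant(pd.DataFrame({"male": male, "dyslipidemia": dyslipidemia}))
fit = sm.Logit(bilateral, X).fit(disp=False)

or_table = pd.DataFrame({
    "OR": np.exp(fit.params),               # OR = exp(coefficient)
    "CI 2.5%": np.exp(fit.conf_int()[0]),   # CI bounds on the OR scale
    "CI 97.5%": np.exp(fit.conf_int()[1]),
})
print(or_table)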
Regarding intraoperative complications of the selective fasciectomies, 2 cases of digital nerve injury (0.6%) were recorded. Another case of digital nerve injury was recorded during the reintervention of a patient who underwent a new selective fasciectomy. All three cases were primarily repaired but presented some sensory loss in the affected finger. A total of 73 complications were recorded after the 300 primary selective fasciectomies (24.3%). The most common complication was a deficit in complete extension of the affected finger (50.7%), followed by joint stiffness (24.7%) and skin necrosis (8.2%). The remaining complications totaled 16.4% (►Figure 3). Factors associated with the severity of Dupuytren disease were determined in a multivariate analysis. Male gender was associated with bilateral hand involvement and the number of affected rays (►Table 2). Dyslipidemia was associated with reintervention (p = 0.04; OR = 5.7; 95% CI: 1.03–31.4). Patients with bilateral involvement had a higher number of postoperative complications (35.7 versus 19.7% in patients with unilateral involvement; OR = 2.27; 95% CI: 1.15–4.46). Other evaluated factors (age, skin color, alcohol intake, smoking, diabetes mellitus, dyslipidemia, epilepsy, HIV and surgical complications) were not significantly associated with bilateral disease, number of affected rays (1 ray versus ≥ 2 rays), age at disease onset (≤ 50 years old versus ≥ 51 years old) or need for reintervention.

Discussion Dupuytren disease is common in the hand surgery specialist office. However, several aspects remain inconsistent, including its etiology and the most appropriate treatment option for each clinical presentation. Classically, Dupuytren disease mainly affects white males > 50 years old, 5,13,14 and its prevalence increases with age. 13 We noted a predominance of white or brown individuals aged > 50 years old. A 5:1 ratio between men and women was observed, which is consistent with previous descriptions (5.9:1). 13 Moreover, male gender was associated with an approximately two-fold higher risk for the most severe disease (bilateral involvement and more than two affected rays). Hindocha et al. 15 reported that bilateral presentation and male gender are associated with a greater recurrence of Dupuytren disease after surgical treatment. Bilateral disease involving the little and ring fingers was commonly observed in our sample, which is consistent with the literature. 5,6,13,16 Garrod dorsal nodules were more frequent in the index finger, as previously described. 17 These nodules have been associated with an increased risk of Dupuytren diathesis. 18 Alcohol intake and smoking have been associated with Dupuytren disease. 5,13,19 The cause and the mechanisms of such associations remain unclear. Most of our patients had a history of alcohol intake and smoking, in a greater proportion than evidenced by Mansur et al. 5 when evaluating a smaller number of Brazilian patients (n = 58) with Dupuytren disease (9 and 22%, respectively, compared with 60 and 49% in our study). Smoking and alcohol intake are associated with the need for surgical treatment for Dupuytren disease. 13 Since our sample consisted only of patients undergoing surgical treatment, potentially with more severe presentations, the prevalence of alcoholics and smokers is higher when compared with the data described by Mansur et al. 5 Among conditions associated with Dupuytren disease, 29.6% of the patients presented diabetes mellitus.
In a series of only 58 Brazilian patients with Dupuytren disease, Mansur et al. 5 found a 44.8% prevalence of diabetes, with 62% of insulin-dependent subjects. Recently, a meta-analysis observed an approximately three-fold risk association between Dupuytren disease and diabetes mellitus. 20 Although the molecular mechanism involved in both conditions has been widely studied, 20 it is suggested that the metabolites generated by diabetes mellitus stimulate the development of myofibroblasts, the main cell type involved in Dupuytren disease. 21 In addition, the majority of patients had arterial hypertension, consistent with the aforementioned study from Mansur et al., 5 in which 55% of the subjects presented it. Both conditions mainly affect elderly patients, but the cause of the association between arterial hypertension and Dupuytren disease has not been described in the literature. The rate of reintervention after surgical treatment for Dupuytren disease depends on several factors, including contracture severity and the type of procedure performed. The observed reintervention rate (5.2%) was consistent with a recent description in the American population, which evaluated 132 selective fasciectomies and detected a reintervention rate of 4%. 22 The multivariate analysis revealed an association between dyslipidemia and a higher rate of reintervention. Dyslipidemia has been associated with Dupuytren disease, 23 suggesting the need to consider this condition in patients with palmar fibromatosis for proper treatment planning. In a comprehensive review, Denkler 24 reported that the rate of surgical complications after primary selective fasciectomy ranged from 4 to 39%. We observed a low rate of intraoperative complications, consisting only of digital nerve injury (0.6%). According to the literature, the average rate of digital nerve injury is ≈ 3%. 24,25 The most common postoperative complications reported in the literature include wound healing issues, ranging from 0 to 86%. 24 However, in our sample, the most common postoperative complication was a complete extension deficit, followed by joint stiffness. In addition, patients with bilateral involvement had an approximately two-fold higher risk of postoperative complications, as the contracture may have been aggravated by the longer waiting time for surgical treatment of the contralateral hand. Ribak et al. 6 described a case of transient digital nerve paresthesia and one of type I complex regional pain syndrome as complications after resection of the affected fascia during a study comparing selective and percutaneous fasciectomy. In addition, they observed no significant differences between groups submitted to different fasciectomies regarding the Tubiana classification, time to resume professional activities and disease recurrence. 6 Since this is a retrospective study, the absence of a standardized electronic medical record system and the lack of determination of the degree of flexion contracture of the fingers to aid the identification of disease recurrence after selective fasciectomy are our main limitations. However, the sample size, as well as information confirmation through a double database check, contribute to the robustness of our results, which can be used to build a database of different populations to identify disease-associated factors.
Together, this information can assist in prognosis and postoperative follow-up, suggesting the need for early referral to a specialist, even with no functional limitation of the hand, which is the main determinant for the surgical treatment of Dupuytren disease. Conclusion Selective fasciectomy showed a low rate of reintervention and surgical complications. Male gender was associated with bilateral hand involvement and a higher number of affected rays, while dyslipidemia was associated with reintervention. Financial Support There was no financial support from public, commercial, or non-profit sources.
Correcting Sense Annotations Using Wordnets and Translations

Acquiring large amounts of high-quality annotated data is an open issue in word sense disambiguation. This problem has become more critical recently with the advent of supervised models based on neural networks, which require large amounts of annotated data. We propose two algorithms for making selective corrections on a sense-annotated parallel corpus, based on cross-lingual synset mappings. We show that, when applied to bilingual parallel corpora, these algorithms can rectify noisy sense annotations, and thereby produce multilingual sense-annotated data of improved quality.

Introduction Word sense disambiguation (WSD) is the task of identifying the appropriate meaning of a word in context, from a predefined sense inventory, such as WordNet (Miller, 1995) and BabelNet (Navigli and Ponzetto, 2012). It is one of the central problems in natural language understanding (Navigli, 2018). The primary approaches to the WSD problem can be divided into supervised and knowledge-based methods. Supervised WSD systems have historically achieved the best overall results on standard WSD datasets (Raganato et al., 2017). However, these systems rely on large amounts of sense-annotated data for training, which is costly and difficult to produce. In particular, there is a severe lack of high-quality annotated data for languages other than English, which is known as the knowledge acquisition bottleneck problem (Pasini, 2020). To address this issue, various approaches have been proposed to automate the process of annotating texts in different languages at a large scale.

Some of the automated annotation approaches operate by leveraging translations from parallel corpora. The idea of using translations for WSD was considered by Resnik and Yarowsky (1997), based on the conjecture that different translations of an ambiguous source word in a target language could serve as sense-tagged training examples. This idea was put into practice by Ng et al. (2003), and then on a large scale by Chan and Ng (2005), who implemented a semi-automatic approach to disambiguating English nouns using distinct Chinese translations, leveraged from an English-Chinese parallel corpus. Taghipour and Ng (2015) used a similar semi-automatic approach to create a WSD training set by leveraging the Chinese-English part of the MultiUN corpus (Eisele and Chen, 2010). Delli Bovi et al. (2017) removed the bottleneck of manual intervention, as they proposed a fully automated approach to producing multilingual sense-tagged corpora by jointly disambiguating multiple languages of a parallel corpus.

Our work is inspired by the central idea of the aforementioned research that translations may provide the necessary information to disambiguate an ambiguous word. However, we focus on leveraging translations to improve the quality of an already sense-tagged parallel corpus, rather than to annotate the corpus from scratch. We propose two algorithms for correcting sense annotations in a parallel corpus. The first algorithm attempts to rectify aligned senses that belong to different multi-synsets. The second algorithm considers all alignment links in a bitext to construct a one-to-one mapping between synsets in different languages. Both algorithms are based on the theory of synonymy and translational equivalence of Hauer and Kondrak (2020).
We empirically show that our algorithms achieve their goal of improving the quality of sense annotations in multiple languages. We extrinsically evaluate the proposed corrections by providing the corrected corpora as training data to a supervised WSD system. An intrinsic evaluation on a random sample of 200 corrected instances in English and Spanish confirms the improvement in the overall quality of the annotated corpora.

MultiWordNet (MWN) Algorithm

[Algorithm 1 (MWN) listing, only partially recovered from extraction: input is the set of aligned sense pairs (s, t); lex(s) denotes the word of which s is a sense, M(s) the multi-synset that contains sense s, and M(w) the set of multi-synsets that contain word w; line 1 iterates over each aligned sense pair (s, t).]

The MWN algorithm (Algorithm 1) is based on the simplifying assumption that the senses of aligned words are translationally equivalent (Hauer and Kondrak, 2020). The algorithm consults an existing multilingual wordnet (multi-wordnet), which is composed of multilingual synsets (multi-synsets) that include translationally equivalent senses of words from both languages. Each polysemous word belongs to multiple multi-synsets. If the senses of the aligned words are found to belong to different multi-synsets, this is an indication of a possible annotation error that could be corrected.

The algorithm operates on a sense-annotated parallel corpus (bitext). It performs annotation corrections on individual aligned word pairs (line 1) which are annotated with different multi-synsets (line 2). Each sense in a multi-wordnet is uniquely defined as a (word, synset) tuple. When applied to a sense, the lex and M operators return the first and second element of the tuple, respectively. We denote as C the set of all multi-synsets that contain both aligned words (line 3).

The algorithm is designed to make selective corrections only in those alignment instances where there is little doubt about the appropriate correction. At most one of the two sense annotations in each instance can be corrected. A correction is made if and only if exactly one of the two aligned senses is found in C (lines 4-7). We do not attempt a correction if either both or none of the two senses are in C. If both senses are outside of C, we suspect multiple errors in the bitext annotations and/or the multi-wordnet. On the other hand, if both senses are within C, it is not clear which of the two annotations may be incorrect.

Bipartite (BP) Algorithm

[Algorithm 2 (BP) listing, only partially recovered from extraction: input is the set of aligned sense pairs (s, t); lex(s) denotes the word of which s is a sense, S(s) the synset that contains sense s, and S(w) the set of synsets that contain word w; the algorithm starts from an empty graph G and increments weight(S(t)) for each observed alignment.]

The BP algorithm (Algorithm 2) is also based on the assumption that the aligned words should express exactly the same concept. However, it differs from the MWN algorithm in that it globally considers all the alignment links in a given bitext, and makes annotation corrections based on the most frequently observed links. Another difference is that BP only corrects the annotations in language L2, based on the annotations in the base language L1, which are assumed to be always correct. The algorithm is inspired by the concept universality principle of Hauer and Kondrak (2020), which states that each monolingual synset corresponds to at most one synset in another language. No access to a multi-wordnet is assumed; instead, the algorithm consults two language-specific wordnets, which are composed of monolingual synsets, rather than of multi-synsets.
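Before turning to the stages of BP, the MWN correction rule described above can be sketched compactly. The following Python fragment is my own paraphrase of Algorithm 1 under assumed data structures (a dict mapping each word to the set of multi-synset IDs that contain it), not the authors' code.

# Minimal sketch of the MWN correction rule (Algorithm 1), paraphrased from
# the prose description above; data structures are illustrative assumptions.
def mwn_correct(aligned_pairs, multi_synsets):
    """aligned_pairs: iterable of ((word_s, syn_s), (word_t, syn_t)),
    where syn_* is the multi-synset ID currently annotating each word."""
    corrections = []
    for (word_s, syn_s), (word_t, syn_t) in aligned_pairs:
        if syn_s == syn_t:                 # already translationally equivalent
            continue
        # C: multi-synsets containing both aligned words
        common = multi_synsets[word_s] & multi_synsets[word_t]
        s_in, t_in = syn_s in common, syn_t in common
        if s_in and not t_in:              # exactly one sense lies in C:
            corrections.append((word_t, syn_t, syn_s))  # trust s, fix t
        elif t_in and not s_in:
            corrections.append((word_s, syn_s, syn_t))  # trust t, fix s
        # if both or neither annotation is in C, no correction is attempted
    return corrections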
The BP algorithm consists of three stages: (1) construct a bipartite graph G of synsets; (2) identify its subgraph G′ in which every node has degree at most 1; and (3) correct sense annotations that are not found in subgraph G′. In fact, the first two stages constitute a stand-alone algorithm for creating a cross-lingual mapping between synsets. We describe the three stages in more detail below.

In the first stage (lines 1-5), we construct a weighted undirected bipartite graph G = (V, E, weight) in which nodes represent monolingual synsets, and edges represent alignment links that are observed in the bitext. The weight of an edge is equal to the number of observed alignment links in the bitext between the senses of the corresponding synsets. The weight of a node is simply the sum of the weights of all edges incident with the node, which is equal to the number of times the corresponding synset is used in aligned sense annotations in the bitext.

In the second stage (lines 6-10), we construct a graph G′ = (V, E′), which is a subgraph of G, such that every node has a degree of at most 1. The goal is to select the edges that represent the most frequent alignments. This is achieved by only retaining the edges whose relative weight is above a threshold α (lines 8-9) in both directions. The threshold is constrained to be greater than 0.5, which guarantees that at most one edge per node is selected.

In the third stage (lines 11-15), annotation corrections are made based on the edges of the constructed bipartite graph G′. Unlike the MWN algorithm, the BP algorithm only corrects the annotations of words in language L2. If an edge corresponding to a given alignment link is not found in G′ (line 12), it attempts to correct the annotation in L2 by following the edge in G′ between the node S(s), which represents the synset used to annotate the word in L1, and the node S′, which represents the synset in L2 that expresses the same concept as S(s).

We extract four sentence-aligned bitexts from EuroSense, by considering four different language pairs: English-Italian (EN-IT), English-German (EN-DE), English-French (EN-FR) and English-Spanish (EN-ES). The annotated translation pairs in EuroSense are filtered to remove non-existent senses, non-literal translations, and hypernym translations. A sense of a word is considered non-existent if it is not found in the respective BabelNet synset. If the aligned words have no synsets in common, they are treated as non-literal translations. Finally, we detect non-synonymous translation pairs by traversing hypernymy and hyponymy links in BabelNet (Hauer et al., 2020). In our development experiments, we found that approximately 3% of the pairs contain invalid senses, 13% are cases of non-literal translations, and 5% involve word entailment.

Following this filtering procedure, the remaining translation pairs are used as inputs to both algorithms to perform annotation corrections for each language separately. The BP threshold α is set to 0.8 on the basis of the development experiments. For IT, DE, FR and ES corrections, we use English as the base language. To perform EN corrections, we use Italian as the base language, as it is reported to have good BabelNet coverage (Hauer et al., 2020). 75.3% of the English-Italian synset mappings returned by the BP algorithm match BabelNet concepts. Table 1 contains dataset and correction statistics for each of the five languages. The arrows in the leftmost column point from the base language to the corrected language.
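As a complement to the prose description of the three stages, here is a minimal Python sketch of the BP procedure. It is a paraphrase under assumed data structures, not the released implementation; α = 0.8 follows the text.

# Sketch of the three-stage BP procedure described above (illustrative only).
from collections import Counter

def bp_correct(aligned_pairs, alpha=0.8):
    # Stage 1: weighted bipartite graph between L1 and L2 synsets.
    edge_w = Counter()
    node_w1, node_w2 = Counter(), Counter()
    for (_, syn1), (_, syn2) in aligned_pairs:
        edge_w[(syn1, syn2)] += 1
        node_w1[syn1] += 1
        node_w2[syn2] += 1

    # Stage 2: retain an edge only if its relative weight exceeds alpha in
    # both directions; alpha > 0.5 guarantees at most one edge per node.
    mapping = {s1: s2 for (s1, s2), w in edge_w.items()
               if w / node_w1[s1] > alpha and w / node_w2[s2] > alpha}

    # Stage 3: correct L2 annotations that disagree with the retained mapping.
    corrections = []
    for (_, syn1), (word2, syn2) in aligned_pairs:
        target = mapping.get(syn1)
        if target is not None and target != syn2:
            corrections.append((word2, syn2, target))
    return corrections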
We extrinsically evaluate the corrections by providing the corrected corpora as training data for a supervised WSD system, which is then evaluated on standard benchmark datasets. To this end, we employ IMS (Zhong and Ng, 2010), a supervised WSD system based on lexical features. To keep the corpus at a reasonable size, we consider a maximum of 10,000 randomly sampled training examples per sense. For English, in cases where the system fails to make a prediction, we back off to the most frequent sense. For all languages, any monosemous words are automatically tagged with their single possible sense. Table 2 presents the WSD results of IMS models trained on the corrected corpora, along with the results of models trained on the original EuroSense corpus. The evaluation is performed on benchmark multilingual datasets from SemEval-2013 task 12 (Navigli et al., 2013) and SemEval-2015 task 13 (Moro and Navigli, 2015). The results show that IMS achieves better results when trained on the corrected corpora. The MWN improvements are statistically significant (p < 0.05 using McNemar's test) over the results obtained with the original corpus for all languages except English. The BP improvements are smaller but consistent. This verifies the utility of the annotation corrections made by the two algorithms when the information is transferred from English to less-resourced languages.

Intrinsic Evaluation To intrinsically evaluate the quality of the sense annotation corrections made by the algorithms, a random sample of 200 English and Spanish instances was annotated manually. For each instance, an annotator was shown the corresponding sentence from EuroSense and asked to decide whether the focus word is used in the original or the corrected sense (or neither). The senses were defined using BabelNet glosses and synonyms, and provided in a random order.

The results are presented in Table 3. Missing senses in BabelNet Some invalid corrections arise when a synset lacks a sense of the concept that it represents. For example, the synset bn:00109131a, which is glossed in English as "related to the future", contains the Spanish adjective futuro but not its English translation future. Such omissions, which are frequent in BabelNet because of its semi-automatic construction method, prevent the MWN algorithm from making a correction.

Noise in the bitext The English-German bitext slice of EuroSense contains a total of 19,230 distinct English synsets, among which only 10,661 (55%) have matching German synsets in the dataset. This implies that nearly half of the concepts represented in English are not expressed by German words, which makes it impossible to match concepts across languages. The issue may be related to the high frequency of nominal compound words in German, which are often translated as multi-word expressions in English (e.g., Versicherungskaufmann "insurance salesman").

Excessive granularity of senses Some instances involved a choice between fine-grained senses. For example, in the Spanish phrase "la conclusión real de este fin de semana" ("the actual conclusion of this weekend"), the annotator found it difficult to decide whether the Spanish noun conclusión is used in the sense of "the temporal end; the concluding time" or of "a concluding action."
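For readers wishing to reproduce the paired significance testing mentioned in the extrinsic evaluation above, McNemar's test over the instances where the two models disagree can be computed with statsmodels; the 2×2 counts below are hypothetical, not the paper's data.

# Sketch: McNemar's test on paired system outputs (counts are made up).
from statsmodels.stats.contingency_tables import mcnemar

# rows = baseline model (correct / wrong), cols = corrected-corpus model
table = [[830, 25],
         [60, 85]]
result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.2f}, p-value={result.pvalue:.4f}")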
Conclusion Our extrinsic and intrinsic evaluation results constitute a strong proof-of-concept that translations and wordnets can be leveraged to make effective annotation corrections in a sense-annotated bitext. Manual analysis indicates that most of the invalid corrections can be traced to errors and omissions in existing lexical resources. In the future, we plan to investigate the use of machine translation instead of bitexts for the purpose of automatically annotating raw monolingual text corpora.

Table 1: Number of sense corrections made by both algorithms.

Table 3: Intrinsic evaluation results. A boldfaced result indicates a statistically significant improvement.
A novel phosphorylation-independent interaction between SMG6 and UPF1 is essential for human NMD

Eukaryotic mRNAs with premature translation-termination codons (PTCs) are recognized and eliminated by nonsense-mediated mRNA decay (NMD). NMD substrates can be degraded by different routes that all require phosphorylated UPF1 (P-UPF1) as a starting point. The endonuclease SMG6, which cleaves mRNA near the PTC, is one of the three known NMD factors thought to be recruited to nonsense mRNAs via an interaction with P-UPF1, leading to eventual mRNA degradation. By artificial tethering of SMG6 and mutants thereof to a reporter mRNA, combined with knockdowns of various NMD factors, we demonstrate that besides its endonucleolytic activity, SMG6 also requires UPF1 and SMG1 to reduce reporter mRNA levels. Using in vivo and in vitro approaches, we further document that SMG6 and the unique stalk region of the UPF1 helicase domain, along with a contribution from the SQ domain, form a novel interaction, and we also show that this region of the UPF1 helicase domain is critical for SMG6 function and NMD. Our results show that this interaction is required for NMD and for the capability of tethered SMG6 to degrade its bound RNA, suggesting that it contributes to the intricate regulation of UPF1 and SMG6 enzymatic activities.

INTRODUCTION In order to guarantee the accuracy of gene expression, eukaryotic cells have evolved numerous intricate quality control mechanisms. One of the best studied of these mechanisms is the nonsense-mediated mRNA decay (NMD) pathway, archetypically known as a pathway acting to selectively identify and degrade mRNAs containing a premature translation-termination codon (PTC), hence reducing the accumulation of potentially toxic truncated proteins. However, NMD also targets various physiological mRNAs, signifying a role for NMD in post-transcriptional gene expression regulation in eukaryotes (1-3). Therefore, NMD probably controls a large and diverse inventory of transcripts, which reflects the important influence of NMD on the metabolism of the cell and consequently on many human diseases (4,5). In order to develop pharmacological reagents and to better understand the influence of NMD on disease, it is essential to unravel the molecular mechanisms that underpin NMD. A plausible current model of NMD in human cells postulates that the decision of whether the pathway is to be initiated relies upon competition between up-frameshift 1 (UPF1), a core NMD factor that exhibits 5′-3′ helicase and nucleic acid-dependent adenosine triphosphatase (ATPase) activities (6), and the cytoplasmic poly(A)-binding protein for binding to eukaryotic release factor 3 (eRF3) on the terminating ribosome (7-11). Suppressor with morphogenetic effect on genitalia protein 1 (SMG1), which is a phosphatidylinositol 3-kinase-related protein kinase (PIKK) (12), is also recruited by ribosomes terminating translation prematurely, through interactions with eRF1/3, and this complex of UPF1, SMG1 and eRF1/3 is termed the SURF complex (13). In the presence of UPF2 and UPF3, most likely bound to downstream exon junction complexes (EJCs) on the mRNA, SMG1 phosphorylates UPF1 (13-15), creating an N-terminal binding platform for SMG6 and a C-terminal binding site for the SMG5-SMG7 complex, the latter of which has been reported to recruit mRNA decay factors (16,17); these interactions at the N- and C-termini of UPF1 are essential for NMD (18).
SMG5, SMG6 and SMG7 each contain a 14-3-3-like domain, which in the case of SMG6 and SMG7 has been experimentally confirmed to bind phosphorylated residues of UPF1 (18,19). SMG6 can also associate with the mRNA surveillance complex through its ability to directly bind the EJC via conserved motifs called EJC-binding motifs (EBMs) (20). SMG5 and SMG6 both contain a C-terminal PIN (PilT N-terminus) domain adopting a similar overall fold related to ribonucleases of the RNase H family, but only SMG6 harbors the canonical triad of aspartic acid residues crucial for nuclease activity (21-23). SMG6 was subsequently revealed to be the endonuclease in human and Drosophila melanogaster cells that cleaves nonsense mRNAs in the vicinity of the PTC (24,25). However, less is known about the actual mRNA degradation aspect of NMD, but an emerging consensus is that phosphorylated UPF1 (P-UPF1) is the common starting point for all of the multiple decay routes that have been reported to be possible in NMD (26). SMG6 is one of several proteins that are able to interact with P-UPF1 to ultimately induce RNA decay. So far, it is not known if and how the endonuclease activity of SMG6 is regulated so that it is used only when and where it is needed, and how this regulation would be orchestrated. Similarly, it is not clear exactly how SMG6 achieves target specificity; that is, how exactly it is recruited to target mRNAs. In this study, we have investigated what is required for SMG6-mediated endonucleolytic cleavage of mRNA. Through in vivo functional assays, protein-protein interaction studies and in vitro experiments, we found that SMG6 activity specifically requires SMG1 and UPF1. However, this requirement is not dependent on the previously documented interaction between SMG6 and UPF1 phosphorylated at threonine 28 (18), but rather on a newly identified phosphorylation-independent interaction between SMG6 and the unique stalk region in the UPF1 helicase domain, along with a contribution from the proximal portion of the SQ domain. We confirm that this interaction is critical for NMD and give insight into how this novel interaction may contribute to the regulation of UPF1 and SMG6.

Plasmids A comprehensive description of the plasmids used and made in this study can be found in the Supplementary Data section.

Tethered function and NMD assays Cells were seeded into 6-well plates and DNA co-transfections were carried out using Lipofectamine 2000 (Life Technologies) according to the manufacturer's guidelines. In all tethered function assays (TFAs), 100 ng pEGFP-C3, 100 ng pcβ-globin6MS2 and 2 µg pCMV-SMG6-MS2-HA, or 1 µg pCMV-LacZ-MS2-HA or 1 µg pCMV-HA-MS2-LacZ, along with either 400 ng pSUPuro scrambled plasmid (control) or 400 ng pSUPuro plasmid expressing an shRNA targeting specific NMD factors, was used. In Figure 1D, 2 µg pCMV-SMG6-MS2-HA, 1 µg each of pCMV-SMG6m14-3-3-MS2-HA and pCMV-SMG6mEBM-MS2-HA, and 4 µg of pCMV-SMG6mPIN-MS2-HA were used. In Figure 2B-D, 400 ng of pSUPuro plasmids expressing shRNAs against SMG1, SMG5, SMG7, UPF1 or UPF2 were co-transfected. In Figure 3A-C, 600 ng of RNAi-resistant (RNAiR) pcDNA3-Flag-SMG1 or 550 ng RNAiR pcDNA3-HA-UPF1 plasmids were also co-transfected, and the amount of DNA in each co-transfection mixture was made equal by the addition of the corresponding empty plasmid.
In the NMD assays, 100 ng of each plasmid producing mini-μ or β-globin reporters containing a PTC, as well as the wild-type counterparts, and 100 ng pmCMV-rGPx1-TGC, along with either 400 ng pSUPuro control plasmid or 400 ng of plasmids expressing two shRNA targets against UPF1, were co-transfected. In Figure 7A, 550 ng RNAiR pcDNA3-HA-UPF1 wild-type, 600 ng RNAiR pcDNA3-HA-UPF1 ΔStalk and 500 ng of RNAiR pcDNA3-HA-UPF1 Δ1C were included in the co-transfection mixtures of both the NMD rescue experiments and the TFA rescue experiments. The shRNA target sequence for SMG5 was defined in (27) and the rest have been described elsewhere (28). The remainder of the pSUPuro-based knockdown procedure, cell harvesting for both protein and RNA samples (which were taken from the same sample), total RNA extraction and measurement of relative mRNA levels by quantitative reverse transcription polymerase chain reaction (RT-qPCR) were done as previously explained (29). qPCR assays have been described elsewhere (27,28), except for the TaqMan assay to measure enhanced green fluorescent protein (eGFP) mRNA levels, which comprises 5′-TGAGCAAAGACCCCAACGA-3′, 5′-GGCGGCGGTCACGAA-3′ and 5′-FAM-GCGGATCACATGGTCCTGCTGG-BHQ1-3′. For the assays in Figures 3 and 6, total RNA was extracted using the Total RNA Miniprep Kit (Sigma-Aldrich). To examine protein levels from the assays, generally 35,000 whole-cell equivalents were separated using 6-12% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE), and the proteins were transferred to a polyvinylidene difluoride (PVDF) membrane (Westran Clear Signal, GE Healthcare) and probed with the indicated primary antibodies and appropriate fluorophore-coupled secondary antibodies. Fluorescent signals were captured using the Odyssey Infrared Imaging System (LI-COR Biosciences). A full inventory of all the antibodies used in this study can be found in the Supplementary Data.

Yeast two-hybrid β-galactosidase plate assays Two hundred and fifty nanograms of the designated pGADT7 and pGBKT7 plasmids were co-transformed into Mav99 cells (31) according to the high-efficiency LiOAc/single-stranded carrier DNA/polyethylene glycol (PEG) (Clontech) method of transformation (32). Four co-transformants from each co-transformation reaction were spread in equally concentrated patches on fresh selective plates and incubated at 30 °C for 3 days. Thereafter, non-lethal X-gal/β-galactosidase plate assays were performed (33). Briefly, the plates were flooded with chloroform, completely immersing the colonies. After 5 min, the chloroform was decanted and the plates were inverted and dried for 5 min. Each plate was then overlaid with 1% low-melting agarose (Promega, V2111) containing 1 mg ml−1 X-gal and 100 mM KPO4, pH 7.0, cooled to 42 °C. Once the agarose had hardened, the plates were inverted and incubated at 30 °C for 24 h and then at 4 °C for 24 h before being photographed.

[Displaced figure legend (likely Figure 2): as in Figure 1B, using plasmids expressing SMG6-MS2-HA in (B), SMG6m14-3-3-MS2-HA in (C) and SMG6mEBM-MS2-HA in (D), except performed in cells depleted of various NMD factors, whereby a plasmid expressing either a control shRNA or an shRNA targeting one of the indicated NMD factors was also included in the co-transfection mixture. β-globin reporter mRNA levels have been normalized to GFP mRNA levels, with mRNA levels in cells expressing LacZ-MS2-HA set to 100 (not shown for clarity).]
[Figure legend, continued: the bar charts depict the relative mRNA levels under control knockdown conditions (black bars) and the designated knockdown conditions (named below each gray bar); the bars represent the mean of >6 independent experiments; error bars indicate SD, and P-values are displayed in Supplementary Figure S1B.]

In vitro RNA-protein interaction studies using MicroScale Thermophoresis The MicroScale Thermophoresis (MST) assays were set up, performed and analyzed as previously outlined (35-38). Specifically, the 5′-labeled Cy5 RNA U30 oligonucleotide was purchased from Microsynth.ch and diluted to 40 nM using 1× HBS-EP+ buffer. The solution of each designated unlabeled recombinant protein (see recombinant protein production) was serially diluted from a concentration of 500 nM down to 0.015 nM (with the exception of SMG6, which was diluted from 350 nM to 0.01 nM) in the presence of constant labeled RNA (20 nM). The RNA-protein mixtures were analyzed in hydrophilic capillaries (NanoTemper Technologies ref: K004) using the Monolith NT.115T MST instrument (NanoTemper Technologies) at room temperature (RT), where the light-emitting diode (LED) was set at 40% (except in Figure 6E, where the LED was set to 70%), the IR-laser power was set at 20%, and laser on and off times were fixed at 30 s and 5 s, respectively. The NanoTemper Technologies Analysis software, version 1.5.41, was used to obtain normalized fluorescence versus concentration curves and to determine the corresponding KD using the law of mass action.

[Displaced figure legend (Figure 4): (B) In rows 1-6, plasmids expressing the GAL4 DNA-binding domain UPF1 fusion protein (BD-UPF1) and derivatives thereof were co-transformed with a plasmid expressing the GAL4 activation domain SMG6 fusion protein (AD-SMG6) into Mav99 cells. The designated co-transformations in rows 7-14 were performed as negative controls, where "BD only" represents transformation of a plasmid expressing GAL4-DNA-BD-empty and "AD only" represents transformation of a plasmid expressing GAL4-AD-empty. In row 15, plasmids expressing GAL4-DNA-BD-eRF3a co-transformed with a plasmid expressing GAL4-AD-eRF1 served as a positive control, along with the necessary controls in rows 16-17. Four colonies (denoted A-D) from each co-transformation were selected for the β-galactosidase plate assay. An interaction between the BD and the AD fusion proteins activates β-galactosidase expression, which produces a blue color by hydrolyzing X-gal. (C) Top: PyMOL view of the structure of the human UPF1 HD based on pdb file 2xzp (30), showing the two RecA domains (green), the additional protruding sub-domains 1B and 1C (black) and the stalk helices (orange). Bottom: schematic of the UPF1 part of the additional GAL4-DNA-BD-UPF1 constructs tested; the same deletions were also created and tested in full-length UPF1. (D) As in (B), except that the co-transformations were performed using plasmids expressing the stipulated GAL4-DNA-BD-UPF1 proteins (BD-UPF1) with the plasmid expressing GAL4-AD-SMG6 (AD-SMG6), shown in rows 1-8; rows 9-16 represent the corresponding negative controls expressing empty GAL4-AD.]

[Displaced figure legend fragment (MST assays): by fitting the fraction of bound U30 RNA to the quadratic solution of the mass action law, the binding constants in nM (KD) were determined. The normalized fluorescence (Fnorm) is plotted on a linear y-axis in per thousand (‰) against the total concentration of the titrated protein (nM) on a log10 x-axis. At least two independent measurements were made for each assay, and error bars on the individual data points represent the SD between these repetitions.]
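For reference, the "quadratic solution of the mass action law" used for these KD fits has a standard closed form in the single-site binding model. The notation below is generic, not taken from the paper:

% Fraction of labeled RNA bound at total protein concentration [P]_0 and
% total labeled RNA concentration [R]_0 (20 nM here), with dissociation
% constant K_D; the measured signal interpolates between the free and
% fully bound fluorescence levels.
\begin{align}
  f_{\text{bound}} &= \frac{[P]_0 + [R]_0 + K_D
      - \sqrt{\bigl([P]_0 + [R]_0 + K_D\bigr)^{2} - 4\,[P]_0\,[R]_0}}{2\,[R]_0} \\
  F_{\text{norm}} &= F_{\text{free}} + \bigl(F_{\text{bound}} - F_{\text{free}}\bigr)\, f_{\text{bound}}
\end{align}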
Tethering of the SMG6 PIN domain alone is not sufficient to reduce reporter RNA levels To build upon what is known about SMG6 and further delineate the molecular mechanisms surrounding SMG6 activity, we established an SMG6 TFA, a comprehensive description of which can be found elsewhere (29). Full-length SMG6, mutants thereof and a fragment of LacZ serving as a control were fused to the MS2 coat protein, which in turn can be artificially tethered to a β-globin reporter mRNA containing six MS2 binding sites in its 3′-untranslated region (3′ UTR) (β-globin-6-MBS; Figure 1A). SMG6-MS2 fusion proteins were co-expressed with the β-globin-6-MBS reporter and a GFP-expressing plasmid in HeLa cells, and the steady-state levels of the reporter mRNA were quantified and normalized to the levels of GFP mRNA.

[Displaced figure legend (Figure 7): Functional assays confirm that the phospho-independent interaction between UPF1 and SMG6 is crucial for NMD. (A) SMG6-TFA completed as in Figure 3B, except that the plasmid expressing RNAiR UPF1 ΔStalk (blue bars) was also included. The knockdown was extended by 1 day in these experiments, and the bars represent the mean of three independent experiments. Error bars are SD, and P-values are shown in Supplementary Figure S1D. Western blot showing UPF1 levels: lanes 1-3 represent a serial dilution (100, 33 and 11%) of cell lysate from untransfected HeLa cells; lane 4 represents control shRNA (Ctr) and lane 5 the samples expressing two shRNAs targeting UPF1; lanes 6 and 7 represent the samples expressing the shRNAs targeting UPF1 together with exogenous expression of RNAiR UPF1 WT and UPF1 ΔStalk, respectively. The detected protein is indicated on the right of the blot. (B) NMD rescue assays: HeLa cells were transfected with plasmids expressing mini-μ mRNA with a PTC at position 310 (Ter310) and pmCMV-rGPx1-TGC, serving as a co-transfection control, together with plasmids expressing either a control shRNA or shRNAs targeting UPF1; where indicated, plasmids expressing RNAiR UPF1 WT or RNAiR UPF1 ΔStalk were also co-transfected. RT-qPCR analysis was used to evaluate the relative mRNA levels, normalized to GPx1 mRNA levels, and the levels of normalized WT mRNA in control knockdown cells were set to 100% (not shown for clarity).]

Expression of SMG6-MS2-HA reduced the steady-state levels of the reporter mRNA to below 10% of the levels detected in cells expressing the LacZ-MS2-HA or the HA-MS2-LacZ controls, which encode a fragment of β-galactosidase fused to a C-terminally located MS2-HA or an N-terminally located HA-MS2 moiety, respectively. To control for potential pleiotropic effects caused by overexpression of proteins, SMG6 without the MS2 moiety was also tested (Figure 1B). Western blotting confirmed that the MS2 fusion proteins and the SMG6 lacking the MS2 moiety were all expressed (Figure 1B, lower panels). To address what aspect of SMG6 was important for inducing reporter level reduction when tethered to the mRNA, we expressed various SMG6 mutants fused to MS2-HA (Figure 1C). In the SMG6 PIN mutant (SMG6mPIN), the endonuclease activity was abolished by mutating the three crucial aspartic acids in the catalytic center to asparagines (24). In the SMG6 14-3-3 mutant (SMG6m14-3-3), four highly conserved amino acids in the 14-3-3-like domain were changed (19), with the expectation that this should prevent SMG6 from binding to UPF1 phosphorylated at T28, as previously shown with such an SMG6 mutant (18).
Finally, in the SMG6 EBM mutant (SMG6mEBM), point mutations were introduced in EBMs 1 and 2 in the very N-terminus of SMG6 (20). Transfection conditions were adapted to achieve a similar expression of all of the SMG6-MS2-HA mutants, as displayed in the western blot in the lower part of Figure 1D. When tethered to the β-globin reporter mRNA, the SMG6mPIN-MS2-HA did not reduce the reporter RNA levels at all (Figure 1D). By contrast, SMG6m14-3-3-MS2-HA behaved like its wild-type counterpart and was able to reduce the reporter RNA levels to ∼12% of the levels detected in cells expressing the LacZ-MS2-HA control. Finally, the ability of the tethered SMG6mEBM-MS2-HA to reduce the reporter RNA levels was significantly compromised compared to tethering wild-type SMG6 (P-value 3.9 × 10⁻⁵; see Supplementary Figure S1A), yet significantly different from the LacZ-MS2-HA control (P-value 1.5 × 10⁻¹¹). The same results were reached when the β-globin RNA levels were analyzed by northern blotting, and also when the same SMG6 fusion proteins were tethered to a Renilla luciferase reporter mRNA containing six MS2 binding sites in its 3′ UTR (Supplementary Figure S2). Next, we wondered if tethering of the PIN domain alone was enough to elicit decreased β-globin reporter mRNA levels. As shown in Figure 1E, tethering the SMG6 PIN domain alone was not sufficient to bring about the reduction in reporter mRNA levels. Expression of the PIN-MS2-HA and mPIN-MS2-HA constructs is documented in lanes 2 and 3 of the accompanying western blot. In summary, mutations affecting the N-terminal EBM motifs compromise the ability of SMG6 to reduce reporter mRNA levels, despite the fact that their proposed role of recruiting SMG6 to the RNA (20) is bypassed in these assays, begging the question as to what function these motifs contribute to the endonuclease activity of SMG6. The SMG6 14-3-3 mutant, which should no longer be able to interact with P-UPF1, induced reporter mRNA reduction as well as wild-type SMG6, consistent with the view that normally SMG6 is recruited to NMD-targeted messenger ribonucleoprotein particles (mRNPs) by this interaction and that our tethering assay bypasses this requirement. Importantly, the endonucleolytic activity of SMG6 is required but not sufficient by itself for inducing reporter mRNA level reduction, demonstrating that additional parts of the SMG6 protein are necessary for endowing the PIN domain with endonuclease activity, presumably through interactions with additional proteins.

The SMG6-mediated reporter mRNA level reduction requires SMG1 and UPF1 To test whether additional NMD factors are required to stimulate the endonuclease activity of SMG6, we performed TFAs with the SMG6-MS2-HA constructs described in Figure 1 in HeLa cells depleted of various NMD factors. UPF1, UPF2, SMG1, SMG5 or SMG7 was knocked down by expressing the corresponding shRNAs, in parallel with a control pSUPuro plasmid expressing a scrambled sequence with no predicted specific targets in human cells. A fraction of the cell lysates was used to extract RNA and determine the relative β-globin reporter mRNA levels (see below), and western blots were performed with the remainder of each lysate to assess the knockdown efficiencies of the stipulated NMD factors (Figure 2A). The effectiveness of the stipulated RNAi-mediated knockdowns was also documented at the mRNA level (Supplementary Figure S3A-E).
The β-globin reporter mRNA level for each experimental condition is depicted relative to the level in cells expressing LacZ-MS2-HA and normalized to GFP mRNA encoded on a co-transfected expression plasmid, to account for differences in transfection efficiencies among the samples (Figure 2B-D and Supplementary Figure S1B). We found that the SMG6-MS2-HA-induced reporter mRNA level reduction significantly requires UPF1 and SMG1 but is only marginally compromised by the knockdown of UPF2, SMG5 or SMG7 (Figure 2B). Essentially the same result was obtained when SMG6m14-3-3-MS2-HA was tethered (Figure 2C). The already compromised destabilizing activity of SMG6mEBM was also lost under UPF1 and SMG1 knockdown conditions (Figure 2D). Intriguingly, the modest reduction of reporter RNA induced by tethering of SMG6mEBM was further lost when UPF2 was knocked down, in contrast to tethered wild-type SMG6, which was barely affected by the UPF2 knockdown. Knockdown of UPF3B gave a similar result; it barely affected the strong drop in reporter mRNA levels induced by tethering wild-type SMG6, but it significantly abrogated the modest decrease of reporter levels induced by tethering of SMG6mEBM (Supplementary Figures S3F and G and S1E). This suggests that UPF2 and UPF3B are also involved in activation of the SMG6 endonuclease under physiological conditions, but possibly their contribution is weaker than those of UPF1 and SMG1, and hence it can only be detected under conditions that reduce the otherwise dominant contributions, such as in the case of the EBM mutant. In summary, the data identify UPF1 and SMG1 as necessary co-factors for the endonuclease activity of SMG6 and also indicate a minor involvement of UPF2 and UPF3B. In contrast, the roles of SMG5 and SMG7 seem marginal at best in our SMG6 TFA.

Rescue experiments confirm that SMG6 activity requires UPF1 and SMG1 To ascertain that the loss of the SMG6-induced reporter mRNA level reduction was specifically caused by the depletion of SMG1 and UPF1, and not by an off-target effect of the knockdown, we tried to rescue the observed SMG6-mediated effect on the mRNA reporter by expression of exogenous RNAiR versions of these proteins. Figure 3A depicts the SMG1 rescue experiment, where the specified SMG6-MS2-HA constructs, or the controls LacZ-MS2-HA and HA-MS2-LacZ, were tethered in HeLa cells expressing a control shRNA (black bars), cells depleted of SMG1 (gray bars) or cells depleted of SMG1 and expressing exogenous RNAiR SMG1 (green bars). Our data show that expression of RNAiR SMG1 significantly and specifically rescues the steady-state reporter mRNA levels back to those observed under no-knockdown conditions (see Supplementary Figure S1C for P-values). Western blotting confirmed that the RNAi-mediated SMG1 knockdown was effective and that the expression of the exogenous RNAiR SMG1 replenished the pool of intracellular SMG1 to a level similar to that without or with a control knockdown. The UPF1 rescue experiment was analogously performed and gave a similar result (Figure 3B and Supplementary Figure S1C). It showed that exogenously expressed RNAiR UPF1 was able to significantly rescue the mRNA reporter levels under endogenous UPF1 knockdown conditions back to those seen in HeLa cells with a control knockdown.
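For clarity, the normalization applied to all reporter measurements in these assays (reporter mRNA divided by co-transfected GFP mRNA, with the LacZ-MS2-HA control set to 100) amounts to a simple ratio-of-ratios calculation. The Python sketch below illustrates it; the abundance values are made up, not measured data.

# Sketch of the reporter normalization described above (hypothetical values).
def relative_reporter_level(reporter, gfp, reporter_ctrl, gfp_ctrl):
    """Reporter mRNA normalized to GFP mRNA, expressed relative to the
    LacZ-MS2-HA control condition (set to 100)."""
    return (reporter / gfp) / (reporter_ctrl / gfp_ctrl) * 100.0

# e.g. a tethered-SMG6 sample versus the LacZ control:
print(relative_reporter_level(reporter=12.0, gfp=95.0,
                              reporter_ctrl=130.0, gfp_ctrl=100.0))  # ~9.7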
As for SMG1, western blotting confirmed that the UPF1 knockdown was effective and that the expression of the RNAiR UPF1 in cells depleted of UPF1 yielded UPF1 protein levels similar to those observed in cells without or with a control knockdown. These results confirm that tethered SMG6-mediated reporter mRNA level reduction definitively requires the presence of both UPF1 and SMG1.

UPF1 phosphorylation mutants still support SMG6-facilitated RNA reporter level reduction, despite the need for SMG1 and UPF1 To decipher the domains or functions of UPF1 and SMG1 required for the activity of SMG6, RNAiR UPF1 constructs harboring various previously described mutations were expressed in SMG6 tethering rescue experiments. We tested three different UPF1 phospho-site (P-site) mutants: T28A, S1096A and a double mutant comprising both of these point mutations (18). As before, the UPF1 knockdown effectiveness and the expression levels of the RNAiR UPF1 constructs were monitored by western blotting (Figure 3C, right panel). The efficiency of the UPF1 knockdown was confirmed (lane 2), and the levels of all RNAiR UPF1 mutants (lanes 4-6) were comparable to the endogenous UPF1 levels in control knockdown cells (lane 1). As previously observed (Figure 3B), wild-type RNAiR UPF1 rescued the increased mRNA reporter levels resulting from the loss of UPF1 back to the levels observed when the assay was performed without knockdown (Figure 3C, red bars). Interestingly, RNAiR UPF1-T28A (pink bars) and UPF1-S1096A (brown bars) rescued the reduced reporter levels induced by SMG6 as efficiently as wild-type UPF1 (all P-values > 0.1 when compared to the effect induced by SMG6 under control knockdown conditions; Supplementary Figure S1C), and even the double mutant T28A/S1096A (orange bars) was almost as effective as the wild-type counterpart in these SMG6 tethering rescue experiments (the P-values compared to the other P-site mutant rescues are > 0.1, but the P-value is 0.01 compared to the control knockdown condition). This result was unexpected in light of the previously reported dependence of the UPF1-SMG6 interaction on phosphorylated T28 in UPF1 (18). Therefore, we wanted to confirm that the mutants designed to disrupt the reported interaction between P-UPF1 and SMG6 were successfully abolishing such an interaction in our assays. Hence, we co-expressed HA-UPF1 with our various SMG6 mutants fused to MS2-HA and performed immunoprecipitations (IPs) with an anti-MS2 antibody (MS2-IP) to test the SMG6-MS2-HA proteins for their ability to co-IP P-UPF1 (Supplementary Figure S4A). Since only a very small fraction of UPF1 is phosphorylated at steady state (39), we co-expressed HA-UPF1 and added okadaic acid (a potent inhibitor of serine/threonine phosphatases 1, 2A and 2B) prior to cell lysis to increase the abundance of intracellular P-UPF1 (40). Intriguingly, all of the described SMG6-MS2-HA proteins co-immunoprecipitated P-UPF1 (lanes 6, 9, 12, 15 and 18), as determined using a phospho-(Ser/Thr) ATM/ATR substrate antibody that specifically recognizes P-UPF1 at the correct molecular weight; we did, however, observe that the SMG6mEBM-MS2-HA and SMG6m14-3-3-MS2-HA versions co-precipitated much less P-UPF1 than the SMG6-MS2-HA wild-type and SMG6mPIN-MS2-HA constructs (compare lanes 12 and 15 with 6 and 9, respectively). Furthermore, we detected a small and similar fraction of UPF1 in the precipitates of all SMG6-MS2-HA constructs using an anti-UPF1 antibody, even when the lysate was treated with RNase A (lane 18).
In contrast, the LacZ-MS2-HA control did not co-IP any P-UPF1 (lane 3). In a vice-versa approach, we expressed our MS2-UPF1 P-site mutants together with HA-SMG6, performed MS2-IPs and examined the extent of SMG6 co-precipitation (Supplementary Figure S4B). Notably, the MS2-UPF1-T28A still co-precipitated a small fraction of SMG6 (lane 9). In fact, all of the UPF1 P-site mutants still weakly co-precipitated SMG6 (lanes 12 and 15), to a similar or slightly lesser extent than the wild-type UPF1 protein and regardless of RNase A treatment (lanes 6 and 18, respectively). HA-MS2-LacZ did not co-IP any SMG6, which confirmed the specificity of the IPs (lane 3). Given that the UPF1 P-site mutants still co-immunoprecipitated with SMG6, we wondered whether the kinase activity of SMG1 was required for SMG6-mediated RNA decay in the TFA or whether perhaps another function of SMG1 supported SMG6 activity. Therefore, RNAiR SMG1 D2331A, in which the kinase activity was abolished by mutating a crucial aspartic acid in the PIKK catalytic domain to an alanine (41,42), was expressed in the SMG6 TFA. As previously observed (Figure 3A), wild-type RNAiR SMG1 rescued the increased mRNA reporter levels resulting from the loss of SMG1 back to the levels observed when the assay was performed without knockdown (Supplementary Figure S5A, green bars, and Supplementary Figure S1F for P-values). In contrast, the RNAiR SMG1 D2331A (purple bars) did not rescue the SMG6-induced reduced reporter RNA levels. Efficient SMG1 knockdown was confirmed (Supplementary Figure S5B, lane 5) and the levels of RNAiR SMG1 WT and the D2331A mutant (lanes 6 and 7) were comparable to the endogenous SMG1 levels in untreated cells and in the control knockdown cells (lanes 1 and 4, respectively). This result indicates that SMG1 kinase activity is required for SMG6 activity in the TFA. Since tethered SMG6 needs UPF1 and kinase-competent SMG1 to decrease the reporter mRNA levels, suggesting a requirement for P-UPF1, while on the other hand the phospho-epitope-binding SMG6 14-3-3 mutant still induced strong reporter mRNA level loss and the UPF1 T28A and S1096A mutants rescued SMG6-mediated effects on the reporter mRNA in UPF1 knockdown cells as efficiently as wild-type UPF1, we conclude that the SMG1-mediated phosphorylation events needed for SMG6 activity occur either in other proteins or in UPF1 at P-sites other than T28 and S1096. Moreover, we discovered that more of the SMG6 protein than just the PIN or 14-3-3 domain was needed for its activity, and we always observed a minor but constant interaction between UPF1 and SMG6, regardless of the mutations made in either protein. Based on all of this, we postulated that at least a part of the SMG6-UPF1 interaction may occur in a phosphorylation-independent manner, as previously hinted at but never explored (18,20). We anticipated that such a phospho-independent interaction might contribute a regulatory aspect to each enzyme and would be important for the actual mRNA degradation induced by SMG6.

Discovery of a phosphorylation-independent interaction between SMG6 and the helicase domain of UPF1

To identify such a phospho-independent interaction between UPF1 and SMG6, we utilized the yeast two-hybrid system (43).
First, we tested several truncations of UPF1 (Figure 4A) fused to the GAL4 DNA-binding domain (BD) for an interaction with full-length SMG6 fused to the GAL4 activation domain (AD) using a LacZ reporter gene, which allows detection of colonies expressing β-galactosidase by addition of the chromogenic substrate X-gal to the plates. Of each co-transformation, four different colonies (A-D) were analyzed. GAL4-DNA-BD-eRF3a and GAL4-AD-eRF1 served as a positive control, since the interaction between eRF1 and eRF3 is well documented (44,45) (Figure 4B, row 15). Additional controls ruled out self-activation of any of the GAL4-DNA-BD fusion proteins in the absence of the GAL4-AD-SMG6 (Figure 4B, rows 8-14), while the control shown in row 7 confirmed that GAL4-AD-SMG6 is also not able to self-activate the reporter gene. Co-expression of BD-UPF1 and AD-SMG6 resulted in blue colonies, indicating an interaction between the UPF1 and SMG6 full-length proteins (row 1). While we did not observe any interaction between SMG6 and the CH or SQ domains of UPF1 (rows 2 and 4), we scored an interaction between the UPF1 HD domain and SMG6 (row 3), which was abolished by the presence of the CH domain (row 5). On the other hand, we noted that the interaction between the UPF1 HD and SMG6 appeared consistently stronger when the HD still had its SQ domain present (row 6). These results indicate that the presence of the CH domain inhibits and the presence of the SQ domain enhances the interaction of SMG6 with the UPF1 HD. The expression levels of the utilized UPF1 constructs and of the SMG6 full-length construct in yeast cells were confirmed by western blotting using an anti-c-Myc antibody recognizing the in-frame c-Myc tag located between the DNA-BD and the start of the UPF1 ORF and an anti-HA antibody identifying the in-frame HA-epitope tag located between the AD and the start of the SMG6 ORF (Supplementary Figure S6A and B, respectively). As illustrated in the crystal structure of the UPF1 HD (30) in Figure 4C, it consists of two domains shown in green, each of which comprises a RecA-like α/β fold that is responsible for nucleotide and RNA binding; these have been referred to as RecA1 (or 1A) and RecA2 (or 2A), respectively (30,46). The adenosine triphosphate (ATP) binding site is located in a deep cleft separating these two RecA-like domains. On the first domain, two further sub-domains termed 1B and 1C exist (shown in black). 1B comprises amino acids (AA) 325-414 and takes the form of a β-barrel consisting of six anti-parallel strands located above the boundary between the two RecA-like domains (6,46,47). 1B is connected to RecA1 by two α-helices that are not preserved in any other known helicase structures. These two α-helices were termed the stalk (colored in orange) because they protrude from RecA1 (30). 1C comprises AA 556-609 and is made up of three helices that pack against the outer surface of RecA1 and make a few contacts to sub-domain 1B. Since the 1B and 1C insertions are unique to UPF1, form structural entities above the RecA-like domains and seem to be important for NMD, at least in yeast (30,46), we wondered if SMG6 interacted with these structures, which would confer specificity on the UPF1-SMG6 interaction.
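For orientation, the sub-domain boundaries described above, together with the deletion constructs introduced next, can be kept track of with a small lookup. The following is a minimal sketch and not part of the original study; the interval coordinates are taken from the text (isoform 2 numbering), while the helper function itself is hypothetical:

```python
# Amino-acid intervals of the UPF1 HD sub-domains and of the deletion
# constructs, as given in the text. The overlap helper is illustrative only.
SUBDOMAINS = {"1B": (325, 414), "1C": (556, 609)}
DELETIONS = {"dStalk": (271, 433), "d1C": (561, 608)}

def overlap(deletion, subdomain):
    """Number of residues of a sub-domain removed by a deletion."""
    d0, d1 = DELETIONS[deletion]
    s0, s1 = SUBDOMAINS[subdomain]
    return max(0, min(d1, s1) - max(d0, s0) + 1)

# dStalk removes the stalk helices plus the entire 90-residue 1B insertion:
print(overlap("dStalk", "1B"))  # 90 of 90 residues
# d1C removes most, but not all, of sub-domain 1C:
print(overlap("d1C", "1C"))     # 48 of 54 residues
```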
To test this, we generated versions of full-length UPF1, UPF1 HD or UPF1 HD-SQ fused to the GAL4-DNA-BD that lack the stalk plus the intervening 1B insertion (ΔStalk, AA 271-433), the 1C sub-domain (Δ1C, AA 561-608) or both (ΔStalk Δ1C), as illustrated in Figure 4C below the crystal structure. When tested in the yeast two-hybrid assay, we found that even when the 1C region was removed from the UPF1 HD, it still interacted with SMG6 (Figure 4E, rows 4, 5 and 8). Notably, the enhancing effect of the presence of the SQ domain was again clearly seen in these tests (compare rows 4 and 5). However, when the stalk region was removed, the interaction was lost (Figure 4D, rows 1-3 and 6-7). This dependence on the stalk region strongly suggested that the interaction point of UPF1 with SMG6 lies somewhere within AA 271-433. Again, the negative controls ruled out self-activation of the various protein fusions alone (rows 9-16). Given that this result is based upon a loss of interaction, it was essential to confirm by western blotting that the constructs depicted in Figure 4C and assayed in Figure 4D were all made into proteins in the yeast cells (Supplementary Figure S6C). We could not detect expression of the BD-UPF1 HD ΔStalk construct (not shown), but we could detect expression of the same mutant incorporated into BD-UPF1-HD-SQ (left panel, lane 3) and in full-length UPF1 (right panel, lane 8). Concurrently with mapping the region in UPF1 required for this novel interaction between UPF1 and SMG6, we also tried to identify the interaction point for UPF1 in SMG6 using the yeast two-hybrid approach. We constructed several plasmids encoding different SMG6 portions fused to the GAL4-AD, but we were not able to conclusively map the interaction point in SMG6 because many of the SMG6 fusion proteins involving the unstructured N-terminus were not well expressed in the yeast cells (data not shown). However, the SMG6 PIN domain (AA 1239-1419), the 14-3-3 domain (AA 576-814) and SMG6 AA 814-1419 fused to the GAL4-AD all expressed well in the yeast cells, but we did not observe an interaction of any of these SMG6 fusion proteins with UPF1 (Supplementary Figure S6D and E). The interaction between UPF1 and the SMG6 14-3-3 domain is probably reliant upon phosphorylation, and the yeast cells lack SMG1 to phosphorylate UPF1 for this to occur (12). Therefore, it is clear that the interaction between SMG6 and UPF1 probably involves the N-terminal part of SMG6 and is not mediated by the PIN domain of SMG6.

UPF1 Stalk and SQ regions are involved in binding to SMG6 in vitro

We next analyzed the binding between SMG6 and UPF1 in vitro to confirm the interaction identified using the yeast two-hybrid assay by testing if purified SMG6 can pull down purified UPF1 and the variants thereof depicted in Figure 5A. Specifically, Flag-HA-SBP-SMG6 and Flag-HA-SBP-MBP, the latter serving as a control, were expressed in HEK293T cells and captured on anti-HA magnetic beads (Figure 5B), and streptavidin-binding protein (SBP)-tagged UPF1 WT, UPF1 ΔStalk, UPF1 Δ1C and eGFP were expressed in HEK293T cells. These cells were treated with 7 mM caffeine for 4 h to abolish UPF1 phosphorylation, and the proteins were affinity purified using streptavidin sepharose (Figure 5C).
Aliquots of immobilized Flag-HA-SBP-SMG6 and Flag-HA-SBP-MBP were mixed with equimolar amounts of the purified UPF1 protein variants or with eGFP as a control, and the retained proteins were detected by western blot using an anti-UPF1 or anti-GFP antibody, respectively (Figure 5D). All three UPF1 protein variants co-precipitated specifically with Flag-HA-SBP-SMG6, but the amount of SMG6 associated with UPF1 ΔStalk was reduced compared with UPF1 WT and UPF1 Δ1C (Figure 5D, compare lane 11 with lanes 10 and 12, respectively). This result corroborates our yeast two-hybrid data by confirming that the stalk region is an important part of UPF1 for its interaction with SMG6. Since we consistently observed in our yeast two-hybrid analyses that the SQ domain of UPF1 enhanced the binding between UPF1 and SMG6 (Figure 4) and removal of the stalk region did not completely abolish the interaction between SMG6 and UPF1, we also tested if the SQ domain of UPF1 contributes to the interaction. Thus, two additional SBP-tagged C-terminally truncated UPF1 ΔStalk constructs lacking the last 99 amino acids (UPF1 ΔStalk ΔSQ1; see Figure 5A) or the entire SQ domain (ΔStalk ΔSQ2) were also expressed in HEK293T cells as before (Figure 5E). Again, the UPF1 WT protein co-precipitated specifically with Flag-HA-SBP-SMG6 (Figure 5F, lane 11), and the amounts of SMG6-associated UPF1 ΔStalk and ΔStalk ΔSQ1 were reduced to similar extents compared with UPF1 WT (Figure 5F, compare lane 11 with lanes 12 and 13, respectively). Strikingly, almost no UPF1 ΔStalk ΔSQ2 was pulled down by SMG6 (lane 14). Therefore, the stalk region of the UPF1 HD and the proximal 105 amino acids of the UPF1 SQ domain are involved in binding to SMG6. This result is in line with the fact that Chakrabarti et al. simultaneously also detected a contribution of the SQ domain to this newly identified interaction between UPF1 and SMG6 (48). Since yeast cells do not appear to have an SMG1 ortholog (12) and because the UPF1 used in our in vitro assays was hypo-phosphorylated, this identified interaction between UPF1 and SMG6 likely occurs independently of phosphorylation. Further to this, there are no annotated P-sites within the UPF1 region spanning AA 271-433, and the two annotated P-sites in the proximal SQ domain at Y946 and Y958 are not well conserved and have not been site-specifically analyzed so far (49,50).

UPF1 ΔStalk can still bind RNA while UPF1 Δ1C has a greatly reduced affinity for RNA in vitro

The activity of the UPF1 HD is regulated by its neighboring N-terminal cysteine- and histidine-rich (CH) domain and C-terminal SQ domain. The CH domain of UPF1 makes intramolecular contacts with the HD that seem to mask its enzymatic activity, and this repression is only alleviated when the UPF2/UPF3 complex binds to the CH domain, which induces a large conformational change causing UPF1 to relax its grip on the RNA and leading to the activation of UPF1 ATPase/helicase activity (30,51,52). Similarly, the C-terminal region of human UPF1 also appears to interact directly with the HD to suppress UPF1 enzymatic activities. Again, this repression is only diminished when a currently unknown factor binds to the area of the SQ proximal to the HD, causing a rearrangement that allows for enzymatic activation (53). Ultimately, these regulatory effects are a consequence of modulating the extent of RNA binding, and the sub-domains 1B and 1C of the HD have been implicated as important in controlling RNA binding (30,46).
Given the postulated roles of 1B and 1C in UPF1's RNA binding behavior, it was vital to examine how deletions of the stalk region (encompassing sub-domain 1B) and of sub-domain 1C in the HD affect the ability of UPF1 to bind single-stranded RNA. We wanted to know whether the reduced interaction between UPF1 ΔStalk and SMG6 was due to the loss of an interaction point or due to a damaging effect on the ability of UPF1 to bind RNA. Moreover, we also wanted to test if SMG6 could bind RNA itself. Therefore, interactions of our recombinant proteins with RNA were examined in solution using MST (38). We used fluorescently labeled U30 RNA at a fixed concentration (20 nM) and titrated the unlabeled designated protein partner in a range starting at concentrations above the expected dissociation constant (KD) and ending with sub-stoichiometric concentrations with respect to the labeled RNA. Under our experimental conditions, UPF1 exhibited a high affinity for RNA with a calculated KD of 1.48 nM (Figure 6A). Deletion of the stalk/1B structure led to a moderate weakening of the interaction between UPF1 and RNA (KD of 6.51 nM) (Figure 6B). In contrast, deletion of the 1C region dramatically reduced UPF1's binding affinity for RNA, by 32.2-fold compared to UPF1 WT (Figure 6, compare A and C). Contrary to another study (54), we found no evidence for binding of SMG6 to RNA in our assay (Figure 6D). Finally, we did not observe eGFP binding to RNA, which served as a control in these assays (Figure 6E). Thus, our result indicates that SMG6 has to be recruited to its target mRNAs by a protein-protein interaction, for example by UPF1. Furthermore, we validated the functionality of the recombinant full-length SMG6 protein in vitro by performing endocleavage assays. Wild-type SMG6 protein degraded a radioactively labeled U30 RNA while its endonucleolytically inactive counterpart could not degrade the RNA. We also demonstrated that this ribonuclease activity requires Mn2+ (Supplementary Figure S7), as was shown by Glavan et al. using bacterially produced SMG6 PIN domain (21). SMG6 activity in these in vitro assays was poor, and we ambitiously tried to increase the activity of SMG6 by adding UPF1. However, we did not conclusively find UPF1 to promote SMG6 activity in these assays, most likely because we were still lacking other regulatory factors that are essential for optimal SMG6 activity (data not shown). In conclusion, we have shown that only the deletion of 1C abrogated the ability of UPF1 to effectively bind RNA, which is in line with earlier results using filter binding assays (46), while the UPF1 ΔStalk protein is able to bind RNA. This strongly indicates that the loss of interaction between UPF1 ΔStalk and SMG6 observed in the yeast two-hybrid assays and in the in vitro pull-down assays is not because UPF1 can no longer bind RNA but rather because the stalk region of the UPF1 HD (AA 271-433) encompasses a phosphorylation-independent interaction surface for SMG6. It had not escaped our attention that the stalk region includes the area of difference between human UPF1 isoform 1 (UniProtKB: Q92900-1) and isoform 2 (Q92900-2). Compared to hUPF1 isoform 1, isoform 2 lacks AA 353-363 (Supplementary Figure S8A) due to the usage of an alternative 5′ splice site in exon 7. Both isoforms have been used in publications, but possible functional differences between them have never been examined.
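For reference, the dissociation constants quoted above follow from fitting the MST titration curves to a standard 1:1 binding isotherm. The sketch below is ours, not from the study: only the 20 nM labeled-RNA concentration, the 1.48 nM wild-type KD and the 32.2-fold change for Δ1C come from the text, while the titration points and noise level are hypothetical.

```python
# Fit a 1:1 binding isotherm (ligand-depletion form) to a simulated MST
# titration, recovering a K_D near the reported 1.48 nM for wild-type UPF1.
import numpy as np
from scipy.optimize import curve_fit

RNA = 20.0  # nM, fixed concentration of fluorescently labeled U30 RNA

def bound_fraction(protein, kd):
    """Fraction of labeled RNA in complex; quadratic (depletion) solution."""
    b = protein + RNA + kd
    return (b - np.sqrt(b**2 - 4.0 * protein * RNA)) / (2.0 * RNA)

protein = np.array([0.5, 1, 2, 5, 10, 20, 50, 100, 200])  # nM, hypothetical
rng = np.random.default_rng(0)
signal = bound_fraction(protein, 1.48) + rng.normal(0, 0.01, protein.size)

(kd_fit,), _ = curve_fit(bound_fraction, protein, signal, p0=[5.0])
print(f"fitted K_D ~ {kd_fit:.2f} nM")                # close to the reported 1.48 nM
print(f"implied Delta1C K_D ~ {1.48 * 32.2:.1f} nM")  # from the reported 32.2-fold loss
```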
So far, we had only worked with UPF1 isoform 2, and thus we wanted to examine if the extra 11 amino acids in isoform 1 altered the affinity of UPF1 for SMG6. HeLa cells produce mRNAs for both isoforms, with isoform 2 being the more abundant one (Supplementary Figure S8B). Similar to the results gained using UPF1 isoform 2 in Figure 4B, we found that small amounts of SMG6 also co-immunoprecipitated with MS2-UPF1 iso1 (Supplementary Figure S8C), and RNAiR UPF1 iso1 was able to rescue the ability of SMG6 to induce reporter mRNA level reduction as efficiently as RNAiR UPF1 iso2 (Supplementary Figure S8D). In addition, both isoforms were able to restore NMD of different PTC-containing Ig- and TCR-β reporter mRNAs in cells with reduced endogenous UPF1 isoform 1 and 2 protein levels (data not shown). These data suggest that both isoforms of UPF1 can support the mapped interaction with SMG6 in human cells.

The interaction between the UPF1 stalk region and SMG6 is crucial for SMG6 activity in the TFA and for NMD

To determine the functional significance of the interaction between the UPF1 HD and SMG6, we first examined how important the stalk region of the HD of UPF1 was for the induction of SMG6-mediated mRNA reporter level reduction. The SMG6 TFA was performed as before, along with an RNAi-mediated depletion of endogenous UPF1 and expression of RNAiR UPF1 ΔStalk. As shown in Figure 7A and Supplementary Figure S1D, the RNAiR UPF1 ΔStalk (blue bars) was not able to rescue the steady-state mRNA reporter levels under UPF1 knockdown conditions back to those seen in HeLa cells with a control knockdown. Notably, the UPF1 knockdown in this experiment was prolonged by one day, which accounts for the stronger effects observed in the UPF1 knockdown conditions compared to those documented in Figure 3C. As shown in the western blot, the RNAiR UPF1 versions were detected in cells lacking endogenous UPF1 and were each expressed to a similar level as their wild-type counterpart (Figure 7A, lanes 6 and 7). This result confirmed a functional necessity for the UPF1 stalk region in SMG6-mediated degradation of reporter mRNAs. In addition, we also performed classical NMD assays using two well-characterized NMD reporter genes, one based on an immunoglobulin mini-gene (55) and the other on the β-globin gene (56). We performed UPF1 knockdowns and rescued the loss of NMD using RNAiR UPF1 WT as well as RNAiR UPF1 ΔStalk. As anticipated, the levels of the Ter 310 and β-globin Ter 39 reporter mRNAs were reduced to less than 1% of their corresponding wild-type counterparts due to NMD (Figure 7B and C, respectively, and Supplementary Figure S1D). Depletion of UPF1 and hence abolishment of NMD caused the Ter 310 and β-globin Ter 39 reporter mRNA levels to increase by over 10-fold. When RNAiR wild-type UPF1 was expressed, the Ter 310 and β-globin Ter 39 reporter mRNA levels were rescued almost back to those observed in the control-treated cells. In contrast, no rescue at all was observed when RNAiR UPF1 ΔStalk was expressed. Western blotting confirmed that the UPF1 knockdown was efficient (Figure 7B, lane 5) and that the expression of the RNAiR UPF1 versions in cells depleted of endogenous UPF1 yielded UPF1 protein levels similar to those observed in cells without or with a control knockdown (compare lanes 6 and 7 with 1 or 4, respectively).
Similarly, the accompanying western blot of Figure 7C also confirmed that the UPF1 knockdown was effective (Figure 7C, lane 5) and that the exogenously expressed RNAiR UPF1 proteins were detected (lanes 6 and 7) in the cells expressing β-globin Ter 39. These results establish that the region of the UPF1 HD that we have identified as being involved in a phosphorylation-independent interaction with SMG6 is essential for NMD in human cells.

DISCUSSION

SMG6 is an essential component of the NMD apparatus in mammalian cells, and we have sought to further delineate the molecular mechanisms surrounding the regulation of its endonucleolytic activity in the NMD pathway. To examine what is required for SMG6 to induce degradation, uncoupled from what is needed for its recruitment to the mRNA, we established an SMG6 TFA in HeLa cells. The combination of the TFA with knockdowns of other NMD factors allowed us to identify the NMD factors required for SMG6-mediated decay in NMD. Contrary to a previous study (57) but in keeping with a recent study (17), we found that tethered SMG6 resulted in strongly reduced reporter mRNA levels, bypassing the need for a PTC (Figure 1B and Supplementary Figure S2). We also examined the SMG6 14-3-3-like domain and the EBMs, both of which have been implicated in the recruitment of SMG6 to mRNPs. We reasoned that if these parts of SMG6 are solely involved in the function of recruiting SMG6 to NMD-targeted mRNPs, then their function should be bypassed by the MS2-mediated tethering and thus no longer be critical for SMG6 activity in the TFA. This is what we observed with the SMG6 14-3-3 mutant, which was fully active in the TFA even though its association with P-UPF1 was strongly diminished (Figure 1D and Supplementary Figure S4A). On the contrary, mutations made to the EBMs in the N-terminus of SMG6 compromised the activity of SMG6 in the TFA (Figure 1D), suggesting that in addition to its role in recruitment of SMG6 to the mRNP, the EBM also seems to play a role in activating SMG6. It was proposed that SMG6 gained its target specificity by binding to the EJC via these conserved motifs, yet they are lacking from D. melanogaster SMG6 (20,58), where it has been demonstrated that the components of the EJC are not essential for NMD and that PTC definition occurs independently of exon boundaries (59). Furthermore, the EBMs most likely do not contribute to SMG6 recruitment to mRNPs in the EJC-independent mode of NMD in mammalian cells (27), but the involvement of non-canonical EJCs (60,61) in the EJC-independent NMD mode cannot be formally excluded. Since our data also show that SMG6 cannot bind RNA itself (Figure 6D), it seems highly plausible that P-UPF1 recruits SMG6 via its 14-3-3-like domain to the mRNP. When brought to the target RNA, SMG6 is known to induce endonucleolytic cleavage (24,25), and in line with this, we were able to show that the PIN domain of SMG6 is absolutely essential for the ability of tethered SMG6 to induce reduction of reporter mRNA levels; yet we also discovered that the PIN domain on its own is not sufficient in human cells (Figure 1E). This is contrary to what has been reported for SMG6 in D. melanogaster, where experiments performed in S2 cells showed that tethering of the SMG6 PIN domain could degrade reporter mRNA as well as, if not even slightly better than, full-length SMG6 (21).
Nonetheless, we deduced that in human cells an intact SMG6 protein is required for its activity in vivo, which already strongly hinted at other parts of SMG6 being important and, most likely, at other factors being essential for SMG6-mediated decay. This is in line with a recent study showing, in cells lacking SMG6, that all parts of SMG6 were required to rescue NMD of classical reporter mRNAs (TCR-β and β-globin +/− PTCs) and that the N-terminus of SMG6 can co-IP with most other NMD factors (20). Such a conclusion might also explain why we (Supplementary Figure S7) and other studies observe such poor SMG6 endonuclease activity in vitro, where other factors needed for efficient enzymatic activity are simply lacking (21,24,25). To identify which other NMD factors are needed for SMG6 function, we examined whether UPF1, UPF2, SMG1, SMG5 and SMG7 were needed for SMG6 activity in our assays. Indeed, we could specifically show that the presence of both SMG1 and UPF1 was needed for efficient SMG6 activity in the TFAs, even in the case of the SMG6 14-3-3 and EBM mutants (Figure 2 and Supplementary Figure S3A, B, D, F and H). Intriguingly, the SMG6 EBM mutant also had no activity in cells lacking UPF2 or UPF3B (Figure 2D and Supplementary Figure S3C, F, G and H). Thus, it seems that UPF2 and UPF3B become essential for SMG6 activation when SMG6 cannot associate with the EJC, consistent with our previous observation that the EJC-independent NMD mode was more sensitive to depletion of UPF2 and UPF3B than EJC-enhanced NMD (27). Additionally, we found that while the EBM mutant retained its weak interaction with UPF1, its interaction with P-UPF1 was strongly reduced (Supplementary Figure S4A), which may signify that SMG6 connects to the EJC while it is being recruited by P-UPF1, meaning that it is recruited by two means, one by anchoring onto the EJC and the other by P-UPF1, adding extra specificity to the SMG6 recruitment process. However, collectively our results suggest that there is more to the role of the EBM motifs in SMG6 than simply a means to recruit SMG6 to target mRNPs, and future work should try to determine the contribution these motifs make to SMG6 activity in NMD. The observation that the decay function of SMG6 requires UPF1 and kinase-competent SMG1 immediately suggested a requirement for UPF1 phosphorylation (Figure 2B and Supplementary Figure S5). However, and in contrast to recent work (17), the UPF1 T28A mutant rescued SMG6 tethering-mediated reporter mRNA reduction in cells depleted of endogenous UPF1 as efficiently as UPF1 WT (Figure 3C), arguing that UPF1 could activate SMG6 through a contact different from the previously characterized interaction between UPF1 P-T28 and the SMG6 14-3-3-like domain. Our results suggest that the well-documented P-UPF1-SMG6 interaction probably represents the means to get SMG6 to the target mRNP, a step bypassed in the TFA, while the subsequent activation of the endonuclease activity of SMG6 depends on a phosphorylation-independent interaction with UPF1. With regard to SMG1, it may be that its kinase activity is needed to phosphorylate sites on another NMD factor. For example, it is known that human and yeast UPF2 can be phosphorylated, but the function of P-UPF2 in NMD is not yet known (62,63). It is also very likely that SMG1 can phosphorylate UPF1 at various P-sites and by doing so dictates the function of UPF1 in many aspects of NMD, such as RNA binding, enzymatic activity and recruitment of various factors beyond SMG6 and SMG5-SMG7.
Recently, it was documented in yeast that two newly identified P-sites in UPF1 act to promote ATP hydrolysis, NMD efficiency and translation termination fidelity (50). Further work is needed to delineate the roles of each individual UPF1 P-site and their contribution to UPF1 function. Our hypothesis of a phosphorylation-independent interaction between UPF1 and SMG6 was corroborated by finding that both proteins consistently co-immunoprecipitated each other, regardless of the mutations made in either protein (Supplementary Figure S4). This has also been observed in other studies but never further investigated (18,20). Using yeast two-hybrid assays, we discovered a phosphorylation-independent interaction between the HD of UPF1 and SMG6 (Figure 4B) and narrowed down the interaction point within the UPF1 HD to the stalk structure comprising AA 271-433 (Figure 4D). We were initially concerned that the observed loss of interaction between UPF1 ΔStalk and SMG6 in our interaction studies was actually due to a detrimental effect on UPF1 RNA binding, but we demonstrated that the deletion of the stalk, and hence of sub-domain 1B, did not abolish RNA binding (Figure 6B). Furthermore, UPF1 and SMG6 could still co-IP even in the presence of RNase A (Supplementary Figure S4), and UPF1 lacking 1C, which hindered RNA binding (Figure 6C), still interacted with SMG6 (Figures 4D and 5D), further indicating that the stalk/1B structure of the UPF1 HD directly interacts with SMG6. In mapping AA 271-433 of UPF1 isoform 2 as the region binding to SMG6, we realized that this fragment covers the only alteration between human UPF1 isoform 2 and isoform 1: at amino acid position 353, the less abundant isoform 1 has an insertion of 11 amino acids. Since the alternative splicing event generating the two isoforms appears to be highly conserved and because we could detect the mRNA of both isoforms in our cells, we examined the tantalizing possibility that UPF1 isoform 1 might have a different affinity for binding to SMG6. However, we could not detect any difference between the two human UPF1 isoforms with regard to binding SMG6, being needed for SMG6 activity (Supplementary Figure S8), or involvement in NMD. However, it still begs the question as to why the cell produces two UPF1 isoforms differing by only 11 amino acids. Finally, UPF1 ΔStalk could not rescue tethered SMG6-mediated RNA degradation in cells lacking endogenous UPF1, validating that this region of UPF1 is essential for SMG6 activity (Figure 7A). In addition, we could demonstrate that the UPF1 stalk structure is required for NMD (Figure 7B and C). With regard to which parts of SMG6 interact with UPF1, our results point toward the N-terminal part of SMG6 (AA 1-576), because neither the SMG6 814-1419 construct nor the C-terminal PIN domain or the 14-3-3-like domain interacted with UPF1 in yeast two-hybrid assays (Supplementary Figure S6D and E). This is in line with a previous study showing that removal of the SMG6 N-terminal 576 AA abolished co-IPs between SMG6 and UPF1 (20). Unfortunately, our constructs containing the unstructured N-terminal region of SMG6 did not express in yeast, which prevented further mapping of this region. Notably, it has been suggested that the interacting complexes in NMD contain disordered protein components and that their flexibility plays a crucial role in the formation of long-reaching protein-protein interactions.
Within the disordered N-terminus of SMG6, eight protein-protein interaction sites have been predicted, one of which maps to the EBM1 at the very N-terminus (64). Of the seven remaining sites, one spanning AA 401-424 is a conserved site that could potentially be the site needed for an interaction with the UPF1 HD. In further assessing the yeast two-hybrid data, we consistently observed that the presence of the CH domain weakened the interaction of UPF1 with SMG6, while the interaction between the UPF1 HD and SMG6 appeared consistently stronger when the HD still had its SQ domain present (Figure 4). Despite the data being predominantly qualitative, these observations indicated that the presence of the CH domain inhibits and the presence of the SQ domain enhances the interaction of SMG6 with the UPF1 HD. Therefore, when we sought to confirm the interaction between the UPF1 stalk region and SMG6 in vitro by pull-down experiments, we also tested truncations of the SQ domain. While deletion of the stalk region reduced UPF1 binding to SMG6 only partially, additional removal of the proximal part of the SQ domain almost completely abolished the interaction between UPF1 and SMG6 (Figure 5D). Therefore, we can show, along with Chakrabarti et al., that the SQ domain makes a significant contribution to the interaction between the UPF1 stalk and SMG6 (48). The CH and SQ domains of UPF1 engage in intramolecular interactions with the HD to suppress its enzymatic activities, and only when UPF2 binds to the CH domain and another unidentified factor binds to the SQ domain is this repression lifted and UPF1 enzymatic activities induced (30,53). It was further shown that when UPF2 binds to UPF1, triggering ATPase and helicase activities, this is accompanied by a switch from an RNA clamping mode to a more relaxed UPF1 grip on the RNA (30), and this is in keeping with our in vitro observation that UPF1 F192E, which disrupts the allosteric inhibition exerted by the CH domain on RecA2 (30,65), and P-UPF1 both had reduced affinity for RNA compared to wild-type UPF1 (data not shown). Based on our observations, we speculated that SMG6 may also be involved in this intricate regulation of UPF1 enzymatic activities and/or RNA binding affinities. The observed inhibitory effect of the CH domain on the interaction between SMG6 and the UPF1 HD in the yeast two-hybrid assays could be explained by the absence of UPF2 in the yeast nucleus, which would leave UPF1 in the conformation in which the CH domain represses the HD. In NMD, UPF2 binding probably precedes the interaction between UPF1 and SMG6. Yet the interaction between UPF1 and SMG6 also involves the proximal part of the SQ domain. It is very plausible that the unidentified factor that specifically binds to the SQ domain to abolish the intramolecular interactions repressing the HD activities may be SMG6. This possibility is in line with the fact that SMG6 is involved in UPF1 dissociation from mRNA (18), which already suggested that SMG6 may have a role to play in the ATP binding, ATPase and/or RNA binding activity of UPF1. Moreover, UPF1 ATPase activity is required for UPF1 dissociation from mRNA, but no analysis performed to date has implicated the involvement of SMG6. Structural studies and further biochemical studies beyond the scope of this investigation will have to address this exciting possibility.
UPF1, SMG1 and SMG6 are the only enzymes specific to the NMD pathway, and recent studies have started to give insight into how the enzymatic activities of UPF1 (30,46,51-53) and SMG1 (14,34,66) are managed in the NMD pathway. Here, we have been able to show that SMG6 activity is also under regulation by additional proteins. Future efforts to understand the complex regulatory dynamic between these three enzymes may be important not only for understanding their role in NMD but also in other cellular processes in which all three have been implicated, such as DNA replication, telomere metabolism and genome maintenance pathways (67,68).

SUPPLEMENTARY DATA

Supplementary Data are available at NAR Online.
Blended Learning in Research Statistics Course at The English Education Department of Borneo Tarakan University

The development of information technology, which is growing very rapidly nowadays, and particularly the development of information and communication technology, enriches the development of the concept of learning based on blended learning. Its tools can be accessed anytime and anywhere by multiple users, and this simplicity has made blended learning a medium of instruction that is very appropriate for the development of education. One of the courses considered essential to utilize information and communication technology is Research Statistics, because in the process of learning, this course has two fundamental parts which cannot be separated from one another, namely theory and practice. Thus, a plan of learning activities is needed which combines face-to-face learning and online-based learning interaction. This research was conducted to develop blended learning in the Research Statistics course for the students of the English Education Department of the Teacher Training and Education Faculty of Borneo Tarakan University, following a Research and Development approach using the ADDIE model. The evaluation, conducted to find out the validity, practicality, and effectiveness of the blended learning, drew three main conclusions. First, the design and development of the learning devices were appropriate to be used as guidelines in implementing the learning process. Second, the process of learning was implemented in accordance with the plans and learning devices as well as the learning setting. Third, the students' learning achievement was classically complete. Thus, the blended learning was found to be valid, practical, and effective for the development of learning.

Keywords—Blended Learning, Online Learning, Learning Achievement

Introduction

We are in a unified world with exceptional access to a widespread pattern of online information and experiences, a world where excitement and opportunities are just a screen touch away [1]. This leads to the massive use of information and communication technologies in educational practices, which enhances drastic changes in the roles of educators and learners. It opens the way for new and innovative methods of teaching and learning which previously could only be delivered through traditional face-to-face classroom settings, and which can now be provided through independent and distance learning using various computer-assisted media, both offline and online. The twenty-first century, often associated with the Industrial Revolution 4.0, demands changes in how human activities take place in the scale, scope, complexity, and transformation of previous life experiences. Learning content is expected to meet these demands through various learning, innovation, critical thinking and problem-solving, digital literacy, and career and life skills [2]. Thus, an educator must try to innovate in learning to keep pace with actual conditions and encourage students to grow in step with their era. One of the courses considered essential to utilize information and communication technology is Research Statistics, because in the process of learning, Research Statistics has two fundamental parts which cannot be separated from one another, namely theory and practice.
The theoretical aspects, carried out in the face-to-face process, have taken up quite a lot of the lecturing hours, so the practical elements, in which students analyze data using data-processing applications, encounter some obstacles. On the other hand, synchronizing the research method with the data analysis techniques across the various dynamics of research requires more time outside the learning hours, because different research methods or designs call for different data analysis techniques. Thus, a plan of learning activities is needed which combines face-to-face learning and online-based learning interaction. From the experience of the teaching team in handling the Research Statistics course at the English Language Education Department of Borneo Tarakan University, an essential aspect of the learning outcome is that the students are able to process and interpret research data as a foundation for preparing the final project (thesis). However, the limitations of space, time, and learning facilities make this very difficult to achieve. The students complain about the difficulty of understanding concepts: statistical concepts, if not taught through real examples and with the help of software, are complicated for non-mathematics students to understand. These various problems require the Research Statistics teaching team to find a new teaching approach as alternative learning that can integrate traditional and contemporary aspects, using media and the role of information technology in a series of learning activities. One solution considered able to overcome these problems is the implementation of blended learning by utilizing the e-learning platform at Borneo Tarakan University, which so far has not been exploited properly. Blended learning systems combine face-to-face instruction with online computer-mediated instruction [3]. The term "blended learning" [4] refers to the structure of a course and the approach to teaching and learning in which, generally, 30% to 70% of the instruction is delivered online, with the remainder delivered in the classroom. Blended learning is recognized in teaching and learning circumstances where there is an effective combination of various modes of delivery, models of instruction, and styles of learning as a result of adopting a strategic and systematic approach to the use of technology merged with the best features of face-to-face interaction (Krause in [5]). The mixing of face-to-face teaching and online learning enables students to have some choice over where they study (at school, at home, or somewhere in between) and when they study (during school hours, in the evening, or on weekends); but it is still the teacher who decides the extent of choice, as well as which elements of the students' learning are completed online and which are completed in class. These tremendous features of blended learning provide an optimal and exciting environment for teaching and learning that positively affects the process of knowledge acquisition as a whole. In this respect, Marsh in [6] remarks that blended learning has many advantages over the traditional mode of learning a language. In this view, blended learning:
1. Provides a more individualized learning experience
2. Provides more personalized learning support
3. Supports and encourages independent and collaborative learning among learners
4. Increases learner engagement in learning
5. Adapts to many different learning styles
6. Provides a place to practice beyond the classroom
7. Provides a less stressful practice environment
8. Provides flexibility of studying to meet learners' needs
9. Helps learners develop the necessary learning skills for the twenty-first century

The main target of using blended learning in this research is the use of information and communication technology in order to help students understand and apply the concepts being taught. This is very relevant to learning research statistics, where face-to-face learning must be accompanied by data analysis tutorials using a software program, because data processing is in fact very difficult if done manually. For this reason, the development of the Research Statistics course through blended learning is expected to minimize the difficulties experienced by students majoring in English Education at the Faculty of Teacher Training and Education at Borneo Tarakan University.

Method

Research methodology provides an explanation of the techniques applied to collect the data, analyze them, and present them in a logical manner [7]. This research was a development study that aimed at developing blended learning for the Research Statistics course. The subjects of this research were the fifth-semester students of Local B at the English Education Department of the Teacher Training and Education Faculty of Borneo Tarakan University who were enrolled in the Research Statistics course. The research procedure used the ADDIE model [7], which consists of five stages: analysis, design, development, implementation, and evaluation. The analysis stage includes analysis of the learning outcomes, learning resources, and students' condition and characteristics. The design stage consists of designing activities and learning devices. The development stage is the process of embodying the design results in a pattern of learning activities. The implementation stage is the process of applying the results of design and development in learning activities. Last, the evaluation stage determines whether or not the process of instruction meets the quality criteria. The quality criteria used in this research were the quality criteria for a product by [8], which consist of validity, practicality, and effectiveness. To collect the data, several instruments were used in the form of a validation sheet, observation sheet, questionnaire, and achievement test. The research data were analyzed quantitatively and qualitatively. Qualitative analysis was used to describe the results of the development, while quantitative analysis concerned the presentation of data resulting from the validation, observation, questionnaire, and achievement test.

Result and Discussion

The process of development used the five stages of the ADDIE model (analysis, design, development, implementation, and evaluation), as presented below.

Analysis stage

Analysis is the first stage that must be carried out by the instructional developer. At this stage, the main activity is to analyze:
1. Learning outcomes
2. Learning resources
3. Students' condition and characteristics

First, the learning outcomes proposed in the curriculum are that the students are expected to be able to:
a. Explain the nature of statistics
b. Present data in tables and diagrams
c. Determine the scoring system, central tendency, location, and dispersion
d. Determine the research sampling technique
e. Apply research data analysis using descriptive procedures
f. Interpret hypothesis testing

Second, the learning resources are a consideration in developing learning materials and activities.
These can be resources that are merely utilized or ones that are purpose-designed. In this case, the learning resources that can be utilized are the existing facilities available at Borneo Tarakan University, and those that are designed are the learning materials arranged by the Research Statistics teaching team. Last, the students' condition and characteristics. Based on observation of and interviews with the students, the following was obtained:
1. The age of the students ranged from 20-24 years old, which according to developmental psychology is categorized as adulthood, meaning that the students can think abstractly
2. The completeness of the technology and information tools already owned by the students: out of 35 students, only one did not have a laptop, and all of them had smartphones
3. The students' prior knowledge of Research Statistics was still limited
4. The students had already experienced e-learning, but only for a few meetings due to the unstable campus network
5. The students' GPAs varied across high, medium, and low ability

Based on these conditions, blended learning can be applied by utilizing a smartphone or laptop with the help of the lecturer as a facilitator. The ability gap among students can be overcome by grouping them through cooperative learning, but independent learning is also emphasized, especially in working on the assignments.

Design stage

This stage is basically a decision-making process. Designing is based on what has been formulated in the analysis stage; this is analogous to syllabus making. It includes designing the learning/instructional materials, determining the learning strategies, designing the learning devices, and creating the e-learning class forum.
a. Designing the learning materials: Based on the learning outcomes formulated in the analysis stage, the materials were designed and/or selected by considering the hierarchy of learning materials, both procedurally and conceptually.
b. Determining the learning strategies: Learning strategies concern what tactics are used in conveying the learning materials. The learning strategy was also determined based on the analysis in the early stage and concerns the learning models, approaches, methods, and media.
1. Learning model: The students generally learned by using e-learning. In order to make learning activities more organized and directed through a syntax, several learning models were applied: direct instruction, cooperative learning, problem-based learning, and project-based learning.
2. Learning approach: The learning approaches used were a contextual approach and problem solving. The contextual approach was used to raise contextual problems from the students' surroundings that could be solved by using statistics, while the problem-solving approach was used to direct students to solve problems independently or in groups.
3. Learning method: The learning methods used were expository, discussion, question and answer, and guided discovery, adjusted to the model and approach used in the learning design.
4. Learning media: The learning medium used was e-learning. The e-learning page included the course name, course description, materials (PDF files, PPT slides, URL links: videos and websites), and assignments.
c. Designing learning devices: The initial draft of the learning devices to be developed was based on the results of a discussion with the teaching team.
The learning devices include the lesson plan, modules, presentation slides, and learning instruments.
d. Creating the e-learning forum: The design of the e-learning forum was first communicated to the head of the ICT Center of Borneo Tarakan University. The main features available on the e-learning page are course management in the form of the course description, addition of learning resources, addition of learning activities, and division of groups. The additional features are private files, new questions, question banks, import activities, YouTube links, and messages. Of those features, the e-learning design made for this course referred to the main points of learning and was made as practical as possible. The features used were resources, activity, group, YouTube/web links, and question banks.

Development stage

This is the stage of production where everything made in the design stage is manifested, and it was carried out as described below.
a. Development of learning materials: The learning materials determined at the design stage were developed into an instructional module in line with the learning outcomes and learning objectives. The module consisted of several materials that were discussed during lecturing. Although it was fully arranged for one semester, the materials were delivered to students one by one through the e-learning page in PDF format. This aimed at making the students more focused on one learning goal at a time.
b. Development of learning strategies: The design of the learning strategies was developed into a series of learning activities. Based on the analysis, the decisions about how to teach the students varied for each meeting. For the first two meetings, learning was conducted fully online by giving students materials in PDF format, URL links, and videos. The students were then directed to provide comments or questions through the discussion forum that had been provided. This was done because the content of the materials in the first two meetings was not too complicated to be followed and studied independently by the students. Meetings 3-7 were conducted in a more varied way, face to face while accessing the materials online in diverse proportions, either 25% face-to-face learning and 75% individual or group learning through e-learning, or vice versa. For meetings 9-15 (meeting 8 was the mid-test and meeting 16 the final test), in which the materials concerned SPSS-assisted data analysis, the proportion of face-to-face learning was enlarged, using offline computers.
c. Development of learning devices: To support the learning process, the Research Statistics elements from planning to evaluation were outlined in the learning devices. The data analysis tutorial that was planned previously was included in the module.
1. Lesson plan: The lesson plan was made by the team based on the preliminary analysis. The lesson plan differed for each meeting, adjusting to the difficulty level of the learning materials. If the material was considered easy, the students were directed to independent learning, but if it was considered difficult, the face-to-face process took a larger share, providing sufficient guidance.
2. Module: The presentation of the Research Statistics materials was designed in the form of a textbook that was used for 16 meetings.
3. Student worksheet: The student worksheet was made and provided on the e-learning page.
The worksheet was called an "assignment" and contained problems to be solved by the students, individually or in groups, as part of the process of exploring concepts. An assignment was given in each meeting so that the students understood the concept or procedure through the process of experiencing it.
4. PowerPoint slides: The presentation slides were important in helping the students grasp the general key points of the learning materials. The presentation slides were in the form of PPT files. Because the files were too large to upload to the e-learning page, the slides were first converted to PDF format.
5. Instruments: To obtain data about the process of learning through blended learning, instruments needed to be prepared. These instruments consisted of the achievement test, questionnaire, and observation sheet. The achievement test was given at mid-term and final-term to see the students' learning results. The questionnaire was given to see the students' responses toward the learning activity, the use of the module and worksheet, and the lecturer's competence. The observation sheet was used to check the students' learning activity during the process of learning through blended learning.
d. Creating an e-learning discussion forum: After communication with the ICT Center of Borneo Tarakan University, the ICT staff made an e-learning account where the Research Statistics team could start creating a learning management system. As stated earlier, the features utilized on the UBT e-learning page were resources, activity, group, YouTube/web links, and question banks.

Implementation stage

At this stage the learning system is ready to use, and the aim is to implement the model of blended learning.
a. First meeting: The first meeting of the Research Statistics course was held on 8th September 2018. The focus of the learning material was to explain the syllabus (RPS) and the learning contract. The process of learning was carried out fully online without any face-to-face activities. This was done because the teaching team was outside of Tarakan and the content of the material at this initial meeting was still introductory and theoretical. During the lecturing hours, the students were directed to log in to the e-learning class that had been prepared and to mark attendance by giving a comment "present" on the chat form. After that, the lecturer asked them to open the video link and to give comments or responses on the academic rules, learning materials, and learning contract.
b. Second meeting: The second meeting was held on 15th September 2018. The function of statistics in research was introduced online. At the second meeting, the students were expected to log in during the lecturing hours to access the learning material (module), presentation slides, and learning video. A discussion forum was also prepared to respond to the students' questions. After being given time to study the materials, the students were then given a task that had been prepared on the e-learning page.
c. Third meeting: The third meeting was held on 22nd September 2018, focusing on the types of data, data presentation, measurement scales, and frequency distribution. The process of learning was carried out as a combination of online and face-to-face learning. The learning was started by conveying the learning objectives and directing students to open the e-learning page. The core activities took the form of student discussions related to the learning topic and sharing among groups, and the closing activity was giving a quiz.
During the learning activities, the lecturer carried out his function as a learning facilitator and let the students create their own ways of learning. The students were also asked to use their mobile phones to explore the learning materials on the e-learning website.
d. Fourth meeting: The fourth meeting was held on 29th September 2018, with the learning material on central tendency. The process of learning was carried out similarly to the third meeting, as a combination of online learning and face-to-face learning. The learning began by conveying the learning objectives and directing students to open the e-learning page. The core activity took the form of problem solving (via the e-learning page) where the students worked in groups and made a presentation. The closing activity was providing a conclusion relating to the material discussed. The role of the lecturer was as a facilitator, ready to guide the groups that needed clarification or explanation of the materials being learned.
e. Fifth meeting: The fifth meeting was held on 6th October 2018. The learning material was about dispersion. The process of learning was the same as in the third and fourth meetings, a combination of online learning and face-to-face learning; however, for the fifth meeting, the online learning process was given a smaller portion than in the previous meetings, replaced by the use of offline computers. The lecturer began the lesson by conveying the learning objectives and directing the students to open the e-learning page to access the material provided. In the core activity, the students, in groups, were directed to the UBT library and followed the procedures (project-based) that had been uploaded on the e-learning page to collect research data. The research data were first analyzed manually to find the variances and standard deviations. After that, the analyses were run using SPSS to check the results against the manual calculations.
f. Sixth meeting: The sixth meeting was held on 13th October 2018. The learning material focused on sampling techniques. The learning process was carried out the same as in the third and fourth meetings, namely the integration of online learning and face-to-face learning. The learning objectives were introduced by the lecturer, who let the students open the e-learning page to access the material that had been provided. The core activity took the form of group discussion on given topics with exchanged group members, while the quiz was done individually. The lecturer's role during the learning activity was that of a learning facilitator.
g. Seventh meeting: The seventh meeting was held on 20th October 2018, focusing on methods of data analysis. The learning process was exactly the same as in the sixth meeting, namely the integration of online learning and face-to-face learning. The core activity took the form of group discussion with exchanged group members and a quiz done individually. At the end of the lesson, the students, under the guidance of the lecturer, summarized the learning materials. The students were also required to prepare themselves for the mid-test the following week.
h. Eighth meeting: The eighth meeting was held on 24th October 2018 and was devoted solely to evaluation.
The evaluation was conducted to measure the students' learning outcomes during the first half of the semester and their responses to the blended learning activity. The students were given an achievement test as the mid-test, together with a questionnaire. Evaluation stage: a. Validity: The experts' validation of the learning devices is summarized in Table 1. Based on Table 1, the mean validation score from the experts on the design of the learning devices (lesson plan, instructional module, student worksheet, and achievement test) fell into the "very valid" category. This means that the learning devices met the validity criteria and were ready to use; in other words, they were well made. However, according to the experts, the designs still needed minor revisions, which were made as follows: 1. Revisions to the student worksheets covered (a) correcting questions that were considered ambiguous, and (b) adding specific instructions on how to work with the worksheet in each meeting. 2. Revisions to the lesson plans: the learning materials and learning steps in each lesson plan were adjusted to the flow of the materials in the instructional module. 3. Revision to the instructional module: a variety of problems was added to stimulate the students to carry out activities according to the learning patterns. 4. Revision to the achievement test: words, phrases, and sentences that were considered ambiguous and difficult for students to understand were rewritten. b. Practicality: Practicality is seen in two aspects: the feasibility of the learning devices and the implementation of the learning devices. First, the feasibility of the learning devices: based on the experts' validation, the devices could be used, and overall the degree of feasibility fell into the "quite valid" category, indicating that the learning design was adequate for use. Second, the implementation of the learning devices: the learning process could be said to be "implemented" if, in every meeting, the lecturer conducted the learning steps in accordance with the planned steps (lesson plan) at a level categorized as at least "quite good". The data collected with the observation sheet over the six face-to-face meetings showed that the lecturer's ability to implement the learning process was "quite good". Based on the feasibility of the learning devices as validated by the experts and the lecturer's observed ability to implement the learning process, blended learning met the practicality criteria. c. Effectiveness: Effectiveness is seen in three aspects: the students' activities, the students' questionnaire responses, and the students' achievement test. Firstly, based on observation, the students' activities in the learning process fell into the "quite active" category; thus, the students' activities met the specified criteria. Secondly, regarding the students' responses to the learning process, the instructional material (module), the worksheet, and the ability of the lecturer, 77.14% of the students (positive category) responded positively to each item of each aspect of the questionnaire.
It can therefore be said that learning through blended learning was very helpful in fostering the students' participation and comprehension. Finally, on the achievement test the students gained a mean score of 77.03 with a standard deviation of 4.66, a minimum score of 63.8, and a maximum score of 80.7. The classical learning mastery achieved by the students was 85.71%, above the completeness criterion of 65 out of an ideal score of 100; thus, the students' learning outcome was achieved. It can be concluded that all aspects of learning effectiveness (students' activities, students' questionnaire responses, and students' achievement test) met the criteria of "effectiveness". Research on the development of a learning model and its application has produced findings indicating that meeting students' learning needs effectively increases their learning independence. The development of blended learning therefore has implications both theoretically, in adding to the body of knowledge, and practically, for operational policies that can be applied in the implementation of learning. The aim of this research was to develop a learning model based on blended learning by utilizing the Borneo Tarakan University e-learning platform in the Research Statistics course. The research resulted in what is called a blended learning model, which includes a syllabus (RPS), lesson plans, an instructional module, a student worksheet, an achievement test, and learning products in the form of pdf files, PowerPoint slides, and videos uploaded to the e-learning platform of Borneo Tarakan University. Blended learning is one of the revolutions in internet technology-based education that can be used to support learning. In its implementation, blended learning through internet technology does not require learning to take place online only; it must still be combined with face-to-face instruction. Findings from the students' responses show that they valued the blended learning approach. They were ready to try something new and considered it helpful, interesting, and motivating for performing tasks, and it stimulated practical work. These results are consistent with prior research showing that students learn much better through an online medium, which is particularly useful for making coursework accessible and flexibly learnable and teachable, making it more student-centered rather than teacher-based [10], [11], [12]. Designing a blended course also improves students' learning achievement and their participation in the implementation of such a blend. O'Toole and Absalom, cited in [13], state that uploading material online positively affects the achievement level of students: students who read the online course materials in addition to attending the in-class lecture performed better in quizzes and exams than those who depended only on the regular in-class lecture. Blended materials also allow students to learn at their own speed. This matches Tomlinson and Whittaker, cited in [14], who state that students can study when they want and at any speed, and it is consistent with [4], which notes that students have more control over the timing, pace, place, and path of their learning. This leads to student-to-student collaboration and sharing, and to student-instructor communication. The role of instructors in blended learning is no longer at the center of direct content delivery.
Instead, instruction shifts [15] to a student-centered mode in which students become active and interactive learners. The instructor becomes a facilitator who provides the content framework, information resources, and one-on-one support or group-learning tasks, and who helps solve problems. To do so, [16] suggests that the instructor must be effective and an active participant in the classroom, observing, analyzing, and interpreting information about student learning and then using this information for planning and decision making. Conclusion: From the development of blended learning in the Research Statistics course for students of the English Education Department of the Teacher Training and Education Faculty of Borneo Tarakan University, three main conclusions can be drawn. First, the design and development of the learning devices (lesson plan, instructional module, student worksheet, and achievement test) were appropriate for use as guidelines in implementing the learning process. Second, the learning process was implemented in accordance with the plans, the learning devices, and the learning setting; the learning illustrates varied learning materials and learning strategies provided in both online and offline environments. Third, the students' learning achievement was classically complete. The students were found to receive the blended learning model positively and to participate fairly actively. They considered such a blend useful and helpful for improving their understanding. Overall, blended learning was found to be valid, practical, and effective for the development of learning. Therefore, the blended learning model can be used to supplement face-to-face learning activities with access to online learning, to complete the delivery of instructional materials across a wide range of theoretical and practical competencies, and to combine face-to-face and online learning systematically to support the construction of students' ideas.
High-intensity aerobic exercise training improves exercise capacity, dyspnea, and fatigue in patients with severe asthma using a triple inhaler
Objectives: Asthma is a chronic respiratory disease that affects millions of people worldwide and causes severe symptoms such as wheezing, coughing, and breathing difficulty. Despite modern treatments, 3%-10% of patients develop severe asthma, which requires high-dose medications, and they may still experience frequent and severe symptoms, exacerbations, and psychological impacts. This study aimed to investigate the effects of high-intensity aerobic exercise training (HIAET) in patients with severe asthma. Materials and Methods: Patients with severe asthma were recruited, and cardiopulmonary exercise testing and dyspnea and leg fatigue scoring were performed before HIAET. Participants underwent a 12-week hospital-based HIAET, exercising twice weekly at a target of 80% of their peak oxygen uptake (VO2). Results: Eighteen patients with severe asthma underwent HIAET, which resulted in significant improvements in peak VO2 (from 1214.0 ± 297.9 to 1349.4 ± 311.2 mL/min, P = 0.004) and work rate (from 80.6 ± 21.2 to 96.2 ± 24.8 watts, P < 0.001) and decreases in dyspnea (from 5.1 ± 1.8 to 4.1 ± 1.2, P = 0.017) and fatigue scores (from 5.2 ± 2.3 to 4.0 ± 1.2, P = 0.020) at peak exercise. No significant changes were observed in spirometry results, respiratory muscle strength, or circulatory parameters. Conclusion: HIAET can lead to improved exercise capacity and reduced dyspnea and fatigue scores at peak exercise without changes in spirometry, respiratory muscle strength, or circulatory parameters.
Introduction: Asthma is a chronic respiratory disease affecting the airway. It affects many people worldwide and has a significant impact [1]. According to the World Health Organization, approximately 339 million individuals worldwide are affected by asthma, with the majority of deaths occurring in older adults [2]. The Global Initiative for Asthma (GINA) reported that the prevalence of asthma varies between 1% and 18% across countries and that its incidence is increasing globally [2]. Overall, asthma is a major public health issue requiring attention and resources from health-care providers [1,2]. Asthma can cause severe symptoms, such as wheezing, coughing, and difficulty in breathing, which can lead to hospitalization and even death [3]. Asthma symptoms and airflow limitations can fluctuate in pattern and intensity over time and are triggered by factors such as exercise, exposure to allergens or irritants, weather changes, and respiratory infections [3]. These triggers can lead to exacerbations. Severe asthma also carries a significant psychological impact, with patients often experiencing anxiety, depression, and other mental health issues related to frequent medication [5]. These patients may require frequent hospitalizations, emergency department visits, and even intensive care unit stays, and they may consume more health-care resources than other patients with asthma [4]. Therefore, it is necessary to provide additional interventions for these patients.
Patients with severe asthma often experience deconditioning of the peripheral skeletal muscles and cardiovascular impairments, which collectively contribute to reduced exercise capacity and physical activity [6]. Deconditioning of the peripheral skeletal muscles results from multiple factors, including prolonged periods of inactivity and the systemic effects of chronic inflammation [6]. The deconditioning process involves a decline in muscle strength, endurance, and flexibility, leading to difficulties in performing physical activities [6]. Peripheral cardiovascular impairments have also been reported in patients with asthma [7]. These impairments further exacerbate the physical limitations experienced by patients with severe asthma. Severe asthma is also associated with impaired cardiovascular function [8]. In patients with severe asthma attacks, bronchospasm increases the workload on the heart, leading to an increased heart rate (HR) and potential arrhythmias [8]. Reduced oxygen supply during exacerbations can temporarily compromise cardiac performance, and the chronic inflammation associated with asthma may contribute to endothelial dysfunction and an increased risk of cardiovascular dysfunction [8]. Since patients with severe asthma may experience deconditioning of the peripheral skeletal muscles and cardiovascular impairments, exercise training may be beneficial for these individuals. Pulmonary rehabilitation (PR) is a comprehensive program aimed at improving exercise capacity and health-related quality of life (HRQoL) [9]. The program typically involves a comprehensive approach, combining exercise training, breathing techniques, and education on asthma triggers, medications, inhaler technique, action plans, and lifestyle management [9]. PR can be an effective adjunct to standard asthma treatments to improve HRQoL, dyspnea, and exercise capacity [9]. Although there have been some studies on the effects of PR on asthma, the majority of the study populations had moderate asthma, with only a few patients having severe asthma. Exercise training is an important part of PR for asthma. Because severe asthma is a significant and challenging condition with a profound impact on patients and society, it is crucial to investigate the effects of exercise training in patients with severe asthma who are already receiving optimal triple-inhaler therapy. Therefore, we conducted this study to explore the effects of high-intensity aerobic exercise training (HIAET) in patients with severe asthma.
Study design and patient recruitment: Patients with asthma were recruited from the outpatient department. According to the GINA guidelines, severe asthma is diagnosed when asthma symptoms persist despite adherence to maximal high-dose ICS-LABA treatment and effective management of all contributing factors [2]. The inclusion criteria for this study were severe asthma with persistent dyspnea despite optimal triple inhalation medication (ICS + LABA + LAMA), no acute exacerbation of severe dyspnea, and the ability to fully perform the cardiopulmonary exercise test (CPET). The duration from the start of triple inhalation medication to exercise training ranged from 28 to 1517 days (average: 480 days). The exclusion criteria were orthopedic or neurological impairments that prevented patients from performing the CPET, a history of other lung diseases (e.g., chronic obstructive pulmonary disease, pneumoconiosis, and tuberculosis), or documented heart disease (e.g., congestive heart failure and coronary heart disease). This study was approved by the Ethics Committee of Taipei Tzu Chi Hospital (IRB no: 12-X-087). Informed consent was obtained from all the participants. All patients underwent spirometry, CPET, respiratory muscle strength testing, and symptom evaluation during maximal exercise. The CPET was conducted both 1 week before and 1 week after the exercise training. This allowed the assessment of changes in exercise capacity before and after the intervention, providing valuable insights into the impact of exercise training on the study participants. Pulmonary function test: Pulmonary function was assessed using a spirometer (Medical Graphics Corporation, St. Paul, MN, USA), in accordance with the guidelines recommended by the American Thoracic Society [10]. Spirometric reference values for adults in Taiwan were derived from a previous study conducted by Wang et al. [11]. Cardiopulmonary exercise test: A bicycle ergometer (Lode Corival, the Netherlands) was used to perform the CPET, employing an incremental protocol. Breath analysis was conducted using Breeze Suite 6.1 (Medical Graphics Corporation) to measure variables such as oxygen uptake (VO2), carbon dioxide output (VCO2), tidal volume (VT), and respiratory frequency (Rf). Blood pressure (BP), HR, and arterial oxygen saturation (SpO2) were monitored during the CPET [12]. Patients underwent individualized incremental CPET with a personalized ramp protocol [13]. The load was set so that the test would finish between 10 and 12 min of exertion. Patients underwent a 2-min warm-up phase (unloaded cycling), after which the work rate (WR) was continuously increased in increments of 10, 15, or 20 watts per min, depending on the patient's estimated subjective functional capacity. During the exercise test, patients were asked to maintain a cycling frequency of approximately 60 revolutions per min [13]. VO2 at the anaerobic threshold (AT) was determined using the V-slope method, which plots VCO2 against VO2 [12]. Work efficiency (WE) was assessed from data obtained during the approach to peak exercise, determining the slope of VO2 versus WR by linear regression analysis [14]. Oxygen pulse (O2P) was calculated by dividing VO2 by HR [15]; assuming that oxygen extraction remains constant during maximal exercise, O2P is considered a stroke-volume parameter [15]. The ventilatory equivalent for carbon dioxide (VEqCO2) was evaluated at the nadir of its value during the CPET.
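To make the derived CPET variables concrete, the following is a minimal Python sketch of how oxygen pulse (O2P = VO2/HR) and work efficiency (the slope of VO2 versus work rate near peak exercise) might be computed; the arrays and sampling rate are hypothetical stand-ins, not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical per-minute samples from the incremental (ramp) phase of a CPET.
vo2 = np.array([600, 720, 850, 990, 1120, 1250], dtype=float)  # VO2, mL/min
hr = np.array([92, 101, 110, 119, 128, 136], dtype=float)      # heart rate, beats/min
wr = np.array([20, 35, 50, 65, 80, 95], dtype=float)           # work rate, watts

# Oxygen pulse: VO2 divided by HR at each sample; at peak exercise it is
# read as a stroke-volume surrogate (assuming constant oxygen extraction).
o2_pulse = vo2 / hr
print(f"peak O2 pulse: {o2_pulse[-1]:.1f} mL/beat")

# Work efficiency: slope of VO2 versus work rate by linear regression,
# fitted over the samples approaching peak exercise.
slope, intercept, r, p, se = stats.linregress(wr, vo2)
print(f"work efficiency (dVO2/dWR): {slope:.1f} mL/min/watt (r = {r:.3f})")
```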
Respiratory muscle strength: Maximum inspiratory pressure (MIP) and maximum expiratory pressure (MEP) were measured using a respiratory pressure meter (Micro Medical Corp, England). MIP was determined by measuring the pressure when patients exhaled to the residual volume and then performed a rapid maximal inspiration. MEP was measured when patients inhaled to total lung capacity and then exhaled with maximal effort [14]. Dyspnea and leg fatigue score during maximal exercise: The dyspnea and leg fatigue scores were assessed at rest and at peak exercise using the Borg scale, which employs a 10-point scoring scale; a higher score indicates more severe symptoms [16]. Pulmonary rehabilitation program: In the 12-week hospital-based PR program, all participants underwent two sessions per week. These sessions also aimed to educate patients on medications and self-management skills. HIAET with lower-limb cycle ergometer exercise training was included, with a targeted intensity of 80% of peak VO2. Each training session consisted of a 2-min warm-up period, followed by 10 min of exercise at 50% intensity, another 10 min at 60% intensity, 20 min at 80% intensity, and finally a 2-min cool-down period. Each exercise training session was supervised by an experienced and qualified respiratory therapist who monitored vital signs such as SpO2, Rf, HR, and BP. The flow chart shows patient recruitment and the exercise training program [Figure 1]. Statistical analysis: All parameters are reported as mean ± standard deviation. Statistical analyses were conducted using the Statistical Product and Service Solutions software, version 24.0 (SPSS Inc., Chicago, IL, USA). A paired t-test was used to compare the parameters of the patients before and after PR. The threshold for statistical significance was set at P < 0.05. Baseline clinical and demographic characteristics: Eighteen patients with severe asthma completed the CPET and HIAET. The clinical characteristics of the patients are summarized in Table 1. The mean age was 57.8 ± 17.4 years, mean body weight was 63.8 ± 12.2 kg, and mean body height was 159.5 ± 7.9 cm. The mean forced expiratory volume in the first second (FEV1)/forced vital capacity (FVC) ratio was 73.4% ± 11.6%, FVC was 2.54 ± 0.74 L (90.2% ± 23.7% of predicted), and FEV1 was 1.92 ± 0.80 L (82.8% ± 28.9% of predicted). All patients with severe asthma were prescribed triple-inhaler medications that included ICS, LABAs, and LAMAs. Effects of high-intensity aerobic exercise training on exercise capacity and symptoms during exercise: The effects of HIAET on exercise capacity were assessed using VO2 and WR at peak exercise. Figure 2 shows the changes in peak VO2, WR, exertional dyspnea, and leg fatigue at peak exercise in individual patients. HIAET improved both VO2 and WR at peak exercise (P < 0.05). After HIAET, there were significant decreases in dyspnea (from 5.1 ± 1.8 to 4.1 ± 1.2, P < 0.05) and fatigue scores (from 5.2 ± 2.3 to 4.0 ± 1.2, P < 0.05) at peak exercise. Effects of high-intensity aerobic exercise training on respiratory parameters in patients with severe asthma: HIAET did not result in significant changes in spirometric parameters (FEV1/FVC, FVC, and FEV1), respiratory muscle strength (MIP and MEP), or ventilation parameters (Rf, VT, VEqCO2, and SpO2) at rest or during exercise in patients with severe asthma [P > 0.05; Table 2].
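As an illustration of the paired pre/post comparison described in the statistical analysis above, here is a minimal sketch using SciPy's paired t-test; the score vectors are hypothetical stand-ins for per-patient Borg dyspnea ratings, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient Borg dyspnea scores at peak exercise,
# before and after the 12-week HIAET program (paired by patient).
pre = np.array([6, 5, 7, 4, 5, 6, 5, 4, 6, 5], dtype=float)
post = np.array([5, 4, 5, 4, 4, 5, 4, 3, 5, 4], dtype=float)

# Paired t-test on the within-patient differences; significance is read
# at P < 0.05, mirroring the threshold used in the study.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean change: {np.mean(post - pre):+.2f}, t = {t_stat:.2f}, P = {p_value:.4f}")
```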
Effects of high-intensity aerobic exercise training on circulatory parameters: No significant changes in HR or systolic and diastolic BP were observed after HIAET in patients with severe asthma (P > 0.05). In addition, HIAET did not result in significant changes in cardiac response parameters such as O2P, WE, and VO2 at AT in these patients [P > 0.05; Table 3]. Discussion: This study enrolled patients with severe asthma who experienced exertional dyspnea despite receiving triple-inhaler medication comprising ICS, LABA, and LAMA. Our study demonstrated that high-intensity training sessions of 40 min per session, twice a week for 12 weeks, aimed at achieving 80% of peak VO2, led to an increase in exercise capacity and a reduction in dyspnea and fatigue scores at peak exercise. However, HIAET did not produce significant changes in respiratory or circulatory parameters in these patients. These findings imply that HIAET can be an effective intervention for enhancing exercise tolerance in patients with severe asthma who are already on optimal inhaler medications. Some previous studies have reported on exercise training in patients with asthma. Cochrane and Clark conducted a study involving patients with mild asthma, in which exercise training was implemented at 75% of maximum HR for 30 min per session, three times a week, for a total of 12 weeks [17]. The results indicated improvements in lung function, peak VO2, O2P, AT, and VEqCO2 [17]. In a study conducted by Türk et al., patients with moderate asthma and obesity underwent exercise training aiming to achieve 90% of peak VO2 for 40-60 min per session, three times a week, over a period of 12 weeks [18]. The findings indicated improved peak VO2 and decreased Asthma Control Questionnaire (ACQ) scores, body mass index, and fat mass [18]. However, it should be noted that the intervention groups in these two studies had less severe asthma than our study population. In a study by Ricketts et al., exercise training was conducted once a week for 8 weeks using leg extensions, bicep curls, sit-to-stand movements, step-ups, pole raises, and knee lifts in patients with difficult-to-treat asthma and obesity [19]. The results indicated an improved Asthma Quality of Life Questionnaire (AQLQ) score, a decreased ACQ score, a decreased Modified Medical Research Council Dyspnea Scale score, and an increased 6-min walk distance [19]. However, that study did not evaluate VO2 or fatigue and dyspnea scores at peak exercise. Majd et al. performed exercise training at 60%-80% of peak VO2 for 20 min per session, twice a week for 12 weeks, in patients with severe asthma and showed improvement in HRQoL parameters such as the Chronic Respiratory Disease Questionnaire, the Hospital Anxiety and Depression Scale, and the AQLQ [20]. However, there was no improvement in peak VO2 in that study [20]. Although the severity of asthma in the patients in Majd et al.'s study was comparable to that in our current study, their exercise training program had a lower intensity and session duration than ours. A summary of these studies and the current study is provided in Table 4.
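For quick reference, the training parameters reported in the comparison above can be tabulated; this sketch simply restates values from the text (fields the text does not report are left as None) and is not a reproduction of the paper's Table 4.

```python
import pandas as pd

# Training parameters of prior exercise-training studies and the present one,
# restated from the discussion above; None marks values not given in the text.
studies = pd.DataFrame([
    ("Cochrane & Clark [17]", "mild", "75% max HR", "30", 3, 12),
    ("Turk et al. [18]", "moderate + obesity", "90% peak VO2", "40-60", 3, 12),
    ("Ricketts et al. [19]", "difficult-to-treat + obesity", None, None, 1, 8),
    ("Majd et al. [20]", "severe", "60-80% peak VO2", "20", 2, 12),
    ("Present study", "severe", "80% peak VO2", "40", 2, 12),
], columns=["study", "asthma severity", "intensity", "min/session",
            "sessions/week", "weeks"])

print(studies.to_string(index=False))
```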
Several factors contribute to poor exercise capacity and exertional dyspnea in patients with asthma. Airway inflammation and obstruction aggravate dyspnea during exercise [12]. Exercise-induced bronchospasm is also a common trigger for shortness of breath during exercise [12]. Patients with severe asthma have reduced dynamic hyperinflation and are likely to have poor exercise capacity [21]. Anxiety and depression can worsen asthma symptoms and cause shortness of breath during activities [22]. Patients with asthma may avoid exercise because of the fear of triggering symptoms, which can lead to a cycle of reduced fitness and worsening symptoms during physical activity [23]. Exercise training has the potential to enhance exercise capacity and decrease exertional dyspnea through various mechanisms, including improvements in respiratory function, airway inflammation, bronchial hyperresponsiveness, and cardiovascular fitness. A previous study showed that exercise training can improve lung function and VEqCO2 [17]. Aerobic training decreases bronchial hyperresponsiveness and systemic inflammation in patients with moderate or severe asthma [24]. França-Pinto et al. showed that exercise training reduced bronchial hyperresponsiveness and serum pro-inflammatory cytokines, such as interleukin 6 and monocyte chemoattractant protein 1, in patients with asthma [24]. Exercise-induced bronchoconstriction also contributes to exercise intolerance, and exercise training has been shown to mitigate this effect, leading to improved exercise capacity [25]. Combined respiratory muscle training and aerobic exercise training can also improve respiratory muscle strength and endurance, which can help patients with severe asthma breathe more efficiently during exercise and reduce dyspnea [26]. According to previous evidence, there is substantial support for the diverse physiological adaptations resulting from aerobic exercise training [27]. These adaptations encompass increased oxygen extraction by trained muscles, ultimately leading to lower blood lactate levels, reduced carbon dioxide production, and decreased ventilatory requirements during exercise [27]. Consequently, these adaptations contribute to a reduction in exertional dyspnea. Exercise training can also positively impact psychological factors such as anxiety, which can contribute to a sense of breathlessness and reduced physical activity levels [28]. Regular exercise can help reduce anxiety, leading to improvements in overall well-being, quality of life, and exertional dyspnea [28]. Overall, the mechanisms by which exercise training improves exertional dyspnea and exercise capacity are multifactorial and involve airway inflammation, the respiratory muscles, and psychological factors. Clinical implications: HIAET in severe asthma is significant, as it can provide a valuable intervention for patients who experience exertional dyspnea despite receiving optimal inhaler medication. Our study demonstrated that HIAET can lead to an improvement in exercise capacity and a reduction in dyspnea and fatigue scores at peak exercise. Although HIAET did not produce significant changes in the respiratory or circulatory parameters, it still offers a promising approach for improving the overall quality of life of patients with severe asthma. Therefore, health-care providers should consider HIAET as a complementary treatment option to optimize exercise tolerance in this patient population.
Study limitations: This study has some limitations that should be considered. First, this was a single-center study, and the number of cases was relatively small, which could potentially introduce bias. However, given the low prevalence of severe asthma among patients with asthma, recruiting a large number of patients with severe asthma is challenging; the study findings are nevertheless relevant. Multicenter studies with larger sample sizes are necessary to confirm the findings of the present study. Second, as this was a retrospective study and not a randomized controlled trial, prospective randomized controlled studies are essential to provide additional robust evidence. Randomized controlled trials are designed to minimize biases that may arise from researchers or participants, thus offering more reliable conclusions regarding the effectiveness of the intervention. Third, dynamic hyperinflation was not assessed during the exercise test; as a result, we lack an understanding of the effects of PR on dynamic hyperinflation in patients with severe asthma. Conclusion: HIAET can be an effective intervention for enhancing exercise tolerance in patients with severe asthma who experience exertional dyspnea despite receiving optimal inhaler medication. We demonstrated that high-intensity training can improve exercise capacity and reduce dyspnea and fatigue scores at peak exercise.
Table 2: Effect of pulmonary rehabilitation on respiratory response to exercise (comparison between pre-HIAET and post-HIAET). Abbreviations: HIAET: high-intensity aerobic exercise training; FEV1: forced expiratory volume in 1 s; FVC: forced vital capacity; MIP: maximal inspiratory pressure; MEP: maximal expiratory pressure; VT: tidal volume; Rf: respiratory frequency; SpO2: oxygen saturation of arterial blood; VEqCO2: ventilatory equivalent for carbon dioxide.
Table 4: Studies on pulmonary rehabilitation in patients with asthma (columns: reference, severity of asthma, case numbers, frequency, intensity (%), time (min), duration (weeks), outcomes). Abbreviations: PR: pulmonary rehabilitation; UC: usual care; SMS: self-management support program; VO2: oxygen uptake; O2P: oxygen pulse; AT: anaerobic threshold; HR: heart rate; FEV1: forced expiratory volume in 1 s; VEqO2: ventilatory equivalent for oxygen; ACQ: Asthma Control Questionnaire; BMI: body mass index; CRQ: Chronic Respiratory Questionnaire; HADS: Hospital Anxiety and Depression Scale; AQLQ: Asthma Quality of Life Questionnaire; mMRC: modified Medical Research Council; 6MWD: 6-min walk distance.
The role of carbon nanotubes in antibiotic drug delivery
Carbon nanotubes (CNTs) play a tremendous role as nanocarriers in the area of drug delivery, owing to their high surface area and their ability to be easily conjugated with many organic and drug groups, such as anticancer agents and antibiotics. This review reports the role of functionalized CNTs in antibiotic drug delivery from 2017 to 2019. Introduction: An important goal in enhancing nanocarrier drug delivery systems (DDS) is to improve the therapeutic impact or decrease the toxicity of medically active compounds [1]. Carbon nanotubes (CNTs), discovered by Iijima in 1991, are structures with nanometric diameters [2,3]. There are two types of carbon nanotubes: single-walled (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). Carbon nanotubes have significant properties, including remarkable electrical conductivity, exceptional tensile strength, thermal conductivity, and the ability to be modified chemically [4,5]. CNT features such as their unique surface area, stiffness, strength, and resilience have attracted much interest in the field of drug delivery systems. Nanomaterials like CNTs have also been employed in other applications, such as DNA detection and the improvement of immunoassays for the detection of bacteria [6]. Because of their high surface area, CNTs can adsorb or be conjugated with a wide variety of therapeutic materials, such as bioactive proteins, peptides, and drugs. These systems have great potential in the nanomedicine field because functionalized CNTs show low toxicity and do not disturb the immune system [7]. Drug delivery refers to the methods, formulations, technologies, and systems for carrying a pharmaceutical compound in the body, sometimes based on nanocompounds such as CNTs [8]. DDS such as lipid- or polymer-based nanocompounds can be designed to improve the pharmacologic and therapeutic features of parenterally administered drugs [9]. Two functionalization methods are used for the modification of CNTs (SWCNTs 1, MWCNTs 2) (Scheme 1). CNTs can be oxidized employing strong acids; as a result, their length is reduced while carboxylic groups form, which improves their dispersion in aqueous solutions [10]. Solubility under physiological conditions is a key precondition for making CNTs biocompatible. Furthermore, functionalized CNTs (f-CNTs) 3 or 4 can be bonded to different active molecules, such as proteins, peptides, nucleic acids, and other medical agents. The CNT sidewalls can be functionalized efficiently via the 1,3-dipolar cycloaddition of azomethine ylides: CNTs undergo the addition reaction when warmed in DMF in the presence of an aldehyde and an α-amino acid [11]. The scope of this reaction is extremely wide, and it forms f-CNTs 5 or 6, which are highly soluble in an extensive range of solvents. Solubility in aqueous or organic solvents can be tuned by carefully selecting the reactants [12]. Antibiotics have had a significant effect on human and animal life through their ability to control infections during the past century [13]. Weak antibiotic internalization can be overcome by carrying antibiotics on nanomaterials such as CNTs, which are able to interact effectively with the bacterial envelope [14].
The synthesis of functionalized single-walled carbon nanotubes (f-SWCNTs). SWCNTs functionalized by thioglycoside derivatives: The reaction of 4-ethynylaniline 7 with di-tert-butyl dicarbonate provided tert-butyl (4-ethynylphenyl)carbamate 8, which reacted with the azide-bearing mannose derivative 11 or lactose derivative 12 in the presence of sodium L-ascorbate and copper sulfate in water to produce the 1,4-disubstituted triazole 13 or 14. Deprotection of compound 13 or 14 by treatment with trifluoroacetic acid (TFA) produced compound 15 or 16, which was treated with a mixture of MeONa/MeOH to obtain the neutral amine 17 or 18 (Scheme 2) [23]. In the third step, the acid-functionalized SWCNT compound 3 was reacted with aminoporphyrin 26 in the presence of N,N'-dicyclohexylcarbodiimide (DCC) to provide the porphyrin-conjugated SWCNTs 27 (Scheme 6). The interaction between the porphyrin (an antibacterial agent) conjugated to SWCNTs 27 and the bacterial cells in the presence of visible light causes destruction of the cell membrane; porphyrin-SWCNT conjugates can therefore be employed as antibacterial agents [25]. SWCNTs functionalized by lysine: SWCNT-Lys-NH2 29, as shown in Scheme 7, was prepared from SWCNTs 1 and H-Lys(Boc)-OH 28 in the presence of paraformaldehyde (PFA) and TFA to afford the lysine-modified SWCNTs 29, which were nontoxic and had no marked influence on the microbiota [26,27]. SWCNTs-ciprofloxacin as a nano-antibiotic: Ciprofloxacin 30 was reacted with di-tert-butyl dicarbonate (Boc2O) in water/dioxane containing NaOH to provide the N-Boc-protected ciprofloxacin. The main strategy was the covalent functionalization of SWCNTs with ciprofloxacin through a spacer, to enhance the hydrophilic-hydrophobic balance and obtain a new, stable, biocompatible nano-antibiotic compound (Scheme 11). The antibacterial activity of the f-SWCNTs 45 was significantly enhanced in comparison with ciprofloxacin alone against different bacteria such as S. aureus, P. aeruginosa, and E. coli [28]. Malachite green (MG) MWCNT conjugate: MGMWCNT 56 was obtained by the reaction of MG 55 and COOH-MWCNT 4, according to the reaction reported previously [25] (Scheme 16). The work presented the improved capability of photoactivated malachite green coupled to carboxylated multi-walled CNTs (MGMWCNT) 56 for antibacterial therapy against Staphylococcus aureus and Pseudomonas aeruginosa, indicating that the MGMWCNT conjugate 56 could be used as a further approach for eliminating both of these bacteria in medical settings [36]. MWCNT-coated titanium alloy discs: MWCNT-coated titanium alloy discs were prepared by coating hard titanium alloy discs (TiAl6V4) with MWCNTs and then immobilizing the encapsulated Ni particles on the MWCNT surface. Rifampicin, an antibiotic drug, was loaded on the MWCNT-coated titanium alloy discs to inhibit biofilm formation by Staphylococcus epidermidis [43]. Carboxymethyl chitosan (CMCS)-MWCNTs: Chitosan was alkalized with NaOH in isopropanol (C3H8O) as the solvent in a flask. Monochloroacetic acid (MCA) was dissolved in isopropanol and added to the reaction mixture to provide CMCS 64 [44]. A mixture of MWCNTs 2 and CMCS 64 was sonicated to afford the homogeneous CMCS-MWCNT 65, which was poured into MeOH to obtain the solid nano-biocompound 65 (Scheme 19) [45]. These nanocomposites display greater antibacterial activity against S. aureus and E. coli than the standard antibiotics ampicillin (a penicillin) and gentamicin [46].
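As context for the hydrophilic-hydrophobic balance invoked above for the SWCNT-ciprofloxacin conjugate, here is a minimal, illustrative RDKit sketch (not from the reviewed works) that computes basic descriptors of ciprofloxacin often consulted when designing such conjugates; the SMILES string is the standard one for ciprofloxacin.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

# Standard SMILES for ciprofloxacin (fluoroquinolone antibiotic, compound 30 above).
cipro = Chem.MolFromSmiles("C1CC1N2C=C(C(=O)O)C(=O)C3=CC(=C(C=C32)N4CCNCC4)F")

# Descriptors bearing on aqueous solubility and membrane interaction,
# relevant when tuning the hydrophilic-hydrophobic balance of a conjugate.
print(f"molecular weight: {Descriptors.MolWt(cipro):.1f} g/mol")
print(f"topological polar surface area: {Descriptors.TPSA(cipro):.1f} A^2")
print(f"calculated logP (Crippen): {Crippen.MolLogP(cipro):.2f}")

# The free piperazine NH is the handle that is Boc-protected before coupling
# to the SWCNT spacer; counting H-bond donors/acceptors flags such sites.
print(f"H-bond donors: {Descriptors.NumHDonors(cipro)}, "
      f"acceptors: {Descriptors.NumHAcceptors(cipro)}")
```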
The mixtures 58 and 59 were combined and sonicated, and the initiator 2,2'-azobisisobutyronitrile (AIBN) was added to start the polymerization reaction, providing MWCNT@liquid crystalline molecularly imprinted polymer particles (MWCNT@LC-MIP) 60, which showed better drug loading ability because of their smaller size, good permeability, and high surface area (Scheme 17). LVF (levofloxacin) is used in the treatment of infections of soft tissue, skin, the urinary tract, and the respiratory tract. The LVF-imprinted MWCNT@LC-MIP has potential for gastroretentive drug delivery systems (GRDDS) and can be used in floating drug delivery systems (FDDS) [37,38]. Polyethylene glycol f-MWCNTs/gelatin-chitosan nanocompound: f-MWCNTs (COOH-MWCNTs) 4 were treated with SOCl2 to give MWCNTs bearing acyl chloride (RCOCl) groups 61, which were treated with polyethylene glycol (PEG) 62 to obtain the polyethylene glycol f-MWCNTs 63 (Scheme 18). These were added into a chitosan-gelatin mixture under sonication to afford the polyethylene glycol f-MWCNTs/gelatin-chitosan nanocompound, which did not show any cytotoxicity. The ciprofloxacin lactate drug mixture was therefore added to the MWCNTs-COOH/gelatin-chitosan and MWCNTs-PEG/gelatin-chitosan nanocompounds to evaluate their antibiotic activities. Such a system can be used as an agent in nanomedicine, targeted thermal tumor destruction, drug delivery, and magnetic-field targeting of tumors. Evidently, antibiotic drugs like ciprofloxacin can easily be transported when immobilized directly on the carbon nanotube surface through H-bonding and π-π stacking [39]. MWCNTs-magnetic nanoparticles: In this process, MWCNTs were reacted with concentrated H2SO4/HNO3 to provide COOH-MWCNTs. Electro-conductive CNT hybrid hydrogels: CNTs were produced by aerosol-assisted chemical vapor deposition [47]. Hydrogels containing different quantities of CNTs, H-NTi (i = 1, 2, 3), were produced by the SONOPULS method by mixing the appropriate CNT mass into a gel-water mixture 66 [48]. Acrylamide (AAm) 67, polyethylene glycol dimethacrylate (PEGDMA) 68, and ammonium persulfate ((NH4)2S2O8) 69 were added to mixture 66 to provide the polymerization solution 70, which was placed between two 5.0 × 5.0 cm2 glass sheets separated by a Teflon spacer (0.6 mm) to obtain the electro-conductive CNT hybrid hydrogels 71 (Scheme 20). Electro-conductive CNT hybrid hydrogels designed in this way can release curcumin (CUR), used as an antibiotic drug, when an external voltage is applied, making them a valuable device for wound healing. Gelatin was included in the reaction feed to improve the dispersibility of the CNTs in liquid media [49,50]. Human serum albumin (HSA)-CNTs: CNTs were oxidized using H2SO4/HNO3 under ultrasonic conditions to provide COOH-CNTs. Phosphate buffer was used to dissolve HSA, which was then added to the COOH-CNT solution to obtain HSA-CNTs [51]. HSA can facilitate the binding and transport of many poorly water-soluble antibiotic drugs, such as bendroflumethiazide [52]. Star-shaped poly(ɛ-caprolactone) (PCL)-poly(ethylene glycol) (PEG)/CNTs p red-OCl nanocompound: CNTs were synthesized using Fe as the catalyst via the CVD method [53]. The CNTs were functionalized in three steps to immobilize several kinds of chemical groups, such as OH, COOH, and RCOCl, on their surface. In the first step, the CNTs were reacted with nitric acid under reflux to provide partly oxidized CNTs bearing OH and COOH groups.
In the second step, the carboxyl (COOH) groups of CNTs p oxi were chemically reduced with LiAlH4 to convert them into hydroxyl groups, affording CNTs p red, which were treated with oxalyl chloride (OxCl) to obtain CNTs p red-OCl 72 (Scheme 21). In the last step, CNTs p red-OCl were utilized to produce the star-like PCL-PEG/CNTs p red-OCl nanocompounds as follows. CNTs p red-OCl 72 were mixed with the previously prepared star-like PCL-PEG copolymer 73, which had been synthesized by a slightly modified three-step chemical method [54]. First, a star-like PCL homopolymer was produced from ɛ-caprolactone (ɛ-CL), pentaerythritol (PTOL), and stannous octoate (SnOct2) under reflux. Second, a star-like PCL-OxCl prepolymer was obtained by treating the OH groups of the star-like PCL (stPCL) with the acyl chloride (RCOCl) groups of oxalyl chloride (OxCl) under reflux. In the next step, a mixture of PEG in dichloromethane was added, avoiding unwanted cross-linking during the prepolymer preparation, to provide the star-like PCL-PEG, which was reacted with CNTs p red-OCl to afford the star-like PCL-PEG/CNTs p red-OCl nanocompound 74 (Scheme 22). The resulting products were tested against Staphylococcus aureus and Pseudomonas aeruginosa; they were better than the polymeric compound alone, although their antibacterial action was weaker. The molecular scaffold of the star-like copolymer made it possible for the poly(ethylene glycol) chains to reach the bacteria and inhibit their growth [55]. Conclusion: In conclusion, the role of carbon nanotubes in antibiotic drug delivery was reviewed through the functionalization of SWCNTs and MWCNTs to improve their abilities, such as drug release, targeting, adsorption, and solubility in water. The resulting f-SWCNTs and f-MWCNTs display better antibacterial activity against bacteria such as E. coli, S. aureus, and Bacillus subtilis.
The Perceived Satisfaction in Utilizing Learning Management System among Engineering Students during the COVID-19 Pandemic: Integrating Task Technology Fit and Extended Technology Acceptance Model
Online education has become the norm for higher education institutions (HEIs) during the COVID-19 pandemic. HEIs are required to implement a fully online learning system that is structured and readily accessible with the assistance of a learning management system (LMS), including in developing countries such as the Philippines. This study aims to assess factors that positively influence the perceived satisfaction of engineering students when using the LMS during the COVID-19 pandemic in the Philippines. Additionally, it aims to integrate two models, Task Technology Fit (TTF) and the Technology Acceptance Model (TAM), with added variables such as the content of the learning management system, social presence, and social space. Using convenience sampling, a total of 1011 engineering students responded to the online survey, which consisted of 81 questions. Structural equation modeling (SEM) showed that Task Technology Fit was positively influenced by technology, individual, and task characteristics. Moreover, behavioral intention to use the LMS was positively influenced by perceived usefulness and perceived ease of use. Furthermore, Task Technology Fit had a significant direct effect on behavioral intention to use the LMS, which subsequently led to perceived satisfaction. This study is among the first to explore factors affecting perceived satisfaction among engineering students using the LMS in the Philippines during the COVID-19 pandemic. Future work can extend the model and apply it in other countries to evaluate the perceived satisfaction of students using a learning management system.
Introduction: COVID-19 is a worldwide public health crisis, including in the Philippines. About 1,627,816 people had been infected and 28,427 had died in the Philippines as of 6 August 2021 [1]. Enhanced community quarantines, including widespread lockdowns, social distancing, and the wearing of face masks and face shields, are still enforced in the country to control the spread of the virus. The country has been constantly changing its restrictions depending on the current number of cases; still, the traditional school setup has not yet returned [2]. The government implemented distance learning to replace traditional in-person education as an alternative means of controlling the spread of the COVID-19 virus. Through online learning, academicians and students engaged in continued academic learning. However, the Philippines was not prepared during the initial stage of this pandemic due to the closure of internet shops, internet instability, the isolated locations of many students, and other relevant factors [2]. Different types of distance education have been introduced, such as e-learning, mobile learning, online education, distance learning, and online learning. Toquero [3] suggested conducting studies related to the effects of the COVID-19 pandemic on the educational system; it is necessary to strengthen curriculum practices and make them more adaptable to the needs of students beyond traditional setups. In addition, Joaquin et al. [4] highlighted that there are still gaps and challenges in HEIs' responses, despite the innovations they have made regarding the use of technologies and alternative learning materials for delivering academic education.
The study recommends that policy responses and learning innovations should be based on a deeper understanding of online education and should be flexible in times of change [4]. Online education is an instructional method that utilizes a variety of tools and technologies to facilitate student-faculty communication and enrich the student learning experience. In the contemporary world, the concept of online education is no longer new, and various means of economical internet access are available thanks to recent advances in cloud technologies, which promote a flexible learning system and support traditional learning methods [5]. During the COVID-19 pandemic, online learning has become the norm for higher education institutions (HEIs) in many countries, including the Philippines. HEIs in the Philippines have adjusted to the restrictions imposed by this global pandemic and are required to implement a fully online learning system. Different HEIs have different online learning policies, which facilitate student learning activities via the provision of synchronous lectures or asynchronous, delayed activities [6]. Learning materials are structured and readily accessible through the different learning management systems (LMS) offered by HEIs. "Learning management system" normally describes a variety of systems providing access to online educational services for teachers, students, and administrators. In general, these services have some essential features, such as provision for different types of learning content, types of communication tools, and access control limited to authorized people; such a system is usually referred to as an online learning platform [7]. LMSs have been widely used in different countries to continuously improve instruction and learning activities in higher education, and improvements to LMSs have been investigated by comparing the features of commercially available platforms [8]. An LMS offers many benefits for students, teachers, and academic staff: for example, they can review lectures, give feedback, answer exams and assignments, hold discussions, and have social interactions. In addition, LMS features have been examined so that writing instructors might become more familiar with writing education [9]. Moreover, the technological efficiency of using an LMS for online writing instruction (OWI) has allowed a sustainable practice to be attained [10]. Higher education institutions have several LMS options, such as ANGEL, BBlearn, Canvas, Desire2Learn, Moodle, Sakai, and others [11]. These LMSs are widely used, especially in engineering universities and institutes in the Philippines. Empirical studies have examined the adoption and acceptance of e-learning using structural equation modeling (SEM) [12-14]. Under the conditions of the COVID-19 pandemic, other researchers have evaluated the acceptance of e-learning in their countries using the Technology Acceptance Model [15,16]. The study of Pal and Vanijja [17] evaluated the perceived usability of Microsoft Teams in India using the System Usability Scale (SUS) and the Technology Acceptance Model (TAM); the results demonstrate the similarity and equivalence between SUS and TAM. Likewise, Isaac et al. [18] examined the intervening role of Task Technology Fit and its compatibility in the DeLone and McLean information system success model.
The results of the proposed model were effective in demonstrating the variables that positively influence a student's academic performance. Moreover, an integration of Task Technology Fit (TTF) and the Technology Acceptance Model (TAM) has been used in the literature as a robust model [19,20]. The study of Yen et al. [19] integrated both TAM and TTF to understand the determinants of users' intentions to use wireless technology in an organization. Based on the results, perceived usefulness and perceived ease of use positively influence an individual's behavioral intention to use wireless technology, while perceived ease of use has a significant direct effect on perceived usefulness. Technology characteristics and task characteristics positively influence Task Technology Fit, while technology characteristics also have a significant effect on perceived ease of use and usefulness. Lastly, TTF positively influences TAM; this implies that the user's intention to adopt wireless technology is determined directly by the fit between task characteristics and technology characteristics, and the same holds for the user's perceived usefulness and ease of use. LMSs have also been investigated by applying the Technology Acceptance Model (TAM) and employing the structural equation modeling (SEM) approach to examine students' adaptation process [13]. With the integration of Task Technology Fit and the Technology Acceptance Model, one study assessed the adoption of social networking sites and their impact on students' performance [21]. The results show a significant relationship between technology characteristics, task characteristics, social aspects, and Task Technology Fit (TTF) in using social media as a platform for academic purposes; such use also promotes student enjoyment and enhances academic performance. Moreover, there are significant relationships between comprehension efficiency, enjoyment, ease of use, and behavioral intention to use social media for academic purposes, which positively influences satisfaction and achievement. Hence, the study implies that TTF and behavioral intention to use social media significantly improve the learning process of students while enabling them to share knowledge, discussions, and information. This paper aims to evaluate factors that positively influence perceived satisfaction among engineering students using the Learning Management System (LMS) in the Philippines during the COVID-19 pandemic. It also aims to integrate the two models, Task Technology Fit (TTF) and the Technology Acceptance Model (TAM), with added variables such as social presence, social space, and the content of the learning management system. The present study is among the first to analyze factors that positively influence the perceived satisfaction of engineering students using a Learning Management System in this country during the COVID-19 pandemic. Furthermore, the integrated TAM and TTF, with the added variables of social presence, social space, and the content of the learning management system, can be extended and applied in other countries to evaluate the perceived satisfaction of students using a learning management system. Figure 1 illustrates the theoretical research model of the present study, which integrates the Technology Acceptance Model (TAM) and Task Technology Fit (TTF) with the added variables of social presence, social space, and the content of the learning management system.
The main objective was to evaluate factors affecting perceived satisfaction with online education among engineering students in a higher education institution (HEI) in the Philippines. These factors are investigated for their impact on students' perceived satisfaction when using a Learning Management System (LMS). This study examines 11 hypotheses, as illustrated in Figure 1. Technology Characteristics: Technology characteristics refer to the current system used by students to complete their tasks [22]. The indicators used in this construct are based on the ability of the LMS to offer information and to let students perform tasks from virtually any location, accessible on mobile devices at any point in time [23]. Previous studies show the integration of information and system quality in using social media for informal and formal learning [24]. In view of this, the study suggests the following hypothesis. Hypothesis 1 (H1). Technology characteristics will positively influence Task Technology Fit. Task Characteristics: Task characteristics refer to the actions accomplished by individuals in transforming inputs into outputs [22]. In this model, some of the indicators used in this construct are based on the different assessment tasks, collaboration with other individuals, and how frequently individuals coordinate with each other to perform a given task. In view of this, the study suggests the following hypothesis. Hypothesis 2 (H2). Task characteristics will positively influence Task Technology Fit. Individual Characteristics: Individual characteristics refer to the students' characteristics that significantly influence the use of the Learning Management System [25]. In this study, self-efficacy and attitude are the two variables considered under individual characteristics. Self-efficacy refers to one's opinion of one's own ability to utilize technology in accomplishing certain tasks [26], while attitude refers to an individual's positive or negative thoughts about performing a task [27]. In view of this, the study suggests the following hypothesis. Hypothesis 3 (H3). Individual characteristics will positively influence Task Technology Fit.
Individual Characteristics
Individual characteristics refer to the student's characteristics that significantly influence the use of the Learning Management System [25]. In this study, self-efficacy and attitude are the two variables considered under individual characteristics. Self-efficacy refers to one's opinion of one's own ability to utilize technology in accomplishing certain tasks [26], while attitude refers to an individual's positive or negative thoughts about performing a task [27]. In view of this, the study suggests the following hypothesis.
Hypothesis 3 (H3). Individual characteristics will positively influence Task Technology Fit.

Task Technology Fit
Task Technology Fit refers to the correspondence among the individual's abilities, the task requirements, and the features of technology [22]. TTF corresponds to the level at which technology supports an individual's work in order to accomplish a given job. Generally, individuals can use the technology to complete specific tasks under any given condition [28]. In this paper, Task Technology Fit refers to the LMS's ability to assist students in their various learning activities. These activities include accessible learning materials with interactive activities such as quizzes, assignments, discussions, and practical activities. In view of this, the study suggests the following hypothesis.
Hypothesis 4 (H4). Task Technology Fit will positively influence behavioral intention to use a learning management system.

Behavioural Intention to Use LMS
Individual attitudes, a reaction in a particular way concerning the use of the system, and the perception of its utility are referred to as behavioral intention to use (BIU) [29,30]. In view of this, the study suggests the following hypothesis.
Hypothesis 5 (H5). Behavioral intention to use a learning management system will positively influence perceived satisfaction.

Perceived Usefulness
Perceived usefulness refers to the level at which an individual thinks that using a distinct system would improve individual performance [29]. This latent variable, together with perceived ease of use, was introduced in the Technology Acceptance Model (TAM) to measure an individual's intention to use technology. In view of this, the study suggests the following hypothesis.
Hypothesis 6 (H6). Perceived usefulness will positively influence behavioral intention to use a learning management system.

Perceived Ease of Use
Perceived ease of use refers to the degree to which a person thinks that using a specific system would be free of effort [29]. In view of this, the study suggests the following hypothesis.
Hypothesis 7 (H7). Perceived ease of use will positively influence behavioral intention to use a learning management system.

Perceived Enjoyment
The notion of perceived enjoyment was based on flow theory, under which an activity is perceived as enjoyable apart from the user's perception of its usefulness or its ability to attain certain performance goals [31,32]. According to Venkatesh et al. [26], an individual who derives enjoyment from using an information system can use it more broadly than those who do not. An individual will be more inspired to use it again if they perceive it as an enjoyable task, in contrast to a similar activity that is not. If an individual can experience enjoyment through the implementation of new technology, their attitude towards it will be positive. In view of this, the study suggests the following hypothesis.
Hypothesis 8 (H8). Perceived enjoyment will positively influence behavioral intention to use the learning management system.
Social Presence
Social presence refers to the degree to which one feels the existence of participants in communication, that is, the psychological sensation of the other being "there" and "present" [33,34]. In view of this, the study suggests the following hypothesis.
Hypothesis 9 (H9). Social presence will positively influence perceived satisfaction.

Social Space
Social space refers to the perceived system of interpersonal relationships between students. Kreijns et al. [35] designed and implemented a sociable computer-supported collaborative learning (CSCL) system, which may increase the probability that a sound social space will develop. In view of this, the study suggests the following hypothesis.
Hypothesis 10 (H10). Social space will positively influence perceived satisfaction.

Content of Learning Management System
The content of the learning management system (LMS) refers to the tools by which students can gain access to its content. The content of the learning management system provides up-to-date, useful, sufficient, and relevant content on the provided topic, quiz, assignment, discussions, practical activity, etc. [36]. The interaction in which individuals are involved is based on the contents of the learning management system. In view of this, this paper proposes the following hypothesis.
Hypothesis 11 (H11). The content of the learning management system will positively influence perceived satisfaction.

Perceived Satisfaction
Perceived satisfaction refers to the degree to which a student is satisfied with all elements of using the Learning Management System (LMS) through online education [34]. To the best of our knowledge, this is the first study that evaluates the factors of perceived satisfaction in using a learning management system as part of online education in one of the HEIs in the Philippines during the COVID-19 pandemic.

Data Collection
The target population of this study, to whom the questionnaires were administered, was the engineering students at one of the higher education institutions in the Philippines. The present research used the non-probability convenience sampling method; the questionnaire was distributed via Google Forms. Data were collected from January 2021 to February 2021. The research subjects were engineering students who used the Learning Management System as a platform for their online education. In total, 1011 responses were received. Participation in answering the questionnaire was voluntary, and responses were treated confidentially and anonymously. As shown in Table 1, among the 1011 respondents, 76% were male and almost 24% were female; 63.90% were between 19 and 21 years old, 22.95% were between 22 and 23 years old, 11.47% were 24 or above, and only 1.68% were aged 16-18 years old. Most of the respondents were in their 2nd year, comprising 46.98%; 25.22% were in their 5th year, 20.18% were in their 3rd year, and only 7.62% were in their 4th year. About 71.12% of the respondents lived in cities and 28.88% lived in the province. Most of the respondents were from the Civil Engineering Program, comprising 56.48%, followed by the Industrial Engineering Program (18.69%), the Mechanical Engineering Program (10.68%), the Computer Engineering Program (4.95%), the Electrical Engineering Program (4.06%), the Electronics and Communications Engineering Program (2.87%), the Marine Engineering Program (1.38%), the Sanitary Engineering Program (0.69%), and the Environmental Engineering Program (0.20%). Most of the respondents (64.19%) used laptops as the learning platform and 35.81% used smartphones. Regarding synchronous learning in weekly classes, 35.81% and 35.11% of the respondents declared synchronous activity of 3-5 h and 5-10 h per week, respectively.
Additionally, 14.24% and 14.85% of the respondents declared synchronous learning for less than 3 h and more than 10 h per week, respectively. Regarding asynchronous learning in weekly classes, 38.87% of the respondents reported 3-5 h of asynchronous activity when using the LMS, 26.61% reported 5-10 h, 17.80% reported under 3 h, and 16.72% reported more than 10 h.

Questionnaire
Table 2 shows the constructs and measures used in the survey.

Content of Learning Management System
CO1: The Learning Management System provides up-to-date content on the provided topic, quiz, assignment, discussions, etc. [36]
CO2: The Learning Management System provides useful content for the topic, quiz, assignment, discussions, etc. [36]
CO3: The Learning Management System provides sufficient content for the topic, quiz, assignment, discussions, etc. [36]
CO4: The content in the Learning Management System is relevant [40]
CO5: The content in the Learning Management System is readable [40]
CO6: The content in the Learning Management System is accurate [40]
CO7: The content in the Learning Management System is concise and to the point [40]

Technology Characteristics
TECH1: This Learning Management System offers me the ability to receive information and perform assessment tasks from virtually any location [19]
TECH2: This Learning Management System offers me the ability to receive information and perform assessment tasks from virtually any location at any time [19]
TECH3: This Learning Management System can be accessed on mobile devices through a mobile app to represent information in ways appropriate to me [19]
TECH4: Learning Management Systems can also be subject to frequent problems and crashes [40]

Task Characteristics
TC1: Using this Learning Management System, I frequently deal with different assessment tasks [19]
TC2: Some tasks given to me have never been replicated before [19]
TC3: The task problems I cope with often involve more than one assessment task [19]
TC4: I frequently deal with nonroutine task problems [19]
TC5: I have to collaborate with others in my coursework [40]
TC6: My coursework requires frequent coordination with the efforts of others [40]

Individual Characteristics
(The items for this and the remaining constructs are listed in Table 2.) * "Big Blue Button" is an open-source conference system for online education and is embedded in the Canvas platform.

Structural Equation Model
The Structural Equation Modeling (SEM) approach is a multivariate approach that is used in testing hypotheses concerning the impacts among interacting variables [44]. This paper utilizes AMOS 22 when running the SEM. This is a user-friendly structural equation modeling tool that investigates the correlations between latent variables to validate the relationships and test hypotheses. Figure 2 shows the SEM constructs consisting of 12 latent variables, with 9 exogenous latent variables (Technology Characteristics, Task Characteristics, Individual Characteristics, Perceived Usefulness, Perceived Ease of Use, Perceived Enjoyment, Social Presence, Social Space, Content of Learning Management System) and 3 endogenous latent variables (Behavioral Intention to Use LMS, Task Technology Fit, and Perceived Satisfaction).
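For readers who want to manipulate the hypothesized structure outside of AMOS, the eleven paths can be written down as a simple edge list. The sketch below is an editorial illustration in Python, derived solely from Hypotheses 1-11 above; it is not part of the authors' workflow, and the abbreviations simply follow the construct codes used in this paper.

```python
# Hypothesized structural paths of the integrated TTF/TAM model (H1-H11),
# encoded as (source latent variable -> target latent variable).
HYPOTHESES = {
    "H1": ("TECH", "TTF"),   # technology characteristics -> task technology fit
    "H2": ("TC", "TTF"),     # task characteristics -> task technology fit
    "H3": ("IND", "TTF"),    # individual characteristics -> task technology fit
    "H4": ("TTF", "BI"),     # task technology fit -> behavioral intention to use LMS
    "H5": ("BI", "PS"),      # behavioral intention -> perceived satisfaction
    "H6": ("PU", "BI"),      # perceived usefulness -> behavioral intention
    "H7": ("PEOU", "BI"),    # perceived ease of use -> behavioral intention
    "H8": ("PE", "BI"),      # perceived enjoyment -> behavioral intention
    "H9": ("SP", "PS"),      # social presence -> perceived satisfaction
    "H10": ("SS", "PS"),     # social space -> perceived satisfaction
    "H11": ("CO", "PS"),     # LMS content -> perceived satisfaction
}

# Endogenous variables are exactly those that appear as a target.
endogenous = {target for _, target in HYPOTHESES.values()}
exogenous = {source for source, _ in HYPOTHESES.values()} - endogenous
print(sorted(endogenous))  # ['BI', 'PS', 'TTF']
print(sorted(exogenous))   # ['CO', 'IND', 'PE', 'PEOU', 'PU', 'SP', 'SS', 'TC', 'TECH']
```

Note that the derived sets match the 9 exogenous and 3 endogenous latent variables described above, which is a quick consistency check on the model specification.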
Prior studies used six measures to analyze the model fit: the Tucker Lewis Index (TLI), the Comparative Fit Index (CFI), the Incremental Fit Index (IFI), the Goodness of Fit Index (GFI), the Adjusted Goodness of Fit Index (AGFI), and the Root Mean Square Error of Approximation (RMSEA). A value greater than 0.90 indicates a good model fit for TLI, CFI, and IFI [45,46]. On the other hand, a value greater than 0.80 is the lowest sign of a good model fit for GFI and AGFI [47]. Lastly, a value smaller than 0.07 is also an indication of a good model fit for RMSEA [48]. Figure 2 shows the initial SEM results from AMOS 22 in evaluating perceived satisfaction with using the LMS among engineering students in the Philippines. Based on the figure, one hypothesis was not significant: the relationship of perceived enjoyment and behavioral intention to use LMS (Hypothesis 8), PE (β = 0.0037). Thus, this latent variable was omitted to increase the model's fit.

Results
Tables 3 and 4 demonstrate the reliability and validity, and the model fit, respectively. Based on Table 3, it is apparent that each construct in our proposed model possessed internal consistency, reflected by the Cronbach α and composite reliability (CR), since the values were found to be higher than 0.7. Furthermore, the constructs also had a degree of discriminant validity, represented by the average variance extracted (AVE), with a minimum acceptable value of 0.5; all of the constructs surpassed this value. Finally, an evaluation of the data observed throughout the study in relation to the proposed model was also conducted; this was made possible by assessing the model fit values. Following the study of Gefen et al. [47], values of GFI and AGFI higher than 0.80 were used as the cut-off. In addition, as derived from the study of Hair [45], we require the values of IFI, TLI, and CFI to be greater than 0.9, and the RMSEA to be smaller than 0.07 [45,47]. Therefore, the model fit measures were within the acceptable value range. Figure 3 shows the final model for evaluating perceived satisfaction in using LMS through online education during the COVID-19 pandemic.
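The reliability and validity thresholds quoted above (Cronbach α and CR above 0.7, AVE above 0.5) follow standard formulas, which can be sketched as follows. This is an illustrative Python sketch with made-up loadings, not the authors' computation; Cronbach's alpha takes raw item scores, while CR and AVE take standardized factor loadings.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2)) from standardized loadings."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings of one construct."""
    return (loadings ** 2).mean()

# Hypothetical loadings for a four-item construct (not from Table 3):
lam = np.array([0.82, 0.78, 0.75, 0.80])
print(composite_reliability(lam) > 0.7)        # internal consistency threshold
print(average_variance_extracted(lam) > 0.5)   # convergent validity threshold
```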
Discussion
The study integrated the Task Technology Fit (TTF) and Technology Acceptance Model (TAM) to evaluate factors affecting the perceived satisfaction among engineering students using a Learning Management System (LMS) in the Philippines amidst the COVID-19 pandemic. A total of 1011 engineering students responded to the online questionnaire deployed using Google Forms, which consisted of 81 questions. Structural Equation Modeling (SEM) was used to analyze the interrelationships among Technology Characteristics (TECH), Task Characteristics (TC), Individual Characteristics (IND), and the other latent variables in the model. With regard to Task Technology Fit, it was significantly affected by TECH (β = 0.38), TC (β = 0.201), and IND (β = 0.754). The model indicated that the LMS equipped the students to perform their assigned tasks at virtually any location and at any time. It shows that accessing the LMS through a mobile application is generally accepted by the students in terms of representing information. In addition, it also shows that group collaboration to perform a certain task is effective, and the use of the LMS gives the students confidence in performing different tasks. Furthermore, the individual attitudes of students towards using the LMS in their studies are acceptable, and they considered it beneficial to them. Additionally, the LMS's functions and contents give confidence to students constantly using the platform. The study of Khan et al. [50], which examined the factors that influence students' adoption of massive open online courses (MOOCs) in Pakistan, showed that Task Technology Fit was significantly affected by Technology Characteristics and Task Characteristics. Moreover, a prior study regarding the significant effect of individual characteristics on TTF was carried out in the context of the adoption of e-books in academic settings [51]. The SEM also indicated that Task Technology Fit had a significant direct effect on BI (β = 0.526). The students think that the LMS is easy to use, user-friendly, easy to learn, provides updated information, provides output based on their needs, and helps them to accomplish assignments and academic tasks. Generally, it shows that the students were satisfied with using the LMS and would remain so in the future, and that it benefits the students. Previous studies also established consistent results regarding the significant direct effect of Task Technology Fit on behavioral intention [50,52,53]. Regarding the behavioral intention to use the LMS, the current model showed that it was significantly affected by PU (β = 0.379) and PEOU (β = 0.511). This implies that when using the LMS, students will increase their performance, efficiency, productivity, flexibility, and effectiveness in performing tasks. In addition, it will provide new ways of learning and provide benefits during online education. It also shows that students perceived the LMS as easy to understand and flexible to interact with, and that it improves the quality of learning. Furthermore, several studies presented a significant effect of perceived usefulness and perceived ease of use on behavioral intention [15,16,20]. As discussed before, the study integrated variables such as social presence (SP), social space (SS), and the content of the learning management system (CO).
The results show that perceived satisfaction was significantly affected by SP (β = 0.255), SS (β = 0.416), and CO (β = 0.239). This shows that using the LMS in a conference via Big Blue Button gives the students a feeling of being in a real face-to-face class, and distinct impressions of their fellow students. It also allows them to feel the presence of their fellow students physically within the same room. In addition, the students felt that, when using the LMS, they were free to criticize and scrutinize the ideas and opinions of their fellow students. Students also kept in touch with each other, conducted open and happy conversations, and preserved connections regarding work tasks, such as assignments, quizzes, and discussions. Furthermore, the students perceived a network of interpersonal relationships when using the LMS. Prior studies support the significant effects of social space and social presence on student satisfaction by providing learning experiences that consider the aspect of a sociable learning environment [34]. In addition, the content of the LMS provides up-to-date and useful information on the provided activities, such as quizzes, assignments, discussions, and even practical activities. There are numerous academic tasks, such as online practicums, online experiments, and online simulations. Despite the challenges during the COVID-19 pandemic, the results of the study show that online learning platforms could still satisfy engineering students. This also implies that the provided contents were relevant, readable, accurate, and concise. Previous studies looked at how content will be offered in the future [54]. The practical benefits of the LMS offer several advantages that improve students' studies. The first aspect is how video content covering both theoretical and practical aspects can be viewed by all students, customized, and repeated according to their learning needs and availability. A face-to-face lecture allows only one-time viewing, while the learning capacity of students varies; customized learning in terms of flexibility is surely a great advantage for students. The second aspect is how the quality of content can be enriched by different sources. The teachers can use different sources, such as reputable videos or invited teachers, without complex preparations. The geographical limits on sources of content are eliminated in the LMS approach. Finally, the SEM indicated that behavioral intention to use the LMS had a significant direct effect on PS (β = 0.594). It showed that the students were fully satisfied in terms of their educational needs, and all their expectations were fully met when using the LMS as a distance learning method.
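As an arithmetic illustration of how the quoted coefficients combine, the standardized indirect effect of TTF on perceived satisfaction via behavioral intention can be computed with the usual product-of-coefficients rule. This is an editorial sketch based only on the β values reported above, not an analysis reported by the authors.

```python
# Standardized path coefficients reported in the discussion above.
beta = {
    ("TTF", "BI"): 0.526,
    ("PU", "BI"): 0.379,
    ("PEOU", "BI"): 0.511,
    ("BI", "PS"): 0.594,
}

# Product-of-coefficients rule: the indirect effect of TTF on perceived
# satisfaction through behavioral intention is the product of the two paths.
indirect_ttf_ps = beta[("TTF", "BI")] * beta[("BI", "PS")]
print(f"TTF -> BI -> PS: {indirect_ttf_ps:.3f}")  # ~0.312
```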
Conclusions
The COVID-19 pandemic has been a worldwide health crisis. To control the spread of COVID-19, the Philippine government implemented distance learning as an alternative to traditional face-to-face education. To assess its effectiveness for students, this paper examined a Learning Management System (LMS) used as a distance learning method among engineering students in one of the higher education institutions (HEI) in the Philippines. The contribution of this research is that it provides insight into case studies on confirmatory factor analysis, specifically on the satisfaction of engineering students utilizing an LMS. The paper uses Task Technology Fit (TTF) [55-57] and a Technology Acceptance Model (TAM), with added variables such as social presence, social space, and content of the learning management system, in assessing the factors affecting the perceived satisfaction of students when using an LMS. A total of 1011 engineering students responded to the online questionnaire, which comprised 81 questions. A total of 10 hypotheses were supported (H1, H2, H3, H4, H5, H6, H7, H9, H10, H11), and 1 hypothesis was not supported: the relationship of perceived enjoyment with behavioral intention to use the LMS (Hypothesis 8), PE (β = 0.0037). This implies that, in the initial model, perceived enjoyment had no significant effect on the behavioral intention to use the LMS. Thus, this latent variable was omitted to increase the model's fit. The results show that Task Technology Fit was positively influenced by technology characteristics, task characteristics, and individual characteristics. In addition, behavioral intention to use the LMS was positively influenced by perceived usefulness and perceived ease of use. Task Technology Fit had a significant direct effect on behavioral intention to use the LMS, which subsequently led to perceived satisfaction.

Theoretical Contribution
This study provides theoretical contributions to the existing literature on using a Learning Management System (LMS) in online education during the COVID-19 situation in the Philippines. The study contributes a novel approach to the factors that affect perceived satisfaction in using an LMS in online education by integrating Task Technology Fit and the Technology Acceptance Model. In addition, added variables such as social presence and social space (adopted from Weidlich and Bastiaens [34]), and the content of the learning management system (from Wang [36]), are considered as factors affecting the perceived satisfaction of students. This implies that the social aspects involved in students' use of the LMS were significant factors in perceived satisfaction during online education. The results show that an average 29% improvement in the model fit was obtained after omitting one latent variable. Lastly, this paper is among the first to analyze the factors affecting perceived satisfaction in using a Learning Management System (LMS) in the Philippines during the COVID-19 pandemic.

Practical Implication
The findings from this study can be used by any higher education institution (HEI) to increase the level of satisfaction of students using online learning platforms during the COVID-19 pandemic. The model fit measures were within an acceptable range (see Table 4). If students are satisfied with their learning experience when using the LMS platform, they will be able to continue their studies without having to search for other schools that offer online education. Despite the COVID-19 pandemic, students can easily learn and study at home with the help of the LMS. They only need an internet connection and suitable hardware to attend the online class. Likewise, teachers can conduct lectures and continue to educate students. Thus, distance and location will not be an issue in academic learning for either students or teachers. One benefit of the LMS is that it makes it easy to access information. Information is readily accessible to all users, and learning materials are structured and can be accessed virtually anytime and at any location.
Students have access to lesson materials, such as quizzes, assignments, practical exercises, discussions, and resources. These resources include PDF e-book copies and other online links, and can be integrated into the class page for online learning. Additionally, the platform can be used as an instrument to create content, through which students can upload and share work and carry out projects with their teachers and fellow learners. Another benefit of the LMS is the diverse range of formats of the resources that are used and disseminated in the class modules. The instructor gathers multiple resources on the topic that help the students understand the context of the subject. Interactive videos and external sites can be embedded easily into the class page. Institutional resources can be accessed through different e-journals, and e-books are also embedded in the module. The assessment tasks in the LMS can also take various formats, such as multiple-choice questionnaires that provide immediate feedback. The teacher can also reference an external site in an interactive video format, apply the questions for assignments, carry out practical activities, and host graded discussions. Moreover, clear feedback from the teacher for assignments, quizzes, activities, etc., is also one benefit of the LMS for students. Feedback can be easily shared with the student via Speed Grader. Speed Grader is a tool embedded in the LMS that is used by teachers to view, annotate, and assess students' submissions without the need to download any files. Through this, students can download copies of their work after it is graded with the instructor's comments. Instructors can "mark up" students' work using in-line text comments, highlighting, and drawing tools, and can quickly assess work using custom-designed rubrics within Speed Grader. All scores are automatically entered into the course grade book. Big Blue Button, which is an open-source conference system for online education and is embedded into the Canvas platform, is part of the Learning Management System. These features of the LMS enable the students to interact with their classmates and teachers virtually. Through this, they feel that they are really physically present, just like in a traditional classroom. Additionally, students, teachers, employees, and administrators can facilitate the overall management of communication through individual emails, messages, announcements, and agendas. Moreover, the online learning experience using the LMS during this pandemic has introduced flexibility and tailored pacing into students' learning. Students can return to past lecture material using the LMS if they are not comfortable enough to ask their professors about a specific detail of the topic.

Limitations and Future Research
We would like to acknowledge some limitations of the study. First, this study mainly focused on the general perception of engineering students regarding their perceived satisfaction in using the LMS during the COVID-19 pandemic. Second, the survey deployed was limited to engineering students only; it only shows that the LMS works in an engineering program, as other programs, such as management, business, accounting, medicine, and law, were not considered. Future works in this area can consider other latent variables, such as the instructional delivery method, and analyze whether these could positively influence the perceived satisfaction with using learning management systems during online education.
Additionally, the proposed model can be expanded to other programs and different educational levels, such as grade school and high school. Moreover, the proposed model can also be applied to assess teachers' perceived satisfaction when using an LMS during COVID-19. Additionally, it can be extended and applied in other countries to evaluate the perceived satisfaction of students and teachers when using a Learning Management System. Lastly, the possibility of improving the LMS in terms of its features could also be addressed in future research.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-10-05T20:09:54.846Z
2021-09-26T00:00:00.000
{ "year": 2021, "sha1": "aca696860ddd90404f82776e8068e4a4466e070b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/19/10669/pdf?version=1632827836", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6cfd8b6840616992fadeefc4d1abfd0985ddc80d", "s2fieldsofstudy": [ "Engineering", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
2367459
pes2o/s2orc
v3-fos-license
Ion channel gene expression predicts survival in glioma patients
Ion channels are important regulators in cell proliferation, migration, and apoptosis. The malfunction and/or aberrant expression of ion channels may disrupt these important biological processes and influence cancer progression. In this study, we investigate the expression pattern of ion channel genes in glioma. We designate 18 ion channel genes that are differentially expressed in high-grade glioma as a prognostic molecular signature. This ion channel gene expression based signature predicts glioma outcome in three independent validation cohorts. Interestingly, 16 of these 18 genes were down-regulated in high-grade glioma. This signature is independent of traditional clinical, molecular, and histological factors. Resampling tests indicate that the prognostic power of the signature outperforms random gene sets selected from the human genome in all the validation cohorts. More importantly, this signature performs better than random gene signatures selected from glioma-associated genes in two out of three validation datasets. This study implicates ion channels in brain cancer, thus expanding on knowledge of their roles in other cancers. Individualized profiling of ion channel gene expression serves as a superior and independent prognostic tool for glioma patients.
Gliomas of grades III and IV are designated as high-grade and are usually regarded as malignant. These high-grade gliomas comprise glioblastoma, anaplastic astrocytoma, mixed anaplastic oligoastrocytoma (AOA), and anaplastic oligodendroglioma (AOD) 15. Ion channels have been reported to be widely expressed in glial cells 16 and play a critical role in malignant glioma 17,18. First, through their effects on the shape and volume of glioma cells, ion channels may influence tumor invasion and migration 17. For example, reduced membrane expression of a member of the voltage-gated Cl− channels, ClC-3 (encoded by the gene CLCN3), inhibits migration of glioma cells in vitro and in vivo 19. Second, ion channels may affect the proliferation of glioma cells. For example, voltage-dependent large-conductance Ca2+-activated K+ channels, often referred to as Big Potassium (BK) channels, were found to play a key role in growth control of human glioblastoma cells 20. Third, ion channels may regulate the apoptosis of glioma cells. Inhibition of outwardly rectifying K+ channels was reported to cause apoptosis in malignant glioma cells 21. Despite the advances in therapeutic approaches, patients with malignant glioma have short survival times, especially for glioblastoma, with a median survival of approximately twelve months 22. Therefore, several molecular markers have been proposed to predict survival in glioma, including both microRNA 23,24 and protein-coding gene 25,26 signatures. Our previous work has suggested that ion channel gene expression is a novel genomic biomarker for predicting the outcome of several human carcinomas 11-13. In this study, we profile the expression of ion channel genes to predict glioma outcome. We identify a prognostic molecular signature, which comprises the expression patterns of 18 ion channel genes. 16 of these 18 ion channel genes (89%) were down-regulated in high-grade gliomas. This signature successfully distinguishes glioma patients with high death risk from those with low risk. This signature is also independent of and improves on traditional prognostic factors in glioma.
These results highlight the utility of ion channel genes as valuable biomarkers for glioma outcome prediction and for potentially facilitating individualized therapies in this disease.

Results
Ion channel gene expression is correlated with the WHO grade of glioma. We categorized the severity of gliomas using the WHO's grading system. Grade I glioma is the least severe with the best prognosis, while grade IV is the most severe, carrying the worst prognosis. To identify the genes that are associated with the severity of glioma, we downloaded a high-throughput gene expression dataset from the Gene Expression Omnibus (GEO) database (GEO accession: GSE43289), which was based on the Affymetrix Human Genome U133 Plus 2.0 Array. The patients of this cohort were from the University Hospital of Coimbra (UHC), Portugal 27. Forty glioma patients with annotated WHO tumor grade were considered in this study, which included 3 grade I, 3 grade II, 6 grade III, and 28 grade IV patients. One possibility to explain the differences in sample sizes is that glioma is asymptomatic until it has progressed to a higher grade, as evinced by the fact that grades III and IV are the most commonly diagnosed grades of glioma 28. For the Affymetrix microarray, only the well-annotated probe sets with a "present" call in at least two thirds of the samples were retained. In total, 18,041 probe sets encoding 10,385 genes were considered in this study, which included 108 probe sets encoding 84 ion channel genes. Spearman's rank correlation test was used to identify the genes in which the gene expression level was significantly correlated with glioma grade. Figure 1 shows the distribution of the correlation coefficient (ρ) for all the genes. The ρ of ion channel gene expression profiles is significantly more negative than that of the non-ion channel gene expression profiles (t-test: P = 4.9 × 10−6) (Fig. 1). In total, we found that 2,559 probe sets encoding 1,913 genes are differentially expressed with glioma grade (Spearman's rank correlation test: adjusted P < 0.05 after Benjamini & Hochberg correction). Among these probe sets, 842 sets are up-regulated, while 1,717 sets are down-regulated in high-grade glioma (Supplementary Table S1). Among the deregulated probe sets, only two probe sets encoding two ion channel genes, CLIC1 and CLIC4 (both encode chloride intracellular channels), are up-regulated in high-grade glioma (Fig. 2A and Supplementary Table S2). In contrast, 22 probe sets encoding 16 ion channel genes, including both voltage-gated ion channels and ligand-gated channels, are down-regulated in high-grade glioma (Fig. 2B and Supplementary Table S2). Among the down-regulated probe sets, there is a significant enrichment of ion channel genes in high-grade glioma (Fisher's exact test: P = 1.5 × 10−3). To validate the above findings, we analyzed an independent gene expression dataset (GEO accession: GSE4290) from the Henry Ford Hospital (HFH) 29, containing 23 non-neoplastic samples and tumor samples from 45 grade II, 31 grade III, and 77 grade IV glioma patients. The expression pattern of ion channel genes in the HFH cohort mirrored our findings in the UHC cohort: gene expression level and glioma grade were significantly correlated (Spearman's rank correlation test: adjusted P < 0.05 after Benjamini & Hochberg correction) for all the deregulated ion channel genes identified from the UHC cohort (Supplementary Fig. S1 and Table S3).
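The grade-association screen described above (Spearman correlation per probe set, followed by Benjamini & Hochberg correction at adjusted P < 0.05) can be outlined with standard libraries. The sketch below is illustrative only: the expression matrix and grade vector are random placeholders, not the UHC data.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# Placeholder data: rows = probe sets, columns = patients;
# `grades` holds the WHO grade (1-4) of each patient.
rng = np.random.default_rng(0)
expr = rng.normal(size=(1000, 40))     # hypothetical expression matrix
grades = rng.integers(1, 5, size=40)   # hypothetical WHO grades

# Spearman's rank correlation of each probe set with tumor grade.
rho, pval = zip(*(spearmanr(expr[i], grades) for i in range(expr.shape[0])))

# Benjamini & Hochberg correction; adjusted P < 0.05 defines deregulation.
reject, p_adj, _, _ = multipletests(pval, alpha=0.05, method="fdr_bh")
up = [i for i in np.flatnonzero(reject) if rho[i] > 0]    # up-regulated with grade
down = [i for i in np.flatnonzero(reject) if rho[i] < 0]  # down-regulated with grade
```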
Expression of ion channel genes predicts survival in glioma patients. We identified 18 ion channel genes deregulated with glioma grade. Here, we predicted that the expression of these 18 ion channel genes could be used for prognostic purposes in glioma. We assigned the 18 ion channel genes as an ion Channel based Gene (iCG) signature (Table 1 and Fig. 2). Of these 18 ion channel genes, 16 genes (89%) were down-regulated in high-grade gliomas. A weight was assigned to each gene in the iCG signature according to the direction of differential expression: 1 for the up-regulated and -1 for the down-regulated genes in high-grade glioma. A risk score was assigned to each patient based on the iCG signature and gene weights (see Methods for details). A more positive risk score was meant to presage a higher death risk. Next, we tested whether the iCG-based risk score was able to predict survival in glioma. For this purpose, we obtained three independent high-throughput gene expression datasets online: i) a cohort including 95 high-grade glioma patients (GEO accession: GSE43107) from the European Organisation for Research and Treatment of Cancer (EORTC) 30, ii) a cohort composed of 77 high-grade glioma patients (GEO accession: GSE4271) from the M.D. Anderson Cancer Center (MDACC) 31, and iii) a cohort consisting of 50 high-grade glioma patients (http://www-genome.wi.mit.edu/cancer/pub/glioma/) from the Massachusetts General Hospital (MGH) 32. These datasets were chosen based on the large number of samples (sample size ≥ 50) and the availability of clinical outcome data. We defined iCG+ patients as those having a risk score larger than zero, while the other patients were assigned as iCG−. Kaplan-Meier survival curves indicate that there is a significant difference in survival between the iCG+ and iCG− glioma patients in all the validation cohorts (log-rank test: P = 5.2 × 10−6 for the EORTC cohort; P = 4.3 × 10−4 for the MDACC cohort; and P = 8.7 × 10−5 for the MGH cohort) (Fig. 3). The association between iCG risk score and survival was also confirmed by univariate Cox proportional hazard regression of survival. The iCG+ patients have a 2.84-, 2.51-, and 3.95-fold increased risk of death in the EORTC cohort, the MDACC cohort, and the MGH cohort, respectively (Table 2). We divided iCG into two subsets: iCG-up (the two ion channel genes that are up-regulated in high-grade glioma) and iCG-down (the 16 ion channel genes that are down-regulated in high-grade glioma). We tested the prognostic power of iCG-up and iCG-down separately. Kaplan-Meier survival curves indicate that there is a significant difference in survival between the iCG-up+ and iCG-up− glioma patients in the MDACC and MGH cohorts (log-rank test: P = 6.4 × 10−3 for the MDACC cohort; and P = 6.7 × 10−3 for the MGH cohort), but not in the EORTC cohort (log-rank test: P = 1.3 × 10−1) (Supplementary Fig. S2). As for iCG-down, we found that the iCG-down+ patients have a significantly increased risk of death in all the validation cohorts (log-rank test: P = 1.8 × 10−5 for the EORTC cohort; P = 8.7 × 10−4 for the MDACC cohort; and P = 1.7 × 10−4 for the MGH cohort) (Supplementary Fig. S3).
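The stratified survival comparison just described follows a standard pattern: a log-rank test between the iCG+ and iCG− groups plus a univariate Cox regression on iCG status. A minimal sketch using the lifelines library is given below; the eight patients are placeholders, not the actual cohorts.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder cohort: survival time in months, death event indicator,
# and the iCG risk score computed as in Methods.
df = pd.DataFrame({
    "time": [12.0, 30.5, 8.2, 24.1, 15.7, 40.3, 6.5, 28.0],
    "event": [1, 0, 1, 1, 1, 0, 1, 1],
    "risk_score": [1.3, -0.8, 2.1, 0.4, -1.5, 0.9, 1.8, -0.3],
})
df["iCG_pos"] = (df["risk_score"] > 0).astype(int)  # iCG+ if score > 0

# Log-rank test between iCG+ and iCG- patients.
pos, neg = df[df["iCG_pos"] == 1], df[df["iCG_pos"] == 0]
res = logrank_test(pos["time"], neg["time"],
                   event_observed_A=pos["event"], event_observed_B=neg["event"])
print(res.p_value)

# Univariate Cox proportional hazards regression on iCG status;
# the fold increase in death risk is exp(coef).
cph = CoxPHFitter().fit(df[["time", "event", "iCG_pos"]],
                        duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```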
iCG is better than random gene signatures selected from the human genome. A computational study by Venet et al. demonstrated that, in breast cancer, most published prognostic gene signatures were not significantly better than random gene sets of identical size randomly selected from the human genome 33. To address this issue in our study, we conducted a resampling test for the iCG signature. We obtained 1,000 random gene signatures by randomly selecting 18 genes from the human genome (the same size as the iCG signature). For each random gene signature, we calculated the risk score for each glioma patient and performed univariate Cox proportional hazard regression of survival to evaluate the association between the random gene signature and glioma clinical outcome. The Wald statistic (Z), the ratio of the Cox regression coefficient to its standard error, was recorded for each random gene signature. This ratio indicates the significance level of the relationship between survival and the risk score. Our alternative hypothesis was that the Z of iCG should be more positive than expected by chance if the prognostic power of iCG was significantly better than that of the random gene signatures. We found that, in all the validation cohorts, we could reject the null hypothesis that the association between iCG and survival is by chance. The Z of iCG is significantly larger than that of the random gene signatures (right-tailed: P < 0.001 for the EORTC cohort; P = 0.045 for the MDACC cohort; and P = 0.007 for the MGH cohort) (Fig. 4). iCG performs better than random gene signatures selected from glioma-associated genes. Next, we asked whether the prognostic power of iCG is superior to that of the other genes that are associated with glioma by conducting a second resampling test. We limited the resampling pool to the genes that were differentially expressed with glioma grade (Supplementary Table S1) and defined these genes as glioma-associated. We then randomly selected 18 genes from the pool of glioma-associated genes and tested the predictive power of this random gene signature. The performance of the random gene signature was quantified by the Wald statistic (Z) computed by univariate Cox proportional hazard regression of survival. We found that the prognostic power of iCG is significantly better than that of 1,000 random glioma-associated gene signatures in the EORTC and MGH cohorts (right-tailed: P = 0.006 for the EORTC cohort; and P = 0.023 for the MGH cohort) (Fig. 4), but not in the MDACC cohort (right-tailed: P = 0.260) (Fig. 4).
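The resampling comparison lends itself to a short sketch. Everything below is placeholder data; the Wald statistic is read from the z column of the lifelines Cox summary, and the published risk score additionally applies the ±1 gene weights defined in Methods, which are omitted here for brevity.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)

# Placeholder expression table: 30 patients x 100 genes, plus survival columns.
genes_all = [f"g{i}" for i in range(100)]
expr_df = pd.DataFrame(rng.normal(size=(30, 100)), columns=genes_all)
expr_df["time"] = rng.exponential(24, size=30)
expr_df["event"] = rng.integers(0, 2, size=30)

def wald_z(df: pd.DataFrame, genes: list) -> float:
    """Wald statistic (coef / se) of a risk score built from `genes`.

    The score here is a plain standardized sum; the published score also
    multiplies each gene by its +1/-1 deregulation weight (see Methods)."""
    score = df[genes].apply(lambda g: (g - g.mean()) / g.std()).sum(axis=1)
    fit = pd.DataFrame({"time": df["time"], "event": df["event"], "score": score})
    return float(CoxPHFitter().fit(fit, "time", "event").summary.loc["score", "z"])

icg_like = genes_all[:18]   # stand-in for the 18 iCG genes
z_icg = wald_z(expr_df, icg_like)
null_z = [wald_z(expr_df, rng.choice(genes_all, 18, replace=False).tolist())
          for _ in range(200)]   # 1,000 resamples in the paper

# Right-tailed resampling P value: fraction of random signatures whose
# Z is at least as large as the signature's Z (with the +1 correction).
p_right = (sum(z >= z_icg for z in null_z) + 1) / (len(null_z) + 1)
```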
iCG is an independent prognostic factor. Using multivariate Cox proportional hazard regression, we tested the performance of iCG in comparison with the other prognostic factors associated with glioma outcome. Due to the limited availability of patient medical data, we were unable to consider the MDACC and MGH cohorts; only the EORTC cohort was investigated here. The EORTC cohort was the largest dataset in this study and was mainly composed of AOA and AOD patients. First, we considered clinical factors, including age, gender, type of surgery, and performance status; molecular factors, such as loss of heterozygosity (LOH) on chromosomes 1p and 19q; and histological factors (AOA or AOD). Here, type of surgery was categorized into biopsy, partial resection, and total resection and encoded as 1, 2, and 3, respectively. Performance status was based on the Eastern Cooperative Oncology Group standard 30,34. In total, we identified 89 glioma patients without missing data. Multivariate Cox proportional hazards regression of survival indicated that the iCG status is the most significant covariate in relation to the other clinical and pathological factors (Table 3). Second, we added more molecular prognostic factors into the multivariate test, including epidermal growth factor receptor (EGFR) amplification, isocitrate dehydrogenase 1 (IDH1) mutation, and O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation. Due to missing observations, only 53 patients were included. Multivariate Cox proportional hazards regression of survival demonstrated that the iCG status is still the most significant factor in the new multivariate model (Table 4). Mutations in IDH1 are among the key events in the formation of diffuse gliomas and are associated with prolonged survival. Here, we also found that the IDH1 mutation status was one of the significant prognostic covariates in the multivariate model (Table 4). Therefore, we further stratified the patients according to the IDH1 mutation status and repeated the Cox proportional hazards regression. For patients with and without IDH1 mutation, the iCG+ patients have a 3.91- and 3.32-fold increased risk of death, respectively (Cox proportional hazard regression: P = 3.2 × 10−3 for patients without mutation; and P = 4.9 × 10−3 for patients with mutation). Kaplan-Meier survival curves also demonstrated significantly reduced survival for the iCG+ patients in each subset grouped by the IDH1 mutation status (Supplementary Fig. S4).

Discussion
Because of their highly influential role in central biological processes (e.g. cell signaling, motility, and proliferation), ion channel genes have been implicated in a wide variety of disease processes 2-6. In particular, the role of ion channels in cancer pathology has been heavily documented in breast 11,35, lung 12,36, colon 13,37, and skin 38,39 cancers. In this study, we identified a prognostic gene signature composed of 18 ion channel genes (iCG), which successfully predicted glioma outcome in three independent validation cohorts. We therefore expand knowledge of the link between deregulation of ion channel gene expression and cancer by examining this link within glioma patients, and we suggest that deregulation of ion channel genes may be a general feature of cancer pathology (see also Lastraioli et al. 7). In sum, our results indicate that i) ion channels play an important role in the pathology of glioma, ii) ion channels generally tend to be down-regulated in high-grade glioma (only 2 of 18 genes were up-regulated here), and iii) iCG is a superior and independent covariate, which adds prognostic value to traditional clinical and pathological factors. The explicit link between the deregulation of ion channel expression in glioma and the prognosis of glioma patients adds to a growing body of evidence that alterations to ion channel gene expression may be a common feature in various cancers 7. Indeed, CACNA1D 40, the intracellular chloride channel genes 41, GRIA2 42, the potassium channel genes 43, NALCN 44, P2RX7 45, SCN1A 46, and VDAC1 47 from iCG are all under investigation in cancer therapies in some capacity. This indicates that at least 15 of the 18 genes in iCG (83%) are already recognized for their potential in cancer treatments. More work is needed to determine whether the other genes in our signature could be exploited for cancer therapy. We conducted Spearman's correlation test between gene expression level and WHO glioma grade. We found that the correlation coefficients for the ion channel genes were statistically more negative than those of the other genes.
More interestingly, 16 out of 18 (89%) ion channel genes in iCG, including Ca2+, K+, Na+, and Cl− channels, were down-regulated in high-grade glioma. All these results suggest that high-grade glioma expresses fewer ion channels compared with low-grade tumors, which is consistent with previous findings for voltage-gated Na+ 48 and K+ 49 channels. Intriguingly, however, this bias toward down-regulation is not reported in other studies linking ion channels to cancer 11-13. More work is needed to understand what mechanisms produce this pattern. We also found a contradictory expression pattern for the BK channel. BK channels are essential for the regulation of several key physiological processes and are especially fundamental to the control of neuronal excitability 50. BK currents in glioma cells were found to be more sensitive to intracellular Ca2+ concentration compared with those in normal glial cells 51,52. BK channels have been found to be up-regulated in biopsies of high-grade glioma 17. Also, a positive correlation was detected between BK channel expression and the malignancy grade of glioma 53. The BK channel gene, KCNMA1, is on the iCG gene list. However, the weight of KCNMA1 is negative in this study (Table 1), which means the gene expression of KCNMA1 is negatively correlated with glioma grade in the UHC cohort. To double-check the expression pattern of KCNMA1, univariate Cox proportional hazards regression was conducted to estimate the relationship between glioma survival and KCNMA1 expression level in the three validation cohorts. Interestingly, we observed that the risk of death in glioma was inversely and significantly associated (hazard ratio < 1) with KCNMA1 expression level in all three validation datasets (Table 5), which is consistent with the finding from the UHC cohort. Therefore, the down-regulation of KCNMA1 in malignant glioma is unlikely to be by chance. Seeking the reason for the opposing observations for the BK channel is beyond the scope of the current study. However, our finding suggests that the role of the BK channel in glioma is a vexed question, and further extensive investigation is needed. A published bioinformatic study by Venet et al. demonstrated that most published prognostic gene signatures of breast cancer are not more strongly associated with cancer survival than random gene sets 33. Venet et al. compared 47 prognostic breast cancer signatures to signatures composed of random genes and found that roughly 60% of the published signatures were not significantly better than randomized gene signatures of identical size 33. This important finding reminds us that the strength of a putative gene signature to predict survival outcomes must be tested explicitly, since many randomly generated gene signatures can also predict survival. Using resampling tests, we found that the prognostic power of iCG is better than that of random gene sets selected from the human transcriptome in all the validation cohorts. More importantly, we demonstrate that iCG performs even better than random gene signatures selected from glioma-associated genes in two out of three validation datasets. Therefore, it is reasonable to conclude that the iCG signature overcomes the problem raised by Venet et al. Finally, our analyses add to the growing body of evidence that cancer is a disease under quantitative genetic and genomic control 54. Although there are loci of major effect in cancer (e.g.
oncogenes and tumor suppressor genes), many loci of small effect likely also contribute to carcinogenesis and metastasis. The contribution of many genes (as well as other, non-genetic mechanisms 55) to the cancer disease process not only makes the development of effective cancer treatments difficult, but also means that researchers should examine cancer as any other complex trait. The fact that cancer is a complex trait also means that there are many potential therapeutic targets that can be exploited in the future. These may both enhance our understanding of cancer as a biological phenomenon and provide the means to overcome particularly intractable problems in cancer therapy, such as the development of chemoresistance. This study confirms the central role of ion channels in brain cancer despite the lack of a clear molecular mechanism. The expression profiling of ion channel genes serves as a significant and independent tool for glioma outcome prediction. When working cooperatively with known clinical, molecular, and histological prognostic factors, the iCG signature will enhance the prediction accuracy for identifying glioma patients at higher risk of death. Our study also suggests that ion channels may serve as potential drug targets in future cancer therapy.

Methods
High-throughput gene expression data. Five independent glioma datasets, including the UHC (GEO accession: GSE43289) 27, HFH (GEO accession: GSE4290) 29, EORTC (GEO accession: GSE43107) 30, MDACC (GEO accession: GSE4271) 31, and MGH (http://www-genome.wi.mit.edu/cancer/pub/glioma/) 32 cohorts, were collected in this study. The UHC and HFH cohorts, which are based on the Affymetrix Human Genome U133 Plus 2.0 Array, were used to measure the correlation between gene expression level and glioma grade. The EORTC, MDACC, and MGH cohorts were based on the Affymetrix Human Exon 1.0 ST Array, the Affymetrix Human Genome U133A/B Array, and the Affymetrix Human Genome U95 Version 2 Array, respectively, and were used to validate the prognostic power of iCG. The robust multi-array average (RMA) function in the "affy" package of Bioconductor 56 was used to summarize the expression level of each probe set for the microarray data from the UHC, HFH, MDACC, and MGH cohorts. For the UHC dataset, the function "mas5calls" in the "affy" package 57 was used to compute the present/absent call for each probe set. For the EORTC cohort, the gene expression values were summarized using the Affymetrix Power Tools Version 1.15.0 (http://www.affymetrix.com/). We limited our analysis to the probe sets with unique annotations. Genes on chromosomes X and Y were removed to avoid potential confounding factors. For a gene with multiple probe sets, we used the geometric mean of the expression values of all probe sets that mapped to the gene in the three validation cohorts (EORTC, MDACC, and MGH).
Risk score. A risk score was calculated for each glioma patient using a linear combination of the expression values of the genes in the iCG signature 58-60. The formula is shown below:
S = Σ_{i=1}^{n} W_i × (e_i − μ_i) / τ_i
Here, S is the risk score of the patient; n is the number of genes in the iCG signature; W_i denotes the weight of gene i (as shown in Table 1), which indicates the direction of deregulation for gene i (1 or -1); e_i denotes the expression level of gene i; and μ_i and τ_i are the mean and standard deviation of the gene expression values for gene i across all samples, respectively. In each validation cohort, glioma patients were stratified into iCG+ and iCG− groups with zero as the cutoff.
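The risk-score formula above translates directly into code. The sketch below is illustrative; the expression matrix and weight vector are hypothetical stand-ins for the published iCG gene table (Table 1).

```python
import numpy as np

def icg_risk_scores(expr: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Risk score S = sum_i W_i * (e_i - mu_i) / tau_i for each patient.

    expr:    (n_patients, n_genes) expression matrix for the signature genes
    weights: (n_genes,) vector of +1 (up-regulated) / -1 (down-regulated)
    """
    mu = expr.mean(axis=0)                # per-gene mean across all samples
    tau = expr.std(axis=0, ddof=1)        # per-gene standard deviation
    return ((expr - mu) / tau) @ weights

# Hypothetical example: 4 patients, 3 signature genes.
expr = np.array([[5.1, 2.3, 7.8],
                 [4.2, 3.1, 6.9],
                 [6.3, 1.9, 8.4],
                 [5.5, 2.8, 7.1]])
weights = np.array([1, -1, -1])
scores = icg_risk_scores(expr, weights)
icg_positive = scores > 0   # iCG+ patients (higher predicted death risk)
```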
2018-04-03T03:21:13.680Z
2015-08-03T00:00:00.000
{ "year": 2015, "sha1": "091fc59560f8c60e64c97709aeab67f7265c42e9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep11593.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d27db01affca76ff2f6578e63fed95b4e0b8a8a4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235260273
pes2o/s2orc
v3-fos-license
Mutation of SlARC6 leads to tissue-specific defects in chloroplast development in tomato
The proliferation and development of chloroplasts are important for maintaining the normal chloroplast population in plant tissues. Most studies have focused on chloroplast maintenance in leaves. In this study, we identified a spontaneous mutation in a tomato mutant named suffulta (su), in which the stems appeared albinic while the leaves remained normal. Map-based cloning showed that Su encodes a DnaJ heat shock protein that is a homolog of the Arabidopsis gene AtARC6, which is involved in chloroplast division. Knockdown and knockout of SlARC6 in wild-type tomato inhibit chloroplast division, indicating the conserved function of SlARC6. In su mutants, most mesophyll cells contain only one or two giant chloroplasts, while no chloroplasts are visible in 60% of stem cells, resulting in the albinic phenotype. Compared with mature tissues, the meristem of su mutants suggested that chloroplasts could partially divide in meristematic cells, suggesting the existence of an alternative mechanism in those dividing cells. Interestingly, the adaxial petiole cells of su mutants contain more chloroplasts than the abaxial cells. In addition, prolonged lighting can partially rescue the albinic phenotypes in su mutants, implying that light may promote SlARC6-independent chloroplast development. Our results verify the role of SlARC6 in chloroplast division in tomato and uncover the tissue-specific regulation of chloroplast development.

Introduction
Chloroplasts are an important organelle in which plants absorb solar energy and produce sugars 1. The number, size, and morphology of chloroplasts directly affect leaf color and photosynthesis intensity. Chloroplast division and proliferation are important for maintaining the chloroplast population. In Arabidopsis, a number of mutants that are defective in the accumulation and replication of chloroplasts (arc) have been identified, in which chloroplast number, size, and shape are severely affected 2-5. Similar to their original microbial ancestors, chloroplasts replicate by binary fission in plants, which is driven by ring-like dynamic division machinery located at the middle of the organelle 6-9. In plants, the contractile component of the division machinery includes the FtsZ ring (Z ring), which is formed by the tubulin-like heteropolymer-forming proteins FtsZ1 and FtsZ2 at the inner membrane of the chloroplast, and the dynamin-related protein DRP5B, which is located at the outer membrane of the chloroplast 10-18. ARC6 and PARC6 (paralog of ARC6) encode chloroplast-targeted proteins that assemble and stabilize the Z ring by directly interacting with FtsZ2 and FtsZ1 19-22. ARC6 is closely related to Ftn2, a prokaryotic cell division protein. PARC6 is unique to vascular plants. It is possible that PARC6 was duplicated from ARC6 after the separation between nonvascular and vascular plants 23. In Arabidopsis arc6 mutants, a mesophyll cell usually contains only two giant chloroplasts 2,21. The chloroplast number in parc6 mutants is roughly one-tenth of that in the wild type (WT), while PARC6 overexpression often inhibits chloroplast division by repressing FtsZ assembly 23-26. ARC6 and PARC6 can recruit PLASTID DIVISION1 (PDV1) and PDV2, both of which are located in the outer envelope membrane (OEM) 23,27. PDV1 and PDV2 then further recruit dynamin-related protein 5B (DRP5B/ARC5) to the OEM of the mid-chloroplast 28.
The Z ring is confined to the mid-chloroplast, and its formation requires the chloroplast Min system, which includes ARC3, MinD1, and MinE1 9,24,27,29-32. Multiple chloroplast division site 1 (MCD1) is another plant-specific protein required for Z ring positioning 33,34. MCD1 was previously shown to interact with ARC6 in the stroma and with FtsZ2 in an ARC6-dependent manner 33.

Much of our understanding of chloroplast division has been derived from studies on Arabidopsis leaves. However, fossil records reveal that most ancient vascular plants had only the axis without leaves, suggesting that chloroplasts emerged much earlier than leaves 35,36. Thus, knowledge of chloroplasts from other tissues can contribute to our understanding of chloroplast division and physiology.

In this study, we analyzed a tomato mutant named su, a naturally spontaneous mutant collected by the Tomato Genetics Resource Center (TGRC, http://tgrc.ucdavis.edu). A prominent phenotype of su mutants is albinic stems with visually normal leaves. Bulked segregant analysis (BSA), map-based cloning, and functional verification by virus-induced gene silencing (VIGS) and CRISPR all showed that mutation of SlARC6 led to the differential albinic phenotype. Further observations indicated that chloroplasts almost disappeared in the stem of su mutants, while those in leaves fused into giant abnormal plastids, demonstrating the differential effect of the SlARC6 mutation on stems and leaves. The defective chloroplasts in the stem of su mutants could be partially rescued by prolonged light exposure, suggesting that light signaling can regulate chloroplast development.

Phenotypic analyses of the suffulta (su) mutant

Among the natural lines collected by the TGRC (http://tgrc.ucdavis.edu/), we characterized a mutant named su (LA0628), in which a spontaneous mutation causes albinic stems. We first examined the color changes of stems and leaves over different developmental stages in su mutants and WT (Ailsa Craig) plants (Fig. 1A-L, Supplementary Fig. 1). In 1-week-old seedlings, su mutants exhibited pale hypocotyls, while the color of cotyledons and young leaves stayed similar to that in WT (Fig. 1A, B, G, H). At 2-3 weeks, the newly formed tissues, including the stems, petioles, and rachis, in su mutants were still albinic, but the leaves appeared WT-like (Fig. 1; Supplementary Fig. 1), suggesting that the albinic-stem defect is not a developmental stage-associated phenotype.

To further assess whether the albinic phenotype is related to chlorophyll, we measured the pigment content in 3-week-old tomato seedlings. Our results showed that the levels of multiple pigments, including Chl a, Chl b, carotenoids (Car), and total Chl, in su stems were significantly lower than those in WT stems, but such a difference was not detected in the leaves (Fig. 1M, N). The pale color and the reduced pigment content prompted us to hypothesize that the albinic phenotype in su mutants might derive from the absence of chlorophyll precursors or from dysfunctional chloroplasts.

Fine mapping of Su

To identify the mutation, we developed an F2 population by crossing su mutants to WT. All F1 plants exhibited normal color in both stems and leaves, indicating that the albinic phenotype is caused by a recessive mutation. In the F2 population, the segregation ratio between normal stems and albinic stems was ~3:1, consistent with Mendel's law of single-gene inheritance (Supplementary Table 1).
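A segregation result like this is typically screened with a chi-square goodness-of-fit test against the expected 3:1 ratio. The sketch below is illustrative only: the paper does not report the raw F2 counts, so the numbers used here are hypothetical placeholders.

# Hypothetical sketch: chi-square goodness-of-fit test for a 3:1
# Mendelian segregation ratio (normal stems : albinic stems).
# The observed counts below are placeholders, NOT the paper's data.
from scipy.stats import chisquare

observed = [1370, 456]                     # hypothetical normal-stem vs albinic-stem F2 counts
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]  # 3:1 expectation for a single recessive locus

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.3f}, p = {p:.3f}")
# A large p-value (e.g. > 0.05) means the counts are consistent with 3:1,
# supporting single-gene recessive inheritance.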
Using a next-generation sequencing-based BSA approach, we identified an associated locus on the long arm of chromosome 4 (Fig. 2A). We next generated seven InDel molecular markers (M17, M24, M28, M220, M221, M223, and M31) in the 2 Mb candidate interval between 63.5 and 65.5 Mb of chromosome 4. Using these InDel markers among 456 F2 plants, we delimited the su mutation to the region between markers M28 and M220, at which 11 and 3 recombination events were detected, respectively (Fig. 2B; Supplementary Table 3). We next amplified and sequenced the genomic sequences (including all introns and exons) of the 19 candidate genes in this interval. A comparative sequence alignment showed a C-to-T point mutation 718 bp downstream of the predicted translation initiation site in ORF8 of su mutants, and the mutation site was heterozygous in F1 plants (Fig. 2E; Supplementary Fig. 2).

According to the annotated tomato genome (ITAG release 2.4), ORF8 encodes a heat shock protein of 819 amino acids. The N-terminal region of ORF8 contains a putative DnaJ domain and a chloroplast-targeting signal, and the C-terminal region contains a transmembrane domain (TMD). The mutation in su mutants leads to a premature stop codon (TGA), producing a truncated protein lacking the conserved TMD (Fig. 2E). Phylogenetic analyses showed that the Arabidopsis homolog of ORF8 is AtARC6, which was reported to function in chloroplast division (Supplementary Fig. 3) 37. Together with the physiological phenotypes, we speculated that the mutation in su mutants could cause defective division and development of chloroplasts.

Functional verification of the Su mutation

To further test ORF8 function in tomato, we knocked down ORF8 by VIGS. To this end, a fragment of ORF8 was inserted into the pTRV2 vector for infection. We used phytoene desaturase (PDS) in the same vector as the positive control, and empty pTRV2 as the negative control. The WT-like appearance of the negative control (Fig. 3A-C) and the photobleaching phenotype of the positive control (Fig. 3D-F) indicated the effectiveness of the VIGS procedure. As expected, ORF8 silencing produced albinic stems without a leaf phenotype, similar to su mutants (Fig. 3G-I). Consistent with the phenotype, the expression level of ORF8 was markedly decreased in both leaves and stems (Supplementary Fig. 4). To examine whether ORF8 functions in chloroplast division, we isolated mesophyll protoplasts from leaves and the epidermis of stems. Under microscopy, we observed many protoplasts with only one (or occasionally a few) giant chloroplasts in the ORF8 VIGS-silenced plants, in sharp contrast with the multiple round chloroplasts in protoplasts isolated from the negative control (Fig. 3J, K).

To further verify this result, we constructed orf8 mutants using the CRISPR/Cas9 technique (Fig. 4). Two homozygous CRISPR lines were obtained: line 2 with a G insertion and line 23 with a G deletion (Fig. 4A). Similar to the su mutants, the ORF8 CRISPR lines showed albinic stems (Fig. 4A, C). These results indicate that ORF8 is indeed the target gene.

Tissue specificity of chloroplast development in tomato

Loss of function of ARC6 in Arabidopsis was shown to disrupt the stabilization of the Z ring during chloroplast division 21,22. However, it was unclear why the albinic phenotype was only observed in the stem of su mutants.
Interestingly, in tissue sections prepared with a vibratome, we observed a remarkable alteration of chloroplast morphology in the leaves of su mutants. Mesophyll cells of su mutants usually contained only one or two giant chloroplasts, clearly different from the multiple round-shaped chloroplasts in WT mesophyll cells (Fig. 5A-F). In contrast to the leaves, cross-sections of su stems showed that the chloroplasts had almost completely disappeared (Fig. 5G-J). Occasionally, stem cells containing one or two giant chloroplasts were observed; those chloroplasts were ~396 μm² in area, roughly 40 times the size of WT chloroplasts (Fig. 5K, L; Supplementary Fig. 5).

To further confirm this observation, we performed high-resolution imaging by transmission electron microscopy (TEM). The overall morphology and number of chloroplasts in su mutants were consistent with the observations of the live tissue sections (Fig. 5M, P, S, V). In WT, the ultrastructure of chloroplasts appeared uniform, with clearly visible stacks of thylakoids (Fig. 5N, Q, T, W). However, the ultrastructure of chloroplasts in su mutants was fairly diverse under TEM, and the lamellae comprising grana thylakoids and stroma thylakoids were sparse (Fig. 5O, R, U, X). These results suggest that the absence of chloroplasts, together with the sparse grana and stroma thylakoids in the remaining giant chloroplasts, contributes to the albinic stem phenotype of su mutants.

The role of SlARC6 in chloroplast division in meristematic cells

One prominent difference between the leaf and the stem is the cell division pattern during postembryonic growth: leaf growth involves continuous cell division and cell expansion, while cells in the stem mostly undergo expansion. To understand whether cell division is the major reason for the distinct chloroplast phenotype, we examined meristematic cells. The typical meristem dome consists of three cell layers, in which most cells contain several immature vacuoles 38. Chloroplasts are derived from proplastids, which feature two envelope membranes and limited stromal thylakoids 3,39. In the meristem, proplastids and chloroplasts were found in almost all cells of both WT and su mutants, but some chloroplasts of su mutants showed abnormal morphology (Fig. 6A-D). It is therefore possible that chloroplast division and cell division are tightly associated. We next observed the lower part of the meristem, where cell division is as rare as in the stem (Fig. 6G). The cells in this area featured large but not fully expanded vacuoles (Fig. 6E, F). Compared with WT cells, many cells in this area of su mutants contained abnormal chloroplasts (Fig. 6E, F). Compared with the meristem, most chloroplasts in mature stems had disappeared, suggesting that chloroplasts may divide concomitantly with cell division but do not divide when only cell expansion occurs in the stem.

Light can partially rescue the albinic phenotype

In the WT, the adaxial and abaxial surfaces of the petiole showed no noticeable difference in color (Fig. 7A, B). In su mutants, however, the abaxial surface appeared yellowish compared with the adaxial surface (Fig. 7D, E). We then observed the chloroplasts in fresh tissue sections of the petiole.
Under differential interference contrast (DIC) microscopy, the epidermal cells of the WT petiole were full of chlorophyll, so the cells appeared opaque, whereas the epidermal cells of su mutants appeared transparent due to the lack of chloroplasts (Fig. 7C, F). Interestingly, the abaxial surface of su mutants was more transparent than the adaxial side: the chloroplast number within cells on the adaxial side was significantly higher than on the abaxial surface in su petioles (Fig. 7C, F). This distinction could derive from the different amounts of light received by the adaxial and abaxial sides.

Light plays an important role in chloroplast development and chlorophyll biosynthesis 40,41. To assess the role of light in the albinic phenotype, we cultured both WT and su mutants under 10 h day/14 h night or 16 h day/8 h night conditions for 20 days. Both WT and su mutants grown under the longer photoperiod had greener leaves and stems than those grown under 10 h day/14 h night (Fig. 7G-J, O-S), and the stems of su mutants showed the most pronounced change in color (Fig. 7G-J). In freshly sectioned tissues, we observed a substantial rise in chloroplast-containing cells in su stems after prolonged light exposure, with the proportion of chloroplast-containing cells increasing from ~50% under 10 h day/14 h night conditions to ~90% under 16 h day/8 h night conditions (Fig. 7G-N, U). In protoplasts isolated from the epidermal cells of stems, the proportion of chloroplast-containing cells increased from 38% to 52% with prolonged light exposure, confirming that light can promote the formation of chloroplasts. To further verify this, we isolated protoplasts from the stems of su mutants grown under the shorter photoperiod and then exposed the protoplasts to light for 5 h. Our quantification indicated that the proportion of chloroplast-containing protoplasts increased from 38% to 48% (Fig. 7U). In addition, we quantified the average number of chloroplasts within su protoplasts: up to 99% of su protoplasts contained one or two chloroplasts at 10 h day/14 h night, while ~27% of the protoplasts contained three or more chloroplasts at 16 h day/8 h night (Fig. 7O-S, V). These results indicate that light can promote chloroplast development in su mutants.

Discussion

Chloroplasts are biological factories where plants transform solar energy into organic substances for plant growth and development. Thus, it is important to maintain the appropriate number and physiology of chloroplasts. In Arabidopsis, a number of mutants named arc were reported to have defective chloroplast division, and larger chloroplasts were observed in the mesophyll cells of mutants of ARC family members 2,4,5. Arabidopsis arc6 mutants have abnormal chloroplasts in mesophyll cells but no other dramatic changes at the whole-plant level 21. Here, we found that mutation of SlARC6 in tomato leads to an albinic phenotype in stems. A similar phenotype was previously reported in three su accessions that had slightly paler leaves and albinic stems 42. Interestingly, phenotypic variability was observed among the three accessions, implying that su mutations in different backgrounds can affect the phenotype 42.
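Contrasts in the proportion of chloroplast-containing protoplasts, such as the 38% versus 52% difference reported above, can be screened with a two-proportion z-test. The sketch below is illustrative: the protoplast sample sizes are hypothetical, since only the percentages are given in the text.

# Hypothetical sketch: two-proportion z-test for the light-rescue effect.
# Sample sizes n1, n2 are placeholders; only the proportions (38% vs 52%)
# come from the text above.
from math import sqrt
from scipy.stats import norm

x1, n1 = 76, 200   # hypothetical: chloroplast-containing / total, short photoperiod (38%)
x2, n2 = 104, 200  # hypothetical: chloroplast-containing / total, long photoperiod (52%)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = 2 * norm.sf(abs(z))                        # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.4f}")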
In higher plants, chloroplasts divide and replicate via a contractile division complex that includes the FtsZ ring (Z ring), a protein complex located on the stromal surface of the inner envelope membrane, and the ARC5 ring, a protein complex located on the cytosolic surface of the OEM 10-18. ARC6 evolved from a prokaryotic cell division protein and bears three conserved domains: an N-terminal region, which protrudes into the stroma and directly interacts with FtsZ2; a C-terminal region, which extends into the intermembrane space and interacts with PDV2; and a TMD 27. The ARC6-FtsZ2 interaction is required for the Z ring to localize to the stromal surface of the inner envelope membrane 23,24,27. In su mutants, the absence of the TMD and the C-terminal region of SlARC6 may prevent its localization on the inner membrane of chloroplasts, which in turn prevents localization of the Z ring at the mid-chloroplast.

However, the number of chloroplasts differed dramatically between the leaves and stems of su mutants. In mature mesophyll cells, chloroplasts inherited from mother cells propagate through chloroplast division. Previous observations in Arabidopsis arc6 mutants showed that cells in the shoot apical meristem, leaf primordium, and mature leaves all contain two larger plastids, suggesting that plastids can divide during cell division 2,21,37. In the leaves of tomato su mutants, we found a similar phenotype but rarely observed any chloroplasts in stems, suggesting that chloroplast development is entirely blocked in expanding cells. Based on these findings, we speculate that the tissue specificity of plastid development is likely not caused by differential ARC6 functions but rather by the distinct cell division patterns of leaves and stems. The mutation of ARC6 thus provides a good entry point for insight into the spatial and tissue-specific regulation of chloroplast development.

In Arabidopsis, overexpression of PDV1 and PDV2 can increase the number of chloroplasts. Interestingly, expression of PDV1 and PDV2 can be promoted by exogenous cytokinin treatment or by overexpression of cytokinin-responsive transcription factor 2 43,44. In addition to cytokinin, GA-deficient mutants of Arabidopsis (ga1-3) and Oryza sativa (d18-AD) both exhibit reduced chloroplast division and decreased expression of FtsZ2, ARC6, DRP5B, and PDV 45. As GA and cytokinin coordinate to regulate the development of stems and leaves from the shoot apical meristem, the varied distribution of these two hormones in different tissues could contribute to the tissue specificity of chloroplast division. In addition to hormones, FHY3, a key regulator of far-red light signaling, was reported to activate the expression of ARC5, and the large-chloroplast phenotype of fhy3 mutants could be rescued by expression of ARC5 46,47. This result suggests that light could also be a key regulator of chloroplast division. The partial complementation of su mutants by extended lighting provides further evidence that light is involved in ARC-regulated chloroplast division.

Plant materials and growth conditions

The spontaneous su mutant (LA0628), LA1589, and Ailsa Craig (AC, accession number LA2838A) were provided by the Tomato Genetics Resource Center (http://tgrc.ucdavis.edu). The mapping population of Su was constructed by crossing the su mutant and LA1589, and the F2 population was obtained by selfing the F1. All plants were cultured in a greenhouse at 26°C.
Determination of chlorophyll content

A 0.2 g sample was placed in a 2 mL centrifuge tube with liquid nitrogen and ground into powder. Chlorophyll was extracted in methanol, and the absorbance was read on a microplate fluorometer at 666, 653, and 470 nm. The contents of Chl a, Chl b, and carotenoids (mg/g FW) were then calculated from these absorbance values using the standard equations for methanol extracts.

Bulked segregant analysis

Individuals (>30) exhibiting the su mutant-like phenotype and individuals (>30) exhibiting the WT phenotype were collected from the F2 population generated by crossing the su mutant and LA1589. Their genomic DNA was extracted by the CTAB method. Two micrograms of DNA were pooled to construct two samples (mutant-like and WT). Genome sequencing of the two samples was performed on a HiSeq X Ten PE150 (Novogene, Beijing) at a depth of 30× coverage of the tomato genome. The candidate region was analyzed according to the method in ref. 49.

Map-based cloning

InDel molecular markers were designed in the candidate region according to the resequenced DNA and were polymorphic between the su mutant and LA1589 50 (Supplementary Table 3). First, seven markers were analyzed using 456 individuals of the F2 population exhibiting the su mutant-like phenotype, and the candidate region was determined according to the recombinants. Then, six markers in the candidate region were analyzed using 515 individuals exhibiting the su mutant-like phenotype. Finally, the candidate gene was delimited to a smaller interval.

Virus-induced gene silencing

A fragment of ARC6 was designed using the VIGS tool (https://vigs.solgenomics.net) and inserted into the pTRV2 vector (named TRV2-ARC6). A PDS gene fragment was also inserted into the pTRV2 vector and used as the positive control. The pTRV1, pTRV2, pTRV2-PDS, and pTRV2-ARC6 vectors were transferred into Agrobacterium tumefaciens (GV3010), which was then grown on LB plates with 100 μg/mL kanamycin and 50 μg/mL rifampicin. When tomato plants had two fully expanded cotyledons, we injected the cotyledons with an A. tumefaciens suspension; the details of preparing the suspension and the injection are the same as in ref. 51. The plants were cultured at 24°C in a growth chamber, and the phenotype was recorded after 20 days.

Construction of knockout plants

To generate the CRISPR/Cas9-Slorf8 construct, we inserted two target sites of Slorf8 (http://skl.scau.edu.cn/targetdesign) into the pTX vector using the ClonExpress II One Step Cloning Kit (Vazyme Biotech C112-01/02). The construct was introduced into tomato cv. Micro-Tom by Agrobacterium (A. tumefaciens)-mediated transformation. Homozygous transgenic plants were used for phenotypic characterization.

Vibratome sectioning

Agarose (5%) was boiled in a microwave oven, and the agarose solution was cooled to ~65°C before being poured into a tube. A specimen was then suspended in the agarose solution, and air bubbles were aspirated. The tube was placed at 4°C for hardening. Specimen blocks were cut into 80 µm sections with a vibratome, and the sections were observed by DIC microscopy.

Transmission electron microscopy

Fresh samples were cut into 1 mm × 3 mm strips and fixed in 2.5% glutaraldehyde overnight at 4°C. The glutaraldehyde was removed, and the samples were washed three times with phosphate buffer (0.1 M, pH 7.0). The samples were then post-fixed in 1% (wt/vol) osmium tetroxide for 1.5 h at room temperature, after which the osmium tetroxide was removed and the samples were washed three times with phosphate buffer (0.1 M, pH 7.0).
After that, the samples were dehydrated in a graded series of 50, 70, 80, 90, 95, and 100% (vol/vol) ethanol (20 min each) followed by 100% acetone for 20 min, and embedded in Spurr's low-viscosity resin. After polymerization for 24 h at 70°C, ultrathin sections were cut on an ultramicrotome with a diamond knife and picked up on copper grids. Before observation, the samples were stained with uranyl acetate and lead citrate. Images were captured using a Hitachi-7650 transmission electron microscope.
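The pigment equations referred to in the chlorophyll determination section are truncated in this copy of the methods. A minimal sketch of the calculation is given below, assuming the widely used coefficients for pure-methanol extracts (Wellburn 1994); the coefficients and the example absorbances are assumptions here, not values taken from the paper.

# Hedged sketch of the chlorophyll determination step. The coefficients are
# the commonly used ones for pure-methanol extracts (Wellburn 1994) and are
# an ASSUMPTION here, since the paper's own formulas are truncated.

def pigments_mg_per_g_fw(a666, a653, a470, extract_ml, fresh_weight_g):
    chla = 15.65 * a666 - 7.34 * a653                        # Chl a, ug/mL of extract
    chlb = 27.05 * a653 - 11.21 * a666                       # Chl b, ug/mL of extract
    car = (1000 * a470 - 2.86 * chla - 129.2 * chlb) / 221   # carotenoids, ug/mL
    to_mg_g = extract_ml / (1000 * fresh_weight_g)           # ug/mL -> mg per g fresh weight
    return {"Chl a": chla * to_mg_g,
            "Chl b": chlb * to_mg_g,
            "Car": car * to_mg_g,
            "Total Chl": (chla + chlb) * to_mg_g}

# Example with made-up absorbances for a 0.2 g sample in 2 mL methanol:
print(pigments_mg_per_g_fw(a666=0.52, a653=0.21, a470=0.33,
                           extract_ml=2.0, fresh_weight_g=0.2))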
Double q-Analytic q-Hermite Binomial Formula and q-Traveling Waves

Motivated by the derivation of a Dirac-type delta function for quantum states in the Fock-Bargmann representation, we find a q-binomial expansion in terms of q-Hermite polynomials, analytic in two complex arguments. Based on this representation, we introduce a new class of complex functions of two complex arguments, which we call double q-analytic functions. The real version of these functions describes the q-analogue of traveling waves which, unlike the usual traveling wave, does not preserve its shape during evolution. For the corresponding q-wave equation we solve the IVP in q-D'Alembert form.

Introduction

In complex analysis, a complex function f(z) of one complex variable z is analytic in some domain D if in D it satisfies

$\frac{\partial}{\partial \bar z} f(z) = 0. \qquad (1)$

If we consider a complex function f(z, w) of two complex variables z, w, analytic in both variables,

$\frac{\partial}{\partial \bar z} f = \frac{\partial}{\partial \bar w} f = 0, \qquad (2)$

it is called double analytic if

$\left(\frac{\partial}{\partial z} + i \frac{\partial}{\partial w}\right) f(z, w) = 0. \qquad (3)$

As an example, f(z, w) = (z + iw)^2 is double analytic, while (z - iw)^2 is double anti-analytic. In general, a double analytic function can be written as a power series in complex binomials,

$f(z, w) = \sum_{n=0}^{\infty} a_n (z + iw)^n.$

For such double analytic binomials we derive here the following Hermite binomial formula:

$(z + iw)^n = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k} i^k H_{n-k}(z) H_k(w). \qquad (4)$

The derivation of this formula is motivated by the description of a Dirac-type δ-function for quantum states in the Fock-Bargmann representation. As is well known, states of a quantum system in the Fock-Bargmann representation are described by complex analytic functions f(z), and vice versa [1]. In this representation, due to the formula

$\int d\mu(z)\, e^{\xi \bar z} f(z) = f(\xi), \qquad (5)$

where the measure is $d\mu(z) = \frac{d^2 z}{\pi}\, e^{-z \bar z}$, the exponential function plays the role of a Dirac-type δ-function [2]. The proof of this formula is based on the identity

$\int d\mu(z)\, e^{\xi \bar z}\, z^n = \xi^n. \qquad (6)$

By generalizing this identity to two complex variables we find the expansion (4). In the present paper, we not only prove the identity (4) but also derive its q-analogue.

Recently [5], we introduced a complex function f(z; q) of one complex variable z, defined by a q-deformed analyticity condition (the q-analogue of (1)), which we called a q-analytic function, and we described a wide class of these functions that are not analytic in the usual sense (1). A complex function f(z, w) of two complex variables z and w, analytic in both variables and satisfying the q-analogue of (3), we call a double q-analytic function. Example: the complex q-binomial (z + iw)^2_q is analytic in z and w, since $\partial_{\bar z} (z + iw)^2_q = \partial_{\bar w} (z + iw)^2_q = 0$, and double q-analytic due to $\bar D_{z,w} (z + iw)^2_q = 0$. We will show that the complex q-binomial (z + iw)^n_q, for positive integer n, is double q-analytic. This is why any convergent power series $f(z, w)_q = \sum_{n=0}^{\infty} a_n (z + iw)^n_q$ represents a double q-analytic function.

As the central result of the paper, we find an expansion of the q-binomial (z + iw)^n_q in terms of q-Hermite polynomials; in the limit q → 1, this formula reduces to the Hermite binomial formula (4). As an application of our results, we consider q-traveling waves in the form of power series of the real q-binomial (x ± ct)^n_q. For these traveling waves we find the q-Hermite binomial expansion in terms of the x and t variables. We notice that, in contrast to usual traveling waves, q-traveling waves do not preserve their shape during evolution. The q-traveling waves are subject to a q-wave equation, for which we solve the IVP in q-D'Alembert form with the Jackson integral representation. The paper is organized as follows.
In Section 2 we derive the Hermite binomial formula (4). In Section 3 we introduce q-Hermite polynomials and derive the q-Hermite binomial formula. The q-binomials are double q-analytic functions, as we demonstrate in Section 4. In Section 5 we discuss the relation between the double q-analytic binomial and the q-analytic one. In Section 6 we introduce q-traveling waves in terms of real q-binomials and solve the IVP for the q-wave equation in D'Alembert form. We illustrate our results for different initial values by particular examples in Section 7. Finally, in Section 8, we derive the q-Hermite polynomial expansion of q-traveling waves.

Hermite Binomial Formula

In this section we derive the Hermite binomial formula (4). We start with the following lemma.

Lemma 2.0.1 For arbitrary complex numbers ξ and η,

$\int d\mu(z)\, e^{\xi \bar z + \eta z} = e^{\xi \eta}. \qquad (10)$

The proof proceeds by changing the variables ξ and η to λ and µ and reducing the integral, in Cartesian coordinates z = x + iy, to standard Gaussian integrals (Proof 2.0.6). Expanding both sides of (10) in powers of η and equating coefficients gives Corollary 2.0.3,

$\int d\mu(z)\, e^{\xi \bar z}\, z^n = \xi^n, \qquad (15)$

which is the identity (6); taking the limit of (10) as η → ξ, so that µ → 0 and λ → ξ, recovers the corresponding diagonal case (Proof 2.0.4). We can generalize this result to an arbitrary analytic function given by a power series $f(z) = \sum_{n=0}^{\infty} a_n z^n$, so that

$\int d\mu(z)\, e^{\xi \bar z} f(z) = f(\xi).$

The above proof implies an interesting binomial identity for Hermite polynomials. We start from the Rodrigues formula for Hermite polynomials,

$H_n(z) = (-1)^n e^{z^2} \frac{d^n}{dz^n} e^{-z^2},$

and consider also its form under the replacement z → iz. Then we have the following identity.

Identity 2.0.7 The monomial ξ^n expands in Hermite polynomials as

$\xi^n = \frac{n!}{2^n} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{H_{n-2m}(\xi)}{m!\, (n - 2m)!}.$

Proof 2.0.8 According to the previous proof, by inserting $1 = e^{\xi^2/4} e^{-\xi^2/4}$ and using the Rodrigues formulas, the expansion follows. Particular forms of this identity are obtained by the reductions ξ = x and ξ = iy.

This result can be generalized to the Hermite binomial formula with two complex variables z and w:

$(z + iw)^n = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k} i^k H_{n-k}(z) H_k(w). \qquad (25)$

Proof 2.0.10 From the generating function for Hermite polynomials,

$e^{2zt - t^2} = \sum_{n=0}^{\infty} H_n(z) \frac{t^n}{n!}, \qquad (26)$

and, changing the variable τ = it in a second copy,

$e^{2iwt + t^2} = \sum_{k=0}^{\infty} H_k(w) \frac{(it)^k}{k!}, \qquad (27)$

multiplying (26) and (27) we have

$e^{2(z + iw)t} = \sum_{l=0}^{\infty} \sum_{k=0}^{\infty} H_l(z) H_k(w) \frac{i^k t^{l+k}}{l!\, k!}.$

By changing the order of the double sum with l + k = n and expanding the left-hand side in t, we get

$\sum_{n=0}^{\infty} \frac{2^n (z + iw)^n}{n!} t^n = \sum_{n=0}^{\infty} t^n \sum_{k=0}^{n} \frac{i^k H_{n-k}(z) H_k(w)}{(n-k)!\, k!}.$

By equating terms of the same power t^n we obtain the desired result (25).

Here we can also give another proof of this relation by using the holomorphic Laplace equation.

Proof 2.0.11 ζ^n ≡ (z + iw)^n is a double analytic function of the two complex variables z and w. Therefore, due to (3), it satisfies the holomorphic Laplace equation

$\left(\frac{\partial^2}{\partial z^2} + \frac{\partial^2}{\partial w^2}\right) (z + iw)^n = 0.$

Then, due to the relation

$e^{-\frac{1}{4} \frac{d^2}{dz^2}} (2z)^n = H_n(z),$

the operator $e^{-\frac{1}{4}(\partial_z^2 + \partial_w^2)}$ acts as the identity on (z + iw)^n, and applying it term by term to the ordinary binomial expansion of (z + iw)^n we get the Hermite binomial formula (25).

q-Hermite Binomial Formula

Here we are going to generalize the above formula to the q-binomial case. For this, first we need to introduce the q-Hermite polynomials.

q-Hermite Polynomials

In paper [4], studying the q-heat and q-Burgers equations, we defined the q-Hermite polynomials according to a generating function (33) built from Jackson's q-exponential functions

$e_q(x) = \sum_{n=0}^{\infty} \frac{x^n}{[n]_q!}, \qquad e_{1/q}(x) = \sum_{n=0}^{\infty} q^{\frac{n(n-1)}{2}} \frac{x^n}{[n]_q!},$

where the q-numbers and q-factorials are defined as

$[n]_q = \frac{1 - q^n}{1 - q}, \qquad [n]_q! = [1]_q [2]_q \cdots [n]_q, \qquad [0]_q! = 1.$

By q-differentiating the generating function (33) with respect to x and t we obtain the corresponding recurrence relations, together with the special values and the parity relations of the polynomials; for more details we refer to paper [4]. When q → 1, the first few polynomials reduce to the standard Hermite polynomials.

The generating function (33) at t = 1 gives an expansion of the q-exponential function in terms of q-Hermite polynomials. In the limiting case q → 1 it gives the expansion of the exponential,

$e^{2x - 1} = \sum_{n=0}^{\infty} \frac{H_n(x)}{n!},$

and for x = 1 we find for Euler's number e:

$e = \sum_{n=0}^{\infty} \frac{H_n(1)}{n!}.$

For the Jackson q-exponential function we have the q-analogue of this expansion and, correspondingly, a q-analogue e_q of Euler's number e.
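As a quick sanity check of the Hermite binomial formula (25), the n = 2 case can be verified directly with the physicists' Hermite polynomials H_0 = 1, H_1(z) = 2z, H_2(z) = 4z^2 - 2:

$\frac{1}{2^2} \sum_{k=0}^{2} \binom{2}{k} i^k H_{2-k}(z) H_k(w) = \frac{1}{4}\left[H_2(z) + 2i\, H_1(z) H_1(w) - H_2(w)\right]$

$= \frac{1}{4}\left[(4z^2 - 2) + 8izw - (4w^2 - 2)\right] = z^2 + 2izw - w^2 = (z + iw)^2.$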
Relation (40) should be compared with the next one, which comes for x = 1 from the following identity.

Proof 3.1.2 We expand and change the order of the double sum. Splitting the first sum into the even part n = 2m and the odd part n = 2m + 1, the second sum vanishes due to the known identity [3] for alternating sums of Gaussian binomial coefficients,

$\sum_{k=0}^{2m+1} (-1)^k \binom{2m+1}{k}_q = 0,$

and the first sum gives the desired expression.

q-Hermite Binomials

Here we formulate our main result as the q-Hermite binomial identity: the q-analogue of identity (25), giving the q-binomial expansion in terms of q-Hermite polynomials, is formula (44).

Proof 3.2.2 By using the generating function for q-Hermite polynomials (33) and replacing x → Z, we obtain (45). In this formula we replace t → it, Z → W, q → 1/q, so that we obtain (46). Multiplying (45) with (46) and using the factorization of q-exponential functions, which leads to

$e_q(-t^2)\, e_{1/q}(t^2) = 1,$

we find a product generating function. By changing the order of the double sum and expanding the left-hand side in t, at the power t^n we obtain an identity; replacing Z = z and Wq = w, the desired result (44) is obtained.

Double q-Analytic Functions

Here we consider a class of complex-valued functions of two complex variables z and w (or four real variables), analytic in both variables:

$\frac{\partial}{\partial \bar z} f = \frac{\partial}{\partial \bar w} f = 0.$

Definition 4.0.3 A complex-valued function f(z, w) of four real variables is called double analytic in a region if the identity

$\left(\frac{\partial}{\partial z} + i \frac{\partial}{\partial w}\right) f(z, w) = 0$

holds in the region, where z = x + iy and w = u + iv.

The simplest set of double q-analytic functions is given by the complex q-binomials (z + iw)^n_q. From the above result it follows that any convergent power series $f(z + iw)_q = \sum_{n=0}^{\infty} a_n (z + iw)^n_q$ determines a double q-analytic function. Since our relation (44) gives the expansion of the double q-analytic q-binomials in terms of q-Hermite polynomials, it also gives the expansion of any double q-analytic function in terms of these analytic polynomials. Examples for n = 1 and n = 2 follow directly from the definition: (z + iw)^1_q = z + iw and (z + iw)^2_q = (z + iw)(z + qiw).

q-Holomorphic Laplacian

Another proof of identity (44) can be given by noticing that the q-binomial (z + iw)^n_q is a double q-analytic function, and then using the complex q-Laplace equation. Using the fact that

$(\Delta_q)^m (z + iw)^n_q = 0, \quad \forall m = 1, 2, \ldots,$

only the first term in the operator expansion survives, and we get the desired result. Due to (47) we can factorize the q-exponential operator function; by using the generating function of q-Hermite polynomials (45) we obtain an operator identity which, substituted into (56) and combined with identity (54), yields the desired result (44).

As a particular case of our binomial formula, we can find the q-Hermite binomial expansion for the q-analytic binomial (x + iy)^n_q as well: in (44) we replace z → x and w → y. Since a q-analytic function is determined by a power series in q-binomials [5], this formula allows us to obtain the expansion of an arbitrary q-analytic function in terms of real q-Hermite polynomials.

q-Traveling Waves

As an application of q-binomials, here we consider the q-analogue of traveling waves as solutions of a q-wave equation.

Traveling Waves. Real functions of two real variables F(x, t) = F(x ± ct), called traveling waves, satisfy the first-order equations

$\left(\frac{\partial}{\partial t} \mp c \frac{\partial}{\partial x}\right) F(x \pm ct) = 0.$

They describe waves of fixed shape, propagating with constant speed c to the left and to the right, respectively. The general solution of the wave equation

$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}$

can then be written as an arbitrary superposition of these traveling waves, u(x, t) = F(x + ct) + G(x - ct).
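Before passing to the q-case, the basic q-calculus ingredients can be checked numerically. The sketch below implements the Jackson q-derivative and the q-binomial $(x + a)^n_q = (x + a)(x + qa) \cdots (x + q^{n-1}a)$ and verifies the standard q-calculus property $D_{q,x}(x + a)^n_q = [n]_q (x + a)^{n-1}_q$ (see, e.g., Kac and Cheung's Quantum Calculus); this is an illustrative aside, not code from the paper.

# Illustrative sketch: Jackson q-derivative acting on the q-binomial.
# Verifies D_{q,x} (x+a)^n_q = [n]_q (x+a)^(n-1)_q at sample points.

def qnum(n, q):
    """q-number [n]_q = (1 - q^n) / (1 - q)."""
    return (1 - q**n) / (1 - q)

def qbinom(x, a, n, q):
    """q-binomial (x+a)^n_q = (x+a)(x+qa)...(x+q^(n-1)a)."""
    prod = 1.0
    for k in range(n):
        prod *= x + q**k * a
    return prod

def dq_x(f, x, q):
    """Jackson q-derivative D_q f(x) = (f(qx) - f(x)) / ((q-1)x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

q, a, n = 0.7, 2.0, 4
for x in (0.5, 1.3, 3.1):
    lhs = dq_x(lambda s: qbinom(s, a, n, q), x, q)
    rhs = qnum(n, q) * qbinom(x, a, n - 1, q)
    print(f"x={x}: D_q = {lhs:.6f}, [n]_q*(x+a)^(n-1)_q = {rhs:.6f}")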
q-Traveling Waves

A direct extension of traveling waves to q-traveling waves is not possible. This is due to the absence of a chain rule in q-calculus and, consequently, the impossibility of using a moving frame as the argument of the wave function. Moreover, if in the Fourier harmonics f(x, t) = e^{i(kx - ωt)} we replace the exponential by Jackson's q-exponential function, f(x, t) = e_q(i(kx - ωt)), this does not work either, because the q-exponential does not factorize: e_q(i(kx - ωt)) ≠ e_q(ikx) e_q(-iωt).

This is why we propose another way. First we observe that the q-binomials

$(x \pm ct)^n_q = (x \pm ct)(x \pm qct) \cdots (x \pm q^{n-1}ct), \quad n = 0, \pm 1, \pm 2, \ldots, \qquad (62)$

satisfy first-order one-directional q-wave equations (63). Then, the Laurent series expansion in terms of these q-binomials determines the q-analogue of traveling waves,

$F(x + ct)_q = \sum_{n=-\infty}^{\infty} a_n (x + ct)^n_q, \qquad G(x - ct)_q = \sum_{n=-\infty}^{\infty} b_n (x - ct)^n_q.$

Due to (63), the q-binomials (62) satisfy the q-wave equation (64), and the general solution of this equation is expressed in the form of q-traveling waves

$u(x, t) = F(x + ct)_q + G(x - ct)_q.$

This allows us to solve the IVP for the q-wave equation (66), where -∞ < x < ∞, in D'Alembert form, with the ordinary integral of the initial velocity g(x) replaced by the Jackson integral

$\int_0^x f(s)\, d_q s = (1 - q)\, x \sum_{k=0}^{\infty} q^k f(q^k x).$

If the initial velocity is zero, g(x) = 0, the formula reduces to

$u(x, t) = \frac{1}{2}\left[f(x + ct)_q + f(x - ct)_q\right].$

It should be noted that a q-traveling wave is not a traveling wave in the standard sense: it does not preserve its shape during evolution. This can be seen from a simple observation. The traveling wave polynomial

$(x - ct)^n_q = (x - ct)(x - qct)(x - q^2 ct) \cdots (x - q^{n-1} ct)$

includes a set of moving frames (the zeros of this polynomial) with the rescaled set of speeds (c, qc, q²c, ..., q^{n-1}c). This means that the zeros of this polynomial move with different speeds, and therefore the shape of the polynomial wave is not preserved. Only in the linear case, and in the case q = 1, when the speeds of all frames coincide, do we get a standard traveling wave.

EXAMPLES

In this section we illustrate our results by several explicit solutions.

Example 1: We consider the IVP for the q-wave equation (66) with initial functions f(x) = x² and g(x) = 0. Then the solution of the given IVP in D'Alembert form is

$u(x, t) = \frac{1}{2}\left[(x + ct)^2_q + (x - ct)^2_q\right].$

When q = 1, it reduces to the well-known superposition of two traveling-wave parabolas (x ± ct)² moving to the right and to the left with speed c. Geometrically, the meaning of q is a vertical acceleration of the parabolas.

Example 2: The q-traveling wave u(x, t) = (x - ct)^2_q solves the IVP for the q-wave equation (66) with the corresponding initial functions. If q = 1, this solution has two degenerate zeros moving with the same speed c. In the case q ≠ 1, the two zeros move with different speeds c and qc, so the distance between the zeros grows linearly in time as (q - 1)ct. The solution is a parabola moving in the vertical direction with acceleration $\frac{(q-1)^2}{4} c^2$ and in the horizontal direction with constant speed, while the area enclosed between its zeros changes with time as t³.

For a more general initial function f(x) = x^n, n = 2, 3, ..., we get the q-traveling wave u(x, t) = (x - ct)^n_q = (x - ct)(x - qct)···(x - q^{n-1}ct) with n zeros moving with speeds c, qc, ..., q^{n-1}c. The distance between two zeros grows as (q^m - q^n)ct, and the shape of the wave changes. In the parabolic case n = 2, the shape of the curve does not change, but the curve moves horizontally with constant speed and vertically with constant acceleration. In contrast, for n > 2 the motion of the zeros with different speeds changes the shape of the wave, and it cannot be reduced to a simple translation and acceleration.
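The claim that the zeros of $(x - ct)^n_q$ travel at speeds c, qc, ..., q^{n-1}c can be made concrete numerically. The short sketch below (an illustrative aside, not code from the paper) evaluates the zero positions at two times and reports the implied speeds:

# Illustrative sketch: the k-th zero of (x - ct)^n_q = prod_k (x - q^k c t)
# sits at x_k(t) = q^k * c * t, i.e. it moves with speed q^k * c.

q, c, n = 0.8, 1.0, 4

def zeros(t):
    """Zeros of the q-binomial wave (x - ct)^n_q at time t."""
    return [q**k * c * t for k in range(n)]

t1, t2 = 1.0, 2.0
speeds = [(z2 - z1) / (t2 - t1) for z1, z2 in zip(zeros(t1), zeros(t2))]
print("speeds of the zeros:", speeds)   # -> [1.0, 0.8, 0.64, 0.512] = c, qc, q^2 c, q^3 c
# Because the zeros separate over time, the polynomial wave changes shape,
# unlike a classical traveling wave; at q = 1 all speeds coincide with c.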
q-Traveling Waves in terms of q-Hermite Polynomials

Identity (25) allows us to rewrite the traveling-wave binomial in terms of Hermite polynomials as

$(x + ct)^n = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k} i^k H_{n-k}(x) H_k(-ict).$

Its q-analogue for the q-traveling-wave binomial follows from (44). Then, the general solution of the q-wave equation (64) can be expressed in terms of q-Hermite polynomials as u(x, t) = F(x + ct)_q + G(x - ct)_q, where

$F(x + ct)_q = \sum_{n=-\infty}^{\infty} a_n (x + ct)^n_q,$

with each q-binomial expanded by the q-Hermite binomial formula (44).

It is instructive to verify the one-directional q-wave equation for (x + ct)^n_q directly by using the q-Hermite binomial expansion: after applying the q-derivatives term by term, the expression in parentheses vanishes due to a q-combinatorial formula together with the identity

$[n]_{1/q} = \frac{[n]_q}{q^{n-1}}.$
Development of Eco-friendly Dyeing Process Based on Caesalpinia sappan L. Bark, Cocos nucifera Fiber and Leucaena leucocephala Leaves

Natural dyes have begun to regain attention due to their biodegradable nature and low negative impact on human health and the environment. In addition, the continuous development of color shade variants from newly available, abundant natural sources, together with their ability to produce high color intensity and high wash fastness, has become a major consideration in their use for textile dyeing. This study evaluated the potential of Caesalpinia sappan L. bark, Cocos nucifera fiber, and Leucaena leucocephala leaves as single and combined natural dye materials. In particular, the extracts of these materials were used to dye cotton fibers, with a mordanting stage using alum and fixation using iron (II) sulfate, alum, and calcium oxide compounds. The quality of the resulting dyeing was evaluated by color intensity and by fastness to wet washing. The dyeing results showed varied shades dominated by the typical reddish color of Caesalpinia sappan L., while analysis using Diffuse Reflectance Ultraviolet (DRUV) spectroscopy and the staining-scale wash-fastness method showed color intensities in the range 53.55%-94.34%, with wash fastness on the staining scale from 2 (less) to 4 (good). Specifically, in contrast to the behavior of dye AB, the AC staining scale increases with increasing added dye C (up to 50%) and decreases when the added dye C increases to 75% (dye AC3).

Keywords— Textile; natural dyes; Caesalpinia sappan L; Cocos nucifera; Leucaena leucocephala.

Manuscript received 5 Oct. 2020; revised 27 Nov. 2020; accepted 3 Feb. 2021. Date of publication 31 Oct. 2021. IJASEIT is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

I. INTRODUCTION

Over the past few years, many industries, particularly the textile industry, have been heavily criticized for their role in polluting the environment [1], [2]. This is directly related to the textile industry's use of large amounts of synthetic dyes [3]-[5]. The facts that synthetic dyes are considered more economical and have superior fastness properties, varied color shades, and greater reproducibility are the main reasons for their dominance in various industries [6]-[8]. However, dyeing with large quantities of synthetic dyes is known to produce toxic waste that negatively affects aquatic ecosystems and has mutagenic, carcinogenic, and toxicological effects on human health [9]-[13]. Amid increasing environmental destruction and deteriorating public health worldwide, environmentally friendly, non-toxic natural dyes have re-emerged as a 'Green Chemistry' choice to substitute for or complement synthetic dyes to some extent [14]. In recent years, large-scale research [15]-[17] has been undertaken globally to explore procedures for preparing natural dyes from a variety of sources (plants, animals, and minerals) and to optimize their utilization by overcoming their limitations in textile fiber dyeing. Nowadays, natural dyes are primarily derived from plants; their abundant availability, biocompatibility, low toxicity, green approach, and eco-friendly properties have gained them worldwide popularity for use in textiles [18]-[21].
Natural dyes can be obtained by extracting specific parts of particular plants [22]-[28], such as roots, rhizomes, seeds, bark, leaves, flowers, or fruit. Many plant parts, such as secang (Caesalpinia sappan L.) bark, coconut (Cocos nucifera) fiber, and lamtoro (Leucaena leucocephala) leaves, can serve as sources of natural dyes [29]-[33].

Water-soluble braziline flavonoids are the main source of red dye in Caesalpinia sappan Linn., or sappanwood (family Leguminosae) [34]-[36]. The extract has been used extensively as a natural dye, especially in the textile industry, due to its low toxicity and low cost [37]-[39]. In addition to functioning as a natural red dye, sappanwood extracts show great benefit as antibacterial agents [38], [40]-[42], antioxidants [38], [43]-[45], and in pharmacological activity [46]. Braziline is also easily oxidized by light to form the strongly red brazilein [35], [38]. Characterization by UV-Vis spectroscopy and μ-spectrofluorimetry revealed that in its acid form the brazilein chromophore gives maximum absorption, excitation, and emission at wavelengths of 446, 475, and 536 nm, respectively, shifting to 540, 548, and 560 nm when deprotonated. When complexed with Al3+, these values change to 510, 520, and 585 nm, respectively.

Cocos nucifera (coconut) husk provides a fiber with a strong structure that does not easily rot or mold and is durable [45], [47]. Indonesia is the biggest coconut-producing country in the world [48]. Unfortunately, coconut fiber processing has not been carried out optimally: every year only about 15% of the abundant coconut husk is processed, while the rest is left to accumulate as waste that eventually dries up and is burned [49]. This is regrettable, considering that coconut fiber, as part of the coconut husk, can be used as a natural dye because of its content of water-soluble tannins of the flavonoid type. This dye gives a reddish-brown color, with gradations depending on the type of mordant and fixer compound used [50]. Characterization by UV-Vis spectroscopy revealed that Cocos nucifera fiber extract shows maximum absorption at 385.41 nm, and at 380.79 nm when complexed with Al3+.

Leucaena leucocephala, commonly called petai china, is a shrub of the family Fabaceae (Leguminosae, legumes) that is easy to find in the tropics, including Indonesia. As with coconut husk, Leucaena leucocephala leaves are underutilized compared with the fruit, which is widely used as a raw material in traditional Indonesian food. The leaves contain many phenolic compounds in high concentrations, especially tannins and mimosine, making them a potential natural dye material. As with most tannins, the tannin chromophores in Leucaena leucocephala leaves also produce a reddish-brown color with gradations, according to the type of mordant and fixer applied [51], [52]. Characterization by UV-Vis spectroscopy revealed that Leucaena leucocephala leaf extract shows maximum absorption at 417.84 nm, and at 387.50 nm when complexed with Al3+.

Dyeing with natural dyes is generally accompanied by several problems, such as a narrow shade range and lower fastness. The use of mordant compounds is one way to overcome these problems. In addition to increasing the affinity between the dye compound and the fabric fibers, the application of a mordant may change the hue of a number of specific dyes.
Applying different mordant types to the same dye compound can darken, brighten, or alter the final color significantly [53], [54]. The colorimetric properties of the dyeing results, including lightness (L*), redness-greenness (a*), yellowness-blueness (b*), chroma (c*), hue (h°), and color strength (K/S), are strongly influenced by the chemical properties of both the mordant and fiber compounds and by their ability to form metal complexes with dyes and fibers [55]. Mordanting is generally capable of improving dye performance by producing a wide spectrum of shades on a variety of natural as well as synthetic fibers, with increased color intensity and fastness [56], [57].

One measure of dyed fabric quality is fastness. Besides mordanting optimization, fixation is another way to improve fastness, through a color-locking process on the fiber. Fixation is generally done by adding metal-complexing materials such as iron (II) sulfate (FeSO4.7H2O), aluminum sulfate or alum [Al2(SO4)3.18H2O], and calcium oxide (CaO). However, applying the fixation process in the final stage after dyeing can also darken, brighten, or alter the color significantly, as mordanting does. This is triggered by the presence of several chromophores in the fixer compound used, which lead to the appearance of a specific hue when light absorption occurs [58]-[61].

The current research focuses on producing dyed fabrics using the red pigment from sappanwood extract and the reddish-brown-producing pigments from Cocos nucifera fiber and Leucaena leucocephala leaf extract, separately and as combination dyes. This study is the first report of dyeing carried out by combining braziline and tannin compounds, the two dominant pigments in the three raw materials used. To determine the effect of combining both pigments on the quality of cotton dyeing, color production was done using compositions of 75%/25%, 50%/50%, and 25%/75%, both for the combination of sappanwood bark extract with Cocos nucifera fiber extract and for its combination with Leucaena leucocephala leaf extract. Meanwhile, color production using 100% sappanwood bark, Cocos nucifera fiber, or Leucaena leucocephala leaf extract was applied as the comparative color needed to determine the extension of the color spectrum resulting from the pigment combination. In addition, optimum color strength and a wide color spectrum were pursued by forming bridge compounds between cotton and pigment using the mordant alum, whose central atom is trivalent aluminum, and by color-locking with three types of fixer compounds. Furthermore, the interactions between fabric fibers, alum mordant, pigment combinations, and fixer compounds, which have not been reported previously, were investigated. The effect of using three different fixer compounds was also analyzed to determine the suitability of the interactions between pigment combinations and fixers in generating color-spectrum expansion and increased strength and fastness.

II. MATERIALS AND METHODS

A. Washing Process

Cotton fibers to be dyed must first go through a washing process to remove attached contaminants, which increases the affinity of the fibers for the alum mordant. Washing was carried out as follows: 2.57 grams of cotton fiber were immersed in a 2 gram/liter Turkey Red Oil (TRO, Dunia Kimia, Indonesia) solution for 6 h.
The fibers were then rinsed three times with distilled water to remove residue that did not interact with the cotton fibers. The washing procedure was concluded by drying the cotton fibers in the open air for 24 hours [3], [11].

B. Mordanting Process

After the washing stage, cotton fiber preparation continued with mordanting using aluminum sulfate (Al2(SO4)3.18H2O, Brataco Chemistry, Indonesia) and soda ash (Na2CO3, Water, Indonesia). The mordant solution was made by dissolving 8 grams of alum and 2 grams of soda ash in 1 liter of distilled water; stirring with a magnetic stirrer ensured the homogeneity of the solution. The solution was then heated to boiling, and 2.57 grams of cotton fiber were added. Heating continued for 1 hour. To optimize the interaction with the alum mordant, the cotton fibers were left submerged in the mordant solution for 24 hours. The fibers were then rinsed (not squeezed) three times, dried, and ironed; ironing gives a uniform fiber orientation. After that, the cotton fibers were ready to be dyed with natural dyes [3], [11].

C. Dyeing Process

After washing and mordanting, the cotton fibers were dyed using water extracts of sappanwood bark (Dwi Jaya, Indonesia), Cocos nucifera fiber (Berkah, Indonesia), and Leucaena leucocephala leaves (Bengawan Solo riverbanks, Indonesia). Table 1 shows the operational conditions for the dyeing [11], while Table 2 shows the natural dye extract compositions.

D. Fixation Process

To improve the fastness of the natural dyes, after the dyeing stage the cotton fibers went through fixation using three types of fixers: iron (II) sulfate (FeSO4.7H2O, Nusa Indah Megah, Indonesia), alum, and calcium oxide (CaO, Mitra Water, Indonesia). Each fixer solution was prepared by dissolving 50 grams of fixer material in 1 liter of distilled water. The solution was allowed to age for 24 hours, and the clear supernatant was then taken. Fixation was carried out by immersing the cotton fibers in the fixer solution for 10 minutes. To determine the resulting color differences, 0.85 grams of cotton fiber were fixed in 30 mL of iron (II) sulfate, alum, and calcium oxide solution, respectively. Each cotton fiber sample was then rinsed three times with distilled water and dried in the open air [11].

E. Characterization

To determine the pigment characteristics of the water extracts of sappanwood bark, Cocos nucifera fiber, and Leucaena leucocephala leaves, the wavelength of maximum absorption was measured with a Pharmaspec UV-1700 UV-Visible spectrophotometer. The maximum wavelength region (λmax) indicates the dominant pigment contained in the water extract of each raw material. The CIELab coordinates (L*, a*, b*) of each dye were measured directly using a spectrophotocolorimeter (Tintometre, Lovibond PFX 195 V 3.2, Amesbury, UK). In this coordinate system, the L* value is a measure of lightness, ranging from 0 (black) to 100 (white); the a* value ranges from -100 (greenness) to +100 (redness), and the b* value ranges from -100 (blueness) to +100 (yellowness). In addition, at this stage the cotton fibers were analyzed for color intensity and fastness to determine the quality of the colors resulting from the combinations of pigments and the three different fixer compounds.
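Pairs of CIELab measurements like these are commonly compared through the CIE76 color difference ΔE*ab. The short sketch below is an illustrative aside; the Lab values shown are placeholders of the magnitudes reported in this article, not a reproduced measurement pair.

# Illustrative sketch: CIE76 color difference between two CIELab measurements.
# Delta E*ab = sqrt((dL*)^2 + (da*)^2 + (db*)^2); values of roughly 2-3 are
# often taken as the threshold of a visually noticeable difference.
from math import sqrt

def delta_e_cie76(lab1, lab2):
    return sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))

# Placeholder L*, a*, b* coordinates for two hypothetical dyed samples:
sample_1 = (49.95, 24.80, 9.22)
sample_2 = (41.35, 27.50, 13.00)

print(f"Delta E*ab = {delta_e_cie76(sample_1, sample_2):.2f}")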
Color intensity analysis was performed using a Shimadzu UV-2401-PC Diffuse Reflectance Ultraviolet (DRUV) spectrophotometer, while fastness analysis was performed using the staining-scale method. A higher color intensity will generally result in a lower reflection percentage in the DRUV analysis results.

III. RESULTS AND DISCUSSION

A. Dye Extract Characterization

Dyes A, B, and C were prepared as cotton fiber pigments by extracting the dyes from sappanwood bark, Cocos nucifera fiber, and Leucaena leucocephala leaves, respectively. Consequently, dye A exhibits the characteristic red-pink color of brazilein flavonoids, which appear as red chromophores in the oxidized form of braziline from sappanwood bark (Fig. 1a). The red-pink color, with maximum absorption at 538.57 nm as shown in Fig. 3a, corresponds to a redness value (a*) higher than 19.00 (Table 1). Meanwhile, dyes B and C exhibited the typical reddish-brown character of tannin flavonoids, as shown in Fig. 1b and Fig. 1c; compared with the Cocos nucifera extract, the Leucaena leucocephala extract showed a darker color. The reddish-brown Cocos nucifera extract showed a redness value (a*) of 12.16, lower than that of Leucaena leucocephala, which has an a* value of 17.40.

The UV-Vis spectrum of the AB1 dye extract shows maximum absorption at 538.34 nm, a shift of 0.23 nm from the dye A wavelength. Theoretically, adding 25% tannins from dye B increases the quantity of chromophores and auxochromes in the AB1 dye. It should be noted, however, that the increase in tannin levels in the AB dyes occurs together with a decrease in brazilein levels, and the analysis results suggest a close relationship between molecular size and the success of the fiber in binding dye molecules: the larger the dye molecule, the greater the barrier for the fabric fibers to bind the dye molecules maximally. As shown in Fig. 2(a), the molecular structure of brazilein has two types of chromophores (C=C and C=O) and two types of auxochromes, -OH and C-O. Meanwhile, Fig. 2(b) shows that, although it has the same kinds of chromophores and auxochromes, the tannin molecular structure exhibits a higher quantity of them. The quantity of chromophores and auxochromes increases as the tannin level rises to 50% and 75%, as in the AB2 and AB3 dyes, respectively. The increase in the number of chromophores and auxochromes is predicted to induce an increase in the number of electronic transitions, especially those involving the non-bonding (n) → anti-bonding pi (π*) and n → anti-bonding sigma (σ*) transitions, which leads to a shift in the maximum wavelength and in the shades of color that appear. However, the stronger steric hindrance that arises with increasing tannin content, which peaks at the composition of the AB2 dye, creates a condition in which the bonds formed between the fabric fibers and dye molecules are minimal, producing a brighter color accompanied by lower redness (a*) and yellowness (b*). The opposite phenomenon was detected when tannins dominated the AB3 dye extract: the increase in tannins, accompanied by a decrease in brazilein, produced brighter shades but with increasing redness and yellowness. Thus, it can be reported that comparable levels of brazilein and tannins in the AB dye extract produce a mutually nullifying effect.
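Shifts in λmax of this size are read directly off the recorded spectra; a minimal sketch of that step is shown below (the Gaussian spectra here are synthetic placeholders, not the measured data):

# Illustrative sketch: locating lambda_max in two absorption spectra and
# reporting the shift between them. The Gaussian "spectra" are synthetic
# stand-ins for the measured UV-Vis data.
import numpy as np

wl = np.linspace(400.0, 650.0, 2501)           # wavelength grid, nm (0.1 nm step)

def gaussian_band(center, width=40.0, height=1.0):
    return height * np.exp(-((wl - center) / width) ** 2)

spec_a = gaussian_band(538.57)                 # stand-in for dye A
spec_ab1 = gaussian_band(538.34)               # stand-in for dye AB1

lmax_a = wl[np.argmax(spec_a)]
lmax_ab1 = wl[np.argmax(spec_ab1)]
print(f"lambda_max A = {lmax_a:.2f} nm, AB1 = {lmax_ab1:.2f} nm, "
      f"shift = {lmax_ab1 - lmax_a:+.2f} nm")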
Slightly different from the conditions caused by adding tannins from the Cocos nucifera extract, adding tannins from the Leucaena leucocephala extract, as seen in the AC1-AC3 dye compositions, produces a smaller shift (0.02 nm) from the dye A wavelength. This condition is predicted to be triggered by the mimosine pigment, which produces a complementary effect with the tannins in the dye. This complementary effect is strengthened by the smaller molecular size of mimosine, which reduces the steric hindrance that can inhibit the binding of dye molecules by the fabric fibers. The presence of mimosine, with the molecular structure shown in Fig. 2(c), in the AC1-AC3 extracts enriches the auxochrome variety of the AC1-AC3 dyes: the molecular structure of mimosine has N-H auxochromes absent in the AB1-AB3 dyes. This is the cause of the difference in the shifts that occur in the AC1-AC3 dyes compared with the AB1-AB3 dyes. This condition also shows that, in addition to increasing the orbital quantities, the presence of mimosine enriches the number of chromophores and the quantity and variety of auxochromes involved in the electronic transitions of the AC1-AC3 dyes when adsorbing light energy. These conditions make mimosine one inducer of the different shift behavior of the AC1-AC3 dyes, in line with the appearance of the extract, which comes closer to the typical brown of tannin.

B. UV-Visible Spectra

The UV-Vis spectrum of the sappanwood bark (A) dye extract showed λmax at 538.57 nm, confirming the presence of brazilein pigment in the sappanwood bark extract. This corresponds to the λmax of brazilein flavonoids at 540 nm reported in ref. [28]. Meanwhile, UV-Vis analysis of the Cocos nucifera fiber and Leucaena leucocephala leaf extracts showed maximum absorption at 384.41 nm and 417.84 nm, respectively. The difference in maximum wavelength between the Cocos nucifera fiber and Leucaena leucocephala leaf extracts strengthens the prediction of the presence of mimosine pigment as a complement to tannin in the C dye extract. Fig. 3 shows the UV-Vis spectra of the A, B, C, AB1-AB3, and AC1-AC3 dye extracts. Table 3 shows the colorimetric parameters of the different dye extract compositions.

The colorimetric analysis showed that elevated levels of tannin pigment from Cocos nucifera fiber extract in the AB1-AB3 dyes resulted in a decrease in the brightness value, from 47.34 for dye A to 41.35 for AB1. This is closely related to the shift of the maximum absorption triggered by the increase in the number of chromophores and auxochromes occurring as a result of the tannin pigment added from Cocos nucifera, which then synergizes with the brazilein pigment from the sappanwood bark extract. The same trigger induced an increase in redness and yellowness values along with the increase in tannin content from Cocos nucifera fiber extract added to the sappanwood bark extract. In general, almost the same phenomenon was detected in the AB3 dyeing results. However, balanced levels of brazilein and tannin pigments in the AB2 dye gave rise to a mutually nullifying effect that minimizes the number of bonds between the fabric fibers and both pigments, which increases the color brightness accompanied by a decrease in redness and yellowness compared with the colors produced by the AB1 and AB3 dyes, in which one of the pigments dominates.
In detail, dyeing with the AB2 dye gives a brightness of 49.95 with a* and b* values of 24.80 and 9.22, which are 9.90% and 29.18% lower than the a* and b* values produced by the AB1 dye, and 20.89% and 54.56% lower than those of the AB3 dye. Corresponding behavior was detected in the AC1-AC3 colorimetric parameters. The brightness (L*) of the AC1-AC3 dyes showed an increasing trend as the level of tannin from Leucaena leucocephala leaf extract added to the Sappanwood bark extract rose. As described in the preceding section, however, the mimosine pigment accompanying the tannin in Leucaena leucocephala leaf extract increases the number of chromophores and the quantity and diversity of auxochromes bonded to the fabric fibers, and induces less pronounced maximum-wavelength shifts in AC1-AC3. In contrast to the dyeing behavior of AB2, dyeing with the AC2 dye produces an increase in brightness accompanied by an increase in the a* and b* values relative to AC1. A decrease in brightness, although not significant, together with a decrease in a* and b*, was detected only after the tannin and mimosine levels in the dye reached their maximum (AC3). The drop in the CIELAB coordinates under this condition is thought to be closely related to the minimal amount of brazilein pigment in the AC3 dye, which lowers the resulting redness and yellowness. More specifically, this is reinforced by the analysis showing the more dominant redness of the AB3 dye and yellowness of the AC3 dye.

C. Fourier Transform Infrared Spectroscopy

To confirm the presence of the red-pink chromophore of brazilein, the FT-IR spectrum showed absorption at wavenumber 1635.71 cm⁻¹ from aromatic C=C stretching (Fig. 4 and Table 4). Based on this absorption, the FT-IR analysis confirmed the presence of the C=C chromophore, while the -OH auxochrome in the brazilein structure was confirmed by the absorption at 3265.86 cm⁻¹. The full set of absorptions from the FT-IR analysis of the A dye is given in Fig. 4 and Table 4. In addition to brazilein, FT-IR analysis was applied to confirm the presence of the reddish-brown chromophore of tannins in both the B and C dyes; for the C dye in particular, FT-IR was useful for confirming the predicted presence of mimosine as the tannin companion in the extract. The FT-IR spectrum of the B dye does not differ significantly from that of the A dye. The absorption at 1636.05 cm⁻¹ from aromatic C=C stretching confirmed the presence of heterocyclic compounds with the aromatic ring of the tannin molecule, and hence the C=C chromophore of tannin. The -OH and C-O auxochromes in the tannin structure were confirmed by peaks at 3265.04 cm⁻¹ and 2154.48 cm⁻¹, respectively. The only characteristic feature distinguishing tannin from brazilein is the peak at 2154.48 cm⁻¹, which indicates a C-O functional group typical of the ester group. For the C dye specifically, the peak at 1636.19 cm⁻¹ indicates the presence of the C=C chromophore and the N-H auxochrome.
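Returning to the AB1/AB2 comparison at the start of this section, the overall CIELAB color difference implied by those numbers can be computed with the standard CIE76 formula. The L* values (41.35 for AB1, quoted earlier, and 49.95 for AB2) come from the text, while the AB1 a* and b* values are back-calculated from the stated 9.90% and 29.18% gaps, so they are reconstructions rather than tabulated data.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

ab2 = (49.95, 24.80, 9.22)            # L*, a*, b* quoted for AB2
ab1 = (41.35,                         # L* quoted for AB1
       24.80 / (1 - 0.0990),          # a* reconstructed from the 9.90% gap
       9.22 / (1 - 0.2918))           # b* reconstructed from the 29.18% gap
print(f"dE*ab(AB1, AB2) = {delta_e_cie76(ab1, ab2):.1f}")   # ~9.8
```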
This result further strengthens the DRUV and UV-Vis spectrophotometric findings, which show a smaller bathochromic shift relative to the maximum wavelength of the A dye. The condition gives the C dye a lower brightness, together with higher redness and yellowness values, than the B dye. This is closely related to the greater similarity in chromophore and auxochrome quantity between the A and C dyes than between the A and B dyes, which induces similar electronic transitions in the A and C dyes when they absorb light.

D. Diffuse Reflectance Ultraviolet (DRUV)

The effect of the dye composition on the variety of shades, strength, and fastness was studied by mixing the B and C dyes into the A dye, the main dye in this study (at ratios of 75/25, 50/50, and 25/75), and by using three fixer compounds to enrich the original shades of the Sappanwood bark extract. Fig. 5 and Fig. 6 show the shades and reflectance values produced by each dye combination and fixer type. In general, higher reflectance percentages correspond to lower color intensity. Ending the dyeing with a fixation step using three different fixer compounds demonstrably enriched the shades produced by the A dye. The physical appearance of the dyeing results in Fig. 5 shows dark brown for fixation with iron(II) sulfate and red for the other two fixations, alum and calcium oxide. The combination of specific chromophores and auxochromes contributed by the dyes and by the fixers is predicted to induce the specific color obtained with each fixer. Fig. 7 shows the chromophore types present in the three fixer compounds: iron(II) sulfate and alum both contain an S=O chromophore and an S-O auxochrome, whereas calcium oxide contains only one chromophore type, i.e., Ca=O. The only difference between the iron(II) sulfate and alum fixers is the greater quantity of chromophores and auxochromes in alum. Different patterns of influence emerged when the three fixers were applied to dyeing with the B and C dyes. Dyeing with the B dye produced blackish brown, light brown, and dark brown with the iron(II) sulfate, alum, and calcium oxide fixers, respectively; similarly, dyeing with the C dye produced gray, light yellow, and yellow with the same three fixers. Although both contain tannins, the B and C dyes show slightly different shades. This is closely related to the mimosine pigment, which differentiates the pigment content of the B and C dyes. The presence of mimosine alongside tannin in the C dye induces similar colors but with higher brightness and yellowness and lower redness than the B dye. This finding agrees with the colorimetric parameters and DRUV results shown in Tables 5-7. More specifically, the DRUV data show a lower reflectance percentage for the shades produced by the C dye, whether fixation used iron(II) sulfate, alum, or calcium oxide.
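The inverse relation between reflectance and color intensity invoked throughout this section is conventionally quantified by the Kubelka-Munk color strength, K/S = (1 - R)² / 2R. The paper does not report K/S values, so the sketch below is purely illustrative, with made-up reflectance inputs.

```python
def kubelka_munk_ks(reflectance_fraction: float) -> float:
    """Kubelka-Munk color strength K/S from reflectance R (0 < R <= 1).

    Lower reflectance -> higher K/S, i.e. a stronger, more intense color,
    matching the inverse relation noted in the text.
    """
    r = reflectance_fraction
    if not 0.0 < r <= 1.0:
        raise ValueError("reflectance must be a fraction in (0, 1]")
    return (1.0 - r) ** 2 / (2.0 * r)

# illustrative reflectance percentages (made-up, not values from Table 6 or 7)
for pct in (10.0, 30.0, 60.0):
    print(f"R = {pct:4.1f}%  ->  K/S = {kubelka_munk_ks(pct / 100.0):.2f}")
```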
When more B dye is added to the A dye, the cotton-fiber samples give progressively fading shades of light brown, light red, and pink on the dyeing results fixed with iron(II) sulfate, alum, and calcium oxide, respectively, as shown in Fig. 5. This indicates that adding B dye in various proportions tends to enrich the shades of the A dye by creating shade gradations, attributed to competition between the brazilein pigment of the A dye and the tannin pigment of the B dye. Increasing the B dye level in the composition produces shades of lower brightness when dyeing ends with iron(II) sulfate fixation, whereas the alum and calcium oxide fixers give shades of increased brightness. This finding agrees with the colorimetric parameters in Table 5 and the color reflectance in Table 7. Overall, increasing the amount of B dye added raises the reflectance percentage of the dyeing, indicating a decrease in color intensity as the tannin level rises. Similarly, the combination of the A and C dyes gives progressively fading shades of blackish brown, dark pink, and light pink on the dyeing results fixed with iron(II) sulfate, alum, and calcium oxide, respectively, as shown in Fig. 6. This indicates that adding C dye in various proportions tends to enrich the shades of the A dye through shade gradations attributed to competition between the brazilein pigment of the A dye and the combined tannin and mimosine pigments of the C dye. Increasing the C dye level gives varying CIELAB results: AC dyeing ending with iron(II) sulfate fixation shows increasing brightness, whereas the alum and calcium oxide fixers give shades of stable and decreasing brightness, respectively. This finding agrees with the colorimetric parameters in Table 5 and the color reflectance in Table 6. In general, increasing the amount of C dye added raises the reflectance percentage of the dyeing, indicating a decrease in color intensity as the tannin and mimosine levels rise.

E. Staining Scale

The wash-fastness properties of the nine dye compositions studied here are described by the staining-scale values in Table 8(a)-(b). The original dyes A, B, and C each induce specific shades with staining-scale values of 2-3 (poor), 3 (fair), and 3 (fair), respectively. Wash fastness is important to evaluate because it reflects the ability of each dye to maintain its presence in the mixture through competition with its companion dye for interaction with the alum mordant, which mediates the interaction between the cotton fibers and the dye. Compared with the B and C dyes, the A dye has the lowest wash fastness. In the early stages of adding B dye to A dye (the AB1 dye), the staining-scale value remains at 2-3 (poor), and this constant value holds up to 50% B dye in the mixture (the AB2 dye).
An increase in the staining-scale value first appears when 75% of the B dye is added to the mixture (the AB3 dye). Comparing the staining-scale values of the A and B dyes, the parent dyes of this combination, shows that the tannin pigment of the B dye out-competes the brazilein pigment of the A dye. The same pattern of color behavior appears in the dyeing results obtained with the A and B dye combinations, whichever of the three fixers (iron(II) sulfate, alum, or calcium oxide) ends the process. The appearance of the shades is strongly influenced by the mutually supportive or mutually exclusive behavior of the chromophores and auxochromes contributed by the dyes and by the fixers. This is evidenced by the similarity between the AB1 and AB2 dyeings and the A dyeing under all three fixer types, and likewise between the AB3 dyeing and the B dyeing with the same fixers. This finding is in line with the similarity of the three colorimetric parameters (brightness, redness, and yellowness) of the AB1 and AB2 dyes to the A dye, and of the AB3 dye to the B dye. Evaluation of C dye addition to the A dye (the AC dyes) shows that when a small amount of C dye is added (the AC1 dye), the staining-scale value holds constant at 3 (fair). In contrast to the AB dyes, however, the staining scale increases as more C dye is added (up to 50%) and decreases when the C dye fraction is raised to 75% (the AC3 dye). This behavior is predicted to arise from the mimosine pigment that accompanies the tannin in the C dye. As explained previously, mimosine enriches the auxochrome variety through its C-N and N-H bonds; the high electronegativity of the nitrogen atoms tends to strengthen the interaction with the alum mordant and to limit any decrease in the staining scale of dyeings that use this dye. This suggests a synergistic, mutually reinforcing effect between the chromophores and auxochromes of brazilein on the one hand and of the tannin and mimosine pigments on the other. The finding is also in line with the physical appearance of the AC1, AC2, and AC3 dyes, which still resemble the A dye through the typical gradations of the brazilein pigment, and it is reinforced by the similarity of the brightness, redness, and yellowness values of the A dye and the AC1-AC3 dyes shown in Table 5. The best wash fastness is obtained with the AB and AC dyes fixed with iron(II) sulfate, whereas wash fastness decreases when dyeing ends with alum or calcium oxide fixation. Thus, the choice among the three fixers influences not only the shade variation and colorimetric parameters but also the wash fastness, which depends on the strength of interaction between the fabric fibers and the alum mordant, between the mordant and the specific dye, and between the dye and the fixer used. On closer inspection, applying the iron(II) sulfate fixer as soon as the dyeing process is completed yields the best wash fastness, producing a specific shade with the highest staining-scale value, i.e., 4 (good).
To give a more complete picture of the interactions that occur during dyeing with the Sappanwood bark, Cocos nucifera fiber, and Leucaena leucocephala leaf extracts, Figs. 8-16 illustrate the interactions between the cotton fiber, the mordant, each pigment (brazilein; tannin; tannin with mimosine), and each fixer type (iron(II) sulfate, alum, and calcium oxide).

IV. CONCLUSIONS

The cotton dyeing in this study used natural dyes extracted from Sappanwood bark, Cocos nucifera fiber, and Leucaena leucocephala leaf. The three extracts showed maximum absorption at 538.57 nm, 384.41 nm, and 417.84 nm, respectively, confirming the presence of brazilein in the Sappanwood bark extract and of tannins in the Cocos nucifera fiber and Leucaena leucocephala leaf extracts. In addition to tannins, mimosine was detected as a tannin companion in the Leucaena leucocephala leaf extract, indicated by the difference between the maximum-absorption wavelengths of the Cocos nucifera fiber and Leucaena leucocephala leaf extracts. Increasing the level of Cocos nucifera fiber extract combined with Sappanwood bark produced specific effects on the color reflectance percentage, brightness, redness, and yellowness; this is associated with an increase in the type and quantity of chromophores and auxochromes in the combination of brazilein and tannin pigments. The same held for dyeing with the combination of brazilein and the tannin and mimosine of Leucaena leucocephala leaf extract; here the mimosine pigment accompanying the tannin improved the quality of the interaction, particularly with the alum mordant, and thereby the quality of the resulting dyeing. In addition to the dye composition, applying the three different fixer types enriched the shades, produced specific colorimetric parameters, and significantly affected the wash fastness.
Dorsal Cheilectomy Using Great Toe Metatarsophalangeal Joint Arthroscopy for the Treatment of Hallux Rigidus

Great toe metatarsophalangeal joint (MTPJ) arthroscopy has been described in the literature for more than 50 years for the treatment of a multitude of first MTPJ pathologies, including hallux rigidus, hallux valgus, and osteochondritis dissecans, among others. Despite this, great toe MTPJ arthroscopy has not been widely adopted for these conditions because of reported difficulties with adequate visualization of the joint surface and manipulation of surrounding soft-tissue structures with the instruments available. We propose a simple technique, with illustrations of the operating room setup and procedural steps, for performing a dorsal cheilectomy in patients with early-stage hallux rigidus using great toe MTPJ arthroscopy and a minimally invasive surgical burr, in a way that is reproducible by foot and ankle surgeons.

Hallux rigidus is a condition that causes pain and restricted range of motion of the great toe metatarsophalangeal joint (MTPJ) secondary to degenerative joint disease. Although the primary etiology of this condition is unknown, the pathologic process typically includes osteophyte formation and cartilage degeneration dorsally in the early stages, followed by involvement of the entire great toe MTPJ in the later stages. 1 For patients with early-stage disease, marked by pain at the extremes of range of motion and the presence of a dorsal osteophyte, dorsal cheilectomy has been shown to be a satisfactory treatment option. Dorsal cheilectomy performed with an open technique is reported to provide greater than 90% patient satisfaction. 2,3 Unfortunately, despite initial improvement after open dorsal cheilectomy, approximately 10% of patients require a further surgical procedure due to disease progression within 2 years of their index procedure. 4 A minimally invasive technique using great toe MTPJ arthroscopy with high-torque, low-speed burrs has been described, with the potential for quicker recovery, faster rehabilitation, and decreased length of hospital stay in the early postoperative period, and with low complication rates at intermediate follow-up. 5,6 Recent technological advances have made the great toe MTPJ more accessible to arthroscopic techniques, including 1.9-mm high-definition cameras and 1.4-mm bipolar radiofrequency ablation devices. 7 Even with these advances, contraindications to great toe MTPJ arthroscopy remain, including severe soft-tissue swelling, arterial insufficiency, infection, and the presence of a large osteophyte. 8 Minimal literature is available on the use of great toe MTPJ arthroscopy for the treatment of hallux rigidus, with one previous study noting high patient satisfaction and minimal postoperative complications. 6 Our aim is to illustrate a simple technique for performing an arthroscopic dorsal cheilectomy for the treatment of hallux rigidus in a way that is reproducible by arthroscopically trained foot and ankle surgeons.

Surgical Technique (With Video Illustration)

The patient is positioned supine on the operating table under general anesthesia, with the ankle of the operative lower extremity hanging off the end of the bed. A small bump made from rolled bedsheets or towels is placed under the operative hip to let the ankle lie in a neutral position.
A safety strap is placed around the patient's waist, and a 4-inch strip of silk tape is applied around the contralateral leg to reduce overall body movement during the procedure and to prevent the contralateral leg from falling off the operative table. The patient is then prepped and draped in typical sterile fashion. Toes 2 through 5 are wrapped together with a strip of Ioban (3M, St. Paul, MN). The hallux is left uncovered, and the procedure is performed without distraction to allow manipulation of the first MTPJ when required. Pertinent anatomic structures are marked with a surgical skin marker, including the extensor hallucis longus (EHL) tendon and the MTPJ, and the dorsal medial and dorsal lateral portals are marked approximately 5 mm from the EHL tendon (Fig 1). The arthroscope and arthroscopic tools are then inserted via a standard nick-and-spread technique. The senior author (K.M.) uses a NanoScope (Arthrex, Naples, FL), a 1.9-mm high-definition flexible camera with a rubber blunt tip, and a 2.5-mm shaver for initial debridement of the great toe MTPJ. Thickened synovium and hemorrhagic soft tissue should be debrided with the 2.5-mm shaver (Fig 2). The great toe MTPJ is evaluated to assess the amount of remaining cartilage, the amount of exposed subchondral bone, and the size of the dorsal osteophyte. Loose chondral flaps on the surfaces of the metatarsal head and proximal phalanx are debrided with the shaver, and all bone debris is removed from the joint (Fig 3). Once the thickened synovium, bone debris, and loose chondral tissue are adequately debrided and removed, a 3- to 4-mm accessory portal is made approximately 2 cm proximal and slightly medial to the MTPJ. This portal is made parallel with the dorsal cortex of the first metatarsal and proximal enough to allow the burr to be inserted within the soft tissues when removing the dorsal osteophyte. A periosteal elevator is inserted through the accessory portal and visualized within the joint inferior to the EHL tendon. The 4.3-mm × 13-mm conical wedge minimally invasive surgical (MIS) low-speed, high-torque burr (Arthrex) is then inserted along the path created by the periosteal elevator, underneath the EHL tendon and into the great toe MTPJ (Fig 4). Intraoperative fluoroscopy, in combination with arthroscopic visualization, is then used to determine how much of the dorsal osteophyte of the first metatarsal should be removed (Fig 5). The dorsal osteophyte is excised under direct visualization using the MIS low-speed, high-torque burr with slow medial-to-lateral sweeps. Successful removal of the osteophyte (up to 30% of the metatarsal head) is then confirmed with intraoperative fluoroscopy (Fig 6). The joint is taken through a complete range of motion under direct visualization to ensure no mechanical impingement remains. Before removing the arthroscope, we ensure all bone debris is removed with the shaver. The three small portal incisions are closed with simple nylon stitches, and the foot is placed in a soft postoperative dressing. The patient is allowed to bear weight as tolerated in a hard-soled shoe. A demonstration of great toe MTPJ arthroscopy for completion of a dorsal cheilectomy for the treatment of hallux rigidus can be found in Video 1.
Discussion

Our described surgical technique for the treatment of early-stage (grade 1, 2, and select grade 3) hallux rigidus, using great toe MTPJ arthroscopy and a low-speed, high-torque MIS burr to complete a dorsal cheilectomy, may decrease postoperative pain, stiffness, and scar formation. As in open techniques, the 1.9-mm NanoScope (Arthrex) allows the procedure to be completed under direct visualization to ensure adequate resection of the dorsal osteophyte. An arthroscopic approach to dorsal cheilectomy has many advantages over open techniques (Table 1). A major benefit is that it allows more targeted debridement of the chondral surfaces and removal of loose bone debris and thickened synovium, which may act as pain generators in the postoperative period. 5 Patients who undergo dorsal cheilectomy using our arthroscopic technique are allowed to weight-bear as tolerated in a hard-soled shoe, with no limitations on flexion and extension of the great toe MTPJ. One previous study demonstrated that dorsal cheilectomy can be performed arthroscopically with high patient satisfaction and minimal complications at an average follow-up of more than 4 years. 6 Using only three small incisions decreases soft-tissue morbidity and can lower the risk of wound complications in patients with diabetic neuropathy or vascular insufficiency. 9 Our technique also allows direct visualization of the articular surface before and after removal of the dorsal osteophyte, permitting more limited fluoroscopic exposure than traditional techniques. In addition, this method decreases the risk of EHL rupture during the procedure, because the accessory portal is created and the MIS burr inserted under direct visualization. Within our practice, management of early-stage (grade 1, 2, and select grade 3) hallux rigidus is first attempted with great toe MTPJ arthroscopy, but the surgeon retains the option to convert to an open procedure based on intraoperative findings; in particular, if adequate visualization cannot be obtained with the arthroscope, conversion to an open procedure may be necessary. Before implementing an arthroscopic technique for the treatment of hallux rigidus in one's practice, it is important to understand the technical pitfalls, which include appropriate arthroscopic technique, portal placement, and instrumentation (Table 2). The decreased risk profile, reported high patient satisfaction at intermediate follow-up, faster patient recovery, and decreased use of intraoperative fluoroscopy are all benefits of our described technique. In conclusion, improved arthroscopic techniques can allow arthroscopically trained orthopaedic foot and ankle surgeons to perform a dorsal cheilectomy in patients with early-stage hallux rigidus in a reproducible way.
Insights into Fanconi Anaemia from the Structure of Human FANCE

ABSTRACT

Fanconi Anaemia (FA) is a cancer predisposition disorder characterized by spontaneous chromosome breakage and high cellular sensitivity to genotoxic agents. In response to DNA damage, a multi-subunit assembly of FA proteins, the FA core complex, monoubiquitinates the downstream FANCD2 protein. The FANCE protein plays an essential role in the FA process of DNA repair as the FANCD2-binding component of the FA core complex. Here we report a crystallographic and biological study of human FANCE. The first structure of a FA protein reveals the presence of a repeated helical motif that provides a template for the structural rationalization of other proteins defective in Fanconi Anaemia. The portion of FANCE defined by our crystallographic analysis is sufficient for interaction with FANCD2, yielding structural information on the mode of FANCD2 recruitment to the FA core complex. Disease-associated mutations disrupt the FANCE-FANCD2 interaction, providing structural insight into the molecular mechanisms of FA pathogenesis.

INTRODUCTION

A group of rare genetic conditions collectively defined as chromosome instability syndromes has received much attention in recent years, as their study continues to provide important insight into the molecular mechanisms responsible for the integrity of our genome. One such condition is Fanconi Anaemia (FA), a genetically heterogeneous disorder characterized by congenital abnormalities, aplastic anaemia and predisposition to cancer, especially acute myeloid leukemia and squamous cell carcinomas (1-3). A conspicuous cellular feature of FA is chromosomal fragility and hypersensitivity to DNA cross-linking agents such as mitomycin C, diepoxybutane and cisplatin. Sensitivity to genotoxic agents suggests that the pathogenic effects of FA are due to defects in the molecular mechanisms of DNA damage signalling and repair (4-6). Twelve different FA subtypes (A, B, C, D1, D2, E, F, G, I, J, L, M) have been isolated, and the genes for all but type I have been cloned (7). The majority of FA proteins do not possess clear functional motifs, and only a subset of them have been associated with an enzymatic activity, including an E3 ubiquitin ligase, FANCL (8,9), and two helicases, FANCJ (10,11) and FANCM (12,13). A nuclear multi-subunit complex of at least eight FA proteins (FANCA, FANCB, FANCC, FANCE, FANCF, FANCG, FANCL and FANCM), the FA core complex (14), adds a single ubiquitin chain to FANCD2 following DNA damage or replicative stress (15) (Figure 1A). Monoubiquitination acts as a signal for FANCD2 recruitment to nuclear foci, where it colocalizes with cell-cycle checkpoint regulation and DNA repair proteins such as BRCA1, BRCA2 and RAD51 (15-17). Within the FA core complex, individual constituents engage in multiple interactions with each other, giving rise to functional subcomplexes (18,19). Recent evidence also points to additional roles of the FA core complex besides FANCD2 ubiquitination (20). Although much has been learned about the role of the FA proteins in the maintenance of genome stability, our understanding of the molecular mechanisms underlying their function remains largely incomplete. FANCE is essential for FANCC accumulation in the nucleus and assembly of the FA core complex (21,22).
Moreover, FANCE localizes to constitutive nuclear foci (21) and becomes associated with ubiquitinated FANCD2 and BRCA2 in a chromatin complex (23). FANCE is the only member of the FA core complex for which a direct association with FANCD2 has been demonstrated (21). Indeed, it has been proposed that FANCE represents the essential link between the FA core complex and FANCD2 (19,21,22). Here we describe the identification and crystallographic analysis of a large, evolutionarily conserved region of human FANCE. The first structure of a FA protein reveals the presence of a repeated helical motif, which was not apparent from the analysis of its amino acid sequence and represents a structural template for other proteins defective in Fanconi Anaemia. We demonstrate that the FANCE region defined by the structure is sufficient for interaction with FANCD2 and identify an epitope on the FANCE surface that is critical for FANCD2 binding. Disease-associated mutations in FANCE and FANCD2 disrupt the FANCE-FANCD2 interaction, providing a structural rationale for their pathological effect in FA patients.

Purification and crystallization

A C-terminal segment of the human FANCE protein spanning amino acids 273 to 536 (the natural C-terminus) was cloned into the pET28a plasmid vector and over-expressed in the E. coli BL21(DE3) strain as a 6xHis-tagged protein. The recombinant protein was purified by standard Ni2+-affinity chromatography over a HisSelect Sepharose (Sigma, UK) column. The histidine tag was cleaved with thrombin protease and the digested sample was passed again over the HisSelect column in order to remove the cleaved tag. The FANCE protein was further purified by size-exclusion chromatography using a Superdex column.

Figure 1. (B) Orange boxes represent regions of the protein that are predicted to constitute independently folded domains; the two nuclear localization sequences in the middle section of the protein are drawn in black. (C) Cartoon representation of the crystal structure of amino acids 273 to 536 of human FANCE. The protein chain is shown as a ribbon, rainbow-coloured from blue at the N-terminal end to red at the C-terminal end. Two views are shown, differing by a 90° rotation around an axis aligned with the long dimension of the molecule. The alpha-helical segments in the structure are labelled α1 to α14. The positions of the five helical repeats identified in FANCE are indicated next to the structure.

Phasing and refinement

The X-ray crystal structure of FANCE was solved using the anomalous signal of the selenomethionine-substituted protein. Native diffraction data to a resolution of 2.0 Å and multiple anomalous dispersion data to 2.8 Å were collected at beamline ID29 of the European Synchrotron Radiation Facility (ESRF), Grenoble, France. The diffraction intensities were measured in MOSFLM (24) and merged in SCALA (25). The positions of the selenium atoms in the asymmetric unit were determined in Shake-and-Bake (26) and used as input for phasing in SHARP (27). The solvent-modified map calculated by SHARP was readily interpreted by ARP/wARP (28), which produced an almost complete model of the structure. Refinement of the structure was carried out in REFMAC5 (29), together with minor manual rebuilding in COOT (30). The final crystallographic model comprises 249 amino acids and 223 water molecules (2067 atoms), and includes amino acids 275 to 535 of the human FANCE sequence.
Residues 301 to 307 in the loop linking helix 1 and helix 2, and residues 479 to 483 in the intra-helical loop of repeat FANC4, are not visible in the electron density map and are not included in the final model. The conformation of amino acids 484, 485 and 518 to 521 in the inter-helical loops of repeats FANC4 and FANC5 must be considered tentative, as the quality of the electron density is poor for these residues. 97.3% of residues in the crystallographic model are in the most favoured regions of the Ramachandran plot, 2.7% in the allowed regions and none in the disallowed regions of the plot. Figures were prepared with PyMOL (http://pymol.sourceforge.net).

Yeast two-hybrid analysis

The MATCHMAKER Two-Hybrid System 3 (Clontech) was used for yeast two-hybrid analysis according to the manufacturer's instructions and as described earlier (31). Briefly, GAL4-activation domain and GAL4-binding domain constructs were either sequentially transformed into AH109 yeast cells and subjected to selection on -trp/-leu/-his/-ade medium, or transformed separately into the AH109 and Y187 yeast strains and mating cultures plated onto selection medium. Transformations were performed using a PEG/ssDNA/lithium acetate procedure. Colonies that grew on -trp/-leu/-his/-ade selection media were transferred onto filters and tested for β-galactosidase expression with X-gal. The activation of the three reporters in this system (His3, Ade2 and LacZ) was assayed in each experiment. Yeast colony growth therefore represents simultaneous activation of the His3 and Ade2 reporters, and blue colouring of colonies following X-gal treatment represents activation of the LacZ reporter. Experiments were performed at least in triplicate.

Domain mapping and structural analysis of FANCE

The human FANCE gene encodes a protein of 536 amino acids. Examination of its sequence in DISOPRED (32) reveals the presence of a disordered region between residues 170 and 270, linking N- and C-terminal domains with high predicted secondary structure content (Figure 1B). In addition, amino acids in the C-terminal domain display a higher degree of evolutionary conservation relative to the rest of the protein (Figure 2). Thus, FANCE sequence analysis suggests that its C-terminal region might represent a distinct protein domain capable of autonomous folding and suitable for biophysical investigation. We expressed and purified a region of human FANCE spanning residues 273 to 536 (264 amino acids). Initial crystallization experiments were unsuccessful. SDS-PAGE analysis showed a marked tendency of the recombinant protein to multimerize through disulphide-mediated cross-linking. Systematic replacement of cysteine residues with alanine showed that the C391A mutation removed the tendency of the protein to form covalent aggregates (Supplementary Figure 1) and promoted the growth of crystals suitable for high-resolution X-ray analysis. The crystal structure was solved to a resolution of 2.0 Å by the multiple anomalous dispersion method using selenomethionine-substituted protein (Table 1). In the rest of the article, we will refer to the FANCE region studied here simply as FANCE.

General features of the structure

The crystal structure reveals that FANCE consists predominantly of helices (thirteen α-helices, one 3₁₀-helix) and no β-strand (Figure 1C). The molecule adopts an elongated, non-globular shape, with a size of 70 Å in its longest dimension, a width of 30 Å and a thickness of 20 Å.
The polypeptide folds in a continuous, right-handed solenoidal pattern from the N- to the C-terminal end of the chain. Beginning with helix α5, the loops of the solenoid become more regular, and it is possible to identify five copies of a helical motif repeating to the C-end of the protein. Thus, the most outstanding feature of the FANCE structure is the presence of a repeated motif, which was not detected by inspection of the amino acid sequence. The repeats vary between 30 and 40 amino acids in length and fold into an antiparallel helical hairpin (Figures 2 and 3). The two helices (H1 and H2) of the repeat are of similar size, spanning between three and four turns, and cross at an angle varying between 21° and 35°. The helices in the repeat are straight or display only a minimal degree of bending. The only exception to these characteristics is seen in helix H1 of repeat 1, which is longer at five turns and shows a pronounced kink between the second and third helical turn. In each repeat, helices H1 and H2 make extensive contacts with each other and with the helices of neighbouring repeats, thus generating a continuous hydrophobic core extending throughout the molecule. The loops connecting the helices within a repeat, as well as adjacent repeats, vary significantly in length and conformation. Each repeat lies within an ideal plane that is broadly perpendicular to the long axis of the molecule, and stacking of the repeats in a consecutive array generates a double layer of helices. Within the structure, adjacent repeats are related by a rotation and a tilt along an axis parallel to the long dimension of the structure, which together generate a super-helical curvature. The rotation angle between repeats 1 and 2 and between repeats 3 and 4 spans 35°, whilst the rotation angle between repeats 2 and 3 and between repeats 4 and 5 is 15°. The presence of super-helical curvature does not confer an overall curved shape to the structure, since the direction of rotation changes as one moves from the N- to the C-terminus of the chain: repeats 2 and 3 show a left-handed rotation, whereas repeats 4 and 5 show a right-handed rotation. Because of the reversal in the sense of rotation, the helices in repeat 5 become parallel again with those of repeat 1.

Figure 2. Multiple sequence alignment of FANCE orthologues. The amino acid sequences of mouse, human, chicken and zebrafish FANCE were aligned in the program ClustalW. Absolutely conserved residues are highlighted in green, identical residues in yellow and conserved residues in cyan. The secondary structure elements identified in the FANCE structure are marked above the alignment. FANCE residues involved in FANCD2 binding by bioinformatic and yeast two-hybrid analysis are marked by an asterisk below the alignment. Residues mutated in Fanconi Anaemia are highlighted by a red box.
The FANC repeat

The repeated motif identified in the structure of human FANCE represents a novel member of the large family of two- and three-helical motifs that includes the well-characterized HEAT and ARM repeats (33). Here we will refer to the helical motif in human FANCE as the FANC repeat. Although the five FANC repeats clearly share the same architecture, their pairwise superposition gives a root mean square deviation (rmsd) varying between 1.4 and 2.7 Å over 24 alpha-carbon positions, thus pointing to significant differences between repeats (Figure 3A). The three C-terminal repeats are structurally closer to each other, whereas the first and second FANC repeats are more divergent. A structure-based alignment of the five repeats of human FANCE shows that no amino acid is absolutely conserved across repeats (Figure 3B). However, eight specific positions in the FANC repeat show a strong preference for hydrophobic residues with aliphatic side chains, prevalently leucine but also valine, isoleucine and methionine. No significant conservation of any other residue is observed in the repeat.

Figure 3. The FANC repeat. (A) Superposition of the five copies of the repeated helical motif revealed by the crystal structure. Each repeat is drawn as a narrow tube, with the two helical segments coloured in blue. (B) Structure-based sequence alignment of the five FANC repeats. Conserved hydrophobic residues are highlighted in green. Amino acids that become buried at the intra- and inter-repeat interfaces are boxed. Two sets of three asterisks below the alignment indicate the two triplets of hydrophobic residues that interact in a similar fashion within a repeat. (C) Example of intra-repeat packing of hydrophobic side chains at conserved triplet positions in the FANC3 repeat. The trajectory of the main-chain atoms of the repeat is indicated by a narrow tube, whereas the two triplets of interacting side chains are drawn as sticks, coloured in two hues of green.
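As a brief computational aside (not part of the original analysis), the pairwise-superposition RMSD quoted above can be reproduced for any two equal-length Cα coordinate sets, e.g. repeats extracted from the deposited 2ILR coordinates, using Biopython's SVD superimposer. The coordinates below are random stand-ins, not real repeat coordinates.

```python
import numpy as np
from Bio.SVDSuperimposer import SVDSuperimposer

def repeat_rmsd(coords_ref, coords_mov):
    """Least-squares superpose two equal-length C-alpha coordinate sets
    (N x 3 numpy arrays) and return the RMSD after superposition."""
    sup = SVDSuperimposer()
    sup.set(coords_ref, coords_mov)
    sup.run()
    return float(sup.get_rms())

# Toy data: 24 C-alpha positions per repeat, matching the comparison above;
# random coordinates stand in for repeats extracted from the 2ILR model.
rng = np.random.default_rng(0)
repeat_a = rng.normal(size=(24, 3))
repeat_b = repeat_a + rng.normal(scale=0.5, size=(24, 3))  # perturbed copy
print(f"RMSD = {repeat_rmsd(repeat_a, repeat_b):.2f} A")
```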
In keeping with the symmetric architecture of the repeat, the eight conserved positions are distributed equally between the two helical segments. Thus, each helix contains two hydrophobic dipeptides spaced one turn apart, located in the third and fourth turns of helix H1 and in the second and third turns of helix H2, respectively. The only non-hydrophobic residues within the eight conserved positions are Thr364 in FANC1, which is buried at the interface between FANC1 and FANC2, and the solvent-exposed Glu517, Lys528 and Lys532 in FANC5. The FANCE structure provides a rationale for the observed pattern of amino acid conservation in the FANC repeat: residues at conserved positions contribute the majority of the side chains that form the hydrophobic core of the protein. Conservation of amino acids at specific positions across repeats reflects the structural conservation in the mode of packing of the two antiparallel helices in a repeat. Thus, within each repeat it is possible to identify two triplets of amino acids, at positions 8, 9, 35 and positions 12, 31, 32, respectively, that make equivalent hydrophobic contacts (Figure 3B). The side chain at position 12 in H1 is inserted between the side chains at positions 31 and 32 in H2, whilst the side chain at position 35 in H2 interdigitates in a similar fashion with the side chains at positions 8 and 9 in H1 (Figure 3C). In the case of the longer FANC1 repeat, an additional triplet is formed by Arg371 and Ile372 in helix H1 and Leu383 in helix H2. Hydrophobic positions 13 in helix H1 and 36 in helix H2 are not involved in triplet-like interactions and instead engage in inter-repeat contacts with the same helix of the proximal, C-terminal repeat. The conserved hydrophobic side chains further intermesh across neighbouring repeats, giving rise to the compact and uninterrupted hydrophobic core of the molecule. The core of the FANCE structure is completed by aromatic or hydrophobic residues, two or three per repeat, situated in the loops between the helices, that bury their side chains inside the protein. No other position maintains a significant degree of conservation among FANC repeats. However, fourteen residues outside the eight conserved hydrophobic positions are invariant between human and zebrafish FANCE (Figure 2). Of these, eleven are situated within or near intra- and inter-repeat loops, and their conservation is explained by their involvement in direct or water-mediated polar contacts that are important for maintaining the conformation of the polypeptide chain. Thus, Tyr394, Gln416 and Tyr500 provide sidechain-to-mainchain hydrogen bonds that bridge between adjacent repeats, whilst serine residues 356, 380 and 486 act as N-terminal helical caps. Three of the four conserved proline residues are part of a short tetrapeptide sequence of consensus PxL(S/Q) that occurs six times in the structure, spanning inter-helical loops or at the extremities of helical segments.
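Purely as an illustration (this is not the detection method used in the paper), the loose sequence signature just described — aliphatic residues at eight positions of a roughly 36-residue helical hairpin — could be scored with a sliding-window scan like the one below. The positions and the hydrophobic residue set come from the text; the window length, score threshold and demo sequence are assumptions for demonstration only.

```python
HYDROPHOBIC = set("LVIM")  # aliphatic residues favoured at the conserved positions
# 1-based positions within a ~36-residue repeat window, as described in the text:
# 8, 9, 12, 13 in helix H1 and 31, 32, 35, 36 in helix H2
CONSERVED_POSITIONS = (8, 9, 12, 13, 31, 32, 35, 36)
WINDOW = 36

def score_window(window: str) -> int:
    """Count conserved positions occupied by aliphatic residues."""
    return sum(window[p - 1] in HYDROPHOBIC for p in CONSERVED_POSITIONS)

def scan(sequence: str, min_score: int = 6):
    """Yield (start, score) for windows that look FANC-repeat-like."""
    for i in range(len(sequence) - WINDOW + 1):
        s = score_window(sequence[i:i + WINDOW])
        if s >= min_score:
            yield i + 1, s

# made-up demo sequence with aliphatic residues planted at the eight positions
demo = "A" * 7 + "LL" + "AA" + "LM" + "A" * 17 + "VL" + "AA" + "IL" + "A" * 20
for start, score in scan(demo):
    print(f"window starting at residue {start}: score {score}/8")
```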
Similarity to other structures

Our crystallographic analysis reveals that FANCE belongs to the large family of non-globular protein structures assembled by tandem repetition of helical motifs. Fold-recognition analysis in DALI (34) shows that FANCE is structurally similar to HEAT- and ARM-repeat proteins such as importin-β (35), β-catenin (36), the Cand1 protein (37) and the PR65/A subunit of protein phosphatase 2A (38). Indeed, comparative structural analysis reveals that the FANC repeat shares four of the seven conserved positions that form the hydrophobic core in the HEAT and ARM repeats (33) (Figure 4). Residues 8 and 12 in helix H1 of the FANC repeat coincide with residues 13 and 17 in helix A of the HEAT repeat (helix H2 of the ARM repeat); residues 31 and 35 in helix H2 of the FANC repeat correspond to residues 28 and 32 in helix B of the HEAT repeat (helix H3 of the ARM repeat). The FANC repeat differs from well-characterized helical motifs such as the HEAT and ARM repeats in some important ways. The most evident feature is the absence of conserved positions beyond the selection of hydrophobic amino acids that become buried at the intra- and inter-repeat interfaces. In the helical regions of the repeat, no conservation of structural amino acids is observed, such as the proline at position 11 in the first helix of the HEAT and ARM repeats or the glycine at position 8 of the ARM repeat. The lack of conserved prolines or glycines at specific positions means that both helices in the FANC repeat are straight or minimally bent, at variance with the kinked first helix of the HEAT repeat, which becomes two distinct helical segments in the ARM repeat. Likewise, no conservation of inter-repeat contacts is present, such as the interaction between the aspartic acid at position 19 of the inter-helical loop of the HEAT repeat and the arginine at position 25 of the next repeat (33).

Functional implications for other FA proteins

A virtually complete complement of FA proteins can be traced back in evolution to the last common ancestor of vertebrate organisms, 450 million years ago (39). Probing further down the evolutionary tree of individual FA proteins reveals that conservation is limited to the FANCD2 and FANCL proteins (40). Thus, the components of the FA core complex seem to have arisen in order to introduce an additional level of complexity to a primitive FA network of DNA repair. No structural information is presently available about the three-dimensional architecture of the FA core complex, or indeed any of its constituent proteins. Interestingly, a majority of components of the FA core complex (FANCA, FANCB, FANCC, FANCE, FANCG and FANCF), as well as FANCD2, share an unusually high content of leucine residues (Supplementary Table 1), ranging from 12.1% for FANCB to 19.5% for FANCG, above the average leucine frequency in vertebrate proteins (less than 10%). Furthermore, their predicted content of secondary structure is high, which suggests that the FA proteins of the core complex are independently folded and do not rely on the structural environment provided by the complex for their tertiary structure. An abundance of aliphatic residues, and of leucines in particular, is an essential requirement in order to achieve the kind of packing observed in the hydrophobic core of non-globular, helical repeat proteins. For instance, the leucine content of the protein phosphatase 2A PR65/A subunit and of β-catenin is 13.8% and 14.7%, respectively. It is therefore likely that, as shown here for FANCE, other leucine-rich FA proteins adopt a non-globular fold based on arrays of a repeated helical motif. Indeed, seven tetratricopeptide repeats, short helical motifs structurally related to the HEAT and ARM repeats, have been convincingly located in the FANCG sequence (41).
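The leucine-content comparison above reduces to a simple frequency count; the sketch below is a trivial helper (the demo peptide is a made-up stand-in, not a real FA protein sequence, which would instead be fetched from a sequence database).

```python
def leucine_fraction(sequence: str) -> float:
    """Fraction of leucine (L) residues in a one-letter-code protein sequence."""
    seq = sequence.upper()
    return seq.count("L") / len(seq)

# demo peptide only -- substitute the real FANCA/FANCB/... sequences
demo = "MLSLLEKLLAVLDERLMQLLS"
print(f"Leu content: {leucine_fraction(demo):.1%}")  # compare with the ~10% vertebrate average
```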
Based on the FANCE structure, we predict that, in addition to FANCE, the leucine-rich FA proteins FANCA, FANCB, FANCC, FANCF and FANCD2 fold partially or entirely into solenoidal structures constituted by arrays of FANC-like repeats.

Interaction with FANCD2

Recruitment of FANCD2 to the FA core complex is essential for its monoubiquitination after DNA damage. Evidence of a direct interaction between FANCD2 and FANCE in vitro and in vivo has led to the proposal that FANCE represents a physical link between FANCD2 and the FA core complex (21,22). Consequently, we investigated whether the FANCE structure described here included the FANCD2-binding domain. Yeast two-hybrid analysis revealed that a region of FANCE spanning amino acids 273 to 536, coinciding with the recombinant FANCE protein prepared for structural analysis, interacted strongly with full-length FANCD2 (Figure 5). The C391A mutant used for structure determination retained wild-type affinity for FANCD2 in the assay. Thus, the structure-based yeast two-hybrid analysis recapitulates and refines previous findings concerning the FANCE-FANCD2 interaction, by defining the structural domain of FANCE that is sufficient for FANCD2 binding. It has also been reported that the interaction with FANCE is mediated by the N-terminal region of FANCD2 (42). In agreement with such findings, we found that two N-terminal fragments, FANCD2(1102) and FANCD2(712), truncated at amino acids 1102 and 712 respectively, maintained FANCE binding in the yeast two-hybrid assay (Figure 5). Previous experiments have also demonstrated a strong interaction of FANCE with FANCC, another essential component of the FA core complex, and the existence of a ternary FANCC-FANCE-FANCD2 complex has been postulated (19,22). In contrast to the interaction with FANCD2, binding to FANCC was attributed to a central region of FANCE spanning its two NLS sites. In agreement with these observations, the yeast two-hybrid analysis showed that FANCE residues 273 to 536 did not interact with FANCC (Figure 5). Thus, our data support previous reports that different regions of FANCE are responsible for interacting with FANCC and FANCD2. In order to identify the site of interaction with FANCD2, we performed an in silico analysis of the FANCE structure using the optimal docking area (ODA) method (43). Briefly, ODA searches for contiguous patches on the surface of a protein with a favourable desolvation energy when buried, as is the case at a protein-protein interface. ODA analysis identified a set of contiguous residues with favourable desolvation energies centred around Phe522, in the exposed intra-helical loop of repeat FANC5 (Figure 6). Analysis of the ODA epitope reveals that Phe522 is part of a cluster of conserved, hydrophobic residues that includes Met487 and Leu523, which in turn is surrounded by the hydrophilic residues Glu448, Ser486 and Lys491. The presence of a hydrophobic centre contoured by polar amino acids poised to make charged interactions is in agreement with known interface features of non-obligate protein complexes. In support of their potential functional role, residues Glu448, Ser486 and Lys491 are conserved between human and zebrafish FANCE. The relevance of the site identified on the FANCE surface to FANCD2 binding was verified experimentally. To this purpose, we designed a double FANCE mutant in which the chemical nature of the solvent-exposed hydrophobic residues Phe522 and Leu523 was reversed by mutation to the hydrophilic glutamic acid.
If the surface patch identified by the ODA analysis is part of the interface in the FANCE-FANCD2 complex, the presence of unpaired charges in the hydrophobic environment of the interface would disrupt or significantly impair the ability of the two proteins to interact. Circular dichroism spectroscopy and gel filtration chromatography experiments confirmed that the mutant FL522-523EE protein retained the native conformation of wild-type FANCE (Supplementary Figure 2). Yeast two-hybrid analysis showed that the simultaneous mutation of Phe522 and Leu523 to glutamic acid abolished the interaction between FANCE and FANCD2. The identification of a single putative protein-protein interaction site implies that the primary function of the C-terminal region of FANCE is to associate with FANCD2 and does not include multiple, concurrent interactions. The occurrence of the conserved Ser486 at the putative protein-protein interface suggests that the FANCE-FANCD2 interaction might be regulated by phosphorylation.

Disease-associated mutations

Several missense mutations leading to amino acid substitutions have been identified in the FA genes. However, the molecular basis for the pathogenic effect of these mutations is still largely unknown. Four such mutations have been reported in the human FANCE sequence: S356G, R365K, R371W and A502T (Fanconi Anaemia Mutation Database) (Figure 7A). Mapping the mutations onto the structure reveals that three of the four changes (S356G, R365K, R371W) disrupt hydrogen-bond interactions between the side chain of the affected amino acid and neighbouring main-chain atoms that are important for maintaining the conformation of the polypeptide chain. Thus, the structure predicts that the pathogenic effect of these mutations is due to a local destabilization of the tertiary structure of the FANCE protein. The clearest example of such an effect involves Arg371 in the FANC1 repeat, which is at the centre of a network of direct or water-mediated hydrogen bonds involving the main-chain carbonyl groups of leucines 333 and 336 in the loop between helices 3 and 4, and of Leu367 of the FANC1 repeat, as well as the side chain of Asp338 (Figure 7B). Mutation of this arginine to tryptophan would therefore cause the loss of several structural hydrogen bonds, leading to disruption of the local protein conformation and severe destabilization of the FANCE fold. When tested in the yeast two-hybrid assay, no interaction was observed between the R371W FANCE protein and FANCD2 (Figure 5). Thus, our data provide a structural rationale for the pathological effect of the R371W mutation in FANCE. An identical pathogenic mutation, R302W, has also been reported in FANCD2 (Fanconi Anaemia Mutation Database). As this mutation occurs in the part of the FANCD2 sequence responsible for binding FANCE, we determined whether it would disrupt binding of the large FANCD2(1102) fragment to FANCE. As observed for the R371W FANCE mutant, no interaction between FANCE and R302W FANCD2(1102) was detected in the yeast two-hybrid assay (Figure 5). It is possible that the R302W mutation in FANCD2 acts in a similar way to the FANCE R371W mutation, by compromising the stability of the protein structure, and that the observed functional defect is caused by disruption of the FANCE-FANCD2 interaction.
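As an illustrative aside, the hydrogen-bonding network around Arg371 can be inspected directly in the deposited coordinates (PDB 2ILR) with Biopython. This is only a rough distance-based sketch, not the authors' analysis: the chain identifier "A", the local file name and the plain 3.5 Å cut-off (standing in for proper donor-acceptor geometry) are all assumptions.

```python
from Bio.PDB import PDBParser, NeighborSearch

# Rough sketch: list polar atoms within hydrogen-bonding distance of the Arg371
# guanidinium nitrogens in a locally saved copy of the 2ILR coordinates.
# Assumptions: file name "2ilr.pdb", chain "A", 3.5 A distance cut-off.
structure = PDBParser(QUIET=True).get_structure("fance", "2ilr.pdb")
model = structure[0]
arg371 = model["A"][371]

# every nitrogen/oxygen atom in the model except those of Arg371 itself
polar_atoms = [a for a in model.get_atoms()
               if a.element in ("N", "O") and a.get_parent() is not arg371]
ns = NeighborSearch(polar_atoms)

for donor_name in ("NE", "NH1", "NH2"):            # guanidinium nitrogens
    donor = arg371[donor_name]
    for acceptor in ns.search(donor.coord, 3.5):   # candidate H-bond partners
        res = acceptor.get_parent()
        print(f"Arg371 {donor_name} .. {res.get_resname()}{res.id[1]} "
              f"{acceptor.get_id()}  {donor - acceptor:.2f} A")
```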
The structure reveals a non-globular, solenoidal conformation assembled from tandem repeats of a short helical motif. Remarkably, FANCE folding requires only general conservation of amino acid type at a small set of repeat positions, which makes detection of such a motif difficult and explains why its presence had not been noted before. Packing in the hydrophobic core of helical repeat proteins relies on the aliphatic side chains of hydrophobic residues, such as leucine. The structure of the FANCE protein therefore provides a rationale for the high leucine content of its sequence and suggests that other FA proteins rich in leucines will adopt a comparable fold. Multiple arrays of short helical motifs have evolved as a convenient protein architecture for mediating protein-protein association. Indeed, helical repeat proteins are usually involved in constitutive or regulated interactions as part of binary or multi-subunit complexes; a cogent example is the regulation of the SCF multi-subunit ubiquitin ligase complex by the HEAT-repeat protein Cand1 (37). A tertiary structure based on arrays of helical repeats, such as that observed in FANCE, would therefore be well suited to the function of the FA proteins, which take part in a complex web of constitutive and regulatory interactions within and outside the FA core complex. Our data indicate that the region of FANCE defined by the crystallographic analysis is sufficient for interaction with FANCD2. Our results therefore provide insight into the process of FANCD2 association with the FA core complex, a necessary step in the activation of the FA pathway of DNA repair. Taken together with the existing evidence concerning the interaction of FANCE with FANCC and FANCD2, our finding of a large soluble segment of FANCE capable of independent binding to FANCD2, but not to FANCC, suggests a model of FANCE function. In this model, FANCE would be anchored to the FA core complex through a constitutive interaction with FANCC, whereas the C-terminal region of FANCE identified here would be free of contacts with the rest of the core complex and poised to interact with FANCD2 in a regulated way. Furthermore, our bioinformatic and experimental analysis defines an epitope on the FANCE surface that is critical for FANCD2 binding. In principle, the development of small molecules designed to disrupt the FANCE-FANCD2 interface could be useful in tumour therapy based on DNA cross-linking agents. In summary, the work described here represents an important first step in the structural rationalization of the molecular processes involved in FA. Although the FANCE-FANCD2 interaction is clearly essential, it is one example of a set of concomitant binary interactions that result in FANCD2 monoubiquitination. Future research will investigate the structural basis for the interactions between FANCE, FANCD2 and the other components of the FA core complex that are necessary to activate the FA pathway of DNA repair.

NOTE ADDED IN PROOF

Whilst our manuscript was in production, a structural analysis of the human FANCF protein appeared in print (Kowal, P., Gurtan, A.M., Stuckert, P., D'Andrea, A.D. and Ellenberger, T. (2007) Structural determinants of human FANCF protein that function in the assembly of a DNA damage signalling complex. J. Biol. Chem., 282, 2047-2055).
The study shows that the C-terminal domain of FANCF folds into a solenoid composed of repeated helical hairpins, in agreement with our prediction about the widespread adoption of this structural motif among Fanconi Anaemia proteins.

SUPPLEMENTARY DATA

Supplementary Data are available at NAR Online.

ACKNOWLEDGEMENTS

We would like to thank Daniel Nietlispach for NMR analysis of wild-type, 15N-labelled and 15N- and 13C-labelled FANCE protein, Gordon Leonard for help with X-ray data collection and Tammy Cheng for the bioinformatic analysis of the FANCE structure. This work was supported by a Wellcome Trust senior research fellowship award to L.P. Atomic coordinates and structure factors have been deposited in the RCSB PDB with accession code 2ILR. Funding to pay the Open Access Publication charge was provided by The Wellcome Trust. Conflict of interest statement. None declared.

Figure 7. (A) The helices are depicted as pink cylinders, connected by loops drawn as narrow tubes. The side chains of the relevant amino acids are drawn as ball-and-stick models. (B) Close view of the hydrogen-bonding network involving the side chain of Arg371. The disease-associated mutation R371W causes loss of hydrogen-bond interactions with surrounding residues. The protein main chain is drawn as a ribbon, relevant side chains as sticks and a water molecule as a small yellow sphere. Hydrogen bonds are depicted as orange dashed lines.
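As a companion to the Figure 7B description, the sketch below lists candidate hydrogen-bond distances between the Arg371 side-chain nitrogens and the acceptor atoms named in the text (main-chain O of Leu333, Leu336 and Leu367, and the side-chain oxygens of Asp338), using the deposited 2ILR coordinates. The file name, chain identifier and the 3.5 Å cutoff are assumptions, and water-mediated contacts are not handled.

# Minimal sketch: measure the putative hydrogen-bond network of Arg371 in
# FANCE (PDB 2ILR) by listing acceptor oxygens near the guanidinium and
# N-epsilon nitrogens. File name, chain ID and the cutoff are assumptions.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
model = parser.get_structure("FANCE", "2ilr.pdb")[0]
chain = model["A"]

arg371 = chain[371]
donors = [arg371[name] for name in ("NE", "NH1", "NH2") if name in arg371]

# Candidate acceptors named in the text: main-chain O of Leu333, Leu336,
# Leu367, and the side-chain carboxylate oxygens of Asp338.
acceptors = [chain[333]["O"], chain[336]["O"], chain[367]["O"],
             chain[338]["OD1"], chain[338]["OD2"]]

for d in donors:
    for a in acceptors:
        dist = d - a  # Bio.PDB atoms support '-' as Euclidean distance
        if dist < 3.5:  # generous hydrogen-bond distance criterion
            print(f"{d.get_name():>3} ... {a.get_parent().get_resname()}"
                  f"{a.get_parent().id[1]} {a.get_name()}: {dist:.2f} A")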
2018-05-08T17:58:47.274Z
0001-01-01T00:00:00.000
{ "year": 2007, "sha1": "a02cca3bf8ca41f05ebdb41e1fcad6a2145c2d9a", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/35/5/1638/16761358/gkm033.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "95adfe21c4356b42be2ac8ae3b663bf536ea8893", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15264735
pes2o/s2orc
v3-fos-license
What Is This Thing Called 'Natural'? The Nature-Culture Divide in Climate Change and Biodiversity Policy

This paper treats two highly topical and interconnected environmental issues—climate change and biodiversity—in which the nature-culture divide appears in policy and regulation. The aim is to analyze how "the natural" and concerns for biodiversity and climate change are constructed in applicable regulatory frameworks, and to explore social and environmental consequences of these constructions. The analysis indicates that biodiversity and climate change regulation help construct nature and culture as separate categories and give rise to the notion that the natural state is worth protecting from human intrusion. The notion of human agency, however, is ambiguous because humans are depicted as having the power and skill to protect and even recreate "natural nature". The paper concludes that, although nature and the natural are often used as politically- and socially-neutral concepts, the definition of natural nature as a place devoid of humans has social as well as environmental consequences.

Introduction

Sean Penn's Into the Wild is a magnificent, epic film that tells an intriguing story of the wilderness. The film, based on Jon Krakauer's book of the same title, is about a young man who leaves what he perceives to be a false and materialistic life behind to enter the wild in search of the pure and authentic. In an interview about the film, Sean Penn said that, while human inauthenticity and corruption impede our quality of life, "the wilderness is relentlessly authentic" (MacGrath 2007). However, the young man's quest for the pristine and pure, and his revolt against society, end in misery and death. In this story, the wilderness represents both the untouched and pure, which are worth aiming for, and the untamed and violent, which threaten and destroy human lives. This depiction of the wilderness illustrates humans' deeply ambivalent relationship with nature, which oscillates between romantic devotion to nature and attempts to conquer it. Likewise, it reflects modern society's nature-culture dualism, where nature is defined as "the other" vis-à-vis human society and culture. This notion is manifested in the relationship between humans and nature, whether it concerns human mastery of nature or humans as its keeper. Accordingly, the nature-culture divide permeates the underlying logic of international environmental regulation. In recent decades, environmental regulation has evolved to encompass a more holistic approach, representing a non-anthropocentric view (Emmenegger and Tschentscher 1994). This paper treats two highly topical and interconnected environmental issues, climate change and biodiversity, that are regulated in concurrent international conventions. The Convention on Biological Diversity (CBD) embodies a holistic approach and articulates two values of biodiversity: non-anthropocentric, intrinsic value and anthropocentric, instrumental value. In contrast, the Framework Convention on Climate Change (UNFCCC) presents climate change entirely as a matter of human survival and welfare. Although the two conventions were introduced in the early 1990s, they differ in their view of, and expressed motives for, environmental protection. Yet, in considering what is worth protecting and/or why regulation is required, the CBD and the UNFCCC use "the natural" as a demarcation line.
Environmental regulation concerns not only how to direct a predefined reality or govern specific objects; it is also involved in constructing these entities. Explicitly or implicitly, regulation creates demarcations that make objects appear hazardous or harmless, important or unimportant, natural or non-natural (Lidskog et al. 2009: 3). Although the notion of protecting the natural may seem neutral or congenial, it is complex and deeply value-laden, and must be interpreted and defined when put into practice. All conceptions of the nature-culture divide involve designating boundaries, and help construct the relationship between humans and their environment. Instead of searching for nature and the natural, we should view these concepts as political, and analyze the implications of particular definitions and their applications (Swyngedouw 2007: 19-20). The aim of the paper is to analyze how the natural and concerns for biodiversity and climate change are constructed in applicable regulatory frameworks. Furthermore, the paper explores the social and environmental consequences of these constructions. Analysis of the explicit and implicit assumptions underlying these constructions tells us how human agency and the relationship between humans and nature are defined, and how and to whom responsibility for environmental protection is assigned. This paper comprises six sections, starting with this introduction. Section two provides a brief overview of the modern nature-culture divide and some of its implications for environmental protection. Section three discusses how the concepts of nature and culture are negotiated in relation to each other. Section four analyzes motives and arguments for biodiversity conservation and climate change mitigation. Section five analyzes how the "natural" is constructed in the regulatory framework and the social and environmental implications of this framing. Section six concludes that, although the "natural" is often used as a socially and politically neutral concept, the search for it remains a deeply value-laden activity that entails drawing boundaries and assigning priorities.

The modern nature-culture divide and environmental protection

The modern understanding of the relationship between humans and nature is ambiguous, encompassing a wide range of emotions and rationales for its exploitation, domination and preservation. Out of a tangle of events and ideas, the conception of nature emerged as an object of scientific inquiry and as a resource for economic progress (i.e., resourcism: conservation for the sake of benefits that humans derive from nature). This way of relating to the environment has its roots in Europe, where an "ideology of conquest and domination towards nature has evolved", an ideology that permeates globally today (Pattberg 2007: 1). In this process, modernism has transformed wilderness into a nature devoid of intrinsic value (Oelschlaeger 1991).
One important precursor of this change was the Judeo-Christian ethic. The material world was God's gift to humans for them to master, but it needed improvement, achievable only by human intelligence and physical labor. The Judeo-Christian recovery story is by no means one-sided, as it also alludes to the stewardship of nature and human responsibility for God's creation (Egri 1999: 67 ff.; Merchant 2003: 85 ff.). In this tradition we find the origins of the conception of humankind as both nature's conqueror and its keeper. Another important social and intellectual transition that led to the transformation of "wilderness" into "nature" was the Enlightenment, which ushered in the scientific and industrial revolutions, and the emergence of capitalism. The metaphor of nature as mechanism or machine was established, and nature and its diverse resources (e.g., water, forests and minerals) became mere means to an end, that is, materials fueling consumption, progress and continuous economic growth (Oelschlaeger 1991: 93-95; cf. Merchant 2003: 83-84). Modernism, however, has not unfolded without criticism and counter-argument, and nineteenth-century romanticism represents one such counter-argument. One feature of romanticism, a diverse movement, is that it took an aesthetic turn and embraced the idea of self-fulfillment through a personal relationship with wild and pristine nature. Urban life, the city and civilization were viewed as distortions, whereas wild nature was idealized and praised for its beauty and splendor (Faarlund 1993: 163; Hay 2002: 5 ff.). Since the nineteenth century, nature lovers, conservationists and the environmental movement have frequently raised their voices in criticism of the environmental consequences of urbanization and industrialization. The movement has taken various guises, ranging from the conservation movement to ecofeminism. Despite the differences between these lines of thought, the representation of nature as inherently good is predominant in environmental thinking (Gandy 2002: 13). Likewise, the presumption is shared that modern humans are at the root of environmental degradation. There are several ideological currents - such as biocentrism, ecocentrism and streams of ecofeminism - that question human supremacy over nature and notions of nature as "other" to human society and culture. There are also visions that aspire to transcend the nature-culture divide: the possibility of "a new synthesis" and the notion of "the human project as taking place within rather than outside nature" (Oelschlaeger 1991: 317). However, these lines of thought have been far from prominent in contemporary environmental discourse.
Accounts of the relationship between humans and nature, as they appear in the history of ideas, convey ambiguous messages that identify humankind as both destroyer and rescuer, and wilderness or "natural nature" as both threat and refuge. The current, more or less hegemonic discourse of sustainable development unites a number of such diverse elements. Left out of this discourse, however, is a central element of the various guises of the "green critique", namely criticism of the notion of industrial progress itself (Hajer and Fischer 1999: 2). The discourse of sustainable development recognizes the structural character of environmental problems, but it also assumes that the institutions of modern society can deal with them. It promotes further, not less, modernization, responding to criticism of contemporary society with a modernistic answer involving research, technological innovation, market forces, and so on. The discourse of sustainable development is based on the modern nature-culture divide and comprises an ambivalence towards nature and human agency that has long been part of the relationship between humans and their environment in the Western world. It is also within this discourse that the treaties on biodiversity and climate change are negotiated.

Negotiating nature and culture

The notions of nature and the natural, as distinct from culture and society and untouched by humans, can be questioned, since we cannot find any site on earth that fits that description. As many scholars have convincingly argued, the idea of nature and culture as separate entities is flawed, and nature is better understood as "socio-environmental arrangements" (Swyngedouw 2007: 20), or "seeming nature" (Gandy 2002: 110). Against the modern understanding of a strict division between culture and nature as separate categories, Bruno Latour (1993) claims that "we have never been modern", since our world is and always has been full of hybrids: socio-natural objects and subjects. Despite these insights, the separation of nature and culture is manifest in human thought and environmental policies. The concepts of "nature" and the natural are familiar and yet highly elusive. On the one hand these terms are easily and frequently used; on the other they are difficult to grasp and define (Soper 1995; cf. Newton 2007). Contradictory thinking attaches a variety of meanings, values and functions to nature. Most commonly described is the resource and aesthetic value of the natural world. Nature is also enlisted in environmental protection. Nature can be used to complement technical solutions, for example as barrier and buffer in hazardous waste management (Uggla 2004). In this case, it is a sturdy and stable nature that compensates for human shortcomings. Accordingly, there is not one nature, but a diversity of contested natures constituted through socio-cultural processes from which they cannot be separated. Different conceptualizations of nature - as resource, as threatened, or as sacred, pure, and stable - imply different responses to potential threats against these natures (Macnaghten and Urry 1998: 23). Therefore, nature is an elastic concept, providing an ideological vehicle for almost any position on the relationship between humans and their environment (Gandy 2002: 13).
The meaning of nature is continuously negotiated in relation to its supposed counterpart - human culture and society. In early sociology, in an effort to explain human consciousness and the mind, animals were frequently used to explain the uniqueness of humankind (Tuomivaara 2009: 49). It is in relation to animals and nature, defined as the other, that the uniqueness of humans and the human character stands out. In this sense, nature serves to define what it means to be human. Negotiating nature and culture - drawing boundaries that define the natural versus the non-natural - defines and justifies certain actions. Humans do something to the world when they draw boundaries between nature and culture, between the natural and the non-natural, and between acceptable and unacceptable human interference in natural systems. Any understanding of nature involves an understanding of society, and of certain social choices. In this sense, nature also fulfills political functions, telling us how to live our lives.

Loss of biodiversity and climate change: heading for disaster?

Two highly topical international environmental regulatory frameworks concern climate change and biodiversity. The close connection between science and policy in these two areas implies a shift in the conceptual categories according to which people understand and value nature. Notably, notions of climate change and biodiversity entail an understanding of environmental protection as a shared, global concern (Miller and Edwards 2001: 6; Takacs 1996). Furthermore, these issues, which are presumed to be essential to human survival and welfare, are closely interconnected. First, biodiversity is thought to be important for strengthening ecosystem resilience and bolstering ecosystem adaptability to environmental variations such as climate change (CBD 2007; Glowka et al. 1994: 9). Second, although changing conditions have continuously contributed to the rearrangement of biological systems, human-induced climate change is thought to amplify such variation, threatening to accelerate "the loss of biodiversity already under way due to other human stressors" (Hannah et al. 2005: 3). Accordingly, the link between biodiversity and climate change is understood as reciprocal: human-induced climate change threatens biodiversity, while biodiversity itself can reduce the adverse impacts of climate change on people and production systems (CBD 2007; Lovejoy and Hannah 2005). But how are biodiversity conservation and climate change mitigation justified in the regulatory frameworks?

Motives for biodiversity conservation

The concept of biodiversity has an inclusive character promoted by members of both the conservation and scientific communities, representing a range of interests (Farnham 2007: 241 ff.). Tracing the roots of "biodiversity" reveals a deeply value-laden concept that subsumes several values - aesthetic, recreational, scientific, economic, and for life-support - that are related to various aspects of conservation, including wilderness protection, resourcism, and the protection of endangered species. Although several values of biodiversity have been emphasized, resourcism has been central to the biodiversity conservation discourse (Farnham 2007: 4 ff.; cf. Evans 2007: 244; cf. Escobar 1998). The biodiversity concept has a normative component: biodiversity is considered as something that is good for several reasons, implying that greater diversity entails greater value.
It is also apparent that human activity is believed to pose the main threat to rich biodiversity through, for example, habitat fragmentation, the introduction of alien species, and global climate change, all of which are treated as human-induced (anthropogenic) phenomena (Lovejoy 2005: 325; Binimelis et al. 2007). Fragmentation research has been mainly oriented towards human-induced disturbances, and fragmentation has been defined as an anthropogenic phenomenon that threatens a primordial state characterized by uniform and extensive habitats conducive to rich biodiversity (Lovejoy 2005: 325; Evans 2007: 244). In line with this understanding, and in order to protect biodiversity, wild nature and coherent green areas are highlighted. Rich biodiversity is thus framed as a favorable state that is threatened by human agency. Although value-laden and normative, the biodiversity concept is also closely connected to science and scientific values of nature, which have imbued the biodiversity concept with a sense of objectivity (Farnham 2007: 4; cf. Evans 2007: 243). By subsuming a variety of values and interests, the concept has gathered together actors from a range of fields, including politicians, scientists and conservationists, making them allies sharing a cause (cf. Gieryn 1999; Takacs 1996) and a unified frame of reference. Despite the unity of purpose associated with biodiversity concerns, the precise definition of biodiversity is by no means self-evident. As Escobar (1998: 53) explains, "although 'biodiversity' has concrete biophysical referents, it must be seen as a discursive invention of recent origin". However, a three-tiered standard definition, including genetic, species and ecosystem diversity, has evolved (Farnham 2007: 13; 23 ff.). This definition was established by the CBD and adopted in 1992. It unites a variety of policy and scientific recommendations from groups such as the sustainable-use movement, farmers' rights movements, and bioprospectors, connecting biodiversity conservation to sustainable development (Farnham 2007: 26; CBD and UNEP 2004). The CBD (Fig. 1) refers to biodiversity as the variability among living organisms from all sources, and the Convention has been described as new and innovative because of its holistic approach. It also introduces the notion of the intrinsic value of biodiversity (Glowka et al. 1994; Rehbinder 2002). In its preamble, in which the negotiating states set out their concerns and motives, the Convention presents the values of biodiversity and, for the first time in a binding international instrument, stresses these intrinsic values. An object with intrinsic value is something that is valuable for its own sake, independent of its usefulness to someone or something else. Referring to an object's intrinsic value often implies that this value inheres within the object, indicating that this value has some kind of external existence independent of human consciousness. The conclusion that something is valuable in itself may be interpreted in two ways. First, value in itself can be seen as self-contained, that is, it would be valuable even if there were nothing else in the world (cf. G. E.
Moore's isolation test of intrinsic value). Second, value in itself can be seen as persistent; in other words, the value remains regardless of situation or context (Levinson 2004: 321-322). The concept of intrinsic value is disputed - for example, whether it is only concrete entities (e.g., people) or also states of existence (e.g., pleasures and desire satisfaction) that can have intrinsic value (Bradley 2006; see also Zimmerman 2001; Levinson 2004). Likewise, the intrinsic value of biodiversity as a whole is disputed, as is the question of whether intrinsic value can be attributed to all of it or only to specific components (Krishnamurthy 2003: 73). The first paragraph of the preamble to the CBD articulates the contracting parties' consciousness "of the intrinsic value of biological diversity and the ecological, genetic, social, economic, scientific, educational, cultural, recreational, and aesthetic values of biological diversity and its components." Here, the CBD emphasizes the inherent worth of biodiversity and draws a line between this intrinsic, non-anthropocentric value of biodiversity as a whole and other anthropocentric, instrumental values of biodiversity and its components (Holder and Lee 2007: 3). The inclusion of the notion of intrinsic value indicates an alternate perception of nature, set against resourcism: citing the intrinsic value of biodiversity alludes to the perception of a sacrosanct nature, which represents pre-modern as well as postmodern interpretations of nature (Rientjes 2002: 7; cf. Merchant 2003: 192).

Motives for climate change mitigation

The theory of global warming as a consequence of greenhouse gas (GHG) emissions was raised several times in the twentieth century, but it was not until the 1980s that climate change became a serious political issue. The logic underlying the international community's concern for global climate change is based on climate modeling that indicates potential adverse effects - such as intensified droughts and floods, and rising sea level - due to human GHG emissions. Climate modeling is also the prerequisite for distinguishing between natural climate variability and human-induced climate change, since periods of frequent rainfall, little or no rainfall, or extreme weather events cannot positively be attributed to anthropogenic climate change. As Edwards puts it:

The inherent variability of weather makes it impossible to attribute individual storms, floods, droughts or hurricanes to changes in the global climate. Only by coupling statistical analyses to climate modeling exercises have scientists been able to isolate and display the "fingerprint" of global warming in changing weather patterns around the world. (Edwards 2001: 33)

Today, there is a complex alliance between science-based descriptions of climate change and climate policy (Edwards 2001: 34). In this fusion of science and policy, the Intergovernmental Panel on Climate Change (IPCC), set up in 1988 to provide independent scientific advice on climate change, functions as a center of authority that must uphold its credibility in the eyes of both the scientific and policy communities (Edwards and Schneider 2001; cf. Adger 2006). The role of the IPCC is not to conduct research, but to assess "on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change" (IPCC 1998: 1).
The IPCC summarizes and communicates its assessments to support policy-making, and its First Assessment Report, published in 1990, constituted the scientific basis of the United Nations Framework Convention on Climate Change (UNFCCC), adopted in 1992 (Fig. 2). The IPCC reports present the scientific consensus that the Earth's climate is being affected by human activities. The Assessment Reports are part of the scientific construction of this consensus, and the consensus has remained relatively stable over time about one key factor: the sensitivity of the climate to a doubling of atmospheric CO2, expressed as a projected increase of global mean temperature (van der Sluijs 1998; cf. von Storch 2009). The policy responses to climate change established in the UNFCCC are mitigation and adaptation. "Mitigation" concerns reducing GHG emissions, and enhancing and protecting greenhouse gas sinks and reservoirs such as forests and oceans. "Adaptation" concerns various human responses to experienced or expected consequences of climate change, such as flood control and crop adjustment. Although adaptation includes both moderating harm and exploiting beneficial opportunities, the international community's adoption of the UNFCCC is a sign of its great concern about the adverse effects of climate change. Likewise, although previous and present climate variation has resulted and may result in natural disasters such as droughts, floods, and landslides, the concern underlying mitigation is that anthropogenic climate change has contributed to an increased frequency of events including heat waves, heavy precipitation and intense tropical cyclone activity. Accordingly, climate policy has identified human interference with the climate, and the need for mitigation measures (Klein et al. 2003; Tol 2005). The focus on human interference with the climate system is consistent with the scientific agenda that supports the logic of mitigation, based on the presumption of a causal relationship between human activities and increased concentrations of greenhouse gases in the atmosphere, resulting in climate change with predominantly negative consequences for society. In turn, this mitigation imperative results in a bias against adaptation (Pielke 2005). The UNFCCC provides an overall framework for intergovernmental efforts to control climate change. In the preamble to the UNFCCC, the contracting parties articulate their concern "that human activities have been substantially increasing the atmospheric concentrations of greenhouse gases", and that this will lead to additional warming of the earth's surface and atmosphere. This may adversely affect natural ecosystems and humankind, so the ultimate objective of the Convention (Article 2) is to:

…achieve stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner (UNFCCC, Article 2).

These formulations embrace the notion that human interference with the climate may result in a situation in which ecosystems can no longer adapt naturally, which in turn will threaten human food supply, welfare and economic growth. From the preamble and the overall objective of the UN Convention, it is clear that the concern expressed about the adverse effects of climate change exclusively
concerns human survival and welfare.

Environmental protection: do the motives really matter?

The two Conventions draw on different kinds of values in their motives for biodiversity conservation and climate change mitigation. The UNFCCC stresses the threat of climate change to the life-sustaining functions of ecosystems and to economic development. This is a clear expression of resourcism, which protects the utilities humans gain from natural resources. The decisive criterion for tolerable climate change is, accordingly, whether or not this change is considered harmless or dangerous to humans. The CBD, however, articulates both the intrinsic value and a range of instrumental values of biodiversity, including aesthetic and recreational values. One might say that this way of justifying biodiversity conservation is redundant. If biodiversity has intrinsic value, no further justification for its protection is needed. Likewise, if rich biodiversity is considered essential to human survival, humans should protect it irrespective of any inherent worth it may have. Accordingly, a question that comes to mind is whether the motives for environmental protection really matter as long as the arguments are convincing and result in tangible measures. The wide range of values articulated in the preamble to the CBD most likely stems from the variety of interests that constitute the basis of the Convention. The various cited arguments, motives and values, however, do not always match up, and some even counteract each other. In environmental politics, three categories of arguments are frequently used, corresponding to the categories of instrumental and intrinsic values: aesthetic arguments (the beauty of nature and natural scenery), utility arguments (nature as a resource for human survival and welfare), and the argument that nature has intrinsic worth. These can be used in combination, and even confused, for example aesthetic with intrinsic value (Soper 1995: 251 ff.; Uggla 2010). However, it is not self-evident that these different arguments and values should harmonize. References to the beauty of nature imply that specific sights and species considered beautiful or impressive deserve protection, whereas references to intrinsic value also include the protection of places and species considered ugly or even repulsive to humans (Soper 1995: 255; cf. Foale and Macintyre 2005: 13). Some argue that it is necessary to advocate the intrinsic value of biodiversity to ensure its protection, and use the acknowledgement of this principle to criticize actions based on other values - in particular, utility valuation resulting in the monetization of ecosystem services (Justus et al. 2009: 187). In some instances, these different standpoints - intrinsic versus utility values - have polarized debates over biodiversity protection. While some argue that the concept of nature's intrinsic value is needed to free humans from narrow anthropocentrism, others argue that the intrinsic value concept is useless in decision-making and policy processes aimed at biodiversity protection, because policy-making, decisions and action for biodiversity protection necessarily involve stakeholder interests and prioritization (Justus et al. 2009; Norton 2000; cf.
Zisenis 2009). This kind of political negotiation is obvious in the discussion of indigenous peoples' rights and biodiversity conservation in protected areas (see, for example, Schmidt and Peterson 2009; Colchester 2004). Furthermore, in most cases it is various individual components of biodiversity that are valued and negotiated, not biodiversity as such. When intrinsic value is discussed in policy-making, this value is frequently attributed to nature, ecosystems, or various life forms, indicating that the concept of biological diversity as a separate category is difficult to deal with (Uggla 2010). Thus, the intrinsic value of biodiversity, if taken seriously, is an unwieldy concept of limited use in policy-making and biodiversity conservation. It is, however, an indication of an alternate approach to nature by the scientific community, one that is more obviously aligned with the conservation movement than is its engagement with climate change.

Constructions of 'the natural'

The concern for climate change and biodiversity loss caused by human activity has resulted in international regulation and extensive policies to control greenhouse gas emissions and to protect endangered species, habitats and ecosystems. In this endeavor, both the UNFCCC and the CBD include the concept of "the natural" to point out what is worth protecting.

Climate variability and climate change

The IPCC and UNFCCC each define climate change slightly differently. The IPCC defines it as "any change in climate over time, whether due to natural variability or as a result of human activity" (IPCC 2007: 871). The UNFCCC explicitly defines climate change as "change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods" (Article 1.2). The UNFCCC stresses the distinction between climate variability, which is considered to be a natural phenomenon, and climate change, which is associated with human interference. The IPCC, however, has a more inclusive definition. Despite this difference, both definitions make a distinction between natural variability and changes due to human activities.
The ultimate objective of the UN Convention is to achieve "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system" (Article 2). This formulation is central and yet problematic. First, it implies a distinct dividing line between dangerous and harmless interference with the climate system. Although present natural climate variation may negatively affect human welfare, it is presumed that ecosystems can cope with such variation and naturally adapt to it, whereas anthropogenic climate change threatens this dynamic equilibrium (cf. Swyngedouw 2007: 18). This presumption implies that human knowledge is "sophisticated enough to reveal the limits of nature, thus permitting us to exploit resources safely up to that limit" (Hajer and Fischer 1999: 5). Second, it conceals regional differences and the intricate intertwining of the matter with issues such as climate adaptation, vulnerability and social change (Pielke 2005). The focus on human interference with the climate system is consistent with the scientific agenda that supports the logic of the mitigation imperative, based on the presumption of a causal relationship between human activities and increased concentrations of greenhouse gases in the atmosphere, ultimately resulting in climate change with predominantly negative consequences for society. In turn, this mitigation imperative results in a defusing of adaptation demands (Pielke 2005). Furthermore, in compliance with a strict interpretation of the UNFCCC, only adaptation measures responding to human-induced climate change should obtain financial support (Verheyen 2002). The expectation of being able to distinguish between human-induced climate change and natural climate variability reflects the Annex II Parties' reluctance to provide financial support for regular development projects. However, the formulation is problematic, since natural climate variability and human-induced climate change cannot possibly be clearly distinguished in practice. Instead, expectations of such a distinction may result in awkward considerations of what can be defined as additional harm and additional costs caused by human-induced climate change (Pielke 2005; Verheyen 2002; Klein 2003). At the same time, a number of COP (Conference of the Parties) decisions seem to recognize the problem of distinguishing between climate variability and climate change, and include considerations of current and future climate change as well as of climate variability. These interpretations of the UN Convention thus pave the way for a more integrative view, in which adaptation measures are seen as embedded in a complex of material and socio-economic circumstances, and mainstreamed in various planning and development projects (Lidskog et al. 2009: 76-77). This approach is consistent with most of the adaptation literature, which stresses that adaptation measures are seldom carried out because of climate change alone, but rather are part of decisions triggered by other social and economic events reflecting wider social changes (see, for example, Burton et al. 2002; Adger et al. 2005; Smit and Wandel 2006). In societal planning, risk management and endeavors to reduce society's vulnerability to extreme weather, the distinction between climate variability and climate change is unproductive or even detrimental.
The Convention on Biological Diversity

The CBD includes the natural versus non-natural distinction in the concepts of "in situ conservation", "natural habitat", "natural occurrence", and "alien species". According to the Convention, habitat refers to "the place or type of site where an organism or population naturally occurs" (CBD, Article 2). In situ conservation - the primary approach to biodiversity preservation, apart from ex situ conservation - refers to maintaining or restoring viable wild populations in their natural surroundings or, for domesticated and cultivated species, conserving them in the surroundings where they have developed their distinctive properties. Biodiversity conservation obviously concerns both wild species and domesticated species improved by breeding. The CBD also introduces the concept of alien, or non-indigenous, species, which represent the opposite of those of natural occurrence. References to the natural seem to be central to wildlife conservation, though what constitutes "natural" is far from self-evident in this context. First, distinguishing between the natural and the non-natural requires a dividing line and a temporal point of reference, i.e. a particular moment that determines when a species is considered natural at a certain location and when it is alien (Hannah et al. 2005: 11; cf. Haila 1999: 55). The status of the natural is established with reference to particular points in history. Second, according to Article 8h of the Convention, alien species that threaten ecosystems, habitats, or species should be controlled or eradicated. This Article introduces another dividing line, between species that threaten the current order and those that can peacefully co-exist and are accepted. This distinction is negotiable for several reasons. For example, definitions of the natural and non-natural, and discussions of which alien species should be accepted, may be strained by the effects of anthropogenic climate change. What constitutes natural flora and fauna in times when vegetation zone boundaries are shifting and so-called invasive species are gaining a foothold due to climate change? In these cases, controlling or eradicating alien invasive species with reference to natural occurrence patterns could mean forcing animals and vegetation to match conditions that no longer exist (Hannah et al. 2005: 11). Paradoxically, at certain locations and times, alien species may even be considered worthy of protection. A case in point is the musk ox (Ovibos moschatus), which died out in Scandinavia thousands of years ago (Fig. 3). After some effort, musk oxen were reintroduced into Norway in the 1940s. They found their way to Sweden in the 1970s, and today there is a small and not particularly vigorous herd. This herd is not only tolerated but also receives protection, which seems to have little to do with what is natural, but rather with the affection people feel for this prehistoric animal and the fact that the musk ox is a tourist attraction in the area (Andersson 2005: 87 ff.; cf. Harrop 1999). Musk oxen are thus accepted and protected because of the simple fact that people like them, not because of their natural occurrence in the area. Ultimately, the natural and the alien are awkward concepts that are difficult to define in a continuously changing world.
The natural also plays a role in biodiversity conservation through its centrality in the conservation movement, which stresses the importance of wilderness preservation. For some time, biodiversity conservation has emphasised the creation of protected areas. This Western model of nature conservation - based on the fragmentation concept and the idea that nature is best preserved as wilderness or natural nature - has been exported to the rest of the world. The idea of wilderness as a place devoid of human beings has resulted in the removal of native peoples from their land (Colchester 2004; Schelhas 2001). Based on a summary of the literature and field studies, Colchester (2004: 147) lists a number of social consequences of this approach: denial of rights to land, disruption of kinship systems, undermining of livelihoods, and forced resettlement. These studies also point out that this approach to conservation can be counterproductive, since the removal of people and the dissolution of local communities break ties to the natural environment, which may reduce interest in long-term stewardship and create conflicts between indigenous people and park managers. In recent decades, indigenous peoples have struggled for recognition, trying to gain access to the centers of power and to be heard in national and international discussions (Schmidt and Peterson 2009; Schelhas 2001). Today, there are a large number of treaties to protect indigenous peoples' rights. There is, however, still a need for change in the field of biodiversity conservation and land management concerning the relationship between dominant societies and indigenous people, and concerning indigenous management and biodiversity conservation; that is, recognition that biodiversity conservation can also be achieved in areas where people live (Schmidt and Peterson 2009: 1464). The emphasis on wilderness and natural nature as a place free from people is strongly asserted in biodiversity conservation discourse, with allusions to Western romanticism. It has resulted in serious impacts on indigenous people's living conditions, while the environmental benefits are questionable.

Concluding remarks

Biodiversity and climate change regulation help construct nature and culture as separate categories and give rise to the notion that a natural state is worth protecting from human intrusion. In biodiversity and climate change regulation, the concept of "the natural" portrays the idea of untouched nature as desirable, whereas human agency is questionable, representing both destruction and restoration. Human agency threatens the desirable state, for example by interfering with the climate system, intentionally or unintentionally introducing alien invasive species, and fragmenting habitats.
Defining humans as separate from natural systems implies that humans and everything they do are, by definition, non-natural (cf. Oelschlaeger 1991: 296 ff.; Keulartz 1999: 83 ff.). The notion of human agency is deeply ambiguous, because humans have the power and skill to protect and even recreate natural nature, for example by restoring or creating wetlands and so-called wildlife corridors, and by establishing protected areas. According to this conception, nature is neutral, whereas human agency is purposeful but Janus-faced, since humans are both destroyers and rescuers. Thus, in climate change and biodiversity policy there is a profound ambiguity in the view of the relationship between humans and their environment. "Natural nature", alluding to romantic ideas of pristine nature devoid of humans, is used to define the desirable. However, it is only when a non-natural human intrusion poses a threat to human existence and well-being that it should be controlled. The analysis of the UNFCCC and the CBD shows that the concepts of nature and the natural are not stable and neutral, but political concepts that have to be negotiated and filled with meaning according to particular circumstances. The intention behind using these concepts in environmental regulations - to protect the environment and natural resources - is certainly positive. The inclusion of the concepts themselves, however, is dubious for at least four reasons: it may result in futile boundary-setting between humans and nature; it may be counterproductive to environmental protection; it assigns responsibility in a narrow way; and it provides an artificial dichotomy between humans and "pristine" nature. By treating humans and nature as inter-related, discussions of environmental protection and social justice could focus on how to reduce human vulnerability and accomplish sustainable living conditions, instead of being caught in futile negotiations around how to define and distinguish natural and human impacts.
2016-02-02T08:36:57.578Z
2010-12-01T00:00:00.000
{ "year": 2010, "sha1": "c521e409aeb7f5b09a012a07f99028c42f699334", "oa_license": "CCBY", "oa_url": "http://journals.librarypublishing.arizona.edu/jpe/article/id/1834/download/pdf/", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "c521e409aeb7f5b09a012a07f99028c42f699334", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Sociology" ] }
15720429
pes2o/s2orc
v3-fos-license
Instantons on a Non-commutative $T^4$ from Twisted (2,0) and Little-String Theories

We show that the moduli space of the $(2,0)$ and little-string theories compactified on $T^3$ with R-symmetry twists is equal to the moduli space of U(1) instantons on a non-commutative $T^4$. The moduli space of $U(q)$ instantons on a non-commutative $T^4$ is obtained from little-string theories of NS5-branes at $A_{q-1}$ singularities with twists. A large class of gauge theories with ${\cal N}=4$ SUSY in 2+1D and ${\cal N}=2$ SUSY in 3+1D are limiting cases of these theories. Hence, the moduli spaces of these gauge theories can be read off from the moduli spaces of instantons on non-commutative tori. We study the phase transitions in these theories and the action of T-duality. On the purely mathematical side, we give a prediction for the moduli space of 2 U(1) instantons on a non-commutative $T^4$.

Introduction

In recent years, starting with the work of [1], the moduli-spaces of vacua have been found for a large class of gauge theories with 8 super-charges in 3+1D and in 2+1D. These solutions were derived from string dualities in [2] and the works that followed. String theory also suggested the existence of new theories in six dimensions [3,4] (see also [5,6]). Compactification of these theories to 3+1D reduces, in certain limits of the external parameter spaces, to ordinary gauge theories. In this paper we will study compactifications of certain 6-dimensional theories down to 3 dimensions and examine their low-energy behaviour. As we will see, all the previously solved gauge theories with N = 2 supersymmetry and SU(N_1) × ··· × SU(N_r) gauge groups [7] can be recovered at special limits of the external parameters of the compactification. We will start with the 6-dimensional theories that are the world-volume theories on k NS5-branes in type-IIA in the limit of vanishing string coupling keeping the string tension fixed [4]. We denote this theory S_A(k). It has (2,0) supersymmetry. There is a similar theory coming from k NS5-branes in type-IIB in the limit of vanishing string coupling keeping the string tension fixed. We denote this theory S_B(k). It has (1,1) supersymmetry. S_A(k) and S_B(k) are often referred to as the little-string theories. They both have an inherent scale, m_s. In the limit m_s → ∞, S_A(k) becomes the theory on the world-volume of k M5-branes - the so-called (2,0) theory. We will compactify these theories down to 3 dimensions. These theories have 16 super-charges, so if they are compactified on T^3 the resulting theories will have N = 8 supersymmetry in three dimensions. The low-energy behaviour of N = 8 theories is trivial. Instead we want to study theories with N = 4 supersymmetry, i.e. 8 super-charges. So we have to compactify in a way that breaks half the supersymmetry. We will do that as in [8] by introducing holonomies of the R-symmetry around the three circles in T^3. To preserve half of the supersymmetries the holonomies are chosen inside an SU(2) subgroup of the Spin(4) R-symmetry group. The low-energy behaviour of an N = 4 theory in D = 3 is a sigma-model with the moduli-space of vacua as the target-space. So the low-energy behaviour is given by the moduli-space of vacua and its metric. In [8] the case of k = 2 was studied in detail. The moduli space of vacua was identified explicitly. For general k it was conjectured that the moduli space of vacua was given as a moduli space of instantons on a non-commutative T^4.
The purpose of this paper is to prove and generalize this and, at the same time, make the claim more precise. Let us start by identifying all the parameters of the compactification. Consider S_A(k) compactified on T^3. The scale of S_A(k) is m_s, the string mass. The T^3 is specified by a metric. For simplicity we will take it to be rectangular. It is easy to incorporate the more general case. Furthermore there can be a flux of the 2-form B_NS field of type-IIA through 2-cycles in the T^3. For simplicity we set B_NS = 0. It is again not hard to incorporate the more general case. Now we come to the most interesting parameters - the twists. The R-symmetry group of S_A(k) is Spin(4)_R, corresponding to transverse rotations. The twists are taken inside

U(1)_R ⊂ SU(2)_L ⊂ SU(2)_L × SU(2)_R ≅ Spin(4)_R.    (1.1)

This preserves 8 of the 16 super-charges. There is a twist, α_i, along each of the 3 circles. The α_i's are periodic,

α_i → α_i + 2π,  i = 1, 2, 3.    (1.2)

The twists can be described in the following way. States that are charged under U(1)_R receive a phase shift in traversing a circle. In other words, momentum along a circle of radius R is shifted from n/R to (n − α/2π)/R. By performing T-duality along all circles of the T^3 we get S_B(k) on another T^3. Momentum has been exchanged with winding, so the T-dual of the twists has the following description. States that are charged under U(1)_R have fractional winding numbers, n − α/2π instead of n. We call this kind of twist an "η-twist." By combining these two types of twists we learn that the most general twist around a circle shifts both momentum and winding. In other words, the S_A(k) compactification on T^3 depends on 6 parameters,

α_i, η_i,  i = 1, 2, 3,    (1.3)

where α_i shifts momentum and η_i shifts winding. The α_i's have a clear geometrical interpretation. In traversing the circle the transverse space is rotated. The η_i's are harder to visualize. They are geometrical in the T-dual S_B(k). We can actually generalize this system even more. Instead of k NS5-branes we can consider k NS5-branes on top of an A_{q−1} singularity. In other words, the transverse space to the NS5-branes is IR^4/Z_q, where Z_q is a subgroup of U(1)_R. U(1)_R is still a symmetry of this space, so we can twist as before. These theories have 8 super-charges in 6 dimensions. The U(1)_R is a global symmetry which commutes with the super-charges. The twists, therefore, do not break any more supersymmetry, so the compactified theory still has N = 4 in 3 dimensions. Theories of branes on top of an ADE singularity have been studied in [9,10]. These 6-dimensional theories are, loosely speaking, quiver gauge theories [11] coupled to tensor theories or vice versa, depending on whether one is in type-IIA or type-IIB. The 3-dimensional theory, obtained after compactification with twists, has a low-energy description as a sigma-model with a target-space equal to the moduli-space of vacua. In this paper we will prove that this moduli-space of vacua is equal to the moduli-space of k U(q) instantons on a non-commutative T^4. The non-commutativity is set by the 6 parameters α_i and η_i. This generalizes the case of compactification without twists, where the moduli-space of the theories turns out to be the moduli-space of ordinary instantons [10,12]. This result implies similar results for all the theories which are special cases of this. This includes firstly the (2,0) theory, which can be obtained from S_A(k) by m_s → ∞. Secondly, it includes all three-dimensional U(k) gauge theories with adjoint matter. By incorporating the A_{q−1} singularity it also includes all gauge theories with group U(k) × ··· × U(k) and matter in (k, k̄, 1, . . . , 1) + permutations. By taking the gauge coupling to zero in some U(k) we can get theories with the gauge group being U(k) × ··· × U(k) with fundamental and bi-fundamental matter in various combinations with generic masses.
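To recall where the shifted quantization used above comes from: for a field of unit U(1)_R charge on a circle of radius R, the α-twisted boundary condition fixes the momenta. A schematic one-line derivation, with signs chosen to match the shift n → n − α/2π quoted above, is:

Φ(x + 2πR) = e^{−iα} Φ(x),  Φ(x) ~ e^{ipx}
⟹ e^{2πipR} = e^{−iα}
⟹ p = (n − α/2π)/R,  n ∈ Z.

The η-twist is the same statement applied to the winding number in the T-dual description.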
Our results imply that all these 2+1D gauge theories have a moduli-space of vacua equal to the moduli-space of instantons on a non-commutative IR^3 × S^1. In the case of mass-deformed N = 8 this result was derived earlier in [13]. By decompactifying one circle, similar results hold for the moduli space of 4-dimensional gauge theories on IR^3 × S^1. We also find that for certain discrete values of the twists there are Higgs branches emanating from some locus of the Coulomb branch. We will identify these and calculate their dimensions. We will also derive the existence of these branches from purely field-theoretic arguments and find agreement in the structure of the Higgs branches. These branches generalize a branch found in [8]. Moreover, combining our results with the formulas in [8] for the special case of q = 1 and k = 2, we get a prediction for the moduli-space of two U(1) instantons on a non-commutative T^4. This is a K3 (projecting out the center of mass), and the exact point in moduli-space was given in [8] as a function of the twists. The organization of the paper is as follows. In section (2) we present the proof that the moduli space is equal to the moduli space of instantons on a non-commutative T^4. In section (3) we give a short review of non-commutative gauge theories. In section (4) we use this information about non-commutative gauge theory to make the claim about the moduli-space of non-commutative instantons precise and discuss some of its features. In section (5) we describe the decompactification limit to 3+1D (compactification of the 5+1D theories on T^2 with twists). In section (6) we present a more detailed geometrical formulation of α-twists and especially η-twists. We conclude with a summary of the results and possible further directions.

The solution

In this section, we derive the solution for the moduli space of the twisted theory. To construct the solution we will start with type-IIA on the space IR^{2,1} × (T^3 ×_α IR^4), where ×_α means that locally the space looks like IR^{2,1} × T^3 × IR^4 but, as we go around a cycle of the T^3, we have to twist the transverse space IR^4 by the appropriate element of Spin(4) corresponding to the twist.
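Written out, the identification defining the twisted product is, schematically,

(x_i, v) ~ (x_i + 2πR_i, g(α_i) v),  x_i on the i-th circle of T^3,  v ∈ IR^4,

with g(α_i) the U(1)_R ⊂ SU(2)_L ⊂ Spin(4) rotation by α_i used for the twist.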
This argument should be taken with caution, since the actual solitonic solution of the NS5-brane has a cross-section of about $m_s^{-1}$. In any case, we will not have to rely on this argument. The manifold $M^4$ that we will use is the Taub-NUT space, whose metric is written in terms of a harmonic function and the gauge field $\vec A$ of a monopole centered at the origin. The Taub-NUT space has the following desirable properties (these properties were also used in [14]):
(1) If we excise the origin, what remains is a circle fibration over $\mathbb{R}^3 - \{0\}$. Eqn. (2.1) is written such that $\vec x$ is the coordinate on this base $\mathbb{R}^3 - \{0\}$. For $|\vec x|$ restricted to a constant, the fibration is exactly the Hopf fibration of $S^3$ over $S^2$.
(2) The origin $\vec x = 0$ is a smooth point.
(3) As $|\vec x|\to\infty$ the circle fiber approaches a constant radius $\rho$.
(4) The space has a $U(1)$ isometry group that preserves the origin $\vec x = 0$. An element $g(\theta) = e^{i\theta}\in U(1)$ acts by $y\to y+\theta$. It also acts on the tangent space $\mathbb{R}^4$ at the origin by embedding $e^{i\theta}$ inside $SU(2)_L\subset Spin(4)$.
Now that we have replaced the transverse $\mathbb{R}^4$ with a Taub-NUT space, we have $k$ NS5-branes on the space $\mathbb{R}^{2,1}\times T^3\times TN(\rho)$. The $\alpha$-twists are incorporated as follows. As we go around a cycle of $T^3$ we have to act on the fiber $TN(\rho)$ with $g(\alpha_i)$, where $\alpha_i$ is the appropriate twist. In the limit $\rho\to\infty$, $TN(\rho)$ becomes $\mathbb{R}^4$ and the isometry $g(\alpha_i)$ becomes the element in $SO(4)$ that we have used for the twist. The virtue of working with $TN(\rho)$ instead of $\mathbb{R}^4$ is that at $\vec x = \infty$ the circle fiber becomes of finite size, which will help in subsequent dualities. To generalize the construction to the case of $k$ NS5-branes at an $A_{q-1}$ singularity, $\mathbb{R}^4/\mathbb{Z}_q$, we replace the transverse $\mathbb{R}^4/\mathbb{Z}_q$ with a $q$-centered Taub-NUT space, $TN_q(\rho)$, with radius $\rho\to\infty$. This space has similar properties:
(1') If we excise the origin, what remains is a circle fibration over $\mathbb{R}^3 - \{0\}$. For $|\vec x|$ restricted to a constant, the fibration is a circle bundle over $S^2$ with first Chern class $c_1 = q$.
(4') The space has a $U(1)$ isometry group that preserves the origin $\vec x = 0$. An element $g(\theta) = e^{i\theta}\in U(1)$ acts at $\vec x = \infty$ by $y\to y+\theta$. It also acts on the tangent space $\mathbb{R}^4/\mathbb{Z}_q$ at the origin by embedding $e^{i\theta}$ inside $SU(2)_L$. Note that the discrete $\mathbb{Z}_q$ by which we mod out is a subgroup of the same $U(1)\subset SU(2)_L$ as well.

Chains of Dualities

We have seen that the twisted compactified little-string theories can be realized as follows. Start with type-IIA on $\mathbb{R}^{2,1}\times T^3\times TN_q$, where the radii of $T^3$ are $R_i$ (of the order of $m_s^{-1}$) and the radius of the fiber of the Taub-NUT space is taken to be $\rho$. Put $k$ NS5-branes on $\mathbb{R}^{2,1}\times T^3$ and study the limit $\lambda\to 0$ with $m_s\rho\to\infty$. In principle, we could probably settle for a constant $m_s\rho$ as well, since the transverse fluctuations of the NS5-brane are small. However, the transverse size of the NS5-brane, as a solitonic object, is of the order of $m_s^{-1}$. Therefore, to be on the safe side, we take $m_s\rho\to\infty$. The technique for solving theories with 8 supersymmetries [2] is to identify a parameter that decouples from the vector-multiplets, such that at one limit of this parameter the theory is described by gauge theory (or little-string theory, in our case) and in another limit a dual description becomes weakly coupled. In that second limit, the theory is no longer described by the gauge theory, but the vacuum structure remains the same and is determined by the classical equations of motion. This method was also applied in [15,16,7]. In our case, to solve the problem we take the limit of strong coupling keeping the Taub-NUT radius large. We will also require that $\lambda(m_s\rho)^{-3}\to\infty$.
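For orientation, one common convention for the Taub-NUT metric referred to above (a hedged reconstruction; the paper's eq. (2.1) normalizations may differ) is
$$ds^2_{TN(\rho)} = U\,d\vec x\cdot d\vec x + U^{-1}\big(dy + \vec A\cdot d\vec x\big)^2, \qquad U = \frac{1}{\rho^2} + \frac{1}{2|\vec x|}, \qquad \vec\nabla\times\vec A = \vec\nabla U,$$
with $y\sim y+2\pi$; at $|\vec x|\to\infty$ the fiber circle has radius $\rho$, and for the $q$-centered space $TN_q(\rho)$ one replaces $\frac{1}{2|\vec x|}$ by $\sum_{a=1}^{q}\frac{1}{2|\vec x-\vec x_a|}$, which for coincident centers develops the $A_{q-1}$ singularity at the origin.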
We can think of $\rho$ as being fixed but very large and $\lambda\to\infty$ much faster. We will not show that this corresponds to a parameter that sits in a hyper-multiplet (and hence decouples from the vector-multiplets), but this is the basic assumption. Recall that in 2+1D hyper-multiplets and vector-multiplets can be distinguished with the help of the $U(1)_R\otimes SU(2)_U$ symmetry, which is the unbroken subgroup of (1.1). The scalar fields of a vector-multiplet are invariant under $SU(2)_U$, while the scalar fields of a hyper-multiplet are in the ${\bf 2}$ (see [17]). (The dilaton, which is a singlet, is a quadratic expression in these fields.) Similarly, the fermions of a hyper-multiplet are invariant under $SU(2)_U$ and the fermions of a vector-multiplet are in the ${\bf 2}$. The next step is to use string-dualities to convert the region (2.2) to a weakly coupled theory. At this point we have $k$ NS5-branes in type-IIA on $\mathbb{R}^{2,1}\times T^3\times TN_q$ with string coupling $\lambda$, string scale $m_s$, $T^3$-radii $R_i$, and twists $\alpha_i$. For simplicity, we assume that $T^3$ is of the form $S^1\times S^1\times S^1$ with no NS-NS 2-form fluxes. Since $\lambda\to\infty$, we view this as $k$ M5-branes in M-theory on $\mathbb{R}^{2,1}\times T^3\times S^1\times TN_q$. Let $M_p$ be the 11-dimensional Planck scale and $R$ the radius of the $S^1$; they are related according to $R = \lambda\, m_s^{-1}$ and $M_p^3 = m_s^3/\lambda$. The radius of $TN_q$ is $\rho$.
Step 1: In the limit (2.2), the fiber of the Taub-NUT is small in 11-dimensional Planck units, so we should view it as the 11th dimension and convert to type-IIA on $\mathbb{R}^{2,1}\times T^3\times S^1\times\mathbb{R}^3$. We still have $k$ NS5-branes on $\mathbb{R}^{2,1}\times T^3$, and $TN_q$ became $q$ D6-branes on $\mathbb{R}^{2,1}\times T^3\times S^1$. The $\alpha$-twists became RR 1-form Wilson lines along the cycles of $T^3$. The new string coupling and string scale are set by $\rho$ and $M_p$, and the radii of the $T^3$ become small in the new string units. This means that we must perform T-duality on $T^3$.
Step 2: After T-duality on $T^3$ we obtain type-IIB. There are now $k$ NS5-branes on $\mathbb{R}^{2,1}\times T^3$ and $q$ D3-branes on $\mathbb{R}^{2,1}\times S^1$. At this point the $\alpha$-twists became RR 2-form fluxes through the cycles $C_{jk}$, where $C_{jk}$ is the 2-cycle made out of the $j$-th and $k$-th directions in $T^3$. The string coupling is now large. This means that we must do S-duality.
Step 3: After S-duality we get type-IIB with $q$ D3-branes and $k$ D5-branes in the same geometry, with a new string coupling and string scale. The radii of the $T^3$ again become small in the new string units, so we must perform another T-duality on $T^3$. However, because of the NS-NS 2-form fluxes, just as in [19], another T-duality on $T^3$ will not help. Instead, let us do a T-duality on $S^1$, which brings us to the final setup of gauge theory on a non-commutative torus.
Step 4: After T-duality along $S^1$ we get type-IIA with $k$ D6-branes and $q$ D2-branes. At this point, the $\alpha$-twists are still NS-NS 2-form fluxes. We thus end up with a system of $k$ D6-branes on $T^4\times\mathbb{R}^{2,1}$ and $q$ D2-branes which are points on $T^4$. The radii of the $T^4$ are given in terms of the 3 radii $R_i$ of the original $T^3$ by eq. (2.3), where $\hat M_s$ denotes the final type-IIA (with the D2-branes and D6-branes) string scale, which also determines the final string coupling constant.
Similarly, we can start with $S_A(k)$ with 3 $\eta$-twists. By definition, this is $S_B(k)$ on the dual $T^3$ with 3 $\alpha$-twists. We realize this in type-IIB on the background $\mathbb{R}^{2,1}\times T^3\times TN_q$ with $k$ NS5-branes on $\mathbb{R}^{2,1}\times T^3$. As before, the radius of the fiber of the Taub-NUT space is denoted by $\rho$. We first perform S-duality to replace the NS5-branes with $k$ D5-branes.
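The bookkeeping in these steps uses only the standard M-theory/type-IIA and T-duality relations, quoted here as a reminder (conventions vary; these are the usual textbook forms):
$$R = \frac{\lambda}{m_s}\,, \qquad M_p^3 = \frac{m_s^3}{\lambda}\,; \qquad \text{IIA on a circle of radius } \rho:\ \ \tilde\lambda = (\rho M_p)^{3/2}\,,\ \ \tilde m_s^2 = \rho\, M_p^3\,;$$
$$\text{T-duality on a circle of radius } r:\ \ r\ \to\ \frac{1}{\tilde m_s^2\, r}\,, \qquad \tilde\lambda\ \to\ \frac{\tilde\lambda}{\tilde m_s\, r}\,.$$
Applying these rules step by step reproduces the scalings described in the text and, in particular, makes it straightforward to check that $\rho$ drops out of the final $T^4$ data.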
At this point the $\eta$-twists are off-diagonal components of the metric, $g_{i9}$, with $i$ in the directions of $T^3$ and 9 in the direction of the Taub-NUT fiber. Then, we perform T-duality along the fiber direction to obtain type-IIA on $\mathbb{R}^{2,1}\times T^3\times S^1\times\mathbb{R}^3$ with $q$ NS5-branes on $\mathbb{R}^{2,1}\times T^3$ and $k$ D6-branes on $\mathbb{R}^{2,1}\times T^3\times S^1$. The $\eta$-twists became NS-NS 2-form fluxes $B_{i4}$, where 4 is the direction of the $S^1$. Then, we do T-duality on the three directions of $T^3$. We obtain $k$ D3-branes on $\mathbb{R}^{2,1}\times S^1$ and $q$ NS5-branes. The $\eta$-twists are now off-diagonal components $g_{i4}$. We then do another S-duality to get $k$ D3-branes and $q$ D5-branes and, finally, another T-duality on $T^3$. At this point we are back with $k$ D6-branes and $q$ D2-branes. The $\eta$-fluxes are now NS-NS 2-form fluxes $B_{i4}$. The moduli space is thus the same as the moduli space of $q$ D2-branes inside $k$ D6-branes on $T^4$ with NS-NS 2-form fluxes. In the case of $\alpha$-twists, these fluxes have both indices in the directions of $T^3\subset T^4$. In the case of $\eta$-twists, the fluxes have one index in a direction of $T^3$ and the other index in the 4th direction. In the generic case, we have both $\alpha$-twists and $\eta$-twists simultaneously. The result is that the NS-NS 2-form flux is nonzero for all 6 2-cycles of $T^4$. The string scale, string coupling, and the parameters of the $T^4$ are as calculated above. We could in principle follow the chain of dualities above with simultaneous $\alpha$-twists and $\eta$-twists, but the intermediate steps would involve cumbersome non-linear expressions. The moduli space of $q$ D2-branes inside $k$ D6-branes on $T^4$ with NS-NS 2-form fluxes, in the limit that the size of the $T^4$ vanishes, was shown to be equivalent to the moduli space of $k$ instantons of $U(q)$ gauge theory on a non-commutative $T^4$ [18-21]. It is likely that this result is true even for $T^4$ of finite size, because the size decouples by arguments as above. In the next sections we will review the non-commutative geometry and formulate a precise statement about the moduli space.

Review of Noncommutative Gauge Theory

In this section we will review the elements of non-commutative gauge theory which are relevant to our situation. Non-commutative gauge theory first entered string theory in [19], where it was shown to provide a matrix model for M-theory on a torus with the $C^{(3)}$ field turned on along the light-like circle. Subsequently, a lot of interesting work on this topic was done [20-33]. What we need here is not the connection to matrix theory but just the study of D-branes with a $B_{NS}$ field turned on. Consider type-IIA on $\mathbb{R}^{1,9-d}\times T^d$ with $q$ D0-branes. The radii of $T^d$ are called $R_i$, $i = 1,\ldots,d$, the string mass is $m_s$ and the coupling is $\lambda$. Furthermore, let there be a constant $B_{NS}$ field, $b_{ij}$, along the $T^d$. In [27] this system was studied using the approach of [34]. The result is that the low energy physics is described by a $d+1$ dimensional $U(q)$ gauge theory on a dual torus, with radii and gauge coupling determined by $R_i$, $m_s$ and $\lambda$. The effect of $b_{ij}$ is to change the action: every time two fields are multiplied, the multiplication is with the $*$-product of eq. (3.4). The action is the usual gauge theory action, just with this modification. If there had been no $B_{NS}$-field, the resulting $d+1$ dimensional gauge theory could have been obtained by performing T-duality along $T^d$. The $q$ D0-branes would have turned into $q$ D$d$-branes. The radii and gauge coupling of the $U(q)$ theory can be calculated in this way.
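For reference, the standard form of the deformation (eq. (3.4) was lost in extraction, so the following is a hedged reconstruction with a conventional normalization $\theta_{ij}\propto b_{ij}$):
$$(f*g)(x) = \exp\Big(\frac{i}{2}\,\theta^{ij}\frac{\partial}{\partial\xi^i}\frac{\partial}{\partial\zeta^j}\Big)\,f(x+\xi)\,g(x+\zeta)\Big|_{\xi=\zeta=0}\,, \qquad U_iU_j = e^{2\pi i\,\theta_{ij}}\,U_jU_i\,,$$
where the $U_i$ are the generators of the algebra of functions on the non-commutative torus used below.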
The important point to remember is that the only change from having a $B_{NS}$-field is to change the product into the $*$-product of eq. (3.4). The radii and gauge coupling are independent of $b_{ij}$. This result could not have been obtained by T-duality, since $B_{NS}$-fields change the formulas of T-duality and would have given other radii and gauge couplings. There is another way of formulating this gauge theory. Instead of working with the $*$-product, eq. (3.4), one can say that the torus $T^d$ is non-commutative. The algebra of functions on the torus, $A$, is generated by $U_1,\ldots,U_d$ with the relations $U_iU_j = e^{2\pi i\,\theta_{ij}}U_jU_i$. The generalization of finite dimensional vector bundles is finitely generated projective modules over $A$. Let $E$ be such a module. One can define connections, $\nabla$, and the curvature $F_{ij}$ of this module [19,21]. One can define the Chern character of the module, $\mathrm{ch}(E) = \hat\tau\, e^{F/2\pi i}$, where $\hat\tau$ is the trace on $\mathrm{End}_A(E)$. $\mathrm{ch}(E)$ can be regarded as an element in the cohomology, related to the integral charges $\mu(E)\in H^*(T^d,\mathbb{Z})$ by $\mathrm{ch}(E) = e^{\iota(b)}\mu(E)$. Here $\iota(b)$ denotes contraction with $b$, considered as an element of $H^*(T^d,\mathbb{C})$ [21]. Suppose, for instance, that only $\mu_0$ and $\mu_1$ are nonzero; then $\mathrm{ch}_0 = \mu_0 + \iota(b)\mu_1$. This equation reflects the fact that the number of D2-branes is unchanged by the presence of the $B_{NS}$-field, but the number of D0-branes is shifted by the product of the number of D2-branes and the $B_{NS}$-field along the D2-branes.

Noncommutative Instantons as the Moduli-space

Let us now go back to our system of $q$ D2-branes inside $k$ D6-branes given above. They have a common $\mathbb{R}^{1,2}$. This is the space-time in which the 3 dimensional theory lives. The 3 dimensional theory has a low energy description as a sigma model with the moduli space of vacua as target space. The moduli space of vacua is a hyper-Kähler manifold. The moduli space of vacua comes from the dynamics on the $T^4$, which is the same as the dynamics of $q$ D0-branes in $k$ D4-branes on $T^4$. The radii of the $T^4$, $\hat R_1, \hat R_2, \hat R_3, \hat R_4$, the string coupling $\hat\lambda$ and the string scale $\hat M_s$ are given in terms of the parameters of the $S_A(k)$ compactification in (2.3); the vacuum structure of the vector-multiplets should be independent of $\rho$ in this limit. According to the above review of non-commutative geometry, the moduli space is equal to the moduli space of $k$ instantons in $U(q)$ gauge theory on a non-commutative torus, $T^4$, with non-commutativity parameters equal to $\alpha_i, \eta_i$. As explained above, the radii and gauge coupling of this gauge theory are the same as if $\alpha_i = \eta_i = 0$. Hence they can be found by T-duality on $T^4$. By this T-duality one obtains $k$ D2-branes in $q$ D6-branes on $T^4$, with certain radii, string mass and coupling; in the $U(q)$ theory, this gives the corresponding gauge coupling. Observe that $\rho$ has dropped out of the radii and the gauge coupling. What about the limit $\lambda\to\infty$ with $m_s$ fixed? To see that the moduli space of vacua is well defined in this limit, we should remember that scalar fields in three dimensions have dimension $\frac12$ if we want a standard kinetic term. We can either view the moduli space of vacua from the $U(q)$ gauge theory point of view or from the $U(k)$ theory on the D2-branes. From the latter point of view the moduli space is the Higgs branch. The action of the $U(k)$ theory has a kinetic term for the transverse coordinates $X^i$; we define $\Phi^i = \lambda^{-1/2}m_s^{3/2}X^i$, so that $\Phi$ has a standard kinetic term. The radii of the $T^4$ are given in (4.8). We see that the limit $\lambda\to\infty$ exists. This last discussion was really superfluous: since $S_A(k)$ only depends on the combination $m_s^2$ and does not feel $\rho$, this had to be true.
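As a worked instance of the charge shift just described (the $2\pi$ normalization of $b$ is our assumption): on a $T^2$ with $N_{D2}$ D2-branes, $N_{D0}$ D0-branes and a constant flux $b$ along the D2-branes,
$$\mathrm{ch}_{\text{D0}} = N_{D0} + \frac{b}{2\pi}\,N_{D2}\,, \qquad \mathrm{ch}_{\text{D2}} = N_{D2}\,,$$
so turning on $b$ leaves the D2 charge alone but shifts the induced D0 charge, exactly as in the degree-0 component $\mathrm{ch}_0 = \mu_0 + \iota(b)\mu_1$ above.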
For finite $m_s\rho$, it could even be true for the full theory, not just the moduli space of vacua. The effect of the twists is just to deform the moduli space, and so it does not change the fact that the moduli space is independent of $\rho$ and has a limit when $\lambda\to\infty$, keeping $m_s$ fixed. We can also see from (4.8) what happens in the limit of the (2,0) theory. For this limit we take $m_s\to\infty$. We find that the $T^4$ degenerates to $T^3\times\mathbb{R}$. Let us now be more precise about the space of instantons on a non-commutative $T^4$. For this sake we will temporarily neglect the uncompactified directions and think of our system as $q$ D0-branes and $k$ D4-branes on $T^4$. According to the review of non-commutative geometry above, this is described by a gauge theory on the dual $T^4$ with non-commutativity parameters, $b_{ij}$, equal to the twists. By gauge theory we really mean a projective module, $E$, characterized by its charges $\mu(E)$. Computing the energy, we find that to minimize it, $(\mathrm{ch}_1)_{34}$ should be minimized. We see that when $b_{12} > \frac12\cdot\frac{2\pi}{k}$ we can lower the energy by taking $(\mu_1)_{34} = -1$. This phenomenon divides the space of $b_{ij}$ into "Brillouin" zones. Each zone is a six dimensional cube of length $\frac{2\pi}{k}$ in each direction. Inside a zone, the low energy physics is described by the gauge theory corresponding to a module with the $\mu(E)$ which minimizes the energy. In crossing the boundary between 2 zones, $\mu(E)$ jumps. We also see another interesting phenomenon. Whenever $\frac{k\,b_{12}}{2\pi}$ is an integer, we have $(\mu_1)_{34} = -\frac{k\,b_{12}}{2\pi}$ and hence $(\mathrm{ch}_1)_{34} = 0$. This means that $\mathrm{ch}(E)$ is nonzero only in dimensions 0 and 4. (We are keeping all other components of $b_{ij} = 0$; only $b_{12} = n\,\frac{2\pi}{k}$.) This is exactly like the pure D0/D4 system with no $B_{NS}$-field. This system has a phase where the D0-branes and D4-branes are separated. To reach this phase the system has to go through zero-size instantons. We thus conclude that whenever $b_{12} = n\,\frac{2\pi}{k}$, $n\in\mathbb{Z}$, there is another phase. Of course, there is nothing special about $b_{12}$: similar statements can be made for the other 5 components of $b_{ij}$, and even for all of them simultaneously. The point is that for each center of a "Brillouin" zone there is another branch emanating from a locus on the Coulomb branch. It emanates from the points on the Coulomb branch where some instantons have shrunk to zero size. The other phase consists of the $k$ D4-branes, with $-n$ D2-branes inside, moving away from the $q$ D0-branes. Let us calculate the dimension of this branch. Suppose first $n=1$, so there are $k$ D4-branes with $-1$ D2-brane inside (equivalently, 1 anti-D2-brane). This system has a bound state; it is not marginally bound. The system has an 8 dimensional moduli space. To see this we should really remember that it is really $k$ D6-branes with $-1$ D4-brane. 4 of the dimensions are $U(1)$ Wilson lines on the $T^4$. They are center of mass coordinates and are always present; we are not interested in these. The other 4 are the 3 transverse positions and the dual photon in 3 dimensions. We conclude that the other phase is 4 dimensional. Furthermore, it emanates from a point on the Coulomb branch, since all instantons have to shrink on top of each other. The only freedom is the point where they shrink, but that is a center of mass degree of freedom which we ignore. Let us now take $n$ to be generic. Let $g = \gcd(n,k)$. The system of $n$ D2-branes inside $k$ D4-branes can split into $g$ separate systems. The dimension is thus $8g - 4$, subtracting the center of mass again. It emanates from the Coulomb branch on a locus of dimension $4g - 4$.
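The dimension count can be summarized as simple arithmetic (each bound subsystem contributes 4 Wilson lines on $T^4$, plus 3 transverse positions, plus one dual photon):
$$g = \gcd(n,k)\,, \qquad \dim = 8g - 4\,; \qquad \text{e.g. } (n,k) = (1,k):\ g = 1,\ \dim = 4\,; \quad (n,k) = (2,4):\ g = 2,\ \dim = 12\,,$$
where the subtracted 4 is the overall center of mass, which is always present and is discarded.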
The special case of $q = 1$, $k = 2$ was studied in detail in [8]. There it was found that there is another phase of dimension 4 for $\alpha = \pi$. We see that this agrees exactly with what was found here; moreover, we get a much clearer picture of the other branch. In the next section we will understand these branches from a field theory point of view.

Phase Transitions from the Gauge Theory

With generic twists (non-commutativity parameters), the moduli-space that we obtain is smooth. However, for special values of the twists the moduli space has ADE-type singularities. We would now like to explain the origin of some of these singularities. $S_B(k)$ is a gauge theory at low energies. Let us study it with an $\alpha$-twist along one circle and no twist along the other 2 circles. Since there is a circle without twist we can T-dualize in that direction to $S_A(k)$, so these remarks apply to $S_A(k)$ as well. We want to reproduce the existence of other branches of the moduli space. For a related discussion see [35]. The fields in 6 dimensions are a $U(k)$ vector-multiplet and an adjoint hypermultiplet. In 3 dimensions there is a tower of $U(k)$ vector-multiplets with masses $(\frac{n_1}{R_1}, \frac{n_2}{R_2}, \frac{n_3}{R_3})$, $n_i\in\mathbb{Z}$, and a tower of adjoint hypermultiplets with masses shifted by the twist along the first circle. We recall that a mass in $N=4$ theories in 3 dimensions is specified by 3 numbers. The moduli space is $4k$-dimensional, including the center of mass degrees of freedom. On the Coulomb branch the $U(k)$ is broken to $U(1)^k$. Each adjoint hypermultiplet splits into $k^2$ hypermultiplets of the following charges: there are $k$ hypermultiplets with charge $(0,\ldots,0)$, and there are hypermultiplets with charges $(1,-1,0,\ldots,0)$ plus permutations, for a total of $k(k-1)$ of these. Some of these hypermultiplets can become massless on the Coulomb branch. For that to happen we have to turn on a Wilson line, $A_1$, along the first circle and set the other $3k$ moduli to zero. $A_1$ has the form $A_1 = \mathrm{diag}(a_1,\ldots,a_k)$. The tower of hypermultiplets is now as follows. There are $k$ of charge $(0,\ldots,0)$; the uncharged ones never become massless, as long as the twist is not a multiple of $2\pi$. The charged ones become massless if the Wilson line compensates the twist. Now it is easy to make some of them massless by choosing $A_1$ appropriately. However, to have a Higgs branch we need non-trivial solutions of the D-flatness equations. For hypermultiplets charged under a $U(1)^r$ group there should be at least $r+1$ of them to have a non-trivial solution. We thus need to find a number of massless hypermultiplets which is bigger than the number of $U(1)$'s under which they are charged. The multiplet of charge $(1,-1,0,\ldots,0)$ is massless if $a_1 - a_2 = \alpha \pmod{2\pi}$, the one of charge $(0,1,-1,0,\ldots,0)$ is massless if $a_2 - a_3 = \alpha \pmod{2\pi}$, and so on, up to the multiplet of charge $(0,\ldots,0,1,-1)$, which is massless if $a_{k-1} - a_k = \alpha \pmod{2\pi}$. This gives $k-1$ massless hypermultiplets. To have one more we need $(-1,0,\ldots,0,1)$ to be massless. This is the case if $a_k - a_1 = \alpha + 2\pi n_1$ for some integer $n_1$. Now, summing all $k$ conditions gives $k\alpha = 0 \pmod{2\pi}$, so we need $\frac{k\alpha}{2\pi}$ to be an integer. So for $\alpha = \frac{2\pi}{k}$ we have another phase of dimension 4. The dimension is 4 because there are $k$ massless hypermultiplets, each having 4 scalar fields, and the D-flatness conditions remove $4(k-1)$ dimensions, leaving 4 real dimensions. This phase agrees exactly with the exact result from the previous section. We thus see that a naive field theory treatment, keeping all Kaluza-Klein modes, reproduces the result. This phase emanates from the Coulomb branch whenever $a_i - a_{i-1} = \alpha$, as we saw above. This fixes the $a_i$ up to an overall shift. The overall shift is the $U(1)$ part, which we discard anyway.
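As a check, take the case $q=1$, $k=2$, $\alpha=\pi$ of [8] discussed above. The two massless-hypermultiplet conditions read
$$a_1 - a_2 = \pi \pmod{2\pi}\,, \qquad a_2 - a_1 = \pi \pmod{2\pi}\,,$$
which are compatible (both are solved by $a_2 = a_1 + \pi$) precisely because $\frac{k\alpha}{2\pi} = 1\in\mathbb{Z}$; the resulting $k = 2$ massless hypermultiplets, minus the $4(k-1) = 4$ D-flatness and gauge conditions, give a Higgs branch of dimension $4\cdot 2 - 4 = 4$, matching the exact result.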
This shows that the other phase emanates from one particular point on the Coulomb branch. Note that the field theory treatment is justified when $M_sR_i\gg 1$. More generally, let us take $\alpha = n\,\frac{2\pi}{k}$ and $g = \gcd(n,k)$. Now we can play the same game as above, but within $g$ blocks of the $U(k)$ matrix of size $\frac{k}{g}$. We thus get $g$ sets of $\frac{k}{g}$ massless fields. Each set is charged under a $U(1)^{\frac{k}{g}-1}$ subgroup. This gives a $4g$ dimensional phase emanating from a locus on the Coulomb branch. This locus has dimension $4g - 4$: the $4g$ comes from the diagonal $U(1)$ in each of the $g$ blocks, and the center of mass is subtracted again. This branch has a total dimension of $4g + 4g - 4 = 8g - 4$. We again find agreement with the exact result described previously. The branches described above are the only ones coming from the naive field theory description, besides the cases $\alpha = 2\pi n$, $n\in\mathbb{Z}$, which behave like $\alpha = 0$.

The 3+1D limit

In this section we will explain how to obtain the 3+1D Seiberg-Witten curves of the various theories compactified on $T^2$ with a twist. This time we only have two independent $\alpha$-twists, corresponding to the two cycles of $T^2$. The way to obtain the 3+1D SW curves is to start with the moduli space of the theory compactified on $T^2\times S^1$, where $S^1$ is of radius $R$, and take the limit $R\to\infty$. Let the 2+1D hyper-Kähler moduli space be of dimension $4n$. In the limit $R\to\infty$, it can be written as a fibration of $T^{2n}$ over a base of dimension $2n$. In the decompactification limit the fiber $T^{2n}$ shrinks to zero. We interpret it as the Jacobian variety of a Riemann surface of genus $n$ which varies over the base. This will then be the Seiberg-Witten curve (see [17]). Starting with the Blum-Intriligator little-string theories of $k$ NS5-branes at an $A_{q-1}$ singularity compactified on $T^2$ with twists we can get, in appropriate limits, a 3+1D gauge theory with gauge group $U(k)\times\cdots\times U(k)$ and massive hyper-multiplets in consecutive $(k,\bar k)$ representations. The Seiberg-Witten curves for these models have been derived in [7]. As we will show below, we can reproduce these curves by taking the appropriate decompactification limit of the moduli space of $k$ $U(q)$ instantons on the non-commutative $T^4$. To start, we will recall how the reduction of the untwisted compactified Blum-Intriligator theories works.

From instantons to quiver gauge theories

When we set all the $\alpha$-twists to zero, we obtain the statement that the Coulomb-branch moduli space of the theories of $k$ NS5-branes on an $A_{q-1}$ singularity, compactified on $T^3$, is the same as the moduli space of $k$ ordinary instantons with a $U(q)$ gauge group on $T^4$. This result has already been established in [10,12]. Suppose we compactify on $T^3 = T^2\times S^1$ and take the radius of $S^1$, $R\to\infty$. It can be checked (see (4.8)) that the auxiliary $T^4$ becomes a product $T^2_B\times T^2_F$. The complex structures of $T^2_F$ and $T^2_B$ are fixed as $R\to\infty$, while the area of $T^2_B$ is proportional to $R$ and the area of $T^2_F$ is proportional to $R^{-1}$. Now take a particular gauge configuration corresponding to an instanton of $U(q)$ with instanton number $k$. We can encode the information in the instanton as follows (see [36,37]). At a local point on the base, the gauge field reduces to two commuting $U(q)$ Wilson lines on the fiber. We can describe them uniquely as $q$ points on the dual $T^2$ of the fiber. These $q$ points vary over the base $B$. The instanton equations imply that they span a holomorphic curve $\Sigma_g$ of genus $g = qk + 1$. $\Sigma_g$ is called the "spectral curve".
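The genus quoted for the spectral curve follows from the adjunction formula on an abelian surface, assuming the homology class given in the next subsection, $[\Sigma] = k[T^2_B] + q[T^2_F]$ with $[T^2_B]\cdot[T^2_F] = 1$:
$$[\Sigma]^2 = 2kq\,, \qquad g = 1 + \tfrac12\,[\Sigma]^2 = qk + 1\,,$$
consistent with the value $g = qk+1$ stated above.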
To completely describe the instanton we also need to specify a line bundle over $\Sigma_g$, which corresponds to a point in the Jacobian of $\Sigma_g$ (recall that the Jacobian of a genus $g$ curve is a torus $T^{2g}$). The line bundle is called the "spectral bundle". Alternatively, we can represent the points in other equivalent ways. It is also easy to see that, as the base $B$ decompactifies to $S^1\times\mathbb{R}$, we reproduce exactly the curves from the brane construction of [7] for the quiver gauge theory.

The rôle of the non-commutativity

Now let us repeat the same procedure but with two non-commutativity parameters, $\alpha_1$ and $\alpha_2$. We can take $\alpha_1$ to be along the first cycle of the base $B = T^2$ and the first cycle of the fiber $F = T^2$, and we take $\alpha_2$ to be along the second cycle of the base $B$ and the first cycle of the fiber $F$. The $\eta$-twists will similarly correspond to non-commutativity along the second cycle of $B$ and one of the two cycles of $F$. To translate this to the curve $\Sigma_g$, we take the system of $q$ D6-branes and $k$ D2-branes and put in NSNS 2-form fluxes according to the non-commutativity parameters. After T-duality along $F$, the NSNS fluxes become components of the metric $G_{IJ}$. Here, $\tau\equiv\tau_1 + i\tau_2$ is the complex structure of $T^2_F$, $\rho\equiv\rho_1 + i\rho_2$ is the complex structure of $T^2_B$, and the normalization is such that the overall volume of the unit cell is $m_s^2$. We will denote the coordinates in $\mathbb{R}^4$ by $(x_1, x_2, x_3, x_4)$. The D2 and D6 branes became a single D4-brane in the homology class $[\Sigma] = k[T^2_B] + q[T^2_{F'}]$, where $[T^2_B]$ and $[T^2_{F'}]$ are two faces of the $T^4$. Similarly to [7], the D4-brane will find a minimal-area surface in this homology class. In the complex structure given by the flat metric, the cohomology class $\omega\in H^2(T^4,\mathbb{Z})$ which is Poincaré dual to $[\Sigma]$ will, generically, be a mixture of $(1,1)$, $(0,2)$ and $(2,0)$ forms. However, it is always possible to find a complex structure (with respect to the flat metric) for which $\omega$ is entirely a $(1,1)$ form. In this complex structure the $T^4$ is "algebraic" (see p. 315 of [38]). Given the complex structure, it is possible to write down the curve $\Sigma$ as the zero locus of a $\theta$-function on $T^4$. These $\theta$-functions are the sections of the line bundle corresponding to $[\Sigma]$, and they depend on $kq$ parameters which are the moduli (see [38] for further details). It is easy to see that the "elliptic models" of [7] are recovered in the special limit in which we get a gauge theory with massive hyper-multiplets. In this case $\tau\to\infty$ and there are no $\eta$-twists. The fiber $F'$ is replaced with a strip $S^1\times\mathbb{R}$. The class $[\Sigma]$ is analytic (i.e. the class $\omega$ is a $(1,1)$ 2-form) and the Seiberg-Witten curves of [7] are recovered.

Another Look at the $\eta$-twists

In this section, we write explicitly the solution for the type-IIA (or type-IIB) theory with both $\alpha$-twists and $\eta$-twists turned on. These solutions should be interpreted as string world-sheet $\sigma$-models with a B-field. We will start with a Taub-NUT space without NS5-branes. It is straightforward to define the $\alpha$-twist. One starts with some given background, which is a principal $U(1)$ bundle times a torus $T^d$. Locally, the $\alpha$-twist is just the change of coordinate in the $S^1$ fiber of the Taub-NUT space of the form $y\to y + \alpha_I\psi^I$. Here $y$ is the coordinate on the circle (see (2.1)) and $\psi^I$ is the coordinate on $T^3$ ($I = 1,2,3$). Since this is just a change of variables, the string theory equations of motion are trivially satisfied. But globally this is not a valid coordinate transformation, since $\alpha_I\psi^I$ is not a periodic function on $T^3$ modulo $2\pi$. Therefore, we get a different background -- we call it the $\alpha$-twisted background.
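A small check of the statement that the $\theta$-functions depend on $kq$ parameters: on an abelian surface, the number of sections of the line bundle $L$ with $c_1(L) = [\Sigma]$ is, by Riemann-Roch (assuming $L$ is ample so higher cohomology vanishes),
$$h^0(L) = \chi(L) = \tfrac12\,[\Sigma]^2 = kq\,,$$
matching the $kq$ parameters quoted above.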
As for $\eta$-twists, they are related to $\alpha$-twists by T-duality on $T^3$. We will construct the background with both $\alpha$- and $\eta$-twists turned on in the following way. We first consider the background containing the Taub-NUT space times a three-torus, without any twists. We introduce $\alpha$-twists along the three-torus, with parameters $\eta_I$. Then, we make a T-duality transformation and get a background with $\eta$-twists. This new background is again a $U(1)$ bundle times a (dual) torus, and we now $\alpha$-twist it. In this way, we get a background with both $\alpha$-twists and $\eta$-twists. Let us do it explicitly. Start with $\mathbb{R}^{1,2}\times TN(\rho)\times T^3$, with the metric of eq. (6.1), where $\mathcal{A}$ is the connection one-form $\mathcal{A} = dy - \vec A\cdot d\vec r$, and we turn on a B field $\frac12 b_{IJ}\,d\psi^I\wedge d\psi^J$ along the torus. We wish to introduce $\alpha$-twists with the parameters $\eta_I$. As was explained above, this means just the change of variables $y\to y - \eta_I\psi^I$, which amounts to replacing $\mathcal{A}^2$ with $(\mathcal{A} - \eta_I d\psi^I)^2$ in (6.1). Now we make three T-dualities. We do this by the standard technique of treating $V^I_\alpha\equiv\partial_\alpha\psi^I$ (where $\alpha$ is a string world-sheet index) as an independent variable and inserting a Lagrange multiplier, $\tilde\psi_I$, for $\partial_{[\alpha}V^I_{\beta]}$. We get a new metric and B field, with the notation $\eta^I = g^{IJ}\eta_J$, $(\eta,\eta) = \eta_I\eta^I$, and $g^{IJ} + b^{IJ}$ the matrix inverse to $g_{IJ} + b_{IJ}$. If we start with a non-degenerate torus and a very small coupling constant, then T-duality gives us back a very small coupling constant. Now we $\alpha$-twist this background. Again, $\alpha$-twisting is just a replacement of variables in all the formulas for the metric and the B field. It is convenient to absorb $b_{IJ}\eta^I d\tilde\psi^J$ into $\alpha_I d\tilde\psi^I$. Then, the background fields take the form of eqs. (6.7) and (6.8). Also, the dilaton is not constant: let $\lambda$ be the string coupling at $|\vec r|\to\infty$; the string coupling at finite $|\vec r|$ is then given by eq. (6.9). The metric (6.7) is not, strictly speaking, hyper-Kähler. Indeed, although it does have three complex structures, they are not covariantly constant with respect to the standard covariant derivative. But they are covariantly constant if we modify $\Gamma^\rho_{\mu\nu}$ by the torsion, proportional to $H = dB$. We want to study the moduli space of the theory on the NS5-brane, sitting at $\vec r = 0$ in this background. As we remarked in section (2), the NS5-brane has a size of order $l_s$ and, although it is very heavy, it could affect the metric. We will explore this later in this section. For now, we will assume that it is safe to forget about the NS5-brane. To study the moduli space, we perform the chain of dualities. It is most convenient to think of these dualities as acting on the asymptotic ($|\vec r|\to\infty$) values of the fields. Therefore, we would like to discuss how the background fields near the position of the NS5-brane ($|\vec r|\to 0$) are related to the asymptotic values of the fields at $|\vec r|\to\infty$. Let us look first at the geometry near the origin in $\mathbb{R}^3$. From (6.4) and (6.6) we see that the geometry becomes flat when the following two conditions are satisfied:
$$|\vec r|\ll\rho \qquad\text{and}\qquad |\vec r|\ll\frac{\rho}{1 + (\eta,\eta)\rho^2}\,. \eqno(6.10)$$
In this limit, we have just $\mathbb{R}^{1,6}\times T^3$, with a flat metric and a constant B field. We wish to study the moduli space for the NS five-brane sitting at $\vec r = 0$. Notice that the transverse fluctuations of this five-brane at energy scale $\simeq m_s^2$ have the characteristic size $\Delta X_\perp\simeq\lambda l_s$. If we take $\rho\simeq l_s$ and general $\eta$, then both of the inequalities (6.10) are satisfied for $|\vec r|\equiv\Delta X_\perp$. This suggests that the parameter $\rho\simeq l_s$ actually does not affect the moduli space.
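All the T-dualities in this section are applications of the standard Buscher rules for a background with an isometry along $y$ (quoted here for convenience; signs depend on conventions):
$$\tilde G_{yy} = \frac{1}{G_{yy}}\,,\quad \tilde G_{ym} = \frac{B_{ym}}{G_{yy}}\,,\quad \tilde B_{ym} = \frac{G_{ym}}{G_{yy}}\,,\quad \tilde G_{mn} = G_{mn} - \frac{G_{ym}G_{yn} - B_{ym}B_{yn}}{G_{yy}}\,,$$
$$\tilde B_{mn} = B_{mn} - \frac{G_{ym}B_{yn} - B_{ym}G_{yn}}{G_{yy}}\,,\qquad \tilde\Phi = \Phi - \tfrac12\log G_{yy}\,,$$
which is how the off-diagonal metric components encoding the $\eta$-twists trade places with NS-NS 2-form components in the chain above.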
The reason why this might not be true is that the transverse size of the NS5-brane is actually of the order $l_s$. Therefore the curvature of the background should, presumably, affect the physics even in the limit $\lambda\to 0$. The answer we will get shows that the moduli space does not really depend on $\rho$. Now let us look at the fields at infinity. They are given by the formulae (6.7) and (6.8) with $|\vec r| = \infty$. We will denote the limits of $R^2(|\vec r|)$, $G_{IJ}(|\vec r|)$ and $B_I(|\vec r|)$ as $|\vec r|\to\infty$ by $R^2$, $G_{IJ}$ and $B_I$. It is convenient to have a dictionary relating the fields at $|\vec r| = \infty$ to the fields at $|\vec r| = 0$. Let us first summarize our notations. We have already introduced the matrices $g_{IJ}$, $b_{IJ}$, $g^{IJ}$ and $b^{IJ}$; we have also introduced $G_{IJ}$ and $B_{IJ}$ in (6.7), and we define $G^{IJ}$, $B^{IJ}$, $g^{-1}_{IJ}$ and $G^{-1}_{IJ}$ analogously. The resulting dictionary relating the asymptotic background to the local background is eq. (6.14). The local value, $\lambda_0$, of the string coupling is related to the asymptotic value $\lambda$ by the formula which follows from (6.9).

The chain of dualities. We start by replacing the Taub-NUT circle with the M-theory circle. We get a D6-brane wrapped on $T^4$, with the NS5-brane on top of it. At this point it is useful to remember how the fields of type-IIA theory are related to the fields of M-theory. M-theory on a $U(1)$ bundle is type-IIA on the base of this bundle. Suppose that the action of $U(1)$ is associated to the vector field $v$. The M-theory three-form $C^M$ splits into a three-form $A^{(3)}$ and a two-form $B$ on the base. Also, we choose some local trivialization and define the connection one-form $A^{(1)}$ on the base, $dA^{(1)} = F$ ($F$ is the curvature two-form on the base, $dA = \pi^*F$). It should be identified with the RR one-form $C^{(1)}$ of type-IIA. Also, $B$ should be identified with the B field of type-IIA (this follows from its coupling to the fundamental string). What is the relation between $A^{(3)}$ and the Ramond-Ramond three-form $C^{(3)}$ of type-IIA? Let us remember the general formula for the couplings of the Ramond-Ramond fields to the D-brane [18]. For example, for the D2-brane we get a coupling involving $C^{(3)}$ together with $C^{(1)}$ and the world-volume fields; here $C^{(1)}$ should be identified with the connection one-form, $A = d\phi + C^{(1)}$. We have to keep in mind that the various forms participating in this formula are, in general, subject to gauge transformations (this is needed for the coupling (6.18) to be correctly defined). This suggests identifying $A^{(3)}$ with $C^{(3)}$ up to such gauge terms. Let us return to our dualities. We assume that the M-theory circle in our original configuration has radius $\lambda l_s$, where $l_s$ is the string scale in the configuration we start with, and $\lambda$ is the original coupling constant (which has to be very small if we want to get Little String Theory on the NS5-brane). The three-form of M-theory is read off from (6.7). If we now treat the Taub-NUT circle as the M-theory circle, we get (6.16), the connection 1-form after the $\alpha$-twist being $\mathcal{A} - \alpha_I d\tilde\psi^I$. In the new type-IIA theory, obtained by compactifying M-theory on the Taub-NUT circle, we obtain new asymptotic values of the background fields and a new string coupling constant. Making three T-duality transformations along $T^3$, we get yet another background and string coupling constant. The NS5-brane remains an NS5-brane, wrapped on $T^3$, and the D6-brane becomes a D3-brane. It shares with the NS5-brane the directions of $\mathbb{R}^{1,2}$. Now we do S-duality, so that $B^{RR}$ becomes $B^{NS}$, and the NS5-brane becomes a D5-brane.
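For the coupling formula alluded to above, the standard Chern-Simons coupling of a D$p$-brane (a hedged reconstruction of the missing equations; normalizations are conventional) is
$$S_{CS} = \mu_p\int_{Dp}\Big(\sum_n C^{(n)}\Big)\wedge e^{B + 2\pi\alpha' F}\,, \qquad \text{D2:}\ \ \mu_2\int \Big(C^{(3)} + C^{(1)}\wedge\big(B + 2\pi\alpha' F\big)\Big)\,,$$
and it is the $B$-dependence of the D2-brane coupling that forces the compensating gauge transformations of $C^{(3)}$ mentioned above.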
Also, we get a new string coupling and a new string length. Then we do a T-duality along the circle parameterized by $\theta$. We now have a D6-brane wrapped on the four-torus, and the D2-brane inside it, orthogonal to the torus. We end up with a final string coupling and string length, and a final metric and B field, given by (6.28) and (6.29). Let us summarize. We have started with $k$ NS5-branes sitting at the center of the Taub-NUT space, string coupling $\lambda_0$ and string length $l_s$. The background fields are given by equations (6.11) and (6.12); they correspond to both $\alpha$-twists and $\eta$-twists being present. By the chain of dualities, we have mapped this configuration to $k$ D6-branes wrapped on $T^4$, and one D2-brane, with the metric and B field given by (6.28) and (6.29). Notice that the volume of $T^4$ is $l_s^2\rho^2/l_4^4$ in the string units specified by $l_4$. In the limit we are interested in ($\lambda_0\to 0$) it remains finite in string units. The shape of the torus does not depend on $\rho$.

World-sheet T-duality in the limit $\rho\to\infty$

Let us now see what happens in the limit $\rho\to\infty$. The strategy will be to start with type-IIA string theory on the purely geometrical background which realizes the $\alpha$-twist. We will then perform world-sheet T-duality on $S^1$ to obtain a nonlinear world-sheet $\sigma$-model. Finally, we will insert the NS5-branes back. To describe the geometrical background we choose complex transverse coordinates $Z_1, Z_2$ (on which the R-symmetry $SO(4)$ acts). These replace the coordinates $y$ and $\vec r$ of $TN(\rho)$. The other coordinates will be denoted $X^\mu$, where $X^5$ is periodic with period $2\pi$. They are the world-sheet fields corresponding to $x^0, x^1, x^2, \psi^1, \psi^2, \psi^3$ of the previous section. The bosonic part of the world-sheet action is the standard free action for these fields. Let us, for simplicity, twist only along $X^5$ ($= \psi^3$). The twist implies that the $Z_i$ are not single-valued; rather,
$$W_i = Z_i\, e^{-i\frac{\alpha}{2\pi}X^5}, \qquad i = 1,2,$$
are single-valued. The world-sheet Lagrangian can now be written in terms of the $W_i$ and $X^5$. Next we perform T-duality by the standard technique of treating $V_\alpha\equiv\partial_\alpha X^5$ as an independent field and inserting a Lagrange multiplier $Y$ for $\partial_{[\alpha}V_{\beta]}$. The result is a world-sheet action corresponding to the metric and B-field
$$ds^2 = \eta_{\mu\nu}dX^\mu dX^\nu + |dW_1|^2 + |dW_2|^2 + \frac{dY^2 - \Big(\frac{\alpha}{4\pi}\sum_j\big(i\bar W_j dW_j - iW_j d\bar W_j\big)\Big)^2}{R^2 + \frac{\alpha^2}{4\pi^2}\big(|W_1|^2 + |W_2|^2\big)}\,,$$
$$B_{\mu\nu}dx^\mu\wedge dx^\nu = \frac{dY\wedge \frac{\alpha}{4\pi}\sum_j\big(i\bar W_j dW_j - iW_j d\bar W_j\big)}{R^2 + \frac{\alpha^2}{4\pi^2}\big(|W_1|^2 + |W_2|^2\big)}\,. \eqno(6.30)$$

Adding in the NS5-brane

Now we repeat the same exercise with the NS5-brane metric. In string units, the metric is the standard NS5-brane harmonic-function metric, the dilaton is given by the same harmonic function, and the solution is to be trusted when $g_s\ll 1$ (see the discussion in [39]). After T-duality we obtain a dual background, which is to be trusted in the corresponding regime. We see that as $R\to 0$, the $Y$-direction stays of finite size $\frac{2\pi}{\alpha}$.

Large radius limit

An interesting question is what is the low-energy description of $S_B(k)$ compactified on $S^1$ of radius $R$ with a fixed $\eta$-twist in the limit $R\to\infty$. Naively, one can argue as follows. To perform an $\eta$-twist we have to go over the "fundamental" degrees of freedom of $S_B(k)$ (whatever they are!) and separate them according to their charge $Q$ under the $U(1)$ subgroup of the R-symmetry and according to their momenta $n$ and windings $w$ along $S^1$. We then add $\eta QR$ to the mass of such a field. In the limit $R\to\infty$ and for generic $\eta$, this will push all the $Q$-charged fields to high energy and we will be left with only the $Q$-neutral sector. Thus, if we start with $N=(1,1)$ $U(k)$ SYM in 5+1D as the effective low-energy description, the conclusion would be that we are left with $N=(1,0)$ $U(k)$ SYM.
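For completeness, the gauged world-sheet Lagrangian used in this dualization (a sketch in conformal gauge, with $\omega_m dx^m$ denoting the cross term $G_{5m}dx^m$ of the twisted metric; the precise form of the original action was lost in extraction) is
$$L = G_{55}\,V_\alpha V^\alpha + 2\,\omega_m\,V^\alpha\partial_\alpha x^m + G_{mn}\,\partial_\alpha x^m\partial^\alpha x^n + \epsilon^{\alpha\beta}\,Y\,\partial_\alpha V_\beta\,;$$
integrating out $Y$ sets $V_\alpha = \partial_\alpha X^5$ and returns the original model, while integrating out $V_\alpha$ produces the metric and B-field of (6.30).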
This conclusion cannot be correct, since the gluinos of the $N=(1,0)$ vector-multiplet are chiral and the theory would have a local gauge anomaly. One possibility is that there is no 5+1D limit. For this to be true we must show that there are no BPS states corresponding to light KK states. On the type-IIA side we must show that there are no states made by strings wrapped on the T-dual $S^1$ which would become light. Perhaps, when the circle is small enough, they do not form bound states any more?

Conclusion

Let us summarize the results:
1. The moduli space of the little-string theories of $k$ NS5-branes compactified on $T^3$ with $Spin(4)$ R-symmetry $\alpha$-twists is equal to the moduli space of $k$ $U(1)$ instantons on a non-commutative $T^4$. The shape of the $T^4$ is determined by the shape and size of the physical $T^3$ and by the NSNS 2-form fluxes along it. The non-commutativity parameters are determined from the values of the twists.
2. In principle, there are 6 non-commutativity parameters on $T^4$. They are determined from the 3 geometrical $\alpha$-twists and the 3 non-geometrical $\eta$-twists. The moduli space depends only on the 3 self-dual combinations of the non-commutativity parameters, and hence only on the sum of the $\eta$-twists and $\alpha$-twists.
3. Combining the result for $k=2$ with the result of [8], we obtain a concrete prediction for the moduli space of 2 $U(1)$ instantons on a non-commutative $T^4$. This 8-dimensional moduli space is a resolution of $(T^4\times T^4)/\mathbb{Z}_2$ obtained by blowing up the singular locus. It can also be described as a $T^4$ fibration over a $\mathbb{Z}_2^4$ quotient of a particular K3. The fiber corresponds to the "center-of-mass" of the NS5-branes and the structure group is $\mathbb{Z}_2^4$ acting as translations of the fiber. The particular point in the moduli space of hyper-Kähler metrics on the K3 was constructed in [8] as a function of the $\alpha$-twists, i.e. the non-commutativity parameters. This K3 turns out to have a $\mathbb{Z}_2^4$ isometry. The K3 can be described by blowing up $T^4/\mathbb{Z}_2$, and the $\mathbb{Z}_2^4$ acts by permuting the exceptional divisors of the blow-up. Note that this $\mathbb{Z}_2^4$ does not act freely.
4. Similarly, the moduli space of the little-string theories of [10] of $k$ NS5-branes at an $A_{q-1}$ singularity, compactified on $T^3$ with $\alpha$-twists (twists in the global $U(1)$), is equal to the moduli space of $k$ $U(q)$ instantons on a non-commutative $T^4$.
5. We studied the phase transitions which occur at singular points of the moduli space.
Revisiting the D1/D5 System or Bubbling in AdS_3

In this article we study the relation between the bubbling construction and Mathur's microscopic solutions for the D1/D5 system. We have found that the regular near-horizon D1/D5 system (after appropriate constraints are imposed) contains all the regular bubbling solutions. Then, we show that the features of this system are rather different from bubbling in $AdS_5\times S^5$, since the perimeter, and not the area, plays the key role. After setting up the main dictionary between the two approaches, we investigate extensions to non-regular solutions, like conical defects and/or naked singular solutions. In particular, among the latter metrics, closed time-like curves are found, together with a chronology protection mechanism enforced by the AdS/CFT duality.

Introduction

String theory is the most promising candidate to accommodate all the fundamental forces of the universe, including gravity. Unfortunately, our understanding of this theory is far from complete. In particular, how quantum gravity and the standard model are realized within string theory is still elusive. However, we do not need to solve string theory in full to learn important lessons: in some cases, even simple toy models produce amazing physical output. This is the case of the AdS/CFT conjecture, where we can study closed string theory (and therefore quantum gravity) as dual to super Yang-Mills theory (SYM). What we really mean is that within this framework we can concentrate on a sector of the theory where a lot of control is achieved but physical relevance is still present. Among the many different possibilities, our studies are motivated by recent works on 1/2 BPS sectors of near-horizon geometries sourced by a stack of N D3-branes [1-4]. In this case, the bubbling construction consists in looking for solutions of type IIB supergravity where only the metric and the self-dual Ramond-Ramond 5-form are excited. Furthermore, it is required that the solutions be regular, with an $SO(4)\times SO(4)$ symmetry and at least 1/2 BPS supersymmetry. Once the above solutions are found, by means of the AdS/CFT duality, the corresponding dual operators in N=4 SYM theory are identified. One of the most interesting outcomes of the above studies is the simplicity of the dual field theory, a matrix quantum mechanics, which gives the possibility to obtain a deeper understanding of fundamental issues of quantum gravity: for example, the role of closed time-like curves (CTC) [5], the study of black hole thermodynamics [6] and the appearance of stretched horizons [7] probing black hole entropy laws, and even the structure of the quantum phase space [8,9]. Although there is a growing amount of literature on this subject, we noticed that the other superconformal cases, M2, M5, D1/D5, are less understood, and therefore they deserve some attention. Following the above reasoning, we study the AdS/CFT conjecture for the D1/D5 case, focusing on the simplest 1/4 BPS sector (from the ten dimensional point of view) with an $SO(2)\times SO(2)$ symmetry; this is known as bubbling in $AdS_3$. Bubbling in $AdS_3$ has already been studied, to the best of our knowledge, in three papers [10-12]. First we have two papers by Liu et al., that consider the problem from first principles, mainly on the supergravity side, leaving open the relation to the CFT theory. Second, the work of Martelli and Morales, that uses an already known family of solutions to obtain the supergravity equations.
Then, they make a conjecture relating the regular bubbling solutions to the known regular solutions of the D1/D5 system constructed by the group of Mathur [13-15]. We believe that there are still many important points that need to be clarified. In particular, before deeper studies can be done, we need to set up the basics of the AdS/CFT dictionary and to get a better understanding of the relation between the bubbling supergravity solutions and the related dual CFT operators, including a study of regularity conditions and of the way other non-regular geometries could appear in this framework. The scope of the present work is precisely to add in this direction. The paper is organized as follows: in section 2 we review the form of the bubbling ansatz from [12], paying particular attention to the field equations and their general solution in terms of two different types of sources, or boundary conditions, on a two dimensional plane. Then, adding the regularity requirement, we are able to reduce the two types of sources to a single one, constrained to live on a one-dimensional curve. We compute the total flux of the solutions, finding that it comes in terms of a line integral on the source and is proportional to its length. In section 3 we review the D1/D5 supergravity solution, together with the basic input coming from the CFT theory. Then, we show that it is straightforward to constrain the D1/D5 solutions to include the solutions discussed in the previous section, realizing the conjectured relation between the two approaches presented in [11]. At this point, the basics of the AdS/CFT dictionary for the bubbling construction are given. In section 4 we show how non-regular solutions can be included in this duality study by relaxing the regularity conditions, in order to enlarge the family of solutions, obtaining conical singularities, Aichelburg-Sexl and naked singularities. Some of these metrics have a well defined dual operator, and some don't. In particular, we have found solutions with CTC that nevertheless seem to have no counterpart in the D1/D5 system, realizing a sort of chronology protection mechanism enforced by string theory. In section 5 we comment on the CFT dual description and the possible role of Liouville theory in the duality, together with a short discussion on future work. It is important to highlight that throughout this article we will always be working within the minimal supergravity framework. Hence, we only excite the metric $g$ and the 3-form field strength $H^{(3)}$, to avoid further complications. Nevertheless, the inclusion of tensor multiplets should not be much more difficult and is left for future work. We also point out that, although this last reduction seems to exclude giant graviton configurations (since they have been identified with supergravity solutions including a non-trivial dilaton field [13]), we still find states that look very much like a giant graviton, at least from the bubbling point of view. More on this can be found in section 5.

Bubbling ansatz for AdS_3

We start this section by reviewing some known facts about bubbling in six dimensional models (see [11,12] for a derivation of the supergravity ansatz). The working hypothesis is to look for the simplest states in the D1/D5 system, identifying their quantum numbers and symmetries, to translate them into isometries on the supergravity side, giving form to the corresponding ansatz.
In short, for the near-horizon D1/D5 system, the spectrum of chiral primaries is classified by the conformal dimensions $(h,\bar h)$ and the R-charges $(j,\bar j)$, related to the symmetries $SO(2,2)\times SO(4)\sim SL(2,R)_L\times SL(2,R)_R\times SU(2)_L\times SU(2)_R$. Here, we are interested in N=(1,0) minimal six dimensional supergravity, which is universal for Kaluza-Klein reductions from ten dimensions, either on $T^4$ or on K3. The simplest family of states in this setting is given by $h = \bar h$ and $j = \bar j$ (see [16-18] for studies on chiral primaries). The corresponding supergravity ansatz is defined by the above symmetries and the nature of the chiral states under study. The general form of the solution is worked out from first principles in [10,12], or deduced from previously known results in [11]. In both cases the metric is given in terms of two functions $h_2$ and $z$ of the coordinates $(y, x_1, x_2)$, where $i = 1,2$ and $*_3$ is the three dimensional Hodge dual of the flat metric in the space directions $(y, x_1, x_2)$. The self-dual 3-form field strength $H^{(3)}$ is written in terms of a 2-form $\tilde F^{(2)}$ and the four dimensional Hodge dual $*_4$ of the metric in the $(t, y, x_1, x_2)$ directions. Finally, $h_2$ and $z$ are constrained by second order differential equations; the whole solution is thus defined by these two independent functions. Expanding these last two equations, we observe that they can be understood as Laplace equations in four and six dimensional auxiliary spaces. Hence, the general solution can be written in terms of the Green functions with sources at the plane $y = 0$. Notice that we have introduced $\rho(x_1, x_2)$ as the source only for $h_2$, since for $z$ the source coincides with its value at $y = 0$, i.e. $z(0, x_1, x_2)\equiv z(x_1, x_2)$. Up to this point, the above solutions solve the part of the bubbling ansatz related to the symmetries. We still need to impose regularity conditions to finish the work. In previous papers, it was conjectured that such a condition would connect these solutions to the known solutions of the D1/D5 system, characterized by the profile $\vec F$ of the corresponding winding string. This is a natural conjecture, since we are just describing the same system from a different perspective and, therefore, the two approaches have to be connected. In what follows we provide such an analysis.

Regularity condition

Following the analysis done in the LLM paper [4], we study how the regularity condition imposes constraints upon our metrics. Due to the form of the solution, the possible conflicting regions are at the $y = 0$ plane. In fact, we can see that, in order to have smooth geometries as we approach the source plane, one or both radii of the circles associated to $\theta_1$ and $\theta_2$ should mix with the $y$ coordinate and, also, $h_2$ or its inverse has to be regular. We first consider the case where $R_1$ remains constant and $R_2$ recombines with $y$. In this case, after expanding all the relevant fields in $y$ (with $f_n = \frac{d^n}{dy^n}f|_{y=0}$, where $f$ is any of the involved functions), we arrive at a first set of conditions. The other possibility, where we have a vanishing $R_1$ and a constant $R_2$, gives a second set. Notice that the above sets of equations tell us that $z$ is a constant on the $y = 0$ plane, with values $(1/2, -1/2)$ only. Also, notice that $\chi$ has a rather different behaviour on the two regions of the plane and that, up to now, $h_2$ is unconstrained. We can use $z$ to define two different regions on the plane: region (I) where $z = 1/2$ and region (II) where $z = -1/2$. Since we are on a two dimensional hypersurface, the frontier has to be a curve $C$.
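For orientation, the LLM-type Green-function representations alluded to above take the following form (a hedged sketch: the overall normalizations are our assumption):
$$h_2(y,\vec x) = \frac{1}{\pi}\int \frac{\rho(\vec x')\,d^2x'}{y^2 + |\vec x - \vec x'|^2}\,, \qquad z(y,\vec x) = \frac{y^2}{\pi}\int \frac{z(\vec x')\,d^2x'}{\big(y^2 + |\vec x - \vec x'|^2\big)^2}\,,$$
the first being the four dimensional and the second the six dimensional flat Green function restricted to sources on the $y = 0$ plane; in particular $z(y,\vec x)\to z(\vec x)$ as $y\to 0$, as required.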
To complete the regularity analysis, we have to probe the vicinity of the boundary between the two regions (I) and (II) or, if you prefer, the neighbourhood of the curve $C$. Basically, we need both radii to smoothly combine with $y$ at this locus (as occurs in the bubbling solutions for the D3-brane case, where this results in the pp-wave solution). Given a general curve $C$, whatever shape it assumes, we can expand in terms of the extrinsic curvature of $C$, and locally we can always find adapted coordinates where $x_1$ is perpendicular to the boundary and $x_2$ is parallel. Then, we change to the pp-wave-like coordinates of (2.2) and take the limit $y\to 0$, asking for a resulting smooth geometry. After some algebra, it is not difficult to see that regularity fixes the near-boundary behaviour of $h_2$. Now, this is exactly the response of $h_2$ to a source with support along the curve $C$ and constant value $\frac{1}{2\pi}$, i.e. a line density $\rho = \frac{1}{2\pi}$ on $C$. In fact, it can be checked that this density produces the well known examples of $AdS_3\times S^3$ and the pp-wave in six dimensions, where circular profiles and an infinite straight line are used, respectively. Therefore, we have arrived at the final form that completely defines the source for regular bubbling solutions: simply insert the boundary behaviour found above, so that $h_2$ is sourced by a line integral over $C$ and $z$ is reduced to a line integral using Stokes' theorem, with $\beta$ related to the boundary behaviour at infinity, $C$ a general curve dividing the plane spanned by $(x_1, x_2)$ into two regions, and $\vec n$ a unit normal vector pointing into the region where $z = -1/2$. Notice that the integrals are reparametrization invariant, as we should expect for a geometrical solution in gravity. Hence, we have seen how the two different initial functions appearing in the ansatz become related via the same boundary condition, which comes in terms of a closed curve. Here the bubbling is realized through changing the shape of the curve. Notice that there is no reason to have a single connected curve and that, in general, we will have disconnected closed curves as sources for $h_2$ and $z$. Next, we compute the flux $f$ of the 3-form $H^{(3)}$ in the above solutions. Basically, we choose the three dimensional hypersurface as follows: define a two dimensional surface $\Sigma_2$ in $(y, x_1, x_2)$ such that at $y = 0$ it ends on a closed non-intersecting curve $\Sigma_1$ that encloses the curve $C$, defining a disc $D_2$ containing $C$. Then, define $\Sigma_3$ as the fibration of $\Sigma_2\times S^1$, where $S^1$ is the circle not contracting to zero size on $\Sigma_1$ (see figure 1). Computation of $f$ gives $f = 2\pi L$, where $L$ is the length of the curve $C$. Let us now summarize what we have found up to now. First, the bubbling ansatz comes in terms of two independent functions $(h_2, z)$, sourced by independent charge distributions on the plane spanned by $(x_1, x_2)$. Secondly, once we require regularity, the sources get identified, in terms of single distributions on a closed curve $C$. The total flux $f$ of the above solutions is proportional to the length of the curve. Based on the above, we define bubbling in $AdS_3\times S^3$ as all the above regular solutions, constrained to have the same flux, i.e. the same length, but with an arbitrary number of disconnected parts of any shape. Notice that the above construction is similar to, but different from, the bubbling in $AdS_5\times S^5$, where fixing the flux was equivalent to fixing the area of the drop. Here, it is the length that matters!
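The proportionality constant can be checked against the $AdS_3\times S^3$ example of the next section, assuming the normalization $f = 2\pi L$ written above: for a circle of radius $a = \sqrt{Q_1Q_5}$,
$$L = 2\pi a = 2\pi\sqrt{Q_1Q_5}\quad\Longrightarrow\quad f = 2\pi L = 4\pi^2\sqrt{Q_1Q_5}\,,$$
which is exactly the total flux quoted for the D1/D5 system below.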
D1/D5 inputs into bubbling

In this section, we study the relation between the above bubbling family of solutions and the well known D1/D5 solutions found by Mathur et al. [13]. The idea is to set up the dictionary to the dual CFT theory. To make the comparison simpler, it is convenient to rewrite the metric (2.1) in terms of new angular variables $(\alpha, \phi)$; in this section we only show the metric, for brevity, and have eliminated $(G, \chi)$ in terms of $(h_2, z)$. Also, $*_4$ is the four dimensional Hodge dual, acting on $\{y, \phi, x_1, x_2\}$. Now we turn our attention to the general near-horizon metrics of the D1/D5 system (see for example [19]), with Hodge dual $*_4$ acting on $I = 1,2,3,4$, where $\vec F$ is a vector field that describes the embedding of the closed curve F along the $x_I$ space directions, and $\dot{\vec F}$ is the derivative of $\vec F$ with respect to the parameter $v$. In this parametrization we have $v\in(0,l)$ and $l = 2\pi R'Q_5$, where $Q_5$ is the D5-brane charge and $R'$ is the radius of the U-dual compact $S^1$ (which here is redefined to be 1), with angular variable $\alpha$. Notice that these solutions are not invariant if we change the parametrization of the curve F, since there is physical content in the above. The D1-brane charge $Q_1$ and the angular momentum $J_{IJ}$ are given by integrals of $|\dot{\vec F}|^2$ and $F_{[I}\dot F_{J]}$ along the profile. The above solutions correspond to semiclassical configurations of the D1/D5 system, where F is related to the profile of the U-dual bound state of $Q_5$ fundamental strings and $Q_1$ units of momentum. In [19] regularity conditions for this family of solutions were studied, finding that $|\dot{\vec F}|$ has to be different from zero and F non-self-intersecting. Therefore, to make contact with the bubbling solutions of (3.1), we restrict them as follows:
• $|\dot{\vec F}|$ is constant,
• F is constrained to live in two dimensions only,
• we identify the coordinates of the two ansätze,
• we identify the functions of the two ansätze,
• we identify the curves C and F in a fixed parametrization.
The first point is necessary to work with minimal supergravity and regular solutions, the second reduces the solutions to the case where $j = \bar j$ and the curve F is embedded in a plane. The third and fourth are just the obvious identifications of coordinates and functions, while the last identifies the curves. It is important not only to identify the curves, but to fix a parametrization once and for all, due to the fact that (3.2) depends explicitly on this parameter.

AdS/CFT dictionary

At this point, we are ready to derive the dictionary between gravity solutions and CFT states. Let us begin with some basic facts and definitions. First of all, since $|\dot{\vec F}|$ is constant, we get that $|\dot{\vec F}| = \sqrt{Q_1/Q_5}$. Then, we compute the total flux to obtain $f = 4\pi^2\sqrt{Q_1Q_5}$. Therefore, we learn that the parametrization of C should be identical to the parametrization of F. Hence, in gravity, the speed at which we circulate along the curve fixes the density of D1-branes, while the length of the total circulation fixes the product of the numbers of D1 and D5 branes. In other words, once we have set the parametrization in the bubbling ansatz, we have fixed the number of D1 and D5 branes in the system. The supergravity chiral primaries were studied in [16-18]. Among them, there is a special family with $j = \bar j = 1, 3/2, \cdots$, that produces fluctuations of the metric on the 3-sphere, associated with the anti-self-dual part of the 3-form $H^{(3)}$. In [19] it was noticed that such states are related to changes in the shape of the profile F. Since this is the only freedom left in our supergravity solutions, we conclude that these are precisely the chiral primaries that we can probe or excite within the bubbling framework.
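For the reader's convenience, the harmonic data of these profile solutions (a hedged sketch in the conventions of Lunin-Mathur; signs and factors may differ from the paper's (3.1)-(3.2)) is
$$f_5 = \frac{Q_5}{l}\int_0^l \frac{dv}{|\vec x - \vec F(v)|^2}\,,\qquad f_1 = \frac{Q_5}{l}\int_0^l \frac{|\dot{\vec F}(v)|^2\,dv}{|\vec x - \vec F(v)|^2}\,,\qquad Q_1 = \frac{Q_5}{l}\int_0^l |\dot{\vec F}|^2\,dv\,,$$
$$J_{IJ} \;\propto\; \frac{Q_5}{l}\int_0^l \big(F_I\dot F_J - F_J\dot F_I\big)\,dv\,.$$
Note that for constant $|\dot{\vec F}|^2 = Q_1/Q_5$ one has $f_1 = (Q_1/Q_5)\,f_5$, so the dilaton $e^{2\Phi} = f_1/f_5$ is constant: this is why the first restriction above is what keeps us inside minimal supergravity.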
Following Mathur et al. [20], we use the twist operators (σₙ⁺⁺, σₙ⁺⁻, σₙ⁻⁻, σₙ⁻⁺)⁴ to describe the chiral primaries under study. Here σₙ permutes cyclically the ends of the D-strings in the system. Since we have a total of N = Q₁Q₅ such strings, n runs from 1 to N. Because we consider the case j = j̄, we are allowed to use only (σₙ⁺⁺, σₙ⁻⁻). Also, σ₁⁻⁻ has the lowest conformal dimension; therefore the vacuum configuration in the twist sector is given by the operator [σ₁⁻⁻]ᴺ, while a general configuration is given by a product of [σ_{nᵢ}⁻⁻]^{mᵢ} and [σ_{nⱼ}⁺⁺]^{mⱼ} subject to the constraint Σ(nᵢmᵢ + nⱼmⱼ) = N.

Let us consider the vacuum configuration. First we recall that the vacuum maximizes the total spin (since we have N aligned spin-1/2 short strings). From the bubbling point of view, the vacuum also maximizes angular momentum or, better, corresponds to the curve of fixed length that maximizes its angular momentum: the circle. In detail, we find the vacuum by considering first a circular profile
$$F = a\cos(wv)\,\hat e_1 + a\sin(wv)\,\hat e_2, \qquad |\dot F| = aw,$$
where (ê₁, ê₂) are the unit vectors of the two-dimensional plane defined by (x₁, x₂). Then, using the two constraints f = 4π²√(Q₁Q₅) and |Ḟ| = √(Q₁/Q₅), we obtain a = √(Q₁Q₅) and w = 2π/l. Inserting the resulting curve into equation (2.3), we obtain the metric with flat part δᵢⱼdxⁱdxʲ = dr² + r²dψ². Using the change of coordinates
$$y = a\,\sigma\sin\theta, \qquad r = a\sqrt{\sigma^2+1}\,\cos\theta,$$
we recover the AdS₃×S³ metric in global coordinates.

The other celebrated example is the pp-wave solution which, as in the case of bubbling for D3-branes, can be obtained by focusing on a local part of the curve C, which looks like an infinite straight line. In this limit, (x₁, x₂) are adapted coordinates, perpendicular and parallel respectively to the curve C; changing coordinates as in (2.2), we obtain the familiar metric
$$ds^2 = -(r_1^2 + r_2^2)\,dt^2 - 2\,dt\,dx_1 + dr_1^2 + dr_2^2 + r_1^2\,d\theta_1^2 + r_2^2\,d\theta_2^2.$$

Once we have set the dictionary for the vacuum, we can start to study excited states in this sector of the theory. We point out that we can very well consider disconnected curves like those of figure 3, remaining within the domain of minimal supergravity. Hence, a little puzzle arises: these configurations seem to be related to giant gravitons, but the latter were associated in [13] with different supergravity solutions with non-trivial dilaton behaviour, and therefore should lie outside minimal supergravity. To solve this puzzle, more studies of the disconnected curves are needed, which are left for future work.

Bubbling and non-regular solutions

In the previous sections, we set the rules for bubbling in the D1/D5 system, giving also the dual CFT chiral operators that can be probed. We could now investigate the physical implications of this sector of the theory. Nevertheless, we postpone such a study and work out, instead, the appearance (and the CFT description, if any) of non-regular solutions. Therefore, in this section we work with a slight generalization of the bubbling ansatz, relaxing the regularity conditions. In this way, we are able to recover other sectors of the CFT (not included in the regular bubbling ansatz, which is nevertheless interesting in its own right), containing for example conical defect metrics.
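As a consistency check of the vacuum identification, the two constraints can be solved symbolically. A short sympy sketch (with R′ = 1, so l = 2πQ₅, following the conventions above) recovers a = √(Q₁Q₅):

```python
import sympy as sp

Q1, Q5, a, w = sp.symbols('Q1 Q5 a w', positive=True)
l = 2 * sp.pi * Q5                                # parameter length, with R' = 1
sol = sp.solve([sp.Eq(a * w, sp.sqrt(Q1 / Q5)),   # |F'(v)| = sqrt(Q1/Q5)
                sp.Eq(w, 2 * sp.pi / l)],         # circle closes after one turn
               [a, w], dict=True)[0]
print(sp.simplify(sol[a]))   # sqrt(Q1)*sqrt(Q5): the vacuum radius a = sqrt(Q1*Q5)
print(sol[w])                # 1/Q5, i.e. w = 2*pi/l
```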
From the D1/D5-brane perspective, among the semiclassical solutions there is some room for generalization: we can relax the constraints on F by allowing either Ḟ = 0 along parts of the curve or self-intersections. We can also consider superpositions of regular profiles. From the perspective of bubbling, on the other hand, we have a lot of room to play with. Basically, as soon as we give up the regularity conditions, we have two independent sources. Here, we retain the notion of lines of charge as sources, and work with asymptotically AdS₃×S³ solutions and with round profiles, so that we gain an extra Killing direction that simplifies the analysis⁵. We parameterize the four possible cases in terms of 2πρ₀ and Δ₋z, where ρ₀ is the source density for h₂ and Δ±z = z_I ± z_II (with z_I = 1/2 from the asymptotic boundary conditions). Then we have

1) Δ₋z = 2πρ₀ ≠ 1,
2) Δ₋z ≠ 1 and 2πρ₀ = 1,
3) Δ₋z = 1 and 2πρ₀ ≠ 1,
4) Δ₋z ≠ 1, 2πρ₀ ≠ 1 and Δ₋z ≠ 2πρ₀, (4.1)

while the regular case is recovered by setting Δ₋z = 2πρ₀ = 1. The restriction to circular profiles of radius a reduces the form of z, h₂ and V_ψ (the additional Killing vector sets V_r = 0), where L² = √(Q₁Q₅). Since we work with a total fixed flux f = 4π²√(Q₁Q₅), the radius of the circular profile is constrained to a = L²/2πρ₀. We also change to AdS-adapted coordinates
$$y = L^2\,\sigma\sin\theta, \qquad r = L^2\sqrt{\sigma^2+\alpha}\,\cos\theta,$$
and since z_I = 1/2, only one of the two Δ±z is independent, rendering the whole supergravity solution a function of Δ₋z and α only. We would like to identify which of the above supergravity solutions are physical, that is to say, which ones have a dual CFT configuration. At first sight, it seems that we are outside the D1/D5 system: after all, there should be only one source in the physical situation, whereas here we are in general dealing with two independent sources. Nevertheless, we know that, within the bubbling ansatz, there are non-regular solutions that correspond to condensates of otherwise regular solutions with CFT dual states⁶. In the D1/D5 system we could change the value of Δ₋z and try to interpret this as the result of an average over delocalized sources. The same procedure could also be implemented on the boundary, changing the local number of D1-branes placed on the curve, which varies the value of 2πρ₀. Hence, we cannot rule out the family of solutions (4.2) without further study. Nevertheless, there are certainly ranges of the parameters Δ₋z and 2πρ₀ that do not have any associated dual CFT configuration. Hopefully, these non-physical solutions are associated with CTCs or other types of pathological behaviour. In fact, one general feature of the solutions (4.2) is the presence of CTCs. Take for example the g_ψψ component of the metric,
$$g_{\psi\psi} = L^2\cos^2\theta\,\frac{\sigma^2+\alpha\sin^2\theta}{\sigma^2+\alpha}\left(1-\frac{\alpha\,(\Delta_- z)^2\cos^2\theta}{\sigma^2+\alpha\sin^2\theta}\right),$$
from which we can see that there are regions of space-time with CTCs; these regions are defined by the vanishing locus of the bracket, and the metric has a naked singularity at θ = σ = 0 (see figure 4). It would be very interesting to translate this range of supergravity parameters into CFT parameters, to understand whether such pathological solutions are or are not in the CFT. Unfortunately, these supergravity solutions would be related to averages over microscopic configurations, and it is not assured that we will discover how the average was done, and over which microstates.
Nevertheless, we have been able to find the dual CFT configuration for the first two cases listed in (4.1), corresponding respectively to conical singular metrics and to the Aichelburg–Sexl-type metric found in [14]. In both cases, we found CFT duals only for the non-pathological regimes, enforcing a protection mechanism that cures gravity. To make contact with the D1/D5 system, we have to set the correct parametrization of the curve C. We found that the form of the curve and the flux constraint uniquely fix the parametrization to be
$$F = a\cos(w_Q v)\,\hat e_1 + a\sin(w_Q v)\,\hat e_2, \qquad v\in(0,l).^7$$
At this point, we have to impose the physical constraint that the curve F actually closes. This is only achieved if Q = (1 − m)/m with m ∈ ℕ. In other words, we can find a corresponding configuration on the CFT side as long as Q ∈ (−1, 0). These allowed cases produce only conical defect metrics, with deficit angle δθ₁ = 2πQ. Of course this case is not regular, since F has self-intersections. The relation between conical defects and CFT operators has been studied before for the D1/D5 system [14]. The associated dual operator produces a deficit angle δθ₁ = 2π(1 − 1/m). Notice that the system is excited to a higher-energy state which, in the U-dual P/F1 picture, corresponds to the harmonic w_m = 2πm/l. So, we have seen how to recover conical defect metrics within the bubbling picture, in a way completely independent of how such metrics were originally found in the D1/D5 system.

⁷ Other values of Q are irrelevant, due to the periodicity of ψ.

We have also gained more physical insight into a related family of supergravity solutions: the conical excesses. From the above considerations, these solutions are not in the spectrum of the dual CFT and should therefore be ruled out as unphysical⁸. This is typical of string theory, which teaches us that not all supergravity solutions are to be labelled as physical (keeping in mind that gravity is just an effective theory). In this particular example the result is not so unexpected, since a conical excess can be thought of as the response of space-time to negative-energy point particles, which even at the classical level are somehow related to pathological behaviour. Nevertheless, this example shows the power of the AdS/CFT duality, since it leaves no room for doubt about the physical meaning of these metrics.

For the second case we set the parametrization analogously, with Q now running from −1 to ∞. The resulting metric corresponds to the Aichelburg–Sexl metric found in [14]. This can easily be seen by comparing the form of (h₂, V_ψ) with the corresponding metric functions given in [13] (see equations (3.18) and (3.20) there, with q = −Q). These solutions were found in the D1/D5 framework by smearing, over a single turn of the circular profile F, a large number of bits of the curve that remain at a fixed point in space-time while we move along the curve parameter v; this changes the density of D1-branes in the resulting averaged curve (see [13] for the complete construction of this solution). The above metrics are conjectured to be dual to operators in which the nᵢ's correspond to bits of the U-dual F-string profile that remain constant at a fixed point in space-time as we move along the parameter v of the curve. The average over position corresponds to a distribution with a large dispersion in nᵢ.
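The closure condition and the resulting deficit angles are easy to tabulate; a small sketch follows (m = 1 gives Q = 0, the defect-free vacuum):

```python
import numpy as np

for m in range(1, 7):                  # twist sector label m
    Q = (1 - m) / m                    # closure requires Q = (1 - m)/m
    deficit = 2 * np.pi * (1 - 1 / m)  # deficit angle of the conical metric
    print(f"m = {m}: Q = {Q:+.3f}, deficit = {deficit:.4f} rad")
```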
Large numbers of such fixed bits then translate into a reduction of the radius a of the circle described by the profile F, since fewer bits are left to close the curve. In this sense, radii larger than the radius of the vacuum configuration cannot be constructed in the CFT. We therefore conclude that the solutions with Q ∈ (−1, 0) (or, if you prefer, α > 1) are not physical, since there is no dual CFT configuration. At this point, we stress that precisely when α > 1 the supergravity solutions present CTCs! This is therefore another example of chronology protection implemented by string theory⁹.

Figure 6: (a) shows a curve that circulates once around the origin, with zero velocity along the last portion of the curve; (b) shows a curve with many parts of zero velocity. The Aichelburg–Sexl metric is obtained in the limiting situation when the straight bits are smeared along the curve.

Summary and discussion

In this article we have studied the relation between bubbling in AdS₃ and the more conventional construction of microstates in the D1/D5 system. We have learned a few lessons. The first tells us how both constructions are related once we concentrate on regular solutions only: in this case the D1/D5 family of microstates contains the bubbling solutions as a subset. The connection is possible due to the collapse (in the bubbling framework) of the different sources into a single line of charge. This subset is dual to a particular tower of chiral primary operators in the CFT with conformal weights (1, 3/2, …). We have thereby studied all the geometries sourced by connected and/or disconnected closed curves of fixed total length. The second lesson tells us that this is not the full story and that non-regular solutions also have a role to play in this framework. This time, the splitting of the sources (characteristic of the bubbling picture) is understood, from the D1/D5 family of microscopic states, as the result of an average over semiclassical configurations that effectively smears the string source. The third lesson is that there are solutions within the bubbling ansatz that have no counterpart in the D1/D5 family of microstates and therefore have no CFT dual. These solutions are artifacts of the low-energy supergravity theory and should be discarded as non-physical. In particular, all the non-physical solutions we have found exhibit pathologies such as CTCs. Hence string theory seems to be acting as a chronology protection agency.

It would be very interesting to connect the AdS/CFT picture with the Liouville theory living at the boundary of AdS₃. In fact, it was shown in [23] that the worldvolume theory of a single D1-brane in AdS₃ becomes precisely a Liouville theory as one approaches the boundary. The D1-brane needs to rotate in the S³ to become a stable BPS state. Such states are called giant gravitons and are of importance in the bubbling framework (at least in the D3-brane case). Now, it is also known that in Liouville theory there are normalizable and non-normalizable states (see [24] for a review). The normalizable states have a continuous spectrum bounded from below, while the non-normalizable states do not present such a gap. Therefore, the theory is organized as a series of sectors labelled by the non-normalizable states, each with its tower of normalizable states. In [25], non-normalizable states were conjectured to be dual to conical defect metrics.
At this point we recall that, from the bubbling point of view, the CFT is somehow naturally arranged into regular and non-regular sectors, in such a way that the non-regular sectors act as different vacua, while the regular sector can be accommodated as a deformation of each of these vacua. In terms of bubbling pictures we have, for example, a conical defect vacuum, defined by a circular self-intersecting curve, and a whole tower of operators produced by small changes of this profile, deforming the circle. Each conical defect metric is defined by a winding number that separates one sector from another. We believe these similarities signal a common structure, and that a deeper study of giant gravitons in the D1/D5 system deserves attention, since it may provide a bridge between the dual CFT of the D1/D5 system and the Liouville theory at the boundary of AdS₃.
Climate Response and Sensitivity: Timescales and Late Tipping Points

Climate response metrics are used to quantify the Earth's climate response to anthropogenic changes of atmospheric CO₂. Equilibrium Climate Sensitivity (ECS) is one such metric that measures the equilibrium response to CO₂ doubling. However, both in their estimation and their usage, such metrics make assumptions on the linearity of climate response, although it is known that, especially for larger forcing levels, the response can be nonlinear. Such nonlinear responses may become visible immediately in response to a larger perturbation, or may only become apparent after a long transient. In this paper, we illustrate some potential problems and caveats when estimating ECS from transient simulations. We highlight ways that very slow timescales may lead to poor estimation of ECS even if there is a seemingly good fit to linear response over moderate timescales. Moreover, such slow timescales might lead to late abrupt responses ("late tipping points") associated with a system's nonlinearities. We illustrate these ideas using simulations of a global energy balance model with dynamic albedo. We also discuss the implications for estimating ECS for global climate models, highlighting that it is likely to remain difficult to make definitive statements about the simulation times needed to reach an equilibrium.

Introduction

The central question as to how the climate is likely to change as a function of anthropogenic CO₂ emissions can be posed as 'How does an observation of the climate system respond to changes in its radiative forcing induced by changes in atmospheric CO₂?'. This question has been studied in various ways for at least a century [1,2], although efforts to answer it have become more intense and in-depth over the last decades. Among early efforts was the pioneering work of Charney et al. in 1979, who made the first estimates of expected equilibrium warming after a doubling of atmospheric CO₂ (while keeping vegetation and land ice fixed at present-day values) using a numerical Global Climate Model (GCM) [3]. This metric has later been named the Equilibrium Climate Sensitivity (ECS) and is still widely used. Since then, researchers have developed a number of different metrics that measure climate response to different scenarios of anthropogenic change in CO₂, and have incorporated information from sources beyond computer models, including historical observations and data from palaeoclimate records. Recently, these efforts were summarised in an assessment of the World Climate Research Programme [4] that synthesised different quantifications of climate response using these different lines of evidence, leading to the headline conclusion that the Earth's ECS is likely between 2.6 K and 3.9 K.

One of the hurdles for this assessment was the variety of definitions of (the quantification of) climate sensitivity, and ECS especially, in the literature. The root of this problem can be attributed to the lack of data on equilibrium climate states or of detailed long-term transient data. This can be due to low time resolution in proxy data, lack of observational data, or insufficient computing power to equilibrate modern GCMs. Consequently, equilibrium properties need to be estimated from incomplete data sets, leading to many slightly different ways to quantify climate sensitivity. Common to them all, however, is the need to extrapolate long-term dynamics from data on shorter time scales.
In this paper, we describe and discuss this extrapolation process in detail, focusing on estimates of ECS from (idealised) experiments in climate models for the sake of mathematical simplicity. Of particular interest here is the exploration of linear and nonlinear dynamics that can emerge in multiscale dynamical systems and cause problems with extrapolation. The common way to obtain estimates of ECS in climate models involves the use of extrapolation and regression methods on non-equilibrated transient simulations, typically 150-year-long runs. Values for ECS obtained in this way are now often referred to as the effective climate sensitivity [5], signalling that they might not encompass all long-term climate change. Although there are many different ways to perform such extrapolation, what they have in common is that they are usually based on linear concepts and frameworks. A recent review [6] of climate sensitivity highlighted that it is a key challenge to study the limits of such linear frameworks. Here, we investigate these limits and in the process highlight the trade-offs that need to be made when designing experiments to quantify ECS: in order to measure a clear signal of warming relative to the noise of natural variations, large perturbations are desirable, but precisely for larger perturbations nonlinear behaviour becomes important and linear frameworks break down.

One of the most important tools to study past and future climate change are the GCMs used in Coupled Model Intercomparison Projects (CMIP, e.g. [7]), because they provide a globally complete and detailed representation of the climate state while (approximately) satisfying the physical laws. However, specifically for these large models there is no way to determine whether a model has really arrived in the linear regime near an equilibrium, or even whether such an equilibrium exists. In this paper we explore some simple conceptual examples of the potential nonlinear dynamics of the climate. We also make a number of observations that we hope illuminate some of the limitations of linear frameworks. (i) We highlight cases where there may be strong dependence on the climate background state and the forcing levels. (ii) We highlight examples where there may be a good fit to transient data but poor extrapolation, preventing an accurate estimation of the ECS. (iii) We show that nonlinear systems can have slow tipping points: when these are crossed, the tipping dynamics play out on slow time scales, and it can take arbitrarily long before nonlinear and/or asymptotic behaviour is observed. (iv) We demonstrate how, in the presence of multiple timescales with nonlinear feedbacks, a late tipping can occur in which fast processes suddenly dominate after arbitrarily long slow transient behaviour. This highlights the potential for slow and/or late tipping points to be particular obstructions to estimating ECS.

The rest of this paper is organised as follows: in the remainder of this section we discuss in general the response of a nonlinear system to forcing. In section 2, we consider the equilibrium response and equilibrium climate sensitivity of the climate system in terms of limiting behaviour. Moreover, we point out the challenges that arise when estimating these from short time series, highlighting the trade-offs that emerge in terms of perturbation size and required simulation time.
In section 3, we examine the nonlinear effects that may appear as a result of climate dynamics on multiple timescales, including slow tipping, which may in turn lead to late but rapid tipping. We illustrate these effects using multiscale global energy balance models with dynamic albedo and/or chaotic variability, and an example from a LongRunMIP abrupt8xCO2 run [8]. Finally, we briefly discuss these results, and the influence of time-varying forcing on estimation of climate response and sensitivity, in section 4.

Response of nonlinear models to forcing

Consider a notional state of the climate system y(t) for t > t₀ that evolves in response to various (unknown) forcings, with a partially known initial state y₀ and an input of atmospheric CO₂ generating a radiative forcing ΔF(t) that is specified for t > t₀. We write this climate state at time t as
$$y(t) = Y_t(y_0, t_0, \Delta F),$$
where Y_t is an evolution operator that evolves the initial state y₀ (at time t₀) forward up to time t according to a climate model Y with (possibly time-dependent) radiative forcing ΔF. Given a scalar observable O of the full climate state y(t), the response clearly depends on the choice of observable O, the choice of model Y, the forcing ΔF experienced by the system, the initial climate state y₀ at time t₀, and the time t > t₀ of interest. At the level of a single initial state y₀ starting at t₀, of which we have perfect knowledge and which is subject to deterministic forcing ΔF, the response in the observable O at time t > t₀ is the difference in the observable's value at times t₀ and t, i.e.
$$R_{O,Y}(t; t_0, y_0; \Delta F) = O(Y_t(y_0, t_0, \Delta F)) - O(y_0).$$
This corresponds to a two-point response in the terminology of [9]. As this is often the easiest response type to think about mathematically (and extensions to other types are possible, albeit more technical), it is this response type we refer to throughout this paper. However, often we are interested not in specific trajectories but rather in the distribution of possible responses for a probability distribution µ₀ of initial states and forcing ΔF. In this case the response is a distributional response: a random variable with a distribution determined by the "pushforward" of the initial probability distribution µ₀ by the dynamics. Furthermore, there are different interpretations of (3), depending on the choice of probability distribution. These include:

• A physical measure on a climate attractor [10,9]. This can be an observable measured in a long palaeoclimate time series, or an observable in a model, where the attractor is (partly) known from the underlying model equations.
• An ensemble of initial conditions that are thought to sample subgrid processes in a model (or observational data).
• An empirical measure for a finite segment of trajectory, i.e. the set of states {Y_t(y₀, t₀, 0) : t ∈ [t₀, t₁]} over some finite interval with t₀ < t₁, with equal weight given to any time instant. Such a measure can be approximated from a finite-length time series of a palaeoclimate record.

Note that if µ₀ is a physical measure then, for typical initial conditions, the empirical measures converge to one and the same distribution; for a more precise definition of a physical measure, see for example [11,12]. If there are multiple attractors then there can be several physical measures, and typical initial conditions converge to one of these, depending on which basin of attraction they are in.
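To make the two notions concrete, here is a minimal sketch contrasting the two-point response (2) with a distributional response, using a hypothetical scalar toy model in place of the abstract evolution operator Y_t; the model and all parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def Y(y0, t, dF, dt=0.01):
    """Toy stand-in for the evolution operator: dy/dt = y - y**3 + dF (Euler)."""
    y = y0
    for _ in range(int(t / dt)):
        y = y + dt * (y - y**3 + dF)
    return y

O = lambda y: y                                # scalar observable of the state

y0, dF, t = 1.0, 0.3, 50.0
R_two_point = O(Y(y0, t, dF)) - O(y0)          # response (2): one trajectory

ens0 = y0 + 0.1 * rng.standard_normal(500)     # distribution mu_0 of initial states
ens1 = np.array([Y(y, t, dF) for y in ens0])   # pushforward by the dynamics
R_dist = O(ens1) - O(ens0)                     # distributional response (3)
print(R_two_point, R_dist.mean(), R_dist.std())
```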
Equilibrium Response and ECS as limiting behaviour

While the response on any time scale can be relevant, often the asymptotic, or equilibrium, response as t → ∞ is considered first. This response is typically easy to analyse and understand in simple models. Taking the limit t → ∞ of (2), the equilibrium response is
$$\lim_{t\to\infty} R_{O,Y}(t; t_0, y_0; \Delta F).$$
Of course, this begs the question of whether the limit exists. In particular, one cannot expect such a limit to exist for arbitrary forcing ΔF. For instance, if the forcing specifies uninhibited and constant emission of greenhouse gases, the climate system will not evolve to any equilibrium. Hence it makes sense to limit ourselves to forcing scenarios with constant forcing levels as t → ∞ (i.e. ΔF(t) → ΔF* as t → ∞). In practical model studies of equilibrium climate sensitivity, the forcing is often simply taken to be constant throughout the whole simulation. Of particular interest is the equilibrium response to an instantaneous and abrupt doubling of atmospheric CO₂, which we indicate by the forcing ΔF_abrupt2xCO2. The equilibrium climate sensitivity (ECS) is then defined as the equilibrium response of global mean surface temperature (GMST) to such forcing, i.e.
$$\mathrm{ECS}(y_0) = \lim_{t\to\infty} R_{\mathrm{GMST},Y}(t; t_0, y_0; \Delta F_{\mathrm{abrupt2xCO2}}).$$
Even for such idealised forcing, this limit may not be well defined. In any but the simplest models, the asymptotic climate state will have stationary internal variability, for which the limit of the two-point response is not well defined without first averaging for long enough that any internal variability is averaged out. In such cases, a distributional response may have a well-defined limit, although it can happen that even these do not converge when there is non-ergodic behaviour [13].

It is difficult to say anything definitive about the convergence of climate response in state-of-the-art GCMs. These models are numerical representations of the underlying physical equations, which have been developed to include many physical processes and ever-improving parametrizations of sub-grid-scale processes; they are very high-dimensional and complex. We do not have access to the attractors of these models and so cannot exclude the possibility of poor or no convergence. These models are roughly calibrated only by assessing how well they reproduce the present-day climate, including the historical period. In practice, however, reaching the true equilibrium may also be less relevant for such a model: the physical state of the climate system after a few centuries or even millennia could be difficult to predict anyway because of incomplete knowledge of the initial state y₀, model details, and forcing. For these reasons, a pragmatic Effective Climate Sensitivity [5,14] is often used, in which the response over a few centuries or millennia is taken, ignoring dynamics on longer time scales. However, we focus here on cases where the limit in (5) is well defined.

Background State, Forcing Scenario and ECS

In (5), it is clear that the equilibrium climate sensitivity depends on the initial condition y₀, or on the background state, where the latter refers to the initial climate attractor. However, ECS is often given without explicitly stating initial conditions. This can lead to ambiguity about what is meant by ECS when comparing simulations of current and palaeoclimates: because of the possibility of multistability, the climate system may support multiple climate states even for the same CO₂ level.
In physical terms, the dependence on the background state originates from feedback processes that change as the forcing is applied [10], necessitating proper communication of the background state considered when computing its ECS. Further, in the definition (5) of ECS, a doubling of atmospheric CO₂ is given as the forcing scenario. In practice, however, ECS is often used as a measure of temperature increase per CO₂ doubling. That is, by assuming linearity of the climate response to forcing levels, ECS is employed to estimate warming for other CO₂ forcing levels. Specifically, for an abrupt 2^γ xCO2 forcing, an assumption of linear response would mean that a warming of γ times the ECS is expected:
$$\lim_{t\to\infty} R_{\mathrm{GMST},Y}(t; t_0, y_0; \Delta F_{\mathrm{abrupt}2^\gamma\mathrm{xCO2}}) = \gamma\,\mathrm{ECS}(y_0). \tag{6}$$
Certainly, this assumption will fail when γ is large enough that a tipping point is crossed, but even when that does not happen, such a linear assumption only holds when the forcing is small enough that nonlinear terms can be ignored. It has been shown that this linearity assumption does in fact break down in GCMs. For instance, palaeoclimate simulations with a wide range of CO₂ concentrations suggest such linearity can be broken [15], and multi-millennial experiments in the model intercomparison project LongRunMIP [8] also show deviations from linearity: it was found that abrupt4xCO2 experiments lead to more than twice the warming of an abrupt2xCO2 experiment in the same GCM, while abrupt8xCO2 experiments led to less than twice the warming of an abrupt4xCO2 experiment. Hence, the usage of ECS as a linear predictor of warming based on CO₂ levels can easily lead to over- or underestimation of warming.

Challenges to estimating ECS from timeseries

It is computationally expensive to run state-of-the-art GCMs and, in principle, millennial-length simulations may be needed to get close to equilibrium (see e.g. the LongRunMIP [16]). Because these models contain variability on many time scales and spatial feedback patterns, there is no a priori method to determine when, or indeed whether, a nonlinear model has reached equilibrium. This means that the equilibrium response of a climate model cannot be found directly from the time evolution of the model; instead, one needs to derive and extrapolate the equilibrium properties of the model from possibly relatively short transient data. In general, estimation of ECS for a model (such as a GCM) involves four steps, and many variants exist; one overview table lists 11 different methodologies. The most common standard for estimating ECS uses a technique by Gregory et al. [17]. Typically, a single abrupt CO₂-forcing experiment is run (starting from pre-industrial forcing levels; the standard is to use an abrupt 4xCO2 forcing) for some years (150 years is the benchmark for CMIP6 models). The transient data on the changes in the yearly and globally averaged observables near-surface temperature ΔT and top-of-atmosphere radiative imbalance ΔN are fitted to the linear model
$$\Delta N = \lambda\,\Delta T + f.$$
Equilibrium warming ΔT* is then estimated by setting ΔN = 0 in this linear model (since, in equilibrium, there should be radiative balance), yielding ΔT*_est = −λ⁻¹f.
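A minimal sketch of this regression on synthetic annual-mean data (the series stands in for GCM output, and all parameter values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic annual means: a single-timescale linear response with
# feedback lam_true < 0 (W m-2 K-1) and forcing f_true (W m-2)
lam_true, f_true, tau = -1.1, 7.4, 8.0
t = np.arange(150)
dT = -(f_true / lam_true) * (1 - np.exp(-t / tau)) + 0.1 * rng.standard_normal(t.size)
dN = f_true + lam_true * dT + 0.3 * rng.standard_normal(t.size)

# Gregory fit: regress dN on dT, then extrapolate to radiative balance dN = 0
lam, f = np.polyfit(dT, dN, 1)                  # slope, intercept
dT_eq = -f / lam                                # estimated equilibrium warming
print(f"lambda = {lam:.2f}, f = {f:.2f}, dT*_est = {dT_eq:.2f} K")
```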
Despite its predominant use in climate sensitivity analyses of GCMs, it is clear that GCMs are not well approximated by this simple linear model over all time scales; because climate feedback processes operate on quite different timescales, ΔN and ΔT have a nonlinear relationship with non-zero curvature over the course of a long simulation, and the linear relationship only holds approximately over certain time intervals [18,8,19,20]. Better fits to the response over all time scales can be found by considering a combination of several linearly decaying modes, i.e. by viewing the climate system as a combination of linear processes with quite different time scales [21,18,22,23]. Other protocols use results from the literature on linear response theory directly [24,25,26,27,28,29,30,31,32]. That is, in relative generality, the response (of an observable O) in the linear regime of a (nonlinear) system to a forcing can be characterised via a (causal, linear, observational) Green's function G^[O](t). Specifically, the yearly, globally (and ensemble) averaged near-surface temperature increase ΔT at time t under a forcing scenario ΔF is given by the convolution
$$\Delta T(t) = \int_{t_0}^{t} G^{[T]}(t-s)\,\Delta F(s)\,\mathrm{d}s.$$
Using this relationship, transient data can be used to estimate the Green's function G^[T], from which the equilibrium response can be extrapolated; this can be done by fitting some prescribed functional form (typically a sum of decaying exponentials) or through a discrete Fourier transform algorithm.

For all the fitting and extrapolation protocols, the optimal choices are not always obvious, as certain trade-offs need to be made (see the list below and Figure 1):

Figure 1: Schematic diagrams illustrating trade-offs between perturbation amplitude and integration time when computing ECS on perturbing a linearly stable state of a nonlinear climate model. The light blue regions illustrate the trade-off needed to give a good signal-to-noise ratio of the estimate of ECS. The pink region illustrates the trade-off needed to ensure the system has entered the linear regime. The green "Goldilocks zone" shows points where accurate prediction of ECS is possible. (a) illustrates a case where the state is globally stable, while (b) shows a case where a large enough perturbation (above the red bar) pushes the system out of the linear regime; perturbations above this may in principle give super-long transients and/or convergence to another stable state. Finally, (c) shows a case where accurate estimation of ECS is not possible.

1. The simulation time needs to be as long as possible to ensure that (a) we are in the linear response regime of the final equilibrium state and (b) fluctuations caused by natural variability can be averaged out. However, long simulations with GCMs are computationally expensive, and even these will not be able to detect slow timescales beyond the length of the simulation.
2. A large ensemble and/or a long fitting period needs to be chosen to reduce noise caused by internal variability. However, each additional ensemble member increases the simulation effort, and the fitting period needs to start as late as possible to maximise the chance of being in a linear regime.
3. The perturbation needs to be as large as possible to maximise the signal-to-noise ratio for the fitting procedure. However, large perturbations may result in nonlinear effects, including tipping into different climate states.

Figure 1 illustrates two important trade-offs between perturbation size and integration time.
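The convolution above (reconstructed here in the standard linear-response form) is straightforward to evaluate numerically; in the following sketch the two-mode Green's function and all parameter values are assumptions for illustration.

```python
import numpy as np

dt = 0.1                                    # years
t = np.arange(0.0, 300.0, dt)

# assumed two-mode Green's function: a fast and a slow decaying exponential
G = 0.20 * np.exp(-t / 4.0) + 0.02 * np.exp(-t / 100.0)   # K per (W m-2 yr)
dF = np.full_like(t, 3.7)                   # step forcing ~ abrupt2xCO2, W m-2

# causal discrete convolution: dT[k] = sum_{j<=k} G[j] * dF[k-j] * dt
dT = np.convolve(G, dF)[: t.size] * dt

dT_eq = 3.7 * (0.20 * 4.0 + 0.02 * 100.0)   # analytic t -> infinity limit
print(dT[-1], dT_eq)  # after 300 yr the slow mode has still not fully converged
```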
In particular, the figure highlights the need to find a "Goldilocks zone" where the perturbation is neither too small nor too big. Examples of these trade-offs in a nonlinear setting, using a conceptual energy balance model, are discussed in section 3.

Slow linear responses and ECS

We start by illustrating some challenges that already arise in the linear response regime of a model. In such a setting, extrapolation can be difficult if the time scale of the slowest response exceeds the length of the available timeseries. To illustrate this, consider the evolution of a linear observable O of a finite, M-dimensional linear system. In the absence of repeated eigenvalues, the Green's function will be a sum of exponentials (with exponents given by the eigenvalues) of the form
$$G^{[O]}(t) = \sum_{j=1}^{M} \beta_j^{[O]}\, e^{\lambda_j t},$$
where the λ_j ∈ ℂ are eigenvalues of the linear system and β_j^[O] ∈ ℝ depends on the corresponding eigenvector and the observable; often the λ_j are restricted to the negative reals, but more generally they may be complex, giving oscillatory decay (see e.g. [33]). Estimating the Green's function for high- (or infinite-) dimensional systems can be extremely challenging, not least because linear operators in infinite dimensions may have a continuous (operator) spectrum. Nonetheless, one can assume a functional form for G^[O](t) and fit its parameters from transient data. This approach has been applied successfully to many response problems in the climate system, see e.g. [32,33,29,34].

Let us now assume that (7) holds for the Green's function, and restrict to λ ∈ ℝ. Even then, the number of modes M needs to be determined, and that comes with its own problems, as shown in Figure 2, which compares the responses of observables ΔO₁, ΔO₂ and ΔO₃, sums of exponential functions as defined in (8). These three examples differ only by the absence or presence of an eigenvalue λ with |λ| small: only the first two are bounded, and these have different asymptotic values; the third describes a 'run-away' response.

Figure 2: Examples of the response of observables ΔO₁ (black), ΔO₂ (blue) and ΔO₃ (red), sums of exponential functions as defined in (8). Note that the t-axis is given in log-scale to highlight how long these different responses stay almost indistinguishable; the red case corresponds to a linearly unstable setting, i.e. a 'run-away' scenario.

Nonetheless, Figure 2 shows that all three observables are indistinguishable at first; only over longer time scales does the effect of the small eigenvalue become apparent. It is practically impossible to determine which of these functional forms is correct from short-time transient data alone. In the climate system, the dynamics play out over many different time scales [35,36], so this should play an important role in understanding GCM experiments. In particular, it is important to try to determine the time scales on which the constructed estimations and extrapolations can be trusted, as there seems to be no way to completely rule out slow warming, or even slow tipping, on all slow time scales. GCMs very often do not include the very slow climate components, such as land ice sheets, dynamically, but still need very long spin-up times and are almost never integrated to full equilibrium. For example, palaeoclimate experiments with GCMs typically show considerable drifts in the globally averaged ocean temperature after several millennia of simulation, while already being in good radiative balance (e.g. [37]).
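The exact coefficients in (8) are not reproduced above, so the following sketch uses illustrative sums of exponentials with the qualitative properties described in the caption of Figure 2: ΔO₁ and ΔO₂ differ by one slow stable mode, while ΔO₃ contains a weakly unstable mode.

```python
import numpy as np

t = np.logspace(-1, 4, 6)                       # a few log-spaced times (years)

dO1 = 1.0 - np.exp(-t)                          # single fast mode
dO2 = dO1 + 0.5 * (1.0 - np.exp(-t / 1000.0))   # extra slow, stable mode
dO3 = dO1 + 0.05 * (np.exp(t / 5000.0) - 1.0)   # small unstable 'run-away' mode

for ti, a, b, c in zip(t, dO1, dO2, dO3):
    # nearly indistinguishable up to t ~ 100, very different asymptotics
    print(f"t = {ti:9.1f}: {a:.3f}  {b:.3f}  {c:.3f}")
```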
Nonlinear response and ECS for climate models

In the previous section, we discussed potential problems associated with timescales that can affect estimation of ECS even for linear systems. In this section, we turn our attention to issues related to nonlinear response. A particular challenge here are tipping points, where fast dynamics can suddenly take over even after long, very slowly evolving transient periods; this is impossible in a purely linear system. To make our considerations in this section more explicit, we consider a global energy balance model (GEBM) that has dynamics on two timescales and admits tipping phenomena on either a slow or a fast timescale. We introduce the model in subsection (a). We then consider tipping-related effects in this model due to timescale separation of physical processes in subsection (b), and due to internal variability in subsection (c).

A fast-slow energy balance model

We consider a GEBM of Budyko–Sellers–Ghil type [38,39,40], which describes the evolution of GMST T according to
$$C\,\frac{\mathrm{d}T}{\mathrm{d}t} = Q_0(1-\alpha) - \varepsilon\sigma T^4 + \mu + \mu_{NV}(t), \tag{9}$$
where C is the specific heat capacity, Q₀ is the incoming (predominantly shortwave) solar radiation, α is the planetary albedo (so that Q₀α is the reflected solar radiation) and εσT⁴ is the outgoing (predominantly longwave) Planck radiation (with planetary emissivity ε and Boltzmann constant σ). Further, µ represents the mean radiative forcing due to increases in CO₂ and µ_NV(t) models variability in the radiative forcing, assumed to have zero mean. Following [41], we take the forcing (10) to depend logarithmically on ρ(t), the concentration of atmospheric CO₂ at time t, with µ₀ a reference radiative forcing level for a CO₂ concentration of ρ(0). When the albedo and/or emissivity are taken to be temperature-dependent, i.e. α = α(T) and/or ε = ε(T), the model can have multiple stable climate states, each with a different climate sensitivity. In this paper we assume relaxation of the albedo towards an equilibrium value α₀(T) on a timescale τ_α ≥ 0,
$$\tau_\alpha\,\frac{\mathrm{d}\alpha}{\mathrm{d}t} = \alpha_0(T) - \alpha. \tag{11}$$
We assume a temperature-dependent equilibrium albedo α₀(T) given by (12) and an instantaneously settling emissivity ε(T) given by (13). Both of these functional forms are of sigmoid type, changing from one constant value to another as T moves through a range of temperatures near T_α and T_ε respectively [9]; α₀(T) models the (relatively slow) lowering of the albedo in the presence of land ice sheets, while ε(T) models a (relatively fast) transition from a clear planet to a cloudy planet with large quantities of low cloud. Each of them on its own can produce bistability between a colder and a warmer climate state, but we include both to allow for independent slow and fast tipping points. In fact, we believe that both the (slowly settling) temperature-dependent albedo and the emissivity are required for some of the phenomena illustrated later, late tipping in particular, which do not present themselves in models with constant albedo or emissivity. We include natural variability of the energy input at the surface, represented by chaotic forcing through a Lorenz-63 model: the natural variability µ_NV is given by (14), where x adheres to the Lorenz-63 model (15), which conceptually represents the chaotic dynamics of weather processes; ν_NV is a measure of the strength of the variability and τ_NV is the characteristic timescale of the chaotic variability. Parameter values used in the simulations in this paper are given in Table 1, except where stated otherwise. There are two special parameter settings that we distinguish.
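A minimal sketch integrating the deterministic part of the model (ν_NV = 0, i.e. no chaotic variability) follows. The tanh sigmoid forms standing in for (12)–(13) and the value of τ_α are assumptions, since the exact expressions and the corresponding table row are not reproduced above; quantitative behaviour will therefore differ from the paper's figures.

```python
import numpy as np

YEAR = 3.15e7                        # seconds per year (time is measured in years)

# column A of Table 1; tau_alpha is an assumed illustrative value
C, Q0, SB = 5e8, 341.3, 5.67e-8
a1, a2, Ta, Ka = 0.7, 0.289, 274.5, 0.1
e1, e2, Te, Ke = 0.5, 0.41, 288.0, 0.5
tau_alpha = 100.0                    # yr (assumed; not recoverable from Table 1)

def alpha0(T):                       # assumed sigmoid form standing in for (12)
    return a1 + (a2 - a1) * 0.5 * (1.0 + np.tanh(Ka * (T - Ta)))

def eps(T):                          # assumed sigmoid form standing in for (13)
    return e1 + (e2 - e1) * 0.5 * (1.0 + np.tanh(Ke * (T - Te)))

# abrupt2xCO2-style experiment: balance an initial state, then step mu by 3.7 W m-2
T, alpha = 255.0, alpha0(255.0)
mu = eps(T) * SB * T**4 - Q0 * (1.0 - alpha) + 3.7

dt = 0.01                            # Euler step, years
for _ in range(300_000):             # integrate for 3000 years
    dTdt = (Q0 * (1.0 - alpha) - eps(T) * SB * T**4 + mu) * YEAR / C   # eq (9)
    dadt = (alpha0(T) - alpha) / tau_alpha                             # eq (11)
    T, alpha = T + dt * dTdt, alpha + dt * dadt
print(T - 255.0, alpha)              # transient warming and albedo after 3000 yr
```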
We say there is dynamic albedo if τ_α > 0; in the case τ_α = 0, the albedo settles instantaneously, so that we can eliminate (11) and set α = α₀(T). We say there is chaotic variability if ν_NV ≠ 0; in the case ν_NV = 0, there is no internal variability and we can eliminate the chaotic Lorenz-63 model (15). It is well known that in the case of no internal variability, equations (9) can be bistable [38,40]. Due to the functional forms of the temperature-dependent albedo and emissivity, the model (9) can have one, two or three stable equilibria depending on the parameter values. This is organized by a fifth-order "butterfly" singularity [43]: see Appendix A for a verification and in-depth analysis of the bifurcation structure of this model. For the parameter values given in Table 1, the model is bistable for a certain range of values of the parameter µ: in this bistable region, the model supports a stable cold "icehouse" and a warm "hothouse" climate state (see Figure 3). Note that the ECS of the two types of states (for the same CO₂ level) differs between branches, as albedo and emissivity differ between branches. However, the ECS within a branch is also not constant: Figure 3(b) shows variation between initial points y₀ that lie on the same branch (intra-branch differences). In the climate literature, these variations are not well quantified, mainly because they depend on a multitude of physical feedback processes that are difficult to observe and to model numerically in full [4,44]. Still, it is good to keep in mind that observed or estimated ECS may vary as the (initial) climate state changes.

Nonlinear response: slow and/or late tipping and ECS

As discussed in section 2, if the transient relaxation dynamics of a climate model is approximated well by a linear system, this can be used to estimate ECS. This works for nonlinear systems with small enough forcings. For example, Figure 4(a) shows a simulation of (9) with parameters C of Table 1 subjected to an abrupt2xCO2 forcing.

Figure 3: (a) Bifurcation diagram for the GEBM with parameters of Table 1. The bifurcation parameter µ represents radiative forcing due to atmospheric CO₂. Solid lines correspond to stable equilibria, dashed lines to unstable equilibria. There are two different branches of stable equilibria: one corresponding to a cold climate (blue) and one corresponding to a warm climate (red). (b) Equilibrium 'two-point' response for different forcing levels corresponding to 2^γ xCO2, starting from an initial state T₀ = 293 K corresponding to the equilibrium temperature before perturbation. The red part of the figure corresponds to end states on the warm branch, the blue part to end states on the cold branch. The dashed line indicates the location of a tipping point. (c) ECS (i.e. equilibrium two-point response to CO₂ doubling) as a function of the initial temperature T₀. Blue lines indicate starting points on the cold branch; red lines indicate starting points on the warm branch; the grey region corresponds to unfeasible initial temperatures (i.e., they lie on the unstable branch in (a)). The large peak in the blue line corresponds to tipping from the cold branch to the warm branch; the location of this tipping point is indicated with a dashed line.
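The bistable structure of Figure 3(a) can be probed by scanning for equilibria of the reconstructed model on the slow manifold α = α₀(T); a sketch follows (sigmoid forms assumed as before):

```python
import numpy as np

Q0, SB = 341.3, 5.67e-8
a1, a2, Ta, Ka = 0.7, 0.289, 274.5, 0.1
e1, e2, Te, Ke = 0.5, 0.41, 288.0, 0.5

alpha0 = lambda T: a1 + (a2 - a1) * 0.5 * (1 + np.tanh(Ka * (T - Ta)))
eps = lambda T: e1 + (e2 - e1) * 0.5 * (1 + np.tanh(Ke * (T - Te)))

def imbalance(T, mu):        # T-nullcline of (9) with albedo settled to alpha0(T)
    return Q0 * (1 - alpha0(T)) - eps(T) * SB * T**4 + mu

# count sign changes of the imbalance on a temperature grid: one root per
# equilibrium; three roots (stable-unstable-stable) signal the bistable window
Ts = np.linspace(220.0, 350.0, 5201)
for mu in np.linspace(5.0, 25.0, 9):
    b = imbalance(Ts, mu)
    roots = Ts[:-1][np.sign(b[:-1]) != np.sign(b[1:])]
    print(f"mu = {mu:5.1f} W m-2 -> equilibria near T = {np.round(roots, 1)} K")
```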
Table 1: Parameter values for the GEBM (9), dynamic albedo (11) and chaotic variability (15) used in the numerical simulations. We take the standard choice for the Lorenz parameters: σ = 10, ρ = 28 and β = 8/3. For the simulations, time t is rescaled to years. The equilibrium albedo is given by (12) and the (equilibrium) emissivity by (13). The forcing µ is given by (10) and ν_NV represents the amplitude of the chaotic forcing via (14). The entry '···' indicates that the value of column A is also used in that case.

parameter | A | B | C | D | units
C | 5 × 10⁸ | ··· | ··· | ··· | J m⁻² K⁻¹
Q₀ | 341.3 | ··· | ··· | ··· | W m⁻²
σ | 5.67 × 10⁻⁸ | ··· | ··· | ··· | W m⁻² K⁻⁴
α₁ | 0.7 | ··· | ··· | ··· |
α₂ | 0.289 | ··· | ··· | ··· |
T_α | 274.5 | ··· | ··· | ··· | K
K_α | 0.1 | ··· | ··· | ··· | K⁻¹
ε₁ | 0.5 | ··· | ··· | ··· |
ε₂ | 0.41 | ··· | ··· | ··· |
T_ε | 288 | ··· | ··· | ··· | K
K_ε | 0.5 | ··· | ··· | 0.1 |

Figure 4: Simulation of (9) (parameters C of Table 1) subjected to an abrupt2xCO2 forcing. The initial forcing µ₀ is chosen such that there is an initial equilibrium at T₀ = 255 K.

The initial forcing µ₀ is chosen such that there is an equilibrium at T₀ = 255 K. There is a clear two-stage exponential decay to equilibrium with ΔT* ≈ 3.1 K: the right panel of Figure 4(a) shows that there is a nearby equilibrium attractor. Figure 4(b) shows estimates using the Gregory method on rolling windows of 150 years. Within a time window, we regress the time series of ΔT and ΔN := C d(ΔT)/dt onto the linear model ΔN = f + λΔT, which gives estimates for the forcing f and the dominant feedback parameter λ. The regression is performed using the MATLAB linear-model fit fitlm; standard errors for the best fit are shown, and the bottom panel shows the adjusted R²-statistic for each window, where R² = 1 implies that all variance in the signal is described by the model within the window ending at that time point. Equilibrium warming is derived from these fits by extrapolation of the linear model, giving ΔT*_est = −f/λ. Note that the initial 150-year fit is already good. Indeed, one can see decreasing signal-to-noise ratio and R² for fits taken later in the time series, as noise dominates the dynamics of the state this late in the simulation. A second equilibrium estimation protocol is shown in Figure 4(c), in which blocks of 150 years are fitted to a decaying exponential T(t) = T∞ + b e^{λt} using the MATLAB nonlinear-model fit fitnlm. This gives estimates for T∞, b and λ with standard errors, and is a direct approximation of a linear response to a Heaviside input; we show λ and ΔT*_est = T∞ − T₀. Observe that, similarly to the Gregory fits, these fits also become degenerate for later time frames.

To contrast with Figure 4, Figure 5 shows the case of an abrupt4xCO2 forcing with otherwise identical parameters and initial condition, in which the transient dynamics are not approximated well by a linear system, although a long transient period (due to the crossing of a slow tipping point) conceals the nonlinear dynamics. Figure 5(a) shows that the run seems to rapidly approach an equilibrium, but warming then continues slowly as the albedo slowly decreases. Then, around t = 1500 years, there is a surprising and rapid "late tipping", followed by relaxation to the final equilibrium. From the fits in Figure 5(b) and (c), approximately linear behaviour can be seen at first; however, we are near (but beyond) a fold bifurcation of the stable part of the slow manifold, where the blue and red nullclines become tangent (i.e. a slow tipping point), and for this forcing the nullclines are barely detached.
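The rolling-window estimation can be mimicked on synthetic data; the following sketch uses NumPy's polyfit in place of MATLAB's fitlm, and a two-timescale warming signal with parameter values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-timescale warming (fast + slow mode) plus weak noise
t = np.arange(3000.0)
dT = (2.5 * (1 - np.exp(-t / 10.0)) + 4.0 * (1 - np.exp(-t / 400.0))
      + 0.05 * rng.standard_normal(t.size))
dN = 1.0 * np.gradient(dT, t)          # dN := C d(dT)/dt, with C set to 1 here

for start in range(0, 2851, 570):      # rolling 150-year windows
    w = slice(start, start + 150)
    lam, f = np.polyfit(dT[w], dN[w], 1)      # Gregory fit: dN = f + lam*dT
    print(f"years {start:4d}-{start + 149:4d}: lam = {lam:+.5f}, "
          f"dT*_est = {-f / lam:8.2f} K")     # true equilibrium warming is 6.5 K
```

Early windows underestimate the equilibrium warming because the slow mode is barely visible, while late windows are noise-dominated and the fits degenerate, mirroring the behaviour described in the text.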
As the state passes this point (sometimes called a ghost attractor), the dynamics on the slow manifold speed up before tipping over a fold of the slow manifold, causing a rapid late tipping event to another stable branch of this slow manifold. Figure 5(b) shows estimates using a Gregory fit. It can be seen that a fast decay is picked up initially, and a slower decay dominates from about t = 250 years. At around t = 500 years, the fitted value of λ passes through zero, suggesting a linearly unstable climate, and the estimated warming becomes unreliable. Only after the late tipping event, from t = 1750 years onwards, do the fits make sense again, with negative λ and sensible warming estimates corresponding to the actual equilibrium warming of the simulation. Similarly, for the exponential fit shown in (c), λ ≈ −0.05 corresponds to the initial fast decay, but this quickly gives way to the slow decay with λ ≈ −0.001 by about t = 250 years. The fit remains good up to t ≈ 500 years, but after this the estimated errors on ΔT_est increase rapidly as the procedure attempts to fit a decaying exponential to something that is actually growing slowly but exponentially. At t ≈ 1500 years the system passes through the late rapid tipping before settling to a fit of ΔT*_est ≈ 72.

Figure 5: Simulation of (9) (parameters C of Table 1) subjected to an abrupt4xCO2 forcing. The initial equilibrium is T₀ = 255 K. Here, a late tipping event happens as the dynamics drive the system over a fold point of the slow manifold. (a) Time series for ΔT and Δα, as well as the trajectory through phase space (cyan). The red dotted curve in the right panel denotes the nullcline on which dα/dt = 0 and the blue dotted curve denotes the nullcline on which dT/dt = 0, which also acts as a slow manifold. Fits (b,c) as in Figure 4.

Clearly, in both of these fitting approaches the true equilibrium warming is not estimated accurately at all until after the late tipping event, when the system is again approximately linear. From the fits up to about t = 500 years there are no obvious hints anticipating this late tipping, and the fit results seem to indicate convergence to a noisy equilibrium state (hence, for example, the low R² score for the Gregory method, as the signal is mostly noise at this point). Only after t = 500 years do some signs of the passing of a slow tipping point appear in this example (λ > 0 in the Gregory method and large uncertainties in the exponential fit method), as the almost-equilibrium (ghost attractor) on the slow manifold is passed around this time. Comparing Figures 5 and 4, we see very similar fits and estimates up to t = 500 years, further indicating the difficulty of distinguishing scenarios with and without late tipping. Moreover, the perturbation threshold shown in Figure 1(b) lies somewhere between 2xCO2 and 4xCO2 for this model and these parameters.

Figure 6 shows an analogous simulation of (9) under abrupt4xCO2 forcing but with parameters D of Table 1. Again, the initial forcing µ₀ is such that there is an initial equilibrium at T₀ = 255 K. For these parameters there is no fold in the critical manifold, meaning that there is no rapid late tipping (in the bifurcation sense). However, similarly to Figure 5, the initial (linear) warming is not representative of the equilibrium warming, and the transient means one can only see evidence of the final state after t = 1500 years.
This indicates that, even in the absence of (late) tipping points, an initially good fit cannot exclude a later rapid warming phase in systems with dynamics on multiple time scales. For all three simulations presented in this section, extrapolations from fits to the initial few hundred years look very similar, although their long-term behaviour is very different, again highlighting that extrapolations may only be accurate after long transients that bring the system into a linear regime.

Ensemble variability and ECS

When estimating ECS in models with internal variability, one of the ingredients is the precise choice of the initial conditions y₀. In section 2, we already discussed that the background climate state (i.e., the initial attractor A₀) influences the transient and equilibrium response to forcings. However, the precise initial state y₀ on the initial attractor will also impact the observed transient dynamics and can potentially change the final equilibrium state. We illustrate such situations in this subsection.

Figure 7(a,b) shows an abrupt4xCO2 experiment for an ensemble of different initial states on the same initial attractor (a warm climate state), for a simulation of (9) with parameters B of Table 1; note the presence of chaotic variability. There is potential variation in the warming of the different ensemble members during the transient, which stems from the different realisations of the natural variability corresponding to the different initial states. Gregory fits over a time window starting at time 0 up to time t are shown in Figure 8(a). The associated regressions for individual ensemble members (black) are poor, but the regression for the ensemble average (red) is much better, as the noise (internal variability) is averaged out.

Another example is given in Figure 7(c,d) for a different initial attractor (a cold climate state), where natural variability pushes the state over a tipping point at different times during the simulation of each ensemble member. The simulations initially suggest relaxation towards a state close to the original colder state, but later they consistently exhibit tipping to a different (and much warmer) state. In this example the colder state is almost at equilibrium: as long as the natural variation in forcing is small enough, the system remains close to the colder state, while for larger fluctuations the system tips into the warmer state. Figure 8(b) shows that even the ensemble average is not adequate to estimate ECS in this case; accurate estimates can only be made once the model has been run until (almost) all individual ensemble members have tipped. Nevertheless, the ensemble-averaged response is still much better than the other approaches, because mixing data from tipped and non-tipped ensemble members otherwise leads to very unreliable bimodal estimates with high variance.

Even worse, for non-constant forcing the equilibrium response may depend more drastically on the precise initial state y₀: some part of the initial attractor A₀ can be attracted to a final attractor A₁, while the rest is attracted to a different final attractor Ã₁. This effect has been called partial tipping of the attractor and is studied abstractly in [45,9,46]. Because of the relative simplicity of the chaotic GEBM (9), we cannot show this behaviour for constant forcing, but we can illustrate the phenomenon by forcing the model temporarily with an abrupt4xCO2 forcing, after which the initial CO₂ levels are restored at time t = 75 years.
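A sketch of such an ensemble experiment with the reconstructed model follows; the coupling of the Lorenz-63 output to the forcing (which component, the amplitude ν_NV and the timescale τ_NV) is assumed for illustration, as is the use of the fast equation only (α = α₀(T)).

```python
import numpy as np

rng = np.random.default_rng(3)
YEAR, C, Q0, SB = 3.15e7, 5e8, 341.3, 5.67e-8
a1, a2, Ta, Ka = 0.7, 0.289, 274.5, 0.1
e1, e2, Te, Ke = 0.5, 0.41, 288.0, 0.5
nu_NV, tau_NV = 2.0, 1.0                  # assumed amplitude (W m-2) and timescale (yr)

alpha0 = lambda T: a1 + (a2 - a1) * 0.5 * (1 + np.tanh(Ka * (T - Ta)))
eps = lambda T: e1 + (e2 - e1) * 0.5 * (1 + np.tanh(Ke * (T - Te)))

def member(x, mu, T=293.0, years=200.0, dt=0.002):
    """One ensemble member: fast GEBM driven by a Lorenz-63 forcing component."""
    for _ in range(int(years / dt)):
        # Euler step of Lorenz-63 (sigma=10, rho=28, beta=8/3) on timescale tau_NV
        x = x + (dt / tau_NV) * np.array([10.0 * (x[1] - x[0]),
                                          x[0] * (28.0 - x[2]) - x[1],
                                          x[0] * x[1] - (8.0 / 3.0) * x[2]])
        mu_nv = nu_NV * x[1] / 10.0       # variability from one Lorenz component
        T += dt * (Q0 * (1 - alpha0(T)) - eps(T) * SB * T**4 + mu + mu_nv) * YEAR / C
    return T

# 'macro' ensemble: independent initial states near the Lorenz attractor,
# all subject to the same abrupt4xCO2-like step of 2 * 3.7 W m-2
mu0 = eps(293.0) * SB * 293.0**4 - Q0 * (1 - alpha0(293.0))
finals = [member(rng.standard_normal(3) * 5 + np.array([0.0, 0.0, 25.0]),
                 mu0 + 2 * 3.7) for _ in range(10)]
print(np.mean(finals) - 293.0, np.std(finals))   # ensemble-mean warming and spread
```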
Figure 9 shows the results of this experiment. One can clearly see that some ensemble members experience tipping while others do not. In this situation (details not shown), partial tipping means that none of the ECS estimation techniques paints a full picture. The ensemble average does contain some information on the number of tipped and non-tipped states, but we suggest that more meaningful estimates would need to be made for the attractors separately: first categorising each individual ensemble member as tipped or not, and then applying the estimation techniques to these sub-ensembles.

Figure 6: Simulation of (9) (parameters D of Table 1) subjected to an abrupt4xCO2 forcing. The initial equilibrium is T₀ = 255 K. The left panel shows time series for ΔT and Δα. The right panel shows the trajectory through phase space (cyan). The red dotted curve in the right panel denotes the nullcline on which dα/dt = 0 and the blue dotted curve denotes the nullcline on which dT/dt = 0, which also acts as a slow manifold. Note that for parameters D there is no longer a late tipping in T of the speed seen in Figure 5.

Evidence of late tipping within GCM runs

For GCM runs with conditions corresponding to the relatively stable conditions of the Holocene pre-industrial climate, the accepted wisdom is that we do not expect to find any major global tipping effects as extreme as the icehouse-to-hothouse transitions explored above. Nonetheless, there are hints that we may be close to regional tipping points, such as changes in the Atlantic Meridional Overturning Circulation (AMOC) or West Antarctic ice sheet collapse, and some emission scenarios are likely to take us over these tipping points. Crossings of these regional tipping points can result in a global signal, such as changes in the AMOC leading to global climatic changes [47,48]. Further, as emission reduction scenarios may take us over tipping points only temporarily [49], the possibility of a partial tipping of an attractor may also be very relevant to study in GCMs.

Initial conditions for GCM runs are notoriously difficult to set; they are typically taken as the end of a spin-up simulation, or as a state at some time during a control experiment (in both of which atmospheric CO₂ is kept fixed at the starting levels). In ensemble runs, variation of initial states on the initial attractor is sometimes explored either by slightly perturbing an initial state (so-called 'micro-perturbations'), or by taking several states of a control run, typically separated by a few months up to a few years, depending on the time scale of the internal variability being considered (so-called 'macro-perturbations') [50,51]. Nonetheless, even after substantial spin-up there may be continued variability that can cause extrapolations such as the Effective Climate Sensitivity to keep varying over centennial timescales [5]. For example, [52,53] find multi-century changes in an atmosphere-ocean GCM, mostly related to the strength of the AMOC, depending on the magnitude of the CO₂ perturbation.

The response of GCMs can also include late rapid changes. An example of such a late warming event is visible around year 2,300 of the abrupt8xCO2 run of the model CESM 1.0.4 within LongRunMIP [8]. Figure 10 shows features of this run, along with the associated abrupt2xCO2 and abrupt4xCO2 runs of the same model for comparison. In (a), the time series of the increase in (yearly averaged) global mean near-surface temperature is shown.
For the abrupt8xCO2 experiment a late and sudden increase can be seen around t = 2,300 years (highlighted in red in the figure), which is not present in the other experiments. We have analysed these data using the Gregory method on millennia-long rolling windows (to suppress the natural variability on shorter time scales) in (c-d). We found an increase in the feedback parameter λ around the same time, and also an underestimation of the equilibrium warming for t < 2,400 years. This is similar to our findings in a conceptual energy balance model (Figure 5), albeit less distinct. Hence, we suggest that this late warming event in the abrupt8xCO2 run could be an example of a late tipping event in a GCM. Appendix B.1 illustrates that this tipping behaviour is probably due to a qualitative regional tipping of the AMOC, which appears for the 8xCO2 run but is not present in the 2xCO2 or 4xCO2 runs. However, we note that unlike in Figure 5, the tipping for the 8xCO2 run is of a transient nature: the final state is an "AMOC on" state in all cases.
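As a rough illustration of this rolling-window diagnostic (not an analysis of the actual CESM 1.0.4 output, which is available via LongRunMIP), the sketch below integrates a one-box energy balance model in which the feedback parameter is prescribed to weaken at year 2,300, and then tracks λ and the implied ECS with Gregory fits on millennia-long windows. The heat capacity, forcing, feedback values and noise level are all assumptions chosen purely to produce a slow, multi-millennial adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-box EBM: C dT/dt = F + lam(t) * T + noise, with a prescribed late
# change in the feedback parameter lam (illustrative numbers only).
n_years = 5000
C, F = 800.0, 8.0                                  # assumed heat capacity, forcing
lam = np.where(np.arange(n_years) < 2300, -1.2, -0.8)

T = np.zeros(n_years)
N = np.zeros(n_years)
for t in range(n_years - 1):
    N[t] = F + lam[t] * T[t] + rng.normal(0.0, 0.3)  # TOA imbalance + variability
    T[t + 1] = T[t] + N[t] / C                       # 1-year Euler step
N[-1] = F + lam[-1] * T[-1]

def rolling_gregory(T, N, window=1000, step=250):
    """Gregory fits on millennia-long rolling windows, suppressing the
    short-time-scale internal variability."""
    for start in range(0, T.size - window, step):
        sl = slice(start, start + window)
        b, a = np.polyfit(T[sl], N[sl], 1)
        yield start + window // 2, b, -a / b         # (window centre, lambda, ECS)

for centre, lam_hat, ecs_hat in rolling_gregory(T, N):
    print(f"year {centre:4d}: lambda ~ {lam_hat:+.2f} W m^-2 K^-1, ECS ~ {ecs_hat:.2f} K")
```

Windows that lie entirely before the prescribed change recover the early feedback and therefore underestimate the eventual warming, mirroring the behaviour described for the abrupt8xCO2 run; windows spanning the change mix the two regimes and show the shift in λ.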
Conclusion and discussion

Although many authors have pointed out deficiencies with estimating and using equilibrium climate sensitivity (ECS), it clearly remains an important metric for understanding the response of climate models to changes in CO_2 forcing. In particular, although there may be problems with timescales, low-frequency variability and lack of linearity, ECS and variants of it are key metrics that find their way (for example, via integrated assessment models of the socioeconomic impact of an emissions pathway, such as used in [54,55,56]) into decision making about climate change and its likely impact on human activities. In this paper, we have illustrated how such linear concepts can break down in many different ways, even after long transient periods in which they seem valid, once nonlinear dynamics start to play a role.

Although we have focused in this paper on the climate response to idealised abrupt CO_2 forcing scenarios, we also want to stress that in multistable nonlinear systems the precise outcome can also depend on the pathway taken: not only the amount of emissions but also the moment of emissions can be important. This further complicates and challenges overly simplistic linear frameworks. See for instance [57,49] for examples in conceptual settings, as well as [58] for a discussion on how this can strongly influence integrated assessments.

Climate is a multiscale process that takes place on many fast and slow time scales, so it is unrealistic to assume that all dynamics can be modelled by a univariate linear model. Moreover, we also cannot expect to estimate processes that take place over substantially longer time scales than the simulated duration. As we have illustrated in this paper, this means that even in the case of a purely linear response, ECS cannot be accurately estimated unless the simulation times are long enough to resolve the slow timescales, such as those common in large-scale ocean dynamics or land ice sheets. On top of that, with the examples in Section 3 we have illustrated how nonlinear effects in a multiscale climate model can lead to additional warming effects, such as slow and late tipping, with long transients that give no obvious hints of these late events. These examples demonstrate that even if a fit is very good for a long period of time, there may still be large and abrupt late tipping points.

In Section 2 and in Figure 1, we introduced several trade-offs that need to be made when estimating ECS for a climate model. It would be of great interest to locate the "Goldilocks zone" in which reliable and accurate estimates of ECS are possible, in order to give suitable protocols for experiments with GCMs. In particular, it would be good to understand (a) the minimum times and ensemble sizes needed to reliably estimate ECS and (b) the thresholds in perturbation size for general GCMs that lead to tipping behaviour. This will depend not just on the current climate state but also on the processes that are included in the model and the form of the forcing. We suggest there is a need to find criteria that imply that an estimation protocol will work, and on which time scales. For instance, the Gregory method applied to data from one decade can typically predict a few decades but is unlikely to be predictive on the scale of centuries; similarly, if only 150 years of data are available, an accurate estimate on millennial time scales is unlikely.

The most drastic examples of nonlinear response given in this paper concern tipping phenomena. This raises the question of how relevant this is for future projections with GCMs. After all, in these models the GMST response is typically fairly linear in changes of forcing levels, and the transient response seems linear over quite long timescales. This might suggest that tipping points for GMST are not very relevant. However, the parameter space of such models has not been explored sufficiently to capture past and future tipping [59] and, consequently, under standard settings (optimised for stable Holocene pre-industrial climates) those GCMs may operate in too stable a manner [60]. Simultaneously, local or regional tipping has been observed more frequently in GCMs [61], and can be observed in past climate records [35]. Tipping effects at regional levels may give only a small signal in the global average (although, e.g., the AMOC restoration in the abrupt8xCO2 experiment in Figure 10 is visible in GMST). Indeed, one might conjecture a global redistribution that almost averages out in GMST data, similar to what is described in [62,63]. However, such regional tipping is much more problematic than a global mean signal might indicate, as local impacts can be very dramatic. Moreover, when several regional tipping elements are involved, cascading effects may occur [64,65], opening the possibility of an eventual global response, for example through the triggering of additional carbon cycle feedbacks. It remains an important issue for future climate projections to determine which tipping points may be crossed on time scales of centuries to millennia. It also highlights the importance of going beyond classifying climate response only via GMST, to look at spatial responses and other observables.

Data statement

Simulation data from the models in LongRunMIP, including the model CESM 1.0.4 used here, is available at data.iac.ethz.ch/longrunmip/; requests for access can be made to the coordinators of LongRunMIP. More information and details of the simulations can be found on longrunmip.org and in [8]. The numerical code to simulate and subsequently analyse the conceptual energy balance model introduced in equations (9), (11), (14) is available from https://github.com/peterashwin/late-tipping-2022.

Author contributions

All authors designed the study. RB and PA undertook the computer simulations and analysis. All authors edited the final text.
A Bifurcation structure of the GEBM

In the absence of chaotic forcing (ν_NV ≡ 0), the energy balance model (9) has equilibria that are independent of the value of τ_α ≥ 0; hence we study here the equilibria for the case of no dynamic albedo. That is, we consider the resulting scalar equation for T, with α_0 and ε_0 as in (12), (13). To study the bifurcation structure of this equation, we apply the scalings y := K_a T and s := σ(ε_1 + ε_2)/(2 K_a^3 C) t, and we introduce a number of composed parameters. Then (16) becomes a rescaled equation of the form dy/ds = f(y).

This system has an equilibrium at y = y_r when f(y_r) = 0 for a given set of parameter values. However, bifurcations can occur as parameters change. At such bifurcation points, not only does f(y_r) = 0, but derivatives of f with respect to y also vanish. The number of vanishing derivatives determines the co-dimension and degeneracy of the bifurcation. If only the first derivative vanishes, it is a saddle-node bifurcation, in which two equilibria collide and disappear. If the first two derivatives vanish, it is a cusp bifurcation, in which three equilibria meet (in other words, two saddle-node bifurcations meet). If the first three derivatives vanish, it is called a swallowtail point, at which four equilibria meet (or two cusp bifurcations). If the first four derivatives vanish, it is called a butterfly catastrophe or butterfly singularity [66,43], where five equilibria or two swallowtail points meet (or three cusp bifurcations, or four saddle-node bifurcations).

To study the bifurcation structure, it is therefore useful to look at the Taylor expansion of f around a reference point y_r. For this we set y = y_r + z and define z_α := y_α − y_r and z_ε := y_ε − y_r. Using computer algebra software such as Mathematica, the expansion around z = 0 can be computed as a power series with coefficients f_i (so that f(y_r + z) = Σ_i f_i z^i), with expressions for the f_i given in Supplementary Material B.2.

A.1 Cusp bifurcations when c = 0 or a = 0

We first inspect some limit cases, starting with the case c = 0. That is, ε_1 = ε_2, indicating that the emissivity does not change with temperature. In this setting, it can be shown that f_0 = f_1 = f_2 = 0 whenever a set of conditions on a, y_r and z_α holds simultaneously. Since it is required that a ≥ 0 and y_r ≥ 0, from these expressions it can be seen that cusp bifurcations can only occur if z_α > 0. Further, higher-order degeneracies cannot occur (no choice of z_α leads to f_0 = f_1 = f_2 = f_3 = 0 in the case c = 0). These results can be brought back to the original scaling by a series of substitutions and manipulations. For instance, fixing Q_0 = 341.3, σ = 5.67 · 10^-8, K_a = 0.1, T_a = 274.5, ε_1 = ε_2 = 0.7, α_1 = 0.7, it can be shown that a cusp bifurcation occurs when α_2 ≈ 0.5071 for µ ≈ 91.663 at T ≈ 273.9529. For lower values of α_2, the bifurcation diagram (µ, T) has two saddle-node bifurcations; for higher values it has no saddle-node bifurcations. See Figure 11 for a numerical continuation of the saddle-node lines in (µ, α_2)-parameter space, and bifurcation diagrams (µ, T) for various choices of α_2 below, above and at the critical value for which the cusp bifurcation occurs.

Another limit case arises when a = 0. In this case, α_1 = α_2, indicating that the albedo does not change with temperature. In this setting, it can be shown that f_0 = f_1 = f_2 = 0 occurs whenever an analogous set of conditions holds. In this case, to ensure c ≥ 0 and y_r > 0, it is necessary to take z_ε < 0. Observe that higher-order degeneracies again cannot occur.
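The recipe of solving f = f' = ... = 0 can be carried out numerically. Since the full expression for f appears only in the Supplementary Material, the sketch below applies the same degeneracy conditions to a toy cubic normal form f(y; µ, α) = µ + αy − y³, standing in for the actual GEBM right-hand side (an assumption made purely to demonstrate the method):

```python
import numpy as np
from scipy.optimize import fsolve

# Toy normal form; its cusp and saddle-node structure mirrors the
# conditions used in the appendix, but it is not the GEBM's f.
def f(y, mu, alpha):
    return mu + alpha * y - y**3

def fy(y, mu, alpha):    # first derivative df/dy
    return alpha - 3 * y**2

def fyy(y, mu, alpha):   # second derivative d2f/dy2
    return -6 * y

def cusp_conditions(v):
    """A cusp requires f = f' = f'' = 0 simultaneously: three equations
    in the three unknowns (y, mu, alpha)."""
    y, mu, alpha = v
    return [f(y, mu, alpha), fy(y, mu, alpha), fyy(y, mu, alpha)]

y_c, mu_c, alpha_c = fsolve(cusp_conditions, x0=[0.1, 0.1, 0.1])
print(f"cusp at y = {y_c:.4f}, mu = {mu_c:.4f}, alpha = {alpha_c:.4f}")

# Above the cusp value of alpha, the cubic has two saddle-node points,
# found by solving just f = f' = 0 for (y, mu) at fixed alpha:
alpha_fix = 1.0
for guess in ([0.6, 0.0], [-0.6, 0.0]):
    y_s, mu_s = fsolve(lambda v: [f(v[0], v[1], alpha_fix),
                                  fy(v[0], v[1], alpha_fix)], x0=guess)
    print(f"saddle-node at y = {y_s:+.4f}, mu = {mu_s:+.4f} (alpha = {alpha_fix})")
```

The butterfly catastrophe of the next subsection follows the same pattern with five simultaneous conditions (f_0 = ... = f_4 = 0) and correspondingly more unknowns.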
A.2 Butterfly catastrophe in the full system

When a > 0 and c > 0, both albedo and emissivity change with temperature. In this case, the cusp bifurcations from both degenerate settings are present. In these bifurcations, there is a transition from two stable and one unstable equilibria to one stable equilibrium. Next to these, there is also another cusp bifurcation in the full system, in which two unstable and one stable equilibria meet and become one unstable equilibrium. It is possible that these cusp bifurcations meet, and hence the five potential equilibria of the full system meet. This happens for parameter values for which f_0 = f_1 = f_2 = f_3 = f_4 = 0. Here, a bifurcation of codimension 4 occurs, which is sometimes called a butterfly catastrophe. Using the expressions found before, it is possible to find locations of butterfly catastrophes in the full system. For instance, fixing c = 0.003 and d = 5, using a numerical root-finding algorithm we obtained the solution ν = 717271, a = 96965.4, y_r = 28.9109, z_α = 0.212802, z_ε = −0.2214. For parameters close to this point, the degeneracy unfolds into four saddle-node branches, with three cusp points. In Figure 13, we show such an unfolding, in which we follow the fold loci as the parameters α_2 and µ vary, with all the other parameters taken close to the butterfly singularity found above. In the diagrams the three cusps are located where the fold curves meet. If the parameters were taken closer to the butterfly singularity, these cusp points would move together and meet precisely at the singularity.

Figure 14 shows that the abrupt addition of atmospheric CO_2 causes a rapid weakening of the AMOC in the model CESM 1.0.4. In the abrupt2xCO2 and abrupt4xCO2 experiments this weakening is restored gradually over time, but in the abrupt8xCO2 experiment the system lingers in a weakened state for a long time and then suddenly restores rapidly around t = 2,400 years, which seems to be the cause of the rapid increase in global warming this late in the run.

[Figure 14 caption: AMOC strength in the CESM 1.0.4 runs of LongRunMIP [8], measured as the (yearly averaged) maximum of the global stream function at depths below 500 m in the Northern Hemisphere. The insets show a 50-year rolling average for the first 200 years of each experiment. In all abrupt CO2 forcing experiments, a weakening can be seen at the start of the runs. In the 2xCO2 and 4xCO2 experiments, this is restored gradually; in the 8xCO2 experiment, the AMOC lingers in a weakened state and suddenly restores around t = 2,400 years.]
The influence of mouthrinses and simulated toothbrushing on the surface roughness of a nanofilled composite resin

The aim of this study was to determine the influence of mouthrinses on the surface roughness of a nanofilled composite resin after toothbrushing. One hundred nanofilled composite resin specimens were prepared and randomly distributed into two groups (brushed and non-brushed) and then assigned to five subgroups, according to the mouthrinse solutions (n = 10): Colgate Plax Fresh Mint, Oral B, Cepacol, Colgate Plax, and artificial saliva. Each sample was immersed in 20 mL of the mouthrinses for 1 minute, 5 days per week, twice a day, for a 3-week period. The control group used in the study was one in which the specimens were not subjected to brushing and remained only in artificial saliva. Toothbrushing was performed once a week for 1 minute, for 3 weeks. Surface roughness measurements (Ra) were performed after the immersion period and toothbrushing, by means of a profilometer. Data were analyzed by two-way ANOVA and Tukey's test. Analysis revealed that the association between toothbrushing and Colgate Plax Fresh Mint produced the lowest surface roughness (p < 0.05). All other groups tested (Oral B, Cepacol, Colgate Plax, artificial saliva) exhibited no statistically significant differences between surfaces, whether subjected to toothbrushing or not (p > 0.05). It was concluded that the surface roughness of the nanofilled composite resin tested can be influenced by the mouthrinse associated with toothbrushing.

Descriptors: Composite Resins; Surface Properties; Toothbrushing; Mouthwashes.

Introduction

Dental caries is a multifactorial disease. Its development depends on several variables, such as the presence of microorganisms, the substrate, the host's oral environment, and time.1 Proper oral hygiene accomplished by toothbrushing is the primary means of preventing enamel demineralization. However, in patients with gingivitis and periodontitis,2 or in those wearing extensive splinting or fixed prostheses, orthodontic appliances, overdentures, and implant-supported restorations,3 the use of mouthrinses should complement toothbrushing.

Wear from toothbrushing can influence the mechanical and optical properties of composite resins. The surface roughness can increase due to abrasion of the polymer matrix, subsequently followed by filler exposure, and finally loosening of filler particles.4 In addition, mouthrinses can contain alcohol and other substances, such as detergents, emulsifiers, and organic acids, that can lead to degradation of the composite resin surface.5 According to Gürgan et al.,6 mouthrinses can affect the hardness of restorative materials. Alcohol content is not the only factor that has a softening effect on the materials: saliva, for example, can dilute or concentrate mouthrinses, increasing or decreasing this effect.

Recently, nanofilled composites were introduced in an attempt to provide a restorative material that could be used in both anterior and posterior areas, associating high initial polishing with superior polish and gloss retention.4,7 There is still much to be learned about this resin.

Declaration of Interests: The authors certify that they have no commercial or associative interest that represents a conflict of interest in connection with the manuscript.
Our study hypothesis was that the surface roughness increases with the use of mouthrinse solutions when associated with toothbrushing. Therefore, the aim of this study was to determine the influence of mouthrinses on the surface roughness of a nanofilled composite resin after toothbrushing.

Methodology

Experimental design

The factors under study were toothbrushing in two levels:
• Group A, brushed,
• Group B, non-brushed;
and mouthrinse in five levels:
• Colgate Plax Fresh Mint,
• Oral B,
• Cepacol,
• Colgate Plax, and
• artificial saliva.

The sample size was determined after an experimental pilot trial and from the standard deviations of the results, and was set at n = 10. The experimental sample comprised 100 composite resin specimens. The response variable was surface roughness, measured before and after the experimental periods with a profilometer.

Preparation of specimens

One hundred specimens were prepared with a nanofilled composite resin (Filtek Supreme, shade A2, 3M ESPE, São Paulo, Brazil). The composite resin was manipulated according to the manufacturer's instructions and inserted into a stainless steel matrix (6 mm diameter and 2 mm depth). After the matrix was filled, a polyester strip (KDent, Inc., St. Louis, USA) was pressed onto the surface with a glass slab (1 kg weight) to smooth the material surface. After 30 seconds, the glass slab was removed, and the composite resin was light-cured in a LED light-curing unit with a power output of 750 mW/cm² (UltraLED, Dabi Atlante, Ribeirão Preto, Brazil) for 40 seconds. The cured specimens were removed and maintained for 24 h in 100% relative humidity at 37°C. After 24 hours, the specimens were subjected to surface polishing with abrasive disks without continuous water irrigation (Sof-Lex, 3M ESPE, St. Paul, USA) in decreasing order of abrasiveness (10 s each),8 in a slow-speed handpiece.

Immersion in mouthrinses

In the selection of mouthrinses, two main characteristics were considered: the presence of alcohol and dyes. The specimens were randomly allocated into five subgroups according to the mouthrinses:
• Colgate Plax Fresh Mint (Colgate/Palmolive, São Bernardo do Campo, SP, Brazil),
• Oral B (Procter & Gamble, Brazil),
• Cepacol (Aventis Pharma, São Paulo, Brazil),
• Colgate Plax (Colgate/Palmolive, São Bernardo do Campo, Brazil), and
• artificial saliva (made by a compounding pharmacy).
(More specifications are listed in Table 1.)

During 3 weeks, each sample was immersed in 20 mL of mouthrinse for 1 minute under constant agitation in a magnetic agitator, 5 days per week, twice a day (12-hour interval between exposures).5 After each immersion, the specimens were rinsed with water and then stored in distilled water at 37°C8 until the next immersion.

Brushing

Brushing was carried out by means of a toothbrushing machine, following the ISO/DTS 14569-2 specifications for wear testing (Mavtec Comércio Ltda., Ribeirão Preto, Brazil).9 The machine allows six specimens to be brushed simultaneously. The brushing was performed at a speed of 356 rpm, with a dentifrice slurry (Colgate Total and distilled water, 1:1 ratio). The track covered by the brush was 3.8 cm, and the toothbrushing load was standardized at 200 g.10 The toothbrushes were cut at the neck and fixed by screws at both sides and the top of the brush support. Correct adjustment of the screws allowed for proper leveling of the toothbrush.
Each specimen was subjected to toothbrushing with a soft toothbrush (Condor Plus, Condor SA, São Bento do Sul, Brazil). The specimens were brushed weekly for 1 minute, during a period of 3 weeks (356 cycles per session),10 corresponding to brushing for one week, two times per day. The control group used in the study was one in which the specimens were not subjected to brushing and remained only in artificial saliva.

Surface roughness analysis

The first surface roughness measurements were performed after the surfaces were polished. The other readings were performed on the 7th, 14th, and 21st days. The surface roughness of each specimen was determined by means of a profilometer (Surfcom 480A, Tokyo Seimitsu Co., Ltd., Tokyo, Japan). Three scans were performed for each specimen. The stylus speed was 0.6 mm/second, the cut-off was set to 0.25 mm, and the average roughness values (Ra) were recorded.

Statistical analysis

Two-way ANOVA and Tukey's test were performed (5% significance level). Statistical calculations were made with GMC software (Version 2002, available at http://www.forp.usp.br/restauradora/gmc/gmc.html#gmc, Ribeirão Preto, Brazil).

Results

Two-way ANOVA revealed a significant interaction between toothbrushing and mouthrinses (p = 0.0034). Tukey's test revealed that the association between Colgate Plax Fresh Mint and toothbrushing resulted in the lowest surface roughness. All other groups tested (Oral B, Cepacol, Colgate Plax, artificial saliva) exhibited no statistically significant differences between surfaces, whether subjected to toothbrushing or not (p > 0.05). Mean values for all groups are presented in Table 2.
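For readers reproducing this kind of analysis, the following sketch runs a two-way ANOVA with interaction and Tukey's HSD on simulated Ra data laid out in the same 2 × 5 design with n = 10 per cell. The values are placeholders rather than the study's measurements, and statsmodels is used here in place of the GMC software cited above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# Simulated Ra (um) for the 2 (brushing) x 5 (mouthrinse) design, n = 10
# per cell; the numbers are invented for illustration only.
brushing = np.repeat(["brushed", "non-brushed"], 50)
rinse = np.tile(np.repeat(["PlaxFreshMint", "OralB", "Cepacol", "Plax", "Saliva"], 10), 2)
ra = rng.normal(0.15, 0.03, 100)
ra[(brushing == "brushed") & (rinse == "PlaxFreshMint")] -= 0.05  # mimic the reported interaction

df = pd.DataFrame({"Ra": ra, "brushing": brushing, "rinse": rinse})

# Two-way ANOVA with interaction term, as in the paper's analysis.
model = ols("Ra ~ C(brushing) * C(rinse)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the ten brushing-by-rinse cells.
df["cell"] = df["brushing"] + ":" + df["rinse"]
print(pairwise_tukeyhsd(df["Ra"], df["cell"], alpha=0.05))
```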
Discussion

Degradation of composite materials can occur due to mechanical and chemical factors from the oral environment, which can cause changes in surface roughness,11 loss of surface gloss, and increased discoloration of the material,12 affecting the esthetic quality of the restoration. These changes have been attributed to degradation of the polymer matrix or the resin-filler interface, and loss of inorganic filler particles.11,13,14 In non-stress-bearing areas, the main causal factors of texture changes are the relationship between biodegradation and oral hygiene procedures.15 Thus, regular prophylactic procedures, such as toothbrushing, the use of mouthrinses, or a combination of these, may produce deleterious side effects on the surface and physical properties of restorative materials.16 However, information evaluating the potential effects of mouthrinses associated with toothbrushing of composite resins is needed, because this could affect the maintenance of the surface smoothness of the restoration. Similar surface roughness of the composite was observed whether or not toothbrushing was associated with the mouthrinses, except for one type of mouthrinse (Colgate Plax Fresh Mint). The nanofilled composite resin used in this study presented moderate wear and surface roughness.4

The degradation of resin composites is a complex mechanism that depends on the characteristics of the composite, such as the size,17 volume and type of inorganic filler,2,4,18 matrix composition, and degree of conversion.19 The most likely explanation for degradation is the presence of nanosized particles (5-20 nm) throughout the resin matrix,7 which results in less space between the particles, and ultimately more protection of the softer resin matrix4,20 and less filler plucking,20 compared with other types of resins. This leads to lower resin porosity21 and improved abrasion resistance of the material.4,21 Conversely, the types and volumes of inorganic fillers also influence the degradation, which in turn results in higher or lower material solubility. Theoretically, a larger total surface area of nanofilled particles with nonagglomerated 20 nm silica fillers allows more water to accumulate at the filler particle-polymeric matrix interfaces, thus increasing salivary sorption and solubility.17

The monomer type also directly influences the potential water sorption of the material. Monomers like UDMA, Bis-GMA, and TEGDMA contain polar groups such as -OH-, -O-, and -NH-. These groups increase the material's hydrophilicity,19 probably making it more prone to salivary sorption;17 however, these factors cannot be considered to influence this study, since they were standardized for all groups.

The dentifrice used in this study (Colgate Total, RDA 70) was of low abrasion,12 being more gentle to the composite surface. It has been found that the major factors influencing abrasion are dentifrice and toothbrush characteristics.4 Factors related to the dentifrice are the type of abrasive, the sizes of the particles, and the dilution proportion, while factors related to the toothbrush are the number, stiffness, and shapes of tufts and bristles.4 In addition, a soft toothbrush was used to promote low abrasion.22 As a rule, the correlation between toothbrush filament diameter and abrasion was lower than the correlation between toothpaste abrasiveness and resulting abrasion, and this should be considered.23 This demonstrates that abrasion of composite resins is affected more by the toothpaste's abrasiveness than by the toothbrush filament diameter.23 Although the dentifrice used has low abrasiveness and the brush is soft, the joint action of these can explain the higher values found for most groups subjected to brushing in comparison with the non-brushed groups.
Another factor that could affect the resulting surface roughness of the composite resin is the mouthrinse. This study revealed that the result was not dependent upon the type of mouthrinse used. Artificial saliva and the non-alcohol mouthrinse solution (Oral B) presented statistically similar surface roughness values. However, the values of surface roughness for Colgate Plax Fresh Mint, which contains less alcohol (6%), were lower than those for Cepacol and Colgate Plax, which contain 14.5% and 8.7% alcohol, respectively. Regarding the pH, the lowest value was for Oral B (5.36), and the highest was found for Cepacol (7.29). These differences may account for the different behavior of the resins when subjected to different mouthrinses. It has been found that low-pH mouthrinses with higher alcohol content may affect some physical-mechanical properties of resin composites, producing softening of esthetic restorative materials.24 Therefore, the role of pH was not as clear during the test period, and longer storage periods may result in statistically significant differences.11

Different results for surface roughness have been obtained in other studies.11,12 In our study, the specimens were immersed in artificial saliva during the experimental period. The leaching pattern has been shown to be consistent for at least one year, and was more evident for composites stored in artificial saliva than for those stored in distilled water.25 This procedure may also influence roughening from toothbrushing, because saliva contains specific proteins and ions that may diminish the roughening effect of the toothbrush;16 however, despite all of that, this study showed higher roughness values resulting from immersion in saliva compared with Colgate Plax Fresh Mint.

It is important to notice that the pattern of toothbrushing wear on restorative materials is the result of the interaction of several factors, and studies have reported the effects on other restorative materials such as glass-ionomer cements, pit and fissure sealants, resin-modified glass ionomers, and ceramics. Further research under clinical conditions should be performed to confirm the results obtained in this laboratory study.

Conclusions

Within the limitations of this in vitro study, we concluded that the surface roughness of a nanofilled composite resin can be influenced by mouthrinse solutions when associated with toothbrushing; however, mouthrinse by itself does not affect the surface roughness of composite resins.

Table 1 - Immersion solutions used in this study.

| Mouthrinse | Composition | pH | Alcohol % | Manufacturer |
| Colgate Plax Fresh Mint | Triclosan (0.03%), sodium fluoride (225 ppm), copolymer PVM/MA (0.020%), ethyl alcohol, dye | 6.6 | 6% | Colgate-Palmolive, Brazilian industry |
| Oral-B | Polysorbate 20, flavor, methylparaben (0.053%), sodium fluoride (226 ppm), sodium saccharin, sodium benzoate, propylparaben, dye | 5.36 | – | Laboratories Rety, Colombian industry |
| Cepacol | Cetylpyridinium chloride, sodium phosphate, sodium saccharin, disodium EDTA, polysorbate, glycerin, alcohol | 7.29 | 14.5% | Sanofi-Aventis, Brazilian industry |
| Colgate Plax | Aqua, glycerin, propylene glycol, sorbitol, PEG-40 hydrogenated castor oil, cetylpyridinium chloride, sodium saccharin, sodium fluoride | – | 8.7% | Colgate-Palmolive, Brazilian industry |

Table 2 - Mean and standard deviation of surface roughness (Ra) of composite resins.
TGF-β/IL-7 Chimeric Switch Receptor-Expressing CAR-T Cells Inhibit Recurrence of CD19-Positive B Cell Lymphoma

Chimeric antigen receptor (CAR)-T cells are effective in the treatment of hematologic malignancies but have shown limited efficacy against solid tumors. Here, we demonstrated an approach to inhibit recurrence of B cell lymphoma by co-expressing both a human anti-CD19-specific single-chain variable fragment (scFv) CAR (CD19 CAR) and a TGF-β/IL-7 chimeric switch receptor (tTRII-I7R) in T cells (CD19 CAR-tTRII-I7R-T cells). The tTRII-I7R was designed to convert immunosuppressive TGF-β signaling into immune-activating IL-7 signaling. The effect of TGF-β on CD19 CAR-tTRII-I7R-T cells was assessed by western blotting. Target-specific killing by CD19 CAR-tTRII-I7R-T cells was evaluated by Eu-TDA assay. Daudi tumor-bearing NSG (NOD/SCID/IL2Rγ-/-) mice were treated with CD19 CAR-tTRII-I7R-T cells to analyze the in vivo anti-tumor effect. In vitro, CD19 CAR-tTRII-I7R-T cells had a lower level of phosphorylated SMAD2 and a higher level of target-specific cytotoxicity than controls in the presence of rhTGF-β1. In the animal model, the overall survival and recurrence-free survival of mice that received CD19 CAR-tTRII-I7R-T cells were significantly longer than in control mice. These findings strongly suggest that CD19 CAR-tTRII-I7R-T cell therapy provides a new strategy for long-lasting, TGF-β-resistant anti-tumor effects against B cell lymphoma, which may ultimately lead to increased clinical efficacy.

Introduction

In recent years, adoptive T cell immunotherapy has emerged as a promising therapy for cancer patients [1]. In particular, chimeric antigen receptor (CAR)-T cell therapy has dramatically shifted the landscape of treatment for lymphoid malignancies [2]. CAR-T cells are genetically engineered T cells that carry major histocompatibility complex (MHC)-independent specific antigen (Ag) receptors and co-stimulatory molecules, and that can therefore induce an immune response against cells expressing cancer-associated Ags [3,4]. CAR-T cell therapy has been successful in hematologic malignancies such as acute lymphoblastic leukemia and chronic lymphocytic leukemia [5-7]. However, in solid tumors, which include B cell lymphoma, CAR-T cell therapy faces multiple challenges and has had only limited success, largely because of the immunosuppressive tumor microenvironment (TME) [8-10].

The microenvironment of solid tumors often protects tumor cells from the immune system, and a hostile TME is a strong barrier to effective CAR-T cells. One of the most significant hurdles is the suppression of CAR-T cell function by soluble immunosuppressive factors [11]. Transforming growth factor-beta (TGF-β) is a soluble immunosuppressive cytokine commonly present in the solid tumor TME. The TGF-β protein is released as an inactive latent complex. Latent TGF-β is usually activated by proteases such as matrix metalloprotease (MMP)-9 and MMP-2 [12]. In mammals, TGF-β1 is predominantly expressed by hematopoietic cells, whereas two other members of the TGF-β family, TGF-β2 and TGF-β3, are present in negligible amounts and are thought to have an insignificant role in the immune system [13]. Active TGF-β1 binds to TGF-β receptor (TR) II on the T cell surface. The interaction of TGF-β1 with the receptor results in activation of the TRII intracellular kinase domain, which recruits and phosphorylates TRI.
The resulting heterodimeric TR complex consists of TRI and TRII, and its phosphorylation induces the phosphorylation of receptor-regulated (R-) mothers against decapentaplegic homologs (SMADs). Phosphorylated R-SMADs form homo-oligomeric and hetero-oligomeric complexes with co-mediator (Co-) SMADs. These complexes are translocated to the nucleus, where they associate with DNA-binding co-factors and transcriptional co-activators (Co-A) and/or co-repressors to regulate the transcriptional activity of target genes, resulting in cell cycle inhibition [14]. TGF-β promotes tumor invasion and metastasis and inhibits T cell activation and proliferation [15-17].

Many strategies have been designed to circumvent the T cell inhibitory effects of TGF-β for cancer therapy. For example, the expression of a dominant-negative TRII (tTRII) was used to create T cells that are insensitive to TGF-β. tTRII is truncated and lacks the intracellular domains necessary for downstream signaling. Therefore, it renders effector T cells resistant to TGF-β but leaves their proliferation, cytokine secretion, and cytolytic functions unchanged [18-20]. Expression of tTRII enhances anti-tumor immunity [21,22]. Meanwhile, immune-stimulatory cytokines such as interleukin (IL)-7 play an important role in anti-tumor immunity [23]. The binding of IL-7 to its receptor (I7R) results in the phosphorylation of tyrosine residues on the receptor. This leads to activation of the Janus kinase (JAK)/signal transducer and activator of transcription (STAT) 5 and phosphoinositide 3-kinase (PI3-K)/Protein kinase B (PKB, or Akt)/mechanistic target of rapamycin (mTOR) signaling pathways [24]. IL-7 signaling through I7R plays a critical role in T cell survival, activation, and proliferation, as well as in memory T cell (T_M) formation [25-28]. Therefore, activation of I7R signaling can enhance T cell anti-tumor effects.

Chimeric switch receptors (CSRs) are designed to reverse the outcomes of their original signaling pathways. These receptors have been used to confer on immune cells the ability to overcome immunosuppressive signals in the TME and to increase their in vivo efficacy and persistence. CSRs combine the extracellular portion of an inhibitory receptor with an alternative signaling domain that provides an immune-activating function [29]. Based on our understanding of the TGF-β and IL-7 signaling pathways, we developed a TGF-β/IL-7 CSR encoding the cytokine-binding portion of the TGF-β receptor extracellular domain linked to the immunostimulatory I7R signaling endodomain (tTRII-I7R). Therefore, whenever TGF-β binds to the tTRII-I7R, instead of transducing a TRII-mediated inhibitory signal, it will transduce an I7R-mediated immune-activating signal. This signal is expected to promote potent and sustained T cell-dependent anti-tumor effects in the TGF-β-rich TME. The aim of this study was to determine whether T cells co-expressing both a human anti-CD19-specific single-chain variable fragment (scFv) CAR (CD19 CAR) and tTRII-I7R (CD19 CAR-tTRII-I7R-T cells) showed improved anti-tumor efficacy and inhibition of recurrence in a CD19+ B cell lymphoma mouse model.

Characterization of CAR-T Cells

We constructed three CARs that incorporated (1) the CD19 CAR only, (2) the CD19 CAR and tTRII (CD19 CAR-tTRII), and (3) the CD19 CAR and tTRII-I7R (CD19 CAR-tTRII-I7R) (Figure 1).
Each CAR was cloned into a third-generation self-inactivating (SIN) lentiviral vector under the control of the human elongation factor 1 alpha (EF1α) promoter (pPVLV2) and tested in peripheral blood mononuclear cell (PBMC)-derived activated human T cells. CD19 CAR-expressing T cells (CD19 CAR-T cells, CD19 CAR-tTRII-T cells, and CD19 CAR-tTRII-I7R-T cells) showed significantly superior expansion compared with untransduced T cells at day (D) 9 post-transduction (Figure 2A). Subsequently, to verify the expression of the CD19 CAR and tTRII, all T cells were stained with fluorescein isothiocyanate (FITC)-conjugated recombinant human (rh) CD19 and allophycocyanin (APC)-conjugated anti-TRII and analyzed by flow cytometry. All CD19 CAR-expressing T cells stably expressed the CD19 CAR construct, and the tTRII-expressing T cells (CD19 CAR-tTRII-T cells and CD19 CAR-tTRII-I7R-T cells) also stably expressed the tTRII construct. The transduction efficiency of each CAR-T cell type was approximately 40-65% (Figure 2B). These results demonstrate efficient expression of the CD19 CAR and tTRII constructs in T cells. Furthermore, the expression of these constructs increased the T cell expansion efficiency.

tTRII Improves TGF-β1-Mediated Inhibition of Ag-Specific Tumor Killing by CAR-T Cells

To investigate the effect of tTRII on CAR-T cells, we exposed untransduced T cells, CD19 CAR-T cells, CD19 CAR-tTRII-T cells, and CD19 CAR-tTRII-I7R-T cells to rhTGF-β1 for 24 h (hours). TGF-β1 binds to a specific cell surface receptor on T cells, TRII, and ligand binding to this receptor results in the activation of SMAD2. Western blot analysis revealed that levels of phosphorylated (p) SMAD2 in tTRII-expressing CAR-T cells were markedly lower than in CD19 CAR-T cells without tTRII expression (Figure 3A). In addition, we confirmed that levels of phosphorylated Tyr284, one of the major phosphorylation sites in the TR downstream signaling pathway [30], in tTRII-expressing CAR-T cells were markedly lower than in CD19 CAR-T cells (data not shown). These data indicate that tTRII acts as a dominant-negative inhibitor of the TGF-β signaling pathway.

Next, to test whether tTRII-expressing CAR-T cells maintain their ability to produce pro-inflammatory cytokines (interferon-gamma (IFN-γ) and tumor necrosis factor-alpha (TNF-α)) in the presence of TGF-β1, we cultured each T cell type with or without 10 ng/mL rhTGF-β1 for 24 h. Expression of mRNA encoding pro-inflammatory cytokines was analyzed by quantitative reverse transcription-polymerase chain reaction (qRT-PCR). In the absence of rhTGF-β1, CD19 CAR-expressing T cells showed higher mRNA expression of pro-inflammatory cytokines than untransduced T cells. Interestingly, tTRII-expressing CAR-T cells maintained a high level of pro-inflammatory cytokine mRNA expression in the presence of rhTGF-β1, while the level of expression decreased in CD19 CAR-T cells without tTRII expression (Figure 3B). These results indicate that CAR-T cells expressing tTRII retain the ability to produce pro-inflammatory cytokines.

Then, to study the effect of tTRII on the Ag-specific cytotoxicity of CAR-T cells, cytolytic activity was measured by europium (Eu)-2,2′:6′,2″-terpyridine-6,6″-dicarboxylate (TDA) release assay. All CD19 CAR-expressing T cells showed significant cytotoxicity against CD19+-K562 cells in the absence of rhTGF-β1. However, in the presence of rhTGF-β1, CD19 CAR-T cells lost their cytotoxicity against CD19+-K562 cells. By contrast, tTRII-expressing CAR-T cells showed a high level of Ag-specific cytotoxicity in the presence of rhTGF-β1 (Figure 3C). These results suggest that tTRII induces resistance to TGF-β1-mediated suppression of Ag-specific cytotoxicity. Taken together, these results suggest that tTRII reduces the inhibitory effect of TGF-β1 on Ag-specific tumor killing by CAR-T cells by blocking the TGF-β1 signaling pathway.
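The percent specific lysis in a Eu-TDA (and similar chromium-release) cytotoxicity assay is conventionally computed from experimental, spontaneous, and maximum release; the helper below sketches this standard formula with hypothetical fluorescence counts rather than values measured in this study.

```python
def percent_specific_lysis(experimental, spontaneous, maximum):
    """Standard specific-release formula for europium (and 51Cr) assays:
        % lysis = (experimental - spontaneous) / (maximum - spontaneous) * 100
    where the counts are readings from target-cell supernatants."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100.0

# Hypothetical counts for a few E:T ratios (assumed values, for illustration):
spont, maxi = 4200.0, 52000.0          # targets alone / fully lysed targets
for et_ratio, counts in [(10, 38000.0), (5, 27500.0), (1, 11000.0)]:
    lysis = percent_specific_lysis(counts, spont, maxi)
    print(f"E:T {et_ratio:>2}:1 -> {lysis:5.1f}% specific lysis")
```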
[Figure 3 caption: (A) Western blots for pSMAD2, SMAD2, and GAPDH. CD19 CAR-, CD19 CAR-tTRII-, and CD19 CAR-tTRII-I7R-T cells were cultured with rhTGF-β1 (10 ng/mL) for 24 h starting at D9 post-transduction. Whole-cell lysates were evaluated by western blotting for pSMAD2, SMAD2, and GAPDH. The pSMAD2 expression of untransduced T cells (control) was defined as 100%. Data were obtained by densitometric analysis of western blots and are expressed as the mean ± SEM. (B) IFN-γ and TNF-α mRNA levels in CD19 CAR- and CD19 CAR-tTRII-T cells by qRT-PCR. CD19 CAR-, CD19 CAR-tTRII-, and CD19 CAR-tTRII-I7R-T cells were cultured with or without rhTGF-β1 (10 ng/mL) for 24 h starting at D9 post-transduction. Each T cell type was mixed with CD19+-K562 cells for 24 h and then total mRNA was extracted. IFN-γ and TNF-α mRNA levels were determined by qRT-PCR, with 18S rRNA as an internal control. Data are expressed as the mean ± SEM. (C) CD19 CAR-, CD19 CAR-tTRII-, and CD19 CAR-tTRII-I7R-T cells demonstrate Ag-specific killing of CD19+ tumor cells in the presence of TGF-β1. The cytolytic activity of transduced CAR-T cells was determined in a 4 h Eu-TDA release assay. T cells were harvested and cultured with rhTGF-β1 (10 ng/mL) for 72 h before use in cytotoxicity assays. Target cell lines were labeled with BATDA for 15 min and subsequently combined with the transduced T cells at the indicated E:T ratios. Lysis was determined after 4 h of incubation. * p < 0.05. ** p < 0.01.]

CD19 CAR-tTRII-I7R-T Cells Show Increased Anti-Tumor Efficacy In Vivo

To determine whether CD19 CAR-tTRII-I7R-T cells show improved anti-tumor efficacy in vivo, NSGA mice were inoculated with Daudi-Fluc tumor cells. On D10 post-injection (PI), mice received CD19 CAR-T cells (Group CAR-19, n = 5), CD19 CAR-tTRII-T cells (Group tTRII, n = 4), CD19 CAR-tTRII-I7R-T cells (Group I7R, n = 4), or phosphate-buffered saline (PBS) (Group negative control [NC], n = 5). In all mice (n = 18), in vivo bioluminescence imaging was performed on D10, D14, D21, D28, D35, D42, D49, D59, D63, D77, and D84 PI (Figure 4A). Imaging performed on D21 PI revealed a significant decrease in the total flux from all mice that received CD19 CAR-expressing T cells compared with Group NC (Figure 4B), suggesting that CD19 CAR-expressing T cells effectively killed Daudi-Fluc tumor cells. As shown in Figure 4B, all mice from Group NC died by D21-28 PI. All mice from Group CAR-19 and Group tTRII died by D35-77 and D35-63, respectively. All mice from Group I7R survived more than 84 days (d). In Group CAR-19, all mice died due to tumor recurrence on D35 or D63. In Group tTRII, one mouse died of unknown causes on D59, and three mice died due to tumor recurrence on D35 or D59. By contrast, in Group I7R, all mice survived and were tumor-free until the experimental endpoint of 84 d. Group I7R therefore showed a notably improved survival rate (Figure 4C) and reduced tumor recurrence (Figure 4D) when compared with Group CAR-19 and Group tTRII. Taken together, these results demonstrate that CD19 CAR-tTRII-I7R-T cell therapy prolongs survival and prevents tumor recurrence in a CD19+ B cell lymphoma mouse model.
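Survival comparisons of this kind are typically made with Kaplan-Meier estimates and a log-rank test; the sketch below uses the lifelines package on hypothetical event times loosely patterned on the group sizes above. The times and censoring flags are assumptions for illustration, and the excerpt does not specify which survival statistics the study itself used.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival times (days) and event flags (1 = death observed,
# 0 = censored at the D84 endpoint); not the study's actual data.
days_i7r, events_i7r = [84, 84, 84, 84], [0, 0, 0, 0]          # Group I7R
days_car, events_car = [35, 49, 59, 63, 77], [1, 1, 1, 1, 1]   # Group CAR-19

kmf = KaplanMeierFitter()
kmf.fit(days_i7r, events_i7r, label="Group I7R")
print(kmf.survival_function_)
kmf.fit(days_car, events_car, label="Group CAR-19")
print(kmf.survival_function_)

# Log-rank test for a difference between the two survival curves.
res = logrank_test(days_i7r, days_car, events_i7r, events_car)
print(f"log-rank p = {res.p_value:.4f}")
```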
Discussion

Non-Hodgkin's lymphoma (NHL) is the 13th most commonly diagnosed cancer and the 12th leading cause of cancer death, with approximately 544,000 new cases and 260,000 deaths worldwide in 2020 [31]. B cell lymphoma, the most common type of NHL, is a malignant neoplasm derived from B cells that affects mainly the lymph nodes, spleen, and other non-hematopoietic tissues [32].

The CD19 Ag is an attractive target for CAR-based T cell therapy, since it is a B cell lineage-specific surface molecule that is expressed on normal and most malignant B cells, but not on hematopoietic stem cells [33]. Currently, two commercial CAR-T cell products targeting the CD19 Ag have been FDA-approved: tisagenlecleucel (Kymriah, Novartis) and axicabtagene ciloleucel (Yescarta, Kite/Gilead). Multicenter global phase II trials have evaluated the safety and efficacy of these two CAR-T cell products in adult refractory or relapsed (R/R) B cell acute lymphoblastic leukemia (B-ALL) and B cell NHL (B-NHL). The efficacy of tisagenlecleucel in children and young adults with R/R B-ALL was assessed in the ELIANA trial, which showed an overall response rate (ORR) and complete remission rate (CRR) of 80% and 60%, respectively [6]. In adults with R/R B-NHL, the efficacy of tisagenlecleucel was evaluated in the JULIET trial and the efficacy of axicabtagene ciloleucel was evaluated in the ZUMA-1 trial. Both trials showed similar ORRs (50-80%) and CRRs (40-50%) [8,9,34,35]. CD19 CAR-T cell therapy in B-NHL patients results in a lower CRR than in B-ALL patients. It is thought that the TME plays a more prominent role in CAR-T cell anti-tumor efficacy in B-NHL than in B-ALL, as CAR-T cell penetration of a solid tumor mass is limited, and the TME inhibits T cell function [11].

To achieve therapeutic success within solid tumors, CAR-T cells need to overcome immunosuppressive signals within the TME. Many cancers, including B cell lymphoma, are known to secrete TGF-β, which promotes immunosuppression within the TME. TGF-β has a crucial immunosuppressive role in both innate and adaptive immune responses [36]. It can directly dampen the function of CD8+ and CD4+ T cells while promoting the recruitment and differentiation of regulatory T cells. It can inhibit the cytotoxic function of tumor-specific cytolytic T cells (CTLs) and promote T cell apoptosis. It can also induce the differentiation of CD4+ T cell subsets with immune-regulatory properties [37].

Previous studies have reported that TGF-β signaling can be blocked using tTRII in vitro and in vivo, in mouse models. These studies demonstrated that Ag-specific T cells expressing a transgenic tTRII were resistant to the inhibitory effects of TGF-β without impaired Ag specificity [21,22,38]. Furthermore, several groups have explored immune checkpoint-based CSRs that consist of the extracellular and transmembrane domains of an immune checkpoint fused to a cytoplasmic CD28 domain, such as a PD-1/CD28 CSR [39,40] or a CTLA4/CD28 CSR [41]. These CSRs convert the negative signaling of the immune checkpoint into a positive signal in Ag-specific T cells [29]. In addition, previous reports have shown the long-term benefits of IL-7 therapy on the anti-tumor efficacy of Ag-specific effector CD8+ T cells [42]. This results from IL-7/I7R-mediated activation of both the JAK-STAT and PI3K-Akt-mTOR signaling pathways.
These signaling pathways promote T cell survival and proliferation. Interestingly, while some studies have suggested that IL-7 might have anti-tumor effects, other studies indicate that IL-7 might also have potential pro-tumor effects. In addition to its anti-apoptotic effect, IL-7 may also promote c-Fos and c-Jun activity in cancers such as non-small cell lung cancer. Thus, IL-7 treatment seems also to have a potential tumor-promoting effect [23].

Based on these observations, we designed a TGF-β/IL-7 CSR encoding the cytokine-binding site of the TGF-β receptor extracellular domain linked to the immunostimulatory I7R signaling endodomain (tTRII-I7R). We generated CD19 CAR-tTRII-I7R-T cells co-expressing both the CD19 CAR and tTRII-I7R. We demonstrated CD19 CAR and tTRII expression in CD19 CAR-T cells, CD19 CAR-tTRII-T cells, and CD19 CAR-tTRII-I7R-T cells. Although tTRII-expressing CAR-T cells showed little activation of SMAD2, their regular functions (pro-inflammatory cytokine secretion and Ag-specific cytotoxicity) were sustained even in the presence of TGF-β1. In a CD19+ B cell lymphoma mouse model, the overall survival and recurrence-free survival rates of Group I7R were significantly increased compared with the control groups. In particular, there was no recurrence in any mouse in Group I7R, unlike in the other groups (Group CAR-19, Group tTRII, and Group NC). Thus, the efficacy of CD19 CAR-tTRII-I7R-T cells is significant and in line with other studies using CSRs. To the best of our knowledge, this is the first report that B cell lymphoma therapy using tTRII-I7R and CD19 CAR-tTRII-I7R-T cells could be beneficial for the treatment of solid tumors. Therefore, it is expected that CD19 CAR-tTRII-I7R-T cells could be applied clinically as a new treatment strategy for patients suffering from CD19+ B cell lymphoma. In addition, for other solid cancers, T cells that simultaneously express tTRII-I7R and a CAR targeting the cancer-associated Ags of each solid cancer are expected to be used to overcome the TME. It may be necessary to generate CD19 and CD20 bi-specific CAR-tTRII-I7R-T cells to overcome the problem of CD19 Ag loss and subsequent relapse. The tTRII-I7R CSR may lead to the development of powerful CAR-T cell-based immunotherapies that overcome the immunosuppressive effects of TGF-β in the TME.

Ethics Statement

All protocols involving the use of animals were approved by the Institutional Animal Care and Use Committee of PharosVaccine Inc. (PV-IACUC-2108), and all experiments were carried out in accordance with these approved protocols.

Mice

NOD/ShiLtJ-Prkdc^em1Baek Il2rg^em1Baek (NSGA) mice (female, 4-5 weeks old, weight 25 ± 2 g; JA BIO, Gyeonggi, Republic of Korea) were housed and maintained according to the guidelines of the Association for the Assessment and Accreditation of Laboratory Animal Care. All mice were housed in a temperature- and humidity-controlled room under a 12 h light/dark cycle.

Construction of Lentivirus-Based Vectors and Vector Design

The CD19 CAR construct comprised an scFv derived from the CD19-specific FMC63 monoclonal antibody (Ab), the hinge and transmembrane regions of the CD8 molecule, the 4-1BB co-stimulatory domain, and the intracellular CD3ζ chain of the T cell receptor (TCR) complex. The tTRII construct was created by truncating human TRII to remove the intracellular kinase domain, and the tTRII-I7R construct was created by fusing tTRII and the I7R intracellular domain.
Co-expression of the CD19 CAR and either tTRII or tTRII-I7R was achieved by linking the respective gene-encoding sequences to the CAR expression cassette with P2A (Figure 1). These genes were cloned into the backbone of the third-generation SIN lentiviral vector pPVLV2 (Pharos Vaccine Inc., Gyeonggi, Republic of Korea), which includes the human EF1α (212 bp) promoter region.

Lentiviral Vector Production and Titration

The lentivirus was packaged with the pPVLV2, pMDLg/pRRE, pRSV-Rev, and pMD.G plasmids by transfection of 293T cells, as described previously [43,44]. Briefly, 2.5 × 10^7 293T cells were seeded into a T175 flask 24 h before transfection. For lentivirus production, 38.72 µg of pPVLV2, which encodes the CD19 CAR, CD19 CAR-tTRII, or CD19 CAR-tTRII-I7R, 19.36 µg each of pMDLg/pRRE and pRSV-Rev, and 9.69 µg of pMD.G were mixed with 5 mL Opti-MEM with P3000 reagent. In parallel, 180.4 µL of Lipofectamine 3000 and 5 mL Opti-MEM were mixed. The DNA mixture and the Lipofectamine mixture were combined in equal volumes and incubated for 15 min at room temperature (RT) before being used to transfect the 293T cells. After 4 h, the DNA-Lipofectamine mixture was removed, and 15 mL fresh DMEM supplemented with 2% FBS, 100 U/mL penicillin/streptomycin, 2 mM L-glutamine, and 1 mM sodium pyruvate was added. Twenty-four hours later, the culture medium from the transfected 293T cells was collected and centrifuged (450× g for 15 min at 4 °C). The supernatant containing the lentivirus was filtered through a polyvinylidene difluoride (PVDF) membrane (0.45 µm pore size) and concentrated by ultracentrifugation (20,000× g for 1.5 h at 4 °C). The supernatant was discarded and the pellet was resuspended in PBS. The functional titer of the lentivirus was determined by transducing 293T cells seeded in six-well plates (2 × 10^5 cells per well) with serial dilutions of lentivirus in DMEM supplemented with 10% FBS and 100 U/mL penicillin/streptomycin. Expression of CD19 CAR or tTRII was determined by flow cytometric analysis 72 h post-transduction, and lentivirus stock titers were calculated in transducing units (TU) per mL based on the flow cytometric analysis results.

Generation of CAR-T Cells

Human PBMCs were purchased from STEMCELL Technologies (Vancouver, BC, Canada). Human CD3+ T cells were isolated from human PBMCs using the EasySep Human CD3 Positive Selection Kit II (STEMCELL Technologies, Vancouver, BC, Canada). CD3+ T cells were stimulated with anti-CD3/CD28 Dynabeads (Thermo Fisher Scientific) at a ratio of 3:1 and cultured in IMSF100 (SOFCO, Billingham, UK) medium supplemented with 30 ng/mL rhIL-21 (Peprotech, Rocky Hill, NJ, USA) and 200 IU/mL rhIL-2 (BMI Korea, Gyeonggi, Republic of Korea) for 48 h. Activated T cells were resuspended at 1 × 10^6 cells/mL in IMSF100 medium supplemented with 30 ng/mL rhIL-21 and 200 IU/mL rhIL-2. Activated T cells were mixed with lentivirus (produced as described above) at a multiplicity of infection (MOI) of 1.0 in the presence of 8 µg/mL polybrene (Sigma-Aldrich, Saint Louis, MO, USA), and culture plates were centrifuged (12,000× g for 2 h at 32 °C). After incubation for 24 h, the transduced cells were harvested, washed, and plated at 3 × 10^6 cells/mL in IMSF100 medium containing 200 IU/mL rhIL-2. Transduced cells were expanded for approximately 12 d, and the culture medium was exchanged with fresh medium every 2-3 d.
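As a worked illustration of the titration step described above, here is a minimal sketch of how a functional titer in TU/mL might be computed from the flow cytometric results. The cell numbers, percent-positive value, and dilution are hypothetical, and the linear-range caveat is a general rule of thumb rather than a statement from this protocol.

```python
def functional_titer_tu_per_ml(cells_at_transduction, fraction_positive, volume_ml):
    """Estimate lentivirus functional titer in transducing units (TU)/mL.

    Assumes roughly one integration event per positive cell, which holds
    best in the linear range of a dilution series (low % positive wells).
    """
    return cells_at_transduction * fraction_positive / volume_ml

# Hypothetical example: 2 x 10^5 293T cells per well, 12% CAR+ by flow
# cytometry 72 h after adding 10 uL of a 1:100 dilution of the stock.
volume_ml = 10 / 1000              # 10 uL of diluted virus, in mL
dilution_factor = 100              # dilution applied before transduction
titer = functional_titer_tu_per_ml(2e5, 0.12, volume_ml) * dilution_factor
print(f"Estimated stock titer: {titer:.2e} TU/mL")  # ~2.4e+08 TU/mL
```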
To evaluate the activation of TGF-β1 downstream signaling, the production of pro-inflammatory cytokines, and the cytotoxicity of CAR-T cells, untransduced T cells and CAR-T cells were incubated with 10 ng/mL rhTGF-β1 (Peprotech) for 24-72 h starting at D9 post-transduction. Cultures were maintained at 37 °C/5% CO2.

Flow Cytometric Analysis

To determine the surface expression of CD19 CAR and tTRII, cells were stained with a FITC-conjugated rhCD19 protein (AcroBiosystems, Newark, DE, USA) and an APC-conjugated TRII Ab (Abcam, USA). Cells were incubated on ice for 20 min and then washed with PBS containing 1% bovine serum albumin (BSA) and 0.05% NaN3. Stained cells were acquired on a MACSQuant Analyzer 10 (Miltenyi Biotec, Bergisch Gladbach, North Rhine-Westphalia, Germany). Data were analyzed using FlowJo software (TreeStar, San Carlos, CA, USA).

Western Blotting

To detect intracellular SMAD2 activation in response to TGF-β1 downstream signaling in untransduced T cells and CAR-T cells, cells were incubated with 10 ng/mL rhTGF-β1 for 24 h starting at D9. Untransduced T cells and CAR-T cells were lysed using the Pro-Prep protein extraction kit (iNtRON Biotechnology, Gyeonggi, Korea) in the presence of a serine/threonine phosphatase inhibitor cocktail (Sigma-Aldrich). The protein concentration was measured using a Bradford assay kit (Sigma-Aldrich). Equal amounts of protein were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes (Thermo Fisher Scientific). The membranes were blocked with 10% (w/v) skim milk in Tris-buffered saline with 0.1% Tween 20 detergent (TBST) and then incubated with primary antibodies specific for p-SMAD2 or SMAD2 (diluted 1:1000; Cell Signaling Technology, Danvers, MA, USA) or glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (diluted 1:1000; Bioss, Woburn, MA, USA) overnight at 4 °C. The membranes were washed with TBST 5 times for 1 h at RT and then incubated with horseradish peroxidase (HRP)-conjugated anti-mouse or anti-rabbit immunoglobulin (Ig) G (diluted 1:10,000; Santa Cruz Biotechnology, Dallas, TX, USA) for 2 h at RT. The membranes were washed with TBST 5 times for 1 h at RT and exposed to enhanced chemiluminescence (ECL) reagents (Thermo Fisher Scientific). Signals were detected using a luminescent image analyzer (LAS-4000; Fujifilm, Tokyo, Japan).

qRT-PCR

To evaluate the production of pro-inflammatory cytokines in response to TGF-β1 in untransduced T cells and CAR-T cells, cells were incubated with or without 10 ng/mL rhTGF-β1 for 24 h starting on D9. Cells were then co-cultured with CD19+-K562 cells for an additional 24 h and harvested. Total RNA was extracted using the PureLink RNA Mini Kit (Thermo Fisher Scientific). The prepared RNA was subjected to DNase digestion, and the concentration was measured using an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). Total RNA was reverse-transcribed using the SensiFAST cDNA Synthesis kit (Bioline, London, UK), and cDNA samples were analyzed by real-time qRT-PCR using specific primers and the SensiFAST SYBR Lo-ROX kit (Bioline). PCR was conducted using a QuantStudio 3 Real-Time PCR detection system (Applied Biosystems, Foster City, CA, USA). 18S rRNA was amplified as an endogenous control.
The primers used for real-time qRT-PCR were as follows: IFN-γ: forward CCC ATG GGT TGT GTG TTT ATT T, reverse AAA CCG GCA GTA ACT GGA TAG; TNF-α: forward AGA GGG AGA GAA GCA ACT ACA, reverse GGG TCA GTA TGT GAG AGG AAG A; 18S rRNA: forward CTG AGA AAC GGC TAC CAC ATC, reverse GCC TCG AAA GAG TCC TGT ATT G. Relative expression was calculated using the ΔΔCt method and expressed as the fold change using the formula 2^−ΔΔCt. All experiments were run in triplicate.

Cytotoxicity of CD19 CAR-T Cells

At D12 post-transduction, the DELFIA cytotoxicity assay (PerkinElmer, Waltham, MA, USA) was used as described previously to assess whether tTRII can block TGF-β1-mediated inhibition of the Ag-specific cytotoxicity of CAR-T cells [45]. Briefly, CD19 CAR-T cells, CD19 CAR-tTRII-T cells, and CD19 CAR-tTRII-I7R-T cells were incubated with or without 10 ng/mL rhTGF-β1 for 72 h starting at D9 and used as effector cells (E) at D12. CD19+-K562 cells (CD19 CAR Ag-matched cells) and K562 cells (CD19 CAR Ag-mismatched cells) were labeled with bis(acetoxymethyl) TDA (BATDA) for 30 min and used as target cells (T). BATDA-labeled target cells (1 × 10^3 cells/well) were co-cultured with effector cells at 2.5-20 × 10^3 cells/well (E:T ratios of 2.5-20:1) in 96-well round-bottom plates. After 4 h, 20 µL of supernatant was collected and mixed with 200 µL of Eu solution, and the fluorescent signal was detected using a Varioskan LUX multimode microplate reader (Thermo Fisher Scientific). The maximum release of TDA was determined by treating BATDA-labeled target cells with lysis buffer (PerkinElmer). Spontaneous release of TDA was measured by sampling the supernatant from wells containing only BATDA-labeled target cells growing in culture medium. The percent specific cytolysis was calculated using the formula:

% specific cytolysis = [(Experimental release − Spontaneous release) / (Maximum release − Spontaneous release)] × 100

Bioluminescence Imaging

Animals were included in the study if they underwent successful i.v. injection of Daudi-Fluc cells and either CAR-T cells or PBS; the inclusion criterion was a region-of-interest (ROI) signal of 1 × 10^5 or more, and animals below this threshold were excluded. Cultured Daudi-Fluc cells were detached from the culture plates using trypsin/ethylenediaminetetraacetic acid (EDTA), washed with PBS twice, and resuspended at 1 × 10^7 cells/mL in PBS. NSGA mice received 1 × 10^6 Daudi-Fluc cells via tail vein injection on D0, followed by an injection of 3 × 10^6 CAR-T cells on D10. All mice met our inclusion criteria, so a total of 18 animals were included in the analysis and divided into four different groups (4-5 animals per group). Cages were given a numerical designation on the basis of their position on the rack, and for each group, a cage was selected randomly from the pool of all cages. All animals were given their permanent numerical designation in the cages, and the cages were then randomized within the exposure group. For each animal, two different investigator groups were involved: one group administered the treatment based on the randomized condition and was aware of the treatment group allocation; the other group performed the injection procedure and assessed in vivo bioluminescence imaging. Bioluminescence imaging was performed using the IVIS Lumina II imaging system (PerkinElmer). In brief, mice were anesthetized and then given an intraperitoneal (i.p.) injection of D-luciferin (150 mg/kg body weight).
After 10 min, images were acquired and analyzed using Living Image software (PerkinElmer), and data are presented as the total flux (photons/s). A time point was recorded as the time of tumor recurrence when the maximum bioluminescence was higher, or the bioluminescent area was larger, than at previous time points for a given animal.

Statistical Analysis

Statistical analysis was performed using GraphPad software (GraphPad Prism v7.0; GraphPad Software, San Diego, CA, USA). All statistical comparisons were performed using paired t-tests or two-way analysis of variance (ANOVA) followed by Newman-Keuls tests. The data are presented as the mean ± standard error of the mean (SEM). A value of p < 0.05 was considered significant. Kaplan-Meier curves were created to illustrate the cumulative survival and recurrence-free survival after tumor inoculation.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

Ag: Antigen
CAR: Chimeric antigen receptor
CD19 CAR: Human anti-CD19-specific scFv CAR
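To make the two calculations in the methods concrete (the 2^−ΔΔCt fold change and the percent specific cytolysis), here is a minimal sketch in Python; all Ct and release values below are hypothetical, not data from this study.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method, reported as 2^-ddCt."""
    dct_sample = ct_target_sample - ct_ref_sample      # normalize to 18S rRNA
    dct_control = ct_target_control - ct_ref_control
    return 2 ** -(dct_sample - dct_control)

def percent_specific_cytolysis(experimental, spontaneous, maximum):
    """Percent specific cytolysis from DELFIA release measurements."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100

# Hypothetical values: IFN-gamma Ct vs. 18S rRNA Ct, treated vs. control
print(fold_change_ddct(24.1, 11.0, 26.3, 11.1))        # ~4.3-fold induction
# Hypothetical fluorescence counts at one E:T ratio
print(percent_specific_cytolysis(31000, 9000, 52000))  # ~51.2% specific lysis
```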
EXPERIENTIAL LEARNING IN MANDARIN CLASSROOMS: THE CASE FOR SIMULATION

Article History: Received: 8 January 2020; Revised: 10 February 2020; Accepted: 12 March 2020; Published: 14 April 2020.

Background of the Study

Learning today can take place in many forms and can result in more unplanned consequences. According to Rahmat (2020), millennials learn in a variety of ways. Firstly, they learn in a relaxed environment: they crave a relaxed learning environment that offers minimal pressure, a type of environment that mirrors their rather "laid-back" real life. Next, millennials strive for personal relationships; they thrive in environments that encourage interaction, for instance, group work. According to Chua, Guan, and Ying (2015), technology has entered the classroom. This need not be negative for learners' unplanned outcomes, as technology can bring "real-life" and authentic situations into the classroom. So, in order to bring that "real-life" environment into the classroom, teachers can plan "simulation" activities as part of the classroom routine. Simulation activities allow learners to gain "hands-on" experience in the form of experiential learning. In a simulation, participants must accept the "duties and responsibilities of their roles and functions, and do the best they can in the situation in which they find themselves" (Jones, 1982), so to carry out their role responsibilities, students are required to express themselves to their peers in a group setting. They have to write simulation scripts based on a given situation. In the process of writing the script, students need to look into their past experience and then conceptualize what they have learnt; finally, they put this into practice by creating a new script. In other words, students are expected to practice the experiential learning model in the Mandarin classroom.

Statement of Problem

Ideally, language learning involves the learners learning the rules of the language. Learners need to learn the structure in a conducive, authentic, and natural learning environment. The problem is that, more often than not, language learning is planned in a teaching environment (Yang, 2018). The content is often structured, and learning takes place in a planned, non-authentic environment. So, one possible solution to provide learners a conducive language learning environment is through the use of simulation: simulation in language learning allows students to learn the language through experiential learning. Various teaching strategies have been adopted in order to achieve the learning outcomes effectively since the commencement of Mandarin courses in UiTM in the 1960s. Simulation has been used as an assessment since 2017. However, there is a lack of studies evaluating the effectiveness of simulation in achieving the learning goals of Mandarin courses in UiTM. Therefore, this study aims to fill this gap by examining how the experiential learning cycle is practiced in Mandarin simulation and subsequently investigating the reliability of simulation in attaining the learning outcomes in the Mandarin classroom.

Objective of the Study

The main objective of this study is to investigate the use of simulation in the learning of Mandarin. Specifically, this study explores how experiential learning influences the use of simulation in learning Mandarin.

Introduction

This section discusses the problems in learning Mandarin, group work and experiential learning, past studies, and the theoretical framework.
Problems in Learning Mandarin

Mandarin is not an easy language to learn for several reasons. According to Chua et al. (2015), Chinese is a tonal language, and it contains five tones, namely high level (first tone), rising (second tone), falling-rising (third tone), falling (fourth tone), and toneless, which distinguish characters that are pronounced identically. For example, the character 爸 /ba/ with the first tone means "father", while the character 拔 /ba/ with the second tone means "pull out". Thus, non-native learners who learn Mandarin as a second language often make mistakes in their pronunciation of the Mandarin tones. In a study by Mok, Lau, and Lee (2009) on the pronunciation of Mandarin tones among Malay students in Malaysia, the subjects made the most mistakes in the pronunciation of the second tone, followed by the fourth, third, and first tones. Hence, learning Mandarin tonal knowledge is essential for comprehending and producing spoken Mandarin (Liu, Hao, Li, & Shu, 2011). Another tone-related challenge is the large number of homophones in Mandarin: characters with identical sound patterns but distinctive visual patterns and, accordingly, distinct meanings. For instance, 14 characters share the pronunciation /li/ with the fourth tone, each of which has a different meaning. This situation confuses learners even more. Another problem faced by learners is the grammar errors they usually make when writing Mandarin sentences or scripts. The most fundamental general feature of Mandarin grammar is that it does not depend on morphological changes but mainly uses other grammatical means, such as word order and function words, to express grammatical relationships and grammatical meanings (Shao, 2002). Mandarin learners whose mother tongues are languages with many morphological changes find it hard to get used to Mandarin grammar. Besides, another difficulty lies in the diversification of Chinese semantics: often the same sentence can express completely different meanings. For instance, "能吃多少吃多少" /neng chi duo shao chi duo shao/ is literally translated as "can eat more or less, eat more or less", but the actual meaning of the sentence is "eat as much as you can" or "eat as little as possible", depending on context.

Simulation in Language Learning

Simulations, sometimes known as role-plays, are instructional scenarios where the learner is placed in a "world" defined by the teacher. Simulations represent a reality within which students interact. This strategy encourages student-centred learning as well as constructivist learning and teaching. Simulation is a controlled representation of reality: role-playing or rehearsal in which the process of teaching is carried out artificially. Simulation is yet another way learners can learn a language. According to Vlachopoulos and Makri (2017), simulations help to create a scenario-based environment, one where students interact to apply previous knowledge and practical skills to real-world problems, and which also allows teachers to reach their own goals. According to Angelini and Garcia-Carbonell (2019), simulation can be done in three phases: briefing, action, and debriefing. There are some benefits to this approach. Firstly, students can become familiar with the content and build new vocabulary and expressions outside the classroom.
In addition to that, instructors and students can spend class time activating their knowledge of the content and the target language through minor-scale simulations, debates, and forums. Simulation can benefit learners in many ways. According to Ranchhod, Gurau, Loukis, and Trivedi (2014), simulation provides many benefits. Firstly, simulation motivates learners: the activities during simulation provide opportunities for learners to participate in the learning process, and these activities are usually fun, so learners can relax.

Group Work and Experiential Learning in the Language Classroom

Learners gain more than just content knowledge through group work. According to the theory of social constructivism, learning is better done in a group than alone (Santrock, 2009). Interactions during group work allow learners to assimilate and accommodate their views with those of their peers. According to Rahmat, Othman, Muhammad, Anuarudin, and Arepin (2019), class discussions can lead to new knowledge acquisition among team members. Learners learn assimilation and accommodation during the interaction; they learn these through agreeing and disagreeing with ideas put forward by team members. The interaction in the learning process provides learners experience that they can only gain through participation and observation. According to Mollaei and Rahnama (2012), experiential learning requires learners to engage with the content and interact meaningfully with it through active participation. It emphasizes learning in which the learner is directly in touch with the phenomenon being studied, rather than just watching it or reading, hearing, or thinking about it. Learning is a process of discovery on the part of the learners. Figure 1 presents the theory of experiential learning introduced by Kolb (1984). As the name implies, the "end-product" of learning is the experience that learners gain throughout the length of the learning activities. According to Kolb (1984), learning involves four cyclical stages: (a) concrete experience, (b) reflective observation, (c) abstract conceptualisation, and (d) active experimentation. Experiential learning begins with the learner gaining concrete experience by doing the activity. Next, the learner reviews his/her experience, and this makes up the learner's reflective observation. After reflecting on his/her observation, the learner forms an abstract conceptualisation as he/she draws conclusions from the experience. The final stage of experiential learning is when the learner begins active experimentation as he/she tries out what he/she has learnt.

Figure 1. The theory of experiential learning (Kolb, 1984).

How is experiential learning displayed in terms of classroom learning? Learning often takes place when the learner undergoes changes. Figure 2 presents the actions that take place throughout the process of experiential learning. The process of learning involves several steps. Firstly, the early stages of learning require the learners to participate in "doing something" (Do). Next, learning takes place when the learner recalls what had happened (Recall). The next stage of learning involves the learner reflecting on what he/she has gone through (Reflect). The learner then draws conclusions from the reflections made (Conclude). Finally, successful learning takes place when the learner uses the conclusions made to inform and prepare for future practical experience (source: Kolb, 1984). Figure 3 presents the learners' behaviour during activities that encourage experiential learning.
At the start of the activities, learners go through concrete experience. Initially, learners would watch and begin to diverge their understanding of topics brought up. This is the stage where the learners would feel and watch the on-goings of the tasks planned for them. This is also the stage where the learners watch and reflect on their observations. They may assimilate this with what they already knew and ponder upon the new experiences gained. Some learners may then form abstract conceptualization and think further about their experiences. At this stage of learning, learners may then learn to accommodate new knowledge and converge on new ideas with other learners.

Past Studies

Past studies have revealed that Mandarin is not an easy language to learn. The study by Yang (2018) investigated the perception of beginner learners of Mandarin as a foreign language. The study looked into how the learners face the difficulties in learning Mandarin and also how they overcame the difficulties they encountered. The participants for the study were 179 students from two state secondary schools in England. The instruments used were a questionnaire as well as interviews. Findings revealed that the respondents found character recognition more difficult than character production. Learners were also worried about the use of homophones in Mandarin. They also felt that the language lacked links between the sound and logography of characters. It is true that learning a new language is not easy. Perhaps measures could be taken to make learners acquire the language instead of merely learning it. Role play done as part of a group work activity can encourage learning. In the theory of social constructivism (Santrock, 2009), learners gain more as a group than when they are alone. In addition to that, the theory of constructivism claims that humans are better able to understand the information that they have constructed by themselves. This is because learning is a social advancement that involves language use, the real world, and interaction and collaboration among learners. In the constructivist classroom, the teacher is the facilitator and a guide; he or she guides and also provides direction to the learners (Rahmat, 2018). One of the many outcomes of learning in groups is the product of social interaction. This interaction promotes experiential learning. The study by Boggu and Sundarsingh (2016) investigated the effectiveness of the experiential learning theory of Kolb (1984) in enhancing language learning strategies in an EFL context. Boggu and Sundarsingh (2016) used Kolb's (1984) four-stage model to facilitate learning by experiencing, reflecting, conceptualizing, and experimenting. The experimental group was selected through a purposive sampling technique and comprised 60 undergraduate students who registered for a Business programme. A series of tasks was designed to facilitate the development of skills at each stage of the cycle. A pre- and post-strategy evaluation was done using the SILL (Strategy Inventory for Language Learning) devised by Oxford (1990). In addition to the SILL, data were collected through semi-structured interviews and students' reflections in reflective learning journals. Findings revealed that there was a highly significant difference between the pre- and post-SILL survey results after the period of intervention, with a rise in strategy use from medium to high.
Implications for further research into innovative pedagogical approaches that would develop high strategy users are discussed. Another study, by Zakaria, Rahmat, Aripin, Jasman, and Ibrahim (2019), looked into role play in the ESL classroom. The quantitative study used a survey as the instrument. 32 students participated in the study, which used the classic theories of behaviourism, social constructivism, and pragmatism to facilitate role-play activities. Learners learn through peer interaction and also modelling. Another study exploring further benefits of experiential learning was done by Sharifi and Shariati (2017). The study compared the roles of experiential and traditional learning, investigating two homogeneous classes: one group was the experimental group, while the other was the control group. The experimental group received a new way of teaching (viz., experiential) as the treatment over 3 months, while the control group received a traditional way of teaching for the same amount of time. At the end of the treatment, the researchers used SPSS software to analyze the data. The quantitative results showed that although there was no statistically significant difference between the two groups, the experimental group outperformed the control group, in the sense that the use of experiential learning was more effective than traditional learning. A key element of experiential learning, therefore, is the student, and learning (the knowledge gained) takes place as a result of being personally involved in this pedagogical approach (Sharifi and Shariati, 2017).

Theoretical Framework of the Study

Figure 4 presents the theoretical framework of the study. This framework is rooted in the theories of Kolb (1984) on experiential learning and Vlachopoulos and Makri (2017) on simulations. The use of simulations helps to create a scenario-based environment. Simulation activities planned for learning Mandarin help learners learn through experiential learning. During the process of the simulation activity, learners gain concrete experience. When the experience ends, the learners may form abstract conceptualizations from what they perceive of the lesson. From conceptualization, some learners may have the ability to converge their ideas to accommodate the new knowledge.
Prevalence of blaCTX-M, blaSHV, and blaTEM Genes in Escherichia coli Strains Isolated From Urinary Tract Infection Samples of Patients in the Intensive Care Unit in Qom, Iran

Background: Escherichia coli is considered one of the causes of opportunistic infections. Nowadays, due to the increase in drug resistance, the treatment of these infections has become very difficult, and they are recognized as main causes of death in hospitalized patients.

Objectives: The aim of this study was to determine the prevalence of blaTEM, blaSHV, and blaCTX-M genes in E. coli strains isolated from urinary tract infections in patients in the Intensive Care Units of three different hospitals in Qom, Iran.

Methods: This study was conducted over three months, from October to December 2014. A total of 200 E. coli samples were taken from patients with urinary tract infections in the Intensive Care Units of Qom hospitals. The disc diffusion method was used to determine the antibiotic susceptibility pattern, and phenotypic confirmatory tests were used for screening of the extended-spectrum beta-lactamase (ESBL) isolates. The presence of blaTEM, blaSHV, and blaCTX-M genes was evaluated by the polymerase chain reaction (PCR) assay.

Results: Of 200 samples, ampicillin (96%) and nitrofurantoin (19.5%) showed the highest and lowest drug resistance, respectively. A total of 156 isolates (78%) were identified as ESBL producers using the phenotypic method. Moreover, 76 (38%), 90 (45%), and 123 (61.5%) isolates carried blaCTX-M, blaSHV, and blaTEM, respectively.

Conclusions: Overall, the findings of this study showed that blaTEM was the most common gene, with a frequency of 61.5% in ESBL E. coli.

Background

Escherichia coli (E. coli) is recognized as one of the causes of nosocomial infections (1). Nowadays, due to the high prevalence of antibiotic resistance, E. coli is recognized as one of the bacteria most resistant to broad-spectrum antibiotics (2). Beta-lactam antibiotics are the most important drugs that are commonly used worldwide to treat bacterial infections (3). Resistance to beta-lactam antibiotics is created by different mechanisms, such as beta-lactamase enzymes, including extended-spectrum beta-lactamases (ESBLs), efflux pumps, and porins (4). ESBL-producing bacteria are associated with several crucial health problems in the world (5). Currently, more than 300 ESBLs have been identified, formed after mutation of beta-lactamase enzymes (6). The main encoding genes of the ESBLs are the CTX-M, TEM, and SHV groups of the Ambler molecular class (7). ESBL-producing bacteria are mainly found in the Intensive Care Units (ICUs) of hospitals (8). Also, they are resistant to other antibiotic groups (3). Therefore, suitable drugs to treat these bacteria are very limited, which is the main concern regarding the spread of ESBL strains (9). Identification of these bacteria and increasing knowledge about their presence in a geographical location are effective in the proper and effective use of suitable antibiotics in the region (10). Accordingly, it is the responsibility of the health and microbiological officials of each region to constantly monitor and track the growth of resistant bacteria, especially ESBL strains, to control and prevent drug-resistant outbreaks (11). Also, information on the growth and spread rate of these enzymes helps in the treatment and prevention of infections caused by Gram-positive and Gram-negative bacteria (12).
Objectives

The purpose of this study was to determine the prevalence of antibiotic resistance and ESBLs in E. coli isolates from urine samples taken from patients in Qom hospitals using phenotypic and genotypic methods.

Separation of the Isolates

As stated earlier, during three months (October to December 2014), 200 samples suspected of E. coli were taken from urinary tract infections (UTIs) of hospitalized patients in the ICUs of three different hospitals in Qom, Iran, and then evaluated. Standard microbiological tests, such as Gram stain, IMViC, oxidase, catalase, and fermentation/oxidation tests, were used to identify the isolates (13) (all strains were Gram-negative bacilli, indole positive, citrate negative, Voges-Proskauer (VP) negative, and methyl red (MR) positive).

ESBL-Producing Isolates by Combination Disc Method

The combined disc test (CDT) was applied to screen for ESBL-producing E. coli. The CDT phenotypic test was performed with discs of ceftazidime (30 µg) and cefotaxime (30 µg), alone and in combination with clavulanic acid (10 µg), on Mueller-Hinton agar. After 24 h of incubation at 37°C, an increase in zone size of ≥ 5 mm was considered to indicate a positive ESBL isolate. In the present study, K. pneumoniae ATCC 700603 (prepared by the Pasteur Institute, Tehran, Iran) was used as the positive control (15).

DNA Extraction and PCR Amplification of the TEM, SHV, and CTX-M Genes

DNA extraction was performed using the boiling method. Then, the quality and quantity of all the extracted DNAs were evaluated by spectrophotometry and gel electrophoresis. The presence of TEM, CTX-M, and SHV was studied using the polymerase chain reaction (PCR) assay (16). The ingredients of the 25 µL PCR reaction mixture were as follows: 50 mM Tris-HCl, 50 mM KCl (pH = 8), 1.5 mM MgCl2, 0.2 mM dNTP mix, 20 pM of each primer (Table 1), and 5 µg of the extracted DNA. PCR was performed under the temperature conditions shown in Table 2. The PCR products were subjected to electrophoresis in a 1.5% agarose gel.

Statistical Analysis

The results of the antibiotic susceptibility pattern, the confirmatory test for ESBL production, and the presence of the blaTEM, blaSHV, and blaCTX-M genes were analyzed using SPSS software (version 16) and Fisher's exact test.

Results

The experiment was performed on 200 E. coli isolates taken from the urine samples of cases in the ICUs of Qom hospitals, of whom 152 (76%), 25 (12.5%), and 23 (11.5%) patients were women, men, and infants, respectively. The result of the antibiotic sensitivity test is shown in Table 3. According to the results, most of the isolates (192, 96%) were resistant to ampicillin, and the least resistance was found to nitrofurantoin (39, 19.5%). Based on the results of the combined disc tests, 156 samples (78%) were ESBL producers, and 44 strains (22%) were non-ESBL producers.

Discussion

Over one million people hospitalized for different medical conditions acquire nosocomial infections during their hospital stay. Nosocomial infection is the most common cause of complications and problems for medical personnel, patients, and all hospitals in Iran (17).
The risk of nosocomial infections has been reported to range from at least 0.27 (0%) to more than 6/27 (22.2%) between 2010 and 2015 (18). These infections can easily be transmitted among patients and their visitors, hospital personnel, and those with direct contact with the hospital environment (1). The highest number of nosocomial infections occurs in the ICU; therefore, identifying and controlling these infections is known as one of the most important responsibilities of lab technicians (9). Additionally, E. coli is the most evident and frequent organism, and is also a critical pathogen for UTI (19). In our investigation, the highest and lowest resistances were found to ampicillin and nitrofurantoin, respectively. However, various resistance rates have been reported from different parts of the world, mostly due to different patterns of antibiotic use. Rajabnia et al. (19) in Iran indicated that cefotaxime and meropenem had the highest and lowest resistance rates, whereas Jena in India (2017) reported ceftazidime and colistin as having the highest and lowest resistance rates (19, 20). We also found a high rate of blaTEM (61.5%), blaCTX-M (45%), and blaSHV (38%) genes in clinical strains isolated from patients with UTI admitted to the ICU; however, blaSHV was less frequent than the other two types of genes. Moreover, another study conducted in India reported frequencies of 93.47%, 82.60%, and 4.34% for the blaTEM, blaCTX-M, and blaSHV genes, respectively, in E. coli isolated from adult patients with UTI (20). Polse et al. (23), in a study performed in Iraq, indicated that E. coli strains isolated from UTI carried the blaCTX-M, blaTEM, and blaSHV genes at rates of 87.2%, 54.5%, and 21.8%, respectively. In contrast, in a recent study by Nojoomi and Ghasemian (25) in 2016 in Iran, the prevalence of blaCTX-M-1, blaSHV, and blaTEM was 77.4% (n = 86), 47.4% (n = 53), and 2.4% (n = 2), respectively. Interestingly, these findings are consistent with previous studies reflecting a high rate of ESBL genes in clinical isolates, which clearly indicates the current challenges for infection control at hospitals and health centers with ICUs in Qom, Iran. Financial problems and the lack of facilities for molecular typing methods for epidemiological studies, such as pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing (MLST), which are effective in finding the relationships between strains and ultimately their origin, are some of the limitations of our study. The patients admitted to the ICUs of our hospitals had not previously been evaluated for the extent of beta-lactamase genes in urine samples, which can be considered the strength of the present study. It is also hoped that in future studies we will be able to take effective steps to help control nosocomial infections through molecular typing methods.

Conclusions

In conclusion, our results showed that among the examined genes, the most common was blaTEM, with a frequency of 61.5% in ESBL-producing E. coli taken from patients with UTI admitted to the ICU in Qom, Iran. Due to the high level of drug resistance of the studied isolates, it was very difficult to treat the infections. Accordingly, considering the high rate of drug resistance found in our study, further studies are needed to find effective drugs, including nanoparticles, for eliminating these antibiotic-resistant bacteria.
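Since the paper states that the susceptibility, ESBL, and gene-presence results were compared with Fisher's exact test (in SPSS), the sketch below reproduces that kind of comparison in Python with SciPy; the 2 × 2 counts are hypothetical and only illustrate the procedure.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table: blaTEM carriage vs. phenotypic ESBL status
#                      ESBL-positive   ESBL-negative
# blaTEM present            105              18
# blaTEM absent              51              26
table = [[105, 18],
         [51, 26]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```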
Invasion biology in non-free-living species: interactions between abiotic (climatic) and biotic (host availability) factors in geographical space in crayfish commensals (Ostracoda, Entocytheridae)

In invasion processes, both abiotic and biotic factors are considered essential, but the latter are usually disregarded when modeling the potential spread of exotic species. In the framework of set theory, interactions between biotic (B), abiotic (A), and movement-related (M) factors in the geographical space can be hypothesized with BAM diagrams and tested using ecological niche models (ENMs) to estimate A and B areas. The main aim of our survey was to evaluate the interactions between abiotic (climatic) and biotic (host availability) factors in geographical space for exotic symbionts (i.e., non-free-living species), using ENM techniques combined with a BAM framework and using exotic Entocytheridae (Ostracoda) found in Europe as model organisms. We carried out an extensive survey to evaluate the distribution of entocytherids hosted by crayfish in Europe by checking 94 European localities and 12 crayfish species. Both exotic entocytherid species found, Ankylocythere sinuosa and Uncinocythere occidentalis, were widely distributed in W Europe, living on the exotic crayfish species Procambarus clarkii and Pacifastacus leniusculus, respectively. No entocytherids were observed on the remaining crayfish species. The suitable area for A. sinuosa was mainly restricted by its own limitations to minimum temperatures in W and N Europe and precipitation seasonality in circum-Mediterranean areas. Uncinocythere occidentalis was mostly restricted by host availability in circum-Mediterranean regions due to limitations of P. leniusculus to higher precipitation seasonality and maximum temperatures. The combination of ENMs with set theory allows studying the invasion biology of symbionts and provides clues about biogeographic barriers due to abiotic or biotic factors limiting the expansion of the symbiont in different regions of the invasive range. The relative importance of abiotic and biotic factors in geographical space can then be assessed and applied in conservation plans. This approach can also be implemented in other systems where the target species closely interacts with other taxa.

Introduction

Biotic and abiotic factors in invasion processes

Dramatic impacts of alien species on invaded ecosystems have prompted interest in scientifically understanding invasion processes in order to prevent their harmful effects (Strayer et al. 2006; Young and Larson 2011). Invasive species have a combination of attributes that facilitate their arrival and establishment in a novel region (Sol 2007; Karatayev et al. 2009). But several external factors are also involved in invasion success, usually classified into abiotic, biotic, and dispersal factors. Although some authors give more importance to dispersal factors such as propagule pressure in accounting for the success or failure of an invasion event (e.g., Lockwood et al. 2005), abiotic and biotic factors have been shown to be important elements in invasion biology. The role of abiotic conditions in invasion biology is evident, and physical suitability for an invader obtained from environmental predictors, mainly climatic, has been considered a good predictor of invasibility (Williamson 1996).
Several studies also show that spatial and temporal heterogeneity and physical disturbances, usually related to abiotic conditions (climatic or geographical), may facilitate the establishment of invasive species (Melbourne et al. 2007). Another example of the importance of abiotic factors in invasion biology is the effect of climate change on invasion processes (Hellmann et al. 2008; Rahel and Olden 2008). In spite of the wide use of climatic conditions to predict the regions susceptible to invasion by exotic species, biotic interactions have also been shown to be important elements limiting species distributions (Guisan and Thuiller 2005). Indeed, biotic interactions are considered a key factor in biological invasions (White et al. 2006; Roy and Handley 2012). Biotic factors such as community complexity, the existence or absence of enemies (predators, competitors, parasites, and pathogens), and mutualisms or commensalisms with other species may facilitate or hamper the establishment of an invader in a novel area (Mooney and Cleland 2001; Sakai et al. 2001; Prenter et al. 2004; Davis 2009; Engelkes and Mills 2011). For example, the Enemy Release Hypothesis proposes a facilitation of invasion success due to the loss of negative interactions from the native range, including competition, predation, or parasitism, during the early invasive stages of the displacement to the novel area (Sax and Brown 2000; Torchin et al. 2003; Roy et al. 2011). But those symbionts that manage to remain with the exotic species during the invasive process also have an important role. Host jump, a key element in the evolution of non-free-living organisms (Poulin 2007), is also essential in invasion biology. An invasion event offers new biogeographic and evolutionary opportunities to the symbionts accompanying an invasive host. The process of symbiont transmission from invasive to native hosts, also called "spillover" (Kelly et al. 2009), is considered an important threat to native species conservation (Roy and Handley 2012; Strauss et al. 2012). [NB: This work employs the term "symbiosis" with its broad meaning of organisms living in association, including positive (mutualism), negative (parasitism), and neutral (commensalism) interactions, following Sapp (1994). The terms "symbiont" and "non-free-living species" are employed for a smaller organism living in symbiosis with a larger species, termed the "host".]

The ecological niche in set theory and BAM diagrams

According to the niche concept proposed by Hutchinson (1957), "an n-dimensional hypervolume is defined, every point in which corresponds to a state of the environment which would permit the species S1 to exist indefinitely." The potential niche is the range of environmental conditions available in the geographical space associated with positive intrinsic growth rates. The realized niche is the portion of the potential niche without biotic and/or dispersal constrictions. We want to highlight the distinction between the environmental space, linked to the niche concept, and the geographical space, composed of grid cells covering a particular region, associated with the geographical distribution of species.
Based on the application of set theory (Hrbacek and Jech 1999) to niche concepts, BAM diagrams (Soberón and Peterson 2005) offer a framework to configure different hypothetical interactions between biotic (B), environmental or abiotic (A), and movement-related or dispersal (M) factors in the geographical space, which can be applied to invasion biology (Jiménez-Valverde et al. 2011). In this framework, A is the geographical area in which the environment is suitable at a given time, and where the intrinsic growth rate of the species would be positive; B is the geographical area where biotic interactions are favorable for the species' existence; and M is the geographical area that is accessible to the species. In these models, the geographical area occupied by the species (Go) is that with suitable environmental conditions for species existence, favorable biotic interactions, and accessibility for the species (A ∩ B ∩ M). Here, A represents the geographical area where the environmental conditions belong to the environmental space of the potential niche, and Go is the projection of the realized niche in the geographical space. Therefore, the BAM diagrams link the environmental space of niche theory with the geographical space of species distributions. We can hypothesize the different possible interactions between A and B by means of BAM diagrams. Only three interactions are possible (Fig. 1): (1) A includes B (B ⊂ A, or (A\B ≠ ∅) ∧ (B\A = ∅)); (2) B includes A (A ⊂ B, or (A\B = ∅) ∧ (B\A ≠ ∅)); and (3) a partial overlap between A and B ((A\B ≠ ∅) ∧ (B\A ≠ ∅)). In a theoretical context in which there are no restrictions by accessibility (i.e., M contains A and B, (A ∪ B) ⊂ M), two areas of the BAM framework of Soberón and Peterson (2005) characterize the three cases: G_BI is the geographical area that is accessible and presents favorable environmental conditions but inappropriate biotic conditions, and BI is the environmentally unsuitable but biotically appropriate area. In this theoretical context, G_BI is the portion of A that remains outside B (G_BI = A\B), and BI is the portion of B that does not coincide with A (BI = B\A); therefore, in the first case, when A contains B, only G_BI (but not BI) will appear; in the second case, when B contains A, only BI will appear; finally, in the third case of a partial overlap between A and B, both area types, G_BI and BI, will be present. So, G_BI and BI can be used to identify which model of interaction between A and B fits, or is closer to, the case of the exotic species analyzed, through the evaluation of their relative proportions. Moreover, they also represent areas where the species is specifically absent due to abiotic (BI) or biotic (G_BI) factors, so these factors are acting as specific barriers against the species' expansion into those areas.

Ecological niche models and set theory

Ecological niche models (ENMs) have proven useful in providing statistical tools to predict the environmentally suitable areas for invasion by an exotic species, a practical approach that has been widely used recently (e.g., Reshetnikov and Ficetola 2011). The predictions are based on modeling the relation between species occurrence data and environmental predictors. Although biotic factors may also affect species distributions, most ENMs are based only on physical predictors because the high complexity of biotic interactions makes their inclusion in an ENM approach difficult.
Nonetheless, some studies consider biotic interactions in their analyses, by adding biotic predictors or constraining the model predictions to the presence of interacting species (e.g., Heikkinen et al. 2007; Meier et al. 2010; Schweiger et al. 2012). Recently, novel techniques have incorporated biotic interactions into ENMs through modeling multispecies interactions by means of interaction matrices (Kissling et al. 2012). On the other hand, the application of ENMs to invasion biology is subject to methodological uncertainties derived from making predictions across space and time. In this sense, the development of ensemble ENM techniques has represented useful progress in assessing modeling uncertainty.

Figure 1. Represented by circles, A is the geographical area with suitable environmental conditions, B the area where biotic interactions allow species existence, and M the accessible area for the species. G_BI is the available geographical area with favorable environmental conditions but inappropriate biotic conditions (G_BI = A\B), and BI the area with unsuitable environmental but appropriate biotic conditions (BI = B\A) for the species. Within this model frame, the three possible interactions between A and B are as follows: (A) A includes B (B ⊂ A, or (A\B ≠ ∅) ∧ (B\A = ∅)); (B) B includes A (A ⊂ B, or (A\B = ∅) ∧ (B\A ≠ ∅)); and (C) a partial overlap between A and B ((A\B ≠ ∅) ∧ (B\A ≠ ∅)).

ENMs can be applied in a theoretical framework of BAM models to analyze the interactions between A and B in the geographical space of exotic species that are strongly affected by a particular interaction with other species, for example, the dependence on the presence of a specific host, prey, or mutualist, or the absence of a particular predator or parasite. To do so, as we are not focused on M, the assumption of the absence of dispersal factors affecting the study region may facilitate the BAM analyses; our model species should thus have access to all areas of the study region. Secondly, we need to limit the set of factors involved in A and B. Climatic conditions are a good choice to characterize A when we work at large-extension and coarse-resolution scales (Elith and Leathwick 2009). The B factors would be limited to the presence of a positively interacting species (a host, prey, or mutualist) or the absence of a negatively interacting one (a predator or parasite). Once we have established the theoretical framework and the geographical scale (large extension and coarse resolution for the climatic variables characterizing A), the next step is to use ENMs to estimate the A and B areas. A can be estimated, in a practical way, by predicting the climatically suitable areas for the exotic species in the study region, through ENM analysis and using the global occurrence dataset of the species and global climatic information. The estimation of the B areas can be carried out in the same way, but predicting the climatically suitable (for a positive interaction) or unsuitable (for a negative interaction) areas for the interacting species. Consequently, we will need global occurrence data for these species.
Finally, by combining both predictions, representing the A and B areas in the geographical space of our study region, we will be able to highlight the proportion and distribution of the G_BI and BI areas. This will allow us to diagnose which model of interaction A and B follow in our target species, and to identify areas where climatic conditions and/or biotic interactions with other species may be acting as specific barriers against expansion into those areas.

Study system: entocytherid ostracods and their host crayfish

Invasive crayfish species are known to cause important harm to the native biota of invaded sites (McCarthy et al. 2006; Matsuzaki et al. 2009; Olden et al. 2011). A well-known impact in Europe was the "spillover" effect caused by the oomycete Aphanomyces astaci (Schikora, 1906), carried by exotic American crayfish and becoming one of the main problems for the conservation of native European crayfish (Gil-Sánchez and Alba-Tercedor 2002). The impact of A. astaci on native European crayfish is a typical case of the so-called naive host syndrome: a novel host receiving an exotic symbiont might be severely affected due to its lack of evolved resistance (Taraschewski 2006; Mastitsky et al. 2010). Crayfishes have a rich associated biota (Edgerton et al. 2002), including entocytherids. The Entocytheridae is an ostracod family constituted entirely by species epicommensal on other crustaceans (Hart and Hart 1974). The Entocytherinae, the main subfamily of the group with 183 species, are native to North and Central America, living on cambarid and astacid crayfishes. Recently, two exotic American entocytherid species associated with invasive crayfish were recorded in Europe and Japan: Ankylocythere sinuosa (Rioja, 1942), found in some localities of the E Iberian Peninsula, associated with Procambarus clarkii (Girard, 1852) (Aguilar-Alberola et al. 2012), and Uncinocythere occidentalis (Kozloff and Whitman, 1954), recorded in a few German and Japanese localities living on Pacifastacus leniusculus (Dana, 1852) (Smith and Kamiya 2001). In their native range, A. sinuosa has been found on 47 different host species and U. occidentalis on three different species of crayfish, suggesting that they are not very host specific, as seems to be common in the group (Mestre et al. in press). Although both exotic crayfish species have a much longer history in Europe (more than 35 years), entocytherids had not previously been detected, probably because they are tiny (<0.5 mm in length) and apparently not harmful to their hosts. On the other hand, we found no previous comprehensive study that checked for the presence of Entocytheridae (native or exotic) on European native crayfish. Exotic entocytherids and crayfishes are particularly adequate for analyzing the interactions between A and B in the geographical space. The total dependence of the entocytherids on their crayfish hosts allows B to be easily estimated as the presence of the crayfish host species. Moreover, due to the long invasion history of exotic crayfish in Europe, with multiple introduction events by humans in many European countries (Holdich 2002), we can simplify our BAM models by assuming the absence of dispersal barriers for these organisms in Europe. Finally, the low host specificity shown by the exotic entocytherids points to the possibility of restriction by host dependence in the invaded range, because they suffer a reduction in host availability from multiple crayfish host species in the native range to just a few exotic crayfish host species in the invaded range.
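A minimal sketch of the raster combination step described at the start of this section, written in Python with NumPy rather than the R tools used in the original analyses; the two binary grids stand in for thresholded ENM predictions of A (symbiont climatic suitability) and B (host suitability), and their values are hypothetical.

```python
import numpy as np

# Hypothetical binary suitability grids (True = suitable cell):
# A = climatically suitable area for the symbiont, B = area suitable for its host.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 0]], dtype=bool)
B = np.array([[0, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=bool)

G_BI = A & ~B    # suitable climate but no host: restriction by host availability
BI = B & ~A      # host present but unsuitable climate: restriction by climate
overlap = A & B  # upper bound on the occupied area when M is unrestricted

# Relative extents diagnose the interaction model: only G_BI -> B within A;
# only BI -> A within B; both nonzero -> partial overlap between A and B.
print("G_BI cells:", int(G_BI.sum()), "| BI cells:", int(BI.sum()),
      "| A and B cells:", int(overlap.sum()))
```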
Finally, the low host specificity shown by the exotic entocytherids points to the possibility of restriction by host dependence in the invaded range, because they suffer a reduction in host availability from multiple crayfish host species in the native range to just a few exotic crayfish host species in the invaded range. Set theory approach: dominance of biotic or abiotic factors in the invasion process? Symbiont organisms associated with invasive hosts can join them to invaded areas, although a filtering selection in initial invasive stages occurs, as stated by the Enemy Release Hypothesis (Torchin et al. 2003). Having overcome the filters, they must accompany their hosts in the expansive phase. Then two questions arise: Are exotic symbionts able to travel with their hosts wherever they go or could they have physiological limitations preventing them from doing so? Alternatively, could they be limited by their host's tolerances to colonize all the potential areas they are physiologically able to invade (Wharton and Kriticos 2004)? Regarding the last question, the host climatic restrictions are susceptible to constrain the potential distributions of symbiotic organisms in new invaded areas because exotic symbionts often suffer a reduction in host availability from a number of hosts in their native range to just a few or only one invasive host. We can deal with this issue by analyzing the interactions between A (as limited to climatic factors) and B (reduced to host availability) in the geographical space using the set theory approach. In this context, the three different models of interaction between A and B proposed above correspond to the different possibilities that we can find in a symbiont-host system. The first model, where A includes B, would represent a case where the symbiont has broader abiotic tolerance than its host, so its distribution is simply determined by host availability. In contrast, the second and opposite model, where B includes A, represents a case where the symbiont has a tolerance to abiotic conditions much more restricted than their hosts', facing a climatic barrier to invade a region. Finally, the third and intermediate model outcome with a partial overlap between A and B represents a case where there is a spatial segregation between both restriction types, affecting different regions of the geographical space. Aims and research strategy To establish an initial evaluation of the distribution of crayfish-living entocytherids in Europe, we carried out the first extensive sampling campaign on native and exotic European crayfish species using specific entocytherid sampling techniques. Furthermore, the main aim of our survey was to evaluate the interactions between abiotic (climatic) and biotic (host availability) factors in geographical space for exotic symbionts, using ENM techniques combined with a theoretical framework based on set theory. To this end, we used as model organisms the exotic entocytherids found in Europe (A. sinuosa and U. occidentalis) and their hosts (P. clarkii and P. leniusculus). 
For each exotic entocytherid species, we carried out the following steps: (1) We established the theoretical framework based on the BAM models proposed by Soberón and Peterson (2005), specifying the model assumptions; (2) we estimated the A and B areas through ENM modeling; (3) we combined the predicted A and B through a raster operation highlighting the G_BI and BI areas; and, finally, (4) we diagnosed the model of interaction between A and B followed by each entocytherid species by assessing the relative proportion and distribution of G_BI and BI. Field and laboratory methods In order to evaluate the distribution of crayfish-living entocytherids in Europe, we sampled 12 crayfish species from 93 widely distributed European localities. Eight crayfish species were considered exotic, and four were native to Europe (Table 1). Crayfishes, caught with baited traps or hand nets, were subjected to entocytherid removal protocols based on submerging specimens in anesthetic liquids (carbonated water or chlorobutanol), as discussed and tested in Mestre et al. (2011). In other cases, we checked the bottom of the container in which crayfish had previously been preserved in ethanol. Whatever the protocol used, the liquid (carbonated water, chlorobutanol, or ethanol) in which crayfishes were submerged was filtered through a 63-μm mesh filter, and the retained content was stored in ethanol. These samples were later checked in the laboratory under a stereomicroscope, and the entocytherid species found were identified following Hart and Hart (1974). The copulatory apparatuses of selected adult males were drawn using a camera lucida, and SEM and light microscope photographs of adults were also taken to confirm identifications. Our spatial analyses focused mainly on the two entocytherid species recently found in Europe: Ankylocythere sinuosa (Rioja, 1942), recorded in association with Procambarus clarkii (Girard, 1852), and Uncinocythere occidentalis (Kozloff and Whitman, 1954), living on Pacifastacus leniusculus (Dana, 1852). Applying set theory BAM diagrams were applied by considering A the European geographical areas with suitable environmental (climatic) conditions for the entocytherid species, B the European areas where host presence allows the existence of entocytherid symbionts, and M the European areas accessible to the species. It was assumed that: (1) mobility-related limitations (i.e., physical dispersal barriers) do not exist for entocytherids and crayfishes in Europe; in set theory notation, this assumption can be expressed as M = M_H = G (the subscript H refers to the hosts, and unsubscripted sets refer to their symbionts). This assumption is based on the long invasion history of both hosts, P. clarkii and P. leniusculus, in Europe, with multiple human-mediated introduction events in many European countries (Holdich 2002); (2) the only B factors considered are the adequate abiotic conditions for host presence, that is, B = A_H; and (3) the climatic predictors used in the ENM analyses are good estimators of A and A_H. In this model frame, three possible interactions between A and B exist (Fig. 1): (1) A includes B (B ⊆ A); (2) B includes A (A ⊆ B); and (3) a partial overlap between A and B, (A\B ≠ ∅) ∧ (B\A ≠ ∅).
Two areas in the models characterize these three cases. G_BI = A\B comprises the available geographical areas with favorable environmental conditions but inappropriate biotic conditions for entocytherids; in our models these were estimated as the areas climatically suitable for the entocytherid but unsuitable for the host, representing those geographical areas where the symbiont is specifically restricted by host availability. BI = B\A comprises the areas with unsuitable environmental conditions but appropriate biotic conditions, estimated in our models as the areas climatically unsuitable for the entocytherid but suitable for the host, representing those areas where the symbiont is specifically restricted by its own climatic tolerances. Consequently, G_BI is present in cases (1) and (3), and BI in cases (2) and (3) (Fig. 1). Occurrence data The occurrence data for the ENM analyses were extracted from three sources: (1) our own data reported in this work; (2) a worldwide database of entocytherid species and their hosts built by Mestre et al. (2012, in press) from published sources; and (3) the Global Biodiversity Information Facility (GBIF; http://data.gbif.org). After checking and cleaning occurrences to remove duplicate and erroneous points, and subsampling oversampled states or countries (i.e., the U.K. and Sweden for P. leniusculus) following the same protocol as Iguchi et al. (2004), the numbers of occurrences representing the global ranges of the four species studied were 281 for A. sinuosa, 75 for U. occidentalis, 266 for P. clarkii, and 307 for P. leniusculus. We did not use real absences, as suggested by Jiménez-Valverde et al. (2011), because they are problematic data, among other reasons owing to the difficulty, in most cases, of being completely certain that the species is absent, as may occur in entocytherid populations with low prevalences (Aguilar-Alberola et al. 2012). Environmental data Environmental predictors were restricted to climatic variables, considered more determinant at large spatial extents and coarse resolutions (Elith and Leathwick 2009). Climatic data were obtained from WorldClim (Hijmans et al. 2005). Datasets at a 5-arcmin resolution were selected. To avoid problems related to collinearity between predictors, only four climatic variables were used: minimum temperature of the coldest month (MinT); maximum temperature of the warmest month (MaxT); annual precipitation (AnPrec); and precipitation seasonality (i.e., coefficient of variation, PrecSeas). These variables were selected because they reflect thermal limits and environmental water availability, which relate consistently to important physiological attributes of our organisms, namely thermoregulation and hydric stress. The effect of these climatic variables on the large-scale distribution of both crayfish species treated here is well supported (Capinha et al. 2012). A preliminary analysis of collinearity between the selected predictors for our species occurrence data was carried out based on graphical tools from the R (R Core Team 2013) RASTER package (Hijmans and van Etten 2012) and on correlations between variables. No graphical evidence of collinearity was found, and all paired combinations of predictors showed |r| < 0.7 in all the climatic datasets for the four species studied.
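A minimal R sketch of the collinearity screen just described may help; synthetic predictor values at occurrence points stand in for the WorldClim extractions (the variable names follow the text, but all numbers are hypothetical placeholders, not the authors' data):

```r
# Collinearity screen sketch: pairwise correlations between the four
# climatic predictors at occurrence points, with synthetic stand-in data.
set.seed(1)
n <- 281                                   # e.g. the A. sinuosa occurrences
vals <- data.frame(MinT     = rnorm(n, -2, 4),
                   MaxT     = rnorm(n, 28, 3),
                   AnPrec   = rlnorm(n, 6.5, 0.4),
                   PrecSeas = runif(n, 10, 80))

r <- cor(vals)                             # pairwise Pearson correlations
pairs(vals)                                # graphical check, as in the text

# Flag any pair violating the |r| < 0.7 rule used in the study
which(abs(r) >= 0.7 & upper.tri(r), arr.ind = TRUE)
```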
ENM modeling We applied ENMs to predict the climatically suitable areas for each entocytherid species, as an estimation of the A areas of the BAM models, and the climatically suitable areas for each corresponding host species, to estimate the B areas. The ENMs were built with BIOMOD2 (Thuiller et al. 2013), and raster management was implemented using RASTER. The geographical resolution, determined by the environmental raster data, was the same for both models and predictions, that is, 5 arcmin. The extent of the models was global, but predictions were made for Europe (12°W-60°E; 30°N-75°N). ENMs were designed using the world occurrences of each species, including the invaded range, to improve their predictive ability in invaded areas (Broennimann and Guisan 2008). We applied ensemble modeling techniques with worldwide random selection of pseudo-absences (the same number as the occurrences), data splitting into 70% for model calibration and 30% for model testing, and eight different algorithms: generalized linear model (GLM), generalized additive model (GAM), generalized boosting model (GBM), artificial neural network (ANN), classification tree analysis (CTA), flexible discriminant analysis (FDA), multiple adaptive regression splines (MARS), and random forest (RF). In BIOMOD2, we used the default algorithm parameters. We repeated the modeling process 800 times, combining 10 pseudo-absence selections × 8 algorithms × 10 calibration-testing repetitions, obtaining 800 individual projections. Afterward, we averaged the individual projections built from the same pseudo-absence selection and calibration-testing repetition but different algorithms, obtaining 100 ensemble projections. For this, we applied a weighted average giving more weight to the algorithms with better performance according to the area under the curve (AUC) parameter. Finally, the 100 ensemble projections were averaged to obtain a final consensus projection, showing the probability of species presence in Europe according to the climatic predictors. Assessing ENM performance Three different aspects of ENM predictive performance were assessed: the performance of the climatic predictors, the predictive ability on test data, and the ENM uncertainty. The performance of the predictors was analyzed with generalized linear models (GLMs) of the binomial family with a "logit" link function, where the response variable was a dataset combining the occurrence data and a pseudo-absence selection, and the explanatory variables were the climatic predictors. The assessment of fit to the test data was carried out using the AUC parameter, based on receiver operating characteristic (ROC) plots, which represents the probability that the classifier (ENM) will rank a randomly chosen positive instance higher than a randomly chosen negative instance (Fawcett 2006) and reflects the relation between true-positive (well-predicted occurrence) and false-positive (absence predicted as presence) prediction rates. We tested the effect of algorithm type on the AUC results through GLMs with the binomial family and a "logit" link function. ENM predictive uncertainty was also assessed by plotting the SD of the probabilities of species presence across the 100 ensemble projections, as in Capinha and Anastácio (2011).
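The AUC-weighted averaging and the uncertainty map can be illustrated with a small sketch. This is not the BIOMOD2 implementation, just plain-matrix arithmetic under the assumption that weights are proportional to AUC scores; all objects are random stand-ins:

```r
# Sketch of AUC-weighted ensemble averaging and the per-cell uncertainty.
# `proj` holds 8 per-algorithm projections for one pseudo-absence x
# repetition combination (cells-by-algorithms matrix of presence
# probabilities); `auc` holds the corresponding AUC scores.
set.seed(1)
n_cells <- 1000
proj <- matrix(runif(n_cells * 8), ncol = 8)   # stand-in projections
auc  <- runif(8, 0.8, 0.95)                    # stand-in AUC scores

w <- auc / sum(auc)                            # AUC-proportional weights
ensemble <- as.vector(proj %*% w)              # one weighted ensemble projection

# Repeating this over the 100 pseudo-absence x repetition combinations
# gives a cells-by-100 matrix; the consensus and uncertainty are then:
ens100 <- replicate(100, as.vector(matrix(runif(n_cells * 8), ncol = 8) %*% w))
consensus   <- rowMeans(ens100)                # final consensus projection
uncertainty <- apply(ens100, 1, sd)            # per-cell SD, as in the text
```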
Integration of ENM predictions and set theory to estimate the relative importance of abiotic and biotic factors Once the A and B areas were estimated through the ENM predictions, we combined both areas to obtain the G_BI and BI areas for each entocytherid species, which were used for the diagnosis. For this, the consensus projection for each species was transformed into presence-absence binary data. Threshold selection was based on threshold optimization by the ROC method (Thuiller et al. 2013). Optimized threshold values from the evaluation of the ensemble models were averaged to obtain a consensus threshold per species. Then, we combined the binary consensus projection of each entocytherid (representing A) and its respective host (representing B) by a raster subtraction operation to highlight the G_BI and BI areas for both symbiont species. Finally, we evaluated each case based on the relative proportion and distribution of G_BI and BI in the geographical space of Europe. Exotic entocytherids found in Europe As a result of our field survey (Table 1; see also Table S1 in the Supporting Information), two entocytherid species were detected: Ankylocythere sinuosa and Uncinocythere occidentalis. A general view of a mating pair and morphological details of the latter species are shown in Fig. 2 (Aguilar-Alberola et al. (2012) already presented pictures of exotic populations of A. sinuosa). New detailed morphological information on the male copulatory apparatus of some of these European populations of both species is provided in the SI (see Fig. S1). Occurrences of A. sinuosa were widely distributed across the Iberian Peninsula (Fig. 3C), while those of U. occidentalis were located in the NE Iberian Peninsula and Central and N Europe (Fig. 4C). In all cases, the host species were P. clarkii for A. sinuosa and P. leniusculus for U. occidentalis, with one exception: a locality where U. occidentalis was found inhabiting P. clarkii, in a small pond in N Spain where all four species cohabited (LOC039 in Table S1). More specifically, both entocytherid species were found inhabiting the same P. clarkii specimen. For the remaining crayfish species sampled, no evidence of entocytherid occurrence was found in any case (Table 1), except for an unidentified entocytherid associated with Cherax quadricarinatus (von Martens, 1868) from a pet shop in Spain (LOC094 in Table S1). ENM predictions for the climatic suitability of exotic entocytherids and their hosts in Europe According to our consensus projections (Figs 3A,B and 4A,B), A. sinuosa was the species with the most limited climatically suitable European areas, restricted to circum-Mediterranean regions and some areas around the Black and Caspian Seas (Fig. 3A), while the climatically suitable areas for its host P. clarkii included, apart from these, a wider region of W Europe (Fig. 3B). In contrast, the climatically suitable areas for U. occidentalis and P. leniusculus occupied most of Europe, excluding the SW Iberian Peninsula, the highest altitudes of mountain ranges, N Fennoscandia, and the coastal lowlands around the Mediterranean region (Fig. 4A,B). AUC and uncertainty assessments The mean AUC values for the individual ENMs built using the same algorithm were higher than 0.8 in all cases (model types and species), with the highest scores obtained with GAMs (see Table S3 for further details on the algorithm effect estimates). However, the model for U. occidentalis had the greatest proportion of deviance not explained by algorithm type (Dev./Null Dev. × 100 = 64%).
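The binarization and subtraction operation described at the start of this passage can be sketched as follows; synthetic probability surfaces stand in for the consensus projections, and the thresholds are hypothetical placeholders:

```r
# Sketch of the binarization and raster subtraction used to map G_BI and BI.
library(raster)
set.seed(1)

symb <- raster(matrix(runif(100), 10, 10))  # consensus projection, symbiont (A)
host <- raster(matrix(runif(100), 10, 10))  # consensus projection, host (B)

thr_symb <- 0.42   # consensus ROC-optimized threshold, symbiont
thr_host <- 0.37   # consensus ROC-optimized threshold, host

A <- symb >= thr_symb   # binary climatic suitability for the symbiont
B <- host >= thr_host   # binary climatic suitability for the host

# Subtraction codes the cell states in one layer:
#   1 = suitable for the symbiont only (G_BI)
#  -1 = suitable for the host only (BI)
#   0 = suitable, or unsuitable, for both
diag_map <- A - B
freq(diag_map)    # relative proportions of G_BI and BI cells
```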
In concordance, the SD of the AUC values for the ensemble models was highest for U. occidentalis (SD = 0.008) (Table 2). Therefore, the U. occidentalis ENMs presented the greatest predictive instability according to the AUC assessment. In the uncertainty assessment, the SD values of the probability of species presence across the ensemble projections remained below 0.2 for all species (Figs 3C,D and 4C,D). Uncinocythere occidentalis (Fig. 4C) was the species with the most extensive areas of higher uncertainty. Integration of ENM predictions and set theory to estimate the relative importance of abiotic and biotic factors The two combinations of entocytherid-host binary consensus projections followed two different patterns (Fig. 5). The sinuosa-clarkii combination had larger areas with a climatic restriction for the entocytherid (BI) in W Europe and circum-Mediterranean regions, with only a few small areas with a restriction by host availability (G_BI) around the Black Sea (Fig. 5A). In the occidentalis-leniusculus species pair, G_BI occupied a wide range of S European, N African, and Middle Eastern areas, while the BI areas appeared mainly in N Europe around the Baltic Sea, with some small and diffuse areas in Central Europe associated with the highest altitudes of mountain chains (Fig. 5B). The two symbiont species thus followed two different distributional patterns of G_BI and BI areas, with a predominance of BI areas in the case of A. sinuosa, closer to the set model where B includes A (Figs 1B and 5A; case 2 in the Methods section). On the other hand, U. occidentalis presented a more balanced proportion of G_BI and BI areas, in accordance with a theoretical model of partial overlap between A and B (Fig. 1C; case 3 in the Methods section). Discussion In this work, after carrying out the first comprehensive evaluation of the presence and distribution of entocytherids inhabiting crayfishes (exotic and native) in Europe, we were surprised by the low number of species found, which included only two exotic but widely distributed species. For these two species, and in line with the main objective of this survey, that is, to compare the influence of biotic and abiotic factors on the spread of invasive symbionts, we analyzed the interactions between their climatically suitable area (A) and the suitable area according to host availability (B) using ENM techniques in a set theory framework, following Soberón and Peterson (2005). Therefore, for both ostracod symbionts, A. sinuosa and U. occidentalis, we first estimated their A and B areas (according to their climate envelopes and those of their exotic crayfish hosts P. clarkii and P. leniusculus) through ENM modeling. The combination of the A and B predictions allowed us to estimate the G_BI and BI areas, which turned out to be largely different between the two focal species, highlighting the importance of both biotic and abiotic factors in the expansion processes of exotic species with tight biological interactions. Crayfish-hosted entocytherids in Europe We demonstrated the wide-ranging presence of two exotic entocytherid species, Ankylocythere sinuosa and Uncinocythere occidentalis, in W Europe, previously observed in some localities of the E Iberian Peninsula in the case of A. sinuosa (Aguilar-Alberola et al. 2012) and in a German locality for U. occidentalis. Both species have been recorded in association with more than one host species in their native range, including those observed in Europe, P. clarkii and P. leniusculus, respectively.
Notably, both entocytherid species have been observed living on crayfish species belonging to two different families, Cambaridae and Astacidae, showing a broad taxonomic range of hosts. No other entocytherid species were found on the exotic crayfishes sampled in this study across Europe. In contrast, all the sampled American exotic crayfishes had previously been found with entocytherid associates in their native ranges; for example, 27 entocytherid species are associated with Procambarus acutus (Girard, 1852) (Mestre and Mesquita-Joanes 2013). Moreover, P. clarkii and P. leniusculus have each been found in association with four other entocytherid species. Our results agree with Torchin et al. (2003) regarding the effects of strong filters acting on parasites and other symbionts, such as entocytherids, in early invasive stages. The absence of native European entocytherids associated with autochthonous crayfish is reminiscent of a similar pattern in another group of crayfish ectosymbionts, the Temnocephalidae. These platyhelminths are widely distributed in the Neotropical, Ethiopian, Oceanic, and Oriental regions. In Europe, however, a few species are found living as symbionts on cave prawns and shrimps, but not on native crayfish (Gelder 1999). This absence of native ectosymbionts might facilitate the expansion of recently introduced species through host jumps, given the absence of competitors in their biotic niche. Nevertheless, exotic entocytherids have not been found on native European crayfish hitherto. The most probable reason is that the crayfish plague (A. astaci) hinders the coexistence of alien and native crayfish populations, because the latter quickly go locally extinct when infected with this parasite. An additional explanation might be the small size and high isolation of populations in native crayfish metapopulations, which make their potential colonization by exotic entocytherids difficult. Evaluation of the relative importance of climate and host availability in the geographical space of exotic symbionts In our models of the interactions between climate and host availability in the geographical space of exotic entocytherids in Europe, the two species showed different distributional patterns of G_BI and BI areas. Ankylocythere sinuosa had a predominance of BI areas, being closer to the model where A is included in B, and therefore this species seems to be restricted mainly by its own climatic tolerances. As shown by the analysis of the predictors, the climatic restrictions of A. sinuosa related to BI may be due to limitations at lower minimum temperatures, mainly affecting the BI areas of W and N Europe, and to precipitation seasonality in the circum-Mediterranean BI areas. This model suggests the existence of potential invasion areas with lower minimum temperatures or higher precipitation seasonality where the host, P. clarkii, with a wider tolerance to these climatic variables, could lose its entocytherid symbionts, with the consequent loss of the hypothetical benefit or harm caused by their interaction. On the other hand, U. occidentalis had a more balanced proportion of G_BI and BI areas, fitting the model of a partial overlap between A and B. The two area types affect different European regions, showing a spatial segregation of the two restriction types at a broad scale. The G_BI of U. occidentalis, occupying S European, N African, and Middle Eastern areas, may be related to the limitations of P. leniusculus under higher precipitation seasonality and maximum temperatures.
The interpretation of the limitations related to the BI areas of U. occidentalis is more difficult to ascertain owing to the poor fit of the GLM model for the effect of the climatic predictors on the probability of presence of this species. Thus, in this model of a partial overlap between A and B, the G_BI areas imply the existence of potential areas that U. occidentalis could invade if it were able to jump to other exotic or native crayfish species with a better tolerance than P. leniusculus to higher precipitation seasonality and maximum temperatures. This possibility cannot be excluded given the group's low host specificity, that is, one entocytherid species can inhabit more than one host species (Hart and Hart 1974), including crayfish hosts belonging to different families, as was also evidenced in this study, in which we found a locality where U. occidentalis was associated with P. clarkii, a host meeting those requirements (tolerance to higher precipitation seasonality and maximum temperatures). We have shown an example of spatial analyses combining ENM and a BAM theoretical framework, applied to evaluating the relative importance of climate and host availability in the geographical space of exotic symbionts. The two area types that characterize our models, that is, G_BI, where the symbiont is specifically restricted by host availability, and BI, where it is specifically restricted by its own climatic tolerances, apart from their capacity to act as ecological barriers against the symbiont's geographical expansion, may have other implications in the invasive process of symbionts and their hosts. In BI areas, typical of a model where the symbiont has a much more restricted tolerance to abiotic conditions than its host (B includes A), climatic barriers could act as a host "cleaning" mechanism, so that the host could lose its symbiont, with the consequent loss of hypothetical benefits or harms derived from the association that may affect the invasive capacity of the host in these areas. Conversely, G_BI areas, characterizing the model where the symbiont has a broader abiotic tolerance than its host (A includes B), may potentially be invaded by the exotic symbiont in the case of a hypothetical host jump to other host species (native or exotic), an event that may give rise to a conservation issue threatening the native host species through "spillover" effects (Roy and Handley 2012; Strauss et al. 2012). Practically all species are affected by symbiotic organisms. This type of research approach therefore contributes to a better understanding of invasion processes and could be applied to conservation plans for native species as potential hosts of exotic symbionts. In particular, the crayfish-symbiont system is of special interest for crayfish conservation. Figure 5. Combined entocytherid-host binary transformed consensus projections for the species pairs (A) Ankylocythere sinuosa and Procambarus clarkii and (B) Uncinocythere occidentalis and Pacifastacus leniusculus, showing the areas climatically suitable for the symbiont but unsuitable for its host (G_BI) and the areas climatically unsuitable for the symbiont but suitable for the host (BI), in Europe (12°W-60°E; 30°N-75°N). The maps have a 5-arcmin resolution and a Mollweide equal-area projection (see Fig. 1 and text for definitions of the G_BI and BI areas; colors for G_BI and BI as in Fig. 1).
Taking into account the hypothetical jump of exotic entocytherids to European native crayfish: although the main hypothesis for the entocytherid-crayfish relationship is commensalism, this has not been rigorously tested, and the line between commensalism and parasitism is often very narrow (Poulin 2007). Moreover, even if they are shown to be strictly commensal, the role of entocytherids as vectors for parasites and diseases is another possibility that should be considered. Indeed, a rich fauna has been observed in association with ostracods (Mesquita-Joanes et al. 2012), which can act as intermediate hosts of parasites (e.g., Grytner-Ziecina 1996; Moravec 2004). In this sense, we wish to draw attention to the possibility of a hypothetical host jump of exotic entocytherids to European native crayfish. Given the low host specificity of entocytherids (Hart and Hart 1974) and the experimentally demonstrated horizontal transfer between adult crayfishes (Young 1971), such a jump is quite likely. The potential negative effects of this event on crayfish conservation remain unknown. In this sense, we have shown the role of climate and host availability as factors limiting the expansion of the exotic entocytherid species, and we have identified the new potential areas that the entocytherids could invade if a host jump to native crayfish were to occur, information that can be used for a better assessment of the process. Approach limitations and recommendations An important issue with these methods and, in general, with ENM approaches applied to invasion biology, arises from A being calculated by ENMs based on environmental predictors without considering the biotic interactions that actually modulate the species distribution from which those predictors are obtained. Therefore, we do not estimate A; we actually estimate A ∩ B_GR, where B_GR represents the geographical areas suitable for species existence according to all the biotic interactions within the global range (the same applies to A_H). For example, our estimation of BI for A. sinuosa and U. occidentalis could be an overestimation of the real BI owing to the existence of geographical restrictions within their native range caused by competition with other entocytherids, considering that five different species have been found associated with native populations of both P. clarkii and P. leniusculus. Thus, in Europe, the lack of competitors might allow the exotic entocytherids to invade part of those overestimated BI areas, which were derived from data obtained mainly from native regions affected by interspecific competition. In that case, the A estimated in our models would actually correspond to the climatically suitable European areas for the entocytherid, considering all the hosts it inhabits and the restrictions from competitive interactions with other entocytherids within the global range (A ∩ B_GR) (the same may apply to A_H). This is, in fact, a general issue with ENMs, and in most datasets environmental effects are confounded with those of competitors and mutualists (Elith and Leathwick 2009). The inclusion of occurrence data from invaded ranges, as we did here, and the design of laboratory experiments on species tolerances to the environmental predictors may help to estimate the A areas of the BAM geographical space rigorously, minimizing this problem. The ENM uncertainty assessment reveals that the G_BI and BI geographical areas coincide in most cases with the areas of higher predictive uncertainty (compare Fig. 5 with Figs 3C,D and 4C,D).
The reason is probably that these areas usually lie close to the boundaries of the predicted species distributions, which are more prone to predictive instability. The estimation of G_BI and BI is therefore especially sensitive to ENM accuracy, and these methods should consequently be based on ENMs with good performance. Along these lines, our assessment of three aspects of ENM performance (i.e., predictor performance, AUC, and uncertainty) gives us evidence of weak ENM performance for the U. occidentalis models: this was the only species with an inadequate fit of the climatic predictors, and it showed the highest predictive instability according to the AUC assessment through the GLMs (a larger proportion of deviance not explained by algorithm type) and the highest ENM predictive uncertainty based on the variability shown by the ensemble projections (wider areas with higher variability). These results strongly suggest that our estimation of G_BI and BI for this species could be affected by the poor performance of the ENMs for U. occidentalis, probably due to the lower number of occurrences available for this species. As we have shown, a good ENM assessment is essential to analyze the interactions between abiotic and biotic factors in geographical space. Assessing the performance of the ENM predictors provides useful information about the effects of each individual predictor for each species and can be combined with the results of the niche models to better understand which specific variables could be involved in the restrictions present in the different G_BI and BI areas. The use of two different approaches to assess ENM performance based on ensemble modeling techniques (i.e., AUC and uncertainty assessments) gives stronger support to our results; finally, the uncertainty assessment is especially valuable because it helps us to locate the areas with higher predictive instability, which can then be compared with the G_BI and BI areas to assess the reliability of our estimations. The methodological approach presented in this work, focused on a symbiont-host system, can also be applied to other systems in which the target species is strongly affected by interactions with other species. The range of possibilities includes different kinds of mutualisms, predators with a strong dependence on a specific prey, or species with incompatibilities with the presence of specific predators, parasites, or competitors. The data required to develop this kind of model are a global occurrence dataset for the interacting species and a global climatic dataset at a large extent and coarse resolution. The first step of the analyses, the implementation of set theory, is especially important because it allows a wide variety of theoretical contexts to adapt the models to the particular biological question proposed, for example, the inclusion of dispersal barriers affecting species expansion through the use of M, or the consideration of more than one interacting species to estimate B. The generalization of our approach to species without tight biotic relationships would require further development of this methodology because, in those cases, the B areas do not depend only on the presence of the interacting species; other parameters would be implicated, such as species densities or the existence of interactions between the environmental conditions and the effect of the biotic interaction.
Finally, when applying this kind of model, we must not lose sight of the fact that we are dealing with dynamic systems (Larson and Olden 2012; Lu et al. 2013). Supporting Information Additional Supporting Information may be found in the online version of this article: Figure S1. Drawings of male copulatory organs of the entocytherid species found during the field survey. Table S1. Detailed European crayfish data checked for entocytherids in this work. Table S2. GLM results for the performance of the climatic predictors. Table S3. GLM results for the fixed algorithm effects on AUC.
2018-04-03T03:29:02.960Z
2013-12-01T00:00:00.000
{ "year": 2013, "sha1": "aee97a33c9e4bd892e7423f2c94ed78ab4e90407", "oa_license": "CCBY", "oa_url": "https://www.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.897", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aee97a33c9e4bd892e7423f2c94ed78ab4e90407", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
20163247
pes2o/s2orc
v3-fos-license
Precision measurement of the η-meson mass at COSY-ANKE A value for the mass of the η meson has been determined at the COSY-ANKE facility through the measurement of a set of deuteron laboratory beam momenta and associated 3He center-of-mass momenta in the dp → 3He X reaction. The η meson was identified by the missing-mass peak and the production threshold determined. The value obtained, m_η = (547.873 ± 0.005_stat ± 0.027_syst) MeV/c², is consistent and competitive with other recent measurements, in which the meson was detected through its decay products. Recent measurements of the η meson mass performed at different experimental facilities have yielded very precise data that nevertheless differ by up to more than ten standard deviations, i.e. 0.5 MeV/c² [1]. Experiments in which the η meson was identified through a missing-mass peak in a hadronic production reaction have all reported a lower value for the mass, in contrast to experiments reconstructing the η through the detection of its decay products. In order to clarify this situation, a new and more refined missing-mass measurement using the ANKE spectrometer at COSY, the COoler SYnchrotron of the Forschungszentrum Jülich, has been carried out [2][3]. For a simple η mass determination with a missing-mass analysis, in principle the kinematics of a two-body reaction like dp → 3He η have to be measured at only one fixed accelerator beam energy. However, the η mass can be determined more precisely through a determination of the production threshold. Therefore, twelve data points at low excess energies in the range Q = 1−11 MeV were investigated. The final-state momentum p_f of the 3He particles in the center-of-mass (c.m.) frame, p_f(s) = √{[s − (m_3He + m_η)²][s − (m_3He − m_η)²]} / (2√s), (1)
measured with the ANKE spectrometer, is very sensitive to the η mass and to the total energy √s, the latter being completely defined in a fixed-target experiment by the masses of the initial particles and the laboratory momentum p_d of the deuteron beam: s = m_d² + m_p² + 2m_p√(p_d² + m_d²). (2) The final-state momentum p_f = p_f(p_d, m_η) therefore depends only on the beam momentum, the η mass, and other well-measured masses. If one can fix the production threshold, p_f(s) = 0, the η mass can then be determined from knowledge of the corresponding p_d. For a robust threshold extrapolation, both quantities (p_d, p_f) have to be measured with the highest accuracy. The beam momentum for each fixed excess energy was determined using the resonant depolarization technique, a method developed at the electron-positron machine VEPP in Novosibirsk that exploits the spin dynamics of a polarized beam [4]. Here the spin precession of a relativistic particle is disturbed by an artificial spin resonance induced by the horizontal rf magnetic field of a solenoid, leading to a depolarization of the polarized accelerator beam. The depolarizing resonance frequency f_r depends on the kinematical γ-factor (i.e., through p = m√(γ² − 1), on the beam momentum) and on the beam revolution frequency f_0 via the resonance condition f_r = f_0(k + γG), where k is an integer and G the gyromagnetic anomaly of the beam particle. By measuring these two frequencies, the beam momentum can be determined with a precision below Δp/p < 10⁻⁴. This method was used for the first time at COSY with a vector-polarized deuteron beam, and the accuracy could be increased by more than an order of magnitude compared with the conventional method. The momenta in the threshold range of 3.1−3.2 GeV/c were determined with an accuracy of Δp/p = 3 · 10⁻⁵, i.e., with ≈ 100 keV/c [5]. The correct final-state momenta of the 3He nuclei for the twelve different energies of the reaction dp → 3He η can only be extracted if two conditions are fulfilled: i) a precise detector calibration and ii) a clear identification of the reaction of interest. At ANKE the produced 3He nuclei can be identified using energy-loss and time-of-flight information. In this way the background, consisting mainly of protons and deuterons from dp elastic scattering and deuteron break-up, can be suppressed effectively. For a two-body reaction at a fixed center-of-mass energy √s, the final-state momenta in the c.m. frame are distributed on a momentum sphere of constant radius p_f, which can be visualized by plotting the reconstructed transverse c.m. momentum p_⊥ against the longitudinal c.m. component p_z, as shown in Fig. 1a. According to Eq. 1, one expects a centered circle with a fixed radius, indicated in Fig. 1a as solid lines. The fact that the ANKE facility has full geometrical acceptance for the reaction dp → 3He η near threshold, up to excess energies of 15 MeV, allows one to verify and improve the detector calibration by studying the kinematics of this two-body reaction. The principle of the refined spectrometer calibration is the requirement that the momentum sphere should be completely symmetric in p_x, p_y, and p_z (or ϑ and φ). It is therefore necessary to study the reconstructed momentum p_f carefully as a function of the polar and azimuthal angles ϑ and φ. Deviations from the symmetric shape indicate the need for an improvement of the standard ANKE calibration. This requires a clean separation of the 3He η signal from the background.
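To make the threshold kinematics of Eqs. 1 and 2 concrete, a short numerical sketch follows; it assumes approximate PDG mass values (inserted only for illustration) and is not the ANKE analysis code:

```r
# Fixed-target kinematics for d p -> 3He eta: Eq. 2 for s(p_d) and Eq. 1
# for the c.m. final-state momentum. Masses are approximate PDG values
# in MeV/c^2.
m_p   <- 938.272
m_d   <- 1875.613
m_3he <- 2808.391
m_eta <- 547.862

s_of_pd <- function(pd) m_d^2 + m_p^2 + 2 * m_p * sqrt(pd^2 + m_d^2)  # Eq. 2

pf_of_pd <- function(pd, meta = m_eta) {                              # Eq. 1
  s <- s_of_pd(pd)
  sqrt((s - (m_3he + meta)^2) * (s - (m_3he - meta)^2)) / (2 * sqrt(s))
}

# Threshold beam momentum: the p_d at which s reaches (m_3he + m_eta)^2
pd_thr <- uniroot(function(pd) s_of_pd(pd) - (m_3he + m_eta)^2,
                  interval = c(3000, 3300))$root
pd_thr            # ~3141 MeV/c, consistent with the value quoted below
pf_of_pd(3200)    # c.m. 3He momentum slightly above threshold, in MeV/c
```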
The background remaining after the selection of 3He nuclei (see Fig. 1b), originating mainly from multi-pion production, can be subtracted using data taken below the η production threshold at an excess energy of Q ≈ −5 MeV but analyzed as if they were taken above it. The details of this technique are described for missing-mass spectra in Ref. [6], but the method is equally applicable to p_f spectra. Due to the very high statistics of the current experiment, the distribution in p_f could be investigated in twenty bins each in ϑ and φ. This is illustrated in Fig. 1b, where examples of the p_f spectra summed over φ are shown for six cos ϑ bins at the energy closest to threshold, Q = 1.1 MeV. A similar picture is found for the φ dependence after summing over ϑ. Mean values of the 3He momentum p_f and peak widths for the different ϑ and φ bins were extracted from the background-subtracted dp → 3He η distributions by making Gaussian fits. A variation of the width of 4−12 MeV/c (rms) was found, as well as a displacement of the mean value. This striking effect results from the different resolutions of the ANKE forward detector system in p_x, p_y, and p_z. Assuming that the individual momentum components are Gaussian distributed with different widths, Monte Carlo simulations have shown that the final-state momentum, i.e., the shape of the momentum sphere, is stretched toward cos ϑ → ±1 and oscillates in φ. However, p_f should be symmetric in cos ϑ and φ. The mean values of the measured p_f for the background-subtracted dp → 3He η distributions are shown in Fig. 2a for twenty individual cos ϑ and φ bins, before and after the improvement of the calibration. The results for the standard calibration, presented in the upper panels, show that the momentum sphere is neither centered nor symmetric. The momentum sphere is shifted to higher p_z, i.e., on average p_f is higher for 3He produced in the forward direction than in the backward direction. The oscillations in the φ spectrum are also far from symmetric, and this is particularly evident at φ ≈ ±90°, where the p_y momentum component dominates. This asymmetric pattern is rather similar at all twelve energies, which stresses the need for an improvement of the detector alignment. By making minor changes to the ANKE calibration parameters such that the mean values of the final-state momenta are distributed on a centered and perfectly symmetric sphere in cos ϑ and φ, the detector alignment can be significantly improved, as shown in the lower part of Fig. 2a. This procedure was carried out using the data at all twelve energies simultaneously.
The improved spectra, shown in the lower half of Fig. 2a for one of the twelve energies, allow one to study the momentum resolution in the three directions. With the extracted momentum resolution parameters, the data can be described very well by Monte Carlo simulations (black crosses). The determination of the η mass has to take these kinematic resolution effects into account because, without doing so, the value extracted for m_η would depend on the production angle. For the current ANKE experiment, differences in m_η of up to 0.5 MeV/c² are found between cos ϑ = ±1 and cos ϑ = 0. Owing to the resolution effects shown in the lower half of Fig. 2a, the reconstructed average of the final-state momentum over all cos ϑ and φ is shifted to a higher value than the true one (black horizontal line). By comparing the averages resulting from Monte Carlo simulations with and without momentum smearing, correction parameters were calculated for all twelve energies. The correction is about 2.22 MeV/c for the lowest momentum and decreases steadily with p_f to 0.7 MeV/c for the highest. Compensation for the momentum resolution effects is essential for an accurate determination of the production threshold; without this correction, the value obtained for the η mass would be lower by about 150 keV/c². The good statistics of ≈ 1.3 × 10⁵ 3He η events at each energy meant that the value of p_f could be extracted with an uncertainty of < 100 keV/c, although the overall uncertainty is dominated by the precision of the correction parameters. To obtain a robust value for the mass of the η meson, it is necessary to extrapolate the experimental data set (p_d, p_f) in order to determine the value of the deuteron beam momentum at threshold. The extrapolation of the data to threshold is illustrated in Fig. 2b for both p_f and p_f² versus p_d. Whereas, to first order, p_f² depends linearly on p_d, the analysis considers the full dependence p_f = p_f(m_η, p_d), as given by Eqs. 1 and 2. Only the η mass, chosen as a free parameter, defines the production threshold. The overall fit to the data in Fig. 2b has a χ²/NDF = 1.28, and the best value of the mass is m_η = (547.873 ± 0.005) MeV/c², where the error is primarily statistical. The corresponding deuteron beam momentum at threshold is p_d = (3141.688 ± 0.021) MeV/c. By far the dominant systematic errors arise from the determination of the absolute value of the beam momentum and of the p_f correction parameters. All other sources, such as effects from the time stability of the data, further contributions from the fine calibration, the event selection, the background subtraction for the p_f distributions, as well as contributions of the η mass assumed in the Monte Carlo simulations, are negligible in comparison. The uncertainty in the beam momentum translates into a corresponding one in the mass which, combined with that of the correction parameters, makes up the total systematic error of ±0.027 MeV/c² quoted in the final result. The value obtained at COSY-ANKE differs by about 0.5 MeV/c² from earlier missing-mass evaluations and is consistent with all the recent measurements in which the meson decay products were studied. The precision achieved is similar to that of these works, and the deviation from the PDG best value [1] is only 20 keV/c². The COSY-ANKE result [3] shows that, with care, a missing-mass approach can be competitive with experiments in which meson decays are measured. Fig. 1. a) The reconstructed transverse c.m. momentum p_⊥ plotted against the longitudinal c.m.
component p_z for an excess energy Q = 6.3 MeV with respect to the η threshold. The small and large circles correspond to the kinematic loci for the dp → 3He η and dp → 3He π0 reactions, respectively. ANKE covers the full solid angle for η production near threshold whereas, for pions, only the forward- and backward-going 3He are detected. b) c.m. distributions of the 3He momentum from the dp → 3He X reaction for six typical polar-angle bins at the lowest excess energy measured, Q = 1.1 MeV. The experimental data summed over φ are shown by black lines and the background estimated from sub-threshold data by gray lines. The resulting background-subtracted dp → 3He η signal is shaded gray. Fig. 2. a) Mean values of the p_f distributions for individual cos ϑ and φ bins for the standard (top) and improved (bottom) calibration at Q = 1.1 MeV (red circles). The results of Monte Carlo simulations are shown without (black horizontal line) and with momentum smearing (black points). The comparison of the data with the simulation leads to a determination of the momentum resolutions in p_x, p_y, and p_z. b) Final-state momentum p_f (black crosses) and p_f² (red stars) plotted against the deuteron laboratory momentum p_d. The extrapolation to threshold is carried out on the basis of Eq. 1. The lower panel shows the deviations of the experimental data from the fitted curve in p_f. The errors shown here do not include the overall systematic uncertainty.
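The threshold extrapolation described above, with m_η as the only free parameter of p_f(p_d, m_η), can be sketched numerically. The data points below are synthetic stand-ins, not the ANKE measurements, and pf_of_pd() is the function from the previous kinematics sketch:

```r
# Threshold-extrapolation sketch: fit m_eta to synthetic (p_d, p_f) data.
set.seed(2)
pd_data <- seq(3147, 3185, length.out = 12)          # MeV/c, twelve energies
pf_data <- pf_of_pd(pd_data) + rnorm(12, sd = 0.1)   # toy measurement noise

fit <- nls(pf_data ~ pf_of_pd(pd_data, meta),
           start = list(meta = 547.5))
summary(fit)$coefficients    # fitted eta mass and its statistical error
```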
2017-12-31T16:39:04.584Z
2012-12-01T00:00:00.000
{ "year": 2012, "sha1": "adad841f03a69a76bd4b38b3970dfed37fc301db", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2012/19/epjconf_meso2012_03004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "930f9b6ff1f2fdb64958f8a557b5981adb1fe346", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210862705
pes2o/s2orc
v3-fos-license
Combined in vitro IL-12 and IL-15 stimulation promotes cellular immune response in dogs with visceral leishmaniasis Domestic dogs are the main reservoir of Leishmania infantum, a causative agent of visceral leishmaniasis (VL). The number of human disease cases is associated with the rate of canine infection. Currently available drugs are not efficient at treating canine leishmaniasis (CanL), and months after treatment most dogs show disease relapse; therefore, the development of new drugs or new therapeutic strategies should be sought. In CanL, dogs lack the ability to mount a specific cellular immune response suitable for combating the parasite, and manipulation of cytokine signaling pathways has the potential to form part of effective immunotherapeutic methods. In this study, recombinant canine cytokines (rcaIL-12, rcaIL-2, rcaIL-15 and rcaIL-7) and the soluble receptor IL-10R1 (rcasIL-10R1), with antagonistic activity, were evaluated for the first time in combination (rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1, rcaIL-15/rcaIL-7) or alone (rcasIL-10R1) to assess their immunomodulatory capacity in peripheral blood mononuclear cells (PBMCs) from dogs with leishmaniasis. All the combinations of recombinant proteins tested were shown to improve the lymphoproliferative response. Further, the combinations rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15 promoted a decrease in programmed cell death protein 1 (PD-1) expression in lymphocytes. These same combinations of cytokines, and rcaIL-12/rcasIL-10R1, induced IFN-γ and TNF-α production in PBMCs. Furthermore, the combination IL-12/IL-15 led to an increase in T-bet expression in lymphocytes. These findings are encouraging and indicate the use of rcaIL-12 and rcaIL-15 in future in vivo studies aimed at achieving polarization of cellular immune responses in dogs with leishmaniasis, which may contribute to the development of an effective treatment against CanL.
Introduction The zoonotic form of visceral leishmaniasis (VL) is caused by the obligate intracellular protozoan Leishmania infantum (syn. L. chagasi in the Americas) [1,2]. VL is the most severe form of leishmaniasis and is fatal in 95% of untreated cases [3]. VL is distributed worldwide, occurring mainly in tropical and subtropical regions, with approximately 300,000 new infections each year and an estimated 20,000 to 50,000 deaths [4]. Domestic dogs are considered the main reservoir of the parasite in urban areas [5]. In endemic areas, there is a correlation between the prevalence of seropositive dogs and the number of human cases of VL [6][7][8], suggesting that controlling infection and/or disease in dogs (CanL) could contribute to effectively curbing human disease [8]. The treatments currently available for CanL have leishmanicidal and leishmaniostatic effects [9] and lead to a reduction in parasite load and infectiousness and to the resolution of clinical signs [10]. However, most dogs remain infected and experience disease relapse months after treatment withdrawal, once again becoming a source of parasites for other healthy dogs and for human beings [10]. The frequent disease relapses following currently available therapy suggest that new drugs or therapeutic approaches for CanL, such as the association of existing drugs with immunostimulants, should be sought [11]. Untreated asymptomatic dogs (generally resistant to infection by L. infantum) develop an efficient cellular immune response (Th1), with simultaneous production of IFN-γ, IL-2, and IL-12 [12][13][14] and activation of leishmanicidal mechanisms in infected macrophages [15,16]. In contrast, symptomatic dogs (susceptible to the infection) mount an exacerbated humoral immune response (Th2) that may be accompanied by increased production of IL-10 [17]. In addition, susceptible dogs present increased expression of programmed cell death 1 (PD-1) and PD-1 ligands (PD-L1 and PD-L2) in splenic cells [18]. Such heightened expression of PD-1 and PD-1 ligands may suppress lymphoproliferation and alter the production of Th1 cytokines, contributing to the development of the disease [19]. Manipulations of certain cytokine signaling pathways may favor control over the parasite in infected individuals [12,13,17,[20][21][22][23]. Interestingly, most human beings who mount inappropriate adaptive immune responses for combating L. infantum and develop the disease reprogram their specific immune responses after treatment with pentavalent antimonials or amphotericin B [21,22], keep parasite replication under control, and show no disease recurrence. Human patients with VL lack the ability to mount a lymphoproliferative response and to produce IFN-γ following in vitro stimulation of peripheral blood mononuclear cells (PBMCs) with soluble Leishmania antigens (SLA), which may relate to the development of the disease [21]. However, when PBMCs from such patients are stimulated with SLA in combination with recombinant human interferon gamma (rhuIFN-γ) and interleukin-2 (rhuIL-2), their lymphoproliferative response is restored [21].
Further, stimulation of PBMCs with SLA together with rhuIL-12, or blocking signaling with anti-IL-10 antibodies, results in both restoration of the lymphoproliferative response and production of IFN-γ [21,22]. In naturally L. infantum-infected sick dogs, treatment of PBMCs with rcaIL-12 generates an increase in IFN-γ mRNA expression or protein production [20,24] and a tendency toward an enhanced lymphoproliferative response to SLA [20]. To our knowledge, studies evaluating the activities of IL-7 in humans or dogs with visceral leishmaniasis have not yet been conducted. Although interfering with a single cytokine pathway, with agonistic or antagonistic molecules, can drive responses in cells of the immune system, simultaneous intervention in two or more cytokine signaling pathways may elicit stronger responses in these cells, even in settings with low cytokine concentrations [25][26][27][28]. Successful attempts to modify immune responses in human or dog PBMCs using combinations of cytokines with additive or synergistic effects have already been made. For instance, rhuIL-15 combined with rhuIL-12 promotes higher levels of IFN-γ than rhuIL-15 or rhuIL-12 alone and may generate effective responses to infections caused by intracellular parasites [29]. In addition, rcaIL-12 and rcaIL-2 together stimulate efficient production of IFN-γ, whereas rcaIL-12 or rcaIL-2 alone do not induce this effect or do so only to a limited extent [30,31]. In this study, canine recombinant proteins (rcaIL-12, rcaIL-2, rcaIL-15, rcaIL-7, and rcasIL-10R1, the last with antagonistic activity) were evaluated for their capacity to reprogram responses in PBMCs from dogs with leishmaniasis. These recombinant proteins were assessed in combination or alone. The responses studied were lymphoproliferation, the presence of PD-1 on the lymphocyte surface, the production of IFN-γ, TNF-α, IL-10, and NO2, and the synthesis of T-bet and GATA3 in lymphocytes. Recombinant proteins capable of stimulating macrophages to control the replication of, or destroy, L. infantum could have a positive impact on the development of immunotherapeutic protocols for CanL. Five healthy dogs from Araçatuba, São Paulo, with negative results for the detection of Leishmania DNA by real-time PCR, and with complete blood counts and mean serum biochemistry parameters within reference ranges, were used as negative controls. These dogs were pet animals, and their owners gave written permission for the experimental procedures. Ten dogs that showed at least three of the following clinical signs of CanL were selected from the Araçatuba Zoonosis Control Center: onychogryphosis, cachexia, ear-tip injuries, periocular lesions, alopecia, skin lesions, or lymphadenopathy (see supplementary material, S1 Table). Animal screening and sample collection Blood samples from both groups, healthy controls and diseased dogs, were collected in tubes without EDTA to obtain serum for the evaluation of biochemical profiles (S2 Table) and to perform indirect ELISA (S1 Table) for the detection of anti-Leishmania antibodies [32]. Blood was also collected in EDTA tubes for complete blood counts (CBC) (S3 and S4 Tables) and PBMC isolation. Real-time PCR for the detection of Leishmania DNA was performed on canine blood samples using a calibration curve obtained from the DNA of 10² to 10⁷ Leishmania promastigotes, as previously described [33].
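As an illustration of the calibration-curve quantification mentioned above, a minimal sketch follows; the CT values are hypothetical placeholders, not the study's standards, and the log-linear form is a standard qPCR assumption:

```r
# qPCR calibration sketch: fit CT vs. log10(parasite genome equivalents)
# for serial dilutions of 10^2-10^7 promastigotes, then convert sample
# CTs back to parasite numbers. All CT values are hypothetical.
std <- data.frame(parasites = 10^(2:7),
                  ct        = c(33.5, 30.2, 26.9, 23.6, 20.3, 17.0))

cal <- lm(ct ~ log10(parasites), data = std)   # CT = b0 + b1 * log10(N)
b   <- coef(cal)

ct_to_parasites <- function(ct) 10^((ct - b[1]) / b[2])

ct_to_parasites(27.7)   # e.g. the mean CT reported for the diseased dogs
```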
Serum samples were also tested for Dirofilaria immitis antigens and for antibodies reactive to Anaplasma phagocytophilum/Anaplasma platys, Ehrlichia canis/Ehrlichia ewingii, and Borrelia burgdorferi using the SNAP 4Dx Plus rapid test (IDEXX Laboratories, Inc., USA), in accordance with the manufacturer's recommendations. In addition, blood samples were tested for Ehrlichia spp. DNA by conventional PCR using a slightly modified protocol previously described by Labruna et al. 2007 [34]. Flow cytometry analysis for labeling PD-1, T-bet and GATA3 in PBMCs To detect PD-1 expression, PBMCs were suspended in PBS containing 1% bovine serum albumin, 0.1% sodium azide, and 20% fetal bovine serum to block the Fc receptor (FcR). Cells were then mixed with either the PE-conjugated monoclonal anti-human CD279 (PD-1) antibody [18,19] or an isotype control (BD Pharmingen, USA), according to the manufacturer's instructions. Ten thousand events were acquired on the FL2 channel of a flow cytometer, and data analysis was performed as described above in the lymphoproliferation assay section (S2 Fig). To evaluate T-bet and GATA3 expression, PBMCs were fixed and permeabilized with a commercial buffer (eBioscience, USA), according to the manufacturer's instructions. Cells were mixed with the FITC-conjugated anti-human monoclonal antibody against T-bet (R&D Systems) and the PE-conjugated anti-human monoclonal antibody against GATA3 (R&D Systems), or with control isotypes (R&D Systems), according to the manufacturer's instructions. The similarity between the human (GenBank accessions NP_037483 and CAA38877) and canine (XP_548164 and XP_005617214) T-bet and GATA3 proteins is 93% and 96%, respectively. Ten thousand events were acquired on channels FL1 and FL2, and cytometric analysis was performed as described above in the lymphoproliferation assay section (S3 Fig). NO2 determination As a surrogate marker of nitric oxide production, the Griess method was used to determine nitrite concentrations in supernatants of PBMCs cultured for 5 days with or without the addition of SLA and/or combinations of recombinant proteins [41]. For this, 50 μL of culture supernatant was added to 50 μL of Griess reagent (one part 0.1% NED and one part 1% sulfanilamide in 5% phosphoric acid). After 5 min of incubation at room temperature, optical density readings were taken at 540 nm using a 96-well plate reader (Spectra Count, Packard BioScience Company, Meriden, CT, USA). Nitrite concentrations in cell culture supernatants were determined using a standard sodium nitrite curve (range: 3-200 μM); a worked calibration sketch appears at the end of this passage. Statistical analysis Statistical analysis was performed using GraphPad Prism v6 software (GraphPad Software, Inc., La Jolla, CA, USA). All statistical variables were tested for normality using the Shapiro-Wilk test. To compare values corresponding to lymphoproliferation, the expression of PD-1, T-bet, GATA3, IL-10, IFN-γ, and TNF-α, and NO2 production within groups, Friedman's test with Dunn's post-test was used. The Mann-Whitney test was used to compare results among groups. Values were considered significant when p < 0.05. Clinical and laboratory findings in naturally infected animals Dogs selected from the Araçatuba Zoonosis Control Center showed at least three signs compatible with CanL, including onychogryphosis and skin lesions (in 7 out of 10 dogs), lymphadenopathy (6 out of 10), periocular lesions and cachexia (5 out of 10), alopecia (4 out of 10), and ear-tip lesions (3 out of 10). Negative control animals showed no clinical manifestations (S1 Table).
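As referenced in the Griess assay description above, here is a minimal calibration sketch; the OD values are hypothetical placeholders, and linearity of OD in nitrite concentration is the usual working assumption for this assay:

```r
# Griess calibration sketch: fit a line to the sodium nitrite standard
# curve (3-200 uM) and convert sample ODs at 540 nm to concentrations.
# All OD values are hypothetical.
std <- data.frame(conc = c(3, 6, 12, 25, 50, 100, 200),      # uM nitrite
                  od   = c(0.02, 0.04, 0.08, 0.16, 0.31, 0.60, 1.18))

cal <- lm(od ~ conc, data = std)        # OD assumed linear in nitrite conc.
b   <- coef(cal)

od_to_conc <- function(od) (od - b[1]) / b[2]

od_to_conc(c(0.10, 0.45))               # two hypothetical supernatant ODs
```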
All 10 diseased dogs, but none of the five negative controls, presented anti-Leishmania antibodies (ELISA OD, mean ± standard deviation, infected dogs: 0.88 ± 0.38 vs. negative controls: 0.10 ± 0.05, cut-off value: 0.27) (S1 Table) and Leishmania DNA (real-time PCR, mean CT value: 27.7; Leishmania DNA calibration curve CT value range: 13.2-33.7). Furthermore, the infected dogs presented statistically significant reductions in RBC counts, hematocrit, hemoglobin and serum albumin concentrations and in the albumin/globulin ratio, as well as increased serum globulin concentrations, in comparison to negative controls (S2-S4 Tables). Based on clinical signs and laboratory findings, the diseased dogs showed moderate disease manifestations, classified as clinical stage II leishmaniasis according to Solano-Gallego et al., 2009 [42]. In 8/10 diseased dogs, antibodies specific for Ehrlichia spp. were detected by rapid testing. None of these dogs presented Dirofilaria immitis antigens or antibodies specific to Anaplasma phagocytophilum/Anaplasma platys or Borrelia burgdorferi. Conventional PCR carried out on blood samples failed to reveal Ehrlichia spp. DNA in either the diseased or the control dogs.

Combinations of recombinant canine proteins, or rcasIL-10R1 alone, induced lymphoproliferation

In dogs naturally infected with leishmaniasis, the ability to mount a lymphoproliferative response is limited after PBMCs are stimulated with Leishmania antigens [12,43,44]. In an attempt to develop protocols to promote a lymphoproliferative response in these dogs, combinations of rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1 and rcaIL-15/rcaIL-7, or rcasIL-10R1 alone, were tested. PBMCs from healthy or infected dogs were cultured with or without the recombinant proteins, with or without the addition of SLA, or in the presence of PHA alone, for five days. The mean fluorescence intensities (MFI) of CFSE-labeled lymphocytes were determined under each condition. Reductions in CFSE fluorescence were considered an indicator of cell proliferation [39]. In healthy dogs, lymphoproliferation was observed when PBMCs were cultured with PHA or with the combination of rcaIL-12/rcaIL-15, with or without the addition of SLA (Fig 1A). In Leishmania-infected dogs, although CFSE-labeled lymphocytes cultured with PHA showed some reductions in MFI, statistical significance was not observed (Fig 1B). Interestingly, the lymphocytes from diseased dogs exhibited a proliferative response when cultured with each of the combinations of recombinant proteins tested (rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1, rcaIL-15/rcaIL-7), as well as with rcasIL-10R1 alone, regardless of the addition of SLA to cultures (Fig 1B).

Combinations of rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15 promote decreases in lymphocyte PD-1 expression

The inability of lymphocytes from dogs with leishmaniasis to proliferate and produce Th1 cytokines may be associated, at least in part, with increased PD-1 expression, which promotes apoptosis during the course of infection [18,19]. To assess whether interference in cytokine signaling could lead to reduced PD-1 expression in lymphocytes, PBMCs were cultured with or without SLA using combinations of recombinant canine proteins, or rcasIL-10R1 alone. No changes in PD-1 expression were seen in healthy dogs, regardless of the addition of recombinant canine proteins or of SLA stimulation (Fig 2A).
However, lymphocytes from diseased dogs showed significant decreases in PD-1 expression under the combinations of rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15, both with and without SLA stimulation (Fig 2B). Although decreases in PD-1 expression were also seen using rcaIL-12/rcasIL-10R1, both with and without SLA, no statistical significance was detected (Fig 2B).

Stimulation with rcaIL-12/rcaIL-15 induces increased T-bet expression without altering levels of GATA3

Leishmaniasis progression in dogs is associated with the inability to establish an effective cellular immune response (Th1) and with the mounting of an exacerbated humoral immune response (Th2) and/or the development of an immunosuppressive state [8,17,45]. The generation of Th1 or Th2 cell subsets involves the expression of the master transcription factors T-bet or GATA3, respectively [46,47]. To identify conditions capable of modifying T helper cell differentiation in dogs with leishmaniasis, PBMCs were cultured with combinations of recombinant canine proteins, with or without the addition of SLA. In PBMCs from healthy dogs, the combinations of rcaIL-12/rcaIL-2 or rcaIL-12/rcaIL-15, both without SLA, generated a significant increase in lymphocyte T-bet expression, which was inhibited by the addition of SLA (Fig 3A). In contrast, in PBMCs from diseased dogs, rcaIL-12/rcaIL-15 induced a significant increase in lymphocyte T-bet expression, both in the absence and in the presence of SLA (Fig 3B). None of the other recombinant proteins tested, either in combination or alone, were found to affect T-bet expression. In addition, none of these recombinant proteins, regardless of the presence of SLA, significantly altered the expression of GATA3 in any of the lymphocyte cultures (Fig 3C and 3D).

Combinations of rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15 and rcaIL-12/rcasIL-10R1 increased IFN-γ and TNF-α expression without altering IL-10 production

Driving a long-term specific Th1 immune response, while preventing Th2 and/or an immunosuppressive state, may be useful in the treatment of CanL [8,17,45]. To determine the impact on the production of cytokines mediating Th1 responses or immunosuppression in diseased dogs, PBMCs were cultured with combinations of recombinant proteins, or rcasIL-10R1 alone. In healthy canine PBMCs, the combinations of rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15 induced significant increases in IFN-γ levels (Fig 4A), while the combination of rcaIL-12/rcaIL-15 induced significant increases in TNF-α levels (Fig 5A), both regardless of SLA stimulation. In PBMCs from diseased dogs, the combinations of rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15 induced significant increases in IFN-γ and TNF-α levels (Fig 4B and Fig 5B), both in the absence and in the presence of SLA. In addition, in diseased dogs, the combination of rcaIL-12/rcasIL-10R1 promoted significant increases in IFN-γ and TNF-α only in the presence of SLA (Fig 4B and Fig 5B). In PBMCs from healthy and diseased dogs, no combination of recombinant proteins, regardless of the presence of SLA, significantly altered IL-10 levels (Fig 6A and 6B). Interestingly, an increasing trend in IL-10 production was noted in the cell culture supernatants of dogs with leishmaniasis, especially under SLA stimulation (Fig 6B). Finally, in PBMCs from healthy dogs, minimal IL-10 detection was observed when rcasIL-10R1 was added either alone or in combination with rcaIL-12, yet without statistical significance (Fig 6A).
Combined rcaIL-12/rcaIL-2, or rcasIL-10R1 alone, increased NO2 production in the presence of SLA

IFN-γ, TNF-α and IL-2 have been associated with enhanced nitric oxide (NO) production in canine macrophages [48]. To assess whether the IFN-γ and TNF-α observed in culture supernatants could lead to increased NO2, PBMCs were cultured with combinations of recombinant proteins, or rcasIL-10R1 alone, in the presence or absence of SLA for five days. Healthy dogs presented no changes in NO2 levels regardless of the addition of recombinant canine proteins and/or SLA to PBMC cultures (Fig 7A). However, significantly increased levels of NO2 were observed in PBMCs from diseased dogs cultured with rcaIL-12/rcaIL-2 or with rcasIL-10R1 alone when SLA was added to the cultures.

Fig 6. Evaluation of IL-10 production in PBMCs from healthy and diseased dogs after stimulation with recombinant canine proteins. PBMCs from healthy negative control dogs (n = 5) (A) and dogs with leishmaniasis (n = 10) (B) were cultured in medium alone (Medium) or in medium containing rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1, rcaIL-15/rcaIL-7, or rcasIL-10R1 alone, with or without the addition of SLA. After 5 days, the IL-10 concentration was determined in culture supernatants by capture ELISA. Bars represent cytokine concentration medians and the 25th-75th percentile interquartile range. Symbols represent data from individual animals. Asterisks indicate significant differences (Friedman's test with Dunn's multiple comparison, p < 0.05).

Discussion

The present work considered 10 dogs presenting clinical manifestations and clinical-pathological findings compatible with CanL. Leishmania DNA was detected in the peripheral blood of all animals. While 8/10 of these dogs had antibodies specific for Ehrlichia spp. antigens, Ehrlichia spp. DNA was not detected in any, indicating prior bacterial exposure rather than active infection. None of the animals presented Dirofilaria immitis antigens, or antibodies specific to Anaplasma spp. or B. burgdorferi. Together, these data provide evidence that CanL was the primary disease affecting the studied animals. Dogs naturally infected with Leishmania infantum that develop the disease, or that present disease relapse following specific treatment, show an inability to mount a specific, effective adaptive immune response, the so-called Th1 immune response with long-term memory. In order to develop immunotherapeutic protocols, this study evaluated a set of recombinant canine proteins capable of interfering with cytokine signaling pathways to determine the modulation of cellular responses in dogs with leishmaniasis. The inability to mount an effective response in dogs with leishmaniasis occurs due to T lymphocyte exhaustion [45], involving loss of the ability to perform CD4 and CD8 effector cell functions. In this study, several different combinations of recombinant proteins were shown to promote lymphoproliferation in dogs naturally infected with leishmaniasis. Lymphoproliferation occurred following incubation with rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1, rcaIL-15/rcaIL-7, or rcasIL-10R1 alone, regardless of whether SLA was added to the cultures or not. One probable explanation for this phenomenon is the presence of Leishmania parasites within the PBMCs used in the experiments [49]. In fact, Leishmania parasites were detectable in every blood sample from each diseased dog (S1 Table).
Previously, it has been reported that stimulation with IL-12 [20], or blocking of IL-10 signaling [45,50], results in restoration of the specific lymphoproliferative response in dogs with leishmaniasis. The data presented here also indicate that rcaIL-12 and IL-10 blockade (by rcasIL-10R1) can contribute to the generation of a lymphoproliferative response in CanL. In future experiments, the subpopulations of lymphocytes stimulated to expand by the recombinant canine proteins tested here will be determined.

None of the combinations of recombinant proteins, or rcasIL-10R1 alone, caused significant alteration in the PD-1 expression of lymphocytes from the healthy negative control dogs. However, rcaIL-12/rcaIL-2 and rcaIL-12/rcaIL-15 elicited a significant decrease in PD-1 protein expression in lymphocytes from dogs with leishmaniasis. In a previous study, stimulation with IL-12 was shown to cause a reduction in PD-1 and an increase in T-bet expression, and to arouse effector function in CD8+ T lymphocytes, rescuing these cells from exhaustion in human patients infected with hepatitis B virus [50]. Recombinant canine IL-12 probably modulates PD-1 in part through induction of T-bet transcription factor expression [51,52].

Recombinant canine IL-12/rcaIL-2 and rcaIL-12/rcaIL-15 promoted an increase in T-bet expression in lymphocytes from healthy negative control dogs, which was inhibited by the addition of SLA, suggesting some suppressive SLA activity. In lymphocytes from dogs with leishmaniasis, only rcaIL-12/rcaIL-15 induced an increase in T-bet expression, independent of the addition of SLA to the cultures. Interleukin-15 may have elicited an increase in IL-12Rβ1 expression [53], resulting in a higher level of IL-12 signaling and, as a consequence, increased production of T-bet. None of the recombinant canine protein combinations, or rcasIL-10R1 alone, modified the expression of GATA3 in healthy or diseased dogs.

Interestingly, dogs with leishmaniasis also showed a significant increase in IFN-γ and TNF-α production by PBMCs cultured with the rcaIL-12/rcaIL-2 or rcaIL-12/rcaIL-15 combinations, with or without the addition of SLA. Further, PBMCs from these animals produced significantly higher levels of IFN-γ and TNF-α after stimulation with rcaIL-12/rcasIL-10R1 and the addition of SLA to the cultures. The data showing that rcaIL-12/rcasIL-10R1 has only a limited effect, and rcasIL-10R1 alone no effect, on promoting an increase in IFN-γ production corroborate the findings of Esch and collaborators, 2013, who reported that blockade of IL-10 signaling does not boost IFN-γ synthesis in either CD4+ or CD8+ T lymphocytes in dogs with leishmaniasis [45].

None of the combinations of recombinant proteins, or rcasIL-10R1 alone, caused significant alteration in the NO2 levels in culture supernatants of PBMCs from the healthy negative control dogs. However, rcaIL-12/rcaIL-2, and rcasIL-10R1 alone, elicited a significant increase in NO2 concentration in culture supernatants of PBMCs from dogs with leishmaniasis when SLA was added to the cultures. In previous studies, stimulation with IL-2, IFN-γ and TNF-α induced activation of canine macrophages and increased production of NO2 [48], whereas stimulation with IL-10 negatively regulated NO2 production in human macrophages [54,55]. In our experiments, in SLA-stimulated PBMCs from diseased dogs, the combination of rcaIL-12/rcaIL-2, and rcasIL-10R1 alone, could have promoted the significant increase in NO2 production through IFN-γ and TNF-α induction and blockade of IL-10 signaling, respectively.
However, it is unclear why the combination rcaIL-12/rcaIL-15, which also promotes IFN-γ and TNF-α expression, would not have stimulated a significant increase in NO2 synthesis.

In conclusion, among the various combinations of recombinant canine proteins, and rcasIL-10R1 alone, capable of interfering in the cytokine signaling pathways tested, the rcaIL-12/rcaIL-15 proteins were shown to promote a significant lymphoproliferative response, an increase in T-bet without altering GATA3 expression, and an increase in IFN-γ and TNF-α without changing IL-10 production. These data suggest that rcaIL-12/rcaIL-15 may enhance cellular immune responses and contribute to the reprogramming of immune responses, which is potentially useful for developing effective treatments for CanL.

S2 Fig. PBMCs were cultured for 5 days in medium alone or in medium with rcaIL-12/rcaIL-2, rcaIL-12/rcaIL-15, rcaIL-12/rcasIL-10R1, rcaIL-15/rcaIL-7, or rcasIL-10R1 alone. Then, PBMCs were labeled with anti-human CD279 (PD-1) PE-conjugated monoclonal antibodies or a PE-conjugated isotype control, and lymphocyte mean fluorescence intensities (MFI) were assessed by flow cytometry. Gates R were used to delimit lymphocytes and the peaks indicated as (M) correspond to the lymphocytes expressing PD-1. In this representative example, the data shown correspond to PBMCs from a dog with leishmaniasis cultured with medium alone (A) or medium with rcaIL-2/rcaIL-12 (B), rcaIL-12/rcaIL-15 (C), rcaIL-12/rcasIL-10R1 (D), rcaIL-7/rcaIL-15 (E) or rcasIL-10R1 alone (F). (TIF)

S3 Fig. Histogram representative of the flow cytometric analysis of the labeling of the T-bet and GATA3 transcription factors. PBMCs were cultured for 5 days in medium alone or in medium with recombinant canine proteins. Then, PBMCs were labeled with anti-human T-bet FITC-conjugated antibodies and anti-human GATA3 PE-conjugated antibodies, or with FITC-conjugated and PE-conjugated isotype control antibodies, and lymphocyte mean fluorescence intensities (MFI) were assessed by flow cytometry. Gates R were used to delimit lymphocytes and the peaks indicated as (M) correspond to the lymphocytes expressing T-bet or GATA3. In this representative example, the data shown correspond to PBMCs from a dog with leishmaniasis cultured with medium alone (A) or medium with rcaIL-2/rcaIL-12 (B), rcaIL-12/rcaIL-15 (C), rcaIL-12/rcasIL-10R1 (D), rcaIL-7/rcaIL-15 (E) or rcasIL-10R1 alone (F). (TIF)
Overcoming Barriers to Skills Training in Borderline Personality Disorder: A Qualitative Interview Study

Despite evidence suggesting that skills training is an important mechanism of change in dialectical behaviour therapy, little research exploring facilitators and barriers to this process has been conducted. The study aimed to explore clients' experiences of barriers to dialectical behaviour therapy skills training and how they felt they overcame these barriers, and to compare experiences between treatment completers and dropouts. In-depth qualitative interviews were conducted with 40 clients with borderline personality disorder who had attended a dialectical behaviour therapy programme. A thematic analysis of participants' reported experiences found that key barriers to learning the skills were anxiety during the skills groups and difficulty understanding the material. Key barriers to using the skills were overwhelming emotions which left participants feeling unable or unwilling to use them. Key ways in which participants reported overcoming barriers to skills training were by sustaining their commitment to attending therapy and practising the skills, personalising the way they used them, and practising them so often that they became an integral part of their behavioural repertoire. Participants also highlighted a number of key ways in which they were supported with their skills training by other skills group members, the group therapists, their individual therapist, friends and family. Treatment dropouts were more likely than completers to describe anxiety during the skills groups as a barrier to learning, and were less likely to report overcoming barriers to skills training via the key processes outlined above. The findings of this qualitative study require replication, but could be used to generate hypotheses for testing in further research on barriers to skills training, how these relate to dropout, and how they can be overcome. The paper outlines several such suggestions for further research.

Introduction

Dialectical behaviour therapy (DBT) has been found effective in numerous randomised controlled trials for reducing suicide and self-injury, and for improving global outcomes in borderline personality disorder (BPD) [1,2,3]. DBT is based on the biosocial theory of BPD. A major premise is that BPD develops when emotionally sensitive individuals encounter invalidating environments that ignore, suppress or punish their emotions, which further compounds their emotional sensitivity and prevents development of the behavioural and cognitive skills required to self-regulate emotions. DBT therefore has five essential functions: 1) to teach skills for more effective emotional and behavioural regulation, 2) to enhance client motivation to use these skills, 3) to ensure clients can use the skills in a wide variety of situations, 4) to help shape an environment that reinforces skill use and 5) to enhance the therapist's own skills and motivation to keep working effectively with the client. These functions are fulfilled via five modes: 1) group skills training sessions, 2) individual therapy, 3) telephone skills coaching, 4) case management and 5) therapist consultation meetings [4,5]. The frequency with which clients use the DBT skills is positively associated with self-harm reduction [6,7], improvement in other features of BPD [8] and completing treatment [6].
In qualitative interviews, many clients have reported finding the DBT skills helpful for regulating their emotions and behaviour [9,10,11,12]. However, little research on barriers to treatment progress in DBT has been conducted [13,14]. Such research is important in order to optimise treatment delivery and clinical outcomes [15,16]. Given the evidence base for skills training as a treatment mechanism, it may be particularly important to explore what factors facilitate or act as barriers to DBT skills training. Qualitative interviews with clients who have received DBT could provide a rich source of experiential data, whilst comparing experiences between treatment completers and treatment dropouts could enable generation of hypotheses about processes contributing to dropout. The present study therefore used thematic analysis of qualitative interviews with clients receiving DBT to address the following questions:

1. What factors do clients experience as barriers to DBT skills training?
2. How do clients experience overcoming barriers to skills training?
3. How do experiences of barriers to skills training, and overcoming such barriers, differ between treatment completers and dropouts?

Inclusion Criteria

The inclusion criteria were:

1. Diagnosis of BPD
2. Having initiated in the past twelve months a twelve-month course of DBT, consisting of once-weekly individual therapy, once-weekly group skills training, telephone skills coaching and team consultation.

All participants were concurrently taking part in quantitative evaluations of DBT [6,17], which required a recent history of self-harm (past 12 months).

Measures

Participants' Axis I and Axis II diagnoses were established at the beginning of treatment using the Mini International Neuropsychiatric Interview [18] and the Structured Clinical Interview for DSM-IV Axis II (SCID-II) [19]. Frequency of self-harm in the twelve months before beginning DBT was established using a structured interview based on the definitions of self-harm used in the Suicide and Self-Injury Interview [20], whilst BPD severity was assessed using the Zanarini Scale for BPD [21]. These measures were administered in the course of quantitative evaluations of DBT, in which all participants were concurrently participating, separately from the current study [6,17].

Recruitment and Sampling

Forty interviewees were purposively sampled from a group of 89 individuals participating in a process-outcome study of DBT [6], some of whom were also concurrently participating in a randomised controlled trial [17], between 2009 and 2012. The purposive sampling [22] selected both treatment completers and treatment dropouts to be interviewed, where treatment dropouts were those missing four or more consecutive sessions of group and/or individual therapy [5]. This aimed to facilitate maximum variation sampling [22], whereby a range of experiences was captured, and to enable comparison of treatment completers and dropouts. Completers were consecutively selected to be interviewed once they had completed treatment, until data saturation had been achieved, i.e. a subjective judgement by the study authors that additional interviews were not contributing new ideas to the analysis [23]. Selection of dropouts for interview was also consecutive until the point of data saturation, but some dropouts lost contact with the research team before they could be interviewed. More completers were sampled (N = 27) than dropouts (N = 13).
This was because completers by definition had lengthier and more complex experiences of skills training to draw on, so saturation took longer to achieve in this subgroup.

Study Setting

Sample Characteristics. The sample was predominantly female (85%) and unemployed (67.5%), and spanned a range of ethnic groups (55% white, 45% black or other ethnic minority). The average age was 33 years (SD = 10.2). On average, participants met criteria for 3 personality disorders (SD = 1.3) including BPD, and co-morbidities included substance dependence (33%), alcohol dependence (36%) and post-traumatic stress disorder (50%). In the 12 months before beginning DBT, the average rate of self-harm was 7.6 days (SD = 10.1) per month. The treatment completer subgroup (N = 27) had all completed 12 months of DBT, whilst the treatment dropout subgroup (N = 13) had completed an average of 6.1 months (SD = 3.2, range: 1-11 months).

Treatment Setting. Participants were recruited from a community DBT service in an inner city area in the United Kingdom. The service provided a twelve-month course of DBT, including weekly individual therapy, a weekly group skills teaching session, telephone skills coaching and team consultation. Since this paper focuses on barriers to treatment, it is important to establish treatment adherence. A DBT therapist was trained in adherence rating by the Behaviour Research and Therapy Clinics (BRTC), University of Washington, and rating was conducted according to BRTC protocol. All individual and group sessions were audio recorded. Ten percent of individual sessions and five group sessions were randomly selected for adherence rating. A further 10% of the selected individual sessions and one group session were also rated by a BRTC adherence rater to establish inter-rater reliability. The individual sessions received a mean score of 4.1 (range: 3.6-4.5), demonstrating good adherence. The skills teaching groups received a mean score of 4.1 (range: 4.0-4.4). The two raters rated all sessions within a 0.3-point margin of difference.

Interview Setting. Participants were interviewed as soon as possible after ending DBT. The interviews were conducted, following informed consent, in a private research office or the client's home, lasted 20 to 90 minutes, and were audio recorded for later transcription.

Reflexivity

The background, research roles and potential influences of each of the authors are detailed in Table 1. The authors aimed to approach data collection and analysis without a priori hypotheses. However, they also aimed to cultivate an attitude of 'mindful inquiry', i.e. to notice, accept and transcend the influence of their own beliefs, knowledge and experiences on interview conduct, interpretation and analysis [24].

Materials

The topic guide for the interviews was developed in collaboration with DBT clients and therapists, and was modified, following piloting, to reflect areas which emerged as important in early interviews. The topic guide was semi-structured, with key questions followed by suggested probes for further exploration [22]. The order and wording of questioning was flexible in response to the interviewee's language and the natural course of the conversation [25,26]. The guide prompted interviewers to explore participants' experiences of learning, using and benefiting from the skills, what factors had acted as barriers to this process and how they had overcome them. The topic guide is available as supplementary material (S1 Topic Guide).
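The adherence-rating procedure above lends itself to a simple illustration. The Python sketch below, using hypothetical ratings rather than the study's data, computes the mean adherence score and checks the 0.3-point inter-rater agreement margin described in the Treatment Setting paragraph.

```python
# A minimal sketch of the adherence-rating summary: paired session ratings from
# the primary rater and the BRTC reliability rater are compared, with agreement
# defined as all paired ratings falling within a 0.3-point margin.
rater_a = [4.1, 3.9, 4.3, 4.0, 4.4]   # primary adherence rater (hypothetical)
rater_b = [4.0, 4.1, 4.2, 4.0, 4.2]   # BRTC reliability rater, same sessions

mean_score = sum(rater_a) / len(rater_a)
max_diff = max(abs(a - b) for a, b in zip(rater_a, rater_b))

print(f"Mean adherence score: {mean_score:.1f} "
      f"(range {min(rater_a):.1f}-{max(rater_a):.1f})")
print(f"Raters agree within 0.3 points: {max_diff <= 0.3}")
```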
Thematic Analysis

The analysts (KB, LC and SS) took a critical realist approach to the data, viewing participants' accounts of their experiences as grounded in reality, but acknowledging the influence of subjectivity and the social context on data collection and analysis [27]. The interview data were analysed inductively using thematic analysis [28,29]. Analysis and data collection occurred concurrently in an iterative process whereby early analysis informed the conduct of subsequent interviews [23]. The analysts first coded the data, i.e. in each interview, sentence(s) of potential relevance to the research questions were tagged in MAXQDA software [30] using brief phrases ('codes') that summarised the content [28,29]. Once all interviews had been coded, credibility checks were conducted by a researcher external to the study team, who independently coded a randomly selected ten percent of the transcripts against the code list. Any discrepancies in coding were discussed and used to modify the list of codes until all raters agreed on code application. The codes were sorted into preliminary themes and sub-themes, which were repeatedly reviewed and refined by all analysts to maximise internal homogeneity and external heterogeneity, until all analysts agreed that the themes and sub-themes accurately reflected the overall 'story' told by the data [29]. Once the themes and sub-themes had been finalised, all analysts worked together to develop an analytic narrative for each sub-theme, which summarised what participants had said in each sub-theme and encapsulated the analysts' interpretation of what each sub-theme meant in relation to the research questions [29].

Sub-Group Comparisons

The proportion of participants endorsing each sub-theme was compared between treatment completers and dropouts using Fisher's Exact Test.

Quality Assurance

In conducting and reporting the results of the study, Elliott and colleagues' guidelines for qualitative research [31] and the consolidated criteria for reporting qualitative research (COREQ) [25] were adhered to.

Ethics Statement

This qualitative study was approved on 18/02/2009 by the Camden and Islington Research Ethics Committee as a substantial amendment to the DIALECT trial (ref: 07/H0722/98). Participants provided written informed consent to participate. Only participants with capacity to consent were included, after checking they had understood the information provided.

Results

The analysts derived themes and sub-themes under two key domains pertinent to the research questions: Participant Experiences of Barriers to Skills Training (Domain 1), and Participant Experiences of Overcoming Barriers to Skills Training (Domain 2). Examples of verbatim quotes from participants coded under each sub-theme are given in Tables 2 to 5. Additionally, for each sub-theme an analytic narrative is presented, encapsulating the authors' interpretation of the meaning of each sub-theme in relation to the research questions.

Domain 1: Participant Experiences of Barriers to Skills Training

Theme 1. Difficulties learning the skills: too much to take in. Participants reported difficulties learning the skills linked to two key sub-themes: Anxiety during the skills groups (Sub-theme 1.1) and Difficulty with the presentation of the skills material (Sub-theme 1.2). Examples of supporting quotes for each of these sub-themes are given in Table 2 below.
Sub-theme 1.1 Anxiety during the skills groups: For twenty-five participants, attending the skills groups was at times an anxiety-provoking experience. Elements leading to anxiety included worrying about being judged by the therapists or other group members, judging themselves when they did not understand the material, or concern that the treatment would not work. For a few, the therapists were perceived as authoritarian and strict, whilst the school-like environment triggered traumatic childhood memories. This interfered with participants' ability to learn the skills in various ways, including interfering with their ability to concentrate on the material being taught, feeling too embarrassed to ask for help when they did not understand, or having urges to escape the room or end therapy.

Sub-theme 1.2 Difficulties with the presentation of the skills material: Twenty-five participants reported that certain aspects of the way the skills material was presented made it difficult to understand and learn. These included being faced with a lot of information presented at a rapid pace, the use of specialist DBT jargon that was difficult to understand, and the use of acronyms which were difficult to remember.

Theme 2. Difficulties putting the skills into practice: overwhelming emotions. Thirty-two of the forty participants explained that feeling overwhelmed by intense emotions at times prevented them from putting the skills into practice, by leading them to perceive a loss of control over their behaviour (Sub-theme 2.1) or leading to negative thoughts about using the skills (Sub-theme 2.2). Examples of supporting quotes for each of these sub-themes are given in Table 3.

Sub-theme 2.1 Loss of control: Twenty-six participants explained that at times intense emotions seemed to 'take over' their mind, such that they felt a loss of control over their own thoughts and behaviour. They explained that their emotions could overwhelm their thoughts to such an extent that it felt like "blackness" [Participant 22, DBT dropout] or a "massive fog".

Sub-theme 2.2 Negative thoughts about the skills: Thirty-two participants explained that at times they would have negative thoughts about the skills, such as thinking that they did not want to try to use them, and that using them was too difficult, pointless or would not work. They emphasised that continually trying to use the skills in the face of distressing experiences could be a difficult struggle and was often emotionally exhausting. For some, negative thoughts were linked to feelings of defiance or rebellion against having to use the skills. For others, the thought of using the skills was highly anxiety-provoking, forcing them to confront things they would rather avoid, take risks and let go of their 'safe' coping behaviours.

Domain 2: Participant Experiences of Overcoming Barriers to Skills Training

Theme 3. A personal journey to a new way of life. Participants described the challenging journey they had undergone to overcome initial difficulties in learning and using the skills. As they committed and re-committed to keep working towards change in their lives (Sub-theme 3.1), they gradually began to personalise the way they used the skills (Sub-theme 3.2), and to find that the skills became an automatic part of their behavioural repertoire (Sub-theme 3.3). Supporting quotes are shown in Table 4.
Sub-theme 3.1 A commitment to keep working towards change: Almost all participants (N = 32) emphasised that overcoming barriers to learning and using the skills was a gradual journey, requiring a continual process of committing and re-committing to working towards change. Participants did this by continuing to attend the group, even when at first they did not understand what was being taught and did not believe it would help them. They practised using the skills again and again in their daily lives, even when at first it was difficult or did not seem to work. When they felt tempted to give up, they re-committed to maintaining their efforts by reminding themselves of how much they wanted to change their lives, and also of the progress they had made.

Sub-theme 3.2 Making the skills my own: Over the course of DBT, a minority of participants (N = 13) gradually derived their own personal meanings and interpretations of the skills, and learnt to concentrate on using the skills that worked best for them and their personal circumstances. For some, this personalisation was important in enabling them to let go of rigid, perfectionistic and self-critical thinking about their skills use.

Sub-theme 3.3 Using the skills becomes automatic: For nearly half of the participants (N = 18), the end result of this journey was that eventually they became able to use the skills automatically, without having to think about it. For a few, the DBT skills became so ingrained that they referred to them as "part of" themselves, an integral aspect of their identity.

Theme 4. An environment that supports change. The second theme concerns the beneficial effect of support from others in learning and using the skills: the skills group (Sub-theme 4.1), the individual therapist (Sub-theme 4.2) and friends and family (Sub-theme 4.3). Supporting quotes are shown in Table 5.

Sub-theme 4.1 The skills group: Many participants (N = 25) found the skills group to be an environment that supported them in overcoming difficulties with learning and using the skills. Participants emphasised the importance of being able to learn from each other's understanding of the skills and experiences using them, and of being encouraged by more experienced group members to persevere through doubts and difficulties. For some, learning from the group therapists about how they used the skills to deal with problems in their own lives gave them a sense of connectedness and made learning the skills a less isolating experience; less of a 'them and us'. Participants found learning much easier when there was a fun, light-hearted atmosphere in the group, with everybody included. This was created through the interactions between group members and through the efforts of the therapists to teach in a fun way using multiple formats, and to keep checking that everybody understood.

Sub-theme 4.2 The individual therapist: Many participants (N = 23) reflected on the importance of their individual therapist in supporting their skill use. The individual therapist was able to explain things they had not understood during group skills training, suggest skills to try during telephone skills coaching, and role-play skilful behaviour. If target behaviours, e.g. self-harm, had occurred in the previous week, they could use chain analysis to help the participant see where they could have used the skills.
Sub-theme 4.3 Friends and family: A minority of participants (N = 13) reported that their friends and family had supported their skill use in a number of ways: by discussing the skills with them, helping them understand difficult concepts, encouraging them to use the skills, or even using the skills themselves.

Subgroup Comparisons

A comparison of the proportions of participants endorsing each sub-theme in the treatment completer versus treatment dropout sub-groups is shown in Table 6 below. Dropouts were more likely than completers to have experienced anxiety during the skills groups as a barrier to learning the skills (p = 0.05), and were less likely to report committing to persisting in working towards change (p = 0.02), personalising their use of the skills (p = 0.07) or becoming able to use the skills automatically (p = 0.02). They were also less likely to report experiencing the skills group (p = 0.03), their individual therapist (p = 0.01) or friends and family (p = 0.07) as supporting their skills training.

Summary of the Findings

Based on in-depth interviews with 40 clients with BPD who had taken part in a DBT programme, the study aimed to establish what factors clients experienced as barriers to DBT skills training, how clients experienced overcoming barriers to skills training, and how these experiences differed between treatment completers and treatment dropouts. A thematic analysis of participants' reported experiences found that key barriers to learning the skills were anxiety during the skills groups and difficulty understanding the material. Key barriers to using the skills were overwhelming emotions which left participants feeling unable or unwilling to use them. Key ways in which participants reported overcoming barriers to skills training were by sustaining their commitment to attending therapy and practising using the skills, personalising the way they used skills, and practising them so often that they became an integral part of their behavioural repertoire. Participants also highlighted the importance of other skills group members, the group therapists, individual therapists, and friends and family for explaining the skills and exemplifying how to use them, whilst making skills training an enjoyable and collaborative process. Treatment dropouts were more likely than completers to describe anxiety during the skills groups as a barrier to learning, and were less likely to report overcoming barriers to skills training via the key processes outlined above.

Clinical and Research Implications of the Findings

Due to the subjective nature of qualitative findings, which are based on participants' interpretations of their experiences, which are then in turn interpreted by the analysts, the present study can only raise tentative suggestions for clinical implications and hypotheses for testing in further research. These are outlined below.

Monitoring problematic thoughts and feelings. Experiences of anxiety during the skills group, difficulties with the skills material, a perceived loss of control over behaviour, and negative thoughts about using the skills should all be treated as Target 2 "therapy-interfering behaviours" [5] if they interfere with clients' ability to learn and use the skills. However, these experiences may not always be apparent to therapists.
It could perhaps be helpful to add "Anxiety during the skills groups", "Difficulties with the skills material", "Feeling out of control" and "Negative thoughts about the skills" to the Diary Card, to enable the individual therapist to monitor, explore, validate and target them as necessary.

Skills coaching. Individual therapists may find it helpful to suggest various DBT skills to clients experiencing problematic levels of anxiety during the groups, perceived loss of control or negative thoughts about the skills. These include: considering the pros and cons of using the skills to cope versus acting on their urges; 'acting opposite' (e.g. actively participating in group despite urges to withdraw, using the skills despite negative thoughts about them); recognising and letting go of judgemental thoughts about the group or the skills; challenging negative thoughts about the group or using the skills; 'cheerleading' (thinking encouraging thoughts); 'Cope Ahead' (e.g. imagining oneself coping with feeling anxious during group); and radical acceptance (e.g. accepting that feelings of anxiety are part of life and must at times be endured) [4,32]. Clients who report becoming so overwhelmed by their emotions that they lose control and "cannot" use the skills may lack awareness of early warning signs that their emotions are escalating. Therapists may find that coaching clients to scan their bodies regularly could allow the client to notice early indicators of emotional arousal and intervene by using the skills, before reaching the point of feeling overwhelmed [4,32].

Tailored interactional styles. Group therapists may find it helpful to modify their interactions with clients for whom anxiety during the skills groups has been identified as a barrier, by taking a gentler approach focussed on validating clients' answers to questions and feedback on homework practice [32].

Presentation of the skills material. Group therapists may find it helpful to repeatedly define any specialist terminology used (e.g. 'mastery', 'radical acceptance'). Participants in the present study found it particularly helpful when group therapists made the skills teaching interactive and fun, using a variety of teaching techniques including group exercises, diagrams, role plays and self-disclosure.

Reinforcing motivation and commitment. Therapists may find it helpful to draw on Linehan's suggested strategies for reinforcing clients' commitment to maintaining their efforts to learn and use the skills [5]. These include evaluating the pros and cons of making such a commitment, playing devil's advocate, using the 'foot in the door' technique, highlighting prior commitments made by the client, and cheerleading the client.

Suggestions for further research. Further research could monitor state anxiety before, during and after the skills groups and test whether increases in state anxiety during the group predict dropout. If found to be linked to dropout, a feedback system could be implemented whereby state anxiety measures are routinely administered during DBT skills groups, and clients reporting increases in anxiety during the groups are 'flagged up' to their individual therapists as being potentially at risk of dropout. This feedback approach has been shown to improve outcomes in mixed-diagnosis groups but has not yet been tested in personality disorder [33,34,35,36]. Participants in the present study highlighted the role of support from other group members in overcoming barriers to skills training.
Further research could test whether assigning new group members a skills coaching 'buddy' from amongst the more experienced group members, to provide encouragement and share their own experiences of learning and using the skills, could help reduce dropout.

Limitations

The interviews focussed only on the skills training element of DBT, preventing exploration of other aspects of the treatment that may have contributed to treatment dropout. Additionally, participants who dropped out of treatment may have retrospectively adjusted their recall of their experiences in line with their outcome, thus being more likely to recall learning and using the skills in a negative light. The samples were small and not randomly selected; hence, differences between dropouts and completers are not robust findings but can yield plausible hypotheses for testing in further research.

Conclusion

The present study was able to produce novel insights on barriers to DBT skills training and how they can be overcome, through analysis of the rich, experiential data provided by qualitative interviews with DBT participants. Although the findings are subjective and require replication, they could be valuable for generating ideas on how therapists can help their clients overcome barriers to skills training, and also hypotheses for testing in further research.

Supporting Information

S1 Topic Guide. Topic Guide for Qualitative Interviews. (DOC)
ICE Relevant Physical-chemical Properties and Air Pollutant Emission of Renewable Transport Fuels from Different Generations – An Overview

The fuel demand in the transport sector seems set to rise on both a short- and a long-term basis, in the European Union and worldwide. A steadily growing trend in the use of bio-based energy and fuels is foreseen worldwide through 2050. Questions can arise before using these kinds of fuels, in connection with the use of clean water or in terms of soil degradation and plant nutrients. It is also questionable whether they are genuinely beneficial in use. First- and second-generation liquid as well as third-generation gaseous bio-based fuels will be the focus of this article. They will be analyzed from the points of view of physical-chemical properties and pollutant emissions.

Introduction

The fuel demand in the transport sector seems set to rise in both the near and the more distant future, in the EU and worldwide. This situation may lead to an increasing load and pollution from the transport sector (Exxon Mobil, 2018; Economics, 2018; International Energy Agency, 2018; Shell International BV, 2016; Shell, 2018). As for the use of bio-based energy or fuel, a steady upward trend is forecasted until 2050 worldwide (Fischer and Schrattenholzer, 2001). The share of renewable energy in transport energy use will be slightly below 5 % globally in the long term (Exxon Mobil, 2018), while it is significantly higher in Europe (Ricardo, 2018); another study suggests that the share of biofuels in transport will be higher still (Shell International BV, 2016). There are many initiatives and possibilities to reduce CO2 emission, e.g. in Hungary (Zöldy, 2018; Török and Zöldy, 2010). There are many alternative fuels in the transport sector which can be a solution instead of conventional fuel of fossil origin. Biofuels can be found among these opportunities (Zöldy, 2009). The aim of biofuel usage is primarily the diversification of fuel resources, the preservation of fossil stocks and the maintenance of energy security (Hancsók et al., 2006a; Hancsók et al., 2006b; Ajanovic, 2010; MOL Group, 2012; Mantzos, 2010; Exxon Mobil, 2018; Economics, 2018; International Energy Agency, 2018; Shell International BV, 2016; Shell, 2018). Questions can arise before using such fuels in connection with the use of clean water or in terms of soil degradation and plant nutrients (Reijnders, 2019). Requirements regarding the utilization rate of renewable energy are set out in (European Parliament and Council, 2009). This utilization rate differs from country to country. For example, in Poland this value is not particularly high (approximately 12 %), while the share of renewable energy in total energy consumption reaches 42 %, making it a world leader from this point of view (Maczyńska et al., 2019), although much of that comes from wood-burning heating. It is also questionable whether biofuels are genuinely beneficial in use. Their effect on the emissions of an internal combustion engine is mixed: certain components, such as CO and HC, decrease when using biofuel, while other emission components, for example particulate mass, increase (Gruden, 2008; Whittaker et al., 2011; Vidal Quadras, 2010). Biofuels can be classified into different generations (Alalwan et al., 2019; Saladini et al., 2016; Dahman et al., 2019). The spread of first-generation biofuels is currently in progress. Biodiesel and bioethanol belong to the first generation.
Usually they are blended at a low rate into fossil fuels (MOL Group, 2012), but there are examples in different European cities where they are used at a higher blending rate, or even pure, for city buses (CIVITAS, 2011). Fuels of bio origin are expensive; bio components are the most expensive ones among the additives and components added to fossil fuels to make them fit for use in internal combustion engines (Tóth, 2011; Zöldy et al., 2011). The physical-chemical properties and composition of these first-generation fuels are regulated in one European Standard for biodiesel (European Standard EN 14214, 2008) and in another for bioethanol (European Standard EN 15376, 2008).

Fuels of the second generation have other base materials and are produced in a different way compared to first-generation fuels. The raw materials are also different: in most cases wood, grasses, animal waste products, newspapers and agricultural waste crops. Enzymes and microorganisms designed using synthetic biology should substantially increase the energy yield of these raw materials (Bhatia et al., 2017; Ahmed-Sarkar, 2018; Then et al., 2010). A serious disadvantage of these fuels can be their water solubility, which is a problem for transportation, storage and combustion alike (Dechambre et al., 2017). Many ICE-relevant properties of these fuels are similar to those of fossil-derived fuels, and because of their composition they are expected to be more environmentally friendly. There are also fuels with a fossil base among the second generation's fuels. The learning, design and refinement of the technologies is in progress (Vidal Quadras, 2010; Paraffinic fuels for Europe, 2017; Larson, 2008; Rantanen, 2005).

The use of hydrogen and methane produced on a bio-basis is at the level of demonstration projects. They can be used in internal combustion engines, and the technology used to produce these gaseous fuels plays no role in their usage in engines. Despite the disadvantages of their production technology, they are highly appreciated for their very low level of air pollution (Eichlseder and Klell, 2010). Generally, renewable fuels of the third generation are produced from plants or other crops that are unrelated to food production. There is literature which considers microbial lipids as third-generation fuel (Leong et al., 2018). According to other literature, third-generation fuels on a micro-algae basis are increasingly widespread (Alaswad et al., 2015; Dragone et al., 2010; Brennan and Owende, 2010). Genetically modified algal biofuel can be classified as fourth-generation biofuel. This kind of fuel has advantages and disadvantages as well: a non-mature manufacturing technology coupled with a low number of results from using it, e.g. in engines. According to Abdullah et al. (2019) it is a promising alternative to fossil fuel with high energy content, low emissions and a non-polluting nature. However, production of algal biofuel is not economically viable at this time due to low yields and high production costs. First- and second-generation liquid as well as third-generation gaseous bio-based fuels will be the focus of this article. They will be analyzed from the points of view of physical-chemical properties and pollutant emissions.

2 About the transportation biofuels in general

2.1 Grounds for using bio-derived fuels, technologies for their processing and production

The possibility for the spread of fuels and lubricants of bio origin is a question of balance, with an appropriate proportion between agricultural products for food and feed and agricultural products for industrial use. Another criterion is that the purchase cost and quality of biofuels compared to fossil fuels should be more or less the same. In addition, the biofuels that come onto the market today require no modification of modern vehicles, or only minimal modification. Through the modernization of existing production technologies for fuels and lubricants, significant savings can be achieved in resources and stocks, as well as in air pollutant emissions (Hancsók et al., 2006b; Barabás et al., 2015; Zöldy and Török, 2015).

The most important reasons for using transportation fuels from bio sources are:

• Reducing CO2 emissions responsible for global warming, for which biofuel alone is a realistic option on a short- and medium-term basis;
• Reducing the dependence of transport on fossil (primarily oil-based) sources;
• Creating (preserving) jobs in agriculture, and utilizing occasional overproduction as well as abandoned land.

Fuels of renewable, biomass origin are those which come from vegetable or possibly animal sources and whose stock does not decrease, because they are renewed and reproduced. The more important bio-derived transportation fuels are as follows: bio-alcohols, vegetable oils, bio-ethers and biogas. Among these, bioethanol and biodiesel have to be mentioned, which are the most common globally and in our country as well. They are blended into gasoline and diesel, respectively, at a certain level on a volumetric basis (Hancsók et al., 2006b).

3 Biofuels of first generation (alcohol, FAME)

Biomass-derived gasoline components for Otto engines are nowadays those produced entirely or partially from biomass (e.g. from wood, corn, straw or from materials of animal origin, etc.) in order to reduce dependency on fuels of fossil origin and to reduce the emission of pollutants with a global warming effect. Fuels produced exclusively from biomass are bio-methanol, bioethanol and synthetic gasoline made from synthesis gas by Fischer-Tropsch synthesis. Fuels produced partially from materials of bio origin are the methyl ethers (MTBE, TAME, etc.) and ethyl ethers (ETBE, TAEE, etc.). As for methyl ethers and ethyl ethers, the proportion of bio-origin material is ca. 35-45 %. Bioethanol can be produced, for example, by fermentation of sugar, fermentation of hydrolyzed starch or fermentation of specially prepared and hydrolyzed cellulose (Hancsók et al., 2006b).

For propelling Diesel engines, beside the conventional fuels, biomass-derived fuels are also available. These include biodiesels (fatty acid methyl esters from different feedstocks), other oxygen-containing compounds (e.g. ethanol, dimethyl ether), and furthermore the new generation of biodiesel: mixtures of n- and i-paraffins produced from vegetable triglycerides, synthetic diesel (a mixture of synthetic n- and i-paraffins, Fischer-Tropsch diesel) and bio-paraffins manufactured from carbohydrates. Among these fuels there are some which can be used in pure form in a Diesel engine, e.g. biodiesel and dimethyl ether, and there are some which are appropriate only as a blending component without any modification of the engine, for example ethanol.
2 About the transportation biofuels in general 2.1 Grounds for using bio-derived fuels, technologies for their processing and production The possibility for spreading of fuels and lubricants with bio origin is a question of balance with appropriate proportion among agricultural products for food and feed and agricultural products for industrial use. Another criterion is that the cost of purchasing and the quality of biofuels compared to fossil fuels should be more or less the same. In addition, those kind of biofuels come onto the market that do not require any modification of modern vehicles today or only minimal modification needed. Modernization of existing production technologies of fuels and lubricants can be reached significant saving in resources and stock furthermore in emission of air pollutants (Hancsók et al., 2006b;Barabás et al., 2015;Zöldy and Török, 2015). Most important reasons of using transportation fuels from bio sources: • Reducing CO 2 emissions responsible for global warming, which biofuel itself alone is a realistic option for a short and medium term base; • Reducing the dependence of transport from fossil (primarily oil-based) sources; • Creating (preserving) a job in agriculture, and utilizing occasional overproduction as well as abandoned land parts. Renewable, biomass origin fuels are counted fuels which have a source of vegetable or possible of and their stock does not decrease because they are renewed, reproduced. The more important bio-derived transportation fuels are as follows: bio-alcohols, vegetable oils, bioethers, biogas. Among these have to be mentioned bioethanol and biodiesel which are the most common globally and in our land as well. They are blended to gasoline and diesel respectively with a certain level on volumetric base (Hancsók et al., 2006b). Biofuels of first generation (alcohol, FAME) Biomass derived gasoline components of Otto-engines are called nowadays which are produced entirely or partially from biomass (e.g. from wood, corn, straw or from animal origin materials, etc.) in order to reduce the dependency from fossil origin fuels and to reduce the emission of pollutants with global warming effect. Fuels produced exclusively from biomass are bio-methanol, bioethanol and synthetic gasoline from synthesis gas with Fischer-Tropsch-synthesis. Fuels produced partially from bio origin materials are the methyl-ethers (MTBE, TAME, etc) and ethyl-ethers (ETBE, TAEE, etc.). As for methyl-ethers and ethyl-ethers the proportion of bio origin materials is ca. 35-45 %. Producing bioethanol can be a process e.g. fermentation from sugar, fermentation of hydrolyzed amylum or fermentation of specially prepared and hydrolyzed cellulose (Hancsók et al., 2006b). For propelling Diesel-engines -beside the conventional fuels -biomass derived fuels are also available. These include biodiesels (Fatty-Acid-Methyl-Esters from different stock), other oxygen containing compounds (e.g. ethanol, dimethyl-ether), furthermore the new generation of biodiesel like mixture of n-and i-paraffines produced from vegetable's triglycerides, synthetic diesel (mixture of synthetic n-and i-paraffines, Fischer-Tropsch-diesel) and bio-paraffines manufactured from carbohydrate. Among these fuels there are some which can be used in a pure form in Diesel-engine e.g. biodiesel, dimethyl-ether and there are some which is appropriate only as a blending component for example ethanol without any modification of the engine. 
Biodiesels are produced in a way, where the vegetable's tri-glycerides are modified into a form with less molecule scale and the average molecule mass is also less. It is a goal as well that the product would be low unsaturated. The widely used technologies are as follows: • catalytic esterification of triglycerides with alcohols; • hydrogenation of triglycerides with different depths (Hancsók et al., 2006a). Air pollution effect of biofuels from the first generation 3.1.1 The bioethanol The effects of blending bioethanol in gasoline on vehicle emissions have been investigated in numerous on engine and in vehicle fleet as well. Based on these evaluations, it can be concluded that the evolution of the emission of pollutants (carbon-monoxide, hydro-carbons, nitrogen-oxides) is mainly influenced by the engine design and combustion process. It cannot be clearly stated that blending ethanol always results in a reduction in CO and HC emissions and an increase in nitrogen-oxide emissions. There is a significant difference regarding emission between e.g. older cars equipped with carburetors and advanced vehicles equipped with electronically controlled injection system. Engines with carburetor do not adapt to the lower stoichiometric air/gasoline ratio of the ethanol-gasoline blend, which results in the mixture becoming poorer, which means that more oxygen is sucked into the combustion chamber compared to the operation with pure gasoline. It slightly improves the combustion of hydrocarbons, but mainly promotes the oxidation of carbon monoxide. Of course, it does not matter whether the vehicle is equipped with catalytic converter or not. The test procedure also significantly determines the emissions of an engine or vehicle. Therefore, it is only appropriate to talk about trends in connection with emissions emitted by an engine or vehicle. In general, engines with advanced electronic control vehicles have 5-30 % less hydrocarbon and carbon monoxide emissions when blending 10-20 V/V % ethanol in gasoline. However, nitrogen-oxide emissions are generally 10-40 % higher may be because of better combustion resulting in higher flame temperatures, which favors the formation of nitrogen oxides. It is important to note the emission of acetaldehyde, which is extremely harmful to human health, is clearly increased in any engine type using ethanol and that ethanol is present in the exhaust gas in measurable quantities. One of the important goals of the use of bio-based components is to reduce greenhouse gas emissions (CO 2 , N 2 O, etc.) throughout the whole life cycle of the fuel. Greenhouse gas emissions during of a fuel's life cycle depend on many factors, for example, on the type of basic material, on the type of production technology, the ethanol content of gasoline as well as on the type of vehicle, etc. According to recent reports using bioethanol from maize can reduce greenhouse gas emissions by 18-29 % compared to conventional motor gasoline with the same energy content when used in the form of E10 (Hancsók et al., 2006b). The biodiesel RME, which is the most widely produced and used in the European Union, can reduce GHG emissions by up to 50 %. Blending biodiesel into diesel can have a detrimental effect on NOx emissions and can have a positive impact on emission of particulates, HC and CO. For newer vehicles, particle emissions from the B100 can be reduced by almost 75 %, while NOx emissions can increase by up to 30 % compared to biodiesel-free diesel (Hancsók et al., 2006a). 
4 Biofuels of the second generation (synthetic fuels)

4.1 Introduction and ICE-relevant properties

Second-generation synthetic biofuels are pure, high-quality fuels made from a variety of raw materials. One possible route is to produce them from lignocellulose-based biomass, which allows the use of lower-cost, inedible raw materials and thus limits direct food-fuel competition. Second-generation fuels can be classified by the process used to convert biomass to fuel: accordingly, there are biochemical and thermochemical conversion processes. Second-generation ethanol or butanol is produced by biochemical processes. Second-generation biofuels produced by thermochemical processes are less well known because they have no first-generation "equivalent" fuel. Many of them can also be manufactured from ordinary fossil feedstocks by similar processes; such fuels include methanol, Fischer-Tropsch liquids, and dimethyl ether. Blended alcohols can also be produced from fossil feedstocks, but this route is not yet widespread because of the immaturity of the technical solution. Unrefined fuels such as pyrolysis oils are produced by thermochemical processes in such a way that they do not require any further treatment or refining before use in engines. Fig. 1 shows the production possibilities of second-generation biofuels (Larson, 2008). Regardless of the raw material used, the following holds for all synthetic biofuels:
• they are sulfur-free, low-aromatic, odourless, colourless liquid fuels;
• they allow a significant reduction of regulated and unregulated exhaust components from vehicles (NOx, SOx, PM, VOC, CO, HC, CO2);
• they contribute to the replacement of fossil crude oil, improve diversification opportunities, and contribute to the security of energy supply;
• they can be used in existing fuel infrastructures;
• they can be used in existing engines;
• they enable the development of a new generation of internal-combustion-engine technologies to improve engine efficiency and reduce pollutant emissions;
• they are easily degradable and non-toxic to organisms (Paraffinic fuels for Europe, 2017).
Advantageous properties of synthetic fuels are shown in Fig. 2. Synthetic biofuels also have many advantages compared to fossil fuels and first-generation biofuels. Table 1 summarizes the data for the different fuels from four relevant technical points of view.

Parameters of the Neste Oil NExBTL biofuel

NExBTL biodiesel is a mixture of normal and iso-paraffins. Its properties are comparable to those of the best premium fossil diesel fuel available today. When added to fossil diesel, the NExBTL component improves the fuel's quality. NExBTL biodiesel is compatible with the existing vehicle fleet and fuel-delivery system, and blending it into diesel is technically easy. The chemical structure of NExBTL and of most GTL fuels differs from that of the currently well-known biodiesel (FAME): FAME consists of long-chain fatty-acid methyl esters, which contain oxygen in the ester group, whereas the hydrocarbon-based NExBTL biodiesel contains no oxygen. NExBTL fuel contains no sulphur, oxygen, nitrogen, or aromatics. Its cetane number is very high, approximately 90. Its cold-flow behaviour (cloud point) can be set at the factory to between -5 °C and -30 °C in order to meet different climatic conditions. The lubricity of NExBTL biodiesel can easily be improved with the amounts of additives used today for conventional sulfur-free diesel fuel.
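The practical meaning of NExBTL's lower density can be illustrated with typical literature values; the densities and heating values below are illustrative assumptions, not data from the article. At nearly equal gravimetric heating value, the paraffinic fuel carries a few percent less energy per litre, so volumetric fuel consumption rises slightly even though full-load power is essentially unchanged:

```python
# Illustrative comparison of volumetric energy content: paraffinic HVO
# (NExBTL-type) vs. conventional diesel. Values are typical literature
# figures, assumed for this sketch.
FUELS = {
    #          density kg/L   LHV MJ/kg
    "diesel": (0.835,         42.7),
    "HVO":    (0.780,         44.0),
}

mj_per_litre = {name: rho * lhv for name, (rho, lhv) in FUELS.items()}
for name, e in mj_per_litre.items():
    print(f"{name}: {e:.1f} MJ/L")

delta = (mj_per_litre["HVO"] / mj_per_litre["diesel"] - 1) * 100.0
print(f"HVO holds {delta:.1f} % energy per litre vs. diesel")
# ~ -3.7 %: for the same delivered energy, volumetric consumption is a few
# percent higher, while gravimetric consumption is slightly lower.
```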
NExBTL complies both with the relevant European standard (European Committee for Standardization, 2005) and with Specification 4 of the WWFC. Table 2 shows the NExBTL fuel characteristics compared to other fuels. Based on Table 1, the following can be stated about the fuel's ICE-relevant properties: its lower density compared to conventional diesel gives better sprayability and easier transport. The significantly higher cetane number improves the ignition behaviour, although beyond a certain point this can become a disadvantage. The cloud point is significantly lower than that of the other fuels. Its heating value is approximately the same as that of conventional diesel, which means that an engine running on it has similar power and fuel consumption. It contains no PAHs, oxygen, or sulphur, which can significantly reduce pollutant emissions such as soot and particulates (Rantanen et al., 2005).

Properties regarding pollutant emission

The use of synthetic fuels in internal combustion engines reduces pollutant emissions, contributing to better air quality, especially in urban environments. Fig. 3 shows the emissions of different pollutants based on data obtained from a comparison of fossil diesel and synthetic motor fuels. Road tests with synthetic fuels in several European countries show that these fuels significantly improve local air quality through reduced emissions of air pollutants (particulate matter, nitrogen oxides, carbon monoxide, and hydrocarbons). Using synthetic fuel, the PM reduction shown in Fig. 4 can be achieved; if the engine is also optimized, NOx can be reduced as well. The test whose results are shown in Fig. 3 was conducted with a EURO 4 Toyota car without an exhaust after-treatment system. The comparison test series was based on the type-approval test procedure with different types of biofuels. The results show the two most typical exhaust-gas components of the Diesel engine, PM and NOx (gray field: diesel fuel, normal engine; light blue field: GTL fuel, normal engine; dark blue field: GTL fuel, optimized engine) (Vidal Quadras, 2010).

Greenhouse gas emissions

The environmental characteristics of conventional and synthetic fuel technologies were established by well-to-wheel life-cycle analysis. Studies show that the total greenhouse-gas emissions of the GTL process are comparable to those of a conventional refinery system. HVO shows a 40-80 % reduction over conventional fuel, and relative to the refinery system BTL represents an 80-90 % net improvement. Combined engine development and synthetic fuel technologies will lead to significant CO2 reductions (Vidal Quadras, 2010). These statements are illustrated in Fig. 5.

5 Third-generation biofuels (hydrogen, methane)

5.1 Introduction of ICE-relevant properties

5.1.1 Hydrogen

The hydrogen internal combustion engine is based on the working process of conventional internal combustion engines (most often the Otto process); however, some modification of the mixture-formation system has to be carried out so that the engine can operate on hydrogen alone or in dual-mode operation. Thus, such engines can run on hydrogen or on hydrogen-containing gases (hydrogen-natural gas mixtures). Although most research work focuses on the hydrogen-based fuel-cell power-generation concept, hydrogen-powered internal combustion engines are also seen as a competitive alternative. The use of hydrogen engines makes it possible to use the current production lines of the automotive industry as well as components already common in vehicles.
The properties of hydrogen are fundamentally different from those of the fuels currently used in internal combustion engines. The gaseous state at ambient temperature is the most striking, but by no means the biggest, difference compared with gasoline and diesel (Eichlseder and Klell, 2010). Table 3 compares the properties of hydrogen relevant to its use in an internal combustion engine with those of conventional fuels. In contrast to the high gravimetric energy content of hydrogen (Hu = 120 MJ/kg), the calorific value per unit volume of the hydrogen-air mixture (MJ/m3) is very low. Under real operating conditions, the volumetric calorific value of the hydrogen-air mixture is significantly lower than that of a conventional fuel-air mixture. The wide ignition limits of hydrogen allow quality-based (mixture) load control over the entire operating range of the engine. One significant difference from conventional fuels is that a homogeneous hydrogen-air mixture can theoretically be burned up to λ = 10 using conventional ignition technology. However, the required ignition energy increases with increasing air-fuel ratio, as it does with conventional fuels. The ignition energy required to ignite a stoichiometric hydrogen-air mixture is less than one-tenth of the energy required to ignite a gasoline-air mixture. In contrast, the self-ignition temperature of hydrogen is significantly higher than that of conventional fuels. This can be an advantage with regard to knocking in Otto-process applications, but it also requires higher compression ratios and other temperature-raising measures when hydrogen is used in Diesel engines. Its high laminar flame velocity enables hydrogen to achieve extremely short and very efficient combustion. Even in the lean mixture range, the laminar combustion rate is significantly higher than what can be achieved with conventional fuels. However, because of the rapid combustion and the high rate of pressure rise (high Δp/Δt), the crank mechanism is heavily loaded and excited. Overall, owing to the properties described above, hydrogen can be used as a fuel for internal combustion engines. Regarding the method of ignition, two approaches can be distinguished for hydrogen engines: external (spark) ignition and compression ignition. Because the auto-ignition temperature of hydrogen, approximately 585 °C, is relatively high compared to diesel fuel, stable compression-ignition operation can only be achieved with a high compression ratio and partly with additional intake-air preheating. The current application of H2-fueled internal combustion engines in passenger cars is no longer confined to the Otto-engine concept; in the past there was no lack of research and concepts for hydrogen-powered passenger-car diesel engines and two-stroke Otto engines (Eichlseder and Klell, 2010).

5.1.2 Methane

The critical temperature of methane, -82.5 °C, is very low compared to other hydrocarbons used as fuel; therefore, methane cannot be liquefied by pressure at ambient temperature. On-board fuel storage is thus practically accomplished by high-pressure compression (200-250 bar). Without high-pressure compression, the calorific value of methane per unit volume is only about 1/1000th of that of diesel: 0.036 MJ/dm3 compared to 34.7 MJ/dm3 for diesel. At the usual 200 bar operating pressure, the density of methane rises to 129.5 kg/m3 from 0.7 kg/m3 at ambient pressure and temperature. A 100-litre high-pressure cylinder can thus hold about 13 kg of natural gas, with an energy content approximately equivalent to 19-20 litres of diesel.
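The two volumetric-energy statements above can be reproduced with a few lines of arithmetic. The mixture calorific value calculation assumes external mixture formation and the usual composition of air; methane's lower heating value of about 50 MJ/kg is a typical literature figure, not given in the text:

```python
# 1) Volumetric calorific value of a stoichiometric hydrogen-air mixture
#    (external mixture formation), using the Hu = 120 MJ/kg quoted above.
RHO_H2 = 0.0899          # kg/m3 at 0 degC, 1.013 bar
HU_H2 = 120.0            # MJ/kg (lower heating value)
air_per_h2 = 0.5 / 0.21  # m3 air per m3 H2 (stoichiometric, ~2.38)

e_h2_vol = RHO_H2 * HU_H2                     # ~10.8 MJ per m3 of pure H2
mix_cal = e_h2_vol / (1.0 + air_per_h2)       # energy per m3 of H2-air mixture
print(f"H2-air mixture: {mix_cal:.1f} MJ/m3")
# ~3.2 MJ/m3, below the ~3.7-3.8 MJ/m3 typical of a stoichiometric
# gasoline-air mixture, which is why the text calls it "very low".

# 2) Energy content of a 100 L CNG cylinder at 200 bar.
RHO_CH4_200BAR = 129.5   # kg/m3, from the text
LHV_CH4 = 50.0           # MJ/kg, assumed typical value
LHV_DIESEL_VOL = 34.7    # MJ/L, from the text

m_gas = 0.100 * RHO_CH4_200BAR                # ~13 kg in a 100 L cylinder
diesel_equiv = m_gas * LHV_CH4 / LHV_DIESEL_VOL
print(f"{m_gas:.1f} kg CH4 ~ {diesel_equiv:.0f} L diesel equivalent")
# ~19 L, matching the 19-20 litres stated above.
```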
This, by the way, is an important disadvantage for the vehicle operator: the storage volume required is high (about five times that of diesel), and the high-strength tanks needed for the high storage pressure are heavy (Eichlseder and Klell, 2010). The most important characteristics of methane are given in Table 4. At ambient pressure, natural gas can be kept in a liquid state at -161.5 °C; in this form it is internationally designated by the English abbreviation LNG. The use of LNG as a fuel requires special containers with very good thermal insulation, so-called cryogenic containers, which are very expensive. A further disadvantage is the constant evaporation loss caused by heat absorption through the imperfect insulation. LNG is mainly considered as a fuel for heavy-duty vehicles in places where the liquefied natural gas arrives at the point of use already in liquid form, so that no additional liquefaction costs are incurred. Methane has more or less the same engine-relevant properties as petrol, and thus it is primarily used as an alternative fuel to gasoline (Eichlseder and Klell, 2010). In the case of Diesel engines, the following three solutions can be considered for the use of methane:
a) Conversion of a Diesel engine to operation with pure natural gas, which is achieved by "converting" the Diesel engine into an Otto (spark-ignition) engine. This requires significant structural intervention (modification of the piston and cylinder head, reduction of the compression ratio, installation of an ignition system).
b) Engines converted to pure natural-gas operation at the factory; in this case, series methane engines are made from series Diesel engines.
c) Conversion of Diesel engines to dual-fuel operation, where ignition of the gas-air mixture is provided by injected diesel acting as a "torch". In this case, there are two basic solutions:
• application of a central gas-inlet system, where the gas-air mixture is created by a throttle/mixing unit integrated in the intake system;
• electronically controlled systems, where natural gas is introduced separately for each cylinder.
The most important elements of the engine's natural-gas supply system are as follows:
• high-pressure tanks and their fasteners for the storage of compressed natural gas;
• a piping system for supplying high-pressure gas to the engine and for refueling, with manual, electrical, and safety valves;
• a gas-pressure unit consisting of pressure reducers and pressure regulators;
• gas supply/mixing units, which can be either so-called Venturi throats or electronic injection valves;
• a catalyst, which can be a simple oxidation catalyst or a 3-way catalyst for engines with an electronically controlled injection system (Kiss, 2015).
(Notes to Table 4: 1) at a pressure of 1.013 bar; 2) at a temperature of 0 °C; 3) at 25 °C; 4) at λ = 1; 5) in air; 6) at a pressure of 250 bar and a temperature of 280 K.)

Properties regarding pollutant emission

Hydrogen can be treated as unique because it is a carbon-free material, which means that it theoretically allows combustion without the formation of CO, CO2, and HC. In real engine operation, however, these pollutants are present in small amounts in the exhaust gas because of the lubricating oil in the combustion chamber. When operating on hydrogen, only nitrogen oxides have to be considered as a relevant emission component. To enable a jerk-free, smooth switch between the two operating modes (petrol and hydrogen), the engine power on petrol has to be equivalent to the power when running on hydrogen.
This is achieved by electronic engine control. Emission values in hydrogen operation are low across the various driving cycles: below 2 % of the EURO 4 limits and also of the US SULEV (Super Ultra Low Emission Vehicle) limits. Only nitrogen oxides are significant, at approximately 30 % of the emission limit; a further reduction of nitrogen oxides to 10 % of the limit is possible with monovalent hydrogen operation (Eichlseder and Klell, 2010). Fig. 6 shows the emission values recorded during the various driving cycles in relation to the limit values.

Comparison and evaluation of the different fuel generations based on the characteristics presented

From the point of view of use in internal combustion engines, first-generation biofuels approximate the important properties of fossil fuels, but some of their properties are disadvantageous. Bioethanol has a significantly higher heat of vaporization, which makes cold starting difficult. The higher density and viscosity of biodiesel have a detrimental effect on atomization and hence on the quality of combustion. On the other hand, these fuels contain oxygen in their molecular structure, which may improve combustion and reduce pollutant emissions. The physical-chemical properties of second-generation biofuels are closer to those of fossil fuels; some properties, e.g. density and ignition quality, can even be better than those of conventional fossil-origin fuels, and they are significantly better than those of the first generation. Because of their different composition, they contain no, or only small amounts of, certain substances (e.g. oxygen, sulphur, nitrogen, aromatic hydrocarbons), which is why their use significantly reduces the pollutant components of the exhaust gas. With first-generation biofuels the individual components change in different directions: e.g. total HC decreases, but within this, aldehyde emissions increase. The fuel generations also differ in their energy content. The specific energy content of first-generation fuels is somewhat lower than, but still close to, that of fossil fuels, while that of second-generation fuels is nearly the same as that of conventional ones. Synthetic fuels contain no, or few, components such as aromatics, which reduces particulate emissions during their use. Because of the oxygen bound in the molecules of first-generation fuels, their combustion may produce less hydrocarbons, particulates, and carbon monoxide, while increasing the emission of nitrogen oxides. Hydrogen and methane are at a significant disadvantage in terms of volumetric energy content. The drawbacks of their gaseous state can be reduced by compression, which in turn requires complex, expensive systems carried on the vehicle. For internal combustion engines a general statement can be made: the amount of pollutants correlates with the fuel consumption of the engine, and the composition of the exhaust gas depends on the composition of the fuel. Hence, hydrogen and methane are the most environmentally friendly fuels.

Summary, outlook

In this article, the physical-chemical properties of first-, second-, and third-generation bio-derived fuels relevant to internal combustion engines, together with their emissions during combustion, have been introduced and analyzed. First-generation bioethanol and biodiesel have physical-chemical properties different from those of conventional gasoline and diesel.
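The volumetric-energy disadvantage of the gaseous fuels noted above can be made concrete with typical storage densities and heating values. Apart from the two figures taken from the text, all values below are illustrative literature numbers:

```python
# Illustrative volumetric energy densities of the fuels discussed.
# Diesel (34.7 MJ/L) and CH4 at 200 bar (129.5 kg/m3) come from the text;
# the remaining densities and heating values are typical literature figures.
fuels_mj_per_litre = {
    "diesel": 34.7,
    "gasoline": 32.0,
    "FAME biodiesel": 33.0,
    "ethanol": 21.2,
    "LNG": 0.42 * 50.0,                # ~21 MJ/L
    "CNG (200 bar)": 0.1295 * 50.0,    # kg/L * MJ/kg, ~6.5 MJ/L
    "H2 (200 bar)": 0.015 * 120.0,     # ~1.8 MJ/L
}

for fuel, e in sorted(fuels_mj_per_litre.items(), key=lambda kv: -kv[1]):
    rel = e / fuels_mj_per_litre["diesel"] * 100.0
    print(f"{fuel:>15}: {e:5.1f} MJ/L ({rel:3.0f} % of diesel)")
# Compressed hydrogen stores roughly 5 % and CNG roughly 19 % of diesel's
# energy per litre of tank volume, which is why tank size and mass dominate
# the practicality of the gaseous fuels.
```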
At present, their widespread low-level blending causes no disadvantage in driving dynamics or fuel consumption, and they may be beneficial in terms of emissions because of the oxygen in their molecular structure (Eichlseder and Klell, 2010). The properties of synthetic (second-generation) biofuels are close to, or in some cases better than, those of their fossil counterparts; therefore, blending them into fossil fuel, or even using them in pure form, does not carry the disadvantages that can appear with first-generation fuels. They also differ in composition and do not contain certain components (e.g. sulfur, aromatics), which results in lower emissions during their use. Third-generation methane and hydrogen provide the most significant environmental benefits. In terms of their physical-chemical properties they can be used as fuels for internal combustion engines, but this requires significant additional infrastructure on the vehicle, which makes their spread difficult. Although gas vehicles operate in certain countries (Italy, for example, has a significant number), the use of liquid fuels is currently dominant worldwide. The use of biofuels can be a solution for reducing CO2 emissions if the blending rate and the specific fuel consumption of the engine allow it. Blending rates into fossil fuels are at a low level nowadays, which means that if fuel consumption increases by the same proportion, the biofuels no longer count as CO2 reducers. This seems to be true for all biofuels containing bound oxygen, which reduces the heating value of the fuel; and this is an assessment at the tank-to-wheel level only. As for the air pollution from engines running on biofuels, biofuels can have advantages for certain components: CO and HC generally decrease. Other emission components, for example nitrogen oxides and particulate mass, may increase or vary in engines running on biofuels compared to conventional fossil-derived diesel. The authors' vision regarding the use of biofuels can be summarized as follows. First-generation fuels are expensive components, and automakers do not support their blending into today's state-of-the-art engines at high rates; the case for their use, in terms of physical-chemical properties and emissions, is not compelling. The use of synthetic fuels is better justified in two of the aspects examined, and their cost-effective production also improves their prospects; their manufacturing technology is currently in a learning phase with low market penetration. Hydrogen and methane are considered third-generation fuels when produced on a renewable basis. Gas technology can be a good basis for hydrogen; however, the use of hydrogen as a fuel for internal combustion engines is highly unlikely for now, because the technology is expensive and still immature. Fourth-generation biofuels have so far been studied from the production-technology point of view and hardly at all from the utilization point of view. The search for solutions in connection with biofuels is proceeding in many directions. Probably no single fuel will spread universally; rather, fuel diversity will expand further. The spread of any one of them will not be significantly influenced by its usability in engines or by the reduction in environmental load it achieves, but by other, more important factors already mentioned, such as energy security, the long-term preservation of fossil resources, and fuel diversification.
Complications of nephrogenic systemic fibrosis following repeated exposure to gadolinium in a man with hypothyroidism: a case report Introduction Nephrogenic systemic fibrosis is a condition that has recently been recognized in patients with chronic renal disease and is associated with the use of gadolinium-based contrast agents, which are in ubiquitous use in magnetic resonance imaging scans. The condition is believed to arise through inadequate renal clearance of the gadolinium-based contrast agents, resulting in bodily deposition of the gadolinium; this is most widely recognized in the skin, but also occurs in other tissues. Case presentation We report the case of a 52-year-old Caucasian man with hypothyroidism and chronic renal disease who developed nephrogenic systemic fibrosis upon repeated exposure to gadolinium, and who presented with a subsequent malabsorption of levothyroxine. This malabsorption resolved only partially upon amelioration of other conditions that might contribute to malabsorption, including edema and infectious diarrhea. The presence of gadolinium was quantified in specimens from his gastrointestinal tract. Our patient otherwise demonstrated adequate gastrointestinal nutritive absorption, objectively shown by normal albumin levels, resolution of diarrhea, and maintenance of his body weight. Conclusions Our observations suggest that nephrogenic systemic fibrosis can also affect tissue of the gastrointestinal tract, potentially contributing to partial malabsorption of levothyroxine in patients with hypothyroidism. Introduction Nephrogenic systemic fibrosis (NSF), also known as nephrogenic fibrosing dermopathy, is characterized by thickening and hardening of the skin overlying the extremities and trunk, secondary to fibrosis of the dermis, in patients with renal failure [1,2]. The condition was first described by Cowper et al. as an idiopathic cutaneous fibrosing disorder or scleromyxedema-like illness in patients on dialysis in whom transplantation had been unsuccessful [3]. Other than end-stage renal disease (ESRD) patients on dialysis, patients with hepatorenal syndrome (HRS), recent vascular surgery, or recent venous thromboembolic events, and those with an estimated glomerular filtration rate (eGFR) < 30 ml/minute/1.73 m2, appear at greatest risk for developing NSF [4,5]. NSF was initially believed to be limited to the skin, but recent reports, including autopsies, have identified involvement of skeletal muscle, myocardium, lungs, and kidneys, indicating the systemic nature of this disease [2,6,7]. Case presentation A 52-year-old Caucasian man with chronic kidney disease (CKD) was admitted to our university hospital from a skilled nursing facility with generalized edema, woody thickening of his skin, and bilateral hand contractures after recurrent hospitalizations, during which he underwent multiple MRI scans with gadolinium-based contrast agents (GBCA). Our patient's medical history was significant for type 2 diabetes mellitus, hypertension, CKD secondary to diabetic nephropathy, hypothyroidism, congestive heart failure, and hemochromatosis. Upon admission, his medications included neutral protamine Hagedorn (NPH) insulin 15 units twice daily, levothyroxine (LT4) 75 μg once daily, clonidine 0.1 mg twice daily, metoprolol sustained-release 25 mg once daily, amlodipine 5 mg once daily, enteric-coated aspirin 325 mg daily, and calcitriol 0.5 μg once daily.
In the four months prior to this admission, he was first admitted for aspiration pneumonia and had a prolonged hospital course. He was started on ceftriaxone and metronidazole, later changed to piperacillin/tazobactam and levofloxacin for broader coverage. His baseline creatinine level was 3.5 mg/dL and he had an estimated glomerular filtration rate (eGFR) of 24 mL/minute. Shortly after admission, he developed acute renal failure (ARF). Acute tubular necrosis (ATN) and the increased creatinine were attributed to episodes of hypotension. Over the course of several days, his creatinine level stabilized in a range between 4.0 and 4.5 mg/dL. His hospital course was then complicated by bloody diarrhea, prompting several gastrointestinal studies. A computed tomography (CT) scan of his abdomen revealed a diffusely fluid-filled colon with colitis. Flexible sigmoidoscopy showed acute and chronic inflammation, with biopsy results consistent with ischemic colitis. Thereafter, he underwent magnetic resonance angiography (MRA) and MRI of the abdomen with gadolinium to evaluate for mesenteric vein thrombosis. These tests returned negative results. Our patient was subsequently treated with total parenteral nutrition (TPN) for bowel rest and his symptoms resolved. Subsequently, our patient developed fungemia. An abdominal MRI scan with gadolinium revealed no abscess. At this time, his creatinine level increased to 6.6 mg/dL and his eGFR decreased to 9 mL/minute. Our patient was treated for presumptive recurrent ischemic colitis without perforation and was continued on a prolonged course of TPN for bowel rest. His symptoms improved and he was discharged to a skilled nursing facility. Shortly after his discharge, he was readmitted with new right-sided weakness and underwent an MRI scan with GBCA that revealed a left thalamic cerebrovascular accident, from which he recovered quickly. A new physical finding was skin with a diffuse woody peau d'orange appearance, with tightening of the palmar fascia of both hands and all fingers slightly drawn in. His neurological examination results were unremarkable. His admission thyroid stimulating hormone (TSH) level was 11.21 mU/L with a free thyroxine (free T4) level of 0.7 ng/dL while on 75 μg of levothyroxine (LT4) (Figure 1). A skin biopsy was performed for evaluation of the hand contractures. The histology revealed numerous dermal fibroblast-like cells with dermal mucin deposition, consistent with NSF. Two weeks later, thyroid studies revealed a rise in serum TSH to 117.5 mU/L and a drop in free T4 to 0.4 ng/dL. His LT4 dose was increased from 75 to 100 μg daily. Our patient was observed by nursing staff for medication compliance. Because of the possibility of LT4 malabsorption, our patient underwent an oral LT4 loading test (Figure 2). The protocol for this test entailed obtaining baseline total T4, TSH, and free T4 levels, followed by administration of a single oral dose of LT4 1000 μg, with repeat total T4 measured at two hours, and again at four hours, after LT4 administration [8]. Baseline laboratory tests showed a TSH level of 110.6 mU/L, a free T4 level of < 0.3 ng/dL, and a total T4 level of 1.8 ng/dL. The challenge test revealed a failure of total T4 to rise, with values of 1.7 ng/dL at two hours and 1.8 ng/dL at four hours, consistent with LT4 malabsorption. His LT4 malabsorption was suspected to be partly secondary to NSF caused by exposure to gadolinium from the multiple MRI scans.
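The loading-test logic described above lends itself to a small helper. The decision threshold below is a hypothetical placeholder for illustration; the case report does not state a numeric cut-off and judged malabsorption from the essentially flat total-T4 curve:

```python
# Sketch of the oral LT4 loading (absorption) test described above: baseline
# total T4, a single 1000 ug oral dose, then total T4 at 2 h and 4 h.
# The minimum expected rise is a hypothetical placeholder, not a value
# from the report.
def interpret_lt4_loading(baseline_t4, t4_2h, t4_4h, min_rise=1.0):
    """Crude verdict from total-T4 values (all in the same units)."""
    peak_rise = max(t4_2h, t4_4h) - baseline_t4
    if peak_rise >= min_rise:
        return peak_rise, "absorption detected"
    return peak_rise, "consistent with malabsorption"

# First test in this case: baseline 1.8, then 1.7 at 2 h and 1.8 at 4 h.
print(interpret_lt4_loading(1.8, 1.7, 1.8))
# -> (0.0, 'consistent with malabsorption'); the repeat test described
#    below showed a small but demonstrable rise (3.9 -> 6.1).
```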
Our patient was started on intravenous LT4 with some improvement of symptoms. Repeat laboratory testing four days later revealed a decline in TSH to 61.95 mU/L and a rise in total T4 to 4.4 ng/dL. He was discharged on intra-muscular LT4 100 μg daily. However, two months later, our patient was seen at an outside community hospital for treatment of a urinary tract infection (UTI) and was switched back to oral LT4 upon discharge. Testing one month later showed a recurrent rise in TSH to 104.83 mU/L, and our patient was restarted on intra-muscular LT4 at a dose of 100 μg daily. His pharmacy was contacted to ensure compliance with the prescribed refill. Repeat laboratory test results from two months later showed modest improvement, with a TSH of 39.82 mU/L. Based on these results, his intra-muscular LT4 was increased to 120 μg daily. Of note, during clinical follow-up, our patient did not appear to have malabsorption of other nutrients. His albumin rose over time from 2.5 g/dL on initial admission to 3.7 g/dL 10 months later. Also, his diarrhea did not recur, and he maintained his weight. Four years ago, because of a nationwide shortage of intra-muscular LT4, a decision was made to repeat the oral loading LT4 test to help assess whether bowel edema, as a complication of hypothyroidism, had been the primary etiological factor in the malabsorption of LT4. The results of this test showed a total T4 level of 3.9 ng/dL, which rose to 6.1 ng/dL at both the two-hour and four-hour time points. With this small, but demonstrable, amount of oral LT4 absorption, a decision was made to retry oral LT4, starting at a dose of 900 μg daily. Based on repeat laboratory analysis, his LT4 dose was gradually decreased to a maintenance dose of 400 μg daily. He maintained a stable TSH on oral LT4 for the next year until his death secondary to sudden cardiac arrest. Because we suspected gastrointestinal involvement of nephrogenic fibrosing dermopathy as a contributing factor in this partial malabsorption, gadolinium quantification was performed on gastrointestinal specimens at the laboratory of one of the authors (WAH). This analysis revealed 154.0 parts per million (PPM) in the skin, 22.8 PPM in the colon, and 7.0 PPM in the ileum/cecum/colonic specimens, with all appropriate positive and negative control tissue blanks functioning (Figure 3). Discussion To the best of our knowledge, this case presentation is the first to describe gastrointestinal involvement of NSF that appeared to result in uncontrolled hypothyroidism secondary to malabsorption of oral LT4. Of particular interest, our patient did not appear to have malabsorption of other oral medications. LT4 is known to be predominantly absorbed in the small intestine, specifically the jejunum and ileum. Ischemic colitis, infectious diarrhea, and the use of TPN during his hospitalization, along with the presence of diffuse bowel edema, may each have contributed to poor absorption of this medication. However, in such circumstances, one would expect the malabsorption to resolve with improvement in each of these conditions; in our patient's case, malabsorption persisted despite resolution of these other complicating conditions. The issue of potential bowel edema contributing to malabsorption was addressed by utilizing an intra-muscular formulation of LT4, to allow for attainment of a euthyroid state.
However, repeat oral loading LT4 testing demonstrated that our patient remained considerably resistant to absorption, despite achieving normalized TSH levels and resolution of visible bodily edema. These findings led us to the theoretical consideration that NSF might involve the intestinal mucosa. This theory is supported by the presence of demonstrable gadolinium in gastrointestinal biopsies, along with his continued need for supratherapeutic doses of LT4 to maintain a euthyroid state. However, it is critical to note that the presence of gadolinium in the tissue does not necessarily imply pathogenicity, particularly as the gastrointestinal specimen did not show any other significant pathology. Admittedly, there are limited data on gadolinium levels in bodily tissues of patients with NSF. Disturbances in LT4 absorption have been reported with co-administration of calcium carbonate, ferrous sulfate, aluminum hydroxide, chromium picolinate, and magnesium, prompting speculation that other divalent and trivalent metal ions could interfere with absorption [9][10][11][12]. It is unclear, but certainly worthy of speculation, whether gadolinium, being a trivalent ion, could interfere with thyroxine absorption; it is further speculative whether this would be a persistent anomaly, considering that gadolinium so deposited in tissue most likely exists as precipitated gadolinium phosphate [13]. These are intriguing avenues for future exploration. NSF has been described in various organs, including the myocardium, lungs, kidneys, testes, and dura mater [2,6,7]. There are several hypotheses regarding the pathogenesis of NSF. Some investigators assert that transforming growth factor β is involved in the pathogenesis of NSF [6]. In vitro studies have shown that transforming growth factor β1 has the potential to induce fibrocyte differentiation and to promote expression of fibrocyte collagen I, which deposits in the cells to cause fibrosis [14]. The US Food and Drug Administration (FDA) first notified health care professionals and the public about the gadolinium-related risks for NSF five years ago. GBCAs are commonly used to improve visualization of structures when patients undergo an MRI or MRA procedure. Five gadolinium-based contrast agents were approved for use in the US at the time of this report: Magnevist (gadopentetate dimeglumine), Omniscan (gadodiamide), OptiMARK (gadoversetamide), MultiHance (gadobenate dimeglumine), and Prohance (gadoteridol). When a specific agent was identified, Omniscan (the gadolinium agent used in our patient) was the one most often implicated in NSF, followed by Magnevist and OptiMARK. There is no consistently effective treatment for NSF, although improvement has been reported with plasmapheresis, and in most reported cases the course of the disease appears to correspond to the improvement in underlying renal function [15]. Conclusions As general internists and endocrinologists continue to care for more patients with chronic kidney disease and hypothyroidism, it is prudent that physicians recognize any emerging disorders in these patients, including any untoward complications associated with NSF. Continued observation and additional reporting will be necessary to ascertain whether NSF, with deposition of gadolinium in the bowel, results in malabsorption of LT4 or other medications, and what mechanism of malabsorption may be involved in this phenomenon.
Consent Written informed consent was obtained from the patient's next-of-kin for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
The Evaluation of Risk Factors for Vancomycin-Resistant Enterococcus Colonization and Infection Among Mixed Adult Intensive Care Unit Patients Background and objective Despite adherence to strict infection control measures, vancomycin-resistant enterococcus (VRE) colonization and VRE infections are still important problems nowadays. However, there are only a limited number of studies examining the factors causing the transformation of VRE colonization to VRE infection in the intensive care unit (ICU). The aim of this study is to delineate the prevalence of VRE colonization and its transformation into infection, and the risk factors leading to infection. Methods Patients who were admitted to the third-level mixed-type ICU from 2012 to 2015 for at least 24 hours and who acquired VRE colonization or VRE infection, either at or after admission, were included in the study, and their medical records were examined retrospectively. VRE rectal swabs were taken from each patient weekly, as well as on admission to and discharge from the ICU. If a VRE-positive patient tested negative for VRE on three consecutive rectal swabs taken as surveillance cultures, this patient was accepted as VRE negative. Demographic data, Acute Physiology and Chronic Health Evaluation II (APACHE-II) scores, invasive procedures, treatments (corticosteroid, antibiotic, etc.), nutrition types, laboratory results, and ICU outcomes were recorded. Results Among 1730 patients admitted to the ICU, 101 (5.8%) were found to carry VRE colonization. Twelve (11.8%) out of 101 patients colonized with VRE developed VRE infection. Among the included patients, 56.4% had urinary tract infections, 68.3% had pneumonia, 15.8% had surgical site infections, and 24.8% had catheter-associated infections. The most prevalent organism was Enterococcus faecium in patients with VRE colonization (64.3%) and infection (91%). VRE turned negative in 67% of patients with VRE colonization during their stay in the ICU. Renal replacement therapy was statistically significantly more frequent (p < 0.05) in the group with VRE infection (66.7%) than in the VRE-colonized group (26.1%). The risk of developing infection among patients who carried VRE for more than one week was also statistically significant (p = 0.025). Demographic data, APACHE-II scores, treatments, nutrition type, previous antibiotic usage and types, invasive procedures, laboratory results, and ICU outcomes were similar between the patients with VRE colonization and those with infection. Conclusion A longer duration of ICU stay in patients with colonization, and previous renal replacement therapy, increase the transformation of VRE colonization to VRE infection. Strategies toward decreasing the ICU stay of VRE-colonized patients are the main means of controlling the rate of VRE infection. Introduction Enterococci, present in the normal flora of the gastrointestinal system (GIS) of humans, are associated with endocarditis, wound infections, and urinary tract infections, and are also agents of nosocomial bacteremia. Despite the strict infection control measures taken, vancomycin-resistant enterococcus (VRE) colonization and VRE infections are still important problems nowadays. However, there are only a limited number of studies examining the factors causing the transformation of VRE colonization to VRE infection in the intensive care unit (ICU) [1]. Enterococci can develop resistance to vancomycin and cause colonization and infection in hospitalized patients.
The GIS is the major reservoir of Enterococcus faecium. Since people carrying VRE are usually asymptomatic, they can only be detected during surveillance studies. VRE colonization can persist for weeks, even months, especially in patients belonging to high-risk groups. Besides the GIS, VRE may also colonize the flora of the female genital tract. Medical devices, tools, articles present in the patient room (bed, table, door knobs, electric switches, etc.), and surfaces are also important VRE sources. Devices commonly used on patients, such as electronic thermometers and electrocardiogram electrodes, can be responsible for the spread of VRE [2,3]. VRE strains exhibit multiple antibiotic resistance and can be highly resistant to heat and disinfection; strains can thus persist in the hospital environment for a considerable period of time [4]. Patients at risk for VRE colonization include those who use oral or parenteral glycopeptides, metronidazole, cephalosporins, fluoroquinolones, carbapenems, or beta-lactam antibiotics; patients who stay in hospital for long periods; patients with previous hospital infections; neutropenic patients; dialysis patients; organ transplant patients; patients with central venous catheters; patients fed via an enteral tube; human immunodeficiency virus-positive patients; patients who have undergone intra-abdominal surgery; and patients with accompanying diseases [1,[4][5][6][7][8][9]. Early identification of patients who are colonized with VRE, together with isolation measures, prevents the spread of these microorganisms inside the hospital. The number of patients infected or colonized with VRE can be decreased by determining and eliminating the risk factors for patients with VRE in the ICU. In Turkey, there are only a limited number of studies examining the risk factors of VRE colonization among patients in the ICU. This study aimed to retrospectively determine the prevalence of VRE colonization and infection at the time of ICU admission and during the stay in the ICU, as well as the risk factors causing these. Materials And Methods Patients admitted for at least 24 hours, between 2012 and 2015, to the 20-bed mixed (surgical, medical, trauma) ICU of Yildirim Beyazit University Medical Faculty at Ankara Atatürk Training and Research Hospital were included in the study. VRE rectal swab cultures were taken from each patient at their first admission to the ICU. Medical records of patients with positive VRE colonization, and of those in whom colonization transformed to infection, according to the rectal swab cultures taken within the first 24 hours following admission to the ICU and the weekly rectal swab cultures taken thereafter, were examined retrospectively from the hospital, ICU, and infectious diseases record systems. If a patient was negative for VRE on the first swab taken within 24 hours in the ICU and on three consecutive weekly rectal swabs, the patient was accepted as VRE negative. However, when VRE colonization/infection was identified in the ICU, the screening cultures in the unit were continued weekly. If there were no other patients with VRE colonization and the patients were negative for VRE, VRE cultures were taken monthly until a patient was found positive for VRE.
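The clearance rule described above (a colonized patient is reclassified as VRE negative only after three consecutive negative swabs) is easy to express as a small state machine. The function below is an illustrative sketch of that rule, not code from the study:

```python
# Sketch of the surveillance rule used above: a VRE-positive patient is
# reclassified as negative only after three consecutive negative rectal
# swabs. Input: swab results in chronological order, True = positive.
def vre_status(swabs, negatives_required=3):
    colonized = False
    consecutive_negatives = 0
    for positive in swabs:
        if positive:
            colonized = True
            consecutive_negatives = 0
        elif colonized:
            consecutive_negatives += 1
            if consecutive_negatives >= negatives_required:
                colonized = False
                consecutive_negatives = 0
    return "VRE positive" if colonized else "VRE negative"

print(vre_status([True, False, False]))          # still positive (2 negatives)
print(vre_status([True, False, False, False]))   # cleared after 3 negatives
print(vre_status([True, False, False, True]))    # a new positive resets the count
```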
In this study, information such as age; gender; the number of accompanying diseases; Acute Physiology and Chronic Health Evaluation II (APACHE-II) score within the first 24 hours; the length and history of stay in a nursing facility, hospital, or ICU within the last three months; the unit from which the patient was transferred to the ICU, or whether the patient was transferred from another hospital; the reason for admission to the ICU (surgical, medical, or trauma); and whether the patient was neutropenic was recorded. Factors such as whether the patient required hemodialysis; nutritional status (parenteral vs. enteral); enteral nutrition type (oral, nasogastric, percutaneous endoscopic gastrostomy, etc.); presence of nosocomial infection prior to VRE colonization; candidemia status; antibiotic usage, duration, and quantity prior to VRE colonization; steroid usage and quantity (prior to and/or after admission); whether the patient had a postoperative (emergency/elective) or abdominal operation; and the date on which VRE was detected, whether after admission or at admission, were also recorded. It was recorded whether the patient had a central venous catheter and/or a dialysis catheter, whether colostomy was performed prior to colonization, whether tracheostomy was performed or the patient was intubated, whether the patient was on invasive/noninvasive mechanical ventilation, and whether any other invasive procedures (chest tube, angiography, bronchoscopy, etc.) were performed. Albumin, creatinine, blood urea nitrogen (BUN), hemoglobin, alanine aminotransferase (ALT), and aspartate aminotransferase (AST) levels were recorded on the day when colonization was detected. Urinary tract infection, pneumonia, surgical site infection (skin-wound site infection), catheter infection, and other infection status were studied; if VRE infection was present, the site from which it was isolated was determined, and it was established whether bacteremia was present. The type of VRE colonizing and/or causing infection in the patient, the period of carriage of patients colonized with VRE, the number of patients whose VRE colonization turned negative, and the length of ICU stay were recorded. Statistical methods Statistical analyses were performed with the SPSS 22.0 (Statistical Package for the Social Sciences, IBM Corp., Armonk, NY) package program. The Kolmogorov-Smirnov test was used to assess the normal distribution of quantitative data. Mean and standard deviation values were provided for quantitative data, and frequency and percentage (%) values were provided for qualitative data. Results Among 1730 patients admitted to the ICU, 101 (5.8%) were found to carry VRE colonization. VRE infection developed in 11 (10.9%) of the patients with VRE colonization. A total of 42 male and 59 female patients were studied, and the average age of the patients was 75 ± 17.8 years. Underlying diseases were present in 91.1% of patients: 41.6% had heart disease, 51.5% had hypertension, 32.7% had diabetes mellitus (DM), 19.8% had kidney failure, 17.8% had asthma, 5% had hematologic malignancy, and 5% had liver disease. The average APACHE-II score of patients was 25 ± 6.2; the lowest APACHE-II score was 7 and the highest was 38. The average hospital stay of patients was 42 ± 64.2 days, and their ICU stay was 37 ± 62.9 days.
VRE colonization was detected, on average, on day 20 ± 29 of hospital admission and on day 13 ± 27.7 of ICU admission; 18.8% of patients came to our hospital from an external center or nursing facility. Most patients were admitted to the ICU from the emergency service (39.6%) and from orthopedics (17.8%). About 12.9% of patients were admitted to the ICU after an emergency operation, and 13.9% after an elective operation. At least one dose of steroids had been given to 42.6% of patients between hospital admission and the detection of VRE. About 16.8% of patients had catheter-mediated parenteral nutrition and 80.2% had enteral nutrition; the majority of enterally fed patients received nasogastric feeding (55.4%). Between hospital admission and the detection of VRE, 29.7% of patients received at least one hemodialysis treatment, 14.9% underwent tracheostomy, and 57.4% were intubated. About 83.2% of patients had a central catheter, and 6.9% had a colostomy. Regarding other infections, 56.4% of the patients had urinary tract infections, 68.3% had pneumonia, 15.8% had surgical site infections, and 24.8% had catheter-associated infections. About 81.2% of the patients had a focus of infection prior to VRE. During their stay in the ICU, 67.3% of the patients with VRE colonization became VRE negative. Hospital admissions were due to surgery in 19.8% of cases, to medical reasons in 70.3%, and to trauma in 7.9%. Invasive procedures were not applied to 76.8% of patients; bronchoscopy was applied to 5.9%, a chest tube to 11.9%, angiography to 4%, and colonoscopy to 1% of patients. About 96% of the patients did not have neutropenia. The most prevalent organism was E. faecium in patients with VRE colonization (27.7%) and infection (91%) (Table 1). (Table 2 lists these parameters for the patients with VRE colonization, n = 101, as percentages.) When the patients with VRE colonization and those with VRE infection were compared via the chi-square test to determine the risk factors, renal replacement therapy was found to be statistically significant (p = 0.017). Also, the risk of developing infection among patients who carried VRE for more than one week was statistically significant (p = 0.025) (Table 3). Discussion The aim of this study is to delineate the prevalence of VRE colonization and its transformation into infection, and the risk factors leading to infection. Knowing the risk factors allows the rate of VRE infection to be reduced through preventive measures. In a meta-analysis of 37 studies, the rates of VRE infection among colonized patients ranged from 0% to 45% [10]. In our study, we found the rate of VRE infection in colonized patients to be 10.9%, which is similar to other studies. A total of 42 male and 59 female patients having VRE colonization/infection were included in this study. Underlying diseases were present in 91.1% of patients: 41.6% had heart disease, 51.5% had hypertension, 32.7% had DM, 19.8% had kidney failure, 17.8% had asthma, 5% had hematologic malignancy, and 5% had liver disease.
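The chi-square comparison reported above can be reproduced in a few lines. The 2×2 counts below are hypothetical reconstructions from the percentages quoted later in the discussion (about 63.6% of the 11 infected vs. 25.5% of the 90 colonized-only patients receiving renal replacement therapy); they are for illustration only:

```python
# Illustrative chi-square test for the renal-replacement-therapy comparison.
# Counts are hypothetical reconstructions from the quoted percentages:
# 7/11 infected vs. 23/90 colonized-only patients on RRT.
from scipy.stats import chi2_contingency

table = [
    #   RRT  no RRT
    [7, 4],    # VRE infection (n = 11)
    [23, 67],  # VRE colonization only (n = 90)
]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# p ~ 0.02 with these reconstructed counts, the same order of magnitude as
# the p = 0.017 reported in the paper.
```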
In a study demonstrating that DM increases VRE risk, it was proposed that both hyperglycemia and corticosteroid usage cause additional immune system dysfunction by decreasing the clearance of organisms entering the bloodstream from the gastrointestinal system; corticosteroid usage might also decrease the integrity of the gastrointestinal system and skin [11]. In our study as well, 42.6% of patients had a history of at least one dose of steroids in the ICU between hospital admission and the detection of VRE. However, when the two groups with VRE colonization and VRE infection were compared, no significant difference was found in terms of DM and corticosteroid usage. In a retrospective, single-center study covering 53 patients with VRE colonization, neutropenic patients commonly diagnosed with acute myeloid leukemia and hematopoietic stem cell transplantation were studied, and approximately 38% of patients developed a VRE infection [12]. In our study, 5% of patients had hematologic malignancy, and 4% of all patients had neutropenia. Although we could not find a correlation between VRE and neutropenia or accompanying diseases, it is commonly known that multiple comorbidities lengthen the ICU stay. A longer ICU stay increases the likelihood of receiving antibiotics and the risk of infection with potential pathogens as well. For this reason, it is very important to take infection control measures. In a study involving pediatric oncological patients, the duration of neutropenia, the number of antibiotic agents, and the duration of antibiotic treatment were shown to be risk factors; it was found that environmental contamination had a significant effect on the patient-to-patient transmission of VRE, and interventions that included the application of infection control measures were associated with a decrease in the incidence of gastrointestinal colonization [13]. As longer stays in the ICU increase the possibility of receiving longer antimicrobial treatment, the risk of antibiotic-resistant infections may also increase. VRE incidence could be reduced by decreasing the length of stay in the ICU and vancomycin usage [14]. Newly acquired VRE incidence during ICU stay was examined in a study in which 871 patients were screened, and length of stay in the ICU was found to be an independent risk factor [15]. Patients with VRE colonization were evaluated at two ICUs in a teaching hospital in Sao Paulo, and it was found that the only risk factor significantly associated with VRE under intensive-care conditions was the length of stay in the ICU [16]. In our study, we found that the risk of developing infection increased in colonized patients who were carriers for more than one week. Strategies to reduce the length of ICU stay of patients who remain colonized for more than one week will also enable us to control the rate of VRE infection. In addition, the length of stay in the hospital before admission to the ICU was also found to be an independent risk factor for VRE carriage [10,14]. In our study, when the mean hospital stays of colonized patients and patients with VRE infection were compared, no significant difference was observed. A prospective, observational study in Brazil found that antibiotic use and carbapenem use prior to VRE were independent risk factors for VRE colonization [17].
In a study involving 43 ICUs in Italy, antibiotics were used in 75% of the ICU patients without sepsis; in 20% of these, no indication could be identified, and the majority were labeled "prophylaxis." It was emphasized that the unnecessary use of antibiotics is very common [18]. The American Thoracic Society guideline recommends the use of early and appropriate antibiotics to reduce nosocomial infections. In our study, there were 82 patients who had infections before VRE and were using antibiotics, but no significant difference was observed when the two groups were compared. In one study, 24 renal patients with VRE colonization/infection were compared with 29 renal patients with vancomycin-susceptible enterococcal infection, and a greater VRE risk was detected in patients with severe renal disease [19]. In another study, it was indicated that hemodialysis was a significant risk factor for the acquisition of VRE colonization/infection among all hospitalized patients; however, it was also indicated that end-stage renal failure was not a risk factor [20]. It is known that central venous catheters and dialysis catheters placed before hemodialysis can also be a source of infection [21]. For this reason, the association between VRE and central catheters has been investigated in various studies. In a study carried out by Sakka et al., invasive devices were found to be a risk factor for VRE. However, it is not evident whether the catheters function as a primary channel for VRE infection, or whether frailty, long hospitalization, and more frequent catheter use in patients with serious comorbidity cause the infection [22]. Similarly, in our study, the rate of renal replacement therapy differed significantly between the group with VRE infection (63.6%) and the group with VRE colonization (25.5%), but neither invasive procedures nor central catheters correlated with the development of VRE infection. The comorbidities of hemodialysis patients, their frequent exposure to other bacterial agents, and their immune deficiency are consistent with these findings; since patients on long-term hemodialysis also stay in hospital longer, we believe that hemodialysis is a risk factor for VRE. The retrospective nature of our study brought some limitations. Patients coming from an external center or nursing facility constituted 18.8% of all patients. As a result, it was not possible to determine whether these patients had previous colonization/infection and, if so, how long they had been colonized or infected. This could have had an impact on our statistical data. Conclusions In this study, we determined that patients receiving hemodialysis treatment and patients who are long-term VRE carriers are at high risk of VRE infection. Consequently, early detection of patients with VRE colonization is vital for preventing VRE infection and for taking measures against it. Logically, in order to decrease the VRE infection rate, our main target has to be the early detection of patients with VRE colonization and the implementation of strategies intended to decrease the length of ICU stay. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Effect of Supplementation with Trimethylglycine (Betaine) and/or Vitamins on Semen Quality, Fertility, Antioxidant Status, DNA Repair and Welfare of Roosters Exposed to Chronic Heat Stress Simple Summary Semen, reproductive traits, and the welfare of males are negatively affected by environmental stressors. Stress-alleviating agents, such as vitamins and osmoregulators, may improve semen quality, seminal and blood plasma constituents, antioxidant status, and the welfare of roosters exposed to chronic heat stress (CHS). It has been shown that betaine (Bet) may be a useful tool for improving the reproductive traits of roosters exposed to CHS, and may have effects comparable to vitamin C and/or E, thus improving the breeding strategy. Abstract In this study, we investigated the influence of betaine (Bet, 1000 mg/kg), with or without vitamin C (VC, 200 mg/kg ascorbic acid) and/or vitamin E (VE, 150 mg/kg α-tocopherol acetate), on the semen quality, seminal and blood plasma constituents, antioxidant status, DNA repair, and welfare of chronic heat stress (CHS)-exposed roosters. A total of 54 roosters were divided into six groups of nine replicates. One group was kept under thermoneutral conditions, whereas the other five were kept under CHS. One of the five groups served as an unsupplemented CHS group and was fed a basal diet. The other four CHS groups were supplemented with Bet, Bet + VC, Bet + VE, and Bet + VC + VE, respectively. Our data indicate that supplementation with Bet, Bet + VC, Bet + VE, and Bet + VC + VE resulted in complete recovery from the CHS effect on sperm concentration and livability, semen pH, and fertility compared to the thermoneutral group. Seminal plasma total antioxidant capacity (TAC) was significantly (p < 0.05) increased with Bet, with or without vitamins, compared to the thermoneutral and CHS groups. Urea and blood plasma malondialdehyde (MDA) were totally recovered with Bet, with or without vitamin treatments. Both jejunum and ileum DNA were partially recovered following Bet, with or without vitamin supplementation. In conclusion, Bet at 1000 mg/kg feed may be a useful agent for improving the semen quality, fertility, welfare, and breeding strategy of breeder males in hot climates. Introduction Oxidative stress plays a key role in sperm function and motility, cell quality, and fertility, since lipid peroxidation increases under chronic heat stress (CHS), particularly when the temperature exceeds the thermoneutral zone. Materials and Methods The care and handling of the animals maintained their rights and welfare, with minimal stress, according to the International Guidelines for research involving animals (Directive 2010/63/EU). A total of 54 thirty-two-week-old male Mandarah chickens (a dual-purpose breed) with similar initial body weights were distributed randomly among six treatment groups of nine males each during weeks 32-52 of age. Each male served as a replicate and was individually housed in a galvanized wire cage in batteries with standard dimensions (30 × 50 × 60 cm) in an environmentally controlled, lightproof house (closed system; controlled for temperature, humidity, and light). Each cage was provided with a manual feeder and nipple drinkers. Chickens were offered free access to mash diets and water throughout the experimental period. The chemical composition of the experimental diets was determined according to [32]. The health care, housing conditions, heat stress protocol, indoor and outdoor temperature, relative humidity (RH), and light schedule were similar to those reported in [15].
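The random allocation described above (54 males into six treatment groups of nine) can be sketched as follows; this is an illustrative reconstruction, not the authors' actual randomization procedure, and the seed is arbitrary.

```python
# Illustrative sketch of randomly allocating 54 roosters to six treatment
# groups of nine birds each; not the authors' actual randomization code.
import numpy as np

rng = np.random.default_rng(seed=42)  # seed chosen arbitrarily
bird_ids = np.arange(1, 55)           # 54 individually caged males
rng.shuffle(bird_ids)

groups = ["Thermoneutral", "CHS", "CHS+Bet",
          "CHS+Bet+VC", "CHS+Bet+VE", "CHS+Bet+VC+VE"]
allocation = {g: ids.tolist() for g, ids in zip(groups, np.split(bird_ids, 6))}
for g, ids in allocation.items():
    print(f"{g}: {ids}")
```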
The roosters were reared indoors either at an optimum temperature of 22-24 °C with RH of 45-55%, serving as the thermoneutral group (positive control) and fed a basal diet (Table 1), or under CHS (38 ± 1 °C; 55-65% RH) for three successive days a week, from 11:00 a.m. to 3:00 p.m., after which they were returned to the thermoneutral condition. Roosters under CHS were divided into five groups. The first was kept under CHS and fed the basal diet without additional Bet or vitamins, serving as the CHS negative control. The Bet group was kept under CHS and fed the basal diet supplemented with 1000 mg/kg of Bet (natural Betafin® S4 containing 93% dry Bet; Danisco Animal Nutrition, Marlborough, UK). The Bet + VC group was kept under CHS and fed the basal diet supplemented with 1000 mg/kg betaine + 200 mg/kg ascorbic acid (l-ascorbic acid; a heat-stabilized product, Hoffmann-La Roche, Basel, Switzerland). The Bet + VE group was kept under CHS and fed the basal diet supplemented with 1000 mg/kg betaine + 150 mg/kg α-tocopherol acetate (VE; α-tocopherol acetate, Hoffmann-La Roche, Switzerland). The Bet + VC + VE group was kept under CHS and fed the basal diet supplemented with 1000 mg/kg betaine + 200 mg/kg ascorbic acid + 150 mg/kg α-tocopherol acetate (VC + VE). Table 1 (excerpt). Composition of the basal diet: crude protein 179.6 g/kg; ether extract 28.5 g/kg; crude fibre 4.78 g/kg; methionine 4.1 g/kg; methionine + cysteine (TSAA) 6.70 g/kg; lysine 8.8 g/kg; calcium 1.10 g/kg; available P 3.94 g/kg. Data Collection Roosters were individually weighed at 32 and 52 weeks of age; body weight gain was calculated as the difference between the initial and final body weights. Daily feed intake (g/bird) and mortality were recorded for each replicate. Semen was collected weekly from all roosters starting after 10 weeks of treatment, at week 42 of age, and collection was maintained for another 10 weeks to determine semen quality as outlined by Attia et al. [33]. Semen collection was performed by abdominal massage. The internal temperature of the semen at the time of collection was kept in the range of 41-44 °C using a water bath (at 37 °C). Semen samples were transferred to the laboratory immediately after collection to determine spermatozoa quality. Moreover, special attention was given to protecting the semen from cold shocks and direct light. Throughout the course of semen collection, the time, place of collection, and collector were kept constant. At weeks 44, 48, and 52 of age, the roosters' fertility was evaluated. Semen samples were collected by abdominal massage and used for the artificial insemination of hens. Semen was used after 1:1 dilution with a 0.9% saline solution as a diluent [34]. The semen of each male was used to inseminate 10 hens. Each hen was inseminated with 0.5 mL of semen over two successive days. After two days of insemination, eggs were collected for ten days, stored at room temperature (22-24 °C with 45-55% RH), incubated (37.6 °C, 55% RH), and hatched (36.8 °C, 65% RH) in an automatic incubator. Fertility was calculated by dividing the number of fertile eggs by the total number of eggs set. At 52 weeks of age, blood samples (5 mL; n = 5 per treatment) were withdrawn from the wing vein and collected in heparin (anticoagulant) tubes in the morning from the overnight-fasted roosters.
Blood plasma and seminal plasma were obtained by centrifugation of blood and semen at 1500× g for 20 min, and kept at −20 °C until analysis [33]. Blood plasma and seminal plasma metabolites, seminal and blood plasma total antioxidant capacity (TAC), and malondialdehyde (MDA) were determined using diagnostic kits following the manufacturer's recommendations (Diamond Diagnostics, 23 El-Montazah St., Heliopolis, Cairo, Egypt, http://www.diamonddiagnostics.com). Blood plasma creatinine was measured using kits supplied by N.S. BIOTEC (http://www.nsbiotec.com). Aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in the blood and seminal plasma (U/L) were determined using commercial kits produced by the Pasteur Lab (http://www.pasteurvetlab.com). Blood plasma alkaline phosphatase was measured according to the method of Yan et al. [35]. Seminal plasma α-, β-, and γ-globulin were determined using commercial ELISA according to the method of Bianchi et al. [36]. Blood hematological parameters, such as hemoglobin (Hgb; %), were determined according to the method of Tietz [37]. Red blood cells (RBCs) were counted on a bright-line hemocytometer using a light microscope at 400× magnification according to the methods of Helper [38] and Hawkeye and Dennett [39]. Packed cell volume (PCV; %) was measured according to the method of Wintrobe [40]. The mean cell volume (MCV), mean cell hemoglobin (MCH), and mean cell hemoglobin concentration (MCHC) were estimated as absolute values as reported by Attia et al. [34]. The phagocytic activity (PA) and phagocytic index (PI) were determined according to the method of Kawahara [41]. White blood cells (WBCs) were assessed according to the methods of Helper [38] and Dennett [39], using a light microscope at 100× magnification. Blood films were prepared according to the method of Lucky [42] to differentiate leucocytes. High-molecular-weight DNA was extracted from intestinal segments (jejunum and ileum) according to the method of Sambrook et al. [43], with some modifications according to Abdel-Fattah [44]. The DNA concentration was estimated from the optical density (O.D.) reading of a UV spectrophotometer at a 260 nm wavelength (1.0 O.D. = 50 µg DNA/mL of solution), according to the method of Charles [45]. Statistical Analysis Data were tested using the GLM procedure of SAS® (SAS Institute, Cary, NC, USA) [46], using one-way ANOVA according to the following model: y_ij = µ + τ_j + ε_ij, where µ is the general mean, τ_j is the effect of treatment, and ε_ij is the experimental error. The Student-Newman-Keuls test at p ≤ 0.05 was used for testing mean differences among the experimental groups. Prior to the analyses, all percentages were subjected to logarithmic transformation to normalize the data distribution. Results Overall, neither disease symptoms nor mortality occurred throughout the experimental period. The initial rooster body weights (BW) were not significantly different (p > 0.05), while BW changes and feed intake decreased significantly under CHS, as indicated in Table 2. Body weight changes were completely restored by the supplementations, except for Bet + VC, which produced only partial recovery. All Bet-supplemented groups produced a similar partial recovery in feed intake. Table 2 indicates that CHS without antioxidant addition significantly impaired the physical characteristics of the semen.
The ejaculate volume, sperm concentration, concentration/ejaculate, sperm motility %, sperm livability %, total live sperm/ejaculate, semen quality factor, and fertility of the CHS groups were significantly decreased (p < 0.001) by 23.2, 23.6, 40.8, 13.1, 9.3, 46.1, 46.0, and 17.9%, respectively, compared to the thermoneutral control group, whereas sperm mortality (%) and pH significantly increased (p < 0.001) by 62.8 and 3.7%, respectively. The antioxidants, either individually or combined, induced complete recovery of sperm concentration, livability, pH, and fertility. Moreover, supplementation with Bet + VC + VE restored ejaculate volume, concentration/ejaculate, sperm motility, total live sperm/ejaculate, and the semen quality factor to the level of the thermoneutral group, although the differences between this group and the Bet or Bet + VE groups were not significant. CHS roosters without antioxidants had significantly impaired total protein, globulin, AST, ALT, and TAC in seminal plasma compared to the thermoneutral group (Table 2). Antioxidants significantly (p < 0.001) restored total protein, globulin, AST, ALT, TAC, and MDA, leaving no significant difference from the thermoneutral group. Adding Bet + VC + VE increased γ-globulin compared to Bet alone. The antioxidant groups showed increased TAC in comparison to both the thermoneutral and CHS groups. The different treatments did not have any significant effects on plasma albumin, the albumin/globulin ratio, α- and β-globulin, or the AST/ALT ratio. Results in Table 3 show that the CHS group without antioxidants had significantly impaired RBCs, Hgb, PCV %, pH, PA, and PI compared to the thermoneutral group, the values decreasing by 17.8, 20.6, 19.0, 2.8, 2, and 28.9%, respectively. When antioxidants were added, a complete recovery of all hematological traits occurred except for PI, which was recovered only when the three additives were combined, and PA, which was partially and similarly recovered by the different antioxidant supplementations. There were no significant effects of the different treatments on MCV, MCH, or MCHC. White blood cell counts (−14.5%), lymphocytes (−9.4%), heterophils (+15.4%), and the H/L ratio as a welfare index (+22.9%) were negatively affected by CHS exposure without antioxidants compared to the thermoneutral group. The addition of Bet + VC, Bet + VE, and Bet + VC + VE completely reversed the negative effects of HS on the WBC parameters and the index of wellbeing (H/L). Monocyte, basophil, and eosinophil percentages were not significantly affected by the different treatments. Results in Table 4 show that CHS caused a significant impairment in plasma glucose, the protein profile (except plasma globulin), the lipid profile, the liver function index, the renal function index (except the plasma urea/creatinine ratio), TAC, and MDA. The addition of Bet, alone or in combination with vitamins, caused a similar complete recovery in plasma glucose, the A/G ratio, total lipids, cholesterol, AST, ALT, the AST/ALT ratio, urea, and TAC. Bet and Bet + VC + VE caused a complete recovery in plasma total protein, and the Bet + VC, Bet + VE, and Bet + VC + VE groups showed a complete recovery in plasma creatinine. Bet + VE and Bet + VC + VE completely recovered TAC, with a stronger effect than Bet alone; furthermore, the three agents combined had a stronger effect than Bet + VC.
Bet treatments caused a complete recovery in MDA, and the combined agents had a stronger effect than Bet alone. Antioxidant supplementation produced a partial recovery in plasma albumin with Bet and Bet + VC + VE, with a stronger recovery than in the Bet + VE group. In addition, plasma triglyceride was partially and similarly recovered by the different antioxidants. Plasma alkaline phosphatase was partially recovered by the antioxidants; however, Bet + VE and Bet + VC + VE produced stronger effects than Bet and Bet + VC, and the combination of the three agents had a stronger effect than Bet + VE. The combination of Bet + VC, Bet + VE, and Bet + VC + VE showed a synergistic effect on TAC and MDA, which surpassed the effect of Bet and of the thermoneutral group. Jejunum and ileum DNA concentrations were significantly affected, negatively by HS and positively by the antioxidants (Table 4). The CHS treatment without antioxidants significantly decreased jejunum and ileum DNA concentrations by 10.2% and 19.8%, respectively, compared to the thermoneutral group. The antioxidants significantly increased DNA concentrations in the jejunum and ileum compared to the CHS group, with Bet and Bet + VE surpassing the other antioxidant groups and resulting in complete recovery of ileum DNA only. Discussion Heat stress negatively affects both animal and human welfare. The use of Bet, with or without vitamin fortification, to relieve the adverse influence of HS is an essential tool in human and animal nutrition. In this study, we demonstrate that HS negatively affects fertility and semen quality, and that Bet supplementation alone showed promising relieving effects. The negative effects of CHS included an increase in the H/L ratio in roosters exposed to HS, suggesting their low welfare [1,6]. Similarly, the H/L ratio appears to be a reliable indicator of stress and welfare in poultry [47]; such stress has negative effects on immunity and disease resistance [18,48]. The reduction in TAC and elevation of MDA confirmed the adverse effects of CHS on antioxidant status [49,50]. In addition, blood biochemistry and immunity indices were negatively affected by HS in the present study. More specifically, we observed a decrease in seminal and blood plasma total protein, blood plasma albumin (a nonspecific immune protein), γ-globulin (innate immunity), PA and PI (nonspecific immunity), WBCs and lymphocytes (cell-mediated immunity), and DNA of the intestinal segments. In addition, RBCs, Hgb concentration, and PCV were adversely affected by CHS, indicating reduced animal welfare and health. This could be attributed to the decline in RBCs, which reduces oxygen uptake, resulting in less metabolic heat loss. In accordance with the present results, the studies described in References [51] and [52] show that HS decreases Hgb and PCV while increasing blood pH. This agrees with the low availability of essential nutrients for DNA synthesis, function, and repair [53,54] and the increased water intake [48]. Our study shows a decrease in the semen quality and fertility of roosters exposed to HS, and this was associated with the reduction in this group's feed intake. HS above 31 °C is known to depress rooster sperm motility, viability, and fertilization potential [6]. This could be attributed to the decrease in sperm motility and in the number of spermatozoa stored in the sperm host glands of hens [1].
In addition, HS can have negative effects on testosterone, causing hypertrophy and weakening of Leydig cell function [55]. Damaged sperm DNA can produce abnormal spermatozoa, which could cause low male fertility and, subsequently, fewer surviving embryos [4,56,57]. Even if such sperm can fertilize eggs normally, embryos that have received an injured paternal genome may die, causing poor performance [58]. The biochemical and immunological changes in blood and seminal plasma were reflected in lower semen quality and reproductive efficiency, compromising the breeding strategy under CHS. The positive effect of Bet on reproduction and semen quality was further emphasized by the elevated feed intake, up to 6.1% higher than in the CHS group. Similarly, Singh et al. [62] found that Bet at 1.3 and 2 g/kg diet significantly increased the feed intake of broiler chickens under thermal stress. In the literature, dietary Bet at 0.63 and 1.26% for boars increased sperm concentration by 6% and 13%, respectively, as well as total sperm output, in comparison to the control group [14]. The complete recovery in fertility was validated by the increased motility, concentration, pH, and livability of the sperm in the antioxidant-fortified groups, with sperm motility being the most important indicator [1,6]. Furthermore, Bet decreased AST leakage from sperm and hepatocytes, decreased plasma ALP, and improved renal function indices (urea and creatinine). The positive effect of Bet on seminal plasma protein (globulin and total protein) and on blood plasma albumin, globulin, and the albumin/globulin ratio indicates the boosting impact of Bet on liver function by increasing protein synthesis as a methyl group donor [10], in addition to decreasing AST and ALT [18]. Lipid metabolites (total lipids, triglycerides, cholesterol) were also improved by Bet fortification. Furthermore, MDA significantly decreased due to Bet supplementation, showing the antioxidant effects of Bet. In the literature, Bet decreases meat lipid, increases the yield of breast meat [63], and improves Hgb and PCV [18]. Bet also has a sparing effect on methionine and choline [10], and thus increases the choline available for the biosynthesis of very low-density lipoproteins, which prevent lipid deposition by increasing lipid removal from the liver [64], and regulates cholesterol in chickens [65]. It is interesting to report that Bet restored DNA in the ileum and caused a partial recovery in jejunum DNA [66]. Bet, as a methyl group donor, increases the methionine and/or cysteine available for glutathione synthesis, which protects the cell from ROS and reactive metabolites, and boosts the synthesis of DNA [17,67]. In addition, Bet is an osmolyte that stabilizes proteins, cell membranes, organelles, and cells under stress [13,15]. Furthermore, recent studies by Gholami et al. [68], Nosrati et al. [69], and dos Santos et al. [70] indicate that Bet improves the immunity, plasma biochemistry, and plasma osmolality of broiler chickens, suggesting multiple beneficial effects on many body functions. In general, Bet + VE did not exceed the effect of Bet alone, and even the combination of the three agents did not surpass the effect of Bet alone or Bet + VE. Body weight changes, ejaculate volume, concentration per ejaculate, total live sperm/ejaculate, and the semen quality factor of roosters supplemented with Bet + VC + VE surpassed only those of Bet + VC. Bet + VC + VE had an additive effect on seminal γ-globulin that surpassed the other supplements, showing a synergistic effect over the Bet group in the current study.
A similar effect was observed for the H/L ratio, where the combination of the three agents exceeded the effect of Bet alone. In addition, Bet + VE and Bet + VC + VE improved plasma ALP more than Bet alone or Bet + VC, showing a synergistic effect. TAC and MDA improved with the VC and VC + VE additions compared to Bet, demonstrating an additional impact over Bet alone. Bet alone or Bet + VE resulted in the greatest recovery of jejunum and ileum DNA, suggesting that Bet alone is adequate for reversing the negative impact of HS on jejunum and ileum DNA. The weaker effect of Bet + VC compared to Bet alone and the other combinations was further evidenced by the reduced jejunum and ileum DNA recovery in the VC-supplemented groups. This may adversely affect intestinal absorption capacity. The synergistic effect of VE supplementation over Bet increased lymphocytes (cell-mediated immunity), and decreased plasma albumin (a nonspecific immune protein) and plasma alkaline phosphatase (an index of liver function and bone mineralization), showing that a combination of Bet + VE might be beneficial for antioxidant status, and thus for the animals' immunity. The synergistic effect shown in blood biochemistry could be explained by the antioxidant influence of VE, as confirmed here by the increase in TAC and decrease in MDA [8]. Vitamin E is known to prevent the oxidation of vitamin A and lipids. Furthermore, by increasing oxygen levels, it supports muscle growth and increases oxygen assimilation by RBCs [7]. In addition, VE acts as an essential part of the antioxidant system, preventing the peroxidation of PUFA in the sperm membrane [71] and reducing the adverse influence of stress-induced corticosterone [7,22]. VE protects cells such as macrophages, plasma cells, and lymphocytes against oxidative damage and increases the proliferation and functions of immune cells, thereby boosting animal welfare. A dose of 250 mg/kg VE has been reported as optimal for partly alleviating the negative influences of HS [72]. In this study, VE at 150 mg/kg feed caused a significant reduction in MDA concentrations and improved sperm motility, confirming the protective influence of VE on semen quality. This result supports the use of VE for treating male fertility dysfunction [25,28]. In addition, ALP, AST, and ALT were significantly decreased in males receiving 150 IU VE compared to the control roosters (15 IU VE), suggesting that 150 IU VE may be valuable for semen quality [25]. A synergistic effect of Bet + VC + VE induced a decrease in blood plasma ALP, creatinine, and MDA, and an increase in TAC over Bet; however, the three agents combined did not surpass the influence of Bet + VE alone. The combination of the tested agents did not exceed the effect of Bet alone on DNA repair in the intestinal segments, with Bet causing the highest and Bet + VC the lowest recovery. Jacob [73] noted that VC enhances VE antioxidant activity by reducing tocopheroxyl radicals to the active form of VE or by sparing VE availability. In addition, Hoehler and Marquardt [30] indicated that the in vivo antioxidant influence of VE might be greater than that of VC. VE increased antibody titers, and Bet + VC significantly elevated the serum total protein and globulin of hens exposed to HS compared to controls [74], while decreasing serum glucose, triglyceride, HDL, and cholesterol [31]. However, Bet + VC had a similar impact on the production and metabolic profiles of laying hens exposed to CHS [15].
Conclusions Betaine at 1000 mg/kg feed for roosters had effects comparable to the combination of 1000 mg Bet with 200 mg VC and/or 150 mg VE per kg diet. Therefore, Bet at 1000 mg/kg feed may be an adequate agent for combating CHS, given the improvements in semen quality, fertility, physiological and antioxidant status, wellbeing, and intestinal DNA of breeder roosters, suggesting that Bet is a valuable tool for enhancing the breeding strategy of roosters in hot regions.
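To make the statistical workflow described in the Methods concrete, here is a minimal sketch of a one-way ANOVA with post-hoc mean separation on simulated data. Tukey's HSD is used below as a readily available stand-in for the paper's Student-Newman-Keuls test, and all data, group names, and trait values are illustrative assumptions.

```python
# Minimal sketch of the paper's analysis model y_ij = mu + tau_j + eps_ij:
# one-way ANOVA across treatment groups, then post-hoc mean separation.
# Tukey's HSD stands in for the Student-Newman-Keuls test; data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = ["Thermoneutral", "CHS", "CHS+Bet", "CHS+Bet+VC+VE"]
# Hypothetical sperm livability percentages, nine replicates per group
data = pd.DataFrame({
    "group": np.repeat(groups, 9),
    "livability": np.concatenate([
        rng.normal(90, 3, 9), rng.normal(80, 3, 9),
        rng.normal(89, 3, 9), rng.normal(90, 3, 9)])})

# Log-transform the percentages, as the paper does before analysis
data["y"] = np.log(data["livability"])

samples = [g["y"].values for _, g in data.groupby("group")]
F, p = f_oneway(*samples)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")
print(pairwise_tukeyhsd(data["y"], data["group"], alpha=0.05))
```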
Data Collection Mode Effect on Abortion Questions: A Comparison of Face-To-Face and Web Surveys Public opinion on abortion has been changing over time (Shaw, 2003). Attitudes toward abortion are very complex around the world, and in the U.S. they are largely polarized. The debate over pro-choice versus pro-life has always been a central topic for discussion (Medoff, 2013), and public opinion has become even more divided on this. For example, Shaw (2003) showed that in 1992, 35% of the public were pro-life and 59% were pro-choice. In 2003, the difference became much smaller (45% pro-life versus 48% pro-choice). The acceptance of abortion varies depending on several factors, such as pre-adulthood factors (Pacheco & Kreitzer, 2016), demographics (Woodhams, Hill, Fabiyi, & Gilliam, 2016), media consumption (Altshuler, Gerns Storey, & Prager, 2015), and occupation (Begun, Kattari, McKay, Winter, & O'Neill, 2017; Sjöström, Essén, Sydén, Gemzell-Danielsson, & Klingberg-Allvin, 2014). Along with the change in public opinion on abortion, in the U.S. the estimated abortion rate per 1000 women aged 15 to 44 years also declined from 19.4 to 14.6 between 2008 and 2014, a 25% change (Jones & Jerman, 2017). However, the rate of decline varied depending on the demographics of women, such as age, race and ethnicity, and income. Introduction It is important to note that both public opinion research on abortion and population estimates of abortion are often based on survey data. Such studies can carry a profound policy-making impact when policies are informed by the findings of public opinion survey research. Given that, research on the measurement of the relevant questions will benefit the understanding of public opinion as it relates to abortion. More specifically, a solid understanding of mode effects on abortion attitudes has implications for how researchers, practitioners, and policy makers measure public opinion on this important topic. To date, survey researchers have spent limited effort examining the survey methodology aspects of abortion-related attitudinal questions. An early study on measuring abortion in surveys of U.S.
women analyzed three major surveys (the National Survey of Family Growth, the National Surveys of Young Women, and the National Longitudinal Surveys of Work Experience of Youth) and found that self-reported abortion was highly deficient, especially among nonwhite women (Jones & Forrest, 1992). In a survey conducted in Mexico, Lara, Strickler, Olavarrieta, & Ellertson (2004) tested four data collection modes, namely face-to-face interviewing, audio computer-assisted self-interviewing, a paper-based self-administered questionnaire, and a random-response technique. They found that the random-response technique yielded the highest self-reported abortion attempts. A meta-analysis also showed that the random-response technique increased self-reports to sensitive questions, including abortion, compared to other data collection modes (Lensvelt-Mulders, Hox, Van der Heijden, & Maas, 2005). A recent study by Singer and Couper (2014) compared using "baby" versus "fetus" when asking about attitudes toward abortion and found no significant difference in abortion preferences, although preferences for prenatal testing for genetic defects differed by question wording (Singer & Couper, 2014). The choice of data collection mode is a crucial factor when it comes to measuring attitudes and opinions toward abortion. The literature shows that survey responses vary depending on the survey mode, especially for sensitive questions like the ones examined in this study (Kreuter, Presser, & Tourangeau, 2008). There is some literature on mode effects between face-to-face and Web surveys, and some consistent findings can be drawn from previous studies. First, response rates tend to be higher for face-to-face than for Web surveys. This is possibly due to the different levels of interviewer contact between the two modes (Christensen, Ekholm, Glümer, & Juel, 2014; Heerwegh & Loosveldt, 2008; Manfreda et al., 2008; Revilla & Saris, 2012). Interviewers in face-to-face surveys recruit respondents, introduce the survey, address their concerns, and persuade them to participate, while in Web surveys respondents are usually recruited by email or mail and initiate the survey themselves. Very limited direct interpersonal interaction between respondents and the survey organization exists in a Web survey. Second, face-to-face surveys suffer from a higher level of social desirability bias than Web surveys, especially when asking sensitive questions (Heerwegh, 2009). Social desirability bias refers to the phenomenon of overreporting socially desirable attitudes and behaviors while underreporting socially undesirable ones (Callegaro, 2008). Social desirability bias is most prevalent when respondents answer questions asking for sensitive information or questions with a potentially socially desirable response (Christensen et al., 2014; Duffy, Smith, Terhanian, & Bremer, 2005). Face-to-face respondents are more likely to provide a socially desirable answer to present themselves in a favorable light or to avoid tension and negative judgments from the interviewer or from other people present during the survey. The relatively higher level of anonymity and confidentiality in a self-administered Web survey can increase the disclosure of undesirable responses. Third, findings on data quality for these two modes are mixed. While Web surveys tend to show a higher level of item nonresponse in general and more non-differentiation on rating scales, face-to-face surveys are susceptible to more extreme response bias (Beukenhorst et al., 2014; Goldenbeld & de Craen,
2013; Heerwegh, 2009; Heerwegh & Loosveldt, 2008). Given the mixed findings, more comparative research is needed to examine the mode effects on measurement and data quality between face-to-face and Web surveys. The current study intends to expand the existing literature on the measurement of abortion questions by examining the data collection mode effect on attitudes toward abortion. In particular, this study examines the responses to eight attitudinal questions on abortion from two nationally representative surveys, one conducted face-to-face and one through the Web. More specifically, this study compares the substantive responses and item nonresponse rates for the abortion questions between these two modes of data collection. The eight questions asked for the respondent's opinion about abortion in eight scenarios: a nonfatal health risk to the pregnant woman, a fatal health risk to the pregnant woman, incest, rape, a birth defect of the fetus, financial hardship, the child not being the sex the woman wants, and the woman's own choice (see Appendix for exact wordings). Why should we expect a mode effect to exist for abortion questions? On the one hand, attitudinal questions on abortion are typically seen as sensitive, and the differential levels of social desirability bias between face-to-face and Web surveys are likely to result in different patterns of response. Specifically, there are two possibilities. First, respondents hide their real attitudes and provide more acceptable albeit untruthful responses (Christensen et al., 2014; Duffy et al., 2005; Liu & Wang, 2015). In this case, we should expect to observe a different response pattern for the abortion questions. Second, respondents can also withhold their opinion completely by not offering a substantive response (Christensen et al., 2014). This will result in item nonresponse, including "don't know" and "refusal" responses. The interviewer involvement in the face-to-face survey is likely to increase social desirability bias, and hence face-to-face respondents are more likely to provide socially acceptable responses or not provide answers at all. On the other hand, the higher motivation resulting from interviewer involvement in face-to-face interviews is likely to produce more thoughtful and conscientious responses compared to Web surveys (Beukenhorst et al., 2014; Goldenbeld & de Craen, 2013; Heerwegh, 2009; Heerwegh & Loosveldt, 2008). Therefore, face-to-face respondents should provide fewer ambiguous answers (i.e., the middle option) and fewer non-substantive answers (i.e., item nonresponse). This study also examines whether the survey mode effect differs by gender. Research has shown gender differences in attitudes toward abortion (Finlay, 1981; Lohan, Cruise, O'Halloran, Alderdice, & Hyde, 2011; Schwandt et al., 2013). Women hold more liberal attitudes and a higher approval of women's autonomy in abortion decisions than men (Patel & Johns, 2009). It is possible that when a topic is more relevant to the respondents, as abortion is to women, respondents are likely to possess a well-formed attitude and be less susceptible to the impact of the survey mode. By contrast, men's attitudes toward abortion may not be as solid, and hence they are more likely to edit their responses based on the sense of privacy, anonymity, and confidentiality of the survey mode.
Study population and data collection This study examines the data collection mode effect using the 2012 American National Election Studies (ANES). The 2012 ANES is a nationally representative survey examining the general population's electoral participation, voting behavior, and public opinion. The target population is U.S. citizens aged 18 or older as of the 2012 Election Day. The 2012 ANES includes two waves of data collection, namely a pre-election study and a post-election study, and the same respondents were interviewed twice. The field period for the pre-election study was between September and November 2012, and for the post-election study between November 2012 and January 2013. The 2012 ANES innovatively conducted two parallel surveys, one through a face-to-face mode and one through the Web, using two independent national samples and one identical questionnaire. The Web survey was conducted using the GfK Knowledge Panel, a nationally representative online panel. The panelists were recruited through address-based sampling and random-digit dialing. All household members were enumerated at the recruiting stage, and demographic information was collected before any survey. Respondents for the 2012 ANES were selected from this probability-based GfK Knowledge Panel. The face-to-face survey used an address-based, stratified, multi-stage cluster sample. The first stage of sampling consisted of stratifying the 48 contiguous states and the District of Columbia into nine regions corresponding to Census Divisions, which constitute this study's strata. Within each of the nine regions, census tracts were then randomly selected in proportion to the region's share of the U.S. adult population. In the second stage, residential addresses within each tract were randomly selected. In the third stage, one eligible person per household was randomly selected. The sample included a main sample and two oversamples, for African Americans and Hispanic Americans, respectively. Since these two probability samples both target the same U.S. general population, they should have comparable coverage. Measures As mentioned, eight abortion questions were asked about a third of the way into the post-election survey (see Appendix A). The order of the first seven questions was randomized, and the last question on the woman's choice was always asked last. Three response options, namely favor, oppose, or neither favor nor oppose, were provided. Face-to-face respondents could answer "don't know" or refuse to answer any question, which was coded as item nonresponse; Web respondents could skip a question, which was coded as item nonresponse. In the analysis, I calculated the percentage of item nonresponse for each of the eight questions and compared them across the two modes. Analytical approach The analyses contain two parts. First, the percentages of item nonresponse are compared
between face-to-face and Web surveys using a chi-square test. Second, the response distribution of each question is compared between the two modes using a chi-square test. Both analyses are performed for the whole sample and for males and females separately. Considering the quasi-experimental nature of the survey data, I analyzed the data using a propensity score weighting technique. Specifically, I first fitted a propensity model predicting participation in the face-to-face versus the Web survey, using variables that were potentially correlated with the response propensity for both data collection modes. The variables used in the propensity model included the respondent's gender, age, marital status, education level, employment status, social class, race and ethnicity, number of children, home internet access, household income, home ownership, and years lived at the current address. From the predicted probability of participating in the face-to-face versus the Web survey, I created a propensity weight for each respondent. Last, I calculated the weighted distributions of both the substantive responses and item nonresponse, and performed the weighted statistical tests accordingly. All results shown in this paper are adjusted with the propensity weight. All analyses were conducted in R. Demographic distributions The unweighted demographic distributions differ significantly between face-to-face and Web (Table 1). Once weights are applied, the demographic differences between modes are no longer significant. For both surveys, the weighted analyses show that over 70% of the respondents are non-Hispanic white, over 53% are married, and about half of the respondents' household incomes fall below $50,000. Item nonresponse As a first step in assessing data quality for these two modes, I calculated the percentage of item nonresponse for each of the eight questions and compared them across the two modes. Across both modes and all questions, the item nonresponse rate is between 8.2% and 10.6%. For six of the eight questions, the Web survey has slightly more missing data than the face-to-face survey, although none of the differences is statistically significant. Similarly, when comparing the item nonresponse rates between the two data collection modes for males and females separately, there is no statistically significant difference for any of the questions. This suggests a lack of mode effect on item nonresponse for abortion-related questions. Respondents are neither more nor less likely to provide a non-substantive response, including "don't know" and "refuse," to these questions in either mode. Attitudes toward abortion I next present the results on the mode effect for substantive responses to the eight questions in Table 3. For each abortion question, I compared the percentages of favor, oppose, and neither favor nor oppose under each mode. The distribution comparisons of the substantive responses between the two modes reveal a fairly consistent pattern. In particular, the chi-square tests for these eight questions across the two modes indicate that significant mode effects exist for all questions at the p < .0001 level.
For the "oppose" response option, seven out of eight questions show a higher percentage for this negative category under face-to-face interviews than under Web interviews, whereas the remaining question (rape) shows no difference between face-to-face and Web surveys. A close look at the distribution comparison shows that opposition is high and quite disparate for the wrong child gender (face-to-face 86% oppose vs. Web 74% oppose), financial hardship (face-to-face 61% oppose vs. Web 50% oppose), and nonfatal health risk (face-to-face 40% oppose vs. Web 26% oppose) scenarios. The percentages opposing are middling for the woman's choice (face-to-face 42% oppose vs. Web 37% oppose), incest (face-to-face 32% oppose vs. Web 23% oppose), and birth defect (face-to-face 29% oppose vs. Web 22% oppose) scenarios, although the differences between face-to-face and Web surveys are smaller than for the first three questions. The fatal health risk and rape scenarios both receive low opposition, and the differences between the two survey modes are relatively small and negligible. For the "neither favor nor oppose" response, which is also regarded as the neutral option, all questions show a higher percentage for Web than for face-to-face surveys. The questions about the birth defect (Web 29% vs. face-to-face 15%), incest (Web 28% vs. face-to-face 14%), nonfatal health risk (Web 28% vs. face-to-face 17%), and financial hardship (Web 26% vs. face-to-face 12%) scenarios received substantially more "neither favor nor oppose" answers from Web respondents than from face-to-face respondents. The higher percentages at the two end points of the rating scale in the face-to-face survey indicate that face-to-face respondents tend to express more divided attitudes toward abortion than Web respondents. Web respondents, in contrast, provided answers that are more neutral or ambiguous, suggesting that Web surveys tend to elicit less clear-cut opinions toward abortion than face-to-face surveys do. I also conducted the analyses separately by respondent gender (Tables 2 and 3). For the item nonresponse analysis, similar to the whole-sample analysis, the differences between the face-to-face and Web surveys for male and for female respondents are small and not statistically significant. For substantive responses, the patterns of response differences between face-to-face and Web for both genders are similar to those for the combined whole sample. All differences are statistically significant. Both male and female face-to-face respondents are more in favor of abortion for six out of eight scenarios (the nonfatal health risk and wrong child gender scenarios show little to no difference between modes), while more Web respondents select the middle option. The patterns in the level of favoring/opposing across all items for both genders are also similar to those for the combined whole sample. Discussion This study set out to examine abortion attitude differences between face-to-face and Web surveys through two independent national samples. The questions under study are sensitive, which can lead to social desirability bias. When facing abortion-related attitudinal questions, if the respondent feels that his or her real underlying attitude is not in accordance with the social norm, one of the choices is to withhold his or her opinion, which results in item nonresponse.
Considering the relatively higher level of anonymity and confidentiality in a self-administered Web survey, I expected lower item nonresponse in the Web survey than in the face-to-face survey. In other words, I expected the Web survey respondents to be more forthcoming when responding to sensitive questions. However, the results showed that the item nonresponse rates of the two modes were similar, with no statistically significant difference. This suggests that respondents in the face-to-face mode were no more likely to withhold their opinions in the face of the abortion-related questions than Web survey respondents. The responses to the eight abortion questions in this survey show significant mode effects. Web respondents are more likely than face-to-face respondents to choose the neither favor nor oppose option for all of the questions. On the other hand, face-to-face respondents are more likely than Web respondents to choose favor or oppose. This is in line with a previous study, which reports that face-to-face respondents select more extreme answers from ordinal rating scales than Web respondents (Goldenbeld & de Craen, 2013). These findings suggest that mode effects exist in people's responses to abortion-related attitudinal questions, and that estimates of abortion attitudes drawn from face-to-face interviews and Web interviews are not entirely comparable. One possible explanation for the mode effect is the respondent's motivation. The presence of an interviewer in a face-to-face survey is likely to enhance the motivation of the respondents; consequently, respondents are likely to take the survey more seriously and hence give more informative answers (such as whether they favor or oppose abortion) in comparison with Web respondents (Christensen et al., 2014; Heerwegh, 2009). The middle options are more difficult to interpret in comparison with the other two options, which clearly show where the respondents stand on the topic. More Web respondents endorse the middle options, possibly due to a lack of motivation in the self-administered interview. Social desirability bias may also contribute to the response difference. However, both directions of response could be seen as socially desirable, depending on what the respondents think the interviewer or society in general perceives as the norm. Another possibility is the narrow nature of the response options, which may have forced respondents who feel mildly in favor of or opposed to a statement to choose the middle option, since otherwise they would risk seeming hard-line. The specific question topic may also contribute to the response differences between face-to-face and Web respondents. For the scenarios where abortion is more acceptable, such as a fatal health risk, the oppose rate is low overall and no difference exists between the surveys. For the less socially acceptable scenarios, such as the wrong fetus gender, most respondents disapprove of abortion, and relatively large differences exist between the surveys. One may also argue that Web panel respondents have more survey experience than cross-sectional face-to-face respondents, and that this may contribute to the difference. However, the literature shows that survey experience has little impact on survey responses (Toepoel, Das, & Soest, 2008).
Although the results show significant differences for many items, the absolute differences are not large for many of them. In addition, when ranking the scenarios from the most favored to the least favored, the patterns for the face-to-face and Web surveys are almost identical. In many situations, such as policy making, the general degree of favoring/opposing, rather than the exact number, is of the most interest. In that case, there is very little mode difference in the impression gained regarding public attitudes toward the comparative acceptability of the various reasons for abortion. Future work is definitely needed to further examine the mode effect on abortion questions using other data sources. When asking abortion-related attitudinal questions, I encourage future researchers to explore other modes. Since both interviewer motivation and privacy are potential factors contributing to measurement bias, one should consider a combination of modes that can maximize the effectiveness of both. For example, computer-assisted self-interviews could be a worthwhile research effort. Similarly, a leave-behind self-administered questionnaire for sensitive questions after a face-to-face survey is also a potential approach. One major limitation of the 2012 ANES is the quasi-experimental nature of the survey and the low response rates for both modes of data collection, for the Web survey in particular. The low response rate is potentially associated with higher nonresponse bias. Selection bias is another possible source of error for the observed mode difference, given the response rate difference between the two surveys (Vannieuwenhuyze & Revilla, 2013). The data at hand do not allow us to tease apart the mode difference from the selection bias. Therefore, the real mode difference may be smaller or even nonexistent. Future surveys should consider a strict randomized experiment for studying face-to-face vs. Web differences, and use techniques to improve the response rates and make them comparable between the modes under comparison. Another limitation is the limited scale type analyzed in the study. It is entirely possible that the mode effect interacts with rating scale characteristics, and variation in scales is necessary to draw more general conclusions about mode effects on abortion questions. As previous research shows, the way people respond to answer scales can differ by data collection mode (Liu, Conrad, & Lee, 2016; Weijters, Schillewaert, & Geuens, 2008). Therefore, future research should test a variety of scales on the same topic between modes to see if the results reported here still hold. Last, surveys are just one methodology for collecting public opinion on abortion. In fact, there is a long-running debate between quantitative and qualitative research methods on issues like the one studied here (for examples, see Jayaratne & Stewart, 1991; Lawson, 1995; Westmarland, 2001). Whether different survey modes will draw similar or different conclusions from qualitative studies is unknown but worth exploring.
Regardless of the limitations, this is the first study to report potential abortion attitude differences between national face-to-face and Web surveys. Item nonresponse differed only slightly, and not significantly, between face-to-face and Web respondents. Also, larger response differences between modes are observed for scenarios that are less socially acceptable, while smaller differences exist for scenarios that are more acceptable. The relative degrees of favoring/opposing the various scenarios are quite similar between the two survey modes. For one thing, the differences between these two modes urge that caution be taken when directly comparing results collected through these two different modes. For another, the mode differences do not impose a serious threat on measuring the general population's opinions toward abortion. APPENDIX Question wordings used in the analysis. Do you favor, oppose, or neither favor nor oppose abortion being legal if staying pregnant would hurt the woman's health but is very unlikely to cause her to die? Do you favor, oppose, or neither favor nor oppose abortion being legal if staying pregnant could cause the woman to die? Do you favor, oppose, or neither favor nor oppose abortion being legal if the pregnancy was caused by the woman having sex with a blood relative? Do you favor, oppose, or neither favor nor oppose abortion being legal if the pregnancy was caused by the woman being raped? Do you favor, oppose, or neither favor nor oppose abortion being legal if the fetus will be born with a serious birth defect? Do you favor, oppose, or neither favor nor oppose abortion being legal if having the child would be extremely difficult for the woman financially? Do you favor, oppose, or neither favor nor oppose abortion being legal if the child will not be the sex the woman wants it to be? Do you favor, oppose, or neither favor nor oppose abortion being legal if the woman chooses to have one? Table 2. Item nonresponse to abortion questions by mode of data collection, 2012 American National Election Studies (weighted results). Table 3. Attitudes toward abortion by mode of data collection, 2012 American National Election Studies (weighted results).
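The propensity weighting step described in the Methods can be sketched as follows. This is a hypothetical reconstruction in Python (the original analysis was done in R), with made-up variable names, simulated data, and a common inverse-probability weight construction; the paper does not spell out the exact weight formula.

```python
# Hypothetical sketch of the propensity-weighting step described in the
# Methods (the original analysis was done in R). Variable names, data, and
# the inverse-probability weight construction are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "mode": rng.integers(0, 2, n),     # 1 = face-to-face, 0 = Web
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "educ": rng.integers(1, 5, n),     # education category
    "income": rng.integers(1, 10, n),  # income bracket
})

X = df[["age", "female", "educ", "income"]]
model = LogisticRegression(max_iter=1000).fit(X, df["mode"])
p_f2f = model.predict_proba(X)[:, 1]   # P(face-to-face | covariates)

# Inverse-probability-of-mode weights (one common construction)
df["w"] = np.where(df["mode"] == 1, 1.0 / p_f2f, 1.0 / (1.0 - p_f2f))

# Weighted response distribution for one (hypothetical) abortion item
df["response"] = rng.choice(["favor", "oppose", "neither"], n)
weighted = df.groupby(["mode", "response"])["w"].sum()
props = weighted / weighted.groupby(level="mode").transform("sum")
print(props)  # weighted proportions within each mode
```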
A Computational Multiscale Model for Contact Line Dynamics The conventional no-slip boundary condition leads to a non-integrable stress singularity at a moving contact line. This makes numerical simulations challenging, especially when capillary effects are essential for the dynamics of the flow. This paper presents a new boundary methodology, suitable for numerical simulation of the flow of two immiscible and incompressible fluids in the presence of moving contact points. The methodology is based on combining a relation between the apparent contact angle and the contact point velocity with the similarity solution for Stokes flow at a planar interface. The relation between angle and velocity can be determined by theoretical arguments, or from simulations using a more detailed model. The approach here uses the phase field model in a micro domain, with physically relevant parameters for molecular diffusion and interface thickness. The methodology is used to formulate a new boundary condition for the velocity. Numerical results illustrate its usefulness. INTRODUCTION Flow problems involving two immiscible incompressible fluids that are in contact with a solid form so-called moving contact line problems. The contact line is located where the interface between the two fluids intersects the solid wall. Moving contact line problems form an important class of two-phase flows and appear both in nature and in industrial applications (1). Examples include a droplet spreading on a solid surface and liquid rising in a narrow tube. Industrial applications where the contact line behaviour is important are coating processes, lubrication, inkjet printing, biological flows, and microfluidics, including micropumps and so-called lab-on-a-chip devices (2,3,4,5,6,7,8). The moving contact line problem has been a subject of debate for many years. The physics governing the dynamics of moving contact lines is still not completely understood (4,9,1). The conventional no-slip boundary condition leads to a non-integrable stress singularity at the contact line (9,10). In fact, molecular dynamics simulations show that there is some sort of fluid slip along the wall in the microscopic region close to the contact line (11,12). Even if one is interested in the macroscopic behaviour, the dynamics at the contact line is intertwined with the flow at the larger scale in a way that is not possible to model using standard two-phase models (1). The small-scale dynamics of the contact line represents a significant numerical difficulty, as it is several orders of magnitude smaller than global flow features in many important applications (13). The physics at different length scales is illustrated by the successive close-ups near the contact line in Figure 1. In the macroscopic region, with typical length scales > 10⁻⁷ m, the flow and the interface shape may be influenced by several different physical phenomena such as capillarity, gravity, etc. When capillarity is important, the large-scale flow is mainly governed by either inertia and surface tension, or by viscosity and surface tension. Flow situations where inertia and surface tension dominate are characterized by low Ohnesorge numbers, Oh = μ/√(ρσL) = √We/Re = √(Ca/Re), where μ is the fluid viscosity, σ is the surface tension, ρ is the fluid density, and L is a characteristic length scale (14). The length scale could for example be the physical width of a channel, or the cross section of a droplet or a cavity.
Further, the Weber number, We = ρU²L/σ, relates inertia to surface tension, the Reynolds number, Re = ρUL/μ, relates inertia to viscous forces and the capillary number, Ca = μU/σ, relates viscous forces to surface tension. At smaller scales, illustrated by the intermediate region in Figure 1, the flow is governed by a viscocapillary balance (15,16). This balance is characterized by the capillary number Ca = μU/σ, where U is a characteristic velocity. As mentioned above, the capillary number represents the relative effect of viscous forces versus surface tension acting across an interface. At a viscocapillary balance Ca ∼ 1. At this scale the interface shape is influenced by viscous effects and the interface may be strongly curved close to the contact line (17). Extrapolating the outer solution toward the contact line leads to an apparent macroscopic contact angle θ_app, see Figure 1. At the molecular length scale of < 10⁻⁹ m, the blue region in Figure 1, the conventional hydrodynamics breaks down and other models are necessary (15,16). More details on the physics of dynamic contact lines are given for example in the review papers (18,15,1). One of the best documented methods to overcome the stress singularity at the moving contact line is to replace the no-slip condition with a Navier slip condition and introduce a related slip length parameter (19,15,20). However, unrealistically large slip length values are necessary for most of these simulations due to grid refinement limitations (20). Additionally, when capillarity is important for the large-scale flow, contact line dynamics also needs to be accounted for. The slip condition then needs to be combined with a prescribed dynamic contact angle or velocity (1,19). The simplest approach is to impose a constant angle corresponding to the static angle (21,22,23,24,25). For cases when the large-scale flow is governed mainly by inertia and surface tension (i.e. low Ohnesorge numbers), the dynamic angle is close to the static angle (14) and static models can perform rather well (20). On the other hand, when the large-scale motion is mainly governed by viscosity and surface tension (i.e. high Ohnesorge number in capillary driven flows), the evolution is directly dependent on the contact line velocity. For such flows the dynamic or apparent angle is typically different from the static angle (1,14), and the contact line velocity is influenced by small-scale features near the contact line. To enhance computational efficiency in large-scale simulations some sort of sub-grid modelling can be very beneficial to take such effects into account. A common approach is to prescribe an apparent contact angle according to some empirical law (18) or hydrodynamic theories (26,27,28,29). In (19,30,5,31) for example the Cox theory (26) is used to relate the apparent contact angle θ_app to the microscopic angle θ_m and the contact line speed by g(θ_app) = g(θ_m) + Ca ln(L/λ). Here λ is the slip length and L is the macroscopic cut-off length scale imposed by the grid resolution where the angle is measured (20). The explicit expressions for g(θ) are given in for example (19). The Cox relation is either directly applied (19,30) or used with an adjustable parameter that needs to be empirically determined from experiments (5,31). Further, the Cox theory is based on the special case of lubrication theory, and the appropriate dynamic contact angle will depend on the scale on which matching between outer and inner scales occurs (18).
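The full Cox relation involves the function g(θ; μ₂/μ₁), whose explicit form is given in (19). For orientation only, the widely used Cox-Voinov small-slope limit for a single viscous fluid collapses it to θ_app³ = θ_m³ + 9 Ca ln(L/λ). The sketch below implements this simplified limit, not the general relation used in the cited works:

```python
import numpy as np

def apparent_angle_cox_voinov(theta_m_deg, Ca, L_over_lambda):
    """Cox-Voinov small-slope limit of the Cox relation (one viscous fluid):
    theta_app^3 = theta_m^3 + 9*Ca*ln(L/lambda), with angles in radians
    internally. Valid only for modest angles and an advancing contact line."""
    theta_m = np.deg2rad(theta_m_deg)
    theta_app = np.cbrt(theta_m**3 + 9.0 * Ca * np.log(L_over_lambda))
    return np.rad2deg(theta_app)

# Microscopic angle 40 degrees, slip length four decades below the cut-off.
for Ca in (0.001, 0.005, 0.02):
    print(f"Ca={Ca}: apparent angle = {apparent_angle_cox_voinov(40.0, Ca, 1e4):.1f} deg")
```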
A different approach is found in the phase field method, where a Cahn-Hilliard equation describes the dynamics of the fluid interface, and molecular processes at the interface between fluids and at the contact line are modeled by diffusion (32). In this model, contact line dynamics is handled by a boundary condition relating the surface energy to the contact angle via Young's relation (in equilibrium, cos θ_s = (σ_s2 − σ_s1)/σ, where σ_s1 and σ_s2 are the solid surface energies against the two fluids and σ is the fluid-fluid surface tension). With this approach there is no stress singularity when the no-slip boundary condition is used for velocity. However, for accuracy the diffusion processes need to be modeled at physically relevant length scales, which means tens to hundreds of nanometers (33). This becomes computationally very demanding, and therefore unphysically large diffusion parameters are often used. Another option is to use a multiscale approach where different physical descriptions at different length scales are coupled using for example the heterogeneous multiscale method (34,35,36). The micro model is usually based on molecular dynamics (12,37,38,39). However, these multiscale models have only been applied to two-phase systems in Couette or Poiseuille flows where the densities and viscosities are assumed to be the same in the two fluids. Different contact line models are reviewed in for example (1,16,20). In this paper we will describe a multiscale method for simulating contact point dynamics where a conventional macroscale solver is coupled to the local phase field solver in (13). There, phase field computations determine a quasi-steady state in a contact point region for a particular apparent contact angle. The relation between the angle and the corresponding contact point velocity characterizes the local dynamics in the contact point region, as discussed in (15). The phase field simulations in (13) are carried out using physically correct diffusion coefficients, and in a domain of corresponding size. The length scale of this domain will here be referred to as the microscopic length scale, and it covers the molecular scale and parts of the intermediate length scale mentioned earlier. The microscopic length scale is illustrated by the green region in the schematic illustration in Figure 1. As discussed above, the phase field model must resolve features at the microscopic scale to accurately model the contact point dynamics. A direct coupling between a local domain, modeled by the phase field method, and a macrodomain modeled by the standard Navier-Stokes system would require the macro resolution to match the micro resolution at the interface between the two domains. Due to the scale separation such a matching would be computationally challenging. We take a different approach, where we use the quasi-steady state description of the local dynamics in the contact point region. A particular apparent contact angle and a particular contact point velocity characterize the dynamics at each moment in time. A main ingredient is the Huh and Scriven similarity solution (9), which describes the flow around a plane interface at zero Reynolds number, with the interface meeting a solid wall at a well-defined contact angle. In many settings the similarity solution is a good approximation of the flow at the intermediate length scale, see (15). We will use the similarity solution to bridge the gap between the scales of the micro region and those of the global features of the flow.
Based on the Huh and Scriven similarity solution we formulate boundary conditions for the Navier-Stokes system at an artificial boundary, which is placed at a distance from the contact point such that the features of the flow can be resolved at the global scale. The aim of the paper is to describe the idea of our approach, and to show a possible implementation. We focus on flow situations where the dynamic contact angle differs from the static angle (and methods imposing a static angle are not appropriate). For capillary driven flows, these situations are characterized by high Ohnesorge numbers. Our starting point is a Navier-Stokes solver for two-phase flow coupled to the conservative level set method described in (40,41). The level set function enables computing geometric quantities of the interface, such as normal direction and curvature. The algorithm in (40,41) is developed for problems without contact lines. We emphasize that in the presence of contact lines, some parts of the methodology may not be as accurate as before. For example, mass conservation is no longer guaranteed. However, high accuracy for evaluation of geometric quantities and conservation is not in the scope of this work. The paper is organized as follows. In Section 2 we start by presenting the macroscopic two-phase model used here. Then we proceed by describing the new multiscale model for dynamic contact points in Section 3 and implementational details in Section 4. Section 5 presents numerical simulation results for two test problems and finally a summary and conclusions are given in Section 6.

TWO-PHASE FLOW MODEL

In this section we describe the macroscopic two-phase model used to perform the numerical simulations in Section 5. We also give a brief description of the discretizations of the equations from the two-phase model.

Navier-Stokes equations

The motion of two immiscible fluids is given by the incompressible Navier-Stokes equations for velocity u and pressure p in non-dimensional form,

ρ(∂u/∂t + (u·∇)u) = −∇p + (1/Re)∇·(2μ∇_s u) + F,   ∇·u = 0.

Here, F is the surface tension force at the fluid interface and Re denotes the Reynolds number, which controls the magnitude of viscous stresses relative to advection. Further, ∇_s u = ½(∇u + (∇u)ᵀ) denotes the rate of deformation tensor and the parameters ρ and μ denote the density and viscosity measured relative to the parameters of fluid 1.

The conservative level set method

The level set method is used to keep track of the fluid-fluid interface and the moving contact point. The level set function φ(x, t) is used as an indicator function, where the fluid-fluid interface Γ is given by the zero level set of φ. The subdomain Ω₁ occupied by fluid 1 is given by φ > 0 and the subdomain Ω₂ occupied by fluid 2 is given by φ < 0. The interface is here captured by the conservative level set method developed in (40,41). The indicator function is a smoothed color function; the function smoothly switches value from +1 to −1 in a transition region around the interface. At the initial time, φ is computed from a signed distance function d(x) around the interface by φ(x, 0) = tanh(d(x)/(2ε)), where ε is a parameter that controls the thickness of the transition region. The level set function is advected in time by the underlying fluid velocity according to the Hamilton-Jacobi equation ∂φ/∂t + u·∇φ = 0. After advecting the fluid interface, the surface tension force F is calculated using the approximation of a smeared surface tension according to (42), F = (1/We) κ ∇ℍ(d(x)), where κ is the interface curvature and We is the Weber number. The signed distance function d(x) is reconstructed from φ. The function ℍ denotes a one-dimensional smoothed Heaviside function that changes value from 0 to 1 over a length scale proportional to ε.
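As a consistency check of the level set profile (in the reconstructed forms written above), the following 1D sketch verifies that the tanh profile satisfies the balance ε ∂φ/∂x = ½(1 − φ²) pointwise, which is the steady state that the reinitialization equation given below drives the solution toward:

```python
import numpy as np

eps = 0.05                                  # transition-region thickness
x = np.linspace(-1.0, 1.0, 2001)            # signed distance, interface at x = 0
phi = np.tanh(x / (2.0 * eps))              # conservative level set profile

compressive = 0.5 * (1.0 - phi**2)          # compressive flux, sharpens the profile
diffusive = eps * np.gradient(phi, x)       # normal diffusion, smears the profile
print("max flux imbalance:", np.max(np.abs(compressive - diffusive)))
# ~1e-4 here, limited by the finite-difference gradient, not by the profile.
```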
The interface curvature can be computed using the level set function by κ = ∇·n, where n = ∇φ/|∇φ|. More details about the surface tension term are given in (42) and (43). Over time the level set function will lose its shape, and thus its relation to the signed distance function, due to discretization errors and non-uniform velocity fields. To retain the shape of the level set function, φ has to be reinitialized at regular intervals. For the conservative level set method, this is done by solving the following equation to quasi-steady state,

∂φ/∂τ + ∇·(½(1 − φ²) n̂) = ε ∇·((∇φ·n̂) n̂),   (9)

where τ is a pseudo time and n̂ is the interface normal. This equation calculates a smoothed color function by balancing diffusion in the direction normal to the interface by a compressive flux. If the fluid-fluid interface does not intersect the computational boundary, homogeneous Neumann boundary conditions can be used for the reinitialization. However, for the case when contact points are present the contact point position must not be distorted during reinitialization and we need other conditions. We discuss this further in Section 4.3.

Discretizations

For the implementation we use the existing two-phase flow solver described in (43) with suitable modifications to account for moving contact points (see Section 4). The solver is implemented in the C++ based finite element open source library deal.ii (44,45). The equations in Section 2.1 and Section 2.2 are discretized in space using the finite element method. For the level set function piecewise continuous linear shape functions on quadrilaterals, i.e. Q₁ elements, are used. For the incompressible Navier-Stokes equations we use the Taylor-Hood elements Q₂Q₁, i.e. shape functions of degree two for each component of the velocity and of degree one for the pressure. With these elements the Babuška-Brezzi (inf-sup) condition (46) is fulfilled, in order to guarantee the existence of a solution. Finite element discretizations of equations of transport type, such as the level set equation, typically need to be stabilized. Here, however, no stabilization is used since the reinitialization will take care of possible oscillations. To improve robustness, the equation for curvature calculation (7) is solved by projecting the divergence of the normal vector onto the space of continuous finite elements with a mesh-dependent diffusion of size 4h² (43,42). For time stepping, each of the Navier-Stokes equations and the level set equation is discretized using the second order accurate, implicit BDF-2 scheme. In order to avoid an expensive coupling between the incompressible Navier-Stokes part and the level set part (via the variables u and φ) a temporal splitting scheme is introduced. For more details about the time discretization we refer to (43).

CONTACT LINE MODEL

In this section we derive macroscopic boundary conditions for simulations of dynamic contact points. Similarly to the work in (13) we assume both a temporal and a spatial scale separation between the local dynamics at the contact point and the global fluid flow. For capillary driven flows (or other flows where capillarity is essential) the dynamics at contact points influences the global flow (13,14). For these types of problems the macroscopic time scale is typically large compared to the time it takes for the microscopic problem to reach a steady state (13), which makes it possible to assume a temporal scale separation.
Consequently, the microscopic dynamics is in equilibrium for each apparent contact angle and no additional information from the macro model is required (13). This assumption is also supported for example in (17), where it is concluded that the macroscopic dynamics is affected by the microscopic regime mainly through the contact angle, and in the review paper (15), where it is noted that in many flow situations where Ca is small the apparent contact angle completely describes the dynamics. With the scale separations as a motivation, we consider multiscale modelling, where it is possible to use different models for describing the microscopic, mesoscopic (intermediate) and macroscopic dynamics. The coupling of the models is done via the macroscopic contact angle and the local contact point velocity. Here, we focus on how to communicate the information about the local contact point velocity from the micro simulation to the macro simulation via macroscopic boundary conditions. The macroscopic moving contact point model and boundary conditions we develop here will directly make use of the local contact point model developed in (13). The model in (13) describes the dynamics at a length scale of tens to hundreds of nanometers, i.e. the green region in the schematic illustration from Figure 1. The idea is based on performing a series of simulations using the Cahn-Hilliard and Stokes equations for the micro scale dynamics, and taking the molecular effects into account by using a standard phase field boundary condition. The model takes the apparent contact angle as input and calculates the local contact point velocity. The viscous bending of the interface is to a large extent captured by this model. Since the contact point velocity only depends on the apparent contact angle (under the assumptions made here) it is possible, and most efficient, to perform the local simulations in (13) independently of the macro simulation. The results from the local phase field simulations can then be tabulated and efficiently employed by the macro model. We refer to (13) for more details about the local model. It is possible to couple the macroscopic boundary conditions developed here to any local simulation that calculates the local contact point velocity from a given apparent contact angle. Another option would be to couple the macroscopic boundary conditions to a molecular dynamics simulation. However, since molecular dynamics simulations are limited to length scales of < 10⁻⁹ m (only the blue region in Figure 1) an extra mesoscopic model would be necessary for the intermediate mesoscale dynamics.

Matching

To couple the local phase field model and the macroscopic model we use ideas from matched asymptotics. Just as in (13) the macroscopic scale, with an apparent contact angle, represents an outer solution and the microscopic scale represents an inner solution. The matching between the outer and inner solutions is done at the intermediate mesoscale, close to the contact point at the outer scale but far from the contact point at the inner scale (13). The continuum approximation is assumed to be valid in the intermediate region, but the length scale is small enough for viscous effects to dominate the convection. By this assumption we have a vanishing local Reynolds number (based on the characteristic length scale of the intermediate region) and the creeping flow approximation of the Navier-Stokes equations can be employed. Asymptotically the interface at the microscale becomes increasingly planar far from the contact point (13).
This is seen theoretically, where a logarithmic dependence of the contact angle is predicted (18,15): the curvature of the asymptotic microscopic solution goes to zero as the distance to the contact point goes to infinity. From the logarithmic dependence it can also be concluded that viscous bending of the interface is most prominent near the contact point and will to a large extent be captured by the phase field model in the micro domain. Consequently, the contact angle varies only very slowly at the matching scale and we approximate the interface to be essentially planar at this scale (13). The flow around a flat fluid interface under the assumption of a creeping flow approximation (wedge flow) was studied theoretically by Huh and Scriven (9). In their paper Huh and Scriven derive an analytic similarity solution to the creeping flow approximation of the Navier-Stokes equations by rewriting them in the form of a biharmonic equation for the stream function ψ(r, φ) (in plane polar coordinates r and φ). The origin of the polar coordinate system is fixed to the contact point position. In terms of the stream function the polar velocity components are u_r = −r⁻¹∂ψ/∂φ and u_φ = ∂ψ/∂r. By imposing appropriate boundary and interface conditions an analytical expression for the stream function in the region close to the contact point is derived. The analytical Huh and Scriven solution is a function of the contact angle θ (0 < θ < 180°), the magnitude of the contact point velocity U and the viscosity ratio μ₂/μ₁. It is given by ψ_i(r, φ) = r(a_i sin φ + b_i cos φ + c_i φ sin φ + d_i φ cos φ), where the coefficients a_i, b_i, c_i, d_i for the two different fluids (subscript i denotes fluid 1 and 2, respectively) follow from the boundary and interface conditions; their explicit expressions are given in (9). In Figure 2 the magnitude of the Huh and Scriven similarity velocity is plotted for the case with a contact angle θ = 45°, non-dimensional contact velocity U = 1 and viscosity ratio μ₂/μ₁ = 1. It can be seen that the velocity is zero along the whole solid boundary, i.e. along the whole line y = 0, also at the contact point. However, just inside the boundary (i.e. along the line y = δ, where δ is a small number) the velocity is non-zero in the vicinity of the contact point. This illustrates a velocity discontinuity at the contact point. The Huh and Scriven solution is recognised to be a useful tool to describe flow at an intermediate scale close to contact points, see for example the review paper (15), and also (47,48,10). It is also well known that this solution is not a full solution and cannot be used to describe the full moving contact point problem. The solution is singular exactly at the contact point. However, in this small region atomistic phenomena come into play and are not included in the standard Navier-Stokes system. If the Huh and Scriven solution were regular there, it would not be physically relevant. Further, a completely planar interface is unrealistic. There is a jump in the pressure over the interface, which can only be balanced by surface tension of a curved surface. However, if the surface tension effect is strong, which is the case for flow driven by capillary forces, a small curvature suffices. In (49) a modified Huh and Scriven solution is presented, which reveals that at an intermediate length scale the curvature decreases away from the contact point, and the flow approaches the flow of the planar case. In the paper (13) results from numerical simulations using the phase field model also show agreement with the wedge flow close to the contact point (at the macroscopic scale).
Furthermore, Huh and Scriven draw a similar conclusion in their paper: "the model may approximate reality well in a slightly removed region where the fluid interface is substantially flat and the flow qualifies as creeping" (9). The conclusion is that the planar interface and the Huh and Scriven solution are appropriate for matching an outer solution with an inner solution, at some intermediate distance from the contact point.

Macroscopic boundary conditions

With the motivation from the previous subsection we use the analytic Huh and Scriven solution to develop the macroscopic velocity boundary conditions. To avoid the singularity at the contact point in the analytical solution, part of the intermediate matching region is excluded from the macroscopic simulation. If the excluded region is of height H and width W this results in a modified boundary according to the schematic illustration in Figure 3. In the excluded region the flow is given by the local model from (13) and the analytic Huh and Scriven solution. The length scale of the excluded region should be of the order of the length scale of the intermediate region discussed in the previous section. Hence, H and W are parameters that should depend on the physics of the two fluids. They need to be small compared to macroscopic features, but large enough compared to molecular diffusion lengths. Along the new artificial boundary, we impose the analytic velocity from the Huh and Scriven model as a velocity Dirichlet boundary condition for the macroscopic simulation. In this way the information about the contact point velocity from the micro model, i.e. the information about the movement of one single point, is transformed into a macroscopic velocity boundary condition along the modified boundary. Figure 4 shows an example of such a boundary function. The magnitude of the analytic Huh and Scriven velocity along the artificial boundary is plotted for the case when a region of size H = 0.05 and W = 4H has been excluded, for a non-dimensional contact point velocity U = 0.02 and contact angle θ = 140°. Note that the relation between contact angle and contact point velocity involves a non-dimensional velocity obtained from the local phase field model. To get the corresponding physical velocity we need to take the scaling by the reference velocity U_ref = σ/μ into account, see (13) for more details. Further, we also note that in our model the shape of the fluid interface is not prescribed at the boundary. The level set function develops dynamically as part of the solution and the contact angle is calculated from the resulting level set function. The temporal evolution of the level set function is modeled by the advection equation, and the parabolic reinitialization equation. For the reinitialization equation, Dirichlet boundary conditions are imposed, while no boundary condition is imposed in the advection step, see the subsequent section. We use this option since numerical experiments have shown that prescribing Dirichlet or Neumann boundary conditions in the advection step distorts the contact angle.

IMPLEMENTATION OF CONTACT LINE MODEL

In this section we describe implementational details related to the contact point model presented in the previous section.

Artificial boundary

The modified boundary (Figure 3) is described using a so-called bump function b(x) of height H and width W (see Figure 3), centred at the x-coordinate x_cp of the contact point position.
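To make the boundary data concrete, the sketch below evaluates a Huh and Scriven velocity field along a bump-shaped contour, mimicking the boundary function of Figure 4. The stream-function coefficients and the cosine bump are illustrative placeholders: the actual coefficients follow from the matching conditions in (9), and the paper's exact bump function b(x) is not reproduced here.

```python
import numpy as np

def hs_velocity(x, y, coeffs):
    """Cartesian velocity of the Huh-Scriven similarity solution with stream
    function psi(r, phi) = r*(a*sin(phi) + b*cos(phi) + c*phi*sin(phi)
    + d*phi*cos(phi)), using u_r = -(1/r)*dpsi/dphi and u_phi = dpsi/dr as in
    the text. (x, y) are relative to the contact point at the origin; the
    coefficient values used below are placeholders, not from the paper."""
    a, b, c, d = coeffs
    phi = np.arctan2(y, x)
    s, co = np.sin(phi), np.cos(phi)
    f = a * s + b * co + c * phi * s + d * phi * co                 # psi / r
    df = a * co - b * s + c * (s + phi * co) + d * (co - phi * s)   # d(psi/r)/dphi
    u_r, u_phi = -df, f
    # Both components depend on phi only, not on r: the hallmark of the
    # similarity solution, and the origin of the contact point discontinuity.
    return u_r * co - u_phi * s, u_r * s + u_phi * co               # (u_x, u_y)

# Sample the speed along a cosine bump of height H and width W = 4H around
# the contact point, as in the Figure 4 configuration (H = 0.05).
H, W = 0.05, 0.2
x = np.linspace(-2 * W, 2 * W, 801)
y = np.where(np.abs(x) < W, 0.5 * H * (1 + np.cos(np.pi * x / W)), 1e-12)
ux, uy = hs_velocity(x, y, coeffs=(0.0, 0.02, -0.01, 0.005))
print("peak boundary speed:", np.max(np.hypot(ux, uy)))
```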
A first simple approach to implement the macroscopic boundary conditions is to apply the velocity boundary condition at the grid points along the physical boundary, but read the values of the analytical velocities from the modified boundary (i.e. from the bump in Figure 3). In this work we are interested in a first investigation to see if the contact point model is able to advect the contact point accurately and we will perform simulations of model problems. With this motivation, we start with this simple approach of directly projecting the values of the velocity boundary function along the modified boundary to the physical boundary. The size of the excluded domain, i.e. H and W, puts a restriction on the spatial mesh size. It is important to resolve the small scale features of the peak in the velocity boundary function around the contact point, see Figure 4 for example. Figure 5 shows an example of how the peak and the small scale features of the boundary function depend on H and W.

Calculation of contact point position and angle

In order to calculate the contact point and contact angle, the method illustrated in Figure 6 is used. First, all intersection points (x_i, y_i) between the fluid-fluid interface Γ and the mesh faces are found that are closer to the boundary than a distance ℓ. Among these points, the point located on the domain boundary ∂Ω is taken as the contact point. The interface is then approximated by a second order polynomial y = c₁ + c₂x + c₃x², by making a least squares fit in order to minimize Σ_i (y_i − (c₁ + c₂x_i + c₃x_i²))². The contact angle is then computed from the slope of the second order polynomial, dy/dx = c₂ + 2c₃x, evaluated at the contact point position x = x_cp.

FIGURE 6 Intersections between the mesh faces and the interface.

Reinitialization at boundaries with contact points

As discussed in Section 2.2 we use the conservative level set reinitialization according to equation (9). For the case when contact points are present, using the standard homogeneous Neumann boundary condition for reinitialization will distort the contact point position. To fix the contact point position during the reinitialization we instead use a Dirichlet boundary condition at the boundaries with contact points. To minimize the distortion of the contact angle we base the Dirichlet boundary condition on a contact angle that is calculated before the reinitialization cycle. The boundary condition is based on approximating the fluid-fluid interface to be a circular arc, with the calculated contact angle, in an area of size ∼ 2ε close to the contact point (where ε is the parameter that controls the transition region of the level set function, see Section 2.2). Thus we force the level set function on the boundary to take the form φ = tanh(d_arc(x)/(2ε)), where d_arc(x) is the signed distance function around the interface according to a circular arc interface. Note that this boundary condition will not force the fluid-fluid interface to take the form of a circular arc inside the domain, it will only affect the level set function on the boundary. Also, since the color function (20) takes the value +1/−1 along the boundary except in a region close to the contact point (x_cp ± O(ε)), it is only a local approximation.

Summary of full solution algorithm

To summarise, we look at the full solution algorithm. In each time step of the simulation we perform the following steps:
1. Calculate the contact point position and the apparent contact angle according to Section 4.2.
2. For the apparent contact angle θ_app, obtain the local contact point velocity from the pre-tabulated data from the local phase field model in (13).
3. Impose the corresponding Huh and Scriven velocity as a Dirichlet boundary condition along the modified boundary (Sections 3.2 and 4.1).
4. Advance the Navier-Stokes equations one time step.
5. Advect the level set function and reinitialize it, using the boundary conditions described in Section 4.3.

NUMERICAL EXPERIMENTS AND RESULTS

As a model problem we consider two-dimensional capillary driven flow in a horizontal channel (no gravity). The model domain is illustrated in Figure 7. We refer to the fluid to the left of the interface as fluid 1 and the fluid to the right as fluid 2, and all contact angles are measured from fluid 2. For all simulations the pressure is fixed to zero at the open boundaries (left and right boundaries). We first present simulation results for a simplified problem (Preliminary Test), and then proceed by demonstrating results for the full capillary driven channel flow (Channel Flow). In both cases we solve the non-dimensional Navier-Stokes equations (2) with Re = 1 and surface tension force given by (5), where We = 1.

Preliminary Test

To investigate if and how well the macroscopic boundary condition is able to transport the contact point according to a given contact point velocity, we first perform simulations in a simplified set-up, where the contact point velocity is prescribed instead of obtained from the micro model. We see that the numerical solutions converge as the computational mesh is refined. However, the convergence is to a solution with a slightly different speed than the prescribed one. Possible reasons for the approximately 1.8% discrepancy of the limiting solution will be discussed below. There are grid-related oscillations in the contact point velocity, similar to those seen in Figures 11-14, but their amplitude decreases with mesh refinement for the initial refinements. At a refinement level of h = 0.0125 the decay of the oscillations seems to stagnate. We believe the remaining oscillations are triggered by similar processes as the spurious velocities observed in (42), and references therein.

Channel Flow

In this section we present results for the full problem of capillary fluid-fluid displacement in a horizontal channel. Channels of non-dimensional lengths L = 120, 240 and 480 are used, all with a width of w = 20. Non-dimensional parameters are set to μ₁ = 0.3, μ₂ = 1, ρ₁ = 1, ρ₂ = 0.73, σ = 1, which corresponds to oil being displaced by water. For this set-up the static contact angle is 140 degrees measured from the oil side. The micro model is first used to pre-compute relations between the contact point velocity and contact angle. The set-up of the micro model is the same as in (13) and the results are the ones for a microbox of non-dimensional size 30, which is much larger than the non-dimensional diffusion length of 1 (see (13)). The microdomain size is sufficient to include most of the viscous bending, see the discussion in (13) for more details. The resulting relation between contact angles and velocities obtained from the micro simulations is plotted in Figure 9. Note that to get the corresponding dimensional velocities one would need to take into account the scaling by the reference velocity U_ref = σ/μ. However, we have used the same scalings for both the macro and micro models, and can therefore directly read the non-dimensional values from Figure 9. Now, macroscopic simulations are performed using the tabulated data in Figure 9. In each time step the velocity boundary conditions developed in Section 3.2 are applied according to the given contact point velocities from the micro model, depending on the calculated angle. The height and width of the bump function (Section 4.1) are here H = 0.5 and W = 4H (i.e. H = 0.025w).
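In each time step, obtaining the contact point velocity from the tabulated micro-model relation (step 2 of the solution algorithm above) is a one-dimensional table lookup. A minimal sketch with made-up table values; the actual relation is the one plotted in Figure 9, which passes through zero velocity at the static angle of 140 degrees.

```python
import numpy as np

# Hypothetical tabulated micro-model output: apparent contact angle (degrees)
# vs. non-dimensional contact point velocity. These numbers are illustrative
# only; the real table is the relation plotted in Figure 9.
angle_tab = np.array([120.0, 130.0, 140.0, 150.0, 160.0])
velocity_tab = np.array([-0.04, -0.02, 0.0, 0.02, 0.05])

def contact_point_velocity(theta_app_deg):
    """Linear interpolation in the pre-tabulated angle-velocity data."""
    return np.interp(theta_app_deg, angle_tab, velocity_tab)

print(contact_point_velocity(146.5))   # above the static angle -> advancing
```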
We assume the excluded part of the domain has a size that corresponds to the scale of the intermediate region according to the discussion in Section 3.1. To begin with we consider the shortest channel and run the simulations for a total time of t = 2000 (t = 800 for the finest mesh) using three different meshes with 40 × 240, 80 × 480 and 160 × 960 grid cells, respectively. Corresponding mesh sizes are h = 0.5, h = 0.25 and h = 0.125. Time steps of Δt = 2, Δt = 1 and Δt = 0.5 are used. The apparent contact angle is calculated at a height of ℓ = 0.5 for all meshes (i.e. at the height of one coarse grid cell above the wall). For all simulations the initial interface is a vertical line located at a non-dimensional distance of 25 into the channel (measured from the left end) and the initial velocity is zero in the whole domain. After an initial transient the system goes into a quasi-steady state, where the flow is determined by a balance of capillary forces and viscous stress. At the quasi-steady state the solution consists of a curved interface, and a Poiseuille flow profile away from the interface, see Figure 10.

FIGURE 10 Velocity field and fluid interface at the end of the simulation for channel length L = 120, mesh size h = 0.25. Note that only very few velocity vectors are presented, compared to the degrees of freedom in the simulation.

In Figure 11a the contact point velocity is plotted as a function of time. In Table 1 we compare local time averages of contact point velocities, and amplitudes of grid-level oscillations, for the three meshes. The amplitude is computed from maximum and minimum values in the time interval 500 ≤ t ≤ 520. The convergence rate for the time-averaged velocity is over 3, but there is a stagnation in convergence for the grid-level oscillations also in these computations. We must expect a discrepancy for converged solutions, similar to the one discussed above for the Preliminary Test, also in this case. For this test there is no exact solution to compare with, but as the length of the channel increases the flow is expected to approach a limiting state with an interface shaped as a perfect circular arc. There is an analytic formula describing the movement of such an interface. In (50) an expression for the quasi-steady motion of a meniscus in a horizontal capillary channel is derived. We have modified the expression to be applicable to two-dimensional flow (equivalent to three-dimensional flow between two infinite parallel plates) and get

V(x) = σ w cos(π − θ_s) / (6 (μ₁x + μ₂(L − x))),   (21)

where x is the distance between the contact point position and the left end of the channel and θ_s is the static contact angle. In Figure 11 the simulated contact point velocities are shown for the three channel lengths, together with the corresponding limiting velocities (21). The acceleration of the contact point is due to the fact that the more viscous fluid 2 (oil) is displaced by the less viscous fluid 1 (water). For the two longer channels the multiscale simulation results agree well with the theoretical expression for the interface velocity. For the shorter channel, L = 120, the simulation predicts a lower velocity compared to the velocity in the limiting case, given by (21). Another comparison is also possible. In (13) phase field simulations of capillary fluid displacement with a set-up equal to ours are presented. In Figure 12 we compare our results for the shorter channel with the result from the full phase field simulation in (13). Again, the multiscale model predicts a lower contact point velocity. In fact, the phase field simulation predicts a velocity in good agreement with (21).
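A small sketch of the limiting-velocity comparison, using equation (21) in the form reconstructed above together with the parameter values quoted for the channel flow; integrating dx/dt = V(x) reproduces the expected acceleration of the contact point as the more viscous oil column shortens. The exact prefactor in (21) follows the standard plane-Poiseuille capillary-displacement balance and should be checked against (50).

```python
import numpy as np
from scipy.integrate import solve_ivp

w, L = 20.0, 120.0                        # channel width and length
mu1, mu2 = 0.3, 1.0                       # water (fluid 1) displacing oil (fluid 2)
sigma, theta_s = 1.0, np.deg2rad(140.0)   # static angle, measured from fluid 2

def v_limit(x):
    """Reconstructed limiting velocity (21): capillary pressure driving plane
    Poiseuille flow through columns of fluid 1 (length x) and fluid 2 (L - x)."""
    return sigma * w * np.cos(np.pi - theta_s) / (6.0 * (mu1 * x + mu2 * (L - x)))

# Contact point position over time, starting 25 units into the channel.
sol = solve_ivp(lambda t, x: v_limit(x[0]), (0.0, 2000.0), [25.0], max_step=10.0)
print("final position:", sol.y[0, -1])
print("velocity range:", v_limit(25.0), "->", v_limit(sol.y[0, -1]))
```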
Sensitivity of the method

Above we have demonstrated that, with our model for contact point dynamics and our implementation thereof, numerical solutions converge as the computational mesh is refined, to a solution in the vicinity of the correct solution, but with a small discrepancy. We identify two possible reasons for the discrepancy. One possibility is that the interface is not completely flat at the matching region. This has two consequences: the Huh and Scriven solution is only an approximation and it becomes difficult to define and calculate the apparent contact angle. Another possibility is errors due to the implementation strategy of the velocity boundary conditions: the Huh and Scriven velocity is read along a contour (to avoid the singularity precisely at the contact point) but imposed as a velocity boundary condition at the solid wall, and not along the corresponding contour (see Section 4.1). To understand the sensitivity to the specific choices in our implementation we have made some variations in the implementation. In particular, we changed the contour determining the velocity boundary condition, and how the apparent angle was determined. Results from these numerical tests are reported below.

Effect of angle calculation

From the phase field solutions in (13) it is observed that the interface shape is essentially circular in the central part of the domain throughout the simulations. However, it is also observed that the interface is bent close to the contact point, and this effect is larger for the shorter channels. In fact, the relation between the apparent angle and the interface speed in the phase field simulation of channel flow is reported to be consistent with the results from the microsimulations only when the apparent angle is computed from the interface shape at a certain distance (non-dimensionalized by the diffusion length) from the wall. We also see a corresponding effect here when comparing angle calculations at different heights above the wall (see Section 4.2 for the definition of ℓ). For the longer channels we observe that angles calculated at different heights agree well with each other, but for the shorter channel there is a difference in the calculated angle depending on the height ℓ. With this motivation we perform a simulation for the shorter channel, L = 120, where the angle is calculated at a height ℓ = 10 (i.e. at the center of the channel) instead of at the height ℓ = 0.5 used above. The result is presented in Figure 13 and we see that the resulting contact point velocities now agree well both with the theoretical expression (21) and the phase field solution. We also see that with grid refinement the result converges towards the theoretical result/the phase field result, and that the oscillations decrease. We also note that in the present simulation the number of grid points is only a small fraction of that in the phase field simulation.

FIGURE 13 Contact point velocity for the L = 120 case when the apparent angle is calculated with ℓ = 10. Left: velocity as a function of position, compared to the limiting velocity in (21). Right: velocity as a function of time, compared to the phase field simulation.

Implementation of the velocity Dirichlet condition

The aim of this section is to investigate the effect of the modified boundary in Section 3.2 and the simple approach of applying the boundary conditions (described in Section 4.1).
Simulations are performed for the shorter channel, L = 120, with different sizes of the excluded part of the domain, i.e. different values of H (and W = 4H). We also vary the spatial mesh size for different H, to investigate the effect of resolving the small scale features of the peak in the velocity boundary function (Section 4.1). In Figure 14 we present results for three different sets of H and mesh sizes h. For the simulation where H = 0.5, h = 0.25 the size of the excluded part is half of that in the simulation where H = 1, h = 0.5, but the resolution of the boundary function is the same (i.e. the same number of grid points resolve the peak in the boundary velocity function). Comparing the two simulations we see that the result improves slightly for the case with smaller H. The effect is however small compared to the effect of changing the way the contact angle is estimated, compare with Figure 13. To investigate the effect of resolving the small scale features of the boundary function, the result for the simulation with H = 0.5, h = 0.25 is compared to a simulation where H = √0.5, h = 0.25, i.e. a larger H but also a higher resolution of the boundary function. In Figure 14 we see that a larger H with higher resolution also only slightly improves the result. This indicates that the error due to the simple approach of applying the boundary condition (Section 4.1) does not have a significant effect on the result.

CONCLUSIONS AND DISCUSSION

We have presented a new idea for including contact point dynamics in standard two-phase models. The idea is based on multiscale modeling, where the result from a local micro model for contact point dynamics (presented in (13)) is used. The micro model includes nano-scale physics, such as molecular diffusion, which is not present in the standard Navier-Stokes model for two-phase flow. The effect of the micro model is communicated to the global model via special macroscopic boundary conditions. The boundary conditions are based on the Huh and Scriven analytic velocity for steady movement of a contact point (9). The approach is very general, and can be modified to include other nano-scale effects that may be important for the large scale dynamics. The so-called contact-line friction is an example, see (14). This effect can be incorporated in our method straightforwardly by changing the boundary condition in the micro problem. The presented method is exemplified in computations of two-dimensional fluid-fluid displacement in a channel. The potential of our technique is demonstrated by the fact that we can achieve results as accurate as those of a full phase field simulation at a fraction of the computational cost. Accuracy for a phase field simulation relies on resolving features at the length and time scales of molecular diffusion, while for our method such resolution is not required. Our computations demonstrate convergence as the computational mesh is refined. A correct solution is known in a few simplified cases, and then the converged solution is in the vicinity of the correct solution (a 1.8% discrepancy was observed). We discuss reasons for the discrepancy, and have found that results are sensitive to the precise algorithm for estimating the apparent contact angle, while less so to several other implementational choices. For a realistic channel flow model we compare velocities from our simulations with the theory for a limiting steady interface displacement with an interface shape of a perfect circular arc. For the longer channels, the resulting velocities show good agreement.
It is reasonable that the results for the longer channels show better agreement, since the flow and interface shape approach the limiting case as the channel length increases. We have found that for the shorter channel our results are more sensitive to how the contact angle is estimated than for the longer channels. This shows that further studies on how to estimate the apparent angle are needed. However, we believe there are possible gains also in improving the implementations of the velocity boundary condition, the level set boundary conditions, the reinitialization procedure and the curvature calculations, especially in the vicinity of the contact points. For example, an ideal reinitialization would preserve both contact point positions and the angles between the interface and the boundary. Possibly a geometric reinitialization would be able to fulfill these requirements better than our approach does, although probably at a cost in efficiency. Similarly, the advection step should move the contact point to a new position, while the angle evolves according to the local velocity. We have tried a few different combinations of boundary conditions, but found that applying a Dirichlet condition in the advection of the level set function leads to a distortion of the contact angle. More research in this area is likely to improve the accuracy of the methodology. Another area where improvements are needed is related to the calculation of the interface curvature. Here, the calculation is carried out by projecting the divergence of the normals (equation (7)), including a diffusion term of size 4h², onto the space of linear elements, see Section 2.3 and (43). This means solving an elliptic partial differential equation with a Laplace term. With no contact points present a homogeneous Neumann boundary condition suffices, while such a condition leads to numerical artefacts in the calculated curvature close to contact points. As discussed in Section 2.3, we do not have boundary conditions for the curvature calculation that are adapted to contact points. An area for further research could be to find suitable boundary conditions for the curvature calculation. Further improvement is possible by applying the macroscopic velocity boundary condition at the modified boundary, instead of the simple approach used here, described in Section 4.1. One possible way forward would be to use some kind of immersed methodology, for example cut-FEM. A starting point could be the cut-FEM method for handling a fluid interface described in (51).
Wild Seizing Gliomas! Time-Dependent Characteristics and Prognosis of Glioblastoma-Related Epilepsy

Characteristics and Prognosis of Tumor-Related Epilepsy During Tumor Evolution in Patients With IDH Wild-Type Glioblastoma. Pallud J, Roux A, Moiraghi A, Aboubakr O, Elia A, Guinard E, Oppenheim C, Tauziede-Espariat A, Parraga E, Gavaret M, Chrètien F, Huberfeld G, Zanello M. Neurology. 2024;102(1):e207902. doi:10.1212/WNL.0000000000207902

Background and Objectives: Tumor-related epilepsy is a well-known symptom of glioblastoma. However, the particular characteristics of epileptic seizures related to glioblastoma, isocitrate dehydrogenase (IDH)-wild-type are almost unexplored longitudinally during the whole course of the disease. We assessed tumor-related epilepsy and seizure control during tumor evolution and the prognostic significance of tumor-related epilepsy. Methods: We performed an observational, retrospective single-center study at one tertiary referral neuro-oncology surgical center (2000-2020). We included adult patients treated for a newly diagnosed supratentorial glioblastoma, IDH-wild-type with available preoperative and postoperative MRI and with available epileptic seizure status at diagnosis. To determine factors associated with tumor-related epilepsy or seizure control, univariate analyses were performed using the χ2 or Fisher exact tests for categorical variables and the unpaired t test or Mann-Whitney rank-sum test for continuous variables. Predictors associated with tumor-related epilepsy and seizure control in unadjusted analysis were entered into backward stepwise logistic regression models. Results: One thousand six patients were enrolled. The cumulative incidence of tumor-related epilepsy increased during tumor evolution (33.1% at diagnosis, 44.7% after oncologic treatment, 52.4% at progression, and 51.8% at the end-of-life phase) and is related to tumor features (cortex involvement, no necrosis, and small volume). Uncontrolled epileptic seizures increased during tumor evolution (20.1% at diagnosis, 32.0% after oncologic treatment, 46.7% at progression, and 41.1% at the end-of-life phase). Epileptic seizure control after oncologic treatment was related to seizure features (uncontrolled before oncologic treatment and focal-to-bilateral tonic-clonic seizures) and to the extent of resection. Epileptic seizure control at tumor progression was related to seizure features (presence at diagnosis and uncontrolled after oncologic treatment) and to the time to progression. Tumor-related epilepsy at diagnosis was a predictor of a longer overall survival (adjusted hazard ratio, 0.78; 95% CI 0.67-0.90; p < 0.001) independent of age, Karnofsky Performance Status score, tumor location and volume, extent of resection, standard combined chemoradiotherapy, levetiracetam use, and MGMT promoter methylation. Discussion: The progression of tumor-related epilepsy with the evolution of glioblastoma, IDH-wild-type, and the effects of surgery on seizure control argue for proper antiseizure medication and maximal safe resection. Tumor-related epilepsy is an independent predictor of a longer survival.
Commentary

Glioma is the most common form of central nervous system (CNS) tumor that arises from glial cells. Glioblastoma multiforme (GBM), the most aggressive form of glioma, affects approximately 75,000 people worldwide annually.1 The invasion of the neocortex by GBM is accompanied by seizures, with at least 25% to 30% of patients experiencing seizures as the presenting clinical sign, and 40% to 70% developing seizures at some point during the disease's course.2 Although glioma-related seizures often correlate with longer survival,3 seizures significantly contribute to patient morbidity and negatively impact quality of life. Accumulating evidence also indicates that seizures encourage tumor proliferation and invasion while glioma growth stimulates seizures, suggesting that the two conditions may share common pathogenic mechanisms.4,5
Glioma classification was mainly based on histological and immunohistochemical characteristics and their resemblance to the presumed cell of origin. The rapidly increasing knowledge of tumor molecular biomarkers (TMMs) over the last 2 decades has allowed more robust diagnostic TMMs to be introduced into clinical practice. This is most evident in the 5th edition of the World Health Organization (WHO) Classification of Tumours of the CNS, updated in 2021.6 Gliomas are first broadly divided into isocitrate dehydrogenase 1 mutated (IDHmut) and IDH1 wild-type (IDHwt) gliomas.6 Besides their role in prognosis, TMMs have improved our understanding of the pathophysiology of glioma-related epilepsy. For example, IDHmut gliomas have been shown to have a higher rate of seizures, with an incidence approaching 80%.7 Although the epileptogenic effects of IDHmut gliomas have been highlighted,7 seizures are also a practical concern in IDHwt gliomas, including GBM.2 Little is known regarding the time-dependent risk factors, prognosis, and mechanism for epileptogenesis in GBM.8,9 Prior studies evaluating GBM-related epilepsy are restricted by the inclusion of patients diagnosed with GBM according to the pre-2021 revised WHO classification, likely resulting in a misclassification bias.8 Using a retrospective monocentric study of 1006 adults with IDHwt GBM, Pallud et al have attempted to address these limitations.9 They longitudinally (at histomolecular diagnosis, early postoperative period, before and after oncologic treatment, at tumor progression, and the end-of-life phase) studied the prevalence and control rates of GBM-related epilepsy, predictors of GBM-related epilepsy and seizure control, and the prognostic significance of GBM-related epilepsy on survival. Multiple key findings were reported.8 First, the cumulative incidence of GBM-related epilepsy increased during GBM evolution (33.3% at histomolecular diagnosis, 44.7% after oncologic treatment, 52.4% at progression, and 51.8% at the end-of-life phase) and is related to tumor features (cortex involvement, no necrosis, small tumor volume <30 mL), competitive presenting symptoms, and a longer time to diagnosis. Similarly, uncontrolled seizures progressed during tumor evolution (20.1% at histomolecular diagnosis, 32.0% after oncologic treatment, 46.7% at progression, and 41.1% at the end-of-life phase). Interestingly, seizure control after radiochemotherapy is related to seizure features (uncontrolled before oncologic treatment and focal-to-bilateral tonic-clonic seizures) and to the time of tumor progression. Predictably, the extent of resection is the main predictor of seizure control, with supramarginal removal associated with better seizure control. The authors found that tumor-related epilepsy at diagnosis was an independent predictor of longer overall survival.
Secondly, the finding that epilepsy is more frequent in cases with a small tumor, without necrosis, and with no signs of raised intracranial pressure suggests that the interaction between glioma cells and functional neocortex may trigger GBM-related epilepsy. Indeed, recent studies have elegantly demonstrated neurogliomal synapses, in which neuronal hyperexcitation stimulates bona fide glutamatergic synapses on glioma cells and orchestrates glioma cell growth and invasion.10 These observations provide an impetus for future studies to identify drugs specifically targeting these synapses for potential dual anti-tumorigenic and anti-seizure effects, similar to IDHmut gliomas, for which several IDH inhibitors are being investigated for the same purpose.11 Four main stages are associated with clinical GBM progression: pre-resection, post-resection (with/without radiochemotherapy), recurrence or progression (with/without radiochemotherapy), and end-of-life. The third key finding of the current study is that only the presence of tumor-related epilepsy at diagnosis confers a survival benefit. This indicates the highly malignant, dynamic character of GBM and the apparent differences in implications for patient survival when seizures occur earlier or later in the disease's treatment. This difference also suggests that the mechanisms underlying seizure occurrence at these respective time points differ in GBM progression. Finally, the current study reinforces the importance of a maximal safe resection for both epileptologic and oncologic purposes in patients with IDHwt GBM, similar to patients with IDHmut gliomas. Despite the poor oncologic prognosis, supramarginal resection is a worthwhile consideration (with a higher probability of seizure freedom), as uncontrolled epilepsy negatively impacts the quality of life in this already vulnerable population. Despite its strengths, including being the largest ever study on an exclusively IDHwt GBM cohort with long-term follow-up, the study is plagued by limitations inherent to its retrospective monocentric design. Also, the authors fail to explain the unexpected finding that the time to diagnosis was longer in patients with GBM-related epilepsy than in those presenting with other symptoms. Similarly, among patients with GBM-related epilepsy, the time to diagnosis was shorter in patients with a single seizure than in those with recurrent seizures. This is in contrast to prior observations that seizures could trigger earlier presentation for care, thus accelerating diagnosis and initiating earlier treatment of smaller GBMs.3 This also contradicts their discussion that the earlier detection of GBM because of the "sentinel" seizures, as suggested by the smaller tumor volume, might have accounted for the longer survival. Significantly, the current study did not address the question of how the process of epileptogenesis evolves throughout GBM progression. In summary, this publication nicely demonstrates the progression of GBM-related epilepsy and uncontrolled seizures during IDHwt GBM evolution. However, these findings should be interpreted with caution given the limitations and must be confirmed within prospective large databases. Also, studies are needed to elucidate the exact mechanism of epileptogenesis during GBM evolution. Significantly, we need further insight into the bidirectional neuron-glioma interactions and a better understanding of how seizures beget glioma proliferation and invasion and vice versa.5
Doing so has the potential to generate exciting candidate drugs with simultaneous anti-tumorigenic and anti-seizure properties for further clinical trials in GBM.
Evaluation of Suppressiveness of Soils Exhibiting Soil-Borne Disease Suppression after Long-Term Application of Organic Amendments by the Co-cultivation Method of Pathogenic Fusarium oxysporum and Indigenous Soil Microorganisms

Preventive measures against soil-borne diseases need to be implemented before cultivation because very few countermeasures are available after the development of disease. Some soils suppress soil-borne diseases despite the presence of a high population density of pathogens. If the suppressiveness of soil against soil-borne diseases can be predicted and diagnosed for crop fields, it may be possible to reduce the labor and cost associated with excessive disinfection practices. We herein evaluated the suppressiveness of soils in fields with the long-term application of organic amendments by examining the growth of pathogenic Fusarium oxysporum co-cultivated with indigenous soil microorganisms on agar plates. Soils treated with coffee residue compost or rapeseed meal showed suppressiveness against spinach wilt disease caused by F. oxysporum f. sp. spinaciae, or against spinach wilt and lettuce root rot diseases caused by F. oxysporum f. sp. spinaciae and F. oxysporum f. sp. lactucae, respectively, and the growth of pathogenic Fusarium spp. on agar plates was suppressed when co-cultured with microorganisms in a suspension from these soils before crop cultivation. These results indicate the potential of the growth degree of pathogenic F. oxysporum estimated by this method as a diagnostic indicator of the suppressiveness of soil associated with the inhabiting microorganisms. A correlation was found between the incidence of spinach wilt disease and the growth degree of F. oxysporum f. sp. spinaciae by this co-cultivation method, indicating that suppressiveness against F. oxysporum f. sp. spinaciae induced by organic amendment applications can be evaluated by this method. The co-cultivation method may be useful for predicting and diagnosing suppressiveness against soil-borne diseases.

Plant diseases include those affecting the above-ground parts of plants and soil-borne diseases. Effective countermeasures against above-ground diseases are possible by diagnosing the initial incidence of disease. Although this is not the case for soil-borne diseases, resistant cultivars, cultural pest control, and biological pesticides may be employed as countermeasures against these diseases (11). The pathogenic fungi causing soil-borne diseases spread throughout a field, and it is often too late to employ countermeasures when the early symptoms of a disease are detected. Thus, preventive measures need to be implemented before cultivation. Some soils suppress soil-borne diseases despite the presence of a high population density of pathogens and are known as "suppressive soils". The physicochemical and biological characteristics of soil contribute to this suppressiveness (5). The effects of organic amendments, such as livestock and green manure, organic waste, compost, and peat, on the suppression of soil-borne pathogens have been extensively examined (6). If the suppressiveness of soil induced by various management approaches, such as the application of organic fertilizers, can be predicted and diagnosed for crop fields, it will be possible to reduce the labor and cost associated with excessive disinfection practices. Pathogenic F. oxysporum causes serious damage to various crop species, and the control of diseases caused by F. oxysporum is very challenging. Bonanomi et al.
(6) analyzed an extensive data set of 2,423 studies to identify the characteristics of organic amendments to soil related to suppressiveness against soil-borne diseases. They found that the most useful features showing positive correlations with the disease suppression of Fusarium spp. were total fungi, total bacteria, the C to N ratio, pH, fluorescent pseudomonads, EC, and sporegenous bacteria. Previous studies have investigated suppressiveness against Fusarium diseases. Very diverse factors, both biotic and abiotic, are involved in the expression of the suppressiveness of soil against Fusarium diseases. For example, the suppression of F. oxysporum f. sp. raphani by Pseudomonas stutzeri attaching to chlamydospores (14), antagonism to F. oxysporum f. sp. raphani by microorganisms colonizing radish roots (15), the suppression of F. oxysporum f. sp. spinaciae by increases in microbial activity and populations after the application of compost (8), and the suppression of F. oxysporum f. sp. cubense by toxic organic acids produced in biological soil disinfestation (10) have been reported. It is not possible to examine all of these factors for their contribution to soil suppressiveness against Fusarium diseases, and the factors related to suppressiveness may vary depending on the crop, Fusarium species, and soil conditions. Therefore, the suppressiveness of soil against Fusarium diseases needs to be comprehensively assessed. In order to evaluate soil suppressiveness against soil-borne diseases, inoculation experiments, in which pathogens are applied to plants under crop-growing conditions, are required. However, these experiments are laborious, and a long time is needed to observe the occurrence of disease. It is practically impossible to conduct these experiments as a preventive diagnosis in farmers' fields because countermeasures against soil-borne diseases need to be implemented before crop planting. Therefore, a new method that evaluates suppressiveness against soil-borne diseases in a short time is required. We herein employed and investigated the ability of a simple method to evaluate the suppressiveness of soil microorganisms against pathogenic F. oxysporum f. sp. spinaciae by co-cultivating the pathogenic fungus with indigenous soil microorganisms using the dilution plate technique (12). This method (the co-cultivation method) comprehensively estimates suppressiveness against a pathogen through the abundance, activity, and antagonistic ability of indigenous soil microorganisms. In a previous study, suppressiveness against F. oxysporum f. sp. spinaciae was found in cow dung compost and a microbial inoculant, and the incidence of disease in spinach caused by the pathogen was reduced in soils treated with these organic amendments. Furthermore, positive correlations were observed between the incidence of disease in spinach and the growth degree of F. oxysporum f. sp. spinaciae on agar plates in an inoculation experiment of the pathogen on spinach in soils treated with several types of organic amendments (12). However, since these findings were obtained from a pot-scale experiment using one type of soil, the applicability of the co-cultivation method to field soil with crop cultivation needs to be confirmed. In the present study, the co-cultivation method (12) for evaluating soil suppressiveness against pathogenic Fusarium spp. was examined using field soils that exhibited suppressiveness after the long-term application of organic amendments. Inoculation experiments of pathogenic F. oxysporum f. sp. spinaciae (spinach wilt disease) and F.
oxysporum f. sp. lactucae (lettuce root rot disease) to spinach and Boston lettuce, respectively, were performed for soils from two long-term experimental fields, and the relationships between the disease incidence of plants and the degrees of growth suppression of Fusarium spp. estimated by the co-cultivation method for soil before and after crop cultivation were investigated.

Soil

Soil samples were taken from long-term experimental fields with the application of organic fertilizers at two locations: Aichi prefecture and Ibaraki prefecture, Japan. Soil samples in Aichi prefecture (Aichi soil) were collected at the Nagoya University Farm, Graduate School of Bioagricultural Sciences, Togo-cho, Aichi, Japan, on March 8, 2016. This field experiment has been continued since 1987, and the soil was Yellow soil (Ultisols). Five plots were used: an unfertilized plot (NF), a plot with chemical fertilizer (CF; 500 kg N ha⁻¹ y⁻¹, 133 kg P₂O₅ ha⁻¹ y⁻¹, 500 kg K₂O ha⁻¹ y⁻¹), a plot with chemical fertilizer and 40 t ha⁻¹ y⁻¹ of farmyard manure (CF+FYM), a plot with chemical fertilizer and 40 t ha⁻¹ y⁻¹ of coffee residue compost (CRC), and a plot with 400 t ha⁻¹ y⁻¹ of farmyard manure before 2006 or 200 t ha⁻¹ y⁻¹ of farmyard manure after 2006 (FYM). Each plot was 100 m² (4 × 25 m) without replication. The cultivation history of this field is shown in the Supporting Information. The soil samples used for the inoculation experiment were placed in a plastic bag or container with a lid to avoid drying and stored at room temperature.

Chemical and microbial characteristics of soil

The pH and electrical conductivity (EC) of soil were measured in a water extract (5:1 [v/v]) with the pH meter M-12 (Horiba, Tokyo, Japan) and EC meter ES-51 (Horiba). The concentrations of elements were measured using the Soil & Plant Analyzer SFP-4i (Fujihira Industry, Tokyo, Japan) following the manufacturer's instructions for soil samples air-dried at room temperature and passed through a 2-mm sieve. The diversity of bacterial communities was investigated by analyzing the 16S rRNA sequences of Ibaraki soil. DNA was extracted from 0.5 g of soil using the FastDNA SPIN Kit for Soil (MP Biomedicals, Santa Ana, CA, USA), and sequencing was performed with a MiSeq (Illumina, San Diego, CA, USA). In the analysis of alpha diversity, the read data were subsampled to the minimum number of reads, with sequences clustered at a 97% similarity threshold, using QIIME's workflow script. Measurements were conducted for triplicate plots, except for Mix, in which one of the triplicate plots was used.

Cultivation of crops and inoculation of the pathogenic F. oxysporum strain

Spinach seeds pretreated in water for 2 d were sown into a cell tray (200 holes) filled with nursery soil ("YASAI-BAIDO ICHI GOU", Katakura & Co-op Agri, Tokyo, Japan) and grown in a greenhouse for approximately 10 d. Conidiospores of F. oxysporum f. sp. spinaciae were added to the soil sample collected from the long-term experimental field at a dose of 10⁶ conidia g⁻¹ soil, and 500 mL of soil (approximately 400 g) was placed into polycarbonate pots (outer diameter of 12 cm × height of 11.5 cm; 0.01 m²). Regarding the proliferation of conidiospores, F.
oxysporum was cultivated on potato dextrose agar (potato extract [prepared from 1 kg of potatoes boiled in 1 L of water] 100 mL, glucose 20 g, agar 15 g, and distilled water 900 mL) at 30°C for 7 d, and a 5-mm square section of the fungal lawn was cultivated in 100 mL of potato sucrose broth (potato extract 100 mL, sucrose 20 g, and distilled water 900 mL) at 30°C for 7 d with horizontal shaking. The number of conidiospores was enumerated on a hemocytometer, and culture solutions diluted to a predetermined density were used for the inoculation. Soils were fertilized with a compound inorganic fertilizer (N-P-K = 150-300-150 kg ha⁻¹), inoculated with the conidiospores, and planted with the spinach seedlings on the same day. Three spinach seedlings were planted in each pot in quadruplicate on April 22, 2016 (Aichi soil) and September 24, 2015 (Ibaraki soil). We noted the disease incidence and severity of wilting for each spinach plant on May 7, 2016 (15 d after planting, Aichi soil) and October 6, 2015 (12 d after planting, Ibaraki soil) and collected soil samples from the pots in order to measure the growth degree of F. oxysporum. The incidence and severity of wilt were evaluated as follows: 0, healthy; 1, one leaf had wilted; 2, two or three leaves had wilted; 3, half of the leaves had wilted; 4, more than half of the leaves had wilted; 5, dead or nearly dead.

Boston lettuce seeds were sown into a cell tray (200 holes) filled with nursery soil ("YASAI-BAIDO ICHI GOU") and grown on a cultivation shelf in the laboratory, which was kept at 25°C. Conidiospores of F. oxysporum f. sp. lactucae were added to the soils at a dose of 10⁶ conidia g⁻¹ soil, and 500 mL of the soil (approximately 400 g) was placed into polycarbonate pots (outer diameter of 12 cm × height of 11.5 cm; 0.01 m²). Soils were fertilized with the compound inorganic fertilizer (N-P-K = 150-300-150 kg ha⁻¹), inoculated with the conidiospores, and planted with the Boston lettuce seedlings on the same day. Three Boston lettuce seedlings were planted in each pot in triplicate on July 26, 2016 for both soils. We recorded the disease incidence and severity of wilting for each Boston lettuce plant on August 8, 2016 (13 d after planting) and collected soil samples from the pots in order to measure the growth degree of F. oxysporum. The incidence and severity of wilt were evaluated in the same manner as described above for spinach. The cultivation of spinach and Boston lettuce inoculated with the respective pathogenic fungus was also performed using sterilized CRC, FYM, and RSM soils. Soil was autoclaved at 121°C for 60 min in a bag, and the inoculation test was performed in the same manner as described above. Soil samples were subjected to analyses by the co-cultivation method within two weeks of sampling and stored at 4°C until analyzed.

Co-cultivation of F. oxysporum with soil microorganisms

Ten-gram portions of the soil sample, which was collected in triplicate before crop cultivation and from pots in quadruplicate for spinach or triplicate for Boston lettuce for each treatment after crop cultivation, were placed into a sterilized tube containing 90 mL of sterilized tap water and shaken reciprocally at 200 rpm for 30 min. One milliliter of the suspension was poured into 9 mL of sterilized tap water, mixed well, and serially diluted in the same manner. A dilution series was prepared down to a 10⁻⁶-fold dilution.
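As a worked illustration of the 0-5 wilt scoring described above, the short Python sketch below shows one way per-plant scores could be aggregated at the treatment level. The paper does not state its exact aggregation formula, so the incidence rule (share of plants scoring at least 1) and the severity index (mean score) used here are assumptions for illustration only.

```python
# A minimal sketch of aggregating per-plant wilt scores (0-5).
# The aggregation rules below (incidence = share of plants scoring >= 1;
# severity index = mean score) are assumptions, not the paper's stated formula.

from statistics import mean

def disease_incidence(scores):
    """Proportion of plants showing any wilt symptom (score >= 1)."""
    return sum(s >= 1 for s in scores) / len(scores)

def severity_index(scores):
    """Mean wilt score across all scored plants (0 = healthy, 5 = dead)."""
    return mean(scores)

# Example: 12 spinach plants (3 per pot x 4 replicate pots) in one treatment.
plot_scores = [0, 1, 3, 5, 0, 0, 2, 4, 1, 0, 5, 3]
print(f"incidence: {disease_incidence(plot_scores):.2f}")   # 0.67
print(f"severity:  {severity_index(plot_scores):.2f}")      # 2.00
```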
A quantity of 1.0 mL of the 10⁻¹- to 10⁻⁶-fold diluted suspensions was inoculated into a petri dish, and 16 mL of YPMG agar medium (peptone-yeast extract medium; yeast extract 3 g, peptone 5 g, beef extract 3 g, glucose 10 g, agar 15 g, and distilled water 1 L; pH 7.0) (2) was poured and mixed. A square section of the F. oxysporum hyphal lawn was placed in the center of the agar medium. As a control, sterilized water was inoculated instead of the diluted suspension of soil samples. Plates were incubated at 30°C for approximately one week (7 or 8 d), by which time the colony of F. oxysporum had spread fully on the control plate. The extension of hyphae was measured at the parts at which hyphae had grown the most (long diameter) and least (short diameter) in the colony of F. oxysporum for soil samples and the control, and the mean of these two values was used to calculate the growth degree. As a representative value for the growth degree of F. oxysporum, the median of the estimated growth degrees at the six dilutions from 10⁻¹ to 10⁻⁶ was calculated (12).

Statistical analysis

All statistical tests were performed with Microsoft Excel 2016 for Windows (Microsoft, Tokyo, Japan) and Ekuseru-Toukei 2015 (Social Survey Research Information, Tokyo, Japan). A principal component analysis was performed using the data for pH, EC, NH₃-N, exchangeable Mg, bacterial density, and fungal density. These parameters were selected by assessing similarities using a cluster analysis (Ward's method), with the distance between variables defined by the correlation coefficient between variables. Differences in the disease incidence of spinach and Boston lettuce and in the growth degree of F. oxysporum in the co-cultivation method from those in the control plots (NF and Cont) were statistically tested with Steel's and Dunnett's tests, respectively. The relationship between the disease incidence of crops and the growth degree of F. oxysporum estimated by the co-cultivation method was analyzed by Spearman's rho.

Chemical and microbial characteristics of soil

The chemical and microbial characteristics of the Aichi and Ibaraki soil samples are shown in Table S1. Several characteristics were significantly different between Cont and the other plots with organic amendments in Ibaraki soil. PCA was performed based on the chemical and microbial characteristics (Fig. S1). Cont, CF, and NF were located in the second and third quadrants, and the plots with organic amendments were located from the center to the first and fourth quadrants. FYM and CRC were located apart from the other plots with organic amendments, in the fourth and first quadrants, respectively. The diversity of bacterial communities in Ibaraki soil based on the 16S rRNA sequences showed no significant difference among plots (Table S2).

Disease incidence of spinach and Boston lettuce planted in soil with the long-term application of organic amendments

In NF and Cont, a large number of leaves wilted, while disease incidence was relatively suppressed in CRC, FYM, and RSM (Fig. S2). The disease incidence of spinach was significantly lower in CRC (P=0.001) and FYM (P=0.000) than in NF in Aichi soil, and was also significantly lower in RSM (P=0.007) and FM (P=0.023) than in Cont in Ibaraki soil (Fig. 1A and C).
The disease incidence of Boston lettuce was significantly lower in FYM (P=0.001) and RSM (P=0.042) than in NF and Cont in Aichi and Ibaraki soils, respectively (Fig. 1B and D). When soil was sterilized, the disease incidence of spinach increased in CRC, whereas it was maintained at a low level in FYM and RSM. The disease incidence of Boston lettuce increased in CRC and FYM, but did not significantly increase in RSM after soil sterilization (Fig. S3). No disease was observed in spinach or Boston lettuce grown on soil without the inoculation of pathogenic F. oxysporum (data not shown).

Growth degree of F. oxysporum for soil

The growth degree of F. oxysporum f. sp. spinaciae was significantly lower in CRC (P=0.000) than in NF before spinach cultivation in Aichi soil (Fig. 2A). After crop cultivation, the growth degree of F. oxysporum was significantly lower in CRC (P=0.011) and FYM (P=0.029) than in NF (Fig. 2B). In Ibaraki soil, the degree of F. oxysporum proliferation was significantly lower in RSM than in Cont (P=0.001 [before crop cultivation] and P=0.000 [after crop cultivation]; Fig. 2C and D). The growth degree of F. oxysporum f. sp. lactucae was significantly lower in CRC (P=0.001) than in NF before Boston lettuce cultivation in Aichi soil (Fig. 3A). After crop cultivation, no significant differences were observed in the growth degree of F. oxysporum (Fig. 3B). Before crop cultivation in Ibaraki soil, the growth degree of F. oxysporum was significantly lower in RSM (P=0.000) than in Cont (Fig. 3C). After crop cultivation, the growth degree of F. oxysporum was significantly lower in RSM (P=0.000) and Mix (P=0.005) than in Cont (Fig. 3D). The representative value of the growth degree based on the extension length of the colonies (median value) was significantly lower for CRC than for NF (P=0.023) before spinach cultivation in Aichi soil (Fig. 4A). After crop cultivation, the median value for F. oxysporum was not significantly different in Aichi soil (Fig. 4B). Before crop cultivation in Ibaraki soil, median values were significantly lower in RSM (P=0.041) and FM (P=0.023) than in Cont (Fig. 4C). The median value was significantly lower in RSM (P=0.035) than in Cont (Fig. 4D) after crop cultivation. The median value for F. oxysporum f. sp. lactucae was significantly lower in CRC (P=0.033) than in NF before Boston lettuce cultivation in Aichi soil (Fig. 5A). The median value for F. oxysporum f. sp. lactucae was not significantly different after crop cultivation in Aichi soil or before crop cultivation in Ibaraki soil (Fig. 5B and C). The median value was significantly lower in RSM (P=0.011) than in Cont after Boston lettuce cultivation in Ibaraki soil (Fig. 5D).

Relationship between the growth degrees of F. oxysporum and disease incidence of plants

The relationships between the disease incidence of plants and the growth degrees of F. oxysporum in Aichi and Ibaraki soils are shown in Fig. 6 (spinach, F. oxysporum f. sp. spinaciae) and Fig. 7 (Boston lettuce, F. oxysporum f. sp. lactucae), respectively. Positive correlations were found between the disease incidence of spinach and the growth degree of F. oxysporum f. sp. spinaciae based on the extension length of the colonies before (P=0.012) and after crop cultivation (P=0.011) (Fig. 6). A correlation was not observed between the disease incidence of Boston lettuce and the growth degree of F. oxysporum f. sp. lactucae (P=0.556 [before crop cultivation] and P=0.467 [after crop cultivation]; Fig. 7).
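Before the Discussion, the sketch below summarizes in Python the growth-degree calculation described in the Methods (mean of the long and short colony diameters per plate, then the median over the six dilution plates) and the Spearman correlation used to relate it to disease incidence. The numeric values are invented for illustration, scipy is assumed to be available, and this is not the authors' analysis code.

```python
# A minimal sketch of the growth-degree computation and the Spearman
# correlation against disease incidence. All numbers are illustrative.

from statistics import median
from scipy.stats import spearmanr

def plate_growth_degree(long_mm, short_mm):
    """Growth degree of one plate: mean of the longest and shortest
    hyphal extension (mm) of the F. oxysporum colony."""
    return (long_mm + short_mm) / 2

def representative_growth_degree(plates):
    """Median growth degree over the six dilution plates (10^-1 to 10^-6)."""
    return median(plate_growth_degree(lo, sh) for lo, sh in plates)

# (long, short) diameters for the 10^-1 ... 10^-6 plates of one soil sample.
plates = [(2, 0), (4, 1), (6, 3), (12, 8), (20, 15), (30, 26)]
print(representative_growth_degree(plates))  # 7.25

# Treatment-level comparison: growth degree vs. disease incidence.
growth = [1.0, 4.5, 10.0, 18.0, 25.0]     # mm, one value per treatment
incidence = [0.1, 0.3, 0.5, 0.8, 0.9]     # proportion of diseased plants
rho, p = spearmanr(growth, incidence)
print(f"Spearman rho={rho:.2f}, P={p:.3f}")
```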
Discussion

Decreases in the disease incidence of plants indicated that the CRC, FYM, RSM, and FM soils expressed suppressiveness against spinach wilt disease, while the FYM and RSM soils expressed suppressiveness against lettuce root rot disease (Fig. 1 and S2). Since suppressiveness against soil-borne diseases may be developed by the application of organic amendments (6), the long-term application of organic amendments in the fields may have contributed to the suppressiveness of these soils. Before crop cultivation, the growth of F. oxysporum f. sp. spinaciae and F. oxysporum f. sp. lactucae was suppressed more in CRC than in NF, as well as in RSM than in Cont (Fig. 2 and 3). These results suggest that the co-cultivation method has the capacity to evaluate the suppressiveness of soils against soil-borne diseases before crop cultivation in fields with the long-term application of organic amendments.

(Displaced figure legend: Aichi (A, B) and Ibaraki (C, D) soils. Values show the mean of medians at dilutions from 10⁻¹ to 10⁻⁶ with SE before (A, C) (n=3) and after crop cultivation (B, D) (n=4). NF, unfertilized; CF, chemical fertilizer; CF+FYM, chemical fertilizer and 40 t ha⁻¹ y⁻¹ farmyard manure; CRC, chemical fertilizer and 40 t ha⁻¹ y⁻¹ coffee residue compost; FYM, 400 t ha⁻¹ y⁻¹ farmyard manure; Cont, compound inorganic fertilizers; RSM, 940-4,700 kg ha⁻¹ rapeseed meal; FM, 710-3,600 kg ha⁻¹ fish meal; SBM, 1,300-6,300 kg ha⁻¹ steamed bone meal; Mix, 930-4,600 kg ha⁻¹ mixture of rapeseed meal, fish meal, and steamed bone meal. * indicates a significant difference from NF or Cont at P<0.05.)

A positive correlation was observed between the disease incidence of spinach and the growth degree of F. oxysporum f. sp. spinaciae (Fig. 6), as reported in the pot experiment (12). When the co-cultivation method is used to diagnose soil suppressiveness against Fusarium diseases, it will be necessary to establish the relationship between the degree of growth suppression of pathogenic Fusarium strains by the co-cultivation method and the disease incidence of crops. The disease incidence of spinach was low when the growth degree was less than approximately 5 mm (Fig. 6), indicating that a growth degree of 5 mm may serve as an index for diagnosis. However, caution is needed regarding an outlier observed in CF (Fig. 6B): despite a high incidence of 4, the growth degree of F. oxysporum f. sp. spinaciae was 0 mm. On the 10⁴-fold dilution plate, the colony of F. oxysporum was covered with the lawn of another fungus and the growth of F. oxysporum was inhibited, resulting in a colony extension length of 0 mm. Thus, care is needed in such cases because the growth degree of F. oxysporum estimated by the co-cultivation method may be affected and, therefore, may not correspond to the disease incidence of plants. Further investigations of the relationship between the growth degree of pathogenic Fusarium strains estimated by the co-cultivation method and the disease incidence of plants are needed before the co-cultivation method is applied for practical use. A correlation was not observed between the disease incidence of Boston lettuce and the growth degree of F. oxysporum f. sp. lactucae (Fig. 7), indicating that the co-cultivation method is not useful for evaluating soil suppressiveness against lettuce root rot disease. However, RSM showed growth suppression of F. oxysporum f. sp. lactucae by the co-cultivation method, and disease incidence was reduced in RSM (Fig. 1D, 3D, and 5D).
The disease incidence of Boston lettuce was mostly 3 or 4, and this may have affected the evaluation of the growth degree by the method, resulting in no relationship between the disease incidence and the growth degree of F. oxysporum f. sp. lactucae. The difference in the degree of suppressiveness between spinach and Boston lettuce observed in CRC (Fig. 1) may indicate that the fungistatic capability of these soils was less effective against F. oxysporum f. sp. lactucae than against F. oxysporum f. sp. spinaciae. There are very diverse pathogenic types (formae speciales) of F. oxysporum (4), and differences in their infectivities may have contributed to this difference. In addition, lettuce root rot disease shows symptoms even at a low density of F. oxysporum f. sp. lactucae in soil (13). Therefore, it was not possible to evaluate suppressiveness against this disease using the co-cultivation method, even though the growth degree of Fusarium was assessable by the method.

Disease suppression for spinach wilt was not shown in the sterilized soil of CRC, indicating that biotic factors were involved in the suppressiveness of CRC. The population density of fungi in CRC was two orders of magnitude higher than that in the other plots, and the fungal community structure may be one of the factors related to the suppressiveness of CRC against spinach wilt disease; however, suppressiveness was not observed against lettuce root rot disease. Adams et al. (1) reported that spent coffee grounds reduced the population density of F. solani f. sp. phaseoli. Hamanaka et al. (9) also showed suppressiveness against crown and root rot of tomato caused by F. oxysporum f. sp. radicis-lycopersici in CRC soil. They demonstrated that fungi were predominant in the microbial community and suggested that fungal members close to F. oxysporum were responsible for this suppressiveness. Fungi related to Fusarium in soil may also have been involved in the suppression of F. oxysporum f. sp. spinaciae on plates of up to 10⁻⁴-fold dilutions because the population density of Fusarium spp. was estimated to be approximately 10⁴ cfu g⁻¹ in CRC soil (data not shown).

Suppressiveness against spinach wilt disease was observed in the sterilized soil of RSM, indicating that abiotic factors are involved in the suppressiveness of RSM. Ueda et al. (16) reported that composted rapeseed meal exhibited lytic activity against F. oxysporum f. sp. cucumerinum and reduced the population density of F. oxysporum in soil. Zakaria et al. (18) found that volatile inhibitory substances, including ammonia, were involved in the reduction of F. oxysporum and F. solani in soils treated with oilseed (linseed, cottonseed, and soybean) meals. Some of the substances involved in suppressiveness produced during the decomposition of rapeseed meal may have been heat-stable, whereas substances such as ammonia were degraded or removed from soil by autoclaving. In addition, suppression of the growth of F. oxysporum was observed by the co-cultivation method in RSM (Fig. 4C and D), indicating that biotic factors were also involved in the suppressiveness of RSM. Members of Actinobacteria were previously suggested to play a key role as microbial defenders in soil suppressive against Fusarium wilt of strawberry caused by F. oxysporum f. sp. fragariae (7). The population density of actinomycetes was higher in RSM than in Cont (Table 1).
We observed a higher number of actinomycete colonies on the 10⁻⁵-fold dilution plates for RSM than on the plates for the other plots, indicating that the suppression of Fusarium is related to actinomycetes. Although FYM exhibited suppressiveness against spinach wilt and lettuce root rot diseases (Fig. 1A and B), the co-cultivation method showed no suppression of the growth of F. oxysporum f. sp. spinaciae (Fig. 2A and 4A). Abiotic factors may have been involved in this suppressiveness because the sterilized soil of FYM still showed suppressiveness in spinach. Ueda et al. (17) reported that sterols extracted from bark compost, which were not degraded by autoclaving, may have functioned as defensive substances against pathogenic fungi. Such substances may contribute to the suppressiveness of FYM soil on plates at low dilutions in the co-cultivation method. Toyota and Kimura (14) found that bacterial strains attached to the chlamydospores of F. oxysporum f. sp. raphani and lysed them, but did not inhibit mycelial growth. In this case, a biotic factor was responsible for suppressiveness; however, it was not possible to evaluate this suppressiveness using the co-cultivation method. The co-cultivation method thus has some limitations; i.e., it cannot be applied when the cause of suppressiveness is abiotic. In addition, the degree of suppression obtained with one combination of a pathogenic F. oxysporum strain and a crop cannot be extrapolated to other combinations of diseases and crops; the pathogenic strain of F. oxysporum responsible for each combination of disease and crop needs to be used. However, a merit of the method is that a quick and easy estimation of soil suppressiveness is possible within 10 d using only common equipment, such as an autoclave and a clean bench. Therefore, the co-cultivation method may be useful for predicting and diagnosing suppressiveness against soil-borne diseases.

Conclusions

We applied the co-cultivation method of F. oxysporum and soil microorganisms to soils that exhibited suppressiveness against soil-borne diseases due to the long-term application of organic amendments. A correlation was found between the disease incidence of spinach and the growth degree of F. oxysporum f. sp. spinaciae, indicating that it is possible to evaluate suppressiveness against spinach wilt disease by the co-cultivation method. However, there were some exceptions in which soil showed a low disease incidence but the growth of F. oxysporum was not suppressed on agar plates, suggesting limitations in the applicability of this method. Further investigations are needed to elucidate the relationship between suppressiveness and the biological and chemical properties of soil, in addition to the growth degree of F. oxysporum, in order to comprehensively predict and diagnose suppressiveness.
Pain Assessment with the BPS and CCPOT Behavioral Pain Scales in Mechanically Ventilated Patients Requiring Analgesia and Sedation

Background: Intensive Care Unit (ICU) patients often experience pain, especially during diagnostic, nursing, and therapeutic interventions. Pain assessment using the Behavioral Pain Scale (BPS) and the Critical Care Pain Observation Tool (CCPOT) is recommended, but is difficult in patients undergoing deep sedation. This study analyzed the usefulness of the BPS and CCPOT scales in assessing pain among patients with varying degrees of sedation. Methods: In 81 mechanically ventilated and sedated ICU patients, 1005 measurements were performed using the BPS and CCPOT scales. The study was conducted by 3 trained observers 3 times a day (each time measuring at rest, during a painful nursing intervention, and after the intervention). Scores on the Richmond Agitation-Sedation Scale (RASS), the Simplified Acute Physiology Score (SAPS II), and the Acute Physiology and Chronic Health Evaluation (APACHE II) were also analyzed from medical records, as was information on the length of hospitalization and treatment. Results: Signs of pain increased significantly (p < 0.001) during interventions on both scales (BPS and CCPOT), and then returned to values close to those of the resting period. RASS results correlated significantly (p < 0.05) and positively with the results of the BPS and CCPOT. A strong correlation was found between the results of both scales at each stage of the study (R = 0.622-0.907). Conclusions: Nursing procedures are a source of pain in analgosedated patients. The BPS and CCPOT scales are useful tools for assessing the occurrence of pain in mechanically ventilated patients, including those under deep sedation.

Introduction

In the Intensive Care Unit (ICU), emphasis is placed on the control of pain, agitation, delirium, patient immobility, and sleep (PADIS) [1]. PADIS is an extended concept based on the 2013 guidelines, which addressed the occurrence of the so-called ICU triad of pain, agitation, and delirium (PAD) [2]. Failure to treat the above conditions worsens the effectiveness of therapy, causes unnecessary suffering for patients, and negatively impacts patient quality of life after ICU discharge [1,3,4]. Studies have shown that patients hospitalized in the ICU often experience pain while at rest [5-7], but especially during diagnostic, care, and treatment activities [5-14]. Unrelieved pain can result in chronic pain, posttraumatic stress disorder symptoms, and lower quality of life. To provide optimal pain management for ICU patients, accurate routine pain assessment is recommended [3]. Better outcomes were found after the implementation of pain assessment tools, including a reduction in the duration of mechanical ventilation and ICU stay [15,16].

Material and Methods

The observational study was carried out in 2017-2019 in the Intensive Care Unit of the University Hospital in Krakow after obtaining the consent of the bioethics committee (consent No. 1072.6120.161.2017). The study group consisted of 81 patients: 34 (41.97%) women and 47 (58.03%) men, aged 56 to 75.8 years (mean age 63.1 ± 17.21). The study included adult patients unable to self-report pain, requiring sedation (RASS less than or equal to −1) and analgesia, mechanically ventilated, hemodynamically stable, and staying in the ICU for at least 48 h.
The study excluded patients with paresis or paralysis of the upper and/or lower limbs, patients receiving neuromuscular blocking drugs, patients after sudden cardiac arrest (SCA), and patients with injuries that prevented pain assessment (e.g., in the craniofacial region). The researchers did not intervene in the treatment regimen of the studied patients. Prior to our study, monitoring of sedation and analgesia with standardized tools had not been routinely used in the studied ICU. Analgesia and sedation were considered individually for each patient and adjusted to the patient's clinical condition, as stated in the DAS guideline [22], and in surgical patients, analgesic treatment using algorithms developed by Polish pain relief teams was applied [23]. Deep sedation (according to the eCASH guidelines) [21] was used in the following patient groups: those with severe respiratory failure and dyssynchrony, those undergoing surgical treatment requiring absolute immobilization, and those with severe brain injury and intracranial hypertension. Analgesics were administered by continuous infusion at doses selected individually for each patient. For the purposes of the study, no additional boluses of analgesics were administered during any stage of observation. The study used the behavioral pain scales (BPS and CCPOT) and the Richmond Agitation-Sedation Scale (RASS) to assess the depth of sedation. In addition, data from medical records were analyzed, such as patient records, information on the length of hospitalization, sedation and analgesics, and the risk-of-death scores on the Simplified Acute Physiology Score (SAPS II) and the Acute Physiology and Chronic Health Evaluation (APACHE II). The Polish version of the CCPOT is a useful and reliable tool for analyzing pain in critically ill, intubated patients under an analgosedation-without-hypnosis protocol based on opioid infusions [24]. The Polish version of the BPS also shows reliable internal consistency (Cronbach's alpha 0.6883) and is recommended for pain assessment in sedated, mechanically ventilated patients [25]. In our study, pain was defined as a BPS score ≥ 5 and a CCPOT score ≥ 3 [24,26,27]. The study was conducted simultaneously by three independent observers trained theoretically and practically in the uniform use of the BPS and CCPOT scales. The researchers did not consult each other on the results of the assessment. The observation period was 24 h (randomly selected from the entire stay in the ward), during which pain was assessed three times a day (in the morning from 6.30 a.m. to 9.00 a.m., in the afternoon from 1.00 p.m. to 3.00 p.m., and in the evening from 7.00 p.m. to 10.00 p.m.), each time at rest, during a nursing intervention, and after the intervention. Rest meant no medical or nursing intervention for at least 30 min. The interventions during which pain was assessed included evacuating secretions from the bronchial tree, changing the dressing on a surgical wound, and repositioning the patient in bed. The post-intervention evaluation was carried out a dozen to several dozen minutes after the intervention, without stimulation by external stimuli. Because disrupting factors occurred in some patients during the scheduled observations, such as withdrawal of sedation on the day of measurement, diagnostic and surgical procedures with the use of muscle relaxants, or in-hospital transport for tests or procedures, the analysis finally included 1005 measurements meeting all the criteria.
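As a minimal illustration of the classification rule above (pain defined as a BPS score of at least 5 or a CCPOT score of at least 3, each scale judged against its own cutoff), the following Python sketch encodes one observation per patient and phase. The record layout and helper names are hypothetical, not the study's instruments or code.

```python
# A minimal sketch of the pain classification used in this study: BPS >= 5
# or CCPOT >= 3 flags pain on the respective scale. The Observation record
# and its field names are illustrative assumptions, not the study's forms.

from dataclasses import dataclass

BPS_PAIN_CUTOFF = 5     # BPS total ranges 3-12
CCPOT_PAIN_CUTOFF = 3   # CCPOT total ranges 0-8

@dataclass
class Observation:
    patient_id: int
    phase: str          # "rest", "intervention", or "post"
    bps: int
    ccpot: int

def pain_by_bps(obs: Observation) -> bool:
    return obs.bps >= BPS_PAIN_CUTOFF

def pain_by_ccpot(obs: Observation) -> bool:
    return obs.ccpot >= CCPOT_PAIN_CUTOFF

observations = [
    Observation(1, "rest", 3, 0),
    Observation(1, "intervention", 6, 4),
    Observation(1, "post", 4, 1),
]
for o in observations:
    print(f"{o.phase:>12}: BPS {'pain' if pain_by_bps(o) else 'no pain'}, "
          f"CCPOT {'pain' if pain_by_ccpot(o) else 'no pain'}")
```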
Statistical Analysis Methods

The R CRAN software for Windows (version 4.0.2, created by Bengtsson H., Jacobson A. and Riedy J., Vienna, Austria) was used for statistical analysis. The analysis of quantitative variables (raw scores of the BPS, CCPOT and RASS) was performed by calculating the mean, standard deviation, median, quartiles, minimum and maximum. The analysis of qualitative variables (BPS and CCPOT scores divided into two categories: no pain and occurrence of pain) was performed by calculating the number and percentage of occurrences of each value. The comparison of the values of the qualitative variables between groups was performed using the chi-square test (with Yates' correction for 2 × 2 tables) or Fisher's exact test where low expected frequencies appeared in the tables. The comparison of the values of quantitative variables between two groups was performed using the Mann-Whitney test. Correlations between quantitative variables were analyzed using the Spearman correlation coefficient. Non-parametric methods were chosen since the quantitative variables were, by definition, not normally distributed (the raw scores of the BPS, CCPOT and RASS can take only a few distinct values). The analysis of the influence of quantitative variables on a dichotomous (binary) variable was performed using logistic regression. The results are presented as odds ratios (OR) with 95% confidence intervals. The level of significance was p < 0.05.

Results

The hospitalization length of the studied patients ranged from 11 to 44 days (mean 32.7 ± 35.47 days). On the APACHE II scale, the patients scored from 20 to 29 points (mean 25.43 ± 9.39) in the assessment of the severity of their condition and 30-55 points (mean 42.73 ± 24.38) in the assessment of the risk of death. The corresponding SAPS II scores came to 48-70 points (mean 59.3 ± 16.57) and 41-85 points (mean 60.96 ± 24.42), respectively. The risk of death increased significantly with the age of the studied patients (OR 1.062; 95% CI: 1.025-1.101; p < 0.001). Moreover, the length of hospitalization correlated negatively with the SAPS II scores (p < 0.05): the more severe the patients' condition and the higher the risk of death on this scale, the shorter the hospitalization, which often ended in death. A similar relationship was not demonstrated for the APACHE II scale. Most of the patients required deep sedation (Figure 1).
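The age-mortality odds ratio reported above (OR 1.062 per year of age) comes from a logistic regression of the kind described in the statistical methods. A hedged Python/statsmodels sketch of that computation is shown below; the study itself used R, the data here are synthetic, and the output is illustrative only.

```python
# A hedged sketch of an age -> mortality logistic regression reported as an
# odds ratio with a 95% CI. Synthetic data; not the study's dataset or code.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 81
age = rng.uniform(20, 90, n)
# Synthetic outcome: log-odds of death rise mildly with age.
p = 1 / (1 + np.exp(-(-4.0 + 0.06 * age)))
died = rng.binomial(1, p)

X = sm.add_constant(age)          # intercept + age
res = sm.Logit(died, X).fit(disp=0)

or_age = np.exp(res.params[1])    # odds ratio per 1 year of age
ci_low, ci_high = np.exp(res.conf_int()[1])
print(f"OR {or_age:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```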
Analgesia and sedation were tailored to each patient's clinical condition. Among the analgesics, Oxycodone and Ketamine were used, as well as a Buprenorphine transdermal patch (mostly during the later period of hospitalization). The sedating drugs (apart from Ketamine) used in the study group were Propofol and Midazolam, and occasionally neuroleptics were also administered (Haloperidol, 1-2 mg i.m., 1-3 times/day). Data on the average doses of analgesics and sedatives administered intravenously by continuous infusion in the study group are presented in Table 1.

In the vast majority of measurements, patients had no signs of pain, except during the interventions. The no-pain scores were very similar on both scales (95.42% and 96.52%, respectively, before and 93.23% and 95.33% after the intervention). In about 1/3 of the measurements, signs of pain were observed in patients during the intervention (Table 2).

Comparing the mean scores before, during and after the nursing procedures from all measurements for both scales (BPS and CCPOT), it was shown that the signs of pain increased significantly during interventions (p < 0.001) and then returned to values close to resting levels at the third observation (which took place a dozen to several dozen minutes after the intervention; Table 3). The RASS scores correlated significantly (p < 0.05) and positively with the BPS and CCPOT scores at all stages of the study (Table 4), which means that patients under deep sedation showed fewer signs of pain. Moreover, it was found that the degree of sedation, meaning the RASS score, was a significant (p < 0.05) independent predictor of an increase in pain intensity during an intervention. The regression parameter for the BPS was 0.254, so the higher the RASS score (the more shallowly sedated the patient), the more visible the signs of pain observed during an intervention (on average 0.254 points on the BPS scale per one RASS point). In the case of the CCPOT, the regression parameter was 0.363, so each RASS point corresponded to an average increase of 0.363 points in the pain score. Yet, no differences were found related to the administration of specific analgesics and sedatives (p > 0.05). A strong correlation was demonstrated between the results of both scales at each stage of the study (Table 5). Analyzing the data in the subgroups of women and men, men tended to show higher scores than women, both before (p = 0.055 for the CCPOT and p = 0.018 for the BPS) and after interventions (p = 0.054 for the CCPOT and p = 0.087 for the BPS). On the other hand, the results of the scales during the painful procedures were similar in both sexes (p > 0.05).
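One common way to test the repeated measures compared above (rest vs. intervention vs. post), and the combination this paper reports for its time-of-day comparisons, is Friedman's test followed by Bonferroni-corrected pairwise Wilcoxon post-hoc tests. The Python sketch below shows that workflow on synthetic BPS scores; it is an illustration under those assumptions, not the authors' analysis code.

```python
# A hedged sketch of a repeated-measures comparison: Friedman's test across
# three phases, then pairwise Wilcoxon tests with Bonferroni correction.
# The score arrays are synthetic, for illustration only.

from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

# BPS totals for the same patients at rest, during, and after intervention.
rest         = [3, 3, 4, 3, 5, 3, 4, 3, 3, 4]
intervention = [5, 4, 6, 5, 7, 4, 6, 5, 4, 6]
post         = [3, 4, 3, 4, 5, 3, 5, 3, 4, 4]
phases = {"rest": rest, "intervention": intervention, "post": post}

stat, p = friedmanchisquare(rest, intervention, post)
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")

# Pairwise post-hoc tests, Bonferroni-corrected for 3 comparisons.
pairs = list(combinations(phases, 2))
for a, b in pairs:
    w, p_raw = wilcoxon(phases[a], phases[b])
    print(f"{a} vs {b}: corrected p={min(p_raw * len(pairs), 1.0):.4f}")
```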
The age of the patients correlated negatively with the results of the BPS and CCPOT scales, except during interventions. However, only for the BPS scale was a statistically significant negative correlation demonstrated between its results before the procedures and the age of the patients (R = −0.073; p = 0.027). Thus, the intensity of the signs of pain was not related to the age of the patients during the procedures, but younger patients manifested pain of slightly greater intensity than older patients before and after the interventions (especially before them; see Table 6). It should be noted that disease diagnoses (meaning the percentages of people after injuries, potentially more prone to pain) were similar in all age groups. Moreover, the results of both scales during and after the procedures correlated positively with the length of patient hospitalization (Table 7). The influence of the time of day on the occurrence of pain was also analyzed. Higher results during evening measurements were found for the BPS during an intervention (p = 0.043) and for the CCPOT both before the intervention (p = 0.03) and during the intervention (p = 0.03; Figure 2).

(Figure 2 legend: Mean BPS and CCPOT scores at different times of day (data from 1005 measurements). * p < 0.05; Friedman's test with post-hoc analysis (Wilcoxon paired tests with Bonferroni correction). "Interventions" refer to painful nursing procedures (endotracheal suctioning, patient turning, dressing changes).)

Discussion

Pain is one of the most stressful events and among the most common traumatic memories for patients in the intensive care unit (ICU) [8]. Unrelieved pain can have numerous negative consequences for patients [3,28]. The BPS and CCPOT are scales recognized for pain assessment in patients unable to verbalize pain, in line with the PADIS guidelines [1], including patients with cerebral trauma [10,31-33] and delirium [34]. However, their validity in some groups of patients, e.g., those with burns or cognitive deficits, requires further research [35]. There are also differences of opinion about the validity of using these scales in unconscious and deeply sedated patients. According to some authors, both of these scales are equally useful for assessing pain in patients with varying degrees of analgesia or sedation [36], whether conscious or unconscious [37]. This was also confirmed by the studies conducted by Severgnini et al. [29], in which the level of patient awareness did not affect pain detection during nursing procedures, and the combined use of the BPS and CCPOT gave better sensitivity than either of these scales separately. This suggests that nursing procedures are a source of pain regardless of the level of sedation [37], and combining the BPS and CCPOT may be a valuable tool for pain assessment in critically ill mechanically ventilated patients [8,29]. According to other authors, however, there are differences in the results of these scales between conscious and unconscious, and sedated and unsedated, patients, both before and after nursing procedures [7,9]. The heterogeneity of the above data inspired our research in this area.
Another reason for undertaking this research was the fact that previous studies using the BPS and CCPOT scales were usually conducted on mildly sedated or unsedated patients. For example, Chanques et al. tested the BPS and CCPOT scales among intubated and non-intubated patients who could not verbalize pain and whose RASS was −3 and above [8]. In turn, Puntillo et al. [9] studied patients with a RASS from −1 to 0. The Polish version of the CCPOT was validated in an ICU where a no-hypnotic, analgesia-based protocol was implemented, in a group of intubated patients with a RASS of −3 or greater [24]. Pain scales are usually tested by comparing the results between a painful and a painless procedure or at rest, as in the study by Rijkenberg et al., where the mean difference between painful and painless procedures was 3.13 ± 1.56 (p < 0.001) [11]. In our research, we chose turning the patient, suctioning the bronchial tree, and changing the wound dressing as the painful procedures, which were considered among the most painful nursing procedures in multicenter studies from 28 countries [9]. Comparing the BPS and CCPOT results during these procedures with the at-rest results and the results after the procedures, we also found a statistically significant difference (p < 0.0001). Sedated patients, including some profoundly sedated with a RASS score of less than −3, showed signs of pain during painful procedures on both the CCPOT and BPS, with lighter sedation being a significant independent predictor of increased signs of pain.

A separate issue is the matter of deep sedation in our group of patients. Sedation and analgesia are undoubtedly important elements in the treatment of mechanically ventilated patients in the ICU. Analgosedation provides patients with comfort, improves patient-ventilator synchrony, reduces anxiety and agitation [38], allows invasive procedures to be performed, and reduces stress and oxygen consumption. On the other hand, studies indicate that the use of excessive sedation is disadvantageous. Deep sedation is associated with a prolonged duration of mechanical ventilation and a longer stay in the ICU [18,39], is a risk factor for delirium [40], causes long-term adverse psychological sequelae [18,41], and leads to a reduction in patient survival [42]. According to reports, the use of lighter sedation (compared to deep sedation) does not increase the incidence of adverse effects [4,43,44]. The current recommendations of the Society of Critical Care Medicine prescribe the maintenance of light sedation in all patients undergoing mechanical ventilation, while recognizing that this is a conditional recommendation due to the low quality of the available evidence [1,45]. Studies by Wang et al. [46] indicate that over 87% of clinicians use analgosedation in ICU patients, and more than half never apply strategies for keeping the patient conscious. According to the eCASH guidelines, deep sedation is justified in some patients despite its potential side effects, e.g., in patients with ARDS and dyssynchrony, those treated with muscle relaxants, surgical patients requiring immobilisation, and those with severe brain injury and intracranial hypertension [21], i.e., in the majority of our study patients. The two most validated and reliable sedation scales are recommended for use in mechanically ventilated ICU patients: the Sedation Agitation Scale (SAS) and the Richmond Agitation-Sedation Scale (RASS) [47]. We used the latter in our research.
Despite the reproducibility and clarity of the scales themselves, there is no defined cut-off point for "light" sedation. Some studies define deep sedation as a RASS score from −3 to −5 [3,6] and others as a RASS score of −4 or −5 [14]. It should be emphasized that the cut-off point is an important clinical differentiation, as a RASS result of −3 means that the patient still responds to a voice, while results of −4 or −5 indicate that the patient is unresponsive to voice and is often in a coma [45]. This also has an impact on pain assessment. The use of deep sedation, although not recommended, is a rather common ICU phenomenon in many countries [28,38]. As shown by multicenter studies conducted in European countries, organizational factors (the size of the ICU, the number of staff per patient, teamwork) influence the way sedation is used in patients [48]. The preferences of a given center and its physicians cause large differences in the approach to sedation of critically ill patients, hence the need to optimize sedation and analgesia practices [49,50]. In our study, most of the patients were deeply sedated (assuming a RASS below −3). The ICU practices related to long-term analgosedation (>24 h) were rather typical of Polish hospitals and included opioid, Midazolam and Propofol treatments [17]. Our patients required sedation and long-term mechanical ventilation, mainly due to ARDS, and some showed dyssynchrony with the ventilator. Also, a fairly large number of trauma surgical patients and patients with intracranial hypertension were deeply sedated in addition to receiving analgesia. In patients with these characteristics, pain monitoring is particularly important, so we included them in the analysis to determine whether the BPS and CCPOT would also allow the detection of signs of pain in patients under deep sedation. We excluded from the research patients whose conditions could make the expression of pain impossible regardless of sedation, for example, people with paresis or paralysis of the limbs, those receiving neuromuscular blocking drugs, or those after craniofacial injuries.

In general, pain was well controlled in the studied patients because, apart from during the interventions, they did not show signs of it according to the adopted criteria (93.2-95.4% of measurements on the BPS scale and 95.3-96.5% on the CCPOT scale, respectively). Our study confirms the usefulness of the BPS and CCPOT scales in the assessment of pain in the studied group of patients. Signs of pain, including in patients with RASS scores below −3, were much more visible to the independent observers during painful procedures than at rest (p < 0.001), and returned to values close to resting levels some time after the intervention was over. This was confirmed by the simultaneous use of the BPS and CCPOT scales and the strong correlation between the results of both scales at each stage of the study, especially evident during painful interventions (r = 0.907). Our results show a similar trend to the data obtained by Rijkenberg et al., who found that both BPS and CCPOT scores increased significantly during nursing procedures and returned to baseline levels in a short time [51]. In our research, this increase in scores during painful procedures was smaller (about 1 point), but statistically significant.
This could be due to differences in the selection of patients for the study, the deeper sedation of our patients (and thus a less pronounced manifestation of pain), and a much larger number of measurements. In turn, the correlation between the scales was stronger in our study. A similar correlation between the BPS and CCPOT was demonstrated in the reports of Liu et al. [52] and a literature review by Birkedal et al. [53]. The severity of pain during painful procedures was more evident in patients who had been hospitalized longer in the ICU, which may be explained by the cumulative effect of negative stimuli over the length of stay in the ward (e.g., fatigue, sleep deprivation, excess stimuli from the ICU environment), which may affect the pain experience [54]. The clearer signs of pain at the end of the day compared to the time before noon could perhaps be explained in a similar way, by the amount of stimuli and activity to which patients are exposed during the daytime hours [55]. Behavioral pain scales do not allow the assessment of pain in patients who cannot manifest it in any visible way, for example, patients with limb paralysis or craniofacial injuries. It should be assumed that they also experience pain during nursing, diagnostic, and treatment activities (although confirming this would require the use of other methods, e.g., pupillometry), so effective methods of minimizing pain should be implemented.

Study Limitations

The results of our study, based on more than 1000 measurements by three independent observers, seem to confirm the usefulness of the BPS and CCPOT scales in assessing pain in patients undergoing sedation, including deep sedation, yet they have some limitations. The research was conducted in one center (the Trauma Centre for Emergency and Disaster Medicine), which treated the most seriously ill patients, usually those with multiple and multi-organ injuries requiring highly specialized monitoring, treatment (often surgical interventions) and nursing care, in which the analgosedation treatment regimen may have differed from that adopted in other ICUs. Therefore, it was not possible to extrapolate the obtained results. Moreover, the profile of the studied ward, the large differentiation among patients in terms of disease entities, the related individual selection of analgesia, and the interactions of sedatives and analgesics made it impossible to analyze the relationships between the type and doses of analgesics and the results of the BPS, CCPOT and RASS scales. Further research is required.

Implications for Practice

As there are no universal signs of pain, and individual pain management based on different tools is a complex task, it should be entrusted to team members after prior training. Pain assessment tools should be carefully selected according to the patient's health situation [3,56]. Pain monitoring using the BPS and CCPOT scales among sedated, mechanically ventilated patients appears sensitive and reliable, and should be a routine element of ICU management (according to the ABCDEF algorithm and eCASH guidelines), including regular nursing practice. One should be aware that patients who not only cannot verbalize pain but also do not manifest it in a manner captured by the behavioral pain assessment scales probably also experience it during nursing, diagnostic, and therapeutic activities. Therefore, it is necessary to use other methods of pain assessment (e.g., pupillometry) in this group.
Optimizing analgesia (including through nurse-controlled analgesia) and sedation in ICU patients, especially during painful procedures, is essential. Conclusions The results of the study indicate that some nursing procedures commonly used in the ICU are a source of pain, even in patients undergoing deep sedation and receiving analgesics. The BPS and CCPOT scales are useful tools for assessing the occurrence of pain in this group of patients. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Jagiellonian University (protocol code 1072.6120.161.2017, date of approval: 29 September 2017). Informed Consent Statement: Patients' consent was waived due to their physical and mental inability to give consent; these conditions were a characteristic feature of the study population. During this observational study, the researchers did not interfere with the routine nursing activities in the ICU. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
2022-09-03T15:15:30.362Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "59dc5a0e9eba992c4c3b059d0215efba3e243049", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/17/10894/pdf?version=1662017068", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb801e8deafffcb40961ef3f0f6ac0b347df334a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247358398
pes2o/s2orc
v3-fos-license
Clinical presentation and outcomes in children with retinoblastoma managed at the Uganda Cancer Institute Background The majority of patients with retinoblastoma, the most common intraocular cancer of childhood, are found in low- and middle-income countries (LMICs), with leukocoria being the most common initial presenting sign and indication for referral. Findings from the current study serve to augment earlier findings on the clinical presentation and outcomes of children with retinoblastoma in Uganda. Methods This was a retrospective study in which we reviewed records of children admitted with a diagnosis of retinoblastoma at the Uganda Cancer Institute from January 2009 to February 2020. Files were retrieved from the electronic database using admission numbers, and patient information was recorded in a data extraction tool. Results A total of 90 retinoblastoma patients were studied, with a mean age at the first Uganda Cancer Institute (UCI) presentation of 36.7 months. There were more males (57.8%) than females, with a male to female ratio of 1.37 : 1. The majority (54.4%) had retinoblastoma treatment prior to UCI admission. The most common presenting symptoms were leukocoria (85.6%), eye reddening (64.4%), and eye swelling (63.3%). At 3 years of follow-up after index admission at UCI, 36.7% of the patients had died, 41.1% were alive, and 22.2% had been lost to follow-up. The median 3-year survival for children with retinoblastoma in our study was 2.18 years. Significant predictors of survival in the multivariate analysis were follow-up duration (P < 0.001), features of metastatic spread (P = 0.001), history of eye swelling (P = 0.012), and bilateral enucleation (P = 0.011). Conclusions The majority of children who presented to the Uganda Cancer Institute were referred with advanced retinoblastoma, and there was a high mortality rate. Retinoblastoma management requires a multidisciplinary team that should include paediatric ophthalmologists, paediatric oncologists, ocular oncologists, radiation oncologists, and nurses. Introduction Retinoblastoma is a primary intraocular childhood malignancy arising from the embryonal cells of the retina and is the most common primary intraocular malignancy of childhood, with over 90% of cases diagnosed before the age of five [1][2][3]. The retinoblastoma incidence in the USA is 1 in 15,000-20,000 live births, with 60% of cases presenting with unilateral disease [4]. It is estimated that 250 to 300 new cases of retinoblastoma are diagnosed in the United States of America and approximately 8,000 to 9,000 cases worldwide every year [3,5]. However, the annual incidence of retinoblastoma is the same across geographical regions, and the annual prevalence of retinoblastoma is higher in Africa than in higher-income countries [3,6]. Studies show that over 95% of children with retinoblastoma in the United States and other developed nations survive their malignancy, whereas approximately 50% survive worldwide [5]. In low-income countries, patients with retinoblastoma are diagnosed at a median age of 30.5 months, with 49.1% of these presenting with extraocular disease and 18.9% having metastatic disease [3]. In contrast, patients from high-income countries were diagnosed at a median age of 14.1 months, with 98.5% of these having intraocular retinoblastoma and only 0.3% exhibiting metastatic disease [3].
Consequently, a higher median age at diagnosis, an extended lag time from first presentation until patients reach a retinoblastoma treatment centre, and advanced disease at index diagnosis imply that retinoblastoma patients from low-income countries have poorer survival rates than retinoblastoma patients from high-income countries [3,7]. In a retrospective study of children with retinoblastoma in India, leukocoria was the most common presenting symptom, with 21 months being the median age at which first signs were observed by parents, and one-third of patients presented with bilateral disease at first diagnosis [8]. In that study, age at presentation, lag period, and stage of disease were found to be associated with poor survival outcomes. In a retrospective study performed in the Democratic Republic of Congo, the average age at diagnosis of retinoblastoma patients was 32 months, with the first sign of disease noticed by parents at a mean age of 20 months, thereby resulting in a year of lost time before reaching hospital for diagnosis and treatment [9]. These figures are similar to those reported in an earlier Ugandan study by Waddell and colleagues, where the median age at diagnosis was 33 months and 17.5 months for unilateral and bilateral disease, respectively [10]. These delays in diagnosis were thought to contribute to the high mortality from retinoblastoma in Ugandan patients. Retinoblastoma treatment has shifted towards increased utilization of chemotherapy and less radiotherapy, with surgical treatment remaining a predominant and relevant treatment modality. A study by Waddell et al. revealed a higher mortality rate among children with retinoblastoma prior to the utilization of chemotherapy modalities, even in those with intraocular disease [10]. However, in a retrospective population-based cohort study in Taiwan, the survival outcome was significantly associated with enucleation intervention [11]. Intraocular retinoblastoma allows vision and eye salvage, whereas extraocular disease offers nearly no possibility of vision salvage [8]. Before 2015, most children with confirmed or suspected retinoblastoma were referred for further treatment to Ruharo eye centre, a missionary hospital in western Uganda. There, children with retinoblastoma who needed chemotherapy were treated with a full chemotherapy regimen (vincristine, etoposide, and cyclophosphamide). However, in the last 4 years, more of these children have been referred to the Uganda Cancer Institute (UCI), a regional centre of excellence for oncology in East Africa. With the increasing number of children with retinoblastoma admitted and managed at the paediatric oncology unit of UCI, there was a need to study the pattern of clinical presentation, outcomes and predictors of survival among children with retinoblastoma. This study aimed to describe the clinical presentation, outcome, and predictors of survival of children managed at UCI. The correlation between the clinical presentation and outcomes will help the health care team involved in the management of retinoblastoma patients to mobilize more resources and develop protocols that will improve patient outcomes. Materials and Methods 2.1. Study Design and Setting. This was a retrospective review of records of children with retinoblastoma admitted to the Uganda Cancer Institute from the 1st of January 2009 to the 28th of February 2020. The Uganda Cancer Institute is a regional centre of excellence for oncology in East Africa.
The inclusion criteria were children below 18 years with a retinoblastoma diagnosis, admitted with files opened at UCI between January 2009 and February 2020. We excluded 7 patients whose files/charts were missing important information and whose caretakers could not be contacted. A total of 90 patients with 115 eyes were enrolled in the study. The following data were collected from the medical records: age at diagnosis, sex, clinical signs and symptoms, laterality, treatment (chemotherapy, surgery, and radiotherapy), family history, and vital status (alive or dead). Follow-up information on study participants was also extracted from the charts, and missing information (especially vital status) was obtained through telephone interviews with patients' caretakers. The primary endpoint in this study was 3-year survival, and the secondary endpoint was the ocular outcome (number of salvaged eyes and presence of vision). This study was approved by the School of Medicine Research Ethics Committee (#2020-031). The diagnosis of retinoblastoma was made clinically and/or histopathologically by ophthalmologists, paediatric oncologists, and pathologists. Intraocular disease was classified using the International Intraocular Retinoblastoma Classification. Staging was performed at the UCI to determine the treatment of choice for each participant. Treatment consisted of one or a combination of the following treatment modalities: chemotherapy, surgery (unilateral or bilateral enucleation), and external beam radiotherapy. Data Management and Analysis. The extracted data were entered into an electronic database using Epidata 4.2 by the double-entry method. The database was checked and cleaned for consistency. The cleaned data were exported into STATA 17 for analysis. Continuous variables such as age and lag time were summarized as the mean ± standard deviation. Categorical variables such as sex, clinical signs and symptoms, laterality, and vital status were summarized as proportions or frequencies and presented in tables or graphs. The time-specific survival rate (3-year survival) with a 95% confidence interval (CI) was estimated by the Kaplan-Meier method and assessed by a log-rank test. Three-year survival was defined over the period of follow-up of 3 years from the date of index admission at UCI. Cox regression analysis was used to assess the predictors of outcome, presented as hazard ratios with their 95% confidence intervals at both the bivariate and multivariate levels. A P value < 0.2 was used as the threshold at bivariate analysis, and all variables with P < 0.2 were entered into the multivariate Cox regression analysis. A P value < 0.05 was considered significant. Results A total of 97 charts with retinoblastoma diagnoses were retrieved from the records department of the Uganda Cancer Institute, and 90 charts with confirmed retinoblastoma diagnoses were studied. The mean age was 33.3 months (SD 25.3) for bilateral retinoblastoma and 38 months (SD 25.6) for unilateral retinoblastoma, with an overall mean age at first UCI presentation of 36.7 months (SD 25.5). There were more males than females, with a male to female ratio of 1.37 : 1. The majority of study subjects (36.7%, n = 33) came from the central region of the country, and 13.3% (n = 12) of the children were nationals of neighbouring countries (Table 1).
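As a rough illustration of the survival workflow described in the analysis section above (Kaplan-Meier estimation followed by Cox regression yielding hazard ratios), here is a minimal Python sketch using the lifelines library. The study itself used STATA 17; the column names and toy records below are hypothetical placeholders, and a real fit needs far more rows than this example.

```python
# Minimal sketch of the survival workflow described above, using the
# lifelines library in Python (the study itself used STATA 17).
# Column names and records are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "years_followed": [0.5, 2.1, 3.0, 1.2, 3.0, 0.8],
    "died":           [1,   1,   0,   1,   0,   1  ],   # 0 = censored
    "metastasis":     [1,   0,   0,   1,   0,   1  ],
    "eye_swelling":   [1,   1,   0,   1,   0,   1  ],
})

# Kaplan-Meier estimate of the survival function (read off 3-year survival)
km = KaplanMeierFitter()
km.fit(df["years_followed"], event_observed=df["died"])
print(km.survival_function_)

# Cox proportional hazards model; exp(coef) gives the hazard ratios.
# (A real fit needs far more rows than this toy example.)
cox = CoxPHFitter()
cox.fit(df, duration_col="years_followed", event_col="died")
cox.print_summary()
```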
Leukocoria was the most common retinoblastoma sign (77, 85.6%) reported by parents prior to the first UCI visit, followed by eye reddening (58, 64.4%) and eye swelling (57, 63.3%). There was a family history of retinoblastoma among first-degree relatives in 4.4% of children. The majority of patients had prior retinoblastoma treatment at the eye departments of tertiary hospitals before UCI admission, with 46.7% having had enucleation of at least one eye, as shown in Table 1. A significant number (37, 41%) of children had a lag time to UCI admission of at least 12 months (Table 1). Using the International Intraocular Retinoblastoma Classification (IIRC) scheme, most eyes in our study presented with advanced disease of IIRC groups D (20%) and E (53%). Twenty eyes (17%) had no record of their IIRC classification (Table 2). At the 3-year follow-up after admission at UCI with a diagnosis of retinoblastoma, 41.1% of the patients were still alive, and 36.7% of the patients had died. Furthermore, 54.4% and 3.3% had unilateral and bilateral eye loss, respectively (Table 3). The median survival time for children with retinoblastoma in our study was 2.18 years (Figure 1). The factors that were statistically significant predictors of mortality in the multivariate analysis were lag time between the first sign and the UCI visit, history of eye swelling (Figure 2), features of metastatic spread (Figure 3), and bilateral enucleation. Children with metastatic spread (AHR 8.93, CI 3.23-24.69, P = 0.001) were 8.9 times more likely to die than those without metastatic spread. Children who presented with a history of eye swelling prior to any retinoblastoma treatment (AHR 4.39, CI 1.38-14.01, P = 0.012) were 4.4 times more likely to die than those who had no history of eye swelling. In addition, children who underwent bilateral enucleation (AHR 3.53, CI 1.33-9.36, P = 0.011) were more likely to die than those who did not undergo enucleation (Table 4). Discussion Charts for 90 children with a retinoblastoma diagnosis were reviewed. There was a female to male ratio of 1 : 1.4. With the exception of a study in Iran, which showed a higher female to male ratio, the findings in our study were similar to those of other studies performed in India, Singapore, Eastern Nepal, Kinshasa (DRC), and Gezira-Sudan, which revealed a higher male to female ratio [8,9,12,13]. The majority of the patients (95%) had their first presentation at UCI beyond 12 months of age, with 33% (30 children) at least 3 years old. This is because the largest percentage of children underwent retinoblastoma management prior to admission. In our study, the overall mean age at the first UCI visit, 36.7 months (SD 25.5), was slightly higher than that from a study performed in Eastern Nepal [13]. Our study also revealed a smaller difference in age at first UCI presentation between unilateral (38.0 months, SD 25.6) and bilateral (33.3 months, SD 25.3) retinoblastoma cases than that reported in other studies [14,15]. The disparity in age at first presentation arises because a significant number of children (n = 49, 54.4%) admitted at UCI had prior retinoblastoma management. There was no record of IIRC classification for 12 (13.3%) children. This finding complicates the management and follow-up of children with retinoblastoma, and it occurred mostly in children who underwent enucleation prior to UCI admission. The majority of children had worst-affected eyes with Group E (n = 61, 67.8%) IIRC classification, followed by Group D (n = 16, 17.8%).
The majority (n = 55, 97.8%) of children with unilateral Rb whose IIRC staging was performed had Group E (81.8%), followed by Group D (18.1%). This finding is comparable to other studies conducted in less developed countries [9,10,14-16]. Sixty-five (72.3%) children presented with unilateral retinoblastoma, whereas 25 (27.8%) presented with bilateral disease. These findings were similar to those of other studies, which showed more unilateral than bilateral retinoblastoma cases. For instance, studies conducted in India by Chawla et al. and in Kinshasa by Kazadi et al. found that 67.6% and 77.5% of children, respectively, presented with unilateral retinoblastoma [8,9]. In our study, the most common retinoblastoma sign that prompted parents to seek eye health attention was leukocoria, reported in 77 (85.6%) patients, followed by eye reddening in 58 (64.4%) and eye swelling in 57 (63.3%), all of which are features of advanced disease. Our findings were similar to those from other studies that revealed leukocoria as the most common sign noted by parents of children with retinoblastoma in India (83%), Singapore (70.6%), and the Democratic Republic of Congo (DRC) (67.5%) [8,9,13]. Eye reddening (58, 64.4%) and eye swelling (57, 63.3%) were the second most common signs noted by parents in our study. Proptosis was shown to be the second most common sign noticed by parents in studies in India (17%) and in the DRC (15%) [8,9,13]. The percentage of children with proptosis as one of the most common signs was higher in our study than in other studies. This shows that most children in our study sought their first medical attention with advanced disease. At the first presentation at the Uganda Cancer Institute, proptosis was the most common presenting sign, reported in 41 (45.6%) children, followed by leukocoria, reported in 36 (40%) children. This finding is similar to the study by Kazadi Lukusa et al. in the DRC, with 55% and 25% of children with retinoblastoma presenting with proptosis and leukocoria, respectively [9]. In addition, proptosis was the most common presenting sign (56%) at the National Cancer Institute in Gezira, followed by leukocoria (32%) [12]. A family history of retinoblastoma in first-degree relatives (parents, offspring, and siblings) was reported in 4.4% (n = 4) of children admitted at UCI. Our findings are consistent with findings from similar studies in Iran and Taiwan, where a family history of retinoblastoma was reported in 4.8% and 3.1% of patients, respectively (18,19). In contrast, a family history of retinoblastoma was reported in 11.0% of children treated between 1981 and 2004 at a medical facility in Turkey [17]. The low percentage of children with a family history of retinoblastoma in our study is possibly due to a lack of knowledge about eye cancers in our community; in addition, many patients do not survive to childbearing age. The mean lag time between the first sign noted by parents and the first UCI visit was 11.2 months (SD 10.8), with a mean lag time of 14.5 months and 9.9 months for bilateral and unilateral retinoblastoma, respectively. A significant number of children (41.1%) had a longer lag time (beyond 12 months). This is because most children were managed at local eye units prior to referral to UCI.
Consequently, lag time was a significant predictor of 3-year survival: patients with retinoblastoma who reported to UCI more than 7 months after developing the first sign of retinoblastoma were more likely to die within 3 years than patients who presented within 6 months of developing the first sign (AHR 0.10, CI 0.04-0.25, P = 0.001). In our study, the majority of children had the worst-affected eye with IIRC Group E, 61 (67.8%), followed by Group D, 16 (17.8%). Additionally, most children with unilateral Rb (n = 55, 97.8%) whose IIRC staging was performed had Group E staging (81.8%), followed by Group D staging (18.1%). The majority of children (63, 70%) had features of extraocular spread at diagnosis at the first UCI visit. These findings were similar to a study by Lim et al., which showed a higher percentage of children (88.6%) with Group E and D staging [18]. In a study by Waddell et al., nearly all diagnoses (n = 282) in the first affected eye were Group E or already had extraocular disease, with only 2 children having IIRC Group C eyes [10]. The lower proportion of such findings in our study compared to that of Waddell et al. may be attributed to the awareness campaign for primary health care workers together with patient facilitation, which was recommended. In our study, management included all treatment received by the children prior to and after UCI admission. Forty-nine (54.4%) had received treatment prior to UCI admission, with 42 (46.7%) having undergone enucleation [19]. At the 3-year follow-up from the first UCI visit, 37 (41.1%) of all children in our study were alive, and 33 (36.7%) of children with retinoblastoma had died. The number of those lost to follow-up was 20 (22.2%). The 3-year survival observed in our study (41.1%) was significantly lower than that reported in Taiwan (64.41%), Turkey (89.6%), and Iran (94.8%) [17,20,21]. The significantly low 3-year survival in our study could be attributed to advanced disease at first UCI admission. Although enucleation was one of the most utilized treatment modalities in our study (57.8%, n = 52), its frequency was lower than that in studies in India by Chawla et al. [8,10,16]. This finding can be explained by the increased utilization of neoadjuvant chemotherapy at UCI. Forty-nine (54.4%) and three (3.3%) of the children in our study had unilateral and bilateral enucleation, respectively. The lower percentage of enucleation in our study may be attributed to the noncompletion of prescribed treatment options, which was recorded in 61 (67.8%) children. At 3 years of follow-up from first admission at UCI, 12 (13.3%) and 64 (71.1%) children in our study had bilateral and unilateral vision loss, respectively, including those who had enucleation of the affected eye(s). The significant number of children with vision loss in our study is attributed to the unavailability of vision-salvage treatment modalities at the Uganda Cancer Institute during the study period. The number of children lost to follow-up in our study (20, 22%) was slightly lower than that reported in a previous retrospective study (32%) of childhood cancer patients at the Uganda Cancer Institute by Mutyaba et al. [22]. This may be attributed to the recent introduction of a functional ocular oncology unit and personnel at UCI.
In the bivariate analysis, the predictors of survival in our study were a history of eye swelling, a lag time of 7 months or more, extraocular retinoblastoma in the affected eye(s), advanced disease stage (IIRC Group E) in the worst-affected eye, completion of prescribed treatment, metastatic retinoblastoma, and bilateral enucleation. These predictors were similar to findings in a study in China by Gao et al., which showed that extraocular disease, treatment abandonment, bilateral disease, and advanced tumour stage were predictors of survival in children with retinoblastoma [23]. The above predictors indicate that most children with retinoblastoma seek medical attention at advanced stages of disease and are referred to tertiary oncology centres with metastatic disease. Conclusion Children with retinoblastoma were referred to the Uganda Cancer Institute with advanced disease. In addition, the mortality of retinoblastoma patients at UCI was high, as evidenced by the very low 3-year survival. Thus, there is an urgent need for the early involvement of a multidisciplinary team of health workers in the management of retinoblastoma, including paediatric ophthalmologists, paediatric oncologists, ocular oncologists, radiation oncologists, and nurses, for better treatment outcomes. Data Availability Data are available on request from the author (email: kalibakaruth@gmail.com). Ethical Approval This study was approved by the Makerere University School of Medicine Research Ethics Committee (#REC REF 2020-031).
2022-03-10T16:27:27.218Z
2022-03-08T00:00:00.000
{ "year": 2022, "sha1": "da1fb6f49465ba88717165bb6cfb021b30ff27b3", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jce/2022/8817215.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff31582b456e4c06d328ec758e6f91f16961679f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119235019
pes2o/s2orc
v3-fos-license
Interpretations of the ATLAS Diboson Resonances The ATLAS collaboration has reported excesses in searches for resonant diboson production decaying into hadronic final states. This deviation from the Standard Model prediction may be a signature of an extra bosonic particle with a mass of around 2 TeV and a fairly narrow width, which implies the presence of a new perturbative theory at the TeV scale. In this paper, we study interpretations of the signal and its implications for physics beyond the Standard Model. We find that the resonance could be regarded as a leptophobic vector particle, which could explain a part of the observed excesses without conflicting with the present constraints from other direct searches for heavy vector bosons at the LHC as well as the electroweak precision measurements. Introduction Recently, the ATLAS collaboration has reported excesses in searches for massive resonances decaying into a pair of weak gauge bosons [1]. These excess events have been observed in the hadronic final states, i.e., the pp → V1 V2 → 4j (V1,2 = W± or Z) channels. The weak gauge bosons from the resonance are highly boosted, so their hadronic decay products are reconstructed as two fat jets. By constructing the invariant mass of these two fat jets, it is possible to find a resonant peak for the intermediate state. The ATLAS collaboration has performed such an analysis using 20.3 fb⁻¹ of data from the 8 TeV LHC run. Excesses with narrow widths are observed around 2 TeV in the WZ, WW, and ZZ channels with local significances of 3.4σ, 2.6σ, and 2.9σ, respectively. Although we should wait for forthcoming ATLAS/CMS results of relevant searches to draw a robust conclusion about the observation, it is worthwhile to consider possible interpretations of these anomalous events as evidence for new physics beyond the Standard Model (SM). In fact, the excesses are well fitted by resonances whose peaks are around 2 TeV and whose widths are less than about 100 GeV. Such narrow resonances may imply new weakly interacting particles, and the underlying theories would then be perturbative. In this paper, we consider such a possibility to explain the excesses. As mentioned above, the excesses reported by the ATLAS collaboration are in the WZ, WW, and ZZ channels. The tagging selections for each mode used in the analysis are, however, not mutually exclusive: about 20% of the events are shared by these channels. At the present stage, it may be hard to conclude that one resonance is responsible for the excesses in all the channels. There may be a possibility that one 2-TeV particle contributes to only one of the channels and the peaks in the other channels are merely contamination due to the overlapping tagging selections. Taking this situation into account, in this paper we do not limit ourselves to accounting for all of these excesses simultaneously, and we consider the possibility that the new resonance appears in one channel. For each channel, the number of excess events could be accounted for if there is a 2 TeV resonance whose production cross section times decay branching ratio into gauge bosons is about 6 fb. We regard this as a reference value in what follows (see the event-yield sketch after this paragraph). In order for a resonance to decay into two gauge bosons, it should be a bosonic state, namely a particle with spin zero or one, under the assumption of a renormalizable theory. Let us first consider the spin-zero case.
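As an aside on the 6 fb reference value before examining the spin-zero case: the expected raw event yield is simply cross section × branching ratio × integrated luminosity × selection efficiency. The sketch below illustrates this arithmetic; the 10% efficiency is an assumed placeholder, not a number from the ATLAS analysis.

```python
# Rough expected-event estimate: N = sigma x BR x luminosity x efficiency.
# The efficiency value is an assumed placeholder for illustration only.
sigma_times_br_fb = 6.0      # production cross section x BR, in fb
luminosity_fb_inv = 20.3     # integrated luminosity of the 8 TeV dataset
efficiency = 0.10            # assumed overall selection efficiency

n_expected = sigma_times_br_fb * luminosity_fb_inv * efficiency
print(f"Expected signal events: {n_expected:.1f}")   # ~12 events
```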
If such a particle is a singlet under the SU(2)_L ⊗ U(1)_Y gauge interactions, it couples to the electroweak gauge bosons and the SM fermions only through mixing with the SM Higgs boson in the renormalizable potential. Therefore, its production cross section is suppressed by the mixing factor and is in general too small to explain the anomalies. A possible way to enhance the production is to introduce new vector-like colored particles. A singlet scalar field generically couples to these colored particles, and the singlet is then produced via the gluon fusion process through the loop involving the vector-like colored particles. If the masses of the vector-like particles are above 1 TeV, a scalar resonance with a 2 TeV mass does not decay into these particles. It turns out, however, that O(10) fb production cross sections require O(10) extra colored particle pairs. Moreover, a large fraction of the produced singlets decays into gluons, and thus gives only a negligible contribution to the diboson channels. Hence, a singlet scalar boson is inappropriate to explain the anomalies. An alternative possibility is to exploit SU(2)_L doublet scalars. These scalar fields may develop a finite vacuum expectation value (VEV) to couple directly to W and Z, and again mix with the SM Higgs field. To ensure a large cross section in the diboson decay processes, there should be a sizable deviation from the SM limit. In addition, the deviation modifies the Higgs couplings, which are stringently constrained by the Higgs data at the LHC experiments. Within these constraints, both the production cross section and the branching ratios to the electroweak gauge bosons of a 2 TeV doublet scalar are found to be extremely small. A higher representation of SU(2)_L also suffers from a small production cross section since it does not couple to the SM quarks directly. For these reasons, we conclude that it is quite difficult to explain the required event rate with a new scalar particle, and thus we do not pursue this possibility in the following discussion. Another candidate is a spin-one vector boson. Such a particle naturally appears if a high-energy theory contains an additional gauge symmetry that is spontaneously broken at a certain scale above the electroweak scale. If the symmetry breaking occurs at the TeV scale, we expect the masses of the extra gauge bosons to be O(1) TeV. The gauge bosons are produced at the LHC if quarks are charged under the extra gauge symmetry. In this paper, we investigate this possibility. An important caveat here is that such a TeV-scale vector boson is severely constrained by the LHC experiments. The strongest constraint usually comes from the Drell-Yan processes [3,4]; for a 2 TeV vector boson, its production cross section times the branching ratio into lepton final states should be much smaller than 1 fb. This bound makes it quite difficult to realize a sizable event rate for the diboson decay channel in most extensions of the SM with new gauge symmetries. One promising setup to suppress the Drell-Yan processes is given by an SU(2)_L singlet heavy charged gauge boson with hypercharge ±1, which we denote by W′. Such a W′ is contained in some simple extensions of the SM, such as SU(2)_L ⊗ SU(2)_R ⊗ U(1)_B−L models [5]. This W′ couples to right-handed quarks, as well as right-handed charged leptons and neutrinos. If the right-handed neutrinos are rather heavy, W′ is unable to decay to leptons and thus evades the Drell-Yan bounds.
Since W′ couples to the right-handed quarks, it is copiously produced at the LHC. After electroweak symmetry breaking, the W′ boson mixes with the electroweak gauge bosons, which allows W′ to decay into W and Z. In Sec. 2, we study whether W′ can explain the observed excess. We find that it is difficult to realize a large decay branching fraction into WZ in a simple version of the W′ model, and therefore the required event rate for the ATLAS diboson excess cannot be obtained once the limit from the electroweak precision measurements is taken into account. Even if this limit is avoided by canceling the W′ contribution to the electroweak precision observables with other new physics effects, the resonance search in the channel consisting of a W boson and a Higgs boson (Wh) [6] severely constrains the WZ decay branching fraction. Besides, W′ generally has flavor-changing gauge couplings [7], as the W boson does in the SM. We have to assume these couplings to be flavor-diagonal to evade the strong bounds from flavor physics, which adds complexity to concrete model building in this direction. An alternative is to regard the resonance as a neutral massive gauge boson Z′ which has no coupling to the SM leptons, the so-called leptophobic Z′. Such a leptophobic Z′ may be realized in Grand Unified Theories (GUTs): if the rank of the GUT group is larger than four, it includes extra U(1) symmetries, and a certain linear combination of the U(1) charges could be leptophobic. In particular, a set of charge assignments inspired by the E6 GUTs has been widely studied in the literature [8-12]. The Drell-Yan bounds on this class of models are readily avoided because of the leptophobic nature. Again, Z′ mixes with the Z boson after electroweak symmetry breaking, and thus it has a decay mode into a pair of W±. We study the decay properties of such a Z′ using a simplified model in Sec. 3 to see whether it could explain the ATLAS diboson signal. Finally, in Sec. 4, we conclude our discussion and give some future prospects for probing these scenarios at the future LHC experiments. W′ model To begin with, we consider a simplified model for W′ to study whether it explains the ATLAS diboson signal or not. For recent works on phenomenological studies of W′, see Ref. [13]. As mentioned in the Introduction, we consider an SU(2)_L singlet vector boson with +1 hypercharge as a candidate for W′+, since it effectively has no coupling to the SM leptons and thus avoids the severe Drell-Yan bounds. Such a vector boson may be attributed to a gauge boson of a non-Abelian gauge group orthogonal to SU(2)_L, like SU(2)_R. We may also take up an SU(2)_L triplet non-hypercharged vector boson, which, for instance, appears in SU(2)_1 ⊗ SU(2)_2 ⊗ U(1)_Y type models [14]. In this case, however, couplings of the SU(2)_L triplet vector bosons to the SM charged leptons are generically allowed, and thus we would need an additional mechanism to suppress these couplings to evade the Drell-Yan constraints. In this sense, the SU(2)_L singlet vector boson is more favored, and thus we focus on this candidate in our work. Let us denote the massive SU(2)_L singlet vector boson by Ŵ′+. There are scalars charged under the additional SU(2) symmetry, and they develop nonzero VEVs that break the additional SU(2) symmetry; Ŵ′+ then acquires a TeV-scale mass. We assume that some of the scalars are charged under SU(2)_L as well, and that a finite mass mixing between Ŵ′+ and the SM Ŵ+ is generated by their VEVs.
The mixing is described as W+ = Ŵ+ cos ζ + Ŵ′+ sin ζ and W′+ = −Ŵ+ sin ζ + Ŵ′+ cos ζ, where W+ and W′+ are the mass eigenstates. We expect the mixing angle ζ to be O(v²/M²_W′), where M_W′ is the mass of W′ and v ≃ 246 GeV is the Higgs VEV. The partial decay width of W′+ into W+ and Z then behaves as Γ(W′+ → W+Z) ∝ α₂ ζ² M_W′ (M_W′/M_W)⁴, where α₂ is the SU(2)_L gauge coupling and M_W is the mass of the W boson. From this expression, we find that although the partial decay width is suppressed by the small mixing angle ζ, this suppression is compensated by the enhancement factor (M_W′/M_W)⁴. This enhancement factor results from the high-energy behavior of the longitudinal mode of W. Therefore, we expect a sizable decay branching fraction for the W+Z channel. The partial decay width increases as the mixing angle becomes large. The size of the mixing angle is, on the other hand, restricted by the electroweak precision measurements, since it is induced by interactions that break the custodial symmetry; namely, the bound on the T parameter [15] constrains the mixing angle. The current limit on ζ is |ζ| ≲ 5 × 10⁻⁴ for M_W′ = 2 TeV [16], which in turn gives an upper limit on Γ(W′ → W+Z). We note, however, that the constraint may be evaded if there is another contribution to the T parameter which cancels the effect of the W-W′ mixing. The actual realization of this possibility is model-dependent, and we do not pursue it in this paper. The equivalence theorem tells us that the final-state gauge bosons in the W′+ → W+Z channel can be regarded as Nambu-Goldstone (NG) bosons, since the longitudinal modes dominate the decay amplitude as we have just mentioned. Thus, the partial decay width of this channel is related to that of the decay into W+ and the Higgs boson. In fact, we have Γ(W′+ → W+Z) ≃ Γ(W′+ → W+h), where h is the SM-like Higgs boson. Currently, the CMS collaboration gives an upper bound on this decay mode [6]: σ(pp → W′+) × BR(W′+ → W+h) ≲ 7 fb. Thus, through the above relation, this bound also implies σ(pp → W′+) × BR(W′+ → W+Z) ≲ 7 fb, which somewhat conflicts with the ATLAS diboson anomaly. Since the above relation is a consequence of the equivalence theorem, this bound is robust and almost model-independent. For this reason, a W′ model (as well as a Z′ model, as we will see in the next section) in general predicts a smaller number of signal events in the diboson channel than observed in Ref. [1], once we consider the limit on the Wh channel. W′+ carries +1 hypercharge, so the SU(2)_L ⊗ U(1)_Y symmetry allows couplings to the right-handed quarks of the form ℒ = (g_ud/√2) W′+_μ ū_i γ^μ P_R d_i + h.c. (4), where P_{L/R} ≡ (1 ∓ γ₅)/2 and i = 1, 2, 3 denotes the generation index. Here, we assume the coupling constant g_ud is common to all generations and ignore flavor non-diagonal parts for brevity, which are in fact stringently constrained by measurements of flavor observables such as the K⁰-K̄⁰ mass difference. At the LHC, W′ is produced via the interactions in Eq. (4). For a W′ with a mass of 2 TeV, the production cross section at the LHC with center-of-mass energy √s = 8 TeV (LHC8) is evaluated using MadGraph [17] and scales as g_ud². After production, W′ mainly decays into the WZ, Wh, or quark final states. Neglecting the quark masses, the partial decay width for the final state containing a pair of u_i and d̄_i is Γ(W′+ → u_i d̄_i) = N_C g_ud² M_W′/(48π). The branching fraction of this decay mode is severely constrained by the dijet resonance searches [18,19]. Following Ref. [18], we have σ(pp → W′) × BR(W′ → q q̄′) ≲ 100 fb, with an acceptance A ≃ 0.6 assumed. The ATLAS collaboration gives a similar limit on the dijet channel [19].
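To see numerically how the (M_W′/M_W)⁴ enhancement offsets the ζ² suppression in the reconstructed scaling above, one can evaluate the two factors directly. The sketch below does this in Python; the overall normalization of Γ(W′ → WZ) is kept schematic (a proportionality, as in the text), so only the relative size of the two factors is meaningful.

```python
# How the (M_W'/M_W)^4 enhancement offsets the zeta^2 suppression in
# Gamma(W' -> WZ). The overall normalization is schematic; only the
# product of the two factors is the point of this sketch.
m_w_prime = 2000.0   # GeV, the 2 TeV resonance
m_w = 80.4           # GeV, SM W boson mass
zeta = 5e-4          # mixing angle at the electroweak-precision limit

suppression = zeta**2                      # ~2.5e-7
enhancement = (m_w_prime / m_w)**4         # ~3.8e5
print(f"zeta^2              = {suppression:.2e}")
print(f"(M_W'/M_W)^4        = {enhancement:.2e}")
print(f"product (schematic) = {suppression * enhancement:.3f}")
```

The product of the two factors is O(0.1) rather than O(10⁻⁷), which is why the longitudinal enhancement keeps the WZ branching fraction non-negligible despite the tiny mixing angle.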
Currently, the W′+ → tb̄ decay mode is less constrained [20]: σ(pp → W′+) × BR(W′+ → tb̄) ≲ 120 fb. Taking the above discussion into account, in Fig. 1 we show a contour plot of σ(pp → W′) × BR(W′ → WZ) in the g_ud-ζ plane. The light-green shaded region is disfavored by the electroweak precision measurements: |ζ| > 5 × 10⁻⁴. The dark-green and blue shaded regions are excluded by the limits from the W′ → Wh [6] and dijet [18,19] channels, respectively. This figure clearly shows that it is difficult to explain the observed diboson resonance with this simplified W′ model once the electroweak precision bound on the W-W′ mixing is taken into account. Even if this constraint is avoided by utilizing other new physics effects to cancel the W′ contribution to the electroweak precision observables, the bound on the W′ → Wh channel restricts σ(pp → W′) × BR(W′ → WZ) to be less than ∼ 7 fb. One of the most popular models in which an SU(2)_L singlet W′ appears is the so-called left-right (LR) symmetric model based on the SU(2)_L ⊗ SU(2)_R ⊗ U(1)_B−L gauge theory. In this model, the W′-quark coupling is given by the SU(2)_R gauge coupling constant g_R: g_ud = g_R. If the right-handed neutrinos in this model are heavier than W′, then W′ does not decay into them, and this model realizes the setup of the simplified model we have discussed here. In this model, the SM Higgs field is embedded into a bi-fundamental representation of SU(2)_L ⊗ SU(2)_R. Once this bi-fundamental field acquires a VEV, the W-W′ mixing is induced and given by tan 2ζ ≃ 2 sin 2β (g_R/g_L)(M_W/M_W′)², where g_L is the SU(2)_L gauge coupling constant and tan β is the ratio of the diagonal components of the bi-fundamental Higgs VEV. In Fig. 1, we also show the value of ζ obtained through this relation as a function of g_ud = g_R for tan β = 40 as the brown dashed line. This value of tan β is favored to explain the top and bottom quark masses in this model. Although the predicted values evade the electroweak precision bound, they are far below the values required to explain the diboson excess. Before concluding this subsection, we comment on other possible excesses reported so far which might also indicate the presence of a W′ with a mass of around 2 TeV. In Ref. [21], the CMS collaboration reported a small excess near 1.8 TeV in searches for W′ decaying into a W and a Higgs boson in the lνbb̄ final state. However, as we have seen above, this conflicts with another constraint on the W′ → Wh channel given by the CMS experiment [6]. In addition, the CMS collaboration announced a possible signal in searches for W′ decaying into the two-electrons-plus-two-jets final state through a right-handed neutrino, whose peak is around 2.1 TeV with a significance of ∼ 2.8σ [22]. Though there have been several proposals for W′ models that account for the 2.8σ excess [23], these models in general predict too small an event rate in the diboson channel, and therefore fail to explain the ATLAS diboson resonance signal. Z′ model Next, we consider a leptophobic Z′. For reviews of Z′ models, see Refs. [24,25]. We regard it as the gauge boson of an extra U(1) symmetry, U(1)′, whose mass is generated after the U(1)′ gauge symmetry is spontaneously broken. Suppose that there are two SU(2)_L doublet Higgs bosons, H_u and H_d, and one singlet Higgs boson Φ, with U(1)′ charges Q_Hu, Q_Hd, and Q_Φ, respectively.
We further assume that H_u couples only to the up-type quarks while H_d couples to the down-type quarks and charged leptons, just as in the minimal supersymmetric SM (MSSM) and the Type-II two-Higgs-doublet model. We require U(1)′ to be leptophobic, i.e., Q_L = Q_{e^c_R} = 0, and this leads to Q_Hd = 0. After these Higgs bosons acquire VEVs, a mass matrix is generated for the U(1)′ gauge field Ẑ′ and the linear combination Ẑ = cos θ_W Ŵ³ − sin θ_W B̂ of the SU(2)_L and U(1)_Y gauge fields (Ŵ^a and B̂, respectively); it is a symmetric 2×2 matrix whose off-diagonal element is induced by the H_u VEV. Here, g_Z ≡ √(g′² + g²) is the electroweak gauge coupling, with g′ and g the U(1)_Y and SU(2)_L gauge coupling constants, respectively, and g_Z′ is the U(1)′ gauge coupling constant. The mass eigenstates Z and Z′ are obtained by diagonalizing this matrix with an orthogonal rotation of angle θ, where M_Z and M_Z′ denote the masses of Z and Z′, respectively. Again, the mixing angle is suppressed by a factor of M²_Z/M²_Z′. The couplings of Ẑ′ to the SM fermions f are given by ℒ = g_Z′ Ẑ′_μ f̄ γ^μ (Q_{fL} P_L + Q_{fR} P_R) f, with Q_{fL} and Q_{fR} the U(1)′ charges of the left- and right-handed components of f, respectively. Here again, we neglect possible flavor-changing effects for simplicity. Now let us evaluate the partial decay widths of Z′. For the decay mode into quarks, we have Γ(Z′ → q q̄) = (N_C g_Z′² M_Z′/24π)(Q_{qL}² + Q_{qR}²), where N_C = 3 is the color factor. For Z′ → W+W−, on the other hand, the width is induced by the Z-Z′ mixing and behaves as Γ(Z′ → W+W−) ∝ sin²θ M_Z′ (M_Z′/M_Z)⁴. Note that this decay width remains sizable even though the decay mode is induced via the Z-Z′ mixing, since the enhancement coming from the longitudinal polarization modes compensates for the suppression factor. According to the equivalence theorem, this decay width becomes equal to that of the Z′ → Zh mode in the decoupling limit: Γ(Z′ → W+W−) ≃ Γ(Z′ → Zh). From the above expressions, we find that the decay properties of Z′ are determined by the U(1)′ charges of the quarks and H_u, the U(1)′ gauge coupling constant g_Z′, and tan β. Among them, we can always remove one degree of freedom by redefining the U(1)′ charge normalization. In what follows, we normalize the U(1)′ charges such that Q_Hu = 1. In this case, the Z-Z′ mixing angle satisfies sin θ ≃ 2(g_Z′/g_Z)(Q_Hu sin²β + Q_Hd cos²β)(M_Z/M_Z′)² = 2(g_Z′/g_Z) sin²β (M_Z/M_Z′)², where the latter equality follows from Q_Hd = 0. The production cross section of Z′ at the LHC8 is estimated following Ref. [24]; it is determined by the quark U(1)′ charges and g_Z′ once the Z′ mass is fixed. The production of Z′ is stringently limited by the LHC experiments. For a leptophobic Z′, the strong bounds come from the Z′ → Zh [6], dijet [18,19], and tt̄ [26] resonance searches. As before, we use σ(pp → Z′) × BR(Z′ → Zh) ≲ 7 fb and σ(pp → Z′) × BR(Z′ → jj) ≲ 100 fb, with an acceptance A ≃ 0.6 assumed for the latter. Here, BR(Z′ → jj) denotes the branching ratio into a pair of quark jets. The limit from the tt̄ resonance search is found to be the strongest, as we see below. The limit depends on the width of Z′: for a 2 TeV Z′ with a decay width of 20 GeV, the bound is σ(pp → Z′) × BR(Z′ → tt̄) < 11 fb, while if the decay width is 200 GeV, the bound is relaxed to 18 fb [26]. Similarly to the case of W′, the Z-Z′ mixing angle is constrained by the electroweak precision measurements. For a Z′ model, however, it is not appropriate to merely use the limit on the T parameter to bound the Z-Z′ mixing, since the mixing simultaneously modifies the Z-boson couplings to the SM fermions. In fact, in the case of a leptophobic Z′, the constraints from the electroweak precision measurements are relaxed because Z′ does not couple to electrons [27,28].
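Given the reconstructed width formulas above, a rough version of the branching-ratio estimate behind Fig. 2 can be scripted. In the sketch below, the 1/(192π) normalization of the diboson width, the numerical value of g_Z, and the neglect of top-quark phase space and any exotic decay modes are all assumptions made for illustration; only the qualitative picture (a diboson branching fraction of a few percent) should be taken from it.

```python
# Rough branching-ratio estimate for a leptophobic Z' under the width
# formulas reconstructed above. The 1/(192*pi) diboson normalization,
# g_z value, and neglect of top-mass phase space are assumptions.
import math

M_ZP, M_Z = 2000.0, 91.19      # GeV
g_zp, g_z = 0.4, 0.74          # U(1)' and electroweak couplings (assumed)
tan_beta = 40.0
sin2_beta = tan_beta**2 / (1 + tan_beta**2)
N_C = 3

# E6-inspired charges quoted in the text (Q_Hu normalized to 1)
Q_Q, Q_uR, Q_dR = -1/3, 2/3, -1/3

def gamma_qq(QL, QR):          # per quark flavor, massless approximation
    return N_C * g_zp**2 * M_ZP * (QL**2 + QR**2) / (24 * math.pi)

sin_theta = 2 * (g_zp / g_z) * sin2_beta * (M_Z / M_ZP)**2
gamma_ww = (g_z**2 / (192 * math.pi)) * sin_theta**2 * M_ZP * (M_ZP / M_Z)**4
gamma_zh = gamma_ww            # equivalence-theorem relation
gamma_up = 3 * gamma_qq(Q_Q, Q_uR)   # u, c, t (top mass neglected)
gamma_dn = 3 * gamma_qq(Q_Q, Q_dR)   # d, s, b

total = gamma_up + gamma_dn + gamma_ww + gamma_zh
for name, g in [("qq (incl. tt)", gamma_up + gamma_dn),
                ("WW", gamma_ww), ("Zh", gamma_zh)]:
    print(f"BR({name}) ~ {g/total:.3f}")
print(f"Total width ~ {total:.1f} GeV")
```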
It turns out that the present limit on the mixing angle is sin θ ≲ 0.008 [28], which we use in the following analysis. In Fig. 2, we show the branching ratios of the Z′ → jj, Z′ → tt̄, and Z′ → W+W− channels in the blue, green, and red lines, respectively, as functions of Q_Q. Here, we set M_Z′ = 2 TeV, and tan β = 40 (4) for the solid (dashed) lines. The vertical gray line corresponds to the charge assignment in an E6-inspired leptophobic Z′ model often discussed in the literature [8-12], where Q_Q = −1/3, Q_uR = 2/3, Q_dR = −1/3, and Q_Φ = −1. We find that the branching fraction for the diboson channel is at most ∼ 0.05. The tan β dependence of the branching ratios is rather small; for instance, if we vary tan β from 40 to 4, BR(Z′ → W+W−) changes by about 10%. Then, in Fig. 3, we show a contour plot of σ(pp → Z′) × BR(Z′ → W+W−) as a function of g_Z′ and Q_Q. Here, we set M_Z′ = 2 TeV and tan β = 40, and the vertical gray line again shows the charge assignment in the E6-inspired leptophobic Z′ model mentioned above. The blue and dark-blue shaded regions are excluded by the resonance searches in the dijet and Z′ → Zh channels, respectively. The dark (light) gray area is excluded by the tt̄ resonance search if the Z′ decay width is 200 (20) GeV. We also show the total decay width Γ_tot in green dashed lines. Contrary to the case of W′, the electroweak precision measurements give no constraint in the parameter region shown in the figure, since the leptophobic nature of Z′ considerably weakens the limit on the Z-Z′ mixing angle, as discussed above. As can be seen from Fig. 3, the total decay width is well below 100 GeV in the allowed parameter region of the figure. We also find that the tt̄ resonance search gives the most stringent constraint. In particular, if Q_Q = −1/3, then σ(pp → Z′) × BR(Z′ → W+W−) should be less than about 5 fb, which corresponds to g_Z′ ≲ 0.4. Before concluding this section, we discuss an ultraviolet completion of the simplified leptophobic Z′ model. In general, a leptophobic symmetry causes a gauge anomaly, and hence we have to add extra U(1)′-charged chiral fermions so that their contribution removes the anomaly. A simple way to find such a set of extra fermions is to embed the SM particle content into a realization of an anomaly-free gauge group. Indeed, it turns out that GUTs based on the supersymmetric (SUSY) E6 gauge group provide a natural framework to realize the leptophobic U(1)′ symmetry [8-12]. In SUSY E6 GUTs, all of the MSSM matter fields as well as the right-handed neutrino superfields are embedded into a 27-representation superfield in each generation. The 27 representation also contains new vector-like superfields and a field that is a singlet with respect to the SM gauge symmetry. A part of these vector-like fields is identified as the MSSM Higgs fields. The rank of the E6 group is six, and thus this gauge group yields two additional U(1) symmetries. (For a review of the E6 SUSY GUT models, see, e.g., Ref. [29].) This SUSY leptophobic Z′ model has several phenomenologically interesting features. First, the U(1)′ symmetry forbids the mass term for the Higgsinos, i.e., the µ-term, and it is effectively induced through the Yukawa coupling between the Higgsinos and the SM singlet field which breaks the U(1)′ symmetry, similar to the next-to-minimal SUSY SM (NMSSM).
As a result, the Higgsino mass and the Z′ mass have the same origin; in particular, the effective µ-parameter is expected to be around the TeV scale, which solves the so-called µ-problem. Second, in this model there are new tree-level contributions to the Higgs mass: the F-term contribution via the singlet-Higgsino coupling, just as in the NMSSM, and the D-term contribution of the extra U(1)′. Taking into account these contributions as well as the one-loop corrections to the CP-even scalars [30], we find that the observed value of the Higgs mass, ∼ 125 GeV [31], is accounted for with O(1) TeV stops and a small value of tan β when g_Z′ = 0.4-0.5. This should be contrasted with the MSSM prediction: in that case, if stops have masses of around 1 TeV, the observed Higgs mass is achieved with a large value of tan β, while if tan β is small then the stops in general must have masses much larger than O(1) TeV to explain the Higgs mass. Notice that the 2-TeV Z′ favors TeV-scale SUSY particles, since the SUSY and U(1)′ breaking scales are related to each other through the soft mass term for Φ in the scalar potential, which triggers the U(1)′ breaking. Therefore, this model provides a natural framework for a light-stop scenario without conflicting with the 125 GeV Higgs mass, which is desirable from the viewpoint of the electroweak fine-tuning problem. As various particles are predicted to have masses of ∼ 1 TeV, not only further investigations of the diboson events but also direct searches for these particles in LHC run-II play an important role in testing this model. (We note, however, that the decay branching ratios of Z′ in this model may differ from those presented above if some of the additional particles have masses smaller than 1 TeV.) Summary We have considered some extensions of the SM that could explain the excesses recently reported by the ATLAS collaboration [1]. These possible signals were found in the diboson resonance searches with two fat jets in the final state; to account for them, production cross sections of ∼ 6 fb with narrow decay widths are required. These excesses may be reproduced by an extra vector boson with a 2-TeV mass, and we have investigated the W′ and Z′ models in particular. A W′ boson is, for example, predicted by an additional SU(2)_R gauge symmetry, and decays not only to the SM fermions but also to the W and Z bosons through the W-W′ mixing. There is a tension between the excess and the bounds from the Drell-Yan processes, which forces us to forbid the leptonic decay of W′ by making the right-handed neutrinos heavy. We also suffer from the tree-level flavor-changing couplings of W′, and thus need to find a way to forbid them. Besides, the constraint from the electroweak precision measurements is too severe to reproduce the excess in the diboson channel. Eventually, we conclude that it is difficult to interpret the diboson signal as a W′ resonance, unless the above difficulties are evaded with additional conspiracy. A Z′ boson is another good candidate for the diboson resonance. It also appears in various new-physics models; for instance, an extra U(1)′ symmetry is predicted by GUTs based on large gauge groups, in which the U(1)′ charges are assigned to the SM fermions and the Higgs field according to their gauge structure.
In this case, the Z′ associated with this U(1)′ symmetry decays to pairs of SM fermions, and to W+W− and Zh through the Z-Z′ mixing generated by the kinetic mixing and by the mass mixing due to the nonzero VEV of the U(1)′-charged Higgs field. Again, a leptophobic Z′ has the advantage of evading the Drell-Yan bound. In Sec. 3, we investigated this possibility and found that there is a sizable parameter region that could explain a large part of the excess while remaining allowed by the current experimental constraints. We also considered a concrete leptophobic Z′ model inspired by the E6 GUT, and discussed its implications for the Higgs mass and the SUSY scale. This model predicts the new vector-like particles and the SUSY particles to have TeV-scale masses, which are accessible at the next stage of LHC running. We therefore expect that the future LHC experiments may not only provide a deeper understanding of the ATLAS diboson excess, but also shed light on the new physics behind the simplified models discussed in this paper. Finally, we briefly comment on the diboson resonance searches performed by the CMS collaboration [32,33]. The CMS collaboration has searched for resonances decaying into two gauge bosons in hadronic final states [32], similarly to the ATLAS search [1], as well as in semi-leptonic final states [33]. Interestingly, a small excess was found in both of these searches around 1.8 TeV in the resonance mass, which might have the same origin as the ATLAS diboson anomaly. If that is not the case, the semi-leptonic search result [33] gives a stringent limit on the 2 TeV excess observed by the ATLAS collaboration. After all, further searches for diboson events are indispensable for confirming or excluding the 2 TeV diboson anomaly, and they are to be performed in the near future.
2015-09-02T16:37:13.000Z
2015-06-12T00:00:00.000
{ "year": 2015, "sha1": "2a20b408f0a9dca8277586e1fd632b41d69fd888", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1506.03931", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "99a0d065685af65d60d7682879b30bf513f36b5a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
2038505
pes2o/s2orc
v3-fos-license
Determination of Metal Ions in Fuel Ethanol after Preconcentration on 5-Amino-1,3,4-Thiadiazole-2-Thiol Modified Silica Gel This work describes the synthesis and characterization of silica gel modified with 5-amino-1,3,4-thiadiazole-2-thiol groups (SiATT), and the results of a study of the adsorption and preconcentration (in batch mode, and in flow using the column technique) of Cd(II), Co(II), Cu(II), Fe(III), Ni(II), Pb(II) and Zn(II) in ethanol medium. The maximum adsorption capacities of SiATT determined for the studied metal ions were (mmol g⁻¹): Cd(II) = 0.11, Co(II) = 0.10, Cu(II) = 0.20, Fe(III) = 0.20, Ni(II) = 0.16, Pb(II) = 0.08 and Zn(II) = 0.12. The results obtained in the flow experiments showed a recovery of practically 100% of the metal cations adsorbed on the column packed with 2 g of SiATT, using 5 mL of 2.0 mol L⁻¹ HCl as eluent. The sorption-desorption of the Cd(II), Co(II), Cu(II), Fe(III), Ni(II), Pb(II) and Zn(II) ions served as the basis for the development of a method for preconcentration and subsequent determination by Flame AAS of the content of these cations in fuel ethanol samples. Introduction The direct determination of trace metals in fuel ethanol by conventional analytical chemical methods requires a time-consuming liquid evaporation procedure prior to any measurements1,2. Methods using on-line flow preconcentration systems3, liquid-liquid extraction4, adsorption5,6 and ion exchange7,8 as separation procedures for metal ions have been successfully applied. In recent years, the use of silica gel chemically modified with various chelating organofunctional groups to adsorb and preconcentrate metal ions from solutions has been described9-12. In particular, a column packed with the material in line with a flow analysis system has been suggested as an effective and reliable process for preconcentration of the metal ions before analysis by atomic absorption spectrometry13-15. In this combined method, the enrichment of the analyte and the removal of some interferents that may be present in the solution can considerably improve the method of analysis, extending the limit of detection to lower concentrations. This paper describes the preparation of silica gel chemically modified with 5-amino-1,3,4-thiadiazole-2-thiol, aiming at an efficient material for the separation and determination of the metal ions present in ethanol used as fuel for car engines. The material was first tested with a synthetic ethanol solution containing some metal ions and then used with a real sample. Preparations Silica gel (Merck) with a specific surface area of 500 m² g⁻¹ and an average pore diameter of 0.6 nm was activated at 420 K under vacuum (10⁻³ Torr). About 50 g of this silica was immersed in 200 mL of dry xylene and 15 mL of 3-chloropropyltrimethoxysilane was added. The mixture was refluxed under a nitrogen atmosphere for 24 h, filtered, washed with xylene and heated under vacuum in order to eliminate all the solvent. The resulting solid was immersed in 150 mL of purified dimethylformamide and 17 g of 5-amino-1,3,4-thiadiazole-2-thiol was added. The mixture was stirred for 24 h at 380 K under a nitrogen atmosphere. The resulting modified silica was filtered off, washed with dimethylformamide and ethanol, and heated for 8 h at 348 K under vacuum (10⁻³ Torr). The equations in Scheme 1 describe the preparation of the material.
(In Scheme 1, ≡SiOH stands for the silica surface silanol group; for the sake of brevity, product (A) is hereafter designated SiATT.) The quantity of 5-amino-1,3,4-thiadiazole-2-thiol attached to the silica surface was determined by nitrogen analysis using the Kjeldahl method. The specific surface area was determined by the BET [16] method on a Micromeritics Flow Sorb 300 instrument of Micromeritics Instrument Corporation.
Infrared spectra The FT-IR spectra of SiO2 and SiATT were obtained in the region between 1800 and 1300 cm⁻¹, using the pressed disk technique and a Nicolet FT-IR spectrophotometer, according to a previously described method [17].
Adsorption of the metal ions by SiATT Adsorption of MXn by SiATT from a solution can be described by the equilibrium equation [9]:

≡SiATT + MXn ⇌ ≡SiATT·MXn (3)

The time required for this reaction to achieve the equilibrium condition was determined beforehand by immersing 100 mg of SiATT in 50 mL of a 5 × 10⁻³ mol L⁻¹ solution of the metal and shaking. At different time intervals, an aliquot of the supernatant solution was separated and the metal ion analysed by complexometric titration using EDTA as the titrant [18]. The quantity of adsorbed metal per unit mass of the adsorbent, Nf, was calculated applying the equation:

Nf = (Ni − Ns) / m

where Ni represents the initial mole number of the metal ion in the solution phase, Ns the mole number of the metal ion in equilibrium with the solid phase, and m is the mass of the adsorbent.
Isotherms of adsorption The adsorption capacity of SiATT was determined at 298 K using the batch technique. About 100 mg of SiATT was added to 50 mL of ethanol solutions of the metal ions (concentrations between 2.0 × 10⁻⁴ and 2.5 × 10⁻³ mol L⁻¹), and the resulting mixture was shaken for 30 min. The solid phase was separated by centrifugation and the metal ion determined in the supernatant solution by complexometric titration.
Anion influence As can be seen in the equilibrium reaction (Eq. 3), the metal ion is followed by the counter ion in the adsorption process and is thus adsorbed as a neutral species. Therefore, experiments to study the influence of the anion in the adsorption process were also carried out. The isotherms of adsorption were determined in the presence of 0.10 mol L⁻¹ NaCl, NaNO3, NaClO4 and NaOAc solutions.
Preconcentration and recovery of the metal ions This study was carried out using a glass column 15 cm in length and 0.6 cm in inner diameter, packed with 2 g of SiATT. Initially, the column was washed with ethanol, and then 100 mL of 0.50 mg L⁻¹ M(NO3)n [M = Cd(II), Co(II), Cu(II), Ni(II), Pb(II), Zn(II) and Fe(III)] ethanol solutions were percolated through the column at a flow rate of 2.0 mL min⁻¹. The column was washed with 50 mL of ethanol and the metal was then eluted with 5 mL of 2.0 mol L⁻¹ HCl solution. All fractions obtained during the elution stage were collected separately and analysed by Flame AAS.
Determination of metal ions in ethanol fuel About 250 mL of fuel ethanol sample was percolated through the column packed with 2 g of SiATT. The adsorbed metal ions were eluted with 5 mL of 2.0 mol L⁻¹ HCl solution and analysed by Flame AAS. The concentrations of the metal ions were also determined by Flame AAS after conventional preconcentration, in which the first step is to evaporate the ethanol solution to dryness [19].
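To make the arithmetic of the column preconcentration concrete, the sketch below (a minimal illustration, not code from the paper; the function and variable names are hypothetical) converts a concentration measured in the 5 mL HCl eluate back to the concentration in the original 250 mL fuel ethanol sample, assuming the roughly 100% recovery reported later.

# Minimal sketch (not from the paper): back-calculate the metal
# concentration in a fuel ethanol sample from the AAS reading of the
# 5 mL HCl eluate, assuming ~100% column recovery.

def sample_concentration(eluate_mg_per_L: float,
                         sample_volume_mL: float = 250.0,
                         eluate_volume_mL: float = 5.0,
                         recovery: float = 1.0) -> float:
    """Return the metal concentration (mg/L) in the original sample."""
    enrichment = sample_volume_mL / eluate_volume_mL  # 50-fold here
    return eluate_mg_per_L / (enrichment * recovery)

# Example: an eluate reading of 2.5 mg/L Cu implies
# 2.5 / 50 = 0.05 mg/L Cu in the fuel ethanol.
print(sample_concentration(2.5))  # 0.05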
Determination by AAS The concentrations of metal ions collected from the SiATT column were determined by Flame AAS according to the standard guidelines of the manufacturer (spectrometer: VARIAN-INTRALAB AA-1475), choosing resonance lines for the metals and deuterium-arc lamp background correction [20]. For calibration, synthetic standard solutions in 1.0 mol L⁻¹ HCl, comparable to the samples, were used.
Characteristics of the material Figure 1 shows the IR spectra of SiO2 (Fig. 1a) and of the chemically modified silica gel (Fig. 1b). In Fig. 1b it is clearly observed that the band at 1660 cm⁻¹ is due to the δNH2 mode, indicating that bonding to the silica matrix through the bridging propyl group is made by the sulphur atom, as indicated in Eq. 2. The H2O deformation mode is observed at ca. 1630 cm⁻¹ as a shoulder. Other bands of interest are observed at 1500 and 1440 cm⁻¹ and can be assigned to the νCN and νCC coupled modes of the functional group. The chemical analysis of SiATT yielded 0.53 mmol g⁻¹ of functional groups attached to the silica surface, and the specific surface area was 378 m² g⁻¹. The attached functional groups were very stable over the various cycles of adsorption-elution of the metal ions by the adsorbent in the column.
Adsorption isotherms An important aspect of this material is the time necessary for the adsorption process to achieve the equilibrium condition. Figure 2 shows the plot of Nf as a function of time for Co(II) and Zn(II) as examples. The system achieves the equilibrium condition very rapidly, in about 15 min for both metals, because in the present case the functionalization occurs on the matrix surface, which has sufficiently large pores (0.6 nm). In a solvent of lower dielectric constant, the influence that an anion can have on the adsorption process due to the anion-cation interaction may be appreciable, since the anion follows the metal ion when it diffuses from the solution to the solid phase and is adsorbed as a neutral species MXn [9] (see Eq. 3). Figure 4 shows the influence of various anions (Cl⁻, NO3⁻, ClO4⁻ and AcO⁻), at a concentration of 0.1 mol L⁻¹, on the copper adsorption. No significant decrease of the adsorption is observed for these anions, indicating that metal-to-ligand bond formation is the main factor determining the amount of adsorbed metal.
Table 1. Recoveries of metal ions using the column method at 298 K.
Recovery and determination of the metal ions Table 1 shows the recoveries of each ion from a column packed with SiATT using HCl as eluent. Passing 5 mL of 2.0 mol L⁻¹ HCl solution, 100% recovery was achieved, within experimental error, for all metal ions. The recovery experiment for each metal ion from a synthetic solution served as the basis for a rapid method for preconcentration and determination of metal ions in fuel ethanol. Table 2 shows the concentrations of the metal ions in samples of fuel ethanol produced in three different plants. In general, Cd(II) does not occur in fuel ethanol, or its content is lower than the contents of the other metals by a factor of 10-100. The metals Co(II) and Pb(II) were not found in detectable amounts. The content of Fe(III) depends on the degree of corrosion of the distillation equipment [21]. The concentrations of Cu(II) are the highest in the analysed samples and correspond to the contents of this metal normally found in fuel ethanol [22]. These results are in accordance with results obtained using the conventional preconcentration method [19].
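Stepping back to the batch isotherms above, the maximum adsorption capacities quoted in the abstract are the kind of quantity usually extracted by fitting a Langmuir model to Nf versus equilibrium concentration. The sketch below is an illustration under that assumption, not code or data from the paper; the data points are hypothetical.

# Minimal sketch (assumed analysis, not from the paper): fit a Langmuir
# isotherm Nf = Ns_max * K * C / (1 + K * C) to batch adsorption data
# to estimate the maximum adsorption capacity Ns_max (mmol/g).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Ns_max, K):
    return Ns_max * K * C / (1.0 + K * C)

# Hypothetical equilibrium concentrations (mol/L) and Nf values (mmol/g)
C = np.array([2e-4, 5e-4, 1e-3, 1.5e-3, 2e-3, 2.5e-3])
Nf = np.array([0.05, 0.09, 0.14, 0.16, 0.18, 0.19])

(Ns_max, K), _ = curve_fit(langmuir, C, Nf, p0=(0.2, 1e3))
print(f"Ns_max = {Ns_max:.2f} mmol/g, K = {K:.0f} L/mol")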
Conclusion 5-Amino-1,3,4-thiadiazole-2-thiol groups attached to a silica gel surface can readily be used to adsorb metal ions from ethanol solution. The relatively high chemical stability of the material in ethanol, and the speed with which the metal ions are adsorbed, make it potentially useful for analytical purposes.
Table 2. Determination of metal ions in ethanol fuel after preconcentration by the proposed method (n = 3; 250 mL of sample; volume of eluent = 5 mL) and by the conventional preconcentration method [19].
2018-01-05T19:03:35.348Z
1998-12-01T00:00:00.000
{ "year": 1998, "sha1": "665555f65caa15815d3076bdd9068bdccf1d9f00", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/jbchs/a/5wcsssGN7PwpTVs668J6CmS/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "665555f65caa15815d3076bdd9068bdccf1d9f00", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
235683530
pes2o/s2orc
v3-fos-license
Zero-shot Learning with Class Description Regularization The purpose of generative zero-shot learning (ZSL) is to learn from seen classes, transfer the learned knowledge, and create samples of unseen classes from the descriptions of these unseen categories. To achieve better ZSL accuracies, models need to better understand the descriptions of unseen classes. We introduce a novel form of regularization that encourages generative ZSL models to pay more attention to the description of each category. Our empirical results demonstrate improvements over the performance of multiple state-of-the-art models on the task of generalized zero-shot recognition and classification when trained on textual-description-based datasets like CUB and NABirds and attribute-based datasets like AWA2, aPY and SUN.
Introduction Image classification methods have advanced significantly in the past few years. This has largely been driven by large amounts of data per class, which have enabled models to learn the classes. However, data gathering can be time-consuming and expensive. Further, many rare classes may not have sufficient training data. This has led to the creation of "zero-shot learning" (ZSL) methods, which aim to leverage other information, typically natural language descriptions of classes, to learn about classes with little or no directly labelled data available. Recent generative ZSL methods have gone further; instead of only classifying unseen classes, they aim to also generate samples from unseen classes [6,14,13,9,7,2]. Despite recent advances in the field of generative ZSL, there are still significant challenges. Generative ZSL methods do not guarantee that the generated visual examples of unseen classes deviate meaningfully from seen classes. That is, there is a risk that the generated images are too similar to samples from the seen classes. Another problem arises when the model is forced to generate samples of unseen classes that arbitrarily deviate from seen classes. In this case there is a risk that the generated images do not follow the description of unseen classes; instead, the primary property of the generated images is that they deviate sufficiently from seen classes. We believe paying closer attention to the details of the description is key to solving both of these issues. Inspired by this, we introduce a new model that aims to encourage the generative model to pay closer attention to these details. Specifically, the model includes a mapping from the generated visual features back to the original text or attributes of a class. By requiring that there exists a mapping from the generated visual features to the class-specific description, we force the generator to pay closer attention to these inputs. This is implemented by adding an additional loss function which penalizes the generator and regularizer if the generated description is not similar to the input description. We call our proposed method Description Generator Regularized ZSL (DGRZSL). The approach is unsupervised and not tied to a specific generative ZSL approach, so it can be added to any ZSL approach that uses the descriptions of seen and unseen categories with minimal modifications to the underlying generative ZSL approach.
Setting In our zero-shot learning setting each data point consists of visual features, a class label, and a semantic representation of the class. These semantic representations are either textual or attribute based. In this section, we introduce notation to represent training and test data.
Let $r_i^s \in \tau$ and $r_i^u \in \tau$ represent semantic representations of seen and unseen classes, where $\tau$ is the semantic space with distribution $p_{rep}$. $N^s$ is the number of seen (training) image examples, $x_i^s \in X$ is the visual feature vector of the $i$-th image in the visual space $X$ with distribution $p_{data}$, and $y_i^s$ is the corresponding category label. The available training data is denoted as $D^s = \{(x_i^s, y_i^s, r_{y_i}^s)\}_{i=1}^{N^s}$, where we have $K^s$ unique seen class labels. Additionally, we denote the sets of seen and unseen class labels as $S$ and $U$, where $S$ and $U$ have no labels in common. The zero-shot learning task is then formulated as predicting the label $y^u \in U$ of an unseen class sample $x^u \in X$. Generalized ZSL (GZSL) is formulated as predicting the label $y \in U \cup S$, which means the search space at test time includes labels from both seen and unseen classes [12]. Figure 1 shows an overview of our model, which we describe next. (Figure 1, bottom: the model and the input data when the model is trained with the hallucinated text introduced in [2]; the second head of the discriminator, responsible for classification, is not used in this training process and is therefore omitted from the diagram.) The basic generative ZSL model is based on a generative adversarial network [5] and was introduced in [14]. A generator network is trained to map samples of noise and a representation of the class into visual features. The noise, $z$, is sampled from a Gaussian distribution with mean zero and standard deviation one. A discriminator network takes visual features as input. Its output is a classification as to whether the input visual features were real or generated. The discriminator network can also include an additional output head which predicts the class label. The real/fake prediction of the discriminator for an input image and the predicted label of a seen class $k \in S$ given the image are denoted $D_r(\cdot)$ and $D_{s,k}(\cdot)$, respectively. The architectures for these networks are as described in [14].
Method The contribution of this work is the addition of a semantic representation generator network (SR) and a corresponding loss to this model. The SR generator network learns to map from the visual features of a sample to the semantic representation of the class. An added loss function (described below) penalizes differences between the output of the SR network and the provided semantic representation of the class. This explicitly requires the generated visual features to contain more information about the semantic representation of a class and encourages better generalization to the unseen classes. The SR generator network consists of three fully connected layers with ReLU activations, generating semantic representations that describe the input visual features. We explain the loss function in detail in the following sections. Previous work [2] identified a challenge with training generative ZSL models of this form. Specifically, the generator, $G$, never sees data from the unseen classes, neither visual features nor semantic representations. While this is, of course, the definition of zero-shot learning, it means that the generator sees very limited variability in semantic representations during training. In response, [2] proposed augmenting the training process to include novel, hallucinated semantic representations of new classes for which the generator would try to generate samples.
To generate the new semantic representations, two classes $a$ and $b$ are picked at random, with $r_a^s, r_b^s \in \tau$ denoting their semantic representations. A random convex sum of these features is then used to create the hallucinated representation:

$r^h = \alpha r_a^s + (1 - \alpha) r_b^s$

where $\alpha$ is uniformly sampled from the interval [0.2, 0.8] [2]. After training, the generative model can be used to generate visual features of unseen classes. These samples can then be used to train a classifier as in a regular classification task.
SR Loss Function Here we introduce the DGRZSL loss. This loss is in addition to other terms that are commonly used in existing generative ZSL approaches [13,14,2]. The main contribution of DGRZSL is the addition of a Semantic Representation (SR) network. The SR network maps from visual features to semantic representations. To constrain the output of this network, and to encourage the visual features to represent information present in the semantic features, the added loss function encourages the generated semantic representation for the generated features to match the input. In essence, the model ensures that the combination of the visual feature generator and the semantic representation generator forms an autoencoder. The loss for our SR generator network is as follows:

$L_{SR} = -\mathbb{E}_{(x, r^s) \sim p_{data}}[\mathrm{sim}(SR(x), r^s)] - \mathbb{E}_{r^s \sim p_{rep}, z}[\mathrm{sim}(SR(G(z, r^s)), r^s)] - \mathbb{E}_{r^h \sim p^h_{rep}, z}[\mathrm{sim}(SR(G(z, r^h)), r^h)]$

where $x$ denotes the visual features, $r^s$ denotes the semantic representations of the seen classes, $p_{data}$ is the training data distribution of $x$ and $r^s$, $r^h$ denotes the hallucinated semantic representations generated from $p^h_{rep}$ as described above, and $\mathrm{sim}$ is a function which measures the similarity between semantic representations. All terms encourage the semantic representations produced by $SR(\cdot)$ to be similar to the "correct" semantic representation, even in the case of hallucinated semantic representations. The first term considers visual features from the training data with known classes and hence known semantic representations. The second term considers generated visual features given semantic representations of known classes. Finally, the third term considers generated visual features with hallucinated semantic representations. While all terms encourage SR to generate accurate semantic representations from the visual features, most critically, the last two terms also encourage $G$ to produce visual features which meaningfully capture the input semantic representations.
Training The above SR loss function is used in conjunction with the standard generator and discriminator losses and adversarial training used in GAZSL [14] and CIZSL [2].
Baselines and Evaluation The most relevant baselines for our method are the GAZSL [14] and CIZSL [2] ZSL models on which our approach is built. These methods are state-of-the-art generative ZSL approaches. Both GAZSL [14] and CIZSL follow the same procedure to evaluate the performance of their models. A dataset is split into training and test sets. The training set is used to train the weights of the networks, and performance on the test set is evaluated every few epochs. The final reported performance is that of the model which achieved the best test set accuracy during training. A similar procedure was used to tune hyperparameters. However, this is an unrealistic representation of model performance, as model selection is done based on the test set itself. Instead, we propose to use a validation set, disjoint from the training and testing sets, to select the final model.
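Before turning to the evaluation, the SR regularizer above can be made concrete with a minimal PyTorch sketch. This is an illustrative reading of the description, not the authors' code: the layer widths, the cosine choice of sim, the assumption that G takes (z, r), and all names are assumptions.

# Minimal sketch (assumptions: layer widths, cosine similarity as `sim`,
# G(z, r) signature; all names are illustrative -- not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    """Maps visual features back to semantic representations
    (three fully connected layers with ReLU, as described)."""
    def __init__(self, vis_dim, sem_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, sem_dim),
        )

    def forward(self, x):
        return self.net(x)

def hallucinate(r_a, r_b):
    # Convex combination of two seen-class descriptions, alpha ~ U[0.2, 0.8]
    alpha = torch.empty(r_a.size(0), 1).uniform_(0.2, 0.8)
    return alpha * r_a + (1 - alpha) * r_b

def sr_loss(sr, G, x_real, r_real, r_seen, r_hall, z_dim=128):
    sim = lambda a, b: F.cosine_similarity(a, b, dim=1).mean()
    z1 = torch.randn(r_seen.size(0), z_dim)
    z2 = torch.randn(r_hall.size(0), z_dim)
    # Three terms: real features, generated features for seen classes,
    # and generated features for hallucinated descriptions.
    return (-sim(sr(x_real), r_real)
            - sim(sr(G(z1, r_seen)), r_seen)
            - sim(sr(G(z2, r_hall)), r_hall))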
After training, the performance on this validation set is used to select a model, and the performance on the test set is then evaluated to give a fairer and more accurate picture of model performance. However, as a consequence, the results reported for the baseline models under our evaluation method differ from the results reported in their original papers [2,14]. For transparency, we report results with both evaluation protocols. Our model outperforms both baseline models in most cases regardless of the evaluation method used. In what follows we limit the analysis to the results obtained by using the validation set for model selection. Table 1 summarizes the accuracy achieved by the proposed method and the two baseline models on the attribute-based datasets. DGRZSL outperforms the state-of-the-art baseline methods in top-1 accuracy in all cases, with an average improvement of 9.4% in the case of the APY dataset. DGRZSL also displayed significantly improved performance on the seen-unseen H metric, improving it by 29.65%, 1.63%, and 10.91% over state-of-the-art on AWA2, SUN, and APY, respectively. Table 2 shows the results achieved by our model on the CUB and NAB datasets for their easy and hard splits, compared to the two baseline models. DGRZSL outperformed the other models on easy splits when top-1 accuracy is used to evaluate the models. The advantage of the model becomes clearer when the seen-unseen AUC metric is used, as DGRZSL outperforms the other models on most benchmarks. The model is most successful on easy splits, resulting in average improvements of 2.13% and 2.73% for top-1 accuracy and seen-unseen H, respectively. CUB HARD is the only case where our method fails to improve upon the baselines. Refer to Figure 2 for the visualization of the seen-unseen curves for our model, GAZSL and CIZSL on all four benchmarks.
Conclusion We introduced the Description Generator Regularized ZSL (DGRZSL) model. DGRZSL includes an additional component which produces semantic representations of the underlying classes based on generated visual features. Combined with an additional regularization, this encourages the generated semantic representations to be consistent with the input to the visual feature generator for both seen and hallucinated classes. Our experiments showed that this modification improved generalization performance over state-of-the-art generative ZSL models in terms of both top-1 accuracy and seen-unseen metrics. Our evaluation on multiple benchmark datasets shows that DGRZSL performs well for different types of semantic representation, including both textual and attribute-based class descriptions.
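For reference, the seen-unseen H metric reported in the results above is not defined explicitly in the text; in the GZSL literature it is conventionally the harmonic mean of seen and unseen class accuracies, a reading assumed in this one-line sketch.

# Conventional GZSL seen-unseen harmonic mean (assumed definition of the
# H metric reported above; not stated explicitly in the paper).
def harmonic_mean_H(acc_seen: float, acc_unseen: float) -> float:
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

print(harmonic_mean_H(0.60, 0.40))  # 0.48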
2021-07-01T01:42:19.338Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "90bda3f5eaca18250477989f7a30e3e60c85b4b8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "90bda3f5eaca18250477989f7a30e3e60c85b4b8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
224898461
pes2o/s2orc
v3-fos-license
Knowledge transfer on sustainable bamboo forest management through social capital approach in Ngada Regency, Indonesia Bamboo is known as a multi-purpose plant and currently has potential as a substitute for wood products. Demand for bamboo from the industrial sector is ever higher. Although many countries practice bamboo cultivation, in Indonesia bamboo tends to be left to grow naturally and still receives little management. The threat of unsustainable exploitation can decrease bamboo productivity and lead to its scarcity. Sustainable bamboo forest management (SBFM) systems have emerged as a solution, but knowledge of such systems has not yet been widely transmitted among bamboo farmers and owners. This paper discusses the transfer of knowledge on sustainable bamboo forest management using social capital as an approach. It uses data from a research project conducted from 2018 to 2019 in Ngada Regency, East Nusa Tenggara, Indonesia. The results indicate that elements of social capital such as trust, organizations, social networks, and norms or rules are embedded in the social institutions that exist in the community. This research shows that Sa'o and BUMDes could be the most promising media for transferring knowledge about sustainable bamboo forest systems. Both can be used as an entry point for any actor running a small-scale bamboo industry development program. However, some potential obstacles could arise during the process of knowledge transfer, such as in-group feeling among the indigenous community, the assumption that bamboo is a social good rather than an economic good, a lack of concern with the commercialization of bamboo, the complexity of inheritance law in the customary (adat) system, and the involvement of adat elites in political practices related to local elections.
Introduction It is undeniable that bamboo is a multi-purpose plant with potential as a substitute for wood products. Modern utilization of bamboo, with its higher value-added opportunities, will increase community income. This has been proven in China, where bamboo plays an important role in the development of industry in rural areas of Anji City, Zhejiang Province [1]. Globally, demand for bamboo from industry players is increasing [2]. In terms of bamboo, Ngada Regency has high potential for development. Bamboo in Ngada is dominated by the bambu betung species (Dendrocalamus asper). This species grows well in Ngada, as it suits local climatic and geographical conditions. There is also a strong connection between bamboo and local culture: Ngada people have customary laws, called waja and ri'i, to protect bamboo from threats. Since 2012, there has been a laminated bamboo industry in Ngada Regency, which is still operating. It draws its bamboo supply from the bamboo forest around Golewa Sub-District. The bamboo industry cooperates with NGOs in assisting bamboo farmers and owners in supplying its industrial materials.
Procedures This study combined quantitative and qualitative approaches in order to meet the research objective. Four methods of data collection were used, as follows. The first was a socio-economic survey.
This method was conducted in 2018 and 2019 in ten purposively selected villages in Golewa Sub-District, Ngada Regency: Were I, Were IV, Dadawea, Radabata, Ratogesa, Ulubelu, Wajamala, Langagedha, Rakateda II, and Sarasedu I. A total of 111 respondents (N = 111) were interviewed as the sample, using semi-structured questions written on questionnaire sheets; of the questions asked, only those relevant to this topic are analysed here.
Tools and materials This study used tools such as voice recorders and notebooks for recording interviews; visual aids such as flipchart papers, pens and markers for assisting participatory rural appraisals and focus group discussions; as well as socio-economic tools such as questionnaires and other documentation equipment, e.g. digital cameras, smartphones, and computers. Meanwhile, the materials used for this study were informants, respondents, and other objects, e.g. artifacts, mementos, and socio-cultural events that fit the study objectives and appeared during data collection.
The second method was the ethnographic interview, used to obtain detailed and in-depth information on social and cultural realities [21]. In this case, it was employed to generate information on the construction of social capital for SBFM in Ngada. The interviews were conducted in July and August 2019 with 8 key informants, selected on criteria such as deep knowledge of bamboo and its social, cultural, and philosophical dimensions. The snowball technique was employed to identify the key informants. Open interview techniques were used, focusing on the study of customary dominance in bamboo forest management. The third method was the focus group discussion (FGD), which invited stakeholders to brainstorm about bamboo forest management, its problems, and possible solutions [21,22]. Two FGDs were held in 2018. The first was in Village Were I, focusing on the traditional bamboo management system; about 20 people attended, with an equal gender proportion. The second FGD was in Village Ratogesa, attended by 18 young people, reflecting its emphasis on the role of young people and their prospects in bamboo management. Another FGD was conducted in 2019, attended by more than 30 villagers; Village Radabata was selected purposively as the host. This FGD focused on identifying the role of BUMDes (village-owned enterprises) and their potential for supporting the bamboo market. The fourth method was participatory rural appraisal (PRA) [24], conducted with two Sa'o (the smallest unit of the traditional family system in Ngada), namely Sa'o Susuteme in Village Dadawea and Sa'o Gedhe Ana in Village Waia. About 25 people attended the PRA for each Sa'o. The PRAs were held at the beginning and end of March 2019 and aimed to identify the traditional bamboo forest management system among local communities.
Social Capital: An Invisible Power Inside the Social Institutions Lyda Judson Hanifan was the first person to use the term social capital. In his writing, The Rural School Community Center, the words social capital appeared in an explanation of the community's ability to overcome various problems independently [25]. The word capital refers to something that is owned, an asset that can make the community grow properly.
Social capital can take the form of goodwill, friendship, mutual sympathy, and the social relations and cooperation between individuals and families that form a social group [25]. Social capital can be an overall effort related to the mastery of relationships between groups in an institutional network based on knowing and recognizing each other [26]. Social capital is determined by social structure and access to that structure. The social structure consists of organizations and the rules that govern their members [15]. There are three pillars supporting social capital, namely trust, access to information, and norms [15]. Moreover, social capital can also be seen as a feature of social organizations, consisting of the networks, norms, and trust that facilitate coordination and collaboration among a social group's members in order to reach common benefits [27]. Therefore, social capital is basically a potential asset owned by social groups, such as organizations, institutions, communities, or societies, which is commonly used for overcoming common issues and reaching common goals.
Data analysis Quantitative data from the socio-economic survey were statistically analysed using frequency tables and presented in narrative form. Qualitative data from ethnographic interviews, FGDs and PRAs were analysed using a thematic analysis technique tied to the needs of this study, such as identification of social capital and social institutions, as well as potential barriers to knowledge transfer through social capital among communities. These primary data were triangulated through dialogue with literature gathered from scientific journals and popular writings. The final results of the overall data analysis are presented in descriptive narrative form.
In Indonesia, the discourse on social capital as a hidden power possessed by social groups has become a hot topic among social scientists. For instance, one finding from social capital studies in Indonesia was that strong social capital in a community is determined by effective communication and coloured by similarity of concepts, competencies, connections, credibility, and care among group members [28]. In the forestry sector, it was shown that social capital in Jambi and West Sumatera could encourage forest sustainability, in contrast to the economic benefits of forests, which lead to forest destruction [29]. Furthermore, a study of bonding social capital, an innovation based on the traditional concept of social capital, showed that it can be a power in encouraging the adaptive capacity of rural communities when carrying out infrastructure development in East Java [30]. Meanwhile, from an institutional perspective, social capital is perceived as an entity integrated with the organizations or groups in society. Social capital has a connection with the capacity of the state in natural resource management [16]. Figure 2 shows that the private institutional model is the weakest in terms of social capital and state capacity; this typically occurs with private companies. State management institutions have strong state capacity but do not prioritize social capital, which weakens their position; this commonly happens in government institutions or BUMDes. On the other hand, strong social capital and weak state domination characterize the community-based management institutional model, which is commonly found in natural resource management systems in Indonesia.
Lastly, the most ideal institutional model is so-called collaborative management. In this model, social capital and state capacity are equally strong in sustaining the institutional superstructure and infrastructure. Partnerships in social forestry schemes in Indonesia may be a good example of this model.
Identifying social capital in Ngada The existence of social capital in a society cannot be seen explicitly. However, it can be more easily perceived through the institutions, whether formal or informal, that exist in society [16]. Identification of social capital in Ngada Regency was therefore carried out through the social institutions existing in the community. Source: primary data of socio-economic survey, 2018-2019. Table 1 shows that of the nine social institutions respondents were asked about, only fishermen's groups were absent. This makes sense, as Golewa Sub-District and the sample villages lie in fertile, mountainous areas. Of the nine social institutions identified in the survey, religious groups and indigenous (adat) groups played a very dominant role in people's lives. Meanwhile, the results of in-depth interviews, FGDs, and PRA showed that, in general, social institutions in the community consisted of formal and informal institutions, as seen in Table 2. The formal institutions identified in Ngada Regency with relevance to sustainable bamboo management were BUMDes, Koperasi, farmer groups, and village governments. BUMDes is a village-government-owned business institution intended as a source of income for the village government. It is established based on government regulations, and its venture capital comes from the government as well. Koperasi are alternative institutions to banks and moneylenders operating in the countryside. The main function of Koperasi is generally as a savings and loan institution. Communities borrow money from Koperasi mostly for business capital, financing children's schooling, and traditional party events. They repay Koperasi loans in installments from the profits of their businesses or harvests. On the other hand, farmer groups were apparently less popular in Golewa. Many farmer groups had been formed, but most were not functioning; the coffee farmer group was the most active group at the study site. Farmer groups are a place for members to share knowledge and information, and to counter the control of harvest prices by middlemen. Finally, the last institution is the village government, the most formal institution existing at the village level and an extension of the central government. In addition to carrying out the administrative functions of government, the village government is the front line for the success of government development programs. The effectiveness of village government depends on the quality of its leaders and apparatus. The informal institutions identified with relevance to sustainable bamboo management were adat, religious groups, young people, and women's groups/arisan. The adat institution was the most influential informal organization in Ngada Regency. The ethnographic interviews explained that, since the initial phase of its formation, the Ngada community has used a tribal social system to regulate its living governance. The smallest unit of the tribe as a social system is called Sa'o.
A Sa'o encompasses several family heads who are members of one tribe. Its regulation includes matters of kinship law, which ultimately determines inheritance law and land tenure law as well as economic regulation. In Village Were, most tribes applied a patrilineal system: men played the most important role in making decisions, especially about accepting inheritance. In other regions, such as Villages Radabata, Dadawea, Ratogesa and other lower-lying areas, there was a tendency to adhere to matrilineal kinship. In the matrilineal system, mothers have the right to make decisions in negotiations in the Sa'o. Women also have the right to occupy the Sa'o's traditional house as a place to live. In practice, however, these women still sought the opinions and considerations of adult men in the Sa'o before deciding on a matter; for example, when women were called on to decide on the use of bamboo, land use for planting, or the types of commodities to be planted. Sa'o operates a prohibition system to avoid ecosystem damage and crop failure; the waja and rii customary laws are clear examples of this system. Waja is basically a prohibition on doing anything in a location (garden) for a certain period of time, intended to restore the condition of a damaged ecosystem so it can be reused in the future. Rii is a traditional ritual enacting a ban on taking the harvest of certain commodities that the Sa'o will use for particular purposes. The application of rii likewise runs for a certain period of time to protect the yield to be harvested, so the harvest can be maximized as expected. Relations between Sa'o, as well as between tribes, can be a powerful force for indigenous people in Ngada. These social relations can be seen clearly, for example, when the traditional houses of a Sa'o are being repaired. The depth of the relationship is shown by the presence of representatives of each tribe and of other Sa'o groups at the traditional ceremony. They usually bring rice, moke (palm wine), and livestock (buffalo, pigs, chickens) to give to the Sa'o family repairing their house; all these gifts are then cooked and eaten together. Next are the religious groups. The majority of Ngada people embrace Catholicism, so the existing religious social groups are generally organized by, and affiliated with, the Catholic Church. An example is the Catholic Base Group (Kelompok Umat Basis, KUB), whose members are also members of the Catholic churches. In addition to religious activities, KUB also carries out social activities such as choir training, village cleansing, and early childhood education. The influence of the church can also be seen among youth, whose role is organized by the Catholic Church institution named Catholic Youth (Orang Muda Katolik, OMK). Almost all young people in the Catholic Church in Golewa are OMK members. OMK activities are generally in the fields of youth and sports, such as organizing volleyball competitions, football, choirs, and performing arts. OMK also independently facilitates training and knowledge sharing about entrepreneurship. Lastly, another informal group is the arisan, dominated by mothers and commonly formed at the neighbourhood level. Arisan is not just a matter of who gets the money at a social gathering; more substantially, it fosters communality and communication between members regarding the actual social problems being faced together.
The potential institutions with powerful social capital Based on the results of the social capital identification, two social institutions have the potential to be utilized to support the transfer of knowledge on SBFM: Sa'o, an informal institution, and BUMDes, a formal institution. The characteristics of these institutions are explained in Table 3 [16]. Sa'o is the smallest unit of the tribal social system adopted by the Ngada people. Social cohesion in Sa'o is constructed through elements of social capital, such as high trust. Trust grows from intensive interactions lasting a very long time [28]; this has certainly been the case since the ancestors of the Ngada people travelled to the foot of Mount Inerie in Flores from rear Yunnan in China, as explained in the ethnographic interviews. The similarity of fate, history, and ancestry, forged through the long journey of living together as a group, makes trust flourish naturally among Sa'o members. This gives rise to a patrimonial leadership in Sa'o, hereditary and based on the obedience of its members; the leader of a Sa'o is customarily determined according to his line of descent. Initiatives in Sa'o arise in a bottom-up manner. The close relationship and high trust among members turn each individual problem into a common problem; therefore, a deliberative meeting among fellow members of the Sa'o is often held to determine a solution. Patrimonialism in the Sa'o makes leadership run stably. Decisions are taken after listening to input and information from the traditional deliberative meeting, and are always firmly determined, undoubted, and implemented consistently by the leader of the Sa'o and all members. Sa'o also has a fairly good household management system. Sa'o is an extended family [31] with many communal, jointly managed assets such as land, houses, gardens, and human resources. Management of these assets is regulated through a division of tasks assigned to each family head (nuclear family) in the Sa'o [31]. The division of tasks is not imposed authoritatively by the Sa'o leader but is based on a deliberative meeting, to avoid misunderstandings and to ensure that each Sa'o member accepts the decisions. Because of these processes, the assigned work is usually carried out happily, fostering a positive work ethic that encourages Sa'o members. Sa'o is an informal social institution based on adat (the traditional system) and is non-political; adat elites therefore must not bring their political interests into Sa'o. This preserves the trust, norms and rules of adat, as well as the social cohesion that has long developed. Sa'o also refers to customary law, for example the customary laws of waja and rii, which are applied to protect natural resources. However, according to the PRA, waja and rii now need to be formally strengthened if they are to be implemented in today's society. Strengthening the role of customary law at the village level can proceed through village regulations [32], for example by making village regulations (Perdes) regarding waja or rii in villages with large areas of bamboo forest. Adat lobbying is the approach most needed when taking Sa'o as an entry point for knowledge transfer activities, such as promoting the thousand-bamboo village program or SBFM knowledge.
Building social and cultural trust between program implementers and the Sa'o community can be part of this lobbying. The lobbying can also be conducted through traditional rituals and ceremonies, staying in the community for extended periods (live-in), and building a network with Sa'o members or their representatives in order to carry out ongoing assistance. Successful lobbying could produce the best institutional model, collaborative management [16].
BUMDes In contrast, BUMDes can be categorized as part of the state management institutions [16], because BUMDes has strong state capacity but does not prioritize social capital. Nevertheless, it is still possible to select BUMDes as the formal institution and potential medium to support knowledge transfer on SBFM. This institution is often touted as the most feasible entry point for transferring knowledge about SBFM to the community, because BUMDes has a strong legal basis and assured financial resources. Human resources at BUMDes can be appointed at any time to work based on a decree or bylaws. Financially, the BUMDes funding source is already available; for example, village funds can be used as financial support for BUMDes. The distribution of tasks is carried out based on formal documents. The working approach in the BUMDes environment is top-down; the head of BUMDes often takes decisions unilaterally without any deliberation process. This makes BUMDes bureaucratic and causes low obedience to leaders, and the work ethic that grows is also low. A different situation arises if the BUMDes is led by a person with social influence, whether inside or outside the organization; that influence can change the work ethic, obedience, and trust of its members in a positive direction. BUMDes is also vulnerable to vested and political interests, because its existence cannot be separated from village government institutions. Local democratic events, such as the regional general elections to choose local leaders (e.g. governor, regent, mayor, or village head), known as pilkada, strongly affect the performance of BUMDes. The will to power of village heads or local political elites can threaten the existence of BUMDes in carrying out its duties and functions. An example occurred in BUMDes Radabata: although the foundation of the formal legal superstructure was strong and the infrastructure of funding and human resources was adequate, in practice the implementation did not last long and was not sustainable. Of the six BUMDes institutions, only one was running, and not optimally. The BUMDes brick plant was a clear example showing that state capacity does not always guarantee the sustainability of programs or projects. Conversely, potential social capital built from the bottom up is critically needed to mobilize the community in the business sector. It is nevertheless still possible, and sometimes necessary, to use BUMDes as the entry point for knowledge transfer on SBFM; this requires strong political lobbying supported by formal and bureaucratic regulations to ensure that all plans run as expected.
Potential issues in knowledge transfer on SBFM Although the two institutions are highly promising options for knowledge transfer on SBFM, this does not mean there will be no obstacles. Several potential obstacles might be encountered in the process.
In-group attitude Although the survey shows that the public is open to outsiders, the attitude of in-group feeling still cannot be eliminated. In-group feeling is an attitude inherent in traditional society [33]. Neglecting this attitude could ignite resistance of local people toward outsiders; the process of transferring knowledge about SBFM will then be difficult, and can even fail, if problems arise in the community.
Bamboo is a social good, not an economic good For local people, bamboo is not a priority commodity for meeting economic needs. The fertile soil and cool climate of the Golewa region lead local people to choose plantation crops (coffee, cloves, corn, sugar palm) and agriculture (ginger, pumpkin, and vegetables) as the main commodities for their income. Instead, bamboo is used in a subsistence manner by almost all of its owners, both Sa'o communities and individuals. Some people claim to have sold bamboo, but the buyer is usually their own neighbour, not a consistently operating large-scale industry. This will be a limiting factor for the knowledge transfer process on SBFM.
Lack of concern with the commercialization of bamboo The only bamboo industry player in Ngada is PT Indo Bambu. Although the company has implemented SBFM, the knowledge and practice of SBFM remain with the company's workers. The bamboo owners only sign a contract letter permitting the company to manage their bamboo using the SBFM system, and sell their bamboo; the company, through its workers, carries out SBFM in the bamboo forest areas belonging to the owners. The attitude of merely wanting to earn money easily (without work) will certainly hamper the process of knowledge transfer on SBFM.
The complexity of inheritance law in customary systems Most young people are reluctant to be involved in managing bamboo with their family members, because they realize they would not gain the rights to sell the bamboo. Youths (teenagers) who have participated in bamboo management in their families (both extended and nuclear) claimed they were merely drafted in without receiving a share of the bamboo sales profits. Young people's understanding of communal and private land rules and inheritance law is also poor. Therefore, they tend to have a negative perception of the bamboo management system operating in their families: bamboo is the business of the elderly, does not make money for youth, and the division of land (inheritance) containing bamboo is complicated.
Traditional elite involvement in political practices The rise of political processes such as general elections results in fragmented village communities, and this political fragmentation has the potential to cause social and cultural fragmentation in society. Local elites who need votes will use the strategy of entering Sa'o and influencing citizens to vote for them. This is a significant latent challenge for facilitators of knowledge transfer on SBFM, and it can disrupt and even frustrate their efforts.
This socio-cultural fragmentation also has the potential to spark an outbreak of conflict.
Conclusion This study concludes that social capital in the Ngada community is inherent in the existence of social organizations. Sa'o and BUMDes are the two organizations with the potential to become strategic options as media for knowledge transfer on SBFM. However, some obstacles are still likely to arise when these two options are chosen: in-group feeling, the view of bamboo as a social good rather than an economic good, a lack of concern with the commercialization of bamboo, the complexity of inheritance law in customary systems, and traditional elite involvement in political practices.
2020-10-19T18:10:36.050Z
2020-09-23T00:00:00.000
{ "year": 2020, "sha1": "fc90eac4a99ee273fefbe6e0b9eddf19e9923dc7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/935/1/012073", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ce517532d25c276af293a683028af53d217efb8f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Physics", "Business" ] }
6631902
pes2o/s2orc
v3-fos-license
Standardizing Visual Control Devices for Tsetse Flies: East African Species Glossina swynnertoni
Background Here we set out to standardize long-lasting, visually-attractive devices for Glossina swynnertoni, a vector of both human and animal trypanosomiasis in open savannah in Tanzania and Kenya, and in neighbouring conservation areas used by pastoralists. The goal was to determine the most practical device/material that would induce the strongest landing response in G. swynnertoni for use in area-wide population suppression of this fly with insecticide-impregnated devices.
Methods and Findings Trials were conducted in wet and dry seasons in the Serengeti and Maasai Mara to measure the performance of traps and targets of different sizes and colours, with and without chemical baits, at different population densities and under different environmental conditions. Adhesive film was used as a simple enumerator at these remote locations to compare the trapping efficiencies of devices. Independent of season or the presence of chemical baits, targets in phthalogen blue or turquoise blue cloth with adhesive film were the best devices for capturing G. swynnertoni in all situations, catching up to 19 times more flies than pyramidal traps. Baiting with chemicals did not affect the relative performance of devices. Fly landings were twice as high on 1 m² blue-black targets as on pyramidal traps when equivalent areas of both were covered with adhesive film. Landings on 1 m² blue-black targets were compared to those on smaller phthalogen blue 0.5 m² all-blue or blue-black-blue cloth targets, and to landings on all-blue plastic 0.32-0.47 m² leg panels painted in phthalogen blue. These smaller targets and leg panels captured equivalent numbers of G. swynnertoni per unit area as bigger targets.
Conclusions Leg panels and 0.5 m² cloth targets show promise as cost-effective devices for management of G. swynnertoni, as they can be used for both control (insecticide-impregnated cloth) and sampling (rigid plastic with insect glue or adhesive film) of populations.
Introduction Glossina swynnertoni Austen (Diptera, Glossinidae) is restricted to open savannah in north-western Tanzania and south-western Kenya, extending from Tarangire in the south through Manyara to the Serengeti plains, and into the Maasai Mara in the north [1]. Swynnerton [2] found it at 900-1800 m above sea level and considered temperature, humidity, vegetation and the presence of wildlife the key factors controlling its distribution. It is a vector of both human and animal trypanosomiasis in wildlife reserves and in neighbouring conservation areas used by pastoralists [3][4][5][6]. The challenge is to minimize disease transmission through effective management of the vector in the presence of abundant wildlife reservoirs, especially in protected areas. G. swynnertoni is in the savannah or morsitans group of tsetse. Considerable progress has been made in developing visually-attractive control devices such as traps [7] and insecticide-impregnated targets [8], [9] for this group. However, no comparable effort has been made to develop cost-effective devices for G. swynnertoni since initial tests were conducted in Tanzania in 1991-1993 [10]. Presently, local control of this species is being attempted with techniques refined for other species. For example, in Kenya pastoralists deploy insecticide-impregnated targets or apply pyrethroid sprays to livestock [11] in a largely uncoordinated effort at vector control.
Savannah tsetse are attracted to artificial objects of modest size [9] that are conspicuous relative to their immediate environment [12]. Traps and targets of phthalogen blue (peak reflectance at 465 nm) and/or black cloth of about 1 m in dimension are typically effective for this group of tsetse [13]. G. swynnertoni, like G. morsitans [8], is nevertheless difficult to catch with simple stationary devices, as movement and other subtle visual cues [14] are likely involved in host-seeking behaviour. Vehicle patrols or "fly-rounds" were previously used for sampling G. swynnertoni [15], [16], but recent studies now mostly use blue-black cloth traps designed for other tsetse [6]. There have been few comparative tests of the efficacy of modern tsetse traps or targets relative to other methods for collecting G. swynnertoni outside of the work of Ndegwa & Mihok [17] and Ndegwa et al. [18]. These studies showed that trap designs other than the S3 trap were relatively inefficient compared to a 1 m² black sticky target. Unlike for Glossina pallidipes Austen (Diptera, Glossinidae), baiting traps with attractants such as acetone, 1-octen-3-ol, phenols and/or cow urine does not result in large increases in catch [10], [19]. Within the Africa-wide WHO-TDR initiative to develop innovative control strategies for tsetse, we set out to standardize long-lasting, visually-attractive devices for G. swynnertoni. The trials were based on existing trap/target/bait technology, following a similar experimental approach throughout Africa [20]. Trials were conducted in wet and dry seasons in the Serengeti and Maasai Mara to measure the performance of pyramidal traps and targets in phthalogen blue and various alternatives at different population densities and under different environmental conditions. A simple enumeration method (sticky film) was used at these remote locations to compare the trapping efficiencies of devices made of well-characterized colour-fast fabrics (and a blue-painted plastic). The relative performance of devices was also compared with and without chemical baits. The various alternatives were compared to a standard phthalogen blue and black pyramidal trap, which has been the tsetse survey device of choice in Tanzania in recent years [4]. The overall goal was to determine the most practical device/material that would induce the strongest landing response in G. swynnertoni for future use in area-wide population suppression of this fly with insecticide-impregnated devices.
Study sites Studies were conducted in open Acacia-Commiphora-Balanites dry savannah woodland in pastoral areas near the Maasai Mara National Reserve in Kenya and deep within the neighbouring Serengeti National Park across the border in Tanzania. Wild hosts remain abundant in the Serengeti but have declined considerably in the Mara in the last few decades [21]. Livestock were present only in the Mara. In the Serengeti, three sets of studies took place at Death Valley (2°19.9510′S, 34°49.9600′E) at an altitude of 1548 m near Seronera. A first set of experiments was conducted in 2009 in the wet season (July) and repeated at the same sites in the dry season (October). In 2010 and 2012, a second and third series were conducted at the same location, both at the start of the dry season (September). G. pallidipes was also present in scattered evergreen thickets; findings for this species are reported where captures were adequate for analysis.
In the Mara, studies were undertaken at the Olarro hills (1°25′45.4″S, 35°35′0.9″E) at an altitude of 1910 m, and at the Nyonorri hills (1°27′18.9″S, 35°33′43.5″E) at an altitude of 1877 m. Unbaited and baited trials were conducted at these locations separately in the nominal wet season (June), and then repeated at only one location (Nyonorri) in the subsequent dry season (November) of 2009. The shift in locations was necessitated by the unanticipated use of pyrethroid spray-ons by pastoralists at Olarro. Habitats have been extensively altered by human activities in pastoral areas near the Mara; hence G. swynnertoni is mainly found in remnant woodlands along hillsides that receive moisture from the highlands year-round. A severe drought was underway during these tests; hence wild hosts were present only in the nominal "wet season". Vegetation cover was particularly sparse during all seasons in the Mara because of the drought. Cover in the Serengeti was not affected as much, and hence wet and dry season contrasts there were more typical of natural climatic cycles in Tanzania.

Author Summary
Glossina swynnertoni is restricted to open savannah in north-western Tanzania and south-western Kenya, where it is a vector of both human and animal trypanosomiasis in wildlife reserves and in neighbouring conservation areas used by pastoralists. Despite the challenge of minimizing disease transmission through effective management of the vector in the presence of abundant wildlife reservoirs, little has been done to test the efficacy of modern tsetse traps or targets for controlling G. swynnertoni. We made field tests in the Serengeti and Maasai Mara to determine the most visually-attractive, long-lasting and practical object that induces the strongest landing response in G. swynnertoni. Fly landings were twice as high on 1 m² blue-black targets as on pyramidal traps when equivalent areas of these devices were covered with adhesive film. Furthermore, blue leg panels in either cloth or plastic, and blue or blue-black-blue cloth targets under half the size of traditional targets, captured tsetse at equivalent numbers per unit area as the latter. These smaller targets and leg panels show promise as cost-effective devices for management of G. swynnertoni populations, as they can be used for both control (insecticide-impregnated cloth) and monitoring of this species (rigid plastic with insect glue or adhesive film).

Catching devices, materials and baits
In 2009, three catching devices were tested: standard cloth pyramidal traps [22], rectangular cloth targets, and smaller all-blue "leg" panels [23]. The dimensions and design of the targets and leg panels were chosen to reflect current practices in East Africa and are summarised in Table 1. Devices were set in the open, 30 cm off the ground; vegetation was removed from within a few metres of each site. The targets in Kenya were 1.5 m² (1.5 m wide by 1 m high), divided vertically into equal rectangles of blue and black cloth [24]. In Tanzania, the targets were the same size but were divided vertically into three equal rectangles of blue-black-blue [25]. Leg panels in Kenya were made of blue cloth with a surface area of 0.32 m² (65 cm wide by 46 cm high for the upper "torso", plus two "legs" 15 cm high by 8 cm wide). Two kinds of slightly larger leg panels were tested in Tanzania: one of 0.47 m² blue cloth (70 by 64 cm plus 14 by 9 cm legs) and one of 0.45 m² blue-painted plastic (90 by 45 cm plus 15 by 15 cm legs). The plastic was 3-4 mm thick and was painted glossy phthalogen blue. All targets were mounted on supports that allowed limited rotational movement in the wind. The wet season trial in Tanzania occurred under particularly windy conditions.

Two blue fabrics were tested: C180 Azur 623 phthalogen blue 100% cotton (180 g/m², TDV, Laval, France), with a reflectance peak at 460 nm as measured with a Datacolor Check spectrophotometer (Datacolor AG, Dietlikon, Switzerland), referred to hereafter as standard blue cotton; and turquoise blue 65% polyester/35% viscose (234 g/m², Q10067 Sunflag, Nairobi, Kenya), with a peak at 480 nm. The phthalogen blue paint on the plastic leg panel had a peak at 460 nm. A 100% polyester black (225 g/m², Q15093 Sunflag, Nairobi) was used for all devices in all trials described here.

To monitor the number of tsetse landing on cloth targets and leg panels, one-sided adhesive film (30 cm wide rolls, Rentokil FE45, UK) was stitched with thread to both sides of the trapping devices. However, in 2009 in Tanzania only the lower 60 cm of the targets was covered. Plastic leg panels in Tanzania were coated with a non-setting shiny glue (Temoocid, Kollant, Italy). Transmittance spectra for both adhesives are compared to polybutene in Figure 4.4 of IAEA TECDOC 1373 [26]. All of these adhesives are highly transparent in the visible spectrum, but Rentokil film absorbs significantly in the ultraviolet (<400 nm).

In 2010, two supplementary trials were conducted in Tanzania to enumerate flies landing on pyramidal traps compared to 1 m² square targets divided vertically into equal parts of blue and black material (referred to hereafter as the standard target). For this, adhesive film was also attached to the blue-black fabric of the pyramidal traps, to enumerate flies that land but may not be captured. In an additional test, 1 × 1 m squares of adhesive film on their own (i.e. without targets as a backdrop) were compared to 1 m² square targets (with equal parts of blue and black material) covered with adhesive film, to ascertain whether the adhesive film was in itself attractive.

A further set of trials was conducted in Tanzania in 2012 to compare six different two-dimensional cloth targets, in order to evaluate the influence of size, shape and colour combination on fly landing rates. The six devices were: two types of 1 m² square targets, one divided vertically into equal parts of blue and black material, the other divided vertically into three equal parts of blue-black-blue; two types of 0.5 m² targets (0.9 m × 0.55 m), one divided vertically into equal parts of blue-black-blue and the other all blue, both set up horizontally; and two types of 0.25 m² square targets (0.5 m × 0.5 m), one divided vertically into equal parts of blue-black-blue, the other all blue (see Table 1).

A 1:4:8 mixture of 3-n-propylphenol (P), 1-octen-3-ol (O) and p-cresol (C) (Ubichem Research Ltd, Budapest, Hungary; global purity up to 98%) was used as an attractant for the experiments comparing baited devices, based on its general efficacy for several tsetse species [26]. Sachets made of 500 gauge/0.125 mm polyethylene containing 3 g of the mixture were placed below the catching devices, 10 cm above the ground, next to a 250 ml bottle of acetone (A), buried in soil up to its shoulder, with a 2 mm aperture in the stopper. This combination of chemicals is termed the POCA bait.
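As a quick sanity check on the dose per station (our own arithmetic; it assumes the 1:4:8 P:O:C ratio refers to parts by mass, which the text does not state explicitly), a 3 g sachet splits into 13 parts:

    m_P = 3 g × 1/13 ≈ 0.23 g
    m_O = 3 g × 4/13 ≈ 0.92 g
    m_C = 3 g × 8/13 ≈ 1.85 g

so, under that assumption, the p-cresol component dominates the sachet by weight.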
Experimental design
Testing trapping devices and blue materials. To assess which was the best catching device and which the most attractive blue material in the wet and dry seasons at each location, a six-day experiment was carried out comparing six devices in a 6 × 6 Latin square design of days × sites × treatments, with 3 simultaneous replicates. Trapping positions were always >100 m apart, and flies from each device were counted and sexed after 24 hours at each position. The six devices and blue materials tested were: pyramidal traps in standard blue cotton and turquoise blue polyester/viscose; local targets in standard blue cotton and turquoise blue polyester/viscose; and leg panels in standard blue cotton (both countries) and turquoise blue polyester/viscose (Kenya only), or plastic leg panels covered with phthalogen blue paint (Tanzania only). The six-device experiment was repeated using the POCA bait, after the unbaited trials, in the same general area with trapping positions >200 m apart. Flies from each device were counted after 24 hours at each position. The objective was to determine whether baiting changed the performance ranking of the devices/materials.

Landing on trapping devices. To assess the efficiency of 3-d traps versus 2-d targets as landing devices, catches in pyramidal traps with adhesive film on the blue-black fabric were compared in 2010 to catches on 1 m² blue-black (50:50) targets covered with adhesive film. All catching devices were made of standard phthalogen blue cotton and black polyester. Flies caught in the cage of the traps were not included in the total for this comparison. The surface area of adhesive film was the same on both devices, i.e. 2 m². Traps without attached adhesive film were included as controls to estimate the trapping efficiency of the pyramidal device. A 3-day experiment was carried out comparing the three devices in a 3 × 3 Latin square of days × sites × treatments in four replicates. Trapping positions were always >100 m apart, and flies of each sex from each device were counted after 24 hours at each position.

Landing on targets of different size, shape and colour. To assess the influence of size, shape and colour on fly landings, six target types (two 1 m² squares, two 0.5 m² horizontal oblongs and two 0.25 m² squares; see Table 1) were compared in 2012 in a 6 × 6 Latin square design of days × sites × treatments, with 3 simultaneous replicates. All targets were made of standard phthalogen blue cotton and black polyester. Target positions were set up and fly counts were made as in the previous experiments. To investigate the efficiency with which 0.5 m² targets capture tsetse, a fully randomized trial was made in 2012 in which three replicates of oblong sticky targets in blue-black-blue and all blue (Table 1) were each flanked by an adjoining transparent adhesive film target of the same shape and size (sticky on only one side; Figure 1). The aim was to estimate what proportion of flies attracted to the targets circle the device.

Testing adhesive film. To assess whether the adhesive film on its own was attractive to tsetse and could affect the catching device, a comparison was made between catches of tsetse attracted to a stationary 1 m² cloth target of standard phthalogen blue cotton and black polyester (50:50) with adhesive film applied on both sides, and catches on a 1 m² square of adhesive film alone (sticky on one side only, minimal supports).
The two devices were orientated E-W, and a 4-day experiment was conducted following a 2 × 2 Latin square design of days × sites × treatments in four simultaneous replicates. Trapping positions were always >100 m apart, and flies from each device were counted and sexed after 24 hours at each position.

Normalizing fly catches. Area-wide population suppression involves consideration of the cost-effectiveness of materials versus deployment and maintenance. Hence it was important to also quantify catches normalized for the size of each trapping device in the main series of experiments. To derive an empirical adjustment factor for the fact that in 2009 only the bottom 60 cm of the targets in Tanzania was covered with adhesive film, we recorded the heights of flies landing on the blue-black targets in an indicative experiment testing whether a standard blue-black target (1 m², Table 1), covered on both sides with adhesive film, would catch as many flies as a local blue/black Kenyan target (1.5 m², Table 1).

Statistical analysis
In all trials, randomization was set up using design.lsd in the package agricolae [27], R version 2.13.0 [28]. Data were analysed using a linear model in R version 2.13.0 [28], with the additional packages MASS [29] and multcomp [30]. Analysis was performed on log(x+1)-transformed data, including day and position as ordering parameters, and Tukey contrasts were calculated to compare treatments. The Wilcoxon paired test was used to compare fly landings on the blue and black portions of targets, and to compare catches on the transparent film versus the blue-black target. Unless otherwise specified, results are presented as detransformed means. G. pallidipes is not mentioned where captures were too low for meaningful analysis. (A short sketch of this randomization and analysis in R is given below.)
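For readers wishing to reproduce the design, the randomization and model fit just described can be sketched in R; this is a minimal sketch with illustrative treatment labels and a hypothetical data frame 'catches', not the original analysis scripts.

    # Sketch of the 6x6 Latin square randomization and catch analysis
    # (R; packages agricolae and multcomp, as cited in the text)
    library(agricolae)
    library(multcomp)

    devices <- c("pyr.blue", "pyr.turq", "target.blue",
                 "target.turq", "leg.blue", "leg.turq")  # illustrative labels

    # 6x6 Latin square: rows ~ days, columns ~ trapping positions
    lsd <- design.lsd(devices, seed = 1)
    head(lsd$book)   # randomized layout: plot, row, col, treatment

    # Given a data frame 'catches' with factor columns day, site, device
    # and daily counts in 'catch', the model and Tukey contrasts would be:
    # fit <- lm(log(catch + 1) ~ day + site + device, data = catches)
    # summary(glht(fit, linfct = mcp(device = "Tukey")))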
Performance of unbaited trapping devices
When unbaited, both types of blue-black targets (Kenyan and Tanzanian) covered with adhesive film were the best devices for G. swynnertoni. In both countries, and irrespective of season or fabric, sticky targets in the unbaited trials captured more G. swynnertoni than pyramidal traps (P ≤ 0.001, Table 2 and Figure 2). Catches were 2.4-6.7 times higher in three of the trials, and nearly 20 times higher in one trial (wet season, Tanzania). Targets covered with adhesive film also out-performed the smaller all-blue leg panels (all types, and regardless of adhesive), capturing 2.2-3.7 times more flies in Kenya (P ≤ 0.01, Table 2) and 1.5-2.8 times more flies in Tanzania (P < 0.05 for the plastic leg panel; not significant for the cloth leg panel, Table 2). The leg panels similarly captured more flies than the pyramidal traps in Kenya (P < 0.05, wet and dry seasons, Table 2) and in Tanzania (P ≤ 0.001 in the wet season; not significant, P > 0.05, in the dry season, Table 2). There was no difference between the performance of any of the same devices made from the different blue materials (P > 0.05; Table 2 and Figure 2), and sex ratios were similar on the different devices.

Performance of POCA-baited trapping devices
The relative rankings of the POCA-baited devices were very similar to those of the unbaited devices for G. swynnertoni. As before, the sticky targets greatly outperformed the pyramidal trap, with the largest difference in catch in the wet season in Tanzania (P ≤ 0.001, Table 2 and Figure 2). Catches were 5.5-6.7 times higher in three of the trials, and up to 12.7 times higher in the wet season in Tanzania. In Kenya, the baited target captured 3.2-4.2 times more flies than the smaller leg panels in both seasons (P ≤ 0.001, Table 2), and an average of 5.6 times more in the dry season in Tanzania. In contrast, in the wet season in Tanzania, the catches of the target were on average 2.2 times higher than on the leg panels, and this was not significantly higher for either the cloth or the plastic leg panel (P > 0.05, Table 2). The baited leg panels consistently caught more flies than baited pyramidal traps, but not all contrasts were significant (in Kenya, where only cloth leg panels were tested, the significant contrasts were the standard blue leg panel versus both pyramidal traps in the wet season, and both leg panels versus the turquoise blue pyramidal trap in the dry season; Table 2). In Tanzania, leg panels in both cloth and plastic caught significantly more flies than pyramidal traps in the wet season (4.6 times, P ≤ 0.001, Table 2), but there was no difference among all four devices in the dry season (P > 0.05, Table 2). As in the unbaited trials, there was no difference between the performance of any of the same devices (trap, target, leg panel) made from different blue materials (P > 0.05, Table 2 and Figure 2), and sex ratios were similar on the different devices. Baited devices were tested shortly after unbaited devices for logistical reasons; hence differences in catches of the same devices across the two sets of experiments have not been interpreted.

Landing on trapping devices
A 2-d blue-black 1 m² target with attached adhesive film induced more G. swynnertoni to land than a 3-d pyramidal trap with its blue-black surfaces covered with the same surface area of film (Figure 3). Twice as many flies landed on the target as on the pyramidal trap (110.6/55.5, P < 0.05; Figure 3), and six times more flies landed on the target than were caught in a control trap without film (110.6/17.6, P < 0.05; Figure 3). For G. pallidipes, 3.3 times more flies landed on the target than on the pyramidal trap covered with adhesive film (25.1/6.0, P < 0.05; Figure 3), and 1.5 times more landed on the target than were caught in the control trap without adhesive film (25.1/16.7, P > 0.05; Figure 3). Sex ratios were similar on the three devices for both species.

Efficiency of the pyramidal trap
Trap efficiency, i.e. the proportion of flies approaching at close range that end up caught in the trap cage, was estimated by dividing the mean daily catch in the cage of the unaltered pyramidal trap by the mean daily catch of the trap with adhesive film on the cloth, i.e. the sum of flies caught on the adhesive film and in the cage. Efficiency for G. swynnertoni was 30% (17.6/(55.5+3.6) × 100; Figure 3); the calculation is written out below. Very few flies were caught in the cage of traps with adhesive film (6%), suggesting that few flies are caught without first landing on the blue-black cloth. Trap efficiency for G. pallidipes could not be estimated, as the trap with adhesive film caught fewer flies than the trap without film.
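Writing the efficiency calculation out in full (numbers as quoted from Figure 3; our notation):

    efficiency = (cage catch of unaltered trap) / (film catch + cage catch of film-covered trap)
               = 17.6 / (55.5 + 3.6)
               ≈ 0.298, i.e. about 30%

The denominator approximates all flies contacting the trap, since the film intercepts landing flies before they can enter the cage.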
Testing adhesive film
The 1 m² target of adhesive film on its own (unbaited) caught very few tsetse of either species compared to the cloth target covered with adhesive film: 2% of the detransformed mean daily catch of G. swynnertoni on the cloth target (2.9/119.1, P ≤ 0.05), and 6% of the detransformed mean daily catch of G. pallidipes (1.3/21.3, P ≤ 0.05). Note that the sticky surface area of the cloth targets was twice that of the stand-alone adhesive film target.

Performance of leg panels as landing devices
Catches were low in the indicative experiment testing whether a standard 1 m² blue-black target would catch as many flies as a local 1.5 m² blue/black Kenyan target, as pastoralists were attempting to reduce tsetse in the study area in Kenya when the experiments were conducted, and vegetation was also being heavily grazed. The 1.5 m² target caught a mean of 5. (Table 1); in every other case the entire device was covered in adhesive film. Based on this logic, detransformed mean catches per m² of total surface area of Kenyan cloth leg panels averaged 1.5 times those of the Kenyan local target (range 1.0-2.1 times, Table 3), with similar trends for turquoise and standard phthalogen blue cloth. After adjusting for the partial adhesive coverage of the Tanzanian targets, detransformed mean catches on Tanzanian leg panels averaged 1.0 times those of the Tanzanian local target (range 0.4-1.7 times, Table 3), with similar trends for turquoise and standard blue cloth, or blue-painted plastic. Note that the leg panels in Kenya were smaller than in Tanzania, and that the Kenyan target, although of the same size as the Tanzanian one, had a different configuration of blue-black. Considering the high performance of the leg panels relative to the, on average, 3.5 times bigger targets, we conducted an additional trial, described next.

Optimal target colour and size, and target efficiency
The 1 m² targets in blue-black (standard) and blue-black-blue equal-sized vertical stripes (Tanzanian style) caught very similar numbers of both G. swynnertoni and G. pallidipes, which suggests that there is no difference between the two designs in inducing landing (P > 0.05; Figure 4). The daily landing rate on the blue-black-blue 0.5 m² oblong targets was higher than on the all-blue targets of similar size for G. swynnertoni (48.1 and 34.2 flies/day, respectively; Figure 4), but this difference is not significant (P > 0.05). The landing rate of G. pallidipes on the two 0.5 m² oblong target types was very similar (P > 0.05; Figure 4). Likewise, there was little difference between the daily landing rates of either species on the blue-black-blue and all-blue smaller 0.25 m² square targets (P > 0.05; Figure 4).

The linear model indicates that, besides the ordering factors of target position and experimental day, size is the only significant parameter retained (P < 0.001); i.e. neither colour pattern (blue-black-blue, blue-black or all blue) nor shape (oblong or square) significantly affects landings by G. swynnertoni or G. pallidipes. Fly landings on the 0.5 m² targets were reduced to 44% of those on the 1 m² targets for G. swynnertoni and to 60% for G. pallidipes (P < 0.01 for both), and to 19% and 14%, respectively, on the 0.25 m² targets (P < 0.01 for both; Table 4). Reducing target size from 0.5 m² to 0.25 m² reduced landings to 44% for G. swynnertoni but to only 23% for G. pallidipes (P < 0.01 for both; Table 4). All percentages were calculated by detransforming the coefficients of the linear model and are very similar to the catch indices calculated from the detransformed means in Table 4. Analysis of fly landings on the blue-black-blue targets alone also retained size as a significant factor (P < 0.001). When the daily landing rates are corrected to an equal target size of 1 m² (the arithmetic of this correction is sketched below), landings on the 0.5 m² oblong blue-black-blue targets are nearly the same as on the standard blue-black targets for G. swynnertoni and G. pallidipes (Table 4).
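The size correction used here simply divides the relative landing rate by the relative target area (illustrative arithmetic on the percentages above; small differences from the values quoted in the next paragraph reflect detransformation of the model coefficients rather than of the means):

    0.5 m² target (G. swynnertoni):  0.44 / 0.5  = 0.88, i.e. close to the 1 m² rate per unit area
    0.25 m² target (G. swynnertoni): 0.19 / 0.25 = 0.76, i.e. close to the 74% quoted below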
These corrected landing rates also indicate that landings on the best-performing 0.25 m² square target are 74% of those on the standard 1 m² target for G. swynnertoni, but only 66% for G. pallidipes (Table 4). For G. swynnertoni, the ratio of flies landing on the blue and black portions of the bicolour targets was very close to 50:50, irrespective of the area of each colour, with females showing a slight preference for the black portion (55%) and males a slight preference for the blue (55%). In contrast, for G. pallidipes both sexes showed a strong preference for landing on the blue portion of targets (85%; P < 0.001).

In the experiment with adjoining adhesive film targets placed next to the 0.5 m² oblong targets (Figure 1), only 1% of the total G. swynnertoni catch for the blue-black-blue target was made on the adjacent transparent target (6 of 464 flies), and the proportion was 6% for the all-blue target (15 of 256 flies). In the case of G. pallidipes, only about 2% of the total catch for both coloured target types was made on the adjacent transparent target (11 of 457 flies for the blue-black-blue target and 5 of 288 flies for the all-blue target).

[Figure 3. Box plots: boxes indicate the twenty-fifth and seventy-fifth percentiles; the solid line in the box is the median; the capped bars indicate the tenth and ninetieth percentiles; data points outside these limits are plotted as circles. doi:10.1371/journal.pntd.0002063.g003]
[Table 3. Catches and catch indices for G. swynnertoni normalized to an equal area for each device.]

Discussion
This study shows that, independent of season or the presence of chemical baits, targets in phthalogen blue or turquoise blue cloth covered with adhesive film proved the best devices for capturing G. swynnertoni in all situations, catching up to 19 times more flies than pyramidal traps. Baiting with chemicals did not affect the relative performance of devices. When equivalent areas of targets and pyramidal traps were covered with adhesive film, fly landings were twice as high on 1 m² blue-black targets as on the pyramidal traps. When landings on 1 m² blue-black targets were compared to those on smaller 0.5 m² all-blue or blue-black-blue cloth targets, and to landings on all-blue 0.32-0.47 m² leg panels, the smaller targets and leg panels captured equivalent numbers of G. swynnertoni and G. pallidipes per unit area as the bigger targets.

Comparison of trapping devices and fabric types
In both Tanzania and Kenya, and independent of season or the presence of baits, targets covered with adhesive film were the best trapping devices for G. swynnertoni in all situations, catching up to 19 times more tsetse than the pyramidal traps. When not baited, large blue-black targets captured 1.5 to 3.7 times more tsetse than the much smaller leg panels. Blue leg panels made of phthalogen blue cloth, turquoise cloth or phthalogen-blue-painted plastic nevertheless captured more tsetse than pyramidal traps or, at worst, equivalent numbers. Both targets and leg panels were particularly effective relative to pyramidal traps during the wet season in Tanzania. Of all the experiments, the wet season trials in the Serengeti represented the greatest challenge in terms of attracting flies to artificial devices during peak vegetation cover [31].

Small leg panels, which deviate from the large square or oblong blue-black fabric targets [32], [20], [33], were tested as an alternative for G. swynnertoni based on their efficacy for sampling G. austeni in Zanzibar [34].
Indeed, the performance of leg panels covered with insect glue in capturing G. swynnertoni was remarkably high. Leg panels were 21% of the surface of the targets in Kenya and 30% in Tanzania but, per unit area, captured 1.5 times more flies than the targets in Kenya and equivalent numbers to the targets in Tanzania. These 2009 results with leg panels were confirmed in Tanzania in 2012, when similarly sized 0.5 m² oblong blue-black-blue cloth targets covered with adhesive film induced landings per unit area at nearly the same level as the 1 m² targets for G. swynnertoni. The potential cost-effectiveness of small targets has been demonstrated only very recently, for a few tsetse species [32], [35].

This success with smaller all-blue leg panels and all-blue or blue-black-blue cloth targets as landing devices for G. swynnertoni stands out relative to the poor results for small all-black targets for two other savannah species, G. pallidipes and G. morsitans morsitans, in Zimbabwe [9]. However, earlier results have shown that all-blue and blue-black targets perform better than all-black targets for G. pallidipes [25]. Indeed, the 0.5 m² oblong target in blue-black-blue or all-blue cloth induced landings per unit area at nearly the same level as the 1 m² targets for G. pallidipes in our trials. This may be related to the predominance of blue in the 0.5 m² targets tested here, but it concurs with earlier findings in which doubling the target size doubled the catch for G. pallidipes [25].

G. swynnertoni lives in very open and often windy habitats where visual cues (including colour) may be more important than host odours; this is also manifested in its well-known attraction to large, moving objects [16]. Regardless of this, small blue leg panels or targets of approximately 0.5 m² clearly show promise for trapping G. swynnertoni, particularly as wind damage can be a problem with the larger local targets at many sites. However, 50% of the flies captured landed on the black portion of all the targets tested, even those with only one third of the surface area in black. It would therefore be advisable to maintain a black element in visual devices targeting this species; moreover, the adhesive film used in these experiments has been shown to significantly reduce the landing rate on the black section of targets by G. tachinoides and G. palpalis gambiensis [20], so the proportion of G. swynnertoni landing on the black is likely to be higher on unmodified targets. Similar catches with cloth and plastic leg panels also indicate that this strategy can be used for both control (insecticide-impregnated cloth) and sampling (rigid plastic with insect glue or adhesive film) of this species. Considering the fall-off in landings per unit area on the very small 0.25 m² targets, it would be inadvisable to employ targets much smaller than 0.5 m² for control programmes.

There was no difference in performance between any of the same devices made from the different blue materials, between seasons and locations.
The two blue fabrics chosen for these experiments (phthalogen blue cotton and turquoise blue polyester/viscose) were manufactured with only minor differences in fabric texture and with slight but clear differences in blue-green colour. These fabrics, and the equivalent blue paint used on plastic leg panels [36], performed equally well in targets/leg panels and traps under diverse conditions for G. swynnertoni. These results agree with findings for the same fabrics tested in similar devices for several tsetse species in West Africa [20].

Phthalogen blue cotton cloth has been used for about 30 years in tsetse sampling and control, and is the standard against which all other blues should be compared for attractive properties [13]. It has the maximum colour fastness possible for a pure blue fabric, due to the formation of copper phthalocyanine (Pigment Blue 15) in situ through a unique dyeing process, but it now remains in limited production in just a few countries. This has resulted in the ad hoc use of several alternative blue fabrics in tsetse control, some of which are less than optimal for attracting tsetse [37]. Hence, it has become important to develop appropriate fabrics that can be produced locally with non-proprietary methods. The turquoise blue fabric, produced in Kenya by Sunflag for these experiments using generic dyes and processes, performed well in our studies. This clearly shows that it is possible to produce a deep turquoise that can serve as a practical alternative to phthalogen blue, as suggested by Mihok et al. [38]. The suitability of such fabrics in control devices is currently being investigated, in terms of optimising colour fastness and insecticide-retaining qualities.

Performance of targets versus traps as landing devices
This study provides a comparison of the efficacy of several target designs relative to the most common simple trap (pyramidal) in current use for the savannah tsetse G. swynnertoni. For a standard 1 m² blue-black target, catches were twice as high as on the equivalent area of a pyramidal trap for G. swynnertoni, and three times higher for G. pallidipes. The adhesive film used to enumerate tsetse here (and also in Rayaisse et al. [20]) was found to be unattractive when used alone. The fabrication of insecticide-impregnated cloth targets has obvious practical advantages over traps for area-wide population suppression programmes.

The potential cost-effectiveness of using target-type devices for controlling G. swynnertoni is highlighted in this study by the efficacy with which leg panels trap the species. Per unit area, leg panels and 0.5 m² oblong targets were as effective as the two local styles of bigger targets in common use in Kenya and Tanzania. The efficacy of smaller 2-d devices for capturing G. swynnertoni follows a pattern recently demonstrated for a range of riverine species [32], [33], [35], [39]. Simple blue-black-blue or all-blue targets and all-blue leg panels of equivalent size clearly provide adequate visual stimuli to induce G. swynnertoni to land, the key behaviour that underlies the principle of insecticide-impregnated control devices, and they are also less prone to wind damage because of their smaller size.

Effect of the POCA bait on trap and target performance
Pyramidal trap entry/retention did not appear to be improved by baiting traps with POCA, i.e. baited targets still caught far more tsetse than baited pyramidal traps. As the baited and unbaited trials were sequential, they could not be compared directly. Nevertheless, our results are consistent with previous failures to substantially improve catches of G. swynnertoni with traditional tsetse baits [17], [19]. Here, baiting devices with POCA did not affect their performance relative to one another; altogether, results were remarkably consistent between seasons at the same location and between the Serengeti and the Mara.
Considering the efficacy of the leg panels and targets, one should weigh how much effort to invest in deploying and maintaining chemical baits (some of which, e.g. the phenols, are toxic) when it may be possible to compensate adequately by simply deploying more targets. In particular, the deployment of many small leg panels or targets with long-lasting insecticide impregnation may prove to be a cost-effective strategy. This approach would be particularly appropriate in conservation areas in East Africa, especially if fabrics could be engineered to be biodegradable after their effective lifespan.

Pyramidal trap and 0.5 m² target efficiency
As expected from many other studies on savannah tsetse, the pyramidal trap was found to be inherently inefficient as a trapping device for G. swynnertoni: less than one third of the flies that landed on the attractive surfaces of the trap ended up captured in its cage. This trapping rate is very similar in magnitude to the efficiency (31%) already measured for the S2 trap [17], where most G. swynnertoni that landed on the panels of the trap were never captured. In contrast, the pyramidal trap proved more efficient for G. pallidipes. There has long been an underlying deficiency of traps for savannah species of tsetse. An absolute estimate of trapping efficiency is, however, difficult, as there are many untested assumptions concerning fly behaviour and counting accuracy near traps that could affect the outcome.

In contrast, the low number of flies (<2%) landing on the transparent 0.5 m² sticky targets, compared to the number alighting on the adjoining blue-black-blue cloth targets of the same size and shape, suggests that very few G. swynnertoni or G. pallidipes circle the oblong 0.5 m² targets before landing. Ndegwa and Mihok [17] used electric nets placed radially around the S2 trap and found that most (89%) G. swynnertoni flying in the vicinity of the trap circled within 0-1 m of it; by covering the blue trap sides with adhesive fly rolls, they found that 84% of approaching flies landed on the trap before entering it.

Concluding remarks
Area-wide tsetse population suppression typically requires the deployment of many thousands of devices; devices need to be effective and inexpensive, and ideally should also be maintenance-free [40]. G. swynnertoni is abundant in areas with difficult logistics; it also presents limited options for control, as it occurs in protected areas frequented by large numbers of tourists. Simple targets that attract flies, which then land on insecticide-impregnated surfaces, are most suitable in this context. The most practical device for area-wide suppression of G. swynnertoni populations would be a large blue-black, insecticide-impregnated 1 m² target. Our results show that there is no significant difference between the blue-black and blue-black-blue 1 m² targets. A number of smaller blue-and-black targets or all-blue leg panels with the same total surface area would achieve the same result. Although all-blue leg panels would also provide a satisfactory control device, a black element in the target is recommended where G. swynnertoni is the target species. The most cost-effective size of these devices, and the relative costs of fabricating, deploying and maintaining large targets versus a higher number of small targets or leg panels, still need to be determined in field trials. Our findings indicate that targets smaller than 0.5 m² are not recommended for either G. swynnertoni or G. pallidipes.
Long-lasting but ultimately biodegradable devices of simple construction could be used to reduce disease transmission in the high-profile wildlife conservation areas of Tanzania and Kenya, where G. swynnertoni is the main vector of human and animal trypanosomiasis. Either phthalogen or turquoise blue would be suitable for these visual control devices. Effective control will also require adaptive management [41], whereby tsetse populations are monitored and disease-transmission hot spots are identified for additional intervention. For long-term eradication goals, the detection of very low-density residual pockets of tsetse is also critical [42]. The best monitoring tool would clearly be a leg panel, or a cloth target of equivalent size, covered with adhesive film. Since this approach is not very economical or practical outside of a research context, all-blue plastic leg panels covered with insect glue can be used as an effective alternative.
Finite mass corrections for B -> D(*), D** \ell \nu decays in the Bakamjian-Thomas relativistic quark model

The Bakamjian-Thomas relativistic quark model for hadron current matrix elements, while non-covariant at finite mass, is successful in the heavy quark limit: form factors are covariant and satisfy Isgur-Wise scaling and Bjorken-Uraltsev sum rules. Motivated by the so-called "1/2 vs. 3/2 puzzle" in B decays to positive parity D**, we examine the implications of the model at finite mass. In the elastic case 1/2^- -> 1/2^-, the HQET constraints for the O(1/m_Q) corrections are analytically fulfilled. A number of satisfying regularities is also found for inelastic transitions. We compute the form factors using the wave functions given by the Godfrey-Isgur potential. For 1/2^- -> 3/2^+ the departures from the heavy quark limit are small, but we find a strong enhancement in 1/2^- -> 1/2^+ (for 0^- -> 0^+). This enhancement is linked to a serious difficulty of the model at finite mass for the inelastic transitions, namely a violation of the HQET constraints at zero recoil formulated by Leibovich et al. These are nevertheless satisfied in the non-relativistic limit for the light quark. We conclude that these rigorous HQET constraints are crucial in the construction of a sensible relativistic quark model of inelastic form factors.

Introduction
The Bakamjian-Thomas (BT) relativistic quark models [1,2,3,4] describe hadron states in motion with wave functions that transform consistently under the Poincaré group. We have proposed a formulation of this scheme for the meson ground states [5] and demonstrated the important feature that, in the heavy quark limit, the current matrix elements, when the current is coupled to the heavy quark, are covariant. We have extended this scheme to P-wave excited states [6]. In [17], we have chosen the Godfrey-Isgur Hamiltonian [18], which gives a very complete description of the light qq̄ and heavy Qq̄ meson spectra, in order to predict within the BT scheme the corresponding IW functions for the ground state and the excited states. Similar work has been performed for Qq̄ meson decay constants [19], and the BT scheme has been used to demonstrate new Heavy Quark Effective Theory (HQET) sum rules involving Isgur-Wise functions and decay constants [20]. A detailed and very useful account of the BT scheme for the calculation of Isgur-Wise functions and heavy meson decay constants, and of their numerical evaluation within the Godfrey-Isgur Hamiltonian, has been given in the PhD thesis of Vincent Morénas [21].

As a further test, we have computed in [22] the vector, scalar and axial charge densities for the ground states 0^- and 1^- (the j^P = 1/2^- doublet) and for the excited states 0^+ and 1^+ (the 1/2^+ doublet). In this case the active quark is the light quark, and one can show that, unlike the case of an active heavy quark, the current matrix elements are not covariant. For the calculation we have adopted the natural reference frame for this problem, the heavy meson rest frame. As shown in [22], the agreement with lattice data in the unquenched approximation is really striking, and provides a test both of the BT scheme and of the GI Hamiltonian that describes the spectrum.

A main motivation to undertake this work has been the so-called "1/2 versus 3/2 puzzle": based on rather old data, the semileptonic decay rates 1/2^- -> 1/2^+ are much larger than the expectations of the heavy quark limit, while the semileptonic decay rates 1/2^- -> 3/2^+ are roughly consistent with this limit. A precise discussion of this puzzle has been given in ref. [23].
Updated data by BaBar [24] and Belle [25] confirm the problem, although there are significant differences between the two experiments. On the other hand, lattice calculations in the unquenched approximation [26] point to a similar conclusion:

    τ_1/2(1) = 0.29 ± 0.03,  τ_3/2(1) = 0.52 ± 0.03

Let us finally underline that the 1/2 vs. 3/2 puzzle does not seem to be present, assuming factorization, in the nonleptonic decays B → D**π, as shown by the Belle results [27], phenomenologically analyzed in ref. [28]. This feature makes the puzzle even more obscure. Recently, a necessary, precise and updated discussion of the situation for both the semileptonic and nonleptonic data has been given in ref. [29].

The paper is organized as follows. In Section 2 we give the definitions of the form factors for the transitions in which we are interested, reproducing some needed results at leading and O(1/m_Q) subleading order within HQET. In Section 3 we give the master formulae defining the theoretical framework of BT quark models. Since the current matrix elements in the BT model are only covariant in the heavy quark limit, if the current is coupled to the heavy quark, the calculation of the 1/m_Q corrections must be done in a particular reference frame. We discuss this problem in Section 4 and give arguments to adopt the Equal Velocity Frame (EVF), in which the moduli of the initial and final three-vector meson velocities are equal. In Section 5 we check that this frame allows us to obtain very reasonable results for the 1/m_Q corrections for the elastic transitions at finite mass. In Section 10 we give the numerical results for the branching ratios B → D^(*) ℓν and B → D** ℓν in the heavy mass limit and also at finite mass, and in Section 11 we give a discussion of the obtained results and problems. We leave a number of technicalities to the Appendices. In Appendix A we give the needed formulas for the different form factors in terms of matrix elements. In Appendices B and C we give the wave functions in the GI model, respectively in the heavy quark limit and at finite mass. In Appendix D we give some formulas defining a family of collinear frames, and in Appendix E we give the formulas for the decay rates in the different cases considered.

Heavy quark expansion of form factors in HQET
To compare with the results of the BT model at finite mass, let us give here the expressions of the form factors in powers of 1/m_Q in HQET. Let us set the notation ε_Q = 1/(2m_Q). To first order in the heavy quark expansion one has, for the elastic form factors B → D^(*), the expansions (14)-(19) of ref. [30], in terms of subleading functions L_i(w) (i = 1, ..., 6).

Luke's theorem [32] states that, at first order in 1/m_Q, one has L_1(1) = L_2(1) = 0 (eqn. (20)), and therefore follows the important result that at zero recoil (w = 1) the subleading corrections to h_+(1) and h_A1(1) begin at order 1/m_Q² (eqn. (21)).

The functions L_i(w) (i = 4, 5, 6), corresponding to the so-called Current perturbations, are not independent according to HQET; they are given in terms of two independent functions Λ̄ξ(w) and ξ_3(w) [30] (eqns. (22)-(24)), where ξ(w) is the elastic IW function. One finds therefore the relation (26) among them, which reduces to a linear relation at zero recoil. (The typical size of the 1/m_Q corrections discussed in this section is illustrated below.)
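To set the scale of these corrections (our own illustration; the constituent masses are typical quark model values, not parameters quoted in this section), take m_c ≈ 1.6 GeV and m_b ≈ 5.0 GeV:

    ε_c = 1/(2 m_c) ≈ 0.31 GeV⁻¹,  ε_b = 1/(2 m_b) ≈ 0.10 GeV⁻¹

so that, with Λ̄ ≈ 0.4 GeV, the first-order terms are of relative size ε_c Λ̄ ≈ 12% and ε_b Λ̄ ≈ 4%: the 1/m_c corrections dominate and are far from negligible.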
Inelastic form factors
For the inelastic form factors B → D** we reproduce only the leading order of the heavy quark expansion [8,31]; the different O(1/m_Q) corrections are given in the detailed and careful paper by Leibovich et al. [31]. Among these corrections, we reproduce the ones that do not vanish at zero recoil, which are very relevant for what follows: the zero-recoil expressions (41)-(43) for g_+(1), g_V1(1) and f_V1(1), which are proportional to the level spacing between the excited and the ground state divided by 2m_Q.

3 Bakamjian-Thomas approach to quark models
As explained in [5], the construction of the BT wave function in motion involves a unitary transformation that relates the wave function Ψ^(P)_{s_1,...,s_n}(p_1, ..., p_n), expressed in terms of the one-particle variables (the spins s_i and momenta p_i), to the so-called internal wave function Ψ^int_{s_1,...,s_n}(P, k_2, ..., k_n), given in terms of another set of variables, the total momentum P and the internal momenta k_i (with Σ_i k_i = 0). This property ensures that, starting from an orthonormal set of internal wave functions, one gets an orthonormal set of wave functions in any frame. The basis Ψ^(P)_{s_1,...,s_n}(p_1, ..., p_n) is useful to compute one-particle matrix elements, like one-quark current matrix elements, while Ψ^int_{s_1,...,s_n}(P, k_2, ..., k_n) allows one to exhibit Poincaré covariance. In order to satisfy the Poincaré commutators, the unique requirement is that the mass operator M, i.e. the Hamiltonian describing the spectrum at rest, should depend only on the internal variables and be rotationally invariant; that is, M must commute with P, ∂/∂P and S.

The internal wave function at rest, (2π)³ δ(P) φ_{s_1,...,s_n}(k_2, ..., k_n), is an eigenstate of M, P (with eigenvalue P = 0), S² and S_z, while the wave function in motion with momentum P is obtained by applying the boost B_P, where P⁰ = √(P² + M²) involves the dynamical operator M. The final output of the formalism, giving the total wave function in motion Ψ^(P)_{s_1,...,s_n}(p_1, ..., p_n) in terms of the internal wave function at rest φ_{s_1,...,s_n}(k_2, ..., k_n), is formula (45), where p_i⁰ = √(p_i² + m_i²) and M_0 is the free mass operator, defined by M_0² = (Σ_i p_i)². The internal momenta of the hadron at rest are given in terms of the momenta of the hadron in motion by the free boost associated with the total free four-momentum, where the operator B_p is the boost (√(p²), 0) → p; the Wigner rotations R_i appearing in this expression are those induced on the quark spins by this boost, and the states are normalized covariantly.

The current one-quark matrix element acting on quark 1 between two hadrons is then given by expression (50), where Ψ^(P)_{s_1,...,s_n}(p_1, ..., p_n) is given in terms of the internal wave function by (45) and <p'_1, s'_1|J^(1)|p_1, s_1> is the one-quark current matrix element. After having presented the general calculation, there remains to specify the mass operator M, which will be chosen as the one of the Godfrey-Isgur model in the following section.

We are interested in this paper in transitions between heavy quarks b → c where the initial meson is a pseudoscalar B. We particularize the general formula (50) to the meson case q_1 q̄_2, where q_1 → q'_1 labels the heavy quarks, q̄_2 the light antiquark, and the current operator J^(1) acts on the heavy quark. As shown in [5], one can transform (50) into a Pauli matrix formalism and then into a Dirac matrix formalism. We reproduce the needed master formula (51) in the Dirac formalism. In formula (51) the unit four-vectors u = P/M and u' = P'/M' of (52) are used, the convention ε_0123 = −1 is adopted, and ε_u' denotes the polarizations relative to the four-vector u' that enter the Dirac matrices Γ_u' of (54): four-vectors for the J^P = 1^P (P = −, +) states, and a tensor for the J^P = 2^+ state.

Matrix elements in the heavy quark limit
We now consider the heavy mass limit, defined as m_1, m'_1 → ∞ with v = P/M and v' = P'/M' fixed, and M/m_1 → 1, M'/m'_1 → 1.
In this limit, and using the invariance of the scalar product, the matrix element (51)(52) reduces to a covariant expression (57), where the Dirac matrices Γ_v' are identical to the Γ_u' of (54) with u' replaced by the four-velocity v'. The radial wave functions φ'(k) and φ(k) depend only on k², with their normalization following from (56).

3.2 The Isgur-Wise functions ξ(w), τ_1/2(w) and τ_3/2(w)
From the matrix elements (57), the operators (54) and the definitions and 1/m_Q expansion of the form factors given in Section 2, the Isgur-Wise functions ξ(w), τ_1/2(w), τ_3/2(w) are obtained as overlap integrals of the radial wave functions, where all the radial wave functions for the 1/2^-, 1/2^+, 3/2^+ states in the heavy quark limit are normalized to unity.

4 Limitations of the BT model at finite mass: choice of a convenient reference frame
As we have emphasized above, the BT model provides a Poincaré-covariant description of the states in motion, and also a Lorentz-invariant formulation of the current matrix elements in the heavy quark limit. In the present paper we are interested in studying the 1/m_Q corrections to the matrix elements. However, although the current matrix elements can be written in the BT model as (51)(52), this expression is not Lorentz covariant at finite mass.

Another important point, also a limitation of the BT model, is that at finite mass, although Lorentz covariance is lost, one does not even have Galilean covariance. In order to have Galilean covariance one would need to take the full non-relativistic limit, i.e. to consider the non-relativistic quark model: the model must be non-relativistic not only for the heavy quarks b and c but also for the light quark. However, the non-relativistic quark model is not suited for our purpose, because what we want is to understand the departures from the heavy quark limit predictions of the BT model due to the finiteness of the masses m_b and m_c.

We are thus left to consider the BT model at finite mass in a definite reference frame. How should this frame be chosen? Fortunately, there is a theoretical criterion for choosing a convenient frame: namely, we adopt the frame that is consistent with known theoretical results of the 1/m_Q expansion of HQET.

In Appendix D we have formulated a set of collinear frames, going from the B meson rest frame to the D meson rest frame, that depend on a single parameter α. The B and D rest frames correspond respectively to α = 0 and α = 1. There is an intermediate frame, which we call the Equal Velocity Frame (EVF), in which the spatial velocities are equal in modulus; it corresponds to the value α = 1/2. In this latter frame the initial and final velocities can be written explicitly in terms of w, as sketched below.
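Explicitly, for collinear motion along the Oz axis, the EVF velocities can be written as (our reconstruction of the omitted expressions, fixed uniquely by v² = v'² = 1, v·v' = w and |v| = |v'| for the spatial parts):

    v  = ( √((w+1)/2), 0, 0, −√((w−1)/2) )
    v' = ( √((w+1)/2), 0, 0, +√((w−1)/2) )

so that v·v' = (w+1)/2 + (w−1)/2 = w, as required.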
Considering the matrix element at arbitrary masses (51) for the ground state transitions, we find that the set of HQET relations (14)-(19) is not fulfilled in any of the considered collinear frames, except in the EVF. In this frame, relations (14)-(19) are, at least up to first order in 1/m_Q, exactly satisfied. This seems to us a good enough criterion for choosing the EVF in our numerical calculations. We will compute below all the ground state subleading functions L_i(w) (i = 1, ..., 6) and verify also that Luke's theorem is satisfied.

A last important point of principle is in order here. Had we adopted the non-relativistic quark model (including the light quark), relations (14)-(19) would be exactly satisfied in any Galilean frame. However, as pointed out above, we need to consider the b and c quarks as heavy, and the spectator light quark as relativistic. Quantitatively, the results of the non-relativistic quark model would not make much sense for assessing the departures from the heavy quark limit results of the BT model due to the finite b and c masses.

The matrix elements at finite mass are computed from (51) with the vertex matrices (64), where Γ^D_u = 1 and Γ^{D*}_u = γ_5 ε̸*_u. For the sake of clarity we now adopt the notation ε_Q = 1/(2m_Q) (Q = b, c). To compute the 1/(2m_Q) subleading functions L_i(w) (i = 1, ..., 6) of (14)-(19), we need to expand the matrix element (64) in powers of ε_b, ε_c up to first order. Symbolically we can write, simplifying the notation, the expansion (67), where the subindex 0 means ε_b = ε_c = 0 (heavy quark limit). We have separated the perturbation of the kernel G and that of the wave functions φ, in an obvious notation. In what follows we will neglect the second term in (67), since we have checked numerically that the perturbation of the wave functions gives a very small contribution.

Using (67), it is convenient to write the matrix elements (14)-(19) in terms of functions H^(Q) (Q = b, c). Performing analytically the expansion of the matrix elements for the different form factors, we extract these functions and from them obtain the subleading functions L_i(w) (i = 1, ..., 6) appearing in (14)-(19) through straightforward relations.

From these relations, and the expressions for the different functions H^(Q) (Q = b, c), we find analytically that Luke's theorem [32] (20) is satisfied: L_1(1) = L_2(1) = 0. Moreover, for the functions L_i(w) (i = 1, 2, 3) corresponding to the so-called Lagrangian perturbations, we find results that do not follow from HQET and are specific to the BT model.

In the BT model, for the functions L_i(w) (i = 4, 5, 6) that correspond to the Current perturbations, we find analytically relation (26), which holds in HQET. More explicitly, in the limit m_D = m_D* = m_c + Λ̄, and calling from now on the light quark mass m_2 = m, we obtain explicit expressions in which the internal wave function normalization has been used; the resulting relation (84) is in agreement with (23) at zero recoil. These formulas hold in all collinear reference frames considered in Appendix D, because they coincide at zero recoil.

Turning to the inelastic transitions, we obtain the zero-recoil BT expressions (87)-(89) for the form factors g_+(1), g_V1(1) and f_V1(1). We observe that the 1/m_Q dependence agrees with the prediction of HQET for all three form factors (formulas (41)-(43)); in particular, the BT model predicts a common expression for the two states belonging to the same doublet, 0^- → 0^+_1/2 and 0^- → 1^+_1/2, while the form factor f_V1(1) for 0^- → 1^+_3/2 is independent, because a different radial wave function φ_3/2 enters.

The spin-orbit term is small, and one can therefore assume that the level spacing is about the same for both j^P states. Then the form factors at zero recoil (41)-(43) are in definite ratios according to HQET, while we find from (87)-(89), within the same assumption of small spin-orbit coupling, different ratios in the BT model. The contradiction between the results of HQET (92) and those of the BT model is thus manifest. Moreover, due to the first terms in the r.h.s. of (94) and (95), one gets in the BT model τ_1/2(1) ≠ τ_3/2(1). As analyzed in detail in [11], the Wigner rotations are at the origin of these terms: the Wigner rotation, the second term in (96), is a relativistic effect dependent on the spin that gives the difference between τ_1/2(1) and τ_3/2(1).

6.1 BT model 1/m_Q form factors at zero recoil for transitions to excited states in the non-relativistic limit
Let us first observe that expressions (87)-(89) are independent of the light quark mass m. Therefore, the same expressions must be valid in the non-relativistic limit of the BT model, i.e. taking |p| << m.
Let us assume this limit and consider the non-relativistic Hamiltonian (97) for the light quark interacting with the heavy quark, H = p²/(2m) + V(r), where r is the relative position of the light quark with respect to the heavy quark. Let us first remark that in the non-relativistic limit, since the spin-orbit term does not contribute, one has τ_1/2(w) = τ_3/2(w), whose value at zero recoil is given by (99). Using (99) and the non-relativistic Hamiltonian (97), we can compute this matrix element, and we obtain precisely the common factor in the r.h.s. of eqns. (87)-(89).

The argument has a transparent physical interpretation in configuration space. In the non-relativistic limit of (96) the Wigner rotations are subleading. Computing the matrix element of the axial charge A⁰ at zero recoil, where the active quark is the heavy quark labelled 1, one finds from the non-relativistic Hamiltonian (97), with p_1 = −p_2 = −p (p being the momentum of the light spectator quark), a result proportional to the level spacing E_1 − E_0, where E_0 and E_1 are the energies of the ground state and of the excited state. Therefore, the dependence on the level spacing found in HQET follows in the non-relativistic limit, as we have already seen from (100); the commutator step behind this statement is written out below.
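The level-spacing argument of Section 6.1 can be made fully explicit with a textbook commutator identity (our notation, with ħ = 1):

    p = i m [H, r]   for   H = p²/(2m) + V(r)

so that, between the ground state |0> and the excited state |1>,

    <1| p |0> = i m <1| H r − r H |0> = i m (E_1 − E_0) <1| r |0>

and the momentum matrix element, and with it the axial charge at zero recoil, is proportional to the level spacing E_1 − E_0, as stated in the text.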
Form factors for the ground state $B \to D^{(*)} \ell\nu$

This Section contains the numerical results for the ground state form factors $B \to D^{(*)} \ell\nu$ using the Bakamjian-Thomas model exposed above and the internal wave functions provided by the Godfrey-Isgur spectroscopic potential [18], which are given in Appendix B (heavy quark limit) and Appendix C (at finite mass). In Fig. 1 we give the prediction for the elastic IW function $\xi(w)$, and in Figs. 2-7 we give the results for the different $B \to D^{(*)} \ell\nu$ form factors at finite mass compared with their heavy quark limit. The finite mass effect is small in general, even in the case of the form factors that vanish in the heavy quark limit, $h_-(w)$ and $h_{A_2}(w)$.

First order $1/m_Q$ functions and Luke's theorem

We give the results for the Lagrangian perturbation functions $L_i(w)$ $(i = 1, 2, 3)$. First, we observe that Luke's theorem [32], eq. (20), is indeed satisfied. On the other hand, the result that we find for $L_3(w)$, namely $L_3(1) = 0$, is not a prediction of HQET. Considering now the Current perturbation functions $L_i(w)$ $(i = 4, 5, 6)$: these functions are not independent according to HQET, and are given in terms of two independent functions $\bar\Lambda\,\xi(w)$ and $\xi_3(w)$ [30], eqns. (22)-(24). We recall here the expression (105) of $L_5(w)$ in terms of the elastic IW function $\xi(w)$, and the linear relation (106). It is important to emphasize that relation (106) is a constraint of HQET, not specific to the BT model.

In the BT model we find indeed that the results satisfy Luke's theorem (20), and therefore the corrections at zero recoil to $h_+(1)$ and $h_{A_1}(1)$ begin at order $1/m_Q^2$, eqn. (21). We also obtain the sum of all orders $1/m_Q^n$ $(n \ge 2)$ that contribute at zero recoil.

Form factors for the transitions to excited states

We now turn to the form factors for the transitions to the excited states, computed with the internal wave functions provided by the Godfrey-Isgur spectroscopic potential [18] given in Appendix B (heavy quark limit) and Appendix C (finite mass). Unlike the elastic case, the finite mass effects for these inelastic form factors are not small, even for some form factors that vanish in the heavy quark limit. This is particularly true for the transition $0^- \to 0^+$. In this case, the leading form factor $g_-(w)$ is reduced by about a factor 1.5, while the absolute magnitude of the form factor $g_+(w)$, which vanishes in the heavy quark limit, becomes of the same order as the leading one.

10 Branching ratios of $B \to D^{(*)} \ell\nu$, $D^{**} \ell\nu$, $D^{(*)}\pi$, $D^{**}\pi$

We now use formulas (159)-(165) to compute the semileptonic branching ratios, and formula (166) to compute the pionic ones. At infinite mass, only the form factors are computed in the heavy quark limit, while the kinematics contains the physical masses. We obtain in this way the semileptonic branching ratios and the corresponding pionic ones; the pionic decays with form factors in the heavy quark limit have been compared to the Belle data [27] in ref. [28]. We also compute the corresponding semileptonic branching ratios at finite mass.

Comparing the finite mass results with those in the heavy quark limit, we observe an enhancement in the case of the $0^+$ modes in both the semileptonic and pionic cases (about a factor 5), while the difference is moderate for the other decay modes. The enhancement for the $0^- \to 0^+$ transitions is due to a constructive interference between the two form factors $g_+(w)$ and $g_-(w)$ in the decay rates. Of course, the magnitude of the enhancement is not trustworthy, since in this particular mode it is clearly related to the violation of the relation of Leibovich et al.: in this case only two form factors contribute, and the subleading one should satisfy this relation. In such a situation, it is not sensible to compare with the data of BaBar and Belle. A detailed discussion of the experimental situation, compared with the BT model in the heavy quark limit and with the lattice results, has been given recently in ref. [29].

Discussion

There cannot be a clear-cut conclusion for this work. The model also provides a physical, phenomenological interpretation of a number of features of the heavy quark limit. One notable example is the inequality $|\tau_{3/2}(1)| > |\tau_{1/2}(1)|$, which in the BT model is a spin effect, the Wigner rotation of the spin of the spectator light quark.

In the present paper we have tried to extend the BT model to finite mass, for the ground state transitions and for inelastic decays of the ground state to the $L^P = 1^+$ excited states. However, at finite mass the matrix elements are not covariant anymore, and some unwanted results are not unexpected. As exposed above, a convenient frame is the equal-velocity frame, which we have defined in Appendix D.

The BT scheme is not a particular model, but a very general framework. In fact, a framework quite similar to that of BT is at the basis of the light front relativistic quark models [2,3,4]. The same inelastic transitions $L = 0$ to $L = 1$ have been studied in the light front models of Cheng et al. [34].
But, to our knowledge, the problem of the identities of Leibovich et al. has not been addressed in that study. On the other hand, this problem has been clearly raised by Ebert et al. [35]. In their relativistic quark model the identities are not automatically fulfilled, but imposed by a choice of the parameters of the potential. In our BT scheme, this latter possibility is clearly excluded. For our part, we would wish to solve the problem of inelastic form factors in a general way through a fully covariant approach. Such an approach exists in the Bakamjian-Thomas framework in the heavy quark limit, but is lacking for the moment at finite mass.

Appendix A. Form factors in terms of matrix elements

To isolate the $B \to D^*$ form factors we need to consider the longitudinal and transverse polarization four-vectors. Assuming the motion along the $Oz$ axis, we can adopt the four-vectors $\epsilon^{(\pm)} = \mp\frac{1}{\sqrt{2}}\,(0, 1, \pm i, 0)$ and $\epsilon^{(L)} = (v'_z, 0, 0, v'_0)$. Then the different form factors for the $0^- \to 1^-$ transitions are given by the projections of the current matrix elements on these polarization vectors. Similar relations for the form factors of the transitions to excited states can be obtained from the definitions (7)-(13), with analogous formulas for the form factors $f_A(w)$, $f_{V_1}(w)$, $f_{V_2}(w)$ and $f_{V_3}(w)$ of the $D^{(3/2)}(1^+)$ state. Notice also that in the definition of the axial current matrix element for the ground state $D^*$, the form factor $h_{A_1}(w)$ is affected by a factor $(w + 1)$ that does not appear in the corresponding definition of the vector form factors $g_{V_1}(w)$, $f_{V_1}(w)$ for the $1^+$ states.

To isolate the different form factors for the $2^+$ states, let us first write the corresponding tensor polarizations $\epsilon^{(\lambda)}_{\mu\nu}$, which are symmetric, $\epsilon^{(\lambda)}_{\mu\nu} = \epsilon^{(\lambda)}_{\nu\mu}$, traceless, $g^{\mu\nu}\epsilon^{(\lambda)}_{\mu\nu} = 0$, and transverse, $v'^{\mu}\epsilon^{(\lambda)}_{\mu\nu} = v'^{\nu}\epsilon^{(\lambda)}_{\mu\nu} = 0$. The polarization tensors we are interested in (the currents are vectors) can be written in terms of the rank-one polarization vectors; choosing the motion along $Oz$ as in (116), the different $2^+$ form factors are then obtained with the notation $\epsilon^{(0)} = \epsilon^{(L)}$.

Appendix B. Wave functions in the heavy quark limit in the GI model

We have computed the ground state wave function $j^P = \frac{1}{2}^-$ by expanding it in a truncated harmonic oscillator basis, with the corresponding fitted parameters. Similarly, one gets the wave function for the lowest $\frac{1}{2}^+$ state, with its own set of coefficients, and the wave function for the lowest $\frac{3}{2}^+$ state, with the corresponding coefficients, for the ground states $J = 0, 1$. The pseudoscalar $B$ meson wave function is common to all initial states that we are considering. In the GI model, with the mass parameters that fit the data for the $B$ meson, we are then left with the sum of products of the radial functions for given $j = \frac{1}{2}$ and $j = \frac{3}{2}$ that, from (151)-(154), indeed vanishes.

Appendix D. A set of collinear frames

We have seen above that the current matrix elements in the Bakamjian-Thomas scheme are covariant in the heavy quark limit. However, the subleading corrections in $1/m_Q$ depend on the frame. We could simply give the results in a "natural" frame, like the $B$ meson rest frame. But we want to see how these subleading corrections depend on the frame. To study such effects we consider a family of collinear frames, with the mesons moving along the $Oz$ axis, interpolating continuously between the $B$ meson rest frame and the final $D$ meson rest frame.
These frames can be labeled by a parameter $\alpha$, with $0 \le \alpha \le 1$:
$$(1-\alpha)\, v_z + \alpha\, v'_z = 0 \qquad (157)$$
The $B$ and $D$ meson rest frames correspond respectively to $\alpha = 0$ and $\alpha = 1$, while the intermediate equal velocity frame (EVF), in which the spatial velocities are equal in modulus ($v_0 = v'_0$, $v_z = -v'_z$), corresponds to the value $\alpha = \frac{1}{2}$. In terms of this parameter and of the variable $w = v \cdot v'$, the four-vectors (142) take an explicit form, eq. (158).

Appendix E. Formulas for the decay rates

The differential rates can be expressed in terms of the helicity amplitudes, in the form (159)-(165), where $r = m_D/m_B$ ($m_D$ being the mass of the corresponding charmed meson). The helicity amplitudes squared read, in the different cases, for $B \to D^{**}(0^+_{1/2})\,\ell\nu$:
$$|H_0(w)|^2 = (w^2 - 1)\left[(1+r)\, g_+(w) - (1-r)\, g_-(w)\right]^2$$
with analogous expressions, in terms of the corresponding form factors, for $B \to D^{**}(1^+_{1/2})\,\ell\nu$ and the remaining channels. Of course, in the preceding formulas the masses of the charmed mesons, and hence the parameter $r$, vary according to the considered state $D$, $D^*$, $D^{**}(0^+_{1/2})$, $D^{**}(1^+_{1/2})$, $D^{**}(1^+_{3/2})$ or $D^{**}(2^+_{3/2})$. Remember also that the form factor $h_{A_1}(w)$ is affected by a factor $(w + 1)$ that does not appear in the corresponding definition of the form factors $g_{V_1}(w)$, $f_{V_1}(w)$ for the $1^+$ states, and that the form factors $h_{A_2}(w)$ and $h_{A_3}(w)$ are affected by a minus sign, contrary to the definitions of $g_{V_2}(w)$, $f_{V_2}(w)$ and $g_{V_3}(w)$, $f_{V_3}(w)$ for the $1^+$ states, as seen in the definitions (6)-(13). The decay rates for the pionic decays are given by formula (166), where $a_1$ is a combination of Wilson coefficients and $m_D$ is the mass of the corresponding charmed meson.
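The constructive interference between $g_+(w)$ and $g_-(w)$ discussed in Section 10 can be illustrated with a short numerical sketch. This is our own toy computation, not the paper's: the form factor shapes and values are placeholders, the phase-space weight is schematic, and overall constants cancel in the ratio taken at the end.

```python
import numpy as np

# Toy illustration of the 0- -> 0+ interference enhancement.
# All form factor values and shapes below are placeholders.
mB, mD = 5.279, 2.40                       # GeV; 2.40 is a hypothetical 0+ mass
r = mD / mB
w_max = (mB**2 + mD**2) / (2.0 * mB * mD)  # kinematic endpoint in w
w = np.linspace(1.0, w_max, 400)

g_minus = 0.50 * (1.0 - 1.0 * (w - 1.0))   # leading form factor (toy shape)
g_plus = -0.30 * np.ones_like(w)           # subleading; vanishes as m_Q -> infinity

def H0_sq(gp, gm):
    """|H_0(w)|^2 = (w^2-1)[(1+r) g+ - (1-r) g-]^2, as in Appendix E."""
    return (w**2 - 1.0) * ((1.0 + r) * gp - (1.0 - r) * gm)**2

# Schematic phase-space weight; any overall constant drops out of the ratio.
weight = np.sqrt(w**2 - 1.0)

rate_full = np.trapz(weight * H0_sq(g_plus, g_minus), w)
rate_hq = np.trapz(weight * H0_sq(np.zeros_like(w), g_minus), w)
print(f"enhancement from g+ interference: x{rate_full / rate_hq:.2f}")
```

With both toy form factors entering the bracket with the same sign, the integrated rate comes out several times larger than with $g_+ = 0$, qualitatively reproducing the enhancement of the $0^+$ modes at finite mass.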
Ultra-Processed Profits: The Political Economy of Countering the Global Spread of Ultra-Processed Foods – A Synthesis Review on the Market and Political Practices of Transnational Food Corporations and Strategic Public Health Responses

Background: Ultra-processed food (UPF) and ultra-processed beverage (UPB) consumption is associated with higher risks of numerous non-communicable diseases (NCDs). Yet global consumption of these products is rising due to profound changes in production, processing, manufacturing, marketing, retail, and consumption practices, alongside the growth of the resources and political influence of Big Food. Whilst the sales of UPFs and UPBs in high-income countries (HICs) are stagnating, sales are rapidly expanding in more populous middle-income countries (MICs). In this paper, we adopt a political economy of food systems approach to understand how the growth of Big Food in MICs drives the NCD pandemic.

Methods: We conducted a mixed methods synthesis review. This involved quantitative data collection and development of descriptive statistics; a search for academic, market and grey literature on the expansion of UPF in MICs; and the development of themes, three illustrative case examples (South Africa, Colombia, and Indonesia), and synthesis of the enablers of successful campaigns in MICs into recommendations for public health campaigns.

Results: We project that the combined sales volume of UPFs in MICs will reach equivalency with HICs by 2024, and the total sales volume of UPBs in MICs is already significantly higher than in HICs. Similarly, annual growth in UPF sales is higher in MICs compared to HICs. We also show how Big Food has entrenched its presence within MICs through establishing global production and hyper-local distribution networks, scaling up its marketing, challenging government policies and scientific expertise, and co-opting civil society. We argue that public health can counter the influence of Big Food by developing an expanded global network of driven and passionate people with diverse skillsets, and advocating for increased government leadership.

Conclusion: The projected increase in sales of UPFs and UPBs in MICs raises major concerns about the global capacity to prevent and treat NCDs.
" But by contrasting this fiction with global health research "driven by crises, hot issues, and the concerns of organized interest groups, " as a "path we are trying to move away from, " Ooms is submitting to a liberal conception of politics he implicitly criticizes the outcomes of. 1 A liberal view of politics evades the constituting role of conflicts and reduces it to either a rationalistic, economic calculation, or an individual question of moral norms. This is echoed in Ooms when he states that "it is not possible to discuss the politics of global health without discussing the normative premises behind the politics. " 1 But what if we take the political as the primary level and the normative as secondary, or derived from the political? That is what we will try to do here, by introducing an alternative conceptualization of the political and hence free us from the "false dilemma" Ooms also wants to escape. "Although constructivists have emphasized how underlying normative structures constitute actors' identities and interests, they have rarely treated these normative structures themselves as defined and infused by power, or emphasized how constitutive effects also are expressions of power. " 2 This is the starting point for the political theorist Chantal Mouffe, and her response is to develop an ontological conception of the political, where "the political belongs to our ontological condition. " 3 According to Mouffe, society is instituted through conflict. " [B]y 'the political' I mean the dimension of antagonism which I take to be constitutive of human societies, while by 'politics' I mean the set of practices and institutions through which an order is created, organizing human coexistence in the context of conflictuality provided by the political. " 3 An issue or a topic needs to be contested to become political, and such a contestation concerns public action and creates a 'we' and 'they' form of collective identification. But the fixation of social relations is partial and precarious, since antagonism is an ever present possibility. To politicize an issue and be able to mobilize support, one needs to represent the world in a conflictual manner "with opposed camps with which people can identify. " 3 Ooms uses the case of "increasing international aid spending on AIDS treatment" to illustrate his point. 1 He frames the I n a recent contribution to the ongoing debate about the role of power in global health, Gorik Ooms emphasizes the normative underpinnings of global health politics. He identifies three related problems: (1) a lack of agreement among global health scholars about their normative premises, (2) a lack of agreement between global health scholars and policy-makers regarding the normative premises underlying policy, and (3) a lack of willingness among scholars to clearly state their normative premises and assumptions. This confusion is for Ooms one of the explanations "why global health's policy-makers are not implementing the knowledge generated by global health's empirical scholars. " He calls for greater unity between scholars and between scholars and policy-makers, concerning the underlying normative premises and greater openness when it comes to advocacy. 1 We commend the effort to reinstate power and politics in global health and agree that "a purely empirical evidence-based approach is a fiction, " and that such a view risks covering up "the role of politics and power. 
" But by contrasting this fiction with global health research "driven by crises, hot issues, and the concerns of organized interest groups, " as a "path we are trying to move away from, " Ooms is submitting to a liberal conception of politics he implicitly criticizes the outcomes of. 1 A liberal view of politics evades the constituting role of conflicts and reduces it to either a rationalistic, economic calculation, or an individual question of moral norms. This is echoed in Ooms when he states that "it is not possible to discuss the politics of global health without discussing the normative premises behind the politics. " 1 But what if we take the political as the primary level and the normative as secondary, or derived from the political? That is what we will try to do here, by introducing an alternative conceptualization of the political and hence free us from the "false dilemma" Ooms also wants to escape. "Although constructivists have emphasized how underlying normative structures constitute actors' identities and interests, they have rarely treated these normative structures themselves as defined and infused by power, or emphasized how constitutive effects also are expressions of power. " 2 This is the starting point for the political theorist Chantal Mouffe, and her response is to develop an ontological conception of the political, where "the political belongs to our ontological condition. " 3 According to Mouffe, society is instituted through conflict. "[B]y 'the political' I mean the dimension of antagonism which I take to be constitutive of human societies, while by 'politics' I mean the set of practices and institutions through which an order is created, organizing human coexistence in the context of conflictuality provided by the political. " 3 An issue or a topic needs to be contested to become political, and such a contestation concerns public action and creates a 'we' and 'they' form of collective identification. But the fixation of social relations is partial and precarious, since antagonism is an ever present possibility. To politicize an issue and be able to mobilize support, one needs to represent the world in a conflictual manner "with opposed camps with which people can identify. " 3 Ooms uses the case of "increasing international aid spending on AIDS treatment" to illustrate his point. 1 He frames the View Video Summary Implications for policy makers • Middle-income countries (MICs) now represent huge markets for Big Food and are likely to grow significantly over the next decade. • The projected expansion of Big Food and ultra-processed food (UPF) markets in MICs raises major concerns about the global capacity to prevent and treat non-communicable diseases (NCDs). • To effectively respond to the NCDs pandemic, governments must understand the market and political practices used by Big Food to establish, grow, and sustain its markets in MICs. • We propose recommendations that public health campaigns should include to effectively monitor and counter these market and political practices. • Governments, civil society groups and public health practitioners need to work together to build long-term global networks and social movements with diverse skillsets to mitigate the harms associated with Big Food. Implications for the public UPF consumption is associated with higher risks of obesity, cardiovascular disease, type 2 diabetes, certain cancers, and other non-communicable diseases (NCDs). Yet consumption of these products is rising worldwide. 
This reflects profound food system changes currently underway - including production, processing, manufacturing, marketing, retail, and consumption - alongside growth in the size, resources and global reach of Big Food. As the sales of UPFs in high-income countries (HICs) are stagnating, sales are rapidly expanding in MICs. Using new data, we project that the combined sales in MICs will outweigh sales in HICs by 2024 and demonstrate the increasing importance of MICs for Big Food's growth. We also show how Big Food uses a corporate playbook of market and political practices to establish, grow and sustain its markets. This expansion raises major concerns about the capacity of MICs to prevent and treat NCDs.

The production and consumption of UPFs are also associated with significant environmental degradation, including plastic waste entering marine ecosystems. 13,14 We adopt the definition of UPFs as "products with additives and industrially processed ingredients that have been technologically broken down and modified." 15 Examples include sugar-sweetened beverages (SSBs), confectionery, savoury snacks, refined baked goods, sweetened yoghurts, biscuits, and many varieties of fast food. We use UPF to refer to both UPFs and beverages, although we distinguish between these categories where relevant by referring to ultra-processed beverages (UPBs).

In this paper, we examine the role of transnational UPF corporations (which we refer to as 'Big Food') as a vector of disease through the production, marketing, distribution and political activity of promoting these products on a global scale. 1,2,[16][17][18][19] We focus on the industry's expansion into low- and middle-income countries (LMICs), where nearly 80% of NCD-related deaths occur and related morbidity is rapidly increasing. 20,21 MICs are more likely than high-income countries (HICs) to be affected by the double burden of malnutrition, food insecurity, and under-nutrition, as well as an increase in obesity and related complications. 19 Policies limiting the consumption of UPF products are fundamental to efforts to combat NCDs in LMICs. 22 However, there is increasing evidence that the market and political practices of Big Food shape patterns of health and disease, and pose a risk to the development and implementation of effective NCD prevention policies. 3,7,[26][27][28]

In this paper, we address questions requiring much greater attention in the public health literature. What explains the rapid growth in the size and global reach of the UPF industry in MICs? How do these transnational corporations (TNCs) then sustain these high consumption levels? To answer these questions, we examine this global expansion within its historical context and the growing power of TNCs to shape food systems on a global scale. We examine the political and market strategies used by Big Food to expand in MICs, including efforts to undermine effective public health regulations in three countries - South Africa, Colombia, and Indonesia. These MICs were chosen using a convenience sample, based on access to existing research and journalistic reporting, to show the diversity of growth strategies used in three regions which have seen significant investment from Big Food over the last two decades. Our analysis explores these countries' experience as MICs over the past decade, with the understanding that their status as 'emerging markets' for Big Food may change as they transition to becoming HICs. We then propose some recommendations that public health campaigns should adopt to limit the corporate power of Big Food in MICs.
We conclude by considering what the projected expansion of Big Food in MICs over the next decade means for NCD prevention and treatment.

Methods

Given the complexity of the topic, we adopted a mixed methods synthesis review that draws on diverse data sources. This involved quantitative data collection and development of descriptive statistics; a search for academic, market, and grey literature; and development of themes, illustrative case examples, and synthesis of results. Each step was guided by the growing number of commercial determinants of health (CDOH) frameworks which identify and categorise the strategies used by corporations. 2,17,18,[29][30][31][32][33][34][35][36]

Quantitative Data Collection and Analysis

Market share data (percentage of market sales attributed to a global company) were sourced from the Euromonitor Passport Database for the world's largest 80 markets for the years 2011-2019. Market sales volume data (kg) were sourced from the same database for the years 2006-2019, with projections to 2024, for UPF and UPB categories. The methods used by Euromonitor to collect these data are described elsewhere. 6,37 Sales volume data have been used in similar analyses in other studies 6,9 and were converted to a per capita basis using population estimates from the World Bank World Development Indicators Database. Countries were categorised according to the World Bank income classifications. Descriptive statistics and figures were generated using R version 4.0.2 (a sketch of the per capita conversion is given at the end of this section).

Search for Relevant Literature

We searched academic databases (Google Scholar, Scopus, Web of Science, EconLit, MEDLINE, Embase, PsycINFO, Business Source Premier, and CINAHL), industry news sources, and the websites of international organisations and non-governmental organisations (NGOs). We first searched for literature published since 2006 using a combination of product (ultra-processed foods*, ultra-processed beverages*, sugar-sweetened beverages*) and industry (industry*, corporations*, company*, market*, commercial*) related terms identified from the CDOH frameworks. We then searched for articles examining the political economy of Big Food, using the same search terms as the first search strategy along with terms such as 'policy,' 'lobbying' and 'politics.' To identify additional grey literature, these search strategies were supplemented by hand searches of reference lists.

Development of Themes, Case Examples and Synthesis of Results

Included literature was reviewed to identify key themes relevant to the aim of the study. To illustrate the market and political practices used by industry to promote consumption of UPFs, we included three country case examples. To develop these examples, we searched Google, industry news sources and government documents using a combination of product, industry, and corporate power (economy, partnership, support, donate, investment, consumption) related terms. We also consulted with experts working in the field. Searches were conducted in English and the national language of each country (Spanish and Bahasa Indonesia). We then studied the social media accounts and websites of relevant Big Food TNCs and civil society organisations to contextualise the corporate practices. Finally, we synthesised the data and literature to create recommendations for public health campaigns.
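The original analysis was carried out in R; the following Python sketch illustrates the per capita conversion and growth computation described above. It is a minimal illustration with toy numbers and assumed column names, not the study's actual Euromonitor or World Bank extracts.

```python
import pandas as pd

# Toy stand-ins for the real inputs (values and column names are assumptions):
# Euromonitor sales volume in kg, World Bank population and income group.
sales = pd.DataFrame({
    "country": ["ZAF", "ZAF", "COL", "COL", "IDN", "IDN"],
    "year":    [2006, 2019, 2006, 2019, 2006, 2019],
    "upf_kg":  [2.1e9, 2.7e9, 1.5e9, 2.0e9, 3.0e9, 5.1e9],
})
wdi = pd.DataFrame({
    "country":  ["ZAF", "COL", "IDN"],
    "income":   ["UMIC", "UMIC", "LMIC"],
    "pop_2006": [48.3e6, 43.8e6, 2.27e8],
    "pop_2019": [58.6e6, 50.3e6, 2.71e8],
})

# Per capita sales: divide each country-year volume by that year's population.
df = sales.merge(wdi, on="country")
df["pop"] = df.apply(lambda row: row[f"pop_{row['year']}"], axis=1)
df["kg_per_capita"] = df["upf_kg"] / df["pop"]

# Compound annual growth rate (CAGR) of per capita sales, by income group.
wide = df.pivot_table(index=["country", "income"],
                      columns="year", values="kg_per_capita").reset_index()
wide["cagr_pct"] = ((wide[2019] / wide[2006]) ** (1 / 13) - 1) * 100
print(wide.groupby("income")["cagr_pct"].mean())
```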
Results

The results are divided into three sections. First, we describe global trends and dynamics in UPF markets. Second, we summarise the market and political practices used by TNCs to establish, promote, and maintain high levels of UPF consumption within MICs. Finally, we present three country case examples - South Africa, Colombia, and Indonesia - to illustrate how Big Food has established, grown and protected its markets.

Global Trends and Dynamics in UPF Markets

Market sales data from 2006-2019 have shown that per capita UPF consumption has reached remarkably high levels in HICs, with levels significantly higher than in upper-middle income countries (UMICs) and LMICs. 6 Survey data show that UPFs contributed 42% of dietary energy intake in Australia in 2011-2012 38 and 58% in the US in 2009-2010. 39 The contribution of UPFs to dietary energy intake is currently much lower in UMICs and LMICs than in HICs, ranging from 21.5% in Brazil in 2008-2009 41 to 29.8% in Mexico in 2012. 42 However, whilst growth is relatively stagnant in HICs, UPF market sales and the contribution of these products to energy intake are rapidly growing in UMICs and LMICs. 43

We present new data from 2019, shown in Figure 1, which projects that the combined sales volume of UPFs in UMICs and LMICs will reach equivalency with HICs by 2024, and the total sales volume of UPBs in UMICs and LMICs is already higher than in HICs. Additionally, as shown in Figure 2, annual growth in UPF sales is much higher in LMICs (3.5%) and UMICs (2.3%) compared to HICs (-0.1%). Similarly, annual growth in UPBs in LMICs (6.0%) and UMICs (1.7%) is much higher than in HICs, where markets are shrinking (-0.4%). These data indicate that UMICs and LMICs are now as important as HICs to Big Food in terms of market size, and more important in terms of growth. As markets in HICs begin to stagnate, Big Food is moving to pursue growth opportunities in UMICs and LMICs, attracted by their large, growing and increasingly urbanised populations whose incomes are rising. 6

Strategies Used to Establish, Promote, and Sustain High Levels of UPF Consumption

Whilst there have been many different categorisations of the corporate playbook used to establish and promote UPF consumption, 17,18,[29][30][31][32][33][34][35][36]44 we broadly categorised these strategies into market practices and political practices. These practices are defined as applied business strategies and tactics employed to advance a corporation's economic performance and create a more favourable external environment. 45

Market Practices

We identified three main categories of corporate market practices used to grow and sustain UPF markets: establishing global production networks, establishing large-scale and hyper-local distribution networks, and scaling up marketing.

Big Food's Transnational Expansion - Establishing Global Production Networks

The first strategy for establishing and growing UPF markets is the transnational expansion of corporations through the establishment of globally integrated sourcing and production networks. This expansion is enabled by TNCs' access to finance that facilitates their growth, vast human resource capabilities and knowledge capital, trademarks and global brand recognition, logistical and manufacturing technologies, and capacity to adapt operational practices to diverse regulatory, economic and social contexts. 43,46,47 Big Food includes some of the leading corporations of economic globalisation.
The industry's expansion rapidly accelerated in the 1980s as domestic markets in North America and Europe became increasingly saturated, and LMICs became more open to foreign trade and investment via rapid industrialisation and income growth. 48 The establishment of the World Trade Organization in 1995, and the subsequent increase in regional and bilateral trade agreements, supported corporations to move investments, production inputs, and final products across borders, expand their intellectual property protections, and foster market deregulation. 49,50

The rapid growth in the flow of foreign direct investment from corporations headquartered in HICs into MICs demonstrates where TNCs intend to expand in the long term. 49 This takes the form of investments in new production capacity through greenfield investments in manufacturing plants, distribution centres, and research and development units; mergers with, or acquisitions of, domestic competitors; and the expansion of networks of franchisees and affiliated partners. These investments have made the productive capacities of Big Food extensive. For example, the Coca-Cola system includes 225 bottling partners and 900 bottling plants, generating 2 billion servings sold every day in over 200 countries. 51 The McDonald's system has 38 695 outlets in 119 countries, most of them owned and operated by franchisees. 52

In many instances, rapid growth is achieved through partnership with or acquisition of domestic competitors. For example, Coca-Cola became the market leader in India in 1993 by acquiring Parle Products' leading soft drinks brands, including Thums Up Cola. 53 Similarly, Nestlé acquired the Chinese companies Hsu Fu Chi and Yinlu in 2011 to tap into growing Chinese demand for UPFs. 54 These acquisitions include tangible productive assets of domestic corporations as well as intangible assets, such as staff expertise and knowledge of local market conditions and cultural preferences, existing relationships with suppliers, and pre-established distribution networks. 49

The increased capacity of TNCs to shift investments, manufacturing plants, and jobs internationally translates into significant political power as governments compete for these investments. This can include governments deregulating their markets or providing tax or other concessions for those corporations. The power of TNCs grows as their investments and the market for their products increase within a country. This is because their impact on the labour market, knowledge transfer to domestic firms, and purchasing of domestic production inputs become increasingly important to the country's economy. 18 With increased power and leverage over governments, TNCs can more effectively avoid or reduce payment of corporate tax. 55 This in turn reduces the capacity of the government to finance health services and programmes, 55 and the public health system's capacity to prevent and treat NCDs.

Big Food's Sub-national Expansion - Establishing Hyper-local Distribution Networks

Big Food's expanding global sourcing and production networks function as the main vectors for the spread of UPFs across countries. However, sophisticated distribution strategies are used to make UPF products widely available to consumers across many different market segments and localities within these countries. The growth of supermarkets and convenience stores is a significant driver of the nutrition transition in many MICs. 43
Although supermarkets can have positive impacts on food safety and improve nutrition in some circumstances, 56 they also act as a major distribution channel for UPFs and processed foods. Due to the economies of scale in the high-volume supply chains of TNCs and chain supermarkets' procurement contracts, supermarkets can provide UPFs at a much lower per unit cost than traditional retailers. 6 For example, in Brazil the share of UPFs as a proportion of total food purchased was 25% higher at supermarkets, and prices for these products 37% lower, compared to other food retailers. 57

Where modern supermarkets do not exist, Big Food uses hyper-localised distribution strategies to reach poorer and rural populations at 'the base' of the consumer pyramid. For example, in Mexico Coca-Cola provides store-owners the goods necessary to run tiendas, which are informal vendors or family-run general stores, on the condition that the tienda stock and promote Coca-Cola's drinks. 50 Similarly, it was reported in 2010 that the Nestlé até Você micro-distribution system uses 7000 door-to-door saleswomen to sell Nestlé's 'affordable nutrition' products to 250 000 households in Brazilian favelas. 58,59 These types of employment programmes reinforce the economic dependence of countries on TNCs.

Marketing and Promotion Practices

Big Food fosters and sustains sales growth by employing an integrated, pluralistic, and rapidly evolving range of marketing techniques aimed at increasing the consumption of its products by local consumers. The World Health Organization (WHO) defines marketing as "any form of commercial communication or message that is designed to, or has the effect of, increasing the recognition, appeal and/or consumption of particular products and services." 60 This section will focus only on digital marketing and corporate social responsibility (CSR) programmes, as they are increasingly being used to expand markets in MICs.

Digital marketing has made the marketing of unhealthy foods more targeted, personalised, and capable of changing consumer behaviour. It is designed to spread rapidly ('virally') on the internet, and can be separated into three types of content: paid (eg, targeted/personalised ads and influencer endorsements), user-generated (eg, content generated by users - including shares, likes and comments) and owned (eg, brand-owned websites, apps, and social media platforms). These types of content are often used in integrated approaches, and further enhance the 'glocalisation' strategies used by Big Food, where corporations adapt their products and marketing to local cultures of consumption and regulatory contexts. These strategies have proven very successful at increasing consumption, with data from Europe demonstrating that combining online marketing with marketing on television and in cinemas can amplify returns on investment by approximately 70%. 61 Big Food is considered to be at the forefront of innovation within digital marketing. 62

A major part of the success of Big Food's digital marketing strategy is spurred by what Shoshana Zuboff describes as 'surveillance capitalism,' a system that "unilaterally claims human experience as free raw material for translation into behavioural data." 63 This means that the more consumers engage with digital platforms, the more information is provided to TNCs to create unsolicited and personalised advertising that is highly effective in influencing consumer behaviour.
This information is also used to increase target audience reach, ad memorability, and brand likeability. 64

Finally, the acceptability and likeability of TNCs are crucial to the profitability of these corporations. Proactive CSR initiatives have been a powerful mechanism to create a stronger intent to purchase from the company, 65 and the communication of CSR initiatives can increase positive public attitudes to unhealthy commodities and legitimise their consumption. 66 This increases sales and creates a receptive environment to loosen or resist regulations, as these corporations obtain greater "social and reputational resources." 67 In emerging markets, CSR has been demonstrated to be a valuable non-market strategy to "help reduce transaction costs when market-supporting institutions are absent or weak" whilst also increasing investment and future sales. 67 Similarly, public-private partnerships (PPPs) also provide corporations with reputational benefits, even though PPPs tackling NCDs rarely result in positive outcomes. 68 In fact, PPPs generally lead to government policies being 'watered-down' to the minimum level of intervention acceptable to industry, resulting in narrow policy responses and voluntary, rather than mandatory and enforceable, commitments. 69

The challenge of tackling Big Food's marketing practices is multi-faceted and complex, and thus any remedial action taken should be comprehensive and multisectoral. Otherwise, Big Food will switch to unregulated media and channels, as the tobacco industry did following the introduction of restrictions on cigarette marketing. 70,71

Political Practices

Big Food directly impacts health through political strategies used to foster favourable policy and regulatory conditions for market expansion, and to sustain and protect its markets in the long term. The ability of TNCs to undertake these political practices intensifies with the further concentration and consolidation of these already large TNCs. 72

Capturing Policy - Fostering Favourable Regulatory Environments

Industry efforts to shape government policy in ways favourable to its commercial interests, known as corporate political activity, have been identified as a substantial challenge to NCD prevention efforts. 73 This is because Big Food's covert 'below-the-line' activity often leads to the implementation of 'watered-down' NCD prevention programmes, where Big Food's profits are privileged over the health of the population. 74 The expanding economic power associated with growing market shares of TNCs exacerbates their political power. The prevalence of such behaviours has led public health experts to propose that corporations producing unhealthy products should not be involved in the development of public health policies. 75

Big Food uses a range of corporate political activities to ensure that implemented policy represents its interests. One key tactic is lobbying, which is "any legal attempt by individuals or groups to influence government policy or action." 76 This typically involves TNCs hiring an external company to persuasively communicate their interests to a legislator or government official. 77,78 Other tactics used alongside lobbying include direct and indirect financial incentives to political parties and policy-makers. Direct incentives take the form of donations, gifts, and other financial inducements, whilst indirect incentives include promises of economic benefits from employment, production, and supply of UPF.
Another common tactic used to disincentivise governments from employing stricter, typically much more effective, public health policies is the threat of legal action. 79,80 Finally, Big Food influences policy through policy substitution. Whilst this can involve providing amended versions of policies that benefit the corporation or industry, it usually occurs through the introduction of 'self-regulatory' codes of conduct. The four MICs that have self-regulatory codes on advertising to children - South Africa, Mexico, Thailand and Brazil - are all countries where government regulation had been proposed. [81][82][83] Research has shown that these codes have very low efficacy in changing industry behaviour, as they are typically designed to replace government regulation without affecting sales. 84,85

Capturing Science - Fostering Favourable Knowledge Environments

Big Food engages in evidence shaping to make governments disregard legitimate science. 18,86 Tactics used include funding research that seeks to obscure public health evidence, disseminating data that favours industry, using unpublished evidence to obstruct policy, hosting scientific events, 87 and criticising evidence to emphasise complexity or uncertainty. 88 These strategies were used in China, where the industry-funded research organisation International Life Sciences Institute (ILSI) has facilitated industry involvement in 'scientific' research and events, as well as successfully lobbying the Chinese government to reframe its obesity policy. 27 Chinese policy now argues that a lack of physical activity is the main causal factor for obesity, and that physical activity rather than diet should be the main focus for interventions. This policy frame strongly contrasts with how public health organisations argue that obesity is a normal response to an obesogenic environment characterised by the ubiquitous marketing and availability of UPFs, and that the UPF industry should be regulated to reduce obesity. 89 ILSI also appears to have been successful in shaping health policy in India. 90

Big Food actively seeks to reduce the ability and credibility of public health organisations and researchers to advocate for regulation of the UPF industry. This includes threats to sue individual scientists and/or research institutions, monitoring individuals' movements and using the media to launch character assassinations. 91,92 Big Food also infiltrates and distracts the public health community by poaching advocates to work for industry-funded research groups, such as the Coca-Cola-funded Global Energy Balance Network. 93 Working for such groups reduces the credibility of advocates who oppose industry tactics. Similarly, Big Food provides funding to public health organisations to stifle their ability to advocate for system reform. For example, Coca-Cola has funded programmes with the Mexican Federation of Diabetes and Funsalud, which subsequently stopped advocating for health system reform. 94,95

Capturing Civil Society - Mobilising a Grassroots Lobby for Big Food

TNCs use PPPs, CSR, and sponsorship to generate a smokescreen of goodwill with civil society organisations, sports groups, and community members who can be called on to lobby for the corporation.
This smokescreen is primarily driven by TNC investment in external research, services, and programmes, where a powerful alliance of inter-dependent, co-opted organisations - including public relations agencies, management consulting firms, advertising agencies, and key media companies - is used to reshape how civil society perceives Big Food. These organisations help Big Food shift from defensive strategies that deny the role of its products in promoting NCDs to more conciliatory strategies that emphasise TNCs' role in ostensible solutions to combatting such NCDs. 96 Investments in CSR and PPP programmes, along with arguments about the economic value of TNCs, can also be used to co-opt some elements of civil society as 'grassroots' lobby groups that advocate for regulations favouring TNCs. 97

Country Case Examples - South Africa, Colombia and Indonesia

In this section, we examine the UPF and UPB sales and consumption patterns in South Africa, Colombia, and Indonesia, as well as the market and political practices used by TNCs within these countries.

South Africa

Nutrition Status and UPF Sales Trends

Dramatic nutritional changes have occurred in South Africa in the last 20 years, with the proportion of overweight girls increasing from 8.9% in 2000 to 29.4% in 2016 and obesity in children increasing from 1.8% in 2000 to 12.8% in 2016. 98 Similar rises have occurred in adults. 98 These changes occurred despite the South African Government having comprehensive policies that respond to the major NCD risk factors, 99 including the establishment of a health promotion levy on SSBs in November 2017. The increases in overweight and obesity rates have been mirrored by high and increasing per capita UPF and UPB sales. As shown in Figure 3, per capita UPB sales increased by 55% between 2006 and 2019, with a further rise of 12% anticipated by 2024. Similarly, UPF sales per capita increased by 29% between 2006 and 2019. This was accompanied by high market concentration within SSBs. 100

Market Practices

The significant growth of UPF and UPB products in South Africa has been accompanied by increased availability through supply chains and distribution systems, through the expansion of supermarkets into townships and vertically-integrated networks of informal vendors 101; affordability 102; and acceptability 100 through changes to product design and increased marketing. 103,104 Big Food has been highly active in implementing CSR initiatives, including physical activity and food distribution programmes, with the South African departments of Basic Education, Sport and Recreation, and Health and Agriculture. 105 These include the Nestlé Healthier Kids Initiative, which aims to provide Nestlé products to 50% of all South African primary school students in the guise of 'nutrition,' 106 and Coca-Cola's youth employment program, which sponsors the ownership of spaza shops in townships. 107 Additionally, TNCs have moved to create "deep and broad market penetration linked to people's passions, even into the townships and rural areas." 108 This has primarily been done by appealing to South Africans' love of sport - for example, Coca-Cola's sponsorship of the 2010 FIFA World Cup. 109

Political Practices

Big Food has used front groups, such as the ILSI, and trade associations, such as the Beverage Association of South Africa, to advocate for its interests.
When an SSB tax was introduced in 2018, the Beverage Association and the American Chamber of Commerce in South Africa publicly argued, and lobbied the government, that the introduction of such a tax would cause significant job losses that would destabilise the national economy. 105 Their corporate submissions on the tax misrepresented evidence, in a way that did not "observe widely accepted approaches to the use of either scientific or economic evidence," 110 to argue that the tax would not improve health outcomes. Following the proposal of government regulation for food advertising to children, Big Food in South Africa lobbied for a policy substitution. This led to the development of two voluntary codes for marketing to children, which focused on regulating television and school-based advertisements. 111,112 Like most self-regulatory initiatives, 84 these codes were ineffective in changing industry behaviour, with foods of low nutritional value accounting for 53% of all food advertising in peak after-school child viewing time in 2011, 113 up from 50% in 2006. 114

Colombia

Nutrition Status and UPF Sales Trends

Overweight and obesity rates increased in Colombia from 45.9% to 56.5% between 2005 and 2015; increases were observed in men and women across all ages, and in both rural and urban inhabitants. 115 There has also been a steady increase in the prevalence of diabetes over the last 30 years in Colombia, 116 with 4000 people between 30 and 70 years old estimated to die prematurely every year from diseases related to obesity. 117 Colombia is a growing market for value-added, processed, and packaged food products. It has a high level of per capita sales for UPBs and has seen considerable growth across both UPFs and UPBs, with per capita UPF sales projected to continue to grow over the next four years (Figure 3).

Market Practices

TNCs in Colombia have invested significantly in PPPs and CSR, which may help explain why the Colombian government provides Big Food with significant leeway around regulations. For example, in 2016 the government recognised Postobón, the largest Colombian beverage company, which provides significant financial support to five health foundations that operate major hospitals within Colombia, 118 as one of Colombia's most innovative corporations. This allowed Postobón to reduce how much it pays in income tax. 119 Postobón is not alone in providing significant support to civil society organisations. Coca-Cola FEMSA, the franchise bottler for Coca-Cola in Latin America, also works closely with multilateral organisations, cultural institutions, governments, and civil society. 120

Political Practices

Lobbying and coalition management have been core strategies used by Big Food to resist the implementation of NCD prevention policies in Colombia. For example, Postobón and its allies, including the National Association of Businessmen of Colombia, had over 90 lobbyists working to influence legislators during the soda tax bill debate. These lobbyists argued that the soda tax would reduce jobs and negatively affect the owners of independent stores and the economy. 117 During committee hearings on the bill, in a blatant violation of the rules of the Colombian Congress, these lobbyists sat next to legislators. 117 This bill did not pass despite widespread community support. Finally, Big Food has sought to intimidate organisations that advocate for NCD prevention.
For example, in 2016 Postobón filed a complaint with the government's consumer protection agency against a commercial created by Educar Consumidores, a civil society organisation. The commercial in question showed that consuming four sugary drinks a day equates to 47 teaspoons of sugar. Despite evidence supporting this claim, the agency ruled in favour of Postobón and ordered the advertisement to be withdrawn. After its withdrawal, Educar Consumidores employees reported that their phones and computers were hacked and placed under surveillance. 117 The organisation's director also reported being personally intimidated by threats made over the phone and in person. 117

Indonesia

Nutrition Status and UPF Sales Trends

Over the last three decades, Indonesia has undergone a profound socioeconomic and epidemiological transition. Seven out of ten deaths in Indonesia are now NCD-related, 122 with dietary risks being one of the three leading risk factors for death. 123 Between 2007 and 2018, overweight and obesity rates in Indonesian adults increased from 26.3% to 35.4%, with the percentage of obese individuals increasing from 10.5% to 21.8%. 124 Between 1999 and 2014, Indonesians' caloric intake of pre-prepared and packaged food nearly doubled. 125 With the largest population in Southeast Asia and the fourth largest in the world, Indonesia represents potential for significant market growth for Big Food, particularly as UPB sales per capita have more than doubled since 2006. 126

Market Strategies

To expand UPF markets within Indonesia, Big Food is making significant investments in advertising and marketing. Compared to the previous year, in 2016 the UPB industry increased its advertising spending by 33% to US$1.4 billion whilst the UPF industry increased its advertising spending by 54% to US$700 million. 127 Given high levels of television viewing by Indonesian adults (around 4.3 hours per day) 128 and children (around 7.4 hours per day), 129 the marketing expenditure of Big Food is focused on television advertisements. Big Food has consistently ranked amongst the top three highest buyers of television advertising in Indonesia. 127,130,131 Its expenditure has focused on children, with 15 minutes of every hour of children's television programming being food advertising. 132

Political Practices

Big Food exercises great influence over the decisions of the Indonesian government, as it is one of the highest contributors towards Indonesia's gross domestic product outside of the oil and gas sector. 133 Nestlé appears to have a close relationship with the Ministry of Industry, as the Minister remarked in 2019 that he hoped Nestlé would become an "investment ambassador of Indonesia in the food and beverage sector." 134 The importance of this working relationship, and the Ministry's relationship with other TNCs, was recognised when the food and beverage industry was named as one of the five priority sectors in the Indonesian Government's economic growth plan. 135 Since the release of this plan, Nestlé has invested US$100 million to increase its manufacturing capacity in Indonesia. 134 To generate further goodwill, and gain access to new markets, Big Food has undertaken CSR initiatives and engaged in PPPs in Indonesia. For example, Nestlé has established partnerships with schools and NGOs through its Nestlé Healthy Kids program 136 and distributed 1.6 million food and beverage products during the coronavirus disease 2019 (COVID-19) pandemic. 137,138
These practices are not isolated to Nestlé, with Coca-Cola Amatil Indonesia and Mondelez Indonesia also undertaking significant CSR projects to strengthen their relationships with the government, local NGOs, and religious institutions. 139,140 Big Food has also sought to reframe public debates to ensure the continuation of its markets within Indonesia. For example, the Association of Indonesian Soft Drink Producers opposed the introduction of an SSB tax suggested by the Indonesian Finance Minister in 2020. The Association falsely claimed that such a tax would bring about no health benefits and result in the loss of 120 000 jobs. 141,142

Recommendations for Public Health Campaigns

The previous sections outline the considerable political and economic power of Big Food, the practices it uses to maintain this power, and the coterie of co-opted organisations that support it. These sophisticated practices can seem overwhelming to those seeking to limit the corporate power and health harms of Big Food. However, it is important to remember that tobacco control advocates felt similarly in the 1960s and 1970s. 143 The major problem is one of how rather than what. We know what to do in terms of the strategies and programmes that need to be implemented. The major barriers lie in how to garner the necessary political, bureaucratic, and civil society support to implement these effective public health controls. 144 Drawing on the latest evidence on factors enabling the successful passage of policies and regulations targeting Big Food, 145 as well as the successes of the tobacco control movement, this section outlines recommendations for public health campaigns.

Get the Right People

The right people, with the right skills, training, and experience, are key to countering Big Food's power and reducing harms from UPFs. This includes bringing in people who have expertise in implementing and administrating public health interventions, as well as countering industry attacks on these programmes. A study of the factors that supported the successful passage of the SSB tax in Mexico shows that good leadership, including skills in organisation, cooperation, planning and the ability to effectively partner with other sectors, is essential to the successful implementation of NCD prevention interventions. 146 This was also evident in Thailand, where the Prime Minister's Office was able to bring the right people together from education, agriculture, law enforcement, finance, transport, academia, and civil society to lead its NCD response. 147 Based on local and international ideas and evidence, this team used its diverse skillsets to "put the interests of people before self and commercial interests." 148 This specific mix of skills helped the Thai government to introduce SSB, tobacco and alcohol taxes.

Build Networks to Pool Resources

Building networks of individuals and organisations with a shared purpose is an essential driver of political commitment and nutrition policy change. 149 The importance of networks was evident in the passing of Mexico's tax on non-essential foods and peso-per-litre tax on SSBs in 2013. This campaign involved activists bringing together 22 NGOs and 690 civil society organisations from public health and consumer rights perspectives to advocate for the tax's implementation.
With significant philanthropic financial support, this network was able to build relationships with legislators and undertake strategic communication campaigns and community outreach that were key to the bill's passage. 74,146,[150][151][152] Networks need to be developed, expanded, nurtured, and supported over the longer term, 153 as the formation, expansion and support of coalitions are crucial to resisting and overcoming the political power of Big Food. Whilst membership diversity helps build the credibility of a network, it also presents a significant challenge in terms of developing unified and effective responses. This emphasises the need for strong leadership, opportunities for ongoing dialogue, and the development of shared norms within a network. 149 Lessons on successful networking can be drawn from transnational tobacco networks, which have brought together researchers, advocates, and international and national health organisations to embed tobacco control in HICs and, increasingly, MICs. 154

Finally, public health networks cannot operate in isolation. In fact, they should learn from Big Food TNCs who, despite competing against each other in the marketplace, collaborate and pool organisational, financial, and human resources to undermine, delay, or stop effective public health action. 110,155 To increase their credibility, networks should partner with practitioners and researchers from a broad range of disciplines, including international agencies, bilateral aid agencies, philanthropists, 156 and journalists. 157 Additionally, there are many aligned and genuinely non-conflicted, non-health-harming organisations working in trade, poverty alleviation, environment and education, as well as in the private sector, that public health could partner with. However, practitioners often do not understand the perspectives or language of these sectors well enough to partner with them effectively. As Bronwyn King, founder of Tobacco Free Portfolios, explains: "I needed to learn the language, systems, structure, rules and regulations that defined the whole finance sector… only then, when I understood the landscape from within, could I advocate for change" (Personal communication, 2020). Building these cross-sectoral alliances and drawing on the expertise of other aligned sectors and organisations will allow public health practitioners to build more comprehensive, effective, and ultimately successful campaigns for NCD prevention. 158

Governments Need to Step Up

Similar to tobacco control, 154 governments ought to be very cautious about working with highly conflicted UPF corporations. 159 Governments, not just NGOs, should be monitoring the upstream drivers of harmful consumption of UPF and their levels of production, cost, availability, advertising, and sponsorship. Governments should also monitor and be transparent about political donations, major investors, funding of research, and the legislative and regulatory environment relevant to these products. This is more likely to occur with strong support from 'cohesive, responsive and strongly led' public health networks. 160,161

Expand What 'Counts' as Public Health Skills

To reduce the harmful impacts of Big Food, we need to actively recruit people who have the skillsets that are typically missing in public health teams. This should include bringing in people with lived experience of NCDs, as well as digital strategists who understand how we can adapt and utilise the rapidly changing and expanding digital media ecosystem to advance health. 62,151
62,151 We also need to work with business, trade, and governance analysts to help us develop and frame how we communicate our strategies to policy-makers and civil society. 146 We need to build a cohort of political strategists who understand the political system and can work across government. 146 We need investigative journalists who are passionate about uncovering the truth and standing up to corporate power. 157,161 Finally, we need to find persuasive advocates and lawyers who are prepared to fight for people's health. 146,161-163
Discussion
Across the globe, Big Food is becoming increasingly powerful and strategic in how it grows its market in MICs. The examples of Nestlé and Coca-Cola used in this article are the norm, not the exception. Whilst UPF sales in HICs are only marginally expanding, we have shown that the growth and sales of Big Food are rapidly expanding in MICs and that the combined sales of MICs currently outweigh sales in HICs (Figure 1). These sales are projected to grow significantly over the next decade as the lower per capita UPF and UPB consumption in heavily populated MICs increases at a much higher rate than in HICs. Our data show how important MICs are for Big Food's current and future growth. We are greatly concerned that the growth of Big Food has been exacerbated by the COVID-19 pandemic. COVID-19 is drawing public attention away from NCDs 166 and TNCs are quickly adapting their market practices within MICs to maximise their penetration in this environment. Additionally, government funding for NCD prevention is likely to decline significantly during and after the pandemic, which will lead to increased calls for multistakeholder responses, typically PPPs and CSR, to prevent NCDs in MICs. This will allow highly opportunistic Big Food TNCs to further embed themselves within many MICs. However, public health cannot effectively counteract these profound long-term trends unless we understand how Big Food is growing and sustaining UPF markets within MICs. In this article, we first considered the market practices that TNCs use to advance their economic performance, exploring how they have established global production networks, created hyper-local distribution networks, and scaled up their digital marketing and involvement in CSR and PPP programmes. We then explored how TNCs are changing government policies, challenging scientific expertise, and co-opting civil society into a grassroots lobby. Governments and practitioners need to understand these practices to create interventions that counteract the growth of Big Food. Although we use the term 'transnational' to describe Big Food corporations throughout this article, it would be more appropriate to refer to them as 'supranational corporations.' This is because the size, power, global reach, and capacity of these corporations allow them to circumvent the laws and regulations of the countries in which their products are produced and consumed, effectively allowing them to operate 'above' the nation state. Interestingly, this is happening whilst TNCs become increasingly hyper-local in their market practices, which has also bolstered the development of domestic UPF corporations with transnational ambitions. The challenge of regulating these 'supranational corporations' and their hyper-local interests is increasingly clear as national governments struggle to regulate the borderless digital ecosystem where digital marketing is increasingly being used by Big Food to target consumers within MICs.
Whilst the corporate political activities of Big Food corporations are increasingly studied and monitored, 78,94 the academic study of these activities within MICs is relatively new. 97 To counteract the expansion of Big Food, academic study "that investigates industrial diseases and the corporations that drive them" 7 needs to expand and move from descriptive studies to studies that can underpin effective interventions. This includes interventions within the digital ecosystem, where public health practitioners need to understand the reach and impact of digital marketing by Big Food, and how Big Data informs their practices, in order to proactively work with civil society and policy-makers to set the parameters within which Big Food marketing can operate. Our study had several limitations. First, our perspective is limited by our positionality, with only two authors being from MICs. Second, our study was not comprehensive, as it did not include the experiences of policy-makers and public health advocates within MICs. Third, our analysis was primarily empirical, with limited engagement with political theory. Fourth, we focused on high-level global trends, without analysing trends for UPF/UPB subcategories or regional data. Finally, the quality of our analysis relies on the quality of the analysed market sales data. Future studies on this topic should fill the gaps left by this study by developing better data sets, engaging more heavily with political theory, exploring the nuances of UPF/UPB data, and including the lived experience of people in MICs.
Conclusion
The country case examples of South Africa, Colombia, and Indonesia demonstrate how Big Food uses a combination of sophisticated market and political practices to maximise sales, minimise civil society and scientific opposition, and win over local politicians and bureaucrats. 26,28,36,78,167 Yet, the NCD prevention policies of countries such as Thailand, Mexico, and South Africa demonstrate that governments and civil society can effectively curb the expansion of Big Food. Based on these experiences, we proposed recommendations for how public health campaigns can counter the influence of Big Food in MICs. We argue that this would involve developing an expanded global network of driven and passionate people with diverse skillsets, and increased government leadership. With these elements in place, we will then have better tools to be able to oppose the pernicious activities of Big Food and improve public health.
Commissioning report of the MuCool 5 Tesla solenoid coupled with helium refrigerator
This report describes the results of the commissioning of the MuCool refrigeration system coupled with the superconducting 5 T solenoid. The commissioning was done from March 4 through April 1, 2010. Since 2005 the solenoid and a combination of RF cavities have been installed in the experimental hall (see Fig. 2). Until recently the solenoid has been cooled down with a combination of liquid nitrogen and liquid helium from portable dewars. A standard cooldown procedure would use three to four 180 L liquid nitrogen tanks to cool the coil down to 90-100 K, utilizing the large latent heat of nitrogen. Then the nitrogen supply would have to be pulled from the solenoid's fill connection and replaced with the supply from the 500 L liquid helium dewars. Then helium would be used to purge nitrogen from the system and cool down the coil to 4 K (see Fig. 3). These operations were labor intensive; required experts to insert the fill bayonet properly to match the helium collection cap receptacle 29 inches below the cryostat top flange (see Fig. 1); stressed components of the fill connections; required careful purging of nitrogen with helium; and finally wasted cooldown helium to atmosphere. Consequently, all helium required to maintain the solenoid at operating temperature was lost to atmosphere. It is estimated that up to 1,500 liters of helium would be required for cooldown, plus up to 100 liters a day to maintain the boil-off rate. Therefore a month-long operation would waste 4,500 liters of helium (1,500 L for cooldown plus 30 days × 100 L/day). In 2006 the experiment ordered 35,500 liters of helium. Most importantly, in order to maintain the solenoid cold, two technicians were assigned to monitor parameters, order cryogens and maintain levels. In 2008 the Accelerator Division commissioned the MTA refrigeration system, which consisted of the Sullair #1 (Brown) compressor, helium refrigerator #1 (Brown), a 2-phase helium dewar, and a 2-phase Mark-III helium dewar with an in-line JT valve. The auxiliary system included the 3,000 gal LN2 dewar #31 and associated transfer line, a 1,360 ft³ helium gas storage tank, an inventory management system, and an ODH system. Table 1 provides a summary of performance for that system. The transfer line carries helium and nitrogen between the refrigerator room and the detector hall at MTA. The piping system is arranged like a capital "L" with a bayonet box at each end and an expansion box at the corner. Three vessels are connected with transfer lines. The cylindrical bayonet box in the refrigerator room provides connections to the supply dewar and refrigerator. The rectangular bayonet box in the detector hall provides valves and bayonets for connections to the magnet. The expansion box, also in the detector hall, allows thermal contraction back to the two bayonet boxes. All components are vacuum insulated and have a common vacuum. The bayonet can (dwg. 9213.550-ME-435660) has five bayonet connections: four for helium 5 K (in/return) and the 40 K shield (in/return), and one for liquid nitrogen. Only three connections are used: helium 5 K supply and return and liquid nitrogen return. The transfer line (dwg. 9213.550-ME-435663) carries helium and nitrogen between the refrigerator room and the detector hall at MTA. In the past the solenoid was interconnected with the helium (or nitrogen) supply with a flexhose stinger that had to be inserted every time with detailed precision to match the helium receptacle cap on the bottom of the solenoid's helium vessel. The insertion length of the helium supply ½" stinger is 42".
That stressed the bayonet connection supported with G10 material and increased the probability of critical failure. The insertion length also made it impossible to un-sting the helium supply after the solenoid was raised to its new specified position. In order to avoid that, a new J-type permanent stinger was built (dwg. 1650-ME-381348) and installed with a solid support to provide a bayonet for the helium U-tube connection from the valve box. New cooldown piping was designed and installed. It provided two paths for the helium return to the refrigerator suction. The first one is via the bypass valve EVDC (see Fig. 5), to be used in the initial stages of the system cooldown. The second path is via a ¾" electrically actuated valve EVVT from the solenoid via the vaporizer and helium heater, to be used during the solenoid's cooldown and operations. A 1 kW dual-path electric heater was installed in the experimental hall in the cooldown piping from the solenoid to suction. All electrical equipment and wiring had to be rated for operations in a hydrogen environment. The controls for the refrigerator, including the helium dewars and inventory controls for the system, are ACNET based. The schematics of the system as represented on the ACNET graphical interface are shown in Fig. 6-8. More detailed data, snapshots and data plots can be found in the MTA Cryo electronic log book http://www-mta-crl.fnal.gov/mta/Index.jsp?viewTopic=Operations/Cryo. The controls utilize an I/O system and two thermometry crates typical for the Tevatron refrigerators [5]. The second thermometry crate was modified to have a much lower current of 9 µA, a factor of 0.01 compared to the standard Tevatron crate, in order to decrease localized heating of the Cernox resistors, and an increased voltage gain by a factor of 100. For comparison a LakeShore Model 218 module was used to process the signals from the Cernox resistors. A standard set of ACNET parameter pages, including F8, F9, F61, and graphical interfaces were developed to monitor the system. All parameters are data logged in 1 s or 15 s data loggers. The front-end FrigMU CPU is located in the MTA building. Additionally, the ODH chassis, Benshaw motor starter parameters and the LakeShore Model 218 are routed to ACNET via an IRM module located in the MTA building.
Cooldown and Fill Rate
The cooldown of the refrigerator started from warm conditions on March 18, 2010 and took 8 hours to drop the wet engine exhaust temperature below 9 K. It took an additional 4 hours, or a total of 12 hours, to build a 60% level in the 100 liter 2-phase separator. The bypass valve EVBH was opened up to 80% to ease cooldown and fill. The solenoid liquid nitrogen shield level was built to 60% within the first 3 hours after connecting the cooldown and stayed stable through the entire operations. As the solenoid fill valve EVMH was opened during the transfer line cooldown, the fill temperature dropped alongside the helium supply and reached 5 K within the first 9 hours of cooldown. It took an additional 26 hours after reaching level in the 2-phase separator, or a total of 36 hours, to drop the solenoid temperature (as registered by the diode TDHE) below 5 K. It took an additional 8 hours, or a total of 44 hours, to build a 30% level LLHE in the solenoid. The solenoid temperature was ~4 K and the system was fully operational. It took 4 hours (from 8:30 to 12:30 on March 22) to fill the solenoid from 23% to 63%.
It is fairly difficult to calculate liquid volume as a function of liquid level, or to prove linear coverage of the level probe, without knowing the exact quality of the helium and the dimensions and position of the probe (as shown in Fig. 1), but the fill rate may be estimated as approximately 30 liters/hour. This is half the fill rate to the on-line Mark-III dewar measured in 2008 for the same refrigerator in the liquefier mode. Certainly the higher heat losses along the different components of the system and flashing while expanding into the solenoid might have contributed to the lower fill rate.
Economics of Nitrogen and Power Consumption
Assuming that the helium system is tight, then after the initial cooldown and fill the main cost contributors are electricity and nitrogen. The system was cooled down and stably maintained with one Sullair compressor at 220 kW. The average liquid nitrogen boil-off rate in dewar #31 was measured in 2008 as 60 gal/day (or 2%). In 2010 the boil-off rate was measured as 40 to 50 gal/day. Average liquid nitrogen consumption for the refrigerator precool and solenoid shield was measured as 20 gal/hr (including boil-off). This means an LN2 consumption of ~450-500 gal/day, so the 3,000 gal liquid nitrogen dewar #31 must be refilled every 4 days. For the commissioning period we selected a 2-day refill period.
Helium Inventory and Leaks
The helium inventory required to fill and maintain the system cold at 4 K is less than 400 liters, or 10,640 scf of helium; thus a helium inventory tank at 150 psig should be sufficient to condense and maintain liquid helium levels. The 24-hr helium loss measurement was done from 14:00 on March 27 to 14:00 on March 28, while the helium levels in the 2-phase dewar and solenoid were kept constant and the ambient temperature made a full day-night cycle. The loss rate was calculated from the mass loss of the 1,360 ft³ inventory helium tank from 23.7 psig to 16.7 psig. That loss was 0.28 lb/hr or 27 scfh. This is not a significant helium loss. Most likely it can be attributed to o-rings in the power lead flowmeters and other small leaks.
Performance and Stability
The system demonstrated very stable overall performance. As shown in Fig. 9, the system stably ran in JT mode without the wet engine. Both dry and wet engines had to be locked in order to stay below a certain maximum speed (typically below 500 rpm) to prevent dragging down the compressor discharge and wet engine inlet pressures. It may be beneficial to open clearances for the wet engine so as to reduce mass flow through the valves. Plots in Fig. 10 show clearly that helium levels in both the 2-phase subcooler and the solenoid could be maintained at different expander speeds and inlet or exit pressures while expanding helium in the wet engine from ~5 K to 2-phase. Since the load was maintained stably in different modes, the only optimization was to conserve liquid nitrogen. Plots in Fig. 11 show that the nitrogen usage rate is very dependent on the opening of the solenoid shield valve EVDN. Most probably the valve is oversized, and it will be modified for a much smaller Cv. There was only one system instability, namely sudden pressurization of the solenoid when opening the solenoid helium fill valve EVMH. Plots in Fig. 12 show a pressure spike PTHE up to 15 psig when opening EVMH from 20 to 50%. The loop needs tuning. An additional pressure sensor will have to be installed upstream of the EVVT solenoid valve to provide better pressure protection of the solenoid.
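Two of the figures quoted in the sections above (the ~30 liters/hour fill rate and the 0.28 lb/hr inventory loss) can be checked with simple back-of-envelope arithmetic. The sketch below is illustrative only: the total vessel volume (~286 L, inferred from 70% corresponding to ~200 L in the boil-off section later in this report) and the assumption of ideal-gas behaviour at a 300 K average tank temperature are assumptions made here, not values from the system documentation.
```python
# Back-of-envelope checks for two figures quoted above. The vessel volume
# (~286 L total, inferred from 70% ~ 200 L) and ideal-gas behaviour at an
# assumed 300 K tank temperature are assumptions, not documented values.

R = 8.314           # J/(mol*K), universal gas constant
M_HE = 4.0026e-3    # kg/mol, molar mass of helium
PSI_TO_PA = 6894.76
FT3_TO_M3 = 0.0283168
KG_TO_LB = 1.0 / 0.45359

# 1) Solenoid fill rate: level rose from 23% to 63% in 4 hours.
vessel_volume_l = 200.0 / 0.70                     # ~286 L at the 100% level
fill_rate = (0.63 - 0.23) * vessel_volume_l / 4.0
print(f"fill rate ~ {fill_rate:.0f} L/hr")         # ~29 L/hr vs quoted ~30 L/hr

# 2) 24-hr inventory loss: 1,360 ft3 tank dropped from 23.7 to 16.7 psig.
tank_volume_m3 = 1360.0 * FT3_TO_M3
dp_pa = (23.7 - 16.7) * PSI_TO_PA
moles_lost = dp_pa * tank_volume_m3 / (R * 300.0)  # ideal gas law
lb_per_hr = moles_lost * M_HE * KG_TO_LB / 24.0
print(f"loss ~ {lb_per_hr:.2f} lb/hr")             # ~0.27 lb/hr vs quoted 0.28 lb/hr
```
Both results land within a few per cent of the values reported above, which suggests the quoted figures are internally consistent.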
After the rupture disk was replaced and the system was checked for tightness, the system was re-cooled and the helium level restored without any problems. Eight new Lake Shore Cernox CX-1050 resistors were installed to accurately measure temperature in the helium paths (http://www.lakeshore.com/temp/sen/crtdts.html). In order to measure, scale and use the temperature with the Cernox resistors, we used two methods. The first was an eight-channel Lake Shore Model 218 monitor with two computer interfaces, IEEE-488 and serial port. The monitor required entering tabulated data in Log(Resistance, Ohm) versus Temperature (deg. K). The module then computed the temperature in deg. K and transmitted it to ACNET via an internet rack module (IRM). This solution offered a precise and reliable, though relatively expensive, method of using the Cernox resistors. The second was a Tevatron-style thermometry crate modified to have a much lower current of 9 µA, a factor of 0.01 compared to the standard Tevatron crate, in order to decrease localized heating of the Cernox resistors, and an increased voltage gain by a factor of 100. In order to scale the measured resistance into temperature, the tabulated data for each individual Cernox resistor was fitted to a calibration equation. This fit allowed an accuracy of better than +/-2% at each temperature level over the entire range from 4 K to 300 K. The raw resistance readings from each of the eight Cernox sensors were routed through a specially designed switch box that allowed easy switching between the LakeShore Model 218 monitor and the thermometry crate. The readings were then compared to evaluate comparative accuracy. It was found that localized heating does not produce a significant error (above 0.2 K at the 4 K level) if the switching current is reduced to 9 µA. Still, issues of low-level signal noise were persistent for the Cernox sensors located in the experimental hall, more than 100 feet away from the thermometry crate.
Helium Boil-off Test
The helium boil-off rate of the unpowered solenoid was measured on March 29-31 with both the liquid helium EVMH and liquid nitrogen EVDN valves fully closed (Fig. 14). It was measured to be from 0.75%/hr (with the LN2 shield above 40%) to 1.5%/hr (with the LN2 shield below 40%). As it is difficult to integrate the actual volume of the helium vessel over the depth of the probe, we assume the boil-off rate to be 1.1%/hr. It is important to note that the solenoid temperature stayed between 3.9 K and 4 K within the whole range of LHe level. Assuming an initial level of LHe in the solenoid of 70%, or 200 liters, this translates to a 3 Watt heat load for the unpowered solenoid with a compromised nitrogen shield. By comparison, previous results indicated 3-4 Watts of boil-off for the powered solenoid with nominal helium and nitrogen levels. The solenoid can stay cold and minimally filled for up to 48 hours if the nitrogen shield is maintained.
V System Improvement List
Though the system performed well, there are several areas where additions and improvements are needed to ensure equipment safety, ease of remote operations, redundancy and accuracy. The following need to be done:
- Complete modifications to, and commissioning into service of, the second Sullair "Red" compressor. That will ensure support of the operations with much greater availability. Mechanical, electrical, controls and safety items need to be completed very similarly to those done for the "Brown" compressor.
- Install a pressure transmitter in between the solenoid and the electrical cooldown valve EVVT.
That pressure transducer will be able to detect pressure in the solenoid with a much smaller lag. If the pressure in the solenoid spikes above 10 psig, EVVT should be opened by an additionally created finite state machine.
- Create another finite state machine to turn on the electric heater in the cooldown helium path.
- Replace the seat and bullet of the nitrogen fill valve EVDN with a smaller orifice valve and stroke it.
- Find a better tune for the helium fill valve EVMH.
- Install better thermal insulation for the exposed cold piping (nitrogen vent and helium cooldown to EVVT) as well as the top flange of the solenoid, in order to minimize icing and condensation.
- Investigate electrical noise and accuracy issues for the Cernox resistors read via the Tevatron-style thermometry crate.
- Create algorithms, finite state machines and a graphical interface for helium inventory, startup of the compressors and system cooldown, similar to the Tevatron.
- Complete the writing and approval of the dangerous operations procedures and operating guides, and provide training to AD/Cryo/Operations for providing support for MuCool cryo operations.
VI Conclusion
The MuCool 5 T solenoid was successfully cooled down and operated coupled with the MTA "Brown" refrigerator. The system performed as designed with a substantial performance margin. All process alarms and interlocks, as well as ODH and fire alarms, were active and performed as designed. The cooldown of the refrigerator started from warm conditions and took 44 hours to accumulate liquid helium level and bring the solenoid temperature below 5 K. Average liquid nitrogen consumption for the refrigerator precool and solenoid shield was measured as 20 gal/hr (including boil-off). The system was stable, with a sufficient margin of performance, and ran stably without the wet expansion engine. The quench response demonstrated proper operation of the relieving devices and pointed to the necessity of improving the tightness of the relieving manifolds. The boil-off test demonstrated an average heat load of 3 Watts for the unpowered solenoid. The solenoid can stay cold and minimally filled for up to 48 hours if the nitrogen shield is maintained. The list of improvements includes commissioning into operation the second helium compressor and completion of improvements and tune-ups for system efficiency.
VII Lessons Learned
The MuCool 5 T solenoid was cooled down and operated coupled with the MTA "Brown" refrigerator again on May 3-7, 2010 (see Fig. 15). In the beginning, this cooldown was not successful. We attempted to cool down with wide open EVBH and EVMH fill valves for the 2-phase dewar and the solenoid, respectively. That resulted in a low supply pressure PI13 and inefficient JT-ing to the helium volumes. When this was understood, we set EVBH to regulate PI13 = 17 psig and EVMH to regulate the wet engine inlet temperature TR27 = 9 K, the same way as the satellite refrigerators do. The system performed as designed with a substantial performance margin. All process alarms and interlocks, as well as ODH and fire alarms, were active and performed as designed. After the system was properly configured it took 10 hrs to accumulate level to 60% MCLL11 and 80% MCLLHE. After the system was properly tuned up to regulate MCEVLN on MCLLLN and MCEVX1 on MCTI5 30-100%, the average liquid nitrogen consumption for the refrigerator precool and solenoid shield was measured as 15 gal/hr (including boil-off). After some discussions with a number of people, we are proposing the following to ensure better safety and efficiency in the operations of the 5 T magnet.
Please review and comment. We feel that though it may result in some inconveniences or delays in operations, it will make operations easier and safer in the long run.
1. We found that the cooldown solenoid valve EVVT must be kept open at all times to keep the solenoid pressure below 10 psig. We do not want to exercise the safety reliefs to keep that pressure in check. At the same time, we also found that keeping the solenoid wide open makes the pressure differential between the solenoid and the compressor suction low and insufficient to secure good flows through the power leads. We made a test this morning and saw a large response in both lead flows and temperatures when we ran the solenoid pressure at 4-5 psig versus less than 2 psig with a fully opened EVVT. Therefore, we propose to a) keep the solenoid fully opened at all times, with power required for its closing, and b) incorporate a back-pressure regulator in the cooldown line back to the compressor suction to keep the solenoid pressure at 4-5 psig. This is overall a good solution for pressure safety; instead of throwing helium to atmosphere in case of a pressure surge, we would first vent it to suction. This is an easy fix; all in the frig room, no welding.
2. We are concerned with "no-flow" conditions for the power leads. This condition can exist should the compressor suction pressure become high, e.g., if the compressor trips off. We would like to incorporate the following hardwired logic (see the sketch after this list):
- IF the helium level MCLLHE is below 40%, the magnet PS is turned OFF;
- IF [the differential pressure (MCPTHS - MCPTHE) is below 0.5 psig, OR the compressor is OFF] AND the magnet PS is ON, THEN the vent solenoid OPENs to blow downstream of the power leads to atmosphere.
This is a medium-difficulty fix in the frig room; a check valve is required; no welding.
3. We found that the nitrogen level in the solenoid is kept so well at helium temperature that it requires almost no supply flow. Therefore, the supply valve is kept closed. This is too bad, as this is the only path for the nitrogen from the LN2 dewar to the magnet via the transfer line. With the nitrogen flow OFF, the transfer line and the valve box, which are designed without shield flows, keep warming up. Therefore, we would like to incorporate a turn-around U-tube in the hall valve box to bypass the magnet and turn the nitrogen flow around, via the existing available control valve, back to the refrigerator through one of the unused helium lines in the transfer line. That small nitrogen flow will then be vented outside. The control loop will keep the shield temperature at the predefined temperature level.
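As a minimal sketch of the hardwired logic proposed in item 2, the fragment below encodes one plausible reading of the conditions: low helium level forces the power supply off, and the vent solenoid opens whenever lead flow is at risk while the magnet is still powered. The signal names follow the ACNET tags quoted above; the function names and structure are illustrative assumptions, not the implemented finite state machine.
```python
# Illustrative encoding of the proposed "no-flow" protection (item 2).
# Thresholds are the values quoted in the text; everything else is a
# hypothetical sketch, not the implemented logic.

LEVEL_MIN_PCT = 40.0   # MCLLHE threshold, %
DP_MIN_PSI = 0.5       # (MCPTHS - MCPTHE) threshold, psig

def power_supply_permit(mcllhe_pct: float) -> bool:
    """Magnet PS is forced OFF whenever the helium level drops below 40%."""
    return mcllhe_pct >= LEVEL_MIN_PCT

def vent_solenoid_open(mcpths_psi: float, mcpthe_psi: float,
                       compressor_on: bool, magnet_ps_on: bool) -> bool:
    """Open the vent downstream of the power leads to atmosphere when the
    differential pressure is too low OR the compressor is off, while the
    magnet power supply is still on."""
    no_flow_risk = (mcpths_psi - mcpthe_psi) < DP_MIN_PSI or not compressor_on
    return magnet_ps_on and no_flow_risk

# Example: compressor trip with the magnet powered -> vent opens.
print(vent_solenoid_open(4.8, 4.6, compressor_on=False, magnet_ps_on=True))  # True
```
The point of hardwiring such logic, rather than leaving it in software alone, is that it keeps the power leads protected even if the controls front end is unavailable.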
Investigating the Influence of Financial Literacy, Socioeconomic and Demographic Factors on Saving Behaviours of Nigerians
Financial product awareness is an efficient remedy for poverty reduction, as against lack of money. However, a holistic literature on financial product awareness in the six geopolitical zones of Nigeria is scarce. Using data from a quarterly survey of households in Nigeria, this paper investigated the influence of financial literacy, socioeconomic and demographic factors on the saving behaviours of Nigerians aged 15 to 70. With a pool of methods, our findings supported the observations from similar economies, but revealed some differences as well. We observed that financial literacy and proximity to financial products and services, among others, are the most significant determinants of the savings behaviours of Nigerians. It is fair to say that financial awareness and the factors that influence it are necessary for the formulation of strategies to increase the inclusion of more members of the society into the formal financial stream. © 2022 by the authors. Licensee SSBFNET, Istanbul, Turkey. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Introduction
In this paper, financial inclusion means individual participation in a formal financial platform. It is the ownership of an account with a formal financial institution in Nigeria. There is ample evidence showing that increased participation of a country's citizens in financial platforms encourages economic growth, employment, societal welfare and a sound financial system. The literature also argues that widespread financial inclusion reduces prices and income inequality in an economy (Ray & Prabu, 2013; Swamy, 2014; Sahay et al., 2015; Mehrotra & Yetman, 2015; Lenka & Bairwa, 2016; Musau et al., 2018; Vo et al., 2019; Yin et al., 2019; Cavoli et al., 2020 and Orazi et al., 2020). Financial inclusion is an efficient remedy for poverty reduction as against lack of money (Sodipo et al., 2021). The closure of the Nigerian economy in 2020-2021 left household members no choice but to access financial resources through the online platforms of formal financial institutions. The outcome of the lockdown shows an increase in people's access to financial resources from the comfort of their homes (see figure 1). The volume of mobile payments was 8,237.34, 16,933.95 and 35,497.11 in December 2019, 2020 and 2021 respectively; this represents an increase of 51.35% of 2020 over 2019 and 76.99% of 2021 over 2019. In addition, it shows the level at which people have embraced financial innovation to foster access to financial products and services. The impetus for this growth could be the level of financial literacy among Nigerians regarding financial innovation. Theoretically, an increased level of financial literacy increases the willingness of the people to access financial products and services in the formal financial sector (Ozili, 2020). Financial literacy is the understanding of the knowledge, mastery of the required skills and attitudes, and creation of the necessary awareness needed to make sound financial decisions that spur the welfare of society (Organisation for Economic Co-operation and Development (OECD), 2013). The importance of financial literacy is paramount to growth and development through the accumulation of savings by households and investment by firms.
per cent of adults on the continent are financially included in the formal financial sector. The report shows that Nigeria's financial inclusion rate was 30 per cent, 1 per cent above Ghana and 24 per cent below South Africa. Also, the report by Global Microscope (2020) shows that Nigeria had dropped far below South Africa and Ghana. The report also shows that the direct bank transfer (Targeted Credit Facility) initiated by the central government of Nigeria in 2019 to reduce poverty and foster financial inclusion was criticised for requiring proof of capital and for targeting only people with previously verified bank accounts with a balance of less than NGN5,000, which further hindered the country's rate of financial inclusion. Being overtaken by neighbouring economies, together with this criticism, motivated this study of Nigeria. Using data from a quarterly survey of households in Nigeria and a questionnaire consisting of 21 sections, this paper investigated the influence of financial literacy, socioeconomic and demographic factors on Nigerians' savings behaviours. The paper used the outcome of the survey on the savings attitude of household members as a measure of financial inclusion. To make a broader analysis, we examined respondents' ownership of financial products and services with banking and non-banking financial institutions. We considered whether the respondents owned any banking financial products with the Deposit Money Banks (DMBs) and/or Microfinance Banks; owned accounts with non-banking financial institutions, namely a mortgage bank, a pension administrator and/or the capital market; and owned financial service products (debit/credit cards, mobile banking applications, internet banking applications and wallets). We also formulated scores for financial literacy and for financial products accessibility, or proximity to financial service points. We aggregated financial literacy as the combination of financial education and financial knowledge, because both scores tested the understanding of the respondents regarding financial products, services and decisions. The questions on financial education tested the respondents' familiarity with financial products and services. The questions were asked in three categories: (i) familiar with the product and able to explain the term; (ii) familiar with the product but unable to offer any explanation about it; and (iii) not familiar with the product. The spirit is to assess in depth the respondents' understanding of the financial products. The questionnaire sampled fifteen financial products in the survey of financial education. We equally tested the knowledge of the respondents regarding financial decision-making; the questions examined the respondents' skill or knowledge of money market interest rates, inflation, risk, bond prices and mortgages. On accessibility, the questionnaire sampled six payment service centres. The remainder of the paper is organised as follows: section two reviews issues in the literature, section three offers the methodology used for the analysis, section four discusses the findings together with the analysis of the data, and section five draws the conclusions of the work by referring to the findings of the study.
Literature Review
There are several sets of principles outlined to address financial inclusion.
The theories of beneficiaries of financial inclusion are the public good theory, dissatisfaction theory, vulnerable group theory and system theory of financial inclusion; the delivery theories of financial inclusion are the community echelon, public services, special agent, financial literacy and collaborative theories; and the theories of financial delivery funding are the public money, private money and intervention money theories. These theories in one way or another illustrate the 'who and where' questions of financial inclusion funding, delivery and beneficiaries. For instance, Bhandari (2018) has argued that the poor should be at the forefront of financial inclusion beneficiaries (vulnerable group theory), while Swamy (2014), Mehrotra and Yetman (2015), Kim et al. (2018) and Ozili (2018) resolved that the entire economy partakes in the benefits of financial inclusion (public good theory). Demirguc-Kunt et al. (2013b), Swamy (2014) and Ghosh and Vinod (2017) affirmed that women, young people and the aged should be the target beneficiaries of financial inclusion, which is in line with the dissatisfaction theory. On the delivery theories of financial inclusion, there are debates pointing to who should handle the delivery of, and enlightenment about, financial products and services; for example, Aggarwal and Klapper (2013), Staschen and Nelson (2013) and Chibba (2009) insisted that financial products and services delivery should be a public cost (public service and community echelon theories). Gabor and Brooks (2017) and Ozili (2018) posited that such product delivery should be a special agent's responsibility. On the other hand, Arun and Kamath (2015) and Pearce (2011) believed that financial products delivery should be a collaborative project between the private and public sectors. On the funding of financial products, Marshall (2004) believes that the cost of delivery should rest on the government through taxpayers' money. Mohiuddin (2015) insists that the financial products delivery fund should be privately funded, as the private sector has contributed widely to the growth and development of the economy. Dashi et al. (2013) and Cobb et al. (2016) believed that financial products programmes should be special intervention projects between the public and private sectors, in order to reduce the burden and hindrance of one sector taking sole responsibility for creating financial product awareness and delivery. These ideas have been echoed in various empirical literature. There is empirical evidence showing that financial literacy is the main factor driving financial inclusion, or people's savings behaviours, around the globe. In a cross-country analysis, Morgan and Long (2020) show that financial literacy significantly influenced the savings behaviours of the people of Asian communities. In addition, they show that the influence of financial literacy on savings behaviours varies with the different measures of financial inclusion used, and that those with sound financial literacy scores save more, in both the formal and informal financial sectors, than persons with low financial literacy scores, irrespective of the individuals' level of income and education. In single-country cases, Kandari, Bahuguna and Salgotra (2021), Akileng, Lawino and Nzibonera (2018), Abel, Mutandwa and Roux (2018), Kodongo (2018), and Mhlanga and Dunga (2020) demonstrated that financial literacy is the major factor that influences financial decision-making.
Adetunji and David-West (2019), using 22,000 respondents, argued that financial literacy and income significantly influence the saving behaviours of Nigerians in formal and informal financial institutions. In addition, they demonstrated that age group significantly determines financial inclusion, with older people tending to save more, or to be more financially included, than younger persons. Akileng, Lawino and Nzibonera (2018), still on financial literacy, show that financial innovation is a major driver of financial inclusion among households in Uganda. There are also scholars whose arguments support the role of access to financial products in financial inclusion: Ndanshau and Njau (2021), Mhlanga and Dunga (2020), Nwidobie (2019) and Abel, Mutandwa and Roux (2018). These studies show that the main determinant of financial inclusion is greater proximity to financial service points (financial intermediaries). Looking at the role of theory, Maity and Sahu (2020) investigate the efficiency of public sector banks in financial inclusion. Using a nonparametric method of efficiency measurement (Decision-Making Units), they observed that public sector banks performed differently on the overall average efficiency of financial inclusion. Sound financial decision-making has been found to correlate with socioeconomic and demographic factors such as education, age, wealth/income, marital status, gender, location/environment, occupation, and social relationships. In support of the relationship between socioeconomic and demographic factors and financial inclusion, some scholars (Lusardi & Mitchell, 2011; Atkinson, 2012; Allen, Demirguc-Kunt, Klapper, & Peria, 2016; Asuming, Osei-Agyei, & Mohammed, 2018; Esquivias, Sethi, Ramandha, & Jayanti, 2020 and Ndanshau & Njau, 2021) put forward that low-income earners are more prone to poor financial decision-making than high-income earners. Esquivias et al. (2020) observed a significant gap between the financial inclusion dynamics of Southeast Asian countries (Vietnam, Indonesia and the Philippines). They argued that the drivers of this gap are gender and age disparity, income and educational disparity, social location, and job status. On the side of gender disparity, they observed that females have a higher probability of being financially included than males. Females are more likely to hold savings accounts and to participate in informal finance institutions because they perceive fewer barriers to formal banking. Their conclusion differs from the observations of Kandari, Bahuguna and Salgotra (2021), Kim, Yu and Hassan (2020) and Adetunji and David-West (2019), whose studies argued that being female is associated with higher financial exclusion than being male. However, on literacy levels, Agarwal et al. (2009), Hastings and Mitchell (2011), Atkinson and Messy (2012), OECD (2013) and Scheresberg (2013) mentioned that men are more financially literate than their women counterparts, and that men's financial literacy increases faster than that of women. Others (Amadeu, 2009; Lusardi and Mitchell, 2011; Asuming, Osei-Agyei, & Mohammed, 2018) have identified that higher education increases the chance of being financially included compared with a low educational background. Dew (2008) and Brown and Garf (2013) stated that married individuals have higher financial literacy than singles. On average, those aged between 30 and 40 years are associated with higher financial literacy levels than younger and elderly individuals (see Agarwal et al., 2009; Lusardi and Mitchell, 2011; Atkinson and Messy, 2012; OECD, 2013 and Scheresberg, 2013).
Kim and Garman (2004) stressed that individuals with longer labour experience have significantly higher financial literacy due to familiarity with economic and financial subjects, while the unskilled and unemployed have lower financial literacy, which affects their saving behaviours.
Research and Methodology
This paper examined the factors that influence Nigerians' saving behaviours.
Model Specification
Following the financial literacy and collaborative theories, the determinants of financial inclusion (or savings behaviour) in Nigeria are modelled as:

Pr(Y = 1 | X) = F(β0 + β1X1 + β2X2 + … + βkXk)    (1)

Equation (1) defines the conditional probability of Y = 1 (i.e., Y occurring) given X, where β0, β1, β2, …, βk are the parameters to be estimated and X1, X2, …, Xk are the explanatory variables. For a more compact representation, write z = β0 + β1X1 + … + βkXk. In the logit form, the model can be expressed as:

Λ(z) = 1 / (1 + e^(−z))    (2)

The equation above is the cumulative (logistic) distribution function (cdf), and it ranges between zero and one for all values of z. The non-linearity of Λ(z) may have ruled out the use of the Ordinary Least Squares (OLS) estimator, but the sample size is large and dismisses any possibility of heteroscedasticity; hence, the models are estimated with both the OLS and the Maximum Likelihood estimators. The odds ratio is the probability that Y = 1 relative to the probability that Y = 0; taking its logarithm gives the linear log-odds form:

ln[Pr(Y = 1 | X) / (1 − Pr(Y = 1 | X))] = β0 + β1X1 + … + βkXk    (5)

Modern statistical packages report the coefficients of equation (5).
Mathematical Illustration of the Coefficients Determination
The paper is interested in how the saving behaviour of Nigerians is determined, such that the response variable is binary (Yes/No). The models are expressed as follows: Equation (6) models whether Nigerians own any of savings, current, loan and domiciliary accounts with either Deposit Money Banks (DMBs) or Microfinance banks. Equation (7) models whether Nigerians own any of mortgage products, insurance products, non-interest products, pension products, capital market products or cryptocurrency with a non-banking financial institution. Equation (8) models the outcome of the survey on the saving attitude of household members. Respondents are assigned one if they own any of the listed bank accounts, non-banking financial products or financial service products, and zero otherwise. The dependent variables represent whether an individual owned any banking financial product, non-banking financial product or financial service product. X denotes the socioeconomic and demographic factor variables. It includes the respondent's age, income range, marital status and gender. Others are the level of formal education, location type, type of employment and geopolitical zone. We generated a series of dummies for the categorical variables in the models. Pre-school and primary school education were regrouped into primary education; junior secondary and secondary schools were regrouped into secondary education; and post-primary specialized training or certificates and post-secondary specialized training or certificates were regrouped into specialized education. The other members of the level of formal education group are in figure 1, and primary education is the reference group. The income groups are in figure 1, and respondents with income below 30,000 naira are used as the reference group. Other reference groups are male for gender, married for marital status, urban for location type, North Central for geopolitical zone and government workers for employment type. The remaining regressors are the financial literacy score and the financial products accessibility score. The financial literacy score is the combination of the financial education score and the financial knowledge score (i.e., literacy = education + knowledge).
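A sketch of how the three survey scores described in the introduction (and detailed in the next paragraphs) can be computed is given below. The raw answer codes (2 = familiar and can explain / walking distance, 1 = familiar only / bike or vehicle, 0 = not familiar / none) are assumptions about how the survey answers might be coded, not documented values; the weights, maxima and literacy bands follow the text.
```python
# Sketch of the score construction. Raw answer codes (2/1/0) are assumed,
# not documented; weights and maxima follow the text (15 products -> max 15,
# 6 service points -> max 6, 5 knowledge questions -> max 5).

WEIGHTS = {2: 1.0, 1: 0.5, 0: 0.0}

def education_score(row, product_cols):
    # 1 if familiar and can explain, 0.5 if familiar only, 0 otherwise.
    return sum(WEIGHTS[row[c]] for c in product_cols)

def accessibility_score(row, centre_cols):
    # 1 if within walking distance, 0.5 if a bike/vehicle is needed, 0 if none.
    return sum(WEIGHTS[row[c]] for c in centre_cols)

def knowledge_score(row, answer_cols, correct_answers):
    # One point per correct answer on the five decision-making questions.
    return sum(int(row[c] == correct_answers[c]) for c in answer_cols)

def literacy_band(literacy_score):
    # literacy = education + knowledge, max 20; bands as defined in the text.
    bands = [(4.5, "very low"), (8.5, "low"), (12.5, "average"),
             (16.5, "high"), (20.0, "very high")]
    return next(label for cut, label in bands if literacy_score <= cut)
```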
The questions on financial education were asked to establish the respondents' familiarity with financial products. The questions were asked in three categories: (i) familiar with the product and able to explain what the term means; (ii) familiar with the product but unable to explain what it means; and (iii) not familiar with the product. The essence is to assess in depth the respondents' understanding of the meaning of the financial products. To obtain the financial education score we assigned 1 to those who are familiar with the product and can explain what the term means, 0.5 to those who are familiar with the product but cannot explain what it means, and zero (0) to those who are not familiar with the product. The study sampled fifteen financial products in the survey for the financial education score; thus, the highest score is 15 and the lowest is zero. We also generated a score for the access and/or availability of payment service points to household members. We computed the score by assigning 1 if a payment service centre is within walking distance, 0.5 if the respondent must take a bike or vehicle, and zero if there is none. The study sampled six payment service centres; thus, the highest score is six and the lowest is zero. We tested the knowledge of the respondents to identify their financial decision-making skills regarding money market interest rates, inflation, risk, bond prices and mortgages. The computation of the financial knowledge score was based on the number of correct answers; thus, each respondent could attain a maximum financial knowledge score of five. We address the issue of financial literacy with the financial education score and the financial knowledge score, because both scores tested the respondents' understanding of financial products. The study estimated equation (2) with the probit and the linear probability regressions. To ascertain reliable and unbiased estimates of the probit models, the assumptions below were tested. The assumptions include, but are not limited to:
i. The models are correctly specified (hat test): The test instruments are the hat statistic and the hat-square statistic. The models are correctly specified if the hat statistics are significant and the hat-square statistics are insignificant.
ii. The models fit well (goodness-of-fit test): The instruments are the Likelihood Ratio (LR), the Pseudo-R² and Hosmer and Lemeshow's (HL) goodness-of-fit test. The coefficients of the Likelihood Ratio (LR) and Pseudo-R² are default outputs from the regression models. The models fit well if the HL coefficients are statistically insignificant, the LR is significant and the Pseudo-R² is large enough.
Figures 2 and 3 and tables 1 and 2 show the demographic and socioeconomic characteristics of Nigerians. They show that the distribution of the questionnaires among genders was 50.7 per cent female and 49.3 per cent male. Almost half of the respondents are below 30 years of age. The smallest age group is 60 years and above. The distribution of marital status is unbalanced: 63.8 per cent of the respondents are either currently married, separated, divorced or widowed, while those who had never been married make up 36.2 per cent. About 46.8 per cent had completed either junior or senior secondary school education, and only 0.9 per cent had completed a postgraduate programme. Over half the respondents created jobs for themselves (self-employed).
Others work either for government institutions or privately owned organisations, while 15.5 and 9.9 per cent are students and unemployed, respectively. We also observed that 60 per cent of the respondents are from the northern part of Nigeria and 73 per cent are urban dwellers. Table 1 shows the cross-analysis of gender, social location and marital status within the geopolitical zones. We observed a balanced distribution of the questionnaires among genders within the zones. However, the distribution skewed to urban areas, with South-West and North-Central having the highest and lowest distributions, respectively. The table equally shows that more of the respondents were married within the zones; South-South had the highest share of unmarried or single persons and South-West the highest share of married persons. Table 2 shows the distribution of gender, location and marital status within income ranges, and figure 3 shows the distribution of income among the geopolitical zones. Source: Author's Computation. For the sake of the analysis in figure 4 and table 3, we define the financial literacy score as the summation of the financial education score and the financial knowledge score. The highest score is 20 and the lowest is zero. We divided the score of each respondent into five groups: very low (less than or equal to 4.5), low (between 5 and 8.5), average (between 9 and 12.5), high (between 13 and 16.5) and very high (between 17 and 20). Table 4 is the distribution of the measures of financial inclusion in the six geopolitical zones of Nigeria, and table 5 is the distribution of the measures of financial inclusion in the urban and rural areas. North-West is the region with the lowest share of persons (33.3%) holding a bank account with a banking financial institution, and South-South is the region with the highest, at 60.2%. North-West is also the region with the lowest share of accounts with non-banking financial institutions (4.7%), and South-West is the highest with 16.8%. North-West has the lowest use of financial service products (26.2%), and South-West has the highest share of users at 51.7%. In the urban areas, 50.8% of dwellers have accounts with either DMBs or Microfinance banks, while only 33.1% of rural dwellers have an account with a banking financial institution. These statistics show that the increase in the accessibility of financial products in the country is not all-encompassing. The models in table 6 were tested to see whether they satisfy the assumptions of the Qualitative Response Model (QRM) stated in section 3 of this paper. The summary results of the tests are in table 6. Source: Researcher Computation. Note: *, ** and *** indicate significance at 10%, 5% and 1% respectively. The hypotheses tested are in null form. The results in table 6 show that the hat statistic is statistically significant and the hat-square is not, which implies that the models are correctly specified. They also show that the LR statistic is significant and the HL chi-square statistic is insignificant, implying that the models fit well. Overall, the validity of the estimated models is supported by the diagnostic results. However, the McFadden pseudo-R² suggested a weaker goodness-of-fit of the models.
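A minimal sketch of the estimation and the Hosmer-Lemeshow check described above is shown below, using Python's statsmodels together with a manual HL implementation (statsmodels has no built-in HL test). The column names and file name are hypothetical stand-ins for the survey variables, not the authors' actual dataset.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical survey extract; column names are illustrative stand-ins.
df = pd.read_csv("household_survey.csv")
y = df["owns_bank_account"].to_numpy()      # binary outcome, as in equation (6)
X = sm.add_constant(df[["literacy_score", "access_score", "age",
                        "female", "rural"]])

lpm = sm.OLS(y, X).fit(cov_type="HC1")      # linear probability model (OLS)
logit = sm.Logit(y, X).fit()                # swap in sm.Probit for the probit
print(np.exp(logit.params))                 # odds ratios, equation (5)

def hosmer_lemeshow(y_true, p_hat, groups=10):
    """Manual HL test: deciles of predicted risk, chi-square on g-2 df."""
    order = np.argsort(p_hat)
    stat = 0.0
    for idx in np.array_split(order, groups):
        n = len(idx)
        observed = y_true[idx].sum()
        expected = p_hat[idx].sum()
        p_bar = expected / n
        stat += (observed - expected) ** 2 / (n * p_bar * (1 - p_bar) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

p = np.asarray(logit.predict(X))
hl_stat, hl_p = hosmer_lemeshow(y, p)
print(hl_stat, hl_p)   # a large (insignificant) p-value indicates a good fit
```
As in the paper's diagnostics, an insignificant HL statistic combined with a significant LR statistic is read as evidence of adequate model fit, even when the pseudo-R² is modest.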
On this point, Frost (2013) noted that models predicting human behaviour typically have R-squared values lower than 50%, because human behaviour is simply harder to predict than, say, physical processes. The summary of the estimated coefficients on the determinants of the saving behaviour of Nigerians is in table 7. The table has three models, model_1, model_2 and model_3, which model the decisions of Nigerians to own any banking financial products, non-banking financial products and financial service products, respectively. The equation for the linear probability regression is labelled OLS and the equation for the discrete regression is labelled Prob. The results show that financial knowledge (the ability to analyse money market interest rates, inflation, risk, bond prices and mortgage instruments) spurs Nigerians to own an account with banking and non-banking financial institutions, as well as financial service products. This implies that an increase in financial knowledge promotes a positive saving attitude among Nigerians. A one per cent increase in financial knowledge increases the likelihood of financial inclusion by 4.5 to 6.6 for banking financial institution products, by 1.3 for non-banking financial institution products, and by 1.3 to 2.8 for having any of debit/credit cards, mobile banking applications, internet banking applications and wallets. Thus, having sound financial knowledge increases the chance of having all three financial inclusion products sampled. The financial education score was positive and statistically significant in driving financial inclusion in Nigeria. The higher the familiarity of the respondents with financial products, the lower their chances of financial exclusion. The results show that a percentage increase in respondents' familiarity with financial products will increase their likelihood of owning an account with a banking financial institution by 4.1 to 5.5, with a non-banking financial institution by 1.7 to 2.2, and of owning financial service products by 3.4 to 4.2. The empirical findings are statistically significant and corroborate the findings of Adetunji and David-West (2019) in Nigeria, Morgan and Long (2020) in Asian countries, Kandari, Bahuguna and Salgotra (2021) in India, Akileng, Lawino and Nzibonera (2018) in Uganda, Kodongo (2018) in Kenya, and Abel, Mutandwa and Roux (2018) and Mhlanga and Dunga (2020) in Zimbabwe. This shows that financial literacy (financial knowledge and education) is a major factor driving financial inclusion around the globe. Although we observed similarities with other empirical literature, our findings advance this literature by using at least three indicators of financial inclusion, as against using only banking financial institution instruments, and the breakdown of financial literacy into two components gives a better understanding of which variable has a stronger impact on financial inclusion. This study also breaks new ground by looking at respondents' willingness to hold any financial service products. Observations: 19,158 in each of the six models. Source: Researcher Computation. Note: *, ** and *** indicate significance at 10%, 5% and 1% respectively. With the increase in mobile banking, we included the accessibility of respondents to financial service points in the models. The results show that proximity to financial service points accelerates the ownership of an account with a banking financial institution in Nigeria.
We observed that the closer the respondents are to a payment centre, the higher the possibility of being financially included and of using financial service products, especially debit cards. In model 1, as Nigerians get a percentage closer to points of sale (PoS), ATMs, bank branches, etc., the likelihood of opening a bank account with either DMBs or Microfinance Banks increases by 1.9 to 5.7, and the likelihood of using any banking service products increases by 6.5 to 8.8. However, we observed that proximity to financial service points is negatively related to owning an account with a non-banking financial institution in Nigeria. Our results are in line with Ndanshau and Njau (2021), Mhlanga and Dunga (2020), Nwidobie (2019) and Abel, Mutandwa and Roux (2018), whose studies show that greater proximity to service points, or financial intermediaries, increases the chances of financial inclusion. Demographic characteristics of the respondents also have implications for individual savings behaviours. We observed that age positively influences financial inclusion: the older individuals become, the more likely they are to own an account, and the likelihood is most favourable for owning non-banking financial products (mortgage products, insurance products, non-interest products, pension products and capital market products). Being female discourages financial inclusion, or saving in the formal financial institutions of Nigeria, with a higher probability for non-banking products. Being single also discourages financial inclusion and is associated with a poor saving attitude towards formal financial institutions. Having no form of education discourages the saving attitudes of Nigerians; the probability is greater for the ability to use financial service products. We also observed that living in rural areas and living in the northern part of Nigeria reduce financial inclusion, or discourage saving in formal financial institutions. Other factors that reduce saving behaviour or financial inclusion are not working, certain areas of employment, being unemployed, being an unpaid family worker and not earning an income.
Conclusion
Financial inclusion will remain a hot-button issue, as it is a catalyst for economic growth and sustainable development, more so for a nation like Nigeria. Financial inclusion has been identified as an efficient remedy for the reduction of poverty, and knowing the factors that affect or influence it is critical. Several factors have been identified as key factors that influence or shape financial inclusion. Of the various factors identified, financial literacy tops the list. The financial literacy of the population, access or proximity to financial service points, gender, location, education, and employment status are key factors that influence how financially included members of a society turn out to be. It is fair to say that financial inclusion and the factors that influence it are necessary considerations for the formulation of a strategy to increase the inclusion of more members of the society. It is hoped that the findings of this paper will go a long way in helping to shape policies that will deepen the inclusion of a larger share of the population. At the end of the day, the financially excluded members of the population have a detrimental effect on the growth of nations, and as such it is imperative that more work goes into understanding the dynamics at play that hinder or boost financial inclusion, as ultimately this will greatly improve the economy.
2022-04-06T15:27:44.729Z
2022-04-04T00:00:00.000
{ "year": 2022, "sha1": "41c0c91acf3b94752e59fb1e6410fd9be6812bf5", "oa_license": "CCBY", "oa_url": "http://www.ssbfnet.com/ojs/index.php/ijfbs/article/download/1697/1234", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c3c3550e9e40077c0aa8ffa275052367d89a20b5", "s2fieldsofstudy": [ "Economics", "Sociology" ], "extfieldsofstudy": [] }
245373196
pes2o/s2orc
v3-fos-license
Slipware from Tykocin Castle (Poland) from the 16th–18th Century

The main goal of this article is to analyse post-medieval slipware found during archaeological excavations in Tykocin Castle and to describe its distinguishing features: decorative characteristics and forms. Further considerations are aimed at reconstructing the functions of the Tykocin slipware vessels in the castle household throughout the 16th to 18th centuries and attempting to determine their provenance. The analysis is preceded by a list of terminological problems pertaining to this pottery group in the Polish literature as well as elementary information on its production centres in Poland against the European background.

INTRODUCTION

Pottery studies constitute an integral and constant aspect of archaeological research, since pottery is one of the most elementary archaeological sources, usually well-preserved and often found in large quantities, shedding light on everyday life in the past. Generally this holds true for the post-medieval period and the developments in pottery-making on the European continent that occurred throughout the 16th-18th centuries, which were predominantly manifested in the introduction of innovations unknown in the late Middle Ages: an extended assortment of forms and the differentiation of types of the produced vessels. This process mostly entailed the emergence and the gradual spread of luxury ceramics (such as majolica, faience, or porcelain) and more ordinary items (including glazed-ware, whiteware, or slipware) that complemented the "traditional" pottery (redware and greyware). These novelties also included personal and specialist utensils (such as plates, cups, kettles, or vases). This diversity is also reflected in the assemblage obtained from the castle site in Tykocin, situated in Podlachia, north-eastern Poland. It comprises finds collected during archaeological excavations conducted there in the years 1961-1963 and 1999-2007 (for more information on the scope and results of these works, see Bis and Bis 2006;2015a). Pottery clearly dominates the rich collection of artefacts discovered at that site, amounting to 65,871 pieces or 72% of the total number of artefacts. The majority are vessels of various types - 50,469 fragments, i.e., 76.6% of the ceramics and 55% of the whole assemblage from the site (see Bis 2015: 96-97, Table 2). Only a few of them - 137 fragments, merely 0.2% of the total pottery - comprise sherds of the slipware discussed in this article (75 vessels in total). They are made of ferrous clays (redware), fired in an oxidising atmosphere, and decorated with painted ornaments of different complexity and techniques. For this purpose a slip (diluted clay) was used, creating an underlay for ornamentation and serving as a paint for the patterns. The decorated surface was covered with lead glaze. These vessels were used by the residents of Tykocin Castle in the modern era and, once broken, were deposited at the site in cultural layers dated to the period between the second half of the 16th and the second half of the 18th centuries. The aim of this article is to draw attention to this small but distinctive pottery group and contribute to the Polish archaeological discourse on its production in what is now Poland, as well as to the related terminological questions, in the context of the European background.
The main part of the text offers a characterisation of the finds from Tykocin Castle in regard to their morphology, function, and decoration, to the extent allowed by their state of preservation. These observations then serve as a basis for a discussion on the functions of these vessels in Tykocin Castle throughout the 16th-18th centuries. An attempt has also been made to determine their provenance.

TERMINOLOGY AND RECENT FINDS

Foreign literature uses several related terms connected to the pottery discussed here. In the English nomenclature, they are usually referred to as slip-decorated lead-glazed earthenware, redware with slip decoration, or, simply, as slipware, whereas the ornamentation itself is called slip trailing, slip-trailed decoration, or slipped decoration. German works tend to use Malhornware or bleiglasierte Irdenware (cf., Stephan 1987;Gaimster 2006). These terms are commonly accepted and used by foreign scholars in regard to this type of pottery. The above state of affairs, just as the whole development in research on these matters in Europe, is largely a consequence of the seminal study by Hans-Georg Stephan, published 34 years ago and monumental in its chronological-geographical scope (Stephan 1987; this work also lists older literature). Stephan's legacy has been continued (for more on the studies on slipware in Europe, as well as general remarks on subsequent publications and discoveries, see e.g., Stephan 1991;Gaimster 1991;2009: 534-535). In the Polish literature, various terms have been proposed to generally identify this type of vessels. These attempts took place even though the prefixes used, such as "pseudo-" or "semi-", are value-laden and depreciate the pottery called this way by comparing it to other types of presumably better quality - majolica or faïence - or suggesting that they are their imitations or forgeries. On the other hand, "lead-glazed earthenware with under glaze decoration" may just as well refer to vessels decorated this way but of a different type - the so-called Pomeranian faïence. It seems that the most adequate term in the Polish language would be an expression reflecting two primary features of these artefacts, i.e., the raw material and ornamentation - Polish: ceramika ceglasta angobowana szkliwiona [lead-glazed redware with slip decoration]. In my opinion, it is worth considering using the simplest, common English term - slipware. Both of these definitions (slipware and ceramika ceglasta angobowana szkliwiona) are broad enough to include slip-decorated vessels, where the slip differs in terms of methods of application and consistency. These are earthenware vessels with slip of thicker consistencies, which could be trailed, poured, or squeezed, or of thinner consistencies, which could be applied with a brush, a rag, or by hand. A design could also be cut through the overlying slip, exposing the contrasting colour of the clay body beneath (sgraffito decoration;see MPRG 1998: chapter 12.5-12.6;Orton and Hughes 2013: 86-88). I use this broad meaning as the definition of the group of vessels discussed. The unfading scholarly interest in this pottery is a European trend (for newer foreign publications, see Amato et al., 2009;Funke and Leiber 2012;Gawronski 2012;Kröll 2012;Witte 2014;Demuth 2015;Bikić 2017;Gajić-Kvaščeva et al., 2018;Blažková 2019;Giorgio 2019;Heege 2019a;2019b;Matějková 2019;Ose 2019: 72-74, 117-119; these works list further literature). On the one hand, it is related to the ongoing development of historical archaeology and studies of post-medieval pottery, including the expanding knowledge of slipware.
On the other, it reflects the considerable frequency of these artefacts and their spread across the continent throughout the 16th-18th centuries, as well as European developments in decorated pottery production inspired by the Renaissance. As indicated by the above-mentioned examples, the last decade has seen an increase in the known source base related to this pottery in Poland as well as in publications referring to it. Such vessels have been found in various regions at sites containing post-medieval archaeological material. The dating of the Polish finds falls into the period between the second half of the 16th century and the 18th century. They match the European manufacturing standards of that time, both in terms of ceramic forms and their decoration techniques.

PRODUCTION CENTRES

Some of the most important regions in Europe where slipware was manufactured were located in today's Germany, and their best-known products are Weserware and Werraware. The production of these vessels in larger quantities extended over several decades (1580-1620/1630 and 1568-1620, respectively) and ceased when the Thirty Years' War broke out. The basic assortment consisted of tableware supplemented with kitchenware. Weserware was manufactured between the rivers Weser and Leine, in Altenhagen, Brüninghausen, Dörpe, Höxter, and Völksen. The primary distinguishing features of these vessels were simple colour ornamentation, mostly geometric and floral motifs (including zigzag lines) on a bright overlay (thanks to the use of white slip), and flat bases. Werraware was manufactured in Hesse, in the following production centres: Eschwege, Grossalmerode, Hannoversch Münden, Heiligenstadt, Treffurt, Wanfried, and Witzenhausen. What distinguished these vessels was that many specimens had painted production dates and ornaments made predominantly in the sgraffito technique, usually bright figural motifs against a dark underlay (e.g., Stephan 1987: 85-110, 274-280;Gaimster 1988;Bartels 1999;Demuth 2001). In the territory of modern-day Poland, the considerable amount of known slipware finds contrasts with the scarcity of identified and published post-medieval pottery-making centres, or at least potential production sites. According to my findings so far, there were at least about a dozen such sites, located in different parts of Poland (due to the limited size of this paper, they are only listed below). In this regard, the archaeological perspective remains clearly distinct from the findings made by ethnographers (cf., Fryś-Pietraszkowa 1970: 68-69, il. 259; this author included 59 półmajolika production centres that operated in the post-medieval period and are inactive nowadays). The production centre that initiated Polish studies of this type of pottery, and which remains the best-researched so far, is situated in Miechocin, nowadays a suburb of Tarnobrzeg in Lesser Poland (current Podkarpackie Voivodeship;Szarek-Waszkowska 1967;Szetela 1969a;Szetela 1969b;Szetela-Zauchowa 1994; see also the leaflet by Handerek 2006). Excavation work at this site uncovered the remains of 12 workshops, along with finished products and post-production waste. These operated in different periods between the late 16th century and the end of the 18th century, and the heyday of that production centre is dated to the first half of the 17th century. The goods manufactured there were divided into several groups according to their chronology and ornamentation.
Other potential production sites in the region were at Rzeszów, Łańcut, and Jarosław, based on slipware finds and historical records concerning pottery workshops functioning there in the 16th and 17th centuries. However, no material remains of workshops manufacturing slipware have been discovered so far. The collected potsherds represented an assortment similar to that from Miechocin, except for slight differences in decorative motifs and colour schemes (cf., Kotula 1953;1956;Supryn 1975;Czopek and Lubelczyk 1993: 25-27). One more place where slipware was manufactured in the 16th-18th centuries may have been Lublin (currently in the Lublin Voivodeship) or its environs. The abundance of finds and their significant representation in pottery assemblages from different local sites (e.g., Niedźwiadek 2019: 249), especially in manor houses (often above 50%), seem to support the above assumption. Despite such a high frequency, no slipware-manufacturing workshops have been discovered so far (personal communication with Rafał Niedźwiadek, July 19, 2021). Nevertheless, excavations have provided undisputed physical evidence for slipware production in Cracow (currently in the Lesser Poland Voivodeship) in the second half of the 16th century (Dryja 2014: 131-132), as the remains of a workshop were unearthed in the suburb of Garbary, at 11 Loretańska Street. These vessels were fired in kiln III, along with a wide repertoire of other products (stove tiles as well as ceramic details and building materials). Another workshop that probably manufactured redware with slip-trailed decoration alongside stove tiles was discovered in Greater Poland, in Garczary, a suburb of Śmigiel (Wyrwińska and Wyrwiński 2005). At this site, archaeologists recorded the remains of two kilns that operated in the second half of the 17th century. Glazed redware vessels with slip decoration were deposited mostly around the kilns and in their backfills and were interpreted as unused specimens or production waste (Wyrwińska and Wyrwiński 2005: 304-305, 307, Fig. 8). It is sometimes suggested in publications that a workshop or a complex of workshops manufacturing similar post-medieval pottery existed also on Wzgórze św. Wojciecha (St. Adalbert Hill) in Poznań (Greater Poland Voivodeship; e.g., Poklewska-Koziełł 2013: 117;Paterczyk 2018: 87). The workshop that was recorded at that site produced panel tiles, among other things, which were dated to about the middle of the 16th century, as well as some unspecified vessels (Łaszkiewicz 1993). Until these finds are fully published, however, this information remains unconfirmed. The slipware excavated in Brzeg, Silesia (current Opole Voivodeship), at a site located at 10-12 Dzierżonia Street, has been considered to be of local origin. These finds were semi-majolica plates with redware and cream-white bodies, dated to the late 16th and 17th centuries, and constituted only a small percentage of the ceramic finds. The operation of a dynamic post-medieval pottery production centre in the town is attested by other archaeological sources and written records (Rodak 2017: 149-166). Vessels of this type were also manufactured in Mazovia, in Warsaw, in one of the two pottery kilns (the upper one, marked as no. 1) located within the former moat of the Old Town. This facility was interpreted as the workshop of Master Jan Rosołowicz, active in the late 17th century (Świechowska and Dukwicz 1955: 154-157, tab. 15; see also Meyza 2017b: 189-190).
Its production focused on redware vessels, stove tiles and clay tobacco pipe bowls (Polish: lulki). In a recent verification of the excavation results, the dating of the finds was changed to the first half of the 18th century (Meyza 2017b: 196). Two more slipware production sites were found in the Zachodniopomorskie Voivodeship. Such manufacturing activity in Myślibórz (Soldin) is evidenced by a pottery kiln preserved with its entire load and numerous pits filled with potsherds and fragmented stove tiles (Kałagate and Kościukiewicz 2004;Szymczyk 2011;Majewski 2019: 208), located near the town walls, behind the Pyrzycka Gate. The workshop functioned between the late 16th and early 18th centuries. The vessels found there stand out through their rich decoration (including the use of the chattering technique) and diversity of forms. Remains of two presumed pottery workshops that operated after the late 16th century were discovered in Recz (Reetz/Neumark), in the housing blocks adjoining the town walls. Admittedly, no traces of manufacturing facilities have been found there, but the local production is evidenced by pottery wasters - potsherds and fragmented stove tiles, as well as unfinished and defective products (Majewski 2010;2016: 81-84;2019: 208-209). Based on the above-mentioned examples, we may assume that slipware characterised by average quality and schematic decorative motifs may have been manufactured in many other pottery production centres and workshops as part of a wider range of pottery, along with plain earthenware.

MATERIALS AND METHODS

The vessels discussed in this study were discovered in Tykocin Castle. This is a fortified structure situated opposite the town of Tykocin, on an elevation in the flood plain of the Narew River. It existed in its elementary form - a quadrangular brick-built building - between the third quarter of the 16th century and the second half of the 18th century. The castle, together with bastion-type fortifications, was built on the orders of King Sigismund II Augustus. Probably before 1630, when Krzysztof Wiesiołowski served as the castle's starost, the building and its interiors were modernised, with the surrounding fortifications transformed into a large bastion-type stronghold. A siege in 1657, during the Polish-Swedish war, brought severe damage to two of the four wings of the castle. However, the remaining part of the complex was still in use, at least for the next several decades. The castle belonged to the king until 1661 and afterwards was taken over by private owners: Hetman Stephan Czarniecki, followed by the Branicki family of the "Gryf" coat of arms (to 1771). The fate of the building was sealed when a fire consumed it in 1734, along with the furnishings. After this disastrous event, no further attempts at reconstruction were made and finally (in the late 1760s) what remained of the castle was dismantled. Thus, its regular functioning was decisively terminated. Throughout the 19th and early 20th centuries, the relics of the building continued to deteriorate (cf., Bis and Bis 2006;2015b). The functions of the building changed throughout the two centuries of its existence depending on its proprietary situation and geo-political conditions. From the beginning, it was meant to act as an important defensive point in this part of the Polish-Lithuanian Commonwealth, a garrison, and an arsenal. In addition, it safeguarded the private belongings of the last of the Jagiellonian dynasty (until 1573).
Even at the beginning of the 18th century, it was still an important strategic point occupied by various armies during the military conflicts that had been taking place in the Podlachia region since the mid-17th century. It became the seat of the Tykocin starosts (after 1572) and burgraves (castle administrators), a workplace for its numerous staff (including craftsmen of various trades and local villagers tasked with different services), and a place for judicial activities. The complex was also important for the local economy, as it comprised a building for economic functions, a granary, a brewery, an inn, a coach house, stables, and ponds. The castle was visited by several Polish elective monarchs and their courts: Stephen Báthory, Sigismund III Vasa, Ladislaus IV Vasa, Augustus II the Strong, and Stanislaus I Leszczyński. It also served as a temporary residence for its owners - the Branicki family - and hosted important national events as well as local gatherings. In the course of the two centuries of its functioning (from the 1550s to the end of the 1760s), the number of people residing and eating at the castle fluctuated and is currently difficult to determine with precision. The same is true for their social or material standing and the related differences in consumption patterns and demands (cf., Bis forthcoming). The 1960s, when the castle was to be transformed into a permanent ruin, saw the first archaeological excavations and architectural studies at the site, conducted by Jerzy Kruppé. Towards the end of the 1990s, further archaeological exploration was undertaken and continued until 2007 to enable the planned reconstruction of the castle. The latter excavations were managed by Magdalena Bis and Wojciech Bis (for more details, e.g., about discoveries, stratigraphy, objects, and other finds, see Bis and Bis 2006;Bis 2015). This research encompassed the remains of the castle buildings - their interiors and the castle's direct vicinity - up to the line of the bastion fortifications. These works focused on the western, south-western, and north-western parts of the complex (cf., Bis 2015: 80, Fig. 30). All finds discussed in this paper were discovered during the above-mentioned excavations (in total, 137 slipware sherds coming from 75 vessels). Most finds (44 vessels, i.e., almost 59%) were recovered during the excavations carried out in the years 2001-2007, whereas the rest (31 specimens, i.e., 41% of the total) were recovered in the course of work conducted between 1961 and 1963 (Fig. 1). This is noteworthy because the earlier finds are predominantly loose, as they have been obtained from architectural trial trenches or regular excavations but with stratigraphic contexts that are currently difficult to reconstruct. Some of the vessels from the methodical excavations at the beginning of the 21st century were obtained from recently-mixed layers - transposed within the complex or forming the backfills of the earlier trenches. In these cases (41% of the slipware in total), it was assumed that they came from the time of the brick-built castle, i.e., the second half of the 16th to the second half of the 18th century. Therefore, they have no bearing on the chronological diversity of the assemblage. The majority of the remaining, well-dated artefacts come from the cultural layers formed in the 17th and 18th centuries. Their frequency in different parts of the complex was uneven. Slightly more vessels (16) were discovered near the northern bastille (trenches nos. VII, 20 and 21).
It was an area where broken items or elements of castle equipment (e.g., stove-tiles) were discarded during cleaning works in different periods, e.g., after the Swedish Deluge (1655-1660). The drawback of many of the discussed finds is their poor state of preservation - heavy fragmentation, the small size of the sherds, and surfaces damaged due to post-depositional factors. In effect, assessing their morphology and differences in particular elements is problematic, just as is inferring their decoration, including the types of depictions, their distribution and layout, and their connection to the vessel tectonics. Only three vessels were completely or largely reconstructed: one plate and two bowls. The majority of the finds are parts of rims (31), bases (19), bodies (24), and a single handle. They represent two main categories of vessel forms - closed wares and open wares (cf., MPRG 1998: chapter 1.3.3). They were: plates, bowls, pots, mugs, jugs, and a lid (Table 1). The open wares prevail in the assemblage - plates (29 specimens, 39% of all the vessels) and bowls (21 specimens, 28% of the total). Other items, less frequently noted, belong to the group of closed wares. The mugs and jugs survived in the worst condition; they were found as fragmented bodies, which complicates their stylistic-morphological characterisation. The surfaces of many of the discussed vessels (57%) are discoloured and damaged, with the covering glaze often chipped (47%). Hence, the current appearance of these artefacts differs significantly from the original. In order to analyse the above-described pottery assemblage, I followed the main guidelines of the British Medieval Pottery Research Group for standard procedures related to medieval pottery (cf., MPRG 2016), also used for post-medieval pottery assemblages (e.g., Gaimster 2006). The quantification method was based on the identification of the minimum number of vessels (MNV), which determines the smallest number of vessels that could have produced the sherds found in the ground. The method involves examining all the sherds objectively and placing similar ones, which may have originated from the same vessel, together (Gaimster 2006: 48). All examinations were made by the macroscopic method. The slipware was defined by a combination of fabric (colour, texture, surface treatment, and glaze), form, and decorative characteristics (Gaimster 2006: 52), as well as vessel size, method of manufacture, evidence of use, and state of preservation (MPRG 2016: 20-32). I used the glossary of forms and types after MPRG 1998 (see also Bauer et al., 1986;Orton and Hughes 2013). All the investigated morphological and technological features were noted and entered onto a matching questionnaire (in a database system). The information registered there serves as the foundation for the conclusions presented below. Selected potsherds were drawn and photographed and prepared for the figures included in the text.

RESULTS - CHARACTERISATION OF SLIPWARE

The slipware found at the castle site in Tykocin was of decent quality. The vessels were manufactured from the most common raw materials - ferrous clays - which were adequately prepared. The ceramic mass usually contained sand in the form of glassy grains of quartz (for 60% of the vessels), characterised by coarse and medium-sized grain inclusions (0.5-1 mm and 0.25-0.5 mm; cf., Orton and Hughes 2013: 280-282). The ceramic mass of bowls typically contained more coarse-grained inclusions (for 52% of them), presumably for utilitarian purposes.
Single glistening flakes of white mica were also noted. The walls of these vessels are usually 3 to 7 millimetres thick. The thickness is greater in the parts of the bodies closer to the bases, especially in bowls, and smallest in the middle parts of the bodies in pots and jugs. From the outside, the walls were carefully smoothed, so that no irregularities can be felt. The cores are usually compact and hard (cf., Orton and Hughes 2013: 277). The glaze covering the surfaces of all the vessels is made of lead oxide, colourless, and transparent. It forms a thin layer that became tarnished or chipped in about half of the specimens, due to post-depositional conditions (Figs 2:9-11; 3:1-2, 6). Apart from its practical purpose, i.e., increasing the impermeability of the vessels, the glaze was also a decoration - it emphasised the colour of the body and its ornamentation and provided the pottery with gloss. The glazing was applied to the already slip-covered and decorated outer walls of pots, jugs, mugs, and lids, as well as the inner walls of plates and bowls. Plates and bowls were also glazed (only partially and not always) on the other, undecorated side, while pots, jugs, and mugs were usually glazed on both sides. What distinguishes the discussed artefacts is their decoration - colours and patterns. Unfortunately, due to their heavy fragmentation, not much can be said about the arrangement of the ornament or its correlation to the shape of the vessel. In most cases, only small parts of the ornaments are visible. Their specific primary feature is the use of slip, which is obtained through a suspension of fine clay in water (this is present on 60 vessels, i.e., 80%). The surfaces of 75% of the discussed vessels had been covered with white slip. Upon firing, such a layer put on clay bodies and under colourless glaze resulted in a bright, beige or yellowish overlay (Figs 2-5). In the case of the remaining 25% of the earthenware, a brown or green slip was used, thus creating a dark cover, strengthening the natural colour of the raw material (Fig. 6). Among the painted motifs, one is presumably zoomorphic (1 item): since the ornament is only partly visible, it remains unclear whether it is a schematic depiction of a sitting bird (its torso and legs) or an element of a different motif (Fig. 2:4).

Fig. 3. Slipware from Tykocin Castle with bright slip overlay, simplified painted ornaments, and decorated with other techniques, the second half of the 17th-18th centuries: 1-5, 7-10 - painted ornament; 2 - sgraffito ornament; 3, 6 - chattered ornament. Drawing by M. Wagner, photo and computer graphics by W. Bis.

The employed colour scheme is limited to several hues: green (a grass-like colour being the most common), brown, white, and yellow, with occasional turquoise, bluish, or reddish tones. In 20% of the vessels, the patterns are outlined in a colour contrasting with the underlayer, which adds to the regularity and sharpness of the motifs. The outline is almost always brown (15 items), with a single white example. On several vessels, the ornament was applied with different techniques. In three cases (4%), it was sgraffito, where a part of the slip was removed with a sharp tool to reveal the colour of the underlying layer or that of the clay itself (Orton and Hughes 2013: 88;MPRG 1998: chapter 12.6), creating a pattern of thin wavy and semi-circular lines (Figs 3:2; 4:3-4).
Five other items (7%), in turn, were decorated with rows (bands) of dots impressed with a chattering tool or a roulette. These dots are dark, since their colour matches that of the redware body uncovered below the layer of bright slip (Fig. 3:3, 6). This technique is known as chattering or hemring, in German: Kerbstichdekor, Springfederdekor, Hemrad dekor (see Heege 2019a: 95-96;2019b: 84). Only on three of the analysed vessels was the ornament created with different techniques combined - painted ornament with sgraffito (Fig. 3:2) and painted ornament with chattering (Fig. 3:3). The most common category in the Tykocin slipware assemblage are plates (see Table 1), which represent shallow dishes (e.g., Figs 2:13; 3:7; 6:7-8). As far as the fragmentation of the vessels allowed, it could be determined that in most cases (6 specimens) their shoulder and body were of similar height, with both parts separated by a gentle cut; alternatively, in rare cases, the shoulder was shorter than the body or the other way round (cf., MPRG 1998: chapter 5.4, forms a-c). Their edges (n=12) are usually everted from the vessel wall (5) or, less commonly, formed differently - inturned (3), clubbed (2), or flat (2; cf., MPRG 1998: chapter 11.7.1). Typically, the profile of the rim is collared (7), in some cases thickened (3) or simple (2). The diameters of the edges of the rims range from 20 to 31 cm. Recurring sizes are 28 (in 4 specimens), 26, and 30 cm. The bases of the plates (n=8) are 8 to 12 cm in diameter, with three specimens measuring 11 cm. It is equally common for them to either have a footring or not; but when they do, the footrings are flat and their surfaces are smoothed. The only reconstructed specimen is 5.5 cm high. Decorations are applied from the inside, both on the bodies and the rims of the vessels. The bowls come in diverse sizes, from small through medium to large, although the bigger ones are the most common (e.g., Figs 2:12; 3:6, 8-10; 4:1). The prevalence of large bowls is evidenced by their rim diameters, ranging from 10 to 28 cm and divided into three size categories: 10-11.4 cm (3 specimens); 16-20 cm (3); and 21-28 cm (7). In regard to the profile of their walls, they represent three types: carinated, flared, and rounded bowls (MPRG 1998: chapter 5.1.1-5.1.6). Their preserved and measurable bases (n=4) range between 10 and 12 cm in diameter. In two cases, it was possible to determine the height of the vessels - 8 and 10 cm. In terms of depth, they were probably shallow and medium vessels (MPRG 1998: chapter 5.1). As shown by the investigated sherds (n=13), the angle of the rim is usually everted (8), sometimes flat (3) or upright (2), whereas the rim forms include: rounded (5), simple (4), thickened (3), and collared (1). The profile of the rim edge is usually rounded (7) or bevelled (6). The shape of the base in profile is concave or flat, with the majority of the bases smoothed from the outside. The ornament is found on the inside of the bodies and the bases or flattened mouths. Since the plates and bowls are open wares, their inner part forms the largest and most exposed surface, suitable for presenting diversely arranged decorative elements. This is attested even by the fragmentarily preserved slipware vessels discussed in this paper. It also reveals this pottery's important decorative purpose.
The pots in the discussed group of vessels are rather small and morphologically close to rounded mugs, i.e., mugs with a rounded body profile below a deep neck and with vertical loop handle(s) (MPRG 1998: chapter 6.3.5); such handles were found on three of them. It may be assumed that these pots were used as individual drinking vessels (Fig. 5:1-4, 6, 8). The diameters of their mouths (n=5) measure 8, 10, and 12 cm (the last measurement obtained from a single specimen), whereas the bases (n=2) measure 6 and 7 cm. The shape of the rim is everted (3 specimens) or inturned (2) in relation to the vessel wall, whereas the profile of the rim edge is rounded or bevelled. Only one base is fitted with a foot. The rod handles are oval in section, measuring 1.2 and 1.4 cm in width. The surface of the pots was covered exclusively in white slip, on which decoration was applied, composed mostly of monochrome (green) spots, sometimes with streaks, chaotically scattered across rims and bodies. The decoration of these pots shows the most consistency and the least diversity compared to the other vessels. Mugs, i.e., basic drinking vessels, are preserved as bodies (5) and a base (1; Figs 2:3; 5:7; 6:1). The base has a diameter of 8 cm and is fitted with a foot. The shape of the bodies suggests that they derive from barrel-shaped, conical, or cylindrical mugs. Presumably, they had different capacities. The majority of the sherds are covered with a bright slip with various patterns (5). Information about the jugs is limited, due to their fragmentary state of preservation. Mostly parts of bodies (5 specimens) were recovered, along with a single fragment of a rim (Fig. 2:9-11). These small sherds reveal little about the shapes of the whole jugs. However, it may be supposed that they were relatively pot-bellied vessels - pear-shaped or shouldered jugs (MPRG 1998: chapter 3.1). The rim, 12 cm in diameter, is upright (in angle), simple (in form), and internally bevelled (in rim profile). The ornament was applied on a bright or coloured underlayer. The only lid from the assemblage is a fragmentarily preserved specimen with a shallow domed profile and a central integral knob (Fig. 4:5). The knob is wedge-shaped in elevation (MPRG 1998: chapter 7.1.4c) and measures 1.5 cm in diameter. In this case, the ornament, presumably with floral motifs, was applied on a background layer of dark slip.

PROVENANCE, CHRONOLOGY, AND FUNCTION OF SLIPWARE VESSELS

The slipware used at the castle in Tykocin probably comes from at least several manufacturing centres operating in the post-medieval period. In this context, it is crucial to determine whether the vessels were local products or imports. Answering this question requires indicating those features that would enable the identification of particular manufacturing regions and a more precise dating of the Tykocin finds. In terms of technological sophistication, the slipware from the whole discussed period, i.e., from the second half of the 16th century to the second half of the 18th century, is relatively consistent. The basic differences in quality are best visible in the ornamentation, despite it being only partially preserved. Two quality standards could be distinguished: work carefully and precisely executed, presumably by proficient craftsmen not without a spark of creativity; and more schematic work, merely referencing certain stylistic tendencies seen in slipware production and executed in a sparing and simplified way by pottery-makers of average skill.
The first group would include most of the specimens with a bright background, some vessels with the ornament painted against a dark underlay but outlined with a contour, and specimens with marbled decoration or ornamented with the sgraffito and chattering techniques. The provenance of the specimens decorated with the sgraffito technique has not been determined (Figs 3:2; 4:3-4). The reason for that is their poor state of preservation, which prevents identification. It is possible that they are of local production, like similarly decorated finds e.g., from Skaryszewy (unpublished materials, oral communication from Dr Michał Starski). The second group, in turn, would include vessels ornamented with colour spots, painted decoration applied directly on the body, and decoration applied on a dark underlay but without an outline. These also happen to be the youngest finds within the assemblage. This group of finds may be well represented by two bowls and a plate. The bowls have redware bodies decorated only with strips of white slip covered with irregular green and brown spots (Fig. 4:1-2). The remaining part of their inner surfaces is undecorated and covered with transparent glaze. Their provenance is difficult to determine; perhaps they were imports. They may be dated to the period between the late 17th century and the second half of the 18th century. A similar specimen was found, for example, in Amsterdam, dated to the years 1700-1850 and originating from the Lower Rhineland (Gawronski 2012: 277, no. 995). The plate is an unfinished product (Fig. 6:7). A part of its body was covered with brown slip, but the layer melted and blurred. On top of it, a white slip was applied to paint a jagged geometric pattern. The find came from a layer dated to between the 17th century and the second half of the 18th century. Such subpar specimens may have originated from local Podlachian pottery workshops, perhaps operating outside the guilds. I was unable to find any analogies for it in published works. Potters were active in Tykocin itself and in other towns of the region in the post-medieval period, as attested by written records, but there is no information on whether slipware was manufactured there at the time (cf., Maroszek 1976). On the other hand, vessels ornamented with loosely scattered colour (green) spots against an overlay of white slip (Fig. 5) were produced, for example, in the Miechocin workshop in the 18th century (Szetela 1969b: 100, 103, Fig. 69). Such artefacts are known, for instance, from Warsaw (e.g., Meyza 1996: 58-59, no. 32;80-81, no. 44;82-83, nos. 48-50), from Gdańsk, where they are considered imports from Miechocin (Oniszczuk 2013b: 427, 438), and from Prague, from the 17th and 18th centuries. In the last case, they were supposedly products of the Czech centres in Beroun or Levín (Matějková 2019: 136-137, figs 6-7). The finds from Tykocin Castle are stylistically and chronologically consistent with the aforementioned specimens. They may be dated to the first half of the 17th (?) century and to between the mid-17th century and the second half of the 18th century. However, it is difficult to define where exactly they were manufactured. They are a testimony to pottery trade on at least an interregional level. The attribution of the vessels preserved as small fragments with non-characteristic ornamentation remains unknown (e.g., Figs 2:5-8; 3:4-5, 7-8; 6:2-5). Hence, the route which brought these items to Tykocin Castle cannot be reliably reconstructed.
Some of the fragments lacking distinguishing features but of otherwise high quality are likely to have been imported from Western Europe. How did they end up in the castle household? Polish goods were probably bought at local markets and fairs or delivered as a part of the tributes due to the castle. However, there are no written records explicitly confirming such a practice. In the case of the foreign vessels, on the other hand, there are many potential routes by which they could have reached the fortress. They were purchased (just as many other imported goods were) predominantly in the capital (Warsaw) or in the Baltic ports (Gdańsk and Königsberg). Alternatively, they could have been ordered directly from foreign workshops, as evidenced by the 18th-century written accounts of pottery bought for Jan Klemens Branicki (cf., Bis forthcoming). An equally important question is the way in which slipware was used. Its aesthetic features and the morphology of the vessels would undoubtedly make it suitable tableware for serving dishes, eating, and drinking. For the same reason, these vessels could have been used as interior decoration, especially as some of them were prepared specifically for this purpose (e.g., a bowl with a perforated foot, allowing it to be hung as a decorative dish). Their technical parameters combined with their decorative potential suggest that their quality surpassed that of the cheapest and most common kitchen ware - brownware, greyware, and redware. The same is true for the less numerous white ware, of slightly higher quality and also glazed, which would be put on the table and serve secondary purposes in the kitchen and larder. As indicated by the analysis of the ceramic finds from Tykocin Castle, the quality and status of the slipware match those of the common vessels of the so-called Pomeranian faïence but are significantly below the quality of other faïences - imported or produced in Polish manufactories - as well as majolica, stoneware, and porcelain, which were more expensive and are rarely recorded in archaeological layers. The slipware was probably used by officials, craftsmen, administrators, etc. - a wide group of consumers of average means - who would have resided in the building. It is assumed that the demand for this kind of pottery in European countries was a result of the growing requirement for more sophisticated goods, serving as ceramic substitutes for metal- and glassware. This process was fuelled especially by the aspirations of the lower classes to imitate the lifestyle of the aristocracy. Other factors contributing to it may have been an inflow of imported goods and cosmopolitanism (cf., Cumberpatch 2003;Gaimster 2006). The local slipware production was a response of the domestic workshops to the new styles and solutions developed by western European workshops and renowned Polish manufactories. As demonstrated by the analysed vessels - probably made in the Podlachia region - this came with mixed results.

CONCLUSIONS

The slipware obtained from Tykocin Castle, despite the limitations caused by its fragmentary preservation and damaged vessel surfaces, has prompted interesting observations. The assemblage included specimens representing the most popular morphological-stylistic trends and decorative techniques and motifs used in post-medieval pottery from different manufacturing centres. It seems likely that it originated predominantly from Polish workshops, including the renowned Miechocin or the Western-Pomeranian Myślibórz, as well as from undetermined foreign production centres.
It also contains products of other local workshops, presumably from the Podlachia region, which delivered poor imitations of the superior slipware. The majority of the Tykocin finds are dated to the 17th and 18th centuries. They are an example of, and a testimony to, one of the innovative trends in the pottery-making of that period. On the other hand, they also illustrate an important process taking place within post-medieval pottery production - a gradual downturn in the artistic value of the products until it reached a level comparable to that of the folk pottery of the 19th century.
2021-12-22T16:25:01.769Z
2021-12-20T00:00:00.000
{ "year": 2021, "sha1": "f4a247f7636e594be5fa7a2bcb4293724c0cf036", "oa_license": "CCBY", "oa_url": "https://journals.iaepan.pl/apolona/article/download/2843/2831", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9f5b028f4150f25bc9ce4166699e60ffe2cab339", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
119349114
pes2o/s2orc
v3-fos-license
Atom optical elements for Bose condensates

A simple model for atom optical elements for a Bose condensate of trapped, dilute alkali atoms is proposed, and numerical simulations are presented to illustrate its characteristics. We demonstrate ways of focusing and splitting the condensate by modifying experimentally adjustable parameters. We show that there are at least two ways of implementing atom optical elements: one may modulate the interatomic scattering length in space, or alternatively, use a sinusoidal, externally applied potential.

I. INTRODUCTION

Since the first experimental realisation of Bose-Einstein Condensation (BEC) with dilute, trapped alkali gases in 1995 [1][2][3], rapid progress has been made in studying their physical properties. One of the exciting prospects for BEC from the outset has been the possibility of realising the matter equivalent of the optical laser, i.e. the atom laser [4,5]. The recent demonstration by the MIT group of an output coupler for condensates [6], along with the interference of two independent condensates [7], has brought the realisation of the atom laser one step closer. In fact, one could argue that we already have a rudimentary atom laser and that we should be looking for effective ways to control the output beam of such sources. By control we mean focusing and beam splitting of flowing condensates, in analogy to conventional optics using light. In the last few years, there has been tremendous progress made in the field of atom optics, both linear and non-linear, in which the motion of atoms is controlled via light forces [8][9][10][11][12][13]. Direct application of these techniques using light forces will not always be possible due to a variety of factors. These include the possible heating of the condensate by the laser field, which may destroy the condensation. Ideally one wants to apply light fields to transfer momentum to the atoms in a controlled manner while keeping the atom assembly condensed. This is, of course, a particularly severe task for ultracold BECs. In this paper we apply the well-known idea of optically generated potentials used in atom optics, in the context of the Gross-Pitaevskii Equation (GPE), to model how atom optical elements produce focusing and splitting of a condensate. We should emphasise that we shall be demonstrating the effect of such arrangements rather than providing detailed descriptions of how such a configuration would be constructed in practice. We shall, however, discuss some of what we see as the main limitations of such schemes. In section II we shall specify our theoretical model and in section III the results of our simulations are presented. Finally, these results are discussed in view of available experimental parameters in section IV.

A. The NLSE or Gross-Pitaevskii Equation

The mean-field theory for an assembly of condensed atoms in an external field may be obtained in two ways: in the first, we start from the non-linear atom optics point of view of the many-body wave function and then find an appropriate limit for Bose condensation; in the second, we use the field-theory approach of Ginzburg and Landau [14], which is the standard approach in much of the literature on Bose condensates. The mean field theory obtained from these two approaches is identical, and we shall briefly outline only the second route below. For a detailed discussion of the formalism of nonlinear atom optics, readers are referred to papers by Lenz et al. [10] and Zhang et al. [11][12][13].
The Hamiltonian for a system of interacting atoms in a trap is given by:

\hat{H} = \int d^3r \, \hat{\Psi}^\dagger(\mathbf{r}) \hat{H}_0 \hat{\Psi}(\mathbf{r}) + \frac{1}{2} \int d^3r \, d^3r' \, \hat{\Psi}^\dagger(\mathbf{r}) \hat{\Psi}^\dagger(\mathbf{r}') \hat{V}(\mathbf{r} - \mathbf{r}') \hat{\Psi}(\mathbf{r}') \hat{\Psi}(\mathbf{r}),   (1)

where \hat{H}_0 is the single particle Hamiltonian and the field operator \hat{\Psi} obeys the standard boson commutation relations

[\hat{\Psi}(\mathbf{r}), \hat{\Psi}^\dagger(\mathbf{r}')] = \delta(\mathbf{r} - \mathbf{r}'),   (2)
[\hat{\Psi}(\mathbf{r}), \hat{\Psi}(\mathbf{r}')] = [\hat{\Psi}^\dagger(\mathbf{r}), \hat{\Psi}^\dagger(\mathbf{r}')] = 0.   (3)

We shall assume, given the fact that we are dealing with a dilute gas, that the interatomic potential can be approximated by that of a contact interaction:

\hat{V}(\mathbf{r} - \mathbf{r}') = V \delta(\mathbf{r} - \mathbf{r}').   (4)

Additionally, we shall suppose that

V = V_{intrinsic} + V_{induced}.   (5)

That is to say, the strength of the overall interatomic interaction may be viewed as the sum of that due to intrinsic interactions and the modification due to an external influence such as that of a laser, a magnetic field or an rf-induced interaction. Such externally induced modification may occur, for example, when pairs of atoms, driven by the external field, have the ground state mixed with an electronically excited quasimolecular state. This quasimolecular state can have a much stronger interaction, modifying the scattering amplitude in the process. Description of the interaction strength is facilitated by introducing the scattering length a, which is proportional to V. The nonlinear Schrödinger equation for the condensate wave function, \psi, is then given by:

i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V_{trap}(\mathbf{r}) + C_n |\psi(\mathbf{r},t)|^2 \right] \psi(\mathbf{r},t),   (6)

where

a = a_{intrinsic} + a_{induced},   (7)
C_n = \frac{4\pi N \hbar^2 a}{m},   (8)

with N being the number of particles in the sample, and m the mass of a single particle. a_{intrinsic} denotes the scattering length in the absence of an external field and is an experimentally measurable quantity; a_{induced} is that induced by an external electromagnetic field, for which there are various theoretical predictions. It is predicted, for example, that the overall scattering length, a, may be varied by the application of a magnetic field for the F = 3, m_F = -3 hyperfine state in Cs [15]. A blue-detuned "optical shielding" type laser of variable intensity [16,17] is also believed to be a good way to achieve control over the effective scattering length. Experimental measurement of a_{induced}, however, has not been performed to date. We note here that, in theory, the non-linear coefficient C_n may be made position dependent, since the induced scattering length, a_{induced}, may be varied in space by applying a laser field with a suitable variation in intensity. The nonlinear Schrödinger equation or the Gross-Pitaevskii Equation, Eq. (6), describes the dynamics of a single atom moving in the presence of the mean field induced by the other atoms in the system, in addition to the effects of the confining trap potential. The GPE has been found, over the last couple of years, to provide a very good description of condensate behaviour over a range of temperatures from near T = 0 K to a sizeable fraction of T/T_c, where T_c is the critical temperature of the phase transition [18][19][20][21].

B. Modified GPE

We intend to investigate two types of possible experimental arrangement. As the first possibility, we consider a typical atom optical configuration as shown schematically in Fig. 1: first the condensate is prepared in a trap and released so that it falls freely under gravity. Such a situation is described by turning the trap off, i.e. setting V_{trap} = 0. The cloud of atoms will undergo ballistic expansion. The condensate is then passed through an interaction region in which it is subject to an interaction potential, before it again falls freely in space onto a detector or a surface.
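Before specifying the two interaction types considered below, it may help to evaluate Eq. (8) numerically. The following one-off sketch uses illustrative sodium-like parameters of our own choosing; they are assumptions, not values quoted in this paper.

import numpy as np

# Worked evaluation of Eq. (8), C_n = 4*pi*N*hbar^2*a/m, for assumed,
# sodium-like parameters (order-of-magnitude only; not from this paper).
hbar = 1.0546e-34          # J s
m = 23 * 1.6605e-27        # kg, mass of a Na-23 atom
a = 2.75e-9                # m, s-wave scattering length, roughly that of Na
N = 1.0e6                  # assumed number of condensed atoms

Cn = 4 * np.pi * N * hbar**2 * a / m
print(f"C_n ~ {Cn:.2e} J m^3")  # sets the scale of the mean-field term in Eq. (6)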
We look at two types of interaction with this setup: spatial modulation of the scattering length in the interaction region, so that the corresponding evolution is given by:

i\hbar \frac{\partial \psi}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + C(\cos(\mathbf{k}\cdot\mathbf{r}) + D) |\psi|^2 \right] \psi,   (9)

and use of a sinusoidal potential in the interaction region, as described by:

i\hbar \frac{\partial \psi}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + A\cos(\mathbf{k}\cdot\mathbf{r}) + C_n |\psi|^2 \right] \psi,   (10)

where C and A are the amplitudes and k the wave number, which can be adjusted. We present the results for the case when D = 0, and also for the case when the sinusoidal variation is limited to positive scattering lengths, by a suitable adjustment of D. Using either method, the atomic motion may be modified in a controllable manner. The two methods are, however, qualitatively different, since the first, involving C_n, is of an intrinsic nature, arising from the interaction of atoms in a many-body system, while the second involves imposing an external potential. We shall see that, unlike conventional atom optics, we now have two avenues for imposing a potential gradient, which may or may not be applied simultaneously, and which open up the possibility of manipulating the relative amplitudes and phases of these two potentials. In this way, it seems possible to generate an effective overall potential gradient, which may even vary over time, to tailor our atom optical component. As the second possible experimental arrangement, we look at the effect of spatially varying the interatomic scattering length in the presence of the confining trap. The condensate is not dropped, and therefore we can study the case in which both types of potentials are present simultaneously. This is modelled by an equation with a minor modification to the standard GPE:

i\hbar \frac{\partial \psi}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V_{trap}(\mathbf{r}) + C_n f(\mathbf{r}) |\psi|^2 \right] \psi.   (11)

We consider the case of f(r) = Gr^2 as well as f(r) = C(\cos(\mathbf{k}\cdot\mathbf{r}) + D), where G, C, D are constants, in order to test the general features of such an arrangement. The condensate formed recently at MIT using the cloverleaf trap configuration was reported to have an aspect ratio of 20 [22]. This means that a one-dimensional GPE may be expected to provide an adequate qualitative description of the evolution of a cross section of such a condensate. The spatial modulation of C_n discussed above would, in this case, be along the long axis. To be more precise, one may allow some Gaussian spreading of C_n in the radial direction of the cigar-shaped condensate, i.e. C_n(r)_{radial} of the form

C_n(r)_{radial} \propto \exp(-r_\perp^2/\sigma^2).   (12)

We assume the coupling between C_n(r)_{axial} and C_n(r)_{radial} to be weak. The 1D GPE with modified linear and non-linear potentials was integrated using a 4th order Runge-Kutta integration routine over various configuration times in order to find the optimal conditions for focusing and splitting. In the following we assume that all our atoms originally have positive scattering lengths.

III. RESULTS

A. Results for the atom optical configuration

In order to simulate an experiment of the type shown in Fig. 1, we need to specify how long we let the condensate fall freely, interact with the chosen field, and then fall freely again onto a detector or a substrate. It was found from simulations that, once the initial free evolution time is chosen, the final spatial distribution of the condensate that we see at the detector depends, as would be expected, quite sensitively on the duration of the interaction and the subsequent free fall. In our simulations, we chose a reasonable initial free fall time and then selected the interaction and subsequent free fall times so as to give an optimum probability distribution in terms of the sharpness and regularity of the peaks on the final detector.
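To illustrate the numerical scheme just described, here is a minimal sketch - not the authors' code - of a 4th order Runge-Kutta integration of the 1D GPE with the spatially modulated nonlinearity of Eq. (9), written in harmonic oscillator units (hbar = m = omega = 1). The grid, time step, initial Gaussian state and parameter values are illustrative assumptions.

import numpy as np

# Grid in harmonic-oscillator units (hbar = m = omega = 1); values assumed.
L, Npts = 40.0, 512
x = np.linspace(-L/2, L/2, Npts, endpoint=False)
dx = x[1] - x[0]

# Eq. (9): spatially modulated nonlinearity C_n(x) = C*(cos(k x) + D)
C, D, k = 1.0, 0.0, 2.0
Cn = C * (np.cos(k * x) + D)
V = np.zeros_like(x)              # trap off in the interaction region

def laplacian(psi):
    # Second derivative with periodic wrap; adequate for a localised packet
    return (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2

def rhs(psi):
    # i dpsi/dt = [-(1/2) d^2/dx^2 + V + Cn |psi|^2] psi
    return -1j * (-0.5 * laplacian(psi) + (V + Cn * np.abs(psi)**2) * psi)

def rk4_step(psi, dt):
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

# Gaussian initial state standing in for the released condensate
psi = np.exp(-x**2 / 2.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, t_final = 1e-3, 0.3 * np.pi   # interaction time t = 0.3*pi/omega
for _ in range(int(t_final / dt)):
    psi = rk4_step(psi, dt)

print("norm after evolution:", np.sum(np.abs(psi)**2) * dx)  # should stay ~1

An explicit RK4 step of this kind is only stable for time steps small compared with dx^2, which is why the step above is conservative; a split-step Fourier method is a common alternative for longer evolutions.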
Spatial modulation of C_n in the interaction region

In conventional atom optics, parabolic optical potentials are used to provide a focusing effect. We have, therefore, tried a parabolic modulation of C_n in the interaction region. It was found that, in order to see any noticeable focusing effect with such an arrangement, we need a very steep position dependence, such that around the tail region of the condensate a C_n of the order of around 20 times the original value is needed. A theoretical treatment by Fedichev et al. [23] indicates that although it might be possible to change the scattering length by comparable amounts, the accompanying inelastic scattering rate also undergoes a correspondingly large increase. Since this is not experimentally favourable, we do not pursue the parabolic variation any further and instead concentrate on the sinusoidal modulation of C_n in the interaction region. We let our condensate expand ballistically for t = π/ω to model the initial free fall part of Fig. 1. The axial trap frequency of the MIT cloverleaf trap was reported to be ω = 2π × 18 Hz, which translates t to a time of order 27 ms. Figure 2 shows as a solid line the probability distribution immediately after the interaction region, for an interaction of duration t = 0.3π/ω with modulation amplitude C = C_n, D = 0 and wave number k = 2 in harmonic oscillator units. The dashed line indicates the ballistically expanded condensate just before the interaction. This interaction time, which corresponds to approximately 8 ms in the MIT trap parameters, gave the best result in terms of the sharpness, height and regularity of the peaks. The shape of the spatial modulation is shown as the dotted line. We see that the number of peaks and valleys matches the number of spikes in the condensate, as is to be expected from the conventional potential-gradient idea. Indeed, it was found that in an identical simulation with k = 1, i.e. half the wave number, we get 4 prominent spikes in the final distribution instead of the 8 seen here. It is then reasonable to expect that the sharpness of these peaks, as indicated by the ratio of their height to width, would increase if the amplitude of the potential were increased; the results agree with this prediction. The maximum peak height of 0.094 for Fig. 2 was increased to 0.162 as a result of doubling the amplitude of modulation, while keeping the general shape. After the interaction region, the condensate was allowed to evolve freely. Figure 3 shows the final probability distribution after the distribution shown in Fig. 2 was freely evolved for a duration t = 0.3π/ω. The time of free evolution was again chosen so as to give the best distribution. The peaks are now broader and have been displaced so that there is now a peak at the origin, as a result of the rather complex dynamical motion of various parts of the condensate as it expands. The dotted lines represent the shape of the effective potential that was originally in place in the interaction region. It was found that a regular probability distribution is again restored after t = 0.8π/ω, with shorter and broader peaks. The case of a sinusoidal variation of the scattering length which remains positive was also investigated. This was simply accomplished by translating the potential vertically so that the minimum value of C_n is now 0 instead of −C_n as in the previous case. The final result is shown in Fig. 4. The solid line is the case when C_n varies from 0 to C_n. The dot-dashed line indicates the case when C_n varies from 0 to 2C_n.
We see that there is no qualitative difference from the previous case, in which negative scattering lengths were allowed; the amplitude of modulation, rather than its sign, seems to be the determining factor. The effect of an initial free evolution time longer than 1π/ω was to give shorter peaks, but with the same general shape and width. This can be explained by the fact that after a longer free fall the condensate has spread out more, and its peak height is consequently lower when it interacts with the fields in the interaction region.

Using a sinusoidal potential in the interaction region

In a direct analogue to the case of spatial modulation of C_n, we simulate the same experiment of Fig. 1, but here we employ a sinusoidal potential in the interaction region while leaving C_n spatially invariant, i.e. Eq. (10). The initial free-fall time was again chosen to be 1π/ω. The probability distribution immediately after the interaction region of Fig. 1 for a sinusoidal potential is shown as a solid line in Fig. 5. In this case, Eq. (10) was integrated with A = 1 and k = 2 over an interaction time t = 0.2π/ω. As before, the shape of the sinusoidal potential is shown as a dotted line, and the dashed line indicates the ballistically expanded condensate immediately before the interaction. When the amplitude of modulation was doubled, the maximum peak height of 0.073 shown in Fig. 5 was found to increase to 0.102 while keeping the same general shape. Again, when an initial free evolution time longer than 1π/ω was considered, shorter peaks were produced, giving a smaller peak height-to-width ratio, as discussed previously. The final probability distribution after a subsequent free evolution of duration t = 0.3π/ω is shown as a solid line in Fig. 6. It was again found that a regular probability distribution is restored after t = 0.9π/ω, with shorter and broader peaks. When one uses A = 2 in the sinusoidal potential, the corresponding probability distribution is given by the dot-dashed line in Fig. 6; for this case, a free evolution time after the interaction of 0.4π/ω was used. We see here that the effect of modulating the external potential is qualitatively the same as that of modulating C_n, and that in both cases the effect may be understood simply in terms of the potential-gradient idea, as in conventional atom optics.

We note also that a parabolic potential over the interaction region is another possible configuration for focusing. It was found, however, that although a focusing effect does arise, in the sense of counteracting the ballistic expansion, its effectiveness was rather small, and broadening took place on a very short time scale after the condensate left the interaction region. This is possibly because, by this stage, repulsive interactions dominate in the dynamically expanding condensate, such that a potential of the original trap strength is not sufficient to counteract the expansion. Again, a very steep parabolic potential was needed to observe an efficient focusing effect.

B. Spatial modulation of C_n in the presence of a trap

In this subsection, we look at the possibility of imposing both the spatial modulation of C_n and a confining trap. We look at evolution such as that described by Eq. (11), which, in addition to the atom optical configuration discussed in the previous subsection, is another possible experimental arrangement. Since, in this case, the confining trap prevents ballistic expansion, a parabolic variation in C_n with more realistic values of the scattering length could be studied.
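As a minimal numerical sketch of this trapped configuration — with an assumed grid and time step, and a Gaussian initial state standing in for the true interacting ground state — one can track the peak height of |ψ|² over time, using the parabolic profile C_n x²/400 quoted in the next paragraph:

import numpy as np

# Sketch only: in-trap 1D GPE (cf. Eq. (11)) with parabolically modulated C_n,
# in harmonic-oscillator units (omega = 1); numerics are assumptions.
N, L, dt = 512, 40.0, 1e-4
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x**2                        # confining harmonic trap
Cn_x = 20.0 * x**2 / 400.0            # C_n(x) = 0 at the origin, C_n at x = +/-20

psi = np.exp(-x**2 / 2).astype(complex)   # Gaussian stand-in for the ground state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def rhs(p):
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx**2
    return -1j * (-0.5 * lap + (V + Cn_x * np.abs(p)**2) * p)

peaks = []
for n in range(int(0.5 * np.pi / dt)):    # evolve to t = 0.5 pi/omega (cf. Fig. 7)
    k1 = rhs(psi); k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2); k4 = rhs(psi + dt * k3)
    psi = psi + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    if n % 200 == 0:
        peaks.append((np.abs(psi)**2).max())  # oscillating peak height (cf. Fig. 8)
print("peak height, initial -> final:", peaks[0], "->", peaks[-1])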
Figure 7 displays the probability distribution of the original trapped condensate as a solid line, while the dot-dashed line displays the distribution with a parabolic variation of the nonlinear coefficient in Eq. (11). This is the result after a time t = 0.5π/ω, where ω is the trap frequency. In this simulation we assumed that C_n varies as C_n x²/400 in harmonic-oscillator units, so that C_n(x) = 0 at the origin and C_n(x) = C_n at x = ±20. We see here a focusing effect within the trap. The height, and consequently the width, of the peak was found to vary over time in an oscillatory fashion. Figure 8 shows the evolution of the maximum height of the peak over time in the presence of this parabolic potential. The behaviour can be understood qualitatively from the fact that the condensate on the "wings" of the Gaussian-like initial distribution 'feels' a push towards the center, resulting in a build-up of the peak height; the peak height then starts to decrease after reaching a maximum, as the parts of the condensate pass through each other, resulting in a broadening of the base of the distribution. A slowly varying, beating-type envelope appears on the oscillatory curve when a different parabolic distribution is used, for instance when C_n varies as C_n x²/100 rather than C_n x²/400. This slowly varying beat pattern is caused by an effective phase difference between the potentials acting on the condensate. In this example, the potentials act as two distinct "springs" which are out of phase with one another in time; this effect becomes more pronounced when the tightness of the parabolic potential due to C_n is changed.

The resulting probability distribution when a sinusoidal rather than a parabolic variation of the scattering length is imposed is shown in Fig. 9. We see that an action similar to beam splitting occurs, with two prominent peaks. The interaction time shown is t = 0.4π/ω. Again there is time-dependent behaviour, as is to be expected. The somewhat unexpected shape of Fig. 9 can be explained by superposing the two independent potentials; Fig. 10 shows the shape of the overall potential when the parabolic trapping potential and the sinusoidal modulation of the nonlinear constant are superposed (this is a plot of x²/4 + C_n cos x against x). It is clear from the shape that the most likely probability distribution is that of two peaks, as any further peaks would require additional energy. Again, the time-dependent variation of the probability distribution can be explained by the fact that, as the condensate changes its shape over time, both the trapping potential and the sinusoidal potential from the nonlinearity exert time-varying forces. It is noted that in Fig. 9 we used a k = 1 modulation; the separation between the two peaks was reduced for higher k, as is to be expected. When the trap was turned off, many more peaks appeared than the expected number of around four (cf. Figs. 4-6). This is readily seen to be the effect of the trapping potential: there is an extra force from the sides toward the center, resulting in a higher number of peaks in the main region x = −10…10. What is clear from the simulations of this subsection is that we can manipulate the shape of an initially stationary condensate within a trap by modulating the nonlinearity constant.

IV. DISCUSSION

In this section we attempt to estimate possible experimental values.
First of all, we note that the interaction times used in the simulations, of the order of 10 ms, are very much within the limits of present technology, since a typical atom lithography setup has a total time from thermal source to substrate of the order of 0.67 ms [9]. It is also noted that the typical time of flight of the condensate used in recent experiments before imaging is around 40 ms, so the total time taken from source to detector in our numerical simulations matches that of real experiments very closely. The total distance the condensate falls in our example from source to detector is then around 1 cm. The width of each peak in the simulations is around 2 harmonic-oscillator units which, for sodium atoms and the cloverleaf trap, corresponds to approximately 7 µm. Possible ways to reduce this width even further would be to use a modulating field with a higher ratio of amplitude to k in the interaction region, and to use a trap of frequency higher than 2π × 18 Hz. The effect of increasing k alone would be to make the peaks more closely spaced. The area under each peak was such that the number of atoms in each peak would be around 10% of the total number of atoms.

A very important point is whether there is any limit to the number of atoms in each peak due to interactions between the particles. This aspect was investigated by looking at the probability distributions for the case where we employ the sinusoidal potential, i.e. Eq. (10), but with various values of the interatomic interaction strength characterised by C_n. It was indeed found that the "fringe visibility", defined as V = (P_max − P_min)/(P_max + P_min), where P_max and P_min represent the maximum and the minimum of the "fringe pattern", decreases quite dramatically when we use a larger value of C_n. As an approximate indication, a simulation with C_n = 150, as compared to the C_n = 20 used in Figs. 5 and 6, gives V = 0.24 as compared to the original V = 0.72. The times used were 0.5π/ω, 0.2π/ω, and 0.3π/ω. The initial free evolution time was chosen to be t = 0.5π/ω rather than 1π/ω because, in this case of large C_n, the cloud spreads out too much by the time t = 1π/ω. Noting that C_n is also proportional to N, the total number of condensate particles, this brings us to the question of whether production of an intense beam of atoms in an atom laser is a requirement when atom optical elements of this kind are used. This aspect seems to be an important limiting factor in the design and implementation of any atom optical experiment using Bose condensates. One may, in principle, use a higher value of A for high values of C_n to improve the visibility. This implies that, to be effective, the various parameters have to be adjusted carefully to optimise the setup for each instance.

Modulation of the scattering length and use of a sinusoidally varying potential raise the issue of experimental feasibility, as both will certainly involve the interaction of an electromagnetic field with ultracold atoms. The primary effect of light on cold trapped atoms is to cause loss processes by giving atoms enough energy to escape the trap, through such processes as photon recoil and photoassociation. The rate at which an atom heats up as a result of photon recoil is governed by γ, the natural atomic linewidth, Ω, the Rabi frequency, and δ, the detuning; it is assumed here that |δ| ≫ Ω and |δ| ≫ γ. A recent theoretical study by Fedichev et al.
shows that it is, in fact, possible to alter the scattering length by as much as 350% for 7Li and to minimize recoil or photoassociation losses with a suitable choice of frequency detuning and Rabi frequency [23]. In order to have small recoil losses along with a significant change of the scattering length a, it is necessary that the detuning δ be large and negative, and chosen not too far from a vibrational resonance with one of the bound p states of the electronically excited quasimolecule. All our simulations depend on the assumption that any heating of the condensate is either negligible, or has a typical time scale that is long compared to the interaction times considered here. We also note that the generation of optical potentials by Raman transitions has recently been proposed as a way to reduce the spontaneous emission rate [24]. The atom optical elements discussed in this paper may be achieved via magnetic as well as optical fields, and it is conceivable that a certain arrangement, such as, say, an optical shielding laser combined with a magnetic-field-induced potential gradient, might provide a solution. More study, however, would be required on exactly how one can implement these atom optical elements.

In summary, it was found from numerical simulations involving various configurations that focusing and splitting effects can be achieved within the context of the non-linear Schrödinger equation with modified potentials. It is pointed out that our simulations show only a possible theoretical model, without the inclusion of effects such as the spontaneous emission of atoms, as might be the case when the external potential involves an optical dipole force. Various improvements may be made in order to accommodate such effects into the theory; for example, the effect of inelastic collisions may be incorporated by letting C_n be complex. However, at present, there is no clear indication of how large such effects will be in practice. Further studies along the lines of atom optics could involve the study of the coherence properties of condensates which have been modified in this way. Other possibilities include finding a way to use the two potentials simultaneously to tailor an atom optical element of arbitrary specification, and eventually finding a way to prepare vortex states in the condensate through such modification. This would then be of relevance to the study of superfluidity in trapped alkali Bose gases, and in this sense the atom optical elements would be more than just equivalents of conventional optical elements.

ACKNOWLEDGMENTS

This work was supported by the UK EPSRC, and the European Union under the TMR Network programme. SC would also like to thank the UK CVCP for support.

(Figure caption fragment: "This represents an effective potential to generate the probability distribution shown in Fig. 9.")
2019-04-14T03:23:09.727Z
1997-08-29T00:00:00.000
{ "year": 1997, "sha1": "5046cc9428574dc0599c9aa5dbe3e7f8a85a167c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/9708055", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1f2fd142c70043ff932c32750e69e4a3d1241f5b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231699275
pes2o/s2orc
v3-fos-license
Modulating organelle distribution using light-inducible heterodimerization in C. elegans

Summary

The relative positioning of organelles underlies fundamental cellular processes, including signaling, polarization, and cellular growth. Here, we describe the usage of a light-dependent heterodimerization system, LOVpep-ePDZ, to alter organelle positioning locally and reversibly in order to study the functional consequences of organelle positioning. The protocol gives details on how to accomplish expression of fusion proteins encoding this system, describes the imaging parameters to achieve subcellular activation in C. elegans, and may be adapted for use in other model systems. For complete details on the use and execution of this protocol, please refer to De Henau et al. (2020).

BEFORE YOU BEGIN

Manipulation of subcellular organelle positioning can be achieved using the light-dependent heterodimerization system LOVpep-ePDZ, an example of an optogenetic approach, in which a photosensitive LOVpep domain binds an engineered PDZ domain (ePDZ) after exposure to blue light (<500 nm) (Strickland et al., 2012). In the following, we describe how to implement and use this system. While we focus on the application of this system in the Caenorhabditis elegans zygote, most principles described here are likely adaptable to any transparent biological model system or organism.

To set up this system, plasmids encoding LOVpep and ePDZ need to be generated, where one is fused to an organelle-targeting sequence and the other to a protein domain that localizes to the desired site for organelle relocation (Figure 1). Subsequent stable integration of these constructs is preferred: this allows the relative concentrations of the dimerizing proteins to be more consistent, which improves the efficiency of light-induced activation (Krishnamurthy et al., 2016). Stable integration is also essential for expression in the C. elegans germline and early embryo. However, transient expression can be useful for rapid screening of functional constructs. In the following, we describe how to design and generate the constructs encoding the LOVpep-ePDZ system.

Note: Besides LOVpep-ePDZ, a number of other light-inducible protein dimerization techniques have been developed. The binding affinity and the activation and reversion kinetics of these light-inducible protein dimerization systems are important parameters to consider: systems with a tighter affinity are more suited to induce a fast functional response; however, a tighter affinity also makes them more sensitive to residual dark-state binding. For multiday experiments, minimal dark-state binding seems crucial to avoid unwanted background activation and perturbation of the system.
The activation and reversion kinetics in turn determine how frequently the system needs to be exposed to maintain dimerization, and faster kinetics allow for more temporal resolution (Pathak et al., 2014; Strickland et al., 2012). A third example of a blue-light-inducible dimer is the CRY2-CIB1 pair. Rather than an intrinsic change in affinity upon blue-light stimulation, it appears to be the light-induced homo-oligomerization of CRY2 that drives colocalization with CIB1 (Bugaj et al., 2013; Hallett et al., 2016; Kennedy et al., 2010; Lee et al., 2014). Because of this, orientation-specific tagging effects of either component need to be considered when implementing this system. Compared with the previous two methods, CRY2-CIB1 shows slower activation and reversion kinetics.

(Figure 1 caption, panels B-C: (B) Scheme for the induced trapping of mitochondria with TOMM20::HALO::ePDZ to membrane EGFR-TM::mTagBFP2::LOV, and underlying construct design. (C) Scheme for the induced transport of mitochondria with TOMM20::HALO::LOVpep via dynein heavy chain fused to ePDZ and mCherry, together with underlying construct design.)

A final example of a light-sensitive heterodimerization system is the Phy/Pif pair, which uses a different light spectrum than the three previous methods: this pair forms under red light and dissociates in darkness or under far-red illumination. Compared to the previous systems, it shows higher fold levels of activation and low background binding. This method is limited in that it requires the external addition of the cofactor phycocyanobilin, which is not always straightforward (Adrian et al., 2017; Levskaya et al., 2009; Pathak et al., 2014). A direct comparison of these systems, which allows for a better understanding of their unique advantages and limitations, can for example be found in Hallett et al. (2016) and Pathak et al. (2014). We advise the reader to carefully assess the requirements of their own light-dependent heterodimerization experiments before deciding on which system to use.

Design the plasmids containing the LOVpep and ePDZ constructs

Timing: 1-4 h

1. Choose sequences encoding proteins or protein domains that connect LOVpep and ePDZ to the organelle of interest and to the protein that will be used to relocalize the organelle to the desired site, respectively (Figure 1). To achieve local activation, it is critical that diffusion of activated LOVpep outside of the activated region is limited as much as possible. To this end, LOVpep is tagged to the less dynamic of the two structures. For example, to transport mitochondria along microtubules we fused LOVpep to the relatively immobile mitochondria using TOMM-20 and fused dynein heavy chain DHC-1 to ePDZ (Figure 1C) (Fan et al., 2019). On the other hand, to trap mitochondria at the cell membrane we fused LOVpep to a transmembrane domain to localize it to the plasma membrane, while ePDZ was now fused to the relatively more mobile mitochondria using TOMM-20 (Figure 1B) (De Henau et al., 2020).

2. ePDZ can be placed either N- or C-terminally of the protein (domain) of choice in the final construct. LOVpep, on the other hand, can only be placed at the C terminus of the final construct: LOVpep interacts with ePDZ via a C-terminal Jα-helix, which needs to remain free to maintain its functionality (Strickland et al., 2012).

3. Ideally, label both constructs with fluorophores to ensure proper expression and localization.
Keep in mind here that LOVpep is traditionally activated using 470-500 nm light (Harper et al., 2003; Strickland et al., 2012) but is also significantly activated by shorter and slightly longer wavelengths. For example, we observed light-induced LOVpep-ePDZ heterodimerization using an excitation power of 4.8 mW 405 nm laser light, 0.002 mW 458 nm laser light, and 0.005 mW 514 nm laser light (all with 5 iterations and a pixel dwell time of 8 ms, exposure every 2 s). These settings are considerably lower than what we normally use to excite blue, green, and yellow fluorophores. Therefore, in case the LOVpep or ePDZ constructs need to be visualized and followed during light-activation experiments, one cannot use fluorophores that require excitation with wavelengths between 400 and 520 nm, such as mTagBFP2, eGFP, and Venus. Excitation of fluorophores with an excitation spectrum that is distinct from the excitation wavelengths of the LOVpep domain, such as mCherry, mScarlet, and HaloTag combined with the JF646 HaloTag ligand (Grimm et al., 2015), does not induce LOVpep-ePDZ dimerization. When it is not required to visualize LOVpep or ePDZ during light-activation experiments, you can tag them with mTagBFP2 or eGFP to ensure proper expression. This way, longer wavelengths remain free for visualization of other proteins or biological processes.

4. It is recommended to use suitable linkers (such as GGSGGSGGS or GAGAGAGAGAGA) between LOVpep, ePDZ, and the protein domains of choice to allow for proper protein folding.

5. LOVpep-ePDZ is sensitive to differences in expression levels because it is restricted to a 6-fold increase in dimerization affinity upon illumination. It is therefore recommended to use identical or comparably strong regulatory regions (promoter, 3′ UTR) to drive expression. When going for single-copy integration techniques, aim for regulatory regions that give moderate-to-strong protein expression, in order to achieve LOVpep and ePDZ levels that are sufficiently high to permit light-dependent LOVpep-ePDZ interactions. We successfully used the mex-5 promoter or the fbf-1 promoter in combination with the tbb-2 3′ UTR (sequences and expression patterns detailed in Merritt et al. (2008)) to achieve LOVpep and ePDZ germline expression that is compatible with light-dependent activation (De Henau et al., 2020; Fan et al., 2019).

6. Once the transgene is designed, codon optimization for expression in C. elegans (Redemann et al., 2011), or the model organism of choice, is advised. To specifically increase expression in the C. elegans germline and avoid germline silencing, germline-specific codon optimization is available and highly recommended (Fielmich et al., 2018). Additional approaches to avoid germline silencing are the removal of homology to piRNAs (Batista et al., 2008) and the introduction of PATC introns (Frokjaer-Jensen et al., 2016).

CRITICAL: Make sure that the final ePDZ and LOV constructs show an overlapping subcellular expression pattern, so that upon light activation LOVpep is capable of binding ePDZ. When using motor proteins, make sure that they are active in the cell of interest. Alternatively, use constitutively active or inducibly active motor proteins (Nijenhuis et al., 2020). Also make sure the cell of interest has the cytoskeletal structures that are required for the motor proteins to relocate the organelle of interest.
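As a purely hypothetical sketch (not part of the protocol's tooling), the design rules of steps 1-4 can be encoded as a quick sanity check on a planned fusion construct; the function name, domain labels, and linker list below are illustrative assumptions only.

# Hypothetical helper: sanity-check a fusion-construct layout against the
# design rules in steps 1-4 (illustrative, not part of the published protocol).
BLUE_SENSITIVE = {"mTagBFP2", "eGFP", "Venus"}   # excited within ~400-520 nm
LINKERS = {"GGSGGSGGS", "GAGAGAGAGAGA"}          # linkers suggested in step 4

def check_construct(domains, visualize_during_activation=False):
    """domains: N- to C-terminal list, e.g. ['TOMM-20', 'GGSGGSGGS', 'LOVpep']."""
    problems = []
    # Step 2: LOVpep binds ePDZ via its C-terminal Jalpha-helix, so it must be last.
    if "LOVpep" in domains and domains[-1] != "LOVpep":
        problems.append("LOVpep must be the C-terminal domain (free Jalpha-helix).")
    # Step 4: recommend a linker between fused functional domains.
    if not any(d in LINKERS for d in domains):
        problems.append("No linker found between fused domains.")
    # Step 3: fluorophore choice depends on whether we image during activation.
    if visualize_during_activation and BLUE_SENSITIVE & set(domains):
        problems.append("Fluorophore excited at 400-520 nm would activate LOVpep.")
    return problems

print(check_construct(["TOMM-20", "GGSGGSGGS", "mTagBFP2", "LOVpep"],
                      visualize_during_activation=True))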
Assemble and integrate the plasmids carrying LOVpep and ePDZ constructs

Timing: 2 weeks

Assembling the LOVpep and ePDZ constructs requires combining multiple genetic elements. Modular cloning techniques such as Gateway (Walhout et al., 2000), Gibson (Gibson et al., 2009), and Golden Gate (Engler et al., 2009) are preferred, given that they are faster and hence more favorable for screening for functional constructs. New constructs for photo-inducible heterodimerization do not always result in the expected organelle manipulation and might, for example, suffer from high levels of dark-state heterodimerization or low-efficiency light-induced heterodimerization. It might therefore be necessary to test alternative organelle adaptors, relocalization proteins, mutants of LOV-ePDZ with different affinity properties (Strickland et al., 2012), or even different light-inducible heterodimerization systems (Adrian et al., 2017; Hallett et al., 2016; Nijenhuis et al., 2020) before the desired experimental setup is achieved.

To stably introduce transgenes in C. elegans, we recommend generating plasmids that are compatible with the Mos1-mediated Single-Copy Insertion (MosSCI) method (Frokjaer-Jensen et al., 2008, 2012, 2014). This method creates single-copy transgenes at defined positions within the genome, which facilitates subsequent crossing of the transgenes into a single line, generates comparable expression levels of the transgenes, and decreases the probability of germline silencing. Alternatively, CRISPR/Cas9-mediated gene editing (Nance and Frokjaer-Jensen, 2019) can be used when endogenous proteins need to be tagged. With the above in mind, we recommend the recently developed Golden Gate cloning technique SapTrap to assemble C. elegans MosSCI transgene vectors (Fan et al., 2019) carrying LOVpep and ePDZ constructs in a fast, efficient, inexpensive, and scar-free manner (Figure 2). The MosSCI backbone and donor plasmids carrying LOVpep and ePDZ that are compatible with this method are available at Addgene (Fan et al., 2019). In the following we detail how MosSCI transgene vectors can be generated using the SapTrap method.

7. Find templates encoding the parts you want to combine and order primers to produce gene fragments flanked by SapI restriction sites. Key to the SapTrap method is that SapI cuts DNA at defined positions adjacent to its recognition sequence to generate three-base 5′ overhangs (Figure 2; red sequence/bars = recognition site, blue sequence/bars = three-base 5′ overhangs). By designing SapI restriction fragments with complementary overhangs, multiple fragments can be assembled together in a defined order in a single digestion and ligation reaction (Figure 2B; the defined order is illustrated by the fragments ranging from light gray to dark gray). Primer design and examples of primers can be found in Figure 2C and in the study by Fan et al. (2019). If templates are not available, codon-optimized sequences can be ordered; we order sequences as gBlocks (Integrated DNA Technologies).

8. When you amplify your gene fragments with your primers and template DNA, use a high-fidelity PCR kit as per the manufacturer's instructions.

9. Generate donor plasmids by cloning the PCR products or gBlocks into the pCR BluntII vector backbone using the Zero Blunt Topo system (Thermo Fisher Scientific), as per the manufacturer's instructions (Figure 2A).

10.
Transform the Zero Blunt Topo assembly reaction into chemically competent cells: thaw 50 µL of competent cells on ice. Still on ice, add 5 µL of the assembly reaction to the cells and incubate for 20-30 min. Heat shock the cells at 42°C for 30 s. After transformation and before plating, add 0.2 mL of 18°C-24°C SOC medium and shake at 225 rpm and 37°C for 1 h. Spread the cells onto pre-warmed kanamycin-selective plates and incubate 16-20 h at 37°C.

Pause point: Bacterial colonies can be stored at 4°C for up to 2-3 weeks.

11. Pick colonies, grow 16-20 h at 37°C, and extract the plasmid DNA using a miniprep kit, as per the manufacturer's instructions. Carry out diagnostic restriction enzyme digests followed by sequencing to confirm that the donor plasmid has the correct, mutation-free insert.

12. Adjust the volume of the donor plasmids to a final concentration of 50 nM.

13. MosSCI targeting vectors are assembled using the SapTrap method in a single tube (Figure 2B) using the following protocol: add 1 µL of 50 nM pXF87 (MosSCI backbone, available at Addgene), 1 µL of 50 nM of each donor cassette plasmid, 5 µL of 10× NEB CutSmart buffer, 5 µL of 10 mM ATP (not dATP), 1 µL of SapI enzyme (10 units), 1 µL of T4 DNA ligase (400 units), and ddH2O to a final volume of 50 µL. Incubate this reaction mixture 5 min at 37°C (SapI digestion) and 5 min at 16°C (ligation), repeating this for a total of 35 cycles. This is followed by a final SapI digestion step of at least 1 h and up to 16 h at 37°C to cut any remaining, unligated pXF87 backbone. After this final step, put the tube immediately on ice and use 5 µL for transformation into chemically competent cells as described above. Spread the cells onto pre-warmed ampicillin-selective plates and incubate 16-20 h at 37°C (Troubleshooting 1, Troubleshooting 2).

14. Isolate plasmid DNA and screen for correct assembly using diagnostic restriction digests and sequencing. In our hands, between 10% and 90% of colonies have correctly assembled plasmids.

15. Once the MosSCI transgene vectors are made, they can be integrated at defined landing sites in the chromosome of choice by injecting the vectors into universal MosSCI strains (Frokjaer-Jensen et al., 2014). We co-inject the MosSCI transgene vector (50 ng/µL) together with a helper plasmid encoding the Mos1 transposase (50 ng/µL pCFJ601), and with three negative selection markers to select against extrachromosomal-array-bearing transgenic animals (10 ng/µL pMA122, 2.5 ng/µL pCFJ90, and 5 ng/µL pCFJ104) (Frokjaer-Jensen et al., 2014). For a detailed protocol describing the microinjection procedure, please refer to Berkowitz et al. (2008).

16. Injection of 20-30 animals is in general sufficient to obtain at least one stable transgenic line. Place 3-4 injected animals per NGM plate with OP50 E. coli and keep them at 25°C until they are starved (7-10 days).

17. Heat-shock the animals for 2 h at 34°C to activate the peel-1 toxin (encoded by pMA122) and selectively kill animals that carry extrachromosomal arrays. Using an air incubator with a fan and spreading out the plates evenly ensures that the plates warm up relatively fast to 34°C to efficiently induce the heat shock.

18. 4-24 h after the heat shock, screen for plates that contain animals that are alive, move well, and lack the fluorescent co-injection markers. Chunk positive plates to new seeded NGM plates. Two days later, transfer a single adult animal to a new seeded NGM plate.
If possible, screen for expression of the inserted transgene (germline expression, for example) before picking a worm to set up a clonal culture.

19. In the isolated lines, verify that the transgene is expressed and localizes to the expected location. For germline expression, note that it can take 1-3 generations before germline silencing of the transgene disappears and the transgene becomes expressed (Troubleshooting 3).

20. Verify that the transgene has been correctly inserted at the chromosome of choice using the oligos described on www.

STEP-BY-STEP METHOD DETAILS

Prepare imaging setup

Timing: 10 min

CRITICAL: Very low levels of blue light are sufficient to activate LOVpep, so all environmental blue light that could reach the animals during culturing and sample preparation needs to be eliminated as much as practically feasible.

1. Store the animals in a box that blocks environmental light, e.g., by wrapping the box in aluminum foil.

2. If the room used for sample preparation and imaging needs to be illuminated, use a light source filtered for blue light. If necessary, aluminum foil can be used to cover the microscope setup, and blue light from monitors can be eliminated by adjusting the hardware display settings.

3. Insert optical (orange) filters into the light paths of the dissection scope and of the imaging microscope to remove LOVpep-activating wavelengths from the transmitted light used to handle samples.

Prepare microscope settings for global and local light-induced heterodimerization

Timing: 3-4 h

CRITICAL: The conditions used for light-induced activation need to be thoroughly optimized to ensure minimal activation before the experiment or outside the region of interest, especially since LOVpep activation requires very low light exposure.

4. To set up the microscope, use a strain in which heterodimerization is straightforward to score and where ePDZ is tagged with a red or far-red fluorophore so its distribution can be visualized without activating LOVpep.

a. For example, a strain expressing cytosolic ePDZ combined with a membrane-anchored LOVpep allows for easy scoring of membrane recruitment of ePDZ-mCherry (De Henau et al., 2020; Fielmich et al., 2018) (Figure 3A, Methods videos S1 and S2). A cytosolic ePDZ combined with an organelle-anchored LOVpep works equally well.

5. Assess for unintended light-induced heterodimerization during sample preparation (Figure 4).

a. Prepare the sample for imaging, while preventing exposure to environmental blue light as described above.

b. Locate the sample of interest using transmitted light that has been filtered for blue light and acquire a snapshot of the ePDZ fusion protein. Choose parameters for which you can still see ePDZ relocation and which are compatible with your experiments.

c. Take along a negative control that only expresses the ePDZ fusion construct. No relocation or phenotype (e.g., light-induced toxicity) should be observable using the settings from (b).

Optional: After acquisition, process the time-lapse images to reduce background noise and segment the activated organelle/structure if needed for signal quantification.

d. Calculate the ratio of the fluorescence intensity inside the activated ROI to the intensity in a ROI of similar size outside the area of activation. This analysis produces a maximum intensity ratio as well as the activation half-life (Figure 3B).

e. With the results from the previous step, determine the minimal amount of power needed to achieve light-induced activation.
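A hedged sketch of the analysis in steps (d)-(e) follows: compute the per-frame ratio of mean ePDZ intensity inside the activated ROI to a same-sized ROI outside, then fit a saturating exponential to estimate the activation half-life. The array names, shapes, and synthetic demo numbers are assumptions, not the protocol's actual pipeline.

import numpy as np
from scipy.optimize import curve_fit

def roi_ratio(stack, roi_in, roi_out):
    """stack: (T, Y, X) time-lapse; roi_in/roi_out: boolean masks of equal area."""
    inside = stack[:, roi_in].mean(axis=1)
    outside = stack[:, roi_out].mean(axis=1)
    return inside / outside                     # intensity ratio per time point

def rise(t, r0, r_max, k):
    # Simple saturating exponential; activation half-life = ln(2)/k.
    return r0 + (r_max - r0) * (1 - np.exp(-k * t))

# Synthetic demo data standing in for a real acquisition (2 s per frame,
# ~40 s activation half-life as quoted in the Note below).
t = np.arange(0, 120, 2.0)
ratio = rise(t, 1.0, 2.5, np.log(2) / 40) + np.random.normal(0, 0.03, t.size)
(r0, r_max, k), _ = curve_fit(rise, t, ratio, p0=(1.0, 2.0, 0.02))
print(f"max intensity ratio ~ {r_max:.2f}, activation half-life ~ {np.log(2)/k:.0f} s")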
Note: The sensitivity and the activation and reversion kinetics of LOVpep-ePDZ dimerization following light illumination are important parameters to consider. They determine how often the proteins need to be exposed to blue light in order to induce, maintain, and turn off dimerization to obtain the desired experimental conditions. Light-induced dimerization of LOVpep-ePDZ occurs with a half-life of approximately 40 s and will revert to its original state in the dark with a reversion half-life of approximately 50 s (Fielmich et al., 2018; Hallett et al., 2016) (Figure 3B). This is relatively fast and in practice means that photoactivation is carried out in between every image acquisition to maintain dimerization.

h. Also here, take along a negative control that only expresses the ePDZ fusion construct. No relocation should be observable using the settings used in (g).

CRITICAL: When designing and interpreting experiments, keep in mind that ePDZ will bind to LOVpep in the dark state as well, although with an approximately 6-fold lower affinity compared to the photo-activated state.

Note: Fusing LOVpep to different protein domains might influence activation efficiency or prevent ePDZ binding altogether. For example, we needed approximately 10 times more energy to activate LOVpep anchored to the membrane using a transmembrane region (Figure 6), compared to anchoring it using a pleckstrin homology domain (Figure 3B) (De Henau et al., 2020).

7. Perform light-induced heterodimerization experiments.

a. Using the settings for 488 nm light-induced activation determined in step 6, set up your imaging parameters.

b. Locate the sample of interest using transmitted light that has been filtered for blue light.

c. As noted above, global light-induced heterodimerization can be achieved similarly to acquiring GFP images, while local light-induced heterodimerization is achieved by exposing only the region of interest (ROI) to a 488 nm laser.

d. Afterwards, determine if the light-induced heterodimerization of LOVpep and ePDZ has the desired effect on organelle relocation. For example, to determine if local activation of membrane LOVpep effectively trapped mitochondria labeled with ePDZ in the activated area, we used particle-tracking analysis of the mitochondrial signal (Figure 5C) (De Henau et al., 2020). Alternatively, organelle redistribution dynamics can be quantified by comparing changes in total intensity in the activated and non-activated areas (Nijenhuis et al., 2020).

Note: Trapping of an organelle, for example trapping of mitochondria near the cell membrane (Figure 5) (De Henau et al., 2020), will most likely result in only moderate organelle enrichment at the site of interest. Trapping also requires the organelle of interest to occasionally be present at the site of interest in order for LOV-ePDZ dimerization to be able to occur upon light activation. We therefore prefer to refer to this method as local trapping of an organelle rather than relocation.

EXPECTED OUTCOMES

Using a strain in which heterodimerization is straightforward to score, such as cytosolic ePDZ and membrane-anchored LOVpep, there will be no or minimal dimerization observable in the dark state (Figure 6). In such a strain, global light activation will cause a clear and fast dimerization of ePDZ and LOVpep (Methods video S1).
Local light activation will cause clear ePDZ-LOVpep dimerization in the activated region, with an expected 2- to 6-fold increase in the relative fluorescence intensity of ePDZ at the subcellular location of LOVpep, with lower levels of dimerization observable in the surrounding region (Methods video S2, Figures 3 and 6). Exposure to laser light of 561 nm or above will not cause dimerization, nor will exposure to transmitted light passed through an orange optical filter. LOVpep and ePDZ fusion constructs that are expressed individually will not change subcellular distribution upon light activation. Using LOVpep-ePDZ for organelle relocation will cause moderate to strong redistribution of the tagged organelle, depending on the cellular machinery that is used (and available) for relocation (Figure 7, compare panels C and E; Methods videos S3, S4, and S5).

LIMITATIONS

Blue-light-induced LOVpep-ePDZ dimerization is a powerful technique that allows one to address the importance of protein and organelle positioning. The technique is applicable to many organisms, and the single wavelength of light necessary to manipulate dimerization makes for a simple experimental setup. However, newly designed constructs for photo-inducible heterodimerization do not always result in the desired organelle manipulation and/or can show unwanted dark-state binding, and it is likely that alternative organelle adaptors, proteins used for relocalization, LOV-ePDZ variants with altered affinity (Strickland et al., 2012), or even other light-inducible heterodimerization systems need to be tested. In addition, LOVpep does not tolerate C-terminal fusions, posing a problem for directly labeling a number of organelle adaptors (van Bergeijk et al., 2015). It is therefore recommended to combine the technique of light-inducible heterodimerization with an efficient and fast cloning approach, to be able to rapidly test alternative fusion constructs.

Given the high sensitivity of the LOVpep-ePDZ system, preventing unwanted activation by environmental light or light scattering during culturing and during imaging can be difficult (see for example Figure 3B, asterisks). Activation is also caused by imaging fluorescent proteins that have spectral overlap with the LOVpep domain, such as mTagBFP2 and Venus. Combining the LOVpep-ePDZ system with multiple fluorescent proteins can therefore become challenging and requires careful controls for aberrant activation. In addition, ePDZ will bind to LOVpep in the dark state as well, albeit with an approximately 6-fold lower affinity compared to the photo-activated state. LOVpep-ePDZ dimerization following light illumination has relatively short activation and reversion half-lives. While these properties allow organelle position to be controlled with high spatial and temporal resolution, they also mean that the system needs to be continuously illuminated with blue light if stable activation is desired. This can be technically challenging to combine with fast live imaging and might potentially induce phototoxicity in long-term imaging sessions. Finally, as discussed earlier, several light-inducible protein dimerization techniques have been developed, each with its own sensitivity and dynamic range. The choice of system to use will ultimately depend on the requirements for speed of activation, reversibility, and depth of tissue to be accessed.
It is therefore recommended to understand the advantages of each system before deciding which one to use (Nijenhuis et al., 2020; van Bergeijk et al., 2015).

Problem 1

No colonies after transformation of the SapTrap assembly.

Potential solution

Make sure all the SapI cleavage sites are compatible. Make sure to resuspend SapI before pipetting; SapI is prone to precipitation. Make sure to use ATP and not dATP.

(Figure 7 caption fragment: Upon heterodimerization of LOVpep-ePDZ, dynein is expected to transport mitochondria along these microtubules in the retrograde direction, toward the centrosomes and (pro)nuclei. (C-E) (C) Mitochondrial dynamics in WT embryos that do not express LOVpep and ePDZ, and (D-E) in embryos that express dynein heavy chain fused to ePDZ (epdz::mcherry::dhc-1) and mitochondria labeled with LOVpep. Embryos in (C) and (E) were exposed to 488 nm laser light, with the laser intensity at 0.001% and a pixel dwell time of 8 ms, applied in between each time point (5 s interval). The embryo in (D) was imaged under the same conditions but with the 488 nm laser shut off and shielded from all environmental blue light. Note that mitochondria relocate toward the center of the cell (toward the (pro)nuclei), in a moderate manner in (D) and an even more pronounced manner in (E). The anterior side of the embryos is to the left; t(0) = contact between maternal and paternal pronuclei. Mitochondria were visualized using MitoTracker Deep Red FM (Thermo Fisher Scientific).)

Problem 2

Many colonies with no insert after transformation of the SapTrap assembly.

Potential solution

Make sure the last 37°C cutting step is at least 1 h. In addition, place the reaction tube immediately on ice after the SapTrap assembly protocol is finished; leaving it at room temperature (18°C-25°C) allows re-ligation of the cut, empty pXF87 backbone.

Problem 3

No expression of a transgene construct designed for germline expression.

Potential solution

Verify that the construct has no internal stop codons or design flaws, for example by expressing it in non-germline tissues. If the construct is properly expressed in non-germline tissues, it likely suffers from germline silencing when targeted to the germline. To circumvent silencing, optimize the sequence for germline expression (Fielmich et al., 2018), remove homology to piRNAs (Batista et al., 2008), and/or introduce PATC introns (Frokjaer-Jensen et al., 2016).

Problem 4

Significant unintended light-induced heterodimerization during sample preparation.

Potential solution

Make sure to work in a closed room so that all environmental light that might reach the sample can be controlled. Ideally, perform sample preparation and imaging in the same room to avoid light exposure in between these two steps.

Problem 5

Significant light-induced heterodimerization outside the activated ROI.

Potential solution

Lower the exposure to the activating 488 nm laser to reduce light scattering. In addition, consider whether the mobility of the tags used to anchor LOVpep might explain diffusion of activated heterodimers outside the ROI, and potentially replace these with tags of lower mobility.

Problem 6

No light-induced heterodimerization or organelle translocation.

Potential solution

Analyze whether LOVpep and ePDZ in the fusion constructs are still functional, by combining them with characterized and suitable complementary constructs. For example, to know if an organelle-targeted LOVpep fusion construct is functional, combine it with a cytoplasmic ePDZ and determine if ePDZ relocates to the organelle after light activation.
If both constructs are functional but no organelle translocation is observed, make sure that the constructs effectively overlap in subcellular location and are able to interact. When motor proteins are used, make sure that they are active at the time of light activation and that the cytoskeletal network that is needed for translocation is available.

Problem 7

Heterodimerization or organelle translocation is observed in dark-state conditions.

Potential solution

Firstly, make sure that the dark-state condition lacks all forms of blue light: use filters that block blue light in the dissection scope and microscope, illuminate the working room with red light instead of white light, turn off blue-light emission from computer monitors, etc. Secondly, the binding affinity of LOVpep and ePDZ in the dark state can be sufficient to cause constitutive and unwanted organelle translocation even in the absence of all forms of blue light. As mentioned above, we observed a clear example of dark-state mitochondrial translocation when using a combination of mitochondrial ePDZ and LOVpep fused to dynein heavy chain DHC-1 (Figure 7D). Reducing dark-state activation might be achieved by lowering the expression levels of both constructs, or of the LOVpep-containing construct in case overexpression constructs are used (Nijenhuis et al., 2020). Fusing alternative motor proteins to LOVpep could also help to reduce dark-state activation. A noteworthy example here is the development of a photosensitive kinesin that is activated upon blue light; this adds a second layer of light-sensitive control and effectively reduced dark-state activation (Nijenhuis et al., 2020). Finally, mutants of LOVpep or ePDZ with lower dark-state binding affinity (Strickland et al., 2012), or other light-inducible heterodimerization systems with lower dark-state binding affinity, such as the milli variant of the iLID-SspB system, could help reduce dark-state organelle translocation.

RESOURCE AVAILABILITY

Lead contact

Further information and requests for resources and reagents should be directed to, and will be fulfilled by, the Lead Contact, Sasha De Henau (sasha.dehenau@gmail.com).

Materials availability

This study did not generate any unique materials or reagents.

Data and code availability

This study did not generate any unique datasets or code.
2021-01-26T05:31:19.822Z
2021-01-13T00:00:00.000
{ "year": 2021, "sha1": "36c397fdd051ceb29d20a52d045759b37010138c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.xpro.2020.100273", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "36c397fdd051ceb29d20a52d045759b37010138c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
236586504
pes2o/s2orc
v3-fos-license
Identification of Gibberellins and Cytokinins in Saladette Tomato Seeds

Aims: Tomato (Solanum lycopersicum L.) is one of the most important vegetable crops worldwide, mainly as a result of its economic and nutritive contribution to human society. The presence of endogenous gibberellins and cytokinins in the seeds of several crops has been related to good germination; however, little is known in tomato. Therefore, the aim of this study was to analyze and identify the presence of gibberellins and cytokinins in saladette tomato seeds.

Study Design: Laboratory analyses for gibberellins and cytokinins were conducted in solvent-solution groups with three technical replicates using a completely randomized design, and results, when applicable, were analyzed using the statistical program 'RStudio' (version 10), with the data obtained subjected to a comparison of means with the Tukey (P≤0.05) test.

Place and Duration of Study: The experiment was conducted at the Department of Horticulture in Universidad Autónoma Agraria Antonio Narro, Saltillo, Mexico, during 2020-2021.

Methodology: Lots of 50 grams dry weight of saladette "SVTE8832" tomato seed samples were freeze-dried and prepared with several organic solvents for the extraction, purification, and identification of gibberellins and cytokinins using the techniques of gas chromatography-mass spectrometry (GC-MS) and gas chromatography-mass spectrometry with selected ion monitoring (GC-MS-SIM), respectively.

Results: Gibberellins A1, A4, A7, A9, A12, A15, A17, A20, A44 and A53 were found in tomato seed tissue. The cytokinins zeatin and zeatin-riboside were also detected in the analyzed tomato samples.

Conclusion: The endogenous gibberellins A1, A4, A7, A9, A12, A15, A17, A20, A44, A53 and the cytokinins zeatin and zeatin-riboside are present in saladette "SVTE8832" tomato seeds.

INTRODUCTION

Tomato (Solanum lycopersicum L.) is one of the most important vegetable crops worldwide, mainly as a result of its economic and nutritive contribution to human society. It accounts for 30% of the world's total cultivated horticultural land. China is the main producer and consumer of tomato, whereas the USA is the principal importer of this crop. Recent statistics indicate that Mexico is the main exporter of tomato to the USA and Canada, accounting for 90% and 65% of their total imports, respectively [1]. A healthy tomato seedling depends on a high-quality seed with good germination capacity, in which endogenous hormones play an important role [2]. Gibberellins and cytokinins have been related to the seed germination of various crops such as peppers, lupins, and apple [3]. Tomato and hot pepper seed treatments with gibberellic acid and 6-benzylaminopurine in a range of 40-70 mg/L have been shown to increase germination percentage and seedling growth. The presence of endogenous gibberellins such as GA4, GA7, and GA9 has been reported in apple seeds and has been related to an improvement in germination and seedling development [4]; however, the presence of gibberellins and cytokinins in tomato seeds is less well documented [4,5]. Therefore, the aim of this study was to investigate the possible presence of endogenous gibberellins and cytokinins in saladette tomato seeds.

Plant Material, Site and Design

The present investigation was conducted during the period 2020-2021 at the plant physiology laboratory, Horticulture Department, Universidad Autónoma Agraria Antonio Narro in Buenavista, Saltillo, Coahuila, México.
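A hedged sketch of the stated statistical analysis (completely randomized design, three technical replicates, Tukey comparison of means at P≤0.05) is given below. The paper used R/RStudio; this Python equivalent with synthetic numbers only illustrates the workflow, and the group labels and values are assumptions.

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
fractions = ["10%", "20%", "30%", "40%", "60%"]     # ethyl acetate/n-hexane fractions
groups = np.repeat(fractions, 3)                    # three technical replicates each
response = np.concatenate([rng.normal(mu, 1.0, 3)   # synthetic GA signal per fraction
                           for mu in (5, 9, 7, 6, 11)])
print(pairwise_tukeyhsd(response, groups, alpha=0.05))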
Extraction, Purification and Identification

Gibberellins

Saladette "SVTE8832" tomato seed samples (50 g dry weight) were imbibed in water for 8 to 72 hours, frozen, freeze-dried, and ground. The extraction and purification procedure prior to GC-MS analysis of endogenous gibberellins, as reported by Ramirez et al. [6], is illustrated in Fig. 1. Purified tomato seed extracts were dissolved in a few drops of methanol and methylated with diazomethane. A portion of the methylated extract was dissolved in pyridine and treated with trimethylchlorosilane and hexamethyldisilazane. Aliquots were examined using a Pye 104 GLC coupled through a silicone membrane separator to an AEI MS30 dual-beam mass spectrometer. Silanized glass columns (213 x 0.2 cm) were packed with 2% SE-33 on 80-100 Gas Chrom Q. The He flow rate was 25 mL/min, and the column temperature was programmed from 180°C to 280°C at 2°C/min. Mass spectra were determined at 24 eV at a source temperature of 210°C and a separator temperature of 190°C, with a scan speed of 6.5 s per mass decade. The spectra were recorded by a DEC Linc 8 computer. Identification of the gibberellins was conducted by comparison of the KRI and MS spectra of their methyl ester trimethylsilyl ether derivatives with those of authentic samples.

Cytokinins

The procedure for the extraction and purification of cytokinins is presented in Fig. 2. Purified samples were prepared and identified as described above, using the modified technology reported by Palni et al. [7,8] and by Nandi et al. [9]. The permethylated cytokinin fractions were quantified using a gas chromatograph-mass spectrometer (GC-MS, QP-5000; Shimadzu Inc., Kyoto, Japan) for selected ion monitoring (SIM) analysis with a fused silica capillary column (CBP1, 0.22 mm i.d. x 25 m; Shimadzu Inc., Kyoto, Japan), according to Watanabe et al. [10]. Each sample was replicated three times using a completely randomized design, and the data were analyzed with the statistical program 'RStudio' (version 10) and subjected to a comparison of means with the Tukey (P≤0.05) test when applicable.

Endogenous Gibberellins

The analysis by mass spectrometry was focused on the prominent fragment ions in the peaks corresponding to the retention times of authentic GAs, as described in detail by Ramirez et al. [6]. Both active and inactive gibberellins were identified in seeds of saladette "SVTE8832" tomato (Fig. 3). The gibberellins were extracted, eluted through a silicic acid column with ethyl acetate in n-hexane, and identified by GC-MS. The biologically inactive gibberellins A9, A12, and A15 were found in the 10% ethyl acetate/n-hexane fraction. The biologically active A4 and A7 were found in the 20% fraction, the inactive A20 and A53 in the 30% fraction, and A17 and A44 in the 40% fraction. A1, which is highly active, was detected in the 60% fraction (Fig. 4). Table 1 shows the frequency of gibberellins during the seed water-imbibition period. The biologically active gibberellins A1, A4, and A7 were present at all times. GA9 appeared on days 2 and 3, GA12 on day 1, GA15 on day 2, GA17 on day 3, GA20 on days 2 and 3, and GA44 and GA53 on day 3. The identification in this study of gibberellins A1, A4, A7, A9, A12, A15, A17, A20, A44, and A53 may reflect the complicated role of these hormones in seed germination. The retention time, intensity, and ion number of each gibberellin reflect its particular chemical nature (Fig. 3). The extraction of each gibberellin at a different solvent percentage, as seen in Fig.
4, reflects the polarity of each particular hormone [7]. It is well established that the exogenous application of GA4 and GA7 promotes seed germination in tomato and several other crops [6,11]. GA1 has also been reported as a seed-germination promoter [7,12,13]. These three hormones move through different plant organs, whereas the rest of the GAs found are classified as biologically inactive [14,15,16]. On the basis of the results obtained in the present work and previous data [17], it is possible to speculate as to which gibberellins are involved in the germination process. The rate of hydroxylation has been related to the movement of gibberellins [7]. Feeding labeled gibberellins to intact fruit has shown that some degree of hydroxylation is necessary for movement. GA9 is immobile [18], but GA3 [19,20] and GA4 (Figs. 3, 4) both move from the fruit into spur tissue [6]. Therefore, it is likely that gibberellins A9 and A12 are immobilized owing to their lack of hydroxylation. GA4 moved out of fruits into the bourse bud without being further hydroxylated [6]. On the basis of these results, it seems that the more highly hydroxylated gibberellins such as GA1, GA4, and GA7 [19,20] may be involved in the process of tomato seed germination, whereas gibberellins A9, A12, A15, A17, A20, A44, and A53, which lack hydroxylation, appear to be less important for germination than those with more hydroxyl groups (Fig. 4). Chen et al. [15] have proposed that tomato seed gibberellin 2-oxidases may play a direct role in germination and other physiological processes. Therefore, the presence of the range of gibberellins found in the seed extracts opens further possibilities for explaining the way in which they exert their effect on the germination of tomato seeds.

Endogenous Cytokinins

The present study resulted in the identification of the endogenous cytokinins zeatin and zeatin-riboside (Figs. 5, 6). The presence of these two endogenous plant hormones (Fig. 7) was consistently evident at all times during the imbibition process (Table 2). The possible involvement of endogenous cytokinins in the process of seed germination is not well documented. Aremu et al. [21] have documented the beneficial effects on yield and fruit quality when exogenous cytokinins are applied to several fruit crops. There is also evidence that cytokinins may be a key driver of seed yield [22], or may act as a fruit-growth promoter when seed physiological function is deficient [23]. Matsuo et al. [24] established the role and regulation of cytokinins in tomato fruit development; these researchers thereby underline the importance of cytokinins in seed and fruit development. The findings of Z and ZR in this study are supported by the reports of Emery et al. [25], who found cis-zeatin, trans-zeatin, and zeatin-riboside during seed development in white lupin, and by Rijavec and Dermastia [26], who pointed out the importance of these cytokinins in developing seeds. Although the results of this study and the reports of other authors are useful [27,28], more research is needed on the presence of cytokinins in tomato seeds and their possible role during seed germination.
2021-08-02T00:06:03.135Z
2021-05-08T00:00:00.000
{ "year": 2021, "sha1": "4069888040f834da2f6d3613ada258628de02411", "oa_license": null, "oa_url": "https://www.journalijpss.com/index.php/IJPSS/article/download/30454/57149", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "887cbe8ccce46f9791b0d048fa8edf6fa8ed7792", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270016804
pes2o/s2orc
v3-fos-license
Microbial infections in burn patients Polymicrobial infections are the leading causes of complications incurred from injuries that burn patients develop. Such patients admitted to the hospital have a high risk of developing hospital-acquired infections, with longer patient stays leading to increased chances of acquiring such drug-resistant infections. Acinetobacter baumannii, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Proteus mirabilis are the most common multidrug-resistant (MDR) Gram-negative bacteria identified in burn wound infections (BWIs). BWIs caused by viruses, like herpes simplex and varicella zoster, and fungi, like Candida species, appear to occur occasionally. However, the preponderance of infection by opportunistic pathogens is very high in burn patients. Variations in the causative agents of BWIs are due to differences in geographic location and infection control measures. Overall, burn injuries are characterized by elevated serum cytokine levels, systemic immune response, and immunosuppression. Hence, early detection and treatment can accelerate the wound-healing process and reduce the risk of further infections at the site of injury. A multidisciplinary collaboration between burn surgeons and infectious disease specialists is also needed to properly monitor antibiotic resistance in BWI pathogens, help check the super-spread of MDR pathogens, and improve treatment outcomes as a result. INTRODUCTION The human skin is the largest anatomical organ involved in various physiological functions like thermoregulation, maintaining homeostasis, proprioception, and protection from external agents [1]. The skin is man's physical barrier to resist pathogen attack. Conditions that lead to loss of skin integrity therefore have numerous serious consequences [1]. BURN INJURIES Burn injury is a major global public health crisis. It disrupts the epidermal barrier, leading to down-regulation of both local and systemic immune responses [1]. As a result, burn wounds become an ideal breeding ground for microbes [1,2]. The burn wound serves as an ideal microenvironment predominated by biological fluids called burn wound exudates (BWEs), which collectively create a perfect niche for the growth of pathogens [3]. First-degree (superficial) burns damage only the epidermal layer, so they heal rather quickly without scarring [2]. Second-degree (partial-thickness) burns involve the deeper layers of the epidermis and dermis and heal slowly [2]. Third-degree (full-thickness) burns fully destroy the epidermal and dermal layers of the skin and can also cause significant damage to the underlying tissues and bones as well [2]. EPIDEMIOLOGY OF BURN INJURIES Burn is a very common and devastating form of trauma. It has been ranked seventh among all traumatic injuries by the World Health Organization, with a crude mortality rate of 5% [4]. Across the world, around 2.65 lakh (265,000) deaths occur every year due to burn injuries. Such cases are more prevalent in developing and under-developed countries, and in these cases, patient mortality potential soars up to 100% with burns covering more than 40% of the total body surface area [3,5]. Around 80% of burns occur at home [6]. Domestic burn injuries are more common among children and adolescents [6,7].
Asia records the highest number of intentional burn injuries in the world, with Southeast Asia topping the list, followed by Africa [8]. Among the Asian countries, India records the highest number of cases of intentional self-harm by burning, followed by Pakistan, Bhutan, and Bangladesh. Africa records the highest mortality rate from burn injuries, 23.5% per year [8]. In India, 65% of the burn victims are young women, due to self-immolation or domestic violence [9]. On the other hand, in Africa, children are the predominant victims of burn injuries. Prevention of burn injuries in Asia and Africa is hampered by high population density, lack of education, low income levels, and poor surveillance systems [9]. Reported cases of burn injuries are significantly lower in continents like North America, South America, Australia, and Europe [8]. The victims of intentional self-harm by burning in Europe are more prevalent among men in the age group of 40-50 years [9]. Australia records the highest number of admissions of burn patients in hospitals each year, followed by Asia. These developed continents are in a much better situation concerning burn injuries compared to the under-developed and developing countries. Polymicrobial infections are responsible for 75% of all deaths from burns [2]. The risk factors influencing microbial infections at the burn site include the size and surface area of the burn, age, immune status, the degree of burn, and comorbidities [2]. ETIOLOGY OF BURN INJURIES Burns occur at temperatures above 44 °C [10]. Trans-epidermal necrosis happens in just a second at 70 °C, while it takes 45 minutes at 47 °C [10]. Fire flaming and scalding represent 23.8% and 66.2% of burn injury cases, respectively [11]. The remaining 10% of burns have other causes [11]. Scalding causes first- or second-degree burns, while flame causes second- or third-degree burns [10]. PATHOGENS OF BURN WOUND INFECTIONS Following burns, microorganisms colonize and grow quickly at the site of injury due to the loss of the skin barrier. The skin barrier otherwise serves as the first line of immune defense for any individual [12][13][14]. Any breach in the skin allows for easy entry and access of the infecting microbe to the inner tissues of the body, thus complicating the etiology [12][13][14]. Hence, it has been observed that microbial infections, especially those caused by multidrug-resistant (MDR) bacteria, including Pseudomonas and Acinetobacter, are the main cause of increased morbidity and mortality in burn patients [12][13][14]. Key points: ■ The loss of skin epidermis due to burn injury provides easy access for different microorganisms to enter the human body and cause infections. ■ Most of the complications related to burn injuries that are reported occur due to the increased susceptibility to several other secondary diseases caused by microbial infections. ■ Multidrug-resistant bacterial, yeast, fungal, and viral infections of burn wounds are very common during prolonged hospitalization, and immunosuppression is the main cause. The 2016 National Burn Repository Report mentioned that
seven out of the ten most frequent complications in burn patients are attributed to polymicrobial burn wound infections (BWIs), with urinary tract infections (UTIs), pneumonia, and cellulitis topping the list and respiratory tract infections being the most frequently reported [13]. After a burn injury, the duration of hospitalization is directly proportional to the types of bacterial species that infect the patients, with the major contributor to infection being Staphylococcus aureus (Figure 1, Table 1) [15]. During the first week of hospitalization, skin and soft tissue infections predominate, whereas pneumonia, UTIs, and bloodstream infections tend to occur later during the stay (Table 2) [15]. Gram-Positive Bacteria The most commonly found Gram-positive bacteria in BWIs include Staphylococcus species (spp.), Enterococcus spp., and β-hemolytic group A Streptococci (GAS) [12]. Specifically, vancomycin-resistant Enterococci (VRE) and methicillin-resistant Staphylococcus aureus (MRSA) are the pathogens of high concern in patients with severe burns [12,13]. Over recent decades, and with the uncontrolled over-the-counter availability of broad-spectrum antibiotics, MRSA has become the most predominant pathogen in the intensive care unit of burn patients [14]. Colonization with any of these bacteria may also lead to biofilm infections, resulting in severe illness and death [14]. In most of the studies performed so far, about 86.6% of the S. aureus isolates found were methicillin-resistant, a major pathogen of hospital-acquired infections (HAIs) in most countries [16]. The toxic products released during Staphylococcus spp. infection, such as proteinases, collagenases, and hyaluronidases, allow the bacteria to enter local tissues and the bloodstream, which in turn causes generalized systemic infection and sepsis [14]. In addition to causing pneumonia, sepsis, and other sequelae related to invasive BWIs, Staphylococci are a significant cause of graft loss when the burden of infective organisms exceeds 10^5 colony-forming units (CFUs) [17]. Vancomycin has been one of the most preferred treatments for curbing MRSA infection. Yet for the past few years, there has been an emergence of other antibiotic-resistant strains like vancomycin-intermediate Staphylococcus aureus [16]. A potential solution to this problem is offered by new antimicrobials such as linezolid (an oxazolidinone), daptomycin, tigecycline, quinupristin-dalfopristin, and dalbavancin [14]. Enterococcus has also been a Gram-positive bacterium of concern but fortunately was not seen to be fatal until the emergence of VRE [18]. Combination therapy, including ampicillin and an aminoglycoside, is nowadays used to treat VRE infections [18]. GAS (Streptococcus pyogenes) is the major cause of graft failure in burn patients, followed by group B Streptococci (Streptococcus agalactiae) [17]. These Streptococci can be eradicated with the penicillin group of antibiotics [19]. Gram-Negative Bacteria P. aeruginosa is not only a major pathogen of respiratory tract infections (HAIs) but is also ubiquitous in invasive burn wounds, owing to its preference for moist environments [20]. These bacteria are also responsible for sepsis, leading to burn-associated death [20]. Pseudomonas infections, particularly those by P.
aeruginosa, usually start as a localized, superficial lesion with a typical characteristic yellow or green color and a malodorous fruity smell, which may become an invasive infection termed "ecthyma gangrenosum," causing blue-purplish "punched-out" lesions in the skin [21]. P. aeruginosa can subsequently spread rapidly into deeper tissues to cause sepsis [22]. Because of the developing drug-resistance patterns in P. aeruginosa, piperacillin-tazobactam combination therapy is administered. Aztreonam is used as an alternative therapy for MDR P. aeruginosa [22]. The Gram-negative bacterium second on the list of high-concern microbes in burn patients is A. baumannii, because of its enhanced capacity for transfer between patients. Survivability in both wet and dry conditions, and on both inanimate and animate objects, helps it achieve this [23]. Colistin has been developed as the fallback treatment for pan-resistant Acinetobacter spp. [23]. The failure of burn treatment regimens is mostly caused by the formation of a biofilm in the burn wound microenvironment of a patient; this may lead to death in many complicated cases [24]. The bacterial community encased within a polysaccharide matrix biofilm is more resistant to disinfection and the rigors of the host immune system and, critically, more tolerant to antibiotics [22]. It is assumed that burn wound-associated biofilms act as a launch pad for the pathogenic bacteria to establish deeper, systemic infections, and ultimately bacteremia and sepsis (Figure 2) [24]. Bacteria of the genera Pseudomonas, Acinetobacter, and Staphylococcus usually adopt a biofilm-encased mode of growth, with P. aeruginosa being the most common (33.3%) burn wound isolate with biofilm-forming abilities, followed by Acinetobacter spp. (23.3%) and Staphylococcus aureus (16.6%) [25,26]. MDR Bacteria Antibiotics are used as a prophylactic measure to treat burn patients [27]. According to the European Centre for Disease Prevention and Control and the Centers for Disease Control and Prevention, among the drug-resistant (DR) bacteria there are extensively drug-resistant strains, which are resistant to at least one agent in all antimicrobial categories except a few, and pan-drug-resistant strains, which are resistant to all agents in all antimicrobial categories [28,29]. Two principal factors that govern MDR-pathogen attacks are the severity and extent of the burn and the duration of the patient's hospital stay [30]. A prolonged hospital stay increases the risk of MDR infections, mostly by Gram-negative bacteria (GNB) [30,31]. Further increases in such BWIs might be due to previous exposure to antibiotics and the use of invasive medical devices like urinary catheters [30]. This was supported by a Canadian Burn Center study, in which 125 patients were admitted [32]. Over the first 7 days, 6% of bacterial isolates were MDR, whereas after 28 days of hospital stay, this increased to 44% [32]. This increase in the prevalence of MDR-GNB during long hospital stays of burn patients is thus a serious treatment challenge [33]. Some of the most concerning MDR-GNB strains are A. baumannii, Stenotrophomonas maltophilia, P. aeruginosa, and carbapenem-resistant members of the Enterobacteriaceae family. These, along with Klebsiella pneumoniae, Proteus mirabilis, and Escherichia coli, are regarded as the most common MDR-GNB in BWIs [33,34].
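A susceptibility panel is reduced to an MDR call by the simple counting rule defined further on (resistance to at least one agent in 3 or more antimicrobial classes [37]). The following Python sketch illustrates that rule; the class groupings and susceptibility results are hypothetical placeholders, not any laboratory's actual panel.

```python
# Minimal sketch: classify an isolate as MDR when it is resistant to at least
# one agent in 3 or more antimicrobial classes [37]. The panel below groups
# hypothetical agents by class; results use "R" (resistant) / "S" (susceptible).
PANEL = {
    "fluoroquinolones": ["ciprofloxacin"],
    "aminoglycosides": ["gentamicin", "netilmicin"],
    "folate_inhibitors": ["trimethoprim-sulfamethoxazole"],
    "cephalosporins": ["ceftazidime", "cefotaxime"],
}

def is_mdr(results: dict) -> bool:
    """results maps agent name -> 'R' or 'S' for one isolate."""
    resistant_classes = sum(
        any(results.get(agent) == "R" for agent in agents)
        for agents in PANEL.values()
    )
    return resistant_classes >= 3

# Hypothetical isolate resistant to agents in three classes -> MDR
isolate = {"ciprofloxacin": "R", "gentamicin": "R",
           "trimethoprim-sulfamethoxazole": "R", "ceftazidime": "S"}
print(is_mdr(isolate))  # True
```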
In a study conducted at a burn unit of a tertiary care referral center located in North India, it was noted that MRSA and GAS were endemic; the MRSA strains were reported to exhibit resistance to erythromycin, ciprofloxacin, netilmicin, gentamicin, and cefotaxime [35]. MDR P. aeruginosa was also one of the most frequent microbes cultured from the infected burn wounds there, and 90% of those isolates displayed resistance to amikacin and ceftazidime [35]. The preliminary identification of these MDR pathogens is done by studying their physical morphology, Gram-staining properties, and biochemical characteristics [36]. Along with this, antimicrobial susceptibility tests are carried out using various antibiotics, like ciprofloxacin, gentamicin, trimethoprim-sulfamethoxazole, ceftazidime, and others, to check for the zone of growth inhibition [36]. Here, multi-drug resistance is defined as a pathogen showing resistance to at least one agent in 3 or more antimicrobial classes [37]. Yeast and Other Fungal Infections Fungi are the second major group of BWI-causing microbes [38]. BWIs caused by fungi occur as part of mono- or polymicrobial infections, opportunistic infections, fungemia, and rare aggressive soft tissue infections [39]. These infections are mostly misdiagnosed because their manifestations resemble those of bacterial infections and because of the lack of a suitable mycology laboratory [38]. These fungal infections have a very high mortality rate, and infection is only nonfatal when there is early diagnosis and treatment [40,41]. Invasive Candida infections are one of the major causes of morbidity and mortality among burn patients [42]. With the introduction of new antifungals, changes in the epidemiology and drug responses of such fungal infections have been observed [45][46][47][48]. It has been found that non-albicans Candida is becoming increasingly resistant to the common anti-mycotic substances [45][46][47][48][49]. Burn patients are usually exposed to these fungal infections after the second week of their thermal injury [50]. The high mortality rate is due to the presence of fungemia, multiple positive cultures, and deep-rooted invasion of healthy skin [51]. The age of the patient, total burn size, body surface area (30%-60%), full-thickness burns, long hospital stay, long-term artificial ventilation, inhalational injury, late surgical excision, artificial dermis, central venous catheters, fungal wound colonization, open dressing, antibiotics (such as imipenem, vancomycin and aminoglycosides), steroid treatment, hyperglycemic episodes, and immunosuppressive disorders all accentuate fungal infections in burn patients [39,[45][46][47][48]50].
The methods of diagnosis are conventional and mostly organism-specific for the identification of mycoses at the burn site [45]. Direct tissue biopsy is performed in some cases [45]. However, because fungal cultures can take a long time to grow, it sometimes becomes too late to start an appropriate anti-mycotic therapy [45]. Burn wound samples are collected at proper time intervals for laboratory diagnosis of fungal infections [52]. The burnt tissue should be excised after the 7th, 14th, 21st, and ≥28th days [51]. Tissue biopsy is done for demonstration of fungal wound infections, and the tissue-specific biopsy culture is interpreted semi-quantitatively [51]. In cultures, the germ tube test, characteristic growth on cornmeal agar, cultural characteristics on HiCrome agar, the tetrazolium reduction test, and carbon and nitrogen assimilation tests are evaluated for yeast identification [51]. Molds are identified using lactophenol cotton blue (LPCB) wet mount preparation for conidiogenesis, pattern, and arrangement [51]. Identification of non-sporulating molds is carried out using slide cultures with potato dextrose agar [51]. E-strip or broth micro-dilution tests using antifungals like amphotericin B, fluconazole, itraconazole, voriconazole, and caspofungin are used to check the antifungal susceptibility of yeasts [52]. The antifungal susceptibility of molds is tested by an E-strip test using amphotericin B [52]. If Candida albicans is isolated, a lower concentration of nystatin is needed as a local treatment, in contrast to the higher concentration needed for the other Candida spp. [40,50,53]. With burn wounds persisting longer, the propensity for fungal infections increases further [49]. Therefore, the development of pharmaceutical products to heal the wound more rapidly, advancements in topical antifungal therapy, and implementation of appropriate systemic antifungal regimes as guided by antifungal susceptibility tests help to improve the treatment outcomes for severely injured burn patients susceptible to fungal infections [50]. Viral Infections Burn patients are very susceptible to viral infections [54]. The immunosuppressed state of the patient after an injury triggers the reactivation of latent infection; this is the most common cause of viral infection post-injury [54]. Administration of acyclovir for a minimum of 10 days is the most commonly used antiviral therapy to treat viral infection [54]. Herpes simplex virus infections The frequency of both herpes simplex virus (HSV)-1 and HSV-2 infections in burn patients increases with the age of the victim [54]. There can be primary, secondary, or opportunistic HSV infections due to viral reactivation following reduced immunity in burn patients [54]. HSV infection not only impairs the healing process, prolonging the recovery time, but also causes a reduction in the number of T-lymphocytes, down-regulation of Toll-like receptor-mediated nuclear factor-κB expression, and abnormal production of interleukin (IL)-2, cytokines, and antibodies [55,56]. The viral infection manifests itself as groups of vesicopustules or rashes in the burnt area [53]. Reactivation of the latent virus in immune-debilitated burn patients causes diseases like tracheobronchitis, acute respiratory distress syndrome, pneumonia, liver necrosis, focal necrotizing hepatitis, and encephalitis [57].
Fluorescence in-situ hybridization, polymerase chain reaction (PCR), and next-generation sequencing are common methods of detecting HSV in BWIs [54]. Intranuclear eosinophilic inclusion bodies in the virus-infected cells are also looked for under a light microscope as a characteristic marker for HSV infections [54]. Cytomegalovirus infections Burn patients can also be affected by cytomegaloviruses (CMVs), either by primary or exogenous infection or by reactivation of latent infections [54]. The infection causes anomalous immune responses involving macrophage hyperactivity, enhanced cytokine production, and over-activation of T-helper cells [58,59]. A 2011 study showed that 71% of CMV infections occurred in CMV-seropositive burn patients, while only 12.5% of CMV-seronegative burn patients were affected [60]. The associated complexities include colitis, pneumonia, organ dysfunction, and encephalitis [60]. PCR, quantitative nucleic acid testing, and immunochemistry are used to detect CMV infections [54]. Histological detection involves the observation of intra-nuclear basophilic inclusion bodies with a characteristic "owl's-eye" appearance under the light microscope [54]. Varicella zoster virus infections Varicella zoster virus (VZV) infections in burn injuries are extremely rare, but when they occur, they are accompanied by critical post-infection complications with an increased mortality rate [61]. VZV infection is quite prevalent among pediatric burn patients [62]. PCR is the most sensitive method of detecting VZV infections, compared to culture, serology, or immunochemistry [54]. Sometimes microscopic observation of intranuclear inclusion bodies also confirms the presence of the virus [54]. Any previous infection by the same VZV strains, or VZV vaccination, lowers the rate of occurrence of VZV infections [62]. Poxvirus infections Parapoxvirus, belonging to the Poxviridae family, induces infections in burn patients with skin grafts, either by direct transmission or through infected fomites by indirect transmission [62]. It affects the epidermal keratinocytes of the patients [54]. Vascular endothelial growth factor is upregulated during burn injuries, which promotes angiogenesis, thus facilitating infection [54]. Cell culture isolation, PCR, enzyme-linked immunosorbent assay, and Western blotting are some common methods of detecting the virus [54]. Treatment includes cryotherapy, electrocautery, and the administration of cidofovir or imiquimod [63]. Some large Orf disease (ecthyma contagiosum or contagious pustular dermatitis) lesions might require excision and skin grafting [64]. Human immunodeficiency virus infections A study of burn patients living with human immunodeficiency virus (HIV) infection in Malawi showed a high probability of death if sepsis or multi-organ dysfunction developed [65]. HIV-positive patients who suffer from burn injury but do not have AIDS are treated similarly to HIV-negative patients [66]. Burn injury, along with a co-existing HIV infection, causes a depletion of CD4+ T cells and defective release of cytokines [67]. Human papillomavirus infections Human papillomavirus (HPV) replicates when the immune system becomes under-functional in burn patients [54]. These infections were first reported in 1996, when a boy aged 4 years with a small burn on the left ring finger was found to develop a "keloid scar" in that burn area four weeks after the injury [68]. HPV could survive and replicate in the wound, as the basal layer of the skin remained intact [68].
IMPACT OF GEOGRAPHICAL CONDITIONS ON THE MICROBIAL PROFILE OF BWIS Geographical conditions play a critical role in influencing the development of infection in burn patients, shaping the microbiome found in BWIs [69,70]. In a study conducted in a hospital in Tanzania, Acinetobacter spp. emerged as the main cause of HAIs in burn patients, whereas in a study done in Nigeria on burn patients, Klebsiella spp. was found to be the predominant pathogen [36,71]. This difference in pathogen preponderance in BWIs is due to varying geographical conditions and different control measures [36]. The survivability of burn patients differs significantly depending on ethnicity and race, as well as on the cost and utilization of health care services [69]. IMPACT OF COVID-19 ON BURN INJURIES Many countries resorted to social isolation and lockdown for quite a long span of time for the containment of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of Coronavirus disease 2019 (COVID-19), the global pandemic. This caused an increase in the occurrence of domestic accidents leading to burn injuries, although a reduction in the amenities available for burn care was observed worldwide during the pandemic, especially in lower-income countries [72]. IMPACT OF BURN INJURIES ON THE IMMUNE SYSTEM The skin is the largest anatomical barrier and defense against the entry of pathogens; its disruption in burn patients induces a state of immunosuppression [73]. Host defense has two branches, namely the innate and adaptive immune responses, of which the latter takes a longer time to set in [73]. The innate immune response is, however, immediate, severe, and prolonged [73]. At first, there is a pro-inflammatory response in which IL-1, IL-6, tumor necrosis factor-α, and interferon-γ cytokines are secreted; later, the anti-inflammatory response maintains homeostasis through the secretion of IL-10 and transforming growth factor-β [73]. Mast cells are the first immune cells to respond to BWIs. Dendritic cells, neutrophils, and monocytes migrate to the site of inflammation under the influence of chemotactic factors [74]. Neutrophils produce reactive oxygen species to destroy the pathogens in the burn wounds, which, in turn, causes damage to skin structures and elicits a strong inflammatory response defined as systemic inflammatory response syndrome (SIRS) [73,75]. SIRS is dampened in elderly patients as compared to younger patients, regardless of the burn size [73]. The innate immune system is often significantly altered during major burn wounds: neutrophil and intracellular killing are disrupted, down-regulation of major histocompatibility complex class II expression occurs, and the phagocytic activities of macrophages are diminished [76][77][78]. These anomalies diminish the natural defenses of the body, increasing the chances of notorious pathogen attacks in burn patients [50,79].
INFECTION CONTROL IN BURN PATIENTS There are three types of BWIs, namely cellulitis, burn wound impetigo, and invasive wound infections within unexcised eschar (necrotizing infection-fasciitis) [80]. Regular laboratory surveillance, along with routine microbial wound culturing, is essential for strict infection control practices and appropriate antibacterial therapy [80]. Receiving antibiotics before the infection, as well as during the hospitalization period, is a major risk factor for the acquisition of antibiotic-resistant microorganisms [81]. Thus, routine follow-up of the antibiotic-resistance pattern of burn wound flora is absolutely mandatory for successful infection control [81]. Antibiotics must be chosen only after proper monitoring of the antibiotic resistance trend in the individual burn center, to restrict infection by MDR microorganisms [80]. Also, systemic antibiotic administration should be carried out for only a very short period of time in burn patients, to avoid the spread of multi-drug resistance [81]. Patients with large burn wounds need to be provided with advanced burn wound care [80]. Such advances in wound care include advances in wound exudate and edema control, optimization of the wound environment with ideal skin disinfectants, advances in wound debridement systems, and enhancements in systemic care and management through new applications of medical technologies [82]. Some useful techniques used in burn wound cleansing are high-pressure irrigation, low-pressure irrigation, swabbing, showering, bathing, and washing the affected area under a running liquid [83]. Water, saline, or other antiseptic formulations are used as the cleansing liquid, as applicable [83]. Nowadays, a large number of dressings are available, which are very effective in the healing of cleansed wounds [83]. Some therapeutic applications, involving the use of collagen, hyaluronic acid, growth factors, vacuum-assisted closure, and skin grafting, are used to treat burn wounds of varying severities [40]. The Versajet hydrosurgery system is very advantageous for burn wound debridement; its advantages include optimal preservation of viable tissue, a reduction in blood loss, and effective elimination of bacterial colonization [84]. CONCLUSIONS At the very outset, the prevention of burn injuries should be highly prioritized, as it stands as a global public health crisis, especially in underdeveloped and developing countries. Patients with burn injuries have increased susceptibility to a wide range of pathogens, including various MDR species of bacteria, fungi, and viruses, particularly during their hospital stay for treatment. This occurs mainly due to their impaired immune system responses, inappropriate vascular organization within the burn-injured area, and intensification of severe oxidative stress. Immunosuppression, prolonged hospitalization, and geographical factors influence the susceptibility of burn patients to MDR-bacterial and fatal viral infections. Microbial transmission and infestation in burn wounds need to be reduced to improve the survival chances of burn patients. For this, an effective infection control policy at every stratum of health care is essential. A combined effort of burn surgeons and burn care units to control the overuse of antibiotics and provide a sterile environment and efficient medical equipment for effective and critical care of the patients should effectively tackle the otherwise sinking situation in burn care across the world.
Figure 1. Burn wound infection microbes and their effect on a burn patient. MRSA: methicillin-resistant Staphylococcus aureus; spp.: species. Figure 2. (A) Burn wounds typically contain burn wound exudates, which facilitate the initial inoculation and reversible attachment by bacterial pathogens. (B) Bacteria begin to produce extracellular matrix (ECM) and form micro-colonies during the process of irreversible attachment. (C) During the maturation stage, the biofilm grows in size and structural complexity. (D) The mature biofilm enters the dispersal stage, releasing bacterial cells from the ECM, which can then colonize new sites within the wound. Adapted from Maslova et al. NPJ Biofilms Microbiomes 2021;7:73 [3]. Table 1. Bacterial pathogens isolated from burn wound infections. Table 2. Categories and effects of different pathogens causing burn wound infections (the table's leading rows did not survive extraction): Escherichia coli (common): causes enteric diseases, such as diarrhoea/dysentery, colitis, meningitis, low-grade fever, vomiting, renal impairment. Multidrug-resistant bacteria: P. aeruginosa (most common, dangerous): causes infections in the blood, lungs (pneumonia), soft tissue infections, UTIs. A. baumannii (most common): causes diseases such as pneumonia and meningitis, bloodstream infections (bacteremia and sepsis), delays in wound healing, graft losses, UTIs. Klebsiella pneumoniae (most common, concerning): causes endophthalmitis, pyogenic liver abscess, splenic abscess, necrotizing skin infection, soft tissue infection, meningitis, antibiotic-associated hemorrhagic colitis, bacteremia, pneumonia, Lemierre syndrome. Methicillin-resistant Staphylococcus aureus (common, dangerous): causes skin infections like atopic dermatitis, followed by invasive infections like osteomyelitis, meningitis, lung abscess, pneumonia, brain abscess and central nervous system infection. Escherichia coli (common): causes enteric diseases, such as diarrhoea/dysentery, colitis, meningitis, low-grade fever, vomiting, renal impairment. Proteus mirabilis (common): mostly causes UTIs, along with meningoencephalitis, empyema, and osteomyelitis. Fungi: Candida spp. (most common): cause intense itching; symptoms also include a red, growing skin rash, and rash on the skin folds, genitals, middle of the body, buttocks, under the breasts, and other areas of skin. Viruses (row beginnings not recoverable): [...] CD4+ T-cells; eventually leads to chronic multi-organ diseases and severe impairments within the central nervous system. Papillomavirus (rare): causes intraepithelial neoplasias. spp.: species; UTI: urinary tract infection; MRSA: methicillin-resistant Staphylococcus aureus; IL: interleukin.
2024-05-26T15:17:37.169Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "f0e385dfe55ad2d63ebb7d645aedab5ccdee25fc", "oa_license": "CCBYNC", "oa_url": "http://link.springer.com/content/pdf/10.1186/s12879-020-4920-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ebf317a5feb32d43927c0392ebc7a9065c08ba8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
121316952
pes2o/s2orc
v3-fos-license
Quasi-Exactly Solvable Time-Dependent Hamiltonians A generalized method which helps to find a time-dependent Schrödinger equation for any static potential is established. We illustrate this method with two examples. Indeed, we use this method to find the time-dependent Hamiltonian of the quasi-exactly solvable Lamé equation and to construct the 2 × 2 matrix time-dependent polynomial Hamiltonian. Introduction Another direction of investigation of quasi-exactly solvable Schrödinger equations is the study of time-dependent Hamiltonians. Time-dependence can be introduced through the potential. A first step in this direction was taken in [1], in connection with the quasi-exactly solvable sextic anharmonic oscillator potentials. The Schrödinger equation is now considered with a time-dependent potential V(x,t). The time-dependent potentials constructed from the well-known family of quasi-exactly solvable sextic anharmonic oscillator potentials are of the form given in [1], where x > 0, t ≥ 0, n is a non-negative integer, k ≥ 0, β is a real constant and u(t) is an arbitrary positive function of t ≥ 0. If k > 1, the last term in the potential V(x,t) may be viewed as a centrifugal term in a radial equation, with x playing the role of the radial coordinate. The domain of definition of the potential (4) may be extended to the real line if k = 0, 1. After some algebraic manipulations, one has obtained the algebraic solutions of Equation (1) in a closed form involving a logarithmic function. In this paper, we will construct a time-dependent Schrödinger equation for any potential. This means that we will find the algebraic solutions ψ(t,x) of that equation, so that one can build a time-dependent potential from any non-time-dependent one. Note here that the static potential considered can be either quasi-exactly solvable (QES) or simply exactly solvable [2]-[4]. It is understood that we will generalize the formalism considered in Ref. [1], where the authors constructed a time-dependent Schrödinger equation for only one family of quasi-exactly solvable sextic anharmonic oscillator potentials. Construction of a Time-Dependent Schrödinger Equation The main results are summarized by the following proposition: Let V(y) be a potential and φ(y) a solution of the eigenvalue equation with eigenvalue λ. Let ω(t) be a positive (and differentiable) function of t. Then the solution of the Schrödinger equation with the time-dependent potential is given explicitly. Proof of the Proposition We will discuss here an original method to construct time-dependent Hamiltonians which possess algebraic eigenvectors. Let us consider the Schrödinger equation, where φ(y) is an eigenfunction with eigenvalue λ of the Hamiltonian H. Note here that this Hamiltonian H (or the potential V(y)) does not depend on time t explicitly; that is, t enters neither the eigenvalue λ nor the eigenfunction φ(y). As a consequence, the spectral Equation (11) can be rewritten accordingly. Let us now extend the effective potential of the above equation by adding a new term Δ(t,x), and consider a full Schrödinger equation of the form (15). The next step is to determine the unknown function R(t,x) so that one can deduce the time-dependent algebraic solutions ψ(t,x) of Equation (15) and relate them to (14). The above Equation (15) can be developed and rewritten. Manifestly, this equation can be written in terms of φ (i.e.
the first-derivative terms of φ must vanish) only if a suitable condition is imposed, which yields the expression (18) for R(t,x). With this expression of the function R(t,x), Equation (17) takes a simpler form, in which one replaces the relevant term by its equivalent as given in (14). From this equation, the term Δ(t,x) added to the initial potential in (15) is easily expressed. Replacing R(t,x) in this expression by (18), and after some algebraic manipulations, one can write Δ(t,x) explicitly. One can easily remark that Δ(t,x) is real and independent of the eigenvalue λ only if it takes a specific form; this is possible due to a condition on ω(t). Solving the corresponding differential equation, and after some algebraic manipulations, one can easily obtain the expression of the function R(t). With this expression of the function R(t), the algebraic solutions ψ(t,x) of the time-dependent Schrödinger equation with the time-dependent potential are determined, where ω(t) is an arbitrary positive function of t and φ(y) is the eigenvector of the static equation. This means that one has constructed a time-dependent potential from the potential V(y), which is not time-dependent. This is the generalization of the particular case of potentials considered in Ref. [1]; that case is a particular case of ours, because one can replace the original (non-time-dependent) potential in Equation (28) by any potential, which leads to a time-dependent potential associated with the above solutions ψ(t,x), as given by Equation (29). These solutions are expressed in terms of the eigenvalues λ of the Schrödinger equation. The values of λ depend on the potential considered: when the potential is quasi-exactly solvable, only part of the eigenvalues is found algebraically, whereas when the potential considered is exactly solvable, all eigenvalues λ are calculated explicitly. So, we have constructed a generalized formula which helps to find time-dependent potentials; that is, one can deduce from a non-time-dependent potential its associated time-dependent one. In the next step, we will use this method, i.e. we will manipulate Equations (28) and (29) respectively, to construct the time-dependent Lamé potential and the algebraic solutions of the Schrödinger equation. We will also apply the above method to the known QES matrix polynomial operator [5] [6], and interesting remarks will be pointed out. Example 1: Construction of the Time-Dependent Lamé Potential In this section, along the same lines as the above method, i.e. simply from Equation (28), we will transform the non-time-dependent potential associated with the Lamé equation into a time-dependent one. The Lamé equation is quasi-exactly solvable and its original form is as given in [7], where λ is the eigenvalue of the Lamé Hamiltonian and sn(y, k) is the Jacobi elliptic function with modulus k. This function is periodic (i.e. the Lamé potential is also periodic) with period 4K(k), where K(k) denotes the complete elliptic integral of the first kind, K(k) = ∫_0^{π/2} dθ / (1 − k² sin²θ)^{1/2}.
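Since the explicit Lamé potential did not survive extraction here, the following sketch evaluates the standard textbook form j(j+1) k² sn²(y,k) and its period 4K(k) numerically; treat the potential's normalization as an assumption rather than this paper's exact expression. Note that SciPy parameterizes these functions by m = k².

```python
# Sketch: evaluate a standard Lamé potential V(y) = j(j+1) * k^2 * sn(y,k)^2
# and its period 4K(k). SciPy's ellipj/ellipk take the parameter m = k**2.
import numpy as np
from scipy.special import ellipj, ellipk

def lame_potential(y, j=1, k=0.5):
    m = k**2
    sn, cn, dn, ph = ellipj(y, m)   # Jacobi elliptic functions sn, cn, dn
    return j * (j + 1) * m * sn**2

k = 0.5
period = 4 * ellipk(k**2)           # 4K(k), the period of the potential
y = np.linspace(0.0, float(period), 5)
print(lame_potential(y, j=1, k=k))  # samples over one full period
```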
Replacing the potential in Equation (28) by the above Lamé potential (32), we find the corresponding time-dependent Lamé potential. It is easily observed that the last term in x² of (34) is not periodic, so it spoils the periodicity of the above time-dependent Lamé potential. The time-dependent Lamé potential (34) can become periodic only if the condition Δ = 0 is satisfied, where c is a real constant appearing in the resulting expression for ω(t). From the resulting expressions (35) and (36), the time-dependent Schrödinger Equation (1) takes an explicit form. Referring to Equation (29) and Equation (35), the algebraic solutions of this Schrödinger equation are obtained. Note that one can deduce from a non-time-dependent potential (for which the eigenvalues λ exist) its corresponding time-dependent one by using the general formula established in Equation (28), while the algebraic solutions of the Schrödinger equation are found from Equation (29). Example 2: Extension to a Matrix Time-Dependent Schrödinger Equation The goal of this section is to construct a matrix time-dependent Schrödinger equation by the above method, which was used to find the time-dependent potential of the uncoupled Lamé equation. Let us consider the matrix Hamiltonian of [5] [6], where the potential is expressed in terms of the Pauli matrices σ₁, σ₃, the 2 × 2 identity matrix 1₂, the free real parameters p₁, p₂, k₀, and an integer m. H(y) can be written in matrix form, and from Equation (46) this equality can be considered.
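The displayed formulas of the proposition were lost in extraction; the LaTeX block below is a reconstruction of the standard scaling construction that matches the surrounding prose (the condition killing the first-derivative terms of φ, the added term Δ(t,x), and Δ = 0 for linear ω). Units ℏ = 2m = 1 are assumed, so the phase and prefactor may differ from the paper's conventions; if the reconstruction matches Equations (28)-(29), the Lamé case follows by substituting the static Lamé potential for V and choosing ω linear in t.

```latex
% Reconstruction (assumed units \hbar = 2m = 1). Given the static problem
%   -\phi''(y) + V(y)\,\phi(y) = \lambda\,\phi(y),
% set y = x/\omega(t) and take the ansatz
\psi(t,x) \;=\; \frac{1}{\sqrt{\omega(t)}}\,
   \exp\!\Big(\, i\,\frac{\dot\omega(t)}{4\,\omega(t)}\,x^{2}
      \;-\; i\,\lambda \int_{0}^{t}\frac{ds}{\omega(s)^{2}} \Big)\,
   \phi\!\Big(\frac{x}{\omega(t)}\Big).
% The quadratic phase is exactly the choice of R(t,x) that removes the
% first-derivative terms of \phi; a direct computation then gives
i\,\partial_t \psi \;=\; -\,\partial_x^{2}\psi
   \;+\; \Big[\, \frac{1}{\omega^{2}}\,V\!\Big(\frac{x}{\omega}\Big)
      \;-\; \frac{\ddot\omega}{4\,\omega}\,x^{2} \Big]\,\psi ,
% so the added term is \Delta(t,x) = -(\ddot\omega/4\omega)\,x^{2}, which is
% real, independent of \lambda, and vanishes (preserving the periodicity of
% the Lamé potential) precisely when \omega(t) = c\,t + d is linear in t.
```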
2018-12-06T01:58:47.118Z
2014-08-25T00:00:00.000
{ "year": 2014, "sha1": "9c2565a82e7d4067d4f52eeb8393fa99ce0b0ccd", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=49251", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9c2565a82e7d4067d4f52eeb8393fa99ce0b0ccd", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119247535
pes2o/s2orc
v3-fos-license
Quantum dense coding over Bloch channels Dynamics of coded information over Bloch channels is investigated for different values of the channel's parameters. We show that the suppression of the travelling coded information over a Bloch channel can be increased by decreasing the equilibrium absolute value of the information carrier, and consequently decreasing the information distilled by an eavesdropper. The amount of decoded information can be improved by increasing the equilibrium values of the two qubits and decreasing the ratio between the longitudinal and transverse relaxation times. The robustness of coded information in maximum and partial entangled states is discussed. It is shown that the maximum entangled states are more robust than the partial entangled states over this type of channel. Introduction Decoherence represents one of the most difficult obstacles in quantum information processing. This unavoidable phenomenon can be seen in different forms, such as undesirable interactions between systems and their surroundings [1], device imperfections [2,3], and decay due to spontaneous emission and noisy channels [4]. These interactions corrupt the information stored in the system and consequently cause errors in the transferred information. Therefore, investigating the dynamics of information in the presence of decoherence is one of the most important tasks in quantum computation and information. Quantum coding is one of the techniques that has been used to transfer information between two users [5]. To achieve a quantum coding protocol with high efficiency, one needs a maximum entangled state and ideal channels. Some protocols have presented different treatments of quantum coding over noiseless channels, theoretically [6,7] and experimentally [8]. In the real world, it is very difficult to keep the systems used for quantum coding isolated. Therefore, it is important to introduce quantum coding protocols over noisy channels. Recently, Shadman et al. [9] investigated super dense coding over noise, where they consider the case of Pauli channels in arbitrary dimension and derive the super dense coding capacity. In the present work, we introduce a different type of quantum channel, called the Bloch channel. The decoherence effect of these channels on entanglement and information has been investigated by Ban et al. [10,11], who considered one qubit passing through the Bloch channel. Metwally [12] has investigated the effect of the Bloch channel on the fidelity of the teleported state, where the two qubits pass through the channel. This motivated us to investigate the effect of the Bloch channels on the dynamics of coded information. Also, in this context, the behavior of the local and non-local information is studied. The paper is organized as follows: In Sec. 2, we examine the evolution of a general two-qubit state passing through a Bloch channel. The quantum dense coding is discussed in Sec. 3. The dynamics of the local and non-local information is investigated in Sec. 4. Finally, we discuss our results in Sec. 5. The characterization of the 2-qubit states produced by some sources requires experimental determination of 15 real parameters. Each qubit is determined by 3 parameters, representing the Bloch vectors, and the other 9 parameters represent the correlation tensor. Analogs of Pauli's spin operators are used for the description of the individual qubits: the set σ_1x, σ_1y, σ_1z for the first qubit and σ_2x, σ_2y, σ_2z for the second qubit.
Any two-qubit state is described by [13,14,15], where σ⃗_1 and σ⃗_2 are the Pauli spin vectors of the first and the second qubit, respectively. The statistical operators for the individual qubits are specified by their Bloch vectors for the first qubit and the second qubit; the Bloch vectors and the cross dyadic are given accordingly. Let us consider that each qubit is forced to pass through a Bloch channel. This type of channel is defined by the Bloch equations [10] for the first qubit, and analogously for the second qubit, where T_1i and T_2i, i = a, b, are the longitudinal and transverse relaxation times for the first and the second qubit, and ⟨σ_1z⟩_eq, ⟨σ_2z⟩_eq are the equilibrium values of ⟨σ_1z⟩_t and ⟨σ_2z⟩_t, respectively. Now, we assume that the two qubits pass through the channels (3) and (4). Then the output state is given by [12], where γ_i = exp{−t/T_1i}, β_i = exp{−t/T_2i}, and i = a, b. Equation (5) represents the time evolution of any two-qubit state passing through the Bloch channels, Eqs. (3) and (4). Assume that the users, Alice and Bob, share one maximum entangled state of the Bell states, ψ± or φ±. The dynamics of these states can be obtained from (5) by setting s_x(0) = s_y(0) = s_z(0) = 0 for the first qubit and r_x(0) = r_y(0) = r_z(0) = 0 for the second qubit, with Q_ij(0) = 0 for i ≠ j and Q_xx(0) = Q_yy(0) = Q_zz(0) = −1 for ψ+, Q_xx(0) = Q_yy(0) = Q_zz(0) = 1 for φ+, Q_xx(0) = Q_zz(0) = 1, Q_yy(0) = −1 for ψ−, and so on. On the other hand, the users can use an initial pure partial entangled state. This class of states is characterized by one parameter [14]. It is defined by: S⃗ = (0, 0, p), R⃗ = (0, 0, −p), Q_ij = 0 for i ≠ j, Q_xx = Q_yy = −√(1 − p²) and Q_zz = −1. 3 Quantum coding Let us consider that the partners Alice and Bob share a maximum or partial entangled state. The aim of Alice is to send the coded information to Bob, but for some reason the carrier of this coded information is forced to pass through the Bloch channel. We quantify the amount of information decoded by Bob, and investigate the effect of the channel parameters and the type of the initial carrier on the accuracy of the decoded information. To show our idea, we implement the original dense coding protocol proposed by Bennett and Wiesner [5]. This protocol is described as follows: 1. Alice encodes two classical bits by using one of the local unitary operators. 2. If Alice applies these unitary operators randomly with probability η_i, then she codes the information in the corresponding state. [Figure 1. The dynamics of the decoded information in a state initially prepared in a maximum entangled state: (a) the solid, dash-dot and dot curves are for ⟨σ_1z⟩_eq = 1, 0.9, 0.8 respectively, with ⟨σ_2z⟩_eq = 0.9, α = 0.5; (b) the solid, dash-dot and dot lines are for α = 0.7, 0.6, 0.5 respectively, with ⟨σ_1z⟩_eq = 1, ⟨σ_2z⟩_eq = 0.9.] 3. Alice sends her qubit to Bob, who makes joint measurements on the two qubits. The maximum amount of information which Bob can extract from Alice's message is bounded by the Holevo quantity. Fig. (1a) displays the effect of the equilibrium absolute value of the first qubit, ⟨σ_1z⟩_eq, on the decoded information. It is clear that, before the interaction is switched on, the decoded information is very large, and it goes down quickly once the interaction develops. This decay of the decoded information increases as ⟨σ_1z⟩_eq decreases. However, as time increases, the decoded information increases gradually and reaches its upper bound. The effect of the ratio between the longitudinal and transverse relaxation times, α_i, is depicted in Fig. (1b).
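The decay factors γ_i and β_i above act on a single qubit's Bloch vector as a simple affine map, which is easy to check numerically; the sketch below is an illustrative Python rendering of that single-qubit action only (the two-qubit map of Eq. (5) also transforms the correlation tensor and is not reproduced here), with parameter values chosen arbitrarily.

```python
# Sketch of single-qubit Bloch-channel relaxation: solving the Bloch
# equations gives an affine map on the Bloch vector (s_x, s_y, s_z):
#   s_x, s_y -> beta * s_x, beta * s_y            (transverse decay, T2)
#   s_z      -> gamma * s_z + (1 - gamma) * s_eq  (longitudinal decay, T1)
# with gamma = exp(-t/T1) and beta = exp(-t/T2), as in the text.
import numpy as np

def bloch_channel(s, t, T1, T2, s_eq):
    gamma, beta = np.exp(-t / T1), np.exp(-t / T2)
    sx, sy, sz = s
    return np.array([beta * sx, beta * sy, gamma * sz + (1 - gamma) * s_eq])

# Arbitrary illustrative values; the ratio of the relaxation times plays
# the role of the paper's alpha parameter.
s0 = np.array([1.0, 0.0, 0.0])            # initial Bloch vector
for t in (0.0, 1.0, 5.0, 50.0):
    print(t, bloch_channel(s0, t, T1=10.0, T2=5.0, s_eq=0.9))
# As t grows, the state relaxes toward (0, 0, s_eq), the channel's fixed point.
```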
As α_i increases, and when the equilibrium absolute values of the two qubits are large, the decay of the decoded information becomes faster. For t > 50, the decoded information increases faster for small values of α_i. In Fig. (2), we consider that Alice coded her information in a partial entangled state, where we set p = 0.5 in (7). The dynamics of the decoded information is similar to that shown in Fig. (1). However, the maximum amount of decoded information for the PES is smaller than that for the MES. From Figs. (1 & 2), it is clear that travelling coded information in a state prepared initially in a maximum entangled state (MES) fares much better than in a partial entangled state (PES). This means that MES are more robust than PES over this type of channel. Dynamics of Local and Non-Local Information Suppose a source supplies each user with a qubit to code their own information. In this case, one says that this information is local information. If the qubits are forced to pass through an environment (say, Bloch channels), then the two qubits will become entangled with each other and interact with the environment. As a result of this interaction, the local information is transferred between the two qubits and is called non-local information. In this section, we investigate the dynamics of the local information I_A, which is coded in Alice's qubit ρ_a = tr_b{ρ_c}; the local information I_B, which is coded in Bob's qubit ρ_b = tr_a{ρ_c}; and the non-local information between Alice and Bob, I_AB, which is coded in the state ρ_c [16]. Due to the undesirable interactions, there is some information loss. These interactions can be considered as another person (Eve), who tries to distill information from the state travelling between Alice and Bob. Mathematically, this information is defined in terms of the fidelity F with which Bob decodes the information; I_AE denotes the corresponding information between Alice and Eve. In Fig. (3), we investigate the effect of the absolute equilibrium values on the dynamics of the travelling coded information, where we assume that Alice has coded her information in a maximum entangled state. It is clear that, at the beginning, the non-local information between Alice and Bob, I_AB, and between Alice and Eve, I_AE, are zero, while I_A and I_B are non-zero. As the interaction time goes on, I_AB and I_AE increase at the expense of the local information owned by Alice and Bob. As time increases further, Eve distills more information from Alice, and I_AE becomes much larger than I_A. On the other hand, due to the loss of information on Alice's side, the non-local information between Alice and Bob decreases. The dynamical behavior of these different types of information is depicted in Fig. (3a). Fig. (3b) describes the dynamics of the local and non-local information for a small value of ⟨σ_1z⟩_eq (say ≃ 0.3). In this case, the amount of information distilled by Eve is smaller than that shown in Fig. (3a). However, Alice's information is only slightly affected. Therefore, decreasing the absolute equilibrium value of one qubit maximizes the non-local information between the two qubits. Fig. (4) displays the dynamics of information where Alice has coded her information in a partial entangled state. In this case, for large values of ⟨σ_1z⟩_eq and ⟨σ_2z⟩_eq, the information gained by Eve increases abruptly at the expense of Alice's information, and for t > 100, I_AE > I_AB.
However, as one of the absolute equilibrium values is decreased, the non-local information between Alice and Bob, I_AB, increases very fast, and its maximum value is always larger than that depicted in Fig. (3). As time goes on, I_AB decreases slowly, and its minimum value is always larger than that depicted in Fig. (3b). Although Eve's information increases fast, I_AE < I_AB; only in a very small range of time is I_AE > I_A. So, for this choice of the channel's parameters, Alice and Bob can communicate safely over a long range of time. The effect of the ratio between the longitudinal and transverse relaxation times, α_i, is shown in Fig. (5), where we set α_1 = α_2 = 0.3, ⟨σ_1z⟩_eq = 1 and ⟨σ_2z⟩_eq = 0.9. It is clear from Fig. (5a) (where we assume that Alice coded her information in a MES) that I_A decreases very fast and Alice's state turns into a completely mixed state for t > 200; consequently, Eve's information increases very fast and reaches its maximum value faster than shown in Fig. (4a), in which α_i = 0.7. As soon as Alice loses her information completely, I_AB and I_B have asymptotically the same values for t > 200, which is much earlier than displayed in Fig. (3a). In Fig. (5b), we assume that the information is initially coded in a PES. In general, the dynamics of information is similar to that shown in Fig. (5a), but from Fig. (4a) and Fig. (5b) we can see that the safe communication time decreases and the non-local information between Alice and Bob, I_AB, decreases very fast. From Figs. (4 & 5), one concludes that the absolute equilibrium values and the ratio between the longitudinal and transverse relaxation times can be considered as control parameters. One can improve the local information of one qubit by decreasing the equilibrium value of the other qubit. Also, the eavesdropper's information can be minimized by decreasing the absolute equilibrium value of one qubit and increasing the ratio between the longitudinal and transverse relaxation times. In this case, the information loss is always smaller than the information shared between the sender and receiver. Therefore, the users can increase the safe communication time and improve the non-local information. Also, the initial state used to code the information plays an important role in secure communication. It is clear that coding information in a maximum entangled state is much better than using a partial entangled state, since for the former the users can increase the safe communication time by controlling the channel parameters. Conclusion The time evolution of a system consisting of two qubits passing through a Bloch channel is investigated. The quantum dense coding protocol is implemented using two different initial state settings: maximum and partial entangled states. The coded information is sent with high accuracy by increasing the absolute equilibrium values of the two qubits and decreasing the ratio of the longitudinal and transverse relaxation times. However, if the absolute equilibrium value of one qubit decreases, the decoded information decreases. It is shown that using a maximum entangled state for coding information is much better than using a partial entangled state. This means that the maximum entangled states are more robust than partial entangled states when they travel through Bloch channels. The local and non-local information are quantified for different values of the channel parameters. There are some cases where the eavesdropper can distill more information at the expense of the travelling coded information.
However, the partners can communicate safely when the non-local information between the two users is larger than that distilled from the travelling coded information. Also, the absolute equilibrium values and the ratio of the longitudinal and transverse relaxation times can be considered as control parameters. It is clear that, for large values of the equilibrium absolute parameters of both qubits, the local information of both qubits decreases faster, and consequently the information gained by the eavesdropper increases. However, if the equilibrium absolute value of one qubit decreases, its corresponding local information is only slightly affected. Therefore, to send the coded information from the sender to the receiver safely, one has to decrease the absolute equilibrium value. Also, as one increases the ratio of the longitudinal and transverse relaxation times, the survival time of the local and non-local information increases.
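The dense coding bound invoked in Section 3 (Bob's extractable information limited by the Holevo quantity of the signal ensemble) can be checked numerically. The sketch below computes χ = S(ρ̄) − Σ_i η_i S(ρ_i) for the ideal, noiseless Bell-state ensemble and recovers the familiar 2 bits, purely as an illustration of the quantity the figures track; the Bloch-channel case would additionally apply the γ, β decay factors to each signal state before computing χ.

```python
# Sketch: Holevo quantity chi = S(rho_bar) - sum_i eta_i S(rho_i) for the
# noiseless dense coding ensemble (the four Bell states, eta_i = 1/4).
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Bell states as vectors in the basis |00>, |01>, |10>, |11>
s = 1 / np.sqrt(2)
bell = [np.array([s, 0, 0, s]),    # phi+
        np.array([s, 0, 0, -s]),   # phi-
        np.array([0, s, s, 0]),    # psi+
        np.array([0, s, -s, 0])]   # psi-

etas = [0.25] * 4
rhos = [np.outer(v, v.conj()) for v in bell]
rho_bar = sum(e * r for e, r in zip(etas, rhos))  # = I/4

chi = entropy(rho_bar) - sum(e * entropy(r) for e, r in zip(etas, rhos))
print(chi)  # -> 2.0 bits, the ideal dense coding capacity
```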
Analogs of Pauli's spin operators are used for the description of the individual qubits: the set $\sigma_{1x}, \sigma_{1y}, \sigma_{1z}$ for the first qubit and $\sigma_{2x}, \sigma_{2y}, \sigma_{2z}$ for the second qubit. Any two-qubit state is described by [13,14,15] $$\rho = \frac{1}{4}\Big(\mathbb{1}\otimes\mathbb{1} + \vec{s}\cdot\vec{\sigma}_1\otimes\mathbb{1} + \mathbb{1}\otimes\vec{r}\cdot\vec{\sigma}_2 + \sum_{i,j=x,y,z} Q_{ij}\,\sigma_{1i}\otimes\sigma_{2j}\Big), \quad (1)$$ where $\vec{s}$ and $\vec{r}$ are the two Bloch vectors and $Q_{ij}$ is the correlation tensor. Let us consider that each qubit is forced to pass through a Bloch channel. This type of channel is defined by the Bloch equations [10], $$\frac{d}{dt}\langle\sigma_{1k}\rangle_t = -\frac{\langle\sigma_{1k}\rangle_t}{T_{2a}}\ (k = x, y), \qquad \frac{d}{dt}\langle\sigma_{1z}\rangle_t = -\frac{\langle\sigma_{1z}\rangle_t - \langle\sigma_{1z}\rangle_{eq}}{T_{1a}} \quad (3)$$ for the first qubit, while for the second qubit they are given by $$\frac{d}{dt}\langle\sigma_{2k}\rangle_t = -\frac{\langle\sigma_{2k}\rangle_t}{T_{2b}}\ (k = x, y), \qquad \frac{d}{dt}\langle\sigma_{2z}\rangle_t = -\frac{\langle\sigma_{2z}\rangle_t - \langle\sigma_{2z}\rangle_{eq}}{T_{1b}}, \quad (4)$$ where $T_{1i}$ and $T_{2i}$, $i = a, b$, are the longitudinal and transverse relaxation times for the first and the second qubit, and $\langle\sigma_{1z}\rangle_{eq}$, $\langle\sigma_{2z}\rangle_{eq}$ are the equilibrium values of $\langle\sigma_{1z}\rangle_t$ and $\langle\sigma_{2z}\rangle_t$, respectively. Now, we assume that the two qubits pass through the channels (3) and (4). The output state (5) is then given by [12]: the transverse components of the Bloch vectors and of the correlation tensor decay with the factors $\beta_i$, while the longitudinal components relax toward their equilibrium values with the factors $\gamma_i$, where $\gamma_i = \exp\{-t/T_{1i}\}$, $\beta_i = \exp\{-t/T_{2i}\}$, and $i = a, b$. Equation (5) represents the time evolution of any two-qubit state passing through the Bloch channels (3) and (4). Assume that the users, Alice and Bob, share one maximally entangled Bell state, $\psi^\pm$ or $\phi^\pm$. The dynamics of these states can be obtained from (5) by setting $s_x(0) = s_y(0) = s_z(0) = 0$ for the first qubit and $r_x(0) = r_y(0) = r_z(0) = 0$ for the second qubit, $Q_{ij}(0) = 0$ for $i \ne j$, and $Q_{xx}(0) = Q_{yy}(0) = Q_{zz}(0) = -1$ for $\psi^+$, $Q_{xx}(0) = Q_{yy}(0) = Q_{zz}(0) = 1$ for $\phi^+$, $Q_{xx}(0) = Q_{zz}(0) = 1$, $Q_{yy}(0) = -1$ for $\psi^-$, and so on. On the other hand, the users can use an initially pure partially entangled state. This class of states is characterized by one parameter [14]. It is defined by (7): $\vec{S} = (0, 0, p)$, $\vec{R} = (0, 0, -p)$, $Q_{ij} = 0$ for $i \ne j$, $Q_{xx} = Q_{yy} = -\sqrt{1 - p^2}$ and $Q_{zz} = -1$. 3 Quantum coding Let us consider that the partners Alice and Bob share a maximally or partially entangled state. The aim of Alice is to send coded information to Bob, but the carrier of this coded information is forced to pass through the Bloch channel. We quantify the amount of information decoded by Bob and investigate the effect of the channel parameters and the type of the initial carrier on the accuracy of the decoded information. To illustrate our idea, we implement the original dense coding protocol proposed by Bennett and Wiesner [5]. This protocol is described as follows: 1. Alice encodes two classical bits by using one of the local unitary operators. 2. If Alice applies these unitary operators randomly with probability $\eta_i$, then she codes the information in the ensemble $\{\eta_i, \rho_i\}$ with average state $\tilde{\rho} = \sum_i \eta_i \rho_i$, where $\rho_i = (U_i \otimes \mathbb{1})\,\rho\,(U_i^\dagger \otimes \mathbb{1})$. 3. Alice sends her qubit to Bob, who makes joint measurements on the two qubits. The maximum amount of information which Bob can extract from Alice's message is bounded by the Holevo quantity, $I_{Bob} \le S(\tilde{\rho}) - \sum_i \eta_i S(\rho_i)$, where $S(\cdot)$ denotes the von Neumann entropy. Figure 1 shows the dynamics of the decoded information in a state initially prepared in a maximally entangled state: (a) the solid, dash-dot and dot curves are for $\langle\sigma_{1z}\rangle_{eq} = 1, 0.9, 0.8$, respectively, with $\langle\sigma_{2z}\rangle_{eq} = 0.9$ and $\alpha = 0.5$; (b) the solid, dash-dot and dot lines are for $\alpha = 0.7, 0.6, 0.5$, respectively, with $\langle\sigma_{1z}\rangle_{eq} = 1$ and $\langle\sigma_{2z}\rangle_{eq} = 0.9$. Fig.(1a) displays the effect of the absolute equilibrium value of the first qubit, $\langle\sigma_{1z}\rangle_{eq}$, on the decoded information. It is clear that before the interaction is switched on, the decoded information is very large, and it goes down quickly once the interaction develops. This decay of the decoded information increases as $\langle\sigma_{1z}\rangle_{eq}$ decreases. However, as time increases, the decoded information increases gradually and reaches its upper bound. The effect of the ratio between longitudinal and transverse relaxation times, $\alpha_i$, is depicted in Fig.(1b).
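The decay factors $\gamma_i$ and $\beta_i$ are simply the closed-form solution of the Bloch equations (3) and (4). A minimal Python sketch of the resulting single-qubit Bloch-vector relaxation is given below; the numerical relaxation times and equilibrium value are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def bloch_relax(s0, t, T1, T2, sz_eq):
    """Closed-form solution of the Bloch equations: the transverse
    components decay with T2 (factor beta), while the longitudinal
    component relaxes toward its equilibrium value with T1 (factor gamma)."""
    sx0, sy0, sz0 = s0
    beta = np.exp(-t / T2)   # beta_i = exp(-t / T_2i)
    gamma = np.exp(-t / T1)  # gamma_i = exp(-t / T_1i)
    return np.array([sx0 * beta, sy0 * beta, sz_eq + (sz0 - sz_eq) * gamma])

# Illustrative (assumed) relaxation times and equilibrium value
T1, T2, sz_eq = 100.0, 50.0, 0.9
for t in (0.0, 50.0, 200.0):
    print(t, bloch_relax((1.0, 0.0, 0.0), t, T1, T2, sz_eq))
```

At long times the transverse components vanish and the longitudinal component settles at the equilibrium value, which is the mechanism behind the loss of coded information discussed in the figures.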
As $\alpha_i$ increases and the absolute equilibrium values of the two qubits are large, the decay of the decoded information becomes faster. For $t > 50$, the decoded information increases faster for small values of $\alpha_i$. In Fig.(2), we consider that Alice coded her information in a partially entangled state, where we set $p = 0.5$ in (7). The dynamics of the decoded information is similar to that shown in Fig.(1). However, the maximum amount of decoded information for the PES is smaller than that for the MES. From Figs. (1 & 2), it is clear that travelling coded information fares much better in a state initially prepared in a maximally entangled state (MES) than in a partially entangled state (PES). This means that MES are more robust than PES for this type of channel. Dynamics of Local and non-Local Information Suppose a source supplies each user with a qubit in which to code their own information. In this case, one says that this information is local information. If the qubits are forced to pass through an environment (say, Bloch channels), then the two qubits become entangled with each other and interact with the environment. As a result of this interaction, the local information is transferred between the two qubits and is called non-local information. In this section, we investigate the dynamics of the local information $I_A$ coded in Alice's qubit $\rho_a = tr_b\{\rho_c\}$, the local information $I_B$ coded in Bob's qubit $\rho_b = tr_a\{\rho_c\}$, and the non-local information between Alice and Bob, $I_{AB}$, coded in the state $\rho_c$ [16]. Due to the undesirable interactions, some information is lost. These interactions can be regarded as another party (Eve), who tries to distill information from the state travelling between Alice and Bob. Mathematically, this information is defined in terms of $F$, the fidelity with which Bob decodes the information, and quantifies the non-local information between Alice and Eve, $I_{AE}$. In Fig.(3), we investigate the effect of the absolute equilibrium values on the dynamics of the travelling coded information, where we assume that Alice has coded her information in a maximally entangled state. It is clear that at the beginning the non-local information between Alice and Bob, $I_{AB}$, and between Alice and Eve, $I_{AE}$, are zero, while $I_A$ and $I_B$ are non-zero. As the interaction time goes on, $I_{AB}$ and $I_{AE}$ increase at the expense of the local information held by Alice and Bob. As time increases further, Eve distills more information from Alice and $I_{AE}$ becomes much larger than $I_A$. On the other hand, due to the loss of information from Alice's side, the non-local information between Alice and Bob decreases. The dynamical behavior of these different types of information is depicted in Fig.(3a). Fig.(3b) describes the dynamics of the local and non-local information for a small value of $\langle\sigma_{1z}\rangle_{eq}$ (say $\simeq 0.3$). In this case, the amount of information distilled by Eve is smaller than that shown in Fig.(3a). However, Alice's information is only slightly affected. Therefore, decreasing the absolute equilibrium value of one qubit maximizes the non-local information between the two qubits. Fig.(4) displays the dynamics of information where Alice has coded her information in a partially entangled state. In this case, for large values of $\langle\sigma_{1z}\rangle_{eq}$ and $\langle\sigma_{2z}\rangle_{eq}$, the information gained by Eve increases abruptly at the expense of Alice's information, and for $t > 100$, $I_{AE} > I_{AB}$.
However, as one of the absolute equilibrium values is decreased, the non-local information between Alice and Bob, $I_{AB}$, increases very quickly, and its maximum value is always larger than that depicted in Fig.(3). As time goes on, $I_{AB}$ decreases slowly, and its minimum value is always larger than that depicted in Fig.(3b). Although Eve's information increases quickly, $I_{AE} < I_{AB}$; only in a very small range of time is $I_{AE} > I_A$. So for this choice of the channel's parameters, Alice and Bob can communicate safely for a long period of time. The effect of the ratio between longitudinal and transverse relaxation times, $\alpha_i$, is shown in Fig.(5), where we set $\alpha_1 = \alpha_2 = 0.3$, $\langle\sigma_{1z}\rangle_{eq} = 1$ and $\langle\sigma_{2z}\rangle_{eq} = 0.9$. It is clear from Fig.(5a) (where we assume that Alice coded her information in an MES) that $I_A$ decreases very quickly and Alice's state turns into a completely mixed state for $t > 200$; consequently, Eve's information increases very quickly and reaches its maximum value faster than in Fig.(4a), in which $\alpha_i = 0.7$. As soon as Alice loses her information completely, $I_{AB}$ and $I_B$ asymptotically take the same values for $t > 200$, which is much earlier than displayed in Fig.(3a). In Fig.(5b), we assume that the information is initially coded in a PES. In general, the dynamics of information is similar to that shown in Fig.(5a), but from Fig.(4a) and Fig.(5b) we can see that the safe communication time decreases and the non-local information between Alice and Bob, $I_{AB}$, decreases very quickly. From Figs. (4 & 5), one concludes that the absolute equilibrium values and the ratio between longitudinal and transverse relaxation times can be considered control parameters. One can improve the local information of one qubit by decreasing the equilibrium value of the other qubit. Also, the eavesdropper's information can be minimized by decreasing the absolute equilibrium value of one qubit and increasing the ratio between longitudinal and transverse relaxation times. In this case, the information loss is always smaller than the information shared between the sender and receiver. Therefore, the users can increase the safe communication time and improve the non-local information. Also, the initial state used to code the information plays an important role in secure communication. It is clear that coding information in a maximally entangled state is much better than using partially entangled states, since in the former case the users can increase the safe communication time by controlling the channel parameters. Conclusion The time evolution of a system consisting of two qubits passing through a Bloch channel is investigated. The quantum dense coding protocol is implemented using two different initial state settings: maximally and partially entangled states. The coded information is sent with high accuracy by increasing the absolute equilibrium values of the two qubits and decreasing the ratio of the longitudinal to transverse relaxation times. However, if the absolute equilibrium value of one qubit decreases, the decoded information decreases. It is shown that using a maximally entangled state for coding information is much better than using a partially entangled state. This means that maximally entangled states are more robust than partially entangled states when they travel through Bloch channels. The local and non-local information are quantified for different values of the channel parameters. There are some cases where the eavesdropper can distill more information at the expense of the travelling coded information.
However, the partners can communicate safely when the non-local information between the two users is larger than that distilled from the travelling coded information. Also, the absolute equilibrium values and the ratio of the longitudinal to transverse relaxation times can be considered control parameters. It is clear that for large values of the absolute equilibrium parameters of both qubits, the local information of both qubits decreases faster and, consequently, the information gained by the eavesdropper increases. However, if the absolute equilibrium value of one qubit decreases, its corresponding local information is only slightly affected. Therefore, to send the coded information from the sender to the receiver safely, one has to decrease the absolute equilibrium value. Also, as one increases the ratio of the longitudinal to transverse relaxation times, the survival time of the local and non-local information increases.
2010-07-08T19:35:14.000Z
2010-07-08T00:00:00.000
{ "year": 2010, "sha1": "68ac48b683ee8d1b515a23f9afa87feff2136a76", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1007.1446", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "68ac48b683ee8d1b515a23f9afa87feff2136a76", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
46969161
pes2o/s2orc
v3-fos-license
Time to Push: Use of Gestational Age in the Electronic Health Record to Support Delivery of Relevant Prenatal Education Content Introduction: Clinicians must provide anticipatory guidance that pregnant clients would find useful but might not seek out independently. Client-facing health information resources should a) satisfy clients’ health self-management queries, and b) provide anticipatory guidance at developmentally appropriate times. Care Guide (by our technology partner, Maternity Neighborhood™) is an online maternity education platform positioned to meet pregnant clients’ information needs through high-quality, curated content paired with secure provider/client messaging. The research version of Care Guide is called Maternity Information Access Point (MIAP). Little is known about how clients perceive or engage with maternity education delivered via patient portal or personal health record. Methods: This qualitative study employed focus groups and four-week field-testing periods with English- and Spanish-speaking pregnant women enrolled in Medicaid. User satisfaction and system usability were evaluated through self-report instruments. Results: Twelve of the 16 participants logged usage of MIAP, with the amount of usage varying widely. Satisfaction (4.3/5) and usability (4.7/5) were rated highly. Weekly content push emails were a popular feature; participants agreed the content was relevant, timely, and useful. Forgetting passwords and lack of experience with technology were barriers to use. Discussion and Conclusion: Gestational age captured in the electronic health record can support automated pushing of content containing highly relevant anticipatory guidance. Platform features that guide the user through content can be leveraged to promote continued user engagement. Users’ desire for easy access to content must be balanced against the need to safeguard protected health information. Digital newcomers may require in-person technical support. Introduction Pregnancy is a time of high information need as maternity care clients manage the health of their changing bodies and look ahead to birth and parenting. Pregnant clients' motivation to get answers to their maternity questions is evident from their frequent consultation of numerous online and offline informational sources. [1][2] However, information needs during the maternity care episode extend beyond those identified by the client; clinicians and health educators must provide anticipatory guidance to cover material that the client would find useful but might not seek out independently. 3 In fact, comprehensive and timely anticipatory guidance is a feature of prenatal care that is highly valued by clients 4 but may not be fully realized during a time-constrained office visit. 5 Therefore, client-facing health information resources can be a valuable adjunct to clinical visits 6 provided (1) that they satisfy clients' health self-management queries, and (2) that they provide anticipatory guidance at developmentally appropriate times. Publicly available pregnancy and childbirth websites and mobile apps are popular informational resources. 3,7 Their easy availability is an advantage, but many contain advertisements 7 and inaccurate information 8,9 and thus are perceived by users as not being completely trustworthy. 6,10 Popular commercial sites such as Babycenter.com and WhatToExpect.com provide anticipatory guidance via weekly subscription emails about fetal and maternal development tailored to the user's gestational age.
However, given that maternity clients do not always discuss their findings with their care providers, 3,[10][11] it may be difficult for providers to ensure that the anticipatory guidance their clients receive in this way is accurate, complete, and appropriately timed. Educational content delivered by way of patient portals and Personal Health Records (PHRs) has the potential to mitigate some of the drawbacks of publicly available sites by putting the sponsoring organization or clinician in control of the content and the timing of its delivery. Additionally, the opportunity to find one's own clinical data on the same platform as content vetted by a provider may be appealing for clients. Although the concept of delivering informational content via portal or PHR is not new, 11 there are few mentions in the literature of this format being applied to maternity care [12][13] and of these, two describe only system features 13,14 and one is a study protocol. 15 The only one that examined users' (n = 12) experiences reported that the most popular user activity was viewing informational resources, but the system got mixed reviews for usability. 16 Thus, little is known about how clients perceive or engage with the maternity education components of portals or PHRs. The goal of our research program is to ensure that online maternity information resources are accessible to and meet the communication and learning needs of the women at greatest risk of health disparities. As such, the aim of the qualitative study reported here was to explore the feasibility and acceptability of an online maternity-education platform among Medicaid-enrolled women. To that end, we partnered with Maternity Neighborhood, the maker of a maternity-specific electronic health record (EHR) by the same name. Integrated with the EHR is a maternity education platform called Care Guide. This mobile-ready site is designed to meet pregnant clients' information needs through the provision of high-quality, curated, browsable and searchable educational content paired with secure provider and client messaging. Key data elements such as gestational age are shared between the EHR and Care Guide so that content can be pushed to users automatically via email based on their stage of pregnancy. Clinicians have the option of loading their own content into their practice's Care Guide, as well as the ability to push content to individual clients manually as clinical needs arise. The research version of Care Guide is known as Maternity Information Access Point (MIAP). The theoretical framework for this study was Coiera's common ground theory of health care communication, which posits that "for computational tools to be of value, they have to share ground with human beings. Users need to know how to use the system and the system needs to be fashioned to users' needs." 14 In this paper we report selected findings from MIAP that relate to the use of electronic health data. Methods This qualitative study consisted of four pairs of focus groups, each pair bookending a four-week field-testing period. Two pairs of groups were conducted in English and the other two in Spanish. The study was conducted in New York City from September through December 2015. Approval was obtained from the Columbia University Medical Center Institutional Review Board. Eligible participants were aged 18 years or older, no more than 35 weeks pregnant, enrolled in Medicaid, able to speak English or Spanish, and had a Wi-Fi-enabled device.
Potential participants were referred by local organizations that provide doulas or early Head Start services to low-income women. Participants received Target gift cards ($50 for each of two sessions) in compensation for their time. The following data were collected at intake by paper-and-pencil self-report: basic demographic data, information about internet access, and a measure of eHealth literacy (eHEALS). 15 The Newest Vital Sign (NVS), 16 a measure of health literacy, was also administered by study staff at intake. During the follow-up sessions, we collected the Wellness Portal Survey 17 as a measure of user satisfaction and selected items from the Unified Theory of Acceptance and Use of Technology (UTAUT) instrument, [18][19] as a measure of system usability. Both were adapted to reference MIAP and are scored from 1 to 5, where 1 signals dissatisfaction/lack of usability, 3 is neutral, and 5 indicates satisfaction/usability (see Appendix). All focus group sessions were audio recorded and conducted by trained facilitators according to approved focus group guides. Audio recordings were supplemented with notes taken by study staff. Maternity Neighborhood provided a log file of all MIAP activity during the study period. Each log file entry included a time and date stamp, the type of event (e.g., resource read, message sent by client), the user's ID, the user's gestational age on that date, and the name of the resource accessed, if applicable. At the time of the study, a limited number of Spanish-language resources came preloaded onto Care Guide, but the interface was available only in English. Spanish-speaking participants were provided with written instructions for navigating the few English-language menu items they would encounter. In order to ensure parity of content across languages, additional Spanish-language content was loaded onto MIAP from publicly available sources including womenshealth.gov, March of Dimes, and Lamaze International. Existing Spanish translations were used for the NVS, 19 eHEALS, 20 and UTAUT. 21 All other study materials were translated by the bilingual study staff and checked by at least one additional native Spanish speaker. Topics of the initial focus group sessions included barriers and facilitators to online access and information-seeking, preferred information formats, and desired features and content attributes of an education platform. Participants received a mobile hotspot to ensure adequate connectivity, as well as instructions for using the hotspot and logging in to MIAP. They were asked to log in and view a resource or send a message to study staff at least once a week. To the extent that time permitted, study staff provided basic technical support to participants who required assistance. During field testing, study staff pushed content via email, as shown in Figure 1, to participants once a week. Prior to the start of the study, each informational resource was tagged with an optimal gestational age, and a schedule was developed so that there would be at least one resource sent in each language for every gestational week. The platform calculates gestational age based on the user's estimated date of delivery. It has the functionality to automatically push content based on gestational age, but because of an inability to toggle the language of the interface, content was pushed manually so that users received content only in their preferred language. The follow-up focus group sessions served to elicit participants' experiences using MIAP, including their suggestions for improvement.
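As a concrete illustration of the scheduling mechanics described above, the sketch below derives gestational age from the estimated date of delivery (EDD) and selects the resources tagged for that week. This is a minimal sketch, not the platform's actual implementation: the function names, the library schema, and the example data are all hypothetical.

```python
from datetime import date

FULL_TERM_DAYS = 280  # 40 weeks; gestational age is counted back from the EDD

def gestational_age(edd: date, today: date) -> tuple[int, int]:
    """Return (weeks, days) of gestation implied by the estimated date of delivery."""
    days_pregnant = FULL_TERM_DAYS - (edd - today).days
    return days_pregnant // 7, days_pregnant % 7

def resources_for_week(library: list[dict], week: int, language: str) -> list[dict]:
    """Pick resources tagged with the user's current gestational week and language.
    Library entries are assumed to look like {"title": ..., "week": 32, "language": "en"}."""
    return [r for r in library if r["week"] == week and r["language"] == language]

# Example: a user due 2015-12-01, checked on 2015-10-06 (32 weeks, 0 days)
weeks, days = gestational_age(date(2015, 12, 1), date(2015, 10, 6))
library = [{"title": "Signs of preterm labor", "week": 32, "language": "en"}]
print(weeks, days, resources_for_week(library, weeks, "en"))
```

Counting gestational age backward from a 280-day term is the standard obstetric convention, and the EDD is the only EHR element the sketch relies on.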
Participants unable to attend follow-up sessions were interviewed individually. Focus group sessions were followed by peer debriefing sessions for the study staff in attendance. Audio recordings of focus group and peer debriefing sessions were transcribed and checked for accuracy. Conventional content analysis was employed to analyze the transcripts and notes in their original language using TAMS Analyzer 4.0. Two coders analyzed the transcripts independently, then discussed the codes to consensus. The principal investigator (AA) used an inductive approach to cluster codes into thematic categories and subcategories. Quantitative data were analyzed with simple descriptive statistics. Log files of each participant's activities on the platform were analyzed to determine the following: (1) number of login sessions, (2) number of messages sent by the user, (3) total number of resources viewed, (4) average number of resources viewed per login session, (5) number of articles received in pushed emails, (6) number of articles read from pushed emails, and (7) number of articles read that did not originate from a pushed email. Results Sixteen women (nine English speakers and seven Spanish speakers) participated in the focus groups (see Table 1). Their average age was 26.0 (SD = 5.6) and their average gestational age at the start of the study was 24 weeks, 0 days (range 13 weeks, 5 days to 32 weeks, 2 days). The Spanish-speaking participants were more likely to be experienced mothers (1 nulliparous, 6 multiparous) than the English-speaking participants (6 nulliparous, 3 multiparous). Participants were evenly split between Hispanic and non-Hispanic, most (62 percent, n = 10) were married or living with a partner, and they had an average income of $4,809 (SD = $4,842) per household member per year. Most (69 percent, n = 11) had high-speed internet at home, just under half (44 percent, n = 7) had unlimited mobile data, most (69 percent, n = 11) used Android devices, and the great majority (88 percent, n = 14) rated themselves as either "intermediate" or "expert" users on the internet. There is some likelihood, as measured by the NVS, that the majority of participants (69 percent, n = 11) have limited health literacy. However, based on their eHEALS scores (M = 31.5, SD = 4.4), they felt confident about their ability to find and use health resources on the internet. Log file data revealed that 12 of the 16 participants logged in to MIAP during the field-testing period. The 12 users averaged 6.3 login sessions (range 2-13) each and accessed an average of 14.7 informational resources (range 3-37), or about 2.5 resources per login session. Users received between 5 and 14 resources over the course of four content push emails (M = 9.7). On average, participants viewed over a third (M = 3.6, 37 percent) of the resources pushed to them. Ten of the 12 users viewed resources that were not pushed to them, for an average of 11.1 (range 2-31) resources not associated with a content push email. Three users sent the study staff one message each and two users sent two messages. Overall, the 12 users rated satisfaction and usability highly using a 1 to 5 scale where higher scores are desirable. The average across items for the Wellness Portal Survey was 4.3. Only two items averaged scores less than 4: "I understand the instructions on how to send messages with MIAP" (M = 3.8), and "MIAP helped me improve my health" (M = 3.9).
The highest scoring items were "I can navigate MIAP easily" (M = 4.7) and "Information I get from MIAP is exactly what I need to make more informed health decisions" (M = 4.6). The average score across UTAUT usability items was 4.7. The lowest scoring item was "Using MIAP increases my learning productivity" (M = 4.3). Items 1, 5, 6, 10, and 12 tied for the highest average rating of 4.8. In focus group discussions, participants said that they very much liked receiving pushed content weekly because they found the content to be relevant, easy to understand, and useful to them at their stage of pregnancy. Some reported that the weekly email prompted them to engage with the platform and that, once logged in, they followed links from the pushed resource to "related" resources. Otherwise, participants reported engaging with the platform primarily by scrolling through the library of resources; very few used the available search or filtering functionality (see Figure 2). Participants cited the need to remember a password and log in as a barrier to use. Because of this, and because most used their phones to access MIAP, they said they would have preferred an app that keeps them logged in over the existing mobile-ready website. None seemed concerned about the potential loss of privacy that staying logged in would imply, though none were using the platform in a real-world context in which their EHR data would also be accessible through the app. Few participants felt that all their information needs were met by their care provider, and many participants described using a tenacious approach to finding the additional information they sought. They repeatedly spoke of consulting multiple sources of information to triangulate and establish the trustworthiness of the information they found. Most said that knowing that content had been vetted by their care provider would increase their confidence in its trustworthiness, and several volunteered that they had largely stopped consulting other sites after gaining access to MIAP. Several participants told anecdotes about the value of the anticipatory guidance they received from MIAP. One woman was surprised to learn that her nasal congestion could be pregnancy related and another was relieved to learn that her bleeding gums and back pain were likely to resolve after delivery. Overall, English- and Spanish-speaking participants shared similar thoughts and experiences across the various focus group topics. The Spanish-speaking participants did report occasional difficulties with understanding materials if they were available only in English. As a research team, we also noticed that there were more digital newcomers among the Spanish-speaking participants. Generally, they required more one-on-one in-person support to do things like adjust settings on their phones, set up passwords, navigate email, log in to the platform, and send messages. These difficulties were not primarily language-related, as their phones were set up in Spanish. Rather, the ones who required assistance appeared to be relatively recently arrived in the United States and simply had limited familiarity with mobile technology and the internet. Time constraints on participants' availability prevented study staff from providing as much support as they felt participants needed.
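The per-participant usage figures reported above were derived from the platform activity log described in the Methods. A minimal pandas sketch of such an aggregation is shown below; the column names and event labels are assumptions, since the paper specifies the log entries only at a high level (time stamp, event type, user ID, gestational age, resource name).

```python
import pandas as pd

# Assumed log schema: one row per event
log = pd.DataFrame([
    {"user": "P01", "event": "login", "resource": None, "pushed": None},
    {"user": "P01", "event": "resource_read", "resource": "Back pain", "pushed": True},
    {"user": "P01", "event": "resource_read", "resource": "Nasal congestion", "pushed": False},
    {"user": "P01", "event": "message_sent", "resource": None, "pushed": None},
    {"user": "P02", "event": "login", "resource": None, "pushed": None},
])

def usage_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate the event log into per-user counts like those in the Methods."""
    reads = df[df["event"] == "resource_read"]
    return pd.DataFrame({
        "login_sessions": df[df["event"] == "login"].groupby("user").size(),
        "messages_sent": df[df["event"] == "message_sent"].groupby("user").size(),
        "resources_viewed": reads.groupby("user").size(),
        "read_from_push": reads[reads["pushed"] == True].groupby("user").size(),
        "read_not_from_push": reads[reads["pushed"] == False].groupby("user").size(),
    }).fillna(0).astype(int)

print(usage_metrics(log))
```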
Discussion In this study we found that an online maternity education platform was feasible and acceptable to the majority of the Medicaid-enrolled women invited to use it. Key among our findings was that MIAP helped to meet users' needs for anticipatory guidance in a timely way, primarily via content push emails. We envision that this popular feature could be extended to other data elements within the EHR to support clients' learning needs for a variety of health conditions. We found substantial variation in the amount of engagement with the platform among the participants. Use patterns appeared to be consistent with the level of enthusiasm the participants demonstrated in focus groups for acquiring information: one woman who felt that her care provider gave her all the information she needed never logged on, whereas another who described herself as a "research nut" logged the most encounters and resources viewed. Our interpretation of this finding is that MIAP provided an opportunity for women to meet their information needs to the extent that they perceived them, but without being intrusive. Somewhat in contrast to their self-depictions as aggressive seekers of information, participants' actual usage patterns tended to be relatively passive in nature. Rather than searching for specific terms or using filtering features, most participants opted to scroll through the resource library or respond to the prompt provided by content push emails. Their reported use of links to "Related" resources (see Figure 2) suggests that this style of engagement could be optimized by providing users with ready-made paths through the content. This may be done explicitly by suggesting topical curricula or small content bundles to the user based on their knowledge and interests, or more indirectly through deliberate curation and placement of content bundles. Users' desire for a mobile app and password-free access to the platform in place of a password-protected mobile-ready website raises the issue of how to balance ease of access with maintenance of data privacy and security on platforms that hold protected health information (PHI). The need to use passwords appeared to deter one or more participants from engaging with MIAP entirely. The need for passwords for platforms containing PHI remains, but supplementation with fingerprint authentication 22 or other methods yet to emerge may reduce the burden on the user. Platforms can explain clearly to the user the possible implications of a data breach so that users can at least understand the importance of creating and using a strong password. In addition to passwords, other potential barriers to use include language and a low level of comfort or experience with technology. It is critically important that both content and interfaces be made available in the languages dominant among the population being served. The principal investigator, who has worked multiple times with dual-language materials, attests that the challenge that such a strategy represents is outweighed by its tremendous value. With respect to experience with technology, even a website with few menu items and straightforward functionality presented a usability challenge to digital newcomers. Future generations will be digital natives, but for the moment, the needs of digital newcomers must be taken into account and adequate supports provided. Online guided tutorials are not enough; the most novice users will need a person at their side to help them get online.
Limitations This study has several limitations. Chief among these is that the use of MIAP was not integrated with participants' clinical care. We would expect users to be more engaged if given the opportunity to interact online with their actual care provider. Given the low volume of messages sent by participants, the prospect of messaging with research staff appears to have been a poor proxy for messaging with one's own provider. Also, the sample size was relatively small. Nevertheless, findings across groups and participants were rather consistent, suggesting that this limitation was a modest one. In addition, the high satisfaction and usability ratings participants gave MIAP may have been influenced by social desirability bias or acquiescence bias. We worked to minimize these types of bias by asking open-ended questions and soliciting both negative and positive opinions, but cannot rule out the possibility of their influence on our findings. Lastly, we were unable to confirm non-use by specific participants until after the focus groups had concluded, and as such lost the opportunity to inquire directly whether their non-use was due to lack of interest in the platform or whether participants experienced barriers to its use. Therefore, our conclusions about passwords and lack of experience with technology are based more on our observations and on indirect mentions by participants than on their explicit statements that these were the primary reasons for non-use. Generalizability and Future Work Participants' receptiveness to tailored pushed content raises the question of the generalizability of the concept. It is possible that there are other situations in which patients would appreciate receiving pushed content based upon one or more data elements in the EHR. A patient education platform such as the one described here would not be necessary for implementation; the full text and images of the content could be emailed directly to patients who opt in. For example, following the maternity care episode, guidance on infant development and vaccinations could be pushed to parents automatically based on the date of delivery. Indeed, reminders and anticipatory guidance regarding vaccines based on birthdate have relevance throughout the lifespan, such as for the shingles vaccine that is recommended for most adults aged 60 and older. Many other data elements could be leveraged for the purpose of pushing content, including diagnosis codes, medications, items from the problem list, procedure codes, and hospital discharge. Certain new diagnoses, such as diabetes, are similar to pregnancy in that they impose many new self-management learning needs that cannot be satisfied in a single office visit. A programmed schedule of diabetes education content could be triggered by the initial appearance of a diagnosis code. The initial appearance of an element such as a new prescription for prednisone or the inclusion of nausea on the problem list could trigger delivery of content with tips for dealing with medication side effects or managing symptoms. Algorithms can be used to look for specific combinations of data elements, such as certain procedure codes plus discharge from the hospital, to then trigger delivery of anticipatory guidance specific to particular stages of recovery. For example, guidance on wound healing could be pushed days or weeks following discharge for a surgical procedure.
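A trigger scheme of the kind proposed here can be expressed as a small rule table. The sketch below is a hypothetical illustration, not an existing system: the trigger keys, content series, day offsets, and the duplicate-send guard (which also addresses cases like receiving the same content twice after a twin delivery, discussed in the next paragraph) are all invented for the example.

```python
from datetime import date, timedelta

# Hypothetical trigger table: EHR element -> education series of (day offset, topic)
TRIGGERS = {
    ("diagnosis", "E11"): [(0, "What is type 2 diabetes?"),
                           (7, "Blood glucose self-monitoring"),
                           (14, "Diet and exercise basics")],
    ("event", "surgical_discharge"): [(3, "Early wound care"),
                                      (21, "Signs of delayed healing")],
}

def schedule_pushes(element, first_seen: date, already_sent: set) -> list:
    """Turn the first appearance of an EHR element into dated content pushes,
    suppressing anything this patient has already received."""
    pushes = []
    for offset, topic in TRIGGERS.get(element, []):
        if topic not in already_sent:  # duplicate-send guard
            pushes.append((first_seen + timedelta(days=offset), topic))
            already_sent.add(topic)
    return pushes

sent = set()
print(schedule_pushes(("diagnosis", "E11"), date(2017, 4, 20), sent))
print(schedule_pushes(("diagnosis", "E11"), date(2017, 4, 21), sent))  # nothing re-sent
```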
Successful implementation of tailored pushed content based on EHR elements would depend on buy-in from patients and clinicians, and thoughtful design. For instance, a content push should be suppressed when delivery would be inappropriate (e.g., a patient is new but their diagnosis is not) or a nuisance (e.g., receipt of the same content twice following the delivery of twins). Verification by a clinician prior to a content push might be desirable in situations where complex judgments must be made about whether or not a patient is a good candidate for receiving the pushed content. The future work planned for MIAP includes making the platform available as a mobile app. As such, additional options become available for alerts and for users to customize when and how they prefer to receive such alerts. The move to the app format is a good opportunity to add an interactive "welcome" tutorial to make the app accessible to digital newcomers. Lastly, we plan to tailor content modules to users' demonstrated competency gaps as assessed by knowledge and attitude questions. Conclusion Gestational age captured in the EHR can support automated pushing of content containing the highly relevant anticipatory guidance that maternity clients need to support optimal self-management during pregnancy. Platform structure that guides the user through a curated content bundle can be leveraged to promote continued user engagement. Users' desire for easy access to content must be balanced against the need to safeguard PHI. Digital newcomers may require in-person technical support to gain access to digital health resources.
2018-06-12T00:46:36.236Z
2017-04-20T00:00:00.000
{ "year": 2017, "sha1": "65f8ac96fc509e1ff425a304053e686733f01870", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.13063/2327-9214.1281", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65f8ac96fc509e1ff425a304053e686733f01870", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249427419
pes2o/s2orc
v3-fos-license
The Effect of Strict Lockdown on Omicron SARS-CoV-2 Variant Transmission in Shanghai Omicron, the current SARS-CoV-2 variant of concern, is much more contagious than previous variants. Whether strict lockdown could effectively curb the transmission of Omicron is largely unknown. In this retrospective study, we compared the strictness of government lockdown policies in Shanghai and other countries. Based on the daily Omicron case number from 1 March 2022 to 30 April 2022, the effective reproductive numbers in this Shanghai Omicron wave were calculated to confirm the impact of strict lockdown on Omicron transmission. Pearson correlation was conducted to identify the factors determining strict lockdown outcomes in the 16 different districts of Shanghai. After a very strict citywide lockdown began on April 1st, the average daily effective reproductive number decreased significantly, indicating that strict lockdown could slow down the spread of Omicron. Omicron control is more challenging in districts with higher population mobility, and lockdown is more likely to decrease the number of asymptomatic carriers than the symptomatic cases. All these findings indicate that strict lockdown can curb the transmission of Omicron effectively, especially its asymptomatic spread, and suggest that differentiated COVID-19 prevention and control measures should be adopted according to the population density and demographic composition of each community. Introduction Beginning in December 2019, the COVID-19 outbreak suddenly and rapidly spread all over the world [1]. As there were no available vaccines and drugs at first, all countries relied on non-pharmaceutical interventions (NPIs) and applied the lockdown strategy in 2020 as a critical prevention and control measure [2]. Even though vaccines from Pfizer, Moderna, Sinopharm, and other companies were later listed by WHO for emergency use, the emergence of new variants of concern (VOC) such as Delta drove a number of SARS-CoV-2 waves, and lockdowns were still implemented by the majority of governments from time to time [3]. On 2 November 2021, the novel SARS-CoV-2 VOC Omicron was first collected in South Africa, after which it spread rapidly [4]. It caused abrupt epidemic outbreaks across South Africa, then Europe, and eventually the rest of the world by outcompeting the Delta VOC, which had accounted for at least 90% of genomes sequenced globally in October 2021 [4]. This change suggests a strong selective advantage of Omicron, as supported by further mutational profiling [5]. In detail, the significant number of mutations in the SARS-CoV-2 receptor-binding domain (RBD) gives the Omicron variant a higher affinity for human angiotensin-converting enzyme 2 than the Delta variant. These changes give the Omicron variant a short mean serial interval of 3 days and an assumed R0 as high as 10 [6,7]. Another important observation in the Omicron epidemic is the reduced odds of hospital admission for patients, which has been taken as a justification for the relaxation of public health and social measures (PHSM), as most countries have chosen to do [8]. However, the epidemiology study of the Hong Kong Omicron wave in early 2022 revealed that the intrinsic severity of Omicron may not be much lower than that of the ancestral strains [9]. Furthermore, Omicron displays key mutations associated with immune escape (K417N, E484A, T478K in the RBD), forcing researchers to develop novel vaccines for Omicron, as the vaccines previously administered to the public are no longer ideal [10].
Thus, lockdown measures may still be needed in the face of the Omicron wave, but their effect remains under-explored in the current literature. In this article, we compared the strictness of the Shanghai lockdown policy with that of other countries. We further evaluated the effect of the strict lockdown on Omicron spread in Shanghai since 1 April 2022 and identified critical factors for making the lockdown strategy work optimally. The asymptomatic rate in Shanghai was also analyzed. We expect that the experience of the Shanghai Omicron epidemic could provide valuable information to other countries encountering highly transmissible SARS-CoV-2 strains in the future. We would like to suggest that authorities formulate differentiated lockdown strategies according to the population density and demographic composition of each community, so that COVID-19 waves can be controlled effectively with minimal social, economic, and psychological costs. Data Resource The daily case numbers in this Shanghai Omicron wave, from 1 March 2022 to 30 April 2022, were used for the analyses. To quantify the strictness of government policies, we followed the Oxford Coronavirus Government Response Tracker project [11], which originally considered nine metrics: school closing, workplace closing, cancellation of public events, restrictions on gatherings, closed public transport, stay-at-home requirements, restrictions on internal movement, international travel controls, and public information campaigns. In our research, the public information campaign metric was excluded owing to the difficulty of accessing such information from non-English-speaking countries such as South Korea and Japan. A final stringency index was then calculated from the remaining eight metrics. Stringency indices of Shanghai at the three PHSM stages were calculated separately to visualize the change in government response, and the raw data are shared in the Supplementary Material. For the United States, United Kingdom, Germany, France, and other countries, the strictest COVID-19 prevention and control policies since 2020 were input and processed in the same way. Their stringency indices were compared with the Shanghai Stage III stringency index to assess government responses during lockdown. The Trend of the Shanghai Omicron Wave The raw data of daily new infections during the three PHSM stages were smoothed using a 7-day-averaged, 4th-order polynomial method. The asymptomatic rate was calculated with the following Equation (1) and smoothed: Asymptomatic rate = (asymptomatic cases / daily new infections) × 100% (1) The trend of the Omicron wave in each district was plotted in the same way. Effective Reproductive Number Rt Estimation of the effective reproductive number Rt is a reliable and common way to evaluate changes in disease transmission over time. It has been widely used in the COVID-19 pandemic to help policymakers and public health officials assess the effectiveness of interventions [12]. Based on the daily case numbers at the three PHSM stages, we implemented a time-dependent method for Rt calculation using a previously reported gamma distribution for the Omicron variant [13]. The real data were confirmed to fit the model. Effective Interval of Lockdown and Its Correlation with Other Factors In order to better assess the effect of lockdown, we defined the lead time from the lockdown starting date to the date on which daily Omicron cases peaked as the effective interval (EI) of lockdown. We further evaluated the impact of infected cases and population mobility on the EI.
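The time-dependent Rt method cited above ([13]) is not reproduced in the text; a minimal Cori-style sketch of such an estimator is given below. The gamma parameters (mean of 3 days, matching the serial interval quoted in the Introduction) and the case series are placeholders, not the study's actual inputs.

```python
import numpy as np
from scipy.stats import gamma

def rt_series(cases, si_mean=3.0, si_sd=1.5, max_lag=14):
    """Crude time-dependent Rt: R_t = I_t / sum_s w_s * I_{t-s},
    with w a discretized gamma generation-interval distribution."""
    shape = (si_mean / si_sd) ** 2
    scale = si_sd ** 2 / si_mean
    w = np.diff(gamma.cdf(np.arange(max_lag + 1), a=shape, scale=scale))
    w /= w.sum()
    rt = np.full(len(cases), np.nan)
    for t in range(1, len(cases)):
        lags = min(t, max_lag)
        pressure = sum(w[s - 1] * cases[t - s] for s in range(1, lags + 1))
        if pressure > 0:
            rt[t] = cases[t] / pressure  # cases today per unit infectious pressure
    return rt

daily_cases = [120, 180, 260, 400, 590, 830, 1100, 1400]  # placeholder counts
print(np.round(rt_series(daily_cases), 2))
```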
In this correlation analysis, two algorithms, the infection index and the active infection index, were involved: Infection index = ln(daily new infections per one million people) (2) The infection index considers only the number of daily Omicron cases on the day before lockdown. Active infection index = ln(daily new infections per one million people) × ln(daily subway ridership) (3) The active infection index considers both the number of daily Omicron cases on the day before lockdown and population mobility. In Equation (3), the daily subway ridership was used as an indicator of population mobility [14]. The correlations between these two indices and the EIs of the 16 districts of Shanghai were examined by two-tailed Pearson correlation in GraphPad Prism 9.3.1. Stringency Index of Shanghai Lockdown Shanghai is one of the largest cities in the world, with a population of 25 million. On 1 March 2022, Omicron variant BA.2 hit Shanghai and the case number increased rapidly in the following weeks. The Shanghai government decided to implement a "static management" lockdown in the eastern half of the city on March 28th and then in the whole city on April 1st, to keep Omicron from spreading rapidly. During the lockdown, all schools and workplaces were closed except for necessary health care services, including hospitals and COVID-19 testing providers. Public transport was closed, and a "stay-at-home" order was given to almost all residents in the city except those working on Omicron prevention and control. All public events were canceled, and gatherings were restricted. On June 1st, Shanghai started to open up gradually, tentatively, and cautiously. The calculated stringency index of the Shanghai lockdown is 97, which is only slightly lower than the 100 of the India lockdown, but higher than that of all other countries (Figure 1). The Shanghai lockdown scores 100 in seven metrics and 75 in international travel controls. During its lockdown period, India completely closed its international borders, and only international flights with special permission to conduct cargo operations were allowed [15]. In contrast, Shanghai has adopted circuit-breaker arrangements for international passenger flights since the start of the COVID-19 pandemic. In detail, designated airline companies are allowed to operate one international flight between Shanghai and another city every week. The passengers are quarantined for two weeks, and the flight is paused for a certain period according to the number of COVID-19 cases, if any, among these passengers. Thus, Shanghai already imposed as many restrictions as possible during the lockdown, which is one of the strictest worldwide. Omicron Transmission before and after the Lockdown in Shanghai As mentioned in the Methods section, we classified three PHSM stages. The daily stringency index during Stage I was merely nine, as only individuals with a travel history to medium- or high-risk regions were monitored. As Shanghai started to control certain communities at high risk in a "2+12" manner in Stage II, the daily stringency index became 45. After the "static management" in the eastern half of Shanghai on March 28th, the daily stringency index further increased to 66. Implementation of the lockdown policy in Stage III finally reached a daily stringency index of 97 (Figure 2, green squares).
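For reference, the district-level correlation analysis defined in Equations (2) and (3) of the Methods can be sketched in a few lines; the district values below are placeholders, since the per-district inputs are not tabulated in this excerpt.

```python
import numpy as np
from scipy.stats import pearsonr

def infection_index(cases_per_million):
    return np.log(cases_per_million)                      # Equation (2)

def active_infection_index(cases_per_million, ridership):
    return np.log(cases_per_million) * np.log(ridership)  # Equation (3)

# Placeholder district-level values (the study used 16 districts)
cases = np.array([35.0, 120.0, 60.0, 210.0])        # day before lockdown, per million
ridership = np.array([8.0e5, 2.3e6, 1.1e6, 3.0e6])  # daily subway ridership
ei_days = np.array([7, 12, 9, 18])                  # effective interval per district

r, p = pearsonr(active_infection_index(cases, ridership), ei_days)
print(f"r = {r:.3f}, p = {p:.3f}")
```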
Next, we calculated Rt, the effective reproductive number indicating changes in disease transmission over time, for the three stages to analyze Omicron transmission under the different PHSM policies. The change of Rt during the Omicron wave is shown in Supplementary Figure S1. The daily averaged Rt of Stage I and Stage II are 1.76 (95% CI: 1.44 to 2.09) and 1.79 (95% CI: 1.7 to 1.89), respectively. These Rt are lower than the average Rt of 3.4 in the Omicron epidemics of South Africa, the UK, the Netherlands, and India [16]. After the citywide lockdown of Shanghai (Stage III), the Rt decreased significantly to 1.04 (95% CI: 1.03 to 1.06), demonstrating that lockdown can effectively prevent the highly transmissible Omicron from surging. Accordingly, after imposing the lockdown on April 1st, the daily Omicron cases in Shanghai peaked on April 12th, meaning an EI of 11 days (Figure 2). This is significantly shorter than the EI of 20 days during the lockdown measures of Wuhan, China in 2020 facing the original SARS-CoV-2 strain [17], probably due to a stricter lockdown policy in Shanghai than in Wuhan. Control of Omicron in 16 Districts of Shanghai after Lockdown Shanghai has 16 districts, including both urban regions with high population density and rural areas on its outskirts. We further compared the trend of Omicron waves in all 16 districts of Shanghai (Figure 3). Interestingly, it was noticed that the 16 different districts have distinct EIs, ranging from 6 days to 20 days (mean ± SD: 10.94 ± 4.09). Double peaks were observed in a few districts, including Hongkou, Yangpu, and Baoshan, as predicted by some previous lockdown modeling studies [18]. Such smaller secondary peaks may be an unavoidable trade-off for relieving the burden on the healthcare system. We sought to interpret the significant difference in EI across the 16 districts under the same lockdown policy. As more infections put a bigger population at risk of Omicron, we analyzed the impact of daily new infections before lockdown on the EI. The infection index, which considers only daily new infections, was introduced, and its calculation is given in Equation (2). The infection index of each district was then plotted against the corresponding EI, as shown in Figure 4A. However, the low correlation coefficient (r = 0.416) and the non-significant p-value (p = 0.109) indicate that daily new infections before lockdown alone cannot determine the EI. Since only Omicron carriers who are in contact with others can form a transmission chain, we introduced the active infection index, another algorithm that takes both daily new infections and population mobility into account. In Equation (3) of the active infection index, the daily subway ridership in each district of Shanghai was adopted as the indicator of population mobility [14]. The correlation analysis between the EI and the active infection index in Figure 4B revealed a significant positive correlation (r = 0.5974, p = 0.0145), showing that both a higher number of infected individuals and extensive population movement before lockdown pose challenges to curbing the Omicron epidemic. This result demonstrates that timely lockdown before the surge of case numbers is important for rapid control of Omicron transmission, especially in urban regions with high population mobility. The Asymptomatic Rate in the Shanghai Omicron Wave Asymptomatic spread is a characteristic feature of Omicron [19].
In this Shanghai Omicron wave, the asymptomatic rate fluctuated above 80% (Figure 5A), which might be attributed to factors including the traits of Omicron, the vaccination rate, and early detection of infection cases under mass testing. According to the Shanghai Municipal Center for Disease Control and Prevention, people aged 60 or under who are in good health have accounted for 84.5% of cases, pushing up the asymptomatic rate. In addition, more than 88% of Shanghai residents are fully vaccinated. Even though the immunoevasive property of Omicron makes it difficult for vaccines to achieve full protection against infection and transmission, their effectiveness against symptomatic disease, hospitalization, and mortality remains valuable, as pointed out by the COVID-19 vaccine weekly surveillance reports of the United Kingdom Health Security Agency [20]. In order to understand the effect of lockdown on the asymptomatic transmission of Omicron, we analyzed the asymptomatic rate before and after lockdown (Figure 5B). The data showed that during Stage I and Stage II, the median asymptomatic rate was 94.67% and 96.84%, respectively. After lockdown, the median asymptomatic rate of Stage III dropped to 90.11%. It is unclear why there is a significant decrease in asymptomatic transmission after the lockdown. Setting aside possible changes in PCR testing policy, one explanation could be that asymptomatic cases are more common in young and middle-aged individuals [21]. Young people are more active in social contact before lockdown; therefore, their infection risk is sensitive to lockdown. In contrast, older residents tend to develop symptoms because of their weakened immune systems, and their activity is largely limited to family or close neighborhoods (neighborhoods sharing kitchens and/or toilets). Accordingly, the city lockdown has little effect on Omicron transmission in family or close-neighborhood clusters, which explains the change in the asymptomatic/symptomatic ratio. Discussion In the Shanghai Omicron wave, the municipal government implemented the strictest lockdown policy. These measures effectively stopped the spread of Omicron, especially its asymptomatic transmission. Through correlation analysis, we found that timely lockdown before the surge of case numbers is critical for putting the Omicron epidemic under control in urban regions. Previous work also utilized the same Oxford Coronavirus Government Response Tracker project to evaluate the association between physical distancing interventions and the incidence of COVID-19 in 149 countries [22]. Among the nine metrics, their primary interventions of interest were those aimed at physical distancing, including the closure of schools and workplaces, restrictions on mass gatherings, public transport closure, stay-at-home regulations, and restrictions on movements within a country [22]. The association between the sequence of interventions and the change in the incidence of COVID-19 was analyzed. They found that a greater decrease in the incidence of COVID-19 was associated with earlier rather than later implementation of lockdown [22]. This is consistent with our correlation analysis, which implies that timely lockdown before the surge of case numbers is important for rapid control of Omicron transmission. Another previous study investigated the impacts of New Zealand's graduated, risk-informed national COVID-19 suppression measures in early 2020 on the epidemiology of COVID-19 [23].
They did a descriptive epidemiological study of all laboratory-confirmed and probable cases of COVID-19, which makes their results much more reliable and convincing. A feature of New Zealand's COVID-19 pandemic is the low proportion of asymptomatic infection compared with other countries, despite widespread testing. The low level of community transmission was believed to contribute to this. This supports our hypothesis that impeded community transmission after the lockdown in Shanghai reduced asymptomatic transmission between young and middle-aged residents, leading to a lower asymptomatic rate at Stage III. In a previous study, Patricio et al. conducted an empirical analysis of the impact of lockdown on SARS-CoV-2 transmission and death toll for a panel of 152 countries [24]. This study covered the time period from the start of the pandemic through 31 December 2020, spanning two previously circulating VOCs, Alpha and Beta [25]. They provided evidence that restrictions indeed had a significant effect in the first weeks after the policy introduction. The effect of NPIs on the reproductive number Rt peaked at about 10 days and disappeared at about 20. However, the authors emphasized that after 120 (continuous or discontinuous) days of strict lockdown, neither the SARS-CoV-2 spread nor daily deaths per million could be reduced by the lockdown. The authors concluded that restrictions played a role early on in the pandemic but had only transient effects that were difficult to replicate going forward. Thus, more attention should be paid to the social, economic, and psychological costs at the late stage of strict lockdown, in order to adjust PHSM on time [26]. Due to privacy concerns, COVID-19 data acquired and released by governments and healthcare authorities are mostly available only at a relatively coarse level, usually limited to the nationwide level rather than local areas [27,28]. Of note, Gao et al. presented a valuable community-level study of ancestral SARS-CoV-2 strain transmission and policy interventions in Wuhan, China [29]. The authors found that the viral spread in local communities was irrelevant to the socioeconomic position and the built environment of a community but was related to its demographic composition. Even though the elderly were more vulnerable to virus infection and had a higher mortality rate, the mobility of the young population was more associated with viral transmission in local communities. In our current work, we also attributed the significant decrease in asymptomatic transmission after lockdown in the age of Omicron to the young and middle-aged individuals, whose mobility is limited more severely by lockdown measures. Both of these studies suggest that differentiated lockdown measures should be carried out for young people and the elderly, as the elderly are less likely to be a reservoir of the chain of infection (until they are diagnosed) and limited food, nutrition, and medical supplies can easily threaten their life and health. Strengths and Limitations The epidemiological data analyzed in this study were collected through extensive PCR testing during the 2-month lockdown. Considering that the majority of countries have been adopting loose PHSM, this Omicron outbreak in Shanghai provides a precious chance to accurately evaluate the effectiveness of strict lockdown measures in curbing the spread of the highly contagious Omicron variant.
However, the estimation of population mobility in this study was limited to subway ridership, as information on other public transportation and personal vehicles was unavailable. Conclusions After its continuous evolution over the past two years, new strains of SARS-CoV-2 keep presenting new features that bring challenges to COVID-19 prevention and control. This study confirmed that strict lockdown measures are capable of effectively curbing the transmission of Omicron, the current highly contagious VOC. The success of the lockdown measures in Shanghai can be largely attributed to their potency in cutting off asymptomatic spread, which can trigger explosive COVID-19 outbreaks unexpectedly. Based on our findings, we suggest that differentiated lockdown strategies should be formulated for each community according to its population density and demographic composition in order to reduce the social, economic, and psychological costs of a city-wide lockdown. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines10091392/s1, Figure S1: The change of the time-dependent effective reproduction number in Shanghai's Omicron wave; Figure S2: Distribution of subway stations whose daily ridership on five days in April 2022 was used for population mobility calculation in each district; Table S1: Calculating subindex scores for indicators of metrics;
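Table S1 above refers to calculating subindex scores for the policy indicators. The sketch below follows the subindex formula published by the Oxford Coronavirus Government Response Tracker project, 100·(v − 0.5·(F − f))/N, for an ordinal policy value v on a 0..N scale with an optional targeting flag f; the indicator name and example values are illustrative, not taken from this paper.

```python
def oxcgrt_subindex(v, N, F=None, f=None):
    """OxCGRT sub-index score (0-100) for one indicator.

    v: recorded policy value on the ordinal scale (0..N)
    N: maximum ordinal value of the indicator
    F: 1 if the indicator carries a binary flag (targeted vs. general), else None
    f: recorded flag value (0 = targeted, 1 = general) when F is present
    """
    if v == 0:
        return 0.0                        # no policy recorded -> score 0
    penalty = 0.5 * (1 - f) if F else 0.0  # half-point penalty for targeted policies
    return 100.0 * (v - penalty) / N

# Example: school closures (C1, N = 3) required at level 2, applied in a
# targeted way (f = 0) -> 100 * (2 - 0.5) / 3 = 50.0
print(oxcgrt_subindex(v=2, N=3, F=1, f=0))
```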
2022-06-07T19:08:10.899Z
2022-06-07T00:00:00.000
{ "year": 2022, "sha1": "df1f35a8865de142fd0728ff984e47a73f9fa75c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/10/9/1392/pdf?version=1662470886", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ffe325d4ae9e2de7d32eddc06f67f2e4ba3bf375", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10238909
pes2o/s2orc
v3-fos-license
Manganese-catalyzed allylation via sequential C–H and C–C/C–Het bond activation Manganese-catalyzed sequential C–H and C–C/C–Het bond activation to synthesize allylic alcohols, allylated arenes, functionalized cyclopentenes and skipped dienes is reported. Complementary to the noble fifth- and sixth-row metals, direct C–H activation 1 using 3d-transition metal catalysis has fascinated chemists owing to these metals' abundance, low price and low toxicity, as well as to their potential to promote novel reactivity. 2 Over the past years, the goal of achieving sustainability in organic synthesis has propelled important research in this field and significant progress has been made. Base metals with flexible redox ability, such as Fe, 3 Co, 4 Ni, 5 and Cu 6 are extensively used in organometallic C–H activation today. Despite being the third most abundant transition metal, manganese is comparatively underutilized. 7 Manganese-mediated stoichiometric C–H activation has been explored since the 1970s; however, catalytic variants of these reactions have proved challenging. 7 Recently, the groups of Kuninobu and Takai, Wang, Ackermann and others have significantly advanced this field of research. 8 Manganese catalysts have been found to be versatile, as they can display unique reactivity and enable C–H functionalization with a variety of coupling partners containing polar multiple bonds. 8 Mechanistically, these reactions mainly involve the formal addition of a metallacycle to an unsaturated reaction partner or a substitution reaction. 8 In recent years, considerable efforts have been made to develop processes that merge C–H activation with challenging C–C/C–Het cleavage reactions, which could allow for the efficient introduction of two different functional groups into one molecule in a single step. 9 However, most of the examples reported to date suffer from the requirement for precious transition metal catalysts and stoichiometric activators. Very recently, a manganese-catalyzed substitutive C–H allylation through highly selective C–H/C–O functionalization was achieved by Ackermann et al. 8k Owing to our continuous interest in 3d-transition metal catalysis, we questioned whether manganese catalysis could serve as an alternative route to integrating C–H activation with β-carbon/β-heteroatom elimination, which is largely unexplored in this field. To date, cyclometalation has been the most straightforward and common method for the activation of C–H bonds. Such processes rely mainly on solvent-based techniques. From a sustainability perspective, solvent-free C–H activation processes are highly desirable. Recently, rhodium(III)- and iridium(III)-catalyzed C–H functionalizations under solvent-free conditions using a ball mill have been reported. However, to the best of our knowledge, first-row transition metal catalyzed C–H activation under neat conditions has not been developed thus far. Herein, our manganese-catalyzed coupling offers an environmentally friendly, operationally simple alternative to the more traditional solvent-based protocols, featuring an inexpensive catalytic system. In this report, various coupling partners, including vinyl-1,3-dioxolan-2-one, 2-vinyloxirane, vinylcyclopropane and diazabicycle, are applied in manganese catalysis for the first time, leading to the convenient synthesis of allylic alcohols, allylated arenes, functionalized cyclopentenes and skipped dienes (Scheme 1).
We started our investigation by reacting N-(2-pyridyl)-indole (1a) with vinyl-1,3-dioxolan-2-one (2aa) under [Mn2(CO)10] catalysis in diethyl ether at 80 °C. This transformation involves the cleavage of a stable C–O bond to form an easily modifiable C=C bond and an alcohol moiety. To our delight, the desired product 3a could be isolated in 41% yield (for details, see Table S1 in the ESI†). Subsequently, employing [MnBr(CO)5] as the catalyst precursor afforded product 3a in 84% yield in the presence of sodium acetate. The yield of 3a could be further improved to 90% by increasing the temperature to 90 °C. Notably, the reaction occurred most efficiently in the absence of solvent, and 3a could be isolated in 92% yield. Further experiments showed that Mn(OAc)2·4H2O or Cp*Mn(CO)3 in lieu of [MnBr(CO)5] were ineffective. Additionally, manganese is essential for this transformation, as in its absence no reaction occurred. With the optimized reaction conditions in hand, the generality of this protocol was first tested in the reaction of indole heterocycles with 2aa, and the results are given in Scheme 2. Compared to the neat conditions, our studies showed that product 3a could be isolated with higher E/Z ratios when diethyl ether was used. Indoles substituted with reactive electrophilic functional groups that can undergo subsequent functionalization, such as halides (-F, -Br, -I) and cyano substituents, were well tolerated. Introduction of a methyl or formyl group at the 3-position of the indole ring had no influence on the reaction efficiency (3f-3h), indicating that the reaction tolerates steric bulk in proximity to the reaction center of 1. Moreover, this protocol was not restricted to indole substrates but was also amenable to benzene- and thiophene-containing substrates, furnishing the corresponding products 3i-3k in good yields. Furthermore, an N-pyrimidyl moiety could be employed as a directing group, and the expected product 3l was isolated in 79% yield. Notably, this reaction also exhibited high efficiency on a large scale: the desired product 3a could be isolated in 91% yield (1.44 g) on a 6 mmol scale. In addition, this protocol was successfully applicable to 2-vinyloxiranes as the coupling partner, and the products 3a and 3g were isolated in 54% and 69% yield, respectively. Encouraged by these inspiring results, we then pursued a more challenging successive C–H/C–C activation. Gratifyingly, fine-tuning of the reaction conditions allowed the coupling of 1a with dimethyl 2-vinylcyclopropane-1,1-dicarboxylate (2ba) to proceed smoothly under solvent-free or concentrated DMF conditions (Scheme 3). Both electron-rich (-OMe) and electron-deficient (-F, -Br, -I) N-(2-pyridyl)-indoles are amenable to this method, giving the corresponding products 4b-4e in 52-90% yields. A 3-methyl substituent did not diminish the reactivity, as demonstrated by the formation of the desired products 4f and 4g in excellent yields. Moreover, the scope of the arene substrate could be further extended to phenylpyridine and thiophene derivatives, affording the corresponding products 4h-4j in moderate yields. Considering the importance of nitrogen-containing compounds and the ease of further transformation of this moiety, we next sought to apply our protocol to the introduction of a vicinal 2-arylated cyclopentenylamine unit. These are key structures found within biologically active small molecules and key intermediates in the synthesis of pharmaceutically important cyclopentane derivatives. 9e
Pleasingly, when diazabicycle 2ca was utilized, the desired aryl- and amine-substituted cyclopentenes 5a-5g were obtained in 70-94% yields (Scheme 4). Scheme 2 Unless otherwise specified, all reactions were carried out using 1 (0.2 mmol), 2 (0.3 mmol), [MnBr(CO)5] (10 mol%), and NaOAc (20 mol%) in Et2O (1.0 mL) at 90 °C under argon for 5 h; isolated yield, E/Z value given in parentheses. a Neat. b Reaction performed on a 6 mmol scale with a 47 h reaction time. c 2-Vinyloxirane (0.6 mmol) was used instead of 2aa. d 10 h. e 2-Methyl-2-vinyloxirane (0.6 mmol) was used instead of 2aa. Not only N-(2-pyridyl)-indoles but also olefinic C–H bonds could be functionalized under these conditions. For example, 2-(prop-1-en-2-yl)pyridine (1m) performed well in this transformation, and the desired products 3m and 5h, a skipped diene with a valuable handle for further derivatization, were obtained in 75% and 62% yield, respectively (Scheme 5). This transformation is presumed to proceed through an organometallic C–H activation process, as supported by radical trapping experiments (for details, see Scheme S1 in the ESI†). The reaction of 1a and 2aa under standard conditions was found to be compatible with the radical scavengers 2,6-di-tert-butyl-4-methylphenol (BHT) and 1,1-diphenylethylene. To gain more insight into the reaction mechanism, H/D scrambling experiments were next conducted (for details, see the ESI†). No H/D exchange at the 2-position of 1a was observed when 1a was simply mixed with CD3OD and sodium acetate. Approximately 71% deuterium was incorporated into the 2-position of recovered 1a when sodium acetate was replaced with [MnBr(CO)5]. Furthermore, a larger deuterium incorporation (85%) at the 2-position of 1a was observed when 1a was treated with CD3OD in the presence of both [MnBr(CO)5] and sodium acetate. These results suggest that the C–H activation step is reversible and might occur via a base-assisted cyclometalation process. In addition, a kH/kD = 1.0 was observed in parallel reactions of 1a and [D]-1a with 2aa, which suggests that cleavage of the C–H bond is not involved in the rate-determining step. Furthermore, when (3-pyridyl)-thiophene was utilized, the reaction occurred exclusively at the more electron-rich 2-position (Scheme 6). On the basis of the above results, we may conclude that the olefin coordination and insertion step is rate-determining, and that β-elimination is a facile process. 11 To acquire a better understanding of the observed reaction selectivity, a series of experiments was performed. As shown in Table 1, no obvious isomerization of the C=C bond was observed regardless of prolonged reaction time or decreased reaction temperature (entries 1-3, Table 1). On the contrary, the solvent was found to have a dramatic effect on the E/Z selectivity (entries 3-5, Table 1). Compared to the neat reaction, DMF had a negative effect on the final E/Z selectivity, while diethyl ether had a positive effect. From these results, we inferred that the involvement of a π-allylmanganese intermediate in the reaction mechanism might be excluded and that the solvent imparts selectivity during this transformation. Scheme 6 Intramolecular C–H competition experiment. Conclusions In conclusion, we have developed a general strategy to synthesize allylic alcohols, allylated arenes, functionalized cyclopentenes and skipped dienes, in which earth-abundant manganese is utilized as the catalyst. 12
This protocol represents a combination of C–H and C–C/C–Het bond activation. Both aromatic and olefinic substrates are functionalized in this reaction. This work broadens the scope of manganese catalysis to include a series of new coupling partners. Additionally, the reaction proceeds efficiently under neat conditions without requiring a large excess of the coupling partner as the solvent, which is unprecedented in earth-abundant metal catalysis. Finally, β-carbon and β-nitrogen elimination were shown to be feasible under low-valent manganese catalysis for the first time.
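As a quick arithmetic cross-check of the gram-scale result quoted above (1.44 g of 3a, 91% yield on a 6 mmol scale), the sketch below recomputes the percent yield from the isolated mass. The molecular formula C17H16N2O (M ≈ 264.3 g/mol) assumed for 3a is our inference from the substrate and coupling partner, not stated in the text.

```python
# Percent yield from isolated mass, limiting-reagent amount and molar mass.
# The molar mass below assumes 3a is C17H16N2O (our inference, ~264.3 g/mol).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict[str, int]) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

mw_3a = molar_mass({"C": 17, "H": 16, "N": 2, "O": 1})   # ~264.3 g/mol
theoretical_g = 6e-3 * mw_3a                              # 6 mmol scale
print(f"yield = {1.44 / theoretical_g:.1%}")              # -> ~90.8%, matching 91%
```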
2017-08-29T14:57:12.106Z
2017-02-24T00:00:00.000
{ "year": 2017, "sha1": "1aad369b420f453597fc6d2bf5c3f3b8810d368e", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/sc/c7sc00230k", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d00cb616e593c61a7e3267bc630d57e02d1f9ea", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
32900281
pes2o/s2orc
v3-fos-license
An all-silicon, single-mode Bragg cladding rib waveguide In this paper, we demonstrate a direct method of fabricating an all-silicon, single-mode Bragg cladding rib waveguide using proton beam irradiation and subsequent electrochemical etching. The Bragg waveguide consists of porous silicon layers with a low-index core of 1.4 that is bounded by eight bilayers of alternating high and low refractive index of 2.4 and 1.4. Here, the ion irradiation acts to reduce the thickness of the porous silicon formed, creating the optical barrier needed for lateral confinement. Single-mode guiding with losses as low as approximately 1 dB/cm was obtained for both TE and TM polarization over a broad range of wavelengths from 1525 nm to 1625 nm. Such an approach offers a method for monolithic integration of Bragg waveguides in silicon, without the need for multiple processes of depositing alternating materials. © 2010 Optical Society of America OCIS codes: (230.7370) Waveguides; (160.3130) Integrated Optics Materials; (130.5990) Semiconductor material; (130.3060) Infrared. References and Links 1. S. Y. Lin, E. Chow, V. Hietala, P. R. Villeneuve, and J. D. Joannopoulos, "Experimental demonstration of guiding and bending of electromagnetic waves in a photonic crystal," Science 282(5387), 274–276 (1998). 2. E. Yablonovitch, "Inhibited spontaneous emission in solid-state physics and electronics," Phys. Rev. Lett. 58(20), 2059–2062 (1987). 3. J. C. Knight, J. Broeng, T. A. Birks, and P. S. J. Russell, "Photonic band gap guidance in optical fibers," Science 282(5393), 1476–1478 (1998). 4. H. Schmidt, D. Yin, J. P. Barber, and A. R. Hawkins, "Hollow-core waveguides and 2-D waveguide arrays for integrated optics of gases and liquids," IEEE J. Sel. Top. Quantum Electron. 11(2), 519–527 (2005). 5. B. Temelkuran, S. D. Hart, G. Benoit, J. D. Joannopoulos, and Y. Fink, "Wavelength-scalable hollow optical fibres with large photonic bandgaps for CO2 laser transmission," Nature 420(6916), 650–653 (2002). 6. P. Yeh, A. Yariv, and E. Marom, "Theory of Bragg fiber," J. Opt. Soc. Am. 68(9), 1196 (1978). 7. P. Yeh, Optical Waves in Layered Media (Wiley, New York, 1988). 8. Y. Fink, J. N. Winn, S. Fan, C. Chen, J. Michel, J. D. Joannopoulos, and E. L. Thomas, "A dielectric omnidirectional reflector," Science 282(5394), 1679–1682 (1998). 9. J. N. Winn, Y. Fink, S. Fan, and J. D. Joannopoulos, "Omnidirectional reflection from a one-dimensional photonic crystal," Opt. Lett. 23(20), 1573–1575 (1998). 10. M. Ibanescu, Y. Fink, S. Fan, E. L. Thomas, and J. D. Joannopoulos, "An all-dielectric coaxial waveguide," Science 289(5478), 415–419 (2000). 11. Y. Yi, S. Akiyama, P. Bermel, X. Duan, and L. C. Kimerling, "On-chip Si-based Bragg cladding waveguide with high index contrast bilayers," Opt. Express 12(20), 4775–4780 (2004). 12. L. Pavesi, "Porous silicon dielectric multilayers and microcavities," Riv. Nuovo Cim. 20(10), 1–76 (1997). 13. A. Bruyant, G. Lerondel, P. J. Reece, and M. Gal, "All-silicon omnidirectional mirrors based on one-dimensional photonic crystals," Appl. Phys. Lett. 82(19), 3227 (2003). 14. E. Xifré-Pérez, L. F. Marsal, J. Ferré-Borrull, and J. Pallarès, "Porous silicon omnidirectional mirrors and distributed Bragg reflectors for planar waveguide applications," J. Appl. Phys. 102(6), 063111 (2007). 15. E. J. Teo, M. B. H. Breese, A. A. Bettiol, D. Mangaiyarkarasi, F. Champeaux, F. Watt, and D. J. Blackwood, "Multicolour photoluminescence from porous silicon using focused high-energy helium ions," Adv. Mater. 18(1), 51–55 (2006).
16. D. Mangaiyarkarasi, M. B. H. Breese, Y. S. Ow, and C. Vijila, "Controlled blueshift of the resonant wavelength in porous silicon microcavities using ion irradiation," Appl. Phys. Lett. 89(2), 021910 (2006). 17. E. J. Teo, M. B. H. Breese, E. P. Tavernier, A. A. Bettiol, F. Watt, M. H. Liu, and D. J. Blackwood, "Three-dimensional microfabrication in bulk silicon using high-energy protons," Appl. Phys. Lett. 84(16), 3202 (2004). 18. M. B. H. Breese, F. J. T. Champeaux, E. J. Teo, A. A. Bettiol, and D. Blackwood, "Hole transport through proton-irradiated p-type silicon wafers during electrochemical anodisation," Phys. Rev. B 73(3), 035428 (2006). 19. D. E. Aspnes, "Optical properties of thin films," Thin Solid Films 89(3), 249–262 (1982). 20. D. Mangaiyarkarasi, M. B. H. Breese, and Y. S. Ow, "Fabrication of three dimensional porous silicon distributed Bragg reflectors," Appl. Phys. Lett. 93(22), 221905 (2008). 21. http://www.rsoftdesign.com 22. A. A. Bettiol, S. Venugopal Rao, E. J. Teo, J. A. van Kan, and F. Watt, "Fabrication of buried channel waveguides in photosensitive glass using proton beam writing," Appl. Phys. Lett. 88(17), 171106 (2006). 23. P. Pirasteh, J. Charrier, Y. Dumeige, S. Haesaert, and P. Joubert, "Optical loss study of porous silicon and oxidized porous silicon planar waveguides," J. Appl. Phys. 101(8), 083110 (2007). 24. V. Lehmann, F. Hofmann, F. Möller, and U. Grüning, "Resistivity of porous silicon: a surface effect," Thin Solid Films 255(1-2), 20–22 (1995). 25. G. Z. Mashanovich, M. Milosevic, P. Matavulj, S. Stankovic, B. Timotijevic, P. Y. Yang, E. J. Teo, M. B. H. Breese, A. A. Bettiol, and G. T. Reed, "Silicon photonic waveguides for different wavelength regions," Semicond. Sci. Technol. 23(6), 064002 (2008). 26. R. A. Soref, S. J. Emelett, and W. R. Buchwald, "Silicon waveguided components for the long-wave infrared region," J. Opt. A, Pure Appl. Opt. 8(10), 840–848 (2006). Introduction Much progress has been made in recent times in the use of periodic dielectric structures with high refractive index contrast for waveguiding applications. Specific examples include two-dimensional slab waveguides where lateral light confinement is achieved by a photonic crystal [1,2], and optical fibers that confine light to a low-index core by virtue of a periodic glass cladding [3], the so-called 'holey fibers'. In both examples, light is confined by a photonic band gap rather than by total internal reflection. Such structures allow for the possibility of light being guided in a core region that has a lower refractive index than the surrounding cladding material. In the case of optical fiber, air guiding is made possible, enabling high-power transmission with essentially zero dispersion and material absorption. Important applications of such fibers include gas and liquid sensing [4] and high-power laser surgery [5].
Another example of a periodic dielectric structure that can be used for light guiding is the Bragg mirror. The Bragg mirror is a one-dimensional periodic structure made from alternating layers of high and low refractive index material. The stop band of such a structure can be easily tuned by choosing the thickness and refractive index of each layer. Using a Bragg mirror for light guiding was first proposed by Yeh et al. [6,7] and later realized by Fink et al. [8] and Winn et al. [9]. An example of such a structure is the omniguide, which utilizes concentric high-index-contrast dielectric layers as an omnidirectional (OM) mirror to enhance mode confinement in the low-index core [10]. Fabrication of these structures is less stringent on the resolution of the patterning process, since they do not require sub-micron periodicity in all three orthogonal directions. However, this design is difficult to implement on a silicon chip. Recently, Yi et al. [11] demonstrated Bragg waveguides in silicon by depositing multilayers of alternating Si and Si3N4 layers, an alternative structure realized using CMOS technologies. In this work, we make use of porous silicon (PS) technology to fabricate omnidirectional waveguides monolithically on a silicon chip. By varying the current density during electrochemical etching, the refractive index can be tuned over a wide range of values from 1.2 to 3. The thickness of each PS layer can also be independently controlled by varying the etching time [12]. Due to the self-limiting process of PS formation, alternating layers of high and low index can be grown in a single etching step. This eliminates the multiple steps of depositing alternating materials using several deposition processes. We also employ an ion beam irradiation step to provide a means of laterally confining light. Although OM mirrors have been demonstrated using PS [13], the prospect of waveguiding has only been proposed theoretically, by Xifré-Pérez et al. [14]. Here, we aim to extend this theoretical analysis to provide a fabrication method for the experimental realization of such a structure. Experiment In order to fabricate rib waveguides with a multilayered porous silicon cladding, we employed a patterning process that utilizes a focused beam of mega-electron-volt protons. This process has previously been used to form PS with spatially varying photoluminescence emission [15] and distributed Bragg reflectors with varying reflectivity [16]. Here, two closely spaced lines are selectively irradiated with a focused beam of 2 MeV protons into a bulk Si wafer with a resistivity of 0.02 Ω·cm, in order to create an optical barrier needed for lateral confinement (Fig. 1).
The ion irradiation acts to increase the local resistivity of the silicon, reducing the rate of PS formation in the irradiated regions during subsequent anodization with alternating high and low current density [17,18]. The resultant structure resembles a Bragg cladding rib waveguide with thinner layers on both sides of the core region. As the patterning occurs before the PS formation, the integrity of the PS is preserved. In order to optimize the waveguide design for guiding at 1550 nm, we simulated the reflectivity of the Bragg reflector for a large range of angles and both polarizations using the transfer matrix method. Since the waveguide modes approach the layers at glancing angles, the quarter-wave condition has to be modified for glancing incidence. The optimum is found to occur at about dh/(dh + dl) = 0.44, where dh and dl are the thicknesses of the high and low refractive index layers [14]. The results shown in Fig. 2 are for eight bilayers with thicknesses of 300 nm and 200 nm, and refractive indices of 1.40 and 2.38, respectively. The dotted lines in Fig. 2 demarcate the wavelength range where omnidirectional reflectivity occurs; in this case, it spans a ~200 nm wavelength range from 1480 nm to 1680 nm, covering the C + L communications band. The actual refractive index of the fabricated structure was determined by fitting the reflectance spectrum using the Bruggeman model for the PS dispersion relation [19]. Although omnidirectionality is not necessary for guiding, it enhances confinement and is correlated with a polarization-independent large band gap [14]. Results The fabricated waveguide structures consist of a central core layer of 1.4 index that is bounded by eight bilayers of alternating high and low index of 2.4 and 1.4. Figures 3(a) and 3(b) show the cross-sectional SEM images of the waveguides with a core width and height of 3 × 4 µm, irradiated using fluences of 2 × 10^15 protons/cm^2 and 4 × 10^15 protons/cm^2, respectively. The alternating high and low index bilayers correspond to the bright and dark contrast lines, respectively. We can observe from these images that the thickness of the PS layers decreases with ion fluence, resulting in an increase in the sidewall angles and in the displacement between the two regions. At the intermediate regions, where the thicker layers taper towards the thinner irradiated region, a chirping effect is produced (Fig. 4). Previous studies have shown that chirping widens the reflectance band gap, which is important for stronger light confinement in the low-index core [20]. In this case, the uniform damage profile of 2 MeV protons over the first ~40 µm is important for maintaining a constant, slower PS growth rate over the depth of the device layer. The Finite Element Method (FEM) was used to simulate the fundamental mode field diameter and the single-mode condition supported by the proton-written structures at a free-space wavelength of 1.55 µm. This software allows the number of guided modes to be found for an arbitrary structure with high index contrast [21]. Each grid can be subdivided into triangular segments using the non-uniform meshing feature in order to better conform to the index profile at the slanted edges of the sidewalls. We were able to accurately model the waveguide geometry for each fluence by incorporating a compression factor and a rate of change of sidewall angle with depth, as observed in Figs. 3 and 4.
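As an illustration of the transfer-matrix reflectivity calculation described above, the following is a minimal sketch using the characteristic-matrix (optical admittance) formulation. It assumes incidence from the n = 1.40 core onto eight bilayers (200 nm of n = 2.38 and 300 nm of n = 1.40, high-index layer first) terminated by a bulk silicon substrate of assumed index 3.48; the layer ordering and substrate index are our assumptions, not stated in the paper, and the actual indices there come from a Bruggeman-model fit.

```python
import numpy as np

def bragg_reflectance(wavelength_nm, theta0_deg, pol="TE",
                      n0=1.40, n_sub=3.48, bilayers=8,
                      n_h=2.38, d_h=200.0, n_l=1.40, d_l=300.0):
    """Reflectance of the PS Bragg stack via the characteristic-matrix
    (transfer-matrix) method, for light incident from the low-index core."""
    kx = n0 * np.sin(np.deg2rad(theta0_deg))  # conserved transverse component

    def admittance(n):
        cos_t = np.sqrt(1.0 - (kx / n) ** 2 + 0j)  # Snell's law gives the layer angle
        eta = n * cos_t if pol == "TE" else n / cos_t
        return eta, cos_t

    eta0, _ = admittance(n0)
    eta_s, _ = admittance(n_sub)
    M = np.eye(2, dtype=complex)
    for _ in range(bilayers):
        for n, d in ((n_h, d_h), (n_l, d_l)):  # high-index layer first (assumed)
            eta, cos_t = admittance(n)
            delta = 2.0 * np.pi * n * cos_t * d / wavelength_nm  # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                              [1j * eta * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, eta_s])  # admittance-method boundary vector
    r = (eta0 * B - C) / (eta0 * B + C)
    return float(abs(r) ** 2)

# Reflectance near glancing incidence, across and beyond the C + L band
for wl in (1400, 1480, 1550, 1625, 1680, 1750):
    print(wl, "nm:", round(bragg_reflectance(wl, theta0_deg=80.0), 4))
```

Glancing-angle incidence shifts the high-reflectivity band relative to normal incidence, which is the reason the quarter-wave condition is modified to dh/(dh + dl) ≈ 0.44 in the design above.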
Figure 5 shows the cross-sectional profile of the simulated designs for 2 × 10^15 protons/cm^2 and 4 × 10^15 protons/cm^2. It can be seen from this image that most of the power is confined in the low-index core for both structures, and the corresponding fundamental TE and TM modes can be seen from the distribution of the electric field components in the x and y directions. It is also apparent that the TE mode is larger than the TM mode, since it is less confined in the lateral direction. According to simulations of the 1/e electric field width for the fundamental mode profile [Fig. 6(a)], the TE mode spot size is slightly larger (by about 0.1 µm) than the TM mode for both fluences. As the sidewalls of the top and bottom cladding become sharper, the gap through which the modes can leak out closes, increasing the light confinement within the Bragg mirrors. We see a remarkable decrease in mode size in both TE and TM polarizations. Simulations show that the mode field diameters of the higher-fluence structures are about 1 µm smaller than those of the lower-fluence ones for a given core width. This is useful for designing waveguide bends with a smaller radius of curvature. Figure 6(b) shows a series of simulations where both the width and height of the core have been varied. The lines plotted indicate the boundary between single-mode and multimode operation for the 2 × 10^15 protons/cm^2 and 4 × 10^15 protons/cm^2 waveguide designs, whereby regions below and to the left of the lines are single-mode in both polarizations. Single-mode guiding can be supported by large cores of several microns in width and height, and the boundary is found to shift further upwards for the lower fluence. Since the higher-order modes become more confined by the sharper sidewalls of the more heavily irradiated structure, a smaller core size is needed for single-mode operation. As the height increases beyond 7 µm, no guiding is observed in the TE polarization for the 2 × 10^15 protons/cm^2 waveguide design; the structure behaves like a planar waveguide and light can no longer be laterally confined. Propagation losses were measured using the scattered light technique [22]. A 30 mW C + L broadband source (1525-1625 nm) is coupled into the waveguide using a 60× objective. A polarizing beamsplitter cube and half-wave plate are inserted into the beam path, enabling discrimination between TE and TM polarizations. Light scattered from the top of the waveguides is collected using a microscope coupled to a highly sensitive Peltier-cooled InGaAs camera (Xeva-FPA-1.7-320). Uncertainty in the loss data is determined from statistical fluctuations of five independent waveguide measurements. Figure 7 shows the scattered light intensity in dB over a length of 1 cm. Images are recorded away from the edges in order to avoid edge scattering effects. Both waveguides fall into the single-mode regime since they have a core width and height of 3 × 4 µm. For a fluence of 2 × 10^15 protons/cm^2, losses of 0.9 ± 0.1 dB/cm and 0.7 ± 0.1 dB/cm are obtained for the TE and TM polarizations, respectively. The scattered light for the TE polarization exhibits higher intensity and broader width when compared to the TM polarization. At a higher fluence of 4 × 10^15 protons/cm^2, the propagation loss increases to 2.8 ± 0.1 dB/cm and 2.5 ± 0.1 dB/cm for the TE and TM polarizations. Although the more heavily irradiated structure has sharper sidewalls that provide better lateral confinement, the modes suffer more interaction with the sidewalls and more light leakage through the thinner bilayers formed at higher fluence. This can be seen from the simulations in Fig. 5, which can account for the higher loss values obtained.
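As an aside on the loss extraction, the sketch below shows the standard way a scattered-light profile is turned into a propagation loss figure: since the scattered intensity in dB falls linearly along the guide, a straight-line fit gives the loss as minus the slope. The data here are synthetic, generated to mimic the ~0.9 dB/cm TE figure quoted above; they are not the measured profiles.

```python
import numpy as np

# Synthetic scattered-light profile: position along the guide (cm) and
# detected intensity converted to dB (values are illustrative only).
z_cm = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
intensity_db = 30.0 - 0.9 * z_cm + rng.normal(0.0, 0.05, z_cm.size)

# The intensity in dB decays linearly with propagation distance, so the
# propagation loss in dB/cm is minus the slope of a linear fit.
slope, intercept = np.polyfit(z_cm, intensity_db, 1)
print(f"propagation loss ~ {-slope:.2f} dB/cm")
```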
In order to reduce the propagation loss, it is important to determine the sources of loss. These include material absorption, scattering from interfaces (σrms ~9 nm) and scattering from nanocrystallites [23]. Free-carrier absorption can account for the majority of the propagation loss in these p+ anodized waveguides. However, it is important to note that absorption in p+ nanocrystalline Si is much lower than in crystalline silicon due to trapping at surface states [24]. This means that increasing the porosity can reduce the free-carrier absorption of the nanocrystals in the core layer. Sidewall roughness caused by beam intensity fluctuations of the direct writing process and mechanical vibration of the scanning stage also contributes to the propagation loss. These fluctuations can be averaged out by scanning the pattern several times. Optimization of the waveguide design by increasing the core size and the number of bilayers can further help reduce the loss [25]. Conclusion We have demonstrated a direct method of fabricating a single-mode Bragg cladding rib waveguide monolithically in silicon, without the need for multiple deposition processing steps. Our structures demonstrate low losses of 1-3 dB/cm in both TE and TM polarizations over a broad wavelength range of 1525-1625 nm. By accurately controlling the ion fluence at each region, it is possible to tune the waveguiding properties for optimal performance in a single patterning step. Larger single-mode cores of several microns can be produced with lower fluence, offering lower insertion loss and better mode matching with optical fibers. On the other hand, higher fluence produces structures with sharper sidewalls that support a smaller mode diameter, enabling the better lateral confinement needed for guiding light around tight bends. Since these waveguides are fabricated entirely in silicon (oxide-free), tuning the Bragg cladding periodicity should result in low-loss mid-infrared waveguiding [25,26]. Furthermore, the porous nature of these waveguides also makes them applicable for sensing applications. We are currently investigating waveguide bends, and the results will be presented in future work. Fig. 4. Close-up SEM images of the top and bottom claddings of the waveguide sidewalls for fluences of 2 × 10^15 /cm^2 and 4 × 10^15 /cm^2, respectively. Fig. 6. (a) Plot of the 1/e electric field width as a function of core width for the TE and TM polarizations. (b) Theoretical single-mode boundary as the width and height of the core are varied.
Fig. 7. Scattered light intensity as a function of length for (a) 2 × 10^15 /cm^2 and (b) 4 × 10^15 /cm^2, determined from the scattered light images in the insets.
2017-09-27T14:50:45.859Z
2010-04-26T00:00:00.000
{ "year": 2010, "sha1": "567720c35a5558fc70ebb1e7b95757fe923976a1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.18.008816", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "567720c35a5558fc70ebb1e7b95757fe923976a1", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }