Columns: id (string), source (string), version (string), text (string), added (date), created (date), metadata (dict)
id: 265677317
source: pes2o/s2orc
version: v3-fos-license
A life cycle assessment of biological treatment scenario of municipal solid waste in developing country (case study: Makassar, Indonesia)
Municipal Solid Waste Management (MSWM) is a significant challenge in developing countries, including Indonesia, where landfilling is the predominant waste treatment method. This study examines the case of Makassar City, where landfilling is still in use and composting is utilized only to a limited extent. The research evaluates and compares the environmental impacts of the current practice and three alternative scenarios involving biological treatment, using life cycle assessment (LCA). Four scenarios were examined: the Business as Usual (BaU) scenario; Landfill and Composting; Landfill and Anaerobic Digestion; and Landfill, Composting, and Anaerobic Digestion. The study considers waste transportation, landfilling, anaerobic digestion, and composting within its system boundary, using annual waste processing as the functional unit. The environmental impacts assessed are global warming, acidification, and eutrophication. The findings indicate that the BaU scenario has the highest environmental impact, particularly for global warming, with emissions of 8,436,685.61 kg CO2-eq/year. In contrast, alternative scenario 3, which combines landfill management, composting, and anaerobic digestion, shows a relatively lower Global Warming Potential (GWP). However, further measures are needed to reduce emissions effectively, such as covering the compost piles and managing their mixing.
Introduction
The challenge developing countries face in sustainable development is to create an efficient and economical waste management system. This applies particularly to urban waste, which is strongly influenced by the income level of the population, consumption patterns, and economic development [1][2][3]. Sustainable development principles are implemented in waste management through the waste hierarchy, which ranges from preventing waste generation, through preparing waste for reuse, recycling, and recovery via alternative processes, to the appropriate disposal of non-recoverable waste.
Each technology can have a positive impact in one aspect while simultaneously having a negative impact in another. To assess, compare, analyse, evaluate, and estimate the environmental impact of a product, the Life Cycle Assessment (LCA) method is the systematic approach commonly used [4]. An LCA study testing five scenarios covering material recovery facility (MRF)/recycling, composting, incineration, landfilling, and collection showed that composting is the most environmentally sustainable approach for MSWM [5]. Other studies evaluated the effectiveness of composting, mechanical-biological treatment (MBT), and other management strategies, and found that composting and MBT outperform incineration, landfilling, and other waste management methods [6]. Further research comparing landfilling with alternative MSWM options found that landfills have the highest global warming potential (GWP) among waste management approaches due to higher emissions of methane and carbon dioxide [7,8]. Anaerobic digestion has several advantages, including reducing the need for landfill space, generating energy, and mitigating pollution [9,10].
In developing countries such as Indonesia, the capacity of waste management systems, which remain centred primarily on landfilling, still needs to be improved. Only 41-42% of the total waste generated, approximately 61 million tons per year [11], is transported and disposed of in landfill sites. Most cities in Indonesia, like Makassar City, use open dumping at these landfill sites [12], resulting in environmental degradation and risks to human health during their operation. This study uses LCA to determine the environmental impacts of several waste treatment scenarios, focusing on biological processing. The waste management methods considered include landfilling with or without energy recovery, composting, and anaerobic digestion.
Methods
The method used in this study is manual LCA calculation with spreadsheets in Microsoft Excel. The stages of LCA are: goal and scope definition, system boundary determination, inventory analysis, life cycle impact assessment, and interpretation [13,14]. The present study employs the LCA methodology to compare and evaluate multiple biological treatment scenarios, aiming to identify a viable scenario for future implementation. The primary objective is to develop three alternative waste management scenarios designed explicitly for MSWM in Makassar. The three scenarios encompass distinct waste management approaches, including landfilling, composting, and anaerobic digestion. Current waste management practice at the Tamangapa Makassar Landfill comprises composting and landfilling. In the alternative scenarios, anaerobic digestion is introduced as an additional waste management method.
Goal and scope
The study aims to compare the environmental impact of biological treatment in three scenarios in order to select the scenario with the lowest emissions for future waste management in Makassar City. The scope of this study includes the transportation of waste from its source, waste management by composting using the windrow method, and waste treatment with anaerobic digestion as an alternative approach.
System boundary
Based on the waste composition in Makassar, most of the waste is organic and can be treated biologically. This study follows government policy, under which 70% of the waste generated will be processed through composting and anaerobic digestion. The system boundary is limited to the open windrow composting method, covering the entire process from shredding, through curing with windrow turners, screening, and stabilization, until the compost is ready to use (Figure 1). The application of compost is outside the scope of this system and is not considered. The anaerobic digestion process is limited to shredding and digestion, up to the production of biogas. For landfill, the system employed is open dumping without energy recovery or electricity generation (reflecting existing conditions).
Scenario
The waste management scenarios consist of the BaU scenario, representing current practice involving landfill and composting, and three alternatives. Scenario 1 assumes 70% of the waste goes to composting and 30% to landfill. Scenario 2 assumes 70% goes to anaerobic digestion, 10% to composting, and 20% to landfill. Scenario 3 assumes 50% goes to composting, 40% to anaerobic digestion, and 10% to landfill (Table 1). The scenario design considers several Indonesian regulations, such as Presidential Regulation No. 83/2018 on marine debris prevention and Minister Regulation (MoEF) No. 75/2019 on waste roadmaps by producers [15], which promote recycling and composting treatments. The scenario splits can be expressed as mass flows, as sketched below.
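As a rough illustration only (my sketch, not the authors' spreadsheet), the allocations in Table 1 can be expressed as annual mass flows; the tonnage constant is the 2025 projection quoted later in the Inventory analysis section.

```python
# Illustrative sketch of the Table 1 allocations as annual mass flows.
# TOTAL_T_PER_YEAR is the 2025 projection quoted later in the text.
TOTAL_T_PER_YEAR = 440_955.13

SCENARIOS = {
    "Scenario 1": {"composting": 0.70, "landfill": 0.30},
    "Scenario 2": {"anaerobic digestion": 0.70, "composting": 0.10, "landfill": 0.20},
    "Scenario 3": {"composting": 0.50, "anaerobic digestion": 0.40, "landfill": 0.10},
}

for name, split in SCENARIOS.items():
    assert abs(sum(split.values()) - 1.0) < 1e-9  # allocations must cover all waste
    flows = {route: frac * TOTAL_T_PER_YEAR for route, frac in split.items()}
    print(name, {route: f"{t:,.0f} t/yr" for route, t in flows.items()})
```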
Waste composition
The main composition of solid waste in developing countries, including Indonesia, consists primarily of materials that decompose naturally, i.e., biodegradable waste. Municipal solid waste in developing countries originates mainly from households (55-80%), followed by market or commercial areas (10-30%) [16]. Analysis of the waste composition shows that biodegradable waste dominates, at about 73% of the total; non-biodegradable waste, including plastics, cans, rubber, and metals, accounts for about 26%; and hazardous waste such as batteries represents a small fraction of about 1%. Figure 2 illustrates the waste composition observed in Makassar.
Inventory analysis
Before the analysis, the waste generation of Makassar City was projected to 2025, the target year of the National Strategy Policy (Jastranas). Jastranas is a national government policy roadmap for managing household waste and similar waste. It sets the target that by 2025 waste management should reach 100%, comprising a 30% reduction of waste and 70% treatment of waste [17]. Waste generation in 2025 was forecast from population time-series data (2017-2021) and the population growth rate, multiplied by the average per-capita waste generation in Makassar [18]. The projection method used is the geometric method. Seventy percent of the projected amount is taken as the input data to be analysed. Makassar produced 410,291 tons of waste in total in 2021, an average of 1,139 tons per day. The projected waste generation for 2025 is estimated to reach 440,955.13 tons per year, equivalent to 1,208 tons per day, assuming an average population growth rate of 0.58%. Based on the national strategic policy for 2025, Indonesia has set targets to reduce waste by 30% and treat 70%, the latter amounting to approximately 308,668.59 tons per year.
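A minimal sketch of the geometric projection described above; this is not the authors' spreadsheet, and the population and per-capita generation figures below are placeholders, since the paper's exact inputs are not reproduced in this excerpt.

```python
# Geometric projection sketch. POP_2021 and PER_CAPITA_KG_DAY are
# placeholder (hypothetical) inputs; GROWTH is the 0.58% rate from the text.
def project_waste_tons_per_year(pop0: float, growth: float, years: int,
                                per_capita_kg_day: float) -> float:
    """Geometric method: P_n = P_0 * (1 + r)^n, then scale by the generation rate."""
    pop_n = pop0 * (1.0 + growth) ** years
    return pop_n * per_capita_kg_day * 365.0 / 1000.0  # kg/day -> tons/year

POP_2021 = 1_400_000        # placeholder population of Makassar
PER_CAPITA_KG_DAY = 0.80    # placeholder per-capita generation rate
GROWTH = 0.0058             # 0.58% average annual growth (from the text)

waste_2025 = project_waste_tons_per_year(POP_2021, GROWTH, 4, PER_CAPITA_KG_DAY)
treated = 0.70 * waste_2025  # Jastranas target: 70% of generation is treated
print(f"2025 generation ~ {waste_2025:,.0f} t/yr; 70% treated ~ {treated:,.0f} t/yr")
```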
The emission factors and midpoint equivalence factors were obtained from various relevant sources. These include the Intergovernmental Panel on Climate Change (IPCC) report for the emission factors of waste transport vehicles [19]; the IPCC report for shredding emission factors [19]; the IPCC report for composting and landfill emission factors [20]; a published study for anaerobic digestion emission factors [21]; and prior research for heavy-equipment emission factors [22]. The input data for waste processing per ton come from resource use in transportation, composting, AD, and landfill operations. Each activity requires inputs such as fuel, water, and electricity in varying quantities; the specific amounts and volumes are given in Table 2. The amount of waste differs according to the percentage allocation of each scenario. Resource use in the composting process (fuel, electricity, and water) scales with the amount of waste input, as do the resources used in the anaerobic digestion process and in landfill activity. The type of power used in each scenario is shown in Table 3.
Interpretation
The impact categories presented in this study are divided into three sections: global warming potential (GWP), acidification (AP), and eutrophication. The first step in conducting the Life Cycle Impact Assessment (LCIA) is categorizing emissions into the selected impact categories [23]; for example, CO2, CH4, N2O, and CO are classified under global warming. The next step is characterization at the midpoint level. Characterization is a quantitative step that aggregates the various emissions within an impact category, using equivalency factors to ensure consistent units. Global warming is expressed as GWP (kg CO2-eq), acidification as kg SO2-eq, and eutrophication as kg PO4³⁻-eq.
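The characterization step can be sketched as a weighted sum, as below. This is an illustration, not the paper's spreadsheet; the factor values are common IPCC/CML-style equivalency factors and stand in for the study's actual factors from refs [19-22].

```python
# Hedged sketch of midpoint characterization: score = sum_i (emission_i * factor_i).
# Factors are illustrative (IPCC AR5-style GWPs, CML-style AP/EP), not the study's.
GWP_FACTORS = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}   # kg CO2-eq per kg
AP_FACTORS  = {"SO2": 1.0, "NOx": 0.7, "NH3": 1.88}     # kg SO2-eq per kg
EP_FACTORS  = {"PO4": 1.0, "NH3": 0.35, "NOx": 0.13}    # kg PO4(3-)-eq per kg

def characterize(inventory, factors):
    """Midpoint score from an emission inventory (kg/yr) and equivalency factors."""
    return sum(mass * factors[s] for s, mass in inventory.items() if s in factors)

# Example with a made-up annual inventory for one scenario (kg/yr):
inventory = {"CO2": 1.2e6, "CH4": 4.0e4, "N2O": 600.0, "NH3": 5.0e4}
print("GWP:", characterize(inventory, GWP_FACTORS), "kg CO2-eq/yr")
print("AP: ", characterize(inventory, AP_FACTORS), "kg SO2-eq/yr")
print("EP: ", characterize(inventory, EP_FACTORS), "kg PO4(3-)-eq/yr")
```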
Results and Discussions

The analysis of the existing BaU scenario showed the most significant impact in the GWP category, 8,436,685.61 kg CO2-eq; these emissions come from waste transportation and from landfill activity using heavy vehicles. Among the three alternative scenarios, scenario 1 yields the highest GWP, 2,512,671.78 kg CO2-eq, followed by scenario 2 with 2,327,498.49 kg CO2-eq and scenario 3 with 2,142,325.19 kg CO2-eq (Figure 3). Global warming increases in scenario 1 because of the composting process, which uses fuel to shred waste materials. Emissions in the global warming category were lowest in scenario 3, in which waste treatment is split almost equally between composting and AD and only 10% of the waste is landfilled. Acidification and eutrophication vary between scenarios because the inputs to the composting and AD processes differ in each scenario. The acidification category shows the largest value in scenario 1, at 302,893.48 kg SO2-eq/year (Figure 3). The lowest acidification emissions occur in scenario 2 (46,072.48 kg SO2-eq/year), dominated by AD. In the eutrophication category, the highest value is in scenario 1, at 101,808.21 kg PO4³⁻-eq/year, with emissions arising from the composting process; NH3 released during composting contributes to the eutrophication burden. The smallest eutrophication impact was in scenario 2 (27,259.43 kg PO4³⁻-eq/year), dominated by AD. The eutrophication load of the composting process is greater than that of the AD process. Among the scenarios considered, scenario 1, in which most waste is composted, shows high emissions in all impact categories: global warming, acidification, and eutrophication. Scenario 2, in which most waste goes to AD, showed a high GWP but the lowest acidification and eutrophication among the scenarios. Scenario 3, a combination of composting and AD, shows low global warming values but higher acidification and eutrophication, with the emissions contributed by the composting process. Scenario 3 was chosen as the optimal waste management approach among the evaluated scenarios. However, mitigation efforts are needed to minimize composting emissions, such as managing the mixing and covering the pile during the composting process.
Conclusion
This study presents the results of an LCA analysis of environmental impacts, with scenario 3 showing the lowest global warming emissions. However, the other impact categories, acidification and eutrophication, show high emission values originating from the composting process. Scenario 3 is nevertheless recommended as the optimal waste management approach among the scenarios. Mitigation is required to reduce acidification and eutrophication emissions from composting, such as managing mixing activity and providing a cover on compost stacks, which can reduce NH3 emissions.
Figure 3. The emission value for each scenario.
Table 1. Assumed waste allocation for MSWM treatment in Makassar, 2025.
Table 2. The resource input per tonne of waste processed.
added: 2023-12-07T20:01:51.162Z
created: 2023-11-01T00:00:00.000
metadata:
{
"year": 2023,
"sha1": "14e7f05ea85788df5f32667c2537093838db7719",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1263/1/012070/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "14e7f05ea85788df5f32667c2537093838db7719",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
}
---
id: 54647807
source: pes2o/s2orc
version: v3-fos-license
Use of antifungal principles from garlic for the inhibition of yeasts and moulds in fermenting green olives
This work evaluates yeast contamination during the fermentation of green olives. Plate counts were performed, and the corresponding strains were isolated, identified, and characterized. The results indicate that mould counts reach their maximum when acidity is high. The most abundant species were Pichia anomala, Debaryomyces hansenii, Candida versatilis, and C. tropicalis. These species were used in in vitro inhibition assays with culture media containing whole garlic, aqueous extract, and steam-distilled oils, and the minimum inhibitory concentrations (MICs) were determined. The concentrations showing antifungal activity did not inhibit lactic acid bacteria. The essential oil was the preparation most active against moulds and yeasts. The additional use of sorbic acid during fermentation and storage led to a decrease in the yeast population.
INTRODUCTION
The role of yeasts in green olive fermentation has not been defined with accuracy. Yeasts may be considered undesirable microorganisms in green olive fermentation because they deteriorate the quality of the fermented product when they grow on brine surfaces. The fermentation procedure adopted in Morocco usually results in film formation in high-capacity fermentors, and part of the fermenting olives may undergo spoilage. However, the film formed by yeasts and moulds may act as a barrier preventing air (oxygen) from entering the brine and can help the fermentation develop by creating a suitable environment for lactic acid bacteria (LAB). Unfortunately, the presence of yeasts and moulds may lead to the formation of enzymes such as cellulase and pectin-lyase, which may diffuse into the brine and cause deterioration of the fermenting olives.
Several plants are now known for their antibacterial (Beylier, 1979) and/or antifungal (Bonchrid and Flegel, 1984) properties. The use of whole plants or their extracts is preferred to synthetic preservatives in the field of food preservation (Faid et al., 1995). In a previous study we showed that when the whole plant was used, the activity was greater than that of the steam-distillation oil (Faid et al., 1995). In some cases the entire plant is not easy to handle and not practical for the user; therefore, oils are often used as the active compounds of the various plants. In the case of olive fermentation, the whole plant can be used in the fermentors, since there is no activity against lactic acid bacteria at the concentrations that inhibit yeasts.
In the present study, the yeasts and moulds involved in film formation were isolated and identified, and garlic principles were applied in laboratory inhibition assays with the aim of preventing film formation and the growth of yeasts and moulds during fermentation.
Olive preparation
Green olives of the Picholine variety were prepared according to the method described by Faid et al. (1994). Samples were taken from the brine and from the upper surface of the fermentor for chemical and microbiological determinations.
Chemical determinations
The pH values of the brines were determined using a Crison micro pH 2000 pH-meter. The titratable acidity was determined by titrating 10 ml of brine with an N/10 sodium hydroxide solution until the pH reached 8.5 as measured by the pH-meter. Results are expressed as percent lactic acid.
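As a worked example (mine, not from the paper), the titration above converts to percent lactic acid as follows; the 8.0 mL titrant volume is hypothetical.

```python
# Converting the NaOH titration into percent lactic acid (w/v).
LACTIC_ACID_MEQ_G = 0.09008  # g per milliequivalent (MW 90.08, monoprotic)

def percent_lactic_acid(v_naoh_ml: float, n_naoh: float = 0.1,
                        v_sample_ml: float = 10.0) -> float:
    """% lactic acid = mL NaOH x N x 0.09008 g/meq x 100 / mL sample."""
    return v_naoh_ml * n_naoh * LACTIC_ACID_MEQ_G * 100.0 / v_sample_ml

# e.g. 8.0 mL of N/10 NaOH to bring 10 mL of brine to pH 8.5:
print(f"{percent_lactic_acid(8.0):.2f} % lactic acid")  # ~0.72 %
```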
Microbiological methods
Ten ml of brine were added to 90 ml of sterile saline water (8.5 g/l) and serially diluted up to 10⁻⁶ for a surface plating procedure on Potato Dextrose Agar (Difco Laboratories, USA) acidified to pH 3.5 with sterile lactic acid. The plates were incubated at 28 °C. Yeast colonies were counted on the medium after 3 days.
Sixty-five yeast isolates were picked at random from the PDA plates used for the viable counts. The isolates were subcultured before being identified according to the simplified key for the identification of food yeasts described by Deak and Beuchat (1987).
Whole plant
The whole garlic was peeled and cleaned with sterile distilled water. The material was then blended in an Ultra-Turrax-type blender with the same volume of sterile distilled water. The resulting slurry was incorporated directly into the culture media (see below).
Water extract

10 g of red garlic were mixed with 60 mL of hot water (90 °C), allowed to stand for 15 min (the mixture was shaken every 2 to 3 min), and filtered on Whatman No. 4 filter paper. The aqueous extract was allowed to cool to 40 °C and mixed with the culture media (see In vitro inhibition assays).
Steam distillation
Individual fresh plants were steam distilled to isolate their essential oils. Amounts of 200 to 250 g of garlic were introduced into the distillation flask (1 L), which was connected to a steam generator via a glass tube and to a condenser to retrieve the oil, which was recovered in a funnel tube. Steam was applied for 3 hours, the recovered mixture was allowed to settle, and the oil was withdrawn.
In vitro Inhibition assays
The different preparations were incorporated into the culture media as follows: PDA for yeasts and moulds, PCA for Gram-negative bacteria, and MRS for lactic acid bacteria. The concentrations used were:
- whole garlic (%): 0.2, 3, 5
- aqueous extract (%): 1, 1.6, 2, 4, 10, 20
- essential oil (ppm): 20, 40, 100, 140, 200
- sorbic acid (%): 0.01, 0.02, 0.05, 0.06, 0.075, 0.1
The plates were spot-inoculated with the isolates and incubated at 30 °C. Growth was examined every day for 1 week. Growth or no growth of the different strains was used to evaluate the inhibition, as sketched below.
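A small sketch (not from the paper) of how growth/no-growth readings from such a spot-inoculation assay translate into MICs; the readings below are hypothetical.

```python
# Deriving a MIC from growth / no-growth readings across tested concentrations.
GROWTH, NO_GROWTH = 1, 0

def mic(concentrations, readings):
    """MIC = lowest tested concentration at which the strain shows no growth."""
    for c, r in sorted(zip(concentrations, readings)):
        if r == NO_GROWTH:
            return c
    return None  # not inhibited at any tested concentration

# Hypothetical readings for one yeast isolate against the aqueous extract (%):
extract_pct = [1, 1.6, 2, 4, 10, 20]
readings    = [1, 1,   1, 1, 0,  0]   # growth up to 4%, inhibited from 10%
print("MIC:", mic(extract_pct, readings), "%")  # -> 10 %
```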
Inhibition assays on olives
A bulk of green olives was divided into 3 portions of 25 kg each, which were dispensed in 50 L sterilized jars. The trials were performed as follows. Trial No. 1: green olives (control). Trial No. 2: green olives in the presence of sorbic acid (0.075%) + aqueous extract of garlic (10%). Trial No. 3: green olives in the presence of sorbic acid (0.075%) + garlic essential oil (200 ppm). The flasks were tightly closed and incubated at ambient temperature (22 °C). Yeast counts were followed for 3 months, and the assays were carried out in duplicate.
RESULTS AND DISCUSSION
Viable counts of yeasts. Figure 1 shows that yeast counts increase during the fermentation process of green olives. Maxima were reached after 11 days and 18 days, respectively, for the 2 trials, and were then maintained at the same level or decreased slightly. Yeasts may be the most frequent spoilage microorganisms in olives prior to fermentation (Faid et al., 1994). High yeast levels in the raw material may affect the quality of olives during fermentation and storage if no heat treatment is applied to the fruits. Similar results on yeast counts in fermenting green olives were reported by Asehraou et al. (1993). Yeast counts both in the brine and in the surface film were high in most samples. Olives are sold in bulk in Morocco, which suggests that no heat treatment is applied to olives exhibited in open containers. A high level of yeast contamination is undesirable for the product: these microorganisms can cause defects leading to abnormal odours and colours of the fruits. Some yeast species can use lactic acid as a carbon source, which may result in a pH increase, leaving the product susceptible to spoilage by bacteria such as Bacillus, Pseudomonas, and Clostridium. Other species can release cellulose- and pectin-degrading enzymes that soften and spoil the fruit during fermentation and/or storage.
Table I shows the most frequent yeast species identified in olives. Some species are very common contaminants of olives, as reported by other authors (Faid et al., 1994), and may affect the technology of fermented green olives. Among the species widely distributed in brines with high sodium chloride concentrations are Debaryomyces and Pichia; these two groups have a potent role in film formation together with moulds. Strains belonging to Debaryomyces had interesting properties, including growth in the presence of sodium chloride concentrations up to 12-15%, growth at 37 °C, the use of nitrate and urea, and the hydrolysis of lipids (lipase). Species belonging to Pichia are also highly represented. Pichia species are oxidative yeasts known to grow on the surface of brines and fermenting liquids; Pichia strains can use lactic acid as a source of energy, leading to a decrease in acidity, which is a dangerous phenomenon for the fermentation since it may allow the development of undesirable microorganisms that were previously inhibited by the acidity. Results showing the effect of whole garlic on yeast growth are reported in Table II. Partial inhibition was seen at 3% (13 of 15 isolates were totally inhibited); at 5%, all of the isolates were inhibited. There was a clear effect of garlic on the growth of yeasts on PDA, but the conditions for applying garlic in food preservation remain to be studied, given its strong odour, which cannot be accepted in all foodstuffs.
Moulds were more sensitive than yeasts, and a 3% content in PDA totally inhibited all the strains (Table II). This would be very useful for preventing the hygienic problems due to mycotoxin formation (Tantaoui-Elaraki and Letutour, 1985). Lactic acid bacteria were not inhibited by garlic, and only slight inhibition was observed at a 5% concentration. This is the most important finding, as it may lead to the application of garlic as a preventive agent against yeasts and moulds directly in the fermentation process of green olives. Since LAB were not inhibited by the concentration that totally inhibited yeasts and moulds, adding garlic to the fermentor would not disturb the fermentation process and may at the same time prevent mould and yeast growth.
The antifungal activity of garlic was studied in vitro, which is the best-known approach for studying the inhibitory activity of plant principles. Studies concerning the application of garlic to food systems are scarce. It would be worthwhile to exploit natural preservation through the antifungal properties of garlic or its extracts.
Inhibition tests using the aqueous extract showed a MIC of 10% (Table III). This is very interesting for the preservation of such foods, since garlic can be used as a natural preservative and flavouring agent. The aqueous extract of garlic is easier to handle and to apply in olive fermentation than any other compound. The concentration that inhibited all the strains was relatively high; this would depend on the initial amount used for the preparation of the extract.
Table IV shows the inhibitory activities of garlic essential oil against microorganisms. Yeasts and moulds are the most sensitive to garlic principles, especially the essential oil; the inhibition seen with whole garlic would be due mostly to the essential oil. The use of garlic as an antifungal inhibitor in food systems is questionable, since the organoleptic changes caused by its strong flavour would discourage its use in food preservation. In olives, however, the organoleptic change may not constitute a major problem, since in traditional fermentations garlic is used as a flavouring agent more than for its inhibitory activities.
The antimicrobial activities of various essential oils have been studied in depth, but little is known about garlic essential oil. Plant essential oils have generally been found to be more inhibitory to moulds and yeasts than to bacteria (Charai et al., 1995). This property is very useful, since in food fermentations the yeasts and moulds that grow in association with lactic acid bacteria, causing deterioration of organoleptic quality and/or food safety, can be destroyed without affecting the growth of the acidifying microbiota.
Inhibition tests using sorbic acid showed a MIC of 0.05% for four strains and less than 0.25% for 1 strain (Table V). These concentrations are within the limits allowed in processed dried foods (0.15 to 0.3%).
The antimicrobial activities of sorbic acid have been intensively studied (Restaino et al., 1982; Sofos and Busta, 1982; Zamora and Zaritzky, 1987). Sorbic acid is used in dried foods (Busta, 1982; Sofos, 1981), where its concentration may not exceed 0.2% in most foods. Replacing chemicals in foods with natural preservatives is preferable for safety; when this is not possible, a low concentration is preferred.
Sorbic acid is more active against yeasts and moulds in low-pH foods. This property is valuable because in low-pH foods yeasts and moulds are the main microorganisms that grow and cause deterioration of organoleptic quality and/or food safety. It can be exploited in fermenting olives, where the low pH may allow greater inhibition of the yeasts and moulds that are the most abundant contaminants; indeed, the most relevant problem related to olive spoilage comes from yeast and mould contamination.
The use of natural inhibitors combined with chemical inhibitors may decrease the inhibitory concentration required of the chemical preservatives. The use of natural plants and/or spices in food preservation should be encouraged to lower the toxicity of some foods caused by the high concentrations of chemicals used. Research in this field is still at a premium, and more investigations are required to reduce the use of sorbic acid and related chemicals.
Experiments were applied to 3 bulks of green olives (25 kg each), which were treated with garlic (at the MICs found in the in vitro assays) and stored at ambient temperature. Yeast counts after one month are reported in Figure 2: yeast levels were reduced in the 3 trials after one month, and there was no yeast growth during the following 3 months of storage after the last sample was taken. A clear difference was observed between the trials and the control, as shown in Figure 2. This gives evidence that the inhibiting systems used in our experiments could be a successful means of preserving olives against yeast contamination.
Figure 1. Growth pattern of the naturally contaminating yeasts in olives during fermentation. (+) debittered olives; (•) non-debittered olives.
Figure 2. Inhibition pattern of yeasts in the presence of the inhibiting systems. (x) sorbic acid + water extract; (•) sorbic acid + essential oil; (•) control.
Table III. Antimicrobial activity of the water extract of garlic. Micro: microorganisms; concentrations in percent; figures are the percent of strains inhibited.
added: 2018-12-05T05:44:33.699Z
created: 1997-04-30T00:00:00.000
metadata:
{
"year": 1997,
"sha1": "faaecfc102f3eada21b7229baa9950e7b8e13416",
"oa_license": "CCBY",
"oa_url": "https://grasasyaceites.revistas.csic.es/index.php/grasasyaceites/article/download/774/783/783",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "faaecfc102f3eada21b7229baa9950e7b8e13416",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
---
id: 12793079
source: pes2o/s2orc
version: v3-fos-license
Neuronal Networks during Burst Suppression as Revealed by Source Analysis
Introduction Burst-suppression (BS) is an electroencephalography (EEG) pattern consisting of alternating periods of high-amplitude slow waves (burst) and periods of so-called flat EEG (suppression). It is generally associated with coma of various etiologies (hypoxia, drug-related intoxication, hypothermia, and childhood encephalopathies), but also with anesthesia. Animal studies suggest that both the cortex and the thalamus are involved in the generation of BS; however, very little is known about the mechanisms of BS in humans. The aim of this study was to identify the neuronal network underlying both burst and suppression phases using source reconstruction and analysis of functional and effective connectivity in EEG. Material/Methods Dynamic imaging of coherent sources (DICS) was applied to EEG segments of 13 neonates and infants with a burst-suppression EEG pattern. The brain area with the strongest power in the analyzed frequency range (1–4 Hz) was defined as the reference region. DICS was used to compute the coherence between this reference region and the entire brain. Renormalized partial directed coherence (RPDC) was used to describe the informational flow between the identified sources. Results/Conclusion Delta activity during the burst phases was associated with coherent sources in the thalamus and brainstem as well as bilateral sources in cortical regions, mainly frontal and parietal, whereas suppression phases were associated with coherent sources only in cortical regions. The RPDC analyses showed an upward informational flow from the brainstem towards the thalamus and from the thalamus to cortical regions, which was absent during the suppression phases. These findings may support the theory that a “cortical deafferentiation” between the cortex and sub-cortical structures exists, especially in suppression phases compared to burst phases. Such a deafferentiation may play a role in the poor neurological outcome of children with these encephalopathies.
Introduction
Burst suppression (BS) is an electroencephalogram (EEG) pattern characterized by pseudo-periodic alternating phases of high-voltage activity (burst, 150-350 μV amplitude) and electrical silence (suppression, less than 25 μV amplitude), and is considered a global state of profound brain inactivation [1]. Burst suppression can occur under different conditions: early-onset epileptic encephalopathies such as Ohtahara syndrome and early myoclonic encephalopathy [2], coma [1,3], hypothermia [4], and general anesthesia [5]. That all these different conditions produce similar brain activity suggests that a unifying pathophysiological mechanism may underlie this broad range of inactivated brain states. While the mechanisms of BS are still poorly understood, the different etiologies resulting in BS indicate that the BS pattern represents a low-order dynamic mechanism that persists in the absence of higher-level brain activity [6]. This further suggests that there may be a common pathway leading to the state of brain inactivation, which may indicate a change in the fundamental properties of the brain's arousal circuits [7].
There are only a few experimental studies trying to understand mechanisms of BS. In vivo studies of anaesthetized cats, which aimed to identify the potential cellular correlates of burst suppression, showed that during EEG flattening, up to 70% of thalamic cells were completely silent while the remainder showed rhythmic discharges in delta frequencies. Note, the deeper the burst suppression, the more thalamic cells completely ceased firing [8]. These findings were supported by a recent positron emission tomography (PET) study of an infant with an early myoclonic encephalopathy showing a profound hypoperfusion and hypometabolism of the basal ganglia and thalamus as well as cerebral cortex in the interictal period [9]. In contrast to interictal findings, an ictal SPECT investigation of the same patient revealed a significant hyper-perfusion of the bilateral basal ganglia, thalamus, brainstem, and deep cortical layers of bilateral fronto-parietal cortices suggesting a functional deafferentation of the cortex from subcortical structures in early myoclonic encephalopathy [9]. In summary, previous studies have shown the thalamus, basal ganglia, brainstem and especially the fronto-parietal cortex, are all involved in the generation of BS. However, it remains unclear, which structure or structures are responsible for bursts, suppression, and further, how the temporal dynamics between these structures may explain the alternating pattern of burst and suppression.
Due to their poor temporal resolution, it is difficult to use PET, SPECT, or functional magnetic resonance imaging (fMRI) to answer these questions. Thanks to its higher temporal resolution (millisecond range), EEG provides a better option for analyzing the neuronal networks underlying short transient events such as burst and suppression phases [10,11,12,13,14]. However, scalp EEG is spatially blurred due to the ambiguity of the underlying static electromagnetic inverse problem [15], and it is particularly difficult to carry out electrical source imaging of deep brain structures [16,17,18].
Recent developments in EEG inverse solutions have substantially improved the localization efficiency of EEG, enabling the use of EEG data to investigate neuronal networks, even within deep brain structures [19,20,21]. One such method, dynamic imaging of coherent sources (DICS), detects brain regions that are coherent with each other, with a reference signal, or with a brain region [22]. It operates in the frequency domain for EEG and magnetoencephalogram (MEG) data by employing a spatial filter, and it can describe neuronal networks by imaging both the power and the coherence of oscillatory brain activity [22]. Applied to different types of tremor and epilepsies, DICS has characterized networks including the thalamus, cerebellum, and brainstem in MEG studies [22,23,24,25,26,27], as well as the thalamus and brainstem in recent EEG studies [19,20,21,28,29]. However, DICS cannot describe the interactions between the sources [21,23,30,31,32,33,34,35]. In order to analyze effective connectivity, i.e., the information flow between sources, renormalized partial directed coherence (RPDC) can be used [13,36,37].
Here, we examine the neuronal networks underlying burst and suppression phases in neonates and infants with severe encephalopathies using electrical source imaging, in order to identify common pathways to the state of brain inactivation, which may contribute to a better understanding of the fundamental properties of the brain's arousal circuits [6].

Subjects and Methods

Subjects

Thirteen infants and neonates with severe epileptic and non-epileptic encephalopathies with a BS EEG pattern were selected for the study. Four patients were recruited from the database of the Department of Neuropediatrics at the University Hospital of Schleswig-Holstein, campus Kiel, and the Northern German Epilepsy Centre for children & adolescents, Schwentinental/Raisdorf, Germany. Nine patients were selected from the Department of Neurophysiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK. Clinical and demographic data of the patients are presented in Table 1 (age ranged from one day to one year, mean age 4.3 months). All patients had global developmental delay of varying severity, as assessed by neurological examination. The majority of the patients had intractable seizures (six patients had no history of clinical seizures to date) (see Table 1). All of the patients had a BS EEG pattern (see Fig 1). Four patients had severe hypoxic-ischemic encephalopathy, six had epileptic encephalopathies, and three had neurometabolic disorders. Diagnoses were made according to the ILAE 2001 classification scheme (Commission on Classification and Terminology of the International League against Epilepsy, 2001). Neurological examination and structural MRI were performed in all cases. Routine EEGs (in accordance with the 10-20 system) were recorded in all cases and were independently evaluated by two neurophysiologists, who confirmed the type of EEG abnormality.
The study was acknowledged by the Ethics Committee of the Faculty of Medicine, University of Kiel, Germany, and was conducted according to the Declaration of Helsinki (current version, 1996) on biomedical research involving human subjects (Tokyo amendment). The study was registered and approved by the research and development office of the UCL Institute of Child Health, London, United Kingdom. Parents or legal guardians of the participants were informed about the research purposes and gave verbal informed consent, which was not recorded in order to keep the procedure anonymous. This procedure was also approved by the Institutional Review Boards.
EEG analysis
EEG recording. For the patients recruited from the Department of Neuropediatrics at the University Hospital Schleswig-Holstein and the Northern German Epilepsy Centre for children & adolescents, standard EEG recordings were performed according to the 10/20 system (EEG recording system: Neurofile; IT-med, Bad Homburg, Germany). In some cases the following additional electrodes were used for the analysis: FC1, FC2, FC5, FC6, CP1, CP2, CP5, CP6, FT9, FT10, TP9, TP10, ECG. Impedance was kept below 10 kOhm; the sampling rate was 512 Hz.
Reference was located between Fz and Cz. If required, EEGs were further processed for the correction of ECG artifacts. All EEGs were recorded during sleep.
For the nine patients recruited from the Department of Neurophysiology, Great Ormond Street Hospital for Children, London, standard EEG recordings according to a modified 10/20 system (EEG recording system: Natus XLTek, Oakville, Ontario, Canada) were used for the analyses. The following parameters were used during the recording: impedance was kept below 10 kOhm; the sampling rate was 512 Hz. The reference was located between Cz and Pz. If required, EEGs were further processed for the correction of ECG artifacts. All EEGs were recorded during sleep.
Selection of EEG epochs. The EEG segments were visually inspected by two experienced neurophysiologists independently. Appropriate DICS analysis requires long EEG segments in order to achieve an acceptable signal-to-noise ratio. Therefore, EEG segments with burst phases were selected, marked, and segmented, i.e., cut out from the entire EEG recording and then concatenated to obtain EEG segments containing 60 seconds of burst-only phases. The same was done for the suppression phases, so that 60 seconds of suppression phases were analyzed for each patient. Power spectrum analyses were performed to identify the predominant frequencies in both phases (Fig 2). The FFT analysis revealed a predominant frequency in the range of 1-4 Hz in all patients for both burst and suppression phases; this frequency range was used for further analyses.

Spectral analysis. A multitaper method [38] was used to compute the power spectra of all recorded EEG channels, separately for burst and suppression phases. Data epochs of one second were tapered using a set of discrete prolate spheroidal sequences [39]. The Fourier transformation was applied to the tapered data epochs and auto-spectra were computed. Subsequently, the spectra were averaged and the power spectrum was estimated (for a more detailed description, see [40]). The main frequency band was then defined accordingly for the subsequent source analysis. Because the burst and suppression phases have different amplitudes in the raw EEG, the implications of this amplitude change for the source analysis were evaluated by estimating the relative signal-to-noise ratio (SNR), calculated as the power in the 1-4 Hz band during the burst phase (numerator) over that during the suppression phase (denominator) in each patient. The relative SNR across patients ranged from 35.85 to 39.09 dB. In a further analysis, all initial power estimates of the individual EEG electrodes were combined into a pooled power estimate; this was done by computing the individual second-order spectra using a weighting scheme and estimating the power to obtain the pooled estimate, as previously described [41,42].
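For illustration, a minimal multitaper band-power and relative-SNR computation in the spirit of the steps above; this is my sketch, not the authors' pipeline (refs [38-42]), and the taper parameters (NW, K) and synthetic data are assumptions.

```python
# Multitaper 1-4 Hz band power of concatenated 1-s epochs, plus relative SNR.
import numpy as np
from scipy.signal.windows import dpss

FS = 512          # sampling rate (Hz), as in the recordings
NW, K = 3, 5      # time-bandwidth product and number of DPSS tapers (assumed)

def multitaper_band_power(x: np.ndarray, f_lo=1.0, f_hi=4.0) -> float:
    """Average DPSS-tapered periodograms over 1-s epochs; return 1-4 Hz power."""
    tapers = dpss(FS, NW, K)                          # (K, FS) taper matrix
    freqs = np.fft.rfftfreq(FS, d=1.0 / FS)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    epochs = x[: len(x) // FS * FS].reshape(-1, FS)   # non-overlapping 1-s epochs
    psd = np.mean([np.abs(np.fft.rfft(tapers * e, axis=1)) ** 2
                   for e in epochs], axis=(0, 1))     # average tapers and epochs
    return float(psd[band].mean())

# Relative SNR as defined in the text: burst power over suppression power (dB).
rng = np.random.default_rng(0)
burst = rng.standard_normal(60 * FS) * 100            # synthetic 60-s "burst" data
supp = rng.standard_normal(60 * FS)                   # synthetic "suppression" data
snr_db = 10 * np.log10(multitaper_band_power(burst) / multitaper_band_power(supp))
print(f"relative SNR: {snr_db:.1f} dB")
```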
Source analysis. DICS [22] was used to find the sources in the brain responsible for the delta activity during burst and suppression phases. Forward and inverse problems must be solved in order to locate the origin of specific EEG activity seen on the scalp. The forward solution was estimated with specified models of the brain. The brain was modeled by a five-concentric-spheres model [43,44], with a single sphere for each layer corresponding to white matter, grey matter, cerebrospinal fluid, skull, and skin. Standard T1 infant-neonate images (UNC School of Medicine) [45] were used to construct the volume conductor model. Part of the forward modeling and the source analysis was done using the open-source software FieldTrip [46]. The lead-field matrix, which contains the information about the geometry and conductivity of the model, was estimated next; the complete description of the forward solution has been given elsewhere [47]. In order to determine the coherence between brain areas, the spatial maximum of the power was identified in the frequency band of interest (1-4 Hz) and defined as the reference region. The selection of the reference region and of the subsequent coherent sources was automated. The final step was to apply a spatial filter to estimate the source signals from these identified regions in order to study directionality [48]. For a more detailed description of the source analysis, see our previous publications [19,20,21,28,29].
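For orientation, a toy rendering of the DICS spatial filter of Gross et al. (2001), W(r) = (L^H C^-1 L)^-1 L^H C^-1, applied to a synthetic cross-spectral density; the channel count, regularization, and random lead field are illustrative assumptions, not the study's FieldTrip implementation.

```python
# Toy DICS-style beamformer: source power at one grid point from a lead field
# L (n_chan x 3) and a sensor cross-spectral density matrix C at 1-4 Hz.
import numpy as np

def dics_source_power(L: np.ndarray, C: np.ndarray, reg: float = 0.05) -> float:
    """Source power via the linearly constrained spatial filter W."""
    n = C.shape[0]
    Creg = C + reg * np.trace(C).real / n * np.eye(n)   # Tikhonov regularization
    Cinv = np.linalg.inv(Creg)
    W = np.linalg.inv(L.conj().T @ Cinv @ L) @ (L.conj().T @ Cinv)  # spatial filter
    P = W @ C @ W.conj().T            # 3x3 source-level CSD at this location
    return float(np.linalg.eigvalsh(P)[-1])  # power along the dominant orientation

rng = np.random.default_rng(1)
n_chan = 19                                           # e.g. a 10-20 montage
X = rng.standard_normal((n_chan, 4096)) + 1j * rng.standard_normal((n_chan, 4096))
C = X @ X.conj().T / X.shape[1]                       # stand-in CSD matrix
L = rng.standard_normal((n_chan, 3))                  # stand-in lead field, one voxel
print("source power:", dics_source_power(L, C))
```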
Directionality analysis. Coherence analysis does not provide information on the effective connectivity between the identified sources; it only describes sources that are coherent in a given frequency band. In order to study the direction of information flow between the sources, we applied renormalized partial directed coherence (RPDC) [49], a technique operating in the frequency domain. RPDC uses a multivariate autoregressive (MVAR) modeling approach to quantify the functional information flow between sources, in our case in the 1-4 Hz band (for more detailed descriptions of the method, see [13,50,51,52,53,54]). Effective connectivity derived from EEG measurements is difficult to identify due to the presence of noise and volume conduction effects [55]. In order to test the reliability of the detected neuronal interactions during any functional state of interest, some authors have used the imaginary part of coherence [56,57] or the time-reversal technique (TRT) [58]. In a recent simulation study, Haufe and colleagues [58] showed that the TRT is an appropriate method to attenuate the influence of weak asymmetries (i.e., non-causal interactions caused by zero-lagged coherences, that is, volume conduction) on the result of any causal measure, while maintaining or even amplifying the contribution of strong asymmetries (i.e., time-lagged causal interactions not caused by volume conduction). The bootstrapping method was used to calculate the significance level on the applied data, followed by the TRT as a second significance test after the estimation of the RPDC values.
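As a simplified stand-in (ordinary partial directed coherence, not the renormalized variant used in the study), directionality can be computed from a fitted MVAR model as sketched below; the two-channel toy data and model order are assumptions.

```python
# Ordinary PDC from MVAR coefficients, as a simplified proxy for RPDC.
import numpy as np
from statsmodels.tsa.api import VAR

def pdc(coefs: np.ndarray, f: float, fs: float) -> np.ndarray:
    """PDC matrix at frequency f; coefs has shape (p, k, k), entry [i, j] = j -> i."""
    p, k, _ = coefs.shape
    A = np.eye(k, dtype=complex)
    for r in range(1, p + 1):          # A(f) = I - sum_r A_r exp(-i 2 pi f r / fs)
        A -= coefs[r - 1] * np.exp(-2j * np.pi * f * r / fs)
    return np.abs(A) / np.sqrt((np.abs(A) ** 2).sum(axis=0, keepdims=True))

fs = 512
rng = np.random.default_rng(2)
x = rng.standard_normal((4096, 2))
x[1:, 1] += 0.6 * x[:-1, 0]            # toy coupling: channel 0 drives channel 1
model = VAR(x).fit(maxlags=5, ic="aic")
P = np.mean([pdc(model.coefs, f, fs) for f in np.arange(1.0, 4.5, 0.5)], axis=0)
print("PDC 0->1:", P[1, 0], " PDC 1->0:", P[0, 1])  # expect 0->1 >> 1->0
```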
Statistical analysis. The significance of the sources was tested using a within-subject surrogate analysis. Surrogates were generated by Monte Carlo random permutation, shuffling one-second segments 100 times within each subject. The p-value for each of these 100 random permutations was determined, and the 99th-percentile p-value was taken as the significance level in each subject [59]. Next, for the statistical comparison of the source absolute power values, the mean source coherence (or interaction strength) and the RPDC values between all the sources were estimated in order to test for significance between burst and suppression phases. A Friedman two-way analysis of variance was then performed on the mean coherence values. We applied the time-reversal technique (TRT) as a second significance test on the connections already identified by RPDC, using bootstrapping as a data-driven surrogate significance test. For all statistical analyses, the significance level was kept at p < 0.01.
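The within-subject surrogate idea can be sketched as follows (my illustration, not the authors' code): shuffle one-second segments, recompute the statistic, and take the 99th percentile as the threshold. The toy signal and statistic are assumptions.

```python
# Surrogate permutation sketch: 100 shuffles of 1-s segments, 99th percentile.
import numpy as np

def surrogate_threshold(x, fs, statistic, n_perm=100, pct=99.0, seed=0):
    """Null distribution from shuffling 1-s segments; returns pct-percentile."""
    rng = np.random.default_rng(seed)
    segs = x[: len(x) // fs * fs].reshape(-1, fs)   # non-overlapping 1-s segments
    null = [statistic(segs[rng.permutation(len(segs))].ravel())
            for _ in range(n_perm)]
    return float(np.percentile(null, pct))

fs = 512
t = np.arange(60 * fs) / fs
sig = np.sin(2 * np.pi * 0.5 * t)              # toy 0.5 Hz signal (period > 1 s)
stat = lambda y: np.abs(np.fft.rfft(y))[30]    # bin 30 = 0.5 Hz at 1/60 Hz resolution
print("observed:", stat(sig), " 99th pct of null:", surrogate_threshold(sig, fs, stat))
```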
DICS
Burst phase. The grand average of the sources described by the DICS analysis is shown in Fig 3 (results of the DICS analyses for individual patients are shown in S1-S4 Figs). All identified sources were statistically significant (p = 0.006) according to Monte Carlo random permutation across all subjects. During burst phases, the source of the strongest power in the 1-4 Hz band was detected bilaterally in the precuneus (BA 39 and BA 7) in all 13 patients; the local maximum of this source varied across patients (S1-S4 Figs). This first source was defined as the reference region for the further coherence analysis between brain areas. In all cases there were four sources coherent with the first source, with only small differences across patients with respect to the local maxima of the sources (S1-S4 Figs). The sources with the strongest coherence with the reference source were found bilaterally in the somatosensory cortex (second source; BA 2 and BA 3). The next strongest sources were detected bilaterally in prefrontal regions (BA 9); subsequent sources were found in the thalamus (BA 23), bilaterally in eleven patients and unilaterally on the left side in two patients, whereas the last coherent source was determined in the brainstem (BA 25), more precisely in the midbrain tegmentum, in all thirteen patients.
Suppression phase. All identified sources were statistically significant (p = 0.009) according to Monte Carlo random permutation across all subjects. The source of the strongest power in the 1-4 Hz band was detected bilaterally in the precuneus (BA 39 and BA 7) in all 13 patients (first source) and was very similar to the strongest source during burst phases (see Fig 4 and S1-S4 Figs). This strongest source was again used as the reference region for the coherence analysis between brain areas. In all cases there were three sources coherent with the first source, with only small differences across patients with respect to the local coherence maxima (S1-S4 Figs). The second strongest source was found bilaterally in the occipital cortex (BA 17) in eleven patients and unilaterally on the left side in two patients; this source was not present during burst phases. The next strongest coherence was detected bilaterally in the somatosensory cortex (third source; BA 2 and BA 3), at locations analogous to the second sources during the burst phases. The last coherent sources were determined bilaterally in the prefrontal cortex (BA 9), similar to the third source during the burst phases. No sub-cortical sources were observed during the suppression phases.

Figure 3. A. DICS during burst phase: the source of the strongest power in the 1-4 Hz band was detected bilaterally in the precuneus in all 13 patients; the 2nd source bilaterally in the somatosensory cortex in all patients; the 3rd source bilaterally in prefrontal regions; subsequent sources bilaterally in the thalamus (4th source); and the last coherent source in the brainstem, in all 13 patients. B. RPDC during burst phase: significantly (p = 0.003) stronger information flow from the precuneus (source 1) towards the somatosensory cortex (p = 0.001) (second source) and the prefrontal cortex (third source) (p = 0.004), as well as from the brainstem (source 5) towards the thalamus (p = 0.004) and from the thalamus to the precuneus (p = 0.004), rather than vice versa. Also, stronger RPDC (p = 0.002) was detected from the somatosensory cortex towards the prefrontal cortex. doi:10.1371/journal.pone.0123807.g003

RPDC

Burst phases. During burst phases, RPDC showed that the direction of information flow was significantly stronger from posterior regions towards anterior regions. There was a significantly (p = 0.006) stronger information flow from the precuneus (source 1) towards the somatosensory cortex (p = 0.009) (second source) and the prefrontal cortex (third source) (p = 0.004). The upward information flow ran from the brainstem (source 5) towards the thalamus (p = 0.003) and from the thalamus to the precuneus (p = 0.007), rather than vice versa. In addition, stronger information flow (p = 0.002) was detected from the somatosensory cortex towards the prefrontal cortex (see Fig 3 and S5 Fig). All cortical regions showed a clear trend of significant information flow from posterior towards anterior regions. The other connections between sources 2 to 5 were not significant.
Suppression phases. RPDC showed no significant differences in the flow of information between the sources, indicating a bidirectional and homogeneous informational flow (see Fig 4). However, all these bidirectional connections were significant in the data-driven bootstrapping method for the RPDC analyses (p > 0.1) (shown in S6 Fig), and the TRT analyses revealed strong symmetries.
Comparison between burst and suppression phases. We compared the source absolute power between the two phases and found that values during the burst phases were significantly (p = 0.009) higher than during the suppression phases (see Fig 5). Next, we compared the total interaction strength of the coherence between the sources during both phases and found that coherence values during burst phases were significantly (p = 0.0006) higher than during the suppression phases. The bootstrapping method followed by the TRT analyses underlined the robustness of these results, as every significant causal interaction identified by RPDC was identified as a strong asymmetry by the TRT.
Discussion
This study represents the first descriptive interpretation, by means of imaging power and coherence of oscillatory brain activities, of the dynamics of neuronal networks underlying the BS EEG pattern in neonates and infants with severe encephalopathies. Results of DICS analysis demonstrated that delta activity during the suppression phases was associated with coherent sources in precuneus, occipital cortex, somatosensory cortex and prefrontal cortex. However, DICS analysis during burst phases showed additional sources in thalamus and the brainstem. Furthermore, we demonstrated that burst phases were characterized by significantly higher source absolute power and mean coherence values between the sources and showed stronger informational flow between subcortical and cortical sources compared to the suppression phases. The significant difference in mean coherence is not due to the difference in power between the two phases, as shown via estimation of the relative SNR on the scalp level. The described network underlying burst and suppression phases was found on both group and individual levels. The networks represent a stable inter-individual pattern of functional and effective connectivity, which is independent of etiology, and represents a common mechanism of BS.
The strongest source during both the burst and suppression phases was in the precuneus/posterior cingulate cortex. These regions, together with the retrosplenial brain area, are known to be critical nodes of the neural network correlates of consciousness (NNCC) [60] and an important component of the default mode network [61,62]. Additionally, these regions have the highest level of brain glucose metabolism and cytochrome C oxidase activity [60]. Metabolism in the posterior cingulate and retrosplenial brain areas is likely to be driven by projections from the thalamus. Furthermore, brainstem projections towards the thalamus provide a key coupling between the brainstem system for arousal and cortical systems for cognitive processing and awareness [60]. The central thalamus receives upward projections from the brainstem/basal forebrain "arousal systems" that control the activity of many cortical and thalamic neurons during the sleep-wake cycle [63]. These projections are present only during the burst phases, whereas during the suppression phases subcortical structures show no coherence/interaction with cortical structures. These findings are in line with previous studies showing that up to 70% of thalamic cells are silent during suppression phases [8]. Post-mortem investigations of patients in a vegetative state have also frequently shown damage to the midbrain and thalamus [64,65]. Importantly, bilateral injuries of the thalamus are likewise associated with global disorders of consciousness (coma, vegetative state, and minimally conscious state) [66,67]. Furthermore, PET and SPECT investigations of patients with early myoclonic encephalopathy (EME) suggested a profound functional deafferentation of the cortex from subcortical structures, providing new insight into the pathophysiology of the BS pattern in EME [9]. These observations indicate that thalamic neurons play a causal role in disorders of consciousness [63].
During burst phases there are extensive projections from the basal ganglia to the frontal lobe through the precuneus, whereas during suppression phases these projections are fully disorganized and show no informational flow towards frontal regions. Extensive and topographically organized basal ganglia outputs to the prefrontal cortex, via the precuneus, influence the cognitive operations of the frontal lobe [68]. Our findings support the theory that each burst phase can be seen as an attempt to recover normal neuronal dynamics, as proposed by Ching and colleagues [6]. By constructing a biophysical computational model, they showed that the alternating features of BS may arise through interactions between neuronal dynamics and neurometabolism. They suggest that BS represents a basal neurometabolic regime that ensures basic cell function during states of lowered metabolism: suppression occurs when there is an imbalance between neuronal activity and available energetics, whereas each successive recovery of rhythms within bursts is due to the recovery of the basal dynamics at the neuronal circuit level, caused by transient increases in energetics.
An alternative theory, proposed by Amzica and colleagues [69,70], further elaborates on the disruption of the excitatory-inhibitory balance. According to their theory, the burst phase is accompanied by an exhaustion of extracellular cortical Ca2+, which generates a general disconnection of cortical networks, a subsequent arrest of neocortical neuronal activity, and the resulting flat EEG. During suppression phases, however, interstitial Ca2+ levels are restored to levels at which any external or intrinsic signal can trigger a new burst in the hyper-excitable cortex [69,70]. These findings coincide with those reported by Schiff and colleagues [71], who proposed the existence of disorganized and "free running" corticostriatopallidal, thalamocortical, and corticothalamic loops working at a very low metabolic level during chronic unconscious states.
Methodological limitations: In this study we identified brain sources in subcortical regions such as the thalamus and brainstem. Whether subcortical sources can be detected from scalp recordings has been debated for many years. In previous MEG [22,23,24,72] and EEG [59] studies, subcortical sources have been detected by applying DICS to oscillatory signals (e.g., tremor), including in healthy subjects during isometric contraction [29]. In our previous EEG-based studies, this analytical method has also been shown to successfully identify subcortical sources in the thalamus [20,21] and the brainstem [19]. There have also been two validation studies using independent methods: EEG-fMRI [20], and local field potentials measured simultaneously with EEG from macroelectrodes in orthostatic tremor patients [47]. The second limitation is that we did not use realistic head modeling for the source analysis. Because the current study is retrospective, the MRIs of the investigated patients were not performed according to the requirements necessary for optimal head modeling (3D T1 and T2 images and DTI images). In addition, some of the sequences were not fully performed, i.e., parietal or temporal parts were incompletely scanned, which is a substantial obstacle for head modeling. It is likely that realistic head modeling would improve the localization power of DICS. A third limitation is that we investigated suppression phases, i.e., phases with very poor electrical activity of the brain, which is a challenge for source analyses. Nevertheless, the pooled power spectra in both the burst and suppression phases showed clear peaks in the 1-4 Hz frequency range.
Furthermore, we investigated a non-homogeneous group of patients, who were treated with different medications and investigated with different numbers of electrodes and at different time points. However, the aim of our study was to investigate neuronal networks in infants with the BS pattern regardless of etiology, severity of the disease, or medications. The described network underlying burst and suppression phases was found at both group and individual levels. We therefore believe that it is a fingerprint of the BS-EEG pattern which, independent of etiology, represents a common mechanism of BS. This is in line with other studies showing that a certain EEG pattern, independent of etiological factors, can be represented by a common neuronal network [12,19,20,73,74].
Conclusion
In this study, we investigated dynamics within the neuronal networks underlying the BS-EEG pattern in neonates and infants with severe epileptic and non-epileptic encephalopathies. Our study revealed a specific periodicity of neuronal activity during BS. Each suppression phase shows a complete deafferentation between subcortical and cortical structures and a cessation of the neuronal projections responsible for arousal, cognitive processing, and awareness, whereas each consecutive burst can be considered a temporary recovery of subcortical and cortical neuronal dynamics. Despite the above-mentioned limitations, our findings support the feasibility of the described methodology for the investigation of infants with severe encephalopathies with a burst suppression EEG pattern.

Supporting information figure (TIF), showing all sources that were described by DICS in each patient, separately for burst and suppression phases, numbered according to the strength of the identified sources. During burst phases: the source of strongest power (source 1) in the 1-4 Hz frequency band was detected bilaterally in the precuneus in all 13 patients; the local maximum of this source varied across the patients. In all cases, there were four sources coherent with the first source. The sources with the strongest coherence with the reference source (source 2) were found bilaterally in the somatosensory cortex. The next strongest sources (source 3) were detected bilaterally in prefrontal regions; subsequent sources were found in the thalamus (source 4), bilaterally in eleven patients and unilaterally on the left side in two patients, whereas the last coherent source was determined in the brainstem (source 5), more precisely in the mid-brain tegmentum, in all thirteen patients. During suppression phases: the source of strongest power (source 1) in the 1-4 Hz frequency band was detected bilaterally in the precuneus in all 13 patients. The second strongest sources were found bilaterally in the occipital cortex (source 2) in eleven patients and unilaterally on the left side in two patients. The next strongest coherence was detected bilaterally in the somatosensory cortex (source 3); the locations of these sources were analogous to the second sources during the burst phases. The last coherent sources were determined bilaterally in the prefrontal cortex (source 4).
Influence of Climate and Management on Patterns of Taxonomic and Functional Diversity of Recreational Park Vegetation
Recreational urban parks support diverse assemblages of plants that, through their functions, contribute beneficial services to billions of individuals throughout the world. Drivers of vegetation-derived services in parks are complex, as climate and park management interact with the functioning of multiple vegetation types. Yet, informal observations suggest that recreational parks are constructed consistently to specific principles of landscape design. Here we ask: what are the patterns of functional traits and vegetation diversity in cities of varying climate in the United States, and how do these patterns result in a consistent typology of recreational park? We hypothesized that increased aridity would exclude species not adapted to warm, dry climates, thereby reducing local, or alpha, taxonomic diversity and shifting community composition. However, a similar preference of park managers in the United States for suites of service-based functional traits leads to similarity of mean values of service traits in recreational parks among cities, regardless of climate differences. We tested this hypothesis by surveying lawn species, comprised of herbaceous turf and spontaneous plants, and woody species in fifteen recreational parks across Baltimore MD, Riverside CA, and Palm Springs CA, three cities that contain multiple parks but differ in regional climate. With increasing aridity, taxonomic alpha diversity decreased and plant physiology shifted, yet no differences were observed among most service-based functional traits. Among the cities surveyed, no significant differences were observed in the functional dispersion of woody and spontaneous species or in most service-based traits. Taxonomic composition differed in each city for all vegetation types, while suites of service-based traits differed between Baltimore and the two more arid cities of Riverside and Palm Springs. Our results suggest that across the United States, service-based functional traits are consistent, even when arising from unique compositions and abundances of species in recreational parks. We interpret these results as an interaction between climate and the preferences of recreational park managers for services, creating a pattern of vegetation diversity where taxonomic alpha and beta diversity vary among regions while specific suites of services remain available.
INTRODUCTION
Urban vegetation comprises novel ecosystems by bringing together native and non-native plant species with no natural analog (Hobbs et al., 2014; Aronson et al., 2017). In cities across climates and cultures, one finds novel vegetation arrangements within recreational parks (Threlfall et al., 2016; Weems, 2016; Talal and Santelmann, 2019). Recreational parks are a public form of green infrastructure that provides services to individuals through access to recreation, cooler temperatures, and improvement to overall health (Bolund and Hunhammar, 1999; Ayala-Azcárraga et al., 2019). These urban greenspaces contain multiple species of trees planted to provide esthetic benefits and shade, as well as large expanses of green turf for recreation and gathering (Tinsley et al., 2002; Pataki et al., 2013; Talal and Santelmann, 2019). However, the specific services provided to individuals and neighborhoods by planted vegetation may be contingent on the diversity of species and vegetation function (Larson K. L. et al., 2016; Vieira et al., 2018).
Taxonomic and Functional Diversity in Recreational Park Vegetation
Examining patterns of taxonomic and functional diversity among communities allows us to understand the drivers behind local species assembly and functioning (Anderson et al., 2011; Johnson et al., 2015). Urban diversity is driven by a blend of biophysical and social factors (Padullés Cubino et al., 2018). Climate and management preferences can determine the expression of plant functional traits in parks, from the physiology of leaves to the esthetic characteristics preferred by people (McCarthy et al., 2011; Pataki et al., 2013; Avolio et al., 2018). Vegetation physiological function has also been linked to the provisioning of ecosystem service-based traits (Swan et al., 2016). The influence of climate and management preference can vary depending on the type of vegetation studied (Nielsen et al., 2014). A meta-analysis of species richness in urban parks found that, of ten studies focused on vascular plants, seven restricted their results to woody species only (Nielsen et al., 2014). Other recent studies in urban green systems focus only on trees (Avolio et al., 2015; Gillespie et al., 2017; Kendal et al., 2018), or on herbaceous turf and weedy species (Knapp et al., 2012; Wheeler et al., 2017; Padullés Cubino et al., 2018). To understand the causes of vegetation diversity in an urban plant community, we need to know how regional (climate) and local (management) factors influence different vegetation types in a specific urban community (Aronson et al., 2016), such as recreational parks.
Building on the framework of environmental filtering of communities (Ackerly, 2003; Spasojevic and Suding, 2012; Kraft et al., 2015), we expect extreme aridity to reduce physiological niche space, thereby reducing taxonomic diversity and shifting the physiological function of park vegetation (Spasojevic et al., 2014; Kraft et al., 2015; Pearse et al., 2018). Cities with extreme climates generally have lower species richness of urban trees and lawn vegetation compared to milder climates (Padullés Cubino et al., 2018). Within lawn communities, these patterns may arise due to differences in irrigation practices across cities, as well as differences in turf varietals (Kendal et al., 2018; Pincetl et al., 2019). Cities in more arid climates, such as Los Angeles and Phoenix, irrigate their lawns, while more mesic cities have both irrigated and non-irrigated lawns (Wheeler et al., 2017). Warmer climates also allow land managers to plant turf species that use the C4 photosynthetic pathway, which can maintain productivity under high temperatures and aridity, while C3 turf species are more physiologically restricted, making them a poor choice for hot, dry city environments (Wherley and Sinclair, 2009; Beard, 2013). Furthermore, lawn communities are often colonized by spontaneously growing herbaceous species that have a greater range of functional properties compared to turf, potentially allowing more spontaneous species to pass through the environmental filters that limit turf species (Robinson and Lundholm, 2012).
Services Preferred by Recreational Park Management
While taxonomic composition and physiological functions are strongly driven by climate, similar preferences among urban residents for suites of traits related to specific service-based functions may decrease the functional diversity of urban plant communities (Fukami et al., 2005; Larson K. L. et al., 2016). The service-based traits chosen for recreational parks reflect how park managers design park vegetation, where the park tree composition can provide services such as shade and visual esthetics not found in the local native communities. Lawns provide similar services across cities, such as vibrant greenness and resiliency to active use (Larson K. L. et al., 2016). However, the effect of preference in parks on service-based traits can differ depending on the plant community and the type of trait. In warmer cities, there can be an increase in service-based traits relating to showy flowers and fruits; however, across a gradient of aridity, differences in tree heights or leaf areas have not been identified (Pearse et al., 2018).
Service-based functional traits can have distributions less limited by climate than ecophysiological traits. For example, esthetic flower production is a valued service trait not limited by climate, owing to seasonally available flowering species in most regions (Konijnendijk et al., 2005; Threlfall et al., 2016; Goodness, 2018). However, esthetic fall foliage is prevalent in broadleaf deciduous species, which are likely to be found mostly in biomes like temperate deciduous forests. Preferences thus allow for similarity of esthetic flower traits across climates while having fewer effects on traits of esthetic fall foliage. Alternatively, spontaneous vegetation is not actively planted, making its dispersal more limited than that of horticultural vegetation (De Wet and Harlan, 1975). After undergoing selection by dispersal and environmental filters, the traits of spontaneous species found in recreational parks are those that can overcome mowing, weeding, and physical dispersal barriers. Overall, these at times independent variations of taxonomic, ecophysiological, and service-based diversity imply that multiple drivers may determine the pattern of diversity in urban park vegetation.
While one can see that recreational parks appear visually similar to one another in their design, the question remains: who controls this typology and the diversity found within it? Urban park tree diversity has been found to respond to different physical and social drivers than residential yards and street trees (Kendal et al., 2012). What vegetation is found in recreational parks is ultimately a function of the park managers who decide what is planted. A recent study based in Portland OR found that the reasons behind vegetation design in recreational parks are often a function of economics, maintenance of the park environment against human disturbance, and a desire to provide beneficial services for visitors (Talal and Santelmann, 2020). Similar management desires for economy of maintenance were found in surveys of park managers in Hong Kong (Chan et al., 2014). The differences between park visitor preferences and manager preferences can often be due to budget constraints (Baur et al., 2013): where park visitors may desire more colorful foliage and flowers, managers are looking for ways to improve maintenance on a restricted budget (Talal and Santelmann, 2020). While park managers may have similar goals in different countries, the species diversity found in parks is often determined by national and local horticultural trends (Nielsen et al., 2014; Roman et al., 2018), as well as regional climate.
To better understand how climate and management preferences influence the distributions of recreation park diversity, we ask: how do park biodiversity, community composition, and values of service-based traits differ among three mid-sized United States cities of varying climates? We predict that alpha diversity will vary among recreational parks in arid and mesic cities due to the physiological tolerances of vegetation, and that each city will have a unique taxonomic beta diversity. Functional diversity and service provisioning, however, will not differ across climates owing to the preference of park managers for a specific typology of vegetation composition and provided services within recreational parks. To resolve the gap spanning multiple vegetation types within the same community, we focus on three different vegetation types, including horticultural woody species and lawn communities comprised of turfgrass and spontaneous herbaceous vegetation. By quantifying patterns in taxonomic and functional diversity and their relationship to services provisioning in recreational parks, we describe how broad-scale climatic variables interact with the preferences of recreational park managers to influence park vegetation diversity and function.
Study System
We assessed climate and park managers' preferences as drivers of park vegetation biodiversity by sampling 15 recreational parks located in three cities within the United States: Baltimore MD, Riverside CA, and Palm Springs CA (Figure 1 and Table 1). Cities were selected to represent a gradient of aridity (measured as the vapor pressure deficit, VPD). Baltimore (mean summer VPD: 0.89 kPa) is in a temperate humid climate characterized by regular precipitation events throughout the growing season. Riverside (mean summer VPD: 2.7 kPa) is in a Mediterranean climate of intermediate aridity and Palm Springs (mean summer VPD: 6.2 kPa) is in a desert climate of extreme aridity; both cities are characterized by seasonal precipitation during winter and early spring with very little precipitation during summer and fall. Within Riverside and Palm Springs, urban vegetation health is maintained through regular irrigation throughout the year. In each city, we selected five recreational parks. We focused only on highly managed recreational parks, rather than natural parks or parks with remnant natural areas, to highlight the effects of park managers' preferences for recreation. Recreational parks are characterized by large lawn areas with multiple species of planted, maintained woody species. Lawns, along with the spontaneous species found within them, are the only herbaceous vegetation present in the parks, and no area is unmanaged. To test whether species-area relationships influence diversity (Nielsen et al., 2014; van der Maarel and Franklin, 2015), our parks represented a range of areas, from 10,000 to 300,000 m². Park size was determined via ArcGIS 10.6 (ESRI, Redlands, CA, United States) polygon delineation (Table 1). Our field sampling protocol was designed to collect all taxonomic data for the five parks within each region over 7 days of field work in each city. To assess climate as a driver of physiological function, parks were sampled during mid-afternoon in peak summer months. Our protocol is designed to be transferrable to other cities, so that recreational park vegetation can be characterized within approximately 5 days.
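The paper reports mean summer VPD values but does not show how VPD is derived; as a point of reference, the short R sketch below computes VPD from air temperature and relative humidity using the standard Tetens approximation for saturation vapor pressure. The function name and example inputs are illustrative, not taken from the authors' workflow.

# Assumed sketch: VPD (kPa) from air temperature (degrees C) and relative
# humidity (%), via the Tetens approximation for saturation vapor pressure.
vpd_kpa <- function(temp_c, rh_pct) {
  es <- 0.6108 * exp(17.27 * temp_c / (temp_c + 237.3))  # saturation vapor pressure, kPa
  es * (1 - rh_pct / 100)                                # vapor pressure deficit, kPa
}
vpd_kpa(35, 20)  # a hot, dry desert afternoon: about 4.5 kPa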
Functional Trait Analysis
Our research protocol is designed for maximum repeatability, even by researchers without access to physiology wet labs. To that end, we collected functional data in silico, using trait databases and primary literature. To incorporate functional diversity metrics into our analysis of recreational park species, we used the BIEN database (Maitner et al., 2018), the Global Wood Density Database (Chave et al., 2009), and primary literature to assemble average ecophysiological (EP) traits for each sampled species (Table 2). Following Westoby's leaf-height-seed plant ecology strategy scheme, we included traits that relate to a holistic ecological strategy: Specific Leaf Area, Seed Dry Mass, and Height (Westoby, 1998). To incorporate traits that relate to hydraulic architecture in stems, we included wood density as an additional trait for woody species (Swenson and Enquist, 2007; Chave et al., 2009). To analyze influences of park manager preference, we included a series of service-based traits for woody species (ES). These traits are based on desired attributes found in nursery stock. We included traits reflecting shade and fruit provisioning, as well as flower, fruit, foliage, and bark esthetics. While there are many service-based traits found in horticultural vegetation, such as carbon capture (Pataki et al., 2006), pollution and dust mitigation (Wang et al., 2019), and microclimate regulation (Shiflett et al., 2017), we chose to focus on traits more associated with attributes preferred by urban residents and urban land managers (Larson K. L. et al., 2016). We used horticultural sources to determine ES traits for each woody species (Table 2). ES traits were analyzed as presence/absence: if a horticulture record listed a trait as associated with the species, that trait was given a value of 1, and if no record of the trait was found, we recorded a value of 0. We only used species that had records for all trait factors in subsequent analyses. As there was an unequal number of species containing all ES and all EP traits, we separated woody species into two categories, capturing all species with a full spectrum of service-based traits and all species with a full spectrum of EP traits.

FIGURE 1 | Study area of cities of interest. Five recreational parks were sampled in each city. All parks are representative of a recreational park typology, comprising expansive lawns and individually planted trees. Images are sourced from Google Earth Pro.
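As one illustration of this in-silico trait assembly, the sketch below pulls trait records for example species with the BIEN R package cited above and codes ES traits as presence/absence. The species names, selected columns, and trait scores are hypothetical placeholders; available trait labels should be checked with BIEN_trait_list().

library(BIEN)

# Trait records for two example (hypothetical) park trees
species <- c("Platanus racemosa", "Quercus agrifolia")
records <- BIEN_trait_species(species)
head(records[, c("scrubbed_species_binomial", "trait_name", "trait_value")])

# ES traits scored from horticultural sources as presence/absence
# (1 = trait reported for the species, 0 = no record found)
es_traits <- data.frame(
  species      = species,
  shade        = c(1, 1),  # shade provisioning
  fall_foliage = c(1, 0)   # esthetic fall foliage
)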
We analyzed herbaceous species both together, as a lawn encompassing all herbaceous species, and separately by the functional groups of turf and spontaneous species. As turf and spontaneous species are all herbaceous, we did not include wood density as an EP trait and focused only on the traits in the leaf-height-seed strategy. ES traits were not calculated for turf or spontaneous species: turf is selected based on its ability to persist, so if turf is present, a singular selectable service is being provided (Christians et al., 2016), and spontaneous species are not preferentially selected for planting. We filled gaps in trait data through searches of primary sources (Table 2).
Statistical Analysis
We conducted all statistical analyses in R (R Core Team, 2019). For each vegetation type we created a site-by-species matrix, each site being one sampled recreational park. Taxonomic alpha diversity of vegetation types in parks was represented by the Shannon-Wiener index, calculated in the vegan package in R (Oksanen et al., 2008). Statistical significance was determined using a one-way ANOVA calculated for each cover type; differences among cities were determined with a Tukey's HSD post hoc test. Taxonomic beta diversity between any two parks was calculated using Bray-Curtis distance and visualized through Principal Coordinates Analysis (PCoA). Within-city beta diversity was defined as the Euclidean distance to the centroid, and significance was determined through a PERMANOVA, while differences in composition were determined using a pairwise PERMANOVA in the RVAideMemoire package in R (Maxime, 2019). To examine differences in beta diversity, we calculated dissimilarity matrices between all parks in the study, using both taxonomic and functional characteristics. Functional beta diversity was calculated using Gower's distance. Any trait distribution that did not fit a normal distribution was log-transformed, and all trait data were z-transformed before any multivariate analysis was completed. Using PERMANOVA on these dissimilarity matrices, we tested for compositional shifts and homogeneity of variance in taxonomic and functional diversity for all vegetation types, and visualized the results by PCoA. Functional alpha diversity was defined as the functional dispersion (FDis) of each park. FDis is an abundance-weighted measure of the distribution of functional traits in a community and quantifies the range of functional strategies within a community (Laliberte and Legendre, 2010). To quantify individual functional metrics, we calculated community-weighted trait means (CWM) for each trait. CWM values provide a single-trait metric that is weighted by species abundances within a sampled community (Zuo et al., 2016). FDis and CWM analyses were completed using the FD package in R (Laliberté et al., 2015).

TABLE 1 | Park area was determined through ArcGIS, percent impermeable surface was determined from the 2016 National Land Cover Database (Homer et al., 2012), and adjacent land use was determined through county zoning maps of each city.
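A minimal sketch of this workflow is given below, assuming a site-by-species abundance matrix comm (rows = parks), a species-by-trait data frame traits (species matching the columns of comm), and a factor city identifying each park's city; the object names are illustrative rather than taken from the authors' scripts.

library(vegan)          # diversity(), vegdist(), betadisper(), adonis2()
library(FD)             # dbFD() for FDis and CWM
library(RVAideMemoire)  # pairwise.perm.manova()

# Taxonomic alpha diversity: Shannon-Wiener index per park,
# compared among cities with one-way ANOVA and Tukey's HSD
H <- diversity(comm, index = "shannon")
TukeyHSD(aov(H ~ city))

# Taxonomic beta diversity: Bray-Curtis distances, PCoA ordination,
# within-city dispersion (distance to centroid), and PERMANOVA
d_bray <- vegdist(comm, method = "bray")
pcoa   <- cmdscale(d_bray, eig = TRUE, k = 2)         # Principal Coordinates Analysis
disp   <- betadisper(d_bray, city, type = "centroid") # distance to centroid
permutest(disp)                                       # homogeneity of dispersion
adonis2(d_bray ~ city)                                # overall compositional shift
pairwise.perm.manova(d_bray, city)                    # pairwise city contrasts

# Functional alpha diversity (FDis) and community-weighted means (CWM);
# dbFD() uses Gower distances internally, accommodating mixed trait types
fd <- dbFD(traits, comm)
fd$FDis  # functional dispersion per park
fd$CWM   # community-weighted trait means per park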
Taxonomic and Functional Alpha Diversity
Taxonomic and functional alpha diversity was first analyzed to determine if park area acted as a significant driver. Park area was not correlated with either taxonomic or functional alpha diversity. We then conducted subsequent analyses investigating city-wide climate as a primary driver of diversity metrics.
For both woody and turf vegetation, taxonomic alpha diversity, measured as Shannon-Wiener diversity, was lower in recreational parks of Palm Springs compared to the more mesic city of Baltimore (woody species EP traits, p = 0.011; woody species ES traits, p = 0.021; turf species, p = 0.008). Differences in Shannon diversity of woody vegetation between Riverside and either Baltimore or Palm Springs were not detected. Differences in lawn taxonomic diversity were observed between the mesic Baltimore and the two drier cities of Riverside (p = 0.050) and Palm Springs (p = 0.010). However, when isolating turf and spontaneous species, the turf in Palm Springs differed from that in the other two cities (Palm Springs - Riverside p = 0.024, Palm Springs - Baltimore p = 0.008), and no differences were found when comparing spontaneous species assemblages (p = 0.115) (Figure 2A). Functional alpha diversity, FDis, of both EP and ES traits in woody species was not significantly different among the three cities (EP p = 0.487; ES p = 0.659) (Figure 2A). Likewise, FDis of all lawn species and of spontaneous species alone was not significantly different across cities (p = 0.247). Within turf species, there was a significant difference in city-scale FDis means (p = 0.034). No significant difference in FDis means was found between Baltimore (mean FDis = 0.206) and Riverside (mean FDis = 0.200); however, Palm Springs (mean FDis = 0.047) values were significantly lower than Baltimore's (p = 0.050) (Figure 2C).

[Table 2 source notes: Schmid and Brenzel, 2001; Maitner et al., 2018; Kröber et al., 2012; Johnson and Gerhold, 2001. Trait values are determined from cited sources.]
Functional Traits
Woody EP traits all showed shifts in CWM across cities. The three traits associated with the leaf-height-seed strategy all decreased from Baltimore to Riverside and Palm Springs (SLA: Baltimore - Riverside p < 0.001, Baltimore - Palm Springs p < 0.001; Height: Baltimore - Palm Springs p = 0.002; Seed Dry Mass: Baltimore - Riverside p < 0.001, Baltimore - Palm Springs p < 0.001). Wood density increased from Baltimore to Riverside and Palm Springs (Baltimore - Riverside p = 0.031, Baltimore - Palm Springs p = 0.001) (Figure 3A). Overall values of woody ES traits displayed little dissimilarity among cities. Shade and fruit provisioning, and esthetic traits of flowers, fruit, and bark, were not different across our study cities. In each city, the highest value for any service-based trait was that of shade provisioning. The lone service functional trait which exhibited differences among cities was esthetic fall foliage; more trees associated with the fall foliage trait were found in Baltimore compared to Riverside and Palm Springs (Baltimore - Riverside p < 0.001, Baltimore - Palm Springs p < 0.001) (Figure 3B).

FIGURE 2 | Metrics of alpha diversity for each vegetation type (mean and standard deviation). Asterisks refer to significant differences between groups tested through ANOVA with a post hoc Tukey HSD test (*0.001 < p < 0.05). Herbaceous species are both spontaneous and turf species. (A) Taxonomic alpha diversity was measured by Shannon-Wiener diversity; (B) functional alpha diversity (FDis) of woody species; (C) FDis of herbaceous species. FDis was measured by functional dispersion, an abundance-weighted metric of functional trait ranges.

FIGURE 3 | Community weighted means (CWM) of woody functional traits (mean and standard deviation). Asterisks refer to significant differences between groups tested through ANOVA with a post hoc Tukey HSD test (*0.001 < p < 0.05, ***p < 0.0001). (A) Traits related to ecophysiological characteristics; (B) traits related to service-based functional trait characteristics.
Within lawn species, only seed dry mass differed across cities, with Palm Springs having significantly lower values than Baltimore (p = 0.031). Similar results were observed in spontaneous species, where seed dry mass was higher in Baltimore compared to Riverside and Palm Springs (Baltimore - Riverside, p = 0.016; Baltimore - Palm Springs, p = 0.012). Turf species in Riverside showed larger specific leaf area compared to Baltimore (p = 0.021), while vegetation height in all herbaceous vegetation groups was not different among cities (Figure 4).

FIGURE 4 | CWM of ecophysiological traits of herbaceous, turf, and spontaneous species (mean and standard deviation). Asterisks refer to significant differences between groups tested through ANOVA with a post hoc Tukey HSD test (*p < 0.05, **p < 0.001, ***p < 0.0001). Note that the y-axis varies for each trait type.
Composition / Beta Diversity
All cities displayed a significant difference in taxonomic composition from each other, except for lawn species composition between Riverside and Palm Springs (Figure 5A; p values displayed in Table 3). In a pairwise PERMANOVA, we found no difference among cities in the functional composition of all lawn species or of spontaneous species alone; woody ES, woody EP, and turf composition were significantly different in Baltimore compared to both Riverside and Palm Springs, but not between the two more arid cities (Figure 5 and Table 3B). No differences were found in the distance to centroid of taxonomic compositions among the cities or vegetation types. Regarding functional composition, only woody physiological composition showed a significant difference in variance around the centroid, between parks in Baltimore and Palm Springs (Table 3A).

[Table 3 notes: Bold font represents a significant p-value. ES Wood refers to woody species which also have service-based functional traits recorded. EP Wood refers to woody species which also have ecophysiological traits recorded.]
Regional Climate as a Driver of Park Diversity
The decrease of taxonomic alpha diversity with increasing aridity is consistent with our hypothesis of climate filtering. In arid cities like Palm Springs, the number of woody and turf grass species that can withstand physiological stress is limited compared to the horticultural species found in more mesic cities (Pearse et al., 2018; Figure 2). In recreational parks, taxonomic diversity is further filtered by park planners selecting species that provide specific desired services (Talal and Santelmann, 2019). The increased selection pressure of park management, combined with extreme aridity, could explain why we found a decrease in diversity while other studies found a positive correlation between species diversity of all urban trees and warmer climates. Conclusions based on broad sampling of urban vegetation differ when studies are confined to a specific type of green infrastructure, such as residential lawns or recreational parks as in this study, where management practices regarding planting and watering are more specialized (Pearse et al., 2018). As increasingly arid climates appear to reduce species diversity within recreational parks, differences in climate also result in unique taxonomic and functional composition in recreational parks across cities (Figure 5). All vegetation types in recreational parks have unique compositions among our study cities. We hypothesized that, to achieve desired services, the taxonomic composition would vary with minimal effect on the values of service-based traits. Unique taxonomic compositions can vary across cities due to climate-driven changes in the availability of urban vegetation, temporally through variability of nursery offerings, or depending on local consumer preferences (Conway and Vander Vecht, 2015; Roman et al., 2018). While many species found in recreational parks are not native to the region (Wheeler et al., 2017; Talal and Santelmann, 2019), the lower Shannon diversity in Palm Springs and the difference in composition among all cities explain how horticultural vegetation physiologically responds to the native climate.
For recreational parks to maintain healthy vegetation, planted vegetation must be able to functionally acclimate to the regional climate. Lower specific leaf area is often connected to adaptations that confer tolerance to drought, such as reduced wilting and the ability to maintain photosynthesis in extremely arid environments (Poorter et al., 2009; De Micco and Aronne, 2012). The higher wood density seen in arid urban park trees can lower cavitation risk, and mortality caused by extreme aridity may be better avoided by these species than by species of lower wood density (Wright et al., 2004; Savi et al., 2015). However, resistance to drought is likely a co-benefit, as park managers are more concerned with planting for ease of maintenance and economic reasons, and high wood density is associated with long-lived trees, which would need fewer replacements (Díaz et al., 2015; Talal and Santelmann, 2020). This ecological tradeoff between stress tolerance and growth is also found in the trait of seed dry mass: species adapted to arid regions generally have seeds with lower dry mass to prevent desiccation before precipitation brings on germination (Westoby, 1998). Recreational park tree species found in arid regions thus exhibit adaptations that reduce heat and drought stress, having lower specific leaf area and seed dry mass and higher wood density. Variation in functional traits (Figure 3A) among cities means that park visitors in Baltimore experience a different functional composition than visitors in Riverside and Palm Springs (Figure 5B).
The photosynthetic pathway of C4 turfgrasses allows for continuous transpiration during extreme heat and aridity, and in a well-irrigated Palm Springs park there is plenty of water for turf to maintain productivity. In a mild city like Baltimore, conditions are better for C3 turf to be highly productive year-round. However, while the physiological traits of the C3 turf in Baltimore and the C4 turf in Palm Springs were similar, the range of trait values was smaller in Palm Springs. This result is indicative of the fewer varieties of C4 turf grass available for planting compared to C3 varieties in nurseries throughout the country (Trammell et al., 2019).
Spontaneous species are not subject to the same environmental or preferential drivers as turf species (Niinemets and Peñuelas, 2008; Knapp et al., 2012). Yet, many varieties of spontaneous species persist in cities and are of similar functional type (Wheeler et al., 2017), resulting in similar physiological traits and taxonomic diversity. Spontaneous species maintained distinct compositions among cities, while the turf species were less dissimilar (Figure 5A). However, unlike turf composition, spontaneous functional composition was not significantly different across cities. For spontaneous species to be found within a recreational park lawn, the species must disperse and germinate within the climate of the city, and then establish despite the management and recreational activities enacted on lawns (Abu-Dieyeh and Watson, 2005; Anderson and Minor, 2019). Adaptations to regional climate may explain the variety in lawn taxonomy that we observed, while adaptation to intense human impacts could explain the lack of difference in functional composition.
Management Preference as a Driver of Park Diversity
Depending on the specific functional trait, management preference or climate can exert the greater influence on park plants. The functional responses of specific leaf area in turf and of seed mass in spontaneous species are potential evidence of aridity creating regions of unique taxonomic and functional compositions (Figure 4). Regarding suites of service-based traits, Baltimore's climate allows for a greater abundance of species with the highly valued trait of esthetic fall foliage, as evidenced by the Maryland Department of Natural Resources' weekly fall updates tracking the changing of the leaves (Maryland Department of Natural Resources, 2019). This single climate-driven trait can drive significant changes in the total service-based trait composition of recreational parks (Figure 5B). Similarly, while fruit production is not a service-based trait preferred by park managers, we did find a greater abundance of fruit trees in the Palm Springs and Riverside regions, which could be indicative of the legacy of citrus in southern Californian agriculture (Farmer, 2013).
In recreational parks, woody species are generally planted to provide the valued service traits of shade and esthetic appreciation (Avolio et al., 2018), and turf is planted to provide the services of vegetative greenness and recreational play areas (Ayala-Azcárraga et al., 2019). Park trees are spaced far enough apart for esthetics to be appreciated while providing copious but fragmented shaded areas, creating a physical arrangement of vegetation where similar service traits are available across cities (Goodness, 2018; Talal and Santelmann, 2019). While regional aridity is correlated with park vegetation physiology, the preference for specific values of ES traits is generally similar across urban regions. Similarity of service-based traits in recreational parks allows park visitors, who value certain services, to have similar experiences in cities regardless of regional horticulture. While our study cities exhibited unique functional compositions of woody EP and ES traits, we found no differences in FDis or compositional variation (Figure 2B and Table 3A). We suggest this pattern represents an example of park managers' preference for a similar arrangement of function in recreational parks across cities, despite regionally varying species diversity. Parks in Palm Springs achieve a comparable range of functional breadth to milder cities while exhibiting a unique taxonomic and functional composition and a significantly smaller taxonomic diversity (Figures 2, 3, 5).
One of the defining characteristics of woody species in recreational parks is that they are not a monoculture; there is generally a variety of tree species (Jim and Chen, 2008; Nielsen et al., 2014). Moreover, surveys of urban residents have identified specific highly valued service-based traits in trees, such as shade, beauty, and fruit production. Variation in the desires of urban residents results in a large variety of species that can provide these specific services (Talal and Santelmann, 2020). Therefore, a wide variety of woody species will result in a similar FDis of physiological traits, even when the distribution of species differs. While parks may provide varying amounts of services, the similarity of dispersion of service-based traits in cities with differing Shannon diversity implies management practices that select for a specific range of service functions in a park.
Woody species may not be a monoculture, but our results show that planted turf acts like a plantation. We interpret this result as evidence that turf's highest-value service trait is that it remains green and usable for recreation and is easily maintained (Christians et al., 2016; Larson K. L. et al., 2016). If a minimal number of available species can achieve this goal, there is little need to expand planting to species with unique functions. Maintenance regimes can be standardized across similar taxa, which fits with the stated desires of park managers to minimize maintenance costs (Chan et al., 2014; Talal and Santelmann, 2020). There are other varieties of native grasses or sedges that are viable in regions of high aridity, yet these varieties are not commonly cultivated as turf.
Urban Form as a Driver of Diversity
Interestingly, we did not find significant relationships between park area and alpha diversity, either taxonomic or functional. This is in contradiction to the recent review of diversity drivers in urban parks (Nielsen et al., 2014). The discrepancy between our study and current literature could arise from our focus on urban recreational parks, where the entire park area is actively managed only for recreation (Weems, 2016). Conversely, the review by Nielsen et al. (2014) includes parks managed for recreation, agriculture, and natural areas. Including multiple varieties of urban park incorporates multiple habitats as well, which influence species-area relationships.
While we used observations of urban park vegetation as evidence of management preferences, there are other management interactions beyond vegetation preference that can influence the diversity in a community. Fertilization, pesticide application, prioritizing play equipment, and access to local nurseries can all lead to variations in taxonomic and functional vegetation diversity (Kjelgren and Clark, 1993; Politi Bertoncini et al., 2012; Chan et al., 2014; Cavender-Bares et al., 2020). Urban soil profiles can be heterogeneous both within and among cities (Crum et al., 2016; Herrmann et al., 2018). Regional climate and human facilitation are major filters leading to taxonomic and functional diversity (Aronson et al., 2016; Pearse et al., 2018); however, future work should explore other, potentially narrower, urban filters on recreational park diversity.
CONCLUSION
Climate and management preferences both play key roles in determining recreational park structure and composition, driving differences in taxonomic alpha and beta diversity while maintaining similarity in the value and distribution of service-based traits. We show that regional climate drivers affect the taxonomic diversity and composition of each city's parks. Furthermore, the number of functional strategies also reflects a stabilization of FDis among cities. Shannon diversity of both woody and turf species responds to climatic shifts, whereas the FDis of woody and turf species diverges: from this we can infer that woody-species FDis is influenced by park management preference, while turf is strongly limited by extreme climates. Integrating woody, turf, and spontaneous vegetation with multiple metrics of diversity enabled these results and answers a call for work to incorporate multiple functional types within a single study (Nielsen et al., 2014). To develop a comprehensive diversity framework for entire cities, future work should incorporate more cities to represent more regional climates and differences in local horticultural preference.
Following the paradigms of urban ecology "in", "of", and "for" the city, our hypothesis required a synthesis of both climate drivers and the influence of park managers' preferences to understand the patterns of recreational park diversity (Pickett et al., 2016). Our study synthesizes a biotic (ecology "in" the city) and a social-ecological (ecology "of" the city) influencer of diversity, and our results provide a rationale for why specific vegetation types and functions are more influenced by either climate or preference. Our results show the influence of management preference guiding diversity in recreational parks, where service-based traits and FDis (excepting turf) do not change while taxonomic alpha diversity decreases into arid regions. Using our approach, we can identify functional co-benefits that could enhance the selection of park vegetation to provide climate resiliency along with traditional park management. The baseline number of days of extreme heat (>35 °C) in Palm Springs is projected to increase from 135 to 179 days by the end of the century (Sun et al., 2015), creating opportunities for the current stewardship of service-based function to shift in these extreme cities as the availability of viable vegetation decreases. By incorporating the results of this study into local urban park planning in cities at risk of extreme heat, we can move to practicing ecology "for" the city as well.
DATA AVAILABILITY STATEMENT
The datasets generated for this study can be found at doi: 10.6086/D1FT1R.
AUTHOR CONTRIBUTIONS
PI, DB, CS, and GJ designed the study. PI, DB, and MR collected all the data for the study. PI and DB completed the statistical analysis of the data. PI wrote the manuscript. All authors contributed to the editing of the manuscript and approved the submitted version.
FUNDING
This work was supported by NSF CBE -1444758 and CNH -1924288.
International Comparison of Self-Concept, Self-Perception and Lifestyle in Adolescents: A Systematic Review
Objectives: Adolescence is considered a vital time to address healthy attitudes and values towards an effective transition to adulthood. The aim of this review was to analyse self-concept, self-perception, physical exercise, and lifestyle in the late adolescent population. Methods: Systematic review of studies assessing results obtained with the Rosenberg Self-Esteem Scale, the General Health Questionnaire, the Physical Activity Questionnaire for Adolescents, and the Health Behaviour in School-aged Children questionnaires in late adolescents. The PRISMA recommendations were followed. The CASPe quality-check system was applied, excluding articles with a score <8. Results: 1589 studies were found, and 69 articles were selected. Adolescents with high self-concept and self-perception tend to be emotionally stable, sociable, and responsible. No significant differences were found regarding self-concept and self-perception between different countries, but there were differences between men and women. Physical activity and a healthy diet improve self-concept and the perception of body image. Conclusion: Self-concept and self-perception are associated with responsibility, stability, and mental strength. Most healthy behaviours adopted during adolescence are maintained into adulthood. The socio-cultural level of Health Science students is a differential factor for overweight and obesity.
INTRODUCTION
Satisfaction with one's life is related to fewer illnesses, increased happiness, and better emotional well-being [1], and adolescence is considered a key time in life to address the emotional, social, and physiological aspects that affect development and well-being [2].
Many authors agree that adolescence begins at puberty and ends with the complete development of the organism and the onset of adulthood [2][3][4][5]. During this transition, there is a continuous process of self-assertion in the pursuit of independence, through which individual and social identity are consolidated [5]. The World Health Organisation (WHO) considers late adolescence to be the period between 19 and 24 years of age. By this time, the person is preparing for their profession and adult life, so it is crucial to promote healthy lifestyle habits, which in most cases are maintained throughout adulthood [5][6][7]. In Hispanic cultures, there is also a characteristic family model with dependence on care, security, protection, and the social policies of the welfare state [8]. In any case, for the subject to adopt and develop the competences, skills, and values that will enable an effective transition to adulthood, it is necessary to focus on the positive development of the adolescent [9].
Self-concept, self-esteem, and self-perception are interrelated and condition the lifestyle and health-related habits of individuals, particularly of adolescents.
Self-concept is understood as the set of feelings that the subject has about him/herself. In adolescents, it is important since it is an indicator of an adequate physical, cognitive, behavioural, affective, and social integration of the individual [10]. It is related to the individual's own interpretation of the world [5], and it is also influenced by values, cultural expectations, and personal relationships [11]. The higher the self-concept, the greater the feeling of satisfaction [2,12]. Following this line of argument, self-esteem is understood as the sense of personal efficacy or self-efficacy, the assurance of self-worth, and the right to live and be happy [11,13,14]. Self-esteem is related to self-respect and self-acceptance [15]. It is a protective factor against unhealthy states such as anxiety, depression, and suicidal ideation [13,16].
Finally, self-perception is important for understanding how individuals think, behave, and relate to others. Self-perception includes the internally conscious and organised concepts that the individual has about him/herself. It is related, to a greater or lesser extent, to age, schooling level, income, race, marital status, smoking, physical activity, alcohol consumption, presence of chronic morbidity, and body mass index [17,18].
Overweight and obesity are among the most important problems in adolescence due to their psychological and social impact. The expansion of new technologies has influenced the prevalence of sedentary lifestyles, unbalanced and hypercaloric diets, and social changes [19]. Moreover, in terms of morbidity, there has been an increase in the risk factors for cardiovascular disease and other chronic pathologies in adulthood [5,20]. In Spain, the prevalence of overweight and obesity in this age group is estimated at 15.5% in women and 16.5% in men.
Health education approaches are decisive in adolescence [20]. Thus, the practice of physical exercise is key to prevent overweight and obesity. It is responsible for the proper functioning of the body and is part of physical well-being and a healthy lifestyle [21][22][23]. It is also related to increased cognitive competence [21,22] and prevents chronic diseases such as obesity, cardiovascular diseases, and metabolic syndrome [23]. According to various studies, men devote more time to physical activity [21,22].
Living a healthy lifestyle has an impact on the prevention of cardiovascular diseases, which are the main cause of premature death in industrialised countries. The influence of lifestyle is also determinant in reducing the onset of pathologies such as type 2 diabetes mellitus, hypertension, dyslipidaemia, overweight, and obesity [24,25]. On the other hand, the psychological state that most influences lifestyle is stress, which is associated with poorer health by increasing the risk of heart disease, cancer, and/or suppression of the immune system [25][26][27]. Increased job dissatisfaction, work intensity, inflexible working hours, or difficulty in work-life balance are associated with increased sick leave due to stress, anxiety, or depression [25][26][27][28].
Comparing the self-concept, self-esteem, self-perception, and physical exercise of young people depending on their culture is fundamental to understand the actions for improvement that should be carried out on lifestyle. Knowledge of whether there are differences with respect to the environment and socio-cultural level is necessary to understand how these influence the individual. It would also facilitate a positive approach to adolescent development. In order to assess the self-concept, self-esteem, self-perception, physical activity, and lifestyle of adolescents, this research team previously conducted a systematic review titled "Questionnaires assessing adolescents' self-concept, self-perception, physical activity, and lifestyle: a systematic review" [26]. The aim was to determine which questionnaires are optimal for the assessment of these concepts. In conclusion, the Rosenberg Self-Esteem Scale (RSES), the General Health Questionnaire 12 (GHQ-12), the Physical Activity Questionnaire for Adolescents (PAQ-A), and the Health Behaviour in School-aged Children (HBSC) study are valid and reliable tools for the assessment of self-concept and self-esteem, self-perception, physical activity, and lifestyle, respectively.
METHODS
A scoping-type systematic review was developed, following the recommendations of the Preferred Reporting Items for Systematic reviews and Meta-Analyses, also known as the PRISMA statement [29] (Figure 1). A search was carried out in different databases on the self-concept, self-esteem, self-perception, physical exercise, and lifestyle of late adolescents at an international level.
Eligibility Criteria
There is no prior record of the study protocol. However, any study was acceptable regardless of its design as long as it assessed self-concept, self-esteem, self-perception, physical activity, and lifestyle with the RSES, GHQ-12, PAQ-A, and HBSC instruments in late adolescents. Therefore, the study population was late adolescents. Articles in English, Portuguese, and Spanish were included. The exclusion criteria were: studies not reaching a score of at least 8 in the Critical Appraisal Skills Programme Español (CASPe) [30]; publications without a scientific basis, with an unrepresentative sample (children and/or adults), or whose representativeness was not stated when describing the sample; articles whose statistical significance was not stated or whose results were not statistically significant; and questionnaires with a Cronbach's alpha lower than 0.75.
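To make the screening step concrete, the following is a minimal illustrative sketch (not code used in the review, whose screening was performed manually) of how the stated eligibility thresholds could be applied to candidate study records; the field names (`caspe_score`, `cronbach_alpha`, `language`, `population`, `instruments`, `statistically_significant`) are hypothetical names introduced here for illustration.

```python
# Hypothetical sketch of the eligibility filter described above.
# The review applied these criteria manually; field names are illustrative.

ACCEPTED_INSTRUMENTS = {"RSES", "GHQ-12", "PAQ-A", "HBSC"}
ACCEPTED_LANGUAGES = {"en", "pt", "es"}  # English, Portuguese, Spanish

def is_eligible(study: dict) -> bool:
    """Return True if a candidate study meets the stated inclusion criteria."""
    return (
        study.get("caspe_score", 0) >= 8              # CASPe critical appraisal threshold
        and study.get("cronbach_alpha", 0.0) >= 0.75  # instrument reliability threshold
        and study.get("language") in ACCEPTED_LANGUAGES
        and study.get("population") == "late adolescents"
        and bool(ACCEPTED_INSTRUMENTS & set(study.get("instruments", [])))
        and study.get("statistically_significant", False)
    )

candidates = [
    {"caspe_score": 9, "cronbach_alpha": 0.82, "language": "en",
     "population": "late adolescents", "instruments": ["RSES"],
     "statistically_significant": True},
    {"caspe_score": 7, "cronbach_alpha": 0.80, "language": "es",
     "population": "late adolescents", "instruments": ["HBSC"],
     "statistically_significant": True},  # excluded: CASPe score below 8
]
print([is_eligible(s) for s in candidates])  # -> [True, False]
```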
Study Selection Process
A critical reading of the articles was carried out, and they were assessed using the CASPe [30] system. The CASPe classification was determined by the type of research of each article, where the first two questions were elimination questions. The accepted level for the critical reading was 8 points, irrespective of the type of study (Supplementary File S1). Quality assessment was carried out independently by peers. Divergences were resolved by debate and consensus among the investigators. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) [31] system was used to assess the level of recommendation. High- and moderate-quality articles were selected (Supplementary File S1). To assess the clinimetric quality of the instruments, the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) [32] tool was applied (Table 2). Automation tools were not used.
Data Extraction Process
Data extraction was performed qualitatively by the researchers. Discrepancies were identified, and the extraction was agreed upon to produce the data list.
Data List
To structure the data, all results compatible with the use of the RSES, GHQ-12, PAQ-A, and HBSC questionnaires on late adolescents were searched for at the international level. The results of the different items were compared on the basis of self-concept, self-esteem, self-perception, physical exercise, and lifestyle among young people with socio-cultural differences. Table 1 lists the results consistent with each domain.
Assessment of the Risk of Bias in the Individual Studies
During the critical reading of the articles, their internal structure, validity, and reliability were observed. The assessment of the degree of recommendation, as well as the analysis of the study population, the variables, the interventions used, and the results are described in Supplementary File S1.
Measures of Effect
The following questionnaires were used: the RSES to assess self-concept and self-esteem; the GHQ-12 for the assessment of self-perception; the PAQ-A to measure physical exercise; and the HBSC study, which reflects the lifestyle of adolescents. Table 2 describes the clinimetric quality of these instruments using the COSMIN tool [32].
Synthesis Methods
All articles rated as moderate or high level using the GRADE system [31] were gathered. All articles using the RSES, the GHQ-12, the PAQ-A, and the HBSC on adolescents at the international level were analysed and selected according to the COSMIN protocols [32] (Table 2).
Assessment of Publication Bias and Certainty of Evidence
A critical reading of the articles selected from the consulted databases was carried out. This systematic review was peer-reviewed by the researchers, who decided by consensus on those issues where they disagreed. The CASPe [30] system was applied to the articles used for the development of this systematic review, and this work was also assessed once completed to self-check for possible methodological biases, receiving a score of 10/10 points. The recommendations of the PRISMA [29] statement were followed as a self-checklist (Supplementary File S2).
RESULTS

Study Selection
After the search, 1589 studies were identified. The total number of articles eliminated for being duplicates, being non-retrievable publications, not meeting the study objectives, and/or not reaching a CASPe [30] score of at least 8 was 1519. A total of 69 articles were selected, as represented in the flow chart (Figure 1). The 4 methodological articles PRISMA [29], CASPe [30], GRADE [31], and COSMIN [32] were also added.
Study Characteristics
These are described in Supplementary File S1. In total, 61 case-control studies, 4 systematic reviews, 1 clinical trial, 2 cohort studies, and 1 clinical prediction rule study were selected.
Risk of Bias of Individual Studies
This was assessed with the CASPe [30] critical appraisal tool (Supplementary File S1).
Self-Concept and Self-Esteem
As already mentioned, self-concept and self-esteem are linked. Thus, according to the study by Daniela-Calero A, et al. [10], having high self-esteem and self-concept in adolescence is significantly related to being an emotionally stable, sociable, and responsible subject.
Gálvez-Casas A, et al. [9] showed that overweight is more prevalent among males and obesity among females. The highest scores for general self-concept were obtained by people whose weight was within normal limits. For this reason, the authors highlight the need to contribute to the improvement of physical self-concept, as it favours the balanced development of the adolescent's personality [9]. The research by Molero D, et al. [41] states that physical self-concept is related to the age of the subject. Scores on this concept improve over the years, being higher at the beginning and intermediate stages of life, and lower during adolescence. On the scales of physical ability and physical attractiveness, it was the adults who obtained better scores. These data could indicate that there is better physical acceptance at older ages.
Self-Perception
The General Health Questionnaire (GHQ) is a tool for measuring minor psychiatric morbidity in community, primary care, or medical-surgical outpatient settings. It assesses a person's self-perceived health over the past six months [44]. The 12-item GHQ is the most widely used tool for the assessment of non-psychotic psychiatric disorders, and it measures mental health in specific groups such as adolescents [45][46][47][48][49][50][51]. It has been adapted to and translated into 38 languages and applied in national health surveys and studies carried out by the WHO [51]. It has a two-dimensional structure: one part assesses depression and the other, social dysfunction [45,51].
A study in a hospital institution in the city of Medellín using the GHQ-12 [46] surveyed 29476 people over 16 years of age and found that the perception of psychological well-being or distress, both of the self and of social functioning, is related to intrinsic factors (motivational, cognitive, emotional, and personality traits) [46]. In the study by Brabete AC, et al. [49], where the surveyed population is from Romania, it is women who have a higher prevalence of mental illness. During adolescence, a greater differentiation in this respect between males and females begins to emerge.
Videra-García A, et al. [3] indicate a significant sex differentiation and show that boys have a better assessment of their health. Although females scored higher on the health perception questionnaire, this does not reflect better health: since it is females who have a higher prevalence of mental illness, the result indicates greater health problems in girls than in boys. This study concludes that males have a better physical self-concept and self-perception of health than females, with a confidence level of 95% [3]. Inchley J, et al. [66] state that Spanish adolescents' perceived support from their families is high, but scores on the ease of communication with parents decrease. However, indicators of emotional well-being receive positive results.
Soria-Trujano R, et al. [67] concluded that it is Mexican women who present more cases of depression. In addition, health science students report a greater presence of anxiety due to their work experience in health centres, which leads to sleep problems [67].
Lifestyle
The Health Behaviour in School-aged Children (HBSC) study measures young people's lifestyle habits and informs the creation of health promotion tools. It evaluates socio-demographic variables, diet, hours of sleep, risky consumption, oral hygiene, sexual behaviour, physical activity and sedentary behaviours, family context, leisure time, schooling, social environment, general health and psychological status, and socio-economic data. It is an initiative promoted by the WHO at the international level. Its latest version was published in 2020.
The report "Spotlight on adolescent health and well-being" [68] establishes a comparison of the lifestyle of young Spaniards and their peers in other European countries. The main findings are as follows: the level and quantity of physical exercise recommended by the WHO has worsened: less than 1 in 5 adolescents do so. Spain became involved in this research in 1986 [69,70].
Regarding nutritional habits, the majority of young people at the international level do not follow the recommendations, which is detrimental to their healthy development. For this reason, the level of overweight and obesity has increased considerably, affecting 1 in 5 adolescents [7].
The results of the study "Healthy lifestyles in Nursing students of the Cooperative University of Colombia" [25] show that 27.3% of Nursing students are overweight and 7.8% are morbidly obese. The students indicate that they prefer to do physical exercise outside the university, but their lack of practice is striking [25].
Despite the cultural level of university students in Health Sciences and their extensive knowledge of healthy nutrition [19], they do not act accordingly. It is considered essential to promote health education as a promotional tool for young people and to generate changes for their own benefit and for the benefit of society [19].
Salas-Salvadó J, et al. [23] report that people who follow a Mediterranean diet have a lower chance of suffering from cardiovascular diseases. A follow-up of almost 5 years was carried out, and the results indicate that the probability of suffering a primary cardiovascular event was 30% lower in people following this traditional diet. Also, the consumption of nuts and olive oil is associated with a 40% lower risk of diabetic retinopathy and a higher probability of reversing the metabolic syndrome [23]. On the other hand, although substance use has decreased, the number of adolescents who use alcohol and tobacco remains high: 1 in 5 adolescents have been drunk two or more times in their lifetime, and 1 in 6 respondents have smoked in the last 30 days. Regarding risky sexual behaviour, 1 in 4 adolescents have had unprotected sex [68].
Soria-Trujano R, et al. [67] compare Mexican health science professionals and show that, among nurses, there is a higher number of consumers of toxic substances such as alcohol and tobacco. Men stand out in their consumption, although women also report a significant intake. This contrasts with the knowledge they have about the health problems associated with the consumption of these substances. Although consumption can reduce the level of stress, it is detrimental to academic performance [67].
In the research carried out by Hernando A, et al. [6], boys have a worse academic performance than girls, but they spend more time among friends. Women show a decrease in physical exercise and a reduction in the number of hours of sleep [6].
Physical Exercise
The PAQ-A (Physical Activity Questionnaire for Adolescents) assesses the physical exercise of the person in the last 7 days. It consists of 9 questions that measure aspects of the physical exercise performed by the adolescent. It also provides information on whether the person has been ill. It is scored on a scale of 1 to 5 points that establishes a graduation of the level of physical activity carried out. It also makes it possible to know at what time of the week the person is most active [22,68,69].
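As an illustration only, the sketch below computes a PAQ-A summary following the commonly used convention of averaging the activity items on the 1-5 scale, with the illness question treated as a validity flag rather than entering the score; the exact item handling here is an assumption, not taken from the articles reviewed.

```python
# Illustrative PAQ-A scoring sketch. The averaging of activity items and the
# use of the illness question as a validity flag follow common practice and
# are assumptions for this example, not a prescription from the review.

def paq_a_summary(activity_items: list[float], was_ill: bool) -> float | None:
    """Return the mean activity score (1-5), or None if illness made the week atypical."""
    if was_ill:
        return None  # an atypical week should not be scored
    if not all(1 <= item <= 5 for item in activity_items):
        raise ValueError("each PAQ-A item is scored on a 1-5 scale")
    return sum(activity_items) / len(activity_items)

# Example: a moderately active adolescent over the last 7 days
score = paq_a_summary([3, 4, 2, 3, 3, 4, 2, 3], was_ill=False)
print(f"PAQ-A summary: {score:.2f}")  # -> PAQ-A summary: 3.00
```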
According to the study by Rizo Baeza MM, et al. [5], overweight and obese young people spend more hours doing physical activity than those of normal weight, followed by those of low weight, who dedicate the fewest hours to physical activity, the difference being statistically significant [5].
According to Martínez-Gómez D, et al. [22], physical activity measured by the PAQ-A questionnaire is associated with indicators of adiposity, bone mineral content, psychological indicators, and heart rate variability [22].
Ruiz-Ariza A, et al. showed that men are more attracted to physical activity than women [20].
The level of physical activity in a sample of adolescents aged between 14 and 17 years was analysed in Colombia [70]. Statistically significant differences were found between high and low levels of physical exercise according to the PAQ-A questionnaire. Young people with higher levels of physical activity showed an increased interest in physical exercise. It is noteworthy that females completed workouts twice a week over four years but that this activity did not increase with age. The authors conclude that students with better results in the PAQ-A are more likely to attain better physical condition (strength, endurance, and speed).
According to Rincón Herrera AD, et al. [71], Colombian adolescents do not increase their level of physical exercise. Likewise, according to Ruiz Ariza A, et al. [20], Spanish male adolescents are more attracted to sports than their female counterparts. However, the study conducted by Rizo Baeza MM, et al. [5] contrasts with Rincón Herrera AD, et al. [71]: according to the former, young people with weight problems practise more sport than the rest, whereas the opposite is true for Colombian adolescents.
Synthesis results: the results of the synthesis of the individual works are shown in Supplementary File S1. The assessment of the questionnaires using the COSMIN [32] tool is shown in Table 2. Table 3 summarises the results of this systematic review.
Publication bias: all articles that did not obtain a score greater than or equal to 8 in the CASPe [30] tool were eliminated, thus avoiding bias in the results of this systematic review.
DISCUSSION
Self-assessment of health should be incorporated into health surveys on a regular basis, as self-perception of health is a good predictor of morbidity and mortality [31]. Men tend to suffer from life-threatening health problems. In contrast, women suffer to a greater extent from disabling conditions due to chronic diseases. For this reason, there is an increase in women's negative self-perception [72]. A person's lifestyle contributes to the development of chronic non-communicable diseases, which are the main cause of morbidity and mortality [24]. The resulting public health problem justifies the search for instruments to assess the state of self-concept, self-perception, physical exercise, and lifestyle. Valid and reliable assessment tools are required [26].
Among the results with a moderate-high degree of recommendation (GRADE) [31], it should be noted that adolescents with a high self-concept are predisposed to be emotionally stable, sociable, and responsible [10]. Some authors attach great importance to the improvement of physical self-concept because it favours the balanced development of the young person's personality. Having a normal body mass index predisposes a person to a better general self-concept; however, the number of overweight and obese people continues to increase [9]. Physical acceptance improves with age: adults score higher on attractiveness and physical ability, which indicates a relationship between age and self-concept. In several articles, this fact is linked to the social pressure that exists among the youth [41]. No differences are reported between adolescents from different countries or socio-cultural backgrounds. All authors reach the same conclusions: self-concept is fundamental in the adolescent stage; it is linked to self-esteem and directly influences the psychological and physical state of the person. They suggest that, in adulthood, self-concept and self-esteem are scored better. However, improving self-concept from an early age is key, as it favours the balanced development of the adolescent's personality [9,10,[33][34][35][36][37][38][39][40].
With a moderate level of recommendation, it was found that men have a better physical self-concept and self-perception of health than women [2]. The prevalence of mental illnesses such as depression and anxiety is higher among females [72]. In turn, Health Science students have a higher prevalence of anxiety, which leads to sleep deprivation [49,67]. Spanish adolescents rate the support from their relatives as high, but this rating decreases when it comes to being able to communicate with their parents. It is worth noting that emotional well-being receives positive results [68]. In conclusion, no cultural differences are found with regard to the prevalence of mental illnesses such as depression or anxiety in women. This fact should be highlighted, since an improvement would be expected in countries with an educational and social system such as Spain's. From an early age, there is an emphasis on health education, and a multitude of tools are provided for the positive development of adolescents. However, it is boys who have a better assessment of their health [3,42,43,48,49,68,71].
Health science students have the knowledge and tools to improve their lifestyle; however, they do not apply this knowledge to their own benefit [5,18,24,72]. Following a correct diet prevents the onset of chronic non-communicable diseases [23]. Among Nursing students there is a higher number of consumers of toxic substances such as alcohol and tobacco. Men stand out, despite the fact that women also have a high intake. It is striking that they are not capable of putting their knowledge into practice, given that they will be responsible for communicating a message of prevention and health promotion to the general population [67,71,72].
Regarding cultural differences, the data provided by the RSES on self-esteem and self-concept do not report significant differences between adolescents from different countries. This concept is fundamental during the youth stage because it directly influences the psychological and physical state of the person. Something similar occurs with the GHQ-12. The self-perception of adolescents is fundamental to avoid psychological illnesses such as depression, anxiety, or suicidal thoughts. It is necessary to focus on the positive development of adolescents and, especially, on this concept in women, since it is they who have the highest prevalence of psychiatric pathology at the international level [3,42,43,48,49,67,68,73].
Physical exercise, assessed by the PAQ-A questionnaire, is shown to be lower in both Colombian and Spanish women than in men [20,70]. However, Spanish adolescents with overweight and obesity problems do more sport than young Colombians with the same health problem [5,70]. Body weight has an impact on the hours of physical exercise adolescents take, although it is men who are more attracted to sports, devoting more hours to them [20,22]. Nevertheless, fewer than 1 in 5 adolescents engage in physical exercise [6]. It would be interesting to delve into the reasons for the decrease in attraction to physical activity among women, and the question arises as to whether body image influences interest in undertaking a higher level of physical exercise. In conclusion, comparing the self-concept, self-esteem, self-perception, and physical exercise of young people depending on their culture is essential to understand the actions to be taken to improve lifestyles. Knowing whether there are differences with respect to the environment and socio-cultural level is key to understanding whether these influence the individual. Understanding and promoting a healthy lifestyle in adolescents helps to target and implement interventions from a preventive point of view. It also facilitates a positive approach to adolescent development, and it reduces economic costs, since the correct approach to young people avoids future complications such as chronic non-communicable diseases (depression, anxiety, overweight, and obesity, among others) and the hospital admissions they entail.

Table 3. Summary of national and international results by concept.
Self-concept and self-esteem: No differences reported.
Self-perception: No cultural differences are found with regard to the prevalence of mental illnesses such as depression or anxiety.
Physical exercise: Men are more interested in sports than women. Young Colombian women do not increase their level of physical exercise. Young Spaniards with weight problems do more sport than the rest, whereas it is normal-weight young Colombian men and women who do more sport.
Lifestyle: Health science students do not act on the knowledge they have learnt about healthy lifestyles. There are more Nursing students who are overweight, obese, and substance abusers.
A healthy young and adult population is beneficial not only for the individual but also for their environment, which benefits from having a productive and healthy community. Improving self-concept increases the likelihood of being a more sociable, responsible, and emotionally stable person, which is linked to greater self-esteem.
Limitations
This study is conditioned by the validity and reliability of the data provided in the selected scientific articles. It is also conditioned by the restriction of the search to articles in English, Portuguese, and Spanish, and it is influenced by the study selection processes, in which the CASPe [30] and GRADE [31] systems were taken into account. The aim is to review the literature and describe the self-concept/self-esteem, self-perception, physical activity, and lifestyle of adolescents at an international level. The results are extracted from the questionnaires selected for the assessment of the four variables mentioned above. This systematic review would provide more representative data if further research were carried out using the same methodology and broadening the selection of articles.
AUTHOR CONTRIBUTIONS
Conceptualization, formal analysis, investigation, methodology, resources, software, supervision, validation, visualization, writing-original draft, writing-review and editing: NP-L, MS-G, JR-G, JG-S, and GD-C. Data curation and project administration: NP-L, MS-G, and GD-C. All authors have read and agreed to the published version of the manuscript. NP-L and GD-C had full access to all study data and took responsibility for the integrity of the data and the accuracy of the data analysis. They also assessed each study used in this systematic review individually.
Local knowledge of the Using tribe farmers in environmental conservation in Kemiren Village, Banyuwangi, Indonesia
Article history Received: 7 October 2019 Revised: 14 March 2020 Accepted: 29 March 2020 Using farmers (a Banyuwangi ethnic community) in Kemiren Village use their ancestral knowledge in utilizing natural resources and the environment so that its sustainability is maintained. This study aims to identify the local knowledge of Using farmers in Kemiren Village, Banyuwangi, Indonesia, which plays a role in preserving their natural resources and environment. The study uses a qualitative approach, and data were collected using documentation, interview, and field observation techniques. Data were analysed using cross-referencing and repeated-information methods. The local knowledge that Using farmers apply to manage the environment takes the form of values (togetherness, obedience, consensus, fairness, and caring), norms (prohibitions/taboos and suggestions in utilizing natural resources), beliefs (providing labuhan/offerings and selamatan/rituals), and practices in utilizing natural resources. The primary key to environmental preservation is the harmonious relationship among farmers.
INTRODUCTION
Local knowledge relates to the knowledge, beliefs, traditions, practices, and institutions, as well as the views of life held by local communities about the relationship between living things, including fellow humans, and their relationship with the environment (Gadgil, Berkes, & Folke, 1993;Vandebroek, Reyes-García, Albuquerque, Bussmann, & Pieroni, 2011). Cultural norms and values possessed by local communities are also part of the local knowledge system (Agatha, 2016). Local knowledge is a cultural heritage of practices that can be used as a source of information for the planning and management of land and natural resources (Eyporsson & Thuestad, 2015). Local knowledge is also dynamic, adaptive, and holistic (Beckford & Barker, 2007).
Ancestors of indigenous peoples in Indonesia conceptualized local knowledge in the form of local wisdom that is used as a guide for behaving and acting towards nature and the environment (Siswadi, 2011). They consider it imperative to maintain and utilize natural resources in a balanced or sustainable manner (Thamrin, 2013). Natural resources are utilized using perspectives obtained from experience and knowledge (Heryanto, Supyandi, & Sukayat, 2018). Therefore, local knowledge can be used as a reference for how humans should behave to preserve nature (Darusman, 2014).
Local communities in Indonesia play a role in conservation through traditional land management (Iswandono, et al., 2016;Tamalene, Hasan & Kartika, 2019). The Baduy community in Banten manages forest natural resources by dividing them into three: larangan forest, dudungusan forest, and garapan forest. Entry to the larangan forest is prohibited for anyone, limited harvesting may be done in the dudungusan forest, and the garapan forest functions as fields or huma (Suparmini, Setyawati, & Sumuar, 2013). Tallasa kamase-mase (simple life, as it is) is a principle of life held by the Ammatoa community that has prevented excessive harvesting of forest products (Sukmawati, Utaya, & Susilo, 2015).
Agriculture has close links with indigenous peoples in Indonesia, where indigenous farmers manage their farming using customary provisions (Kurniasari, Cahyono, & Yuliati, 2018). Local knowledge such as rituals, traditional ceremonies, and activities related to local values affects the environmentally friendly behavior of farmers, reducing the negative impact of agricultural land management on the environment (Mulyadi, 2011;Hariyadi, Tamalene, & Hariyono, 2019). The Using community (also called the Osing tribe) is an indigenous community considered native to Banyuwangi Regency, East Java Province, Indonesia. Kemiren Village is one of the villages in Banyuwangi Regency where the community still firmly holds the Using tradition. The community's interaction with nature is related to agriculture because most of the people are farmers. They have local knowledge to preserve their environment (Herawati, 2004), which has been proven to contribute to the preservation of water resources in and around Kemiren Village (Sumarmi, 2015). The rice fields in Kemiren are also known to be fertile and never lack water, so farmers can grow crops throughout the year. They still maintain the rice-field culture today, a culture that means learning to respect and care for nature as one would for oneself (Saputro, Purwadi, & Marhaedi, 2015).
Local knowledge is formed informally and is rarely documented because it is inherited verbally. This study aims to identify the local knowledge of Using farmers in Kemiren in managing natural resources and the environment, which plays a role in the preservation of the village environment. Local knowledge needs to be documented because it can be developed as a model for sustainable natural resource management, at a time when many local practices have been lost (Kala, 2013). Local knowledge can also be lost gradually through the introduction of unsustainable development concepts, and the effects of this loss are felt not only by the local community but also by other communities (Ihenacho, Orusha, & Onogu, 2019).
Kemiren Village is a village that is open to technology and globalization. Kemiren farmers have adopted new technology products such as hand tractors to plow fields (although some individual farmers still use cows) and rice-threshing machines. They also use chemical fertilizers and receive assistance from agricultural extension workers. Therefore, it is essential to document the local knowledge of Using farmers in Kemiren so that this knowledge is not lost and can be passed on to the next generation.
Research design
The research approach used was qualitative, and data were collected through observation and interviews (Albuquerque, Ramos, Paiva de Lucena, & Alencar, 2014). Observations were made through participant observation and unstructured observation based on Bungin (2007). Participant observation was done by observing and being directly involved in the activities of informants who are local experts, while unstructured observation involved observing the activities of farmers and the condition of rice fields in Kemiren Village. Interviews were conducted by establishing direct communication with selected informants and included semi-structured interviews and informal interviews based on Albuquerque, Ramos, Lucena, & Alencar (2014).
Participants
This research was carried out in Kemiren Village, Glagah Subdistrict, Banyuwangi Regency, East Java, from February to May 2019. Informants were selected through purposive and snowball sampling techniques. The purposive sampling technique was used to select key informants, namely 2 elders of Kemiren Village, while the snowball sampling technique was used to select recommended informants based on information from the key informants. The recommended informants consisted of 10 farmers who are local experts in the agriculture of Kemiren Village, 1 modin banyu, and 3 Irrigation Service officers. The modin banyu is appointed to regulate the distribution and utilization of Kemiren Village irrigation based on the farmers' deliberations and agreements.
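As a rough illustration of the two-stage informant selection (purposive seeds followed by snowball expansion), the sketch below treats snowball sampling as a breadth-first traversal of a recommendation network; the network itself and the informant labels are invented for the example and are not data from the study.

```python
from collections import deque

# Hypothetical recommendation network: who each informant points to.
recommendations = {
    "elder_1": ["farmer_1", "farmer_2", "modin_banyu"],
    "elder_2": ["farmer_3", "irrigation_officer_1"],
    "farmer_1": ["farmer_4"],
}

def snowball(seeds, graph):
    """Breadth-first snowball sampling starting from purposively selected seeds."""
    selected, queue = list(seeds), deque(seeds)
    while queue:
        for candidate in graph.get(queue.popleft(), []):
            if candidate not in selected:
                selected.append(candidate)
                queue.append(candidate)
    return selected

# Seeds correspond to the purposive stage; the expansion to the snowball stage.
print(snowball(["elder_1", "elder_2"], recommendations))
```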
Data analysis
Data were analyzed with cross-referenced and repeated-information methods (Albuquerque, Lucena, & Lins, 2014). The results of the analysis were descriptive data on the local knowledge of Using farmers that plays a role in the environmental preservation of Kemiren Village.
Results and Discussion
Kemiren Village is at an altitude of 187 m asl. Farmers and farm laborers make up 38.5% of the entire village population, while rice paddies and dry fields cover 78% of the total area of Kemiren Village. The community's agricultural system is settled agriculture using a technical irrigation system. Irrigation water comes from the Gulung River, which flows from its upstream in the Kali Bendo plantation area, with arrangements made by officers from the Irrigation Service Department in synergy with the modin banyu. The Gulung River never runs dry, even during the dry season, so it can fulfill the irrigation needs of the rice fields of Kemiren Village throughout the year. Therefore, the fields of Kemiren Village can be planted with paddy year-round.
The local knowledge used by the Using farmers of Kemiren Village in the utilization and management of natural resources is sourced from weluri, which was passed down by their ancestors. Weluri comprises the testaments, advice, knowledge, procedures/techniques, and traditions of the ancestors used as guidelines on how to behave towards fellow humans and the environment.
Weluri is still maintained and carried out today as a form of obedience to the ancestors. Based on the research data, the local knowledge of Using Kemiren farmers that plays a role in environmental conservation can be categorized as values, norms, beliefs, and environmental management practices (Table 1). Values in local knowledge are related to togetherness, obedience, consensus, fairness, and caring (Siswadi, 2011). Values are related to life goals, while norms are used to regulate the behavior and actions taken by humans (Kaczocha & Sikora, 2016). These norms are social norms relating to what other people are expected to do (Nordlund, 2009).
Local knowledge in the form of values
Values in managing natural resources and the environment are still firmly held by the Kemiren community and have become one of its strengths, one that neighboring villagers also recognize. The value of togetherness can be seen in the attitudes of tolerance, mutual respect, and refraining from interfering with or taking other people's property that the Kemiren farmers have developed. This value is also a guideline for all Kemiren people, so they will not dare to disturb, destroy, or take anything from someone else's rice field without the owner's permission, not even grass.
The persistence of Kemiren farmers in maintaining these values has earned the respect of other village communities. Therefore, rice fields belonging to Kemiren people in another village will not be disturbed by farmers in that village even though the fields are not guarded. No one dares to take their plants, not even grass, without permission. The community nevertheless has a unique tradition to signal that plants or grass on their land should not be taken, namely by planting a blarak (coconut leaf midrib) in the ground (Figure 1). Even where there is no blarak, people still will not take plants from other people's land, as an expression of respect and an unwillingness to interfere with or take other people's property. Tolerance among farmers can be seen in the reluctance to ask for grasses growing in a field if the farmer has cows and other livestock, because it is known the grass will be used for animal feed. All matters relating to the necessities of life of all farmers are arranged together through deliberations to reach consensus. Consensus can be created because all things are done in a spirit of kinship, tolerance, and care, so that no one interferes with other people's property or with shared public facilities. Consensus has been able to stop existing conflicts in the community and prevent future conflicts in the use of natural resources because of the community's compliance with the agreements (Bawole, Simbolon, Wiryawan, & Monintja, 2014). The farmers respect the decisions made and, with personal awareness, implement them. Obedience to regulations, whether written or not, is reflected in several practices, such as the irrigation water regulation carried out by the modin banyu, the tree felling rules, and the farmers' group rules that have been mutually agreed upon. Springs that appear on someone's land will also not be owned privately, and other community members are allowed to use them as a form of togetherness and caring for others. Mutual sharing is also applied to other resources such as plants needed for food, medicine, and labuhan (offerings).
Local knowledge in the form of norms
The norms implemented by the Kemiren farming community contain suggestions and prohibitions on behavior to avoid harming nature and fellow humans. The suggestions and prohibitions are (1) maintaining the quality of spring water, (2) labuh (giving offerings) and holding rituals in rice fields and at springs, (3) maintaining irrigation channels, (4) reporting tree felling and planting trees only as replacements, (5) no felling of trees in the spring area, and (6) no trespassing above the spring.

Maintaining the quality of spring water. Kemiren Village has springs with varying discharge; 29 recorded springs are scattered throughout the village (Kemiren Village Profile, 2016). The springs generally appear in gardens and along the Sobo River and the Gulung River, which flank Kemiren Village. Water from the springs flows into both rivers, thus increasing river water discharge. Water from a spring is usually collected first and then channeled through pipes for later use (Figure 2).
Farmers usually use a spring that appears near their fields to clean themselves (take a bath) after their field activities. Kemiren farmers have a habit of working in the paddies all day and going home in the evening, so a simple Muslim prayer place is usually built near the spring. Things that must not be done at a spring are entering the water between the source of the discharge and the shelter, and urinating or defecating near the source of the discharge. These prohibitions are useful for maintaining water quality. In addition to farmers, the springs are also used by the Kemiren community (generally women) to wash clothes.
Figure 2. Springs of Kemiren Village
Maintaining irrigation channels. Rice field irrigation water comes from the Gulung River, sourced from the Kali Bendo plantation (about 10 kilometers from Kemiren Village). Water from the Gulung River flows through irrigation channels until it reaches the community's rice fields. All farmers are encouraged and required to maintain the irrigation channels and not to take actions that could damage them. The farmers also realize that irrigation canals are a necessity for all farmers, so they do not dare to do things that would harm other farmers. The rice field irrigation system in Kemiren Village was once regulated only by the modin banyu; however, agricultural irrigation is now under the supervision of the Irrigation Service, which works in synergy with the modin banyu. The synergy between the Irrigation Service, the modin banyu, and the farmers has kept the irrigation channels well maintained. The watershed (the area along the river) is still well preserved because people obey the prohibition against cutting down trees along it.
Reporting tree felling and planting new trees. Farmers and other community members report to the village government when they cut down trees, even trees of their own planted in their own gardens. This is a rule set by the village government, and the community obeys it with self-awareness. They also plant new trees as replacements before logging is done.
No felling of trees in the spring area. People do not dare to cut down trees beneath which springs emerge, although there are no myths about the trees. The community does this because they know the vital role of trees in the sustainability of springs. If a spring comes out of a bamboo clump, then the bamboo is cut using selective felling so that it does not interfere with the survival of the bamboo population and the spring.
No trespassing above the spring. Another prohibition norm is not to trespass on the area above the spring. The teachings of the ancestors are interpreted with the aim that the land above the spring does not suffer landslides. The community also understands the land above the spring to be forest, so the prohibition is interpreted as ancestral advice that their descendants should not disturb the forest.
The values and norms obeyed and applied by farmers are one of the primary keys to the success of environmental management in Kemiren Village. The community always lives in harmony, with mutual respect and mutual help, and there is no conflict between them. These norms and values are characteristic of rural communities that give more weight to socio-cultural concerns and nature conservation than to economic interests (Chalim, 2012). Social norms strengthen social ties and a sense of social responsibility in the use of environmental resources (Agatha, 2016).
Local knowledge in the form of belief
Providing labuhan/offerings and rituals in rice fields and at springs. Labuh (giving an offering, as shown in Figure 3a) is done as part of the community's universal belief. The purpose of rituals and labuhan is generally to ask for safety, to avoid catastrophe or disease, and so that the water used to irrigate the rice fields will make the paddies flourish and produce abundant yields. According to Koentjaraningrat (2015), rituals and offerings such as those conducted by Using farmers are a form of religious ceremony, a form of surrender to a God with power higher than theirs. Religious activities cause people to feel religious emotions that encourage them to adopt religious attitudes and take religious actions.
Figure 3. (a) Labuhan and (b) adeg-adeg
The labuhan given by each farmer is different because each paddy field depends on each family's ancestral testament. Some labuhan used are getihan cengkaruk, kinangan, incense, jenang lemu, jenang abang, and cekeker, as shown in Table 2. Labuhan can consist of only one type or several types of these materials, depending on the testament of the ancestors. The labuhan carried out by farmers in relation to environmental management (water conservation) takes place at nyingkal (plowing the fields), tandur (planting paddy), dauhan (the dam-cleaning ritual), and rebo wekasan (a ritual on the last Wednesday of the month of Sapar in the Javanese calendar).
The labuhan nyingkal (for plowing the paddy field) is placed near the wangan (the hole where water enters the field) when the water starts to flow into the paddy field. If the plants the farmer is growing do not need flowing water when the field is plowed, as with palawija crops, the farmer does not put out labuhan. The farmers put out labuhan and also plant adeg-adeg in the wangan at tandur (paddy planting) (Figure 3b). Adeg-adeg consists of several plant stems (Table 3) stuck into the ground. Each family has its own adeg-adeg depending on its ancestors' weluri, and it can be only one kind of plant or several plants. Adeg-adeg is believed to be able to ward off diseases or pests that attack the rice. Laying the labuh is accompanied by a prayer, usually said by the farmers themselves. The dauhan ritual takes place in October, with the drying of the dam carried out by the Irrigation Service. The drying process aims to check the condition of the dams and irrigation channels, carry out maintenance, and repair damaged installations. The ritual is led by the modin banyu after the community cleans up the river in the area around the dam and water begins to flow again through the dam. The labuhan given is a cekeker, pitung tawar flowers (flowers mixed with raw rice colored yellow with turmeric juice), jenang abang, and asepan (incense). A selamatan, eating the dish pecel pithik together, is held at the end of the ritual. The modin banyu advises people to live in harmony, respect one another, not fight over irrigation water, and, if there are problems between people in the community, to report to him immediately. The rebo wekasan ritual is carried out because the community believes that on the last Wednesday of Sapar (in the Javanese calendar), disease is transmitted through water. Therefore, on this day, the community must not take water from any water source for drinking and cooking needs for fear of contracting the disease. A ritual is performed at water sources such as rivers and springs by providing labuhan and reciting prayers together, after which a dish of jenang abang and sego golong (white rice with boiled chicken egg and the typical Using pecel sauce) is eaten. Sego golong carries a philosophy, a piece of advice that humans must have clean, soft, white hearts (the parable of the egg white) so that they become as valuable as gold (the parable of the egg yolk). Labuhan is also placed at the spring at harvest time.
The labuhan, adeg-adeg, selametan, and prayers they say in every paddy activity are forms of belief in cosmic forces that can affect their lives. Matters relating to belief are very susceptible to outside culture, and some consider them superstition (Yuan, Lun, He, Cao, Min, Bai, Liu, Cheng, Li, & Fuller, 2014). However, the Kemiren farmers still hold unwaveringly to these things and do not dare to leave them. Farmers do not dare to violate them because they want to obtain safety, avoid catastrophe or disease, and expect abundant harvests. Besides, the ritual is an ancestral weluri and is carried out as a form of respect for the ancestors.
Spiritual and cosmological beliefs have an essential meaning in the use and management of biodiversity (Rankoana, 2015). The use of particular plants for labuhan gives these plants an essential meaning for the people of Kemiren. The community will protect plants beneficial to their lives (Tamalene, Al Muhdhar, Suarsini, Rochman, & Hasan, 2016), which affects the preservation of these plant species. Belief can also influence local communities' behavior to support the implementation of local wisdom in environmental preservation (Limba, Lio, & Husain, 2017).
Local knowledge in the form of practices
Local practices by Kemiren farmers that have an impact on environmental preservation are making land near the river into dry fields (gardens), using cow dung as manure, cultivating plants that are often needed, cooperating to clean dams and irrigation channels, and selective logging.
Local practices relate to the management of biological diversity and ecosystems to ensure the continuity of the natural resources and the flow of ecological functions that support the community (Berkes, Colding, & Folke, 2000).
Making land near the river a dry field (garden). Farmers who have land near the watershed will make it a garden or dry field (Figure 4), even though the land could also be used as rice fields (planted with rice and palawija crops). They understand that trees can prevent soil erosion, so the land is turned into gardens and planted with trees. They worry that if the land were turned into rice fields, it would cause soil erosion, which would harm the environment. Another reason is that springs emerge from the roots of the trees along the watershed. The community also plants trees along the river together with the Irrigation Service. Utilizing cow dung as manure. Farmers are usually also cattle ranchers. Cow dung is used as manure and is usually spread before the singkal (plowing) process is carried out. Manure is made simply: cow dung is placed on the ground and left alone without any treatment. The community indicated that dung can be used as manure when its color is as black as the color of the soil.
Cultivating crops that are often needed. Farmers generally grow the crops they use for daily needs, such as vegetables and cooking spices, although there has been a shift toward buying the plants needed. The purpose of planting is to be able to obtain these plants quickly and in the amounts needed. Planting is carried out in rice fields, gardens, and house yards (home gardens), adapted to each plant's character. Vegetables are usually planted in paddy fields, or farmers leave a small portion of their fields for their daily vegetable needs, while cooking spices are usually planted in the house yard. Farmers who have cows usually let parts of their rice fields become overgrown with grass (especially reeds) so that it can be used to feed their cows.
They also usually cultivate the plants needed for rituals or labuhan. The betel plant, a kinangan material, is often cultivated in the house yard. They also plant dringo, commonly used for sawan (an herbal preparation applied evenly to rice seeds before they are sown), which aims to ward off disease. Sawan consists of dringo leaves, shallots (Allium cepa), and turmeric (Curcuma longa), mashed together with a little water added. Dringo is usually planted near the irrigation channel in the rice fields.
Indigenous peoples use the home garden as a planting area for the plants needed in their lives (Bamin & Gajurel, 2015) and for their economic and socio-cultural needs (Hazarika, Biswas, & Kalita, 2014). Farmers also invite other farmers to take their crops, so a tradition of sharing has formed in the community. This tradition is a weluri from the ancestors to maintain harmony and care for fellow humans, and it is still upheld by farmers today. Planting areas around the house (home gardens) can help maintain harmonious social relations in the community because they become a medium for sharing the plants needed in daily life (Peroni, Hanazaki, Begossi, Zuchiwschi, Lacerda, & Miranda, 2016). Such behavior has fostered an attitude of togetherness, kinship, and caring that creates harmony among Kemiren farmers. The practice of domestication has also played a role in the preservation of biodiversity because the Kemiren people do not need to take these plants from the wild and therefore do not disturb the ecosystem.
Working together to clean dams and irrigation channels. The tradition of cleaning dams and irrigation channels is carried out every October, in conjunction with the dauhan ritual. Farmers, both young and old, work together to clear the river area around the dam of rubbish and tree trunks and to hoe soil and sand to prevent silting of the river. The irrigation channels are also cleaned, and grass that grows and has the potential to inhibit the flow of water is cut. This tradition strengthens the togetherness of the farmers.
Planting trees and selective felling. The ancestors of the Kemiren community planted trees in their gardens so that their descendants could use them in the future. They understood that old wood is stronger and more resistant to wood-destroying pests. The hope was that when their children needed wood to build a house, the trees would be old enough to yield quality wood. At present, only a few still do this, with most choosing to buy wood when they need it.
Tree felling is done selectively, cutting down only trees or bamboo that meet the criteria in terms of age or that are needed. For example, to produce an angklung paglak, the bamboo (Bambusa sp.) that is cut down must be 3 years old with a straight trunk (Utomo, Al Muhdar, Syamsuri, & Indriwati, 2018). Farmers also hold the belief that logging should be done on pasaran pahing (a particular day in the Javanese calendar).
The farmers of Kemiren Village have received assistance from the Agriculture and Irrigation Service Departments. The officers still give farmers space to carry out the local knowledge they have, including the rituals and agricultural techniques they inherited from their ancestors. Thus, the synergy between farmers and institutions proceeds harmoniously. Traditional communities collaborate more easily with modern, foreign technology if their social components are not dismissed as wrong or ignored (Behailu, Pietilä, & Katko, 2016).
The local knowledge of Using farmers that is still applied today has proven to play a role in maintaining the preservation of natural resources, especially water sources, in Kemiren Village. However, rapid urbanization can cause an ethnic community to lose its local knowledge and conservative attitudes (Majumder, Deka, Pujari & Das, 2013). Globalization has changed the lives of Using farmers in Kemiren. For example, farmers choose to use a tractor because they find it more efficient than cows and they no longer need to care for the cows. Harvesting is also faster because it uses a manual or mechanical rice thresher, unlike in the past, when the ani-ani was used. The ani-ani is a traditional tool for cutting rice stalks at harvest time. There was a tradition of cooperation when farmers planted or harvested rice, but they rarely practice it now. The community feels that the necessities of life are increasing, so they look for additional work and have no time for cooperation.
At present, this has not diminished the value of togetherness that they hold fast. Nevertheless, it is feared that the decreasing intensity of interaction will erode the kinship values they have. The research of Iswandono, Zuhud, Hikmat, Kosmaryandi, & Wibowo (2016) on the Manggarai tribe in the Ruteng Forest of East Nusa Tenggara Province found that kinship, communal social ties, and religious rituals that are still adhered to play a role in preserving the traditional land management carried out by the tribe.
The beliefs, knowledge, values, norms, and practices of local wisdom used by the Using community in Kemiren Village when interacting with the environment have created harmony between the community and nature. This harmony has proven to have a positive impact on natural resource management and environmental preservation; natural resources are thus used wisely. People do not dare to take actions that would damage the public facilities that shelter and sustain the needs of the whole community, such as irrigation channels, rivers, and springs. The springs, whose water never dries up even in the dry season, are clear evidence of the preservation achieved by the farmers' determination to maintain their values, norms, and ethics.
Conclusion
Using farmers in Kemiren have local knowledge that they use to interact with nature and fellow humans. This local knowledge is continuously maintained and implemented so that it can regulate the use of natural resources and maintain their sustainability. The local knowledge takes the form of values (togetherness, obedience, consensus, fairness, and care), norms (prohibitions and suggestions in utilizing natural resources), beliefs (providing labuhan/offerings and rituals), and practices in utilizing natural resources and the environment (making land near the river into dry fields/gardens, using cow dung as manure, growing plants that are often needed, working together to clean dams and irrigation channels, and selective logging). The farmers have strong social ties, use natural resources as needed, and behave wisely. Their local knowledge has been able to create a harmonious life and to reduce conflicts that could arise in relation to the use of natural resources.
New Progress on the Role of Glia in Iron Metabolism and Iron-Induced Degeneration of Dopamine Neurons in Parkinson’s Disease
It is now increasingly appreciated that glial cells play a critical role in the regulation of iron homeostasis. Impairment of these properties might lead to dysfunction of iron metabolism and degeneration of neurons. We have previously shown that dysfunction of glia can cause iron deposition and enhance iron-induced degeneration of dopamine (DA) neurons in Parkinson's disease (PD). There has also been substantial growth in knowledge regarding the iron metabolism of glia and its effects on iron accumulation and degeneration of DA neurons in PD in recent years. Here, we attempt to describe the role of glial iron metabolism and the effect of glia on iron accumulation and degeneration of DA neurons in the substantia nigra in PD. This could provide evidence to reveal the mechanisms underlying nigral iron accumulation in DA neurons in PD and provide a basis for discovering new potential therapeutic targets for PD.
INTRODUCTION
Parkinson's disease (PD) is a common neurodegenerative disorder characterized by resting tremor, rigidity, and bradykinesia. Neuropathological hallmarks of PD include the degeneration and loss of dopaminergic neurons in the substantia nigra (SN) and the subsequent dopamine (DA) depletion in the striatum. Although the exact pathogenesis of PD is not fully understood, a growing body of research has confirmed that nigral iron accumulation is involved in the death of DA neurons in PD (Wang et al., 2007; Jiang et al., 2010, 2017; Song et al., 2010). Iron levels in the substantia nigra pars compacta (SNpc) increase significantly, while no significant change is seen in the SN pars reticulata of PD patients (Dexter et al., 1989). Many researchers have since confirmed that iron levels in the SN are significantly higher in PD patients than in normal subjects, using a variety of technologies such as biochemistry, histochemistry, and imaging (Dexter et al., 1991; Sofic et al., 1991; Langkammer et al., 2016). About 90% of patients with idiopathic PD show an increased echogenicity of the SN on transcranial sonography (TCS). Further experiments confirmed a significant positive correlation between the echogenic area of the SN and the concentrations of iron, H-ferritin, and L-ferritin in post-mortem brains (Zecca et al., 2005; Berg, 2006). In recent years, magnetic resonance imaging (MRI), susceptibility-weighted imaging (SWI), enhanced gradient echo T2*-weighted angiography (ESWAN), and quantitative susceptibility mapping (QSM) in vivo have also confirmed increased nigral iron content in PD patients (Wang C. et al., 2013; Pyatigorskaya et al., 2014; Wu et al., 2014; Langkammer et al., 2016; Huddleston et al., 2017). In addition, results showed that iron levels in the SN were associated with the severity of motor symptoms in PD patients (Martin et al., 2008; Wallis et al., 2008; Pavese and Brooks, 2009; Guan et al., 2017).
Conventional MRI and diffusion-weighted imaging at 1.5 T have been recommended by the European Federation of Neurological Societies (EFNS) to support a diagnosis of multiple system atrophy (MSA) or progressive supranuclear palsy versus PD (Berardelli et al., 2013). EFNS has also recommended TCS for the differentiation of PD from atypical and secondary parkinsonian disorders, for the early diagnosis of PD, and for the detection of subjects at risk of PD. They also noted that TCS should be used in conjunction with other screening tests (Berardelli et al., 2013). However, it has been reported that the diagnostic accuracy of TCS in early-stage PD is not sufficient for routine clinical use (Bouwmans et al., 2013). In their study, 196 consecutive patients with clinically unclear parkinsonism underwent a TCS scan of the brain. Two years later, patients were re-examined for a final clinical diagnosis. Results showed that the sensitivity of an SN+ finding on TCS for the diagnosis of idiopathic Parkinson's disease (IPD) was 0.40 and the specificity was 0.61. Therefore, it might not be sufficient to use these techniques on a routine basis for potential PD patients before symptom onset. However, longer follow-up periods would probably increase diagnostic accuracy. More studies should be conducted to identify subjects in a pre-symptomatic phase of PD using these technologies in the future.
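To put those figures in context, a short worked example shows why a sensitivity of 0.40 and a specificity of 0.61 offer little diagnostic value. The 50% prevalence assumed below is illustrative only and is not taken from Bouwmans et al. (2013):

$$\mathrm{PPV} = \frac{\mathrm{Se}\cdot p}{\mathrm{Se}\cdot p + (1-\mathrm{Sp})(1-p)} = \frac{0.40 \times 0.5}{0.40 \times 0.5 + 0.39 \times 0.5} \approx 0.51$$

$$\mathrm{NPV} = \frac{\mathrm{Sp}\,(1-p)}{\mathrm{Sp}\,(1-p) + (1-\mathrm{Se})\,p} = \frac{0.61 \times 0.5}{0.61 \times 0.5 + 0.60 \times 0.5} \approx 0.50$$

Under this assumption, roughly half of both positive and negative TCS results would be wrong, barely better than chance, which is consistent with the conclusion that TCS alone is insufficient for routine early diagnosis.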
It has also been concluded that neurodegenerative diseases involving iron-mediated toxicity may be due to a failure of iron transport or storage mechanisms, rather than to the presence of high levels of non-transferrin-bound iron (NTBI) (Bishop et al., 2011). There are two kinds of iron transport processes in the brain: transferrin (Tf)-bound iron (Tf-Fe) and NTBI. A list of abbreviations and the functions of iron-related proteins are shown in Table 1. Our previous study and others have confirmed that increased iron levels are associated with increased expression of the iron importer divalent metal transporter 1 (DMT1) and decreased expression of the iron exporter ferroportin1 (FPN1) in PD animal and cell models (Salazar et al., 2008; Wang et al., 2009; Jiang et al., 2010). The activation of iron regulatory proteins (IRPs) is responsible for this abnormal expression of iron transporters (Salazar et al., 2008; Wang et al., 2009; Jiang et al., 2010) (Figure 1). Increased iron and DMT1 expression were also observed in post-mortem PD patients (Salazar et al., 2008). This indicates that abnormal expression of iron transporters causes iron accumulation and enhances iron-induced neurotoxicity in PD.
Furthermore, it is now increasingly appreciated that glia might be critically involved in the pathophysiology of PD. Glia are mainly classified as astrocytes, microglia, and oligodendrocytes. Activation of both astrocytes and microglia is found in the SN in PD (Shin et al., 2015). Activated astrocytes and microglia can remove damaged cells and protect neurons by releasing neurotrophic factors. Alternatively, they can also mediate neuronal injury by releasing proinflammatory factors that may be involved in neuronal degeneration. Recently, attention has been drawn to new insights into the function of glia. It is now increasingly appreciated that glia also play a critical role in the regulation of iron homeostasis, and impairment of these properties might lead to dysfunction of iron metabolism and degeneration of DA neurons in PD. Astrocytes, microglia, and oligodendrocytes are all equipped with different iron-related proteins responsible for iron uptake, storage, use, and export (Figure 2). In addition, cultured neurons, astrocytes, and microglia all have the ability to store large amounts of iron, but compared to neurons, glia can store iron more effectively (Bishop et al., 2011). Among them, microglia are the most efficient in NTBI accumulation (Bishop et al., 2011). Furthermore, astrocytes are involved in the formation of the blood-brain barrier (BBB); about 95% of the capillary surface is covered by astrocytic end-feet (Dringen et al., 2007). Therefore, astrocytes are of vital importance for iron transport across the BBB and for maintaining brain iron homeostasis (Dringen et al., 2007). This might be the main source of iron for neurons and microglia (Figure 2). In addition, studies found that iron overload could activate microglia and astrocytes and promote the release of inflammatory and neurotrophic factors, which are involved in the regulation of iron metabolism in DA neurons (Wang J. et al., 2013; Zhang H.Y. et al., 2014).
Therefore, in this review, we describe the involvement of glia in pathophysiology of PD. Then we summarize iron metabolism of glia and the effect of glia on nigral iron accumulation and degeneration of DA neurons in PD. This could provide evidence to reveal the mechanisms underlying the effect of glia on iron accumulation of DA neurons in PD and provide the basis for discovering new potential therapeutic targets for PD.
Activation of Microglia in PD
Microglia are considered resident macrophages in the brain, where they participate in phagocytosis, immune surveillance, and neuroinflammatory processes. Although the etiology of PD is not yet elucidated, increasing evidence implicates microglia-mediated inflammatory processes in the degeneration of DA neurons in PD (Ransohoff, 2016). Results also showed that the activation state of microglia, rather than the number of microglia in the SN, contributed to microglia-induced neurotoxicity of DA neurons (Shin et al., 2015). The role of activated microglia in PD has been well described in previous reports. Post-mortem brain examinations showed many activated microglia in the brains of PD patients, mainly distributed in the SN where degeneration of DA neurons occurred (Banati et al., 1998). This might be mainly due to the SN having the highest density of microglia in the normal brain (Beach et al., 2007). It has been proposed previously that mutated α-synuclein could activate microglia with a proinflammatory response (Su et al., 2009). This process occurred even before nigral neuronal loss in the SNpc (Bishop et al., 2011). These observations in PD patients support the presumption that activation of microglia is involved in the initiation and progression of PD (Bruck et al., 2016). Although it is not clear whether microglia activation is a cause or a secondary consequence in PD, microglia activation-mediated inflammatory processes indeed lead to a vicious circle between inflammatory reaction and neuron damage, and this aggravates the symptoms of PD (Block and Hong, 2007; Neher et al., 2011). On the other hand, activated microglia can also participate in neuroprotection (Le et al., 2016). This "double-edged sword" effect of microglia might depend on the different activation states of microglia in response to different types of stimuli in normal and disease conditions. It is now recognized that two different activation states of microglia exist (Colton, 2009; Cherry et al., 2014). One is classical activation (M1 phenotype), which contributes to the inflammatory response and produces inflammatory cytokines. This is necessary for antigen presentation to kill intracellular pathogens; however, constant production of inflammatory cytokines can induce cell death in disease conditions. The other state is alternative activation (M2 phenotype), an anti-inflammatory phenotype responsible for repair and debris clearance. The proper transition from the M1 to the M2 phenotype might be critical for microglia to efficiently end the inflammatory response. However, in PD conditions, inflammatory cytokines persistently released by microglia in the SN usually overshadow the beneficial molecules. It has been hypothesized that lack of the M2 phenotype might be an important mechanism involved in neurodegeneration (Cherry et al., 2014).

FIGURE 2 | Schematic illustration of brain iron metabolism. Iron can cross the BBB through endocytosis of holo-Tf, followed by iron detaching from Tf inside endosomes and FPN1-mediated iron efflux, or through transcytosis of holo-Tf across the BVECs. (1) Astrocytes: astrocytes can take up Fe3+ via Tf-TfR1. DMT1, Zip14, and TRPC participate in Fe2+ absorption. Cp can oxidize Fe2+ to Fe3+ and then promote FPN1-mediated Fe2+ release. Iron can be stored efficiently in ferritin. (2) Microglia: Fe2+ can be transported via DMT1-mediated iron import and FPN1-mediated iron export. Microglia can also transfer Fe3+ ions to neurons by an Lf/LfR-mediated pathway and store iron in ferritin. (3) Oligodendrocytes: iron is stored in oligodendrocytes mainly in the form of ferritin or Tf. Tf can be released from oligodendrocytes. Tim2-induced ferritin uptake is considered the main mechanism for iron intake. Ferritin released from astrocytes and microglia promotes OPC maturation. BBB, blood-brain barrier; BVECs, brain capillary endothelial cells; Cp, ceruloplasmin; FPN1, ferroportin1; Lf/LfR, lactoferrin/lactoferrin receptor; NTBI, non-transferrin-bound iron; OPC, oligodendrocyte precursor cell; Tf/TfR1, transferrin/transferrin receptor 1; TRPC, resident transient receptor potential channel; Zip14, Zrt/Irt-like protein 14.
The classical view considers neurons as merely passive victims of microglial activation, but in fact they are not (Biber et al., 2007). It is now widely accepted that the interaction between neurons and glia together maintains tissue homeostasis in the central nervous system (CNS). The cluster of differentiation 200 (CD200), belonging to the immunoglobulin superfamily, participates in the regulation of the immune response. CD200 is mainly expressed in neurons, where it can act on the CD200 receptor (CD200R) on microglia and maintain microglia in the resting state (Lyons et al., 2007). Impairment of the CD200-CD200R pathway induced activation of microglia in the SN and thus participated in the degeneration of DA neurons in PD (Wang et al., 2011).
Microglia in Iron Accumulation and Degeneration of DA Neurons in PD
The prominent hallmarks of neuroinflammation are microglia activation and the subsequent secretion of pro-inflammatory cytokines such as interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) (Mogi et al., 1994, 1996). Elevated release of IL-1β and TNF-α from activated microglia was observed in the cerebrospinal fluid, as well as in the SN and striatum of post-mortem brains of PD patients (Mogi et al., 1994, 1996). Direct injection of IL-1β and TNF-α into brain tissue can induce the degeneration of DA neurons (Carvey et al., 2005). It has been reported that TNF-α and transforming growth factor beta 1 (TGF-β1) could up-regulate the iron import protein DMT1 and down-regulate the iron export protein FPN1 in microglia. This increased iron uptake and decreased iron efflux promoted iron accumulation in microglia (Rathore et al., 2012). This accumulated iron in microglia might decrease extracellular iron levels and thus protect DA neurons against iron-induced neurotoxicity in the brain. However, studies have also shown that microglia activation might participate in iron-induced dopaminergic neurodegeneration in the SN (Zhang W. et al., 2014). Their results showed that Fe2+-induced loss of DA neurons was more severe in rat neuron-microglia-astroglia cultures than in neuron-astroglia cultures, indicating the pivotal role of microglia in iron-elicited dopaminergic neurotoxicity. The mechanism is associated with activation of nicotinamide adenine dinucleotide phosphate oxidase 2 (NOX2) in microglia, which produces many immune inflammatory factors (Zhang W. et al., 2014). They further confirmed that NOX2−/− mice were resistant to iron-induced neurotoxicity in DA neurons, indicating that iron-elicited dopaminergic neurotoxicity is dependent on NOX2 activation in microglia. Therefore, inhibiting excessive activation of NOX2 in microglia may be a new target for the treatment of PD.
In addition, our previous study showed that the iron status of microglia can also affect the secretion of pro-inflammatory cytokines, including IL-1β and TNF-α, and thereby participate in the degeneration of DA neurons (Wang J. et al., 2013). Our results demonstrated that lipopolysaccharides (LPS) could activate microglia, resulting in abundant IL-1β and TNF-α secretion; this was enhanced by iron repletion and attenuated by iron depletion. This provides evidence that the iron status of microglia is vitally important for IL-1β and TNF-α release from microglia. Furthermore, IL-1β and TNF-α released from microglia can also affect the iron metabolism of DA neurons. Our previous study showed that IL-1β and TNF-α induced activation of IRP1, thus up-regulating the expression of DMT1 with iron responsive element (DMT1+IRE) and down-regulating FPN1 expression in ventral mesencephalon (VM) neurons (Wang J. et al., 2013). This is responsible for the increased iron influx and decreased iron efflux of VM neurons, leading to iron loading of DA neurons. This led to the hypothesis that excess iron in the SN area activates microglia and releases proinflammatory factors, thus aggravating iron accumulation inside DA neurons (Figure 3).
Another possible mechanism underlying the effect of microglia on the iron metabolism of neurons is associated with interleukin-6 (IL-6). Studies have found that LPS could induce the expression and release of IL-6 in microglia. IL-6 released from microglia could up-regulate hepcidin in neurons via the IL-6/signal transducer and activator of transcription 3 (STAT3) signaling pathway (Qian et al., 2014). Hepcidin is a critical regulator of the entry of iron into cells; by binding to the iron exporter FPN1 it triggers the internalization of FPN1, which inhibits iron export from neurons. Therefore, IL-6 released by activated microglia up-regulated hepcidin in neurons and enhanced iron accumulation by preventing FPN1-mediated iron release from neurons (Figure 3). A recent result showed that activated microglia could also stimulate astrocytes to release hepcidin via IL-6 signaling, which then prevented FPN1-mediated iron release and induced apoptosis of neurons (You et al., 2017) (Figure 4). In addition, ceruloplasmin (Cp) is one of the major copper-binding proteins responsible for converting toxic ferrous iron into ferric iron (Patel and David, 1997). Evidence has shown that Cp could potentiate LPS-induced activation of microglia and increase the production of IL-6 (Lazzaro et al., 2014). These findings provide powerful evidence that the cooperative effect of neuroinflammation and iron accumulation may enhance the degeneration of DA neurons in PD.

FIGURE 3 | (partial caption) (3) Excessive activation of microglia induced by LPS or neurotoxins could release IL-1β and TNF-α, which aggravates iron accumulation of DA neurons by up-regulating DMT1+IRE and down-regulating FPN1. (4) MMP-9 released by damaged DA neurons could lead to upregulation and release of LCN2. LCN2 then activates microglia to release TNF-α and IL-1β, which is involved in the abnormal expression of DMT1 and FPN1 in DA neurons. LCN2 released from activated microglia might also induce direct neurotoxicity via excessive iron delivery into DA neurons by binding to LCN2 receptors.

In addition to releasing TNF-α and IL-1β, activated microglia can also release lactoferrin (Lf), an iron-binding protein belonging to the transferrin family. Lf and its receptor (Lf-LfR) transfer Fe3+ ions by a receptor-mediated pathway. The iron affinity of Lf is about 300 times higher than that of Tf (Baker et al., 1994). It was reported that both iron-free Lf (apo-Lf) and iron-saturated Lf (holo-Lf) entered Caco-2 cells via a similar mechanism but affected cell proliferation differentially (Jiang et al., 2011). In the brain, Lf is produced by activated microglia (Fillebeen et al., 2001). The expression of Lf mRNA was reported to be increased in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse models of PD (Faucheux et al., 1995). Immunohistochemical studies of PD patients revealed an increase of LfR on SNpc neurons and microvessels (Faucheux et al., 1995). These findings indicate a possible role of Lf/LfR in nigral iron accumulation and the subsequent degeneration of dopaminergic neurons in PD. Our previous study suggested that activated microglia could synthesize and release Lf, a process further enhanced by iron overload (Wang et al., 2015). In VM neurons, both apo-Lf and holo-Lf exerted neuroprotective effects against 1-methyl-4-phenylpyridine (MPP+) by protecting mitochondria and increasing the expression of copper- and zinc-containing superoxide dismutase (Cu/Zn-SOD) and B-cell lymphoma-2 (Bcl-2) (Wang et al., 2015). This indicated that Lf protected dopaminergic neurons from neurotoxins, although Lf tends to transport iron to dopaminergic neurons. This might be related to the anti-oxidant and anti-apoptotic activities of apo-Lf and holo-Lf. Chelation of cellular iron by apo-Lf might also exert its function by decreasing cellular free iron and iron-induced neurotoxicity (Wang et al., 2015) (Figure 3).

FIGURE 4 | (partial caption) (1) IL-6 could promote astrocytes to release hepcidin, which then prevents FPN1-mediated iron release from DA neurons. (2) BDNF and GDNF secreted by activated astrocytes can inhibit IRP via acting on their receptors, thus down-regulating the expression of DMT1 and reducing iron accumulation in DA neurons. (3) MMP-9 released by damaged DA neurons could lead to upregulation and release of LCN2. LCN2 then activates astrocytes to release TNF-α and IL-1β, which up-regulates DMT1 and down-regulates FPN1 in DA neurons. Released LCN2 might also induce direct neurotoxicity via excessive iron delivery into DA neurons by binding to LCN2 receptors.
Activation of Astrocytes in PD
Astrocytes are the most abundant cell type in the CNS and have diverse physiological functions, including cellular support during CNS development, ion homeostasis, uptake of neurotransmitters, and neuromodulation through their close association and communication with neurons and other glia. There is abundant evidence for a protective effect of astrocytes on the survival of DA neurons. Intranigral infusion of IL-1β to activate astrocytes in advance can effectively protect DA neurons from 6-hydroxydopamine (6-OHDA)-induced neurotoxicity (Saura et al., 2003). The authors noted that, under this condition, microglial activation was not induced by IL-1β; this was a key factor for the neuroprotection, as activated microglia are potentially neurotoxic to DA neurons. They suggested that the protective effects in IL-1β-treated animals were associated with activated astrocytes rather than a direct effect of IL-1β (Saura et al., 2003). Our previous study showed that activation of heme oxygenase-1 (HO-1) in astrocytes might be responsible for the protective effect of astrocytes on DA neurons by resisting oxidative stress in MPTP-induced PD mouse models (Xu et al., 2016).
It has been accepted that astrocytes have both neuroprotective and neurodegenerative functions. Whether astrocytes are beneficial or harmful might depend largely on the molecules that they release into, and take up from, the extracellular space (Rappold and Tieu, 2010). It is well documented that nerve growth factor (NGF), glial cell line-derived neurotrophic factor (GDNF), and basic fibroblast growth factor (bFGF) released from astrocytes promote the survival of DA neurons (Rappold and Tieu, 2010; Rocha et al., 2012). Astrocytes might also confer neuroprotection on DA neurons by clearing excess extracellular toxic alpha-synuclein and enhancing its degradation through the lysosomal pathway. However, although this degradation of alpha-synuclein in astrocytes may confer initial protection on neurons, when the accumulation of alpha-synuclein exceeds the degradation capacity of astrocytes, aggregates of alpha-synuclein in astrocytes can up-regulate transcripts of inflammatory cytokines such as IL-1β and TNF-α (Lee et al., 2010; Lindstrom et al., 2017). α-Synuclein-positive protein aggregates have been found in the astrocytes of post-mortem PD brains (Wakabayashi et al., 2000). A study of α-synuclein-inducible transgenic mice, which selectively expressed human PD-related A53T α-synuclein in astrocytes, showed that excess A53T α-synuclein in astrocytes caused severe astrogliosis, leading to failure of astrocytes to maintain the integrity of the BBB and the homeostasis of extracellular glutamate. This induced inflammation, microglial activation, and a significant loss of DA neurons in the midbrain of these mutant mice (Gu et al., 2010).
In addition, activated astrocytes also possess immune and inflammatory activities, just as microglia do. This is also called reactive astrogliosis, which accompanies neuronal injury in neurodegenerative conditions including PD. This process can limit damage within the reaction area and provide repair after injury (Biber et al., 2007; Pyatigorskaya et al., 2014). However, studies have shown that there might be two different types of reactive astrogliosis, depending on the type of inducing injury. Neuroinflammation and ischemia induce two different types of reactive astrocytes, termed "A1" and "A2," respectively. Reactive astrocytes in ischemia exhibit a phenotype that might be beneficial or protective (A2), whereas reactive astrocytes induced by LPS exhibit a phenotype that might be detrimental (A1) (Zamanian et al., 2012; Lindstrom et al., 2017). Recently, it has been reported that A1 reactive astrocytes can be induced by IL-1β and TNF-α secreted by activated neuroinflammatory microglia. These A1 reactive astrocytes lose most of their normal functions but gain a new neurotoxic function that induces the death of neurons and oligodendrocytes (Liddelow et al., 2017). This indicates that activated neuroinflammatory microglia can induce A1 reactive astrocytes by releasing TNF-α and IL-1β. A1 reactive astrocytes then amplify the immune response and ultimately contribute to the death of DA neurons in the SNpc during neurodegeneration (Saijo et al., 2009; Glass et al., 2010).
Iron Metabolism in Astrocytes and Its Role in Degeneration of DA Neurons
Astrocytes participate in the formation of the BBB and are generally accepted as the principal contributors to the uptake of a variety of nutrients, including iron, into the brain. They control the process of iron transport from outside to inside the brain and regulate iron transport from astrocytes to other brain cells (Dringen et al., 2007). Studies have shown that astrocytes are not cells with a high metabolic requirement for iron; their iron content under basal conditions is only about 10 nmol/mg protein (Riemer et al., 2004). However, these cells have a strong iron transport capacity and can transport Tf-Fe, NTBI, and heme iron. Studies have shown that astrocytes in vivo do not express Tf or TfR1 (Moos, 1996). However, these two proteins are expressed in cultured astrocytes in vitro, where they participate in iron transport. Most studies suggest that astrocytes do not preferentially take up Tf-Fe in vivo or in vitro (Swaiman and Machen, 1985; Oshiro et al., 1998; Takeda et al., 1998; Jeong and David, 2003). DMT1 is thought to participate in divalent iron absorption in astrocytes (Tulpule et al., 2010). DMT1 has been detected in cultured astrocytes (Jeong and David, 2003; Erikson and Aschner, 2006) and is mainly expressed in the end-feet associated with vascular endothelial cells (Burdo et al., 2001; Wang et al., 2001), indicating a major role of DMT1 in astrocytes for brain iron uptake. Iron released from vascular endothelial cells is taken up by nearby astrocytes via DMT1 and then redistributed to other cells, suggesting that DMT1 might be involved in the redistribution of iron in the brain. In addition, it has been found that the zinc transporter Zip14 and the resident transient receptor potential channel (TRPC) (Pelizzoni et al., 2013) in astrocytes also play a role in NTBI transport (Figure 2). Iron absorption mediated by these transporters in astrocytes can buffer high levels of extracellular iron and thus inhibit high-iron-induced damage to DA neurons. In addition, astrocytes can store iron efficiently via ferritin and release iron via FPN1. Cp is a ferrous oxidase present mainly in the form of glycosyl-phosphatidylinositol (GPI)-Cp in the brain (Patel and David, 1997); it can effectively oxidize Fe2+ to Fe3+, thereby promoting FPN1-mediated iron release (Figure 2). FPN1 and GPI-Cp are co-expressed on the cell surface of astrocytes (Jeong and David, 2003) and mediate iron release from astrocytes (Figure 2).
It has been reported that iron levels can affect the expression of iron-related proteins in astrocytes. Ferric iron incubation increased ferritin expression and reduced the expression of TfR. This adjustment is beneficial for reducing iron uptake and decreasing intracellular free iron levels in a high-iron environment, which could protect astrocytes against iron-mediated oxidative stress. Our previous study confirmed that the regulatory mechanism of iron metabolism in astrocytes is significantly different from that of DA neurons after 6-OHDA treatment. 6-OHDA induced an increase in DMT1-mediated ferrous iron influx and a decrease in FPN1-mediated iron outflow, leading to iron accumulation in DA neurons (Wang et al., 2009; Jiang et al., 2010). However, in astrocytes, both iron import and export were enhanced by 6-OHDA due to the significantly increased expression of DMT1 and FPN1 (Zhang et al., 2013). This suggests that 6-OHDA might promote the iron transport rate in astrocytes under conditions of oxidative stress to avoid iron deposition in astrocytes.
Astrocytes may affect the iron metabolism of DA neurons in PD models. Astrocytes are vital for the survival of DA neurons through the secretion of various neurotrophic factors, such as brain-derived neurotrophic factor (BDNF) and GDNF. Lui et al. (2012) observed higher expression of BDNF in the damaged striatum and SN, and elevated expression of GDNF in the damaged striatum, in early 6-OHDA-induced PD models (5 and 7 days after unilateral injection of 6-OHDA); both GDNF and BDNF decreased after 6-OHDA treatment for 14 days. This indicates that synthesis and secretion of BDNF and GDNF by astrocytes in early PD rat models can promote cell survival (Lui et al., 2012). Our previous study demonstrated that BDNF and GDNF can inhibit iron uptake into neurons by decreasing the expression of the iron import protein DMT1, thus reducing 6-OHDA-induced iron accumulation in DA neurons. The intracellular signaling pathways MEK/ERK and PI3K/Akt might participate in these processes (Zhang H.Y. et al., 2014) (Figure 4). These results confirm that astrocytes can affect iron metabolism and the survival of neurons by releasing the neurotrophic factors BDNF and GDNF.
Recently, another novel mechanism underlying neuron-glia interaction in iron metabolism has been reported (Kim et al., 2016). Lipocalin-2 (LCN2) is a member of the highly heterogeneous lipocalin family of secretory proteins. Diverse functions of lipocalin-2 have been demonstrated in the CNS (Ferreira et al., 2015; Jha et al., 2015). It has been reported that LCN2 is up-regulated in the SN of PD patients and of MPTP-induced PD animal models (Kim et al., 2016). Further study showed that the increased LCN2 levels contributed to neurotoxicity and neuroinflammation, resulting in disruption of the nigrostriatal DA neurons and abnormal locomotor behaviors (Kim et al., 2016). Secreted LCN2 can activate microglia and astrocytes to promote M1 polarization and suppress the M2 signaling pathway (the IL-4-STAT6 signaling pathway) (Jang et al., 2013a,b; Lee et al., 2015). These activated astrocytes and microglia can produce neurotoxic cytokines such as TNF-α and IL-1β (Kim et al., 2016), which might be involved in the dysfunction of iron transporters and thus increase iron accumulation in DA neurons, as mentioned above. In addition, LCN2 has been reported to be an iron transport protein, regulating intracellular iron levels by binding to its receptor (Lee et al., 2012; Jha et al., 2015). Therefore, it is possible that increased secretion of LCN2 by reactive astrocytes and activated microglia might induce direct neurotoxicity to DA neurons via excessive iron delivery into DA neurons, resulting in the disruption of DA neurons in PD (Kim et al., 2016) (Figures 3, 4). This provides new experimental evidence on the relationship between abnormal iron metabolism and inflammation in PD. Further studies should be conducted to elucidate the exact mechanisms underlying the effect of LCN2 on iron accumulation in DA neurons in PD.
IRON METABOLISM IN OLIGODENDROCYTES
Oligodendrocytes play a key role in myelin formation for the proper transmission of nerve impulses in the CNS. There are large amounts of stored iron and synthesized Tf in oligodendrocytes (Todorich et al., 2009; Franco et al., 2015) (Figure 2). It has been shown that iron uptake by oligodendrocytes accompanies myelin formation, and iron-deficient animals show impaired myelin formation (Badaracco et al., 2008). In addition, injection of apotransferrin (aTf) into postnatal day 2-5 rats increased the expression of several myelin proteins and accelerated oligodendrocyte maturation (Escobar Cabrera et al., 1994; Marta et al., 2003). Transgenic mice with Tf overexpression also showed increased myelin formation (Saleh et al., 2003). These results indicate that iron and Tf are necessary molecules for myelin formation and the maturation of oligodendrocytes (Franco et al., 2015).
Recently, results showed that hypomyelination in iron-deficient animals might also be associated with deficiencies in microglia and astrocytes (Rosato-Siri et al., 2017). During postnatal development, microglia are an important iron source for oligodendrocytes (Todorich et al., 2009). There are large amounts of accumulated iron in microglia before myelination; however, iron levels in microglia then decrease, paralleled by increased iron accumulation in oligodendrocytes. This suggests that accumulated iron might be released from microglia to developing oligodendrocyte precursor cells (OPCs) for their maturation during myelination (Todorich et al., 2009) (Figure 2). It has been demonstrated that microglia-released ferritin is an important source of iron for oligodendrocytes (Zhang et al., 2006). A further in vivo study showed that microinjection of ferritin into the spinal cord of adult rats led to internalization of ferritin by microglia; ferritin was subsequently released to promote the proliferation of neuron-glial antigen 2 (NG2)-positive progenitor cells and their differentiation into mature oligodendrocytes (Schonberg et al., 2012). This indicates that ferritin released from microglia might act as a source of iron for NG2+ progenitor cells, thereby contributing to the proliferation and formation of new myelin-producing oligodendrocytes. Astrocytes can also influence the maturation and differentiation of oligodendrocytes through the secretion of different growth factors. In addition, iron efflux from astrocytes could be directly involved in the remyelination of OPCs (Schulz et al., 2012). In iron-deficient conditions, crosstalk among astrocytes, microglia, and OPCs prevents oligodendrocyte maturation and myelin formation.
Oligodendrocytes are the main iron-containing cells in the brain (Gerber and Connor, 1989). Iron is stored in oligodendrocytes mainly in the form of ferritin or Tf (Figure 2). Studies have shown that there are ferritin binding sites in oligodendrocytes (Hulet et al., 1999), indicating receptor-mediated mechanisms of iron transport (Fisher et al., 2007).
It has been confirmed that T cell immunoglobulin and mucin domain-containing protein-2 (Tim2) is the receptor for heavy-chain ferritin (H-ferritin); it can bind H-ferritin and trigger its internalization. Studies have confirmed the expression of Tim2 in oligodendrocytes in vivo and in vitro. As oligodendrocytes express neither TfR nor DMT1 (Todorich et al., 2008), Tim2 is considered the main mechanism for iron intake in oligodendrocytes (Todorich et al., 2008) (Figure 2). Another study showed that scavenger receptor class 5 (Scara5) is the receptor for light-chain ferritin (L-ferritin), expressed in embryonic mice and in kidney cells of adult mice. Scara5 can bind to L-ferritin (but not H-ferritin) and mediate its endocytosis (Li et al., 2009). The discovery of receptor-mediated iron transport in oligodendrocytes provides a new experimental basis for understanding the mechanisms of iron transport and indicates a possible role in neurodegenerative diseases including PD. It is now considered that, while oligodendrocytes may not play a key role in the occurrence and development of PD, they may be more involved in the progression of late-onset PD (Halliday and Stevens, 2011).
CONCLUSION AND FUTURE DIRECTIONS
In recent years, considerable advances have been made in understanding iron metabolism in glia and neurons. We have summarized iron metabolism in glia (Figure 2) and reviewed their possible roles in the degeneration of DA neurons in PD (Figures 3, 4) in this review. Glia could affect iron metabolism and survival of DA neurons in PD through the release of proinflammatory factors or neurotrophic factors. Excessive activation of glia aggravated iron accumulation and degeneration of DA neurons in PD. In addition, there exists a complex regulatory mechanism between glia, which leads to the final degeneration of DA neurons. Therefore, regulating the function of glia may provide a new therapeutic target for the treatment of iron-mediated neurodegenerative disorders especially PD.
However, current research on iron metabolism relies mainly on experimental animals, especially rodent models. These models cannot fully simulate the changes of iron metabolism in the brains of PD patients. Therefore, studies using human stem cells, human glia, or post-mortem brain tissue of PD patients are crucial to clarify the iron metabolism of glia and their role in the degeneration of DA neurons in PD. Further investigations are also required to determine whether the newly discovered transporters and regulatory proteins are also expressed in glia and how they function. In addition, the exact molecular and cellular mechanisms underlying the interaction between glia and DA neurons in iron metabolism should be elucidated in the future.
AUTHOR CONTRIBUTIONS
HX wrote the manuscript. YW and HX constructed the figures. NS and JW contributed to the editing of the manuscript. HJ and JX revised the manuscript. All authors read and approved the final manuscript.
The Therapeutic Potential of Neuronal K-Cl Co-Transporter KCC2 in Huntington’s Disease and Its Comorbidities
Intracellular chloride levels in the brain are regulated primarily through the opposing effects of two cation-chloride co-transporters (CCCs), namely K+-Cl− co-transporter-2 (KCC2) and Na+-K+-Cl− co-transporter-1 (NKCC1). These CCCs are differentially expressed throughout the course of development, thereby determining the excitatory-to-inhibitory γ-aminobutyric acid (GABA) switch. GABAergic excitation (depolarisation) is important in controlling the healthy development of the nervous system; as the brain matures, GABAergic inhibition (hyperpolarisation) prevails. This developmental switch in excitability is important, as uncontrolled regulation of neuronal excitability can have implications for health. Huntington’s disease (HD) is an example of a genetic disorder whereby the expression levels of KCC2 are abnormal due to mutant protein interactions. Although HD is primarily considered a motor disease, many other clinical manifestations exist; these often present in advance of any movement abnormalities. Cognitive change, in addition to sleep disorders, is prevalent in the HD population; the effect of uncontrolled KCC2 function on cognition and sleep has also been explored. Several mechanisms by which KCC2 expression is reduced have been proposed recently, thereby suggesting extensive investigation of KCC2 as a possible therapeutic target for the development of pharmacological compounds that can effectively treat HD co-morbidities. Hence, this review summarizes the role of KCC2 in the healthy and HD brain, and highlights recent advances that attest to KCC2 as a strong research and therapeutic target candidate.
Introduction
Huntington's disease (HD) is an autosomal dominant disorder caused by CAG trinucleotide repeat expansion of the gene encoding huntingtin (HTT) [1]. While the disease displays complete penetrance, significant interindividual variation in age of disease onset is observed [2]. CAG repeat length only partially explains this variance [3]; additional influences include other genetic modifiers [4][5][6], such as epigenetics [7], and environmental aggressors [2,8] (also see Figure 1A,B). HD is predominantly characterised by progressive motor incoordination; patients tend to experience involuntary muscle contractions and fine motor control defects [9,10]. The HD literature demonstrates the progressive death of both striatal and cortical neurons [11][12][13][14]; however, γ-aminobutyric acid (GABA)-ergic projecting neurons of the dorsal striatum are at the highest risk of destruction [2] (Figure 1C). Since striatal neurons play an important role in motor planning and voluntary movement, striatal damage may be responsible for the movement defects experienced [15]. Destruction of the striatum may also exacerbate non-motor symptoms due to its involvement in cognition and behaviour [16].

Figure 1. (A) The HTT gene consists of CAG (cytosine-adenine-guanine) trinucleotide repeats at the 5′ end, which encode glutamine. In healthy individuals, the CAG sequence is repeated 10-35 times, while in HD, the CAG sequence is repeated more than 36 times. The mutated HTT gene (mHTT) causes the production of abnormal huntingtin (Htt) protein. Htt, with an unusually long polyglutamine sequence, is cut into smaller fragments that accumulate in neurons; given their toxic nature, cell damage occurs. (B) Huntington's disease (HD) is an autosomal dominant disorder; individuals homozygous (HH) or heterozygous (Hh) for the dominant allele will develop HD. In this example, we have an affected male (Hh) and an unaffected female (hh); therefore, the probability that their offspring will develop HD is 50% (2 in 4). (C) Neural cell damage, and subsequent neural cell death, in the basal ganglia contributes to the observed symptoms of Huntington's disease (HD). Electrical signals (nerve impulses) are the basis for communication in the brain; these signals are quickly transmitted from cell to cell via chemical signals known as neurotransmitters. Once generated, a nerve impulse will travel along the length of the axon until it reaches the synaptic knob. At this point, the release of neurotransmitters is triggered; these neurotransmitters will cross the synaptic cleft and bind to complementary receptors on the post-synaptic cell. The signal can then be sent along the axon of this second neuron. In HD, basal ganglia structures are smaller than those observed in healthy individuals; this shrinkage is due to death of the striatum. As a result of striatal cell death, the internal globus pallidus (IGP) receives a decreased concentration of neurotransmitters.
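The 50% figure in the caption follows from a simple Punnett square for the Hh × hh cross, a standard Mendelian calculation reproduced here for clarity:

$$\begin{array}{c|cc}
 & H & h \\ \hline
h & Hh & hh \\
h & Hh & hh
\end{array}$$

Two of the four equally likely offspring genotypes carry the dominant H allele, so the probability of an affected child is 2/4 = 50%.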
Patients tend to have several other clinical manifestations such as learning and memory deficits [17][18][19], as well as changes in their sleep architecture [2,10,20,21]. Approximately 90% of HD patients have sleep disturbances; these are often first observed in the premanifest stage of disease, coinciding with the emergence of early cognitive changes [17][18][19][20][21]. Although cognitive and behavioural changes are thought to present in advance of motor symptoms (often several years prior) [19,[22][23][24], clinical diagnosis is still largely centred around movement abnormalities [20,25], at a time when striatal cell death is extensive [11][12][13]. However, human brain post-mortem studies have shown that patients with both clinical manifestations (such as behavioural changes) and genetic confirmation of HD have limited neural cell loss [26]. There is evidence of marked variation in the extent and therefore severity of neuropathological changes [26] (also see Figure 1B). This provides evidence that the disease process involves synaptic dysfunction in advance of cell death [27,28]; it is important to appreciate that clinical abnormalities may also present in advance of anatomical changes [26]. The recognition of these cognitive and behavioural abnormalities may better inform early clinical diagnosis of the disease and treatment of these coexisting disorders, thereby improving patient quality of life. In addition, aberrant synaptic signalling may represent a novel therapeutic target.
GABA signalling is crucial in both motor and behavioural control, and GABAergic neurotransmission is altered in HD [2,29]. To better understand GABAergic activity, we need to consider both K+-Cl− cotransporter 2 (KCC2) expression and the maintenance of the neuronal intracellular chloride (Cl−) concentration ([Cl−]i). Cl− is an important anion involved in the regulation of cell volume [30], proliferation, and apoptosis [31]. Cl− has a further role in determining membrane potential and the firing of action potentials [32]. Extracellular [Cl−] tends to be fixed, while [Cl−]i is more variable [33]. The presence of cation-chloride cotransporters (CCCs) is central to determining [Cl−]i [34]. Such transporters are responsible for the bidirectional movement of Cl−, and their function is determined by the direction of flux [31].
GABA is the main inhibitory neurotransmitter in the brain [35]. GABA binds GABA type A receptors (GABAAR); these receptors are ligand-gated anion channels central to the control of Cl− movement [35]. Noteworthily, GABAA receptors are permeable to both Cl− and bicarbonate (HCO3−); the net effect of GABA therefore also depends on the distribution of bicarbonate [36]. Previous studies have also implicated HCO3− in GABAA receptor-mediated depolarization [37][38][39][40][41]. In a recent study, Lombardi et al. [41] suggested that implementing physiological levels of HCO3− conductivity in GABAA receptors enhances the [Cl−]i changes over an extensive range of [Cl−]i; however, this outcome strictly depends on the stability of the HCO3− gradient and the intracellular pH. For an in-depth understanding of the relationship between the distribution of HCO3− and GABA signalling, readers are referred to recent reviews on the subject [36,[42][43][44]. Yet the reversal potential (when net flow = 0) for GABA (EGABA) is primarily determined by the reversal potential for Cl−; GABAergic signalling is therefore dependent on [Cl−]i [45]. Whilst inhibitory GABAergic activity is important for proper central nervous system (CNS) functioning [46], GABA can also induce membrane depolarisation [47]. The excitatory action of GABA is important in the development of the nervous system [35]; its roles include regulating synaptogenesis in addition to supporting neurite outgrowth and the maturation of the neuronal network [2,35]. High [Cl−]i produces less negative GABA currents that culminate in depolarisation events (excitation) [48]. Conversely, low [Cl−]i leads to hyperpolarisation as a result of more negative EGABA values (inhibition) [45]. During development there is a gradual hyperpolarising shift in EGABA as a result of decreased [Cl−]i, which is maintained in the mature mammalian brain [49,50]. Since the healthy brain relies on the proper balance between excitatory and inhibitory inputs, uncontrolled regulation of neuronal excitability can have implications for health [32]. Neuronal [Cl−]i is largely regulated through the activity of Na+-K+-2Cl− cotransporter 1 (NKCC1) and KCC2 [51][52][53]. NKCC1 pumps Cl− into neurons, while KCC2 is responsible for Cl− efflux [50,53]. Moreover, the extensively studied CCC family member NKCC1 has numerous physiological obligations [54][55][56] that make it a promising neurological drug target, owing to its importance in GABAergic signalling [50]. Recently, Chew and colleagues [57] determined the cryo-electron microscopy structure of NKCC1 from Danio rerio. This extensive study revealed the mechanisms involved in NKCC1 molecular transport and communication and further provided insights into ion selectivity as well as coupling and translocation; a clearer framework for understanding the physiological functions of NKCC1 in relation to human diseases was also established [57]. Besides, modulation of NKCC1 activity alongside that of KCC2 has been implicated in the development and progression of HD [2,[58][59][60]. These CCCs are differentially expressed over the course of development, and so the activity of KCC2 and NKCC1 is not synonymous between immature and mature neurons [50]. In the embryonic and early postnatal period, expression of the messenger RNA (mRNA) encoding NKCC1, solute carrier family 12 member A2 (SLC12A2), is high [50].
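As a concrete illustration of this dependence, consider the Nernst potential for Cl− (a minimal worked example; the ion concentrations below are illustrative textbook values, not measurements from the studies cited here). With RT/F ≈ 26.7 mV at 37 °C and z = −1 for Cl−:

$$E_{\mathrm{Cl}} = \frac{RT}{zF}\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i} = \frac{RT}{F}\ln\frac{[\mathrm{Cl}^-]_i}{[\mathrm{Cl}^-]_o}$$

For an immature neuron with [Cl−]i ≈ 25 mM and [Cl−]o ≈ 130 mM, E_Cl ≈ 26.7 × ln(25/130) ≈ −44 mV, which lies above a typical resting potential of about −65 mV, so GABAAR opening depolarises the cell. For a mature neuron with [Cl−]i ≈ 5 mM, E_Cl ≈ 26.7 × ln(5/130) ≈ −87 mV, below rest, so the same channel now hyperpolarises. Because GABAARs also conduct HCO3−, EGABA is more precisely given by a Goldman-Hodgkin-Katz expression,

$$E_{\mathrm{GABA}} = \frac{RT}{F}\ln\frac{P_{\mathrm{Cl}}[\mathrm{Cl}^-]_i + P_{\mathrm{HCO_3}}[\mathrm{HCO_3^-}]_i}{P_{\mathrm{Cl}}[\mathrm{Cl}^-]_o + P_{\mathrm{HCO_3}}[\mathrm{HCO_3^-}]_o}$$

which shifts EGABA a few millivolts positive of E_Cl, since the HCO3− gradient favours efflux.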
As maturation proceeds, NKCC1 expression decreases and expression of the mRNA encoding KCC2, solute carrier family 12 member A5 (SLC12A5), is upregulated: there is a resultant net decrease in [Cl−]i [50]. The developmental stimulation of KCC2 and inhibition of NKCC1 expression initiate the switch from excitatory to inhibitory GABA signalling [61]. These evolutionarily conserved transporters (KCC2 and NKCC1) are among the central mediators of ion transport in multicellular organisms, with specific roles in regulating ionic and water homeostasis in the mammalian CNS [62], which is essential in determining the polarity of neuronal responses [63]. Notably, during development, [Cl−]i is elevated in immature neurons, which display a depolarising response when activated; this is due to the elevated expression of NKCC1 in comparison with KCC2 [50]. During maturation, however, NKCC1 expression gradually decreases and KCC2 expression increases, resulting in the opposite expression pattern [50,63] (also see Figure 2). Importantly, the inhibition and stimulation of the KCC2/NKCC1 pair via protein phosphorylation occur through a regulatory mechanism that works in a reciprocal pattern [53,64], and members of the with-no-lysine kinase (WNK) family, together with their downstream targets, the STE20/SPS1-related proline/alanine-rich kinase (SPAK) and the oxidative stress response kinase (OSR1), are the most prominent kinases regulating this process [52,[64][65][66][67]. Consequently, impaired ion homeostasis resulting from mutations affecting the physiological function of this transporter pair and/or their upstream regulators may be detrimental, producing diminished inhibition and augmented network hyperexcitability, which underlie numerous neurological disorders [52,66,[68][69][70][71] including HD [58][59][60].
Indeed, loss of KCC2 has implications in disease: KCC2 dysfunction and/or deficiency attenuates Cl− efflux, and GABAergic inhibition is therefore impaired [32,46]. When [Cl−]i exceeds equilibrium, depolarisation events contribute to the onset of neurological disease [35,46]. Decreased KCC2 expression coupled with increased NKCC1 expression and/or activity has been documented in several pathologies [63,68,69,71], including HD [58][59][60]. In HD, HTT is mutated (mHTT) and acts to alter KCC2 and NKCC1 expression and activity [58,59] through mechanisms that remain undetermined. Since KCC2 and NKCC1 expression and functionality are crucial in determining the effects of GABA, dysregulated KCC2 and NKCC1 activities [32,46,58,72] and the subsequent abnormalities in GABAergic signalling are thought to contribute to HD pathogenesis [2]. Hence, this review aims to investigate the possible mechanisms by which KCC2 expression and function are altered in HD. Although HD is primarily characterised by uncoordinated motor activity [9,10], patients have additional, co-existing neurological disorders [17][18][19][20][21]. In view of the aforesaid, the association between altered KCC2 activity and the comorbidities that present as part of the disease process will also be discussed. For example: how are NKCC1 and KCC2 expression and activity controlled in the healthy brain? What are the known mechanisms by which KCC2 expression and activity are altered in HD? Additionally, how does altered KCC2 expression and activity contribute to HD comorbidities, with a particular focus on cognitive and sleep changes?

Figure 2. GABAA signalling shifts from depolarizing to hyperpolarising responses, mediated by the developmental expression of KCC2 and NKCC1 in the brain (neocortical neurons) of rats. The differential expression of these channels regulates the intracellular Cl− concentration ([Cl−]i) and therefore determines the activity of γ-aminobutyric acid (GABA). Na+-K+-2Cl− cotransporter 1 (NKCC1) pumps Cl− into neurons; its expression is high in the early postnatal period, decreasing as maturation proceeds. The expression pattern for K+-Cl− cotransporter 2 (KCC2), responsible for Cl− efflux, is directly opposite. In the embryonic and early postnatal periods, [Cl−]i is high, and so GABAergic signalling is excitatory (depolarising); as maturation occurs, [Cl−]i decreases, initiating the developmental hyperpolarising shift, whereby GABAergic signalling becomes inhibitory. Figure elements were taken and modified from Tillman and Zhang [63].
Phosphorylation Regulation of KCC2 by Protein Kinase Signalling Pathways
The WNK-SPAK/OSR1 kinases phosphorylate threonine residues 906 and 1007 (T906/T1007) of KCC2 and subsequently downregulate KCC2 gene expression; thus, a decline in its physiological function is observed [73,74]. The phosphorylation of these residues is highest in the early postnatal period, with a gradual decrease throughout development [73,74]; WNK1 activity is at its lowest level in mature neurons [64,74]. KCC2 T906/T1007 phosphorylation has been shown to decrease by approximately 95% between embryonic day 18.5 (E18.5) and adulthood in mice [75]. This decrease in threonine phosphorylation may contribute to the developmental onset of KCC2 function [73,74], thereby facilitating the upregulation of Cl− extrusion from mature neurons and resulting in the hyperpolarising EGABA shift [32,76] (also see Figure 3).
Moore et al. [76] demonstrated that preventing KCC2 T906/T1007 phosphorylation in vivo (assessed in knock-in mice), via threonine-to-alanine mutation, accelerates the onset of KCC2 function in the postnatal period. In that study, EGABA values were found to be hyperpolarised across neuronal development (patch-clamp experiments on cultured hippocampal knock-in mouse neurons); that is, preventing the phosphorylation of KCC2 T906/T1007 largely abolished postnatal GABAergic depolarising activity, suggesting that the developmental onset of hyperpolarising synaptic inhibition depends on regulated KCC2 phosphorylation [76], a conclusion further supported by other studies [73,74]. Therefore, potentiating KCC2 function to rescue a delayed EGABA shift during development may improve cognitive defects [76]. Serine 940 (S940) is another key phosphorylation site in the regulation of KCC2 activity; phosphorylation of S940 is controlled by protein kinase C (PKC) [77,78]. S940 phosphorylation leads to decreased KCC2 internalisation and subsequently increased Cl− extrusion [61], thereby increasing KCC2 function [61,77]. In this regard, Moore et al. [76] further showed that S940 can be experimentally mutated to alanine (S940A) to prevent its phosphorylation and thereby allow [Cl−]i to rise. Briefly, they demonstrated that the developmental EGABA shift is delayed in S940A neurons compared to wild-type (WT) controls, suggesting that phospho-regulation of KCC2 S940 may be involved in defining the developmental onset of GABAergic inhibition [76].
Furthermore, brain-type creatine kinase (CKB) plays a vital role in regulating cellular energy homeostasis by catalysing the ATP-dependent phosphorylation of creatine into phosphocreatine, thereby establishing a readily available ATP-buffering system [79]. Notably, ATP is involved in the activation of the Na+-K+-ATPase, which serves as a driving force for KCC activation; hence, ATP would be expected to enhance the function of KCC2 [80]. Interestingly, some reports have affirmed ATP-induced KCC activation [27,28]. Aside from potentially providing ATP, Hemmer et al. [81] hypothesize that CKB might phosphorylate KCC2 to change its function, because CKB possesses autophosphorylation activity. However, the implications of the interaction between KCC2 and CKB for their physiological functions, and how intracellular ATP concentrations might contribute to KCC2 function, remain elusive [80]. More importantly, however, it is already established that the WNK-SPAK/OSR1 kinase complex phosphorylates and inhibits KCC2 and stimulates NKCC1 [52,[64][65][66][67]. Thus, molecular compounds that block the WNK-SPAK/OSR1 signalling pathway will activate KCC2 and inhibit NKCC1 activity. Manipulating the interaction between CKB and KCC2 could be an alternative mechanism for achieving KCC2 activation [80,82]. In fact, the interaction between CKB and KCC2 expression/activity has been implicated in the modulation of GABAAR-mediated signalling [2,59]. Furthermore, previous reports have demonstrated that enhancement of CKB activity may facilitate the activation of KCC2 function [82][83][84][85]. In HD, reduced expression and activity of CKB are associated with motor deficits and hearing impairment [83,84]. By and large, the hypothesis that enhancing CKB activity prior to its interaction with KCC2 activates KCC2 function by counteracting phosphorylation through the WNK-SPAK/OSR1 signalling pathway is worthy of intensive investigation (Figure 3). Hence, it is worthwhile to further investigate the interaction of KCC2 and CKB and how this interaction can modulate the WNK-SPAK/OSR1 signalling cascades in neurological diseases including HD.
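For reference, the reaction catalysed by creatine kinase (a standard biochemistry relation, included here for clarity rather than taken from the cited studies) is:

$$\mathrm{PCr} + \mathrm{ADP} + \mathrm{H}^+ \rightleftharpoons \mathrm{Cr} + \mathrm{ATP}$$

In the forward (ATP-regenerating) direction the phosphocreatine pool buffers local ATP, which is the sense in which CKB could sustain the Na+-K+-ATPase activity that energises KCC-mediated Cl− transport.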
Indeed, the phosphorylation status of key regulatory sites on KCC2 determines when the developmental EGABA shift occurs, and regulated depolarising GABAergic signalling (largely in the early postnatal period) is necessary for normal cognitive and behavioural development [76]. Since the phosphorylation process is central to KCC2 function, future research should assess whether the phosphorylation status of key KCC2 sites is constant between HD patients and controls. Further to this, it should be established whether phosphorylation status changes as HD progresses. If the phosphorylation of key KCC2 sites does not occur as normal in HD gene carriers and patients, investigating how this affects the developmental hyperpolarising shift in EGABA is of concern; this research may provide an explanation for the behavioural and cognitive manifestations observed in HD patients. Investigation into the phosphorylation of KCC2 and the EGABA shift is of increasing interest, especially since it has been suggested that the potentiation of KCC2 function (accelerating the hyperpolarising shift) can improve cognitive decline [76].

Figure 3. A novel strategy to facilitate neuronal Cl− extrusion and the EGABA shift by coincident NKCC1 inhibition and KCC2 activation through inhibition of the WNK-SPAK/OSR1 kinases. In mammalian neurons challenged with multiple neuropsychiatric conditions (such as seizures, neuropathic pain, spasticity, schizophrenia, and others), which are usually driven by hyperexcitable circuits, intraneuronal Cl− ([Cl−]i) levels are elevated due to increased NKCC1 activity and/or decreased KCC2 activity, promoting GABAAR-mediated membrane depolarization and excitation. In healthy mature neurons, [Cl−]i is low due to the opposite activity profile of the CCCs, promoting GABAAR-mediated hyperpolarization, which is critical for the proper balance of excitation and inhibition in neuronal circuits. WNK-SPAK/OSR1 inhibition, via the coincident effects of NKCC1 inhibition and KCC2 activation (the main Cl− extrusion mechanism in neurons), might be a potent way of facilitating neuronal Cl− extrusion to restore ionic inhibition in diseases characterized by disordered Cl− homeostasis and GABA disinhibition. ZT-1a, a novel molecular compound, can specifically inhibit the SPAK signalling pathway, thus interfering with SPAK regulation of GABA signalling via NKCC1 inhibition and KCC2 activation [86]. Activation of protein kinase C (PKC) and brain-type creatine kinase (CKB) is likely to increase KCC2 cell surface expression, but the mechanisms involved are still unclear.
KCC2 Regulation and Function in the HD Brain
To start with, dysfunctions in GABAergic inhibitory neural transmission occur in neurological disorders including HD [2,59,72]. KCC2 is a key moderator of inhibitory GABAergic inputs in normal/healthy adult neurons, as its Cl − extruding activity facilitates the hyperpolarizing reversal potential for GABA A R Cl − currents, and its disruption promotes HD-associated symptoms [2,29,59,76,87]. Certainly, KCC2 interacts with HTT and is downregulated in HD, which contributes to GABAergic excitation and memory deficits in the R6/2 mouse HD model [2,59]. Recently, Dargaei et al. [59] demonstrated that aberrant CCC expression causes a shift in the reversal potential of GABA A R-mediated Cl − currents, resulting in excitatory GABA A R signalling. In that study, HD transgenic mice (R6/2) showed decreased KCC2 and increased NKCC1 activity, replicating the CCC expression observed in the brains of HD patients [59]. In particular, decreased expression of KCC2 mRNA and protein was more prominent in the cortical and striatal regions, coupled with significantly reduced expression of KCC2 in the hippocampus of R6/2 HD brains [59]. Noteworthily, recent work has indicated that pharmacological enhancement of KCC2 function could reactivate dormant relay circuits in injured mouse and patient brains, leading to functional recovery and the amelioration of neuronal abnormalities and disease phenotypes associated with mouse and human models of neurological disorders including HD [58-60,86,88,89]. Indeed, there is a growing potential for KCC2 as a vital therapeutic target for neurological diseases and the associated dysfunctions of inhibitory input [60].
Mechanisms of Reduced KCC2 Function in HD
Several theories exist as to why KCC2 expression is reduced in the brain of HD patients, and some of these theories, in one way or another, implicate CKB, a KCC2-interacting protein [2,59,84,85]. In a recent mouse model study, Hsu et al. [85] demonstrated that the interactions, as well as the expression levels, of KCC2 and its interacting protein CKB are reduced in neurons of R6/2 mice when compared with WT. In this study, the researchers treated the animals with vehicle as well as drugs that selectively target synaptic or extrasynaptic GABA A receptors (diazepam or gaboxadol) and subsequently used real-time quantitative polymerase chain reaction, western blot, and immunocytochemistry techniques to monitor GABA A R and KCC2 expression levels; they further evaluated the interaction between KCC2 and CKB in primary cortical neurons harvested from WT and R6/2 mice using immunofluorescence and proximity ligation assays [85]. The results from that study suggested that reduced CKB and KCC2 function occurs in HD neurons, which may diminish GABA A -mediated inhibitory function [85]. Additionally, Dargaei et al. [59] suggested that KCC2 may be sequestered into mHTT inclusions in the hippocampus, which greatly interferes with the transporter's expression and functionality, and that the possible effect of mHTT on KCC2 function may be due to the interaction between KCC2 and CKB. Noteworthily, decreased CKB expression in mHTT-expressing neurons is a significant event in the development and progression of HD, which certainly contributes to the neuronal dysfunction linked with HD [79,83,84]. In addition, CKB interacts with, phosphorylates, and activates KCC2 [80,82]; hence, diminished KCC2 function is likely to occur in HD, which may subsequently reduce GABA A -mediated inhibitory function [2]. In view of the aforementioned, Dargaei and co-workers [59] briefly hypothesised that the observed decrease in hippocampal KCC2 expression in R6/2 mice may result from reduced CKB-mediated phosphorylation and activation of KCC2. Furthermore, decreased KCC2 expression and activity may be a result of the toxic effects of mHTT, as the mutant protein may cause aberrant protein-protein interactions, forming protein aggregates as part of the disease process [59]. Consequently, the KCC2 protein may be sequestered into these mHTT aggregates, thereby reducing KCC2 functionality in the brain [59].
Both loss-of-function and gain-of-function mHTT effects exist, and loss-of-function effects may be responsible for triggering disease pathogenesis. In addition, these effects may produce the neurological characteristics of HD [3]. Gain-of-function effects, on the other hand, may drive disease progression, and current strategies for the treatment of HD often include HTT expression knockdown; these techniques are not specific, and therefore both mutant and WT HTT expression is targeted [3]. Since WT HTT has many functional roles in the CNS, these non-selective knockdown approaches may trigger unwanted outcomes [3]. Hence, further research could aim to refine these techniques to be more directed, or identify other targets, such as CKB, to increase KCC2 expression and functionality in the HD brain. Further investigations should also seek to replicate these findings. Moreover, it would be interesting to determine whether the loss-of-function effects of mHTT act to trigger disease onset independently of CAG repeat length.
Other mechanisms involve WT HTT, which has many functional roles [3,20,90,91]. HTT is important in embryonic development, with a role in neurogenesis; HTT knockout mice display embryonic lethality [92]. HTT is also an important protein for the control of vesicle transport and gene transcription [3,20], as WT HTT interacts with transcription activators and repressors [90,91]. In HD, it is suggested that mHTT could cause abnormal interactions with transcriptional machinery, thereby contributing to reduced (or aberrant) KCC2 and GABA A R subunit expression [2]. Two RE1/NRSE (repressor element 1/neuron restrictive silencer element) sites flank the transcription start site of the KCC2 gene [71]. WT HTT ensures that the REST/NRSF (RE1 silencing transcription factor/neuron restrictive silencer factor) complex is maintained in the cytoplasm; in this state, the complex is unable to bind RE1/NRSE, permitting gene transcription [91]. mHTT, however, fails to retain REST/NRSF in the cytoplasm, allowing it to repress the transcription of genes containing RE1/NRSE sites and thereby reducing KCC2 expression [91] (also see Figure 3). Investigations into the significance of REST and RE1 have yielded results which may inform the development of novel therapeutics [71]. For example, the REST-dual RE1 interaction may represent a novel mechanism for the upregulation of KCC2, thereby promoting the GABAergic switch from excitatory to inhibitory action [71]. This study also revealed how REST inhibition may accelerate the developmental Cl − shift, while REST overexpression slows the hyperpolarising E GABA shift [71]. This may have applications in improving the cognition of HD patients. As discussed earlier, further research should establish the role of the E GABA shift in the cognitive and behavioural manifestations of the disease, making REST a potential therapeutic target.
Additionally, many transcription factor-binding sites have been characterised in the SLC12A5 gene [93,94]. For example, the transcription factor early growth response 4 (Egr4) is enriched in neurons and has been identified as a key regulator in the control of KCC2 expression [93]. Egr4 mediates brain-derived neurotrophic factor (BDNF)-dependent transcription of KCC2 in immature neurons [95]. BDNF, whose expression and activity are altered in HD populations, is important in the survival of striatal neurons [91]. This was further supported by the findings of Yeo et al. [71], who demonstrated that KCC2 expression may be potentiated by the application of BDNF. In another rat model study, Zhang et al. [96] demonstrated that microinjection of BDNF (1 µg/µL) into the nucleus raphe magnus (NRM) region of the brain significantly inhibited the expression of KCC2 protein in the brainstem of injected rats when compared with control (non-injected) rats. Furthermore, BDNF has been suggested as a strong candidate responsible for downregulation of KCC2 expression in hippocampal cells [97,98]. Interestingly, both BDNF and inhibition of KCC2 produce similar effects in inverting inhibitory GABA synaptic currents in neurons, thereby instigating the cellular mechanisms for impaired GABA inhibitory function [96,99]. Previous studies provide more direct supporting evidence for the ability of BDNF to decrease KCC2 expression as the signalling mechanism for loss of GABA inhibition [96,100-102]. Hence, impairment of the BDNF-KCC2-GABA signalling cascade may promote neurological dysfunctions [96] including HD [91,95]. These findings may provide a means for increasing KCC2 expression in HD, and future studies should continue to investigate this association. Similarly, establishing how the activity of Egr4 can be manipulated in order to control and potentially enhance KCC2 expression in the HD brain may be of interest from a therapeutic standpoint.
WT HTT has a further role in synaptic connectivity: it is specifically important in the formation and maintenance of cortical and striatal excitatory synapses, and silencing HTT in the developing mouse cortex leads to an increase in excitatory synapse formation [3,103]. Li and Li [104] showed that the altered communication between mHTT and HTT interactors promotes aberrant synaptic transmission in HD. This has significant consequences for patients, since synaptic dysfunction is thought to underlie the mechanisms by which cognitive and behavioural changes manifest [27,28].
Not only is KCC2 expression altered in HD, but NKCC1 expression is also abnormally increased [2,58,60]. In fact, there are several reports suggesting that enhanced NKCC1 activity may contribute to the pathogenesis of HD [2,58-60]. In a recent mouse and human study, Hsu and co-workers [58] demonstrated that NKCC1 mRNA expression increased in the striatum of R6/2 and Hdh 150Q/7Q transgenic HD mice and the caudate nucleus of HD patients. Furthermore, inhibition of NKCC1 with bumetanide and adeno-associated viral vectors (AAVs) rescued the motor deficits of R6/2 mice, thereby suggesting NKCC1 as a possible therapeutic target for the potential rescue of motor dysfunction in patients with HD [58]. Indeed, increases in NKCC1 expression are seen to accompany reductions in KCC2 expression; this phenomenon is thought to reflect a reversion of the neuronal Cl − transporter profile to its immature, NKCC1-dominated GABAergic phenotype [2,59,60]. Additionally, upregulation of NKCC1 expression leads to a higher [Cl − ] i , since it allows an influx of Cl − and thus causes an excitatory response when GABA A Rs are activated [105]. Increased NKCC1 expression in disease may also be a result of the toxic secondary effects of mHTT; for instance, mHTT reduces BDNF expression by impairing its gene transcription [59]; mHTT is also thought to be involved in the inhibition of BDNF release and transport [106]. WT HTT sustains the production of cortically derived BDNF, which regulates NKCC1 expression; hence, reduced KCC2 expression and functionality, coupled with increased NKCC1 activity, leads to the disruption of [Cl − ] i followed by the reversal of E GABA [59]. Indeed, excitatory GABAergic signalling promotes disease states, as is the case in HD [2,59,72]. Furthermore, the balance between GABAergic inhibition and excitation is important in processes such as circadian rhythmicity and sleep [107]; this is particularly pertinent for HD patients. The mechanism by which the neuronal Cl − transporter profile reverts to its immature phenotype should be established. Similarly, research should further explore mHTT as a therapeutic target.
Sleep Disorders in Huntington's Disease
Changes in the sleep architecture of HD patients were first described in 2005 [108]. Since then, extensive research has sought to explain the underlying mechanisms for the pathogenesis of such disorders [25,60,72]. It is difficult to accurately measure circadian behaviour (e.g., any changes in rhythmicity) in humans; this is largely due to the fact that the environment in which we live varies significantly and may act to disguise endogenous rhythms [109]. However, the sleep alterations seen in HD seem to appear in the premanifest stage and become increasingly worse as the disease progresses [17-21]. One study, for example, assessed the association between circadian blood pressure (BP) changes and sleep quality in HD patients compared to controls (38 HD patients: 23 premanifest and 15 early-stage HD; 38 age- and sex-matched controls). Based on the percentage change in day/night-time BP, subjects whose night-time BP decreased by ≥10% relative to daytime BP were classified as nocturnal dippers, while those with a decrease of <10% were classified as non-dippers. Overall, HD patients were significantly (p = 0.001) more likely to experience non-dipping and increased daytime sleepiness compared to controls, both of which indicate poorer sleep quality [25], and these may be implicated in poor cognitive performance [110,111]. Although this study used an objective measure to assess sleep (BP), which has an advantage over other methods such as actimetry (an indirect measure of sleep), the sample size used was relatively small. In addition, patients were instructed to self-report their sleep quality; this is a subjective measure and therefore open to bias, and the use of other techniques such as polysomnography may have yielded differing results. Moreover, six of the participants were on antidepressant prescriptions (medication was included on the basis that BP changes, if present, remained constant) [25]; such medication may interfere with sleep architecture. For example, drugs with activating effects (e.g., fluoxetine) may cause sleep disturbances [112]. Contrastingly, antidepressants with more sedative-like properties (e.g., doxepin) will act to improve sleep in the short term but may cause sleep architecture changes with prolonged ingestion [112]. Additional research should therefore seek to replicate these findings, controlling for the outlined confounders.
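To make the dipping classification above concrete, the following minimal sketch implements the ≥10% rule on mean day- and night-time BP values. It is illustrative only: the function and variable names are ours, not the study's, and real classifications use protocol-defined measurement and averaging windows.

```python
def dipping_status(mean_day_bp: float, mean_night_bp: float) -> str:
    """Classify nocturnal blood pressure dipping.

    A subject is a 'dipper' if night-time BP falls by >= 10%
    relative to daytime BP, and a 'non-dipper' otherwise
    (illustrative implementation of the rule described above).
    """
    percent_drop = 100.0 * (mean_day_bp - mean_night_bp) / mean_day_bp
    return "dipper" if percent_drop >= 10.0 else "non-dipper"

# Example: a 5% nocturnal fall (130 -> 123.5 mmHg) gives "non-dipper",
# the pattern reported to be more frequent in HD patients.
print(dipping_status(130.0, 123.5))  # non-dipper
print(dipping_status(130.0, 115.0))  # dipper (~11.5% drop)
```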
Evidence of worsened sleep quality in HD patients is further supported by Lazar et al. [113], who demonstrated that non-HD gene carriers had significantly better overall sleep quality compared to HD gene carriers, suggesting that sleep quality decreases with disease progression [113]. Frequent nocturnal awakenings and delayed sleep onset in early-stage HD patients have been reported by Goodman et al. [10], who used polysomnography (to assess sleep directly) and actigraphy techniques. Actigraphy is potentially less reliable, since it depends upon movement, and HD is predominantly a motor disease [10]. Even though the polysomnography data seemed to support the actigraphy findings, the limited reliability of the latter suggests that methodological triangulation should be employed. Again, some of the study participants were on antidepressant prescriptions, which may act as a confounding factor [10]. The seemingly wide use of antidepressants in patients [10,25] further demonstrates the burden of HD comorbidities. Another study found sleep onset and wake-up times to be delayed in HD patients [114]. This study also implicates hippocampal changes, as discussed earlier [59], in the disease process; HD patients had worsened sleep quality, which was associated with decreased cognitive performance [114]. It should be noted, however, that this study used questionnaires to investigate sleep rather than an objective measure such as polysomnography; additionally, the employed sample size was relatively small [114]. It is vital to either assess sleep through direct, objective measures, or confirm subjective findings with objective research methodologies [115].
Indeed, difficulty in falling asleep is among the most prevalent symptoms of HD [116]. Jha et al. [116] report that sleep changes are dependent on disease duration and severity but did not find a significant correlation between CAG repeat length and sleep disturbances. This is an interesting point, since disease severity normally correlates with CAG repeat length [2]. Future research should investigate whether an association does in fact exist. A study in HD sheep supports the notion that sleep changes are amongst the earliest symptoms of HD, rather than a result of disease progression [109]. A further study shows that early cognitive changes begin to emerge in the premanifest stage of disease [19], the same time at which sleep changes are observed [17-21], thus supporting this point. Cognitive decline may present up to 15 years prior to the emergence of motor symptoms [117]. Other studies, however, report that only select cognitive measures show accelerated decline [118,119], and another study that employed ambulatory electroencephalogram (EEG) recordings provided less compelling evidence [120]. The studies discussed in this section show that HD patients experience excessive daytime sleepiness, delayed wake-up times, nocturnal awakenings, and sleep fragmentation [10,25]. These are all indicative of worsened sleep quality and, since sleep is central to proper human functioning, may exacerbate cognitive deficits [110]. If we can effectively treat sleep disorders when they first manifest, we may also improve the cognitive and learning and memory deficits patients report. However, selecting the most appropriate animal model for the investigation of human pathologies is imperative. R6/2 mice are the best-characterised rodent model; they do, however, have limitations: they carry only a portion of the HD gene and have a shorter lifespan and different neuroanatomical organisation [109]. Whether data collected through the use of murine models are entirely translational to the human HD population is therefore under debate [109]. Given that sheep carry the full HD gene (with relevant CAG repeat length) [109], they may represent a useful animal model for the investigation of the mechanisms underlying these circadian changes and how these changes may influence disease progression. While sheep present other challenges as a disease model (size, maturation period, etc.), the ovine model could be used to confirm or disprove current findings.
Sleep Disorders Treatment
Co-existing morbidities in HD populations often cause serious distress to patients and their relatives [121]. Studies have found that poor sleep quality is associated with irritability and depression, independent of each other [114,122]; this is true of HD patients also [10,113]. As discussed, sleep quality is thought to worsen throughout the progression of HD [113], possibly exacerbating other clinical manifestations such as cognitive decline [123] and impaired learning and memory abilities [114,122]. Promisingly, mouse studies have shown that cognitive deficits can be restored through the pharmacological manipulation of sleep; such intervention may act to improve sleep quality and wakefulness, thereby improving cognition [124]. Although a cure for HD does not exist, the neuropsychiatric symptoms that present alongside the disease are largely treatable [121]. Anderson et al. [121] advise the use of randomised controlled trials to establish the best treatment options. Nonetheless, clinical statements are currently available to guide the management of HD comorbidities, based on the treatment options previously developed for use in non-HD populations [3,121]. For example, GABA is already used in the treatment of sleep disorders [125]. Further research could establish whether GABA or CCC agonists/antagonists can be used as therapeutic agents for sleep disorders in HD patients.
Hypothalamic Changes in HD: Implications for Sleep and Circadian Rhythmicity
Very few studies have sought to assess hypothalamic changes in HD patients; this is partly due to shortages in tissue availability from this region, but is also a result of differing definitions regarding the boundaries of the hypothalamus and its nuclei [20]. Despite this, hypothalamic dysfunction has been evidenced in the early stages of HD [126]. Early studies using magnetic resonance imaging techniques showed loss of grey matter in the HD hypothalamus [127,128]. At this time, Petersen and Gabery [20] also observed atrophy in the hypothalamus of both R6/2 transgenic mice and HD patients.
More recently, Politis et al. [126] have evidenced hypothalamic involvement in HD by demonstrating the loss and/or dysfunction of dopamine receptors in premanifest HD gene carriers and symptomatic patients. This hypothalamic dysfunction has been implicated in poorer sleep quality of HD patients [25], and may also cause autonomic dysfunction [20,25,129].
It is thought that HD hypothalamic changes occur independently of striatal alterations; this could therefore explain differences in the severity and extent of the comorbidities observed [126]. Furthermore, since HD diagnosis is centred around the presence of motor abnormalities, research largely focusses on explaining the underlying mechanisms for these changes; investigation of other clinical manifestations, which are now thought to present prior to motor impairment, is less extensive. The recognition and investigation of HD hypothalamic changes are of critical importance for the study of sleep disorders in HD patients. This is because the suprachiasmatic nucleus (SCN), considered the pacemaker for circadian rhythmicity, is located in the anterior hypothalamus [130,131]. The SCN is both the brain's clock and the brain's calendar [132]. The ability of the SCN to respond to changing seasonality is critically important in maintaining several biological processes to which neurotransmission and sleep are central [132][133][134].
KCC2 and GABA Involvement
The mechanisms underlying circadian rhythmicity disruption and sleep disturbances remain unclear [120]. There is, however, evidence that mouse and rat transgenic models of HD may be able to replicate the sleep disorders reported in HD patients, thereby providing an important means for understanding these mechanisms [120]. GABAergic signalling, as well as KCC2 expression and functionality, may play an underlying role in these mechanisms. GABA is the primary neurotransmitter in the SCN [135]; it is the only neurotransmitter that is produced and received by SCN neurons [136]. Furthermore, both GABA A and GABA B receptors are present in more than 90% of SCN neurons [131,135,137,138]. GABA A R activity controls the ability of the pacemaker to shift state in response to light [131,135,139]. Furthermore, SCN state switching depends on Cl − transport and GABA A signalling [132]; in turn, these are dependent on controlled NKCC1 and KCC2 expression and function [132,140].
Contrary to the notion that GABA is inhibitory in the adult CNS, GABAergic excitation has been observed in subsets of mature SCN neurons of rats during a 24-h cycle and was particularly high during the night phase [105,141]. Experiments using immunohistochemistry techniques have shown regional differences in CCC expression in the SCN [142]; [Cl − ] i and GABAergic excitation also vary on this basis [143]. Furthermore, the polarity of GABA switches between inhibition and excitation in a time-dependent, cyclic manner [141], again controlled by NKCC1 and KCC2 [105,143]. GABAergic excitation (versus inhibition) is dominant at night [105,141], and more notable excitation activity is observed in the dorsal region of the SCN [105]. A decrease in functional KCC2, coupled with an increase in functional NKCC1, may contribute to the depolarising effect of GABA in SCN neurons as a result of elevated [Cl − ] i [105,133]. It is important to recall that, earlier in this review, we highlighted that KCC2 and NKCC1 are mediators in the regulation of ionic and water homeostasis in the mammalian CNS [62], which is essential in determining the polarity of neurons [63] (also refer to Figure 2). Undoubtedly, the KCC2/NKCC1 pair is involved in the regulation of [Cl − ] i in SCN neurons, which in turn influences the response of SCN neurons.
Furthermore, recent reports have suggested that KCC2 plays a crucial role in the promotion of GABA inhibition in SCN neurons [2,132,133,143]. A recent mouse model study by Olde Engberink and co-workers [133] demonstrated that KCC2 blockade reverses the polarity of the GABAergic response in SCN neurons from mice ex vivo. In the study, the KCC2 blocker ML077 instigated an increase in GABAergic excitatory responses in SCN neurons (C57BL/6 mice). Furthermore, 26% of the cells with inhibitory responses to GABA, and half of the neurons which originally did not respond to GABA, became excitatory upon ML077 incubation [133]. A second supporting study used a different KCC2 antagonist (VU0240551) to show the involvement of KCC2 in controlling the [Cl − ] i of SCN neurons [143]. Conversely, the application of the NKCC1 blocker bumetanide has directly opposite effects; bumetanide prevents GABAergic excitation [105]. This provides evidence that NKCC1 activity, responsible for increasing [Cl − ] i , promotes excitatory responses of SCN neurons to GABA [58,144].
KCC2 activity is important in maintaining the excitatory/inhibitory ratio under all photoperiod conditions [133]. KCC2 is thought to be modulated by light; its expression is specifically downregulated in compartments of the SCN that receive and process photic input [132]. Though the mechanism(s) by which light modulates KCC2 expression remains elusive [132], the effect of changing photic input may occur via post-translational regulation of KCC2 expression and/or activity, because an extended photoperiod does not appear to increase KCC2 mRNA expression in SCN neurons [145]. Day-to-night variations in the expression of KCC2 regulators, such as kinases, may, however, be implicated [132]. Similarly, the underlying mechanisms for KCC2 downregulation at night may involve transcriptional changes [132]. These are the same mechanisms that were discussed in earlier sections. Since mHTT can affect KCC2 expression in the HD brain [2,59,84], determining the phosphorylation activity and transcriptional control of KCC2 within the SCN, in both the healthy and HD brain, would be beneficial to assess whether any changes occur, and whether these changes are implicated in HD sleep disorders.
The link between altered KCC2 expression and altered circadian rhythmicity/sleep disorders in HD patients is not a well-studied area of research. The severity and impact of sleep disturbances in HD patient populations [121] underscore why this is a clinically relevant field of investigation; the mechanisms underlying circadian rhythmicity changes urgently need to be understood. Future studies should seek to determine KCC2 and NKCC1 expression levels in the SCN at night and in the daytime, in both healthy and diseased brains. This would be useful in establishing whether changes exist, and whether these changes are in fact significant. Further to this, research should investigate whether the downregulation of KCC2, for example, as a result of interactions with mHTT [2,84], or reversion to an NKCC1-dominated phenotype [59], contributes to sleep disorders in HD. Additionally, future studies should assess whether the switch between GABAergic inhibition and excitation occurs from day to night as expected, thereby determining whether this represents a possible pharmacological target in HD populations.
Drug Development for KCC2 Activation
The development of specific and potent NKCC1 inhibitors and KCC2 activators represents a long-sought goal for the treatment of multiple central nervous system (CNS) diseases. As discussed above, WNK1-regulated phosphorylation of KCC2 at Thr906 and Thr1007 by SPAK/OSR1 maintains depolarizing GABA activity in neurons, representing a promising therapeutic drug target for restoring GABAergic inhibition.
We have now designed and synthesized a new focused chemical library derived from Closantel [148] and Rafoxanide [149] through a "scaffold-hybrid" strategy [51], which led to the identification of "ZT-1a" [5-chloro-N-(5-chloro-4-((4-chlorophenyl)(cyano)methyl)-2-methylphenyl)-2-hydroxybenzamide] as a highly selective SPAK inhibitor [86]. ZT-1a provides neuroprotection by directly inhibiting SPAK kinase activity and SPAK-mediated phospho-activation of NKCC1 and phospho-inactivation of KCCs in ischemic brains [86]. ZT-1a is thus promising because it may interfere with SPAK regulation of GABA signalling via NKCC1 and KCC2 by controlling [Cl − ] i in neurons (Figure 3). We propose that future research should examine whether KCC2 phosphorylation at Thr906/Thr1007 is important for function and pathology in cortical networks by studying HD animal models or by further investigating the therapeutic utility and potential of treatment with the SPAK-specific inhibitor ZT-1a.
Conclusions and Future Perspectives
In summary, there is a link between HD and sleep; both sleep questionnaire-based studies and studies using objective measures such as polysomnography and BP monitoring techniques have confirmed this association. The activity of KCC2 is important in the brain, and mHTT may act to alter its expression in HD patients. Furthermore, KCC2 expression in the SCN is central to the control of circadian rhythmicity and sleep. In HD, although most damage is noted in the striatal neurons (most commonly associated with movement), hypothalamic dysfunction also occurs, perhaps before striatal damage. It therefore stands to reason that improper KCC2 expression and/or activity may contribute to the sleep disorders affecting a large proportion of HD patients. Furthermore, the controlled expression and function of KCC2 is central to determining sleep-wake cycles. Though the underlying mechanisms of altered sleep architecture in the HD brain remain elusive, KCC2 expression, and its role in determining GABAergic excitation, may be key to this. Investigating KCC2 as a therapeutic target may therefore lead to the production of pharmacological compounds that can effectively treat HD co-morbidities. As a result, patient quality of life would be enhanced; future research should also assess the applicability of such compounds to potentially improving disease prognosis.
Author Contributions: K.A. and J.Z. were responsible for writing the whole passage. K.A., S.S.J. and J.Z. were responsible for checking and revision. All authors have read and agreed to the published version of the manuscript.
Therapeutical Management and Drug Safety in Mitochondrial Diseases—Update 2020
Mitochondrial diseases (MDs) are a group of genetic disorders that may manifest with vast clinical heterogeneity in childhood or adulthood. These diseases are characterized by dysfunctional mitochondria and oxidative phosphorylation deficiency. Patients are usually treated with supportive and symptomatic therapies due to the absence of a specific disease-modifying therapy. Management of patients with MDs is based on different therapeutic strategies, particularly the early treatment of organ-specific complications and the avoidance of catabolic stressors or toxic medication. In this review, we discuss the therapeutic management of MDs, supported by a review of the literature, and provide an overview of the drugs that should be either avoided or used carefully, both for the specific treatment of MDs and for the management of the comorbidities these subjects may manifest. We finally discuss the latest therapies approved for the management of MDs and some ongoing clinical trials.
Introduction on Mitochondrial Diseases
Mitochondrial diseases (MDs) are a group of genetic disorders characterized by dysfunctional mitochondria. Encompassing all pathogenic mitochondrial and nuclear DNA mutations, they represent the most frequent group of metabolic disorders in humans, with a prevalence of about 1 in 4300 cases [1]. Mitochondrial DNA (mtDNA)-related disorders, unlike other genetic disorders, are characterized by maternal inheritance. As such, mtDNA point mutations will be passed on by a mother to all her children (males as well as females), although only her daughters will be able to continue to transmit the mutation [2]. These diseases are clinically heterogeneous; this may be partly attributed to the heteroplasmy level of mtDNA molecules in cells and the threshold effect: mtDNA molecules are distributed in multiple copies in each cell (polyplasmy) but most pathogenic mutations do not usually affect all mtDNA copies (heteroplasmy). Additionally, there is a level to which the cell can tolerate damage to mtDNA molecules: metabolic dysfunction and clinical symptoms occur only when the mutation load exceeds this threshold [3]. Furthermore, there are limited genotype-phenotype correlations to direct molecular genetic diagnosis, and many phenotypes can be caused by defects involving numerous genes. For example, Leigh syndrome, the most common clinical phenotype seen in pediatric MDs, is a progressive neurodegenerative disorder which may be caused by mutations in almost 80 different genes [4].
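The threshold effect lends itself to a simple quantitative illustration. The toy model below assumes a tolerance threshold of 70% mutant load; thresholds of roughly 60-90% are commonly cited, but the exact value is mutation- and tissue-specific, so the number here is purely illustrative rather than a clinical parameter.

```python
def cell_is_dysfunctional(mutant_copies: int, total_copies: int,
                          threshold: float = 0.7) -> bool:
    """Toy model of the heteroplasmy threshold effect.

    Each cell carries many mtDNA copies (polyplasmy); a pathogenic
    mutation usually affects only a fraction of them (heteroplasmy).
    Biochemical dysfunction appears only once the mutant load
    exceeds a tolerance threshold (assumed here to be 70%).
    """
    mutant_load = mutant_copies / total_copies
    return mutant_load > threshold

# A cell with 600/1000 mutant copies (60% load) stays compensated,
# while 850/1000 (85% load) crosses the assumed threshold.
print(cell_is_dysfunctional(600, 1000))  # False
print(cell_is_dysfunctional(850, 1000))  # True
```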
MDs may manifest in childhood or adulthood, with vast clinical heterogeneity. Patients may show symptoms affecting a single organ or tissue, or multisystem involvement; the most affected organs are usually highly dependent on aerobic metabolism and the disease is often progressive, with high morbidity and mortality [5]. Some of the clinical features shown by patients may indicate specific syndromes. MDs should be suspected when an individual presents with the clinical involvement of more than one tissue and/or organ, especially when both the central and peripheral nervous system are affected.
Mitochondrial dysfunction may also be found in neurodegenerative disorders that are not classified as "primary mitochondrial diseases", such as Alzheimer's disease, in which the role of mitochondria is one of the major factors in disease progression [6,7]. As a result, it is clear why studies focused on the pathophysiological mechanisms underlying MDs are becoming increasingly important in medical research.
Because of the clinical heterogeneity of MDs, diagnosis may be challenging, especially considering that the patient's phenotype may overlap with a broad range of diseases. MDs, often suspected in early childhood, may be diagnosed thanks to proper investigation with regard to clinical presentation, family history, pathology, metabolic profiling, enzyme activity levels, and the use of specific techniques such as electrophysiology, magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), and genetic analysis [8].
For the vast majority of these diseases, therapy is only symptomatic, although several recent clinical trials have highlighted the value of disease-modifying therapies such as idebenone [9][10][11] or adeno-associated viral vectors [12] in the treatment of Leber's hereditary optic neuropathy (LHON) or the supplementation of coenzyme Q10 (CoQ10) in patients with CoQ10 deficiency [13]. Management of patients includes the prevention and treatment of early complications that may affect the involved organs, avoiding potential triggers of decompensation such as fasting, intercurrent illness, pyrexia, trauma, surgery, or use of medications toxic to mitochondria [14]. Furthermore, several studies in the literature suggest the importance of non-pharmacological treatment as an additional therapeutic strategy: an example is the prescription of a ketogenic diet in patients suffering from metabolic or neurological diseases, including MDs. In fact, thanks to the greater supply of ketones, the brain should have a more efficient energy source than that provided by glucose alone [15].
Currently, the guidelines for the correct diagnosis and therapy of diseases undergo constant and periodic assessment in all fields of medicine. A complete review of therapeutic management of mitochondrial diseases and drugs that should be used or avoided in mitochondrial patients is, therefore, particularly important. Indeed, due to the pathophysiological processes underlying them, MDs represent a distinct category in the clinical panorama and deserve specific analysis.
Safety of Drug Use in MDs
As discussed above, the extreme variability in the phenotype and genotype of MDs suggests the need to identify a specific therapy for every single patient under examination. Subsequently, the vast number of drugs available in clinical practice should be analyzed in order to separate drugs that may be defined as "safe" from those that may instead be dangerous in the treatment of the patients affected by a mitochondrial disease. For instance, there are some categories of antibiotics which have effects on mitochondrial translation and therefore may be dangerous for patients harboring mitochondrial translation deficiency, as demonstrated by Jones et al. in their study [16].
For this reason, an international group of experts proposed a study aimed at developing a consensus on safe medication use in patients with a primary mitochondrial disease [14]. The authors highlighted how, for most existing licensed drugs, mitochondrial toxicity in vivo is unknown. In the evaluation of drug safety in subjects affected by a mitochondrial disease, the only available information is derived from in vitro and in vivo pre-clinical studies and published case reports. Thus, a consensus based on a systematic analysis of findings in the literature and the clinical experience of pediatricians, internists, and neurologists qualified in treating this type of patient may certainly be useful for strengthening knowledge in the treatment of patients affected by these particular diseases.
In this workshop, using a modified Delphi-based technique, a group of internationally acknowledged experts was able to develop a consensus on drug safety in the treatment of patients with MDs. The Delphi method, developed by the RAND (Research and Development) Corporation in 1953, is a consensus method used in research that is directed at problem-solving, idea generation, or for determining priorities [17]. The technique is based on a structured and repetitive survey process of at least two rounds, which continues until a consensus is reached among panelists. Between each round, feedback is provided to the panelists. Due to the short duration of the workshop (2 days), only a limited number of drugs were considered. Thus, after selecting the drugs to study as per a previously published list on the website of the patient advocacy group IMP (https://www.mitopatients.org/mitodisease/potentiallyharmful-drugs), supplemented with the names of a few drugs commonly prescribed to patients affected by MDs (i.e., anesthetic agents, analgesics, antibiotics, and antiepileptic drugs), the total number of drugs/drug classes was limited to 46 [14].
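As a rough, self-contained illustration of the Delphi mechanics described above (iterated rating rounds with inter-round feedback until agreement), the sketch below checks whether a round of panel ratings reaches consensus and feeds the group median back between rounds. The 1-9 rating scale, the 75% agreement criterion, and the rating data are our own illustrative assumptions, not the actual parameters or results of the workshop.

```python
from statistics import median

def has_consensus(ratings, agreement=0.75, band=1):
    """Consensus if >= `agreement` of ratings lie within `band`
    points of the group median (illustrative criterion only)."""
    m = median(ratings)
    close = sum(1 for r in ratings if abs(r - m) <= band)
    return close / len(ratings) >= agreement

# Hypothetical safety ratings (1 = unsafe ... 9 = safe) for one drug
# across two Delphi rounds; the median is fed back between rounds.
round1 = [8, 7, 9, 4, 8, 8, 3, 7]
round2 = [8, 7, 9, 7, 8, 8, 6, 7]  # panelists revise after feedback

for i, ratings in enumerate([round1, round2], start=1):
    print(f"round {i}: median={median(ratings)}, "
          f"consensus={has_consensus(ratings)}")
# round 1: no consensus (5/8 close to median); round 2: consensus (6/8)
```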
The expert panel reported that all the 46 drugs or drug groups studied were generally safe for patients with MDs. In particular, they reported a good or strong consensus for six drugs or drug groups in the first Delphi round: enalapril, paracetamol, midazolam, carbamazepine, oxcarbazepine, and haloperidol. For the other 40 drugs analyzed, a further examination was necessary based on the clinical experiences of the panelists involved. At the end of this workshop, the experts identified some specific restrictions that should be evaluated in relation to certain molecular alterations and particular clinical conditions.
The main recommendations proposed at the end of this workshop are summarized in the following points: (1) The use of aminoglycosides for elective long-term treatment should be preceded by screening for the homoplasmic m.1555A>G and m.1494C>T mutations associated with both aminoglycoside-induced and non-syndromic hearing loss [18]. Exceptionally, when effective broad-spectrum antibiotic treatment is needed in an emergency and the prescription of aminoglycosides is considered necessary, this drug class may be administered until the immediate danger has passed or an antibiogram has been carried out. After that, physicians should replace the aminoglycoside with a safer antibiotic class.
(2) Valproic acid should be administered only in exceptional situations. POLG mutations are an absolute contraindication for the use of this drug. In fact, in the case of clinical signs suspicious for POLG disease (i.e., epilepsia partialis continua, explosive onset of focal epilepsy, or rhythmic high-amplitude delta with superimposed spikes on electroencephalogram (EEG)) and/or known liver disease, valproic acid should not be used [19].
(3) Neuromuscular blocking agents should be used with care, and their effects monitored, in patients manifesting a mainly myopathic phenotype. General anesthesia has not shown side effects in patients with MDs, and the experts agreed on the safety of these drugs and drug classes while recommending care during general surgical procedures for these patients. A good strategy to prevent the effects of catabolism may be to minimize preoperative fasting and to administer intravenous glucose perioperatively in the case of prolonged anesthesia, unless the patient is on a ketogenic diet.
(4) Adverse effects may be influenced by the duration of drug administration, which should be guided by individual patient needs and response to specific treatments. Indeed, there may be clinical circumstances in which subjects benefit from longer administration of some drugs, although this involves an increased risk of side effects or disease progression, particularly when better alternative pharmacological options are not available.
(5) Renal impairment is a common feature of many patients, and therefore drug dose adjustment should be considered, especially for drugs with a predominantly renal clearance. Due to the major risk of developing metabolic acidosis (lactic acidosis), physicians should be careful in administering the drugs that most frequently cause acidosis, performing regular clinical reviews and monitoring acid-base status in blood.
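As an illustration of the kind of dose adjustment mentioned in point (5), the sketch below estimates creatinine clearance with the widely used Cockcroft-Gault formula and scales a renally cleared dose proportionally. The formula itself is standard; the proportional-scaling rule, the reference clearance, and all example numbers are simplifying assumptions for illustration, not dosing guidance from the consensus paper.

```python
def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float,
                         is_female: bool) -> float:
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault:
    CrCl = (140 - age) * weight / (72 * SCr), * 0.85 if female."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

def renally_adjusted_dose(normal_dose_mg: float, crcl_ml_min: float,
                          normal_crcl: float = 100.0) -> float:
    """Naive proportional adjustment for a drug with predominantly
    renal clearance (illustrative assumption, not a dosing rule)."""
    return normal_dose_mg * min(crcl_ml_min / normal_crcl, 1.0)

crcl = cockcroft_gault_crcl(60, 70, 1.8, is_female=False)   # ~43 mL/min
print(round(crcl, 1), round(renally_adjusted_dose(500, crcl), 1))
```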
The main limitation of this type of study is that expert judgement is not always based on empirical studies, although the consensus model represents the most-used method to inform clinical practice in the absence of validated data. Furthermore, the physicians' experience that patients seem to tolerate most drugs is not strongly supported by preclinical studies. This may be partially attributed to the use of different dosages in preclinical studies and clinical practice, where the reference dose for drugs is much lower than the toxic concentration. Conversely, higher or toxic drug levels are used in the majority of pre-clinical studies.
Another important limitation of this study is that patients often have co-morbidities and/or are taking more than one drug; this implies a greater difficulty or an impossibility in determining which effects are due to the drug in question.
For these reasons, it is important to update the current list regularly, considering the addition of other drugs in the future using the same Delphi process. Furthermore, due to the frequent discrepancies between pre-clinical and clinical situations, future preclinical studies should assess the toxicity of drugs at doses closer to those actually used in the treatment of patients and under their conditions (e.g., mitochondrial deficiencies), in order to improve knowledge of drug-induced mitochondrial dysfunction in these subjects. According to the results of this consensus paper and the clinical trials in the literature, we propose a review of the recommended therapeutic management of primary mitochondrial diseases, distinguishing the management of patients affected by peripheral neuropathies and skeletal muscle diseases from that of patients in whom the disease mainly affects the central nervous system. The drugs analyzed in this paper are summarized in Table 1.

Table 1. Recommendations for the safe use of the main drugs and drug classes in mitochondrial patients.
Myopathy
Damage to the cell respiratory chain, caused by mutations in mitochondrial or nuclear genes encoding enzymes involved in oxidative phosphorylation, is the main mechanism underlying the clinical manifestations of MDs. The consequent reduction in ATP production preferentially affects organs with high energy requirements such as skeletal muscle. Myopathy, expressed as exercise intolerance with premature fatigue or muscle weakness, is therefore common in MDs, and no effective treatment is available for it [47]. Myopathy may represent the only clinical feature of mitochondrial diseases, as observed in some patients affected by primary mitochondrial myopathy (PMM), or, more commonly, patients may show additional manifestations (i.e., diabetes, sensorineural hearing loss, optic atrophy, peripheral neuropathy, cardiomyopathy, nephropathy, hepatopathy, stroke-like episodes, seizures, ataxia, failure to thrive, developmental delay or regression, and dementia) [38,48].
Anamnesis and physical examination should be carefully performed to identify the locus of the impairment. Exercise intolerance may involve functions related to the lungs (i.e., obstructive pulmonary disease), heart (i.e., cardiomyopathy), or the motor unit (i.e., primary disorders of nerve or muscle). In contrast, the impairment may be due to an insufficient amount of ATP in working muscles, which may be observed, for example, in patients affected by inborn errors of intermediary metabolism (i.e., myophosphorylase deficiency or carnitine palmitoyl transferase deficiency) or mitochondrial myopathies [39].
For these reasons, studies focused on exercise tolerance and physical training have been conducted to analyze the effect of exercise in patients affected by mitochondrial myopathy. The use of the 6-minute walking test (6MWT) as a functional test in clinical research may be associated with laboratory and physiological measures (i.e., resting and end-exercise blood lactate, respiratory exchange ratio (VCO2/VO2), measurement of cardiac output, and pulmonary function measured with spirometry) in order to identify patients affected by PMM [48]. In addition, because of its sensitivity to fatigue-related changes, the 6MWT may represent a measure of fatigability [49].
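The respiratory exchange ratio mentioned above is a simple quotient of the measured gas-exchange volumes; a minimal sketch with assumed example values follows.

```python
def respiratory_exchange_ratio(vco2_l_min: float, vo2_l_min: float) -> float:
    """RER = VCO2 / VO2; values approaching or exceeding ~1.0 at a given
    workload suggest increasing reliance on carbohydrate/anaerobic
    metabolism."""
    return vco2_l_min / vo2_l_min

# Assumed example values for illustration: 2.1 L/min CO2 out, 2.0 L/min O2 in
print(round(respiratory_exchange_ratio(2.1, 2.0), 2))  # 1.05
```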
Jeppesen et al. reported that 12 weeks of aerobic training significantly improved maximal oxidative capacity (VO2max) in 20 persons with four different mtDNA mutation types and a variety of mutant loads. Furthermore, they reported that this improvement translated into clinically significant effects only in subjects with severe oxidative defects, and not in asymptomatic patients harboring mtDNA mutations, whose daily activities did not change. As reported, there were no changes in mtDNA mutation load in muscle, plasma creatine kinase (CK) levels, or muscle regeneration and apoptosis with physical training. Short-term exercise seems to be safe in these patients [31]. Thus, we may assume that regular aerobic exercise should be included in a multi-disciplinary approach to these patients.
Although long-term efficacy has not been definitively confirmed, physicians usually prescribe dietary supplements such as antioxidants and mitochondrial cofactors. B-complex vitamins (mainly thiamine/vitamin B1 and riboflavin/vitamin B2), creatine, coenzyme Q10 and its reduced form ubiquinol, alpha-lipoic acid, N-acetylcysteine, folinic acid, vitamin C, and vitamin E may be included among the compounds of these supplements [32], which are often described as "mito-cocktails".
The evidence supporting the use of coenzyme Q10, also known as ubiquinone, in mitochondrial disease, based on Level III and IV open-label studies, was reviewed in 2007. The main limitations of these studies are the low dosages of CoQ10 and the lack of information on blood or tissue levels [33]. Reduced CoQ10 has become commercially available in the form of ubiquinol, which has a higher bioavailability (it is three to five times better absorbed than the oxidized form of CoQ10, ubiquinone). Ubiquinol is administered at doses of 2 to 8 mg/kg per day, while ubiquinone is usually prescribed at doses of 5 to 30 mg/kg per day. Both supplements are administered twice daily with meals [34].
Riboflavin (vitamin B2) serves as a flavoprotein precursor and thus represents an essential element in complexes I and II of the mitochondrial respiratory chain and a cofactor in enzymatic reactions such as fatty acid β-oxidation and the Krebs cycle. Consequently, riboflavin supplementation may improve symptoms and the clinical course of subjects affected by multiple acyl-CoA dehydrogenase deficiency (typically caused by electron-transfer flavoprotein dehydrogenase (ETFDH) gene mutations) [40], mitochondrial diseases with complex I and II deficiencies (as reported by some non-randomized studies in the literature) [50,51], and acyl-CoA dehydrogenase-9 (ACAD9) deficiency (in which supplementation results in increased complex I activity in patient fibroblasts). The usual prescribed dosage of riboflavin is 50-200 mg/day, divided into 2-3 doses [41].
Creatine, in its phosphorylated form (phosphocreatine), serves as a source of high-energy phosphate released during anaerobic metabolism. The highest concentrations of creatine are observed in skeletal muscle and the brain, which are tissues with higher energy requirements. Lower phosphocreatine levels may be observed in the skeletal muscle and brain, respectively, of individuals with mitochondrial myopathies and mitochondrial encephalomyopathies [52,53].
Treatment based on creatine supplements results in an increase in high-intensity, isometric, anaerobic, and aerobic power, as shown in some studies reported in the literature [28,54]. The currently suggested standard dosage is 10 g per day for adults, divided into two doses, and 0.1 g/kg per day for children, likewise divided into two doses.
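To make the weight-based dosing in this section concrete, the small helper below splits a daily dose into the twice-daily administrations described above. It is an illustrative calculation only (the helper and its defaults are ours); actual prescribing requires clinical judgement.

```python
def per_dose_mg(daily_mg_per_kg: float, weight_kg: float,
                doses_per_day: int = 2) -> float:
    """Split a weight-based daily dose into equal administrations."""
    return daily_mg_per_kg * weight_kg / doses_per_day

# 20 kg child: creatine at 0.1 g/kg/day (= 100 mg/kg/day), two doses
print(per_dose_mg(100, 20))  # 1000.0 mg (1 g) per dose
# Same child: ubiquinol at the low end of 2-8 mg/kg/day, two doses
print(per_dose_mg(2, 20))    # 20.0 mg per dose
```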
L-Carnitine is a cellular compound involved in the process of mitochondrial β-oxidation of fatty acids and the esterification of free fatty acids that may otherwise sequester CoA. Tissues such as skeletal muscle, heart, and liver mostly depend on β-oxidation for ATP production. Carnitine may prevent CoA depletion and remove excess, potentially toxic, acyl compounds. It is possible to observe, in individuals with respiratory chain defects, a reduction in free carnitine levels in plasma and an increase in esterified carnitine levels, even if primary carnitine deficiency is not a typical feature of MDs. L-Carnitine supplementation in mitochondrial disorders may contribute to restoring free carnitine levels and removing accumulating toxic acyl complexes [34]. Carnitine may be administered orally or intravenously in mitochondrial disease and is usually combined with other vitamins and cofactors. Currently, the benefit of the isolated use of carnitine in patients with primary mitochondrial disorders has not been confirmed, as is the case for other supplements used separately or as part of an antioxidant cocktail in the treatment of mitochondrial patients (i.e., thiamine (B1), vitamins C and E, and alpha-lipoic acid).
One small, randomized placebo-controlled trial provided evidence for the combination of creatine, CoQ10, and lipoic acid [29]. Patients usually take dietary supplements when recommended by a physician or on their own. In this case, serious side-effects were not reported, and some subjects reported an improvement in their activities [37].
With regard to the use of drugs, it has also been suggested that care should be taken when using agents such as statins, corticosteroids, metformin, and antiretrovirals, since they may worsen the underlying myopathy [27]. For example, statin-induced myopathies are associated with the inhibition of the Qo site of respiratory complex III (CIII) by several statin lactones. Consequently, polymorphisms of uridine 5′-diphospho-glucuronosyltransferases (UGTs), the enzymes converting statin acids into lactones, and of CIII could be predisposing factors in statin-induced myopathies [43].
Neuropathy
The prevalence of peripheral neuropathy in MDs is about 12.4%. Mitochondrial neuropathic patients have an increased prevalence of ataxia, hearing loss, muscle weakness, and muscle wasting [55]. Individuals harboring POLG, TYMP, or MPV17 deletions, or the m.8993T>G and m.8993T>C mtDNA mutations, as well as those affected by SURF1-related mitochondrial diseases, are at greater risk of developing peripheral neuropathy. This may also be a secondary complication of mitochondrial diabetes or renal insufficiency, or a side effect of treatment [27].
Patients with "pure forms" of neuropathy are rare; instead, a mixed/undefined pattern is more frequently observed. Because of the relative rarity of the pure forms, it is very difficult to establish a strict genotype-phenotype relation. Neuropathic pain seems to be more frequent in POLG patients than in mitochondrial patients whose neuropathy has different genetic causes [55].
The use of some drugs should be avoided in these patients, in particular antiviral drugs and dichloroacetate. Patients treated with nucleoside analogue reverse transcriptase inhibitors (NRTIs) may develop myopathy or neuropathy after long-term therapy. Zalcitabine, didanosine, and lamivudine cause neuropathy; zidovudine causes myopathy; and stavudine and fialuridine cause neuropathy or myopathy and lactic acidosis. Muscle wasting, myalgia, fatigue, weakness, and CK elevation represent the main features of the myopathy; the neuropathy, in turn, is painful, sensory, and axonal. An underlying cause of NRTI-related neuropathy is mitochondrial toxicity. Thus, NRTI-induced mitochondrial dysfunction influences the clinical administration of these agents, especially at high doses and when combined [24]. Particular attention should be paid to treatment with dichloroacetate (DCA), especially in patients affected by mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS). DCA has been used to treat congenital and acquired conditions associated with lactic acidosis, thanks to its ability to reduce lactate. In fact, DCA acts on the pyruvate dehydrogenase enzyme complex, located in the mitochondria [30]. Because of the increase in pyruvate catabolism, the accumulation of lactate is prevented, and lactic acidosis may be avoided in many patients. Different studies demonstrated that dichloroacetate treatment was well tolerated and blunted the postprandial lactate increase in children with congenital lactic acidosis, although improvements in neurological or other measures of clinical outcome were not observed [20,21]. However, in individuals with MELAS syndrome, treatment with dichloroacetate was demonstrated to be linked to peripheral nerve toxicity. This may be explained by the greater vulnerability of MELAS m.3243A>G patients to DCA toxicity as compared to patients with other diseases causing lactic acidosis. Furthermore, diabetes mellitus is likely an additional contributing factor, given that it commonly causes symptomatic or subclinical neuropathy with both axonal and demyelinating features [30].
New Potential Primary Mitochondrial Myopathy (PMM) Treatment
In recent years, several clinical trials of potential treatments for PMM have been initiated and are still ongoing to analyze the effects of new molecules that may provide benefits to these patients.
One of these molecules, elamipretide, demonstrated a benefit, measured as an improvement in exercise performance after 5 days of treatment, in patients with PMM without increased safety concerns [44]. Thanks to selective binding to cardiolipin via electrostatic and hydrophobic interactions, elamipretide can protect cardiolipin from oxidation and, consequently, preserve mitochondrial cristae, promote oxidative phosphorylation, and inhibit mitochondrial permeability transition pore opening. Karaa et al., in a randomized, placebo-controlled clinical trial of elamipretide in patients with PMM, observed a dose-dependent improvement in 6MWT results, which confirmed the benefits in exercise performance recorded in aged mice in preclinical animal studies [45]. The trial also found a correlation, evaluated for several cardiopulmonary exercise testing (CPET) parameters, between the change in distance walked in the 6MWT and peak oxygen consumption in all the participants, similar to what has been observed in other advanced chronic conditions such as heart disease [44]. The trial provided Class I evidence, and the improvements in exercise performance and the well-tolerated safety profile of elamipretide are encouraging.
Omaveloxolone is another molecule under investigation that may be potentially useful in the treatment of PMM. It is a semi-synthetic oleanolic triterpenoid and a potent activator of nuclear factor erythroid 2-related factor 2 (Nrf2). Triterpenoids are a class of small anti-inflammatory molecules derived from natural sources. Omaveloxolone targets redox-sensitive cysteine residues on the regulatory molecule Keap1 and thereby rescues Nrf2 from degradation. Moreover, blockade of Keap1 inhibits the NF-κB proinflammatory signaling pathway. These effects of omaveloxolone may improve muscle function, oxidative phosphorylation, antioxidant capacity, and mitochondrial biogenesis in patients with mitochondrial myopathy [46]. The clinical trial is still ongoing, but it has already shown that treatment with 160 mg omaveloxolone leads to a significant reduction in lactate production and heart rate during submaximal exercise. As most everyday activities are of submaximal intensity, these results are potentially clinically meaningful and indicate that omaveloxolone may improve exercise tolerance in patients with mitochondrial myopathy [46]. The study also provided Class II evidence that, in patients with mitochondrial myopathy, omaveloxolone did not significantly change peak exercise workload compared to placebo.
Thymidine kinase 2 (TK2) deficiency may cause a late-onset form of mild chronic progressive external ophthalmoplegia (CPEO). The most frequent clinical presentations are infantile-onset and childhood-onset progressive limb and bulbar myopathy with restrictive lung disease. Domínguez-González et al. performed a clinical trial based on the use of pyrimidine deoxynucleosides and deoxynucleotides as novel pharmacological therapies in 16 patients with mitochondrial myopathy due to TK2 deficiency. The beneficial effects of the therapy were confirmed both with functional tests, such as the 6MWT, and with laboratory measures. Serum levels of growth differentiation factor 15 (GDF-15), a sensitive diagnostic biomarker for mitochondrial myopathy, were reduced in these subjects. The benefits observed, including stabilization or mild improvements in motor and respiratory functions, were greater in early-onset patients than in late-onset ones [42]. Further studies are ongoing to support the potential of this treatment.
Seizures and Epilepsy
Epilepsy is a major feature of mitochondrial disease. The main genetic causes of mitochondrial epilepsy are: mtDNA mutations (including those typically associated with MELAS and myoclonic epilepsy with ragged red fibers (MERRF) syndromes); mutations in POLG (classically associated with Alpers syndrome but also presenting as mitochondrial recessive ataxia syndrome (MIRAS), spinocerebellar ataxia with epilepsy (SCAE), or myoclonus, epilepsy, myopathy, sensory ataxia (MEMSA) syndromes) and other disorders of mitochondrial DNA maintenance; complex I deficiency; disorders of coenzyme Q10 biosynthesis; and disorders of mitochondrial translation such as RARS2 mutations [56]. Epileptic seizures may occur sporadically or as part of a complex syndromic disorder, such as those mentioned above. The management of mitochondrial epilepsy may be very complicated, and the prognosis is often poor. With regard to treatment, multiple anticonvulsants are frequently prescribed to individuals with mitochondrial epilepsy. There are no studies that clearly prove the role of vitamins or other nutritional supplements [56]. A recent multicenter Italian study collected data on a large number of patients affected by mitochondrial epilepsy and their management [22]. The authors reported a full or partial response to antiepileptic drugs (AEDs) for most patients, with an absence of response in only 11% of cases. Response to treatment, however, does not seem to be influenced by age at epilepsy onset. In clinical practice, mitochondrial epilepsy is generally treated using AEDs characterized by low mitochondrial toxic potential, for example gabapentin, lamotrigine, zonisamide, and levetiracetam (LEV). LEV was the most-used AED in the Italian cohort (mainly in adults), with effectiveness in 81% of the adult-onset epilepsy cases. In early-onset epilepsy, by contrast, the authors reported more frequent use of phenobarbital and vigabatrin, achieving seizure control in 75% and 66% of cases, respectively. Interestingly, the use of topiramate (TPM) in children allowed full seizure control, although this drug was used in only 25% of cases.
The role of adrenocorticotropic hormones and the ketogenic diet in the treatment of these patients has been reported by some studies in the literature [57,58], which found these options to be effective and safe. However, the Italian study reported very poor antiepileptic efficacy in young children.
As reported above, valproic acid should be avoided, particularly in patients with POLG mutations, because this drug may cause fulminant hepatic failure or worsen neurologic symptoms, including seizures.
Epilepsy may be refractory to treatment. No specific combination of medications has been shown to result in clinical improvement.
Stroke-Like Episodes
Stroke-like episodes may be the main feature of some forms of MDs, especially MELAS syndrome [59]. The more common features of these episodes include headache, nausea and vomiting, encephalopathy, focal-onset seizures (with or without associated focal neurological deficits), and cortical and subcortical signal abnormalities not confined to vascular territories, known as "stroke-like lesions". Although originally described in young adult patients, late-onset forms of these episodes are increasingly recognized [60].
The exact pathophysiological mechanisms of stroke-like episodes are unknown, and this is also reflected in debates on their management. For example, the use of L-arginine seems to be supported by the vascular involvement in stroke-like episodes and the nitric oxide deficiency found in patients [36]. However, there is no strong evidence that L-arginine (with or without citrulline co-supplementation) improves the management of stroke-like episodes, so further investigations are needed.
In the case of clinical suspicion of a stroke-like episode, a detailed patient history is a key diagnostic point. Indeed, a sudden onset of focal neurological deficits, particularly pure motor weakness (facial weakness or hemiparesis), that evolves rapidly within minutes should direct the physician towards a vascular stroke, since this mode of symptom onset is quite unusual in stroke-like episodes. Conversely, for any individual who presents with complex visual symptoms, perception problems, and hearing disturbances persisting for hours or days before admission, the hypothesis of a mitochondrial stroke-like episode is more likely. It is also important to try to identify potential triggers such as infection, gut dysmotility, dehydration, prolonged fasting, and non-adherence to anti-epileptic drug(s) [23].
Patients who have already had a stroke-like episode and show symptoms suggestive of a new episode or seizure should be considered for early treatment with benzodiazepines if they are outside the hospital, and with an intravenous anti-epileptic drug (AED) such as levetiracetam, phenytoin, lacosamide, or phenobarbitone once in the hospital environment. As mentioned above, the use of valproate is not recommended for patients harboring POLG mutations [19].
When intensive care is necessary (i.e., generalized convulsive status epilepticus; intrusive, frequent focal motor seizures with breakthrough generalized seizures that fail to respond to intravenous AEDs; severe encephalopathy with a high risk of aspiration; or focal motor status epilepticus with retained consciousness failing to respond to benzodiazepines and intravenous AEDs), the use of midazolam is recommended as the first-line option for anesthesia. With regard to the use of propofol, there is no strong evidence suggesting it should be totally avoided in refractory status epilepticus associated with stroke-like episodes. Therefore, physicians should evaluate its administration case by case [23].
For patients who show neuro-psychiatric symptoms, the use of haloperidol, benzodiazepine, and quetiapine is suggested by clinical experience as safe and efficacious, according to the most recent recommendations published after a recent consensus on stroke-like episode management [23].
Hearing Loss
Hearing loss is a common feature of MDs. Patients most frequently show progressive sensorineural hearing loss, typically of cochlear origin. Alternative observed forms are congenital hearing loss and auditory neuropathy [27]. Mutations in mtDNA are associated with both syndromic and non-syndromic hearing loss. In particular, m.1555A>G and m.3243A>G mutations are frequently associated with sensorineural hearing loss. Regarding syndromic hearing loss, systemic neuromuscular disorders such as MELAS, MERRF or PEO syndromes are often characterized by sensorineural hearing loss. MT-RNR1 or the mitochondrially encoded tRNA serine 1 (MT-TS1) gene may show many pathogenic variants associated with non-syndromic hearing loss, which is generally symmetric and progressive, mainly involving a high frequency range. m.1555A>G in MT-RNR1 is the most common non-syndromic mutation [61].
Digital hearing aids may improve function in the case of moderate or severe hearing loss, whereas cochlear implantation should be considered in profound hearing loss. In the management of mitochondrial diseases, aminoglycosides may precipitate the development of hearing loss and should be used with caution, as should platinum-based anticancer drugs [26]. Aminoglycoside and cisplatin ototoxicities are also related to the induction of oxidative stress and an increase in reactive oxygen species (ROS) levels that may exacerbate mitochondrial disease. Consequently, mitochondria-targeted antioxidants could be useful for the prevention and treatment of diseases associated with mitochondrial dysfunction. Examples of these drugs include lipophilic cation-based antioxidants such as MitoQ, MitoVitE, MitoPBN, MitoPeroxidase, SkQ1, and SkQR1, or amino acid-and peptide-based antioxidants such as SS tetrapeptides [62,63].
However, there is a lack of clinical evidence for the use of mitochondria-targeted antioxidants, although animal studies have been reported in the literature [64]. Therefore, clinical trials based on mitochondria-targeted antioxidant administration need to be performed to confirm the otoprotective effects in patients.
Visual Loss
Ophthalmologic manifestations in MDs are present in 35-81% of cases and have been described in many forms of mitochondrial disorders [65,66]. Symptoms may be the main feature of the disease (such as ophthalmoplegia and ptosis in CPEO), or specific for a syndrome, such as optic nerve disease in Leber's hereditary optic neuropathy (LHON). Otherwise, patients may manifest nonspecific ophthalmologic manifestations like cataracts, retinal disease, nystagmus, strabismus, or reduced visual acuity [27].
Mitochondrial damage is also involved in the mechanisms responsible for retinopathy in patients affected by metabolic disorders (i.e., diabetes, dyslipidemia) and in several age-related retinal diseases, such as glaucoma and age-related macular degeneration. In fact, oxidative stress affects cell structures and components, particularly metabolically active neuronal cells such as photoreceptors. Therapies targeting these pathological processes have been and remain the object of several studies in the literature [67][68][69][70][71].
Surgery may be beneficial for strabismus or cataracts and represents a valuable option for ptosis. Lubrication of the eyes is also important in patients with inappropriate eyelid spread of tears, whether due to ptosis or after ptosis repair.
LHON is a common mitochondrial optic neuropathy characterized by acute onset of painless and bilateral central vision loss. This disease more frequently develops in young adults. Risk factors such as heavy alcohol and moderate nicotine/tobacco exposure increase the possibility of developing symptoms and should be avoided. Idebenone, a short-chain benzoquinone capable of acting as an antioxidant agent, represents the only disease-specific drug approved to treat visual impairment in adolescents and adults with Leber's hereditary optic neuropathy. Different trials have demonstrated the safety of this supplement in LHON treatment, although further investigations about its effectiveness are needed considering the rare incidence of this disease and the low recruitment numbers in clinical trials [9].
Several other therapies with neuroprotective, antioxidant, anti-apoptotic, and/or anti-inflammatory activities have been investigated in the treatment of LHON, but clinical trials have not yielded clear results. Promising preliminary results have been reported for gene therapy in in-vitro and in-vivo models of LHON. In particular, allotopic gene therapy for LHON at low and medium doses appears safe, opening the door for testing at higher doses [72].
Parkinsonism and Movement Disorders
Primary mitochondrial disease may be characterized by damage to the basal ganglia, the cerebellum, the cortex, or the corticospinal tracts, and may cause movement disorders and abnormalities of tone. Thus, patients can manifest a mixed movement and tone disorder, including hyper- and hypokinetic or cerebellar types of movements, hypotonia, spasticity, rigidity, and dystonia [25]. Other possible symptoms are myoclonus, ataxia, gait disturbance, and parkinsonism. In the context of mitochondrial disorders, Leigh syndrome and Leber's hereditary optic neuropathy mutations are often associated with dystonia, although this clinical manifestation has also been observed for several other mtDNA mutations. In particular, subjects with POLG mutations or mtDNA depletion syndromes are at higher risk of parkinsonian symptoms [73].
The clinical approach to movement and tone disorders in mitochondrial disease is comparable to that in patients with movement disorders of other causes. The symptomatic benefit of levodopa in mitochondrial patients with generalized dystonia or parkinsonism needs to be investigated further, as it is still not clear how pathophysiologic mechanisms involved in mitochondrial parkinsonism differ from those of parkinsonism of other causes. Symptoms like focal or multifocal dystonia can be improved with botulinum neurotoxin injections, whereas the use of oral baclofen, even if tolerated in most patients with generalized dystonia, has not provided satisfactory clinical responses [25]. Deep brain stimulation may be an option in the treatment of mitochondrial movement disorders, rigidity, and dystonia, considering the patient's long-term prognosis and level of morbidity [27].
Anaesthesia
Patients with MDs may require general anesthesia in their diagnostic workup and subsequent management. However, the evidence base for the use of general anesthesia in these patients is limited and inconclusive.
Mitochondria are a possible site of action for general and local anesthetics. Due to the high energy requirements of the central nervous system, patients with mitochondrial dysfunction might be more susceptible to changes in consciousness levels when anesthesia is used [74].
Patients affected by MDs are more susceptible to developing lactic acidosis, and surgical procedures and perioperative fasting, as possible metabolic stress factors, may exacerbate this condition. Routine perioperative use of lactate-free intravenous fluids (such as 5% dextrose-0.9% saline) is recommended in all patients with MDs undergoing general anesthesia.
The experts involved in the international Delphi-based consensus agreed that general anesthesia in MD patients is safe and that the risk of adverse events is exceptionally rare [14]. As already mentioned, in patients who are not on a ketogenic diet, the perioperative administration of intravenous glucose combined with minimal preoperative fasting may prevent the effects of catabolism [14].
Regional anesthesia eliminates the risk of prolonged muscle relaxation and central nervous system depression, as well as the possibility of malignant hyperthermia. Although no neurological sequelae have been reported following spinal or epidural anesthesia, these procedures should be avoided in patients with major abnormalities of the spinal cord or severe peripheral neuropathy [75].
Conclusions
Because of the lack of disease-modifying therapies for most MDs, the management of affected patients focuses mainly on treating the clinical features they present and on preventing major complications. To improve this approach, further knowledge of mitochondria-safe drugs commonly used in clinical practice is essential. Furthermore, when the administration of a mitochondrion-toxic drug is needed, physicians should perform careful clinical and laboratory follow-up to recognize and treat possible side effects early (e.g., rhabdomyolysis, lactic acidosis, and hepatic failure, among others) [76].
Administration of vitamins, coenzyme Q, or other types of supplements is also an essential part of the therapeutic management of patients. For example, supplementation with folic acid has proved to be useful to compensate for the brain folate deficiency, which is a very common physiopathological aspect of mitochondrial diseases, especially in Kearns-Sayre syndrome [35].
Recently, besides trials evaluating the potential therapeutic effects of idebenone in LHON [9,10], there has also been great interest in the new possibilities offered by mitochondrial biogenesis [77]. Gene therapy is mainly focused on the prevention of mtDNA-related syndromes through the elimination of mutated mtDNA from oocytes [78], and on the systemic or local administration of vectors to restore mitochondrial functionality [79]. Further clinical investigations and a better understanding of the mechanisms that this new approach may offer are needed. However, mitochondrial gene therapy appears to be a valuable and promising strategy to treat MDs.
Experimental Study of Hybrid Layer Effect for Prism under Bending, Shear and Torsion
A composite concrete structure that combines concretes of different characteristics arranged in layers is described as hybrid; the principal purpose of using a hybrid mixture is to enhance the load-carrying capacity of the section. Hybrid concrete sections offer high compressive strength, ductility, high energy absorption, and high tensile strength. These properties can be achieved by casting two or more layers of different concrete types or strengths together so that each layer is employed to its best advantage. The results of flexural, shear, and torsion tests on hybrid prisms composed of two layers of different types of concrete are presented and discussed, together with the effect of different parameters and variables. A normal-concrete layer is combined with a layer of high- or ultra-high-strength concrete, with and without fiber. The purposes of the present investigation are to determine the engineering properties of three concrete types and to assess the effect of hybrid concrete with different concrete types in different layers of prisms, with different parameters and variables, tested under three types of loading (flexure, shear, and torsion).
Introduction
Concrete is the most extensively used material in building construction, available in different characteristics and strengths including normal, high, and ultra-high-strength concrete. The new demands of construction, including high strength, thin sections, large spans, complex structural forms, economy, fast construction, elimination of reinforcement, and reduced maintenance, require special concrete with enhanced properties and characteristics. The cost of ultra-high-strength (UH) concrete is higher than that of conventional-strength concrete; however, UH concrete provides advantages including higher performance levels, impact resistance, ductility, toughness, and strength [1][2][3][4].
Shear, Flexural and Torsion test
The load that produces slippage between two materials along surfaces parallel to the direction of the force is called the shear load. The force acting along the failure surface of a material when it fails is called the shear force. The stress in a material just before it yields, which is a material property, is termed the flexural strength. Torsional failure of a concrete member is initiated by the tensile stress generated by a state of pure shear, which torsion produces [5][6][7]. Concrete cracks easily under impact loads and tensile stress because of its low tensile strength. Fibers of different shapes, types, percentages, and sizes have been used to bridge cracks and provide improved serviceability to the structure. The inclusion of steel fibers can generally enhance the tensile strength of the member to a moderate degree, while the toughness is improved to a greater extent [8][9][10][11][12].
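For orientation, the flexural strength mentioned above is usually computed from a bending test on a rectangular prism. The relation below is the standard modulus-of-rupture formula from mechanics of materials, stated here under the assumption of center-point loading on a rectangular section (with width b and depth d, e.g., b = d = 100 mm for the prisms used later); it is not quoted from the paper itself.

```latex
% Flexural (modulus of rupture) strength of a rectangular prism under
% center-point loading, with peak load F, span L, width b, and depth d:
\[
  \sigma_f \;=\; \frac{3\,F\,L}{2\,b\,d^{2}}
\]
```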
Muhammad I R and Mazen A M (2007) [7] improved the toughness and tensile strength of reinforced concrete members using steel fiber. Santhakumar R et al. (2007) [13] presented a numerical finite element study of coupled torsion and bending in reinforced concrete beams. Suresh R (2009) [14] studied the torsional behavior of rectangular SFRC beams subjected to combined torsion-bending-shear. Pant A and Parekar S (2009) [15] presented tests of reinforced concrete beams under shear, bending, and torsion with the effect of steel fiber. Sable K et al. (2011) [16] investigated fly ash as a cement replacement and SFRC of different strengths under shear and torsion. Kishor S and Madhuri K (2012) [17] studied the shear and torsion strength of SCC and NCC with different aspect ratios, with and without fibers. Ahmed S and Khaled S (2014) [18] investigated the performance of steel fiber self-compacting high-strength reinforced concrete beams experimentally and analytically under combined torsion and bending. Babar et al. (2015) [19] investigated the ductility and shear strength of hooked steel fiber reinforced concrete beams without stirrups. Satyajeet B and Ajinkya D (2015) [20] presented an analytical design to predict the shear, tensile strength, torsion, and bending of fiber-reinforced concrete beams; fibers increased the load-carrying capacity, ductility, and toughness of high-strength fiber-reinforced concrete beams. Amulu C and Ezeagu C (2016) [5] investigated the impact of coupled actions of torsion, bending, and shear stresses. Awadh E (2016) [6] studied combined torsional moments, shear forces, and bending moments in reinforced concrete beams. Karim et al. (2016) [21] showed that fibers improve torsional resistance. Thivya J et al. (2016) [22] studied a composite beam under combined bending and torsion. Senthuran and Sattainathan S (2016) [23] studied the improved torsional behavior of reinforced beams containing steel fibers. Sameera K (2017) [24] investigated the behavior of cylindrical composite beams with varying fiber content under a combined state of flexure, torsion, and shear.
Description of the Experimental Work
The experimental program was conducted to find the relation between different types of concrete under shear, flexure, and torsion. Four different mixes with different concrete properties were prepared and characterized through different tests including weight change, density, tensile splitting strength, compressive strength, and flexural strength [1,12]. All work was carried out over a period of 28 days. Table 1 shows the details of the various types of mixes used in the present research work. Wooden and steel prism molds (100 × 100 × 400 mm) were prefabricated to cast the specimens. The experimental program includes testing 39 specimens with different concrete types under shear, torsion, flexure, and compression, as presented in Table 2. Regular and high-performance mixtures were cast conventionally: the dry materials (cement, sand, and gravel) were first mixed for a set period, and water was then added to obtain a homogeneous mixture. For the ultra-high-strength mixes, the dry materials (cement, sand, and gray silica fume) were mixed together to obtain high density and high compressive strength; water with the Sika ViscoCrete-PC5390 superplasticizer was then added to produce a homogeneous mixture. For mixtures containing fiber, the fibers were added by hand at the end of mixing to distribute them evenly. After casting, the mixture was placed in the molds in three layers and vibrated to obtain homogeneous compaction. The mold surfaces were finished and covered for a whole day; the specimens were then demolded and placed in water containers for 28 days, after which they were taken out in preparation for testing. Results from the tests of all mixtures were analyzed and discussed in tables and figures; Table 3 presents the relevant details. For the torsional test, a special steel mold was used for testing the specimens. For a prismatic member, the angle of torsional twist (θ) depends on the applied torque (T), the length of the member (L), the shear modulus of the material (G), and a shape-dependent parameter known as the torsional inertia constant (It), as expressed below. Several test methods exist for evaluating shear strength, such as the FIP standard, and several researchers have reported the use of fibers to enhance the shear, torsional, and flexural capacity of concrete.
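The relation among the quantities named above is the standard elastic torsion formula from mechanics of materials; it is reproduced here for orientation, using the paper's symbols rather than any values reported in it.

```latex
% Elastic angle of twist of a prismatic member under torque T,
% with member length L, shear modulus G, and torsional constant I_t:
\[
  \theta \;=\; \frac{T\,L}{G\,I_t}
\]
```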
Results and Discussion
Results of the shear, flexure, and torsion tests for all molds are presented here. Table 4 shows all flexural stresses for normal, high, and ultra-high-strength concrete. Figures 9-15 show the flexural comparison between normal, high, and ultra-high-strength concrete, with or without fiber, with different layers. Table 5 shows all shear stresses for normal, high, and ultra-high-strength concrete. Comparing the results with the high-strength shear stress, only the models incorporating ultra-high-strength concrete can exceed the shear stress of high-strength concrete. Comparing the results with the ultra-high-strength shear stress, only two models, those incorporating ultra-high-strength concrete with fiber, can exceed it. Table 6 shows all torsional stresses for normal, high, and ultra-high-strength concrete. Figures 23-29 show the torsional comparison between normal, high, and ultra-high-strength concrete, with or without fiber, with different layers. Comparing the results with the normal torsional stress, all models can exceed the torsional stress of normal concrete. Comparing the results with the high-strength torsional stress, only seven models, those incorporating ultra-high-strength concrete, can exceed it. Comparing the results with the ultra-high-strength torsional stress, only four models, those incorporating ultra-high-strength concrete with fiber, can exceed it.
Analytical modeling
An acceptable concordance was found between the experimental test results and the finite element program. The analytical study includes the modeling of normal, high, and ultra-high-strength molds tested under the different types of loading, with dimensions and properties corresponding to the actual experimental data. The specimens were modeled using an eight-node three-dimensional concrete brick element [25][26][27]. The theoretical work was applied to verify that the finite element program can examine many structural elements. Table 7 shows a comparison between experimental and analytical stresses. Figure 30 shows a 3-D view and the dimensions of the shear molds. Figures 31, 32, and 33 show the shear stress for the normal, high, and ultra-high-strength concrete molds. Figure 34 shows the brick element for the flexural test. Figures 35, 36, and 37 show the flexural stress for the normal, high, and ultra-high-strength concrete molds. Figure 38 shows a 3-D view and the dimensions of the torsional molds. Figures 39, 40, and 41 show the torsional stress for the normal, high, and ultra-high-strength concrete molds.
A Chinese Perspective on Writing English Abstracts: Challenges, Errors, Improvement Tips, and Critical Reflections
This study examined Chinese EFL researchers’ English abstract writing in language education. Using open-ended questionnaires, it first investigated 24 Chinese EFL researchers’ perceptions of their challenges in writing English abstracts. Using generalizability theory and follow-up interviews, it then invited 16 experienced English journal reviewers to assess 27 published English abstracts written by Chinese EFL researchers to identify common errors and suggest improvement tips. Finally, it examined eight selected Chinese EFL researchers’ critical reflections on the English journal reviewers’ assessment of the 27 English abstracts. The results indicated that Chinese EFL researchers experienced several challenges in writing English abstracts. Further, the English reviewers’ assessment of the 27 abstracts was consistent and reliable. The common errors were associated with the accuracy, non-evaluative nature, coherence and readability, and conciseness of an English abstract. The English reviewers provided tips for improvement, and Chinese EFL researchers expressed their critical reflections on their assessment. Educational implications are discussed.
Introduction
An abstract is a brief and comprehensive summary of a proposal or a completed article (American Psychological Association [APA], 2020; Fowler, 2011; Goldborl, 2002; Heseltine, 2012; Price, 2014; Swales & Feak, 2009). It provides the reviewers with a summary of the proposal or the article to make acceptance decisions (Happell, 2007). It gives "synthesized, focused, and succinctly written information" about proposed and completed work (Pearce & Ferguson, 2017, p. 453). Although their depth and breadth vary among conferences, grant organizations, and scholarly journals, abstracts are essential for conference presentations, grant proposals, and journal article submissions (Happell, 2007; Pearce & Ferguson, 2017; Price, 2014; Swales & Feak, 2009). More specifically, a published journal article abstract communicates the significant components of the article (APA, 2020; Swales & Feak, 2009). It is "an advertisement" for the article (Sheldon & Jackson, 1999, p. 78). Moreover, it provides the readers with their first exposure to the research article (Hyland, 2004; Swales & Feak, 2009). It is "often the point at which they decide whether to continue and give the accompanying article further attention or to ignore it" (Hyland, 2004, p. 63). The abstract is also a "window to the world" on the article (Heseltine, 2012, p. 204). It helps the authors maximize the exposure of their research through journal article publications (Hyland, 2004; Swales & Feak, 2009).
It is customary that articles published in non-English journals require accompanying English abstracts to establish the international visibility of the research, the journals, and the authors (Burgoine et al., 2011; Cilveti & Pérez, 2006; Klimova, 2013; Linder, 2014; Lorés-Sanz, 2016). Undoubtedly, these English abstracts play a significant role in establishing international visibility (Burgoine et al., 2011; Friginal & Mustafa, 2017; Hosseingholipour et al., 2021; Hyland, 2002; Linder, 2014). Nowadays, more and more Chinese EFL (English as a foreign language) researchers publish research articles in English journals hosted in English-speaking countries such as the United States and Great Britain (Huang et al., 2021; Li & Flowerdew, 2007). Due to their non-English background, they face challenges composing high-quality English abstracts, which may ultimately prevent them from publishing in English journals and establishing international visibility (Hyland, 2004; Hyland & Shaw, 2016; Hu & Cao, 2011; Li & Flowerdew, 2007; Lu & Deng, 2019; Ruan, 2018; Ye & Wang, 2013).
However, there is little research assessing the quality of English abstracts written by Chinese EFL researchers within the frameworks of both modern assessment theories, for example, generalizability (G-) theory, and discourse analysis, for example, the four-move introduction-methods-results-discussion (IMRD) model. This study aimed to bridge this gap. The results will have important implications for Chinese researchers and for English journal editors and reviewers in the international academia.
A Brief Narrative Literature Review

Informative and structured abstracts are the two major types of journal article abstracts (Cals & Kotz, 2013; Hartley, 2003; Mosteller et al., 2004). Informative abstracts are very common in library journals, generally one to two paragraphs (between 100 and 200 words) in length, whereas structured abstracts are very common in science, technology, and medical journals but less common in library journals (Atanassova et al., 2016; Pearce & Ferguson, 2017). Structured abstracts typically include research purpose, research design or methodology, results and findings, limitations, practical implications, and originality/value (Pearce & Ferguson, 2017; Price, 2014).
This brief narrative literature review section (Feak & Swales, 2009) reviewed relevant past research in humanities and social sciences in the following three categories and synthesized it into a coherent discussion: a) studies that investigated the quality of abstracts and identified common errors committed by EFL researchers in abstract writing, b) studies that examined differences in genre of abstracts across English and other languages including Chinese, and c) studies that offered tips for teaching writing a good abstract.
Common Errors Committed by EFL Researchers in Abstract Writing
Several studies examined the quality of abstracts and identified the common errors committed by EFL researchers in humanities and social sciences (Hosseingholipour et al., 2021; Klimova, 2013; Linder, 2014; Lorés-Sanz, 2016). For example, Klimova (2013) examined the quality of 66 English abstracts to identify the common errors committed by undergraduate EFL students majoring in management of tourism at a university in the Czech Republic. The results indicated that these abstracts were poor in quality; students simply translated their abstracts from their native language into English. The identified errors in their English abstracts included a) structures that were not logical and clear; b) abstracts that exceeded the word limit; and c) linguistic-stylistic errors such as word order, article, subject-predicate agreement, preposition, spelling, punctuation, and capitalization errors. However, these 66 abstracts were written by undergraduate EFL students and were not officially published in scholarly journals, which may limit the generalization of the findings to EFL researchers in the field.
Unlike Klimova (2013), Linder (2014) investigated the quality of 197 published English abstracts selected from Spain's ten most prestigious open-access translation studies journals. Although the overall quality of these abstracts was much better than what Klimova (2013) had reported, the results indicated that 73 English abstracts contained 128 grammatical, vocabulary-related, and typographical errors.
Unlike Klimova (2013) and Linder (2014), who investigated the quality of actual abstracts written by EFL researchers, Hosseingholipour et al. (2021) examined the perceptions of nine physical education professors and four Ph.D. students at Iranian universities about the factors affecting the quality of Iranian researchers' English abstracts as well as their common errors in writing English abstracts. The results indicated that problems with the English language negatively affect the quality of Iranian researchers' English abstracts, and English language errors were commonly committed by them. Their errors included incorrect word choices and spellings, ungrammatical sentences, wrong rhetorical moves, and poor abstract organization.
Different from Klimova (2013), Linder (2014), and Hosseingholipour et al. (2021), who examined the quality of English abstracts written by EFL researchers from only a single non-English-speaking country, Lorés-Sanz (2016) examined the quality of 66 English abstracts written by EFL researchers from 17 non-English-speaking countries (e.g., the Netherlands, Germany, Israel, Turkey, and Korea). Furthermore, these abstracts were published in the 2012, 2013, and 2014 volumes of Social Science Research, an American-based journal and one of the most prestigious in the field of sociology. Specifically, the researcher analyzed the rhetorical moves in these abstracts and found that they did not strictly follow the conventional four-move introduction-methods-results-discussion (IMRD) rhetorical structure typical of the English academic world. The number of moves was reduced in their English abstracts. Furthermore, their abstracts showed "an inclination towards rhetorical simplification, while maintaining the convention of the discipline to include aims, methods, and results" (pp. 67-68). However, the simplification of the conventional rhetorical structure, as argued by the researcher, shows a higher degree of textual complexity.
These studies suggest that there is much room for EFL researchers, student researchers in particular, to improve their English abstracts. Their common errors may prevent readers from understanding the research problems under investigation and the significant research findings. As a result, these errors would negatively impact the international dissemination and visibility of their research studies (Hosseingholipour et al., 2021; Klimova, 2013; Linder, 2014; Lorés-Sanz, 2016).
Differences in Genre of Abstracts Across English and Other Languages Including Chinese
A few studies compared the genre of abstracts in humanities and social sciences and identified differences across English and other languages, including Chinese (Behnam & Golpour, 2014; Friginal & Mustafa, 2017; Hu & Cao, 2011; Ozdemira & Longo, 2014; Ruan, 2018). For example, Behnam and Golpour (2014) examined the move structure differences of published research article abstracts written by native English and Iranian researchers in applied linguistics. They reported that English abstracts included all conventional moves, whereas Persian abstracts had more move omissions. Later, Friginal and Mustafa (2017) found differences in how information is formed and shared, and in the directness and argumentation authors articulate, by examining the linguistic characteristics of English research article abstracts in linguistics, applied linguistics, and English education that were published in the United States and Iraq and written by Iraqi EFL researchers.
Unlike Behnam and Golpour (2014) and Friginal and Mustafa (2017), who examined research article abstract differences between native English and Iranian or Iraqi EFL researchers, Ozdemira and Longo (2014) examined graduate thesis abstract differences in metadiscourse features between American and Turkish students. They reported that American students used more "evidentials, endophorics, code glosses, boosters, attitude markers, and self-mentions" (p. 59) than Turkish students; however, American students used fewer metadiscourse transitions, frame markers, and hedges than Turkish students in their thesis abstracts, suggesting that Turkish EFL students may have challenges in writing good thesis abstracts that follow the logic of arguments.
A couple of studies examined English abstract differences in metadiscourse and linguistic features between native English and Chinese EFL researchers in applied linguistics (Hu & Cao, 2011; Ruan, 2018). Both studies reported significant differences across English and Chinese. For example, Hu and Cao (2011) analyzed 649 abstracts selected from eight journals in applied linguistics to examine whether hedging and boosting strategies differ between Chinese- and English-medium journals. The results indicated that abstracts published in English-medium journals used more hedges than abstracts published in Chinese-medium journals. Interestingly, by analyzing a corpus of 200 abstracts in four applied linguistics journals, Ruan (2018) reported that native English researchers used more simple noun phrases, whereas Chinese EFL researchers used more complex noun phrases in research article abstracts.
Tips for Teaching Writing a Good Abstract
Several researchers in the field of humanities and social sciences offered tips for teaching the writing of a good abstract (Friginal & Mustafa, 2017; Hyland, 2003, 2007; Klimova, 2013; Ruan, 2018; Stotesbury, 2003; Swales & Feak, 2009). Hyland (2003, 2007) suggested genre-based pedagogies for EFL academic writing teachers to assist their students to produce effective texts and abstracts. Swales and Feak (2009) defined genre as "a name for a type of text or discourse designed to achieve a set of communicative purposes" (p. 1). Following this definition, the research article abstract is a genre (Swales, 2004). Genre-based pedagogies require the teacher to create a contextual framework for writing the abstract, help students understand the structure of an abstract, and then assist them toward a command of writing a good abstract (Hyland, 2003, 2007).
Moreover, Stotesbury (2003) suggested that EFL academic writing teachers teach students the conventions of abstract writing in their own fields. Specifically, students should be analyzing abstracts in scholarly journals of their own disciplines. Students should have concrete examples to learn to write good abstracts. These suggestions were further explained by Friginal and Mustafa (2017), who argued that successful academic writing and production of high-quality research article abstracts "can be addressed by exploring data and developing teaching materials from data-driven measures. Cross-cultural and cross-disciplinary comparisons using a specialized corpus appear to be an effective starting point in understanding existing discourse patterns" in teaching writing a good English abstract (p. 55).
In addition, Swales and Feak (2009) suggested that a five-rhetorical-moves model be followed in teaching writing a good abstract, that is, Move 1-background/introduction/situation, Move 2-present research/purpose, Move 3-methods/materials/subjects/procedures, Move 4-results/findings, and Move 5-discussion/conclusions/implications/recommendations. It has been identified by most researchers in various fields and languages and found effective in teaching the writing of a good abstract.
Finally, since it is common that EFL student researchers write their abstracts in their first language and then translate them into English, Klimova (2013) suggested that translation be avoided in composing abstracts due to linguistic differences between English and other languages. EFL academic writing teachers should encourage and assist their students to write abstracts directly in English (Klimova, 2013). Students should also be taught to use compact grammatical features while still writing with clarity of meaning to achieve their rhetorical and pragmatic goals in writing good English abstracts (Ruan, 2018).
To conclude, although several studies investigated the common errors committed by EFL researchers in humanities and social sciences and further identified the differences in genre of abstracts across English and other languages including Chinese, the assessment of the quality of English abstracts written by Chinese EFL researchers in humanities within the G-theory and IMRD theoretical frameworks is still under-researched. Therefore, this research aimed to a) examine Chinese EFL researchers' challenges in writing journal article abstracts in English; b) investigate English journal reviewers' assessment and evaluation of the quality of published English abstracts written by Chinese EFL researchers; c) provide Chinese EFL researchers with improvement tips; and d) explore Chinese EFL researchers' critical reflections on English journal reviewers' assessment and evaluation.
Theoretical Frameworks
G-theory (Cronbach, Gleser, Nanda, & Rajaratnam, 1972) was used as a theoretical framework guiding the assessment of the quality of English abstracts written by Chinese EFL researchers. It is a modern test theory, powerful in identifying the sources of assessment score variance and error and in estimating the impact of these variance components on score reliability (Brennan, 2001; Shavelson et al., 1993; Shavelson & Webb, 1991). It is commonly used by educational researchers to examine assessment variability and reliability (Huang et al., 2021; Li & Huang, 2022).
In addition, the four-move IMRD model in discourse analysis was adopted as the theoretical framework for the development of research instruments for data collection and for the data analysis of this study (Bhatia, 2002; Swales, 1990; Swales & Feak, 2009). A research article abstract usually contains the following four moves: Move 1-introduction (e.g., why is the research problem or topic important? what is the current study about?); Move 2-methods (e.g., how was the study conducted?); Move 3-results (e.g., what was discovered?); and Move 4-discussion (e.g., what are the conclusions? what are the implications?) (Bhatia, 2002; Swales & Feak, 2009).
Finally, the APA (2010) abstract standards were also used to guide the development of research instruments for data collection and the analysis of the collected data. According to the APA (2010) Publication Manual, Sixth Edition, a good abstract is accurate (i.e., it reflects the problem under investigation, the participants, the method, the major findings, the conclusions, and the implications); nonevaluative (i.e., it reports rather than evaluates); coherent and readable (i.e., it is written in clear and concise language); and concise (i.e., it is brief and makes each sentence maximally informative).
It is important to mention that the data collection of this study had been completed before the APA (2020) Publication Manual, Seventh Edition, was released. The qualities of a good abstract in the seventh edition are still the same, that is, being accurate, nonevaluative, coherent and readable, and concise. However, the descriptions are slightly different for being accurate (i.e., it correctly reflects the purpose and content of the paper) and coherent and readable (i.e., it is written in clear and deliberate language).
Research Questions
The following five research questions guided this study:
The Research Design
This cross-sectional study was conducted in four phases.
Phase One used open-ended questionnaires to investigate Chinese EFL researchers' challenges in writing journal article abstracts in English.
Phase Two involved experienced English journal reviewers' quantitative and qualitative assessments of Chinese EFL researchers' English abstracts published in top-tier Chinese journals in the field of language education. For the quantitative assessment portion, G-theory (Cronbach et al., 1972) was adopted as a methodological framework because it can analyze more than one measurement facet simultaneously in investigations of the variability and dependability of scores assigned by assessors and evaluators (i.e., the experienced English journal reviewers in this study) (Brennan, 2001; Huang, 2012; Liu & Huang, 2020; Zhao & Huang, 2020). For the qualitative assessment portion, the English journal reviewers' written comments on various aspects of each abstract were examined so that the common errors occurring in these abstracts could be identified.
Phase Three included follow-up semi-structured interviews with selected English journal reviewers.These interviews aimed to elicit their suggestions for Chinese EFL researchers to improve abstract writing in English.
Phase Four included follow-up semi-structured interviews with selected Chinese EFL researchers who participated in Phase One of the study.These interviews aimed to obtain Chinese EFL researchers' critical reflections on the English journal reviewers' quantitative and qualitative assessments of the published English abstracts written by Chinese EFL researchers.
The Participants
The purposive sampling method was adopted, and purposive samples of 24, 16, 8, and 8 participants were selected for Phases One, Two, Three, and Four of this study, respectively (Creswell, 2014). Phase One participants included 24 EFL researchers from different 4-year universities across China. The criteria for selecting Phase One participants were: a) they must hold a master's degree; b) they must have more than 3 years of research experience; and c) they must have published at least three research articles in either English or non-English journals.
Among the 24 participants from China, 12 were male and 12 were female researchers; 13 had obtained a doctoral degree, and 11 had a master's degree. Eight of the 11 participants with a master's degree were currently studying for their doctoral degrees in educational fields at English-medium universities. It is important to note that all 24 participants had more than 3 years of research experience in education, and 14 of them had published in English journals.
Phase Two participants included 16 experienced English journal reviewers. The criteria for selecting these reviewer participants were: a) they must hold a doctoral degree; b) they must have served as reviewers of English journals for at least 3 years; and c) they must have published at least five articles in English journals. Among the 16 reviewers, 7 were male and 9 were female, and all of them had published more than eight articles in English journals. Furthermore, 11 of them were from the United States, 3 from Canada, 1 from the Commonwealth of the Bahamas, and 1 from Hong Kong. Their areas of expertise were humanities and education. The corresponding author of this article was responsible for recruiting them because he had worked as an education professor at an American university and had served as a reviewer of several English journals before he relocated to China.
Phase Three participants included eight selected English journal reviewers who had participated in Phase Two of the study. After performing quantitative and qualitative assessments of the published English abstracts written by Chinese EFL researchers, they were further purposefully selected for follow-up interviews about their tips for Chinese EFL researchers to improve their abstract writing in English. Among the eight interviewees, four were male and four were female, and all of them had served as English journal reviewers for over 6 years.
Phase Four participants included eight selected EFL researchers who had participated in Phase One of the study. They were first presented with the findings of Phases Two and Three and then interviewed about their critical reflections on these results. Among the eight interviewees, four were male and four were female; five had obtained a doctoral degree, and three were studying for their doctoral degrees at English-medium universities.
The Selection of English Abstracts Published in Chinese Journals
A total of 27 English informative abstracts published in three top-tier language education journals (i.e., A, B, and C) in China across 3 years (i.e., 2014, 2015, and 2016) were selected for Phases Two and Three of this study. The three top-tier language education journals were selected by citations. Three abstracts in each year of each journal were selected through random purposive sampling for inclusion in this study. Journals A, B, and C are top-tier peer-reviewed journals that mainly publish research articles in Chinese in the areas of language research, linguistics, applied linguistics, foreign language education, and translation studies. The 27 abstracts were selected from the two areas of applied linguistics and foreign language education. Regardless of the language in which an article is written, an English abstract is required, and the author(s) must be responsible for it. The purpose of the English abstract is for the journal and the author(s) to establish international visibility. For example, among Journals A, B, and C, the article titles and abstracts published in Journal B have been included in the database of the Modern Language Association of America.
The Instruments
The open-ended questionnaire questions (see Appendix A) for Phase One of this study focused on participants' significant challenges in writing journal article abstracts in English. The instrument for Phase Two was constructed in three steps: a) the researchers of this study consulted three English journal editors in the field of language education for their advice on the assessment criteria for English abstracts; b) they further studied the abstract criteria outlined by the APA (2010) and then constructed a Journal Article Abstracts Assessment Form with a 6-point holistic scoring rubric (see Appendix B); and c) the instrument was finally reviewed and slightly altered by the three English journal editors before data collection. These three steps ensured that the final instrument for Phase Two was accurate and valid. Furthermore, it was also reliable, with a Cronbach's alpha reliability coefficient of .88, that is, the internal consistency reliability coefficient, calculated using SPSS from the Phase Two holistic assessment scores of the 27 abstracts assigned by the 16 English journal reviewers.
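For readers who wish to reproduce this kind of reliability check, the sketch below computes Cronbach's alpha from an abstract-by-reviewer score matrix. It is a minimal illustration, not the authors' actual SPSS analysis: the toy `scores` array and the function name are hypothetical, and only the standard alpha formula is assumed.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_abstracts x n_reviewers) score matrix.

    Each column is one reviewer (treated as an "item"); each row is one abstract.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = scores.shape[1]                          # number of reviewers
    item_vars = scores.var(axis=0, ddof=1)       # variance of each reviewer's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of row (abstract) totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 27 abstracts rated by 16 reviewers on a 1-6 scale.
rng = np.random.default_rng(0)
quality = rng.uniform(1.5, 5.5, size=(27, 1))             # latent abstract quality
scores = np.clip(quality + rng.normal(0, 0.5, (27, 16)), 1, 6)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```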
In Phase Two, the 16 English journal reviewers were invited to a) use the provided scoring rubric to assign a holistic score to each of the 27 English abstracts published in Journals A, B, and C on a six-point scale, and b) provide written comments on various aspects of each abstract (see Appendix B). The follow-up interview questions (see Appendix C) for Phase Three of this study focused on the selected English journal reviewers' suggestions for Chinese EFL researchers to improve their abstract writing in English. The follow-up interview questions (see Appendix D) for Phase Four of this study focused on the selected Chinese EFL researchers' critical reflections on the significant findings of Phases Two and Three of the study.
Data Collection Procedures
Phase One data were collected through emails. Email invitations were sent out to invite participants. The researchers provided the invited participants with information about the study, and they all understood that their participation was voluntary. The completed responses to the open-ended questionnaire questions were sent to the researchers through emails for data analysis. It is important to note that the open-ended questions were asked in English, and the participants also answered these questions in English.
Phase Two data were collected through emails with the 16 experienced English journal reviewers. After they had expressed their willingness to participate in the assessments of the 27 English abstracts, the Journal Article Abstracts Assessment Form was emailed to them, and their completed rating forms were emailed back to the researchers.
Due to time and location inconveniences, Phases Three and Four data were collected through follow-up online interviews with the eight purposefully selected Phase Two English journal reviewers and Phase One Chinese EFL researchers, respectively. The follow-up online interviews were conducted in English between the researchers and the invited interviewees. Again, in Phases Two through Four, participants were provided with information about the study, and their participation was voluntary.
Data Analysis
All qualitative data collected in the four phases were analyzed as follows. The researchers first entered the qualitative data into Excel spreadsheets to ensure data consistency and integrity. They then color-coded the responses under each open-ended interview question. Following that, they began to sort the responses into different categories and subcategories individually, then organized them collaboratively by content, and finally discussed conceptually similar responses, grouped them together, and categorized them by recurring themes. This process aimed to ensure inter-coder reliability of the qualitative data analysis. Also, direct quotes from the participants were incorporated to enhance the validity of the results (Creswell, 2014). It is important to note that both researchers have rich qualitative data analysis experience.
Phase Two quantitative data were analyzed in four steps: a) descriptive statistical analysis to obtain the means and standard deviations of the holistic assessment scores of the 27 abstracts assigned by the 16 English journal reviewers; b) a person nested within journal-by-reviewer (p:j) × r mixed effects G-study; c) a person-by-reviewer (p × r) random effects G-study; and d) a random effects person-by-reviewer (p × R) D-study. The results obtained from these G- and D-studies were used to examine the quality of the English journal reviewers' quantitative evaluation, that is, their assessment variability and reliability, of Chinese EFL researchers' English abstracts published in the three top-tier Chinese journals.
The computer programs Microsoft Excel and GENOVA (Crick & Brennan, 1983) were used for performing these data analyses.
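GENOVA is the standard tool for these designs, but the variance components of the fully crossed p × r random effects G-study can equally be estimated from a two-way ANOVA via expected mean squares. The following sketch illustrates that logic; it assumes a complete 27 × 16 score matrix (the file name is a hypothetical placeholder) and is not a reconstruction of the authors' actual GENOVA runs.

```python
import numpy as np

def g_study_p_by_r(x: np.ndarray):
    """Variance components for a crossed p x r random effects G-study
    (one observation per person-by-reviewer cell)."""
    n_p, n_r = x.shape
    grand = x.mean()
    p_mean, r_mean = x.mean(axis=1), x.mean(axis=0)

    ms_p = n_r * ((p_mean - grand) ** 2).sum() / (n_p - 1)
    ms_r = n_p * ((r_mean - grand) ** 2).sum() / (n_r - 1)
    resid = x - p_mean[:, None] - r_mean[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    var_res = ms_res                          # pr interaction + error
    var_p = max((ms_p - ms_res) / n_r, 0.0)   # person (abstract writer)
    var_r = max((ms_r - ms_res) / n_p, 0.0)   # reviewer leniency
    return var_p, var_r, var_res

x = np.loadtxt("holistic_scores.csv", delimiter=",")  # hypothetical 27 x 16 file
var_p, var_r, var_res = g_study_p_by_r(x)
total = var_p + var_r + var_res
for name, v in (("person", var_p), ("reviewer", var_r), ("residual", var_res)):
    print(f"{name:8s}: {v:6.3f}  ({100 * v / total:5.1f}% of total variance)")
```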
Phase One: Chinese EFL Researchers' Major Challenges
Phase One open-ended questions asked about the 24 EFL researchers' major challenges in writing abstracts in English. They all reported that they had experienced challenges using appropriate and idiomatic expressions and coherent and logical structures in English abstracts. Further, they found it challenging to decide what should be included and how to organize such information transparently within the word limit. "I often struggle with writing up all the aspects necessary for a solid abstract in less than 200 words"; "my biggest challenge is how to use English to describe all the information clearly, concisely, and idiomatically"; "there is so much information in a research article; I do not know how to decide between the important and the unimportant information for an abstract"; "my abstracts are either too simple or too detailed"; and "how much detailed information I should provide in the abstract" were their common responses.
Furthermore, 5 out of 24 participants expressed difficulties in writing good English abstracts because they simply translated the abstracts from Chinese into English. For example, one participant who had successfully published two research articles in English commented that "I usually wrote my abstracts in Chinese and then translated them directly into English; however, due to the language and format differences in scholarly writing, I had experienced tremendous difficulty in producing good English abstracts; and I had to revise them many times before they became acceptable."

Phase Two: English Journal Reviewers' Quantitative Assessment of the 27 Published Abstracts

Descriptive statistical analysis was performed prior to the G-theory analyses. The purpose of the descriptive statistics was to obtain the means and standard deviations of the holistic assessment scores assigned to the 27 English abstracts by the 16 English journal reviewers. The results are presented in Table 1.
As shown in Table 1, the mean assessment scores for the nine English abstracts published in Journal A were between 2.19 and 5.38 out of a total score of 6; similarly, the score ranges for the nine English abstracts published in Journals B and C were between 2.25 and 5.12, and between 1.5 and 4.69, respectively. Further, only 7 of the 27 abstracts (25.9%) received a score over 4, indicating that the quality of the remaining 20 English abstracts (74.1%) was inadequate and unacceptable as evaluated by the 16 journal reviewers according to the holistic scoring rubric (see Appendix B).
In addition, as also shown in Table 1, the standard deviations of the holistic scores for all 27 English abstracts were minimal (between .40 and .72), indicating that the assessment score variability of these 27 English abstracts was small. In other words, the 16 English journal reviewers assessed these abstracts written by Chinese EFL researchers quite consistently. Following the descriptive statistics, a series of G- and D-studies were conducted in Phase Two to further investigate the assessment score variability and reliability of the 27 English abstracts published in the three Chinese journals, as assessed by the 16 English journal reviewers. The results are displayed in Tables 2 to 4.
Table 2 reports the person nested within journal-by-reviewer (p:j) × r mixed effects G-study results. As shown in Table 2, the person nested within the journal (p:j) was the largest variance component (75.22% of the total variance), indicating that the abstract assessment scores received by Chinese EFL researchers within each journal were the single largest source of variance. In other words, within each journal, Chinese EFL researchers' abstract assessment scores differed substantially. The residual yielded the second largest variance component (19.28% of the total variance). The residual contains the variability due to the interaction between reviewers, abstracts, persons within journals, and other unexplained systematic and unsystematic sources of error. Reviewer (r) yielded the third largest variance component (5.22% of the total variance), suggesting that these 16 journal reviewers differed slightly from one another in terms of the leniency of rating the 27 English abstracts. The remaining facets in the design were 0 or close to 0. It is important to note that the variance component for journals explained 0% of the total variance, suggesting that there was no difference in abstract writing performance that could be attributed to the three journals. Since the journal facet explained 0% of the total variance, it was not considered as a facet in the following person-by-reviewer (p × r) random effects G-study design, whose results are presented in Table 3.
As shown in Table 3, the object of measurement, person (p), explained the largest score variance (75.08% of the total variance), suggesting that the 27 Chinese EFL researchers differed considerably in their abstract writing skills. The residual yielded the second largest variance component (19.66% of the total variance). The residual contains the variability due to the interaction between reviewers and persons, and other unexplained systematic and unsystematic sources of error. The reviewer (r) facet was the third largest variance component (5.26% of the total variance), suggesting that the journal reviewers rated these 27 English abstracts slightly differently. Table 4 reports the random effects person-by-reviewer (p × R) D-study results. As shown in Table 4, the G-coefficient for the norm-referenced and the Phi-coefficient for the criterion-referenced interpretations for just one reviewer were .79 and .75, respectively, indicating fairly high reliability coefficients. The results of this study suggested that the G- and Phi-coefficients would increase to .88 and .86 for two reviewers, and .92 and .90 for three reviewers, respectively. Two or three independent reviewers are usually invited to review each scholarly journal article submission. The results showed that these experienced English journal reviewers reviewed the submissions consistently.
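The D-study projections follow mechanically from the Table 3 variance components: for n' reviewers, the G-coefficient is σ²p / (σ²p + σ²residual/n') and the Phi-coefficient is σ²p / (σ²p + (σ²reviewer + σ²residual)/n'). A minimal check using the reported percentages reproduces the coefficients in Table 4:

```python
# Variance components as % of total variance, from the p x r G-study (Table 3)
var_p, var_r, var_res = 75.08, 5.26, 19.66

for n in (1, 2, 3):  # projected number of reviewers
    g = var_p / (var_p + var_res / n)               # norm-referenced
    phi = var_p / (var_p + (var_r + var_res) / n)   # criterion-referenced
    print(f"{n} reviewer(s): G = {g:.2f}, Phi = {phi:.2f}")
# -> 1: G=0.79, Phi=0.75; 2: G=0.88, Phi=0.86; 3: G=0.92, Phi=0.90
```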
English Journal Reviewers' Identified Common Errors in the 27 Published Abstracts
In addition to assigning a holistic assessment score to each of the 27 English abstracts, the 16 English journal reviewers also provided written comments on each abstract to identify the common errors in Chinese EFL researchers' English abstracts. Table 5 is a summary of the Phase Two qualitative findings.
As shown in Table 5, several errors were identified in the following four aspects of an abstract: a) its accuracy, b) its non-evaluative nature, c) its coherence and readability, and d) its conciseness. First, missing information about the problem under investigation, the participants, the study method, the findings, the conclusion, and the implications leads to an abstract's inaccuracy. Among the 27 English abstracts, 20 (74.1%) were found to have such errors. Further, some abstracts did not include statistical significance levels for significant quantitative findings. For example, "As for the gender differences, females' writing, reading, translation and total scores are significantly higher than males'. Besides, females use each strategy significantly more frequently than males" (An excerpt from Abstract #12 published in Journal B).
Second, 13 out of 27 (48.1%) abstracts were evaluative. The authors evaluated rather than reported in these abstracts. Abstract #24 published in Journal C is an excellent example of such problematic abstracts (see Appendix E). Further, these abstracts read like personal opinions or subjective judgments that are not based on research findings. For example, "In linguistic studies we need to change our way of thinking, start from Chinese practice and study tradition with western languages and linguistic tradition as a reference, and seek for the true characteristics of the Chinese language for the establishment of Chinese linguistics as a contribution to the study of general linguistics" (An excerpt from Abstract #11 published in Journal B).
Third, 23 out of 27 (85.2%) abstracts were found to have coherence and readability problems associated with their English use. Specifically, these errors included misspellings and grammatical errors, misuse of verb tenses (e.g., using the past tense to describe conclusions and the present tense to describe how the research was conducted) and voices (e.g., using the passive voice rather than the active voice), and ELF (English as a lingua franca) usage. These errors impair the abstracts' coherence and readability. For example, "Their researches have drawn the conclusion that the mechanisms are themes, new information, temporality, semantic gravity and semantic density" (An excerpt from Abstract #23 published in Journal C); and "Stage 3 extends from about 2010 till many years from now. It is necessary for English departments of different universities to tailor their teaching programme to their current status and local and national needs so that high-level English majors can be trained" (An excerpt from Abstract #7 published in Journal A).
Finally, 18 out of 27 (66.7%) abstracts were found to have problems with their conciseness. Specifically, these abstracts were not brief, and their sentences were not maximally informative; some abstracts included too many important concepts, findings, or implications. Abstract #24 published in Journal C (see Appendix E) is an excellent example of this category of common errors.
The four aforementioned common errors identified in these English abstracts written by Chinese EFL researchers would lower their readability and international visibility. To help demonstrate some of these errors, the holistic scores and written comments by four selected experienced reviewers (A, B, C, and D) for Abstract #24 published in Journal C (see Appendix E) are included in Table 6. As shown in Table 6, the four reviewers assessed this abstract very similarly, both in the quantitative assessment scores they assigned and in the qualitative assessment comments they provided. These results suggested that this abstract failed to meet the common criteria for a good abstract and that, therefore, further revision and editing were needed.
Phase Three: English Journal Reviewers' Tips for Chinese EFL Researchers to Improve Their Abstracts

Phase Three of this study involved online semi-structured interviews with eight purposefully selected English journal reviewers from those who participated in Phase Two of this study. The main themes of the findings are as follows.
Seven out of eight reviewers interviewed agreed that the overall quality of these 27 English abstracts was poor. The common errors were reported in the Phase Two qualitative results section (see above). They offered improvement tips for both these journals and the Chinese EFL researchers. As shown in Table 7, the two crucial improvement tips for these journals were to a) establish clear abstract instructions and guidelines for authors to follow in writing English abstracts; and b) implement strict evaluation procedures for English abstracts. Further, their improvement tips for Chinese EFL researchers are summarized as follows. First, all reviewers suggested that Chinese EFL researchers familiarize themselves with English abstracts published in English journals. They can select several published English articles related to their research interests to read first, then learn the basic moves, and finally get familiar with the standards for English abstracts. As commented by one reviewer, this process could help Chinese EFL researchers "learn the basic moves in the abstract," and, by another reviewer, "get familiar with the common criteria for acceptable English abstract." Second, six out of eight reviewers recommended that Chinese EFL researchers ensure that a good English abstract contains all necessary information. It must reflect the problem under investigation and include the participants, research procedures, results, conclusions, and the implications for practice and policymaking. "These essential elements of an English abstract guarantee that the abstract is complete and informative," as commented by one reviewer.
Third, five out of eight reviewers mentioned that acceptable English abstracts must be grammatically correct and structurally coherent. Although it is not easy for Chinese EFL researchers to write high-quality English abstracts that meet these criteria, they need to keep this rule in mind while writing English abstracts. One suggested, "they [Chinese EFL researchers] could self-evaluate the grammatical accuracy and structural coherence of their completed abstracts before submission." Finally, three reviewers' specific tips for a successful English abstract writing procedure can be summarized as "learning by reading, learning by doing, and learning by reflection" strategies. One reviewer described this writing procedure as follows: "they [Chinese EFL researchers] need to read the author instructions and guidelines established by the [English] journal; they must also practice and reflect since English is not their native language." One senior reviewer further explained that through extensive reading, "the scheme of writing a good abstract may be developed and improved ..."; "having more practice on writing various abstracts on different topics, then, the candidates [Chinese EFL researchers] will become more competent and skillful ..."; and they will be able to write good English abstracts by reflecting on the criteria for assessing the quality of abstracts.
Phase Four: Chinese EFL Researchers' Critical Reflections on English Journal Reviewers' Assessments

Phase Four of this study involved online semi-structured interviews with eight EFL researchers purposefully selected from those who participated in Phase One of this study about their critical reflections on the Phases Two and Three results. The major findings are summarized in the following section. Overall, all eight participants were satisfied with the reliability of the English journal reviewers' quantitative assessment of the 27 published English abstracts written by Chinese EFL researchers. They expressed their agreement with the reviewers regarding the common errors identified in these published English abstracts; they also found the reviewers' tips for writing high-quality English abstracts constructive and valuable. More importantly, they offered the following three critical comments on the Phases Two and Three results. First, six out of eight Chinese EFL researchers stated that the 27 English abstracts were published in the three top-tier language education journals in China; the generally low quality of these abstracts could have prevented the researchers from communicating their research findings within international academia. "Who should be responsible for the quality of a published article abstract, the author, or the journal editor, or both?" one participant asked.
Second, five out of eight Chinese EFL researchers suggested that international English journal reviewers support EFL researchers across the globe. When the reviewers are reading their submitted abstracts, they may consider their EFL background and unfamiliarity with the international standards for good abstracts. One EFL researcher commented that "... even [though] an abstract plays an important role, the study design, data collection and analysis, and reporting of the findings should all be considered when the reviewers are making their acceptance decision." One senior EFL researcher further explained that "English journal reviewers may be tolerant with a Chinese EFL researcher's unacceptable English abstract and give him or her another chance to revise, edit, or even rewrite the abstract if the article is generally acceptable." Finally, seven out of eight Chinese EFL researchers suggested that they must equip themselves with the skills to write internationally acceptable article abstracts if they want their research articles published in English journals. "Each English journal has specific guidelines, and they [Chinese EFL researchers] must closely follow the guidelines while composing an abstract," and "they [Chinese EFL researchers] should also read the instructions for authors when they are preparing their abstracts for submission," as commented by two Chinese EFL researchers, respectively.
Discussion and Conclusions
Phase One of this study investigated Chinese EFL researchers' reported challenges in writing English abstracts. They experienced challenges in using effective linguistic features and correct rhetorical moves; they also found it difficult to organize their English abstracts in a logical and meaningful way. These challenges were similar to those reported in the literature (Hosseingholipour et al., 2021; Klimova, 2013; Linder, 2014; Lorés-Sanz, 2016). In addition, direct translation from Chinese into English caused them challenges in writing English abstracts. The two languages differ in the genre conventions of abstracts (Hu & Cao, 2011; Ruan, 2018). Therefore, it is not wise for Chinese researchers to rely on translation when writing English abstracts.
Phases Two and Three of this study invited 16 English journal reviewers to assess the 27 published English abstracts written by Chinese EFL researchers both quantitatively and qualitatively. The results suggested that the 27 English abstracts were generally poor in quality across the three journals; the reviewers' holistic scores assigned to these abstracts were fairly consistent, resulting in acceptable reliability coefficients. Although few studies have employed G-theory to examine journal reviewers' assessment reliability of the quality of English abstracts written by EFL authors, the G-theory results of this study could be compared with such results reported in EFL writing assessment studies (Huang & Foote, 2010; Liu & Huang, 2020; Zhao & Huang, 2020). Unlike the assessment of EFL essays, the quality assessment of these journal article abstracts was much more consistent and reliable. It is believed that these experienced English journal reviewers followed the scoring criteria closely while evaluating these abstracts (Li & Huang, 2022). The investigation of the common errors and the improvement tips became the foci of Phases Two and Three of this study. Chinese EFL researchers face difficulties in four aspects of an abstract, including its accuracy, non-evaluative nature, coherence and readability, and conciseness. These errors were similar to what other researchers had reported (Hosseingholipour et al., 2021; Klimova, 2013; Linder, 2014; Lorés-Sanz, 2016). It is essential to mention that these errors are associated with the language and format differences between abstracts published in English and those published in Chinese journals (Hu & Cao, 2011; Ruan, 2018; Ye & Wang, 2013). Many abstracts were directly translated from the Chinese versions without considering the criteria for a good English abstract. These errors would surely lead to lower readability and visibility of the abstracts in international academia (Friginal & Mustafa, 2017; Hosseingholipour et al., 2021; Hyland, 2002; Lorés-Sanz, 2016).
Furthermore, the English reviewers' tips for improvement were highly constructive and valuable for the Chinese EFL researchers. For example, they should familiarize themselves with the criteria of a good English abstract; they also need to learn the conventional moves in writing a good abstract; and they can learn to write good English abstracts by reading, doing, and reflecting (APA, 2020; Feak & Swales, 2011; Swales, 2004; Swales & Feak, 2004). These tips were similar to what researchers in the humanities and social sciences have offered for teaching the writing of a good abstract (Friginal & Mustafa, 2017; Hyland, 2003, 2007; Klimova, 2013; Ruan, 2018; Stotesbury, 2003; Swales & Feak, 2009).
Finally, Chinese EFL researchers critically reflected on the English journal reviewers' assessment outcomes. These reflections are helpful for both Chinese EFL researchers and English journal reviewers. On the one hand, Chinese EFL researchers are expected to follow the standards established by the English journals if they want to have their research published in them; on the other hand, the English reviewers are encouraged to support Chinese EFL researchers and help bring their research articles into the English journals.
The present study was limited in the following three ways. First, the data collection of this study had been completed before the APA (2020) Publication Manual, Seventh Edition, was released; the assessment criteria for the 27 abstracts were therefore adopted from the APA (2010) Publication Manual, Sixth Edition, which differ slightly from those in the Seventh Edition (APA, 2020), and this may have limited the interpretation of the results. Second, this study only examined 24 Chinese EFL researchers' challenges in writing English abstracts, which may limit the generalization of the findings to other EFL researchers in China. Finally, this study involved only three journals in one discipline, foreign language education in China, which may limit the generalization of the findings to journals in other disciplines. Therefore, it is suggested that the results of this study be interpreted with caution.
In light of these limitations, the following four conclusions were reached. First, the English journal reviewers' assessment of the quality of the 27 English abstracts written by Chinese EFL researchers was consistent and reliable, and these researchers' English abstracts were generally poor. Given the critical role an English abstract plays in the international dissemination of a research study, Chinese researchers should be aware that their English abstracts need to maintain high levels of accuracy, clarity, conciseness, coherence, and readability (APA, 2010).
Second, in the international dissemination of research, a journal should provide clear abstract instructions and guidelines for its authors. Without such guiding rules, the English abstracts written by Chinese researchers will suffer from low international visibility (Hosseingholipour et al., 2021; Hyland, 2002; Klimova, 2013; Linder, 2014). The errors made by the Chinese EFL researchers in their English abstract writing are commonly found in the EFL field (Friginal & Mustafa, 2017). These errors are caused by language and format differences between English and Chinese journals (Hu & Cao, 2011; Ruan, 2018; Ye & Wang, 2013). Third, there are solutions to the identified problems. The English journal reviewers' tips for improvement work for Chinese EFL researchers (Ruan, 2018). They could follow the suggestions and make every effort to improve their abstract writing in English (Hosseingholipour et al., 2021; Hyland, 2002; Linder, 2014).
Finally, it is believed that the English journal reviewers' awareness and understanding of Chinese EFL researchers' challenges in writing English abstracts, together with their kind support and suggestions, will make Chinese EFL researchers increasingly proficient in writing English abstracts (Huang & Foote, 2010). Chinese EFL researchers belong to the international research community, and the global research community also needs Chinese EFL researchers (Huang et al., 2021).
The results of this study provide important implications for Chinese EFL researchers, including graduate students. To compose high-quality English abstracts, they are encouraged to follow the English journals' guidelines closely and the strategies recommended by the English reviewers, so that their English abstracts are of high quality and their research studies become more internationally visible.
The study addressed the following research questions:
a) What are Chinese EFL researchers' major challenges in writing journal article abstracts in English?
b) What is the quality of English journal reviewers' quantitative assessment of the published English abstracts written by Chinese EFL researchers?
c) What are the common errors in Chinese EFL researchers' English abstracts identified by the English journal reviewers?
d) What are the English journal reviewers' tips for Chinese EFL researchers to improve their abstract writing in English?
e) What are Chinese EFL researchers' critical reflections on the English journal reviewers' quantitative and qualitative assessments of the published English abstracts written by Chinese EFL researchers?
Table 1. Descriptive Statistical Results of Holistic Assessment Scores by Reviewers.
Table 2. Results of Variance Components for Mixed Effects (p:j) × r G-Study.
Table 3. Results of Variance Components for Random Effects p × r G-Study.
Table 4. A Summary of G- and Phi-Coefficients.
Table 5. A Summary of Common Errors Identified in the English Abstracts.
Table 6. Assessment Summary of Abstract #24 by Four Selected English Journal Reviewers.
Table 7. A Summary of Phase Three Results.
Active Stromal Cell–Derived Factor 1α and Endothelial Progenitor Cells are Equally Increased by Alogliptin in Good and Poor Diabetes Control
Background: It is postulated that the ability of dipeptidyl peptidase-4 inhibitors (DPP-4-i) to increase circulating endothelial progenitor cells (EPCs) may be at least partly mediated by active stromal cell–derived factor 1α (SDF-1α) (a pivotal mediator of stem cell mobilization from the bone marrow). As other DPP-4-i were demonstrated to increase EPC concentrations, in this study, we sought to investigate the ability of the DPP-4-i alogliptin in modifying EPCs and SDF-1α, in patients with good and poor diabetes control. Methods: Two groups of diabetic patients on metformin were divided by hemoglobin A1c (HbA1c): Group A—those with HbA1c ≤6.5% (28 patients) and Group B—those with HbA1c 7.5% to 8.5% (31 patients). Both groups received alogliptin 25 mg/daily for 4 months. At baseline and 4 months later, clinical, laboratory parameters, EPCs, and active SDF-1α were determined. Results: After 4-month treatment with alogliptin, either Group A or Group B showed reduced HbA1c levels and concomitant similar increase in EPCs and active SDF-1α. Conclusions: Alogliptin showed significant benefits in increasing EPCs and active SDF-1α either in good or poor diabetes control. The study demonstrated that similar to other DPP-4-i, also alogliptin is able to increase EPC concentrations, suggesting the existence of a class effect mediated by SDF-1α. The extent of increase in EPCs is independent from baseline diabetes control.
Introduction
Endothelial progenitor cells (EPCs) are a heterogeneous population of cells in different states of maturation, originating from the bone marrow. Since their identification, many studies have investigated their self-renewal capability, influence on reparative vascular mechanisms, and neoangiogenesis. [1][2][3] Although EPC isolation and characterization are still debated (which cell phenotype best identifies the "true" circulating EPC remains unsolved), lower levels of EPCs have been detected in the presence of smoking habit, diabetes, hypertension, cardiovascular (CV) disease, and dyslipidemia. 4,5 Of note, increased levels of EPCs were found to be associated with a reduced risk of death from CV causes, a first major CV event, revascularization, and hospitalization. 6 A recent meta-analysis aimed to evaluate the prognostic role of measures of EPCs on CV outcomes and death. The authors selected 21 studies, for a total of 4155 patients with acute coronary syndrome, acute myocardial infarction, stroke, elective percutaneous intervention, elective coronary angiography for suspected coronary artery disease, end-stage renal disease, chronic heart failure, or aortic stenosis. The results showed that low vs high levels of EPCs (CD34+, CD133+) predicted CV events, restenosis after endovascular intervention, CV death, and all-cause mortality. 7 As diabetes is considered a coronary heart disease risk equivalent, a number of drugs have been challenged to see whether their use was associated with an increase in EPCs. 8 Pioglitazone, for example, increased early and late outgrowth EPC viability in patients with impaired glucose tolerance. 9 A similar benefit was also demonstrated by a 4-month treatment with add-on insulin. 10 Another study from Taiwan challenged the effects of 2 statins (pitavastatin and atorvastatin) in hypercholesterolemic patients with type 2 diabetes mellitus: although both statins similarly reduced plasma lipids, only pitavastatin increased plasma vascular endothelial growth factor receptor (VEGF) level and circulating EPCs in such high-risk patients. 11 Then, antihypertensive drugs such as aliskiren and hydrochlorothiazide were also investigated in relation to EPCs. The authors observed not only that aliskiren had a favorable effect on endothelial function and EPCs but also that these effects were independent of blood pressure lowering, as they were not observed after the achievement of similar blood pressure values with hydrochlorothiazide. 12 Among antidiabetic drugs, dipeptidyl peptidase-4 inhibitors (DPP-4-i) look particularly interesting because, beyond their glucose-lowering effect, studies suggest that they may have a positive role for the CV system and for the induction of mobilization of stem cells. 13,14 Stromal cell-derived factor 1α (SDF-1α), a major regulator of progenitor cell kinetics, is a natural substrate of DPP-4, which inactivates it by removing 2 residues at the N-terminus.
In a small open-label study, a 4-week therapy with the DPP-4-i sitagliptin, in addition to metformin and/or secretagogues, increased plasma SDF-1α concentrations and circulating EPCs. 15 The most straightforward interpretation offered by the authors was that DPP-4 inhibition raised SDF-1α concentrations, which mobilized EPCs from the bone marrow. An alternative explanation may be that glucose lowering per se improved the bioavailability of EPCs; however, the short duration of the trial and the loss of correlation between plasma glucose and EPC levels at study end seemed to argue against this hypothesis.
As previous studies demonstrated a significant influence of DPP-4-i on EPCs, this study was undertaken to investigate whether the DPP-4-i alogliptin is also able to increase EPC and SDF-1α concentrations and whether such an effect differs between good and poor diabetes control.
Patients and Methods

Subjects
Individuals with type 2 diabetes were recruited in the outpatient clinic of the Division of Endocrinology. Eligible subjects were diabetic patients on metformin monotherapy at a dose between 1.5 and 2.5 g/d, having HbA1c <6.5% (Group A) or 7.5% to 8.5% (Group B). Exclusion criteria included any cerebrovascular event, any revascularization procedure, clinically relevant peripheral artery disease, diabetic foot, nephropathy, retinopathy, and clinically relevant neuropathy.
Study design
Eligible subjects in both groups were invited to receive alogliptin 25 mg/daily for 4 months. At baseline, medical history, current therapies, personal history of diabetes and CV disease, and smoking and drinking habits were recorded. At baseline and 4 months later, clinical parameters (body mass index [BMI] and systolic and diastolic blood pressure) were registered, and blood was drawn to determine fasting plasma glucose, HbA1c, total cholesterol, triglycerides, high-density lipoprotein cholesterol, creatinine, aspartate aminotransferase, alanine aminotransferase, SDF-1α, and the EPC count. All subjects provided written informed consent prior to study entry. The study was approved by the Institutional Review Board.
Quantification of circulating EPCs by flow cytometry
To identify and quantify EPCs, we used a standardized protocol: the modified International Society for Hematotherapy and Graft Engineering (ISHAGE) sequential gating strategy, as proposed by Schmidt-Lucke et al. 16 Briefly, 1 mL of whole blood was collected from a forearm vein into EDTA tubes, transported to the cytometry laboratory, and processed within 1 to 2 hours of collection. Hence, 150 μL of whole blood was incubated for 30 minutes at 4°C in the dark with the following combination of antihuman monoclonal antibodies: 10 μL of anti-CD133 conjugated with allophycocyanin (APC) (Miltenyi Biotec, Bergisch Gladbach, Germany), 5 μL of anti-CD45 conjugated with APC-H7 (Becton Dickinson, Franklin Lakes, NJ, USA), 10 μL of anti-KDR (also known as type 2 VEGF receptor) conjugated with phycoerythrin (Sigma, Milan, Italy), and 10 μL of anti-CD34 conjugated with fluorescein isothiocyanate (Becton Dickinson). Red blood cell lysis was performed using FACS Lysing Solution (BD Biosciences, San Jose, CA, USA) diluted 1:10 (vol/vol) in distilled water, and cells were washed with phosphate-buffered saline before flow cytometry acquisition. Data acquisition was performed with a high-performance flow cytometer (FACSCanto II; BD Biosciences). According to the standardized protocol that we used, human circulating EPCs are identified by a minimal antigenic profile that includes at least one marker of stemness/immaturity (CD34 and/or CD133) plus at least one marker of endothelial commitment (KDR). CD45 staining was also performed to exclude cells, such as macrophages, that express "endothelial-like" proteins. 17 The same operator, who was blind to the clinical status of the patients, performed all of the cytometric analyses throughout the study.
Quantification of circulating active SDF-1α
Active SDF-1α was quantified with a custom assay based on the R&D Quantikine kit (R&D Systems, Inc., Minneapolis, MN, USA), following the manufacturer's instructions, except that for detection we used an antibody raised against full-length human SDF-1α (22-89) and specific for the N-terminal intact isoform (clone K15C; Chemicon), conjugated with horseradish peroxidase using a dedicated kit (ab102890; Abcam). The horseradish peroxidase-labeled antibody was used at a final dilution of 1:20 000, based on a titration curve.
Statistical analysis
Comparisons of parameters at baseline between treatment groups were performed using the t test for normally distributed data, the Mann-Whitney test for non-normally distributed variables, and the χ² test for categorical variables. Linear regression analysis was performed using EPCs and SDF-1α (after log-transformation) as the dependent variables, after adjusting for age, blood pressure, BMI, lipid levels, and patient group as categorical variables.
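As a rough illustration, the same battery of tests could be run in Python with SciPy and statsmodels. The data frame, file name, and column names below are hypothetical placeholders, not the authors' actual analysis script.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("baseline.csv")          # hypothetical per-patient data
a, b = df[df.group == "A"], df[df.group == "B"]

# Baseline group comparisons
print(stats.ttest_ind(a.age, b.age))      # normally distributed variable
print(stats.mannwhitneyu(a.triglycerides, b.triglycerides))  # non-normal
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(df.group, df.smoker))
print(f"chi-square p = {p:.3f}")          # categorical variable

# Linear regression with log-transformed outcome, adjusted for covariates
df["log_epc"] = np.log(df.epc)
fit = smf.ols("log_epc ~ age + sbp + bmi + ldl + C(group)", data=df).fit()
print(fit.summary())
```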
Intragroup differences within variables before and after treatment during the study were analyzed using a general linear model for repeated measures. Values are provided as mean ± SD.
Results
From January 2013 to December 2014, 72 subjects satisfied the inclusion criteria. Twenty-eight of the 41 patients with HbA1c <6.5% and all 31 patients with HbA1c 7.5% to 8.5% agreed to participate. At the end of the study, data were available for 28 patients in Group A and 31 patients in Group B. At baseline, Group A and Group B were similar in age, sex, smoking habit, BMI, and duration of diabetes. Liver and kidney function were also similar, as were concomitant drugs (antihypertensive, lipid-lowering, antiplatelet, metformin). HbA1c, blood glucose, and triglycerides were significantly higher in Group B vs Group A, with similar cholesterol levels (Table 1). After 4 months, we observed the following: (1) similarly reduced HbA1c (by 9.6% in Group A; by 10% in Group B) and (2) similarly increased EPCs (by 52% CD45−CD133+KDR+ and by 47% CD45−CD34+KDR+ in Group A; by 62% CD45−CD133+KDR+ and by 47% CD45−CD34+KDR+ in Group B) and SDF-1α concentrations (by 95% in Group A; by 106% in Group B). The extent of the EPC or SDF-1α changes was not related to HbA1c variations (Table 2).
Discussion
Our findings show that the 4-month treatment with alogliptin induced a significant increase in active SDF-1α. This effect was accompanied by a similar increase in EPCs and a similar reduction in HbA 1c both in those with good and poor diabetes control. It still has to be elucidated whether increased EPC concentration is attributable to improved glycemic control, to upregulated SDF-1α (as a specific DPP-4-i mechanism of action), or both. The aim of the study was to investigate whether alogliptin, similar to other DPP-4-i, was able to increase EPC concentration. Indeed, the study was not designed to ascertain the relative weight of improved blood glucose control and SDF-1α upregulation.
Several years ago, Fadini et al 17 in a nonrandomized clinical trial comparing 4-week sitagliptin vs no additional treatment, on top of metformin and/or secretagogues, observed an increase in circulating EPCs, accompanied by a concomitant upregulation of SDF-1α. The same group, in a randomized, crossover, placebo-controlled trial, assessed the effect of another DPP-4-i, linagliptin, on EPCs in type 2 diabetic patients with or without chronic kidney disease. 18 The study demonstrated that linagliptin acutely (4 days) was able to increase EPCs and anti-inflammatory cells and suggested that a direct effect of DPP-4 inhibition may be important to lower vascular risk in diabetes, especially in the presence of chronic kidney disease. Two other papers challenged the influence of DPP-4-i on EPC count. In one of them, sitagliptin, more than glimepiride, was associated with a significant increase in EPCs (phenotypically characterized as CD34+/CXCR-4+ cells) in 30 patients with type 2 diabetes in poor glucose control with metformin and/or sulfonylurea. 19 However, as sitagliptin obtained better glucose control than glimepiride, the study did not clarify whether the obtained results were mainly due to a DPP-4-i class effect or to a glucose-lowering effect. In the other study, Dei Cas et al 20 tried to address the issue of whether the positive increase in EPCs is a benefit induced by DPP-4-i per se or is secondary to improved glucose control. The authors compared the effect of vildagliptin vs glimepiride on top of metformin; once similar HbA1c levels were obtained, vildagliptin but not glimepiride exerted a significant increase in EPCs at 12 months of follow-up. This finding suggests that SDF-1α, as a major regulator of progenitor cell kinetics, more than improved glycemic control, may play a pivotal role in EPC circulating levels. The same study strongly suggests a long-term beneficial effect of this therapy on the endothelial repair process and its counterbalancing role against endothelial injury. In another mid-term study (3 months), saxagliptin and metformin equally improved the number of EPCs and flow-mediated dilation in newly diagnosed type 2 diabetic patients. 21 Although recent evidence proved that in patients with type 2 diabetes a reduced baseline level of circulating CD34+ stem cells predicts adverse CV outcomes up to 6 years later, it is not known whether a reduction in blood stem cells causes CV events per se or whether it represents a bystander of inflammation, hematopoietic expansion, and bone marrow abnormalities, which in turn promote atherosclerosis. 22 Of note, intervention trials have failed to demonstrate any additional protective CV effect of this class of drugs compared with active comparators. [23][24][25] These trials were conducted in diabetic subjects at elevated CV risk: in the SAVOR-TIMI study (saxagliptin), about 80% of patients had a history of CV disease; in the EXAMINE study (alogliptin), diabetic patients had recent acute coronary syndrome; and in the TECOS study (sitagliptin), patients had established CV disease. It still has to be elucidated whether DPP-4-i may exert a preventive role in the absence of CV risk factors. Indeed, various clinical trials have shown that cardiac function improved in patients with acute myocardial infarction who underwent bone marrow-derived stem cell therapy. 26,27 These findings highlight the vasculoprotective effects of the EPCs, suggesting that in diabetic patients without established CV disease, a microvascular improvement induced by DPP-4-i may in turn prevent future macrovascular adverse events.
Although the present data confirm that DPP-4-i improved glycemic control and influenced EPC concentration, also by upregulation of SDF-1α, we did not find an inverse correlation between HbA1c and EPCs or SDF-1α. Probably, confounders such as hypertension/antihypertensive drugs, dyslipidemia/lipid-lowering drugs, use of antiplatelet drugs, and smoking may have an influence on such parameters.
In conclusion, after 4-month treatment with alogliptin, both patients with good HbA1c and those with elevated HbA1c at baseline showed reduced HbA1c and a concomitant, similar increase in EPCs and active SDF-1α. The extent of the increase in EPCs was independent of baseline diabetes control. These results are in accordance with previous studies investigating the influence of DPP-4-i on EPCs and confirm that, also for alogliptin, the increase in EPC count is mediated by SDF-1α. Such an effect is shared by all the DPP-4-i and has to be considered a class effect.
Glucagon‐like peptide‐1 elicits vasodilation in adipose tissue and skeletal muscle in healthy men
Abstract In healthy subjects, we recently demonstrated that during acute administration of GLP‐1, cardiac output increased significantly, whereas renal blood flow remained constant. We therefore hypothesize that GLP‐1 induces vasodilation in other organs, for example, adipose tissue, skeletal muscle, and/or splanchnic tissues. Nine healthy men were examined twice in random order during a 2‐hour infusion of either GLP‐1 (1.5 pmol kg−1 min−1) or saline. Cardiac output was continuously estimated noninvasively concomitantly with measurement of intra‐arterial blood pressure. Subcutaneous, abdominal adipose tissue blood flow (ATBF) was measured by the 133Xenon clearance technique. Leg and splanchnic blood flow were measured by Fick's Principle, using indocyanine green as indicator. In the GLP‐1 study, cardiac output increased significantly together with a significant increase in arterial pulse pressure and heart rate compared with the saline study. Subcutaneous, abdominal ATBF and leg blood flow increased significantly during the GLP‐1 infusion compared with saline, whereas splanchnic blood flow response did not differ between the studies. We conclude that in healthy subjects, GLP‐1 increases cardiac output acutely due to a GLP‐1‐induced vasodilation in adipose tissue and skeletal muscle together with an increase in cardiac work.
Introduction
We recently reported substantial acute effects of physiologically increased plasma levels of GLP-1 on cardiovascular hemodynamics in humans, using continuous (invasive and noninvasive) measurements (Asmar et al. 2015, 2016a). In healthy individuals (Asmar et al. 2015) and patients with type 2 diabetes (Asmar et al. 2016a), we demonstrated a GLP-1-induced increase in heart rate, possibly due to direct effects of GLP-1 on the heart (Pyke et al. 2014). Furthermore, GLP-1 increased cardiac output in healthy subjects (~18%, 1.2 ± 0.1 L/min) but not in patients with type 2 diabetes. Despite a significant renal clearance of GLP-1, exceeding glomerular filtration (~55%), renal blood flow and glomerular filtration rate remained unchanged. The increase in cardiac output was proportionally greater than the increase in mean arterial pressure (~2%, 2.9 ± 1.4 mmHg), suggesting a vasodilation in one or more vascular beds other than the renal vascular bed.
Recently (Koska et al. 2015), it was demonstrated that GLP-1 receptors are functionally expressed in the endothelium and that GLP-1 receptor engagement improves endothelial function in patients with type 2 diabetes. The improved endothelial function probably occurs via stimulation of endothelial AMP-activated protein kinase pathway activity. This was demonstrated in isolated human adipose tissue arterioles, resulting in greater eNOS activity and thereby inducing vasodilation. However, GLP-1-mediated vasodilation may also occur by mechanisms independent of the endothelium. Using a validated monoclonal antibody for immunohistochemistry, Pyke et al. (2014) detected vascular GLP-1 receptors exclusively in the smooth muscle cells of arteries and arterioles. Engagement of the GLP-1 receptor in vascular smooth muscle cells leads to cAMP formation and thereby activation of downstream pathways involving protein kinase A (PKA) and exchange protein directly activated by cAMP (EPAC) (Drucker 2006). Both PKA and EPAC induce intracellular Ca2+ accumulation, initiating vascular relaxation via signaling pathways that are not fully clarified (Gloerich and Bos 2010; Leech et al. 2010).
Using the flow-mediated dilation technique, Basu et al. (2007) demonstrated in healthy subjects that acute infusion of GLP-1 increases forearm blood flow in response to acetylcholine, whereas Nystrom et al. (2004) demonstrated that acute administration of GLP-1 increased forearm blood flow in type 2 diabetes patients with stable coronary artery disease, but not in healthy subjects. Using a real-time, contrast-enhanced ultrasound technique, Sjoberg et al. (2014) demonstrated microvascular recruitment in the vastus lateralis muscle of healthy subjects during acute administration of GLP-1. Aside from this, it is not fully clarified whether GLP-1 modulates vascular tone in vivo in other beds, for example, adipose tissue and/or the splanchnic tissues. Therefore, we designed the present randomized, placebo-controlled, single-blinded experiment to elucidate whether acute administration of GLP-1, under fixed sodium intake, induces vasodilation in adipose tissue, skeletal muscle, and/or splanchnic tissues in healthy subjects.
Subjects
Baseline characteristics are shown in Table 1. Nine lean young male subjects of Caucasian origin participated in the study, which involved two experiments performed in random order, separated by about 4 weeks. All subjects were healthy, and none took medication at the time of the study. Body composition was determined by dual-energy X-ray absorptiometry (DEXA) scanning (Lunar iDXA; GE Healthcare, Brøndby, Denmark) (Table 1). Consent to participate was obtained after the subjects had read a description of the experimental protocol, which was approved by the Scientific Ethics Committee of the Capital Region of Copenhagen (H-1-2014-089).
One subject developed syncope during the catheterization procedures. Thus, only the 133Xe washout data from this subject were included in further analyses.
Protocol
For 4 days before each experiment, all subjects consumed a controlled mixed diet (2822 kcal per day; 16% protein, 55% carbohydrate, 29% fat). The food was handed out frozen, and the basal sodium chloride content of the diet, measured at Eurofins Stein's Laboratory in Denmark, was 55-75 mmol per day. Sodium chloride was added to the diet in order to standardize the daily intake at 2 mmol sodium chloride per kg body weight per day (Asmar et al. 2015, 2016a). 24-h urine was collected on the last day, and electrolyte, albumin, and glucose concentrations were determined. Water intake was ad libitum, and strenuous excess physical activity was not allowed. Subjects fasted for 12 h before the beginning of the experiment. The experimental timeline is shown in Figure 1. After emptying the bladder, confirmed by ultrasound, subjects remained supine throughout the experiments. During the experiments, bladder emptying was allowed with subjects remaining in the supine position.
Blood flow measurements
Subcutaneous, abdominal ATBF was calculated from the washout rate constant of 133Xenon. This technique has previously been validated in our laboratory (Simonsen et al. 2003). About 1.0 MBq of gaseous 133Xenon, mixed in about 0.1 mL of atmospheric air, was injected into the paraumbilical area of the subcutaneous adipose tissue. The washout rate of 133Xenon was measured continuously by a scintillation counter system (Oakfield Instruments, Oxford, UK) strapped to the skin surface above the 133Xenon depot. Measurements obtained during periods of 20 min throughout the infusions were used for the analyses.
Leg and splanchnic blood flow were measured via Fick's Principle, using ICG as the indicator, as previously described (Enevoldsen et al. 2004; Hovind et al. 2010). An intravenous bolus injection of ICG (1 mg) in 10 mL of 0.9% NaCl was administered, followed by a continuous intra-arterial infusion (167 µg min−1) in 0.9% NaCl (70 mL h−1). Steady-state arterial concentrations of ICG were obtained after ~60 min. After at least 60 min of ICG infusion, and after a stable monoexponential washout of 133Xenon had been registered, two baseline blood sample pairs were drawn, followed by the start of a 2-h infusion of either GLP-1 (1.5 pmol kg−1 min−1) or saline (0.9% NaCl). The solutions were prepared freshly. The subjects were blinded with respect to the contents.
Central hemodynamics
Blood pressure and heart rate were monitored invasively (ADInstruments, Oxford, UK) throughout the experiments. Estimated cardiac output was recorded continuously and noninvasively using Finapres (Finapres Medical Systems BV, Amsterdam, The Netherlands) (Imholz et al. 1998). The estimation of cardiac output via pulse contour analysis is an indirect method based on the pulsatile unloading of the finger arterial walls, using an inflatable finger cuff with a built-in photo-electric plethysmograph (Langewouters et al. 1984, 1985). To achieve the highest accuracy and precision regarding absolute stroke volume and cardiac output levels, a calibration of the Finapres against a direct method such as Fick's Principle (e.g., indicator-dilution) is required. Such calibration is, however, not necessary to observe relative changes in cardiac output due to the GLP-1 infusion (Stok et al. 1993; Bogert and van Lieshout 2005). Measurements obtained during periods of ~5 min before and after blood sampling were used for the analyses.
Blood and urine analyses
Samples of blood were drawn simultaneously from the radial artery and the right-sided femoral and hepatic veins every 30 min from time −30 min until termination of the experiments (Fig. 1). All arterial as well as venous blood samples were analyzed for GLP-1 and ICG. Glucose and insulin were analyzed only in arterial blood samples. The amount of collected blood was substituted with a similar amount of isotonic saline during the experiments. Plasma samples were assayed for total GLP-1 immunoreactivity and for intact GLP-1, as previously described (Orskov et al. 1994; Wewer Albrechtsen et al. 2015). Concentrations of the primary metabolite GLP-1 9-36 amide were calculated by subtracting the concentrations of intact GLP-1 from the total concentrations (Meier et al. 2004).
Plasma insulin levels were measured using a commercial enzyme immunoassay kit (Insulin Human ELISA EIA-2935, AH Diagnostics, Aarhus, Denmark).
Blood glucose concentrations and hematocrit were measured using an automated benchtop blood analyzer system (ABL 700 series, Radiometer Medical Aps, Brønshøj, Denmark).
Plasma ICG concentrations were determined by spectrophotometry at 805 and 904 nm in duplicate, as previously described (Enevoldsen et al. 2005).
Urinary electrolyte concentrations were measured by atomic absorption (atomic absorption spectrophotometer model 2380; PerkinElmer, Norwalk, Connecticut). Urinary pH was measured using a XC161 combination pH electrode (Radiometer Medical Aps, Brønshøj, Denmark). Urinary albumin and glucose concentrations were measured using an enzymatic method (Cobas Integra® 400; Roche Diagnostics, Indianapolis, IN).
Calculations
The subcutaneous, abdominal ATBF was calculated from the mean 133Xenon washout rate constant determined over 20-min periods. Thus, ATBF was calculated according to the equation ATBF = −k × λ × 100. A tissue/blood partition coefficient (λ) for xenon of 10 mL g−1 was used (Bulow et al. 1987b).
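In code, the calculation reduces to fitting the washout rate constant k as the slope of log(counts) versus time over the registration period and scaling by the partition coefficient. The sketch below uses hypothetical count data with a plausible washout rate:

```python
import numpy as np

LAMBDA_XE = 10.0  # tissue/blood partition coefficient for xenon, mL g^-1

def atbf_from_washout(t_min: np.ndarray, counts: np.ndarray) -> float:
    """ATBF (mL min^-1 100 g^-1) from a monoexponential 133Xe washout.

    counts(t) = counts(0) * exp(k * t) with k < 0, so k is the slope
    of log(counts) against time; ATBF = -k * lambda * 100.
    """
    k = np.polyfit(t_min, np.log(counts), 1)[0]   # washout rate constant
    return -k * LAMBDA_XE * 100.0

# Hypothetical 20-min registration decaying at ~0.3%/min
t = np.arange(0.0, 21.0)
counts = 1e4 * np.exp(-0.003 * t)
print(f"ATBF = {atbf_from_washout(t, counts):.1f} mL min^-1 100 g^-1")  # ~3.0
```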
Leg plasma flow was calculated as ICG infusion rate/(ICGfemoral venous − ICGarterial) at steady state, and leg blood flow was subsequently calculated from simultaneous measurements of the hematocrit.
Splanchnic plasma flow was calculated as ICG infusion rate/(ICGarterial − ICGhepatic venous) at steady state, and splanchnic blood flow was calculated on the basis of simultaneous measurements of the hematocrit.
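Both regional flows use the same indicator-dilution arithmetic, differing only in which arteriovenous ICG difference enters the denominator; blood flow is then obtained from plasma flow via the hematocrit. A minimal sketch with hypothetical steady-state concentrations:

```python
def blood_flow_ml_min(icg_rate_ug_min: float, delta_icg_ug_ml: float,
                      hematocrit: float) -> float:
    """Fick's Principle: plasma flow = infusion rate / steady-state ICG
    difference, converted to whole-blood flow using the hematocrit."""
    plasma_flow = icg_rate_ug_min / delta_icg_ug_ml
    return plasma_flow / (1.0 - hematocrit)

# Hypothetical steady-state ICG concentrations (ug/mL)
icg_art, icg_fem_ven, icg_hep_ven = 1.5, 2.3, 1.3
leg = blood_flow_ml_min(167.0, icg_fem_ven - icg_art, hematocrit=0.42)
splanchnic = blood_flow_ml_min(167.0, icg_art - icg_hep_ven, hematocrit=0.42)
print(f"leg: {leg:.0f} mL/min, splanchnic: {splanchnic:.0f} mL/min")
```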
Statistical analysis
The primary end-point in this study was cardiac output. Using a 2-tailed α = 0.05 and requiring an 80% power threshold, a sample size of n < 6 was calculated to detect an appreciable effect of GLP-1 on cardiac output. This calculation was based on our previous study (Asmar et al. 2015), in which the effect magnitude of a 3-h intravenous GLP-1 infusion on cardiac output was 1.2 L min−1 with an SD of 0.2 L min−1.
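For reference, the power behind that sample-size statement can be checked with statsmodels; this is a sketch of the stated calculation, not the authors' original script. For a paired design, the standardized effect size is the mean difference divided by its SD, here 1.2/0.2 = 6:

```python
from statsmodels.stats.power import TTestPower

d = 1.2 / 0.2   # standardized effect: mean paired difference / SD
analysis = TTestPower()
for n in range(2, 7):
    power = analysis.power(effect_size=d, nobs=n, alpha=0.05,
                           alternative="two-sided")
    print(f"n = {n}: power = {power:.3f}")
# Power exceeds 0.80 well before n = 6, consistent with the text.
```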
Data were analyzed using SigmaPlot 12 (Systat Software, Inc., Chicago, IL) and GraphPad Prism 5 (GraphPad Software, Inc., La Jolla, CA). The area under the curve (AUC) was calculated using the trapezoidal rule, and the t-test (2-tailed) for paired data was used for comparing ΔAUC during the GLP-1 infusion with ΔAUC during the saline infusion. Values of P < 0.05 were considered statistically significant.
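The end-point comparison can be expressed compactly: compute each subject's trapezoidal AUC of the change from baseline (ΔAUC) under both infusions, then run a paired two-tailed t-test. The sketch below uses fabricated per-subject series purely to show the mechanics; the variable names and values are hypothetical.

```python
import numpy as np
from scipy import stats

def delta_auc(t: np.ndarray, y: np.ndarray) -> float:
    """Trapezoidal AUC of the change from baseline (first sample)."""
    return np.trapz(y - y[0], t)

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # sampling times (min)
rng = np.random.default_rng(7)
# Hypothetical ATBF-like series for 9 subjects under each infusion
glp1 = np.array([2.0, 2.2, 3.1, 3.8, 4.1]) + rng.normal(0, 0.2, (9, 5))
saline = np.array([2.0, 2.1, 2.3, 2.4, 2.4]) + rng.normal(0, 0.2, (9, 5))

dauc_glp1 = np.array([delta_auc(t, s) for s in glp1])
dauc_sal = np.array([delta_auc(t, s) for s in saline])
tstat, p = stats.ttest_rel(dauc_glp1, dauc_sal)  # paired, 2-tailed
print(f"t = {tstat:.2f}, P = {p:.4f}")
```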
Standardized sodium chloride intake
On the last day of the 4-day period of standardized sodium chloride intake prior to the GLP-1 or saline study, 24-h renal sodium excretions (data not shown) were equal between subjects. Using samples from urine collected throughout the GLP-1 and saline studies (328 ± 5 min and 333 ± 9 min), mean urinary sodium, potassium, and hydrogen excretions were not statistically different on the 2 days (data not shown).
Effects of GLP-1 on arterial blood glucose and plasma insulin
Arterial plasma insulin concentrations and arterial blood glucose concentrations during the GLP-1 and saline infusions are shown in Figure 3. During the GLP-1 infusion, arterial plasma insulin concentrations tended to increase transiently from 16.6 pmol/L to 21.0 pmol/L, whereas no increase was seen during the saline infusion (Fig. 3A and B). During the GLP-1 infusion, arterial blood glucose concentrations were transiently reduced (P = 0.001) (Fig. 3C and D), with a nadir of 4.36 ± 0.12 mmol/L at 60 minutes and a range of 3.90-5.73 mmol/L within the first 60 min (Fig. 3C). None of the subjects developed symptoms of hypoglycemia. After a transient ~2-fold reduction in splanchnic glucose output, concomitant with the increase in insulin concentration 30 min after the commencement of the GLP-1 infusion, the splanchnic glucose output returned to the baseline level of ~1 mmol min−1, indicating that the hypoglycemia did not elicit significant metabolic counter-regulation. During the saline infusion, blood glucose concentrations remained unchanged, and the splanchnic glucose output remained constant at ~1 mmol min−1.
Effects of GLP-1 on central hemodynamics
Cardiac output, blood pressure, and heart rate during the GLP-1 and saline infusions are shown in Figures 4 and 5.
In the GLP-1 study, cardiac output increased on average by 0.8 ± 0.1 L min⁻¹ (13%), with a maximal increase (90–120 min after the commencement of the infusion) of 1.1 ± 0.1 L min⁻¹ (18%). In the saline study, the peak increase (90–120 min after the commencement of the infusion) was 0.3 ± 0.1 L min⁻¹ (Fig. 4A and B). Systolic blood pressure increased significantly, by 7 ± 1 mmHg in the GLP-1 study compared with an increase of 2 ± 1 mmHg in the saline study (Fig. 5A and B). Diastolic blood pressure remained unchanged in both studies (Fig. 5C and D). Arterial pulse pressure increased significantly in the GLP-1 study, by 6 ± 1 mmHg compared with an increase of 3 ± 1 mmHg in the saline study (Fig. 5E and F). Mean arterial pressure tended to increase by 3 ± 1 mmHg compared with the saline study (Fig. 4C and D). Heart rate increased significantly in the GLP-1 study, by 7 ± 1 bpm, whereas heart rate remained constant in the saline study (Fig. 5G and H).
Effects of GLP-1 on subcutaneous, abdominal ATBF, leg blood flow, and splanchnic blood flow

Subcutaneous, abdominal ATBF, leg blood flow, and splanchnic blood flow during the GLP-1 and saline infusions are shown in Figures 6 and 7.
In the GLP-1 study, subcutaneous, abdominal ATBF began to increase after 40–60 min, with a maximal increase (80–120 min after the commencement of the infusion) of 2.4 mL min⁻¹ 100 g tissue⁻¹. In the saline study, the peak increase (80–120 min after the commencement of the infusion) was 1.1 mL min⁻¹ 100 g tissue⁻¹. In the GLP-1 study, leg blood flow increased significantly, with a maximal increase (90–120 min after the commencement of the infusion) of 195 mL min⁻¹ compared with an increase of 83 mL min⁻¹ in the saline study. In both studies, splanchnic blood flow did not change significantly; however, in the GLP-1 study, splanchnic blood flow tended (P = 0.10) to increase initially (0–60 min after the commencement of the infusion) compared with the saline study.
Discussion
In this study, we demonstrate that GLP-1 elicits an increase in subcutaneous, abdominal ATBF during an acute administration of GLP-1. Additionally, leg blood flow increases, whereas splanchnic blood flow remains unchanged. The increase in adipose tissue and skeletal muscle blood flow is consistent with most of the GLP-1-induced increase in cardiac output. Thus, we demonstrate a significant ~2-fold sustained increase in subcutaneous, abdominal ATBF, 40–60 min after the commencement of the GLP-1 infusion, which is a new in vivo observation. The increase in subcutaneous, abdominal ATBF was 2.4 mL min⁻¹ 100 g tissue⁻¹ during the last 60 min of the GLP-1 infusion. If we assume that the GLP-1-induced increase in subcutaneous, abdominal ATBF is representative for the average body fat mass (14.8 kg, Table 1), it can be calculated that the increase in ATBF can account for ~35% of the increase in cardiac output in the last hour of the experiment. Previously, only one study has investigated the effect of GLP-1 on subcutaneous, abdominal ATBF. Bertin et al. (2001) were not able to demonstrate any effect of locally infused GLP-1 on subcutaneous, abdominal ATBF using the microdialysis technique (ethanol inflow/outflow ratios). However, this technique is less sensitive than the 133Xe washout technique for detecting changes in ATBF (Karpe et al. 2002b). It is well described (Bulow et al. 1987a; Karpe et al. 2002a) that ATBF increases postprandially and that hyperinsulinemia per se cannot account for this increase (Asmar et al. 2010). Recently, we demonstrated in healthy lean subjects that glucose-dependent insulinotropic polypeptide (GIP) increases subcutaneous, abdominal ATBF ~5-fold (Asmar et al. 2010, 2014, 2016b). However, the increase is dependent on postprandial hyperinsulinemia as well as hyperglycemia. In contrast to GIP, we demonstrated in this study that GLP-1 stimulates subcutaneous, abdominal ATBF at fasting glucose and insulin concentrations. Interestingly, the increase in subcutaneous, abdominal ATBF due to systemic exposure to GLP-1 in this study is comparable to the 2–3-fold increase in subcutaneous, abdominal ATBF usually seen as a response to nutritional stimuli (Bulow et al. 1987a; Karpe et al. 2002a). Whether the effect of GLP-1 may be potentiated by hyperglycemia and hyperinsulinemia, as is the case for GIP, needs to be studied in additional experiments. The fact that GLP-1's vasodilatory effect in adipose tissue is delayed (by 20–40 min) compared with the GLP-1-induced vasodilation in skeletal muscle (discussed later) indicates that different mechanisms could be involved.
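The ~35% figure can be reproduced with simple arithmetic, as in this back-of-envelope sketch; it uses only the numbers quoted in the text and is not an independent analysis.

```python
# Back-of-envelope check of the ~35% attribution above, using only the
# numbers quoted in the text (fat mass from Table 1).
fat_mass_g = 14.8 * 1000        # whole-body fat mass, g
atbf_increase = 2.4             # mL min^-1 per 100 g adipose tissue (last hour)
co_increase_ml = 1.1 * 1000     # maximal cardiac output increase, mL/min

adipose_flow = atbf_increase * fat_mass_g / 100.0   # ~355 mL/min
share = adipose_flow / co_increase_ml * 100.0
print(round(share))             # ~32%, in line with the ~35% quoted above
```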
Hyperinsulinemia >400 pmol/L has in previous studies been shown to elicit an increase in central sympathetic activity (Rowe et al. 1981; Vollenweider et al. 1993; Scherrer and Sartori 1997; Tack et al. 1998; Paolisso et al. 1999). However, in our previous study, conducted under the same experimental conditions as applied in this study, we were not able to measure any significant effect on plasma levels of noradrenaline or adrenaline. Additionally, heart rate increased similarly with no effect on heart rate variability. Altogether, this indicates that a significant activation of the sympathetic nervous system was probably not induced in this study by the transient decrease in blood glucose levels induced by GLP-1.
In a study by Hilsted et al. (1985), a decrease in subcutaneous ATBF was found during hypoglycemia (~2 mmol/L) induced by insulin injected intravenously. This vasoconstriction was probably due to stimulation of vascular α-receptors by increased circulating catecholamines. In this study, blood glucose levels decreased transiently, with a nadir of 4.36 ± 0.12 mmol/L from 30–60 min after the commencement of the GLP-1 infusion. In the same time interval (40–60 min), the subcutaneous, abdominal ATBF began to increase. It can be speculated that the mild hypoglycemia seen in this study may have attenuated the initial GLP-1-induced vasodilation in the adipose tissue. A sustained increase took place in leg blood flow during the GLP-1 infusion. The increase in blood flow in the examined leg during the GLP-1 infusion was 195 mL/min. Adipose tissue accounts for ~2.6 kg of the tissue mass in the examined leg (Table 1). Assuming that leg adipose tissue and subcutaneous, abdominal ATBF behave similarly postprandially, as demonstrated previously (Manolopoulos et al. 2012), ~30% of the leg blood flow increase can be estimated to have taken place in adipose tissue. If we assume that the remaining increase in leg blood flow takes place in the skeletal muscles and that the skeletal muscles in the leg are representative for whole-body skeletal muscles, this can account for ~40% of the observed increase in cardiac output (given that ~40% of total body weight is skeletal muscle). There was a substantial arteriovenous difference in plasma levels of intact GLP-1 across the lower extremity and splanchnic vascular bed, possibly reflecting expression of dipeptidyl peptidase-4 (DPP-4) in the endothelial membrane of the capillaries in these vascular beds. Interestingly, total GLP-1, and thereby the metabolite GLP-1 9–36 amide, which is not a substrate for DPP-4 and if anything acts as an antagonist at the GLP-1 receptor, was also cleared in the leg. Both intact GLP-1 and its metabolite have previously been reported to cause vasorelaxation (Drucker 2016), which was to some extent independent of the known GLP-1 receptor. The mechanism responsible for the increased flow found here cannot be derived from the present results. Nevertheless, our findings are in accordance with previous studies demonstrating an acute GLP-1-induced increase in flow-mediated dilation of the brachial artery (Nystrom et al. 2004; Basu et al. 2007). Sjoberg et al. (2014) infused GLP-1 (1.0 pmol kg⁻¹ min⁻¹) directly into the femoral artery in healthy subjects, leading to supraphysiological plasma levels of GLP-1 (20–30-fold increase compared with baseline). Independent of physiological hyperinsulinemia, this led to microvascular recruitment of ~60% in the vastus lateralis muscle of the infused leg after 5 min, together with an increase in the diameter of the femoral artery of ~12%. In the non-infused contralateral leg, in which plasma levels of GLP-1 were within the physiological range (7–10-fold increase compared with baseline), the effect was slightly delayed but similar in magnitude. This is in accord with the present study, in which skeletal muscle blood flow (lower limb blood flow corrected for adipose tissue flow) increased during the GLP-1 infusion, at plasma levels of GLP-1 similar to those seen in the physiological part of the study by Sjoberg et al. (2014).
The splanchnic blood flow, measured by the indocyanine green clearance technique, remained unaffected during the GLP-1 infusion, apart from a transient tendency to increase initially after the commencement of the GLP-1 infusion. We could not demonstrate any significant splanchnic arteriovenous difference in plasma concentrations of total GLP-1 or GLP-1 9–36 amide. In a previous study, Trahair et al. (2014) demonstrated that under postprandial hyperglycemia and hyperinsulinemia (induced by an intraduodenal glucose infusion; 3 kcal/min), an intravenous infusion of GLP-1 (0.9 pmol kg⁻¹ min⁻¹) increased blood flow in the superior mesenteric artery as measured by the ultrasound/Doppler technique. Flow increased significantly more compared with the saline infusion under postprandial hyperglycemia and hyperinsulinemia. However, blood glucose concentrations were lower during the GLP-1 infusion. Whether GLP-1 has a role per se in the vasodilation in the splanchnic bed cannot be determined from the study by Trahair et al. (2014), since an oral glucose load initiates a similar vasodilatory effect (Bulow et al. 1999). An explanation for the constant splanchnic blood flow found in this study may be that an increase in the portal vein blood flow has been counterbalanced by a vasoconstriction in the hepatic artery via the intrinsic autoregulation of the hepatic artery as well as the hepatic arterial buffer response (Eipel et al. 2010), a mechanism which has previously been demonstrated with respect to the splanchnic vascular effect of GIP (Kogire et al. 1988). However, in a recent human study, it has not been possible to demonstrate a GLP-1-induced vasodilation in the superior mesenteric artery by the ultrasound/Doppler sonography technique (J.J. Holst, unpubl. data).
Together with our previous study, we have now, under similar conditions, examined the vascular biology of GLP-1 in four major vascular beds: the renal, splanchnic, adipose tissue, and skeletal muscle beds. Considering the time course of blood flow changes in the examined vascular beds and the changes in cardiac output under slightly supraphysiological circulating GLP-1 levels, the initial increase in cardiac output is likely due to an increase in skeletal muscle blood flow. Since GLP-1 was extracted significantly in the lower extremity, this indicates that the vasodilation in the skeletal muscle may be elicited via GLP-1 receptors (Pujadas and Drucker 2016). The later-onset increase in ATBF, contributing to the GLP-1-induced increase in cardiac output, may be due to a derived GLP-1 effect. This mechanism needs to be elucidated in separate experiments.
Limitations of the study
Firstly, the circulating levels of GLP-1 outside the portal circulation are usually low due to rapid degradation of GLP-1 by DPP-4. The slightly supraphysiological levels of GLP-1 applied in this study may therefore not translate completely to normal human physiological conditions. Secondly, the effect of acute elevation of plasma GLP-1 concentrations for a few hours in a limited number of patients does not necessarily reflect the chronic elevation of GLP-1 receptor agonist levels in a larger population with type 2 diabetes treated with a long-acting GLP-1 receptor agonist. Therefore, the existence of chronic effects that are not detected in the present experimental setup cannot be excluded.
Conclusions
Under the conditions applied in the present experiments, acute administration of GLP-1 increases blood flow in adipose tissue and skeletal muscle, whereas splanchnic blood flow remains unchanged. Together with an increase in cardiac work and thereby blood flow, this can explain most of the GLP-1-induced increase in cardiac output.
Chiral transition in a strongly coupled fermion-gauge-scalar model
We report the recent results from the computer simulations of a fermion-gauge-scalar model with dynamical chiral-symmetry breaking and chiral transition induced by the scalar field. This model might be considered to be a possible alternative to the Higgs mechanism of mass generation. A new scheme is developed for detecting the chiral transition. Our results show with higher precision than the earlier works that the chiral transition line joins the Higgs phase transition line, separating the Higgs and Nambu (chiral-symmetry breaking) phases. The end point of the Higgs transition with divergent correlation lengths is therefore suitable for an investigation of the continuum limit.
INTRODUCTION
Some strongly coupled lattice fermion-gauge models with a charged scalar field, which break chiral symmetry dynamically, might be considered to be a possible alternative to the Higgs mechanism for mass generation, as discussed in [1,2].
Let us concentrate on a prototype with U(1) gauge group, a scalar of fixed modulus and one staggered fermion (corresponding to 4 flavors), where both the scalar and the fermion have charge one. The action has been described in [1,2] and has three bare parameters (β, κ, m_0). The dynamical mass generation is meaningful only in the chiral limit m_0 = 0. We consider here the phase transition line NET between two phases [1,2]: (1) the dynamical mass generation (Nambu) phase, below the NET line, where chiral symmetry is spontaneously broken (⟨ψ̄ψ⟩ ≠ 0) due to the strong gauge fluctuations, so that the fermion mass m_F is dynamically generated; (2) the Higgs phase, above the NET line, where the Higgs mechanism is operative, but ⟨ψ̄ψ⟩ = m_F = 0. The scalar field induces a second-order chiral phase transition along the NE line, which opens the possibility of approaching the continuum.
Whether such a model can replace the Higgs mechanism depends crucially on the existence and renormalizability of the continuum limit. To search for such a continuum theory and grasp its nature, we need a precise determination of the second-order phase transition point with divergent correlation lengths. For this purpose, we have performed extensive simulations using the Hybrid Monte Carlo (HMC) algorithm and developed some new methods for locating the NE line.
HMC SIMULATIONS
The HMC simulations have been done on 6³×16 and 8³×24 lattices, where on 6³×16 we have better statistics (1024–6500 trajectories) for different (β, κ, m_0). The detailed results for the spectrum are reported in [3]. We have measured the following local observables: the plaquette energy E_p, the link energy E_l and the chiral condensate ⟨ψ̄ψ⟩, where for ⟨ψ̄ψ⟩ we use the stochastic estimator method. However, it is very difficult to use the local quantities at finite m_0 to detect a critical behavior on the NE line, since they show smooth behavior as a function of β or κ. (One could expect the critical behavior only in the infinite volume and chiral limit.) For (β, κ) near the point E, the peaks of the susceptibilities for different quantities develop and coincide, while the boson mass am_S gets smaller. Concerning the location of the ET line, on the 6³×16 and 8³×24 lattices, for κ < 0.31 or β > 0.64 and m_0 = 0.04, we find explicit two-state signals from the thermocycle, time history and histogram analysis of the local quantities.
On the NE line, the π meson shows the phase transition more clearly than other quantities. In the Nambu phase, the π meson should obey the PCAC relation. In the symmetric phase, the π meson is no longer a Goldstone boson, and one should observe a deviation from PCAC. At κ = 0.4, these properties are nicely seen in fig. 1, from which one sees that for β < 0.57, where the system is in the broken phase, we have Goldstone bosons. However, on 6³×16, even at β = 0.57 (possibly in the chiral symmetric phase), a linear extrapolation leads to ⟨ψ̄ψ⟩|_{m_0=0} ≈ 0.13. For larger β, the extrapolated result gets smaller (e.g. at β = 0.65, ⟨ψ̄ψ⟩|_{m_0=0} ≈ 0.05) and is expected to vanish in the V → ∞ limit. Of course, one should not expect the linear extrapolation to be valid at the critical point.
NEW ORDER PARAMETER
⟨ψ̄ψ⟩ is not a convenient order parameter for the chiral transition of a finite system, due to the sensitivity of the chiral extrapolation. We employ a different method for determining the chiral transition, namely we calculate the chiral susceptibility in the chiral limit, χ_chiral, defined in eq. (1) as the derivative of ⟨ψ̄ψ⟩ with respect to m_0 at m_0 = 0. If there is a second-order chiral phase transition, χ_chiral should be divergent (in other words, χ⁻¹_chiral should be zero) at the critical point and in the thermodynamical limit. In the chiral limit, the chiral susceptibility in the Nambu phase is difficult to obtain, but it is calculable in the chiral symmetric phase [4]. It can be shown that in the symmetric phase χ⁻¹_chiral, defined in eq. (1), can be expressed (eq. (2)) in terms of the positive eigenvalues λ_i of the massless fermionic matrix. Approaching the NE line from the symmetric phase at fixed κ, χ⁻¹ should behave as χ⁻¹ ∝ (β − β_c)^γ, corresponding to the divergent correlation length at the second-order phase transition point in the thermodynamical limit V → ∞. In the Nambu phase, it can also be shown that eq. (2) has an equivalent form (eq. (3)) in the V → ∞ limit. In such a limit, χ⁻¹ should be zero, since ⟨ψ̄ψ⟩|_{m_0=0} ≠ 0 in the Nambu phase. Therefore, χ⁻¹ defined in eq. (2) is a suitable order parameter for the chiral phase transition: it is zero in the broken phase, and it is nonzero in the symmetric phase.
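The displayed equations (1)–(3) were lost from this copy of the text. As a hedged reconstruction, the standard staggered-fermion relations below are consistent with the surrounding discussion; the exact normalizations in the original paper may differ.

```latex
% Hedged reconstruction of the lost equations (1)-(3); normalizations follow
% standard staggered-fermion conventions and are assumptions, not the source.
\begin{align}
\chi_{\rm chiral} &= \left.\frac{\partial\langle\bar\psi\psi\rangle}
  {\partial m_0}\right|_{m_0=0}
  && \text{(1), assumed standard definition}\\
\chi_{\rm chiral} &= \frac{2}{V}\Big\langle \sum_i \frac{1}{\lambda_i^2} \Big\rangle
  && \text{(2), symmetric phase, from }
  \langle\bar\psi\psi\rangle
  = \frac{2m_0}{V}\Big\langle\sum_i\frac{1}{\lambda_i^2+m_0^2}\Big\rangle\\
\chi^{-1} &\sim \lim_{m_0\to 0}\frac{m_0}{\langle\bar\psi\psi\rangle} = 0
  && \text{(3), Nambu phase, since }
  \langle\bar\psi\psi\rangle|_{m_0=0}\neq 0
\end{align}
```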
Let us again focus on the results at κ = 0.4. To perform the calculation, we generalize the MFA [5], in which the chiral limit m_0 = 0 is accessible, to fermion-gauge-scalar models. From fig. 2, we observe that on 8⁴ the chiral transition appears at β_c ≈ 0.57, consistent with the observation of fig. 1.
DISCUSSIONS
The location of the NE line on the available lattices obtained by the above methods is summarized in fig. 3, where the point N is plotted by interpolation. We have determined the phase transition line NE with high precision and demonstrated that this second-order chiral transition line joins the Higgs phase transition line at the end point E, located around (β, κ) = (0.64, 0.31), separating the Higgs and Nambu phases. No finite size scaling analysis has been done, and larger lattices are required for such a purpose.
From the spectroscopy [3], we know that am_F scales to zero when crossing the chiral transition line NE. Nevertheless, the susceptibility for E_l and the correlation length for the composite scalar (am_S > 1.5) remain finite on the whole NE line except when approaching the end point E. Therefore, the end point, hopefully being a second-order point with divergent correlation lengths, is the most suitable candidate for the continuum limit.
Further work, which is underway [6], is to study the finite size effects, analyze the dependence of the end point E on the bare fermion mass, investigate the scaling properties and understand the nature of the end point.
We would like to thank C. Frick and J. Jersák for collaboration and V. Azcoiti for useful discussions. The HMC simulations have been performed on HLRZ Cray Y-MP8/864 and NRW state SNI/Fujitsu VPP 500.
Leveraging ICT Technologies in Closing the Gender Gap
In recent decades, the growth of information and communications technologies (ICT) and the move toward the digitalization of trade and global value chains have been radically transforming the global trade scene, with important implications for women engaged in trade. In order to identify adequate measures to reduce gender disparities, this paper reviews and discusses evidence from the existing literature, as well as presents evidence from a number of new empirical analyses. It also introduces two new frameworks to analyze the gender dimensions of e-commerce. Digital technologies have the potential to empower women socially and economically by creating new employment and entrepreneurial opportunities, removing trade barriers for women, enhancing access to finance and information and optimizing their business processes. For example, e-commerce substantially lowers the barriers to entry for micro-, small- and medium-sized enterprises by reducing the investment needed to launch and run a business. Digital solutions that remove the need for face-to-face interactions when trading can help reduce the difficulties women business owners face, such as mobility constraints, discrimination, and in some countries even violence. For women as workers, digital technologies may help overcome time and mobility constraints by connecting them to work from different locations and at flexible hours through emails, instant messaging and tele-conferences. Digital technologies will also benefit women as consumers by saving time, providing access to information, reducing transaction costs, or giving them more control over the purchasing process. Yet, technology is not a silver bullet for resolving all the gender gaps in trade, because women's access to and use of ICTs and digital technologies tend to lag behind men's. The benefits of digital technologies hinge on well-designed and specifically targeted policies.
JEL Categories: F13, F19, F6, J18
1 Marie Sicat works at the United Nations Conference on Trade and Development (UNCTAD)
2 Ankai Xu and Ermira Mehmetaj work for the World Trade Organization
3 Michael Ferrantino and Vicky Chemutai work for the World Bank Group
Introduction
In recent decades, the growth of information and communications technologies (ICT) and the move toward the digitalization of trade and global value chains have been radically transforming the global trade scene, with important implications for women engaged in or affected by trade, whether as traders, workers or consumers. By enabling the expansion of markets and the entry of large numbers of women into the labor force, trade has been recognized as playing an important role in helping to create opportunities for women and in bridging the gender gap. At the same time, trade continues to have differential impacts on women and men. ICTs can give rise to new windows of opportunity to bridge the gender gaps. This paper argues that digital technologies provide an opportunity to empower women and to close key gender gaps between men and women. Policies need to be in place, however, to ensure that women benefit from the digital transformation.
Due to the rapidly changing ICT and digital landscape, a volatile global trade scene and their interface with persisting gender barriers, it is challenging to assess the extent to which ICTs and new technologies can be leveraged to close the gender gaps. Assessing the situation becomes even more complex today with the emergence of new digital technologies such as artificial intelligence (including automation), robotics, and big data in key trade-related sectors, which stand to transform society and to revolutionize trade, business and industrial production processes. The scarcity of sex-disaggregated data further compounds the difficulty of gaining a clear picture of the interplay between gender, trade and technology.
To help fill this knowledge gap and in order to identify adequate measures to reduce gender disparities, this paper reviews and discusses evidence from the existing literature, as well as presents evidence from a number of new empirical analyses. It also introduces two new frameworks to analyze the gender dimensions of e-commerce. Specifically, the paper sheds light on the underlying dynamics interlinking gender, trade and technologies and seeks to elucidate key mechanisms by which technology can be leveraged to empower women traders, close key gender gaps, and provide evidence-based analysis to enable policymakers to take informed decisions on gender-sensitive policies that can strengthen opportunities for both men and women in reaping the benefits of trade reforms.
Digital technologies bring transformative forces to global trade and commerce, and while they may simultaneously affect men and women, their impacts on women can be different due to the gender-specific needs and preferences and unique barriers women face as workers, traders and consumers. Leveraging digital technology to empower women's businesses can help drive trade, create jobs and foster economic growth.
A persisting gender digital divide
Despite evidence indicating the beneficial effects of empowering women through ICTs, women's access to and use of ICTs and digital technologies tend to lag behind men's. Past data indicate that the digital gender gap is persistent and tends to deepen over time (ITU, 2016). For instance, the internet user gender divide increased from 11 percent in 2013 to 12 percent in 2016, with more than 250 million fewer women than men online at the global level. Figure 1 illustrates the higher internet penetration rates for men than for women in all regions of the world in 2016.
While this digital gender divide is prominent globally, the gap in ICT access between women and men varies significantly across countries, ranging from 2.3 percent in developed countries to 7.6 percent in developing countries. The rate of female online presence has reached 80 percent in advanced economies, in contrast to the average of 37.4 percent among developing countries. LDCs lag even further behind, with less than 13 percent of women online. This suggests that the lack of women's online empowerment in these countries could further hamper their attempts to participate more actively in digital trade.
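For reference, the ITU-style internet user gender gap is computed as the difference between male and female penetration rates relative to the male rate; the small sketch below uses illustrative, not official, figures.

```python
# ITU-style internet user gender gap: the difference between male and female
# penetration rates, relative to the male rate. The rates below are
# illustrative placeholders, not official ITU figures.
def internet_gender_gap(male_rate_pct, female_rate_pct):
    return (male_rate_pct - female_rate_pct) / male_rate_pct * 100

print(round(internet_gender_gap(50.0, 44.0), 1))  # -> 12.0, i.e. a 12% divide
```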
With the trend toward the provision of services online (e-services), in particular through the internet, in both the public and private sector, women entrepreneurs and consumers without access to this technology are at a clear disadvantage. In addition, in recent times, the advent of emerging technologies, including automation and artificial intelligence, is redefining the future of jobs, with a particular impact on women. While this offers opportunities, it may also require mitigating measures to ensure the gender gap does not widen, particularly in developing countries.
Methodology and New Analytical Frameworks for Leveraging Technology to Close the Gender Gap in Trade
As there has been considerable work to date in reviewing and studying empirical evidence of the gender dimensions of trade and technology, this paper aims to contribute to this effort by revisiting the issue of closing the gender gap through technology and trade, in particular through the narrower lens of e-commerce for trade, and to use new empirical data to shed light on new trends in the evolving gender, e-commerce and emerging ICT technologies landscape.
For the following review, the authors took stock of rigorous quantitative and qualitative studies in gender, trade and technology, with the objective of identifying key policy insights and intervention measures and highlighting key areas for future consideration. Case studies were identified and collected applying primarily to developing countries, but also gleaning findings from the experience of developed countries where relevant empirical data and evidence were available. Women's online consumer behavior and its link with employment trends in e-commerce-related jobs is examined through an analysis of official occupational employment statistics and time use surveys in the US (case study 1). The gender dimension of e-commerce trade in South East Asia and South Asia was examined through a sex-disaggregated analysis of e-commerce trading firms (case study 2). Building on the literature review and case studies, two analytical frameworks were constructed for this research paper to reflect on and help collect, sort and systematize the review findings.
The impact of technologies on women as entrepreneurs, workers and consumers
The literature review elucidated the following key ways in which technologies are impacting women in the trade sector as entrepreneurs, workers and consumers.
How technologies empower women entrepreneurs for trade
It is estimated that globally there are roughly 9.34 million formal women-owned small-to-medium enterprises (SMEs) in over 140 assessed countries, which make up roughly one-third of all formal SMEs (IFC, 2014). These businesses and the women who lead them have the potential to be a powerful force for building prosperity through GDP growth and job creation.
While the potential for economic contributions is significant, female business owners face challenges in accessing the support services they need to grow, such as access to networks, training, financing and markets (ITC, 2017). Digital technologies such as e-commerce platforms have, thus, the potential to bring female producers and traders closer to the markets, offer female consumers a larger variety of products at lower costs, and make it easier for female entrepreneurs to borrow.
The premise of our analysis is that digital technologies, in particular online platforms, can enable women to leverage their comparative advantages and overcome a range of hurdles in traditional modes of trade. Specifically, technologies enhance women's participation in trade by reducing the cost of trade, opening new opportunities to trade in services, enabling women to better use their skills and facilitating their access to finance.
Reducing the cost of trade and barriers to entry
Technological developments help to reduce the information and transaction cost traditionally associated with cross-border trade. Several studies show that easier access to market information through even relatively simple technology such as mobile phones can decrease spatial variation in prices in developing countries and especially in agricultural markets (Bernard et al. 2007;Aker and Mbiti, 2010). More sophisticated technologies such as e-commerce platforms reduce searching costs dramatically by matching buyers and sellers.
Online rating systems and e-payment solutions enhance trust between buyers and sellers (Resnick and Zeckhauser, 2002;Ba and Pavlou, 2002). Digital solutions that remove the need for face-to-face interactions when trading can help reduce the difficulties women business owners face, such as mobility constraints, discrimination, and in some countries even violence (ITC, 2017).
The Internet allows more micro-, small-and medium sized enterprises to trade online by lowering the entry cost for trade, and women disproportionally benefit from this cost reduction. Some evidence points to the fact that women business owners are more present online than they are in traditional businesses.
For example, a 2015 survey of Pacific Island exporters showed that firms that are active online have a greater concentration of female executives under 45 years of age than those that are active offline (DiCaprio and Suominen, 2015). Etsy, a creative commerce platform, reported that 86% of its sellers in the United States are women, and they are more likely to be younger than the typical business owner (Etsy, 2017). On Alibaba, a Chinese e-commerce platform, more than half of all online shops are owned by women. In comparison, only 17.5% of small enterprises in China have a female top manager, and the figure globally stands at 18.6% (World Bank Enterprise Surveys).
E-commerce, often referred to as an industry made for "start-ups", substantially lowers the barriers to entry for SMEs by reducing the costs and investments needed to launch and run a business. One of the most characterizing features of the e-commerce sector is the prevalence of e-marketplaces or virtual marketplaces bringing together buyers and sellers in the online space, whether this be for the sale of goods or services, between businesses and consumers (B2C), between businesses (B2B), between business and government (B2G), to name a few.
As e-marketplaces help to bridge asymmetric information gaps in market access, eliminate the need for investing in physical retail space and also often build in the provision of service delivery in payment services and logistics, they offer particular benefits and opportunities to women entrepreneurs and traders who typically face inequitable access to market information, capital for the purchase or investment of land and retail space, electronic payment solutions, and logistics services.
There exist strong gender dimensions in e-marketplaces. Bhutan's e-auctioning platform illustrates how e-commerce platforms can help women traders reduce the cost of trade, access market information, increase productivity and earn higher profit margins from cross-border trade (see Box 1). Additionally, there are two case studies shedding further light on some of the gender dimensions characterizing regional e-marketplaces and online trading.
See case studies: Women traders on the Alibaba e-marketplace in China and Women-led e-commerce trading firms in South and Southeast Asia in the Annex of the paper.

Box 1: Bhutan's e-auction platform for potatoes

In contrast to the conventional auction platforms, where farmers were required to weigh and grade the potatoes manually, weighing at BCE is now carried out by machines and the potatoes are graded using automated grading machines. With the EIF's support, BCE has procured two potato grading machines with a capacity to grade and weigh around ten truckloads of potatoes in a day. The use of these machines has reduced processing and payment time to one day, as opposed to three working days in the conventional system.
The potatoes are packed and marketed in specially designed packaging, which helps Bhutanese potatoes gain recognition and build a brand in international markets.
Through the use of ICT in commodity markets, BCE has helped to create incentives for market participants to produce commodities that meet quality specifications and to behave according to commodity market standards, thereby increasing their sales. Through this online system, farmers are able to sell their products faster, more efficiently and with higher margins. Building on this success, the government of Bhutan intends to introduce the trade of cardamom in the online commodity exchange system.
Opening new opportunities to trade in services
New technologies enable traditionally non-tradable services to be traded online, which can bring more economic opportunities for women. In addition, these opportunities are multiplying with the increasing trend toward e-services in both the public and private sector. On Upwork, an online marketplace for freelancers to provide services, 44% of the workers are women, compared with women's average share of 25% in the non-agricultural economy globally (World Bank, 2016).
Airbnb, an online marketplace for hospitality services, estimates that more than 1 million women host on Airbnb, making up 55 percent of the global Airbnb host community. In addition, women host on Airbnb at 120 percent of the rate of men, and a higher percentage of the women hosts report part-time employment and lower income outside of the hosting activity (Airbnb, 2017).
Women are also discovering new opportunities in online teaching. Kim and Bonk's (2006) survey results showed that the number of female instructors online had increased dramatically over a few years. More than half of the respondents (53%) were women compared to a similar survey conducted a few years earlier, which was dominated by male instructors.
Data from e-marketplaces, often supplemented by online customer service, also appear to indicate women's predilection for customer service, online marketing and other communication and social interaction skills important for raising credibility in the online space.
Kricheli-Katz and Regev (2016) studied the auction transactions of private sellers on eBay and found that women online sellers generally received higher ratings than men on e-commerce platforms. While these women sellers engaged in trade in goods, the skills they displayed support findings from previous studies indicating that women have a predilection for customer service, online marketing and other online sales qualities. The study documented that women sellers on eBay tended to have a better reputation than men sellers. Alibaba also reports that women-led enterprises receive higher ratings in customer service and logistical service, and give more accurate descriptions of their products. In particular, the advantages of women are more prominent in markets for highly differentiated consumer products and services and, in particular, in markets geared toward women clientele and households.
Women-led enterprises are concentrated in sectors such as cosmetics, clothing, jewellery and baby products.
Providing training and information
ICT can bring benefits for women traders by providing training, improving access to information, facilitating business planning, and optimizing their business processes. For instance, mobile phones facilitate access to agricultural market information, in many cases replacing the message boards and radio programs of traditional market information systems.
Farmers in countries as diverse as Niger, Senegal, and Ghana receive the price of a variety of goods immediately on their mobile phone by simply typing a code and sending a text message. In Kenya, Uganda, and India, farmers can call or text hotlines to ask for technical agricultural advice (Aker and Mbiti, 2010). Törenli (2010) also argues that ICT can be of benefit to the information poor if it is used to generate 'solidaristic' practices in order to combat labour exploitation by the subcontractors.
Despite the potential benefits of digital technologies, the gender digital divide risks leaving women ill-equipped for the digital and technological advances ahead. Effective measures to educate and expose women to the digital environment and to help them obtain digital skills will be crucial at all levels, including among informal women entrepreneurs and women at low socio-economic levels. Peer-to-peer learning initiatives, as well as initiatives leveraging digital tools and combining the efforts of high-growth women entrepreneurs with informal-sector entrepreneurs, are other approaches being pursued. Shop Soko, whose business model recruits tech-savvy and entrepreneurial community agents to provide ongoing training and mentorship to subsistence entrepreneurs, is one example (see Box 2).
Progress in strengthening women's ICT access and use in many countries has led to increasing numbers of women in possession of ICT devices and tools, in particular mobile phones.
Nonetheless, many women entrepreneurs lack knowledge or competence in effective use of these tools. Strong measures are needed to provide practical training and guidance for productive utilization. Increasingly, there is a gap among entrepreneurs possessing mobile phones and ICTs between those who are effective "digital users" and those who are not.
Strong measures are needed in order to address this so-called "gender digital-use divide" (US Department of Education, 2015). See Box 3 on ICT, e-commerce, and digital technologies among women-owned micro-enterprises in Egypt for efforts to bridge this divide.
Continuing efforts to encourage women in STEM fields and to ensure women entrepreneurs' ability to enter the tech market and make effective use of business software will also be key (see Box 4 on making technology work for women in Burkina Faso). Strong initiatives to re-skill and re-tool women, including in the ability to work closely with digital technologies and machines, will be crucial.

Box 2: Shop Soko's community-agent model in Kenya

Shop Soko has also introduced a number of innovations to support local artisanal micro-entrepreneurs. It has a mobile-enabled "virtual factory" which operates at a fraction of the cost of traditional production, thus increasing earnings for entrepreneurs. In addition, one of the key innovative approaches it employs is the recruitment of tech-savvy and entrepreneurial community agents to provide ongoing training and mentorship to poor women, as well as providing quality assurance and logistics.
With the majority of the working population in Sub-Saharan Africa (up to 90% in Kenya) employed in the informal sector and barely earning enough to survive, challenges in basic ICT literacy and skills, internet access and access to electronic payments plague most of Kenya's micro-entrepreneurs in the artisanal arts. Women, in particular, are disadvantaged due to the gender digital divide. The peer-to-peer learning approach, leveraging high-growth women entrepreneurs as community agents to support largely subsistence entrepreneurs, aims to create a sustainable network for developing the ICT and entrepreneurial skills necessary to close the gender gap. By developing an e-commerce curriculum for community groups throughout Kenya, the Shop Soko team of community agents is now focused on scaling its impact. https://shopsoko.com/pages/impact
Box 3: ICT, e-commerce, and digital technologies among women-owned microenterprises in Egypt
As part of the development of its national e-commerce strategy in 2015, the Government of Egypt, in cooperation with UNCTAD, conducted its first nationwide official e-commerce survey of its micro-enterprises in the handicrafts sector which helped illuminate important information on the size, scale and scope of its women entrepreneurs in e-commerce. The national survey found that the majority of enterprises in the formal sector are owned by men (96%) in contrast to only 4% of the businesses which were women-owned. (UNCTAD, 2015).
The small number of women entrepreneurs in the formal sector, and the even smaller number of these women entrepreneurs using the internet, pointed to the likelihood of a strong presence of women entrepreneurs in the informal sector not captured by standard national statistics.
Subsequently, the national survey was further supplemented by focus group consultations of women entrepreneurs in the informal sector, as well as findings from face-to-face interviews with Egyptian women entrepreneurs in the informal sector through UNCTAD cooperation with the ILO (ILO, 2015). Although obtaining accurate quantitative data on the numbers of women entrepreneurs in the informal sector nationwide was not feasible, the focus groups and face-to-face interviews played an important role in obtaining qualitative information on ICT and e-commerce use among the micro-entrepreneurs in this sector.
The findings from consultations with more than 20 focus groups of some 100 women entrepreneurs in handicrafts, primarily in the informal sector, in the cities of Aswan, Sohag, Cairo and Alexandria showed that few were familiar with the concept of e-commerce and online shopping or knew of existing e-marketplaces in Egypt such as Jumia or Souq.
However, mobile phone use was common across entrepreneurs. Among the informal women entrepreneurs interviewed face-to-face, 50.7% made use of a regular mobile phone to run their business, 8.2% made use of a smartphone, 4.1% a desktop computer, 6.8% a portable laptop computer, 7.5% a fixed-line internet subscription, 8.2% a mobile internet subscription, and 0.7% an internet café, telecentre or kiosk.
Focus group consultations showed that, while the majority of women informal micro-entrepreneurs had a basic mobile phone rather than a smartphone, they made elementary use of their mobile phones for their businesses. Many of the entrepreneurs were on Facebook and used Facebook to market their products. In Nubia, some entrepreneurs communicated with their customers and received orders through mobile phone messaging.
Some took photos of their products and sent them to their customers using WhatsApp.
They sometimes received photos from customers showing model products the customer wanted them to produce.
Informal sector entrepreneurs communicated that, with the exception of mobile phones, they did not have the necessary ICT knowledge and skills and so could not identify opportunities that could be generated through ICTs. Only 7.5% of the informal entrepreneurs were aware of the different ways in which they could use the Internet in their businesses, and 5.5% expressed confidence in their ability to use the Internet for their business operations. The findings showed that, while progress had been made and continuing progress was needed in ensuring women entrepreneurs' physical access to ICT devices and tools in Egypt, effective use and application of the tools in their possession was a major challenge. Many of the micro-entrepreneurs consulted communicated an eagerness to gain the capacity to make use of ICT and e-commerce for their businesses and called on the Government to build an e-marketplace to market their local specialty handicrafts. Measures are needed to close the gender gap, particularly measures aimed at reducing the "gender digital-use divide".
Box 4: Making technology work for women in Burkina Faso
In Burkina Faso, IT training in enterprise resource planning (ERP) software for women producers is enabling them to address supply-side constraints and strengthen their productive capacities in key sectors such as sesame, shea almond, processed cashew and dried mango. With the EIF's support, Burkina Faso has developed a special ERP software system which helps to empower women's MSMEs. This software allows women to input data on processing and raw materials and to forecast the final outputs, thereby enabling women to track the entire production cycle and optimize their business management and production. The software also has order and sales management modules which permit women to sell their products online. They can create a stock of the finished products, by product type and packaging, manage customer information, deliver orders, and charge and collect their bills.
This modality has created a convenient alternative to face-to-face trading. The software has been rolled out in 35 processing regions and is effectively boosting productive capacity and raising the business competitiveness of women-led MSMEs.
Enhancing women's access to finance
A growing literature documents the gender gap in access to credit (Klapper and Parker, 2011; Johnson, 2004). The International Finance Corporation (IFC) estimates that as many as 70 percent of women-owned SMEs in developing countries are unserved or underserved by financial institutions, resulting in a total credit gap of $287 billion, which is 30 percent of the total credit gap for SMEs. Most of the financial and non-financial barriers affecting women-owned SMEs occur at the start-up stage of the business life cycle (IFC, 2014).
Women entrepreneurs are more likely than male entrepreneurs to rely on internal or informal financing and are charged higher interest rates than men (Richardson et al., 2004; Muravyev et al., 2009). Reasons for such a gender gap could vary from taste-based discrimination (Beck et al., 2011) to lower overall financial literacy among women (Lusardi and Tufano, 2009).
According to Aterido et al. (2013), firm size, age and a lower likelihood of being an exporter and having foreign ownership participation explain the difficulties that female-led companies face in accessing finance in nine countries in Southern and East Africa.
Gender differences in access to and use of financial services can have direct negative ramifications for the whole economy (Aterido et al., 2013), since barriers to finance reduce efficient capital allocation and aggravate income inequalities (Beck et al., 2007).
Technological advancements, such as mobile money, digital platforms that match start-ups with providers of financial services, as well as blockchain, may help women overcome barriers they face in accessing the capital necessary for their economic empowerment.
Some studies show that peer-to-peer crowd-funding platforms allow women to access trade finance at much lower costs, even if women tend to ask for less money than men. For example, Marom, Robb, and Sade (2014) studied data from the crowd-funding platform Kickstarter and found that women make up 35 percent of the project leaders and 44 percent of the investors on the platform. On average, men seek significantly higher levels of capital (and raise more) than women. However, women enjoy higher rates of success, even after controlling for category and goal. Similarly, Barasinska and Schäfer (2014) provided evidence on the success of female borrowers at a large German peer-to-peer lending platform. Their results show that gender has no effect on an individual borrower's chance of receiving funds on this platform. In other words, online crowd-sourcing platforms seem to be "gender blind" and ease discrimination against women in access to finance. In developing countries, mobile phones can dramatically reduce the costs of sending and receiving money relative to traditional mechanisms, an important issue for rural farmers and traders.
How technologies affect women as workers
Automation will likely eradicate many jobs in low-skilled sectors, where women make up a higher share of the labor force (Autor, 2015). For example, many jobs in manufacturing and in office and administrative support occupations are predicted to be at risk of being automated (Frey and Osborne, 2017; Goos et al., 2014). Brussevich et al. (2018) estimated that women on average perform more routine tasks than men across all sectors and occupations, and thus automation could cause more female jobs to disappear. However, at the same time, thanks to technological change, many jobs are being transformed and new jobs are being created (Bessen, 2015). As analyses of data on the United States, Japan and European countries reveal, employment shares for skilled workers, such as high-paid professionals and managers, have been rising in the industries that experienced the fastest growth in ICT (Michaels et al., 2014; Goos et al., 2014). ICT substitutes for unskilled workers in performing routine tasks, but it complements skilled workers in executing non-routine abstract tasks (Akerman et al., 2015; Atalay et al., 2018).
In a nutshell, technology will likely cost women jobs, but it will also likely create new employment opportunities for women. Technology has the potential to empower the female labour force through different channels, and these are explored below.
Increasing demand for female skills
Technology-driven shifts in demand for female-specific skills can enhance women's economic opportunities and reduce the wage gap between genders. Women are well positioned to gain from a shift in employment toward non-routine occupations, and away from physical work.
Their often superior social skills present a comparative advantage in the age of digitalization (Krieger-Boden and Sorgner, 2018). According to Deming (2017), high-paying jobs increasingly require social skills and the fastest growing cognitive occupations -managers, teachers, nurses and therapists, physicians, lawyers, even economists -all require significant interpersonal interaction.
Using data from West Germany, Black and Spitz-Oener (2010) found that women have witnessed relative increases in non-routine analytic and interactive tasks, which are associated with higher social skill levels. Similarly, Cortes et al. (2018) found that since 1980 the probability of working in a cognitive/high-wage occupation has risen for college-educated women in the US. This is attributed to a greater increase in the demand for female-oriented skills, in particular social skills, which have gained importance in cognitive/high-wage occupations relative to other occupations. Also, Lindley (2012) documented demand shifts in favor of skilled women that are positively correlated with technical change and occurred mainly in sectors where social skills are important, such as the education and health sectors.
Some empirical evidence shows that women benefit disproportionally from technological upgrading induced by access to foreign markets. Juhn et al. (2013) documented that the increased access to the US market after the entry into force of the North American Free Trade Agreement (NAFTA) induced more productive firms in Mexico to modernize their technology in order to enter the export market. These new technologies involved computerized production processes, which disproportionally benefited female workers since they lowered the need for physically demanding skills. As a result, the relative wage and employment of women improved, especially in blue-collar tasks.
Overcoming time and mobility constraints
A growing literature on the gender income gap shows that women are less flexible to work long hours given their responsibilities as primary caregivers. This disadvantage pertaining to inflexible working hours for women is particularly pronounced in exporting firms, since exporters may require greater commitment from their employees, such as working particular hours to communicate with partners in different time zones or travelling at short notice, and they may therefore disproportionately reward employee flexibility. In a paper studying the Norwegian manufacturing sector, Bøler et al. (2018) find that a firm's entry into exporting increases the gender wage gap by about 3 percentage points for college-educated workers.
Facilitating households' access to financial services
ICT can also benefit female consumers by decreasing discrimination they might face when purchasing offline. A study examining racial discrimination found that compared to offline, minority buyers pay online nearly the same prices as do white consumers, controlling for consumers' income, education, and neighborhood characteristics. The Internet facilitates information search and removes cues to a consumer's willingness to pay. The results imply that the Internet is particularly beneficial to those whose characteristics disadvantage them in negotiating (Morton et al., 2003).
Comparing traditional markets with the Internet, Rezabakhsh et al. (2006) show that the Internet enables consumers (a) to overcome most information asymmetries that characterize traditional consumer markets and thus obtain high levels of market transparency, (b) to easily band together against companies and impose sanctions via exit and voice, and (c) to take on a more active role in the value chain and influence products and prices according to individual preferences. Since women often assume most household chores, including purchasing for the household, these results are likely to apply more to female consumers and illustrate how ICT benefits, or could benefit, women as consumers.
The inter-relationship between online consumption, the growth of e-commerce-related sectors and the impact on jobs is another important area relating to the gender dimension of e-commerce consumerism and its link with women's uptake of technology. In recent times, strong competitive forces, leaving a struggling "bricks and mortar" retail sector in the face of competition from e-commerce players, have led to concerns regarding the loss of traditional sectors.
The case study below, based on an analysis of new empirical data, illustrates the time-saving, efficiency, and other beneficial aspects of e-commerce for women as online consumers, its link with reducing women's household burdens and supporting women's needs to meet household responsibilities, and what appears to be its interface with a virtuous cycle feeding into the growth of jobs and e-commerce-driven sectoral growth in the economy.
Case study 1: E-commerce substitutes paid market time for household shopping time

The study assessed linkages between women's online consumer behavior and its potential impact on employment trends in e-commerce-related jobs. The analysis made use of data from the Occupational Employment Statistics (OES) and US time use surveys. The dominant category of employment which has expanded in the current supply chain boom is "transportation and material moving operations", which is mainly male-dominated. Noting that the distribution of household hours and market hours varies by gender, the study proceeded to explore whether there may be benefits for both men and women in e-commerce.
In particular, the study examined how e-commerce also replaces unpaid household hours of shopping time with paid market hours of work in warehousing and transport, as illustrated in the figure below.
[Figure: The Shopping and E-commerce Occupations, by Gender (2017); series: Men, Women]
New Frameworks for Analyzing the Gender Dimensions of E-commerce
The new frameworks help to contextualize key issues emerging from the literature review with consideration to the gender dimensions of the following: (1) technology and trade dynamics at the micro level in e-commerce; and (2) e-commerce dynamics within the context of a holistic, integrated framework for the e-commerce ecosystem.
The first framework provides a normative lens for looking at issues relating to the use of ICTs and e-commerce for the empowerment of women at the micro level, in particular women traders and women entrepreneurs. The second framework provides a method for cross-sectoral gender analysis of e-commerce, taking into consideration the multiple and inter-related enabling pillar areas which lay the foundation for e-commerce growth.
Both frameworks highlight the importance of considering the gender dimension in electronic platforms which emerge as a critical success factor for growing e-commerce in a country.
This includes e-marketplaces -which in the past two decades have been the most widespread and typically representative form of electronic platform in e-commerce -as well as e-retailing, e-auction, e-services, e-payment, e-government, and crowdsourcing platforms. Inter-related with the issue of electronic platforms are issues relating to both businesses and consumers and the inter-relationship between them, including issues of business competitiveness, household consumption and the growth of supporting services and industries.
Framework 1: The gender prelude to digital trade -Empowering women traders and entrepreneurs for e-commerce through ICTs
Studies from the literature review provided evidence of numerous ways in which digital technologies reduce the cost of trade and barriers to entry for a wide range of women traders and entrepreneurs. The case studies also provided evidence of how certain e-commerce tools were playing an important role in empowering women entrepreneurs whether their trade be in goods or services.
In an increasingly digitalized world, digital technologies have become an essential tool for running a competitive business for women across all walks of life. The range of potential benefits ICTs offer is extensive. For the segment of high-growth-oriented women entrepreneurs, this includes access at low cost or no cost to crucial information in areas such as business development, market and pricing information, production technologies, compliance, forecasts and training. Affordable access to digital technologies is increasingly becoming an imperative for entrepreneurs to participate in the global value chains and to create the necessary seamless and efficient back office administration.
On the other end of the spectrum, women entrepreneurs are largely under-represented in contrast to men as business owners of formally registered enterprises. In low-income countries, women-owned businesses tend to be clustered in the micro and informal sector.
Women entrepreneurs in the informal sector have limited legal rights, social protection, status or recognition. Compounding their disadvantage and vulnerability due to lack of literacy, skills, access to economic resources, and financing, the gender digital divide is particularly prevalent among this segment of women entrepreneurs (UNCTAD, 2014). The framework illustrating the interplay of these diverse factors is shown below (see Figure 4). The framework sets out a four-step gender and e-commerce analytical process; the four mapping steps are set out in Figure 4. The Framework also highlights the range of strategic objectives policymakers may wish to pursue. These may range from gender measures and policies targeting high-growth women entrepreneurs (for example, women entrepreneurs with technical expertise in the technology sector), aimed at leveraging e-commerce to spur major GDP growth and job creation, to those targeting micro and small subsistence women entrepreneurs and traders, aimed at supporting family livelihoods, providing a wide social safety net for low-income families, reducing poverty and promoting social inclusion.
[Figure 4 elements include: The Diversity of ICT Tools and Technologies; The Confluence of Strategic Considerations]
Framework 2: Gender Dimensions of the UNCTAD Integrated Enabler and Assessment E-commerce Framework
Studies from the literature review made clear that ensuring women's effective participation in e-commerce requires a multiplicity of simultaneous favourable conditions ranging from the existence of sound ICT infrastructure and women's access to and use of ICTs and internet to the availability of payment solutions and logistics and delivery services. The review also showed a strong gender dimension in the high interplay and inter-relationship between various sectors in the economy for the growth of e-commerce. For this reason, a second framework is provided which sets out an integrated, holistic diagnostic approach to analyse the gendered dimension of e-commerce and to support the effective identification of policy measures, recommendations, and actions.
In order to build a more gender-inclusive e-commerce sector, it is essential to enable women in this field to participate in policy-making dialogues. For this purpose, UNCTAD launched the eTrade for Women initiative, which aims at enabling women digital entrepreneurs in developing countries to be more prominent contributors to policy processes and to become levers for the prosperity of their regions.
UNCTAD's ICT Policy Review Integrated E-commerce Enabler and Assessment Framework - comprised of eight key pillars in enabling areas for e-commerce - provides a systematic, integrated and holistic analytical framework for conducting e-commerce diagnostics. Relating to the Framework's first pillar, for example, a gender analysis for e-commerce may wish to take into consideration issues relating to ICT infrastructure and telecommunications and related services, including the IT sector. This may include: the gender connectivity gap and ensuring women's equitable access to and use of ICTs; the need for women-friendly IT support for women entrepreneurs; the need for IT and ICT literacy for women; ensuring women's representation as tech entrepreneurs; and the need to modernize women entrepreneurs' IT back-office processes, to name a few.
The Framework also helps to highlight and elucidate linkages among and between sectors in the e-commerce ecosystem. The Framework's Pillar 2 on logistics and trade facilitation and Pillar 7 on consumer market issues, for example, serve to highlight and explore interlinkages such as those described in the case study "E-commerce substitutes paid market time for household shopping time", which provided empirical evidence of the correlation between women's online consumer behavior and the generation of jobs in the logistics sector.
The gender dimensions that the Framework surfaces under each pillar include the following:

Pillar 2: Logistics and Trade Facilitation

Pillar 3: Electronic Payments
• Gender-inequitable access to e-payments. Women's lack of collateral and challenges in accessing bank accounts and loans. Prevalence of subsistence women entrepreneurs in the informal sector and preference for cash. Government social protection e-payments to women. Women and financial inclusion. The need for payment security knowledge and awareness among women. Financing, microcredit and crowdsourcing for women's businesses.

Pillar 4: Legal and Regulatory
• Legal and regulatory measures to support women's small businesses. Discriminatory laws and regulations, including local laws, which constrain women's access to ICTs. Women's representation and leadership in government, regulatory, policymaking and other decision-making spheres for e-commerce and ICT. Women entrepreneurs and cybersecurity. Consumer protection for women purchasing online.

Pillar 5: Electronic Platforms
• Electronic platforms for women's empowerment. Women's participation in e-commerce platforms and e-services. The use of websites, e-marketplaces, B2C and B2B channels to market women entrepreneurs' goods and services. Women and freelancer platforms. Women and the sharing economy. Women and the gig economy. Women as e-commerce consumers. Women and digital data.

Pillar 6: Skills Development and Building Talent, including MSME development
• Building women's leadership and talent in e-commerce. E-commerce, ICT and entrepreneurial training for women entrepreneurs. Empowerment of women's micro and small enterprises through ICTs. Vocational skills training for women in e-commerce. Women's education in IT and STEM. The gender gap in STEM. The gender pay gap across sectors. Women in e-commerce management. Business development support services for women entrepreneurs. Women-friendly curricula in e-commerce, ICT and IT education. Incubation and accelerator programs for women e-commerce entrepreneurs.

Pillar 7: Awareness raising, including consumer awareness and consumer market issues
• Women entrepreneurs' awareness of the value proposition of e-commerce. Awareness-raising campaigns to promote women in e-commerce. Women's consumer awareness of e-commerce. Women in e-commerce market research. Women in e-commerce marketing and advertising. Women in the e-commerce retail market.

Pillar 8: Electronic procurement, including government e-services
• E-procurement measures to support women entrepreneurs. Women entrepreneurs in B2B. Women's access to government e-services. Women's businesses as suppliers of e-commerce goods and services.

Conclusion: Challenges for women in digital transformation

We have so far examined how digital technologies offer opportunities for women and identified several mechanisms through which technologies empower women entrepreneurs for trade, help women as workers, and benefit female consumers. These include reducing the cost of trade and barriers to entry for women entrepreneurs, opening new opportunities to trade in services, and facilitating women's access to training, knowledge and finance.
Yet technology is not a silver bullet for resolving all the gender gaps in trade. The benefits of digital technologies do not automatically accrue to women without well-designed and specifically targeted policies. For this reason, two new frameworks have been provided in this paper, aimed at contributing to the body of analytical tools and frameworks available to the development community and gender practitioners to support strong gender analysis on e-commerce and the formulation of effective policy measures, recommendations and strategies to leverage e-commerce to close the gender gap in trade.
Effective measures are needed to address several persisting challenges faced by women in the digital economy. Some of these largely carry over from challenges pre-dating the digital economy, for example women's inequitable access to key resources in developing countries. The first persisting challenge in reaping the benefits of digital technology is the continuing gap in access to digital infrastructure and digital technologies, both across countries and between genders. Developing countries, especially least-developed countries (LDCs), lag behind in all indicators of ICT development, but especially so in access to broadband Internet and mobile access. The disadvantages in terms of internet access are magnified by other obstacles, including low download and upload speeds and relatively expensive broadband services compared to income levels in developing countries. These factors, in turn, make consumers in these countries less likely to use the internet for economic purposes (UNCTAD, 2017). Similar divides exist within countries, particularly between men and women. Recent estimates by the Global System for Mobile Communications (GSMA) reveal that 184 million fewer women own mobile phones in low- and middle-income countries compared to men. Women are on average 10% less likely to own a mobile phone than men, in part due to costs and gendered social norms (GSMA, 2018; Uruquieta and Alwang, 2012). Even when women own mobile phones, there is a significant gender gap in usage, particularly for more transformational services such as mobile internet. Over 1.2 billion women in low- and middle-income countries do not use mobile internet. Women are, on average, 26% less likely to use mobile internet than men (GSMA, 2018).
Even in those countries where progress has been made in women's access to ICTs and ecommerce tools, for example mobile phones and internet, merely possessing this access is not enough, as many women continue to lack the ability to make effective use of these ICTs and tools. Further efforts are needed to eliminate this "gender digital use divide".
Secondly, discrimination linked to gender stereotypes still presents a challenge for women to flourish economically and socially. Though online platforms can reduce gender-based discrimination, they do not eliminate gender bias. Kricheli-Katz and Regev (2016) found that women sellers on eBay receive on average about 80 cents for every dollar a man receives when selling an identical new product, and 97 cents when selling the same used product, even though women receive higher reviews. Although eBay does not reveal the gender of its users in online transactions, it appears that the price differences can still be attributed to the ability of buyers to discern the gender of the seller, though more study is needed to clarify the underlying dynamics. Another study shows that digitization could even worsen discrimination against women (Malmstrom and Wincent, 2018). In particular, it shows that bankers' analysis of a borrower's creditworthiness had a greater gender bias against women when the bankers made their loan decisions based only on data and papers than when the bankers met the borrower (Malmstrom and Wincent, 2018).
Another imminent challenge is the gap in education and skills needed for women to benefit from technological advancements. For example, numeracy task inputs are highly correlated with technological change in the digital age (Lindley, 2012). Graduates in fields that provide useful numerical skills, such as the physical sciences, mathematics, computer science, engineering, economics and business, often see their productivity increase.
These fields typically have fewer female graduates (Lindley, 2012; Akerman et al., 2015). Similarly, in developing countries, women face constraints in making the most of technological change due to low levels of language and technical literacy (Geldof, 2011; GSMA, 2018; Wyche and Steinfeld, 2015). A study of Ugandan farmers in the Kamuli District finds that approximately 62% of men and 32% of women use the calculator function of the mobile phone to calculate proper market prices. Not understanding how to use the calculator function was the main reason for not doing so, given by 40% of the women surveyed compared to 13% of men (Martin and Abbott, 2011). Another study, by Scott, McKemyey, and Batchelor (2004), reveals that many women in rural Uganda were not using mobile phones because of the cost of making a phone call and their lack of knowledge of how to use the device.
However, the underlying challenges that prevent women from benefiting from technological change are structural issues rooted in cultural and social norms. Gender and racial inequality may thus be exacerbated by automation, as women and some ethnic minority groups are more likely to work in the lower-skilled jobs that are susceptible to automation (IPPR, 2018). Some preliminary analysis of women-led enterprises on Alibaba reveals that (1) the share of women-led enterprises is higher on e-commerce platforms compared with offline businesses; (2) women-led enterprises are on average smaller in size compared with men-led firms, although among the larger firms online the average sales of women-led firms are higher than those of men-led firms; (3) women entrepreneurs are more successful in highly differentiated product sectors, including cosmetics, clothing, grocery and baby products; and (4) women entrepreneurs are less likely to borrow through microloans but are more likely to repay.
Online platforms drastically reduce the entry barrier to starting a business and allow small enterprises to access the market. Half of the start-ups registered on Tmall.com, a platform for B2C sales of higher-quality products, have less than 5,000 RMB in registered capital (about $720), and the average registered capital of a business on Tmall.com is about 210,000 RMB (roughly $30,000).
This figure is significantly lower than the average registered capital of a typical Chinese enterprise at 5 million RMB (China State Administration for Industry and Commerce, 2015).
As a result of the lower entry cost, more than 7 million small businesses are registered on Alibaba platforms. Among these enterprises active on the platforms, over half (50.79%) are led by women.
Table 1 shows the relative share and the size of women-led firms according to their annual sales revenue. The share of female entrepreneurs is higher among small- and micro-sized firms and relatively lower among larger firms. However, the average revenue of women-led firms displays a reverse trend: women-led firms have higher average sales among large firms, but lower average sales revenue among smaller firms. The reason behind this trend may be that women are more likely to open an online shop as a part-time job, which prevents them from scaling up the business, although more evidence is needed. Note: online firms are divided into four groups according to their annual revenue (the banding is also sketched in code after this list):
• an online shop with over 5 million RMB in annual revenue is considered a large-sized online firm;
• an online shop with annual revenue between 1 and 5 million RMB is considered a medium-sized firm;
• an online shop with annual revenue of less than 1 million RMB is considered a small firm; and
• an online shop with less than 360,000 RMB in annual sales is considered a micro-sized firm.
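The banding rule in the note above translates directly into code; a minimal sketch in Python, in which the thresholds come from the note and the sample revenues are purely hypothetical:

```python
def firm_size(annual_revenue_rmb: float) -> str:
    """Classify an online firm by annual revenue (RMB), per the note above."""
    if annual_revenue_rmb > 5_000_000:
        return "large"
    if annual_revenue_rmb >= 1_000_000:
        return "medium"
    if annual_revenue_rmb >= 360_000:
        return "small"
    return "micro"

# Hypothetical sample revenues, in RMB.
for revenue in (120_000, 800_000, 2_500_000, 9_000_000):
    print(revenue, "->", firm_size(revenue))
```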
Women-led online firms are concentrated in specific sectors and product groups. Figure 6 depicts the share of women-led enterprises in different sectors and product groups: 67% of firms selling cosmetic products are led by a woman, and the share of women-led enterprises is 60% for baby products, 54% for clothing, 52% for jewellery and accessories, 52% for groceries and 51% for bags, shoes and suitcases.
Although women-led firms account for half of all firms selling on Alibaba platforms, their average sales are lower than those of men-led firms. On average, the sales revenue of women-led firms is 18% lower than that of firms led by men. Somewhat surprisingly, in sectors where women are dominant in numbers of firms, women-led enterprises have relatively smaller sales. Median sales are lower for women-led enterprises in all sectors compared with men-led online shops, although the difference is smaller in sectors such as digital products and services. Women entrepreneurs online also tend to be younger: the average age of a female shop owner on Alibaba is 32, whereas women entrepreneurs starting businesses offline in China are on average 48 years old. Close to one-third of all women entrepreneurs starting up a business on Alibaba are between 25 and 29 years old (33%), followed by women between 18 and 24 years old (28%).
Financial services such as loans, savings, and money transfers help businesses build assets, increase their income and become less vulnerable to economic stress. Ant Financial, an affiliate of China's Alibaba Group, uses Internet-based financing to expand lending to more Chinese micro and small enterprises and women-owned businesses. Its microloan business, Ant Credit, provides microloans to small businesses and individual entrepreneurs over the Internet. It evaluates potential borrowers' creditworthiness based on transactional and behavioural data, such as timely delivery of products and settling of bills, which is gathered as they do business online. Ant Credit's clients are mostly small businesses, more than half of which are owned by women, on Alibaba Group's online marketplaces such as Taobao.com and Tmall.com (IFC, 2017).
Data from Ant Credit indicate that women are less likely to borrow online but have a higher rate of repayment. On average, only 36% of all women-led enterprises operating on Alibaba borrow through Ant Credit. The amount of the loans women apply for is on average 6% higher than the loan requests filed by men, and the average approved loan amount is slightly (2%) higher for women than for men. Remarkably, the default ratio is significantly lower for women who borrow: the default rate within 90 days is 27% lower for female than for male borrowers, and the financial non-performing rate is 28% lower for women than for men.
This evidence echoes findings in studies based in developed countries, where women business owners are found to receive significantly less early-stage capital than men, yet businesses founded by women ultimately deliver higher revenue than those founded by men (BCG, 2018). More in-depth research is needed to investigate the reasons behind the lack of finance for women.
Southeast Asia
The study sought to examine the depth of e-commerce engagement in South Asia and South East Asia, and to identify the priority challenges to e-commerce faced by men and women in different types of companies, e.g. small vs. large companies, non-exporters vs. exporters, offline sellers vs. online sellers. 6 The study found that the share of women-led firms engaged in e-commerce remains small and reflects the share of women-led firms in the overall economy. The response rate of firms in the survey with male CEOs was approximately 70% higher than that of firms with female CEOs. Analysis of responses from responding traders, however, found that there were no meaningful differences between women-led and male-led firms in terms of their ecommerce performance and that women-led firms face the same regulatory barriers as their male equivalents.
[Footnote 6] This survey complements the World Bank's efforts on building fundamental ecommerce diagnostic tools such as the Enterprise Surveys and the Doing Business platform which, inter alia, provide an overview of the effects of customs procedures and trade regulations. The survey was done in partnership with NexTrade Group to systematically collect data on issues critical to ecommerce players, such as legal liability rules for internet intermediaries and the quality and cost of urban last-mile delivery, particularly at a regional level. A sample of 2,880 merchants and 1,174 ecommerce ecosystem firms in both South East Asia and South Asia were surveyed. In Southeast Asia, the survey was run in March-May 2018 and covered 1,192 merchants in nine major sectors, and 635 ecommerce ecosystem firms (such as ecommerce and payment platforms and logistics, financial services, and IT services firms) that service merchants across five economies (Indonesia, Malaysia, Myanmar, Thailand, and Vietnam). In South Asia, the survey was run in March-May 2018 and covered 1,688 merchants in nine major sectors, and 539 ecommerce ecosystem firms (such as ecommerce and payment platforms and logistics, financial services, and IT services firms) that service merchants across seven economies (Afghanistan, Bangladesh, Bhutan, India, Nepal, Pakistan, and Sri Lanka) (Suominen, 2018).
Data from responding firms furthermore showed that firms with female CEOs were 18.6 percentage points more active than their male counterparts in two-way trade, i.e. both selling and purchasing online (see Figure 7). Aggregating the two-way traders and those who sell online, the female/male distribution was relatively the same, i.e. 55.5% and 57.6%. The analysis further found that gender, coupled with whether the survey respondent was a firm owner or a CEO, was a deciding factor in whether firms tended to engage in ecommerce. Firms in both South Asia and South East Asia that had female survey respondents reported that they were more engaged in e-commerce as both buyers and sellers online. In South Asia, the likelihood of this increases for firms with female owners/CEOs. Additionally, the analysis also revealed that male survey respondents and firms with male CEOs are more likely to be neither buyers nor sellers online, particularly in South Asia. Firms with male CEOs in South Asia tended to sell online, whereas firms with a male owner had a higher likelihood of being an online buyer. In South East Asia, male survey respondents were more likely to be sellers online than their female counterparts.
Key impediments reported by respondents as preventing merchants from engaging in ecommerce within the country and across the border were: a poor regulatory environment for doing business, minimal access to trade finance, low technical capacity to engage in ecommerce, lack of access to online payment mechanisms, difficulties in ecommerce-related logistics, lacking or unclear digital regulations, and a poor degree of connectivity and IT infrastructure. Merchants in South Asia who both sell and purchase online (Figure 8) tended to rate the overall regulatory environment for doing business as the largest impediment to doing cross-border ecommerce (7.17/10) and regulations on e-commerce and digital services (6.78/10) as the smallest. With regard to buying and selling online, the highest impediment reported was the team's ability to engage in e-commerce (7.04/10) and the lowest was ecommerce-related logistics (6.62/10).
Figure 8: Impediments to ecommerce by merchants who both sell and purchase online in South Asia
The data showed that firms with female CEOs generally had a more optimistic view of the existing impediments to ecommerce trade within the country and across the border (Figure 9). On average, they rated the existing impediments at 7.65/10, whereas their male counterparts rated them at 6.64/10. Taking into account variables such as firm size, the CEO's gender, and the firm's past history of export activity, information from responding firms indicated that there tend to be minor differences in the experiences of firms with male and female CEOs among small firms that export and engage in e-commerce. However, the data did show that, within the country, small firms (under 50 employees) with female CEOs tended to suffer more from inefficiencies in IT connectivity and infrastructure and in digital regulations.
For cross-border e-commerce, medium firms (51-500 employees) with female CEOs reported gaps in connectivity and IT infrastructure, e-commerce-related logistics, online payments, and shortages in the team's capacity to engage in e-commerce as the major constraints. The analysis found that there exist differences in the types of exporting sectors in which men and women are engaged. By analyzing the types of sectors in which female exporters are most active, it is shown, for example, that in Bangladesh in South Asia, female-led exporting firms were most engaged in exports relating to 'jewelry and fashion accessories', as shown in Figure 10 below. Other export destinations include China, India, Japan, Korea, Indonesia, Malaysia, Thailand, Vietnam, Singapore, Pakistan and 'other Asia'.
Source: World Bank Group
Female-led firms' inclination to export was evident in the data: these firms reported export sales 4.5-6% higher as a share of total sales in 2016-17 than the firms of their male counterparts. Additionally, firms with female CEOs in South Asia reported that they were 11% more likely to export to foreign markets in 2016-17 than firms with male CEOs.
In summary, the study found that on average, interviewed firms with female CEOs are 18.6 percentage points more active than their male counterparts in selling and purchasing online.
Additionally, on average, interviewed firms with female CEOs reported that the impediments to engaging in e-commerce were not as significant for them, as they were for men.
Small, medium and large firms engaged in both selling and buying online have a higher tendency to export than those firms that neither buy nor sell online. Notwithstanding the numerous impediments to engaging in ecommerce, female-led firms (CEOs) have a tendency to export more. For small firms that export, there are minor differences between the impediments faced by firms with either gender of CEO.
However, given the small sample size of female respondents/ firms with female CEOs, it is difficult to make representative inferences. Recognizing that both men and women face the same impediments to engaging in ecommerce, the main policy message would be for governments to address the regulatory challenges which include, inter alia, customs procedures for e-commerce in both imports and exports, logistics costs and digital regulations.
The application of data mining techniques for the regionalisation of hydrological variables
Flood quantile estimation for ungauged catchment areas continues to be a routine problem faced by the practising Engineering Hydrologist, yet the hydrometric networks in many countries are reducing rather than expanding. The result is an increasing reliance on methods for regionalising hydrological variables. Among the most widely applied techniques is the Method of Residuals, an iterative method of classifying catchment areas by their geographical proximity based upon the application of Multiple Linear Regression Analysis (MLRA). Alternative classification techniques, such as cluster analysis, have also been applied but not on a routine basis. However, hydrological regionalisation can also be regarded as a problem in data mining — a search for useful knowledge and models embedded within large data sets. In particular, Artificial Neural Networks (ANNs) can be applied both to classify catchments according to their geomorphological and climatic characteristics and to relate flow quantiles to those characteristics. This approach has been applied to three data sets from the south-west of England and Wales; to England, Wales and Scotland (EWS); and to the islands of Java and Sumatra in Indonesia. The results demonstrated that hydrologically plausible clusters can be obtained under contrasting conditions of climate. The four classes of catchment found in the EWS data set were found to be compatible with the three classes identified in the earlier study of a smaller data set from south-west England and Wales. Relationships for the parameters of the at-site distribution of annual floods can be developed that are superior to those based upon MLRA in terms of root mean square errors of validation data sets. Indeed, the results from Java and Sumatra demonstrate a clear advantage in reduced root mean square error of the dependent flow variable through recognising the presence of three classes of catchment. Wider evaluation of this methodology is recommended.
Introduction
Much of the work undertaken by the Engineering Hydrologist is dependent upon the interpretation and manipulation of recorded data. In general, the longer the period of record, the smaller the standard errors of estimate of hydrological design variables, such as flow quantiles. Hydrologists could therefore be said to have a vested interest in maintaining, if not expanding, the size and scope of hydrometric networks. Unfortunately, the attention to hydrometric activities, stimulated by the International Hydrological Decade from 1965-1974, has not been maintained, and the densities of measuring networks in many countries have decreased owing to a variety of causes, ranging from cost-saving measures to civil unrest. The general deterioration has been such that the World Bank Policy Paper on Water Resources Management (World Bank, 1993) observed that inadequate and unreliable data now pose a serious constraint to efficient water management in many countries. The problem is not simply confined to the developing world. According to Lanfear and Hirsch (1999), every year more than 100 US Geological Survey stream gauging stations with more than 30 years of record are being discontinued owing to shortfalls in funding.
¶ Expanded version of a paper presented to the 7th National Hydrology Symposium of the British Hydrological Society held in Newcastle-upon-Tyne, 4-6 September 2000.
Hydrologists have responded to this situation of sparse (or even reducing) gauging networks by developing increasingly sophisticated methods for the regionalisation of hydrological variables. In the modern informatic sense of the term, regionalisation provides a ready example of data mining, which may be broadly defined as the process of extracting useful knowledge and models from raw data stores. Data mining approaches encompass techniques of regression, classification, clustering, and change and deviation detection, each of which has already been applied in some form in hydrological regionalisation. However, to date, the potential of informatic tools, such as Artificial Neural Networks (ANNs), has not been fully explored. ANNs can be applied both to classify catchments according to their characteristics and to relate the 'pattern' of those characteristics to hydrological variables.
In this paper, the experience gained in two recent studies in which ANNs were applied for the regionalisation of flood quantiles is summarised and the results extended. Following a brief discussion of the type of data typically available for regionalisation studies, the configurations of ANNs that can be applied for the purposes of classification of catchments and the development of relationships between flood quantiles and catchment characteristics are described. The results obtained from three case studies relating to the southwest of England and Wales; England, Wales and Scotland; and the islands of Java and Sumatra in Indonesia are then summarised and compared with those obtained from a widely-used approach to hydrological regionalisation based upon Multiple Linear Regression Analysis (MLRA). The concluding section emphasises the advantages of applying ANNs and indicates the possible scope for further refinement.
Regionalisation
In attempting to regionalise a given set of hydrological variables, the engineering hydrologist is faced with a diversity of data. The required outputs of the regionalisation procedure are the values of the dependent variables as computed from the available records at the gauged sites within the region of interest. The inputs are those catchment and rainfall characteristics that are deemed to be influential in determining the magnitude of the desired outputs. The latter are usually confined to variables that can be derived from topographic maps of a consistent scale and date, or meteorological variables that are similarly mapped for climatological or engineering design purposes. The former may be further subdivided into those variables that describe the geomorphology of the catchment and those that pertain to its land use. Land use is most frequently described in terms of indices of the form given in Eqn. (1). Dating of the mapping employed is obviously critical, bearing in mind the rapidity of such processes as urbanisation, deforestation or the intensification of agriculture. The potential use of satellite imagery for this purpose has yet to be explored fully, but depends upon the further development of the appropriate tools and algorithms for converting emitted and reflected radiances into hydrologically relevant products. The scope of such indices is very broad, and their relative hydrological importance varies from climate to climate. Their choice is often heavily dependent upon the personal intuition of the analyst.
In contrast, the geomorphological descriptors of a catchment are widely known, but their inter-relationships are perhaps less well appreciated. The high explained variance of area, AREA, as a predictor of main stream length, MSL, (Hack's Law) calls into question the intrinsic value of derived variables, such as catchment form factor or SHAPE (the quotient of AREA and the square of MSL). There are many different ways of defining certain variables, such as main channel slope, but consistency in methods of extraction is possibly more important than selection of one particular form over another. The bifurcation, area and length ratios of the channel network are considered only infrequently, perhaps because of the time and effort required to compute their values for a large number of catchments. This constraint can, of course, be avoided if the analyses can be carried out using automated procedures on a digital elevation model.
In summary, the data forming the basis of a regionalisation study can be messy in the sense of variety of origin and method of computation. The hydrological variables themselves will often have been derived from different lengths of record, and maps to a common (relatively large) scale are not always available. However, the studies reported herein have been based upon published data from previous work (NERC, 1975; Institute of Hydrology and Direktorat Penyelidikan Masalah Air, 1983; Gustard et al., 1989; the current FRIEND European Water Archive) in which a high degree of quality control has been exercised. The questions to be answered by the analysis of these data are essentially two-fold: (1) Are the catchments to be analysed hydrologically homogeneous in the sense of belonging to one "region"? (2) Can some form of relationship be developed between the hydrological variables of interest and the (mapped) catchment and rainfall characteristics?
Question (1) is a matter of classification, whereas (2) requires the modelling of dependencies. Perhaps the most widely-applied procedure for hydrological regionalisation to date has been the so-called Method of Residuals, in which the classification and modelling are carried out simultaneously, with the appropriate models being developed by application of MLRA. Introduced by the US Geological Survey (see Dalrymple, 1960; Benson, 1962), this methodology has been widely adopted, most notably in the development of estimation procedures for ungauged catchments in the UK Flood Studies Report (FSR) (NERC, 1975). In brief:
i. the hydrological index variable (quantile) is regressed upon catchment and rainfall characteristics for the whole data set;
ii. the residuals, i.e. the differences between the observed and computed values of the index variable, are plotted geographically in order to identify groups of these differences that are similar in both magnitude and sign and can therefore be regarded as a sub-region; and
iii. the regression analysis is repeated for the sub-regions identified and then generalised across the whole region.
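A minimal sketch of steps (i) and (ii) in Python, using synthetic data in place of a real gauged data set (all variable names, coefficients and coordinates below are illustrative, not those of the FSR):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 60  # hypothetical gauged catchments

# Synthetic catchment characteristics and index floods (illustrative only).
df = pd.DataFrame({
    "AREA":  rng.uniform(10, 500, n),    # km^2
    "AAR":   rng.uniform(800, 2000, n),  # mm
    "SLOPE": rng.uniform(2, 60, n),      # m/km
    "east":  rng.uniform(0, 100, n),
    "north": rng.uniform(0, 100, n),
})
df["Q_index"] = (0.01 * df["AREA"] ** 0.9 * (df["AAR"] / 1000) ** 2
                 * np.exp(rng.normal(0, 0.3, n)))

# Step (i): regress the log index flood on log characteristics (MLRA).
X = np.column_stack([np.ones(n), np.log(df["AREA"]),
                     np.log(df["AAR"]), np.log(df["SLOPE"])])
y = np.log(df["Q_index"])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step (ii): residuals, to be plotted at each gauge and inspected for
# geographical groupings of like sign and magnitude (candidate sub-regions).
df["residual"] = y - X @ beta
print(df[["east", "north", "residual"]].head())
```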
The heavy dependence on geographical proximity in defining the sub-regions has often been criticised, and many authors have turned to the use of multivariate techniques, such as cluster analysis to define homogeneous regions and discriminant analysis to allocate an ungauged catchment to an appropriate region. However, any group of variables is capable of yielding clusters, and different groupings can be obtained if different algorithms and distance measures are adopted (see Nathan and McMahon, 1990, for a more detailed discussion). Nevertheless, the possibility that sites do not have to be geographically contiguous to form a subregion remains intuitively appealing. Furthermore, MLRA is constrained by the linearity assumption, which the transformation of variables can mitigate but not entirely eliminate. A possible alternative approach can be found in the pattern classification and feature detection capabilities of modern informatic tools, such as ANNs.
Artificial neural networks
An ANN consists of layers of processing units (to invoke the biological analogy, representing neurons) where each processing unit, or node, in each layer is connected to all nodes in the adjacent layers (representing biological synapses and dendrites). The selection of an appropriate architecture for the ANN depends upon the problem in hand and the type of learning algorithm (i.e. calibration procedure) to be applied. For example, a Kohonen network is commonly used for the classification of patterns in data sets. Since no outputs are provided for training purposes, the process of determining the weights is referred to as unsupervised learning. More generally, ANNs can be trained (i.e. calibrated) to provide the correct output response to a given input stimulus (supervised learning).
For this purpose, a multi-layer, feed-forward, perceptron-type ANN (MLP) has been found particularly suitable. Figure 1 illustrates the schematisation of a typical, three-layer MLP network of this type. The functioning of an ANN is perhaps best described by following the sequence of operations involved during training and implementation of the MLP network shown in Fig. 1. The vector of inputs is introduced at the nodes of the input layer. Each of these input nodes is connected directly to all nodes in the second, or hidden, layer, and the signals carried along these connections can either be amplified or inhibited by application of weights. Each of the hidden nodes in this second layer acts as a summation device for the incoming (weighted) signals. The total signal is then transformed into an output signal using an activation function, typically a sigmoidal function, which restricts the range of the output signal to a zero-to-one interval. The output signals from the hidden nodes in the hidden layer are in turn carried along weighted connections to the nodes in the output layer. If the ANN is to be trained to learn the relationship between a given set of inputs and outputs, then the weights must be adjusted iteratively until the computed and observed outputs agree within a predetermined level of accuracy using a standard algorithm. Although back propagation is one of the most widely-used algorithms, there are several different methods for weight optimisation, some of which have better generalisation abilities than others (see Maier and Dandy, 2000, for a comprehensive discussion).
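The sequence of operations just described (weighted sums, sigmoidal activations, and iterative weight adjustment by back propagation) can be sketched in a few lines of Python; the layer sizes, learning rate and single training pattern below are illustrative only, not the configurations used in the studies:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer MLP as in Fig. 1: 6 inputs, 4 hidden nodes, 2 outputs.
n_in, n_hid, n_out = 6, 4, 2
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))  # hidden-to-output weights

def forward(x):
    h = sigmoid(x @ W1)        # hidden nodes: weighted sum, then sigmoid
    return h, sigmoid(h @ W2)  # output nodes likewise

def train_step(x, t, lr=0.5):
    """One back-propagation step on a single (input, target) pair."""
    global W1, W2
    h, y = forward(x)
    delta_out = (y - t) * y * (1.0 - y)             # output-layer error signal
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)  # propagated to hidden layer
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

x = rng.uniform(0, 1, n_in)  # e.g. standardised catchment characteristics
t = np.array([0.3, 0.7])     # e.g. scaled distribution parameters
for _ in range(2000):
    train_step(x, t)
print(forward(x)[1])  # converges towards the target vector
```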
In contrast to the MLP network of Fig. 1, the Kohonen network, also referred to as a self-organising feature map (SOFM), requires no outputs for training purposes. This ANN is a classifying device that has only one layer of input nodes, one of output nodes and a set of weighted connections (Fig. 2). The network has to 'decide' which of the output nodes (i.e. the 'winner') is associated with a given input pattern, based upon a measure of similarity, such as Euclidean distance. In brief, the weight vectors are initialised with randomly selected values, and the first input pattern is presented to the network. The input pattern is compared to all the weight vectors using Euclidean distance, and the most similar vector and its output unit are selected. The 'winner' and its neighbours have their weight vectors updated so that they are moved closer to the input pattern. This pattern is repeatedly presented until the change in the weight vectors is smaller than a predefined threshold. A new input pattern is then presented, and the procedure is repeated. Similar input patterns 'fire' output nodes that are close together. In effect, each frequently-fired node defines a 'class' (although a group of adjacent nodes is usually the preferred choice for an individual class), and the input vectors that fire that node are the members of that class.
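A minimal sketch of this training procedure for a linear (one-dimensional) Kohonen network follows; the decay schedules for the learning rate and neighbourhood radius are one common choice among many, and the input data are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_sofm(patterns, n_out=10, epochs=200, lr0=0.5, radius0=2):
    """Linear SOFM: one weight vector per output node, Euclidean similarity."""
    n, d = patterns.shape
    weights = rng.uniform(patterns.min(), patterns.max(), (n_out, d))
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                      # decaying learning rate
        radius = int(round(radius0 * (1.0 - epoch / epochs)))  # shrinking neighbourhood
        for x in patterns[rng.permutation(n)]:
            winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            lo, hi = max(0, winner - radius), min(n_out, winner + radius + 1)
            weights[lo:hi] += lr * (x - weights[lo:hi])  # pull neighbourhood towards x
    return weights

# Placeholder for 101 catchments x 5 characteristics, standardised to zero
# mean and unit standard deviation as recommended by Kohonen (1995).
raw = rng.normal(0.0, 1.0, (101, 5))
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

w = train_sofm(z)
winners = np.argmin(np.linalg.norm(z[:, None, :] - w[None, :, :], axis=2), axis=1)
print(np.bincount(winners, minlength=10))  # the 'count map' of firings per node
```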
The neural network software employed in this work for both MLPs and SOFMs was the NeuroSolutions simulation environment developed by NeuroDimensions Inc. of Florida.
CLASSIFICATION OF CATCHMENTS
Hall and Minns (1999) applied a Kohonen network to classify 101 catchments in the south-west of England and Wales using five catchment characteristics listed in Volume II of the FRIEND Study (Gustard et al., 1989), supplemented by Volume IV of the FSR (NERC, 1975). The five characteristics were AREA in km², MSL in km, main stream slope in m km⁻¹ (S1085), mean annual rainfall in mm (AAR) and a soil index (SOIL). Values for the urbanisation index, URBAN, were also available, but were not included owing to the small range of values involved. Initially, the values were standardised to range between zero and one prior to analysis. However, in later work, the values were standardised to zero mean and unit standard deviation, as recommended by Kohonen (1995). Euclidean distance was used as the similarity measure. As noted above, the procedure for training a Kohonen network also involves the repeated presentation of the input data (catchment characteristics) until the output response has stabilised and the changes in the weights are negligible.
In this application, with 101 input patterns and two or more classes to be expected, the number of output nodes should be at least three times the number of classes anticipated. Ten output nodes were therefore adopted in a linear Kohonen network. The results are summarised in Fig. 3(a), which reveals a distinct clustering around three sets of adjacent output nodes. These 'classes' contain 25, 35 and 41 members, respectively. Of particular interest are the standardised cluster centres in Euclidean space for each grouping, which define what might be termed Representative Regional Catchments (RRCs). The de-standardised catchment characteristics for each of the three RRCs are summarised in Table 1(a), which shows that the variations between classes are essentially monotonic. In effect, Class I is composed of relatively small, steep catchments with approaching 2000 mm of average annual rainfall and a high SOIL index, and Class III represents larger, relatively flat areas with about 1100 mm of average annual rainfall and a notably smaller SOIL index. The Class II characteristics are intermediate between those of Classes I and III. Such groupings, and especially Classes I and III, are supportable from the hydrological viewpoint, i.e. small, steep catchments are expected to possess different response characteristics to large, flat drainage areas, which is a gratifying result for an unsupervised learning technique.
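The RRC characteristics in Table 1 are recovered from the standardised cluster centres by inverting the z-score transformation; a short sketch, with entirely hypothetical means, standard deviations and centre coordinates:

```python
import numpy as np

# Means and standard deviations of the raw characteristics (hypothetical
# values), in the order AREA (km^2), MSL (km), S1085 (m/km), AAR (mm), SOIL.
mean = np.array([150.0, 20.0, 12.0, 1300.0, 0.40])
std = np.array([120.0, 12.0, 10.0, 350.0, 0.10])

centre_z = np.array([-0.8, -0.7, 1.2, 1.5, 1.0])  # a standardised cluster centre
rrc = centre_z * std + mean                       # de-standardised RRC
print(dict(zip(["AREA", "MSL", "S1085", "AAR", "SOIL"], rrc.round(2))))
```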
This division of the 101 catchments into three classes, each of which contained representatives from both south-west England and Wales, may be compared with the regionalisation of the same areas adopted in the FSR (NERC, 1975). In the Report, south-west England and Wales are two distinct regions divided at the Bristol Channel, each of which has both a different equation for the estimation of the mean annual flood and a different growth curve connecting the ratio of the T-year flood to the mean annual flood to the return period, T.
Divergence between the SOFM and FSR classifications, the latter based upon the Method of Residuals, prompted a further study in which a new data set was compiled for the whole of England, Wales and Scotland. These new data were obtained from the catalogue of the FRIEND European Water Archive, from which seven catchment characteristics could be extracted for 219 catchments. These characteristics included AREA (km²), MSL (km), AAR (mm), SLOPE (m km⁻¹), station height (HTSTN, m), a soil index (SOIL) and the 10-year, 2-day rainfall depth (M102D, mm). For the classification of this data set, 20 output nodes and seven input nodes were employed for the linear Kohonen network. Again, Euclidean distance was used as the similarity measure. The results are summarised in the count map of Fig. 3(b), and indicate the existence of four classes. The RRCs corresponding to each of these classes are summarised in Table 1(b), which shows that small, steep catchments are now divided into two classes: one with low soil index and high AAR in the uplands, and the second with high soil index and low AAR in the lowlands. In addition, the larger catchments are divided between two classes. Additional support for the spatial distribution of the classes shown in Fig. 4 can be found in attempts to define coherent precipitation regions for the British Isles. For example, Gregory (1975) applied a variety of methods based on linkage analysis and factor analysis techniques, but found that the results obtained depended upon the technique applied. The direct solution of a principal component analysis gave regions with a distinct north-south orientation, whereas an obliquely-rotated solution provided boundaries running predominantly south-west to north-east and to a lesser extent from west to east. Regions of coherent precipitation variability have also been defined by Jones et al. (1997). Their nine regions are depicted in Fig. 5, which shows that indeed south-west England and Wales form one region designated SWE. Those authors also presented the correlations, one region at a time, for all nine areas. Their results indicated that western regions correlate most closely with western regions, and similarly for eastern regions, emphasising once again a north-south orientation of boundaries that relates to the frontal nature of the majority of precipitation in the British Isles. The Scottish regions are only weakly correlated with the regions in England and Wales, with northern Scotland (NS) showing the least correlation with all other regions. In contrast, north-west England (NW) provided the highest correlations with the other eight regions.
A third study (Hall et al., 2000) has recently been carried out on data from the contrasting climate of Java and Sumatra, obtained from the Flood Design Manual for Java and Sumatra (Institute of Hydrology and Direktorat Penyelidikan Masalah Air, 1983). The Data Appendix to the Manual provides information on the floods recorded at 50 sites in Java and 83 in Sumatra, along with 11 catchment characteristics for each site. These data represent the situation typical of a developing country, with the majority of records being over a short time span. For this exercise, attention was concentrated on the 48 sites in Java and the 44 in Sumatra suitable for annual flood analysis. Only six catchment characteristics (AREA, MSL, S1085 and AAR, along with lake and plantation indices) were retained for classification purposes. Euclidean distance was used as the distance measure and the data were mapped on to 15 output neurons in a linear Kohonen network. The most consistent and stable groupings obtained are summarised as a count map in Fig. 3(c), and the characteristics of the RRCs are shown in Table 1(c). The 'small' and 'large' groupings of catchments are again evident, but the annual rainfall totals are notably larger than in the previous case. Each class contains representatives from both Java and Sumatra. When the Method of Residuals was applied to the same data set, a four-variable equation for the mean annual flood (MAF, m³ s⁻¹) was obtained (Eqn. (2)), in which PLTN and LAKE, plantation and lake indices respectively, are defined as in Eqn. (1). When the residuals from Eqn. (2) were mapped, no discernible pattern emerged, in agreement with the findings of the Institute of Hydrology and Direktorat Penyelidikan Masalah Air (1983).
RELATION OF FLOOD QUANTILES TO CATCHMENT CHARACTERISTICS
In the Method of Residuals, the magnitude of an index flood is often related to selected catchment characteristics using MLRA. An MLP can also be used to relate the same sets of variables. For example, Muttiah et al. (1997) developed neural network models to relate the magnitude of the two-year flood to catchment area, average annual precipitation and mean basin elevation, all variables being transformed logarithmically. A similar approach was applied by Hall and Minns (1998) to relate the location and scale parameters of the Extreme Value Type I (EVI or Gumbel) distribution to six catchment characteristics (AREA, MSL, S1085, AAR, SOIL, URBAN) for the data from the south-west of England and Wales. The three-layer MLPs were trained by back propagation on 81 sites, with another 20 sites reserved for testing purposes. Since the data set was relatively small, no records were reserved specifically for cross-validation, but care was taken in determining the appropriate number of nodes in the hidden layer to avoid over-training. For 15 of the 20 verification sites, mean annual and 50-year floods could also be estimated using the FSR 'mean annual flood plus growth curve' approach (NERC, 1975). The results showed that the root mean square errors (RMSEs) of the ANN estimates were 39 per cent lower for the mean annual flood and 30 per cent lower for the 50-year flood than the FSR estimates. The results are reproduced in Figs. 6(a) and 6(b) for the mean annual flood and the 50-year flood respectively. A similar approach was applied to the data for Java and Sumatra (Hall et al., 2000), training ANNs on 66 sites with another 25 used for verification purposes, with between 4 and 12 input catchment characteristics and the same two EVI parameters as outputs. The results are summarised in Fig. 7, which shows the variation of RMSE with the number of independent variables. Each MLP was trained ten times with different randomised starting values for the weights, and some indication of the scatter is given by the band denoting plus and minus one standard deviation about the average RMSE. The best result in terms of the RMSE of the mean annual floods for the verification data set was obtained with eight catchment characteristics, but the improvement in RMSE over the regression equation (Eqn. (2) above) derived for 92 catchments was only marginal. However, when the data set was divided into the three classes indicated by the Kohonen network analysis summarised above, retaining the same catchments for training and verification, the RMSE for the mean annual flood was reduced to 65 per cent of that obtained by applying the regression equation. There are therefore considerable benefits to be gained from pursuing the sub-division of the original data set according to the results of the SOFM classification, in contrast to the absence of discernible sub-regions in the results from applying the Method of Residuals. Further confirmation of the benefits of classification could be obtained by repeating the analysis with random selections of catchments forming the regions, but at the time of writing this exercise has not been undertaken.
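Choosing the EVI location and scale parameters as network outputs makes any quantile available directly, since for the Gumbel distribution Q_T = u + a*y_T, with reduced variate y_T = -ln(-ln(1 - 1/T)); a short sketch, with hypothetical parameter values standing in for the ANN outputs:

```python
import math

def gumbel_quantile(location, scale, T):
    """T-year flood from EVI parameters: Q_T = u + a * y_T."""
    y_T = -math.log(-math.log(1.0 - 1.0 / T))  # Gumbel reduced variate
    return location + scale * y_T

u, a = 85.0, 30.0  # hypothetical network outputs, in m^3/s
for T in (2, 10, 50):
    print(f"Q_{T} = {gumbel_quantile(u, a, T):.1f} m^3/s")
```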
Concluding remarks
The FSR (NERC, 1975; Vol. I, Section 4.3.10) provided a simple method for evaluating the 'worth', in terms of the equivalent number of years of record, N, of a regression estimate of a flow quantile. Using this approach, in which the standard error of the (log) estimate is equated to the quotient of the (regional) coefficient of variation of annual floods and the square root of N, the equivalent record length is usually of the order of only one year. There is therefore considerable scope for improvement in the precision of regionalised flood quantile estimates. Such improvements can be sought in the two distinct steps of demarcating regions of similar flood behaviour and then relating catchment and rainfall characteristics to index flood magnitudes. In the widely used Method of Residuals, the two steps are applied iteratively, with the purpose of identifying geographical clusters of sites with similar magnitudes and signs of the differences between observed and estimated index floods.
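Rearranging that definition, N = (CV/SE)²; a one-line check with illustrative numbers (not values quoted in the FSR):

```python
def equivalent_record_length(cv_annual_floods, se_log_estimate):
    """Equivalent years of record N, where SE(log Q) = CV / sqrt(N)."""
    return (cv_annual_floods / se_log_estimate) ** 2

# Illustrative: a regional CV of 0.5 and a regression standard error of
# 0.45 (log units) give an equivalent record length of about one year.
print(equivalent_record_length(0.5, 0.45))  # ~1.23
```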
For the data sets from Indonesia, this approach failed to provide any evidence of such sub-regions, even when the islands of Java and Sumatra were considered separately. In contrast, when the data sets were analysed using a data mining technique involving unsupervised learning, three classes of catchment were identified for both Indonesia and south-west England and Wales, and four for England, Wales and Scotland. The technique applied was the Kohonen network, which in practice is more of a data sorting algorithm than a data classification tool (see Kohonen, 1995). The results obtained therefore often display distinct monotonic changes in the magnitude of the input variables between classes (see Table 1). In hydrological terms, the groupings separated the small, steep, high-rainfall catchments from the large, flat, lower-rainfall drainage basins. Similar sub-divisions (but with obviously different RRCs) were observed in the two contrasting climates of the British Isles and Indonesia. With a sample of the order of 50-100 catchments, a third class of drainage area consistently emerges with characteristics intermediate between the first two. When a data set of over 200 catchments for England, Wales and Scotland was analysed, the intermediate classes were better differentiated. Seemingly, the pooling of larger regional data sets leads to more supportable classifications of catchments.
In the analyses reported above, the input catchment descriptors were limited to those for which data were either already available or could be analysed with a reasonable expenditure of time and effort. The possibility remains that other descriptors might be introduced that would assist in defining the intermediate class more clearly. The application of a fuzzy classification technique to south-west England and Wales (Hall and Minns, 1999) demonstrated similar groupings of sites to the Kohonen network, but provided additional evidence of shared membership when three classes were postulated.
The properties of MLP-type ANNs as universal function approximators are well known (see, for example, Hornik et al., 1989), and therefore the extra 'worth' in the improvement in RMSEs of flood quantiles from verification data sets obtained with ANNs when compared with multiple linear regression equations is not unexpected. However, a particular advantage of the ANN approach is that the parameters of a specified form of frequency distribution can be chosen as network outputs in preference to the magnitude of a single flood quantile, thereby avoiding the additional complication of developing a regional growth curve.
The effects of sintering temperature and duration on the flexural strength and grain size of zirconia
Abstract Objective: This study investigated the effect of different sintering temperatures and times on the flexural strength and grain size of zirconia. Material and methods: Zirconia specimens (In-Coris ZI, In-Coris TZI, 120 samples) were prepared in a partially sintered state. Subsequently, the specimens were randomly divided into three groups and sintered at different final sintering temperatures and for various durations: 1510 °C for 120 min, 1540 °C for 25 min and 1580 °C for 10 min. Three-point flexural strength (for 120 samples, 20 samples per group) was measured according to the ISO 6872:2008 standards. The grain sizes were imaged by scanning electron microscopy (SEM) and the phase transitions were determined by X-ray diffraction (XRD). The data were analyzed using one-way ANOVA and Duncan tests (p < 0.05). Results: The highest flexural strength was observed in ZI and TZI samples sintered at 1580 °C for 10 min. The differences between the ZI samples sintered at 1510 °C for 120 min and those sintered at 1540 °C for 25 min were statistically insignificant. TZI samples sintered at 1510 °C for 120 min and those sintered at 1540 °C for 25 min also did not show any statistically significant differences. There were no visible differences in the grain sizes between the ZI and TZI specimens. The XRD patterns indicated a similar crystalline structure for both materials subjected to the three different procedures. Conclusions: The results of this study showed that the tested combination of a high sintering temperature and a short sintering time increases the flexural strength of zirconia.
Introduction
Zirconia is used for dental restoratives, such as crowns, bridges, implant fixtures and implant abutments [1] due to its suitable properties for dental prostheses. [2][3][4] The excellent mechanical properties of zirconia are attributed to the stress-induced transformation toughening mechanism, similar to that encountered in quenched steel. [5][6][7] Zirconia fixed partial dentures are used to replace posterior teeth because of the high flexural strength and fracture toughness of zirconia, which is used as the framework material. [8][9][10] Fractures of zirconia frameworks have rarely been reported. [11][12][13][14][15] In contrast, chipping of the veneering ceramic is a frequent complication. [11][12][13][14][15] From a clinical point of view, the stability of the system consisting of both the zirconia framework and the veneering ceramic is important.
To decrease the costs and simultaneously overcome the chipping issue, monolithic zirconia fixed dental prostheses without veneering ceramic are produced. Such restorations are esthetically unsuitable due to their high opacity. Sintering parameters have an effect on the crystalline content. [16][17][18][19] It has been shown that the holding time during sintering affects the grain growth in the material. [20] As the grain size increases, zirconia becomes less stable and more susceptible to spontaneous tetragonal-to-monoclinic phase transformations, which may result in a gradual strength decrease. [20] The monoclinic phase is stable up to 1170 °C; above this temperature, it transforms into the tetragonal phase, which remains stable up to 2370 °C. The cubic phase of zirconia, on the other hand, exists up to the melting point, 2680 °C. [21,22] The tetragonal form of metastable zirconia can be achieved at room temperature by alloying zirconia with other oxides (stabilizers), such as CaO, [23] MgO, [24] Y2O3 [25,26] and CeO2. [27] Y2O3 is the most widely used stabilizer for dental zirconia. [21] In response to tensile stresses at crack tips, the stabilized tetragonal zirconia transforms to the more stable monoclinic phase with a local volume increase of approximately 4-5%. [27] The differences in sintering parameters of zirconia can directly affect its microstructure and properties. [28] The extent of this effect has become of interest in the field of dental research, especially after the introduction of short sintering cycles by manufacturers. Several authors have studied the effect of changes in sintering time and temperature on the translucency, grain size and biaxial flexural strength of zirconia ceramics; however, the effect of these changes on the properties of zirconia remains in question. [29][30][31][32] Computer-aided design and computer-aided manufacturing (CAD/CAM) technologies enable the milling of zirconia into reconstructions with complex geometries. Two types of zirconia milling processes are currently available, i.e. soft-milling (partially sintered state) and hard-milling (fully sintered). Soft-milled frameworks are subsequently sintered to full density. Different sintering parameters may show a strong influence on the properties of the zirconia frameworks.
Computer-aided design and computer-aided manufacturing (CAD/CAM) chairside systems have reduced operation times significantly and allowed for the production of most prosthetic restorations in one visit, although zirconia needs a sintering procedure, which takes several hours. Rapid sintering procedures, which can be carried out in minutes, render the production of zirconia-based restorations possible in one visit and enhance its clinical use. Hence, translucent zirconia used for monolithic restorations was also included in this study. Use of translucent zirconia has the potential to eliminate delamination of the veneering ceramic, which has been known to be a common clinical problem, and also to reduce the amount of tooth preparation required. [33] The aim of this study was to investigate the effect of different sintering temperatures and durations on the flexural strength, grain size and phase transformation of zirconia. The tested null hypothesis was that a decrease in the final sintering time would decrease the flexural strength.
Materials and methods
Sixty zirconium oxide and 60 highly translucent zirconium oxide bar specimens (In-Coris ZI and In-Coris TZI, Sirona Dental Systems GmbH, Bensheim, Germany) were cut using a low-speed diamond saw (Isomet 1000, Buehler, IL). The ZI and TZI specimens were then randomly divided into three subgroups (each containing n = 20 samples) according to the sintering time and temperature. The employed parameters correspond to the fixed program numbers of the sintering furnace (inFire HTC Speed, Sirona, Bensheim, Germany): 1 (super speed) for groups C and F; 2 (speed) for groups B and E; 5 (classic) for groups A and D. The test groups are listed below.
Group A (ZI): Slow sintering program (regular); sintered at 1510 °C with a dwell time of 120 min. Total time is approximately 8 h.

Group B (ZI): Faster sintering program (speed); sintered at 1540 °C with a dwell time of 25 min. Total time is approximately 2 h.

Group C (ZI): Rapid sintering program (super speed); needs pre-heating to 1580 °C, starts at 1580 °C with a dwell time of 10 min. At the end of the dwell time, the furnace opens and the material is immediately removed. Total time is 10 min.

Group D (TZI): Slow sintering program (regular); sintered at 1510 °C with a dwell time of 120 min. Total time is approximately 8 h.

Group E (TZI): Faster sintering program (speed); sintered at 1540 °C with a dwell time of 25 min. Total time is approximately 2 h.

Group F (TZI): Rapid sintering program (super speed); needs pre-heating to 1580 °C, starts at 1580 °C with a dwell time of 10 min. At the end of the dwell time, the furnace opens and the material is immediately removed. Total time is 10 min (Table 1).
Three-point flexural strength tests
Three-point flexural strength (total number of specimens N = 120; number of samples per group n = 20) was measured according to the ISO 6872:2008 standards. [34] After sintering, the final dimensions of all the specimens were 1.2 mm × 4 mm × 25 mm. Before the flexural strength test, the dimensions of the specimens were measured with a digital micrometer (Absolute Digimatic Caliper, Mitutoyo, Tokyo, Japan) to an accuracy of 0.01 mm. The specimens were then placed in the appropriate sample holder and loaded in a universal testing machine (Shimadzu Model AGS-X, Kyoto, Japan) at a crosshead speed of 1 mm/min until failure. The specimens were tested dry at room temperature. The flexural strength was calculated according to the following formula.
σ = 3Nl / (2bd²), where σ is the flexural strength, N is the fracture load (N), l is the distance between the supports (mm), b is the width of the specimen (mm) and d is the thickness of the specimen (mm).
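For concreteness, the formula can be wrapped in a small helper; the specimen values below are purely hypothetical and are not taken from the study's data.

```python
# Three-point flexural strength: sigma = 3 N l / (2 b d^2), in MPa (N/mm^2).
def flexural_strength(load_n: float, span_mm: float,
                      width_mm: float, thickness_mm: float) -> float:
    """Return the flexural strength in MPa for a rectangular bar."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

# Hypothetical specimen: 200 N fracture load, 20 mm span, 4 mm x 1.2 mm section.
print(f"{flexural_strength(200.0, 20.0, 4.0, 1.2):.0f} MPa")  # ~1042 MPa
```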
Zirconia grain size
After sintering, the surfaces of selected specimens (N = 6, n = 1 per group) were polished up to a thickness of 1 mm with a diamond suspension (Struers, Ballerup, Denmark) and ultrasonically cleaned in isopropanol. The specimens were then dried at 50 °C for 24 h (Nüve Incubator EN 120, Ankara, Turkey) and coated with 9 nm gold-palladium particles (Cressington sputter coater 108 auto, Cressington MTM-20, Elektronen-Optik-Service, Dortmund, Germany), and the surface topography was evaluated using a scanning electron microscope (Evo LS10, Carl Zeiss, Oberkochen, Germany) at magnifications of 5000× and 15,000×.
Crystal structure analysis
Three specimens were selected randomly from each subgroup for X-ray diffraction (XRD) surface analysis to detect the amounts of the tetragonal and monoclinic phases present. The specimens were placed in the holder of a diffractometer (Bruker D8 Advance, Lynxeye detector) and irradiated with Cu Kα radiation. The spectrum was recorded within the range of 0-80° at a scan time per step of 1 min. The voltage and current were set to 40 kV and 40 mA, respectively.
Statistical analysis
The collected data were checked for normal distribution and analyzed using one-way analysis of variance (ANOVA), followed by Duncan tests (SAS; Statistical Analysis System, SAS Institute Inc., Cary, NC) at a significance level of p < 0.05, to determine the effect of sintering time and temperature on each of the variables tested.
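The analysis step can be sketched as follows, assuming the per-group strength values are available as arrays. The numbers are placeholders; scipy provides the one-way ANOVA, while the Duncan post-hoc test used in the study would require a dedicated post-hoc package.

```python
# Minimal sketch of the one-way ANOVA step with hypothetical MPa values.
import numpy as np
from scipy import stats

group_a = np.array([1020.0, 990.0, 1050.0, 1005.0])
group_b = np.array([1010.0, 1030.0, 995.0, 1015.0])
group_c = np.array([1180.0, 1210.0, 1165.0, 1195.0])

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A post-hoc multiple-comparison test (Duncan in the study) would follow here;
# scipy itself does not provide Duncan's test.
```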
Results
The mean flexural strength values (MPa) and standard deviations for each group are presented in Table 2.
One-way ANOVA showed statistically significant differences between the groups (p < 0.05). The mean flexural strength of group C (ZI, 1580 °C sintering temperature and 10 min total sintering time) was significantly higher than that of groups A and B. Also, the mean flexural strength of group F (TZI, 1580 °C sintering temperature and 10 min total sintering time) was significantly higher than that of groups D and E (Table 2). Scanning electron microscopy (SEM) images were used to observe zirconia grain sizes (Figures 1 and 2). The difference in grain size caused by the different sintering procedures is generally small and difficult to discern. Since no quantitative analysis was done in this study, we cannot conclude that there was no visible difference in grain size between the ZI and TZI groups.
The microstructure analysis of the specimens using XRD revealed that the peak positions in the spectra of the samples match the corresponding reference card for the tetragonal phase of ZrO2, within the resolution of the data (PDF# 01-089-6976) (Figures 3-5).
Discussion
The 1580 °C/10 min sintering program (groups C and F) led to the highest flexural strength values for both materials. For groups A, B, D and E, the sintering temperature and time combinations did not have a significant effect on the flexural strength of zirconia. Hence, the null hypothesis is rejected.
In our study, SEM images were used to observe the grain sizes visually. However, since visual observation alone is not sufficient for comparing grain sizes, quantitative analyses should be performed to determine differences. This study was undertaken to demonstrate the possible change in strength of sintered green bar-shaped zirconia specimens when the sintering conditions are varied. The main motivation behind undertaking this study was that the properties of some green-milled zirconia can reach higher strengths when the sintering temperature and time are altered. Whether the phase transformation from monoclinic to tetragonal is complete during the 10 min sintering process is also of interest. Clinically, shorter sintering times would also be beneficial for the rapid manufacturing of zirconia-based prostheses.
It is to be noted that the mechanical analysis in this study uses static loading tests, whereas dynamic (fatigue) tests would more closely resemble clinical masticatory forces. However, there is a correlation between static and fatigue properties, analogous to studies that describe different damage modes or strength degradation while comparing the results of cyclic loading and monotonic loading tests. [35,36] Itinoche et al. [37] found marginal differences in the flexural strength of zirconia obtained by static and cyclic loading tests, although the differences were statistically insignificant.
Crystal structure analysis revealed that all subgroups contained typical tetragonal phase grains. All specimens were completely sintered to the tetragonal phase and did not transform back to the monoclinic phase. This was expected in light of the absence of any physical or thermal treatment of the specimens. Stawarczyk et al. [30] reported that the grain size of zirconia increased with increasing sintering temperature. They also reported that the sintering temperature showed a significant negative correlation with flexural strength and concluded that the sintering temperature for zirconia should be limited to below 1550 °C. Our findings contradict their results, because we achieved higher flexural strength values with 1580 °C sintering. This difference can be explained by several factors: the different brands of zirconia used in the two studies, our narrower temperature range, and the fact that we altered the sintering times together with the temperatures. It is to be noted that all our specimens were sintered as-milled and were unpolished; thus, comparing the flexural strength values obtained in this study to the values reported in other studies using polished specimens is questionable.
The flexural strength of the groups sintered at 1580 °C with a dwell time of 10 min (super speed groups) was found to be significantly higher than that exhibited by the other groups (speed and regular groups) used in this study. Hjerppe et al. [38] investigated the interaction between the sintering time and the static biaxial flexural strength of zirconia. According to their results, shorter sintering times did not affect the biaxial flexural strength, while correlating with the surface composition of the samples. This can affect the durability of zirconia after water exposure, which can be clinically significant for monolithic zirconia restorations not covered by porcelain intraorally. In our study, shorter sintering times did affect flexural strength. We did not analyze the surface composition of the samples; such an analysis could be helpful for explaining this difference.
This study has limitations. First, only one brand of zirconia was used. The results may not be applicable to other brands with different grain sizes, and different manufacturers may have specific recommendations for sintering zirconia. Sintering with a shortened dwell time may also influence the density of the material, but this effect was not examined in the current study. Further, we used static in vitro tests; dynamic fatigue tests are more representative of clinical masticatory forces, and further in vitro and in vivo tests are required. Kim et al. [29] recommended that the physical properties and marginal fit of the coping be analyzed in relation to the sintering method, grain size and light transmittance. We have studied the physical properties related to the sintering method; the marginal fit and light transmittance of zirconia in relation to the sintering method may require further investigation.
Conclusions
Based on the results obtained in this study, the following conclusions can be drawn.
(1) The zirconia samples tested showed the highest flexural strength when sintering was carried out at 1580 °C for 10 min. (2) All tested sintering parameters provided full sintering of the green zirconia.
Logical Principles in Ternary Mathematics
Introduction/Background: Our new research, "Logical Principles in Ternary Mathematics", is an attempt to establish a connection between the logical and mathematical principles governing Ternary Mathematics, and to address issues that appeared earlier while constructing truth tables for "Ternary addition" and "Ternary multiplication" presented by the same author in the publication "Ternary Mathematics Principles Truth Tables and Logical Operators 3 D Placement of Logical Elements Extensions of Boolean Algebra". The title "Logical Principles in Ternary Mathematics" is not randomly chosen. To set up relations between elements in a given discipline, one usually employs the basic principle of meaning, form and function. In the same way, we propose a logical triangle "Component", "Vector", "Decimal" to prove the fundamental principle governing Ternary Mathematics presented in this research. Aims/Objectives: The aim of the article is to set up a connection between the mathematical and logical rules governing Ternary Mathematics. The main postulates of Ternary Mathematics can be demonstrated by an abstract scheme, a triangle whose vertices are "Component", "Vector" and "Decimal". We use a triangle diagram to prove the functionality of the chosen principle. The three components are each connected with the other two, and transition is possible from one to another without changing the shape of the diagram or the principle applied. Methodology: The most difficult part is to "translate" Algebra and Numeric Analysis into Mathematical Logic and vice versa. Traditional methods of logic fail to make this transition; therefore a new functional approach is chosen. Results and Conclusion: As a result of this functional approach, a new Ternary addition truth table is made. The new Ternary truth table consists of the three literals (Т, ₸, F), i.e. Truth, Truth Negative and False, and the last column of the table is the logical sum of the two. For example: Т+T=T. Unlike the old table, it presents a sum of two numbers in vector form and therefore makes it possible to use it in mathematics as well as in logic.
INTRODUCTION
Many of us have probably heard about syllogisms, Aristotle's logic and the way we construct propositions in modern logic [1], yet few might have thought that the choice of means, the so-called elements of a logical expression, might be purely arbitrary [2]. Let me demonstrate this with an example. We have a statement: Is it true that 4+5=7? The statement is false, but from this false statement you can obtain true assertions, for example: 1) It is false that 4+5=7; 2) It is not the case that 4+5=7; 3) 4+5≠7. This is basically as much as we can do if we rely on a binary principle applied to the above example, e.g. the denial of p is true when p is false, and the denial of p is false when p is true (where p is a simple statement). Let's take a look at the third example, namely 4+5≠7. What can we derive from it, knowing that the statement p≡Т? It implies many things, for example 4+5=8, or 4+5=10, or 4+5=7+6, etc. All of these statements are false, of course, but based on the premise that ₸≡F we can draw a lot of wrong conclusions. This is where the binary principle fails to provide us with accurate information, and where we step in with the ternary mathematics principle, which introduces the operator ₸ as a completely independent literal whose meaning can be described as "Some of" instead of "All of" or "None of". If we apply the above principles to the three examples given, we can see that: 1) It is false that 4+5=7; 2) It is not the case that 4+5=7; 3) 4+5≠7 are nothing but ₸ (truth negative) statements, whose meaning falls into neither the Т nor the F category and which, in that sense, are not "governed" by the binary maths principles.
METHODOLOGY
Given the aforementioned assumption, one needs operators other than conjunction and disjunction to govern the literals. Where does this assumption come from?
Assume that we use a mathematical sign to connect the literals; we will then have something like the tables below. Let's take a look at the disjunction table.

Table 1. Conjunction

We can see that by substituting the T/F literals by numbers we often obtain an incorrect mathematical result. The reason for such inconsistency is the lack of an exact correlation between logic and mathematics. In other words, Boolean Algebra is based upon the laws of mathematics, but the relations between its elements are logical. Hence we speak about logical addition and logical multiplication, which further leads to the inability to implement a more sophisticated mathematical apparatus to solve practical engineering and technical problems using the principles of Boolean Algebra. This has an analogy with, say, trying to paint somebody's portrait having only two paints, black and white: no matter how many shades of grey we obtain, they will still not be sufficient to convey the colour palette of a real-life image. Maybe this analogy is not quite the same as the concept we are trying to build, but it is quite similar. Let's take a look at another example by Tony R. Kuphaldt [3], released under the Design Science License: "Let us begin our exploration of Boolean algebra by adding numbers together: 0 + 0 = 0; 0 + 1 = 1; 1 + 0 = 1; 1 + 1 = 1. The first three sums make perfect sense to anyone familiar with elementary addition. The last sum, though, is quite possibly responsible for more confusion than any other single statement in digital electronics, because it seems to run contrary to the basic principles of mathematics.
Well, it does contradict the principles of addition for real numbers, but not for Boolean numbers.
Remember that in the "world of Boolean algebra", there are only two possible values for any quantity and for any arithmetic operation: 1 or 0.
There is no such thing as "2" within the scope of Boolean values. Since the sum "1 + 1" certainly isn't 0, it must be 1 by process of elimination.
It does not matter how many or few terms we add together, either. Consider the following sums: 1 + 1 + 1 = 1; 1 + 1 + 1 + 1 = 1." This clearly demonstrates the flaws of elementary, or conventional, math (as we should call it) when set against Boolean math principles. We are not being critical of the latter, but it is high time to turn our sights to something that employs a more mathematical approach, namely ternary math. So what is ternary math, and what rules govern it? Well, as the name suggests, it is a branch of discrete mathematics based upon three main concepts: Truth, Truth Negative and False. Each of them has its own specific use and is distinct from the other two. In the following article we will try to prove that this very math can be the basis for the spatial placement of logical elements in computer CPUs, microcircuits, etc. [4,5]. What makes us so sure? Well, first of all, as we mentioned earlier, three tools are better than two. We can use them in various combinations, and these same tools can be presented in vector form and, as such, can help us build models in space because, as we know, any vector has a dual representation: one is a line connecting two dots on the plane, and the other is points in space "cutting through the plane", i.e. vectors in space [6]. That second representation is important for us when we want to present our vectors as a single point [7]. So far, vector representation on the plane has been a set of two numbers (x, y). How are we going to change that? We will do it by means of our "beloved" literals T, ₸, F. These will be our dots on the xy, yz, zx or any other plane, but first we have to make it happen [8]. The new relation between the different number representations can be demonstrated by the following diagram: Compound-Vector-Decimal.
MATERIALS AND METHODS
The methods employed in our research include the mathematical representation of logical expressions and the transition between Vector Algebra, Numeric Analysis and Mathematical Logic. We used our formula to obtain a result in the form of a Ternary Addition table. As the name suggests, this is not an addition of elements in its classic sense but rather a Logical Addition. The difference between the two can be demonstrated by the following example: Т+T=T ≡ 1+1. If T=1, then 1+1 ≢ Т+T, which means our principle is not consistent with the ordinary mathematical rules. Let's take a look at another example: ₸+₸ ≡ (-1)+(-1). From the course of elementary algebra, we know that (-1)+(-1) = -2. If ₸ = (-1), then (-1)+(-1) ≠ (-1); however, when we apply our logical scheme, the result is exactly that: ₸+₸ ≡ (-1)+(-1) ≡ ₸.
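One way to make the "logical addition" above executable is to map the literals to {1, -1, 0} and take the sign of the arithmetic sum. This is only a sketch: it reproduces the two worked examples (Т+T=T and ₸+₸=₸), but the paper's full Table 5 may define other entries differently.

```python
# Sketch of ternary "logical addition" under the mapping T -> 1, ₸ -> -1, F -> 0,
# taking the sign of the arithmetic sum and mapping back to a literal.
LITERALS = {"T": 1, "₸": -1, "F": 0}
NAMES = {1: "T", -1: "₸", 0: "F"}

def logical_add(a: str, b: str) -> str:
    s = LITERALS[a] + LITERALS[b]
    return NAMES[(s > 0) - (s < 0)]  # sign of the sum, back to a literal

for x in LITERALS:
    for y in LITERALS:
        print(f"{x} + {y} = {logical_add(x, y)}")
```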
RESULTS AND DISCUSSION
All of this inference clearly demonstrates our "Ternary Addition" table. The author made an attempt to establish a connection between the logical operators (T, ₸, F) earlier, but failed due to the fact that simple mathematical addition does not apply to the logic of Ternary Maths [9].
From that perspective, it is more correct to call Ternary Addition a "Logical Addition of Ternary Elements" rather than just "addition" or "Ternary Addition"; both of the latter definitions are wrong anyway. Compare an earlier attempt by the same author to describe this process: if we substitute the elements (Т, ₸, F) by the numbers (1, -1, 0), the entries for the second row, Т, should result in 2Т, not Т, and the entries for the fourth should be (-2)Т ≡ 2₸, all of which is inconsistent from the standpoint of formal logic [10].
Table 5. Ternary addition
So what did we do to change the results? According to mathematicians, namely David Hilbert and Paul Bernays [11], we can always present a single element as a function; that is the basic conversion between the two. We can establish a mathematical relation between all elements of the logical table to make their entries fall into the set of numbers [12]. The following principle is best demonstrated on an example. We need to establish the sum of two entries Т and Т. The resulting entry appears to be 2T, which denies the logical principle of the addition: we cannot speak of double Truth, and if we do, we will go beyond Ternary Maths, because our set of entries is 3 elements and not 4, 5, etc. Let's apply our "Logical Triangle Principle": on the left we place a sum of vectors in its component form, and on the right the vector form of the same sum; we equate them and multiply both sides so that the result reduces to the single vector form, which is the same as saying that the logical sum equals its vector representation. Logical multiplication is an easier matter, as the elements of the set are the resulting components of the ternary multiplication table (Table 7. Ternary multiplication) and no conversion applies.
Our research is a continuation of "Ternary Mathematics Principles Truth Tables and Logical Operators 3 D Placement of Logical Elements Extensions of Boolean Algebra", published earlier by the same author. We aim to make a transition between binary and ternary mathematics using logical means and mathematical principles. By presenting a sum of logical elements in vector form, a new Ternary Addition principle is proposed (see Table 5. Ternary addition). Unlike the old table, where we simply used algebraic rules for adding the three numbers -1, 0, 1, the new approach is to present a sum of two numbers in vector form and, by means of the vector product, establish a relation between logical sums and decimal numbers [13]. Speaking of the benefits and limitations of the proposed work, one should mention that Ternary math is a subdivision of discrete mathematics. It operates on similar principles and is aimed at the development of calculating machines and algorithms [14], to make counting faster and more accurate. In order to do so, we use not only calculus but also vector algebra, statistics and analytic geometry. In conclusion, one has to point out that every mathematical operation can be successfully utilized by means of Ternary Mathematics. At least the basic mathematical operations, such as addition and multiplication, have their reflection in the Ternary Math tables, and it is now the task of the branches of science to implement these principles and develop them further [15].
Reentrant Fulde-Ferrell-Larkin-Ovchinnikov superfluidity in the honeycomb lattice
We study the superconducting properties of population-imbalanced ultracold Fermi mixtures in the honeycomb lattice that can be effectively described by the spin-imbalanced attractive Hubbard model in the presence of a Zeeman magnetic field. We use the mean-field theory approach to obtain ground-state phase diagrams including some unconventional superconducting phases, such as the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase. We show that this phase is characterized by atypical behaviour of the Cooper pairs' total momentum in the external magnetic field. We show that the momentum changes its value as well as its direction with changes of the system parameters. We discuss the influence of van Hove singularities on the possibility of the reentrant FFLO phase occurring, without a BCS precursor.
I. INTRODUCTION
The discovery of graphene [1] triggered enormous theoretical and experimental activity [2,3]. Henceforth, it has attracted much attention due to theoretical interest in fundamental physics, as well as its potential practical applications [4-8]. The attempt to understand graphene physics is not without difficulties, related e.g. to electron-phonon interactions and the presence of charge inhomogeneity [9]. However, recent advances in experiments offer the possibility to simulate similar condensed matter phenomena by loading ultracold bosonic or fermionic atoms into optical lattices [10-18]. The engineering of the honeycomb lattice in ultracold gas setups, as well as the creation of artificial graphene-like band structures, brings the possibility of exploring regimes which are still inaccessible in solid state materials. Recently, condensed matter systems based on fermions with linear dispersion (e.g. the honeycomb lattice) have generated a surge of intensive studies [19-26]. These models have substantial differences from models with an extended Fermi surface, such as those on the square lattice. However, it has not yet been understood which unconventional phases can be stable in such systems, especially in those where effective attraction is dominant.
In this work, we analyze the stability of one of the most interesting phases occurring in this type of system, the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state (formation of Cooper pairs across the spin-split Fermi surface with non-zero total momentum) [27,28]. We consider the attractive Hubbard model in the presence of a Zeeman magnetic field. It is worth mentioning that at half-filling, in the absence of a Zeeman magnetic field, a quantum phase transition from the BCS state to the normal phase takes place. It results in the occurrence of a critical attraction below which the BCS state is unstable. Our main finding is not only establishing that the FFLO phase is stable for a wide range of parameters, but also that reentrant FFLO superconductivity can occur. Moreover, at half-filling and in the spin-imbalanced system (equivalent to a non-zero Zeeman magnetic field), the presence of van Hove singularities (VHS) in the density of states results in a stable FFLO phase for arbitrarily weak attractive interactions. This discovery is essential from the viewpoint of realizing the FFLO state in ultracold gases experiments [29] in artificial hexagonal lattices.
Bringing together the two important threads of research, one related to graphene and the honeycomb lattice and the second to population imbalance in ultracold atomic gases, can lead to new and interesting physics. In particular, it gives the possibility to investigate some exotic superconducting phases which could potentially be found experimentally. So far, such phases have eluded experimental realization, and one of the reasons for this is the non-zero critical value of the attraction for the existence of the standard superconducting phase in the honeycomb lattice at half-filling and without a magnetic field [10,62]. We show that reentrant FFLO superconductivity can be realized even below this critical value (even for arbitrarily small attractions) for some range of magnetic fields. This greatly facilitates the experimental realization and detection of the FFLO phase in ultracold fermionic gases in the lattice and makes searches for such a phase realistic. As such, it is the main finding of our work.
The paper is organized as follows. Section II gives a discussion of the spin-polarized Hubbard model as well as the method. Section III presents numerical results concerning among others the phase diagram, density of states analysis and the dependence of the Cooper pairs properties. We conclude in Section IV.
II. MODEL AND TECHNIQUE
The system can be described by the Hamiltonian in real space as $H = H_K + H_I$, where

$$ H_K = \sum_{\langle ij \rangle, ss', \sigma} t^{ss'}_{ij}\, c^{\dagger}_{is\sigma} c_{js'\sigma} - \sum_{is\sigma} (\mu + \sigma h)\, c^{\dagger}_{is\sigma} c_{is\sigma} \qquad (1) $$

and

$$ H_I = U \sum_{is} n_{is\uparrow} n_{is\downarrow}. \qquad (2) $$

Here, $c_{is\sigma}$ ($c^{\dagger}_{is\sigma}$) describes annihilation (creation) of an electron with spin σ ∈ {↑, ↓} in the i-th site of sublattice s ∈ {A, B} (Fig. 1.a). The first term describes the non-interacting system. We assume equal hopping between the nearest-neighbor (NN) sites (i.e. $t^{ss'}_{ij} = t = 1$ as the energy unit, and 0 otherwise). μ is the chemical potential, whereas h is the external magnetic field. The second term describes the on-site Coulomb interaction U/t < 0, which is the source of s-wave type superconductivity.
A. Non-interacting state
In the absence of interaction (U = 0), the kinetic term (1) can be transformed to reciprocal space as follows:

$$ c_{is\sigma} = \frac{1}{\sqrt{N}} \sum_{\mathbf{k}} e^{i \mathbf{k}\cdot \mathbf{R}_i}\, c_{\mathbf{k}s\sigma}, \qquad g(\mathbf{k}) = \sum_{i} e^{i\mathbf{k}\cdot\boldsymbol{\delta}_i}. \qquad (3) $$

Here, δ_i defines the location of the NN sites (Fig. 1.a). Hence, one obtains:

$$ H_K = \sum_{\mathbf{k}\sigma} \left[ t\, g(\mathbf{k})\, c^{\dagger}_{\mathbf{k}A\sigma} c_{\mathbf{k}B\sigma} + \mathrm{H.c.} \right] - \sum_{\mathbf{k}s\sigma} (\mu + \sigma h)\, c^{\dagger}_{\mathbf{k}s\sigma} c_{\mathbf{k}s\sigma}. \qquad (4) $$

Using the Nambu notation, H_K can be rewritten in the following way:

$$ H_K = \sum_{\mathbf{k}\sigma} \Phi^{\dagger}_{\mathbf{k}\sigma}\, H(\mathbf{k},\sigma)\, \Phi_{\mathbf{k}\sigma}, \qquad (5) $$

where $\Phi^{\dagger}_{\mathbf{k}\sigma} = \left( c^{\dagger}_{\mathbf{k}A\sigma}, c^{\dagger}_{\mathbf{k}B\sigma} \right)$ is the Nambu spinor, and

$$ H(\mathbf{k},\sigma) = \begin{pmatrix} -(\mu + \sigma h) & t\, g(\mathbf{k}) \\ t\, g^{*}(\mathbf{k}) & -(\mu + \sigma h) \end{pmatrix}. \qquad (6) $$

The eigenvalues $E_{\alpha \mathbf{k} \sigma}$ of the Hamiltonian H_K can be found by diagonalization of the matrix (6). As a result, one obtains $E_{\pm,\mathbf{k}\sigma} = \pm t\,|g(\mathbf{k})| - (\mu + \sigma h)$ (Fig. 1.b).
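The dispersion can be checked numerically. The sketch below assumes the standard honeycomb nearest-neighbour bond vectors with the bond length set to 1 (a convention choice, not fixed by the text) and samples k on a rectangular grid rather than the hexagonal FBZ.

```python
# Minimal sketch of E(±, k, σ) = ± t|g(k)| - (μ + σh) for the honeycomb lattice.
import numpy as np

t, mu, h = 1.0, 0.0, 0.0
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])  # three NN bond vectors, |delta| = 1

def g(k):
    """Structure factor g(k) = sum_i exp(i k . delta_i); k has shape (N, 2)."""
    return np.exp(1j * k @ deltas.T).sum(axis=-1)

kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 121),
                     np.linspace(-np.pi, np.pi, 121))
k = np.stack([kx.ravel(), ky.ravel()], axis=-1)

for sigma in (+1, -1):                        # spin up / spin down
    e_plus = +t * np.abs(g(k)) - (mu + sigma * h)
    e_minus = -t * np.abs(g(k)) - (mu + sigma * h)
    print(sigma, e_minus.min(), e_plus.max())  # band extrema, roughly ±3t here
```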
B. Superconducting state
The source of the s-wave superconductivity in the Hubbard model is the on-site attraction (U/t < 0) between particles with opposite spins on the same site. The interaction term H_I can be decoupled in the mean-field approximation by:

$$ H_I^{MF} = U \sum_{is} \left( \Delta_{i,s}\, c^{\dagger}_{is\uparrow} c^{\dagger}_{is\downarrow} + \Delta^{*}_{i,s}\, c_{is\downarrow} c_{is\uparrow} - |\Delta_{i,s}|^{2} \right), \qquad (7) $$

where $\Delta_{i,s} = \langle c_{is\downarrow} c_{is\uparrow} \rangle$ is the superconducting order parameter (SOP) in the sublattice s. Then,

$$ H_I^{MF} = U \sum_{is} \left( \Delta_{i,s}\, c^{\dagger}_{is\uparrow} c^{\dagger}_{is\downarrow} + \mathrm{H.c.} \right), \qquad (8) $$

where the last term from Eq. (7) has been omitted, because it does not affect the self-consistent equations. However, it is important to emphasize that this term has to be taken into account in the grand canonical potential calculation to determine the stability of the different phases, since this constant term decreases the energy of the system [63]. Because there are two shifted sublattices (A and B) in the system, the SOP term for the FFLO phase can be rewritten as:

$$ \Delta_{i,s} = \Delta_0 \exp\!\left[ i\,\mathbf{Q} \cdot \left( \mathbf{R}_i + \mathbf{w}\,\delta_{s,B} \right) \right], \qquad (9) $$

where Δ_0 is the SOP amplitude in the entire system, whereas Q is the total momentum of the Cooper pair. Here, R_i denotes the location of the i-th site in real space, while w describes the shift between the two atoms in the unit cell and equals δ_2 (cf. Fig. 1.a). In the superconducting phase (Δ_0 > 0), one can distinguish the BCS state with |Q| = 0 and the FFLO phase for |Q| > 0. Hence, in momentum space:

$$ H_I^{MF} = U \Delta_0 \sum_{\mathbf{k}s} \left( e^{\,i\mathbf{Q}\cdot\mathbf{w}\,\delta_{s,B}}\, c^{\dagger}_{\mathbf{k}s\uparrow}\, c^{\dagger}_{\mathbf{Q}-\mathbf{k},s\downarrow} + \mathrm{H.c.} \right). \qquad (10) $$

As a consequence, the mean-field Hamiltonian $H^{MF} = H_K + H_I^{MF}$ can be rewritten in a block matrix form:

$$ H^{MF}_{\mathbf{k}} = \begin{pmatrix} H(\mathbf{k},\uparrow) & \mathcal{U}(\mathbf{Q}) \\ \mathcal{U}^{\dagger}(\mathbf{Q}) & -H^{*}(\mathbf{Q}-\mathbf{k},\downarrow) \end{pmatrix}. \qquad (11) $$
The diagonal elements of $H^{MF}_{\mathbf{k}}$, i.e. the ones involving the matrix H(k, σ), describe the single-particle spectrum and are given by Eq. (6), while the off-diagonal elements describe superconductivity, and $\mathcal{U}(\mathbf{Q})$ is defined as $\mathcal{U}(\mathbf{Q}) = U \Delta_0\, \delta_{ss'} \left( \delta_{s,A} + e^{\,i\mathbf{Q}\cdot\mathbf{w}}\, \delta_{s,B} \right)$, where the indices of the matrix elements label the sublattices.
III. NUMERICAL RESULTS AND DISCUSSION
In this section we show and discuss the numerical results. First, we describe the details of numerical predictions (Sec. III A). Next, we present the phase diagrams for the half-filling and non-half-filling (i.e. doped) case (Sec. III B), and we discuss them in the context of the density of states analysis (Sec. III C). Finally, we provide the numerical calculations and discuss the main and novel properties of the FFLO phase in the hexagonal lattice (Sec. III D).
A. Numerical details
To find the ground state, we calculate the grand canonical potential, defined by Ω ≡ −k_B T ln{Tr[exp(−H^MF/k_B T)]}, which at T = 0 is equivalent to the mean-field energy. The calculations were performed in momentum space, on an N = 121 × 121 k-grid inside the first Brillouin zone (FBZ). Since we study the stability of the FFLO phase, Ω is a function of the SOP amplitude Δ_0 and the total momentum Q of the Cooper pairs [64]. In this case, the procedure of minimizing Ω with respect to the SOP amplitude Δ_0 and all possible momenta Q realized in the system is essential. To find the global minimum of Ω(Δ_0, Q), the numerical calculations were accelerated using Graphics Processing Units (GPUs), according to the procedure described in Ref. [63].
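Schematically, the search for the ground state is a brute-force minimization of Ω over a grid of (Δ_0, Q) values. In the sketch below, `omega` is only a placeholder with an artificial minimum; in the actual calculation it would come from diagonalizing the BdG matrix (11) on the 121 × 121 k-grid.

```python
# Schematic outline of the ground-state search: minimize Omega(Delta0, Q)
# over a grid of gap amplitudes and Cooper-pair momenta.
import numpy as np

def omega(delta0: float, q: np.ndarray) -> float:
    # Placeholder with a minimum at Delta0 = 0.3, Q = (0.2, 0); illustrative only.
    return (delta0 - 0.3) ** 2 + np.sum((q - np.array([0.2, 0.0])) ** 2)

best = min(
    ((d, (qx, qy)) for d in np.linspace(0.0, 1.0, 51)
                   for qx in np.linspace(-0.5, 0.5, 21)
                   for qy in np.linspace(-0.5, 0.5, 21)),
    key=lambda p: omega(p[0], np.array(p[1])),
)
print("Delta0 and Q at the global minimum:", best)
```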
It is important to emphasize that the mean-field approximation (MFA) overestimates, in general, the critical temperatures and the range of stability of phases with long-range order. However, the MFA gives at least a qualitative description of the system in the ground state (T = 0), even in the strong coupling limit [65].
The ansatz which we have proposed to describe the SOP in real space, i.e. Eq. (9), does not limit the solutions with respect to the Cooper pair momentum Q. This is a very important extension in comparison to previous theoretical works, in which the assumed ansatz strongly limits the stable phases that can occur. For instance, it is worth mentioning the Kekulé order [66], for which the SOP in real space is 2π/3 phase modulated [67,68]. Moreover, using the ansatz proposed in our paper, one can also analyze phases other than FFLO, e.g. the spatially homogeneous spin-polarized superfluid (called the breached pair state or Sarma phase [69]). However, our numerical calculations show that this type of phase is unstable over the whole region of parameters, which is in agreement with other theoretical works [70-72]. Moreover, using this ansatz, pairing in the presence of a Fermi surface deformation [73-76] (called the Pomeranchuk instability [77]) or multiparticle instabilities [78] can also be analyzed. However, these types of unconventional phases go beyond the scope of this work.
Additionally, the existence of a discontinuous phase transition between the BCS and the FFLO phase or the normal state, which is characteristic for the systems in the Pauli limit, leads to the occurrence of the phase separation regions. In contrast to the case of a fixed chemical potential, if the number of particles is fixed, one obtains two critical Zeeman fields in the phase diagrams which determine the phase separation (PS) region between different types of phases [38,64,[79][80][81][82], e.g. the BCS and the FFLO phase or the normal state.
B. Phase diagram
In the normal state, based on the dispersion relation $E_{\alpha \mathbf{k} \sigma}$, one can distinguish the conduction (α = +) and valence (α = −) bands in the band structure of the system. At half-filling (for μ/t = 0, for which the average number of particles per site, $n = (1/N)\sum_{\mathbf{k}s\sigma} \langle c^{\dagger}_{\mathbf{k}s\sigma} c_{\mathbf{k}s\sigma} \rangle$, is equal to 1) and at h/t = 0, the conduction/valence band is fully empty/occupied, and the system exhibits semi-metal behavior (Fig. 2). The two bands touch each other at the corner points of the first Brillouin zone (FBZ), at the Dirac cones, which is manifested by a vanishing DOS at the Fermi level.
At half-filling (μ/t = 0, n = 1) and in the absence of the magnetic field, the honeycomb lattice exhibits a continuous quantum phase transition between the semi-metal phase and the BCS state [10,83,84] (Fig. 3). The superconducting state can emerge in the system for a pairing interaction stronger than some critical interaction U_c. We estimate |U_c|/t ≃ 2.245 (Fig. 4.a), which is in good agreement with previous mean-field studies [10]. However, for any μ ≠ 0, the SOP exhibits an exponential-like decrease to zero with decreasing |U| (Fig. 4.a). This behavior is well visible around the VHS (μ/t = ±1 in Fig. 3).
Increasing the attraction above U_c leads to the stabilization of the BCS state (Fig. 4.a). With an increasing Zeeman magnetic field (increasing population imbalance), the FFLO phase becomes stable at some |U|-dependent critical value h_c, through a first-order phase transition. The discontinuous phase transition is manifested by a jump of the order parameter with increasing |U| at fixed h, which is illustrated in Fig. 4.b and indicated by stars. As mentioned above, this behavior of the order parameter is reflected in the occurrence of the phase separation region in the phase diagrams for fixed n. Indeed, such behaviour is observed in the system under consideration as well, which will be discussed in detail in the next paragraph.
The essential finding of our work is that the FFLO phase can also be stable below U_c, for some range of magnetic fields. As we already emphasized, this feature makes the experimental realization of this phase much simpler, because any superconducting state which appears in the range 0 > U > U_c can only be the FFLO phase. Preparing the experimental setup in such a way that the average number of particles per lattice site is equal to one, while introducing a mismatch between the atoms with "up" and "down" spins, and tuning the interaction to lie between U = 0 and U_c, facilitates observing and identifying the FFLO phase (see also some remarks in the last paragraph of this section).
If the chemical potential, and hence the density, is changed, the character of the phase diagram changes (cf. Fig. 3 and Fig. 5 for h/t = 0). As mentioned above, in the absence of a Zeeman magnetic field and at half-filling, the system exhibits a quantum phase transition. As a consequence, superconductivity becomes unstable below the critical value of attraction |U_c| (shown in Fig. 3, Fig. 4.a and Fig. 5.a). However, at any small deviation from half-filling (i.e. for any non-zero doping), superconductivity is stable for the whole range of attractive interactions, and one can observe an exponential decay of the order parameter with decreasing |U| (e.g. Fig. 3, or Fig. 5.b for the case of μ/t = ±1). Away from half-filling and for small values of the attraction (|U|/t ≲ 4), the FFLO phase occurs at larger values of h than in the half-filled system. It is important to emphasize that the phase transition from the BCS to the normal state, as well as from the BCS to the FFLO phase, is always discontinuous. However, the phase transition from the FFLO phase to the normal state is of second order for the whole range of parameters. Hence, the FFLO phase, in comparison to the BCS state, evinces reentrant behavior (i.e. it appears and then disappears when varying h at fixed U and t), because the FFLO phase can occur at h > 0 for some |U| < |U_c|, even without the BCS state as a precursor at half-filling. These properties are novel and have not been described in the literature so far. In both cases, i.e. at half-filling and away from it, the boundaries of the FFLO and BCS phases (critical magnetic fields) show typical behavior at larger values of U [64], i.e. the FFLO state becomes unstable with increasing attractive interaction because of the vanishing of at least one Fermi surface [40,85]. In this case, the system is in a phase of tightly bound local pairs (hard-core bosons) [65,81,82].
The influence of the presence of the VHS in the DOS on the phase diagram could be verified using more advanced methods, e.g. Dynamical Mean-Field Theory (DMFT) [86]. As mentioned above, a discontinuous phase transition can lead to the occurrence of phase separation in the case of a fixed number of particles n. It can be found by mapping the phase diagrams at a fixed chemical potential μ (Fig. 6) onto the phase diagrams at fixed n (Fig. 7). The region of parameters for which the phase separation is observed is shown in Fig. 7 (the yellow area). The existence of the FFLO phase leads to the suppression of the phase separation region. Hence, it is more likely to observe the PS region between spatially homogeneous phases such as the BCS and the normal states.
One needs to remember that FFLO phases are known to be much more sensitive to thermal fluctuations than the BCS state, and typically have very low critical temperatures. Hence, the experimental detection of these phases could still be rather problematic. Moreover, for a two-dimensional system at zero Zeeman magnetic field, the superconducting-normal transition in the attractive Hubbard model is of the Kosterlitz-Thouless (KT) type, mediated by the unbinding of vortices; i.e. below the KT temperature, the system has a quasi-long-range (algebraic) order, which is characterized by a power-law decay of the order parameter correlation function and a non-zero superfluid stiffness. As has been shown in Ref. [81], the KT phase (the quasi-superconducting Sarma phase for a homogeneous system) is restricted to the weak coupling region and low values of polarization (magnetic fields).
C. Density of states analysis
The DOS of the honeycomb lattice shows 1/√E singularities due to the one-dimensional nature of the electronic spectrum [2,87-89]. Moreover, near the "neutral" point (E = 0), the DOS can be approximated by ρ(ω) ∝ |ω|. As we show below, the presence of the VHS in the DOS, located at ω/t = ±1 at half-filling (μ/t = 0) and h/t = 0 (e.g. Fig. 8.a), is important from the point of view of unconventional superconductivity. As a consequence of the existence of two equivalent sublattices, there are two VHS in the DOS. By changing the location of the Fermi level, through the value of the chemical potential μ (filling n) or the external magnetic field h, one can change the relative position of the VHS for particles with spin "up" (↑) and "down" (↓) (red and blue lines in Fig. 8, respectively).
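The VHS at ω/t = ±1 and the linear, vanishing DOS near ω = 0 can be reproduced with a rough histogram of the two bands ±t|g(k)|, using the same bond-vector convention as in the band-structure sketch above; random sampling over a wide k-window is a crude but adequate stand-in for integrating over the FBZ.

```python
# Rough numerical check of the honeycomb DOS: peaks near omega/t = ±1 and a
# vanishing DOS near omega = 0.
import numpy as np

t = 1.0
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])

rng = np.random.default_rng(1)
k = rng.uniform(-10 * np.pi, 10 * np.pi, size=(200_000, 2))  # wide random k-sample
band = t * np.abs(np.exp(1j * k @ deltas.T).sum(axis=-1))
energies = np.concatenate([band, -band])     # both bands, ±t|g(k)|

hist, edges = np.histogram(energies, bins=121, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
peak = centers[:60][np.argmax(hist[:60])]    # strongest feature at omega < 0
print(f"VHS located near omega/t = {peak:.2f}")  # expected around -1
```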
It is important to emphasize that the DOS has an influence on the critical temperature. In BCS theory, T_c ∝ exp(−1/|U|ρ(E_F)), where ρ(E_F) is the total DOS at the Fermi level E_F for both spin components. We describe the behavior of the DOS schematically with the example shown in Fig. 8, in relation to some characteristic parameters taken from the phase diagram in Fig. 5. Without a magnetic field (h = 0), at half-filling (μ = 0), E_F is located at the neutral points K and K′, with E = 0 (Fig. 8.a). Consequently, there exists a critical value of the interaction U_c below which the BCS phase is unstable and the normal state is favored (the semi-metal-superconductor transition) (see Fig. 5.a). A similar phenomenon is also observed, e.g., in the metal-insulator transition [90]. In the presence of a Zeeman magnetic field, the DOSs are unequal for particles with opposite spins. For instance, at h/t = 1, the DOSs are shifted in the way illustrated in Fig. 8.b. Both VHSs are located at E_F, with energies ω/t = ±1, where +/− corresponds to particles with spin ↑/↓, respectively. Consequently, the DOS has a maximum at E_F. The large spin imbalance leads to the stabilization of the FFLO phase. Similar behavior can be observed in the case of an over-/under-doped system (e.g. μ/t = ±1) without a magnetic field (Fig. 8.c). In this case, both VHS (for both spin components) with energies ω/t = ±1 are located at E_F, whereas spin imbalance does not exist. Consequently, the BCS phase is stable. Hence, the superfluid phase can be realized for any pairing interaction strength because of the finite value of ρ(E_F). If the magnetic field is increased, the DOS is shifted again. For h/t = 2, only the VHS for particles with spin up is located at E_F (Fig. 8.d). In this case, i.e. for large magnetic fields, the attractive interaction can lead to the stabilization of the FFLO phase. However, it is important to emphasize that there is a critical value of U below which the FFLO state becomes unstable, in contrast to the half-filled case.
As mentioned above, the mutual position of the DOSs for particles with opposite spins is crucial for the stabilization of the BCS state as well as the FFLO phase. For instance, to stabilize the FFLO state, the system should be doped to the so-called M point of the FBZ. This situation corresponds to a 3/8 or 5/8 filling in a given spin-type band [91,92]. At these fillings, the VHS originates from three non-equivalent saddle points. Moreover, the Fermi surface exhibits a high degree of nesting (Fig. 9), forming a perfect hexagon at this filling [89]. These two features lead to the stabilization of the FFLO phase, as a consequence of the perfect nesting of the Fermi surfaces corresponding to opposite spins [93,94]. This can be described using the notation of Fig. 9. In the case of the mentioned fillings (i.e. 3/8 and 5/8), the Fermi surfaces for the particles with spin up and down are degenerate (shown by the solid red line). Hereby, Cooper pairs with total momentum Q can be formed by particles with momenta k_1 and k_2. Because the Fermi surface is given by the hexagon, Cooper pairs with momentum Q (along the Γ-M line) can be realized for many different k_1 and k_2. The FFLO state can then be energetically more favorable over a larger range of the pairing interaction U. The situation described above is clearly visible, e.g., in Fig. 5.a at h/t = 1.
D. Cooper pairs momentum Q dependence
The Cooper pair properties are also related to the nesting of the Fermi surfaces. This is clearly visible in the evolution of the total momentum Q of the pairs with increasing magnetic field (Fig. 10). Usually (for instance, in the square [57,63] or triangular [95-97] lattice case), only the length of Q changes, without a change of direction. This is a consequence of the mutual shift of the Fermi surfaces for particles with spin up and down. Moreover, the direction of Q can be found, to a good approximation, from the Cooper pair susceptibility calculation [54,58,63]. However, it is worth emphasizing that only the global minimum of the energy with respect to Q and Δ_0 gives proper information on the BCS/FFLO phase.
In the case of the honeycomb lattice, with two atoms per unit cell, Q is not subject to the typical evolution described above. Instead, the evolution of Q with increasing magnetic field can be divided into three stages (shown in Fig. 11): (A) evolution along the reciprocal lattice vectors, (B) evolution along the boundary of the FBZ, perpendicular to the reciprocal lattice vectors, and (C) evolution along the boundary of the FBZ perpendicular to w, which describes the mutual shift of the two sublattices. This evolution is a consequence of the nesting between the Fermi surfaces for particles with spin up and down, which are shifted by the Zeeman magnetic field. As a consequence, the magnitude as well as the direction of Q change in a non-monotonic way with increasing magnetic field h (Fig. 12). One can indicate the boundaries in the phase diagram between FFLO phases with different directions of Q. The properties described above are shown in Fig. 10 and in the Supplemental Material [98], which schematically present the spatial decomposition of the SOP for different Q (in the SM, the small black crosses denote the positions of the lattice sites in real space, while the size and color of the solid circles correspond to the value and sign of the SOP; blue and red denote minus and plus signs, respectively).
IV. SUMMARY
The honeycomb lattice exhibits a characteristic band structure in which two bands touch each other at the vertices of the Dirac cones. Consequently, at half-filling, there exists some critical interaction U_c above which the BCS phase becomes stable. This value indicates the occurrence of a quantum phase transition from the semi-metallic to the superconducting phase, in the absence of a Zeeman magnetic field. In this paper, we demonstrated that the behavior of the system changes significantly when population imbalance is introduced. Such a system can be realized in ultracold gases experiments, by loading atoms in two different hyperfine states onto a honeycomb-shaped lattice. In such a case, the FFLO state with a non-zero total momentum of the Cooper pairs can be realized. The characteristic features of the honeycomb lattice DOS can lead to the stabilization of the FFLO phase for any pairing interaction strength. Moreover, at half-filling, n = 1, the FFLO phase shows reentrant behavior. For any pairing interaction (also lower than U_c), this phase can be realized without the BCS phase as a precursor, which is not observed in the case of other lattices, e.g. the square or triangular lattice. We explain this behavior as a consequence of the singular DOS and the strong nesting of the Fermi surfaces. These results can be helpful for the experimental realization of the FFLO phase on an artificial hexagonal optical lattice, because any superconducting state which appears in the range 0 > U > U_c can only be the FFLO phase. Additionally, we show that the evolution of the total momentum of the Cooper pairs is atypical. As a consequence of the nesting between the Fermi surfaces for particles with different spins, the momenta change their values as well as their directions.
PLATINUM SENSITIVE 2 LIKE impacts growth, root morphology, seed set, and stress responses
Eukaryotic protein phosphatase 4 (PP4) is a PP2A-type protein phosphatase that is part of a conserved complex with regulatory factors PSY2 and PP4R2. Various lines of Arabidopsis thaliana with mutated PP4 subunit genes were constructed to study the so far completely unknown functions of PP4 in plants. Mutants with the putative functional homolog of the PSY2 LIKE (PSY2L) gene knocked out were dwarf and bushy, while plants with knocked-out PP4R2 LIKE (PP4R2L) looked very similar to WT. The psy2l seedlings had short roots with disorganized morphology and an impaired meristem. Seedling growth was sensitive to the genotoxin cisplatin. Global transcript analysis (RNA-seq) of seedlings and rosette leaves revealed several groups of genes, shared between both types of tissues, strongly influenced by the PSY2L knockout. Receptor kinases, CRINKLY3 and WAG1, important for growth and development, were down-regulated 3–7 times. EUKARYOTIC ELONGATION FACTOR 5A1 was down-regulated 4–6 fold. Analysis of hormone-sensitive genes indicated that abscisic acid levels were high, while auxin, cytokinin and gibberellic acid levels were low in psy2l. Expression of specific transcription factors involved in regulation of anthocyanin synthesis was strongly elevated, e.g. the master regulator PAP1, and intriguingly TT8, which is otherwise mainly expressed in seeds. The psy2l mutants accumulated anthocyanins under conditions where WT did not, pointing to PSY2L as a possible upstream negative regulator of PAP1 and TT8. Expression of the sugar-phosphate transporter GPT2, important for cellular sugar and phosphate homeostasis, was enhanced 7–8 times. Several DNA damage response genes, including the cell cycle inhibitor gene WEE1, were up-regulated in psy2l. The activation of DNA repair signaling genes, in combination with phenotypic traits showing an aberrant root meristem and sensitivity to the genotoxic cisplatin, substantiate the involvement of Arabidopsis PSY2L in maintenance of genome integrity.
Introduction
Putative Arabidopsis PP4 subunits, the two catalytic and the two regulatory ones, are expressed throughout the plant.
In S. cerevisiae it was shown that the dimer PP4c-PSY2 (named Pph3-Psy2 in yeast) is involved in regulating HXT genes (glucose transporter genes). For this regulation the PH/EVH1 domain of PSY2 is important, and it was found to interact with the glucose signaling transducer protein Mth1 [16]. In mammals, PSY2 is engaged in the regulation of glucose metabolism, and in the regulation of the phosphorylation state of the transcription activator CRTC2 (CREB-regulated transcriptional coactivator 2) [17]. In C. elegans, PSY2 is also involved in control of sugar metabolism through the IIS longevity pathway, which is activated through the insulin/IGF-1 receptor (DAF-2); PSY2 is part of this pathway by regulating the FOXO transcription factor (DAF-16) downstream in the pathway [13]. In the present work we show the importance of PSY2L for expression of a key sugar transporter gene in Arabidopsis.
Cisplatin is a platinum-containing DNA damaging agent and a drug used to treat cancer. PSY2 was originally identified in yeast cells when selecting drug-sensitive strains [18], and named Platinum sensitive 2. Drosophila mutants of the homologous gene (falafel) also showed cisplatin sensitivity, i.e., a reduced survival rate when exposed to cisplatin [2]. As in other eukaryotes, cisplatin sensitivity in plants has been shown to involve defects in DNA repair. At the plant organ level, exposure to cisplatin inhibits leaf formation and growth [19,20].
Nothing is known about the physiological function of PP4c and its two putative regulatory proteins PSY2L and PP4R2L in plants. We set out to investigate the functions of these genes by selecting T-DNA insertion mutants and by making RNA interference lines. Interestingly, mutants of Arabidopsis PSY2L showed a striking visual phenotype and sensitivity to cisplatin. Additionally, putative genes and pathways regulated by PSY2L were revealed by RNA sequencing.
Results
Phenotype of PP4 subunit mutants-Impaired PSY2L leads to slow growth, dwarfism, sterility and longevity
In order to investigate PP4 functions, two homozygous T-DNA insertion lines were isolated for both PP4-1 and PP4-2 (Fig 1A). However, PP4c transcript levels in all four lines were similar to WT transcript levels. Two amiRNA lines for simultaneous knockdown/out of both PP4-1 and PP4-2 genes were made (Fig 1A). The lines, with constitutive (35S-driven) expression of microRNAs, were followed until the fourth generation. Extensive expression analysis gave five positive knockdown plants; however, their progeny reverted to normal WT expression levels, indicating difficulties in isolating stable knockdown/knockout lines for the catalytic subunits (data not presented). No clear phenotype was observed in any of the generations, and sufficient PP4c was apparently present to support normal growth and development.
Mutants homozygous for a T-DNA insert in the PSY2L gene (SALK_040864) were isolated (psy2l line), and RT-PCR analysis confirmed complete knockout of PSY2L (Fig 1B). The psy2l plants were dwarf and extremely slow growing (Fig 2A-2E). They grew into small bushy plants producing many flowering stems with poor silique development and very few seeds per plant. Three other homozygous mutant lines (SAIL_1275_F05, SAIL_33_H01, SAIL_256_C08, Fig 1A) showed the same dwarfed phenotype and development into small bushy plants with very poor seed set, hence confirming that impairment of PSY2L causes such phenotypic traits (S1 and S2 Figs).
The psy2l mutants easily developed purple colored leaves typical for a high anthocyanin content (Fig 2D). WT plants growing on rock wool with complete nutrient medium did not have visible anthocyanins, as confirmed by measurements (Fig 3A). Anthocyanins accumulated in WT grown on nitrogen-depleted nutrient solution, as expected [22]. However, for psy2l, the anthocyanin level was high already on the complete nutrient medium, and was then further enhanced by low nitrogen in the growth medium (Fig 3A). After seven months, the psy2l plants were still flowering, and showed complete or partial sterility (Fig 2E). Alexander staining [23] indicated viable cytoplasm and some aborted pollen in the psy2l plants (Fig 2G).

[Fig 1 caption (fragment): The insertion line Salk_048064 (psy2l) was used in most studies. Target sites of amiRNAs are indicated with a red mark. ami1 targeted exon 3 in both PP4-1 and PP4-2 genes, and ami2 targeted exon 6 in both genes. Schemes are from the PLAZA database [21]. B, Quantitative real-time expression analysis of the PSY2L gene in WT (Col-0) and the SALK_048064 line tested with two different primer pairs spanning exons 18-19 (green columns) or exons 3 and 4 (blue columns). C, Quantitative real-time expression analysis of the PSY2L gene in EV/Col-0 (plants transformed with empty vector) and the psy-ami2 line. RNA from three replicates of soil-grown plants (four weeks old) was used. SE is given. Expression in mutant lines is significantly different from (EV)/Col-0 at the level: * p<0.05, ** p<0.01.]

Although most of the mutant pollen did stain red, differences from WT were obvious. Counting pollen grains in the microscope from ten intact anthers of WT and psy2l revealed a decrease in number by 58 ±6% in psy2l. Furthermore, pollen from WT anthers easily shed onto the microscope slide whereas mutant pollen did not. The oval shape of ripe pollen grains was clearly seen for WT, but seldom found for the mutant pollen grains (S3 Fig), and much less pollen germinated from psy2l. Apparently much less pollen was able to interact properly with the stigma and lead to seed formation in psy2l as compared with WT. When psy2l seeds were sown on ½MS agar with 1% sucrose, impaired root growth was striking (Fig 2H), and shoots were also smaller (Fig 2I). psy2l seedling roots clearly differed from WT by having root hairs closer to the tip of the root (Fig 2J and 2K) and a disorganized structure (Fig 2L and 2M), resembling roots of mutants with impaired DNA double strand break repair [24]. When roots were stained with propidium iodide and examined by confocal microscopy, it was clearly seen that the psy2l mutant had an aberrant meristem, e.g. a shorter meristem zone with many dead cells (S4 Fig). In comparison with WT, psy2l showed significantly delayed germination after 1 or 2 days at room temperature (Fig 3B), which could be caused by a low concentration of GA or enhanced ABA levels. Delayed germination was highly reproducible with different batches of seeds. Germination was also tested in the presence of 5 μM gibberellic acid, but this did not significantly influence germination (Fig 3B). Other concentrations of gibberellic acid (1 and 10 μM) were also tested, but gave no positive effects (data not shown).
Because of the severe phenotypes of psy2l T-DNA knockout mutants, we also generated two artificial microRNA (amiRNA)-encoding genes that target exons 7 and 13 (Fig 1A). These lines showed a 50% knockdown of PSY2L transcripts (Fig 1C). Interestingly, plants from both ami-RNA1 and ami-RNA2 lines showed visible phenotypes, with different rosette appearance, shorter roots and delayed flowering, but the effects were mild in comparison with the T-DNA knockout mutants of PSY2L. For example, mean root length of seedlings grown six days in vertical Petri dishes was 20.0 ± 0.7 mm for the WT control (transformed with empty plasmid), 5.0 ± 0.4 mm for the psy2l SALK line, and 16.3 ± 0.4 mm and 15.6 ± 0.3 mm for psy-ami1 and psy-ami2, respectively (S5 Fig).
Expression of PP4R2L in the homozygous mutant of pp4r2l (SALK_093041, Fig 1A) was tested with different primer pairs. The primer pair targeting upstream of the T-DNA insert showed over-expression, while the primer pair targeting downstream of the insert or spanning the full CDS showed knockdown of the transcript (Fig 1A, S6A Fig). The mutant showed no visual phenotype. An amiRNA complete knockout line for PP4R2L was generated (Fig 1A, S6B Fig), but it also did not show any striking phenotype. Possibly, there was a mild accelerated senescence-like phenotype for cauline leaves that needs to be carefully investigated in the future. The severe phenotype of knockout psy2l mutants, as opposed to the WT-like phenotype of pp4r2l mutants, points to involvement of only PSY2L, but not PP4R2L, in certain processes important for growth and development.

[Fig 3 caption (fragment): ...week. Before this treatment plants had been grown for 5 weeks in rock wool with complete nutrient solution. n = 3, SE is given. Statistically significant differences (p<0.02) are indicated by different letters above the bars. B, Seed germination of WT (circles) and psy2l (squares) sown on ½ MS salts with 1% sucrose without (open symbols) or with 5 μM gibberellic acid (closed symbols). After sowing, seeds had been stratified at 5˚C for three days, then placed at 22˚C in 16 h light/8 h darkness. In total there were 90 seeds for each treatment and plant type, i.e. three repeats each with 30 seedlings, n = 3, SE is given. On days 1 and 2 psy2l is significantly different from WT with p<0.01. GA effects were not significant.]
Genotoxicity assay
Three-day-old seedlings grown on ½ MS medium were transferred to new medium supplemented with 0-4 mg·L⁻¹ cisplatin and were allowed to grow horizontally for another 12 days (Fig 4; statistics in S7 Fig). When compared to cisplatin-free media (Fig 4A), seedlings from PSY2L mutants (psy2l, psy-ami1 (1, 2), psy-ami2 (1, 2)) showed severe growth retardation (Fig 4B) and lower survival (Fig 4C) on media supplemented with cisplatin. Control WT and the knockout of PP4R2L (pp4r2l-ami1 (1, 2)) behaved similarly and were far less influenced by cisplatin than psy2l and ami-psy (Fig 4A-4C). Higher cisplatin concentrations (6 and 8 mg·L⁻¹) were also tested, but strongly prevented growth in all plants (Fig 5C). PP4R2L with a free C-terminus clearly targeted the nucleus in addition to the cytosol (Fig 5D). In addition, a partial ER-like network was detected for PP4R2L with a free C-terminus (Fig 5E). The fusion proteins for PP4-1 had different targeting patterns depending on cells and expression systems. PP4-1 fusions showed cytosol and nucleus targeting. Altogether, the experiments indicate putative localization sites for PP4 in the cell. Localization patterns are complex and require further determination of the full PP4 complexes and also of the localization of the substrate(s) as they become known. In conclusion, all PP4 subunits were detected in the nucleus and in the cytosol, but with a lower frequency of PP4-2 in the nucleus.
Global expression analysis of the psy2l mutant relative to WT
To find genes consistently influenced by PSY2L, two different tissue types were investigated.
In total, 2517 genes differentially expressed in psy2l rosette leaves by a factor of 2 (up or down) compared with WT were tested with the singular enrichment analysis (SEA) AgriGO bioinformatics tool kit [26]. Likewise, 2989 genes from psy2l seedlings were compared with WT seedlings (Fig 6, S1-S4 Tables). When examining "Molecular function" and "Biological process" using the AgriGO tool, several groups of significantly enriched genes were delivered. Interesting GO terms that are related to the observed psy2l phenotype and significantly enriched are listed in Table 1. When examining "Cellular component" with the AgriGO tool, "nucleus" was the clear-cut significant subcellular compartment. Genes of different GO terms were further compared between rosette leaves and seedlings to identify joint genes with expression similarly perturbed in the two different tissue types (Tables 1 and 2). Although PSY2L obviously may regulate different genes in specialized tissues, focusing on the genes co-regulated in both tissues should help to identify specific genes most likely influenced by PSY2L (Table 2).
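As a schematic illustration of how such joint gene lists can be assembled (the file names and column layout below are hypothetical, not the authors' actual data files), one can intersect the significant two-fold-changed gene sets of the two tissue types:

```python
# Sketch: recover "joint" genes changed >= 2-fold in the same direction
# in both psy2l rosette leaves and seedlings, at p < 0.05.
import pandas as pd

def perturbed(path):
    """Return {AGI id: fold change} for significant >= 2-fold genes."""
    df = pd.read_csv(path)      # assumed columns: gene, fold, pval
    df = df[(df.pval < 0.05) & ((df.fold >= 2.0) | (df.fold <= 0.5))]
    return dict(zip(df.gene, df.fold))

rosette = perturbed("psy2l_vs_wt_rosette.csv")     # hypothetical file
seedling = perturbed("psy2l_vs_wt_seedling.csv")   # hypothetical file

shared = rosette.keys() & seedling.keys()
joint_up = [g for g in shared if rosette[g] >= 2.0 and seedling[g] >= 2.0]
joint_down = [g for g in shared if rosette[g] <= 0.5 and seedling[g] <= 0.5]
print(len(joint_up), "jointly up;", len(joint_down), "jointly down")
```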
Kinase activity genes. Kinase activity genes in psy2l constituted an enriched GO term (Table 1). Many of these genes, e.g. 93 in rosette leaves and 83 in seedlings, were down-regulated in the mutant relative to WT. Of the two-fold down-regulated kinase genes, about half encoded protein kinases. Shared between both tissue types, 21 genes were more than two-fold down-regulated while 12 were up-regulated (Table 1). The putative receptor kinase CRINKLY3 (AT3g55950) was lowered by factors of 6.8 and 3.0 in the two tissue types (Table 2). Other members of the small CRINKLY4 gene family were also down-regulated; CRINKLY4 (AT5g47850) was down-regulated 24.9 times in psy2l rosette leaves (but not present in the seedling gene list), and CRINKLY1 (AT3g09780) was down-regulated 2.1 times in seedlings. The exception to down-regulation was CRINKLY2 (AT2g39180), which was up-regulated 2.9 times in rosette leaves (but not present in the seedling gene list). The last member of the group, ACR4 (AT3g59420), was not present in the lists for rosette leaves or seedlings. The CRINKLY4 group of receptor-like kinases is involved in a wide range of developmental processes, and down-regulation of CRINKLY4 genes was found to give dwarf plants with misshapen leaves and low fertility [27,28]. Altered expression of these genes in psy2l may contribute to the observed phenotype. Another protein kinase gene, WAG1 (AT1g53700), was 3.9 times down-regulated in rosette leaves and 2.9 times down-regulated in seedlings. This protein kinase has a function in root development [29] and its strong down-regulation may be related to the effects seen in psy2l, e.g. poor root growth.
Among the 12 up-regulated kinase genes was the highly interesting WEE1 gene, that is known to be transcriptionally activated by impaired DNA replication or by DNA damage [30]. WEE1 was 2.4 times up-regulated in rosette leaves and 2.2 times up in seedlings ( Table 2). The poor root growth observed in psy2l (Fig 2H-2M, S4 Fig) may be, at least partly, caused by upregulation of WEE1 in agreement with work by De Schutter et al. [30] where overexpression of WEE1 led to arrest of root growth.
Tyrosine protein kinases were significantly enriched (Table 1), implicating PSY2L in regulation of these kinases. Many of the tyrosine protein kinases are annotated as localized to membranes, e.g. plasma membrane, endomembrane, or as transmembrane receptor proteins. Fourteen and 19 protein tyrosine kinase genes were two-fold down-regulated in rosette leaves and seedlings, respectively (only 3 shared). None of these kinases have been further characterized (TAIR database).
Protein serine/threonine phosphatase activity. Interestingly, kinases were generally more significantly enriched among down-regulated genes, while phosphatases were more enriched among up-regulated genes (Table 1). All the up-regulated phosphatases belonged to the PP2C group or the PAPs (PURPLE ACID PHOSPHATASEs), with only one exception, a TOPP6 (PP1-type phosphatase) that was about two-fold up-regulated in psy2l seedlings. The PAPs have a broad range of substrates including both proteins and small organic compounds. They may have regulatory functions as well as functions in mobilization of phosphate [31,32]. PAP17 was up-regulated 2.4-fold in both types of psy2l tissues tested, and this phosphatase has been shown to display peroxidase activity [33]. Interestingly, the PP2C phosphatase ABI1 (ABA INSENSITIVE 1) was up-regulated about 2.3 times in both tissue types. ABI1 is known as a negative regulator of ABA-induced stomatal closure [34,35]. HAI1 (HIGHLY ABA-INDUCED PP2C GENE 1) was up-regulated by a factor of 18.2 in rosette leaves and by a factor of 2.4 in seedlings. This gene is also annotated as a negative regulator of osmotic stress and ABA signaling. A close homolog, HAI2, was induced by a factor of 7-8 in both tissue types (Table 2).

[Fig 5 caption (fragment): Partial overlap between OFP-ER and free C-terminus PP4R2L was detected in (E). F-H, PP4.1 showed a variability of targeting patterns, including cytosol (F-H), weak nucleus targeting (F, H) and unknown punctate structures (F). In addition, in some cells, targeting of the nuclear envelope was also seen (G, H). I-K, PP4.2 protein showed mostly targeting to the cytosol and unknown punctate structures (I-K). Endoplasmic reticulum was labeled by OFP-ER [25]. Scale bars = 20 μm.]
Anatomical structure development. Taking into account the strikingly altered anatomy of psy2l, we analyzed the GO term "Anatomical structure development". For the rosette leaves there were more up-regulated than down-regulated genes, whereas for seedlings the numbers were similar. Among the jointly up-regulated genes (22 genes), many were flavonoid-pathway or other epidermis-related genes, for example CER1 (ECERIFERUM 1), a fatty acid hydroxylase related to production of stem epicuticular wax and pollen fertility [36]. These genes point to involvement of PSY2L in regulation of epidermis characteristics. Four LEA (LATE EMBRYOGENESIS ABUNDANT) genes with unknown function were more than four times up-regulated in both tissue types. LEA proteins appear to contribute to drought resistance during the vegetative stage, but most LEA genes have not been functionally characterized [37,38]. PSY2L appears necessary to restrain expression of some LEA genes, indicating a negative control by PSY2L on these LEA genes either as a secondary or as a more direct effect.

[Fig 6 caption: The two left columns represent rosette leaves from plants in soil, the two middle columns represent 8-10 days old seedlings, and the two columns to the right represent joint genes two-fold different from WT in both rosettes and seedlings. Numbers of genes are from expression data of three biological mutant samples versus three WT samples for each type of tissue; p<0.05 for genes listed as significantly differently expressed (S1-S4 Tables).]
Only 9 genes were jointly down-regulated in the "Anatomical structure development" GO term. TRY, a gene encoding a small MYB/homeodomain-like superfamily transcription factor involved in trichome distribution [39], was strikingly down-regulated. Knockout of TRY is known to give clustering of trichomes [39], but trichomes were evenly distributed on the leaves of the psy2l mutant. Another strikingly down-regulated gene was ELF5A-1 (EUKARYOTIC ELONGATION FACTOR 5A-1), a translation factor. This is a conserved translation factor involved in promotion of ribosomal function. Expression of ELF5A-1 was strongly down-regulated in both rosette leaves (5.7-fold) and seedlings (4-fold). This may be a key gene related to the slow-growth phenotype of psy2l, as suppression of this gene is known to impair xylem formation [40].

[Table 1 caption: Singular enrichment analysis (SEA) using AgriGO for genes more than two-fold differently expressed in psy2l versus WT. GO terms of special interest for the observed phenotype are presented. Genes were sorted as up- or down-regulated. Non-significant is marked as ns. Input numbers for two-fold up-regulated genes were 1227 and 1196, and for two-fold down-regulated genes 1290 and 1793, for rosette leaves and seedlings, respectively. The number of genes in the background reference (BG/Ref) is given for each GO term. The total annotated number in the background reference was 31819 genes.]
Response to hormone stimulus-ABA. Hormones are likely to play important roles in forming the phenotype of psy2l, and sub-terms of the highly significantly enriched GO term "Response to hormone stimulus" were inspected (Table 1). Up-regulated, but not down-regulated, ABA-stimulated genes constituted an enriched group. Among the up-regulated genes, 15 genes were common to both rosette leaves and seedlings (Table 1). Five of these shared genes were transcription factors, and most strikingly up-regulated were ATHB7 and ATHB12 (HOMEOBOX 7 and 12) (Table 2). ATHB7 in particular was up-regulated from 23 to 262 FPKM (fragments per kilobase per million mapped reads) in rosette leaves and from 27 to 72 FPKM in seedlings. Recently, a high expression level of ATHB7 was found to delay senescence in Arabidopsis [41]. Delayed senescence (longevity) was a striking phenotypic trait of psy2l plants. Since high expression of ATHB7 was also pronounced in seedlings, this supports PSY2L being a suppressor of ATHB7. Interestingly, ABI5 was up-regulated in psy2l. ABI5 is known as an inhibitor of germination [42]; hence high expression of ABI5 is relevant in relation to the delayed germination observed for psy2l (Fig 3B).
Response to hormone stimulus-Ethylene. "Response to ethylene" was a significantly enriched gene group (Table 1); however, only one shared gene was more than two-fold up-regulated in both rosette leaves and seedlings. This was the MYB13 gene, also found in the ABA-responsive group of genes. Three ethylene-responsive genes, all transcription factors, were more than 3-times down-regulated. Most strongly influenced was ERF15 (ETHYLENE-RESPONSIVE ELEMENT BINDING FACTOR 15), which was 7 and 3 times down-regulated in rosette leaves and seedlings, respectively. ERF15 was recently found to be a positive regulator of the ABA response [43].
Response to hormone stimulus-Cytokinin. Down-regulated genes were enriched for response to cytokinin stimulus in both rosette leaves and seedlings. In rosette leaves 5 of these down-regulated genes were two-component response regulators, and in seedlings 7 of these genes were two-component response regulators. ARR15 (RESPONSE REGULATOR 15) and ARR5 (RESPONSE REGULATOR 5) were strongly down-regulated, 5-9 times for ARR15 and 3-4 times for ARR5 (Table 2). Both ARR15 and ARR5 are known to be induced or stabilized by cytokinin (TAIR annotation), indicating that the cytokinin level in the psy2l mutant is lowered relative to WT.
Response to hormone stimulus-Gibberellic acid and auxin. The GO term "Response to gibberellin stimulus" was highly enriched for down-regulated genes, and six genes were common to rosette leaves and seedlings (Table 1). Five of the 6 genes were transcription factors, and three genes were also responsive to auxin, according to GO annotation. The most striking gene was BHLH137 (At5g50915), which was 4.4 and 7.9 times down-regulated in rosette leaves and seedlings, respectively (Table 2). The "Response to auxin" GO term gave variable results, with down-regulated genes being highly significant for rosette leaves, but not significant for seedlings. However, when specifically searching for two-fold-changed SAUR (SMALL AUXIN UP RNA) genes, 13 down-regulated SAURs were found for rosette leaves (and one up-regulated), and eleven down-regulated SAUR genes were found for seedlings (and two up-regulated) (data in S1-S4 Tables). Since SAUR genes are markers for auxin effects [44], these results point to auxin levels being lower in the psy2l mutant in comparison with WT.

Flavonoid biosynthetic process. The psy2l mutant easily developed purple colored leaves although control WT plants in the same pots did not (Figs 2D and 3A). This was reflected in the up-regulation of several genes involved in flavonoid synthesis (Tables 1 and 2). Transcripts of the general flavonoid pathway regulator MYB75/PAP1 (PRODUCTION OF ANTHOCYANIN PIGMENT 1) were up-regulated 41 and 15 times in rosette leaves and seedlings, to high expression levels of 55.9 and 54.5 FPKM, respectively. A bHLH transcription factor promoting the last steps in proanthocyanin and anthocyanin synthesis, TT8 (TRANSPARENT TESTA 8), was also strongly induced, i.e. 8-35 times, resulting in FPKM levels around 10 for both seedlings and rosette leaves. On the other hand, the TT8 close homologs GL3 and EGL3, which usually stimulate anthocyanin synthesis in Arabidopsis leaves [22], were expressed only at a low level. The regulator of TT8, TTG2/WRKY44, was also expressed at a high level, 4 and 6 times higher in psy2l seedlings and rosette leaves, respectively, compared with WT. Structural genes of the anthocyanin branch of the flavonoid pathway are positively regulated by PAP1 and TT8 in complex with the constitutive TTG1 protein [45], and this is in line with DFR (DIHYDROFLAVONOL 4-REDUCTASE) and LDOX (LEUCOANTHOCYANIDIN DIOXYGENASE) expression being enhanced 14-54 times in psy2l rosette leaves and seedlings (Table 2).
Transport. The GO term "Transport" was highly enriched (Table 1). In common for rosette leaves and seedlings, 24 genes were up-regulated and 23 genes were down-regulated (Table 1). The affected genes included many different kinds of transporters, like MATE (multidrug transporters), ABC (ATPase-coupled transporters), POT (proton-dependent oligopeptide transporters), and transporters involved in iron, phosphate, sulfate, ammonium, lipid, purine and sugar transport. Genes co-regulated in both tissue types and more than 4-fold perturbed in comparison with WT are listed in Table 2. Transporters implicated in lipid transport were highly represented among both up- and down-regulated genes, e.g. a total of 10 joint genes. Among the down-regulated genes, the presence of 5 chloroplast and two mitochondrial transporters indicates that functions in these organelles are influenced by PSY2L. A mitochondrial inner membrane carrier (At5g26200) was down-regulated 4-6 times in both rosette leaves and seedlings (Table 2). Related to chloroplasts, a gene involved in protein folding and transport (At2g30695), containing a conserved bacterial ribosome-binding trigger factor domain, was down-regulated 2.4 and 2.9 times, but appeared very stable in WT control tissue. A chloroplast envelope sugar/phosphate antiporter gene, GPT2 (glucose-6-phosphate/Pi transporter), was up-regulated by factors of 7.4 and 7.9 in rosette leaves and seedlings. GPT2 allows equilibration of glucose-6-phosphate and phosphate in the cell. GPT2 is induced by high sugar levels and in response to various other endogenous and external signals [46]. The data are compatible with PSY2L being a suppressor of GPT2.
Nucleus. "Nucleus" was the most highly enriched "subcellular compartment" GO term with 49 genes jointly up or down-regulated in rosette leaves and seedlings. These were mainly transcription factors, e.g. 31 genes, many already mentioned as influenced by hormones.
Most striking was a group of 8 histones, all up-regulated (At1g13370, At2g28720, At2g28740, At3g09480, At3g45930, At3g46320, At3g53730, At5g10980). Most were highly up-regulated, e.g. 4-17 times (Table 2). The physiological significance of this up-regulation is not clear, but changes in histone composition are involved in cell cycle progression in Arabidopsis [47,48].

DNA damage repair response and cell cycle arrest. We inspected expression of genes conserved in eukaryotes and involved in DNA double strand break (DSB) repair (genes listed in Amiard et al. [49]). This revealed 19 genes with changed expression in psy2l versus WT (Table 3). Additionally, 11 DNA repair genes were selected by the AgriGO tool. Several of the DNA repair-associated genes are induced by radiation, like BRCA1, GR1, XRI1, RAD17, RAD51 and RAD54. The lower part of Table 3, with AgriGO-selected genes, also comprises DNA repair genes not involved in DSB repair (DNA glycosylases). Furthermore, a gene not revealed by the Amiard list or AgriGO, WEE1, was up-regulated in psy2l and is also considered important for DSB repair in Arabidopsis [24] (Table 2). DNA damage repair signaling and cell cycle arrest are tightly connected [30,50]. DNA damage activates signaling pathways through the sensor kinases ATM and ATR, and this signaling activates cell cycle arrest that allows time for DNA repair [49]. In Arabidopsis, the cell cycle inhibitor kinase WEE1 is transcriptionally activated in response to DNA damage or cessation of DNA replication, signaled through ATR [30]. Furthermore, the cell cycle inhibitors and checkpoint regulators SMR5 and SMR7 are known to be transcriptionally activated by genotoxic stress [50]. These genes were also up-regulated in the psy2l mutant. SMR7 was 7-fold up in both psy2l rosette leaves and seedlings (S1 and S3 Tables), strongly indicating that cell cycle progression was impaired.
Kinases and phosphatases
Generally, protein phosphatases inactivate protein kinases by dephosphorylation of the activation loop in kinases, and additional sites may also be regulated by phosphorylation/dephosphorylation. Hence, when a crucial protein phosphatase is impaired this may lead to increased phosphorylation status of certain protein kinases, which may further lead to induction of a negative feedback on gene expression to restore normal levels of kinase activity. This may be part of the explanation for enrichment of down-regulated kinase genes (Table 1). Furthermore, impairing the activity of an important phosphatase complex like PP4c-PSY2L may lead to enhanced expression of other protein phosphatases as an attempt to establish homeostasis by up-regulation of phosphatases that partly can replace the impaired phosphatase. Up-regulated protein phosphatase genes were enriched, especially PP2C and PAP phosphatases.
Flavonoids
Typical nutrient-stress-sensitive regulators of the flavonoid/anthocyanin pathway, PAP2 and GL3 [22], were not influenced by the PSY2L knockout, but were expressed at a very low level in both seedlings and rosette leaves, as in WT. Furthermore, the HY5 gene, which acts as an integrator of light signaling promoting flavonoid synthesis, was not consistently up-regulated, but was increased by a factor of 1.8 in seedlings and decreased by a factor of 0.6 in rosette leaves (S6 and S7 Tables). The TT8 gene is generally highly expressed in developing seeds, and is not induced by stress factors like nutrient depletion or high light intensity (TAIR database, eFP Browser, and [9,45,51]). The strong up-regulation of TT8 expression in both psy2l seedlings and rosette leaves is intriguing (Table 2). Apparently TT8 has overtaken the function of its homologs GL3 and EGL3, which usually are important for anthocyanin synthesis in leaves [51]. Possibly, a phosphorylated regulator in the psy2l mutant, otherwise inactivated by dephosphorylation when PSY2L is present, may activate expression of TT8, PAP1, and TTG2 in psy2l. Taken together, PSY2L appears to act as an upstream, negative regulator of specific transcription factors, e.g. PAP1 and TT8. High expression levels of these transcription factors explain the high levels of structural anthocyanin synthesis genes and the accumulation of anthocyanins in psy2l.

[Table 3 caption: Genes involved in DNA double strand break signaling and repair in Arabidopsis. Listed according to Amiard et al. (2013) [49]. Additional genes involved in DNA repair identified using AgriGO (GO Analysis Toolkit and Database for Agricultural Community) [26] are added. Arabidopsis ID numbers marked with * are involved in DNA double strand break repair according to AgriGO SEA or TAIR.]

Sugar metabolism, a conserved PSY2-regulated function?
In yeast, mammals, and C. elegans, PSY2 has various regulatory roles in sugar transport and metabolism, including transport of glucose in yeast [16]. Many transporter activity genes showed altered expression in psy2l, and intriguingly the GPT2 (glucose-6-phosphate/phosphate) transporter gene was transcribed at a highly increased level in psy2l, i.e. 7-8-fold increased (Table 2). In WT, the GPT2 transporter appears to be generally repressed unless certain signals from environmental or developmental cues occur [46]. The results here point to PSY2L as a negative regulator upstream of GPT2. When PSY2L is impaired, GPT2 is constitutively expressed at a high level in very different tissues. Apparently, in a wide range of different eukaryotes PSY2L is involved in regulation of sugar transport and/or metabolism.
Anatomical structures
The strikingly slow root growth, root hairs close to the root tip, and rippled morphology of the psy2l roots (Fig 2H-2M) resemble the phenotype found for WT roots treated with bleomycin to induce DNA double strand breaks [24]. Staining of the roots revealed less DNA in cells at the root tip and a high number of dead cells (S4 Fig). The observed phenotype appears to be caused by impaired cell cycle progression, which can be induced by DNA repair signaling. RNA-seq data showed up-regulation of WEE1, SMR5 and SMR7, all known to be transcriptionally up-regulated by DNA damage stress and to inhibit cell cycle progression [30,50]. Ectopic expression of SMR5, and especially SMR7, hampered cell division and growth of shoots [50]. Also for psy2l, growth beyond the seedling stage, including pollen formation and seed set, is likely hampered by restricted cell cycle progression. Overall, the phenotype and expression analysis strongly underpin that PSY2L has a function in control of cell cycle progression. PSY2L may have several targets, and targets other than the cell cycle should not be excluded when explaining the phenotype. In other multicellular organisms PSY2 was also important for growth and development. In Drosophila, PSY2 (falafel) knockout disturbed physiological development, i.e. specific tissues, such as eyes and wings, started to die [52]. Overexpression of PSY2 (SMK1) in C. elegans resulted in worms that could not be maintained as stable lines, and the F1 progeny died during embryogenesis [13]. In the work presented here, expression analysis showed that several genes annotated as involved in anatomical structure development had altered expression levels in the psy2l mutant. Interestingly, transcripts of both transcription factors HOMEOBOX 7 (ATHB7) and HOMEOBOX 12 (ATHB12) were significantly up-regulated. A PP4c-PSY2L complex may act as an upstream negative regulator of such transcription factors in Arabidopsis.
There are three ELF5A/eIF5A (EUKARYOTIC ELONGATION FACTOR 5A) translation factors in Arabidopsis, and ELF5A-1 has a special function in formation of the xylem [40]. It was previously shown by Liu et al. [40] that mutants overexpressing ELF5-1 had a thicker layer of xylem cells and thicker flowering stems, while reducing the level of ELF5-1 to 50% of WT levels resulted in thinner layers of xylem and a reduced radius of the flowering stems. In our study, expression of ELF5-1 was reduced to about 20% of WT levels in both seedlings and rosette leaves. The flowering stems of the psy2l mutant often appeared flimsy and not able to stand upright like those of WT. Possibly PSY2L has a direct effect on transcription factors regulating ELF5-1 expression, or alternatively a more indirect effect through influencing hormone levels.
The psy2l phenotype with delayed germination (Fig 3) and impaired growth (Fig 2) was consistent with the expression data, which indicated high ABA, but low GA, cytokinin and auxin levels. Interestingly, it was recently also reported that genotoxic stress induced DNA repair signaling and delayed germination in a SMR5 dependent manner [53].
DNA damage checkpoint and cell division
In addition to the cytosol and nucleus, PP4c is known to localize to centrosomes/spindle pole bodies in human and Drosophila cells [54,55]. Centrosomes/spindle pole bodies are parts of the microtubule-organizing centres (MTOCs) that are responsible for meiotic and mitotic spindle apparatus organization during cell division. Plants do not have centrosomes/spindle pole bodies, and the nuclear envelope is thought to play the role of the microtubule-organizing centre [56]. In this study, one of the targeting patterns obtained for tagged Arabidopsis PP4c was a network-like structure around the nucleus (Fig 5G and 5H). These structures may be of interest for future investigation of the role of PP4 in cell division in plants. In mammals and yeast, phosphorylated H2AX histones are an important signal for DNA damage, and PP4 dephosphorylation of the H2AX histone is required for recovery from the DNA damage checkpoint [57]. In contrast, phosphorylation/dephosphorylation of this histone does not seem to be important in Arabidopsis. It remains to be clarified whether other histones have such a function in plants [49]. The hampered growth of psy2l roots and aberrant meristem, the sensitivity to cisplatin, and the up-regulation of DNA damage and cell cycle arrest genes substantiate the involvement of Arabidopsis PSY2L in maintenance of genome integrity. The connection of plant PP4 with the DNA damage checkpoint deserves further investigation.
Conclusion
Although some chains of events seem straightforward, like high TT8 and PAP1 expression causing high levels of anthocyanins, some caution should also be taken regarding interpretation of transcript levels as markers for stimulation versus inhibition of a biological process. High transcript level of a gene may sometimes reflect that the translated product is not functional and a negative feedback loop could thereby have been distorted. This could be caused by lack of dephosphorylation of a protein by PP4c-PSY2L. Gene expression analysis can give only indications of which pathways are influenced by PP4c-PSY2L since the primary action of PP4 complexes is dephosphorylation, which takes place on the protein level. The present work reveals PSY2L as an essential regulator for growth and development in plants, likely implicating DNA damage signaling and cell cycle progress. Several perturbed genes and pathways have been identified, and these data pave the way for further exploration of the involvement of PSY2L in specific physiological processes, tissue types, and interaction with candidate genes and proteins.
Generation of amiRNA and gene overexpressing transgenes
For construction of amiRNA-expressing transgenes, we searched for potential targets against PP4-1 and PP4-2 (jointly), PP4R2L, and PSY2L using default settings of the Web MicroRNA Designer (WMD) application (http://wmd3.weigelworld.org), based on previously established parameters [63,64]. Mostly, the amiRNAs at the top of the provided list were chosen, and checked using the mirU [65] or psRNATarget (http://plantgrn.noble.org/psRNATarget) [66] websites. Two potential amiRNAs were selected for each target, and their primers (I-IV, see S5 Table) were provided by the WMD3 website. Using these primers and two template-specific primers (A and B, see S5 Table), PCR amplifications composed of two rounds were performed using the template plasmid pRS300 (Addgene: 22846) containing the miR319a precursor [64]. The amplified amiRNA transgenes were cloned into the pGEM-T Easy vector (Promega) and verified by sequencing. Subsequently, the transgenes were excised and subcloned into the 35S-promoter-containing binary vector pBA002 [67]. In order to generate overexpressor lines of the selected genes, cDNAs were amplified and subsequently cloned into the pBA002 vector.
The freeze-thaw method was used to transform the constructs into Agrobacterium ABI-1, a derivative of GV3101 (pMP90RK) that possesses the RK2 replicase and the trf gene required for plasmid replication. These strains were used for plant transformation using the floral dip method [68]. Screening of first- to third-generation seeds was performed on 1/2 MS agar plates containing 10 μg mL⁻¹ phosphinothricin. Resistant seedlings were selected 10-14 d after germination.
Subcellular localization and microscopy
Three- to four-week-old plants grown in soil at 12 h light/12 h darkness were used for protoplast isolation. Arabidopsis mesophyll protoplast isolation and subsequent PEG transfection with plasmids were adapted from Sheen [72] and Yoo et al. [73]. Briefly, strips of Arabidopsis leaves were incubated with enzyme solution overnight at room temperature in the dark. The released protoplasts were filtered, centrifuged, and re-suspended in W5 solution. After 1 h incubation on ice, protoplasts were pelleted and re-suspended in MMg solution. The re-suspended protoplasts were subsequently transfected, using polyethylene glycol, with the above-mentioned plasmids, and incubated for 18-48 h. For transformation into onion epidermal cells, plasmids were precipitated onto gold particles and transiently introduced by a helium-driven particle accelerator (PDS/1000; Bio-Rad, Hercules, CA, USA) with adjustments set to the manufacturer's recommendations. The bombarded epidermal cell layer was incubated for one to two days. Transfected protoplasts and onion epidermal cells were then examined using fluorescence and confocal microscopes. Microscopy analysis was done using a Nikon TE-2000U inverted fluorescence microscope equipped with an Exfo X-Cite 120 fluorescence illumination system and filters for YFP (exciter HQ500/20, emitter S535/30), a Texas red filter set for RFP or OFP, 31004 (exciter D560/40x, emitter D630/60m), and a special red chlorophyll autofluorescence filter set (exciter HQ630/39, emitter HQ680/40; Chroma Technologies). Images were captured using a Hamamatsu Orca ER 1394 cooled CCD camera. The NIS-Elements AR analysis software (Nikon) was used to capture 0.5 μm Z-sections to generate extended focus images. A Nikon A1R confocal laser scanning microscope with a ×60 water objective was also used. Fluorescence images of EYFP (exciter 488, emitter 525) and OFP (exciter 561, emitter 595) were acquired and analyzed using the NIS-Elements AR analysis software (Nikon). Images were subsequently processed for optimal presentation with Adobe Photoshop version 9.0 (Adobe Systems, San Jose, CA, USA).
Anthocyanin determination
Anthocyanin determination was adapted from Feyissa et al. [22]. Leaf tissue (0.05 g) was extracted in 300 μL extraction buffer consisting of 1% v/v HCl (1.2 M) in methanol. The leaves were extracted with constant shaking overnight at 4˚C. Distilled water (200 μL) and chloroform (500 μL) were added, and the samples were centrifuged at 13,000 x g for 2 min. The upper layer (400 μL) was transferred to an Eppendorf tube and mixed with 600 μL extraction buffer, followed by centrifugation for two min at 13,000 x g. The absorbance was read at 530 and 657 nm, and the relative concentration of anthocyanin was calculated as Abs530 - Abs657.
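The calculation amounts to a simple difference of the two absorbance readings; a minimal sketch follows (the per-fresh-weight normalization is our added assumption, not stated in the protocol above):

```python
# Sketch of the relative anthocyanin calculation: A530 - A657, where the
# A657 reading is commonly used to correct for chlorophyll interference.
def relative_anthocyanin(a530, a657, fresh_weight_g=0.05):
    """Relative anthocyanin content (arbitrary units per g fresh weight,
    assuming normalization to tissue fresh weight)."""
    return (a530 - a657) / fresh_weight_g

# Example with illustrative absorbance readings for one extract
print(relative_anthocyanin(0.82, 0.15))   # -> 13.4 arbitrary units / g FW
```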
Alexander staining
Pollen viability was checked using Alexander's stain [23]. Flowers that were about to open were dissected, and dehiscent anthers were incubated with the stain on a microscope slide.
Genotoxicity assay
Seeds were sown on ½ MS media (M5519, Sigma-Aldrich, St. Louis, MO, USA) supplemented with 1% sucrose and 0.8% plant agar (Duchefa Biochemie, Haarlem, Netherlands), stratified for 2 d, and allowed to grow under 16 h light/8 h dark cycles. To prepare a stock solution of 0.5 mg/mL, cisplatin (cis-diamminedichloroplatinum(II), Sigma-Aldrich, St. Louis, MO, USA) was first dissolved in 1 mL of dimethylformamide and then mixed with 19 mL of 0.9% saline solution. To evaluate genotoxicity in control and mutant plants, 3-d-old seedlings were transferred to media supplemented with 0-8 mg L⁻¹ cisplatin, and Petri dishes were placed horizontally for 12 d or vertically for 3 d, to investigate the effects on shoot and primary root development, respectively. Root measurements were made using ImageJ (https://imagej.nih.gov/ij/index.html).
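The dilution arithmetic implied by this protocol can be sketched as follows (volumes are computed neglecting the small volume added by the stock itself):

```python
# Sketch of the dilution arithmetic for the 0.5 mg/mL cisplatin stock
# (10 mg dissolved in 1 mL DMF + 19 mL saline): volume of stock needed
# per litre of medium for each test concentration used above.
STOCK_MG_PER_ML = 0.5

def stock_volume_ml(target_mg_per_l, medium_l=1.0):
    """mL of stock to add to `medium_l` litres for a target mg/L dose."""
    return target_mg_per_l * medium_l / STOCK_MG_PER_ML

for dose in (0, 2, 4, 6, 8):           # mg/L doses used in the assay
    print(f"{dose} mg/L -> {stock_volume_ml(dose):.1f} mL stock per litre")
# e.g. the 4 mg/L treatment requires 8 mL of stock per litre of medium.
```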
qRT-PCR
For qRT-PCR, total RNA was extracted using the RNeasy Plant Mini Kit and treated with on-column DNase I digestion (Qiagen, Hilden, Germany). One μg RNA was reverse-transcribed using the High Capacity cDNA Archive Kit (Applied Biosystems, Foster City, CA, USA) to generate first-strand cDNA in a 20 μL reaction volume. Quantitative real-time PCR was performed on a LightCycler 96 Sequence Detection System (Roche Diagnostics, Mannheim, Germany) using 96-well plates with a 15 μL reaction volume containing 7.5 μL of TaqMan buffer (Applied Biosystems; includes 6-carboxy-X-rhodamine as a passive reference dye), 0.75 μL primer, 45 ng of the first-strand cDNA, and water. Primers were predesigned TaqMan gene expression assays (S5 Table). The qPCR results were analyzed using LightCycler 96 analysis software 1.1 (Roche). The comparative threshold cycle method for relative quantification was used with ACTIN8 (At1g49240, TaqMan At02270958) as the reference gene.
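A minimal sketch of the comparative threshold cycle (2^-ddCt) calculation, with ACTIN8 as the reference gene and made-up Ct values for illustration:

```python
# Sketch of the comparative threshold cycle (ddCt) method: expression of
# a target gene in the mutant relative to WT, normalized to ACTIN8.
def fold_change(ct_target_mut, ct_actin8_mut, ct_target_wt, ct_actin8_wt):
    """Relative expression (mutant vs WT) by the 2^-ddCt method."""
    d_ct_mut = ct_target_mut - ct_actin8_mut   # normalize to reference gene
    d_ct_wt = ct_target_wt - ct_actin8_wt
    dd_ct = d_ct_mut - d_ct_wt                 # calibrate against WT
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: a target nearly absent in the knockout
print(fold_change(ct_target_mut=33.0, ct_actin8_mut=20.0,
                  ct_target_wt=26.0, ct_actin8_wt=20.0))  # ~0.0078
```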
For in-gel expression screening, total RNA was extracted using a DNA-free RNA isolation protocol [74]. Isolated RNA was treated with DNase I (Invitrogen, Carlsbad, CA, USA) and precipitated with ammonium acetate (7.5 M) and ethanol. First-strand cDNA synthesis was performed using SuperScript III reverse transcriptase (Invitrogen, Carlsbad, CA, USA) in a 10 μL reaction mixture containing gene-specific primers. PCR amplification was done using DreamTaq DNA Polymerase (5 U/μL) (Thermo Fisher Scientific, Carlsbad, CA, USA). Primers for RT-PCR amplifications are listed in S5 Table.

RNA-seq

Rosette leaves from soil-grown plants (4 weeks old) and seedlings grown on ½ MS with 1% sucrose (with fully expanded cotyledons), from WT Col-0 and psy2l, were used for RNA-seq analyses. Harvested tissue was frozen in liquid nitrogen. Total RNA was extracted using the RNeasy Plant Mini Kit and treated with on-column DNase I digestion. Library preparation and RNA sequencing were performed by GATC Biotech (Konstanz, Germany). Expression analysis was performed by GATC Biotech using Bowtie transcriptome alignments, TopHat, and Cufflinks. Expression values are listed as means of three mutant samples compared with three WT samples. FPKM (fragments per kilobase per million mapped reads) and fold change with p-values are listed for significantly different expression values between the mutant and WT (p<0.05) (S6 and S7 Tables).
Three replicates of each tissue type were sequenced. AgriGO (GO Analysis Toolkit and Database for Agricultural Community) [26] singular enrichment analysis (SEA) was used to facilitate identification of gene groups with altered expression in the psy2l mutant relative to WT. Default settings were used (statistical test method Fisher, significance level p < 0.05). Significance values for specific gene groups are given in Table 1. A Perl script was written to extract selected genes from the AgriGO files.

[S3 Fig caption: Pollen appearance and germination. A, B, Analysis of the morphology of mature pollen grains of WT and psy2l mutant plants (SALK_048064). In WT most of the pollen have prolate (ovoid) morphology with a tricolpate aperture (three furrows), while in psy2l most of the pollen did not develop mature pollen morphology. C, D, Germination of WT and psy2l pollen grains on the optimal solid medium. Generally, anthers of the psy2l mutant produced fewer pollen grains, and these were less dehiscent compared with WT. In conclusion, much less pollen germinated from psy2l in comparison with WT. Scale bars = 1 mm.]

[S4 Fig caption: psy2l roots have a smaller meristem zone and some dead cells. Propidium iodide-stained root tips of WT and psy2l mutant (SALK_048064) seedlings grown on MS medium for 10 days. The arrows indicate the boundary of the meristematic zone from the quiescent center to the first elongated cell row of the transition zone; this zone was clearly smaller in psy2l. Cells with complete internalization of propidium iodide (red) indicate dead cells and were visible in psy2l. Scale bars = 100 μm.]

[S7 Fig caption: After growing three days on ½ MS medium with 1% sucrose, seedlings were transferred to new media for another 12 d for treatment with 0, 2, 4, or 6 mg L⁻¹ cisplatin (see main text Fig 4). The percentage of seedlings showing strong growth retardation was recorded visually. The experiments were repeated 3 times, with a total of 30-40 seedlings for each plant type and concentration of cisplatin. SE is given. EV/Col-0 and pp4r2l-ami seedlings at 2 mg L⁻¹ cisplatin were not different from the zero-cisplatin control (ns, not significant), but psy2l (SALK) and psy-ami seedlings showed significant growth retardation at 2 mg L⁻¹ cisplatin, p<0.01. Higher cisplatin concentrations (4-6 mg L⁻¹) gave growth retardation for all seedlings at p<0.01.]
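For illustration, the per-term Fisher's exact test underlying the AgriGO SEA can be reproduced as follows (the term-specific background count here is a placeholder, not a value from the study):

```python
# Sketch of the singular enrichment test performed per GO term by AgriGO:
# a one-sided Fisher's exact test for over-representation in a gene list.
from scipy.stats import fisher_exact

def go_enrichment(n_term_in_list, n_list, n_term_in_bg, n_bg=31819):
    """p-value for over-representation of a GO term in a gene list."""
    table = [[n_term_in_list, n_list - n_term_in_list],
             [n_term_in_bg - n_term_in_list,
              n_bg - n_list - n_term_in_bg + n_term_in_list]]
    _, p = fisher_exact(table, alternative="greater")
    return p

# e.g. 83 kinase-activity genes among the 1793 down-regulated seedling
# genes; 1000 is a placeholder for the term's background count.
print(go_enrichment(n_term_in_list=83, n_list=1793, n_term_in_bg=1000))
```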
Evaluation of GA4+7 plus 6-Benzyladenine as a Frost-rescue Treatment for Apple
SUMMARY. Freeze events during bloom can be a relatively frequent occurrence in many apple (Malus ×domestica) production areas in the United States that significantly reduce orchard productivity and profitability. This study investigated the potential for a proprietary mixture of gibberellins A4 and A7 plus 6-benzyladenine (GA4+7 plus 6-BA) to increase fruit set and cropping of apple following freeze events at three locations across the United States during bloom in 2012. GA4+7 plus 6-BA increased fruit set in two of five experiments, and increased fruit number and yield per tree in three of five experiments. GA4+7 plus 6-BA increased fruit set and yield of 'Taylor Spur Rome' following freezes on two consecutive days during bloom when the minimum temperature reached 23.9 and 28.4°F. Fruit set was increased due to a stimulation of parthenocarpic fruit growth. Using locally obtained market prices, GA4+7 plus 6-BA treatments increased the crop value of 'Taylor Spur Rome', 'Ginger Gold', and 'Jonagold' by $3842, $977, and $6218 per acre, respectively. Although GA4+7 plus 6-BA application(s) after a freeze increased fruit set and cropping in some instances, tree yields were well below the average yields previously obtained in the test orchards.
Temperatures near or below freezing during bloom in apple orchards occasionally result in significant reductions in fruit set and cropping due to death of the ovule/embryo before fertilization. Apple flowers are most sensitive to freezing temperatures at the full bloom stage, when air temperatures of 28 and 25°F are predicted to kill 10% and 90% of the blooms, respectively (Ballard et al., 1998).
Foliar gibberellin (GA) sprays during bloom increase fruit set in pear (Pyrus communis) by stimulating parthenocarpic fruit development (Deckers and Schoofs, 2002; Dreyer, 2013; Luckwill, 1960; Zhang et al., 2008). This strategy is used in commercial practice to increase fruit set and cropping in pear cultivars that naturally have poor fruit set (Lafer, 2008; Vilardell et al., 2008), and to increase fruit set of pear after frost damage during bloom (Ouma, 2008; Yarushnykov and Blanke, 2005). Deckers and Schoofs (2002) suggested that GAs need to be applied to pear within 4 d after a frost event to alleviate damage, and that gibberellic acid (GA3) increased fruit set more effectively than GA4+7.
Gibberellins, and in particular GA4, induce and sustain parthenocarpic growth of apple fruit (Bukovac, 1963; Davison, 1960; Luckwill, 1960). Bukovac (1963) reported that gibberellins "induced parthenocarpic fruit growth and promoted fruit development to maturity at growth rates and with final size, color, and general quality comparable to seeded fruits" in the cultivars Sops-of-Wine, Wealthy, and Delicious. However, Wertheim (1973) showed that GA4+7 only temporarily increased fruit set of 'Cox's Orange Pippin' apple flowers after effemination. Thus, the efficacy of GA4+7 in promoting parthenocarpy may vary between apple cultivars. Application of gibberellins together with the synthetic cytokinin N-(2-chloro-4-pyridyl)-N-phenylurea (CPPU) had a positive synergistic effect on set of parthenocarpic fruit and on fruit size in apple (Bangerth and Schröeder, 1994; Watanabe et al., 2008).
While GA sprays are commonly used to increase fruit set and cropping of pear following a freeze, there is no information describing the efficacy of this treatment in apple. The objectives of the current studies were to investigate the potential for a proprietary formulation of GA4+7 plus 6-BA (Promalin®; Valent BioSciences Corp., Libertyville, IL) to increase fruit set and cropping of apple following freeze events during bloom. A series of spring freezes throughout major apple production regions in the eastern United States in 2012 provided the opportunity to evaluate GA4+7 plus 6-BA applications as a frost-rescue treatment.
Materials and methods
HENDERSON COUNTY, NC. A commercial formulation of 1.8% w/w each of GA4+7 and 6-BA (Promalin®) was applied at a rate of 25 or 50 mg·L⁻¹ to mature 'Taylor Spur Rome'/'Malling 7' ('M.7') apple trees in a commercial orchard in Henderson County, NC. There was minimal elevation change across the orchard. There were a total of three treatments in the study: an unsprayed control, and GA4+7 plus 6-BA applied at 25 or 50 mg·L⁻¹. Each treatment was assigned to fully guarded four-tree plots arranged in a randomized complete block design with six replications. The GA4+7 plus 6-BA treatments were applied on 12 Apr. and again on 13 Apr. with an airblast sprayer calibrated to deliver 100 gal/acre. Fruit set was recorded on two uniform limbs on each of the two middle trees in each plot. Fruit set was calculated as the number of fruit persisting on each limb after June drop per 100 flower clusters.
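For clarity, the fruit set metric reduces to the following calculation (counts in the example are illustrative, not study data):

```python
# Sketch of the fruit set metric defined above: fruit persisting after
# June drop, expressed per 100 flower clusters counted on a limb.
def fruit_set(n_fruit_after_june_drop, n_flower_clusters):
    """Fruit set as fruit per 100 flower clusters."""
    return 100.0 * n_fruit_after_june_drop / n_flower_clusters

# Example: a limb carrying 18 fruit from 120 tagged flower clusters
print(f"{fruit_set(18, 120):.1f} fruit per 100 clusters")  # -> 15.0
```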
No additional fruit drop was observed from June drop until harvest. The plots and sample limbs were identified on 10 Apr. when the trees were in full bloom. Two calibrated temperature loggers (iButton model DS1921G; Maxim Integrated, San Jose, CA) were programmed to record temperatures at 10-min intervals and placed in a solar radiation shield (model RS3; Onset Computer Corp., Pocasset, MA) attached to the trunk of one tree at either end of the test orchard at a height of 1.0 m. The treatments were applied at ≈1100 HR on 12 Apr. and at the same time on 13 Apr. All of the fruit on the two middle trees in each plot were harvested when the fruit reached commercial maturity as determined by a starch pattern index between 5.0 and 6.0 on the Cornell Starch Chart (Blanpied and Silsby, 1992). Mean fruit weight, yield per tree, and total yield in bushels/acre (1 bushel = 42 lb) were calculated from these data. A random sample of 10 fruit was removed from each of the middle two trees in each plot to measure treatment effects on fruit length:diameter (L:D) ratio and the number of fully developed seeds. The number of fruit on each tree that exhibited ring russet at the calyx end due to cold injury, shape defects such as flattening or lopsidedness, or small fruit size (<50 mm in diameter) was counted. Crop value was calculated on a per-acre basis assuming cull fruit (fruit with ring russet, shape defects, or small size) were sold in the processing market at $0.20/lb, while the marketable fruit were sold at an average fresh price of $0.57/lb. These values reflect average fruit prices received by commercial growers in North Carolina for processing and fresh fruit in 2012, a short crop year.

GENEVA, NY. GA4+7 plus 6-BA was applied as the Promalin® formulation at a concentration of 50 mg·L⁻¹ with an airblast sprayer calibrated to deliver 100 gal/acre to 12-year-old trees of the apple cultivars Gala, Jonagold, and Ginger Gold on 'Malling 9' ('M.9') rootstock. GA4+7 plus 6-BA was applied to 10-tree plots with the treatments arranged in a randomized complete block design with five blocks and two treatments (untreated control, GA4+7 plus 6-BA). GA4+7 plus 6-BA was initially applied on 22 Apr. following a low temperature of 30°F on 18 Apr. (tight cluster stage), and then reapplied on 1 May following low temperatures on 28 Apr. (27°F), 29 Apr. (31°F), and 30 Apr. (29°F) (late bloom stage with some petals falling from the king flowers).
To assess the effect of the treatments on fruit set, the number of flower clusters was counted on three representative branches per tree (top, middle, and bottom of canopy) on two random trees per plot at full bloom (28 Apr.); at harvest, the number of fruit on each of the three branches was counted. Fruit set was calculated as the number of fruit harvested per 100 flower clusters. The fruit of each cultivar was harvested at normal commercial maturity. At harvest, all 10 trees in each plot were harvested and the total number and weight of fruit per tree were recorded. Mean fruit weight was calculated from these data. Fruit size distribution (packout) was estimated using the mean fruit weight and assuming a normal distribution of fruit sizes with a standard deviation of 20 g. Long-term average fruit prices were assigned to the yield in each packout size class to calculate a gross crop value excluding packing, storage, and sales charges. Only yield and fruit size were considered in the calculation of gross crop value; this calculation did not account for fruit color or other fruit defects. Seed number was determined from a 10-fruit sample collected from each of two 'Ginger Gold' trees per plot. Data were analyzed by analysis of variance as a randomized complete block experiment with five blocks; 10 sample trees per block were used for yield, fruit size, and crop value data, and two sample trees per block for fruit set and seed number data.
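As a rough illustration of the packout estimate described above, the sketch below integrates a normal fruit-weight distribution (SD = 20 g) over weight classes; the size bins, prices, and yield are hypothetical, since the actual classes and long-term prices are not reported here.

```python
from scipy.stats import norm

def packout_fractions(mean_wt_g, size_bins_g, sd_g=20.0):
    """Fraction of yield falling in each fruit-weight class, assuming fruit
    weights are normally distributed around the plot mean (SD = 20 g)."""
    return {label: norm.cdf(hi, mean_wt_g, sd_g) - norm.cdf(lo, mean_wt_g, sd_g)
            for label, (lo, hi) in size_bins_g.items()}

# Hypothetical size classes (g) and prices ($/lb).
bins = {"small": (0, 140), "medium": (140, 180), "large": (180, 400)}
prices = {"small": 0.10, "medium": 0.25, "large": 0.35}
fractions = packout_fractions(165.0, bins)
yield_lb_per_acre = 20000.0  # hypothetical
gross_value = sum(yield_lb_per_acre * fractions[k] * prices[k] for k in bins)
```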
AMHERST, MA. In a block of mature 'Empire'/'M.9' apple trees, 24 trees were selected that appeared to have an adequate number of flowering spurs to carry a commercial crop. Two uniform limbs were selected on each tree at the tight cluster stage. The number of blossom clusters on each limb was counted and expressed as blossom cluster density (clusters per square centimeter of limb cross-sectional area). Trees were separated into eight groups (blocks) of three trees each based upon the calculated blossom cluster density. Within each group, trees were randomly assigned to one of the following three treatments: control (no treatment), or GA4+7 plus 6-BA at 25 or 50 mg·L⁻¹. Thus, the study was a randomized complete block design with eight replications. The phenological development in this orchard was tight cluster (9 Apr.), pink bud stage (16 Apr.), early petal fall (23 Apr.), and petal fall (30 Apr.). GA4+7 plus 6-BA was applied in a water volume of 100 gal/acre using an airblast sprayer on 18 Apr., 5 d after a frost of 31.8°F recorded at early pink (13 Apr.). This location also recorded frosts on three consecutive days during petal fall (31.6, 30.1, and 30.4°F on 28, 29, and 30 Apr., respectively). The number of fruit on each sample limb was counted at the end of June drop and final fruit set was expressed as fruit/100 blossom clusters. No additional fruit drop was observed between June drop and harvest. The fruit from each tree were counted and weighed when normal commercial maturity was reached, and yields (kilograms per tree, bushels per acre) were calculated from these data. The number of misshapen fruit and those with frost rings were determined, and the seed count was taken on a 25-apple subsample from each tree. Crop value was calculated on a per-acre basis assuming misshapen and frost ring-damaged fruit would be sold as culls at $0.15/lb and the remaining marketable fruit at $0.55/lb.
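The crop value calculations used at the Henderson County and Amherst sites follow the same simple arithmetic; a minimal sketch in Python (the yield and cull fraction below are hypothetical):

```python
def crop_value_per_acre(total_yield_lb, cull_fraction, cull_price, fresh_price):
    """Gross crop value: culls sold at the processing price and the
    remaining marketable fruit at the fresh-market price."""
    cull_lb = total_yield_lb * cull_fraction
    return cull_lb * cull_price + (total_yield_lb - cull_lb) * fresh_price

# Henderson County prices were $0.20/lb (culls) and $0.57/lb (fresh);
# Amherst used $0.15 and $0.55.
value = crop_value_per_acre(25000.0, 0.10, 0.20, 0.57)
```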
Results
HENDERSON COUNTY, NC. The test orchard experienced freezing temperatures on two consecutive nights during full bloom in 2012, the first on 12 Apr. when the minimum air temperature reached 23.9°F and the second on 13 Apr. when the minimum air temperature was 28.4°F. Air temperatures were below 32°F for ≈5 h each night (Fig. 1). GA4+7 plus 6-BA significantly increased fruit set, yield, and fruit number per tree at harvest compared with the untreated control (Table 1). In addition, the marketable yield per tree and the percent of fruit with frost russet at harvest were higher on the GA4+7 plus 6-BA treated trees compared with the control (Table 2). Fewer than 5% of fruit on control trees exhibited parthenocarpy, compared with ≈70% of the fruit on trees sprayed with GA4+7 plus 6-BA. Thus, GA4+7 plus 6-BA increased fruit set and yield after the two freeze events by stimulating production of parthenocarpic fruit (Table 3). Parthenocarpic 'Taylor Spur Rome' fruit had similar weight (Table 1) and shape as measured by L:D ratio (Table 3) compared with control fruit. The crop value in the GA4+7 plus 6-BA treatments was ≈$4000/acre higher than that of the untreated control.
GENEVA, NY. GA4+7 plus 6-BA significantly increased fruit set of 'Jonagold', but did not significantly improve fruit set of 'Ginger Gold' or 'Gala', although there was a numeric increase in set for all three cultivars (Table 4). Fruit number per tree, yield, and crop value of 'Ginger Gold' and 'Jonagold' were increased by GA4+7 plus 6-BA, but not significantly in 'Gala' (Table 4). GA4+7 plus 6-BA increased the crop value of 'Ginger Gold' and 'Jonagold' by $977 and $6218/acre, respectively, but the increase in crop value of 'Gala' ($941/acre) was not significant. There was no effect of treatment on mean fruit weight of any of the cultivars. GA4+7 plus 6-BA resulted in a significant reduction in average seed number per fruit in 'Ginger Gold' due to a (nonsignificant) increase in the number of parthenocarpic fruit. About 20% of the control fruit were parthenocarpic compared with 70% of fruit from the GA4+7 plus 6-BA treatment. Seed number was not measured in 'Gala' or 'Jonagold'. The yield of untreated 'Ginger Gold' was estimated to be ≈4% of a full crop, while the yields of untreated 'Jonagold' and 'Gala' were estimated to be ≈11% and 55% of a full crop, respectively. Applications of GA4+7 plus 6-BA increased these to ≈12%, 39%, and 66% of a full crop for 'Ginger Gold', 'Jonagold', and 'Gala', respectively.
'EMPIRE' IN AMHERST, MA. There was no effect of GA4+7 plus 6-BA application on fruit set or yield of 'Empire' (Table 5). There was also no effect of GA4+7 plus 6-BA on the percent of misshapen fruit, the percent of fruit with frost rings (data not shown), or seed number per fruit at harvest. Total yield in the control plots (580 bushels/acre) was ≈70% of the normal expected crop for trees in this orchard.
Discussion
These results demonstrate the potential for GA4+7 plus 6-BA to increase fruit set, yield, and crop value of apple following freeze events during bloom. Although production following GA4+7 plus 6-BA treatments was only a fraction of the expected normal yield, the treatments increased yields enough to be economically worthwhile under some conditions. For 'Taylor Spur Rome' and 'Ginger Gold', the positive effect of GA4+7 plus 6-BA on fruit set was due to stimulation of parthenocarpic fruit development. This result is in agreement with previous findings that GA4 stimulated parthenocarpic fruit development in several apple cultivars (Bukovac, 1963). GA4+7 plus 6-BA sprays were without effect on fruit set of 'Gala'/'M.9' in Geneva, NY, or 'Empire'/'M.9' in Amherst, MA. The relatively high yield of untreated 'Gala' (55% of a full crop) was largely due to the excessive production and set of lateral flower clusters on 1-year-old wood. The delayed bloom of flower clusters on 1-year-old wood resulted in improved survival compared with earlier blooming flower clusters on spurs or terminal buds, and these late blossoms on 1-year-old wood went on to set fruit. However, fruit size of 'Gala' was quite small because of the high percentage of the fruit originating from these late blooms. 'Jonagold' also had some lateral flower clusters, but fewer than 'Gala', while 'Ginger Gold' had few lateral flower clusters. Apple cultivars differ in their response to a parthenocarpic stimulus provided by GA4 (Bukovac, 1963). Fruit growth rates, final size, color, and general quality were comparable in parthenocarpic and seeded fruits of 'Sops-of-Wine' and 'Wealthy', whereas parthenocarpic 'Delicious' were smaller than seeded fruits at maturity, and parthenocarpic 'Jonathan' and 'Rhode Island Greening' fruit abscised before reaching maturity. It has been suggested that gibberellins present in the receptacle and/or ovary before flowering may be important for inducing parthenocarpy, whereas cytokinins may be responsible for continued growth of parthenocarpic fruit (Watanabe et al., 2008). If this suggestion is valid, the combination of GA4+7 plus 6-BA might be a more effective frost-rescue treatment than GA4+7 alone, since presumably the 6-BA could provide a stimulus for normal growth and expansion of the parthenocarpic fruit. Additional research is needed to establish the separate roles of gibberellin and cytokinin in parthenocarpic fruit development in apple.
In this study, seeded and parthenocarpic fruit of 'Taylor Spur Rome' did not differ in size, L:D ratio, or the incidence of misshapen fruit at harvest. The effects of GAs on apple fruit shape are not consistent between cultivars. Bukovac (1963) reported that parthenocarpic fruit of the cultivars Sops-of-Wine and Wealthy induced by GA4 had significantly lower transverse diameter, smaller core diameter, increased thickness of the apical cortex, and smaller locular cavities at harvest. Greene (1984) reported that GA4+7 plus 6-BA increased the L:D ratio of 'Richared Delicious' apple; however, this increase was not always a result of increased fruit length. Increasing concentrations of GA4+7 plus 6-BA caused a linear reduction in fruit diameter, fruit weight, and seed number of 'Richared Delicious' (Greene, 1984). Watanabe et al. (2008) reported that GA3 or GA4+7 increased fruit diameter and L:D ratio, but had no effect on cortex width of the normally parthenocarpic cultivar Ohrin. Variability in the morphological response to treatments that induce parthenocarpy may be due to changes in the capacity to metabolize GAs or cytokinins in different regions of the apple fruit over time, to differences in GA or cytokinin metabolism between cultivars, or to differential expression of enzymes involved in promotion of cell division.
'Rome' strains of apple do not have a pronounced calyx, and did not respond to GA4+7 plus 6-BA applications for enhancing fruit ''typiness'' at petal fall in this study. While the majority of 'Taylor Spur Rome' fruit on trees treated with GA4+7 plus 6-BA were parthenocarpic, ≈30% of fruit had one or more fully developed seeds at harvest. Asymmetric seed distribution throughout the locules normally results in asymmetric fruit shape, or lopsidedness (Drazeta et al., 2004). Interestingly, in spite of asymmetric seed distribution in 'Taylor Spur Rome', fruit development (size, shape, and symmetry) was typical of the cultivar. While internal fruit quality was not measured in the current studies, parthenocarpic 'Golden Delicious' fruit induced by GA + CPPU had significantly higher flesh firmness at harvest but also exhibited a greater incidence of (unspecified) calcium deficiency symptoms (Bangerth and Schröder, 1994). Since a relationship between seed number, fruit calcium content, and storage disorders is occasionally reported in apple (Bramlage et al., 1990), growers need to carefully assess the increased risks of postharvest disorders and the storage potential of parthenocarpic fruit resulting from application of GA4+7 plus 6-BA as a frost-rescue treatment.
Although GA4+7 plus 6-BA increased fruit set, yield, and crop value of several apple cultivars after freeze events during bloom in 2012, tree productivity was not completely restored to average levels. Further restoration of cropping to expected levels might result from combining application of GA4+7 plus 6-BA with additional growth regulators. Vilardell et al. (2008) reported that following an application of GA4+7 plus 6-BA during bloom with prohexadione-calcium (P-Ca) 15 d after bloom improved the yield of 'Abate Fetel' pear compared with GA4+7 plus 6-BA alone in an orchard that did not experience cold injury. Similarly, application of the ethylene biosynthesis inhibitor aminoethoxyvinylglycine (AVG) 15 d after a full bloom application of GA4+7 plus 6-BA increased fruit set of 'Packham's Triumph' pear (Rufato et al., 2011). If the opportunity presents itself in the future, studies should be undertaken to evaluate the individual and combined effects of GA4+7 plus 6-BA at petal fall, followed by either AVG or P-Ca applications, as a frost-rescue treatment in apple in an attempt to further enhance fruit set and cropping.
Combination of High Zn Density and Low Phytic Acid for Improving Zn Bioavailability in Rice (Oryza sativa L.) Grain
Background: Zn deficiency is one of the leading public health problems in the world. Staple food crops such as rice cannot provide enough Zn to meet the daily dietary requirement because Zn in grain chelates with phytic acid, resulting in low Zn bioavailability. Breeding new rice varieties with high Zn bioavailability would be an effective, economic and sustainable strategy to alleviate human Zn deficiency. Results: The high Zn density mutant LLZ was crossed with the low phytic acid mutant Os-lpa-XS110-1, and the contents of Zn and phytic acid in the brown rice were determined for the resulting progenies grown at different sites. Among the hybrid progenies, the double mutant consistently displayed significantly higher Zn content and lower phytic acid content in grain, leading to the lowest molar ratio of phytic acid to Zn under all environments. As assessed by an in vitro digestion/Caco-2 cell model, the double mutant contained a relatively high content of bioavailable Zn in brown rice. Conclusions: Our findings suggest that pyramiding breeding by combining high Zn density and low phytic acid is a practical and useful approach to improve Zn bioavailability in rice grain. Supplementary Information: The online version contains supplementary material available at 10.1186/s12284-021-00465-0.
Background
Zinc (Zn) is one of the essential micronutrients required for various biochemical processes and physiological functions in organisms. It not only participates in enzyme activation, but also plays an important role in the metabolism of carbohydrates, lipids, proteins, and nucleic acids (Brocard and Dreno 2011; Kelleher et al. 2011). Severe Zn deficiency in the human body causes a series of metabolic disorders, and even death (Mayer et al. 2008). The World Health Organization estimated that more than 2 billion people worldwide are affected by Zn deficiency, and almost half of the children and women in low- and middle-income countries suffer from serious Zn deficiency (Caulfield and Black 2004; Wessells and Brown 2012). Zn deficiency-induced malnutrition therefore constitutes a major public health problem in the world.
Zn deficiency is primarily due to insufficient intake of bioavailable Zn from foods (Lazarte et al. 2016). Rice is a major diet component for more than half of the world's population, especially for residents of southern and eastern Asia. Unfortunately, the current main rice varieties cannot provide enough Zn to meet the daily dietary requirement due to low Zn bioavailability (Du et al. 2010; Jou et al. 2012; La Frano et al. 2014). Zn bioavailability is the amount of absorbed Zn in the blood system that is accessible for utilization in normal physiological functions (La Frano et al. 2014). Zn bioavailability of rice grain is influenced by grain Zn content, and Zn enrichment in rice grain significantly increases the amount of bioavailable Zn (Jou et al. 2012; Sreenivasulu et al. 2008; Wei et al. 2012a, b). In recent years, various strategies have been proposed to improve grain Zn bioavailability by increasing grain Zn content (Jeng et al. 2012; Johnson et al. 2011; Lee et al. 2011; Phattarakul et al. 2012; Wang et al. 2017; Wei et al. 2012a, b). However, several antinutrient compounds, such as phytic acid, can reduce Zn bioavailability by chelating Zn to form indigestible complexes in the human body (Jou et al. 2012; Lonnerdal et al. 2011; Sreenivasulu et al. 2008). Thus, enhancing Zn content while decreasing phytic acid might be a valuable way to improve Zn bioavailability in rice.
In this study, a hybrid breeding approach was used to improve Zn bioavailability in rice grain. Two previously developed rice mutants, the high Zn density mutant Lilizhi (LLZ) and the low phytic acid mutant Os-lpa-XS110-1, were selected as parents to generate a homozygous double mutant that exhibited high Zn content with low phytic acid content in grain. By assessing Zn uptake with an in vitro digestion/Caco-2 cell model, the impact of cross-breeding the two mutants on the Zn bioavailability of the resulting progenies was investigated.
Plant Materials and Growth Conditions
Two rice mutant lines and their wild type parents were used in this study. The mutant LLZ was derived from the Oryza sativa ssp. japonica variety Dongbeixiang by 60Co gamma irradiation, and shows highly efficient Zn enrichment in grain (Wang et al. 2017). The lpa mutant Os-lpa-XS110-1 was previously developed through 60Co gamma irradiation followed by NaN3 treatment of the Oryza sativa ssp. japonica variety Xiushui110 (Liu et al. 2007). Os-lpa-XS110-1 has a pronounced reduction of phytic acid in seed compared to the original wild type, which was attributed to the disruption of OsMIK by the insertion of a rearranged retrotransposon (Liu et al. 2007; Zhao et al. 2013). The doubled haploid (DH) progenies of different genotypes were developed from the LLZ × Os-lpa-XS110-1 cross following a protocol adopted from Nguyen et al. (2016). In brief, LLZ was crossed with Os-lpa-XS110-1 to generate F1 progenies. The anthers of F1 progenies were cultured on SK3 medium supplemented with 2,4-D (1.5 mg/L), casein hydrolysate (0.3 g/L), sucrose (60 g/L), and phytagel (4 g/L) at pH 5.8 for callus induction in the dark at 25°C. After 25 days, calli were transferred to N6 medium supplemented with 6-BA (3 mg/L), casein hydrolysate (0.3 g/L), NAA (1 mg/L) and sucrose (30 g/L) at pH 5.8 for differentiation and regeneration. Regenerated plantlets were then transferred to 1/2 N6 medium supplemented with sucrose (20 g/L), colchicine (4 mg/L) and agar (7 g/L) at pH 5.8 under light. Plantlets with a good root system were transplanted to the paddy field. The two rice mutant lines and their wild type parents were grown in the experimental field of Zhejiang Academy of Agricultural Sciences in Hangzhou (120°2′E, 30°3′N) during the rice-growing seasons of 2017 and 2018, and the DH progenies were grown in the same paddy field in 2018. For evaluating environmental effects on Zn and phytic acid content, three rice lines for each type of cross-bred progeny were chosen based on similarity of flowering time and grown at three sites in 2019, i.e. Fuyang (119°95′E, 30°05′N), Yuhang (120°3′E, 30°42′N) and Lin'an (119°72′E, 30°23′N). The respective soil types for Yuhang, Fuyang and Lin'an were fine silty clay, silty clay loam and fine sandy loam. At each site, each rice line was grown in a block containing 5 rows with 5 plants per row. After full maturity, seeds of the middle 9 plants (3 rows, 3 plants per row) were harvested, of which seeds from 3 individual plants were pooled and mixed completely. These seeds were dried in an oven at 50°C until constant weight was obtained. For each sample, 30 dry seeds were hulled and used for subsequent analysis.
For hydroponic experiments, seeds were surface sterilized with 30% (v/v) sodium hypochlorite solution for 30 min and washed several times with deionized water. Subsequently, the seeds were soaked in deionized water for 48 h at 30°C and germinated for 6 days at 30°C. The seedlings were then transplanted into 48-well plastic buckets (35 L) with half-strength Kimura B solution (pH 5.6). After growing to the fifth leaf stage, the rice lines were treated with nutrient solution containing 0.4, 4 or 40 μmol/L Zn until maturity. The nutrient solution was renewed every 4 days, and the pH was kept at 5.5 by daily adjustment with 1 M NaOH or HCl. Hydroponic experiments were carried out in a greenhouse with a 13-h light/11-h dark photoperiod at 22-30°C. All experiments were performed with three biological replicates.
Mineral Analysis
The brown rice was dried at 75°C for 48 h and ground into powder. The samples were subjected to acid digestion in a closed-vessel microwave system using 5 mL nitric acid (Mars Express, CEM Corporation, Matthews, USA). After cooling, the digestion solution was transferred to a 25 mL volumetric flask, and the volume was brought to 25 mL with 3% nitric acid. The concentrations of zinc (Zn), manganese (Mn), copper (Cu) and iron (Fe) in the samples were determined by inductively coupled plasma mass spectrometry (ICP-MS, PlasmaQuant MS, Analytik Jena AG, Germany).
Phytic Acid Determination
Phytic acid was extracted and determined according to Dai et al. (2007) with slight modification. Ground rice flour (0.5 g) was weighed into a 50 mL polystyrene centrifuge tube. The samples were extracted with 10 mL of 0.2 M HCl for 2 h and then centrifuged at 10000 g for 10 min. A 2.5 mL aliquot of the supernatant was mixed with 2 mL of 0.2% FeCl3 solution and boiled in a water bath for 1 h. After centrifugation at 10000 g for 15 min, the precipitates were re-suspended in 5 mL deionized water and 3 mL 1.5 M NaOH by vortexing for 4 min. Afterwards, the precipitate was recollected by centrifugation at 10000 g for 15 min and washed three times with 5 mL of deionized water. The supernatant was discarded and 3 mL 0.5 M HCl was added to dissolve the residue. Finally, the volume of the solution was made up to 20 mL with deionized water. The iron concentration in the solution was measured by ICP-MS (PlasmaQuant MS, Analytik Jena AG, Germany). The phytic acid content was subsequently calculated by multiplying the iron content by the factor 4.2.
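A sketch of the back-calculation implied by this protocol; the dilution bookkeeping (a 2.5-mL aliquot of the 10-mL extract from 0.5 g flour, measured in a 20-mL final volume) is our reading of the steps above, not an explicit formula from the source:

```python
def phytic_acid_mg_per_g(fe_mg_per_L, final_vol_L=0.020,
                         aliquot_mL=2.5, extract_mL=10.0,
                         sample_g=0.5, factor=4.2):
    """Phytic acid from the Fe measured in the final 20-mL solution
    (PA = Fe x 4.2), corrected for the 2.5-mL aliquot of the 10-mL
    extract prepared from a 0.5-g flour sample."""
    fe_mg = fe_mg_per_L * final_vol_L               # Fe in the final solution
    flour_g = sample_g * (aliquot_mL / extract_mL)  # flour mass represented
    return fe_mg * factor / flour_g
```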
In Vitro Digestion
The in vitro digestion method followed Wei et al. (2012b). A rice powder sample (0.5 g) was mixed with 0.5 mL of pepsin solution (0.2 g pepsin in 5 mL 0.1 M HCl, pH 2.0) and incubated at 37°C in a shaking water bath for 2 h. Afterwards, the mixture was combined with 2.5 mL pancreatin-bile solution (0.45 g bile salts and 0.075 g pancreatin in 37.5 mL 0.1 M NaHCO3, pH 5.0) and incubated at 37°C in a shaking water bath for another 2 h. The gastrointestinal digest was adjusted to pH 7.2 and boiled in a water bath for 4 min. The volume of the digest was brought to 15 mL with 120 mM NaCl and 5 mM KCl. After centrifugation at 3500 g for 1 h at 4°C, the supernatant was transferred to a new 50 mL polystyrene centrifuge tube. Finally, glucose (5 mM final concentration) and HEPES (50 mM final concentration) were added to the solution, and the osmolarity was adjusted to 310 ± 10 mOsm/kg with deionized water. This solution was used for the Zn uptake experiment in the Caco-2 cell model.
Zn Uptake Experiment in Caco-2 Cell Model
Caco-2 cells (passage 20) were obtained from the Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences (Shanghai, China). The cells were cultured in high glucose (4.5 g/L) DMEM supplemented with 10% (v/v) fetal bovine serum, 1% (v/v) antibiotic solution (penicillin-streptomycin), 1% (v/v) nonessential amino acids, 1% (v/v) L-glutamine, and 5 mM HEPES. The cells were maintained in an incubator with 5% CO2 and 95% relative humidity at 37°C. The medium was refreshed every 2 days. After reaching 80% confluence, cells were sub-cultured after digestion with 0.25% trypsin-EDTA. When the Caco-2 cells were between passages 30-46, they were seeded in 6-well transwell plates (24 mm diameter, 0.4 μm pore size; Corning, USA) at a density of 50,000 cells/cm2. Subsequently, 2.5 mL of complete DMEM was added to the basal chamber of the transwell plate. The culture medium was changed every 2 days. After incubation for 21 days, the trans-epithelial electrical resistance (TEER) was measured to evaluate the integrity of the cell monolayers. Only monolayers with a TEER reading greater than 600 Ω·cm2 were used in the Zn uptake experiment.
The growth medium was removed from each culture well, and the cell monolayer was washed three times with 37°C HBSS. Afterwards, 2.5 mL of the transport solution (130 mM NaCl, 10 mM KCl, 1 mM MgSO4, 5 mM glucose, and 50 mM HEPES, pH 7.4) was added to the basal chamber of the transwell plate, and the upper chamber was covered with 1.5 mL intestinal digestion solution. The cells were maintained in an incubator with 5% CO2 and 95% relative humidity at 37°C. After incubation for 2 h, the transport solution in the basal chamber was collected. The cell monolayers were harvested and washed twice with ice-cold HBSS. Then, 1 mL deionized water was added to each well, and the cells on the filters were harvested after ultrasonic disruption. The Zn contents of the gastrointestinal digestion solution, cell lysate, and transport solution were analyzed by ICP-MS (PlasmaQuant MS, Analytik Jena AG, Germany). According to Wei et al. (2012b), the following equations were used to calculate Zn bioavailability:
Zn bioaccessibility (%) = Zn content in the solution after in vitro digestion (μg/g) × 100 / total Zn content in brown rice (μg/g);
Zn retention efficiency (%) = Zn retention (μg/well) × 100 / Zn content in the solution after in vitro digestion (μg/well);
Zn transport efficiency (%) = Zn transport (μg/well) × 100 / Zn content in the digestion solution (μg/well);
Zn uptake efficiency (%) = Zn retention efficiency + Zn transport efficiency;
Bioavailable Zn content in the brown rice (μg/g) = Zn concentration in the brown rice (μg/g) × Zn bioaccessibility × Zn uptake efficiency / 10000.
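A direct transcription of these equations as a Python helper (inputs are the measured quantities named above):

```python
def zn_bioavailability(total_zn_ug_g, digest_zn_ug_g,
                       digest_zn_ug_well, retained_ug_well, transported_ug_well):
    """Direct transcription of the equations above (Wei et al. 2012b)."""
    bioaccessibility = digest_zn_ug_g * 100.0 / total_zn_ug_g              # %
    retention_eff = retained_ug_well * 100.0 / digest_zn_ug_well           # %
    transport_eff = transported_ug_well * 100.0 / digest_zn_ug_well        # %
    uptake_eff = retention_eff + transport_eff                             # %
    bioavailable = total_zn_ug_g * bioaccessibility * uptake_eff / 10000.0  # ug/g
    return bioaccessibility, uptake_eff, bioavailable
```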
Statistical Analysis
Data were analyzed using SPSS 24.0 (SPSS, IBM, Chicago, USA). For multiple comparisons, analysis of variance (ANOVA) was performed with the least significant difference (LSD) test to compare means when variances were homogeneous. Means were considered significantly different at P ≤ 0.05. Pearson correlation analysis was carried out to assess correlations among traits.
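A rough Python equivalent of this workflow (the group values below are hypothetical, and homogeneous variances are assumed):

```python
from scipy import stats

# Hypothetical replicate values for two genotype groups.
groups = {"wild-type": [20.1, 21.5, 22.0, 21.9],
          "DM-type":   [44.8, 45.9, 45.2, 45.3]}
f_stat, p_anova = stats.f_oneway(*groups.values())
# LSD amounts to unprotected pairwise t-tests once the overall ANOVA is significant.
t_stat, p_pair = stats.ttest_ind(groups["wild-type"], groups["DM-type"])
# Pearson correlation between two traits (hypothetical paired values).
r, p_corr = stats.pearsonr([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.9])
```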
Four Metallic Elements and Phytic Acid in the Brown Rice of Four Rice Lines Grown in Field Trials
Field experiments were conducted in 2017 and 2018. The yield-related agronomic traits of the mutant lines and their wild types are presented in Table S1. The seed setting rates of LLZ and DBX were higher than those of Os-lpa-XS110-1 and Xiushui110 (Table S1). No consistent significant differences among the four rice lines were observed for other agronomic traits (Table S1).
Phytic acid (PA) and four metallic elements, Mn, Fe, Cu and Zn, were evaluated for each year (Tables 1 and 2). There was no significant difference in Mn content among the four rice lines in either year (Table 1). The mutant LLZ and its wild-type parent DBX accumulated higher levels of Fe and Cu compared to Os-lpa-XS110-1 and Xiushui110 in each year, but the differences between varieties were small, and not significant for Fe content in 2018 (Table 1). There was a significant positive correlation between the content of Zn and that of Fe or Cu, although the coefficients of determination (R2) were less than 50% (Table S2). It is noteworthy that the mutant LLZ exhibited the highest Zn density in brown rice, at least 1.88-fold higher on average than those of the other three rice lines (Table 2).
The phytic acid content in brown rice varied among the four rice lines. The mutant LLZ had the highest phytic acid content in brown rice, followed by DBX and Xiushui110 (Table 2). The lpa mutant Os-lpa-XS110-1 consistently exhibited the lowest phytic acid content in both field trials, and its average phytic acid content in brown rice was 48.4% lower than that of LLZ (Table 2).
The molar ratios of phytic acid to Zn in the brown rice ranged from 13.44 to 16.15 in LLZ, 23.48 to 30.77 in DBX, 16.11 to 20.32 in Os-lpa-XS110-1, and 27.70 to 33.91 in Xiushui110 (Table 2). The two mutant lines thus displayed lower molar ratios of phytic acid to Zn in brown rice.
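These ratios can be reproduced from the mass-based contents using the molar masses of phytic acid and Zn; the constants below are standard values, not stated in the paper. A minimal sketch:

```python
PA_MOLAR_MASS = 660.04   # g/mol, phytic acid (standard value, assumed)
ZN_MOLAR_MASS = 65.38    # g/mol, Zn (standard value, assumed)

def pa_zn_molar_ratio(pa_mg_per_g, zn_mg_per_kg):
    """Molar ratio of phytic acid to Zn from mass-based grain contents."""
    pa_mmol_per_kg = pa_mg_per_g * 1000.0 / PA_MOLAR_MASS
    zn_mmol_per_kg = zn_mg_per_kg / ZN_MOLAR_MASS
    return pa_mmol_per_kg / zn_mmol_per_kg

ratio = pa_zn_molar_ratio(6.2, 45.0)  # ~13.6, within the LLZ range above
```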
Comparison of Zn and Phytic Acid Contents between LLZ and Os-lpa-XS110-1 in Response to Zn Treatment
For evaluating the effects of Zn supply on Zn and phytic acid content, LLZ and Os-lpa-XS110-1 were grown in nutrient solution with three Zn concentrations (0.4, 4 and 40 μM). Both rice lines were affected by elevated Zn supply, showing significant reductions in biomass, plant height, root length and seed setting (Table S3). However, grain weight was not significantly affected by high Zn stress in either rice line (Table S3).
Mineral analysis showed that additional Zn supply had a large effect on grain Zn content in LLZ and Os-lpa-XS110-1. With elevated Zn supply, the Zn content in brown rice increased from 55.15 to 117.79 mg/kg in LLZ, but only from 21.63 to 76.92 mg/kg in Os-lpa-XS110-1 (Fig. 1a). The average Zn content in LLZ was 2.55-, 1.90- and 1.53-fold higher than that in Os-lpa-XS110-1 under the 0.4, 4 and 40 μM Zn conditions, respectively (Fig. 1a). While the phytic acid content was not significantly affected by extra Zn supply, LLZ had significantly higher phytic acid content in brown rice compared to Os-lpa-XS110-1 under each Zn condition, and the average phytic acid content of LLZ was at least 2.55-fold that of Os-lpa-XS110-1 (Fig. 1b). Consequently, the molar ratio of phytic acid to Zn showed a negative response to increasing Zn supply (Fig. 1c). The molar ratio of phytic acid to Zn in LLZ was significantly lower than that in Os-lpa-XS110-1 under the 0.4 μM Zn treatment, whereas it did not differ significantly between the two mutant lines under the 4 and 40 μM Zn conditions (Fig. 1c).
Contents of Zn and Phytic Acid in Cross-Bred Progenies
We obtained a total of 120 DH homozygous progenies through anther culture. The Zn contents in brown rice of the homozygous progenies varied from 13.12 mg/kg to 55.16 mg/kg with a mean of 33.94 mg/kg, while the phytic acid content varied from 2.14 mg/g to 8.45 mg/g with a mean of 5.45 mg/g (Table 3). The Zn content in progeny was not significantly correlated with the phytic acid content (r = 0.207, P<0.05).
Based on the Zn and phytic acid contents in brown rice, the DH progenies could be classified into four types: wild-type, LLZ-type, Lpa-type and double-mutant-type (DM-type for short) (Fig. 2). A χ2 goodness-of-fit test indicated that the segregation of these progenies fitted a 1:1:1:1 ratio (χ2 = 2.2, P = 0.532). The wild-type progenies showed a phenotype similar to Xiushui110, with an average of 21.48 mg/kg Zn and 6.88 mg/g phytic acid in the brown rice (Table 3). The LLZ-type progenies had a higher Zn level in the brown rice, around 2-fold higher than that of the wild-type, but also had the highest phytic acid content (7.24 mg/g) among the four types (Table 3). In contrast to the LLZ-type progenies, the Lpa-type progenies exhibited a significantly decreased level of phytic acid (3.62 mg/g) compared to the wild-type, but had only 21.23 mg/kg Zn in the brown rice (Table 3). The DM-type progenies displayed the highest level of Zn (45.30 mg/kg) and a lower level of phytic acid (4.06 mg/g) in the brown rice (Table 3). Thus, the molar ratio of phytic acid to Zn was lowest in the DM-type, a quarter of that of the wild-type progeny (Table 3). Moreover, the contents of the other three metallic elements (Mn, Fe and Cu) were similar among the four types of progenies (Table S4).
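A sketch of the segregation test (the per-class counts below are hypothetical, chosen only to sum to 120; the paper reports only χ2 = 2.2 and P = 0.532):

```python
from scipy.stats import chisquare

observed = [34, 31, 27, 28]                 # wild-, LLZ-, Lpa-, DM-type (hypothetical)
expected = [sum(observed) / 4.0] * 4        # 1:1:1:1 expectation
chi2, p = chisquare(observed, f_exp=expected)
```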
Environmental Impact on the Zn and Phytic Acid Content in Cross-Bred Progenies
Three rice lines for each type of progeny were selected for field trials at three sites (Yuhang, Fuyang and Lin'an). The yield-related agronomic traits for these progenies are presented in Figure S1. Owing to a significantly greater number of grains per plant in Yuhang, the LLZ-type in Yuhang had a significantly higher yield compared to that in Fuyang and Lin'an (Figure S1). The other types of progenies from the three field trials exhibited the same pattern in agronomic traits (Figure S1). Averaged across all hybrid progenies, there were no statistically significant differences in the contents of Zn and phytic acid among the three sites (Fig. 3a and b). All four types grown in Yuhang had significantly lower Zn contents than those grown at the other sites; the LLZ-type and DM-type grown in Fuyang had higher Zn contents, but the Zn contents of the Lpa-type and wild-type grown in Fuyang and Lin'an did not differ significantly (Fig. 3a). In contrast, the phytic acid contents of each type were similar across the three sites except for the Lpa-type, which had a significantly higher phytic acid content in Fuyang than in Yuhang (Fig. 3b). For each field trial, the molar ratio of phytic acid to Zn was lowest in the DM-type progeny, intermediate in the LLZ-type and Lpa-type, and highest in the wild-type progeny (Fig. 3c).
Zn Bioavailability in Cross-Bred Progenies
The cross-bred progenies were used to assess Zn bioavailability through the in vitro digestion/Caco-2 cell culture model. The LLZ-type and DM-type showed higher Zn bioaccessibility (30.47% and 28.34%, respectively), whereas the wild-type and Lpa-type progenies had only 20.80% and 24.13% Zn bioaccessibility, respectively (Table 4). In contrast, the Lpa-type had the highest Zn uptake efficiency (30.48%), followed by the DM-type (23.05%) and LLZ-type (17.01%), with the wild-type progeny (13.58%) having the lowest percentage (Table 4). There was a significant correlation between total Zn content and bioaccessible Zn content (r = 0.727, P = 0.007).
There was a significant difference in bioavailable Zn content among the four types. The average content of bioavailable Zn was 0.67 μg/g in the wild-type, 2.29 μg/g in the LLZ-type, 1.58 μg/g in the Lpa-type and 3.21 μg/g in the DM-type (Table 4). The level of bioavailable Zn was significantly correlated with Zn bioaccessibility (r = 0.780, P = 0.003) and Zn uptake efficiency (r = 0.627, P = 0.029). Owing to its relatively high Zn bioaccessibility and Zn uptake efficiency, the DM-type contained the highest content of bioavailable Zn in brown rice (Table 4). The average amount of bioavailable Zn in the DM-type progeny was 4.78-fold greater than that in the wild-type progeny (Table 4).
Discussion
In this study, two previously developed rice mutants were selected as the parents to generate hybrid progenies. The mutant LLZ exhibits a high Zn density in grain, while Os-lpa-XS110-1 accumulates a significantly lower level of phytic acid. Phytic acid is the principal storage form of phosphorus (P) in rice grain and chelates Zn to form a complex (O'Dell et al. 1972; Iwai et al. 2012). A negative relationship between phytic acid and Zn has been observed in previous studies, in which decreased phytic acid in lpa mutants increased grain Zn content (Karmakar et al. 2020; Liu et al. 2004). However, Sakai et al. (2015) concluded that phytic acid content did not affect Zn accumulation in rice seed, and Yatou et al. (2018) found no obvious correlation between phytic acid content and Zn content. In our study, the cross-bred progenies exhibited diverse contents of Zn and phytic acid in grain (Fig. 2). This suggests that phytic acid biosynthesis and Zn loading into grain are controlled by different regulatory mechanisms, and that it is possible to manipulate the phytic acid content without affecting Zn accumulation. Genotypic differences in the contents of phytic acid and Zn were significant between the two hybrid parents, independent of environmental conditions (Table 2). A similar pattern of environmental impact was determined for hybrid progenies of LLZ/Os-lpa-XS110-1 grown at diverse sites (Fig. 3). Previous studies indicated that Zn and phytic acid in rice grain are complex traits involving multiple genes and are significantly influenced by environment (Ahn et al. 2010; Wissuwa et al. 2008). Several rice populations have also been used to assess the variability of Zn and phytic acid content in grain, suggesting that the variances of Zn and phytic acid contents were attributable to genotypes, environments and interactions between the two factors (Garcia-Nebot et al. 2013; Norton et al. 2014; Zhang et al. 2008b). However, we found that genotype was by far the dominant factor determining Zn and phytic acid content in brown rice, and there was no significant interaction between genotype and year for Zn or phytic acid content (Table S5).
Due to the variety-specific distribution of Zn in rice, the effect of additional Zn supply on grain Zn content depended on rice genotype and Zn conditions. Gao et al. (2006) showed that improved Zn nutrition significantly enhanced grain Zn concentration in both paddy rice and aerobic rice under aerobic and flooded cultivation. Wissuwa et al. (2008) concluded that Zn supply had a significant positive effect on grain Zn concentration in upland soil, whereas application of Zn to Zn-deficient soil produced a concomitant decrease in grain Zn content. The Zn contents in the brown rice increased by 0.8-fold in LLZ and 1.4-fold in Os-lpa-XS110-1 from the 0.4 to 4 μM Zn conditions, but only by 0.2-fold in LLZ and 0.5-fold in Os-lpa-XS110-1 from the 4 to 40 μM Zn conditions (Fig. 1a). Zn in the grain is thought to be supplied as complexes with ligands via the phloem after mobilization from vegetative tissues, and the synthesis of some Zn-binding ligands becomes saturated under consistently elevated Zn supply, which is the main factor limiting grain Zn content under Zn-sufficient conditions (Nishiyama et al. 2013; Wang et al. 2017; Wu et al. 2010). Thus, the magnitude of grain Zn enrichment declined with elevated Zn supply. In contrast, the grain phytic acid content was not significantly affected by high Zn stress (Fig. 1b). This result was inconsistent with a previous study (Wei et al. 2012b), in which application of Zn fortification during the germination process significantly reduced phytic acid content in brown rice. However, germination is a complex process during which the seed quickly activates enzymes to hydrolyze phytic acid to meet nutrient requirements (Bollmann et al. 1980). Zn is an important cofactor for enzymes, and exogenous Zn supply will increase the grain Zn content and accelerate the hydrolysis of phytic acid. Nevertheless, our present results further indicate that phytic acid, a major storage form of phosphorus in rice grain, has a homeostasis mechanism different from that of Zn. It is noteworthy that high Zn supply inhibited the growth of LLZ and Os-lpa-XS110-1, as shown by decreased dry weight, plant height and root length (Table S3), and affected the grain production system of rice, as seed setting was significantly decreased although grain weight appeared unaffected (Table S3). Similar results were also found in previous reports (Impa et al. 2013; Jiang et al. 2007; Wang et al. 2017).
Fig. 1. Contents of Zn and phytic acid in LLZ and Os-lpa-XS110-1 under different Zn conditions: (a) Zn content in the brown rice; (b) phytic acid content in the brown rice; (c) molar ratio of phytic acid to Zn in the brown rice. Results are presented as mean ± SEM; asterisks indicate statistically significant differences between LLZ and Os-lpa-XS110-1 (n = 4; *P < 0.05; **P < 0.01).
Many studies have found significant correlations between Zn and other metallic elements in rice grain (Della Valle and Glahn 2014; Hao et al. 2007; Zhang et al. 2018; Zhang et al. 2008a). Several high Zn density rice varieties concomitantly accumulate increased contents of other metallic elements in grain. Graham et al. (1999) found that two high Zn density rice varieties concomitantly increased grain Fe content. Two high Fe density mutants, M-IR-75 and M-IR-58, were identified from the rice variety IR64 and had a normal level of Zn in the grain (Jeng et al. 2012). Among 274 rice varieties analyzed, Jiang et al. (2007) proposed that Zn content was closely associated with the contents of three other metallic elements (Fe, Cu and Mn) in milled rice. We found that the mutant LLZ also accumulated considerably higher levels of Cu and Fe in grain compared to Os-lpa-XS110-1 (Table 1), and Zn content showed a significant positive correlation with the contents of Fe and Cu (Table S2). The LLZ-type and DM-type progenies accumulated significantly more Zn in grain compared to the Lpa-type and wild-type progenies (Table 3), but the Mn, Fe and Cu contents of the four hybrid progeny types were similar (Table S4). This indicates that strong accumulation of Zn in grain may result in a significant increase in the contents of Fe and Cu, but only to a limited extent.
As assessed in the Caco-2 cell model following in vitro digestion, Zn bioavailability of rice grain depends on Zn bioaccessibility and Zn uptake efficiency (Table 4). Zn bioaccessibility refers to the fraction of soluble Zn released from the rice grain during digestion. Zn bioaccessibility has been shown to be affected by Zn speciation, the digestion phase and various dietary components (Zhang et al. 2020). The higher amount of bioaccessible Zn in progenies with high Zn density indicated that total Zn content had a positive effect on Zn bioaccessibility (Table 4). Wei et al. (2012b) also found that the amount of soluble Zn in rice grain was increased by Zn fortification, and that total Zn content is an important factor determining Zn bioaccessibility.
Zn uptake efficiency reflects the absorptive capacity for Zn of the Caco-2 cells, which is affected by dietary factors such as phytic acid, amino acids and other low-molecular-weight ions (Lonnerdal 2000). Phytic acid in rice is the primary inhibitor of Zn absorption, complexing with Zn in the human body (Hambidge et al. 2011; Raboy 2009). Several studies have confirmed that the addition of phytic acid to the diet significantly lowers Zn absorption in the human body (Hambidge et al. 2011; Lonnerdal et al. 1984; Miller et al. 2007). Sreenivasulu et al. (2008) found that Zn uptake was inhibited by phytic acid in Caco-2 cells, and that phytic acid content was negatively correlated with Zn bioavailability. Similar results were obtained in our study, in which the Lpa-type and DM-type progenies had higher percentages of Zn retention, transport, and uptake efficiency in Caco-2 cells than the other progeny types (Table 4). Zn uptake efficiency was significantly negatively correlated with the phytic acid content in brown rice (r = −0.665, P = 0.018). However, Jou et al. (2012) reported that an elevated content of phytic acid did not affect Zn absorption in biofortified rice, which may be explained by an unchanged molar ratio of phytic acid to Zn.
Fig. 3. Environmental impact on Zn and phytic acid contents in cross-bred progenies. Three rice lines for each type of progeny were planted at three sites (Yuhang, Fuyang and Lin'an): (a) Zn content in the brown rice; (b) phytic acid content in the brown rice; (c) molar ratio of phytic acid to Zn in the brown rice. Results are presented as box-and-whisker plots (plus symbol: mean; line in the box: median; bottom and top of the box: first quartile (Q1) and third quartile (Q3); upper whisker: Q3 + 1.5 × interquartile range (IQR = Q3 − Q1); lower whisker: Q1 − 1.5 × IQR). Different letters indicate statistically significant differences by LSD (P < 0.05).
The molar ratio of phytic acid to Zn seems to be an effective evaluation parameter for Zn bioavailability. According to the World Health Organization, less than 15% of Zn from food is absorbed when the molar ratio of phytic acid to Zn is greater than 15 (Allen et al. 2006). Fredlund et al. (2006) studied the dose-dependent inhibitory effects of phytic acid on Zn absorption in man, and found that zinc absorption was significantly decreased when the molar ratio of phytic acid to Zn in the meals increased from 2.9 to 11.5. Jou et al. (2012) showed a similar result: a dose-responsive decrease in Zn uptake was observed in both Caco-2 cells and rat pups when the molar ratio of phytic acid to Zn increased from 2.5 to 20. Lonnerdal et al. (2011) found that the molar ratio of phytic acid to Zn was lower in low phytic acid mutants than in the WT, leading to substantial increases in Zn absorption in the low phytic acid mutants. This indicates that it is feasible to improve Zn bioavailability by increasing Zn contents and reducing phytic acid contents in rice grain. Indeed, the double mutant obtained by pyramiding breeding showed a lower molar ratio of phytic acid to Zn and higher Zn bioavailability than either the low phytic acid mutant or the high Zn mutant (Tables 3 and 4). The double mutant appears to be a valuable genetic material applicable to Zn biofortification breeding. However, the in vitro digestion/Caco-2 cell model is a preliminary method for evaluating Zn bioavailability. To predict the effect of cross-breeding on the nutritional status of individuals, these progenies should be further assessed in human feeding trials.
Conclusions
The present study shows a practical way to generate a double mutant with high Zn bioavailability by crossing a high Zn density mutant with a low phytic acid mutant. The double mutant progeny exhibited a significant increase in Zn content and reduction in phytic acid content in brown rice. The results from the in vitro digestion/Caco-2 cell model indicated that the double mutant progeny had higher Zn bioavailability than the other progenies. These results demonstrate that pyramiding breeding is an effective method to improve the amount of Zn and the Zn bioavailability in rice grain.
Additional file 1: Figure S1. The yield-related agronomic traits of cross-bred progenies grown in different field trials.
Additional file 2: Table S1. Comparison of yields and yield-related traits between the mutant lines and their wild type parents. Table S2. Relationship of four metallic elements in four rice lines. Table S3. Differences in plant growth between LLZ and Os-lpa-XS110-1 under different Zn conditions. Table S4. Contents of Mn, Fe and Cu in the brown rice of cross-bred progenies. Table S5. ANOVA for the phytic acid and Zn content in brown rice.
Somatic and germline expression of piwi during development and regeneration in the marine polychaete annelid Capitella teleta
Background Stem cells have a critical role during adult growth and regeneration. Germline stem cells are specialized stem cells that produce gametes during sexual reproduction. Capitella teleta (formerly Capitella sp. I) is a polychaete annelid that reproduces sexually, exhibits adult growth and regeneration, and thus, is a good model to study the relationship between somatic and germline stem cells. Results We characterize expression of the two C. teleta orthologs of piwi, genes with roles in germline development in diverse organisms. Ct-piwi1 and Ct-piwi2 are expressed throughout the life cycle in a dynamic pattern that includes both somatic and germline cells, and show nearly identical expression patterns at all stages examined. Both genes are broadly expressed during embryonic and larval development, gradually becoming restricted to putative primordial germ cells (PGCs) and the posterior growth zone. In juveniles, Ct-piwi1 is expressed in the presumptive gonads, and in reproductive adults, it is detected in gonads and the posterior growth zone. In addition, Ct-piwi1 is expressed in a population of putative PGCs that persist in sexually mature adults, likely in a stem cell niche. Ct-piwi1 is expressed in regenerating tissue, and once segments differentiate, it becomes most prominent in the posterior growth zone and immature oocytes in regenerating ovaries of regenerating segments. Conclusions In C. teleta, piwi genes may have retained an ancestral role as genetic regulators of both somatic and germline stem cells. It is likely that piwi genes, and associated stem cell co-regulators, became restricted to the germline in some taxa during the course of evolution.
Background
Stem cells are essential for animal development and adult tissue homeostasis, and they can presumably differentiate into many specialized cell types. Specialized stem cells called primordial germ cells (PGCs) are populations of undifferentiated stem cells in sexually reproducing animals that exclusively give rise to the germ cells, either spermatocytes or oocytes [1]. These germline stem cells ensure that genetic information is passed to the next generation. In some animals, germline stem cells are segregated from somatic cells during embryonic development. Two distinct mechanisms of germline specification have been described: preformation and epigenesis [2]. In the preformationist mode, germ cells are specified by maternally inherited determinants present within the egg. In epigenesis, germ cells are not specified until later in development, and arise as a result of inductive signals from surrounding tissues. In some basally branching animals, there is no such separation between the germline and the soma in the embryo, and germ cells can be segregated from somatic cells throughout the life cycle. This raises the question of the relationship between somatic stem cells and germline stem cells. It has been proposed that germline stem cells arose from a preexisting multipotent progenitor lineage that later in evolution became a restricted sublineage [3]. If this is the case, have some bilaterian animals retained an ancestral association between germline stem cells and somatic stem cells? Are core regulatory genes shared between multipotent stem cells and germline stem cells in some animal groups?
Studies in annelids are likely to provide insights into the relationship between somatic and germline stem cells. Polychaete annelids are highly variable in their reproductive patterns, and many species can regenerate their heads, tails or both [4]. Capitella teleta, formerly known as Capitella sp. I [5], is a simple-bodied marine polychaete annelid that undergoes sexual reproduction, continuously generates segments during its lifetime, and exhibits robust posterior regeneration, including regeneration of its ovaries. In C. teleta, there are males, females and hermaphrodites; males can transform into hermaphrodites as a result of changing environmental conditions [6]. Gametogenesis and the location of the reproductive organs in C. teleta have previously been described in detail [5,7,8]. The testes are specialized regions of the lateral peritoneum in the seventh and eighth segments and lack a well-developed anatomical structure. Several later stages of spermatogenesis occur within the coelomic cavity, and in mature males, sperm are stored in paired genital ducts (coelomoducts) at the boundary between segments 7 and 8. The genital ducts are trumpet-shaped structures that open into the ventro-lateral coelomic cavity on one end and on the other end have a narrow canal that terminates in an intersegmental pore, separate from the metanephridia present in the same segment. Females have well-defined, segmentally repeated ovaries present in 10 to 12 continuous segments beginning with the first abdominal segment. The ovaries are ventrally positioned, paired structures adjacent to the gut tube. Each ovary is suspended by mesenteries on the ventral side of the coelomic cavity and is macroscopically visible. The sac of the ovary is composed of follicle cells formed by modified coelomic peritoneal cells. Vitellogenesis (and potentially a proliferative phase) occurs within the ovaries, and each ovary contains 5 to 20 oocytes at multiple stages of oogenesis, including mature oocytes. Distinct stages are spatially segregated within the ovary, with proliferative oogonia and pre-vitellogenic oocytes in the medial region and mature oocytes localized to the lateral region [7]. Individuals of both sexes can reproduce multiple times. Hermaphroditic animals display both male and female characteristics. The detailed knowledge of gametogenesis and adult anatomy, the generation of segments as an adult, an available fate map [9], and the ability to regenerate make C. teleta a good lophotrochozoan model to study the segregation of the germline and the relationship between somatic and germline stem cells.
In contrast to the detailed knowledge of gametogenesis for many polychaetes, far less is known about primordial germ cells and the origin of the germline in polychaetes [10,11]. Along with the vasa and nanos genes, piwi has essential roles during germline development. Piwi genes are members of the Argonaute (Ago) family of proteins, which function through their interactions with small RNAs (sRNAs). In Drosophila melanogaster and mammals, piwi proteins bind to special classes of sRNA molecules such as repeat-associated small interfering RNAs (rasiRNAs) and piwi-interacting RNAs (piRNAs) [12][13][14]. PiRNAs are often complementary to transposon sequences and can silence transposable elements in the germline [15]. Piwi genes can thus affect germline determination, germline maintenance, gametogenesis, stem cell self-renewal, RNA interference (RNAi), and transposon silencing [14]. In D. melanogaster, piwi is expressed in embryos as well as adult gonads [12]. Within ovaries, piwi is expressed in both somatic and germline cells [16]. Mutations within the piwi gene of D. melanogaster cause male sterility due to defects in spermatogenesis, and mutant females are deficient in germline cells [17]. In mice, miwi (mouse homolog of piwi) is normally expressed during spermatogenesis where it is largely restricted to the testis [14], and miwi knockout mice exhibit male sterility, characterized by a block at the early spermatid stage [14]. Piwi family genes are also expressed in germ cells across a broad range of taxa, including zebrafish [18], sea urchins [19], ctenophores [20], and jellyfish [21], although the function and molecular mechanism of action are known for only a few species. There are limited examples of expression studies for piwi genes in annelids. These include characterization of a piwi gene ortholog in the gonads and during regeneration in the oligochaete annelid worm Enchytraeus japonensis [22], and expression in the PGCs of larvae and juveniles in the polychaete annelid Platynereis dumerilii [23].
In this study, we present a comprehensive expression analysis for the two piwi paralogs in the genome of the polychaete annelid C. teleta. Previously, Dill and Seaver [24] reported that orthologs of vasa and nanos are expressed in both the germline and somatic cells, primarily in cells of the posterior growth zone of the adult. If these germline regulators as well as piwi also function in somatic stem cells, one might predict they are expressed in regenerating tissue. We examined piwi expression during embryonic and larval development, and during adult growth and regeneration. Both Ct-piwi1 and Ct-piwi2 are expressed throughout the life cycle of C. teleta, from early cleavage stage embryos to reproductive adult worms and during gametogenesis. In addition, Ct-piwi1 has a complex and dynamic expression pattern during posterior regeneration in both somatic and germline precursors.
Results
Phylogenetic analyses of C. teleta Piwi
Searches of the C. teleta genome identified two putative piwi homologs, which we call Ct-piwi1 and Ct-piwi2.
Both predicted open reading frames contain conserved PAZ and PIWI domains characteristic of piwi genes. The PIWI domain in both Ct-Piwi1 and Ct-Piwi2 is located near the 3' end of the ORF, which is typical of piwi genes [25]. Phylogenetic analyses were conducted by Bayesian and maximum likelihood methods. Both Ct-Piwi1 and Ct-Piwi2 cluster within the Piwi subfamily of the Ago family of proteins, separately from the Argonaute subfamily (Figure 1). There is 100% Bayesian posterior probability support for the Piwi subfamily node, as well as 100% maximum likelihood bootstrap support. Ct-Piwi1 and Ct-Piwi2 are more closely related to Piwi paralogs from other animals than to each other; we infer that these two genes do not represent a recent gene duplication. Indeed, most of the animals included in these analyses have at least two paralogs that are divergent from each other, which suggests a deep root for this duplication, perhaps in the bilaterian or even metazoan ancestor. While there is some lack of resolution among the Piwi1, Piwi-like1, Piwi-like3, and Piwi-like4 proteins, the Piwi2 and Piwi-like2 sequences group closely together, with 100% Bayesian posterior probability support and 97% maximum likelihood bootstrap support for this node. Ct-Piwi2 belongs to this Piwi2 subgroup, which includes representatives from a broad range of metazoan taxa.
Regenerative capabilities of C. teleta
C. teleta has the ability to regenerate lost tissue, and upon amputation will regenerate posterior segments [26]. To aid our interpretations of piwi expression, we characterized posterior regeneration in reproductive adults (eight weeks post-metamorphosis). In C. teleta, there are two distinct body regions: segments 1 to 9 are the thoracic segments, and 40 to 50 abdominal segments are continuously added posteriorly throughout adult development. Transverse amputations were made on adult male and female worms at the segment boundary between the 11th and 12th segments (Figure 2A, dotted line). The rate of regeneration varies among individuals; this variation becomes more pronounced after five days post-amputation, and is likely due to environmental conditions. Within four hours of amputation, wound healing occurs by contraction of the severed edges of the body wall (Figure 2B). The gut is closed off during early stages of regeneration by formation of an intact epithelium covering the wound. At one day post-amputation, the wound has fully healed and a small blastema (mass of undifferentiated cells) is visible (Figure 2C). Between one and three days post-amputation, the blastema enlarges. In addition, the anus has reopened and the worm can feed and excrete ingested material (Figure 2D). Between three and seven days post-amputation, the blastema continues to grow and elongates, but there are no external signs of segmentation (Figure 2E-G). At five days post-amputation, axons can be observed extending from the severed longitudinal nerves into the blastema, likely invading the regenerating tissue from cell bodies in the pre-existing tissue (Figure 2F). The blastema has a smaller diameter relative to the pre-existing tissue, and a distinct pygidium and posterior growth zone appear between 10 and 14 days post-amputation (Figure 2J). Typically, several small segments also become morphologically apparent between 10 and 14 days post-amputation (Figure 2H-J). Nascent segments are initially visible by the appearance of forming ganglia and circular peripheral nerves extending from the ventral nerve cord (Figure 2I, I', arrow, arrowheads); at this stage there are not yet external signs of segmentation (Figure 2I', arrow). The formation of chaetae and intersegmental furrows of the ectoderm occurs a few days later (Figure 2K, arrowheads). When segments form, multiple small segments appear rather than a single segment at a time. As many as 20 segments have regenerated by 18 days post-amputation (Figure 2K).
Ct-piwi1 and Ct-piwi2 embryonic and larval expression patterns
We characterized expression of the two C. teleta piwi genes during embryonic and larval development by whole mount in situ hybridization. C. teleta development has been previously described and follows an established staging system (Figure 3, top) [27]. In uncleaved zygotes, two-cell and four-cell stage embryos, Ct-piwi1 transcripts can only be detected after an extended color reaction (Figure 3A-C, arrows). In subsequent cleavage stages, Ct-piwi1 is broadly expressed in most if not all cells (Figure 3D, arrows). Gastrulation occurs during stage 3, and at this stage the Ct-piwi1 expression pattern becomes more restricted within the embryo. Near the end of gastrulation and following closure of the blastopore, Ct-piwi1 is transiently expressed in the endoderm (Figure 3E). In early stage larvae (stage 4), Ct-piwi1 is expressed in several discrete domains including the presumptive brain, foregut, and mesodermal bands (Figure 3F). These expression domains persist into stage 5, and in addition, two small ventro-lateral clusters of cells appear in the mid-body segments. These clusters are medial to the mesodermal bands (Figure 3G, arrows), and become more easily detected at later larval stages. At stage 6, brain expression weakens, while expression becomes more apparent in the foregut, trunk mesoderm, and a band of Ct-piwi1-expressing cells immediately anterior to the telotroch that corresponds to the posterior growth zone (Figure 3H). In addition, expression in the two ventro-lateral cell clusters becomes more prominent; each cluster contains two to five cells and is positioned within the mesoderm. By stage 8, the clusters lie closer to the ventral midline relative to their position at stage 6. The position of the Ct-piwi1-positive ventro-lateral clusters also varies among stage 8 larvae, with the following observed patterns: (1) bilaterally symmetric clusters at the midline (n = 15/35), (2) bilaterally symmetric clusters lateral to the midline (n = 11/35), (3) asymmetrically positioned clusters lateral to the midline, with one of the clusters more anterior to the other (n = 9/35) (Figure 3L, arrows). By stage 9, only a single cluster at the ventral midline of segment 4 is apparent (Figure 3M, arrows). This cluster is positioned dorsal to the ventral nerve cord and ventral to the gut tube, at the boundary between the foregut and midgut (Figure 3N). From stage 6 to stage 9, the position of these Ct-piwi1-expressing cells changes from paired ventro-lateral clusters (n = 35/35) to a single cluster at the ventral midline (n = 30/32). The observed variation in the position of the ventro-lateral cell clusters of Ct-piwi1-expressing cells and their progression to the ventral midline is consistent with migratory behavior of these cells. There is no obvious increase in the number of cells in these clusters between stage 6 and stage 9. We hypothesize that these cells are PGCs. By stage 9, Ct-piwi1 expression is restricted to the posterior growth zone and a single cluster of putative PGCs at the ventral midline.
The position of Ct-piwi1 expression domains was characterized in relation to patterns of cell division in larvae as visualized by EdU incorporation. A comprehensive study of cell division patterns during larval development of C. teleta has been previously reported [27]. Ct-piwi1 expression corresponds with regions of dividing cells in C. teleta larvae, including in the foregut and posterior growth zone in stage 7 larvae ( Figure 4A, B, D). However, there are also EdU positive cells that lack Ct-piwi1 expression, and there are Ct-piwi1 positive cells that do not appear to be EdU positive ( Figure 4A, B, C, arrows). For example, the putative PGCs are not EdU positive, consistent with our observations that the number of cells does not appear to increase during larval stages. Therefore, there is only partial overlap of these two patterns.
Expression of Ct-piwi2 was also characterized during embryonic and larval development. Ct-piwi2 expression patterns overlap with those of Ct-piwi1 throughout early cleavage stage embryos (data not shown), and larval stages 5 to 7 (Figure 5). At stage 5, Ct-piwi2 is detected in the brain, presumptive foregut and in the mesodermal bands (Figure 5A). At stage 6, brain and mesodermal band expression begins to fade, while foregut and posterior growth zone expression persists (Figure 5B). Ct-piwi2 is also expressed in the PGCs at this stage (Figure 5B, arrows). Expression in the segmental trunk continues to diminish during stage 7; however, expression is still evident in the foregut, posterior growth zone, and ventro-lateral cell clusters (Figure 5C, arrows). The expression of Ct-piwi2 and Ct-piwi1 in the ventro-lateral cell clusters is also similar at late larval stages when the clusters are localized to the midline (data not shown). Ct-piwi2 expression also overlaps with that of Ct-piwi1 in adults (see below), and is observed in the PGCs, genital ducts, immature oocytes, and posterior growth zone (data not shown). Although there may be minor differences in expression of Ct-piwi1 and Ct-piwi2, the expression patterns are largely similar during the stages examined; thus, we did not further characterize expression of Ct-piwi2.
Ct-piwi1 expression in juveniles
Ct-piwi1 expression was observed in distinct domains in one week post-metamorphic juvenile worms (Figure 6A, B). Within the thoracic region of both males and females (segments 1 to 9), there is expression in a discrete structure of approximately 25 cells that is localized within the coelomic cavity to the ventral midline of segment 5, usually extending into segment 6 (Figure 6C, D, arrows). We think these cells are PGCs. Males and females also express Ct-piwi1 in the posterior growth zone (Figure 6E, F). In addition, males have Ct-piwi1-expressing cells in a pair of cell clusters positioned at the junction between segments 7 and 8 (Figure 6C). In some animals, a second cluster was also apparent at the junction between segments 8 and 9. These clusters are in a ventro-lateral position in the coelomic cavity, and based on their location, they are likely to be male gamete precursors. At this stage, the ovaries of the female have not yet developed, and there is only expression in the PGCs and posterior growth zone in females (Figure 6B, D, F).
In two-week post-metamorphic juvenile worms, Ct-piwi1 expression domains are similar to those observed in one week post-metamorphic juvenile worms. The appearance of ovaries within the females is the biggest difference between the two stages. The ovaries appear as paired ventral structures adjacent to the lateral edges of the intestine, and at this stage primarily contain previtellogenic oocytes. The most anterior segment that contains ovaries is the 10 th segment ( Figure 7B), posterior to the thoracic region, and at this stage the ovaries span many segments. Ct-piwi1 is expressed in immature oocytes within the ovaries of females ( Figure 7B, D), and is not detected within mid-body abdominal segments of males ( Figure 7C). In males, Ct-piwi1-expressing cells are present in two pairs of ventro-lateral cell clusters positioned at the boundaries between segments 7 and 8 and segments 8 and 9 ( Figure 7A). The structure containing the putative PGCs is larger in area and more elongated compared to the structure in one-week post-metamorphic juveniles, and now contains approximately 50 cells, within segments 5 to 6 ( Figure 7A, B, arrows). The posterior growth zone of males and females maintains strong Ct-piwi1 expression, which is most prominent in the mesoderm ( Figure 7E, F). Anterior to the posterior growth zone in females, there are segmentally repeated, paired ventral cell clusters between the ventral nerve cord and gut that express Ct-piwi1 ( Figure 7F). These clusters are positioned along the anterior face of the septa at the segmental boundary. We hypothesize that these cell clusters are female germline precursors that will colonize the future ovaries once these segments mature and ovaries form within them. In approximately one-third to one-half of the two-week juveniles (n = 13/30), we also observed Ct-piwi1 expression in cells scattered in the trunk within the coelomic cavity (not shown). These cells have a large nuclear to cytoplasmic ratio, lack obvious signs of morphological differentiation, and their position is highly variable within the coelomic cavity among individuals. We rarely saw these cells in one-week juveniles and reproductive adults.
Ct-piwi1 adult expression patterns
Expression of Ct-piwi1 was also examined in reproductive adult worms eight weeks post-metamorphosis. The overall expression pattern in adults is similar to that of two-week post-metamorphic juvenile worms. Within the thoracic region (segments 1 to 9) of males and females, Ct-piwi1 expression persists in the putative PGCs localized to segment 5. This structure has continued to enlarge and at this stage contains over 75 Ct-piwi1-expressing cells, typically extending into segment 6 (Figure 8A, B, E, arrows).
Ct-piwi1 is also expressed in the gonads. Adult females express Ct-piwi1 in the ovaries in the abdominal segments (Figure 8B, D, F, G). Each ovary contains oocytes at different stages of development [28]. Ct-piwi1 is only detected in the medial immature oocytes and not in the large, laterally-positioned mature oocytes within each ovary (Figure 8F, G). Immature oocyte expression is present in many mid-body abdominal segments as clusters of cells adjacent to the ventral midline (Figure 8D). Males express Ct-piwi1 in the symmetrical ventro-lateral genital ducts spanning the boundary between segments 7 and 8 (Figure 8A), but there is no detectable expression within the mid-body abdominal segments (Figure 8C). Male and female adults also maintain strong Ct-piwi1 expression in the posterior growth zone (data not shown).
Ct-piwi1 expression during regeneration
Amputations were made on adult female worms at the segment boundary between the 11 th and 12 th segments, and Ct-piwi1 expression was monitored at different time points during the course of regeneration. A schematic shows the location of the cut site at the 12 th segment ( Figure 9A, dotted line). At all stages examined, expression of Ct-piwi1 is maintained in the pre-existing tissue in the gonads and putative PGCs during regeneration.
Following amputation, wound healing occurs within four hours post-amputation ( Figure 9B). At one day post-amputation, the wound has fully healed; Ct-piwi1 is not expressed in the blastema at this point or during wound healing ( Figure 9B, C). The earliest Ct-piwi1 expression is detectable in the regenerating tissue at three days post-amputation, during growth of the blastema and prior to the appearance of segments ( Figure 9D, arrows). Expression is present in both the mesoderm and ectoderm, and is more pronounced on the ventral side of the blastema. As the blastema continues to grow (four to six days post-amputation), Ct-piwi1 expression persists in the regenerating tissue and is most prominent in the ventral mesoderm (Figure 9E, F, G, arrows). At these stages, it becomes clear that Ct-piwi1 is present in the proximal and mid-portion of the regenerate, but is absent from the most posterior end.
At later stages of regeneration, Ct-piwi1 expression becomes more restricted. From 10 through 18 days post-amputation, there is a morphologically distinct pygidium and posterior growth zone. During these stages, Ct-piwi1 is consistently expressed in the posterior growth zone of the regenerating tissue (Figure 10A-C), and at 10 days post-amputation, it is the most prominent expression domain. At 14 days post-amputation, segments become apparent externally and additional expression domains appear, including in a loosely organized group of cells anterior to the posterior growth zone in the ventro-lateral region of the coelomic cavity (Figure 10B, arrows). In addition, in the anterior segments of the regenerating animals, there is expression associated with the ventral face of the gut in the mesoderm; this domain corresponds to the position where the ovaries will form. We interpret these piwi-expressing cells to be oogonia. Approximately 20 segments have regenerated by 18 days post-amputation. At this stage, ovaries have begun to form in the anterior segments of the regenerate, and they contain piwi-expressing immature oocytes (Figure 10C). In the middle segments of the regenerate, Ct-piwi1 is expressed in a pattern very similar to that observed in anterior regenerating segments of 10-day post-amputation adults, in putative oogonia. Expression is also evident in loosely organized cells anterior to the posterior growth zone in the coelomic cavity (Figure 10C, arrows). Similar amputations were also performed on adult males, and there were no detectable differences in Ct-piwi1 expression between males and females within the regenerating tissue from zero to ten days post-amputation (not shown).
Moreover, between 14 and 18 days post-amputation, both males and females exhibit similar expression in the posterior growth zone and in a population of loosely organized cells anterior to the posterior growth zone. In summary, there are two distinct phases of Ct-piwi1 expression during regeneration: an early phase in a broad domain during blastemal growth, and a later phase of more restricted expression in the posterior growth zone, regenerating ovaries (of females and hermaphrodites), and in a localized population of loosely organized cells in the coelomic cavity.
Discussion
Expression of piwi in germline and somatic tissues of C. teleta
Ct-piwi1 and Ct-piwi2 are expressed throughout the life history of C. teleta in a dynamic spatial pattern that includes both somatic and germline cells. The two genes show very similar expression patterns to each other. Both genes are broadly expressed during embryonic and early larval development, and gradually become restricted to the posterior growth zone, putative PGCs, and gonads during juvenile and adult stages. In larval stages, the position of the putative PGCs that express Ct-piwi1 and Ct-piwi2 closely corresponds with the position of cell cluster descendants of the blastomere 4d [9]. Thus, it is likely that the PGCs arise from 4d in C. teleta. By examining Ct-piwi1 expression at several juvenile stages, we were able to characterize the gradual expansion of the putative PGCs from a small cluster of approximately 4 to 10 cells in late stage larvae to a cluster of over 75 cells in the reproductive adult. The structure in segment 5 of juveniles to which the putative PGC population localizes may serve as a transient intermediate target for PGCs or as a PGC niche prior to adult gonad formation. In many animals, PGCs form far from the gonads and migrate to the sites of developing ovaries or testes in the embryo [29]. Migration of PGCs has also been reported in a few polychaetes [11,23]. We hypothesize that in older juveniles of C. teleta, some of the PGCs migrate from this niche to the gonads in segments 7 and 8 of males and to the reproductive abdominal segments in females. Our observation of scattered Ct-piwi1-expressing cells in the coelomic cavity of juveniles at a stage when the gonads begin to form (two weeks post-metamorphosis) is consistent with this hypothesis.
Localization of putative PGCs in C. teleta adult worms
A surprising result from our characterization of Ct-piwi gene expression patterns, and also from previous analyses of vasa and nanos gene orthologs in C. teleta [24], is the identification of a population of putative PGCs that persists in sexually mature adults, even after the gonads have formed and contain mature gametes. These putative PGCs are encased by a thin sheath of cells and suspended by mesenteries on the ventral side of the coelomic cavity, spanning segment 5 and extending partially into segment 6. This structure was not previously identified by TEM/morphological studies. In contrast to many segmentally repeated features in the mid-body, this cluster of putative PGCs is present as a single structure. A similar structure called the 'primary gonad' has been described in P. dumerilii [23]; however, because we did not observe any steps of gametogenesis in this structure, we do not adopt the term 'gonad'. Instead, our observations are more consistent with referring to this structure as a PGC niche. This niche appears to be a permanent structure that provides a source of new PGCs throughout the animal's life, and it is located separate from the gonads. Although the biological significance of maintaining such a population of PGCs in mature adults is currently unknown, there are several features of the reproductive biology of C. teleta that may require a persistent stock of PGCs. Both males and females can reproduce multiple times, and these PGCs may re-populate the gonads to ensure the propagation of multiple generations. Additionally, since ovaries can reform following amputation of abdominal segments, cells from this PGC population may colonize the regenerating ovaries. Furthermore, since sexually mature males can be environmentally induced to produce oocytes as hermaphrodites [6], maintaining a reservoir of PGCs may permit such phenotypic plasticity of the sexes and provide a source of germ cells to populate newly forming ovaries. Thus, this system provides a good opportunity to study the epigenetic control of gamete determination and phenotypic differentiation.
Origin of the germline during regeneration in C. teleta
The origin of the oocytes that populate regenerating ovaries following amputation in C. teleta is currently unknown. As previously hypothesized, oocyte precursors may originate from a resident PGC population in segment 5 (pre-existing tissue), and migrate in a posterior direction through numerous segments to nascent ovaries within the regenerating tissue [24]. This strategy would provide a rapid supply of germ cell precursors to the body of the regenerating worm. However, our examination of Ct-piwi1 expression during regeneration has led us to consider an alternate explanation, based on the observation that Ct-piwi-positive cells appear de novo in the coelomic cavity of regenerating segments when the ovaries begin to form (14 to 18 days post-amputation) (Figure 10B, C). In this scenario, oocytes may originate from multipotent stem cells within the regenerating tissue that later acquire a germ cell identity. We suggest that these germ cells arise from the lining of the coelomic cavities. Such a scenario is consistent with an epigenetic mode of PGC specification in which, following amputation, a signaling event induces somatic stem cells to produce germ cells that then differentiate into oocytes. If this were true, it would indicate that C. teleta has the ability to replace lost germ cells, and contrasts with the situation in the model organisms D. melanogaster, Caenorhabditis elegans, Danio rerio, and mice, in which ablation of the germline results in sterile animals [13,14,16].
Since many model systems have a fully segregated germline and lack regenerative capabilities, detailed studies of these animals may have disproportionately influenced our views concerning the segregation of the germline from the soma. Multipotent stem cells in some bilaterian animals retain the ability to generate germline cells [3]. Under normal circumstances in such animals, PGCs are segregated from somatic tissues during early development and are responsible for generating all gametes. In altered conditions, such as during regeneration or when PGCs are experimentally removed, multipotent stem cells in somatic tissue may compensate and produce gametes. For example, during normal development in the ascidian Ciona intestinalis, germ cells in the tailbud of the tadpole stage are absorbed during metamorphosis and persist as PGCs in the young juvenile. However, upon removal of the larval tail prior to metamorphosis, PGCs from another source appear in the gonad rudiment at a later stage [30]. Following removal of vasa-expressing micromeres in the embryo of the sea urchin Strongylocentrotus purpuratus, an accumulation of Vasa protein is induced in other cells that presumably give rise to functional PGCs [31]. These observations indicate the presence of a compensatory mechanism to produce PGCs from somatic stem cells in the absence of the original germ cells.
Although piwi is best known for its role in the germline, there are a growing number of cases in which expression has also been reported outside the gonads and the germline, often in multipotent stem cells. Examples include not only early branching metazoans such as sponges [32,33], hydrozoan cnidarians [21] and ctenophores [20], but also bilaterians such as acoel flatworms [34], planarians [35,36], tunicates [30], and another polychaete annelid [23]. Our observations of Ct-piwi1 and Ct-piwi2 expression in both the germline and in regions of dividing cells in the posterior growth zone add another example to this list. Recently, piwi and vasa genes have been proposed to be ancestrally associated with stem cell character ('stemness'), rather than solely with germline stem cells [20]. In animal lineages such as in the mussel Mytilus galloprovincialis, in which vasa expression is restricted to the germline [37], there could have been a partial or complete loss of expression of this gene from somatic stem cell lineages. In C. teleta, vasa, nanos and piwi orthologs are all expressed in very similar patterns to one another in both the germline and posterior growth zone, likely in multipotent stem cells.
Comparison of piwi expression patterns and PGC migration among annelids
The patterns of Ct-piwi1 and Ct-piwi2 expression in C. teleta show both similarities and differences when compared to piwi expression in the two other annelids that have been examined. In both C. teleta and the polychaete P. dumerilii, piwi is detected in PGCs and the posterior growth zone in larval and juvenile stages [23]. In C. teleta larvae, the PGCs initially appear several segments anterior to the posterior growth zone, in the mid-trunk segments. In P. dumerilii, although the PGCs arise from the mesoderm in the posterior growth zone, at the end of larval development and in juvenile stages, the PGCs migrate anteriorly, following generation of additional segments, to the primary gonad. It has been proposed that the primary gonad serves as an intermediate residence for PGCs in P. dumerilii [23]. Interestingly, juveniles of both polychaetes express piwi in a population of PGCs in segment 5, the location of the primary gonad in P. dumerilii. However, there is an important distinction between these two structures at this location. The PGC population in C. teleta is positioned on the ventral side of the coelomic cavity, whereas in P. dumerilii, it is on the dorsal side of the body in a circumferential band. In contrast to C. teleta, there are no somatic gonads in P. dumerilii. Instead, oocytes mature within the coelomic cavity, initially as clusters and later singly as individual oocytes [28,38,39]. As P. dumerilii juveniles mature, PGCs migrate from the primary gonad to the base of the segmentally iterated parapodia in both males and females, where gonial clusters later appear.
Piwi expression has also been described in the oligochaete E. japonensis. This species normally reproduces asexually and can be induced to undergo sexual reproduction under starvation conditions. In the asexual phase, Ej-piwi is expressed in cells distributed throughout the body. During starvation conditions, gonads form in the seventh and eighth segments and Ej-piwi is expressed in the developing gonads [22], similar to gonad expression in reproductive adults of C. teleta. During regeneration, piwi expression patterns are distinct between E. japonensis and C. teleta. In C. teleta, piwi is expressed during blastemal growth, and later, in a more restricted pattern following differentiation in the regenerating segments. This later phase includes piwi-expressing cells in the nascent ovaries of the regenerated tissue. In contrast, expression is not detected within the blastema during early stages of regeneration in E. japonensis. Instead, following amputation, discrete piwi-expressing cells localize to the region of the amputation site, proximal to the blastema. Later, as regenerating tissues begin to differentiate and segments form in E. japonensis, piwi-expressing cells appear in the regenerate and eventually become localized to the sites of the forming gonads. In summary, although there are clear species-specific differences in reproductive anatomy and morphogenesis, it appears that in annelids there is conservation of piwi expression in the primordial germ cells, developing gametes, and posterior growth zone.
Conclusions
The expression of Ct-piwi1 and Ct-piwi2 in both the germline and regions of dividing cells in the posterior growth zone provides a molecular link between germline stem cells and pluripotent somatic stem cells in C. teleta. Furthermore, the similarity in expression of Ct-piwi1 to the expression patterns previously observed for vasa and nanos homologs in C. teleta [24] suggests that this core set of stem cell regulators has retained an ancestral role in somatic and germline stem cell production. Such a dual role may reflect an ancestral metazoan feature in which there was a close link between somatic and germline stem cells, and contrasts with the segregation of the germline in animals such as D. melanogaster [40], C. elegans [41] and D. rerio [42].
Materials and methods
Cloning of Capitella teleta piwi1 and piwi2 genes
Several overlapping expressed sequence tag (EST) sequences representing a single piwi1 homolog (Ct-piwi1) and another set representing a single piwi2 homolog (Ct-piwi2) were identified in BLAST searches of Capitella EST libraries from the C. teleta 8x genome sequencing project (Joint Genome Institute, Department of Energy, Walnut Creek, CA, USA, http://genome.jgi-psf.org/Capca1/Capca1.home.html). Each set of sequences was aligned and compiled into a single predicted transcript for each gene. EST clones containing both Ct-piwi1 and Ct-piwi2 fragments from a mixed stage plasmid cDNA library were streaked on LB-ampicillin plates from -80°C glycerol stocks. Both Ct-piwi clones were checked for correct insert sizes, and sequenced for verification (Macrogen, Seoul, South Korea). The predicted transcripts were submitted to the National Center for Biotechnology Information (NCBI) as third-party annotation sequences with the following accession numbers: Ct-piwi1 (BK007975) and Ct-piwi2 (BK007976). The Ct-piwi1 riboprobe is approximately 1.5 kb and spans nearly the entire conserved PIWI domain, which consists of about 900 bp. The Ct-piwi2 riboprobe is 920 bp and includes the 3' end of the PIWI domain as well as 3' untranslated region. The two probes overlap by about 300 bp and have a sequence similarity in this region of 64%.
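The reported 300 bp overlap with 64% sequence similarity bears on probe cross-reactivity. As an illustration only (not the authors' code), the following Python sketch shows how such a pairwise identity figure can be computed for a pre-aligned overlap region; the sequences below are hypothetical placeholders, not the actual Ct-piwi probe sequences.

```python
# Hypothetical sketch: percent identity of the overlapping region of two
# riboprobes, to gauge the risk of cross-hybridization. Placeholder sequences.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of matching positions between two equal-length, pre-aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a.upper(), seq_b.upper()))
    return 100.0 * matches / len(seq_a)

piwi1_overlap = "ATGGCTAGCTAGGACTTACGATCGGATCCTA"  # placeholder, not a real probe sequence
piwi2_overlap = "ATGGATAGTTCGGACTAACGTTCGGATCGTA"  # placeholder, not a real probe sequence
print(f"overlap identity: {percent_identity(piwi1_overlap, piwi2_overlap):.1f}%")
```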
Sequence alignments and phylogenetic analysis
tBLASTn searches of the C. teleta genome were conducted to find all homologs of D. melanogaster Piwi (NCBI accession number NP_476875.1). Two putative orthologs were found in the C. teleta genome corresponding to JGI protein IDs 154759 and 163584. Two additional sequence hits were examined; these did not include the PAZ domain characteristic of Piwi proteins and were excluded from further analyses.
Amino acid sequences for related proteins across a broad diversity of animal taxa were downloaded from the protein database in GenBank. Additional lophotrochozoan sequences were obtained from the genomes of Lottia gigantea and Helobdella robusta (Joint Genome Institute, Department of Energy, Walnut Creek, CA, USA, http://genome.jgi-psf.org/). The conserved PAZ (Piwi Argonaut Zwille) and PIWI domains were identified by a Pfam search using default parameters. Only the PIWI domain was used to create an amino acid sequence alignment due to the high divergence rate of the PAZ domain. The 327 amino acid alignment was created with ClustalX using default parameters in MacVector v11.0 and hand-corrected for obvious alignment errors.
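A minimal sketch of the domain-trimming step described above, with hypothetical coordinates: given full-length protein sequences and Pfam-derived PIWI-domain boundaries, one can write a domain-only FASTA file to feed into the aligner. Names, sequences, and coordinates here are illustrative, not the actual data.

```python
# Hypothetical sketch: trim full-length Piwi proteins to their PIWI domains
# before alignment, since the PAZ domain is too divergent to align reliably.

records = {
    # name: (full_sequence, (domain_start, domain_end)) -- 0-based, end-exclusive;
    # coordinates would come from a Pfam search of each sequence.
    "Ct-Piwi1": ("MSTEQLKAVGRPLWADEFGHIKNPQRST", (8, 24)),
    "Ct-Piwi2": ("MATEQIKSVGKPLWSDEFGHLKNPQRSV", (8, 24)),
}

with open("piwi_domains.fasta", "w") as out:
    for name, (seq, (start, end)) in records.items():
        out.write(f">{name}\n{seq[start:end]}\n")
```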
ProtTest v2.4 [43] was used to determine the appropriate model of protein evolution. The RtRev model was recommended and used for both Bayesian and maximum likelihood analyses. Bayesian analysis was conducted with MrBayes v3.1.2 [44]. A total of 3,000,000 generations were run, sampled every 100 generations, with four independent runs and four chains. Once convergence was reached, a majority rule consensus tree was generated with a burn-in of 8,100 trees. Maximum likelihood analysis was performed with PhyML 3.0 [45] using the RtRev model with 1,000 bootstrap replicates.
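As a sketch of how the resulting trees might be inspected programmatically (assuming Biopython is available and the consensus tree has been exported in Newick format; the file name is hypothetical):

```python
# Hypothetical sketch: load a consensus tree and report internal-node support
# values, e.g. to verify the posterior probabilities cited in the Results.
from Bio import Phylo

tree = Phylo.read("piwi_consensus.nwk", "newick")  # assumed export from MrBayes/PhyML
for clade in tree.get_nonterminals():
    if clade.confidence is not None:
        tips = [leaf.name for leaf in clade.get_terminals()]
        print(f"support {clade.confidence}: {len(tips)} taxa, e.g. {tips[0]}")
```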
Trees were visualized in FigTree 1.3.1 [46] and drawn using Adobe Illustrator version CS4. GenBank and Swiss-Prot accession numbers and protein identification numbers from JGI are listed (Additional file 1). The nexus alignment is available upon request.
Animal husbandry
A C. teleta colony was maintained in the laboratory at 19°C according to published culture methods [47]. Juvenile and adult worms were maintained in bowls of 20 μm-filtered sea water (FSW) and provided with sieved ocean mud as a food source. Parental brood tubes were recovered by sifting mud through a fine-mesh sieve. Embryos and larvae were dissected from brood tubes and raised to the desired stage.
Amputations
Regeneration experiments were performed on mature adults at eight weeks post-metamorphosis. Adults are sexually mature at 8 to 10 weeks when raised at 19°C. Animals were removed from the mud and placed in a dish of FSW for two hours to allow them to excrete ingested material. Animals were then relaxed for 20 minutes in 0.37 M MgCl2. Individual worms were placed on a strip of dental wax (Electron Microscopy Sciences, Hatfield, PA, USA) in one to three drops of FSW. With the aid of a dissecting microscope (Zeiss, Gottingen, Germany), all amputations were made at the anterior edge of the 12th segment using a microsurgery scalpel (#715, Feather Safety Razor Co., Osaka, Japan). Posterior tissue segments were discarded. Following amputation, animals were returned to a separate 35 mm or 60 mm dish with FSW and left overnight. After 24 hours, mud was added to the dish and animals were maintained at 19°C. Specimens were periodically removed from the mud to monitor regeneration. Animals were fixed at different time points following amputation in 3.7% formaldehyde in FSW at 4°C for 16 to 24 h and then processed either for morphological analysis, immunohistochemistry or whole-mount in situ hybridization (see below).
Whole-mount in situ hybridization
Embryos (stages 1 to 3) were pretreated in a 1:1 mixture of 1.0 M sucrose and 0.25 M sodium citrate (Sigma-Aldrich Co., St. Louis, MO, USA) for three minutes, washed in FSW, and fixed in 3.7% formaldehyde in FSW overnight at 4°C. Larvae (stages 4 to 9), juveniles (one and two weeks post-metamorphosis), adults and amputated adults (eight weeks post-metamorphosis) were relaxed in 1:1 0.37 M MgCl2:FSW for 10 minutes and fixed in 3.7% formaldehyde/FSW overnight at 4°C. Embryos, larvae, juveniles and adults were then washed in phosphate-buffered saline (PBS), dehydrated in methanol and stored at -20°C. Whole-mount in situ hybridization followed published protocols [48,49]. Juveniles and adults were treated with the same conditions as embryos and larvae with the exception that proteinase K treatment was increased from 5 minutes to 10 minutes for juveniles and to 20 minutes for adults and amputated adults. In addition, for juvenile, adult and amputated adult stage experiments, the volume of all washes and hybridizations was increased from 0.5 to 1 ml. Digoxigenin-labeled riboprobes for Ct-piwi1 and Ct-piwi2 were generated with the MEGAscript kit (Ambion, Inc., Austin, TX, USA). For embryos, larvae, and one-week post-metamorphic juveniles, the Ct-piwi1 and Ct-piwi2 probe concentration was 1.0 ng/μl. For two-week post-metamorphic juveniles and adults, Ct-piwi1 and Ct-piwi2 were used at a concentration of 0.5 ng/μl. Following hybridization, probes were detected using nitroblue tetrazolium chloride/5-bromo-4-chloro-3-indolyl phosphate (NBT/BCIP) color substrate. Typically, the color reaction was allowed to develop between one hour and three days; however, in uncleaved zygotes and two-cell and four-cell stage embryos, a reaction between 6 and 11 days was necessary to detect transcripts. Specimens were equilibrated in glycerol (80% glycerol/10% 10x PBS/10% diH2O) and mounted on Rainex®-coated slides.
Microscopy
Microscopic analyses were performed on a Zeiss Axioskop 2 compound light microscope (Zeiss, Gottingen, Germany). Micrographs were captured with a stem-mounted SpotFlex digital camera (Diagnostic Instruments, Inc., Sterling Heights, MI, USA). Multiple DIC focal planes were merged for some images using Helicon Focus (Helicon Soft Ltd., Kharkov, Ukraine).
Confocal imaging was done using a Zeiss LSM 710 confocal microscope and Z-stack projections were generated using the ImageJ software (NIH).
EdU labeling
To detect dividing cells, the Click-iT EdU imaging kit was used to label cells undergoing DNA synthesis (Invitrogen Co., Carlsbad, CA, USA). The kit protocol was followed except for the following modifications. Stage 5 to 7 larvae were incubated for one hour in 300 μM 5-ethynyl-2'-deoxyuridine (EdU) in FSW (10 mM working stock diluted in PBS), fixed overnight at 4°C, and dehydrated in methanol. In situ hybridization experiments were performed according to the methods described above. Following the NBT/BCIP probe detection step, animals were washed in the following: 2X in PBS, 1X in PBS + 0.5% Triton for 20 minutes, and 2X in PBS + 3% BSA for 5 minutes each. Subsequently, the EdU detection reaction was carried out, but reduced to a total volume of 200 μl. Animals were rinsed several times with PBS, equilibrated in 80% glycerol, and analyzed and imaged following the methods for in situ hybridization described above.
Immunohistochemistry
Following fixation, regenerating adults were rinsed in PBS + 0.5% Triton, and permeabilized by incubation with 0.01 mg/ml proteinase K (Invitrogen) for 20 minutes. Specimens were then re-fixed for 20 minutes in 4% paraformaldehyde in PBS. Following 3X washes in PBS over 10 minutes and 2X washes in PBS + 0.5% Triton, animals were blocked for 2 hours at room temperature in PBS + 0.5% Triton + 10% normal goat serum (Sigma). Animals were incubated overnight at 4°C in a 1:400 dilution of anti-acetylated tubulin antibody (Sigma) followed by several washes in PBS + 0.5% Triton, and incubation in a 1:400 dilution of anti-mouse Alexa Fluor 488 secondary antibody (Invitrogen) overnight at 4°C. After washing out the secondary antibody for three hours at room temperature in PBS + 0.5% Triton, animals were cleared overnight in 80% glycerol and analyzed and imaged as described above.
Additional material
Additional file 1: Accessions. This file includes a table of the GenBank and Swiss-Prot accession numbers used for sequence alignments and phylogenetic analysis. Also included are Joint Genome Institute protein identification numbers for sequences from the genomes of C. teleta, L. gigantea, and H. robusta.
5-Oxo-6,8,11,14-eicosatetraenoic Acid Stimulates the Release of the Eosinophil Survival Factor Granulocyte/Macrophage Colony-stimulating Factor from Monocytes*
Allergic diseases such as asthma are characterized by tissue eosinophilia induced by the combined effects of chemoattractants and cytokines. Lipid mediators are a major class of endogenous chemoattractants, among which 5-oxo-6,8,11,14-eicosatetraenoic acid (5-oxo-ETE) is the most potent for human eosinophils. In this study, we investigated the effects of 5-oxo-ETE on eosinophil survival by flow cytometry. We found that this compound could promote eosinophil survival in the presence of small numbers of contaminating monocytes, but not in their absence. The conditioned medium from monocytes treated for 24 h with 5-oxo-ETE also strongly promoted eosinophil survival, whereas the medium from vehicle-treated monocytes had no effect. An antibody against the granulocyte/macrophage colony-stimulating factor (GM-CSF) completely blocked the response of eosinophils to the conditioned medium from 5-oxo-ETE-treated monocytes, whereas an antibody against interleukin-5 had no effect. Furthermore, 5-oxo-ETE stimulated the release of GM-CSF from cultured monocytes in amounts compatible with eosinophil survival activity, with a maximal effect being observed after 24 h. This effect was concentration-dependent and could be observed at concentrations in the picomolar range. 5-Oxo-ETE and leukotriene B4 had similar effects on GM-CSF release at low concentrations, but 5-oxo-ETE induced a much stronger response at concentrations of 10 nm or higher. This is the first report that 5-oxo-ETE can induce the release of any cytokine, suggesting that it could be an important mediator in allergic and other inflammatory diseases due both to its chemoattractant properties and to its potent effects on the synthesis of the survival factor GM-CSF.
Pronounced tissue eosinophilia is a hallmark of a number of diseases, including allergic disorders such as asthma and rhinitis and parasitic infections (1). In asthmatic subjects, increased numbers of eosinophils are found in the airways following exposure to allergens (2). Pulmonary eosinophils are believed to contribute to the pathophysiology of asthma through the release of cationic granule proteins, reactive oxygen metabolites, lipid mediators, and pro-inflammatory cytokines (3). In addition to eliciting tissue damage, eosinophil-derived mediators can perpetuate the inflammatory reaction and lead to chronic changes in airway function (3).
Accumulation of eosinophils in inflammatory sites is thought to be mediated by a number of factors. Following mobilization from bone marrow in response to cytokines, eosinophils migrate into the lung and other tissues in response to the release of locally produced chemoattractants. Certain lipid mediators, most notably products of the 5-lipoxygenase pathway and platelet-activating factor, are potent stimulators of this process (4). Antigen-induced pulmonary eosinophilia is dramatically reduced in mice lacking the 5-lipoxygenase gene (5) as well as in humans (6) and other species (7,8) treated with inhibitors of this enzyme. Among lipid mediators, the 5-lipoxygenase product 5-oxo-6,8,11,14-eicosatetraenoic acid (5-oxo-ETE) is the most potent and has been shown to induce eosinophil migration both in vitro (9,10) and in vivo in rats (11) and humans (12). 5-Oxo-ETE is produced by a variety of human inflammatory cells (9,13,14) as well as by mouse macrophages (15). In addition to chemotaxis, it induces a variety of other responses in eosinophils, including calcium mobilization, degranulation, superoxide production, actin polymerization, CD11b expression, and L-selectin shedding (16-18). 5-Oxo-ETE acts via a highly selective G protein-coupled receptor (19,20), which has recently been cloned (21,22). The accumulation of eosinophils in tissues is dependent not only on their migration from the blood, but also on their survival within the tissue, as these cells are normally quite short-lived and rapidly undergo apoptosis (23). Because of the potent effects of 5-oxo-ETE on eosinophils, the aim of this study was to determine whether it can also promote the survival of these cells.
Materials-Recombinant human interleukin-5 (IL-5), granulocyte/macrophage colony-stimulating factor (GM-CSF), eotaxin, and RANTES (regulated on activation normal T cell expressed and secreted) were purchased from Peprotech Inc. (Rocky Hill, NJ). The neutralizing monoclonal antibodies against recombinant human IL-5 and recombinant human GM-CSF were purchased from R&D Systems (Minneapolis, MN). Lipopolysaccharide (LPS) and polymyxin B were purchased from Sigma. The polyclonal antibody to human LPS-binding protein was obtained from Cedarlane (Hornby, Ontario, Canada).
Preparation of Eosinophils-Granulocytes were prepared from heparinized whole blood collected from healthy volunteers in Vacutainer blood collection tubes (BD Biosciences). No attempt was made to separate atopic from non-atopic donors, although none had asthma. Red blood cells were removed using dextran T-500 (Amersham Biosciences, Uppsala, Sweden) and mononuclear cells by centrifugation over Ficoll-Paque (Amersham Biosciences) as described previously (25). Any remaining red blood cells were then removed by hypotonic lysis. Finally, neutrophils and monocytes were removed from the resulting granulocyte preparation by treatment with a mixture of anti-CD16 (26) and anti-CD14 (unless otherwise indicated) antibodies coupled to paramagnetic microbeads (Miltenyi Biotec Inc., Bergisch-Gladbach, Germany), followed by passage through a column containing a steel matrix placed in a permanent magnet (MACS, Miltenyi Biotec Inc.). Eosinophils, contained in the pass-through fraction, were washed by centrifugation at 200 × g for 10 min. The purity of eosinophils, as determined using a laser-based flow cytometer (CELL-DYN 3700 system), was 94 ± 1%, the contaminating cells being neutrophils (4.6 ± 0.9%) and lymphocytes (1.4 ± 0.6%). No monocytes were detected in these preparations. Eosinophil viability was >99% as determined by trypan blue dye exclusion.
Preparation of Monocytes-Monocytes were purified from normal donors using a monocyte isolation kit from Miltenyi Biotec Inc., which permits negative selection of monocytes from peripheral blood mononuclear cells by immunomagnetic depletion of T cells, natural killer cells, B cells, dendritic cells, and basophils. Briefly, mononuclear cells were isolated from diluted heparinized peripheral blood by density gradient centrifugation over Ficoll-Paque (d 1.077). The mononuclear cells were then incubated with a mixture of hapten-conjugated antibodies against CD3, CD7, CD19, CD45RA, CD56, and IgE, followed by incubation with anti-hapten microbeads (Miltenyi Biotec Inc.) as described by the supplier. Monocytes were obtained in the pass-through fraction following immunomagnetic cell sorting using the MACS column as described above. The purity of monocytes (88 ± 2%) was determined using the CELL-DYN 3700 system. Viability was >99% as determined by trypan blue exclusion.
Conditioned Medium from Monocytes-Monocytes (1 × 10^6 cells/ml) were incubated in macrophage serum-free medium (Invitrogen) containing 2 mM L-glutamine, 100 units/ml penicillin, and 100 μg/ml streptomycin in the presence or absence of 5-oxo-ETE (1 μM, unless otherwise indicated) for 24 h at 37°C. The conditioned medium was collected following centrifugation of the cells and was stored at -20°C until used. In some experiments, the conditioned medium was incubated for 1 h at 37°C with saturating concentrations of neutralizing monoclonal antibody against IL-5 (20 μg/ml) or GM-CSF (20 μg/ml) prior to addition to monocyte-depleted eosinophils.
Assessment of Eosinophil Survival-Purified eosinophils were suspended in RPMI 1640 medium (Invitrogen) containing 10% fetal calf serum, 2 mM L-glutamine, 100 units/ml penicillin, and 100 μg/ml streptomycin. Eosinophils (2 × 10^5 cells/200 μl) were incubated in 96-well flat-bottomed tissue culture plates (Corning Inc., Corning, NY) at 37°C for 48 h in the presence or absence of agonists or the conditioned medium from monocytes in the presence of humidified air containing 5% CO2. Eosinophil survival was analyzed by flow cytometry using a fluorescein isothiocyanate (FITC)-labeled annexin V/propidium iodide staining kit (R&D Systems). In brief, cells were removed from each well by gentle pipetting. To confirm that all the cells were removed, the wells were examined using an inverted microscope. After washing with phosphate-buffered saline by centrifugation, the cells were gently resuspended and incubated with FITC-labeled annexin V (0.25 μg) and propidium iodide (0.5 μg) in the dark for 15 min at room temperature. Samples were analyzed within 1 h by flow cytometry using a FACSCalibur instrument with Cellquest software (BD Biosciences). Cell viability is expressed as the percentage of total cells that were not stained with either FITC-labeled annexin V or propidium iodide.
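A minimal sketch of this quadrant analysis (not the authors' Cellquest workflow): classify each event by two fluorescence gates and report the double-negative fraction as percent viability. The gate values and simulated data below are placeholders.

```python
# Hypothetical sketch: annexin V/PI quadrant gating on simulated events.
import numpy as np

rng = np.random.default_rng(0)
annexin = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)  # simulated FITC-annexin V signal
pi = rng.lognormal(mean=0.5, sigma=1.0, size=10_000)       # simulated propidium iodide signal

ANNEXIN_GATE, PI_GATE = 5.0, 5.0  # placeholder cutoffs; real gates are set per experiment

live = (annexin < ANNEXIN_GATE) & (pi < PI_GATE)        # lower left: unstained, viable
apoptotic = (annexin >= ANNEXIN_GATE) & (pi < PI_GATE)  # lower right: annexin V+/PI-
late = (annexin >= ANNEXIN_GATE) & (pi >= PI_GATE)      # upper right: annexin V+/PI+

print(f"viable: {100 * live.mean():.1f}%  apoptotic: {100 * apoptotic.mean():.1f}%  "
      f"late apoptotic/necrotic: {100 * late.mean():.1f}%")
```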
Quantitation of GM-CSF in Conditioned Medium-GM-CSF concentrations in the conditioned medium from monocytes were determined using a solid-phase enzyme-linked immunosorbent assay (BIOSOURCE International, Camarillo, CA), which uses the multiple sandwich technique. GM-CSF was detectable in the linear portion of the binding curve (7-250 pg/ml), which was determined using recombinant human GM-CSF.
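A sketch of the standard-curve interpolation underlying such an assay (illustrative only; the optical densities below are invented): fit the linear portion of the curve and flag readings outside the 7-250 pg/ml range.

```python
# Hypothetical sketch: linear standard-curve fit for a GM-CSF ELISA.
import numpy as np

standards_pg_ml = np.array([7.8, 15.6, 31.2, 62.5, 125.0, 250.0])
standards_od = np.array([0.05, 0.10, 0.19, 0.38, 0.74, 1.45])  # invented readings

slope, intercept = np.polyfit(standards_pg_ml, standards_od, deg=1)

def od_to_conc(od: float) -> float:
    """Interpolate a sample concentration; reject values outside the linear range."""
    conc = (od - intercept) / slope
    if not 7.0 <= conc <= 250.0:
        raise ValueError(f"{conc:.1f} pg/ml is outside the linear range; re-assay at another dilution")
    return conc

print(f"sample at OD 0.42 ~ {od_to_conc(0.42):.1f} pg/ml GM-CSF")
```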
Statistics-The results are presented as the means ± S.E. The statistical significance of the differences between various treatments was assessed using one- or two-way analysis of variance, followed by the Tukey test for post-hoc analysis. A p value of <0.05 was considered significant.
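For illustration (not the authors' software), the same workflow can be reproduced in Python with scipy and statsmodels; the survival percentages below are invented.

```python
# Hypothetical sketch: one-way ANOVA followed by Tukey's HSD post-hoc test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

vehicle = [25, 30, 22, 28]   # invented % survival per donor
oxo_ete = [48, 55, 44, 52]
il5 = [80, 85, 82, 86]

f_stat, p_value = f_oneway(vehicle, oxo_ete, il5)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([vehicle, oxo_ete, il5])
groups = ["vehicle"] * 4 + ["5-oxo-ETE"] * 4 + ["IL-5"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```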
5-Oxo-ETE Induces Monocyte-dependent Survival of Eosinophils-Eosinophils were isolated from peripheral blood by centrifugation over Ficoll-Paque, followed by immunomagnetic cell sorting of the granulocyte fraction using anti-CD16 antibody coupled to paramagnetic microbeads to remove neutrophils. These cells were treated with either vehicle or 5-oxo-ETE (1 μM) for 48 h in the presence of 10% fetal calf serum, and the numbers of live cells were determined by flow cytometry following labeling with annexin V and propidium iodide. As shown in Fig. 1A (lower left quadrant, unstained cells), only 23% of the vehicle-treated cells remained alive, with the remainder of the cells being either apoptotic (lower right quadrant, annexin V+/propidium iodide-) or apoptotic/necrotic (upper right quadrant, annexin V+/propidium iodide+). In contrast, 71% of the 5-oxo-ETE-treated cells remained alive after 48 h (Fig. 1B, lower left quadrant).
Examination of a larger number of subjects revealed that 5-oxo-ETE significantly increased eosinophil survival from 27 ± 5% in vehicle-treated eosinophils to 50 ± 10% (p < 0.01), compared with 83 ± 5% in cells treated with the eosinophil survival factor IL-5 (Fig. 2, left). However, there was considerable variability in the responses of different eosinophil preparations to 5-oxo-ETE, with some showing virtually no response.
In contrast, all preparations tested responded well to IL-5. This raised the possibility that the effect of 5-oxo-ETE was mediated by the release of an eosinophil survival factor from small numbers of contaminating cells. In support of this hypothesis, we found that the two cell preparations that displayed virtually no response to 5-oxo-ETE contained no detectable monocytes, whereas each of the five eosinophil preparations that responded to 5-oxo-ETE was contaminated with at least 1% monocytes (3.2 ± 1.6%). We confirmed that monocytes were responsible for the survival-enhancing effects of 5-oxo-ETE by removing these cells from our eosinophil preparations using anti-CD14 antibody coupled to paramagnetic beads. This resulted in a nearly complete loss of the effect of 5-oxo-ETE on eosinophil survival without any reduction in the response to IL-5, which acted directly on these cells (Fig. 2, right).
Monocytes Secrete a Soluble Eosinophil Survival Factor in Response to 5-Oxo-ETE-Monocytes purified immunomagnetically by negative selection were incubated with either vehicle or 5-oxo-ETE (1 μM) for 24 h. Aliquots of the conditioned medium from monocytes from a single donor were then incubated for 48 h with monocyte-depleted eosinophils from four different donors, and cell survival was assessed by flow cytometry. As described above, 5-oxo-ETE added directly to the four eosinophil preparations had only a small, statistically non-significant effect on cell survival, whereas IL-5 dramatically increased survival (Fig. 3A, open bars). In contrast, the conditioned medium from 5-oxo-ETE-treated monocytes strongly stimulated eosinophil survival (67 ± 10%; p < 0.001) to almost the same extent as the direct addition of IL-5, whereas the conditioned medium from vehicle-treated monocytes had no detectable effect (11 ± 4%). This experiment clearly demonstrates that different eosinophil preparations respond consistently to a 5-oxo-ETE-inducible factor released by monocytes. Therefore, for statistical purposes, for all of the experiments described below, n was taken to be the number of different monocyte preparations tested against one or more eosinophil preparations.
To determine the potency of 5-oxo-ETE in inducing the release of the monocyte-derived survival factor, four different preparations of monocytes were incubated separately with different concentrations of 5-oxo-ETE for 24 h. The conditioned media were then incubated with monocyte-depleted eosinophils from a single donor for 48 h. 5-Oxo-ETE (EC50 = 5.8 ± 2.1 nM) potently induced the release of the eosinophil survival factor (Fig. 3B). This effect was consistently observed with monocytes from different donors. As LPS has also been shown to induce the monocyte-dependent survival of eosinophils (27), we wanted to ensure that the response we observed was not an artifact due to contamination with this substance. We therefore conducted experiments in which either polymyxin B, which binds to the lipid A portion of LPS (28), or an antibody to LPS-binding protein was added to the conditioned medium from monocytes. Neither of these substances had any effect on either basal eosinophil survival or the survival-enhancing effect of 5-oxo-ETE (Fig. 4). In contrast, both inhibitors nearly completely blocked the increase in eosinophil survival elicited by incubation with the conditioned medium from monocytes that had been treated with LPS for 24 h (p < 0.001).
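As a sketch of how such an EC50 can be derived from concentration-response data (the data points below are invented, and the paper does not specify its fitting method), a four-parameter logistic fit with scipy:

```python
# Hypothetical sketch: four-parameter logistic fit to estimate an EC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Standard 4-parameter logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

conc_nM = np.array([0.1, 1.0, 3.0, 10.0, 30.0, 100.0, 1000.0])
survival = np.array([12, 20, 35, 55, 62, 68, 70])  # invented % survival values

params, _ = curve_fit(four_pl, conc_nM, survival,
                      p0=[10, 70, 5.0, 1.0], maxfev=10_000)
print(f"fitted EC50 ~ {params[2]:.1f} nM")
```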
The Monocyte-derived Eosinophil Survival Factor Induced by 5-Oxo-ETE Is Identical to GM-CSF

Because both IL-5 and GM-CSF are known to be potent eosinophil survival factors (1), we determined whether the effects of 5-oxo-ETE could be prevented by blocking antibodies to these two cytokines. To test the efficacy of the antibodies, monocyte-depleted eosinophils were incubated with IL-5 or GM-CSF in the presence or absence of anti-IL-5 or anti-GM-CSF antibody, respectively. As shown in Fig. 5 (left), these antibodies nearly completely abolished the responses mediated by their respective antigens (p < 0.001). In contrast, a control isotype-matched antibody had no effect on the survival of either vehicle- or cytokine-treated monocytes (data not shown).
To determine whether these antibodies could block the effects of 5-oxo-ETE, the conditioned medium from 5-oxo-ETE-treated monocytes was treated with either anti-IL-5 or anti-GM-CSF antibody. As described above, the conditioned medium from 5-oxo-ETE-treated monocytes strongly induced eosinophil survival (p < 0.001). This effect was completely abrogated by the addition of anti-GM-CSF antibody (p < 0.001), but was unaffected by anti-IL-5 antibody (Fig. 5, right), providing strong evidence that the effect of 5-oxo-ETE is mediated by GM-CSF.
5-Oxo-ETE Stimulates Monocytes to Produce GM-CSF

To provide direct evidence that 5-oxo-ETE can induce GM-CSF release from monocytes, we measured its concentration following incubation of these cells with 5-oxo-ETE (1 µM) for different times (Fig. 6A). 5-Oxo-ETE strongly stimulated GM-CSF release from monocytes, with the maximal level of this cytokine being observed after 24 h (p < 0.01). The effects of concentrations of 5-oxo-ETE between 1 pM and 100 nM on GM-CSF release from monocytes are shown in Fig. 6B. The effect of 5-oxo-ETE was concentration-dependent, with significant increases occurring at subnanomolar concentrations. However, the maximal response was not reached within this concentration range. The effects of higher concentrations of 5-oxo-ETE are shown in Fig. 6B (inset). The greatest response (15.3 ± 2.9 versus 0.36 ± 0.14 pM GM-CSF in vehicle-treated cells) was observed at 10 µM 5-oxo-ETE, but it was not clear whether a plateau had been reached even at this concentration. The effects of LTB4 on GM-CSF release were also investigated, but only a limited number of concentrations could be tested because of the small numbers of monocytes available. Concentrations of LTB4 and 5-oxo-ETE in the picomolar range had similar effects on GM-CSF production, whereas 5-oxo-ETE induced a much greater response at higher concentrations (Fig. 6B).
To ensure that the concentrations of GM-CSF in the conditioned medium obtained from 5-oxo-ETE-treated monocytes were sufficient to prolong eosinophil survival, we incubated eosinophils with various concentrations of recombinant human GM-CSF and measured the numbers of surviving cells after 48 h. As shown in Fig. 6C, the EC50 value for the effect of GM-CSF on eosinophil survival was ~0.37 pM. The concentration of GM-CSF in the medium from monocytes cultured with 1 µM 5-oxo-ETE was 7.44 ± 1.19 pM, whereas the concentration in medium from vehicle-treated cells was 0.68 ± 0.15 pM (means of data from Fig. 6, A and B). However, as the conditioned medium from monocytes was diluted 20-fold in the eosinophil survival studies shown in Figs. 3-5, this would give final concentrations of ~0.37 and 0.034 pM GM-CSF (Fig. 6C, arrows) following the addition of aliquots of the conditioned media from 5-oxo-ETE- and vehicle-treated monocytes, respectively.
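The dilution argument in this paragraph is simple arithmetic: the measured GM-CSF concentration is divided by the 20-fold dilution factor and compared with the survival EC50. A short script reproducing the numbers from the text:

```python
# GM-CSF measured in monocyte-conditioned media (values from the text, in pM).
GMCSF_5OXOETE_PM = 7.44   # monocytes treated with 1 uM 5-oxo-ETE
GMCSF_VEHICLE_PM = 0.68   # vehicle-treated monocytes
DILUTION_FACTOR = 20      # conditioned medium diluted 20-fold into eosinophil cultures
EC50_SURVIVAL_PM = 0.37   # GM-CSF EC50 for eosinophil survival (Fig. 6C)

for label, measured in [("5-oxo-ETE", GMCSF_5OXOETE_PM), ("vehicle", GMCSF_VEHICLE_PM)]:
    final = measured / DILUTION_FACTOR
    print(f"{label}: {final:.3f} pM final ({final / EC50_SURVIVAL_PM:.2f} x EC50)")
# 5-oxo-ETE -> 0.372 pM (about 1 x EC50): sufficient to prolong survival
# vehicle   -> 0.034 pM (about 0.09 x EC50): expected to have little effect
```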
DISCUSSION
Our preliminary results suggested that 5-oxo-ETE is capable of prolonging the survival of eosinophils. However, the variability of this response led us to suspect that it may be indirect, possibly mediated by small numbers of contaminating cells. The complete absence of monocytes in the two eosinophil preparations that failed to respond to 5-oxo-ETE suggested that these cells might be required for its survival-enhancing effect. This was confirmed when we found that 5-oxo-ETE was unable to promote survival in preparations of eosinophils from which all of the monocytes had been removed by treatment with anti-CD14 antibody. The medium from purified monocytes incubated with 5-oxo-ETE for 24 h strongly stimulated eosinophil survival to nearly the same extent as IL-5, a well known eosinophil survival factor, indicating that this effect was mediated by the release of a soluble factor from monocytes. This effect was quite reproducible, as it was consistently observed using both eosinophils and monocytes from many different donors. As the most likely candidates for the soluble factor released by 5-oxo-ETE appeared to be GM-CSF and IL-5, we tested the effects of monoclonal antibodies against these two cytokines. Anti-GM-CSF antibody completely blocked the survival-enhancing effect of the conditioned medium from 5-oxo-ETE-treated monocytes, whereas anti-IL-5 antibody was without effect. Furthermore, 5-oxo-ETE was a potent stimulator of GM-CSF release from monocytes. Incubation of monocytes with 1 µM 5-oxo-ETE induced the release of GM-CSF in concentrations equivalent to its EC50 value after dilution in the medium used to culture eosinophils in survival studies. The base-line levels of GM-CSF in the medium from vehicle-treated monocytes were >10 times lower and would be expected to elicit only a small increase in eosinophil survival. These results provide compelling evidence that GM-CSF can account for most, if not all, of the eosinophil survival-promoting activity released from monocytes in response to 5-oxo-ETE.

[Fig. 5 legend fragment: the conditioned media (MCM) were incubated for 60 min at 37°C in the presence or absence of monoclonal antibody (20 µg/ml) against IL-5 or GM-CSF. Aliquots (10 µl) of the conditioned media were then incubated for 48 h with monocyte-depleted eosinophils from two different donors, and cell survival was determined by flow cytometry. The average percent survival for the two eosinophil preparations was calculated for each of the monocyte preparations, and n was taken to be equal to the number of monocyte donors (i.e. n = 4). ***, p < 0.001.]
The concentration-response curve for 5-oxo-ETE-induced GM-CSF production did not reach a plateau, and it was not clear whether the maximal response had been achieved, even at a concentration as high as 10 µM. This may be due to the metabolism of 5-oxo-ETE over the relatively long 24-h incubation period, which could have limited the response to lower concentrations of this substance. When present at higher concentrations, biologically effective levels of 5-oxo-ETE could have persisted for a much longer period of time, thus enhancing its effect on GM-CSF production and skewing the concentration-response curve. Alternatively, we cannot rule out the possibility that the effects of 5-oxo-ETE are mediated by its conversion to an active metabolite during the 24-h incubation period with monocytes. Murine macrophages convert 5-oxo-ETE to both ω-oxidation products (15) and 5-oxo-7-glutathionyl-8,11,14-eicosatrienoic acid, a glutathionyl conjugate, the formation of which is catalyzed by LTC4 synthase (29). 5-Oxo-7-glutathionyl-8,11,14-eicosatrienoic acid is known to stimulate migration and actin polymerization in human eosinophils, and it is not known whether it also has other effects on these cells (30). However, unlike murine macrophages, human monocytes do not appear to have high levels of ω-oxidation activity (14); and at least in short-term incubations (30 min), we have not detected significant amounts of any 5-oxo-ETE metabolites formed by these cells.
This is the first report that 5-oxo-ETE can stimulate the release of a cytokine from any target cell, as until now, it had only been shown to elicit fairly rapid responses such as actin polymerization, calcium mobilization, and adhesion molecule expression (16-18), which are associated with functional responses such as cell adhesion and migration (9). We also found that LTB4 can induce GM-CSF release from monocytes, but the maximal response was much lower than that elicited by 5-oxo-ETE. LTB4 has also been shown to stimulate monocytes to release interleukin-6 (31), but the present study appears to be the first direct evidence that a 5-lipoxygenase product can induce the release of GM-CSF. Antagonists of the cysLT1 receptor have been reported to inhibit GM-CSF release from antigen-stimulated mononuclear cells from asthmatics (32,33). However, the effect of LTD4 itself on these cells was not reported, and the precise mechanism of action of cysLT1 antagonists in this mixed cell population is not well understood.
5-Oxo-ETE has weak chemotactic effects on monocytes and enhances their response to MCP-1 (monocyte chemoattractant protein) and MCP-3 (34). It also induces actin polymerization in these cells, but, in contrast to eosinophils, does not elicit calcium mobilization (34). Macrophages have been shown to contain mRNA for the recently cloned 5-oxo-ETE receptor, although at a lower level than eosinophils and neutrophils (22), suggesting that this receptor may also be present in monocytes and may mediate the effects of 5-oxo-ETE on GM-CSF release from these cells.
5-Oxo-ETE occupies a distinct position among eosinophil chemoattractants in that it is a potent stimulus for cell migration and, through its effects on monocyte GM-CSF release, can also promote eosinophil survival. In contrast, the potent eosinophil chemoattractants eotaxin and RANTES do not prolong eosinophil survival (35), even in the presence of contaminating monocytes (data not shown). The lipid mediator platelet-activating factor, another eosinophil chemoattractant, also fails to promote eosinophil survival (36). Another eicosanoid with eosinophil chemoattractant activity is prostaglandin D2, which acts through the DP2 receptor/CRTH2 (37,38). However, this substance does not promote eosinophil survival, despite the fact that the selective DP1 receptor agonist BW245C does have this effect (39), presumably due to stimulation of adenylyl cyclase, an effect shared by prostaglandin E2 (40).

[Fig. 6. 5-Oxo-ETE induces GM-CSF release from monocytes. A, monocytes were incubated for different times at 37°C with either vehicle (control; open circles) or 1 µM 5-oxo-ETE (closed circles), and the amounts of GM-CSF in the media were determined by enzyme-linked immunosorbent assay as described under "Experimental Procedures." B, monocytes were incubated for 24 h at 37°C with either vehicle or different concentrations (1 pM to 100 nM) of either 5-oxo-ETE (closed circles) or LTB4 (open circles), and the amounts of GM-CSF in the media were measured. The inset shows the responses to higher concentrations (10 nM to 10 µM) of 5-oxo-ETE and LTB4. Because of limitations in the numbers of monocytes, only a limited number of concentrations of LTB4 were tested. C, purified eosinophils from three different donors were incubated for 48 h at 37°C with either vehicle or different concentrations (10 fM to 100 pM) of recombinant human GM-CSF, and survival was assessed following staining with FITC-labeled annexin V and propidium iodide. *, p < 0.05; **, p < 0.01; ***, p < 0.001.]
The present findings have significant implications regarding the potential role of 5-oxo-ETE in asthma, as GM-CSF is known to be very important for the survival of eosinophils once they have reached the lung (41). 5-Oxo-ETE is synthesized by both monocytes (14) and macrophages (15), which are prominent cells in the lung, and it could act on these cells to release GM-CSF. In this way, 5-oxo-ETE could both elicit the infiltration of eosinophils into the lung in concert with other chemoattractants such as eotaxin, with which it has a synergistic effect (42), and increase the lifetime of these eosinophils once they have entered the tissue. It is also possible that 5-oxo-ETE could promote the survival of other inflammatory cells by this mechanism because of the potent effects of GM-CSF on neutrophils and monocytes (43).
Another intriguing possibility is that GM-CSF, produced in response to 5-oxo-ETE, could act in a feed-forward loop to enhance both the production of and cellular responses to 5-oxo-ETE. GM-CSF has been shown to increase the production of 5-lipoxygenase products at multiple levels in neutrophils (44-46) and would be expected to stimulate 5-oxo-ETE synthesis in a similar fashion. Furthermore, GM-CSF substantially enhances the responsiveness of both eosinophils (17) and neutrophils (47) to 5-oxo-ETE.
In conclusion, 5-oxo-ETE can prolong the survival of eosinophils by stimulating monocytes to release the potent survival factor GM-CSF. Interaction of 5-oxo-ETE with monocytes/macrophages to release GM-CSF could have important implications in diseases such as asthma, as GM-CSF is known to be very important for the survival of eosinophils once they have reached the lung (41), in contrast to IL-5, which appears to be more important for mobilization of eosinophils from bone marrow (1). Furthermore, because of the potent effects of GM-CSF on other inflammatory cells, 5-oxo-ETE could also be implicated in other diseases such as arthritis and atherosclerosis, which are characterized by the accumulation of neutrophils (48) and monocytes (49) in joints and arteries, respectively. The potent effects of 5-oxo-ETE on the migration of eosinophils, neutrophils, and monocytes, along with its survival-enhancing effects, suggest that this substance may be an important mediator in a variety of inflammatory diseases.
Risk Management Plan and Pharmacovigilance System Biopharmaceuticals: Biosimilars
Introduction
This chapter addresses the safety monitoring of similar biological medicinal products (biosimilars) and describes the activities that should be developed in their risk minimisation plan. This is an issue that has aroused great interest with the recent expiration of biotech drug patents and the advent of biosimilar products on the market.
Risk management
A medicinal product is authorised on the basis that in the specified indication(s), at the time of authorisation, the risk-benefit is judged positive for the target population. However, not all actual or potential risks will have been identified when an initial authorisation is sought. In addition, there may be subsets of patients for whom the risk is greater than that for the target population as a whole. The management of a single risk can be considered as having four steps, risk detection, risk assessment, risk minimisation and risk communication, which are summarized in Table 1. However, a typical individual medicinal product will have multiple risks attached to it, and individual risks will vary in terms of severity, and individual patient and public health impact. Therefore, the concept of risk management should also consider the combination of information on multiple risks, with the aim of ensuring that the benefits exceed the risks by the greatest possible margin, both for the individual patient and at the population level. While Table 1 explains the management of a single risk, Figure 1 goes further and describes a complete risk management system, the so-called "Risk Management Plan" (EU-RMP), which contains two parts: pharmacovigilance and risk minimization. It covers how the safety of a product will be monitored and measured to reduce risk. This chapter focuses on the activities that should be developed in the risk minimisation plan to be applied to biopharmaceuticals and more specifically to biosimilars (medicines similar but not identical to a biological medicine, approved once the patent lifetime for the original biotherapeutic has expired). Biopharmaceuticals often exhibit safety issues such as immunotoxicity that may lead to a loss of efficacy and/or to side effects (Giezen et al., 2009).

The EU-RMP comprises the following parts (Table 2):

1. Safety specification:
• Summarizes important identified risks, important potential risks and important missing information, and addresses populations potentially at risk and outstanding safety questions.
• Helps identify needs for specific data collection and facilitates construction of a pharmacovigilance plan.
2. Pharmacovigilance plan:
• Describes pharmacovigilance activities (routine and additional) and action plans for each safety concern.
• Proposes actions to address identified safety concerns, complementing the procedures in place to detect safety signals.
3. Evaluation of the need for risk minimization activities:
• Discusses safety concerns, including the potential for medication errors, and the need for routine or additional risk minimization strategies.
• Assesses for each safety concern whether risk minimization strategies are needed beyond the pharmacovigilance action plans.
4. Risk minimization plan:
• Lists safety concerns for which risk minimization activities are proposed.
• Discusses the associated routine and additional risk minimization activities and the assessment of their effectiveness.
• Details risk minimization activities to reduce the risks associated with an individual safety concern.

Table 2. Structure of the EU-RMP
Risk identification and safety specification
This is a summary of the important identified risks of a medicinal product, important potential risks, and important missing information. It also addresses the populations potentially at risk and outstanding safety questions, which warrant further investigation to refine understanding of the benefit-risk profile during the post-authorisation period. Table 3 explains the different considerations to keep in mind when collecting safety data during the non-clinical and clinical development of biosimilar medicinal products. The safety issues identified in the safety specification should be based on the information related to the safety of the product included in the Common Technical Document (CTD), especially the overview of safety, the benefits and risks conclusions and the summary of clinical safety (Zúñiga & Calvo, 2010a). The safety specification can be a stand-alone document, usually in conjunction with the pharmacovigilance plan, but elements can also be incorporated into the CTD. Clinical safety of similar biological medicinal products must be monitored closely on an ongoing basis during the post-approval phase, including continued risk-benefit assessment. Even if the efficacy is shown to be comparable, the biosimilar product can exhibit a different safety profile in terms of nature, seriousness, or incidence of adverse reactions. The Marketing Authorisation Holder (MAH) should provide safety data prior to marketing authorisation, but also post-marketing, as possible differences might become evident later, even though comparability with regard to efficacy has been shown. It is important to compare adverse reactions in terms of type, severity and frequency between the biosimilar and the reference medicinal product. Attention should be paid to immunogenicity and potential rare serious adverse events, focusing on patients with chronic treatments. The risk management plans for biosimilars should focus on:
• Heightened pharmacovigilance measures
• Conduct antibody testing
• Implement special post-marketing surveillance

For the marketing authorisation application, a risk management program/pharmacovigilance plan is required. This includes a risk specification describing the possible safety issues caused by the differences (i.e. host cells, manufacturing, purification, excipients, etc.) between the biosimilar and the reference product.
Non-Clinical
• Non-clinical safety findings that have not been adequately addressed by clinical data
• Drug interactions
• Other toxicity-related information and data
If the product is intended for use in special populations, consideration should be given to whether specific non-clinical data needs exist.

Clinical
Limitations of the human safety database:
• Discussion of the implications of the database limitations with respect to predicting the safety of the product in the marketplace
• Reference to the populations likely to be exposed during the intended or expected use of the product in medical practice
• Discussion of the world-wide experience:
-The extent of the world-wide exposure
-Any new or different safety issues identified
-Any regulatory actions related to safety
• Detail the size of the study population using both numbers of patients and patient time exposed to the drug, stratified by relevant population categories
• Detail the frequencies of adverse drug reactions detectable given the size of the database
• Detail suspected long-term adverse reactions when it is unlikely that exposure data is of sufficient duration and latency
Populations not studied in the pre-authorisation phase:
• Discussion of which populations have not been studied or have only been studied to a limited degree in the pre-authorisation phase, and the implications of this with respect to predicting the safety of the product in the marketplace
Identified and potential risks:
• List the important identified and potential risks that require further characterization or evaluation
Identified risks (an untoward occurrence for which there is adequate evidence of an association with the medicinal product of interest):
• Include more detailed information on the most important identified adverse events/adverse reactions (serious, frequent and/or with an impact on the balance of benefits and risks of the medicinal product)
• Include evidence bearing on a causal relationship, severity, seriousness, frequency, reversibility and at-risk groups, if available
• Discussion of risk factors and potential mechanisms
Potential risks (an untoward occurrence for which there is some basis for suspicion of an association with the medicinal product of interest but where this association has not been confirmed):
• Description of important potential risks, with the evidence that led to the conclusion that there was such a risk
Identified and potential interactions, including food-drug and drug-drug interactions:
• Discussion of identified and potential pharmacokinetic and pharmacodynamic interactions
• Summary of the evidence supporting the interaction and the possible mechanism
• Discussion of the potential health risks posed for the different indications and in the different populations
• Statement listing the interactions that require further investigation
Epidemiology:
• Discussion of the epidemiology of the indications, including incidence, prevalence, mortality and relevant co-morbidity (taking into account stratification by age, sex and racial/ethnic origin)
• Discussion of the epidemiology in the different regions, with emphasis on Europe
• Review of the incidence rate of the important adverse events that require further investigation among patients in whom the medicinal product is indicated
• Information on risk factors for adverse events
Pharmacological class effects:
• Identify risks believed to be common to the pharmacological class (justify those risks common to the pharmacological class but not thought to be a safety concern)
Additional EU requirements:
• Discussion of the following topics:
-Potential for overdose
-Potential for transmission of infectious agents
-Potential for misuse for illegal purposes
-Potential for off-label use
-Potential for off-label paediatric use
Summary:
• Important identified risks
• Important potential risks
• Important missing information

Table 3. Elements of the risk identification and safety specification (EMA, 2006)
Pharmacovigilance plan
The pharmacovigilance plan should be based on the safety specification and propose actions to address the safety concerns identified (relevant identified risks, potential risks and missing information). An action plan model can be found in Table 4. Only a proportion of risks are likely to be foreseeable, and the pharmacovigilance plan will not replace, but rather complement, the procedures currently used to detect safety signals.
Safety concern / Planned action(s)
• Important identified risks: <list>
• Important potential risks: <list>
• Important missing information: <list>

Table 4. Summary of safety concerns and planned pharmacovigilance actions (EMA, 2006)

The plan can be discussed with regulators during product development, prior to approval of the new product or when safety concerns arise during the post-marketing period. It can be a stand-alone document, but elements could also be incorporated into the CTD (Table 5) (Zúñiga & Calvo, 2010b).
Routine pharmacovigilance:
• For medicinal products where no special concerns have arisen
Additional pharmacovigilance activities:
• For medicinal products with important identified risks, important potential risks or important missing information
• The activities will be different depending on the safety concern to be addressed

Table 5. Pharmacovigilance activities

The action plan for each safety concern should be presented and justified according to the following structure (a minimal sketch of such a record follows below):
• Safety concern
• Objective of proposed actions
• Rationale for proposed actions
• Monitoring by the MAH for the safety concern and proposed actions
• Milestones for evaluation and reporting
Protocols for any formal studies should be provided. Details of the monitoring for the safety concern in a clinical trial will include stopping rules, information on the drug safety monitoring board and when interim analyses will be carried out. The outcome of the proposed actions will be the basis for the decision-making process that needs to be explained in the EU-RMP. CHMP biosimilars guidelines emphasise the need for particular attention to pharmacovigilance, especially to detect rare but serious side effects. Important issues include:
• Pharmacovigilance systems should differentiate between originator and biosimilar products (so that effects of biosimilars are not lost in the background of reports on reference products).
• Ensure traceability (importance of the international nonproprietary name, INN).
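As a programmatic illustration of the action-plan structure above and the safety concern/planned actions pairing of Table 4, here is a minimal sketch of how one safety-concern record of an EU-RMP might be represented. The field names and the example values are illustrative only, not a regulatory schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SafetyConcern:
    """One EU-RMP safety concern with its planned pharmacovigilance actions
    (cf. Table 4). Illustrative structure only, not a regulatory schema."""
    name: str
    category: str                  # "identified risk", "potential risk" or "missing information"
    objective: str                 # objective of the proposed actions
    rationale: str                 # rationale for the proposed actions
    planned_actions: List[str] = field(default_factory=list)  # routine + additional
    milestones: List[str] = field(default_factory=list)       # evaluation/reporting points

# Example record for a hypothetical biosimilar, using the focus areas named above.
immunogenicity = SafetyConcern(
    name="Immunogenicity",
    category="potential risk",
    objective="Characterise the incidence and clinical impact of anti-drug antibodies",
    rationale="The biosimilar may differ from the reference product in host cells, "
              "manufacturing, purification and excipients",
    planned_actions=["Heightened (routine) pharmacovigilance", "Antibody testing",
                     "Special post-marketing surveillance"],
    milestones=["Interim analysis of the surveillance study", "PSUR submissions"],
)
```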
Evaluation of the need for risk minimisation activities
For each safety concern, the Applicant/Marketing Authorisation Holder should assess whether any risk minimisation activities are needed. Some safety concerns may be adequately addressed by the proposed actions in the Pharmacovigilance Plan, but for others the risk may be of such a nature and seriousness that risk minimisation activities are needed. It is possible that the risk minimisation activities may be limited to ensuring that suitable warnings are included in the product information or to the careful use of labelling and packaging, i.e. routine risk minimisation activities. If an Applicant/Marketing Authorisation Holder is of the opinion that no additional risk minimisation activities beyond these are warranted, this should be discussed and, where appropriate, supporting evidence provided. However, for some risks, routine risk minimisation activities will not be sufficient and additional risk minimisation activities will be necessary. If these are required, they should be described in the risk minimisation plan, which should be included in Part II of the EU-RMP.
Within the evaluation of the need for risk minimisation activities, the Applicant/Marketing Authorisation Holder should also address the potential for medication errors (some examples are listed in Table 6) and state how this has been reduced in the final design of the pharmaceutical form, product information, packaging and, where appropriate, device.

Table 6. Potential reasons for medication errors that the applicant needs to take into account

Applicants/Marketing Authorisation Holders should always consider the need for risk minimisation activities whenever the Safety Specification is updated in the light of new safety information on the medicinal product.
The risk minimization plan
The risk minimisation plan details the risk minimisation activities which will be taken to reduce the risks associated with an individual safety concern. When a risk minimisation plan is provided within an EU-RMP, it should include both routine and additional risk minimisation activities. A safety concern may have more than one risk minimisation activity attached to an objective. The risk minimisation plan should list the safety concerns for which risk minimisation activities are proposed, and the risk minimisation activities, both routine and additional, related to each safety concern should be discussed. In addition, for each proposed additional risk minimisation activity, a section should be included detailing how its effectiveness as a measure to reduce risk will be assessed. Table 7 shows how to approach the risk minimisation plan.
Postmarketing pharmacovigilance
MAHs should ensure that all information relevant to a medicinal product's balance of benefits and risks is fully and promptly reported to the Competent Authorities; for centrally authorised products, data should also be reported to the EMA. The MAH must have a qualified person responsible for pharmacovigilance available permanently and continuously.
Legal framework
The legal framework for pharmacovigilance of medicinal products for human use in the European Union (EU) is given in Regulation (EC) (EudraLex, 2004). The requirements explained in these guidelines are based on the International Conference on Harmonisation (ICH) guidelines but may be further specified or contain additional requests in line with Community legislation. The obligations concerned with the monitoring of adverse reactions occurring in clinical trials do not fall within the scope of pharmacovigilance activities. The legal framework for such obligations is Directive 2001/20/EC. However, Part III of Volume 9A deals with technical aspects relating to adverse reaction/event reporting for the pre- and post-authorisation phases. Pharmacovigilance activities are within the scope of quality, safety and efficacy criteria, because new information is accumulated on the normal use of medicinal products in the EU marketplace. Pharmacovigilance obligations apply to all authorised medicinal products, including those authorised before 1 January 1995 (Fruijtier, 2006), whatever procedure was used for their authorisation. At approval there is limited clinical experience, so accurate pharmacovigilance and correct attribution of adverse events are vital. Pharmacovigilance has been defined by the World Health Organization as the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other medicine-related problem (EudraLex, 2007). The three main goals of pharmacovigilance are:
• Protect the patients
• Protect the pharmaceutical company
• Comply with regulatory requirements
Pre-Authorisation Phase
Once an application for a marketing authorisation is submitted to the Agency, in the pre-authorisation phase, information relevant to the risk-benefit evaluation may become available from the Applicant, from Member States where the product is already in use on a compassionate basis, or from third countries where the product is already marketed. Since it is essential for this information to be included in the assessment carried out by the (Co-)Rapporteur(s) assessment teams, the Applicant is responsible for immediately informing the Agency and the (Co-)Rapporteur(s).
In the period between the CHMP reaching a final Opinion and the Commission Decision, there need to be procedures in place to deal with information relevant to the risk-benefit balance of centrally authorised products which was not known at the time of the Opinion. It is essential for this information to be sent to the Agency and (Co-)Rapporteur(s) so that it can be rapidly evaluated to an agreed timetable and considered by the Committee for Medicinal Products for Human Use (CHMP) to assess what impact, if any, it may have on the Opinion. The Opinion may need to be amended as a consequence. Table 8 shows the main aspects to be considered relating to biosimilar drug safety during the pre-authorisation and post-authorisation phases. The table highlights the additional reporting requirements for biosimilars when compared with general safety reporting.
Table 8. Reporting of adverse reactions and other safety-related information: general reporting (Scharinger, 2007) versus biosimilars reporting
Signal Identification
It is likely that many potential signals will emerge in the early stages of marketing, and it will be important for these to be effectively evaluated. A signal of possible unexpected hazards, or of changes in severity, characteristics or frequency of expected adverse effects, may be identified by:
• the Marketing Authorisation Holders;
• the Rapporteur;
• the Member States;
• the Agency in agreement with the Rapporteur.
It is the responsibility of each Member State to identify signals from information arising in their territory. However, it will be important for the Rapporteur and the Agency to have the totality of information on serious adverse reactions occurring inside and outside the EU in order to have an overall view of the experience gathered with the concerned centrally authorised product. As a matter of routine, the Rapporteur should continually evaluate the adverse reactions included in the EudraVigilance system and all other information relevant to the risk-benefit balance in the context of information already available on the product, to determine the emerging adverse reaction profile. Additional information should be requested from the Marketing Authorisation Holder and Member States as necessary, in liaison with the Agency. When a Member State other than the Rapporteur wishes to request information from the Marketing Authorisation Holder (apart from routine follow-up of cases occurring on their own territory) for the purposes of signal identification, the request should be made in agreement with the Rapporteur and the Agency. Member States will inform the Rapporteur(s) and the Agency when performing class reviews of safety issues which include centrally authorised products. The Pharmacovigilance Working Party (PhVWP) should regularly review emerging safety issues, which will be tracked through the Drug Monitor system.
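The chapter does not prescribe a statistical method for identifying signals from spontaneous reports such as those in EudraVigilance. One widely used disproportionality measure is the proportional reporting ratio (PRR), sketched below on a 2x2 table of hypothetical report counts:

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)], where
    a = reports of the event for the drug of interest,
    b = reports of other events for the drug of interest,
    c = reports of the event for all other drugs,
    d = reports of other events for all other drugs.
    Values well above 1 (with enough cases) flag a disproportionately
    reported drug-event pair for further evaluation."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts from a spontaneous-report database.
prr = proportional_reporting_ratio(a=30, b=970, c=500, d=99_500)
print(f"PRR = {prr:.1f}")  # 3.0% vs 0.5% reporting fraction -> PRR = 6.0
```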
Signal Evaluation
As signals of possible unexpected adverse reactions, or of changes in the severity, characteristics or frequency of expected adverse reactions, may emerge from many different sources of data (see above), the relevant information needs to be brought together for effective evaluation, over a time scale appropriate to the importance and likely impact of the signal. Irrespective of who identified the signal, a signal evaluation should be carried out by:
• the Rapporteur; or
• the Member State where the signal originated.
The Rapporteur should work closely with the identifier of the signal to evaluate the issue. Agreement needs to be reached in each case on the responsibility for the Assessment Report on the risk-benefit balance, by the Rapporteur or the Member State where the signal originated, or jointly. A Member State other than that of the Rapporteur should not start a full evaluation prior to having contacted the Agency and the Rapporteur, in order to prevent any unnecessary duplication of effort. At the request of the CHMP, the PhVWP evaluates signals arising from any source and keeps any potential safety issues under close monitoring.
Evaluation of Periodic Safety Update Reports
The Marketing Authorisation Holder is required to provide Periodic Safety Update Reports (PSURs) to all the Member States and the Agency. It is the responsibility of the Agency to ensure that the Marketing Authorisation Holder meets the deadlines. The Marketing Authorisation Holder should submit any consequential variations simultaneously with the PSUR at the time of its submission, in order to prevent any unnecessary duplication of effort. Variations may, however, also be requested subsequently by the Rapporteur, after agreement by the CHMP. It is the responsibility of the Rapporteur to evaluate and provide a report in accordance with the agreed timetable and to determine what issues if any need to be referred to the PhVWP and CHMP. Actions required following the evaluation of a PSUR will be determined by the Rapporteur and the Marketing Authorisation Holder will be informed by the Agency, after agreement by the CHMP. Where changes to the marketing authorisation are required, the CHMP will adopt an Opinion which will be forwarded to the European Commission for preparation of a Decision (Ebbers et al., 2010).
Evaluation of Post-Authorisation Studies, Worldwide Literature and Other Information
Final and interim reports of Marketing Authorisation Holder sponsored post-authorisation studies and any other studies, and other relevant information, may emerge from the Marketing Authorisation Holder, the Member States or other countries at times in between PSURs. The Rapporteur should receive and assess any relevant information and provide an Assessment Report where necessary. As above, the Rapporteur should determine what issues if any need to be referred to the PhVWP and CHMP. The actions required following an evaluation will be determined by the Rapporteur and the Marketing Authorisation Holder will be informed by the Agency, after agreement by the CHMP. Where changes to the marketing authorisation are required, the CHMP will adopt an Opinion which will be forwarded to the European Commission for preparation of a Decision. The Marketing Authorisation Holder should submit any consequential variations simultaneously with the data, in order to prevent any unnecessary duplication of effort. Variations may, however, also be requested subsequently by the Rapporteur, after agreement by the CHMP.
Evaluation of Post-Authorisation Commitments
It is the responsibility of the Agency to ensure that the Marketing Authorisation Holder meets the deadlines for the fulfilment of specific obligations and follow-up measures, and that the information provided is available to the Rapporteur and the CHMP. The Marketing Authorisation Holder should submit any consequential variations simultaneously with the requested information for the fulfilment of specific obligations/follow-up measures, in order to prevent any unnecessary duplication of effort. Variations may, however, also be requested subsequently by the Rapporteur, after agreement by the CHMP. For marketing authorisations granted under exceptional circumstances, specific obligations will be set out in Annex II.C of the CHMP Opinion. Specific obligations should be reviewed by the Rapporteur, at the interval indicated in the Marketing Authorisation and at the longest annually, and should be subsequently agreed by the CHMP. As above, the Rapporteur should determine what issues if any need to be referred to the PhVWP and CHMP. For marketing authorisations granted under exceptional circumstances, the annual review will include a re-assessment of the risk-benefit balance. The annual review will in all cases lead to the adoption of an Opinion which will be forwarded to the European Commission for preparation of a Decision. For all marketing authorisations (whether or not the authorisation is granted under exceptional circumstances) follow-up measures may be established, which are annexed to the CHMP Assessment Report. These will be reviewed by the Rapporteur, and will be considered by PhVWP and CHMP at the Rapporteur's request. Where changes to the marketing authorisation are required, the CHMP will adopt an Opinion which will be forwarded to the European Commission for preparation of a Decision. In the case of non-fulfilment of specific obligations or follow-up measures, the CHMP will have to consider the possibility of recommending a variation, suspension, or withdrawal of the marketing authorisation. Table 9 shows the Omnitrope® Risk Management Plan Summary published by EMA.
Table 9. Omnitrope® Risk Management Plan summary (EMA): safety issues and proposed pharmacovigilance activities
Safety Concerns in the Pre-Authorisation Phase
Following the receipt of Individual Case Safety Reports or other information relevant to the risk-benefit balance of a product by the Agency and the (Co-)Rapporteur(s), the latter should assess these pharmacovigilance data. The outcome of the evaluation should be discussed at the CHMP for consideration in the Opinion. If pharmacovigilance findings emerge following an Opinion but prior to the Decision, a revised Opinion, if appropriate, should be immediately forwarded to the European Commission to be taken into account before preparation of a Decision.
Safety Concerns in the Post-Authorisation Phase
A Drug Monitor, including centrally authorised products, is in place as a tracking system for safety concerns and is reviewed on a regular basis by the PhVWP at its meetings. This summary document also records relevant actions that have emerged from PSURs, specific obligations, follow-up measures and safety variations. Following the identification of a signal, the relevant information needs to be brought together for effective evaluation, over a time scale appropriate to the importance and likely impact of the signal:
• Non-urgent safety concerns
• Urgent safety concerns
Information to healthcare professionals and the public
The management of the risks associated with the use of biosimilars demands close and effective collaboration between the key players in the field of pharmacovigilance. Sustained commitment to such collaboration is vital if the future challenges in pharmacovigilance are to be met. Those responsible must jointly anticipate, describe and respond to the continually increasing demands and expectations of the public, health administrators, policy officials, politicians and health professionals. However, there is little prospect of this happening in the absence of sound and comprehensive systems for biosimilars which make such collaboration possible. Understanding and tackling these issues is an essential prerequisite for the future development of biosimilars. Healthcare professionals (and the public, if applicable) need to be informed consistently in all Member States about safety issues relevant to centrally authorised biosimilars, in addition to the information provided in the Product Information. If there is such a requirement, the Rapporteur, or the Marketing Authorisation Holder in cooperation with the Rapporteur, should propose the content of the information for consideration by the PhVWP and subsequent discussion and adoption by the CHMP. The agreed information may be distributed in Member States. The text and timing for release of such information should be agreed by all parties prior to despatch. The Marketing Authorisation Holder should notify the Agency, on its own initiative and at an early stage, of any information it intends to make public, in order to facilitate consideration by the PhVWP and adoption by the CHMP, as well as agreement about the timing of release, in accordance with the degree of urgency. Marketing Authorisation Holders are reminded of their legal obligations under Article 24(5) of Regulation (EC) No 726/2004 not to communicate information relating to pharmacovigilance concerns to the public without notification to the Competent Authorities/Agency (European Commission, 2004).
Evaluation of drugs used in chronic heart failure at tertiary care centre: a hospital based study
Introduction: There is a lack of data on the pattern of use of drugs in patients with chronic heart failure (CHF) in the Nepalese population. This study was conducted to explore the trends of evidence-based medications used for CHF in our population.
Methods: This is a cross-sectional study of 200 consecutive patients with New York Heart Association (NYHA) class II to IV symptoms of CHF who attended the cardiology clinic or were admitted from September 2017 to August 2018 at Nobel Medical College Teaching Hospital, Biratnagar, Nepal.
Results: The mean age of patients was 54 (range 15-90) years. Ischemic cardiomyopathy, rheumatic heart disease, dilated cardiomyopathy, hypertensive heart disease and peripartum cardiomyopathy were common etiologies of CHF. Analysis of drugs used in CHF revealed that 85% of patients were prescribed diuretics, 58.5% angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs), 53% mineralocorticoid receptor antagonists (MRAs), 38% beta-blockers (BBs) and 24% digoxin. Digoxin was mainly used as add-on therapy for patients with atrial fibrillation (24% of all patients). Antithrombotics (warfarin or aspirin), inotropic agents (dopamine, dobutamine or noradrenaline), an antiarrhythmic agent (amiodarone) and nitrates (intravenous glyceryl trinitrate or oral isosorbide dinitrate) were prescribed for 48%, 28%, 5% and 6% of patients, respectively. All CHF patients with preserved or mid-range ejection fraction (25% of all patients) were prescribed diuretics, along with antihypertensive drugs for hypertensive patients.
Conclusion: CHF is associated with significant morbidity and mortality due to associated co-morbidities and underuse of proven therapy such as BBs, ACEIs or ARBs, and MRAs. Careful attention to the optimization of drug therapy in patients with CHF may help to improve patient outcomes.
Introduction
Chronic heart failure (CHF) is a common cardiac problem with significant morbidity and mortality. It is a clinical syndrome caused by various cardiac conditions that impair the ability of the ventricle to fill with or eject blood. 1 The goals of treatment of CHF are the reduction of symptoms, minimization of the number of hospitalizations and prevention of premature death. The mainstay of treatment is lifestyle modification and pharmacologic therapy. Implantable devices and surgery may be needed in selected patients. Angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs), beta-blockers (BBs) and mineralocorticoid receptor antagonists (MRAs) have been shown to improve the clinical condition and survival of patients with CHF with reduced ejection fraction (HFrEF). 2 Diuretic therapy helps to reduce cardiac load, with improvement in left ventricular (LV) function. 3 Mortality and morbidity after symptomatic CHF have remained high, although variable, which could be due to differences in the severity of disease and in the appropriate use of evidence-based treatment. The number of patients with CHF is increasing in low-income countries like Nepal, as a result of the adoption of western-type lifestyles (leading to an increased number of risk factors), aging of the population and a still-high burden of rheumatic heart disease. Due to the lack of data on the evidence-based treatment used in CHF, this study was conducted to explore the trends of medications used for CHF in the Nepalese population.
Materials and Methods
This is a cross-sectional hospital-based study. A total of 200 consecutive patients with a diagnosis of CHF who attended the cardiac clinic or were admitted from September 2017 to August 2018 at the cardiology unit, department of internal medicine, of Nobel Medical College Teaching Hospital, Nepal, were included in the study. The aim was to evaluate the current trends in the use of evidence-based drugs in patients with CHF. All patients with a diagnosis of CHF with reduced or preserved ejection fraction, based on the Framingham criteria and echocardiographic assessment, were included. Patients with CHF were categorized as having heart failure with preserved ejection fraction (HFpEF) (EF ≥50%), heart failure with reduced ejection fraction (HFrEF) (EF <40%), or heart failure with mid-range ejection fraction (HFmrEF) (EF 40%-49%), based on the recent European Society of Cardiology guidelines. Inclusion criteria were: 1. Age ≥15 years, 2. Patients attending the cardiac clinic with a diagnosis of CHF, 3. Patients admitted to the cardiology unit with a diagnosis of acute decompensation of heart failure (ADHF). Exclusion criteria were: (1) Acute de novo heart failure after acute coronary syndrome or acute myocarditis, (2) Asymptomatic patients with echocardiographic evidence of LV dysfunction. Demographic and clinical variables were noted during enrollment, including age, gender, underlying etiology of CHF, co-morbidities, and medications used by the patient. The serum electrolytes, renal function tests, and electrocardiographic and echocardiographic parameters were reviewed from the case records of patients. Continuous and categorical variables were presented as mean, percentage and interquartile range wherever necessary. Tabular presentations were made for the necessary variables.
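The ejection-fraction cutoffs above translate directly into a small classification rule. A minimal sketch (illustrative code, not part of the study) applying the ESC categories used here:

```python
def classify_heart_failure(ef_percent: float) -> str:
    """Classify CHF by left-ventricular ejection fraction (EF) using the
    ESC cutoffs applied in this study."""
    if ef_percent < 40:
        return "HFrEF"   # reduced ejection fraction (EF < 40%)
    elif ef_percent < 50:
        return "HFmrEF"  # mid-range ejection fraction (EF 40%-49%)
    else:
        return "HFpEF"   # preserved ejection fraction (EF >= 50%)

assert classify_heart_failure(35) == "HFrEF"
assert classify_heart_failure(45) == "HFmrEF"
assert classify_heart_failure(55) == "HFpEF"
```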
Results
Two hundred consecutive patients with a diagnosis of CHF were included in the study. There were 110 (55%) women and 90 (45%) men. The mean age of patients was 54 (range 15-90) years. Seventy-four (37%) patients were cigarette smokers and 28 (14%) had a significant history of alcohol consumption. About one third, 64 (32%), of patients were hypertensive and 40 (20%) were diagnosed to have diabetes mellitus. Baseline characteristics of patients with CHF are summarized in Table 1. Among the underlying co-morbidities, 120 (60%) had anemia, 60 (30%) had underlying coronary artery disease, 30 (15%) had acute kidney injury, 20 (10%) had chronic kidney disease, and 10 had chronic obstructive pulmonary disease.

Abbreviations: NYHA, New York Heart Association; BPM, beats per minute; ADHF, acute decompensated heart failure; HFrEF, heart failure with reduced ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFmrEF, heart failure with a mid-range ejection fraction.
Discussion
This study evaluated the patterns of use of different evidence-based therapies in patients with CHF. There have been advances in the drug treatment of CHF over the past decades. The aims of CHF treatment are the reduction of symptoms and improvement of survival. Treatment should be focused on minimizing the detrimental effects of neurohormonal compensatory mechanisms. Sodium and water retention is common in CHF, and diuretics are prescribed for patients with pulmonary or peripheral congestion. 4 Diuretic therapy results in favorable effects, with a reduction in cardiac preload and afterload and improvement in LV function. 5 Eighty-five percent of our patients were prescribed diuretics at the time of admission, except those who had hypotension or cardiogenic shock requiring inotropic support. The majority received loop diuretics. ACEIs or ARBs, BBs, and MRAs have been documented to improve the clinical status and survival of patients with CHF. 2 The proven effects of ACE inhibition in prolonging survival support the use of these agents as first-line therapy in the management of CHF. 6 Only 58.8% of our patients were prescribed ACEIs or ARBs, owing to a lower range of blood pressure or fear of worsening renal function. Similarly, only 38% of patients were started on BBs, due to fear of decompensation or failure to achieve a euvolemic state. We noted underuse of disease-modifying drugs such as BBs and ACEIs or ARBs, and no prescription of combined hydralazine and isosorbide, probably due to the unavailability of this drug in our country. The use of BBs and ACEIs or ARBs has nevertheless improved compared with previous years, as a study done in the Kathmandu valley by Baskota et al 7 in 2006 showed even more restricted use of these therapies. Serum aldosterone levels are found to be elevated in patients on ACE inhibitors and may contribute to the worsening of HF. Spironolactone is a competitive antagonist of aldosterone and shows beneficial effects in patients already treated with an ACE inhibitor. 8 In our study, the use of spironolactone was more liberal, since 53% of our patients were prescribed spironolactone. The Digitalis Investigation Group showed no mortality benefit of digoxin in CHF. 9 However, a digoxin withdrawal study has shown that digoxin improves exercise capacity and reduces the need for hospital admission. 10 Digoxin has been prescribed commonly to control the ventricular rate in patients with CHF and AF. It would be reasonable to use digoxin in symptomatic patients despite adequate doses of diuretics, ACE inhibitors, and BBs. In our study, digoxin was used mainly in patients with AF (24%) and CHF as add-on therapy, without monitoring of serum levels. Based on the pattern of dysfunction (HFrEF, HFmrEF or HFpEF), there were differences in the frequency of use of ACE inhibitors/ARBs, BBs and MRAs, being higher in HFrEF. This can be explained by the presence of good evidence supporting the efficacy of these drugs in patients with HFrEF.
No good evidence exists for the benefit of diuretics, ACE inhibitors, 11 BBs, MRAs or calcium antagonists in patients with HFpEF. Diuretics are often used to reduce and prevent fluid overload. Similarly, all of our HFpEF patients were prescribed loop diuretic agents, along with antihypertensive drugs for hypertensive patients. HFmrEF might be managed in the same way as HFpEF because of the limited evidence on it. 12 Our small number of patients with HFmrEF were managed in the same way as those with HFpEF, although some studies have demonstrated that BBs or ACEIs/ARBs significantly improve the prognosis for patients with HFmrEF. 13,14 The prevalence of AF in patients with HF ranges from 10 to 30% 15 and has been observed to increase with increasing severity of HF. 16 Twenty-four percent of our patients with CHF had AF. HF increases the risk of thromboembolism in patients with AF. Thus, antithrombotic therapy with either aspirin or the vitamin K antagonist warfarin is needed. The choice is made based on the presence of risk factors for thromboembolism, the risk of bleeding manifestations and patient preferences. 17 In our study, only around half (54%) of all eligible patients had been prescribed warfarin, indicating marked underuse of anticoagulation, which might be due to the difficulty of monitoring the monthly INR or reluctance on the part of the physician. Some of the possible reasons for the underuse of standard medications for CHF in our study may include poor awareness among physicians, the high cost of multidrug therapy and the late presentation and severity of CHF, which may prohibit the use of all proven therapies. Many physicians are still not comfortable commencing BBs in severely ill patients with CHF. Acute decompensated heart failure (ADHF) is a common problem with limited treatment options. There is a lack of consistent benefit with diuretics, vasodilators, and inotropes for ADHF, although they are used frequently to stabilize the acute symptoms. 18 Around 20% of our patients who presented with ADHF were prescribed different inotropes or nitrates in an attempt at initial stabilization.
There are some limitations to this study. This was a hospital-based study with a limited number of patients, who mainly represent severely symptomatic patients. The diagnosis of ischemic heart disease was based on history, risk factors and wall motion abnormality on echocardiography, and may not be accurate because coronary angiography was not done in all cases. CHF is a common problem and is associated with significant morbidity and mortality because of associated co-morbidities and underuse of proven therapy such as BBs, ACEIs or ARBs, and MRAs. Diuretics are necessary for patients with evidence of pulmonary or peripheral congestion. If tolerated, ACEIs and BBs reduce mortality and can prevent the progression of symptoms and the need for hospitalizations. Low-dose MRAs improve survival in patients with CHF with ongoing symptoms despite standard therapy. Careful attention to the optimization of drug therapy in patients with CHF may help to improve patient outcomes.
Ethical approval
Ethical approval was obtained from the institutional review committee (IRC) prior to starting the study (IRC NMCTH 205/2018).
C. elegans as an in vivo model system for the phenotypic drug discovery for treating paraquat poisoning
Background Paraquat (PQ) is an effective and widely used herbicide and causes numerous fatalities by accidental or voluntary ingestion. However, neither the final cytotoxic mechanism nor effective treatments for PQ poisoning have been discovered. Phenotypic drug discovery (PDD), which does not rely on the molecular mechanism of the disease, is having a renaissance in recent years owing to its potential to address the incompletely understood complexity of diseases. Herein, a C. elegans PDD model was established to pave the way for the future phenotypic discovery of potential agents for treating PQ poisoning. Methods C. elegans were treated with PQ-containing solid medium, followed by statistical analysis of worm survival, pharyngeal pumping, and movement ability. Furthermore, coenzyme Q10 (CoQ10) was used to test the C. elegans model of PQ poisoning by measuring the levels of reactive oxygen species (ROS) and malondialdehyde (MDA), mitochondrial morphology, and worm survival rate. Additionally, we used the classic mice model of PQ intoxication to evaluate the validity of the C. elegans model of PQ poisoning by measuring the effect of CoQ10 as a potential antidote for PQ poisoning. Results In the C. elegans model of PQ poisoning, 5 mg/mL PQ increased ROS levels, MDA content, and mitochondrial fragmentation, and significantly shortened the lifespan, while CoQ10 alleviated these phenotypes. In the mice model of PQ poisoning, CoQ10 increased the chance of survival of PQ-poisoned mice while reducing ROS and MDA content in lung tissue and inhibiting PQ-induced lung edema. Moreover, CoQ10 alleviated the lung morphopathological changes induced by PQ. Conclusion Here we established a C. elegans model of PQ poisoning, whose validity was confirmed by the classic mice model of PQ intoxication.
INTRODUCTION
Paraquat (PQ) is known as one of the most broadly used herbicides in the world (Diaz Kirmser et al., 2010). The high toxicity of PQ, together with its widespread use and ready accessibility, results in thousands of deaths each year by both accidental and deliberate self-poisoning (Dinis-Oliveira et al., 2008), among which intentional suicide deaths currently predominate, especially in developing countries. The lung is the main target organ of PQ intoxication due to its active polyamine uptake transport systems, which concentrate PQ rapidly into alveolar epithelial cells (Cappelletti, Maggioni & Maci, 1998; Dunbar et al., 1988). However, the mechanism of PQ-induced lung toxicity has not been fully understood (Dinis-Oliveira et al., 2008). The main potential mechanism is that PQ results in mitochondrial dysfunction and the production of reactive oxygen species (ROS) (Zhao et al., 2017). PQ acts by redox cycling, which affects the electron transport chain function and increases the production of superoxide in the mitochondria, disrupting the synthesis of pulmonary surfactants and damaging vital cellular constituents (Tampo, Tsukamoto & Yonaha, 1999). Intensive studies of PQ toxicology and drug discovery for treating PQ poisoning have been performed (Liu et al., 2019; Shen et al., 2017; Zhang et al., 2019). However, no clinically approved antidote for PQ poisoning has been found.
CoQ10 is a lipid-soluble cofactor that acts as an electron-transfer carrier and is naturally synthesized by mammals and plants. CoQ10 links basic aspects of energy metabolism and antioxidant protection. This naturally occurring compound plays a major role in cellular metabolism since it contributes to oxidative phosphorylation by mediating electron transfer between Complexes I/II and Complex III in the mitochondrial inner membrane (Mantle & Dybring, 2020). Beyond that, CoQ10 has fundamental properties that make it an antioxidant or free radical scavenger and confer its potential benefit in a variety of clinical situations (Mantle & Dybring, 2020;Rabanal-Ruiz, Llanos-Gonzalez & Alcain, 2021).
Target-based drug discovery has been the main method of choice in both academic translational research centers and the pharmaceutical industry in the past two decades (Sams-Dodd, 2007). However, phenotypic drug discovery (PDD), which does not rely on the identity of a specific drug target or a hypothesis about its role in disease, is having a renaissance in recent years owing to its potential to address the incompletely understood complexity of diseases and the promise of delivering first-in-class drugs (Moffat et al., 2017; Zheng, Thorne & McKew, 2013). The mechanisms of PQ toxicity are complicated, as several mechanisms may occur simultaneously and are likely to be synergistic (Gawarammana & Buckley, 2011; Suntres, 2002). Therefore, PDD would be a more suitable strategy for the discovery of antidotes for PQ poisoning. For more than half a century, C. elegans has been extensively used as a genetic model organism (Honnen, 2017). C. elegans has emerged as an intact in vivo system for PDD in the past decade (Carretero, Solis & Petrascheck, 2017), owing to the conservation of cellular processes across species, small size (∼1 mm in length), rapid replication cycle (∼3 days), ability to produce ∼300 offspring in ∼3 days, low husbandry costs, ease of growing, maintenance, and manipulation, simple screening assays, and amenability to high-throughput and high-content screening methodology (Leung et al., 2013; Maurer et al., 2015; Rangaraju, Solis & Petrascheck, 2015). Herein we aimed to establish a C. elegans model of PQ poisoning. Additionally, the classic mice model of PQ intoxication was used to evaluate the validity of the C. elegans model of PQ poisoning by measuring the effect of CoQ10 as a potential antidote for PQ poisoning.
Synchronization of worms
To synchronize worms, day 1 adult worms were allowed to lay eggs for 4-6 h on bacteria-seeded NGM plates and were subsequently removed from the plates. The remaining eggs were cultured to obtain a synchronous population.
Establishing the C. elegans model of PQ poisoning
Synchronized wild-type C. elegans (N2) at the L4 stage were grown on floxuridine (FUDR)-added NGM/OP50 plates for 24 h. Worms were then transferred to PQ (0, 5, 10, 20 mg/mL)-containing NGM/OP50 plates for 3 h. Worms were next transferred to new NGM/OP50 plates containing FUDR. The mortality rate, locomotion behavior, and pharyngeal pumping of the worms were determined immediately. Locomotion behavior was quantified by monitoring body thrashing for 1 min; a movement in which the worm swings its head and/or tail to the same side was counted as one thrash. The pharyngeal pumping assay was evaluated by counting the number of pharyngeal contractions in 30 s. The survival of nematodes transferred to new NGM/OP50 plates after exposure to paraquat (5 mg/mL) was checked every day with a soft touch at the tip of the pharynx of the worms. Each assay was repeated three times.
C. elegans lifespan measurement
Synchronized N2 worms at the L4 stage were plated on FUDR-added NGM/OP50 plates for 24 h. Worms were then transferred to PQ (5 mg/mL)-containing solid plates for 3 h. Worms were next transferred to FUDR-added solid plates containing various concentrations of coenzyme Q10 (CoQ10, Fig. S1) (Energy Chemical, Shanghai, China) or without CoQ10 and incubated at 20 °C. The worm population was transferred every day during the lifespan assay, and animals were scored for survival. CoQ10 was dissolved in Tween 80 (0.1%) + glycerol (0.1%) solution. The control population was treated with Tween 80 (0.1%) + glycerol (0.1%) solution.

C. elegans ROS assay

H2DCF-DA (Sigma-Aldrich, St. Louis, MO, USA) was used as the fluorescence probe to measure ROS in C. elegans, based on the formation of highly fluorescent 2′,7′-dichlorofluorescein (DCF) from nonfluorescent H2DCF-DA upon reaction with ROS. Synchronized N2 worms at the L4 stage were plated on FUDR-added NGM/OP50 plates for 24 h. Worms were then transferred to PQ (5 mg/mL)-containing solid plates for 3 h, and next transferred to FUDR-added solid plates containing 1.8 mg/mL CoQ10 or without CoQ10 and incubated at 20 °C. On day 5, ROS levels were measured by H2DCF-DA staining. For imaging detection, the worms were washed with M9 buffer 3 times and then stained with 50 µM H2DCF-DA for 1 h. After three washes with M9 buffer, worms were imaged with a Nikon TS2-FL fluorescence microscope. The fluorescence intensity of the images was quantified using ImageJ software (http://rsb.info.nih.gov/ij/). The values in the experimental groups were normalized to the control group and subjected to analysis of relative fluorescence intensity. For each condition, up to 30 worms were observed and imaged. The assay was repeated in three independent experiments.
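As a concrete illustration of the normalization step just described, the short sketch below computes a group mean relative to the control mean; it is a minimal example assuming per-worm intensity values already exported from ImageJ (the arrays and numbers are hypothetical, not data from the study).

```python
import numpy as np

def relative_fluorescence(experimental, control):
    """Mean per-worm DCF intensity, normalized to the control group mean.

    `experimental` and `control` are 1-D arrays of background-corrected
    fluorescence intensities, one value per imaged worm (hypothetical
    inputs exported from ImageJ measurements).
    """
    return np.mean(experimental) / np.mean(control)

# Example with made-up numbers: PQ-treated worms show ~2.4x the control signal.
ctrl = np.array([100.0, 110.0, 95.0])
pq = np.array([240.0, 260.0, 220.0])
print(relative_fluorescence(pq, ctrl))  # ~2.4
```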
C. elegans mitochondrial imaging
C. elegans strain SJ4103, expressing mitochondrial-targeted green fluorescent protein (GFP) driven by the muscle-specific myo-3 promoter, was used for mitochondrial imaging. Quantitative analysis of mitochondrial morphology in worms was performed by measuring mitochondrial area. Synchronized SJ4103 worms at the L4 stage were plated on FUDR-added NGM/OP50 plates for 24 h. Worms were then transferred to PQ (5 mg/mL)-containing solid plates for 3 h, and next transferred to FUDR-added solid plates containing 1.8 mg/mL CoQ10 or without CoQ10 and incubated at 20 °C. On day 5, worms were immobilized with levamisole before mounting on 2% agarose pads for microscopic examination with a Nikon TS2-FL fluorescence microscope. All snapshots were taken from the same part of C. elegans: muscle cells from the upper part of the worm. Mitochondria were segmented from the background by setting a pixel intensity threshold. The mitochondrial area was measured using ImageJ software (http://rsb.info.nih.gov/ij/). For each condition, up to 15 worms were observed and imaged. The values in the experimental groups were normalized to the control group. This experiment was repeated three times.
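Threshold-based segmentation and per-object area measurement of this kind can also be prototyped outside ImageJ. The sketch below is a minimal illustration using scikit-image; Otsu thresholding is one reasonable automatic choice of pixel intensity threshold, not necessarily the exact threshold used in the study, and the circularity metric is a common fragmentation readout rather than the paper's stated measure.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def mitochondrial_metrics(img):
    """Segment mitochondria from background and summarize their morphology.

    `img` is a 2-D grayscale fluorescence image (e.g., the myo-3::GFPmit
    signal). Returns the total segmented area in pixels and the mean
    circularity of the segmented objects.
    """
    mask = img > threshold_otsu(img)      # pixel intensity threshold
    regions = regionprops(label(mask))    # one entry per mitochondrial object
    total_area = sum(r.area for r in regions)
    # Circularity = 4*pi*area / perimeter^2 (1.0 for a perfect circle);
    # fragmented mitochondria show more, rounder objects.
    circ = [4.0 * np.pi * r.area / r.perimeter ** 2
            for r in regions if r.perimeter > 0]
    return total_area, (float(np.mean(circ)) if circ else 0.0)
```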
C. elegans malondialdehyde (MDA) measurements
Synchronized N2 worms at the L4 stage were plated on FUDR-added NGM/OP50 plates for 24 h. Worms were then transferred to PQ (5 mg/mL)-containing solid plates for 3 h, and next transferred to FUDR-added solid plates containing 1.8 mg/mL CoQ10 or without CoQ10 and incubated at 20 °C. On day 5, the pretreated worms were sonicated using an Ultrasonics Processor KBS-900 (Kunshan Ultrasonic Instrument Co., Shanghai, China) with a 2-s pulse on and 4-s pulse off at 4 °C for 2 min. The MDA (Fig. S1) content was measured by assay kits (Leagene Biotechnology, Beijing, China) according to the manufacturer's instructions. The MDA content was normalized to the protein concentration. The values in the experimental groups were normalized to the control group to analyze the relative MDA content. This experiment was repeated three times.
In vivo mice study
All the mice used in our experiments were male. SPF C57BL/6 mice (6-7 weeks old) were obtained from Liaoning Changsheng Biotechnology Co., Ltd. The mice were raised at 22-24 °C with a relative humidity of 50-70% under 12 h light-dark illumination. The animals were housed in plastic cages (maximum 5 per cage) and fed a standard diet and drinking water. All procedures involving mice were in compliance with the Regulations for the Administration of Affairs Concerning Experimental Animals in China. Protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of Changchun Institute of Applied Chemistry, Chinese Academy of Sciences (CIAC2020-84). Note that the study was exploratory and not preregistered. Neither randomization nor blinding methods were used for the selection of animals. The sample sizes were estimated based on prior experience with variability and the requirements to identify significant differences. In all studies, no mice were excluded from analyses.
Animals were observed daily throughout the course of the experiment and defined criteria for premature euthanasia (loss of body weight >20%, loss of ability to ambulate (inability to access food or water), unconsciousness with no response to external stimuli) were closely followed. Adequate measures were taken to minimize the pain of experimental animals. After the experiment, the surviving mice were euthanized with isoflurane.
PQ-induced lung injury analysis
In total, 24 mice were randomly divided into groups (n = 6 each) and administered PQ (80 mg/kg) and CoQ10 (0, 62.5 and 125 mg/kg) as described in the above survival assay. After 48 h of PQ exposure, surviving mice were anesthetized, and lung lobe samples were collected surgically. The left lung lobe tissues were homogenized, and the supernatants were collected for measuring MDA. The right lower lung lobes were homogenized, and the supernatants were collected for measuring ROS. MDA and ROS measurements were performed with assay kits (Nanjing JianCheng Bioengineering Institute, Nanjing, China) according to the manufacturer's instructions. The right middle lung lobe of each mouse was weighed and then dried for 48 h in a constant-temperature oven at 70 °C to obtain the dry weight. Lung wet/dry weight ratio = (lung wet weight / lung dry weight) × 100%. The values in the experimental groups were normalized to their respective control group and subjected to analysis.
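The wet/dry ratio defined above is simple arithmetic; the following one-function sketch (with made-up example weights, not data from the study) shows the computation explicitly.

```python
def lung_wet_dry_ratio(wet_weight_mg: float, dry_weight_mg: float) -> float:
    """Wet/dry weight ratio, in percent; higher values indicate pulmonary edema."""
    return wet_weight_mg / dry_weight_mg * 100.0

# E.g., a hypothetical lobe weighing 120 mg wet and 25 mg after 48 h of drying:
print(lung_wet_dry_ratio(120.0, 25.0))  # 480.0 (%)
```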
Right upper lung lobe fixation and HE staining were carried out using standard procedures, as described (Chen et al., 2013). Slides (n = 3) were scanned at 400× magnification. At least two nonconsecutive slides per block were used to analyze whether inflammation was present.
Statistical analysis
The data were analyzed using GraphPad Prism 5 software (GraphPad Software, La Jolla, CA, USA). The comparisons of the mean values of the analyzed parameters were performed using one-way ANOVA. Survival analyses were performed using the Kaplan-Meier method, and the significance of differences between survival curves was calculated using the log-rank test. For all experiments, significance was accepted at p < 0.05.
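For readers who want to reproduce this style of analysis outside GraphPad Prism, a minimal Python equivalent is sketched below using SciPy and the lifelines package; all arrays and numbers are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# One-way ANOVA across treatment groups (e.g., relative MDA content).
control = np.array([1.0, 1.1, 0.9])
pq = np.array([2.3, 2.6, 2.1])
pq_coq10 = np.array([1.3, 1.5, 1.2])
print(f_oneway(control, pq, pq_coq10))  # F statistic and p-value

# Kaplan-Meier estimate and log-rank comparison of two survival curves.
days_pq = np.array([1, 1, 2, 2, 3, 4])      # days to death, PQ-only group
days_coq = np.array([2, 3, 4, 7, 7, 7])     # PQ + CoQ10 group
obs_pq = np.ones_like(days_pq)              # 1 = death observed
obs_coq = np.array([1, 1, 1, 1, 0, 0])      # 0 = censored at end of study

km = KaplanMeierFitter()
km.fit(days_pq, event_observed=obs_pq, label="PQ")
print(km.median_survival_time_)

res = logrank_test(days_pq, days_coq,
                   event_observed_A=obs_pq,
                   event_observed_B=obs_coq)
print(res.p_value)  # significance accepted at p < 0.05, as in the paper
```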
Establishing C. elegans as a model for PQ poisoning
Despite intensive studies of PQ toxicity, neither the final cytotoxic mechanism nor a clinically useful antidote has been discovered (Dinis-Oliveira et al., 2008). To find potential therapeutics for treating PQ poisoning, an animal model suitable for drug screening is urgently needed. C. elegans is an excellent in vivo system for PDD (Carretero, Solis & Petrascheck, 2017). Herein, we tried to establish a C. elegans model of PQ poisoning. To mimic human PQ poisoning, N2 wild-type worms were exposed to different concentrations (0, 5, 10, 20 mg/mL) of PQ for 3 h, owing to the poor permeability of the cuticle (Van de Walle et al., 2019). It should be noted that when the PQ treatment times were less than 3 h, the experimental variability was large. As shown in Fig. 1A, 20 mg/mL PQ killed all the worms immediately after 3 h of treatment, 10 mg/mL PQ killed 30% of the worms, while 5 mg/mL PQ did not affect the survival of the worms. Meanwhile, the influence of PQ treatment on the physiology of N2 wild-type worms was further checked. PQ (10 mg/mL) severely affected the pumping rate of worms, while 5 mg/mL PQ treatment had no apparent effect on the pumping rate (Fig. 1B). PQ (5 and 10 mg/mL) did not affect body bends after 3 h of treatment (Fig. 1C). Unlike organophosphorus pesticides, which are acetylcholinesterase inhibitors with rapid onset of immediate symptoms following ingestion (Soares et al., 2019), ingestion of a moderate dose of PQ usually produces no symptoms except for possible corrosive lesions during the first phase of PQ poisoning (Dinis-Oliveira et al., 2008). Therefore, 5 mg/mL PQ, which did not significantly affect the physiology of N2 wild-type worms during the process (3 h) of PQ poisoning, was chosen as the working concentration for the C. elegans model of PQ poisoning. As shown in Fig. 1D and Table S1, PQ (5 mg/mL) poisoning significantly shortened the lifespan of N2 worms compared to the control group (12.5 ± 0.5 days versus 18.0 ± 0.6 days, p < 0.01), which indicated that the C. elegans model of PQ poisoning was successfully established.
It should be noted that PQ has been widely used in oxidative stress studies and the PQ-induced Parkinson's disease model in C. elegans (Dilberger et al., 2019; Wu et al., 2018). However, the typical PQ concentrations (0.2-1.6 mM) used in these studies were much lower than the PQ (5 mg/mL; i.e., ∼20 mM) used herein for the PQ poisoning model. The dose of PQ does matter, and the toxicology of PQ at high doses is quite different from that of a mild dose of PQ (Dinis-Oliveira et al., 2008). This study reports the C. elegans model of PQ poisoning, which is clearly distinct from PQ-induced oxidative stress and Parkinson's disease studies.

Figure 1: Establishing C. elegans as a model for PQ poisoning. Synchronized N2 nematode populations were collected from the NGM plates on day 1 of adulthood and subsequently exposed to different concentrations of PQ (0, 5, 10, 20 mg/mL) for 3 h. Mortality (A, n = 120), pharyngeal pumping (B, n = 30) and body bending (C, n = 30) were measured. (D) Survival curve of N2 worms after PQ (5 mg/mL) poisoning. Control, n = 114; paraquat 5 mg/mL, n = 117. Data are representative of three independent experiments. ***p < 0.001; n.s., non-significant. (Full-size DOI: 10.7717/peerj.12866/fig-1)
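The ∼20 mM figure quoted above for 5 mg/mL PQ can be sanity-checked with a one-line unit conversion. The snippet below assumes the dichloride salt of paraquat (C12H14Cl2N2, molar mass ≈ 257.2 g/mol), the usual commercial form; if a different salt were used, the molar mass would change slightly.

```python
# Convert a concentration in mg/mL to mM. Note that mg/mL is numerically
# equal to g/L, and (g/L) / (g/mol) = mol/L.
PQ_DICHLORIDE_MOLAR_MASS = 257.2  # g/mol, paraquat dichloride (assumed salt form)

def mg_per_ml_to_mM(conc_mg_per_ml: float,
                    molar_mass: float = PQ_DICHLORIDE_MOLAR_MASS) -> float:
    return conc_mg_per_ml / molar_mass * 1000.0

print(mg_per_ml_to_mM(5.0))  # ~19.4 mM, i.e., the ~20 mM cited in the text
```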
The evaluation of C. elegans model of PQ poisoning
To evaluate the C. elegans model of PQ poisoning, we next tested this in vivo model system for PDD. CoQ10 (Fig. S1), which is both an electron transporter in mitochondrial respiratory chain complexes (I, II, and III) and an excellent free radical scavenger (Cui & Kawamukai, 2009), was discovered as a potential therapeutic for neuronal damage induced by PQ (McCarthy et al., 2004). PQ (5 mg/mL) poisoning significantly increased ROS in N2 worms compared to the control worms (Figs. 2A-2B), which is consistent with the observations in rodents, rabbits and humans that PQ exerts its toxic effects primarily through the production of ROS (Brigelius et al., 1986; Gómez-Mendikute & Cajaraville, 2003; Mirzaee et al., 2019). CoQ10 (1.8 mg/mL) greatly suppressed PQ-induced ROS in N2 worms (Figs. 2A-2B).
Recent studies have found that the PQ-induced increase in overall cellular ROS originates mainly from mitochondria (Zhao et al., 2017). We investigated the mitochondrial morphology in muscle cells with the transgenic strain SJ4103 [zcIs14 (myo-3::GFPmit)]. It was observed that PQ increased mitochondrial fragmentation, characterized by increased mitochondrial circularity along with decreased mitochondrial volume (Figs. 2C-2D). CoQ10 was found to reverse the PQ-induced change in mitochondrial morphology (Figs. 2C-2D). MDA is a final product of the peroxidation of polyunsaturated fatty acids in membrane lipids (Takizawa et al., 2007). An increase in ROS causes the overproduction of MDA (Fig. S1), an indicator of oxidative stress (Gawel et al., 2004). As shown in Fig. 2E, PQ increased the MDA level, and CoQ10 strongly inhibited PQ-induced MDA production.
Together, these data showed that CoQ10 protected against the oxidative stress and mitochondrial morphology changes induced by PQ in C. elegans, implying that CoQ10 might have potential for treating PQ poisoning. As shown in Fig. 2F and Table S2, worms were transferred to NGM plates with different concentrations (0, 0.6, 1.2 and 1.8 mg/mL) of CoQ10 after PQ (5 mg/mL) intoxication, and CoQ10 was found to attenuate PQ-induced mortality in a dose-dependent manner.
We next examined the effects of CoQ10 on C. elegans under normal culture conditions. Relative ROS production in worms treated with CoQ10 (1.8 mg/mL) was significantly enhanced compared with ROS production in untreated populations of worms (p < 0.001) (Figs. S2A, S2B). Under treatment with CoQ10 (1.8 mg/mL), intact mitochondria were no longer observed, such that all mitochondria displayed a disrupted morphology (Fig. S2C). Furthermore, the relative mitochondrial area was dramatically reduced by 68% in the presence of CoQ10 (1.8 mg/mL) (Fig. S2D). CoQ10 (1.8 mg/mL) caused a significant increase in MDA levels compared to those in untreated worms (p < 0.001) (Fig. S2E). The results were unanticipated. As shown in Fig. S2F and Table S3, 0.6 or 1.2 mg/mL CoQ10 significantly extended the lifespan of wild-type worms, while CoQ10 (1.8 mg/mL) decreased the lifespan.
Mice model of PQ poisoning
To further evaluate the validity of the C. elegans model of PQ poisoning, CoQ10, which was identified as a potential therapeutic agent for PQ poisoning in C. elegans, was tested in the classic mice model of PQ poisoning. Given the considerable toxicity of PQ, mice were administered PQ at a dose of 80 mg/kg, which caused a mortality rate of approximately 80-90% but not 100%. Different doses of CoQ10 were given 1 h after PQ intoxication. A study in rats has shown that CoQ10 is safe and well tolerated even at high doses (3,000 mg/kg per day) (Wang et al., 2007). The initial dosing regimen of CoQ10 was 62.5 and 125 mg/kg per day. As shown in Fig. 3A, PQ exposure caused rapid progression of death, and a single administration of CoQ10 increased the survival rate of mice with PQ poisoning in a dose-dependent manner. The main molecular mechanism of PQ toxicity is based on its redox cycling and intracellular oxidative stress generation (Clejan & Cederbaum, 1989; Yeh et al., 2006). As expected, PQ induced a significant increase in ROS (Fig. 3B) and MDA (Fig. 3C) in the lung tissues, which could cause endothelial damage, increased vessel permeability, and pulmonary edema (Riahi et al., 2010), while CoQ10 inhibited PQ-induced oxidative stress. PQ treatment also significantly increased the lung wet/dry weight ratio, indicating the presence of pulmonary congestion and edema (Fig. 3D), and CoQ10 inhibited PQ-induced lung edema (Fig. 3D). As shown in Figs. 3E-3H, PQ-induced lung morphopathological changes were assessed by H&E staining. There were diffuse alveolar collapses with thickening of airway smooth muscle, as well as perialveolar, peribronchial, and interstitial fibrosis, in lung tissue sections of PQ-group mice. These PQ-induced pathological changes were markedly ameliorated by CoQ10. Consistent with the discovery from the C. elegans model of PQ poisoning, the mice results demonstrated the role of CoQ10 in blocking PQ-induced oxidative stress, edema, and fibrosis, thereby attenuating PQ-induced death. These in vivo data obtained from the mice model support the validity of the C. elegans model of PQ poisoning.
DISCUSSION
Since its introduction, PQ has been a serious hazard to humans, not through its proper use, but mainly as a result of accidental and voluntary ingestion. For more than half a century after the first reports of PQ poisoning in humans, recovery in such cases remains poor. Since the molecular mechanisms of PQ toxicity are complicated, therapeutics for the treatment of PQ poisoning are virtually nonexistent (Dinis-Oliveira et al., 2008). PDD using an intact in vivo system that can closely mimic clinical responses would be more suitable than the target-based strategy for the discovery of antidotes for PQ poisoning (Swinney, 2013). Herein we established a C. elegans model of PQ poisoning and confirmed its validity with the classic mice model of PQ poisoning. PQ was found to cause oxidative stress and damage in both C. elegans and mice. Considering that C. elegans directly contacted PQ while the lung tissue of mice did not, it is not surprising that PQ caused higher ROS (Fig. 2B) and MDA (Fig. 2E) levels in C. elegans than in the lung tissue of mice (Figs. 3B-3C). The large physiological differences between C. elegans and mice should be acknowledged; it is unlikely that C. elegans reacts to PQ treatment in a way quantitatively equivalent to the mouse.
To evaluate the PDD system of C. elegans, CoQ10 was chosen as an agent for treating PQ poisoning in both C. elegans and mice. Administration of CoQ10 resulted in a remarkable increase in the median survival time of C. elegans after PQ intoxication, which was associated with blockade of PQ-induced oxidative stress and damage. Considering the physiological differences between C. elegans and mice, the mice model of PQ poisoning is always necessary to confirm a potential PQ antidote discovered through worm-based PDD. The measurements of ROS, MDA and the wet/dry weight ratio in the lung, the morphopathological observation of lung tissue, and the survival curves of the mice model notably support the validity of the C. elegans model of PQ poisoning. We will screen candidate drugs for PQ detoxification in future experiments, and the application value of the C. elegans model of PQ poisoning is worth further verification in follow-up work.
Currently, there are several clinical trials of CoQ10 in cardiovascular disease, metabolic syndrome, diabetes, kidney disease, neurodegenerative diseases, and male infertility, owing to the antioxidant effect of CoQ10 (Hernández-Camacho et al., 2018). We unexpectedly discovered that high-dose CoQ10 caused oxidative damage to wild-type worms under normal culture conditions and shortened the lifespan of C. elegans. This might indicate that the use of CoQ10 above a certain concentration carries a risk of oxidative damage. The therapeutic applications of CoQ10 are greatly limited by its poor bioavailability, due to its poor solubility in aqueous media and the long time it takes to diffuse through cellular membranes (Bhagavan & Chopra, 2006). To overcome this major limitation in PQ poisoning patients requiring emergency treatment, a water-soluble CoQ10 formulation, which could be safely administered by intravenous injection, might offer a better treatment effect. Although this study demonstrates that CoQ10 could be therapeutic for the detoxification of PQ poisoning, much remains to be done to convincingly establish the efficacy of CoQ10 at the clinical level.
CONCLUSIONS
In conclusion, this study provides a validated C. elegans model of PQ poisoning, which paves the way for the phenotypic discovery of potential agents for treating PQ poisoning. The physiological differences between C. elegans and mice models should be acknowledged: the organizational complexity of C. elegans is much simpler than that of mammalian animals, and it is not surprising that C. elegans has greater physiological plasticity, which increases its adaptation to adverse conditions and survival rate compared to higher organisms. Therefore, mammalian animal models of PQ poisoning are always necessary to confirm a potential PQ antidote discovered with the worm model.
ADDITIONAL INFORMATION AND DECLARATIONS

Funding
This work was supported by the Chinese Academy of Sciences (CAS) Pioneer Hundred Talents Program, and Beijing National Laboratory for Molecular Sciences (BNLMS202108). Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Impact of HIV infection on self-rated health in a high-prevalence population with low awareness of own HIV status
Background: Self-rated global health status has been found to be a sensitive marker of declining health and to operate as an independent predictor of survival. This study examines the effect of HIV infection on self-rated health in a population with a high prevalence of HIV infection and low awareness of own HIV infection status. Methods: The data stem from a comprehensive population-based HIV survey conducted in selected urban and rural populations in Zambia in 1996. A total of 1951 males and 2158 females aged more than 14 years were interviewed, of whom 6% refused to be tested for HIV infection. A logistic regression model was used, assuming socio-demographic factors, mental distress and health care use to be associated both with HIV infection and self-rated health. Results: The proportion of persons judging their health status as poor was higher in the rural than in the urban population. Generally, no major difference in the proportions of persons rating their health status as poor was observed between the sexes. The proportion of poor self-rated health status increased linearly with age. Use of health care services, mental distress and self-perceived risk of HIV infection were negatively associated with self-rated health. Both males and females living in an urban area, and males living in a rural area, of age more than 24 years who were infected with HIV were about twice as likely to rate their health status as poor compared to respondents who were not infected with HIV. Conclusion: HIV infection had a strong independent negative effect on self-rated health in persons aged greater than 24 years. This measure of people's subjective health may be used as a valuable "diagnostic" tool in HIV-related care and support programmes, and should be evaluated for use in such services.
INTRODUCTION
Self-rated health (SRH) is an overall self-assessment of an individual's health status. In most surveys it is based on the question: "How in general would you rate your health?" The responses are usually on a four- or five-point scale, ranging from poor to excellent (1), which has been dichotomised into 'good or excellent' and 'fair or poor' (2,3). There is an extensive literature based on this simple global assessment, and one of the most intriguing findings from population-based studies is that it operates as an independent predictor of survival (1,4). A dose-response relationship between the probability of mortality and self-rated health has also been a consistent observation, i.e. showing the highest mortality among persons rating their health as poor and the lowest among persons rating their health as excellent. Furthermore, the effect of SRH on the prediction of mortality is more apparent for males than females (1). Self-ratings thus represent a source of valuable data on health status, and as noted by Idler and Benyamini, "Self-rated health represents an irreplaceable dimension of health status and in fact that an individual's health status cannot be assessed without it" (1).
In line with the results concerning the validity of self-rated health, the reliability of the measure has been found to be good. Lundberg and Manderbacka studied the test-retest reliability of self-rated health and compared it with the reliability of other more specific health status indicators (5). They concluded that the reliability is as good as or even better than that of the more specific indicator questions. Respondents seem to take into consideration a wide range of factors or conditions of relevance to their health status. In 158 in-depth interviews, Krause and Jay found that study participants thought about specific health problems, general physical functioning and health behaviours in answering the question (6).
Studies of the predictors of self-rated health have provided insights into the different dimensions of self-rated health. Depression as an independent determinant has been confirmed in many studies. Even after taking into account physical illness and functional disability, self-rated health was still strongly and independently associated with depressive symptoms (7-9). Leibson et al. found that self-rated physical health was associated with both minor and serious depression (10). In another study, Gazmararian et al. revealed that fair or poor self-rated health had higher odds for depressive symptoms (11).
Consistently, SRH has been found to predict health care seeking. Fylkesnes and Førde, and Fylkesnes, found that self-rated health was the most important determinant of both general practitioner visits and use of referral services (12,13). An association between lifestyle or health-related behaviour and self-rated health has also been confirmed. Manderbacka et al. concluded in their study on the contribution of risk factors and health behaviours to self-rated health that, even in the absence of health consequences, risk factors and risky behaviours affect one's perceived health (14). Leisure physical activity has been found to be positively related to self-evaluated health in both males and females (7,12). In another study, conducted by Fleishman and Crystal in 10 cities across the United States, it was shown that declines in physical functioning were related to poor self-rated health (15). In this regard it should be noted that inactivity has been found to predict mortality (16).
We have previously reported that declining self-rated health had a major influence on readiness for voluntary HIV counselling and testing among men and women aged >24 years (17). We have not found any work on the relationship between HIV and self-rated health. This paper is based on a comprehensive population-based HIV survey in Zambia revealing HIV prevalence rates of 25% in urban and 16% in rural populations, and with less than 8% of the respondents being aware of their own HIV status (18,19). Our main aim is to examine the impact of HIV infection on self-rated health by using theoretical statements about other important factors that relate both to HIV infection and self-ratings, i.e. social status, self-perceived risk of HIV infection, mental distress and use of health care services.
Study populations
Population-based HIV surveys have been conducted in Zambia every three years since 1996 in order to document the dynamics of the HIV epidemic. These surveys are conducted in Chelstone, a medium-density urban residential area in Lusaka, and in Kapiri Mposhi. Chelstone was selected to represent urban residential areas in Zambia. In 1996 both rural and urban areas of Kapiri Mposhi were surveyed. In subsequent surveys, urban Kapiri Mposhi was excluded because it did not represent a typical urban area of Zambia.
A stratified random-cluster sampling method was used, employing the mapping system established by the Central Statistical Office. This system divides the country into areas called Census Standard Areas (CSA), which are further subdivided into smaller areas called Standard Enumeration Areas (SEA). On average, a CSA is divided into three SEAs. For the purpose of the study, one SEA was randomly selected, and all persons aged 15 years or more were requested to participate in the surveys. For the current analysis, we chose to use data collected from rural Kapiri Mposhi, representing rural areas, and Chelstone, representing urban areas of Zambia. The CSA was considered a cluster. The 1996 surveys were conducted in 5 clusters in rural Kapiri Mposhi and in 10 clusters in urban Chelstone.
Sample size and participation
The total response rate for men was 71.3% compared with 92.0% among women (18), yielding totals of 1951 males and 2158 females who participated in the study. The refusal rate for HIV testing, for males and females combined, was 8.3% in the urban area and 3.4% in the rural area. Fylkesnes et al. have described the sample (18).
Personal interviews
The questionnaire had several modules, among which were socio-demographics, health-seeking behaviour and mental distress. The highest level of education completed was coded into three levels: up to Grade 7, Grades 8 and 9, and Grade 10 or more. Travel as a variable was derived from the question: "Have you during the past years been on regular trips where you have to stay away from home for several days or more?" A respondent was deemed to have used a health facility if he or she had visited any of the following in the one year preceding the survey: a private doctor/clinic, the local health centre, or the hospital. Meanwhile, a participant was considered mentally distressed if he or she answered yes to any of the following questions: In the last 30 days, have you slept badly? Have you cried more than usual? Has the thought of ending your life been on your mind?
The dependent variable was based on the question: "How would you say your health is at the moment? Is it very poor, poor, fair, good or excellent?" In the analysis, the dependent variable was grouped into two categories: very poor, poor or fair into one group, which we call "poor self-rated status", and good or excellent into another group, which we call "good self-rated status". The outcome measure henceforth is poor self-rated status.
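This dichotomization is a one-line recode in most analysis environments; the sketch below uses pandas with hypothetical column names and values, purely for illustration.

```python
import pandas as pd

# Hypothetical survey data frame with the five-point SRH item.
df = pd.DataFrame({"srh": ["good", "fair", "very poor", "excellent", "poor"]})

# "Poor self-rated status" = very poor / poor / fair; "good" = good / excellent.
POOR_LEVELS = {"very poor", "poor", "fair"}
df["srh_poor"] = df["srh"].isin(POOR_LEVELS).astype(int)
print(df)
```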
Laboratory analysis
For the sole purpose of research, saliva samples were collected and tested for HIV infection using Gacelisa HIV 1 & 2 (Welcome Diagnostics, Dartford, Kent, UK). The sensitivity and specificity of Gacelisa were 100% for both diagnostic parameters when compared with HIV 1 & 2 (BIONOR AS, Skien, Norway) (20). The rate of agreement between Gacelisa and HIV 1 & 2 (BIONOR AS) was 99.8% (20). The validation took into account paired saliva and serum samples collected from 494 antenatal care clients.
Ethical consideration
The study was approved by the National AIDS Research Committee, Zambia. Consent was obtained from each individual before administering the questionnaire. Further consent was obtained from the respondent before collecting the saliva specimen. Participants wishing to know their HIV status gave separate consent and, upon consenting, were counselled following the procedure laid down by the Government of Zambia.
Theoretical framework
This study of predictors of self-rated health was conducted in populations with very high prevalence rates of HIV infection, whereas only a small proportion of the respondents knew their own HIV status, i.e. 8% of the survey respondents had previously been tested for HIV infection (18). The information on history of HIV testing was not taken into account in the analysis, since those previously HIV tested were not found to differ from those never tested, either in terms of self-rated health or HIV status. The level of HIV-related knowledge was generally high, e.g. signs/symptoms of "HIV disease" and preventive measures, but less than 1% of the persons HIV infected at the time of the survey were on ARV treatment.
The impact of HIV infection is expected to be seen after a certain period of being infected (17). Self-rated health has consistently been found to be a sensitive marker of declining health status, and our general assumption is that this global measure also taps illness experience directly related to HIV infection. Our model assumption was that socio-demographic factors (age, sex, marital status, educational attainment and residence) and risk-taking behaviour are associated both with HIV infection and self-rated health. We further assumed that the relationship between HIV infection and self-rated health is confounded by mental distress and use of health care services (as measured by use of professional health care services and traditional practitioners).
Data analysis
The data were computerised using Epi Info. Consistency and range checks were used to edit the data set. We opted for residence-, age group- and sex-stratified analysis because of differences in the prevalence rates of HIV infection between groups of these factors. Analysis was done in Stata using logistic regression for survey data in order to adjust for the clustering effect and for possible confounders. Possible confounders for the preliminary models were age, marital status and education. In the final model of the association between HIV infection and SRH, mental distress and health facility use were, in addition, adjusted for in the analysis. Odds ratios (OR) and 95% confidence intervals were used to determine the magnitude and significance of the associations.
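A rough Python analogue of this survey-adjusted logistic regression is sketched below using statsmodels with cluster-robust standard errors; the data and variable names are simulated placeholders, and Stata's survey (svy) estimator is approximated rather than reproduced exactly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: binary poor-SRH outcome, HIV status,
# age, and the sampling cluster (CSA) each respondent belongs to.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "srh_poor": rng.integers(0, 2, n),
    "hiv": rng.integers(0, 2, n),
    "age": rng.integers(15, 70, n),
    "cluster": rng.integers(0, 10, n),
})

# Logistic regression with cluster-robust standard errors to account for
# the cluster sampling design (an approximation to Stata's svy: logit).
model = smf.logit("srh_poor ~ hiv + age", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]}, disp=0)

# Report odds ratios and 95% confidence intervals, as in the paper.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```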
RESULTS
The distribution of self-rated health status according to residence, sex and age is shown in Table 1. The proportion of persons judging their health status as poor was significantly higher in the rural (32.5%) than in the urban (22.4%) area (p < 0.001). Although there was generally a tendency for females to have higher rates of poor self-rated health than males, no significant differences were observed between the sexes except in the 20-24 years (p = 0.029) and 50 years or more age groups. The proportion of poor self-rated health status increased significantly and linearly with age (p < 0.001) in both sexes and residential areas.
Socio-demographic factors and SRH
Table 2a shows the associations between socio-demographic factors and SRH, stratified by age and sex, in rural Kapiri Mposhi. Associations were controlled for age. Only travel and risk perception of HIV infection were independently associated with SRH among males in the age group 15-24 years. Male respondents who had travelled were 3.63 (95% CI 1.17, 11.25) times more likely to rate their health status as poor compared to male respondents who had not travelled. Meanwhile, male participants in the same age group who perceived themselves as being at risk of HIV were 2.24 (95% CI 1.06, 4.71) times more likely to rate their health status as poor.
Among female participants in the age group 15-24 years, the socio-demographic factors that determined the rating of health status as poor were marital status and self-perceived risk of HIV infection. Married women were 2.20 (95% CI 4.51) times more likely to rate their health status as poor compared to unmarried women. Female respondents who perceived themselves as being at risk of HIV infection were 96% (OR = 1.96, 95% CI 1.06, 3.61) more likely to rate their health status as poor compared to respondents who did not perceive themselves as being at risk.
In the age group 25 years or more in the rural area, no significant associations with SRH were observed for socio-demographic factors or self-perceived risk of HIV infection.
Employed male respondents aged 15-24 years who were urban residents were 66% (OR = 1.66, 95% CI 1.03, 2.68) more likely to rate their health status as poor compared to respondents who were not employed (Table 2b). Meanwhile, females in the same age group who were married (OR = 1.42, 95% CI 1.02, 1.99), had less than 8 years of education (OR = 1.44, 95% CI 1.08, 1.92), or perceived themselves as being at risk of HIV infection (OR = 1.68, 95% CI 1.26, 2.25) were 42%, 44% and 68%, respectively, more likely to rate their health status as poor compared to females in the same age group who were unmarried, had at least 8 years of education, or did not perceive themselves as being at risk of HIV infection.
Several socio-demographic factors were significantly associated with SRH among respondents aged more than 24 years (Table 2b). Male participants who had attained 10 years or more of education (OR = 0.70, 95% CI 0.51, 0.97) or had travelled (OR = 0.51, 95% CI 0.33, 0.80) were 30% and 49%, respectively, less likely to rate their health status as poor compared to male participants who had attained less than 10 years of education or had not travelled. Males who perceived themselves as being at risk of HIV infection were 51% (OR = 1.51, 95% CI 1.09, 2.09) more likely to rate their health status as poor compared to males who did not perceive themselves as being at risk of HIV infection.
Education, employment and self-perceived risk of HIV infection were significantly associated with SRH among female participants aged more than 24 years. Female participants with a low education level were more likely to rate their health status as poor compared to female respondents who had attained a higher educational level. Employed female respondents were 38% (OR = 0.62, 95% CI 0.47, 0.82) less likely to rate their health status as poor compared to unemployed female respondents. Females who perceived themselves as being at risk of HIV infection were 70% (OR = 1.70, 95% CI 1.29, 2.25) more likely to rate their health status as poor compared to female respondents who did not perceive themselves as being at risk.
Mental distress and SRH
Table 3 shows associations of mental distress with self-rated health, adjusted for age. In all groups except males aged less than 25 years, mental distress was associated with SRH. Stronger associations were observed in the rural area than in the urban area. Persons who were distressed were more likely to rate their health status as poor compared to participants who were not distressed.
Health care use and SRH
The associations between health facility utilisation and self-rated health are shown in Table 4. No significant associations were observed between health facility use and SRH in the younger age group (15-24 years) in the rural area. In the older age group (25 years or more), use of traditional healers among males (OR = 2.93, 95% CI 1.16, 7.43) and use of modern health facilities among females (OR = 1.63, 95% CI 1.01, 2.61) were significantly associated with SRH.
In the urban area, significant associations between health facility utilisation and SRH were observed in both sexes and age groups. Consistently, in all groups, persons who utilised health care facilities were more likely to rate their health status as poor compared to persons who did not use the facilities. Generally, associations were stronger for use of traditional healers than for modern health facilities.
Impact of HIV on SRH
Associations between HIV infection and SRH were first adjusted for age, then, in addition, for other socio-demographic factors and self-perceived risk of HIV infection, and lastly, in addition, for mental distress and health care utilisation (Table 5). This analysis enabled us to determine the direct impact of HIV infection on self-rated health. HIV infection did not have a significant impact on self-rated health in the younger age group (15-24 years) in either the rural or the urban area. In the older age group (25 years or more), males who were HIV infected were 1.93 (95% CI 1.03, 3.63) and 2.19 (95% CI 1.44, 3.35) times more likely to rate their health status as poor compared to non-HIV-infected older males in Kapiri Mposhi and Chelstone, respectively. Mental distress and health facility utilisation accounted for 12.3% and 13.1% reductions in the magnitude of the association (changes in the odds ratios with and without both mental distress and health facility utilisation in the model) between HIV infection and SRH in the rural and urban areas, respectively.
Although there was no association between HIV infection and SRH among older female respondents in the rural area, older females in the urban area who were HIV infected were 2.20 (95% CI 1.55, 3.13) times more likely to rate their health status as poor compared to female respondents who were not HIV infected. Mental distress and health facility utilisation accounted for a 9.8% reduction in the impact of HIV infection on SRH among females aged more than 24 years in the urban area.
The impact of HIV infection on SRH according to five-year age groups for urban males and females is shown in Figure 1. The impact of HIV infection on SRH in the urban area among males was greatest in the age groups 30-34 and 35-39 years, while that for females was greatest in the age groups 25-29 and 30-34 years.
DISCUSSION
There are few studies on self-ratings of global health from African populations, and no previous study has been found which examines the influence of HIV infection on such ratings. We studied the impact of HIV infection on self-rated health status in a high-prevalence population where HIV-related knowledge was relatively high but where only a small proportion of infected persons had been tested for HIV infection and thus were likely to be aware of their HIV status. The main finding was that HIV infection had a significant independent impact on self-rated health in those aged >24 years. This age-specific diversity in impact fits well with established epidemiological knowledge that most infections in the younger age groups have occurred recently, and thus might have a limited effect on the immune system, compared with older age groups where the bulk of infections are of longer standing. Accordingly, self-rated health seems to be a sensitive indicator of health-related changes directly linked to HIV when the infection has reached a stage subsequent to significant immune system changes.
Non-response bias in this survey can be due either to refusal (refusing to give a saliva sample for anonymous HIV testing) or to absence (not being found at home after two follow-up visits to the household). Refusal was relatively low, i.e. 6%, with no difference by sex, and it seems that the use of saliva can reduce refusal to a very low level compared with reports from population-based surveys applying blood-based testing (21). Absence was the main cause of non-response among men; the total response rate among men was 71% compared with 92% among women (18). We have previously reported that mobility was not associated with HIV infection in this survey (22). There was no clear pattern in the association between mobility and self-rated health, and this might indicate that non-response bias due to absence is limited. Perceiving their own risk of being HIV infected as high might lead to a higher likelihood of refusal, and we found a clear pattern of self-perception of risk of HIV infection reducing self-rated health. Although the magnitude of this bias is likely to be limited due to the low refusal, any effect will be in the direction of reducing the association between HIV infection and self-rated health.
Self-rated health is assumed to represent a summary statement of how various threats related to one's own health or life stressors are perceived by the individual. Our survey was conducted in a context of different ethnic and language groups. The main challenge was agreeing on proper translations of the term "health". It is likely that there are cultural differences in the way health is conceptualised, giving it different meanings. However, the findings from this study on determinants of self-rated health were in agreement with consistent findings from the rich literature in this regard, including both depression and health care use (7-10,12). Studies from Norwegian populations showed self-rated health as the main independent determinant of health care seeking (1,23). There was no consistent pattern in our findings regarding the relationship between educational attainment and self-rated health, and this is in accordance with previous findings (1,23). As a possible explanation of the independent effect on mortality, Idler and Benyamini suggested in their comprehensive review of research on self-rated health that negative assessments of health may stimulate the neurological system in ways that compromise the immune system (1). In a study on acceptability of voluntary HIV counselling and testing (VCT), we found self-rated health to be the most powerful factor explaining readiness for VCT in those aged >24 years, and not in the younger group (17). Our suggestion was that in a setting with a high prevalence rate of HIV infection, like Zambia, people might perceive declining health as a likely sign of HIV infection, which in turn leads to higher readiness to find out their HIV status. The present analysis supports this suggestion that self-rated health is a sensitive marker of declining health caused by HIV infection. The age-specific differential in the effect of HIV per se strengthens this observation, since the bulk of HIV infections in young people are relatively recent.
In order to improve the poor self-rating of health among mentally distressed persons, it is critical that prevention and interventions are instituted early. This will in turn facilitate the development of effective coping strategies for people living with HIV, which require positive self-rated health (24). At the individual level, there are huge differences in the time from infection until the immune system is "seriously" affected and thus experienced as illnesses that in turn affect health perceptions. In this regard, and based on the present findings, self-rated health might have a diagnostic value as a proxy of "HIV disease", being of practical relevance particularly in high-prevalence populations.
In summary, our findings represent a valuable addition to the rich literature, mainly from high-income countries, on the importance of self-rated health as a sensitive marker of health decline, by adding evidence from African settings with high prevalence rates of HIV infection and health perceptions related to HIV infection. The relevance of this measure of people's health should be evaluated, e.g. for use as a proxy in support, care and treatment services for the HIV infected.
Figure 1. HIV prevalence by self-rated health.
Tardive Dysphoria With Selective-Serotonin Reuptake Inhibitors Treated Successfully With Atypical Antidepressants: A Case Series From Tertiary Hospital Setting
Tardive dysphoria (TDp) is a phenomenon characterized by the delayed onset or worsening of depressive symptoms following the discontinuation or alteration of antidepressant medications. TDp is a recently defined, under-recognized, and understudied condition. We present a series of five TDp cases exploring the diverse presentations, management strategies, and associated medical conditions. In all the cases, TDp manifested after prolonged use of selective serotonin reuptake inhibitors (SSRIs). All cases were successfully managed with atypical antidepressants. These cases offer insight into TDp, providing clinicians and researchers with examples of atypical trajectories in depressive symptomatology.
Introduction
Tardive dysphoria (TDp) represents a distinctive and under-recognized clinical entity within the spectrum of mood disorders [1]. Depressive disorders have been extensively studied, but the phenomenon of TDp differs because of its association with prolonged antidepressant use [1]. "TDp is defined as a chronic treatment-resistant depressive state occurring in the setting of ongoing, persistent antidepressant treatment in subjects with a history of a recurrent major depressive disorder who have historically experienced an initial positive response to antidepressant medication (generally with their first exposure)" [2].
El-Mallakh et al. proposed the term TDp in 2011 [1]. It is distinct from typical mood disorders. A significant proportion of patients with treatment-resistant depression develop TDp [1]. They experience anhedonia and reduced interest, energy, and motivation; however, their function is frequently preserved, with no disturbances in appetite or self-care [1]. Generally, patients can differentiate TDp symptoms from their previous depressive episodes [1].
Although treatment resistance with prolonged antidepressant use is observed in clinical practice, TDp remains relatively understudied [1,3]. Delayed onset of depressive symptoms and loss of treatment efficacy during maintenance treatment have been noted with antidepressant regimens; therefore, medication duration and transitions have to be carefully monitored [3]. TDp poses diagnostic challenges and necessitates a nuanced approach to treatment. Guidelines or consensus regarding the diagnostic criteria and optimal management strategies for TDp are lacking. TDp is not yet listed in the Diagnostic and Statistical Manual of Mental Disorders. The neuropathology of TDp has not been fully elucidated. The sparse literature on TDp emphasizes the need for additional research and guidelines to identify the symptoms of this condition. A PubMed search with the term "tardive dysphoria" yields 16 results, with only three focusing on TDp [1,2,4].
Misdiagnosis or inadequate management of TDp may lead to adverse outcomes and diminished quality of life for affected individuals. Case studies can provide an understanding of the unique aspects of TDp. This case series presents five case studies of patients who manifested delayed-onset or exacerbated depressive symptoms following prolonged use of antidepressant medications. Since no case studies on TDp have been reported from India or Asia, through this case series, we aim to contribute to the evolving literature on TDp. We report the clinical nuances of the symptom onset and drugs involved, thereby paving the way for future investigations.
Case 1

A 36-year-old married male presented with persistent low mood, anhedonia, fatigability, early morning awakening with worsening mood, feeling of worthlessness, decreased psychomotor activity, decreased attention and concentration, suicidal ideation, decreased libido, and alterations in sleep and appetite after four weeks of discontinuing escitalopram 20 mg. He had a history of two previous depressive episodes that positively responded to escitalopram 20 mg. He was on the medication for two years.
Upon reducing the dose of escitalopram to 15 mg, the patient's symptoms improved. The Hamilton Depression Rating Scale (HAM-D) score reduced from the initial episode score of 20 to 4. Escitalopram 15 mg was continued for almost six months. However, the depressive symptoms escalated. The HAM-D score increased to 10, prompting an escalation of escitalopram back to 20 mg, but this further exacerbated the symptoms, with the HAM-D score reaching 15. The patient reported prominent early morning worsening of mood, decreased libido, and recurrent suicidal ideation during this phase. The patient was gradually cross-tapered to mirtazapine over a period of four weeks, reaching a dose of 30 mg. Subsequently, the symptoms improved, as evidenced by a reduction in the HAM-D score to 3. The improvement was maintained over an eight-month follow-up period. Among associated medical conditions, dyslipidemia was noted for a period of two years, with a total cholesterol level of 300 mg/dL noted during the last depressive episode.
Case 2
A 49-year-old married female presented with persistent low mood, anhedonia, fatigability, early morning worsening of mood, psychomotor retardation, passive death wishes, and decreased sleep and appetite. She had a stroke seven months prior to the presentation.
The patient initially responded well to sertraline 100 mg over an eight-week period, leading to a decrease in the HAM-D score from the initial episode score of 18 to 6. However, the patient reported a worsening of mood after approximately one year on sertraline. Sertraline was increased to 150 mg over four weeks, but this exacerbated the symptoms, reflected by an increase in HAM-D score to 14. Sertraline was gradually tapered and discontinued. Subsequently, the patient was started on bupropion, with the dose reaching up to 300 mg. The symptoms improved after four weeks of bupropion treatment, reflected by a reduction in the HAM-D score to 4. The improvement was maintained over a one-year follow-up period. Among associated medical conditions, the patient had a history of hypertension for five years, for which she took antihypertensive medications irregularly.
Case 3
A 36-year-old married male presented with excessive worry about multiple activities, difficulty in controlling worry, restlessness, fatigability, decreased attention and concentration, muscle tension, and decreased sleep. The primary manifestation was suggestive of generalized anxiety disorder (GAD), with the patient showing a Hamilton Anxiety Rating Scale (HAM-A) score of 25.
The patient initially responded well to escitalopram 15 mg, wherein the HAM-A score reduced from 25 to 4 over a four-week period. However, the patient reported a decline in mood, anhedonia, fatigability, decreased libido, and passive death wishes after approximately six months on escitalopram 15 mg. Additionally, anxiety symptoms worsened. Escitalopram was increased to 20 mg over four weeks, but this further exacerbated the symptoms, with HAM-D and HAM-A scores reaching 22 and 20, respectively. Escitalopram was gradually cross-tapered with mirtazapine. Over a four-week period with mirtazapine 30 mg, the symptoms improved, with the HAM-D and HAM-A scores reducing to 3 and 4, respectively. The improvement was maintained over a six-month follow-up period.
Case 4
A 50-year-old married female presented with persistent low mood, anhedonia, fatigability, early morning awakening with worsening mood, feeling of worthlessness, decreased psychomotor activity, decreased attention and concentration, suicidal ideation, and decreased sleep and appetite. The patient positively responded to fluoxetine 40 mg over a six-week period, wherein the HAM-D score reduced from the recent episode score of 22 to 2. However, the depressive symptoms subsequently worsened after approximately one year on fluoxetine 40 mg and escitalopram 15 mg, with an increase in HAM-D score to 12. Fluoxetine was increased to 60 mg, resulting in further deterioration, with the HAM-D score reaching 16. The patient reported a prominent early morning worsening of mood, marked decreased appetite, and recurrent suicidal ideation during this phase. Fluoxetine was gradually cross-tapered with mirtazapine over an eight-week period, finally reaching a dose of 45 mg. Subsequently, the symptoms improved, with the HAM-D score reducing to 2. The improvement was maintained over a six-month follow-up period.
The patient had associated medical conditions: Sturge-Weber syndrome and a history of hypertension for 10 years managed with regular antihypertensive medications. As per the medical records, she had tonic-clonic seizures many years back, for which she is on carbamazepine 300 mg twice daily, and she has had no episode of seizure for the last 15 years.

Case 5

A 35-year-old married male presented with persistent low mood, anhedonia, fatigability, feelings of worthlessness and helplessness, decreased psychomotor activity, decreased attention and concentration, recurrent suicidal ideation, decreased libido, and decreased sleep and appetite for four weeks, while on sertraline 150 mg for approximately one year. The patient had a history of two previous depressive episodes. The patient positively responded to escitalopram 20 mg after the first episode. After the second episode, the patient responded to fluoxetine 40 mg but did not show complete improvement of symptoms and was started on sertraline 150 mg with positive outcomes.
After the recurrence of symptoms, sertraline was changed to mirtazapine 30 mg for four weeks without any improvement. The patient was then started on bupropion, with the dose reaching up to 300 mg for four weeks. Subsequently, the symptoms improved, reflected by a reduction in the HAM-D score from 24 to 4. The improvement was maintained over a six-month follow-up period. Among associated medical conditions, the patient had dyslipidemia for a period of three years, with a total cholesterol level of 350 mg/dL noted during the last depressive episode. Additionally, the patient had a history of hypertension for one year, which was managed with antihypertensive medications.
Diagnostic criteria
All patients were diagnosed with TDp based on the criteria proposed by El-Mallakh et al. [1]. In all cases, most of the patients' symptoms and characteristics met the proposed criteria for TDp, but a few did not (Table 1). Nevertheless, TDp was diagnosed in all patients because they showed distinct dysphoric symptoms after prolonged exposure to selective serotonin reuptake inhibitors (SSRIs).
Discussion
The presented case series sheds light on the complex and challenging landscape of TDp. While TDp is a phenomenon characterized by antidepressant-induced dysphoric symptoms due to prolonged exposure to antidepressants, all the cases presented had SSRI-induced dysphoric symptoms. The cases focus on the interplay between depressive symptoms and medication responsiveness. We also present associated comorbid medical conditions along with the varied presentations of the cases.
TDp occurs due to prolonged antidepressant use, particularly SSRIs [1]. The period and symptoms of antidepressant withdrawal are not considered part of TDp [1]. In TDp, the gradual improvement in depressive symptoms after discontinuation or change of antidepressants may not represent full remission [1]. The cases illustrate the diverse responses to antidepressant medications, particularly SSRIs such as escitalopram and fluoxetine. There is no definitive treatment for TDp; however, atypical antidepressants can be tried, as we did in these cases. The cases were resolved with atypical antidepressants like mirtazapine and bupropion. Patients' responses to treatments vary, emphasizing the importance of individualized approaches to treatment regimens, including the duration of treatment.
Case 1 presents a classic case of TDp: the patient's symptoms worsened during prolonged use of escitalopram 20 mg, improved temporarily when the dose was reduced to 15 mg, and then worsened again. The patient from case 2 had a history of stroke, which may have contributed to the psychiatric manifestations. Case 3 presented the challenge of managing comorbid anxiety and depression, emphasizing the importance of selecting appropriate interventions based on the evolving symptomatology. Case 4 highlights the challenge of managing depression in the context of complex medical comorbidities.
The patient has Sturge-Weber syndrome, which introduces additional challenges in the treatment of TDp.
The patient was on carbamazepine 300 mg twice daily, and drug-drug interactions should be considered in the management of depressive symptoms. Case 5 presented the complexity of managing recurrent depressive episodes and the importance of selecting interventions considering the concurrent medical conditions of dyslipidemia and hypertension. Not all cases fit fully into the proposed criteria for TDp; we therefore suggest changes to the timelines mentioned in the proposed criteria to include patients who develop depressive symptoms after prolonged use of SSRIs, so that patients with TDp are diagnosed early for appropriate management.
In all cases, the patients showed an initial positive response to an SSRI, a subsequent recurrence during prolonged use of the SSRI, and ultimate success with atypical antidepressants. The patients' sustained improvement over the follow-up period contributes valuable insights into the long-term management of cases with treatment-resistant depression. It also questions the current practice of prolonged treatment of depression with SSRIs [3]. This may not be applicable to antidepressant discontinuation in other conditions. For example, antidepressant discontinuation in patients with migraine is significantly associated with an increased risk of depression, but not TDp [5]. However, in the psychiatric setting, antidepressants are hypothesized to have a pro-depressant effect [6].
The cases also present successful outcomes observed with cross-tapering strategies and transitioning between different classes of antidepressants. The presence of comorbid medical conditions such as Sturge-Weber syndrome, hypertension, and dyslipidemia further complicates clinical management due to drug-drug interactions and drug-disease interactions. Integration of psychiatric and medical management is crucial for comprehensive care.
While this case series provides valuable clinical insights, the absence of standardized diagnostic criteria for TDp introduces potential biases and limits the interpretation of the symptoms. Further research, including neurobiological investigations, is warranted to deepen our understanding of TDp and refine treatment guidelines. The relationship between the serotonin transporter gene, risk for depression, and response to serotonergic antidepressants is currently under study in a randomized clinical trial [6].
Conclusions
This case series adds to the literature on TDp and paves the way for future investigations. It emphasizes the need for personalized and adaptable treatment strategies, considering factors such as medication response, comorbidities, and the evolving nature of symptoms. The successful outcomes observed with different atypical antidepressants, including mirtazapine and bupropion, highlight the necessity of changing treatment strategies and question the prolonged use of SSRIs for treating episodic depression. The unique presentation of TDp challenges traditional concepts of depressive disorders, and more such studies are needed to refine our understanding of its etiology, clinical features, disease trajectory, and management strategies.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
TABLE 1: Symptoms and diagnosis based on the proposed criteria for TDp.
SSRI, selective serotonin reuptake inhibitor; TDp, tardive dysphoria.
Serum of sickle cell disease patients contains fetal hemoglobin silencing factors secreted from leukocytes
Background The mechanisms that regulate fetal hemoglobin (HbF) expression in sickle cell disease (SCD) remain elusive. We previously showed that steady-state SCD patients with high HbF levels due to a γ-globin gene mutation demonstrate strong inverse correlations between HbF levels and leukocyte counts, suggesting that leukocytes play a role in regulating HbF in SCD. Materials and methods To further investigate the role of leukocytes in HbF expression in SCD, we examined the presence of HbF silencing factors in the serum of 82 SCD patients who received hydroxyurea (HU) therapy. Results HU-mediated HbF induction was associated with elevated total hemoglobin levels and improved red blood cell parameters, but there was no correlation with reticulocyte or platelet counts. Importantly, we again found that HU-induced HbF levels correlated with reductions in both neutrophils and lymphocytes/monocytes, indicating that these cell lineages may have a role in regulating HU-mediated HbF expression. Our in vitro studies using CD34+-derived primary erythroblasts found that patient serum preparations include HbF silencing factors that are distinct from granulocyte-macrophage colony-stimulating factor, and the activity of such factors decreases upon HU therapy. Conclusion Together, these results demonstrate the importance of leukocyte numbers in the regulation of HbF levels for SCD patients both in steady state and under HU therapy, and that leukocytes secrete HbF silencing factors that negatively affect HbF expression in erythroid-lineage cells in SCD.
Introduction
The primary genetic defect in sickle cell disease (SCD) is a mutation of the β-globin gene. 1 It produces sickle hemoglobin, which polymerizes under low oxygen tension. This formation of sickle hemoglobin polymers is assumed to underlie vaso-occlusive crisis, chronic hemolysis, and ischemia-reperfusion injury associated with inflammation in this disorder. [2][3][4] Despite sharing a common β-globin mutation, the clinical severity among patients with SCD is extremely heterogeneous and the underlying mechanisms remain unknown. Elevated levels of fetal hemoglobin (HbF) expression alleviate the clinical severity of SCD; however, expression levels are also variable. 5 HbF expression is regulated by single-nucleotide polymorphisms (SNPs) of multiple genetic loci, [6][7][8] which presumably help to determine the level of HbF production in SCD. 9 In addition to HbF, which is otherwise dormant in people without anemia, leukocytosis is frequently observed in untreated SCD patients even in the absence of bacterial infection. 10 An elevated baseline leukocyte count is associated with a risk for early death. 11 Further, high leukocyte counts in children with SCD predict severe clinical complications later in life. 12 These clinical observations suggest that both HbF levels and leukocyte counts are consequential to the clinical severity of SCD.
Although the pharmacological stimulation of HbF expression by hydroxyurea (HU) is an established treatment for patients with clinically severe SCD, 13 HbF response to HU varies significantly and a number of SCD patients are resistant to HU therapy. Clarifying the predictors of HbF response to HU will allow clinicians to determine which patients are most likely to respond to HU therapy. This will in turn limit the toxicities associated with the treatment among those who would not benefit from the therapy. A study by Charache et al showed that initial leukocyte count and HbF concentration as well as post-therapy plasma HU levels are predictors of high post-therapy HbF levels. 14 Ware et al also demonstrated that greater changes in blood counts in SCD children on HU therapy result in better HbF response to HU. 15 However, the mechanisms by which these hematologic parameters predict HbF response remain elusive.
As to the mechanisms responsible for leukocytosis in SCD, we had reported a positive correlation between leukocyte count and plasma granulocyte-macrophage colony-stimulating factor (GM-CSF) levels in SCD patients, 16 indicating that plasma GM-CSF levels help to regulate leukocyte count. We subsequently analyzed retrospective data from SCD patients receiving HU therapy and found that the rate of HbF induction associated with HU therapy is proportional to the reduction of peripheral blood leukocyte count. 17 We also found that GM-CSF has a negative regulatory effect on HbF expression. 17 These results suggest that leukocytes are an important regulator for determining HbF response to HU.
To determine whether leukocytes are also involved in the regulation of HU-mediated HbF expression in SCD, in this study we analyzed retrospectively collected pre-HU and post-HU therapy hematological data. We found that HbF response was again associated with reduced leukocyte count, but not reticulocyte and platelet counts, suggesting that leukocytes likely play a critical role in HbF response to HU. Furthermore, our in vitro studies suggest that patient sera contain as-yet unidentified factors that appear to inhibit HbF expression in CD34+-derived erythroid progenitors cultured with erythropoietin (Epo). Interestingly, the ability to inhibit HbF activity varied among steady-state patients and was significantly decreased upon HU therapy. This study has revealed novel aspects of the molecular mechanisms by which HU regulates HbF expression, which will help us to better understand HU resistance among SCD patients. 15,18

Materials and methods

SCD patient blood serum

The study was performed in accordance with the principles of the Declaration of Helsinki and approved by the institutional review board of Augusta University. We collected retrospective data on 337 adult SCD patients who were homozygous for the βS mutation and under the care of the Sickle Cell Center of the Medical College of Georgia at Augusta University; the clinical characteristics of this cohort were reported previously. 17 Of these patients, 82 were receiving HU therapy (15-35 mg/kg/day) and had not received transfusions for at least 6 months. Hematologic values were reported as the average of at least 3 months of data. Serum preparations that had no visible hemolysis were obtained from multiple patients under HU therapy, before daily intake of HU to minimize carryover of HU. Written informed consent was obtained from all patients.
In vitro culture of human CD34+-derived erythroid progenitor cells

Human CD34+ cells obtained from the National Heart, Lung and Blood Institute Programs of Excellence in Gene Therapy Hematopoietic Cell Processing Core (Fred Hutchinson Cancer Research Center, Seattle, WA, USA) were cultured by a method described previously 19 with minor modifications. Briefly, CD34+ cells (1-10 × 10⁴ cells/mL) were cultured in Iscove's modified Dulbecco's medium containing 30% fetal bovine serum (FBS) or 30% human AB serum, 3 units/mL Epo, 20 ng/mL stem cell factor, and 10 ng/mL interleukin 3. Culture media were replaced every 4 days. Anti-human GM-CSF antibody (Thermo Fisher Scientific, Waltham, MA, USA) was added to sera from SCD patients at a dilution ratio of 1:10, followed by a 30-minute incubation at room temperature, before adding to culture media. Cells were harvested on day 14. Cell photographs were taken with an EVOS FL Cell Imaging System (Advanced Microscopy Group, Bothell, WA, USA).
Flow cytometry
Cells were suspended in FACS buffer (phosphate-buffered saline containing 5% FBS and 0.1% sodium azide). Following incubation with human Fc blocker (Thermo Fisher Scientific) for 10 minutes at room temperature, cells were then washed twice in FACS buffer and stained at 4°C for 30 minutes with a phycoerythrin (PE)-labeled anti-human glycophorin A monoclonal antibody (CD235a PE) (BD Biosciences, San Jose, CA, USA) and a fluorescein isothiocyanate-labeled CD71 monoclonal antibody (BD Biosciences). Paraformaldehyde was added in an equal volume to a final concentration of 0.1% to fix cells for 30 minutes at room temperature. The expression of glycophorin A and CD71 on erythroblasts was analyzed in a Becton-Dickinson FACScan using Cell Quest software (Franklin Lakes, NJ, USA). 20 Data were represented as percent of positive cells.
Isolation of whole cell extracts from erythroblasts and immunoblotting

Whole cellular extracts were prepared from CD34+-derived erythroblasts as described. 21 Briefly, cells (5-10 × 10⁶ cells) were suspended in 1× RIPA lysis buffer (Santa Cruz Biotechnology, Santa Cruz, CA, USA) supplemented with 1 mM phenylmethylsulfonyl fluoride, 100 mM sodium orthovanadate, and protease inhibitor cocktail. Whole cellular extracts were obtained by centrifugation at 14,000× g for 15 minutes. Immunoblotting was performed as described previously. 22 Approximately 2-3 micrograms of cellular extracts were separated on 12% SDS polyacrylamide gels and transferred to nitrocellulose membranes (Invitrogen, Carlsbad, CA, USA). All antibodies used for immunoblotting analyses were purchased from Santa Cruz Biotechnology unless otherwise stated. Protein bands were visualized by the Phototope HRP Western blot detection system (Cell Signaling Technology, Danvers, MA, USA) according to the protocol provided by the supplier. Immunoblotting for γ-globin expression was performed using serum preparations with no visible hemolysis that were isolated from multiple SCD patients.
Real-time (RT)-PCR
Expression of γ-globin mRNA in human CD34 + -derived erythroid progenitors treated with or without SCD patients' serum was examined by RT-PCR as described previously. 23 To determine expression levels of human γ-globin mRNA in primary erythroblasts, human CD34 + cells were cultured as described above. Total RNA was extracted from primary erythroblasts using RNeasy Mini Kit (Qiagen, Germantown, MD, USA), and cDNA was generated with the SuperScript II Reverse Transcriptase kit (Invitrogen). RT-PCR was carried out with the Mx3000P QPCR System (Agilent Technologies, Santa Clara, CA, USA) using SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) according to the manufacturer's instructions. All amplifications were performed in triplicate, and 18S rRNA was used as internal control. Relative expression was quantitated using the standard ΔΔCt method. The primers used were as follows: human γ-globin, forward-5′-TGGATGATCTCAAGGGCAAC-3′; reverse-5′-TCAGTGGTATCTGGACA-3′; human 18S rRNA, forward-5′-TTGGAGGGCAAGTCTGGTG-3′; reverse-5′-CCGCTCCCAAGATCCAACTA-3′. All primers were designed and obtained from Integrated DNA Technologies (Coralville, IA, USA). RT-PCR for γ-globin mRNA expression was carried out by using several serum preparations with no visible hemolysis from multiple SCD patients.
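As a point of reference for the ΔΔCt quantitation named above, the following minimal sketch shows the arithmetic of the method; the Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^-ddCt method, assuming hypothetical Ct values
# (not data from this study). Target: gamma-globin; reference: 18S rRNA.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of target gene expression, treated vs control."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to 18S
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: Ct 24.1/9.8 (serum-treated) vs 22.9/9.7 (control) -> ~0.47,
# i.e. gamma-globin mRNA roughly halved, consistent with silencing.
print(relative_expression(24.1, 9.8, 22.9, 9.7))
```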
Statistical analysis
The Spearman correlation coefficient (rs) was used for correlations with non-Gaussian distributed data, which included hematologic values (Figures 1-4). Other data were analyzed by Student's t-test. P-values less than 0.05 were considered statistically significant.
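A brief sketch of the two tests named above using SciPy; the arrays are simulated stand-ins for the hematologic values, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hbf_increase = rng.normal(10, 3, 40)      # hypothetical HbF increases (%)
wbc_reduction = 0.5 * hbf_increase + rng.normal(0, 2, 40)

# Spearman rank correlation for non-Gaussian hematologic values
r_s, p_spearman = stats.spearmanr(hbf_increase, wbc_reduction)

# Student's t-test for the remaining comparisons (two hypothetical groups)
group_a = rng.normal(100, 10, 20)
group_b = rng.normal(60, 10, 20)
t, p_t = stats.ttest_ind(group_a, group_b)
print(r_s, p_spearman, t, p_t)   # significant if p < 0.05
```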
Results

Response of HbF to HU is associated with hematologic improvements in SCD patients
Our previous study showed that a reduction of leukocyte counts in response to HU therapy is critical for efficient HU-mediated HbF induction. 17 To further investigate the mechanisms by which HbF response is regulated in SCD, we first examined whether HbF response was associated with hematologic improvements in a retrospective study of a cohort of SCD patients described previously. 17 Of these, 82 HU responders were selected. There were substantial correlations between the levels of HbF induction by HU and increases in the total hemoglobin levels, MCV, and MCHC (Figure 1A-C), indicating that HU therapy significantly improves RBC parameters and supporting its clinical effectiveness for SCD. Our cohort of SCD patients may be pathophysiologically comparable to those reviewed recently. 24

HU-induced HbF levels are not correlated with reticulocyte or platelet counts

Next, we investigated whether HU-mediated HbF levels correlated with reticulocyte and platelet counts, both of which are assumed to play a role in modulating the pathophysiology of SCD. 25,26 However, we saw no significant correlations between HU-induced HbF levels and reticulocyte or platelet counts (Figure 2A and B), suggesting that the mechanisms controlling the numbers of these lineage cells in peripheral blood may not be relevant to the mechanisms by which HbF response is regulated by HU. We had previously found that HU-induced HbF correlates with reduced leukocyte counts, 17 suggesting that the mechanisms by which HU regulates HbF induction in SCD may be relevant to those controlling leukocyte numbers in peripheral blood. Charache et al demonstrated that predictors of HbF response include pre-therapy leukocyte counts and HbF levels. 14 However, we found that HU-mediated HbF induction levels did not correlate with pre-therapy leukocyte counts (Figure 3A, P=0.056) but did correlate with post-therapy leukocyte counts (Figure 3B, P<0.004). The HbF increase was not influenced by pre-therapy HbF levels (Figure 3C, NS), a result that was inconsistent with Charache et al's study. 14 This might reflect genetic and cellular differences in the cohorts studied.
To investigate the cell lineages of leukocytes involved in determining HbF response to HU therapy, we compared HU-mediated HbF increases and reductions of neutrophil or lymphocyte/monocyte counts (Figure 4). The correlation value between the HbF increases and the reduction levels of neutrophil counts (P<0.00268) was almost the same as that between the HbF increases and lymphocyte/monocyte reductions (P<0.00267; Figure 4A and B). This suggests that both cell lineages are involved in determining HbF levels induced by HU.
HbF silencing factors are present in blood serum of SCD patients

Importantly, the results shown in Figure 5, together with our previous finding that leukocyte counts of steady-state SCD patients with high HbF levels demonstrated a strong inverse correlation with HbF levels (N = 47, R² = 0.229, P<0.0006), 17 have led us to hypothesize that leukocytes may have a role in downregulating HbF expression in erythroid-lineage cells by secreting HbF silencing factors.
To detect the HbF silencing factors present in serum of SCD patients, we first determined whether CD34+ cells cultured with 30% human AB serum can be differentiated to erythroblasts, as our previous studies had employed both FBS and human AB serum. 19,20,27 As shown in Figure 5A and B, 30% human AB serum permitted CD34+ cells to differentiate to erythroid-lineage cells to a degree similar to that with 30% FBS, as analyzed by flow cytometry on the basis of expression of glycophorin A and CD71. Next, we investigated the expression of both γ-globin protein and γ-globin mRNA in CD34+-derived erythroblasts incubated with FBS, normal human AB serum, or SCD patient serum (Figure 5C and D). The leukocyte counts of the SCD patients studied here were as follows: Pt. 1, 7,800/µL; Pt. 2, 15,300/µL; Pt. 3, 11,800/µL; the counts represent average numbers for 3-6 months. Although CD34+-derived erythroblasts (lane 2) cultured with 30% human AB serum expressed γ-globin at a level similar to that of cells cultured with 30% FBS (lane 1), the cells that were cultured with the serum of SCD patients demonstrated significantly lower levels of γ-globin expression (lanes 3-6). It is of note that the level of γ-globin expression varied substantially among steady-state SCD patients without HU (lanes 3 and 4), and serum from an SCD patient who was receiving HU revealed a weaker HbF silencing effect (lane 5). Because we previously showed that the proinflammatory cytokine GM-CSF has an inhibitory effect on HbF expression in erythroid-lineage cells, 17 we confirmed that such HbF silencing was not blocked by the addition of anti-GM-CSF antibody (lane 6), indicating that HbF silencing factors are distinct from GM-CSF. Next, we examined γ-globin mRNA expression levels in these erythroblast preparations by RT-PCR (Figure 5D). The results of γ-globin mRNA expression were in accord with those of immunoblotting shown in Figure 5C. Thus, it is likely that in SCD, leukocytes may secrete HbF silencing factors that negatively affect HbF expression in erythroid cells, and that HU may reduce the serum levels of such factors, which may be relevant to HU-mediated HbF induction in SCD patients.
Discussion
It is of paramount importance to elucidate the mechanisms by which HU regulates HbF expression in the context of SCD because the magnitude of HbF response to HU in SCD is remarkably heterogeneous and a large subset of patients are non-responders. 28 Furthermore, HU-induced HbF increases are eventually lost even in SCD patients who initially responded to HU and these underlying mechanisms also remain unknown. 15,18 A number of SNPs associated with HbF responders in SCD patients have been identified; 29 however, the significance and the implications of such SNPs are yet to be established.
We recently reported that HU-induced HbF levels correlate with reduced leukocyte counts. 17 This suggests that the mechanisms mediating HU-induced HbF expression may be relevant to those that control leukocyte counts. In this study, we performed a retrospective analysis of SCD patients for whom clinical data of pre-HU and post-HU therapy were available. We found that the levels of HbF induction following HU therapy are associated with improvements in hematological parameters such as total hemoglobin, MCV, and MCHC (Figure 1), suggesting that HbF response to HU is a legitimate marker for confirming the clinical effectiveness of HU therapy. By contrast, there were no significant correlations between HU-induced HbF levels and reticulocyte or platelet counts (Figure 2), suggesting that these lineage cells are unlikely to play a role in determining HbF response to HU; rather, their changes reflect secondary effects of HU therapy.
Previous studies have shown correlations between HU-induced HbF expression and leukocyte counts. 15,17,30 We also found that HbF expression levels of steady-state SCD patients with high HbF levels due to a mutation in the Gγ-globin gene promoter inversely correlate with leukocyte counts. 17 Also, the notion that HU-induced HbF expression may be relevant to the mechanisms controlling leukocyte counts is supported by an earlier report by Steinberg that SCD patients with high baseline leukocyte counts who exhibit a great reduction in leukocytes with HU therapy have more robust increases in HbF. 31 Thus, multiple lines of clinical evidence strongly suggest that HbF expression is significantly influenced by the mechanisms controlling leukocyte count in the context of SCD. The molecular and cellular mechanisms by which HU-induced HbF levels are associated with reduced leukocyte counts are not yet clear. Based on our study results, we hypothesize that leukocytes may secrete protein factors that bind to erythroid cells and inhibit HbF expression. Our in vitro studies (Figure 5) have demonstrated the presence of HbF silencing factors in the serum of SCD patients, whether or not they are receiving HU therapy. This is the first demonstration of HbF silencing factors in the serum of SCD patients. Interestingly, the HbF silencing activity in the serum varied significantly among SCD patients (Figure 5C, lanes 3 and 4). However, it can be argued that such factors may not in fact be secreted by leukocytes. Leukocyte numbers usually decrease in response to HU therapy in SCD because HU is a ribonucleotide reductase inhibitor. 32 Consistent with this clinical evidence, HbF silencing activities are also concomitantly reduced in someone with SCD who is receiving HU (Figure 5C, lane 5). Further rigorous scrutiny is required to verify that HbF silencing factors derive from leukocytes.

Figure 5. Presence of HbF silencing factors in serum of SCD patients. (A) Culture of CD34+-derived erythroblasts with 30% FBS (left panel) or 30% human AB serum (right panel); photographs taken on day 10 (white bars, 400 µm). (B) Flow cytometry of erythroblasts harvested on day 14, using anti-glycophorin A and anti-CD71 antibodies. (C) Immunoblotting of γ-globin and GAPDH (internal control) in whole cell lysates; lanes 1-6: erythroblasts cultured with 30% FBS, 30% human AB serum, serum from two steady-state SCD patients (Pt. 1 and 2), serum from an SCD patient on HU therapy, and HU-therapy serum pre-incubated with anti-GM-CSF antibody. (D) RT-PCR quantitation of γ-globin mRNA for the same lane designations, with the 30% FBS condition set to 100%; *P<0.05, **P<0.01 compared to lane 1.
Importantly, such HbF silencing factors were not absorbed by anti-GM-CSF antibody (Figure 5C), suggesting that HbF silencing factors are distinct from GM-CSF, which we showed has a negative consequence on HbF expression. 17 Our current model of the role of leukocytes in HU-mediated HbF expression in erythroid cells is summarized in Figure 6. We and others have shown that the soluble guanylate cyclase (sGC)-cGMP pathway plays a role in HU-induced HbF expression. [33][34][35] HU is assumed to augment HbF levels at least in part by generating nitric oxide and through the sGC-cGMP pathway. Although the sGC-cGMP pathway is also implicated in erythroid cells as a signaling mechanism for chemically induced HbF expression, it is still unknown whether extracellular signals from other lineage cells are transduced to erythroid cells and whether such signals are capable of modulating HbF expression by intracellular signaling pathways. In this study, we have clearly shown that HbF levels in SCD patients, whether they are receiving HU treatment or not, are closely affected by leukocyte count in peripheral blood, presumably because of HbF silencing factors. As leukocyte count generally decreases in response to HU administration in SCD patients, it is reasonable to speculate that in addition to the nitric oxide/sGC-cGMP axis, HU might also increase HbF expression in SCD patients by inhibiting the secretion of HbF silencing factors from leukocytes. Collectively, there may be at least three groups of HbF regulatory proteins or pathways involved in HbF expression under HU therapy. One is GM-CSF, which is supposed to cause leukocytosis in SCD 16 and has a negative consequence on HbF expression; 17 myeloid cytokines such as G-CSF or GM-CSF may also be involved in inducing vaso-occlusive crisis. 36,37 Second, HU is shown to stimulate HbF expression at least in part through the nitric oxide/sGC-cGMP axis. [33][34][35] This is consistent with prior in vitro and in vivo findings showing that nitric oxide is generated from HU. 38,39 A third mechanism includes HbF silencing factors that are presumably released mainly from leukocytes, as reported in this study (Figure 5). HbF silencing factors may be cytokines or chemokines; further studies are necessary to precisely characterize them.
Conclusion
This study has shown that HU-induced HbF expression is regulated at least in part by the mechanisms controlling leukocyte counts, and that HbF silencing factors that are secreted, possibly by leukocytes, may be involved in HU-regulated HbF expression in SCD. Thus, this study provides an important clue to the mechanisms by which HbF expression is regulated in the context of SCD patients receiving HU. It would be interesting to compare serum levels of cytokines or chemokines in SCD patients, both with and without HU treatment. Further insight into HbF silencing factors might help us clarify the mechanisms underlying HU resistance as seen in a subset of SCD patients. 15,18

Figure 6. Possible model for the mechanisms that regulate HU-induced HbF expression. HU has been shown to stimulate HbF expression in erythroid cells through the nitric oxide/sGC/cGMP pathway. [33][34][35] HU is expected to suppress the proliferation of leukocytes by inhibiting ribonucleotide reductase. 32
Harbour porpoises react to low levels of high frequency vessel noise
Cetaceans rely critically on sound for navigation, foraging and communication and are therefore potentially affected by increasing noise levels from human activities at sea. Shipping is the main contributor of anthropogenic noise underwater, but studies of shipping noise effects have primarily considered baleen whales due to their good hearing at low frequencies, where ships produce most noise power. Conversely, the possible effects of vessel noise on small toothed whales have been largely ignored due to their poor low-frequency hearing. Prompted by recent findings of energy at medium- to high-frequencies in vessel noise, we conducted an exposure study where the behaviour of four porpoises (Phocoena phocoena) in a net-pen was logged while they were exposed to 133 vessel passages. Using a multivariate generalised linear mixed-effects model, we show that low levels of high frequency components in vessel noise elicit strong, stereotyped behavioural responses in porpoises. Such low levels will routinely be experienced by porpoises in the wild at ranges of more than 1000 meters from vessels, suggesting that vessel noise is a, so far, largely overlooked, but substantial source of disturbance in shallow water areas with high densities of both porpoises and vessels.
Surprisingly, given their hearing abilities, several studies have demonstrated that harbour porpoises do show what appears to be avoidance behaviour in response to vessels at long ranges 22,23 , where the radiated noise, rather than the physical presence of the vessel, is more likely to deliver the negative stimulus. Many small toothed whale species inhabit shallow waters, which are high-productivity areas 24 that have some of the heaviest vessel traffic densities of any marine habitats 17 . However, the shallow water environment acts as a steep high-pass filter where low-frequency sounds do not propagate well 25 . This, in combination with the poor low-frequency hearing of porpoises, therefore suggests that porpoises may respond to noise energy at mid or high frequencies that are present in vessel noise 26,27 , but are currently not considered when estimating noise impact on cetaceans 28,29 .
Here, we test this hypothesis by studying the behaviour of captive harbour porpoises in a net pen exposed to the noise of passing vessels. We show that a strong, stereotyped behavioural response in the form of porpoising is triggered by low levels of the high-frequency components of vessel noise that can occur at more than 1000 meters from the source. The implication is that thousands of porpoises in shallow water habitats with dense vessel traffic may potentially face daily, repeated noise-induced behavioural disruptions, which is a potentially large, but so far overlooked, conservation issue.
Results
Vessel noise from 133 boats was recorded at two stations across the net pen a total of 225 times during the study period, along with observations of porpoise behaviour. Implementation of a set of selection criteria (see Methods) reduced the total number of good quality recordings to 80 (14 registered at the left station and 66 at the right station). The selected recordings included noise from vessels of various size and design: from sailing boats moving on engine, through 4-10 m recreational boats with outboard engines, to fishing boats up to tens of meters long with inboard engines, and a single military vessel. In 22 cases (27.5%), a very robust and stereotypical reaction, in the form of porpoising (Supplementary Vid. S1), was observed when different boats were passing the net-pen complex.
Echosounder pulses. First, we tested if the reactions were in fact triggered by vessel noise and not echosounders. Among 80 recorded vessels, 31 (39%) had a high-frequency (200 kHz) echosounder turned on (no other echosounders were recorded). A distinct reaction of the porpoises was observed in the presence of 11 of them (35%), but no statistically significant relationship between the presence of echosounder pulses and reaction was shown (p-value BHY = 0.9464; Supplementary Tab. S1). The effect of the cumulative sound exposure levels (cSELs) of the echosounder signals was then examined to test whether the summed exposure level of several transients at 200 kHz could explain the initiation of porpoising (Supplementary Fig. S1). The cSELs in the 30-second-long time windows with most energy reached values between 105 and 145 dB re 1 μPa²s. The commencement of porpoising did not coincide with the largest changes in the cSEL, nor with a particular cSEL value (Supplementary Fig. S1), and there was no significant difference between the cSELs of echosounder pulses from vessels that did, and did not, elicit the response (p-value BHY = 0.8170; Supplementary Tab. S1). The high-frequency echosounders were therefore unlikely to have caused the observed porpoising reactions, which suggests that the vessel noise itself triggered the responses.
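The cSEL used above is an energy sum of per-pulse exposure levels. A minimal sketch, assuming per-pulse SELs in dB re 1 μPa²s are already in hand (the values below are illustrative, not measured):

```python
import numpy as np

def cumulative_sel(sel_db):
    """Energy-sum per-pulse SELs (dB re 1 uPa^2 s) into a cumulative SEL."""
    sel_db = np.asarray(sel_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (sel_db / 10.0)))

# Hypothetical pings in one 30-s window; the loudest pulses dominate,
# and pulses ~20 dB below the maximum add less than 1 dB to the total.
print(cumulative_sel([130, 128, 120, 110]))   # ~132.4 dB re 1 uPa^2 s
```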
Broadband root-mean-square sound pressure level. The root-mean-square (rms) measure of sound pressure critically depends on the length of the time window over which the squared pressures are averaged 30 . Here, sliding time windows containing noise from passing vessels were positioned so as to contain maximum energy in a 3-second and a 30-second window. Additionally, segments of 3 seconds before and 30 seconds around the time of porpoise reaction were selected. The differences in rms pressure level between averaging windows of different durations and positions with respect to noise energy and time of reaction were negligible (Fig. 1a,c) and statistically insignificant (p = 0.7869). Therefore, a 30-second-long averaging window was used for all further analyses.
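For a given analysis window, the rms sound pressure level reduces to 20·log10 of the window's rms pressure. A minimal sketch, assuming a calibrated pressure series in μPa and a known sample rate:

```python
import numpy as np

def rms_spl(segment):
    """Broadband rms sound pressure level (dB re 1 uPa) of one window.
    `segment` is a calibrated pressure time series in uPa."""
    segment = np.asarray(segment, dtype=float)
    return 20.0 * np.log10(np.sqrt(np.mean(segment ** 2)))

# Comparing averaging windows for the same passage, e.g. at fs = 500 kHz:
# rms_spl(noise[i0:i0 + 3 * fs])   # 3-s window
# rms_spl(noise[j0:j0 + 30 * fs])  # 30-s window
```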
We proceeded to test if broadband rms received levels could explain the reactions as suggested by Southall et al. 19 in their noise exposure criteria. Counter to this prediction, the broadband rms level was higher when porpoises showed no reaction to vessel noise (Fig. 1a). Results of a generalised linear mixed-effects model (GLMM) corroborated this finding by demonstrating that the association between the broadband rms sound pressure level and probability of reaction was not statistically significant (p-value BHY = 0.8414; Supplementary Tab. S1). This suggests that certain spectral components, rather than the overall received level, would trigger the response.
Spectral characteristics of the vessel noise. Power spectral density analysis of the vessel noise showed that it was broadband with most power at frequencies below 10 kHz (Supplementary Fig. S2). On average, the octave levels at frequencies greater than 500 Hz were between 20 and 60 dB above the porpoise audiogram (Fig. 2). Levels below 250 Hz were likely below the hearing threshold of porpoises 21,31 . To identify the frequency components of the vessel noise that were most likely to cause the behavioural response of the porpoises, a GLMM was performed for each of the 12 octave bands with centre frequencies between 31.5 Hz and 63 kHz. Additionally, we performed two more GLMMs using third-octave bands with centre frequencies at 63 and 125 Hz, proposed by the Marine Strategy Framework Directive (MSFD) as indicators of general noise levels from continuous sources such as boats 29 . The mean, standard deviation (SD), median and interquartile ranges for all variables included in the GLMMs are shown in Supplementary Tab. S2. The results showed a statistically non-significant relationship between the porpoise reaction and both the 63- and 125-Hz third-octave bands (p-value BHY = 0.8414 and 1.0000, respectively; Supplementary Tab. S1). In contrast, results of the GLMMs for the octave bands indicated a statistically significant, positive association between the probability of porpoising and rms levels in bands with centre frequencies at 500, 2000, 16000 and 31500 Hz (p-value BHY = 0.0276, 0.0348, 0.0331, 0.0331, respectively; see odds ratios (OR) and p-values for all variables in Supplementary Tab. S1). Moreover, a two-dimensional biplot (see Methods and ref. 32), representing 78% of the variance of the data, revealed a homogeneous display of observation points with no groups or extreme values. However, the biplot indicated two clear groups of vectors representing the octave bands with centre frequencies between 31.5 and 125 Hz and, separately, from 0.25 to 63 kHz (Fig. 3). Based on these findings, the correlated bands were merged into broader bandwidths, low- (31.5-125 Hz) and high-frequency (0.25-63 kHz), and their effects on porpoise reaction were tested. The results showed a statistically non-significant relationship between porpoise probability of reaction and sound pressure level at low frequencies (p-value BHY = 0.8414; Supplementary Tab. S1). However, a statistically significant impact of sound at high frequencies on porpoise behaviour was detected, indicating that higher levels of noise at these frequencies lead to an increase in the probability of reaction [OR = 1.37 (95% CI 1.09-1.73), p-value BHY = 0.0273; Supplementary Tab. S1].
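The BHY-corrected p-values quoted above reflect a false-discovery-rate adjustment across the twelve per-band GLMMs. A sketch of that correction step, assuming the Benjamini-Yekutieli procedure as implemented in statsmodels and purely hypothetical raw p-values:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values, one per octave-band GLMM (12 bands)
raw_p = np.array([0.004, 0.006, 0.005, 0.005, 0.21, 0.45,
                  0.62, 0.80, 0.91, 0.33, 0.12, 0.07])

# 'fdr_by' = Benjamini-Yekutieli FDR control, valid under dependence
reject, p_by, _, _ = multipletests(raw_p, alpha=0.05, method='fdr_by')
print(list(zip(p_by.round(4), reject)))
```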
The prevalence of high-frequency bands as explanatory variables prompted us to test if the M-weighting proposed by Southall et al. 19 could be used as a simple response variable for practical implementation. The idea appears logical because the rms sound pressure level computed over a high-pass-filtered version of the vessel noise to some degree matches the high-frequency hearing of porpoises 31 . A box plot was created to examine the distributions of the rms pressure levels of high-frequency M-weighted (cut-off frequencies: 200 Hz-100 kHz 19 ) noise that did and did not elicit a behavioural response (Fig. 1b). Compared to the non-weighted data (Fig. 1a), a clear change in the level distributions was observed, with a higher level of M-weighted noise coinciding with a higher probability of porpoise response. This observation was supported by the GLMM results [OR = 1.41 (95% CI 1.12-1.78), p-value BHY = 0.0273; Supplementary Tab. S1].
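A rough sketch of how such an M-weighted rms level might be approximated in practice; a 4th-order Butterworth band-pass between the stated cut-offs is a simplification of the published M-weighting curve, and the sampling rate is assumed high enough (well above 200 kHz) to support the 100 kHz edge:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def m_weighted_rms_level(p, fs, lo=200.0, hi=100_000.0):
    """Approximate high-frequency-cetacean M-weighting with a band-pass
    (a simplification of the actual weighting curve), then rms level.
    `p` is a calibrated pressure series in uPa; requires fs > 2*hi."""
    sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
    x = sosfiltfilt(sos, p)            # zero-phase filtering
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))
```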
Discussion
The most common and dominant contributors of anthropogenic noise in water are ships, which radiate noise continuously at high levels 16,17 . Despite this, very little attention has been given to the effects of ship noise on small toothed whales that often inhabit waters with considerable vessel activity 33 . An argument for dismissing effects of vessel noise on small toothed whales is their poor hearing at low frequencies 31 , where large vessels radiate the most noise power 7 . Nevertheless, porpoises have been shown to avoid vessels at substantial ranges, suggesting that they may in fact respond to low levels of vessel noise 22,23 . To test that hypothesis, we here used a stereotyped behavioural response as a measure of behavioural impact 34 of a large number of vessel passes recorded with broadband, calibrated hydrophones. We show that despite long-term residence in a harbour, an environment that is inseparable from man-made noise, the four porpoises reacted by porpoising to almost 30% of the boats whose noise was recorded, lending little support for habituation effects.
Current recommendations of continuous underwater noise exposure criteria often stipulate a certain broadband rms level that cannot be exceeded 19 . Our data imply (Fig. 1a; Supplementary Tab. S1) that broadband rms levels cannot be used to predict behavioural responses to vessel noise of harbour porpoises, a high-frequency species 31 . This, in turn, suggests that exposure levels in certain spectral bands may be responsible for the observed responses.
An example of specific spectral bands proposed for quantifying the impact of vessel noise on marine life is that of the European MSFD recommending that levels in third-octave bands centred around 63 and 125 Hz serve as indicators of good environmental status 28,29 . However, we demonstrate that received levels in these two low-frequency bands cannot explain the observed behavioural responses, and the biplot analysis (Fig. 3) reveals a very weak association between the low- (31.5-125 Hz) and the medium- to high-frequency (0.25-63 kHz) octave bands of the noise, where porpoise hearing is much better 31 . The latter finding is consistent with recent recordings of larger vessels in shallow water 27 . The proposed 63- and 125-Hz bands of the MSFD are therefore unsuited for establishing exposure limits for behavioural effects of vessel noise on porpoises and likely also for other small toothed whales, and they are in general poor proxies for noise loads at higher frequencies in shallow water environments 27 .
Rather, we show that higher levels of medium- to high-frequency components (0.25-63 kHz octave bands) of vessel noise significantly increase the probability of porpoising (Supplementary Tab. S1; Fig. 2). Thus, the porpoises responded to increases in the part of the noise spectrum where their hearing is good 21,31 , implying that the onset of a behavioural response is triggered by the perceived loudness of the sound 35,36 . This finding lends weight to the recent proposal by Tougaard et al. 37 that behavioural responses of porpoises can be predicted from a certain level above their threshold at any given frequency. However, the extent to which noise affects an animal's behaviour may also be determined by the background noise 38 . Our results suggest that behavioural and environmental covariates do affect the response threshold level of harbour porpoises, as the mean onset level of 123 dB re 1 μPa (rms, M-weighted; ranging from 113 to 133 dB re 1 μPa) for the porpoising behaviour is only slightly above the levels of noise that did not trigger the reaction (120 dB re 1 μPa, rms, M-weighted; ranging from 108 to 138 dB re 1 μPa). Nevertheless, such low levels are routinely encountered by porpoises in the wild from passing vessels at ranges of more than 1 km 27,39 , which can then explain the reported vessel avoidance of porpoises at considerable ranges 22,23 . Consequently, if wild porpoises respond to the same levels as documented here 34 , vessel noise may in heavily trafficked areas have a large, but so far undetected, effect on porpoises and potentially also on other small toothed whales 40,41 .
Porpoising and other behavioural responses to ship noise may be short-term, but they come at the cost of the energetic investment in moving, lost opportunities in foraging and social behaviour, as well as potential abandonment of calves. Thus, repeated vessel-noise-induced short term behavioural disruptions, as documented here, may have fitness consequences for porpoises in densely trafficked areas. This hypothesis can be tested with onboard acoustic and multi-sensor tags where behavioural states, locomotion effort, feeding success, and ventilation rates can be logged in concert with noise exposure levels [e.g. 26].
We conclude that porpoises respond to low levels of medium- to high-frequency vessel noise. This finding is consistent with observations of ship avoidance at sea 34 , and points to a potentially large, but so far largely overlooked, conservation problem in areas of dense shipping and high porpoise numbers. The 63- and 125-Hz bands proposed in the European MSFD are not suited as measures of behavioural disturbance of porpoises, whereas filtering using M-weighting 19 , loudness 35 or the audiogram 37 seems to provide a meaningful proxy for estimating behavioural disturbance, with a tentative 50% onset at 123 dB re 1 μPa (rms, M-weighted) averaged over 30 s. Before implementation in mitigation measures and conservation efforts, we recommend that such a threshold should be tested thoroughly on a larger number of animals in the wild.

Methods

used during the experiments. Different amplifiers and gain settings resulted in self-noise values varying by as much as 6 dB; therefore, only the maximum self-noise values were used.
Data collection. The study took place between
Sound recording was started as soon as a boat came into view. Background ambient noise was recorded opportunistically when no boats were observed, but to minimise contribution of vessel noise in the ambient noise recordings, only audio files recorded 30 minutes before or 30 minutes after a vessel passage were later analysed. Hence, the number of background noise recordings selected for analyses varied from 2 to 6 per day, depending on the amount of vessel traffic.
Observations of porpoise behaviour were made simultaneously with vessel noise recordings. Response of the animals to boat presence was classified into two categories: "reaction" or "no reaction". "Reaction" to noise was defined to occur when one or more animals suddenly and dramatically increased their swimming speed and sprayed the water upon surfacing in a stereotyped manner, in a behaviour coined "porpoising" (see Supplementary Vid. S1). This type of behavioural response is commonly used in studies of noise influence on captive porpoises [e.g. 42,43]. A "no reaction" response was defined as a lack of porpoising, while the porpoises may have responded in other ways, inconsistent with the definition of porpoising.

Data processing and signal analysis. A number of selection criteria were applied to the dataset in order to analyse the most representative levels of noise affecting the porpoises. For each vessel passage, only data from the station closest to the vessel were considered. Furthermore, recordings of boats that never got within 100 m of the station were excluded from further analysis. The selected files were then visually inspected using Adobe Audition 3.0 (Adobe Systems) to reject all recordings with clipped vessel noise or intense electrical noise.
Further sound analysis was custom-programmed in Matlab R2012b (MathWorks Inc.). All measurements were corrected for the frequency response of the hydrophones and the amplifiers.
Prior to further processing, relevant background and vessel noise sections of the audio files were extracted. For each background noise recording, the 30-second-long segment with the least broadband energy was identified visually using a spectrogram. For each vessel noise recording, a 2-minute-long segment with the most energy was selected. Within these segments, the 3-second- and 30-second-long time intervals with the highest energy contents were determined from bandpass filtered data using custom-written Matlab code. A bandpass filter with cut-off frequencies at 2-100 kHz (4th order, Butterworth) was used to eliminate the contribution of electrical noise or wave noise (the transients resulting from wave actions on the pontoons), and to exclude porpoise sounds from the fragments used to define the 3-second and 30-second windows. Sequences of 3 seconds before and 30 seconds around the time of harbour porpoise reaction were also selected to examine the noise recorded directly before the reaction was noted. Those fragments often differed in their levels from the ones with the most energy.
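The window selection reduces to finding the fixed-length window with the highest energy in band-pass filtered data. A minimal Python/SciPy sketch of this step (standing in for the authors' custom Matlab code; the function name, defaults and array conventions are illustrative assumptions, and the 100-kHz band edge presupposes a sampling rate above 200 kHz):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highest_energy_window(x, fs, win_s, band=(2e3, 100e3), order=4):
    """Locate the win_s-long window with the most energy, judged on
    band-pass filtered data (Butterworth), and return the start index
    together with the corresponding *unfiltered* samples."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    n = int(round(win_s * fs))
    # Cumulative sum of squared samples -> all window energies in O(N).
    c = np.concatenate(([0.0], np.cumsum(y ** 2)))
    start = int(np.argmax(c[n:] - c[:-n]))
    return start, x[start:start + n]
```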
The selected segments of unfiltered noise were low-pass filtered at 100 kHz (4th order, Butterworth) to avoid the inclusion of omnipresent porpoise clicks in the level calculations. The broadband noise level was then quantified as the root-mean-square sound pressure level over time windows of 3 seconds and 30 seconds. The four intervals (i.e., 3 seconds and 30 seconds with maximum-energy vessel noise, and 3 seconds before and 30 seconds around the time of porpoise reaction) were used to explore the effects of averaging windows, differing in length and determination method, on the rms measure of continuous, but varying, noise 30 .
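As a rough illustration of this level computation, a hedged Python sketch (names and defaults are assumptions, not the authors' code) that low-pass filters a calibrated pressure series and returns rms levels of consecutive windows:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def windowed_spl(x, fs, win_s, cutoff=100e3, order=4):
    """rms sound pressure levels (dB re 1 uPa) of consecutive
    win_s-long windows of a calibrated pressure series x (in uPa),
    low-pass filtered to exclude porpoise clicks."""
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    n = int(round(win_s * fs))
    frames = y[: len(y) // n * n].reshape(-1, n)
    # 10*log10(mean square) equals 20*log10(rms).
    return 10.0 * np.log10(np.mean(frames ** 2, axis=1))
```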
The detailed spectral features of the recorded noise were examined by means of power spectral density (PSD) analysis using Welch's method (8192 FFT points, 61- or 122-Hz bin width, non-overlapping rectangular window). The PSD levels were shown as normalised histograms of dB levels (histogram bin width of 1 dB) in 100-Hz frequency bins [based on 44]. The rms sound pressure levels were also computed in 36 third-octave bands (centre frequencies from 25 to 80000 Hz) according to ANSI standard S1.6-1984 using a filter bank (modified filtbank Matlab function provided by Christophe Couvreur, Faculté Polytechnique de Mons, Belgium). The third-octave bands were later combined into 12 octave bands (OL; centre frequencies from 31.5 to 63000 Hz) to simplify the number of variables fed to the GLMM.
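A hedged sketch of the spectral step in Python: here the third-octave band levels are approximated by integrating a Welch PSD over each band, rather than by the time-domain filter bank the authors describe; the base-two centre-frequency convention f_c = 1000·2^(k/3) is an assumption.

```python
import numpy as np
from scipy.signal import welch

def third_octave_levels(x, fs, fmin=25.0, fmax=80e3):
    """Approximate third-octave band levels (dB) by integrating a
    Welch PSD (rectangular window, no overlap) over each band."""
    f, psd = welch(x, fs=fs, window="boxcar", nperseg=8192, noverlap=0)
    centres, levels = [], []
    # Nearest base-two third-octave centre at or below fmin.
    fc = 1000.0 * 2.0 ** (np.round(np.log2(fmin / 1000.0) * 3) / 3)
    while fc <= fmax:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        band = (f >= lo) & (f < hi)
        if band.sum() >= 2:
            power = np.trapz(psd[band], f[band])  # integrate PSD
            centres.append(float(fc))
            levels.append(10.0 * np.log10(power))
        fc *= 2 ** (1 / 3)
    return np.array(centres), np.array(levels)
```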
Reported sensitivity of harbour porpoises to mid- and high-frequency impulsive sounds 45,46 prompted us to test the effect of the echosounders at 200 kHz. Cumulative sound exposure levels (cSELs) were used as a proxy for accumulating received levels 47 from all the pulses in 30-second-long periods of high-pass filtered noise (Supplementary Fig. S1a). The echosounder pings were detected on 180-220 kHz passband filtered noise (4th order, Butterworth) using an automatic routine coded in Matlab. Only pulses above 114 dB re 1 μPa (pp) were selected for further analysis. Pulses below the set threshold were rarely detectable in the 30 seconds with maximum-energy vessel noise. Moreover, they would have made a negligible contribution (<1 dB) to the cSELs, which were dominated by the highest-energy pings (see Supplementary Fig. S1a). The output of the automatic routine was verified manually in Matlab using a custom-made supervised detector. The sound exposure level (SEL) of each detected echosounder pulse was calculated from the high-pass filtered recordings (4th order, Butterworth, 160 kHz) in a time window containing 90% of the energy of the signal 30,47 .
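The SEL and cSEL computations follow standard definitions; a minimal sketch, assuming a calibrated pulse waveform in μPa and taking the 90% energy window as the interval between the 5% and 95% points of the cumulative energy:

```python
import numpy as np

def sel_90(pulse, fs):
    """Sound exposure level (dB re 1 uPa^2 s) of one detected ping,
    over the window holding 90% of its energy (5%..95% points of the
    cumulative energy). `pulse` is calibrated pressure in uPa."""
    e = np.cumsum(pulse.astype(float) ** 2) / fs   # uPa^2 * s
    lo = np.searchsorted(e, 0.05 * e[-1])
    hi = np.searchsorted(e, 0.95 * e[-1])
    return 10.0 * np.log10(e[hi] - e[lo])

def cumulative_sel(sels_db):
    """cSEL of several pings: energies add, not the dB values."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(sels_db) / 10.0)))
```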
Furthermore, following Southall et al. 19 , a marine mammal frequency-weighted (M-weighted) rms sound pressure level was computed over the low-pass filtered (4th order, Butterworth, 100 kHz) vessel noise data. The M-weighting function is analogous to the C-weighting function from human audiometry and was here applied with the high-pass filter settings recommended by Southall et al. 19 for the functional hearing group of high-frequency cetaceans (i.e. with a cut-off frequency of 200 Hz).
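A frequency-domain sketch of such weighting, assuming the commonly cited band-pass form of the M-weighting function with corner frequencies f_low = 200 Hz and f_high = 180 kHz for high-frequency cetaceans; both the functional form and the upper corner are assumptions here and should be checked against the original recommendations before use:

```python
import numpy as np

F_LOW, F_HIGH = 200.0, 180e3   # assumed corner frequencies (Hz)

def m_weight_db(f):
    """Assumed M-weighting gain (dB) at frequency f (Hz):
    20*log10( f_hi^2 f^2 / ((f^2 + f_lo^2)(f^2 + f_hi^2)) )."""
    f = np.asarray(f, dtype=float)
    r = (F_HIGH ** 2 * f ** 2) / ((f ** 2 + F_LOW ** 2) * (f ** 2 + F_HIGH ** 2))
    return 20.0 * np.log10(np.maximum(r, 1e-30))  # avoid log(0) at DC

def m_weighted_rms_db(x, fs):
    """Apply the weighting in the frequency domain and return the rms
    level (dB re 1 uPa) of a calibrated pressure series x (uPa)."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec *= 10.0 ** (m_weight_db(f) / 20.0)
    y = np.fft.irfft(spec, n=len(x))
    return 10.0 * np.log10(np.mean(y ** 2))
```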
Statistical analysis.
Statistical analysis was carried out using R statistical package version 2.15.2 (www.R-project.org). We performed a series of statistical tests to uncover the sound elements triggering the porpoise reaction to vessel noise. First, we tested for differences between four different averaging windows (i.e., 3 seconds and 30 seconds with maximum energy vessel noise, and 3 seconds before and 30 seconds around the time of porpoise reaction) using the Wilcoxon signed-rank test. Subsequently, the direct relationship between the broadband rms level and the probability of porpoise reaction, as well as the effect of the presence and level of echosounder pulses in the vessel noise, were assessed using GLMMs, where the occurrence of porpoise reaction to noise was the outcome. These three models were adjusted for the random effects of day, as a grouping factor, and the station, since for a given vessel only data from the station closest to the vessel were considered.
The relationship between the response of porpoises and the received rms sound pressure level in different frequency bands (63- and 125-Hz third-octave bands, and 31.5-63000 Hz octave bands) was explored in a set of GLMMs with the presence of porpoise reaction to noise as the outcome. Due to the collinearity between third-octave and octave bands, a different model was fitted for each of the bands. Additionally, a more specific association between different octave-band levels was studied by means of a biplot 32 . The initial dataset matrix consisted of the received rms sound pressure levels in the different frequency bands, with each row representing an individual recording (i.e. an observation) and each column containing a different frequency band (i.e. a variable). Once the matrix was centred by column (non-standardised), we factorised it by means of singular value decomposition and took the first two singular vectors to calculate the coordinates of a two-dimensional biplot. In the biplot, each observation (individual noise recording) is represented as a point in a two-dimensional space and each variable (frequency band) as a vector. The spatial disposition of the observation points shows the correlation structure of the observations in the two-dimensional space defined by the singular vectors; a short distance between two observations indicates a stronger association. Similarly, the angle between two vectors represents the correlation of the corresponding variables, and the length of a vector represents the standard deviation of the variable: the smaller the angle between two vectors, the stronger the correlation between the respective variables. This visual representation of the correlation structure of the data allowed us to find associations between noise recordings or frequency bands, and to combine octave-band levels into broader bandwidths. The resulting groups of correlated bands were subsequently used as individual explanatory variables in the GLMMs. Another GLMM was performed to test the effect of the M-weighted rms sound pressure level 19 . A directed acyclic graph was used to identify a priori the best possible adjustment strategy for potential confounders 48 . The GLMMs [63- and 125-Hz third-octave bands, 31.5-63000 Hz octave bands, low- and high-frequency bands, and M-weighted noise (rms)] were adjusted for the broadband rms sound pressure level of the vessel noise, as it could potentially be related to the octave-band levels and hence influence the probability of reaction of the animals. The models were also adjusted for the random effects of station and day.
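The biplot construction amounts to a truncated SVD of the column-centred data matrix; a minimal sketch (one common scaling convention among several; names are illustrative):

```python
import numpy as np

def biplot_coords(X):
    """X: rows = noise recordings, columns = octave-band levels.
    Returns 2-D coordinates for observations (points) and variables
    (arrows), from an SVD of the column-centred, non-standardised X."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    points = U[:, :2] * s[:2]   # recordings in the plane of the first
    arrows = Vt[:2].T           # two singular vectors; bands as vectors
    return points, arrows
```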
The initial level of significance was 0.05. However, to prevent false discoveries due to multiple comparisons in the GLMMs, the significance level of each test was re-evaluated with the Benjamini-Hochberg-Yekutieli (BHY) procedure 49 , taking into account the 20 different models tested.
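For completeness, a small sketch of the Benjamini-Yekutieli step-up procedure over the m = 20 model p-values (illustrative function name):

```python
import numpy as np

def by_significant(pvals, alpha=0.05):
    """Benjamini-Yekutieli step-up procedure: boolean mask of
    hypotheses rejected while controlling the FDR at `alpha` under
    arbitrary dependence between tests."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))        # harmonic correction
    thresh = np.arange(1, m + 1) * alpha / (m * c_m)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                       # reject k smallest
    return reject
```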
Simultaneous abdominal wall defect repair and Tenckhoff catheter placement in candidates for peritoneal dialysis
Introduction The presence of pre-existing abdominal wall defect (AWD) could represent a potential contraindication for peritoneal dialysis (PD) treatment. We report the results of our 6-year experience involving simultaneous repair of pre-existing AWD and catheter insertion for PD. Methods Patients with estimated glomerular filtration rate (e-GFR) 7–10 ml/min attending a single nephrology clinic between January 2008 and December 2014 were evaluated. Simultaneous AWD repair and catheter placement was performed. For inguinal (IH) or umbilical hernia (UH), a prolene mesh repair technique was adopted. Except for one case of total anaesthesia, the surgical procedure was performed under either spinal or local anaesthesia. Ceftazidime alone or in association with quinolones was administered 1 h before surgery in a single dose. Patients were discharged 2 days after surgery, and returned to the clinic twice during the 1st week for peritoneum washing (first volume of peritoneal dialysis solution: 300 ml). From week 3, volume (2000 ml) and dwells were personalized according to the patient’s clinical condition; options were: incremental PD, standard PD, or continuous cycling PD. Surgical follow-up was planned at 1, 6, and 12 months. Results Peritoneal catheters were inserted in 170 patients. IH, UH and incisional hernia were found in 18, 2 and 1 patients, respectively. IH was bilateral in 4 patients; concomitant IH and UH occurred in 1 patient. There were no deaths, nor intra-operative complications apart from scrotal haematoma in 1 patient. Over a mean follow-up of 551 days (range 342–1274) no hernia recurrence was registered and the peritoneal catheter continued functioning without problems. Conclusions Simultaneous AWD repair and peritoneal catheter placement seems a reliable and safe surgical procedure that allows patients with AWD to benefit from PD treatment.
Introduction
In patients on peritoneal dialysis (PD), the intra-abdominal pressure (IP) increases due to the flow of dialysis fluid into the peritoneal cavity. The increase of IP is proportional to the quantity of liquid introduced [1][2][3], and is frequently the cause of hernia. However, even a normal IP pressure may be dangerous for the abdominal wall in patients with increased body mass index, polycystic kidney disease, in those who engage in certain types of physical activity, as well as in multiparous women [1][2][3][4][5]. Therefore, PD is regarded as the primary cause of occurrence of abdominal wall defect (AWD) and, on the other hand, the presence of pre-existing AWD is considered a potential contraindication for PD [1][2][3]. However, the latter limitation is debated. To help clarify this issue, we report the results of our 6-year experience involving simultaneous repair of pre-existing AWD and catheter insertion for PD.
Materials and methods
Patients attending a single nephrology clinic between January 2008 and December 2014 were evaluated. Patients with estimated glomerular filtration rate (e-GFR) between 7 and 10 ml/min underwent physical examination by a dedicated team of nephrologists, surgeons and skilled nurses. AWD such as inguinal, umbilical and incisional hernia were carefully checked for by surgeons. In the presence of AWD, simultaneous repair of it and peritoneal catheter placement was performed in a one-stage procedure. AWD repair preceded peritoneal catheter insertion. The surgical procedure was performed under either spinal or local anaesthesia.
In cases of inguinal hernia, the modified Lichtenstein technique was adopted [6,7]. In brief, patients underwent tension-free hernioplasty. The inguinal canal was prepared and the hernial sac managed according to the Lichtenstein technique. The ilioinguinal nerve, iliohypogastric nerve and genital branch of the genito-femoral nerve were prepared and preserved. A semi-absorbable lightweight prolene mesh, 10 × 6 cm (ULTRAPRO®, Ethicon Products, Somerville, NJ, USA), was placed on the inguinal floor, overlying the pubic tubercle by 2 cm, and fixed with a nonabsorbable suture. After repositioning the external oblique muscle and Scarpa's fascia, the skin was closed with a nonabsorbable continuous suture. In the case of umbilical hernia, the procedure was conducted according to the technique proposed by Stabilini [8].
The peritoneal catheter was inserted through longitudinal incision 2-3 cm below the umbilical transversal line. The catheter tip was located in the Douglas root. The proximal cuff was fixed to the peritoneum with an interrupted absorbable suture. The fascia was closed with an absorbable suture. The distal cuff was tied to the anterior face of the rectum muscle fascia. The catheter skin exit was directed downwards or laterally. The catheter was flushed with 20 ml of normal saline to ensure patency and correct functioning. The skin was closed with a non absorbable continuous suture.
Ceftazidime alone or in association with quinolones was administered as a single dose 1 h before surgery.
Patients were discharged 2 days after surgery, and returned to the Nephrology clinic twice during the first week for peritoneum washing (mean initial volume of dialysis solution: 300 ml). The volume of washing solution was progressively increased during the following 3 weeks (from 1000 to 1500 to 2000 ml at weeks 1, 2, and 3, respectively). From week 3, volume (2000 ml) and dwells were personalized according to the patient's clinical condition; options were: incremental PD, standard PD, or continuous cycling PD (CCPD). Surgical follow-up was planned at 1, 6, and 12 months. Informed consent was obtained from all participants in the study.
Results
During the study period, peritoneal catheters were placed in 170 patients (94 males and 76 females). Among these patients, inguinal hernia, umbilical hernia and incisional hernia were found in 18, 2 and 1 patients, respectively. Inguinal hernia was bilateral in 4 patients (3 males; 1 female); concomitant inguinal hernia and umbilical hernia occurred in 1 patient. Clinical characteristics of patients with AWD are shown in Table 1. Mean age was 61 ± 11 years (range 35-80); 50 % were aged <65 years. Mean body mass index was 24.7 ± 2.6; 6 patients were overweight, and the remainder were of normal weight. Diabetes was present in 6 patients.
The mean operative time was 55 min (range 40-130). There were no deaths, nor intra-operative complications apart from scrotal haematoma in 1 patient who was conservatively managed and recovered within 1 month. During a mean follow-up of 551 days (range 342-1274) no hernia recurrence was registered and the peritoneal catheter continued to function without any problems.
Discussion
In our cohort of 170 patients who had been admitted to a single nephrology unit for initiation of PD, the rate of occurrence of AWD was 15 %. Inguinal hernia was the most common AWD, being found in 13 % of patients. This incidence is similar to that reported elsewhere [9,10], while the incidence of umbilical hernia was lower than in a previous report (>60 %) [10]; the higher incidence in that case could be due to the fact that many of those patients were obese. It is commonly thought that AWD is more common in older people. Of note, we found AWDs in some of our younger patients. This finding is in line with other reports where AWDs were found in PD patients younger than those recruited in the present study [5,9,10].
The results of this study are clinically relevant. They suggest that simultaneous AWD repair and peritoneal catheter placement is, on the one hand, a reliable surgical procedure and, on the other hand, that it may represent a valid option for critical patients. Indeed, the peritoneal catheter continued to function efficiently and no recurrence of AWD was registered during the long follow-up of our study. These findings suggest that repair of pre-existing AWD does not interfere with endurance of the peritoneal catheter and does not affect dialysis efficacy. It is interesting that no recurrence of AWD was registered in our patients during PD treatment. Recurrence of AWD has been related to uraemia-dependent muscle frailty; however, it cannot be excluded that an asymptomatic AWD pre-existed PD initiation.
Our data strengthen the notion that a one-stage surgical procedure of simultaneous repair of AWD and peritoneal catheter insertion may offer clinical advantages to patients in some circumstances. In the case of late referral of a patient with advanced renal failure and concomitant presence of AWD, PD treatment may be initiated within a shorter time without the time-consuming double procedure of AWD repair and successive peritoneal catheter insertion. In addition, it may likely avoid the introduction of a central venous catheter for extracorporeal dialysis treatment, which could further postpone initiation of PD program.
It is worth noting that the prolonged follow-up of our study distinguishes it from others [9,10]. In one study, 19 patients were followed up for a mean period of 22 months (range 6-48) [9], while in the other 21 patients had a mean follow-up of 24 months (range 6-39) [10].
In recent years, the insertion of peritoneal catheters, like arterio-venous fistula construction, has been managed personally by nephrologists. In the case of a patient with AWD, however, both the nephrologist and the surgeon must be present in the theatre during placement of the peritoneal catheter, as the nephrologist does not have the expertise required for AWD repair [11].
Conclusions
The long-term peritoneal catheter survival and the absence of AWD recurrence during PD treatment found in our study suggest that simultaneous surgical AWD repair and peritoneal catheter insertion can be regarded as a safe surgical procedure. This strategy makes PD possible for some patients who would otherwise be excluded from the possibility of PD and, in addition, it eliminates the risks of repeated anaesthesia and reduces the costs of hospitalization.
Acknowledgments The authors did not receive any grants in relation to this work.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest. Ethical approval All procedures performed in the present study were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments.
Informed consent Informed consent was obtained from all participants in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://crea tivecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
On the group of alternating colored permutations
The group of alternating colored permutations is the natural analogue of the classical alternating group, inside the wreath product $\mathbb{Z}_r \wr S_n$. We present a 'Coxeter-like' presentation for this group and compute the length function with respect to that presentation. Then, we present this group as a covering of $\mathbb{Z}_{\frac{r}{2}} \wr S_n$ and use this point of view to give another expression for the length function. We also use this covering to lift several known parameters of $\mathbb{Z}_{\frac{r}{2}} \wr S_n$ to the group of alternating colored permutations.
Introduction
The group of colored permutations, G r,n , is a natural generalization of the Coxeter groups of types A (the symmetric group) and B (the hyperoctahedral group). Extensive research has been devoted to extending the enumerative combinatorics aspects and methods from the symmetric group to the group of colored permutations (see for example [2,3,7,13,15,16], and many more).
It is well-known that the symmetric group S_n has a system of Coxeter generators consisting of the adjacent transpositions s_i = (i, i + 1) for 1 ≤ i ≤ n − 1. The alternating subgroup, A_n, which is the kernel of the sign homomorphism, is a well-known subgroup of the symmetric group of index 2. A pioneering work, extending the study of permutation statistics, one of the fascinating branches of enumerative combinatorics, to A_n was done by Regev and Roichman in [14]. They defined some natural statistics which are equidistributed over A_n and yielded identities for their generating functions.
Brenti, Reiner, and Roichman [6] dealt with the alternating subgroup of an arbitrary Coxeter group. They started by exploring Bourbaki's presentation [4, Chap. IV, Sec. 1, Exer. 9] and elaborated on a huge spectrum of extensions of the permutation statistics of S n to the (general) alternating group.
In this paper, we study the subgroup of G r,n , consisting of what we call alternating colored permutations, which is the analogue of the usual alternating group A n in the colored permutation group. For every n ∈ N and even r, the mapping which sends all 'Coxeter-like' generators of G r,n (see the definition in Section 2) to −1 is a Z 2 -character, whose kernel is what we call here the group of alternating colored permutations, denoted by A r,n . We present here a generalization of Bourbaki's presentation, for r = 4k + 2, equipped with a set of canonical words, an algorithm to find a canonical presentation for each element of the group, and a combinatorial length function.
For the study of permutation statistics of A n , Regev and Roichman [14] used a covering map from A n+1 to S n , which enabled them to pass parameters from S n to the alternating group A n+1 . In this paper, we use a similar idea, where in this time we consider the group of alternating colored permutations as a 2 n−1 -cover of the group of colored permutations of half the number of colors. We use this technique to shed a combinatorial flavor on our length function and to pass some statistics and their generating function to the group of alternating colored permutations.
Note that there are two additional candidates for the group of alternating colored permutations: every Z_2-character of G_{r,n} provides a kernel which deserves to be called a group of alternating colored permutations. A work in progress gives a profound treatment of the other two non-trivial kernels, points out the connections between the three groups, and describes some interesting properties of each group separately. This paper is organized as follows. In Section 2, we gather the needed definitions on the colored permutation group, as well as some notations which we use in the sequel. A Coxeter-like presentation for the group of colored permutations, G_{r,n}, is presented at the end of that section. The notion of alternating colored permutations is introduced in Section 3. We present its set of generators and show their corresponding relations. In Section 4, we present an algorithm for writing each element as a product of the generators. A detailed analysis of that algorithm yields a set of canonical words, as well as a length function. Section 5 is devoted to some technical proofs, as well as to the generating function of the length function.
In Section 6, we present the covering map and study the structure of the cosets, thereby providing a way to decompose the length function via the quotient group. The part of the length which varies over each coset (fiber) is called the fibral length and is studied here in a combinatorial way. Then we provide a generating function for this parameter. In Section 7, we give some examples for using the covering map for lifting parameters from the colored permutations group of half the number of colors to the group of alternating colored permutations.
Preliminaries and notations
In this section, we gathered some notations as well as preliminary notions which will be needed for the rest of the paper.
Note that z_i is the color of the digit i (i is taken from the window notation), while c_j is the color of the digit τ(j). Here, j stands for the place, whereas i stands for the value.
The group G_{r,n} is generated by the set of generators S = {s_0, s_1, . . . , s_{n−1}}, defined by their action on the set {1, . . . , n} as follows: for 1 ≤ i ≤ n − 1, the generator s_i acts as the adjacent transposition exchanging i and i + 1 (leaving all colors unchanged), whereas the generator s_0 fixes all digits and adds one color to the digit 1, i.e. s_0 = 1^[1] 2 ··· n in window notation. It is easy to see that the group G_{r,n} has the following 'Coxeter-like' presentation with respect to the set of generators S: s_i^2 = 1 for 1 ≤ i ≤ n − 1; s_0^r = 1; s_i s_j = s_j s_i for |i − j| ≥ 2; s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1} for 1 ≤ i ≤ n − 2; and • (s_0 s_1)^{2r} = 1.
where the partial order is the length order defined above.
For any a, n ∈ N, let R_n(a) be the representative of [a] ∈ Z_n satisfying 0 ≤ R_n(a) < n.
In the sequel, we will use the following operator ⊘, mapping Z_r onto Z_{r/2}: a ⊘ 2 = a/2 if a ≡ 0 (mod 2), and a ⊘ 2 = (a + r/2)/2 if a ≡ 1 (mod 2), the result being taken in Z_{r/2}. It is easy to see that the operator ⊘ commutes with the addition operation in Z_{r/2}, i.e. (4) (a + b) ⊘ 2 = (a ⊘ 2) + (b ⊘ 2).
3. The group of alternating colored permutations

The main target of this paper is the group of alternating colored permutations. We proceed now to its definition. Let ϕ be the function defined on the set S by ϕ(s_i) = −1 for any 0 ≤ i ≤ n − 1. It is easy to see that for even r, ϕ can be uniquely extended to a homomorphism from G_{r,n} to Z_2, so the following is well-defined: Definition 3.1. Let r be an even positive number. Define: A_{r,n} = ker(ϕ).
The group A r,n is called the alternating subgroup of G r,n .
Since A r,n is a subgroup of index 2, we have: |A r,n | = r n n! 2 . In this paper, we concentrate on the case r = 4k + 2. The other case will be treated in a subsequent paper.
We start by presenting a set of generators for A_{r,n} (we prove that they indeed generate the group in the next section). Define: A = {a_0, a_1, a_1^{−1}, a_2, . . . , a_{n−1}}. It is easy to see that the following translation relations hold in G_{r,n}: (1) s_i s_j = a_i a_j for i, j ∈ {2, . . . , n − 1},
The Combinatorial algorithm
In this section, we introduce an algorithm which presents each element of A r,n as a product of the set of generators A of A r,n in a canonical way.
Let π ∈ A r,n . We first refer to π as an element of G r,n and apply the known algorithm on π to write it as a product of elements in S. In the second step, we translate that presentation into the set of generators A of A r,n .
The algorithm for writing π as a product of elements in S consists of two parts: the coloring part and the ordering part.
In the coloring part, we start from the identity element and color all the digits i having z_i ≠ 0. This part terminates with an ordered permutation σ with respect to the length order. In the second part, we use only generators of the set S − {s_0} to arrive at π from the ordered permutation σ. Note that the set Col(π) contains the colored digits in the image of π (i.e. those appearing in the window notation), and not their places. We order Col(π) as follows: Col(π) = {i_1 < i_2 < ··· < i_{col(π)}}. We start with the identity element and color each digit i ∈ Col(π) by z_i colors. This process is done according to the order of the elements in Col(π): we use s_{i_k−1} s_{i_k−2} ··· s_1 s_0^z to color the digit i_k by z colors. Example 4.1. Let π = 12^[2]45^[1]3^[3] ∈ G_{6,5}. Starting from the identity (12345) and coloring the digits 2, 3 and 5 in this order yields the permutation σ = 5^[1]3^[3]2^[2]14, which is an ordered permutation with respect to the length order.
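The coloring part can be traced concretely. Below is a minimal Python sketch (illustrative names; the generators are taken to act by position swaps on the window notation, one possible convention) which reproduces the ordered permutation σ of Example 4.1:

```python
# A colored permutation is a list of (digit, color) pairs.

def s0(w, r):
    """Add one color (mod r) to the entry in the first position."""
    (d, c), rest = w[0], w[1:]
    return [(d, (c + 1) % r)] + rest

def s(w, i):
    """Exchange the entries in positions i and i+1 (1-based)."""
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return w

def color_digit(w, digit, z, r):
    """Apply s_{digit-1} ... s_1 s_0^z to color `digit` with z colors."""
    for i in range(digit - 1, 0, -1):
        w = s(w, i)
    for _ in range(z):
        w = s0(w, r)
    return w

# Example 4.1: pi = 1 2^[2] 4 5^[1] 3^[3] in G_{6,5}
w = [(d, 0) for d in range(1, 6)]          # the identity permutation
for digit, z in [(2, 2), (3, 3), (5, 1)]:  # Col(pi), smallest first
    w = color_digit(w, digit, z, r=6)
print(w)  # [(5, 1), (3, 3), (2, 2), (1, 0), (4, 0)] -- the ordered sigma
```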
4.2. The ordering part. For simplifying the presentation, in this part we start with π and arrive at the ordered permutation σ, instead of continuing the algorithm from the point we have left it at the end of the coloring part.
We start by pushing the element i 1 = |σ|(1) in the window notation of π to its correct place. Let p = |π| −1 (i 1 ). The pushing is done by multiplying π (from the right) by the element u 1 = s p−1 s p−2 · · · s 1 . Now, we continue to push the other digits of π: For each 1 < k ≤ n − 2, assuming that i k = |σ|(k) is now located at position p, we use the element s p−1 s p−2 · · · s k in order to push the digit i k to its correct place.
Example 4.2. We continue the previous example.
The algorithm described above gives a reduced word representing π in the generators of G r,n . This fact was proved in [2,Theorem 4.3]. The same algorithm can also be found in [16]; see also [15]. The word which was obtained in this way is called the canonical decomposition of π.
4.3. Translation. Now, we translate the word obtained by the algorithm described above into a word in the generators in A. Let π ∈ A_{r,n}. Use the above algorithm to write a reduced expression of π (in the usual generators of G_{r,n}) in the form s_{i_1} s_{i_2} ··· s_{i_{2k}}, and divide the elements of the reduced expression into consecutive pairs.

Example 4.3. We continue with π = 12^[2]45^[1]3^[3] ∈ A_{6,5} from the previous examples. As we saw, π = s_1 s_0^2 s_2 s_1 s_0^3 s_4 s_3 s_2 s_1 s_0 s_3 s_2 s_3 s_4 s_1 s_2 s_3. Now, we perform the translation; in the last equality, we cancelled some appearances of the generator a_0, since in A_{6,5}, a_0^3 = 1.

4.4. Analysis of the algorithm. For analyzing the algorithm described above, we define the following sets of elements of G_{r,n} and A_{r,n}.
The coloring part. Let
For each 1 < i ≤ n, define for odd i − 1: or, in the language of the set A of generators of A r,n : For even i − 1, we define: or, in the language of the set A of generators of A r,n : Define also: Let π ∈ A r,n . Write π as a product of the generators of G r,n in the canonical form described above.
If there is no coloring part, then π ∈ A n (the classical alternating group in S n ), so its expression contains an even number of generators from the set {s 1 , . . . , s n−1 }. We can easily make the pairing by the relations mentioned above. The length of such an expression is clearly inv(π) (note that in this case, it does not matter whether we use the length order or the usual order).
Otherwise, we start with the coloring part. Denote by i = i 1 the smallest colored digit in the window notation of π, and by z = z i 1 its color. We divide our treatment into four cases: (1) i−1 and z are both even: In this case, we translate s i−1 · · · s 1 to a i−1 · · · a 1 and s z 0 to a z 2 0 , so the contribution of this sub- ∈ C i and leave an additional generator s 0 which will be treated during the coloring of the next digit, or just before the ordering part. Note that since π ∈ A r,n , there must be some s j , j = 0, appearing right after the subexpression s i−1 · · · s 1 s z 0 . In calculating the contribution of coloring the current digit (including the missing generator s 0 which will be paired later), consider the sub-expression and note that in the next colored digit, we complete the remaining a If i is the last colored digit, then the term a r+2 4 0 ∈ C n+1 will be chosen from the set C n+1 . (3) i − 1 and z are both odd: In this case, the sub-expression s i−1 · · · s 2 will be translated to a i−1 · · · a 2 and s 1 s z 0 will be written as s 1 s to the length of π, and we have used (4) i − 1 is odd and z is even: Here, again, the sub-expression s i−1 · · · s 2 will be translated to a i−1 · · · a 2 and s 1 s z−1 , so we use: and leave an additional generator s 0 which will be paired with some s j during the coloring of the next digit or just before the ordering part. In order to calculate the contribution of coloring this digit to the length of π (including the missing generator s 0 which will be paired later), we borrow the generator s j appearing just after the coloring expression of the current digit: and the remaining a r+2 4 0 will be taken from the next colored digit or from C n+1 (as in case (2)). Now, we apply the same procedure to the next colored digits, but note that there might be a situation in which the expression coloring the digit j is s 0 s j−1 s j−2 · · · , due to our debt of the generator s 0 from the preceding colored digit, so the cases might be switched after converting s 0 s j to a r+2 4 0 a j−1 . The following example will illuminate the situation.
Example 4.4. Let π = 12 [2] 45 [1] 3 [3] ∈ A 6,5 . Then the ordered permutation is: σ = 5 [1] 3 [3] 2 [2] 14 . We perform the coloring part: Step 1 : The smallest colored digit is 2, which has to be colored by two colors, so we are in case (4). We choose s 1 s 0 = a −1 1 a 2 0 from C 1 2 . The additional generator s 0 will be treated in the next step. Note that in the calculation of the contribution of this step to the length of π we borrow the generator s 2 from the next colored digit: This expression contributes only 2 to the length of π. The generator a 2 will be counted in the next step.
Step 2 : The next colored digit is 3, and we have a debt of a generator s 0 from the previous step. Thus, we choose . Even though i − 1 = 2 is even and z = 3 is odd (case (2)), after s 0 s 2 in the previous step, we are actually again in case (4). Note that the expressions a −1 1 a 2 0 from step 1 and a 2 0 a 2 a −1 1 from step 2 join together to be a −1 1 a 0 a 2 a −1 1 .
Step 3 : The next colored digit is 5. We choose: , and leave the treatment of the additional generator s 0 to the next step (in this case, to the transition between the coloring part and the ordering part, i.e. a 2 0 ∈ C 6 ). We will elaborate on this point after describing the ordering part.
The ordering part.
We turn now to the ordering part. For 1 ≤ k ≤ n − 1, define the sets O k as follows: and for 2 ≤ k ≤ n − 1: We start by pushing the digit i 1 = |σ|(1) of π to its correct place. Let p 1 = |π −1 |(i 1 ). The pushing will be done by multiplying π (from the right) by the element o 1 ∈ O 1 , where: Now, we continue to push the other digits of π: for each 1 < k ≤ n − 2, assuming that the digit i k = σ(k) is now located at position p k , we use the element o k ∈ O k defined by: in order to push the digit i k to its correct place in π. Now, we have two possibilities: • The coloring part was completed without remainders, which means that both the coloring part and the ordering part consist of even number of G r,n -generators. In this case, we choose 1 ∈ C n+1 , and we have that: • The coloring part has a remainder of the generator s 0 . This means that the coloring part required an odd number of G r,ngenerators, and therefore the ordering part had an odd number of G r,n -generators as well (since the sum of their lengths is even). Thus, in the ordering part, there will be a remainder of s ∈ C n+1 , and we have that: In both cases, we have now: π = σ · o −1 n−1 · · · o −1 1 , and we are done. Example 4.5. We continue the previous example. Again, let π = 12 [2] 45 [1] 3 [3] . After the completion of the coloring part, we have reached the ordered permutation: σ = 5 [1] 3 [3] 2 [2] 14 .
Now, we go the other way around: we start with π and order it to obtain σ: 12 [2] 45 [1] 3 [3] 7 s 3 s 2 s 1 s 3 0 −→ 5 [4] 12 [2] [2] 14 = σ. In step 7 , we push the digit 5 to its correct place with respect to the ordered permutation σ. We use s 3 s 2 s 1 s 3 0 = a 3 a 2 a −1 1 ∈ O 1 . Note that the digit 5 is bearing extra three colors. In step 6 , we push the digit 3 into its place, using s 4 s 3 s 2 s 3 0 = a 4 a 3 a 2 ∈ O 2 . Note that the color of the digit 5 is correct again. Next, in step 5 , we push 2 to its correct place using s 3 s 3 0 = a 3 ∈ O 3 . Now, note that after completing step 5 , we still did not arrive at σ, since the digit 5 has again a wrong color. On the other hand, we have a debt of a generator s 0 from the coloring part. Both problems would be solved simultaneously by using s 0 s 3 0 = a 2 0 ∈ C 6 . This is exactly what we have done in step 4 .
From the above analysis we can conclude that each permutation π ∈ A r,n has a canonical decomposition with respect to the set A. This is the context of the following theorem.
Theorem 4.6. The set A = {a_0, a_1, a_1^{−1}, a_2, . . . , a_{n−1}} generates A_{r,n}. Moreover, for each π ∈ A_{r,n}, there is a unique presentation as a product γ_1 γ_2 ··· γ_n γ_{n+1} · o_{n−1}^{−1} ··· o_1^{−1}, where γ_i ∈ C_i and o_k ∈ O_k. This presentation is called the canonical decomposition of π.
Proof. Let M be the Cartesian product C_1 × ··· × C_n × C_{n+1} × O_{n−1} × ··· × O_1. We start by defining a subset L of M which we call the set of legal vectors. A vector ω = (γ_1, . . . , γ_n, γ_{n+1}, o_{n−1}, . . . , o_1) ∈ M is called a legal vector if it satisfies the following two conditions: (1) Let i and j be two indices satisfying γ_k = 1 for all i < k < j, γ_i ≠ 1 and γ_j ≠ 1 (i.e. the digits i and j are colored, but the digits between them are not colored). If γ_i ends with s_0^{r−1}, then γ_j does not start with s_0.

(2) Let i and j be two indices satisfying γ_k = 1 for all i < k < j, γ_i ≠ 1 and γ_j ≠ 1. If γ_i ends with s_1, then γ_j starts with s_0.
We have to prove the following two claims: (a) The algorithm associates a legal vector in L to any π ∈ A_{r,n}. This proves the existence of the presentation. (b) |L| = r^n · n!/2 (= |A_{r,n}|), which implies the uniqueness. Claim (a) follows immediately from the algorithm, so we pass to the proof of Claim (b). For that, we define the notions of external and internal components of a vector ω = (γ_1, . . . , γ_n, γ_{n+1}, o_{n−1}, . . . , o_1) ∈ L.

A component γ_i ∈ C_i is called external if it ends either with s_0^{r−1} or with s_1 (i.e. a component which imposes a restriction on the next non-trivial component), and internal otherwise. Note that γ_1 is always internal, since by definition the generator s_1 does not appear in γ_1, and it cannot end with the expression s_0^{r−1} since r − 1 is odd. For constructing an element of L, we start by choosing the element γ_1 ∈ C_1, out of r/2 possibilities. Next, we choose which components will be external. Note that for each external component there are two possibilities, but each external component restricts the possibilities for the next non-trivial component (i.e. γ_i ≠ 1). Next, for each internal component, we choose one out of r − 1 possibilities.

When the coloring part is over, we have two possibilities: If we have no remainder from the coloring part, then we choose γ_{n+1} = 1, and we have to complete the process by a permutation of S_n of even length. On the other hand, if we do have a remainder, then we choose γ_{n+1} = a_0^{(r+2)/4} and we have to complete the process by a permutation of S_n of odd length. Altogether, this contributes n! possibilities for completing the presentation.

Following the above discussion, we have that the number of legal vectors is r^n · n!/2, as needed.
Remark 4.7. Note that we do not claim that the presentation described above is irreducible as it is. Take for example the expression for π = 12^[2]45^[1]3^[3], computed in Example 4.3, to be: a_1^{−1} a_0^2 a_0^2 a_2 a_1^{−1} a_0^2 a_0^2 a_0^2 a_4 a_3 a_2 a_1 a_0^2 a_3 a_2 a_3 a_4 a_1 a_2 a_3, which can be shortened to a_1^{−1} a_0 a_2 a_1^{−1} a_4 a_3 a_2 a_1 a_0^2 a_3 a_2 a_3 a_4 a_1 a_2 a_3. On the other hand, after we cancel all the redundant appearances of a_0, we do obtain an irreducible expression, as will be proven in the next section.
For π ∈ A r,n , let L A (π) be the number of generators needed to write π as a product of the A r,n -generators by the algorithm.
As a consequence of the analysis of the algorithm, we have the following result: Theorem 4.8.
(1) Let π ∈ A_{r,n}, and let ω = s_0^{z_1} b_1 s_0^{z_2} b_2 ··· s_0^{z_n} b_n, where b_i ∈ (S − {s_0})*, be its canonical presentation with respect to S. Then the translation of ω to the generators in A is obtained by replacing each power s_0^{z_i} by a_0^{z_i ⊘ 2} and translating the letters of the b_i in pairs into generators from A. (2) Let π ∈ A_{r,n}. Then: L_A(π) = ℓ_{G_{r,n}}(π) − csum(π) + Σ_{i=1}^{n} (z_i(π) ⊘ 2).
In [2,Theorem 4.3], the algorithm for presenting an element in G r,n is described (see also [16]). It is proven that for π ∈ G r,n , the length of π, with respect to the generators in S, is: (z i (π)) , so by part (1) we have: Another consequence of the algorithm is the following criterion for an element for being in the group A r,n : Theorem 4.9. Let π ∈ G r,n . Then: π ∈ A r,n if and only if: Proof. By the definition, π ∈ A r,n if and only if ℓ Gr,n (π) ≡ 0 (mod 2). By the algorithm described above, ℓ Gr,n (π) = csum(π) + k, where k is the number of generators s i , for i = 0, used in the presentation of π. On the other hand, if we remove the appearances of s 0 from the presentation of π, we get |π|. Since lengths of different presentations of the same element of S n have the same parity, we have that k ≡ inv(|π|)(mod 2), and therefore the criterion follows.
The presentation of A r,n and its length function
In [5], Dynkin-like diagrams were presented for the groups G r,n . Such diagrams are based on a Coxeter-like presentation. In this section, we compute a Coxeter-like presentation for A r,n , as well as a Dynkin-like diagram for the groups A r,n . 5.1. The presentation of A r,n . We start with the presentation of A r,n .
Remark 5.2. Note that relation (5) implies (a_1^{−1} a_2)^3 = 1 too, and relation (8) implies a_1^{−1} a_0 a_1^{−1} = a_1 a_0 a_1 and a_1^{−1} a_0 a_1 = a_1 a_0 a_1^{−1}. Proof. We have already shown in Theorem 4.6 that A generates A_{r,n}. Here, we prove that R is the complete set of relations for A_{r,n}.
We imitate the idea of the proof of Proposition 2.1.1 in [6]. Consider the abstract group A + r,n generated by the elements A = {a 0 , a 1 , a −1 1 , . . . , a n−1 }, with R as the set of relations. Note that the set mapping α : . . , n − 1}, extends to a group automorphism α on A + r,n . Indeed, considering A r,n as a subgroup of G r,n , α is the inner automorphism defined by the conjugation by s r 2 0 . Thus, the group Z 2 = {1, α} acts on A + r,n and we have the semidirect product A + r,n ⋊ Z 2 , where the product is defined as follows: The semidirect product has the following presentation: (6) A + r,n ⋊ Z 2 = α, a 0 , a 1 , a −1 1 , a 2 , . . . , a n−1 | R, αa i α = α(a i ) for all i . We prove now that G r,n ∼ = A + r,n ⋊ Z 2 . In order to do this, we define the following two homomorphisms, which are inverses of each other: ϕ : G r,n → A + r,n ⋊ Z 2 , defined on the generators by: It is easy to see that ρ and ϕ are isomorphisms, so we have that G r,n ∼ = A + r,n ⋊ Z 2 . Now, since ρ(A + r,n ) ⊆ A r,n and A r,n , A + r,n are both subgroups of G r,n of index 2, they must be isomorphic.
The relations defining A_{r,n} can be graphically described by the following Dynkin-like diagram, where the numbers inside the circles are the orders of the corresponding generators, an edge without a label between two circles means that the order of the product of the two corresponding generators is 3, and an edge labeled 2r between two circles means that the order of the product of the two corresponding generators is 2r (two circles with no connecting edge mean that the two corresponding generators commute).

Given a group G, generated by a set A, we denote by ℓ_A the length function on G with respect to A; explicitly, for each π ∈ G, ℓ_A(π) is the minimal number k such that π can be written as a product of k generators from A. In this section, we prove that the algorithm described above indeed gives us a reduced word with respect to the set of generators A. In other words, we prove that for each π ∈ A_{r,n}, ℓ_A(π) = L_A(π), where L_A(π) is the number of generators in the presentation of π obtained by the algorithm (and was computed in Theorem 4.8(2)).
We start with the following set of definitions: b k+1 be any factorization of π ∈ A r,n as a product of the generators of S, such that b i ∈ (S − {s 0 }) * for 0 ≤ i ≤ k + 1.
• Let n(ω) be the number of generators s_i in ω such that i ≠ 0.
Let ω be a reduced word in generators from A. Apply the map ρ, defined in the proof of Theorem 5.1 above, which sends a i to s This yields a word η, factorizing π in G r,n -generators, satisfying: ℓ A (π) = β(η) + n(η). This implies that the minimal value of µ over all the factorizations of π to G r,n -generators is at most ℓ A (π), since µ(π) is defined as the minimal value of all such expressions. So, we have: ℓ A (π) ≥ µ(π) (see Example 5.5(a) below).
In order to prove the opposite inequality: µ(π) ≥ ℓ A (π), let ω be a factorization of π to G r,n -generators which achieves the minimal value of µ(π). Apply the map ϕ, defined in the proof of Theorem 5.1, which sends s 0 to αa r+2 4 0 , and for all i ≥ 1 sends s i to αa i on each letter separately, and concatenate. This gives us a factorization η of π in A ∪ {α}-generators, having n(ω) generators a i with i = 0 and β(ω) occurrences of a 0 . The factorization η contains also an even number of occurrences of the letter α (one occurrence for each s i for i ≥ 0; recall that the number of s i 's is even by definition). By using the relations αa i α = a i for 1 ≤ i ≤ n and αa −1 1 α = a −1 1 , we can cancel out all occurrences of α and we have an A * -word (i.e. a word written using generators from A) factorization of π of length µ(π) (see Example 5.5(b) below). This proves that µ(π) ≥ ℓ A (π).
Hence, we have obtained the following corollary, which is the main result of this section: Corollary 5.7. The function L_A is indeed the length function of the group of colored alternating permutations with respect to the set of generators A; explicitly, for each π ∈ A_{r,n}, ℓ_A(π) = L_A(π). Consequently, we can easily get the generating function for the length function: Σ_{π∈A_{r,n}} q^{ℓ_A(π)} = (1/2) · [n]!_q · ∏_{j=1}^{n} (1 + q^{j−1}(1 + 2q + ··· + 2q^{r/2−1})).
Proof. Each π ∈ A_{r,n} has a canonical decomposition into generators from the set S. By Theorem 4.4 of [2], the generating function for the length with respect to S over the whole group G_{r,n} is: Σ_{π∈G_{r,n}} q^{ℓ_S(π)} = [n]!_q · ∏_{j=1}^{n} (1 + q^{j−1}(1 + q + ··· + q^{r−1})).
By Theorem 4.8(1), the factor s_{j−1} ··· s_1 s_0^z which colors the digit j by z colors is converted to a_{j−1} ··· a_1^{±1} a_0^{z⊘2}. Since the mapping Z_r → Z_{r/2} is a 2:1 epimorphism, the factor 1 + q^{j−1}(1 + q + ··· + q^{r−1}) is converted to 1 + q^{j−1}(1 + 2q + ··· + 2q^{r/2−1}). Finally, after completing the coloring part, only half of the permutations of S_n are permitted (since the total length of the word in G_{r,n}-generators should be even), so we have to divide the generating function by 2.
The colored alternating group as a covering group
In [14], a covering map f : A_{n+1} → S_n was defined and used to lift some identities of S_n to A_{n+1}. In this section, we use a similar technique with a covering map from A_{r,n} to G_{r/2,n}. Unlike the case of A_{n+1}, this map is an epimorphism, and hence its kernel can be combinatorially described. We also present a section s : G_{r/2,n} → A_{r,n} which gives us a way to decompose the length function ℓ_A(π) into two summands, one of which is constant on the coset of π, while the other, which will be called the fibral length, varies over the coset. We present a nice combinatorial interpretation of the latter parameter, as well as a generating function for it over each coset. In Section 7, we use this covering map to lift some identities and permutation statistics from G_{r/2,n} to A_{r,n}. Define the following projection p : A_{r,n} → G_{r/2,n}: if π = p_1^[c_1] ··· p_n^[c_n], then p(π) = p_1^[c_1 ⊘ 2] ··· p_n^[c_n ⊘ 2]. Example 6.1. Let π = 3^[0] 2^[1] 4^[2] 1^[3] ∈ G_{6,4}. Then p(π) = 3^[0] 2^[2] 4^[1] 1^[0] ∈ G_{3,4}. Then, we have: Lemma 6.2. The map p is an epimorphism. Moreover, the kernel of p is the normal closure of a_1^2 in A_{r,n}. Thus: G_{r/2,n} ≅ A_{r,n}/≪a_1^2≫. Proof. The map p is clearly a homomorphism, since the operator ⊘ commutes with the addition operation in Z_{r/2} (see Equation (4)). Given any element of G_{r/2,n}, each of its colors has exactly two preimages in Z_r, differing by r/2; by Theorem 4.9, the preimage colors can be chosen so that the resulting colored permutation lies in A_{r,n} (the computations being made modulo r). This implies that p is an epimorphism. It remains to find the kernel. An element lies in ker(p) precisely when its underlying permutation is the identity and each of its colors belongs to {0, r/2}; by Theorem 4.9, the set {i | c_i ≠ 0} must then be of even size. For each i < j, we can use the element t_{i,j} a_1^2 t_{i,j}^{−1} ∈ ≪a_1^2≫, where t_{i,j} = s_{i−1} s_{i−2} ··· s_1 · s_j s_{j−1} ··· s_2, in order to color the digits i and j in r/2 colors without touching the other digits. This proves that ≪a_1^2≫ = ker(p), as needed. We emphasize the following two observations, which can be concluded from the proof of the previous lemma, for future use. Observation 6.3.
(1) |ker(p)| = 2^{n−1}. (2) Let π, π′ ∈ A_{r,n} be such that p(π) = p(π′). Then, for each i ∈ {1, . . . , n}, c_i(π) ≡ c_i(π′) (mod r/2); moreover, c_i(π) and c_i(π′) differ by r/2 for an even number of indices. The following obvious lemma presents the action of p on the generators of A_{r,n}. We introduce the following section of the covering map p. Define s : G_{r/2,n} → A_{r,n} as follows: if π = p_1^[c_1] ··· p_n^[c_n], then s(π) is obtained by doubling each color, adding r/2 to the color of the digit 1 when this is needed for s(π) to lie in A_{r,n}; the computations are made modulo r. It is easy to verify that p ∘ s = Id.
6.1. The fibral length. For each π ∈ A_{r,n}, the length function of π with respect to the set of generators A can be decomposed into two summands. The first summand is the length of p(π) as an element of G_{r/2,n}, which is obviously invariant on the fiber of π. The second summand, which varies along the fiber, will be called the fibral length. As will be shown in this section, it has a nice combinatorial interpretation. We start with the definition of the fibral length. Definition 6.6. For each π ∈ A_{r,n}, define the fibral length of π to be: ℓ_F(π) = ℓ_A(π) − ℓ_A(s(p(π))). For π ∈ G_{r,n}, denote c(π) = Σ_{i : z_i(π) ≠ 0} (i − 1). By the definition of π_0 = s(p(π)), we have: (7) c(π) − c(π_0) = Σ_{i : z_i(π) = r/2} (i − 1). We will need the following two lemmata in the sequel. Lemma 6.7. ℓ_F(π) = c(π) − c(π_0) + inv(π) − inv(π_0).
Proof. Let π ∈ A r,n . By Lemma 6.7, we have: By Equation (7), c(π) − c(π 0 ) ≥ 0. Now, let 1 ≤ k < m ≤ n be such that A rather tedious but easy calculation should convince the reader that the only possibility is |π(k)| = i > j = |π(m)| with α = r 2 and β = 0 (and hence α ′ = β ′ = 0), so that the digit i is colored in π but not in π 0 . Now, since this situation can occur at most i − 1 times, the contribution of i to c(π) − c(π 0 ) which is i − 1, will cancel the corresponding negative contribution to inv(π) − inv(π 0 ), and hence the total sum will be positive.
Denote by ℓ_G the length function of the group G_{r/2,n}. Then, we have: Lemma 6.10. Let π ∈ A_{r,n} and π_0 = s(p(π)). Then: Proof. Let π ∈ A_{r,n}. Then:
6.2.
A combinatorial interpretation of the fibral length. For presenting the fibral length in a combinatorial way, we introduce the following parameter on A r,n .
Let 1 < i ≤ n be such that z_i(π) = r/2. Then, by Equation (7), the contribution of i to the left hand side is i − 1, so we have to show that i contributes the same to the right hand side, i.e. for each 1 ≤ j < i, the pair (i, j) contributes 1 to the expression inv(π) − inv(π_0) + 2tinv(π). This can be easily done by a subtle, though direct, check. Note that i = 1 contributes 0 to both sides.
If 1 ≤ i ≤ n satisfies z_i(π) ≠ r/2, then i contributes 0 to both sides. We are interested in the distribution of the fibral length of π ∈ A_{r,n} over the coset containing π. It is much easier to calculate this distribution after translating Theorem 6.14 into the language of Lehmer codes [12]. Recall that the Lehmer code of a permutation π ∈ S_n is defined by L(π) = l_{π(1)} ··· l_{π(n)}, where for each 1 ≤ i ≤ n, l_{π(i)} = |{j > i : π(j) < π(i)}|. For example, if π = (31452) ∈ S_5 (in window notation), then: L(π) = (l_3 l_1 l_4 l_5 l_2) = (20110).
Note that this definition is slightly different from the usual definition of the Lehmer code.
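A short sketch of this value-indexed Lehmer code (illustrative function name), reproducing the example above:

```python
def lehmer_code(perm):
    """Return (l_{pi(1)}, ..., l_{pi(n)}) for a 1-based window `perm`,
    where l_{pi(i)} counts the later entries smaller than pi(i)."""
    n = len(perm)
    return [sum(1 for j in range(i + 1, n) if perm[j] < perm[i])
            for i in range(n)]

print(lehmer_code([3, 1, 4, 5, 2]))  # [2, 0, 1, 1, 0], as in the text
```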
Some permutation statistics
In this section, we present some permutation statistics for the group of alternating colored permutations.

7.1. Passing parameters from G_{r/2,n} to A_{r,n}. We exhibit now how to pass parameters defined on the full group of colored permutations of half the number of colors to the group of alternating colored permutations. In order to do that, we have to define the notion of a fiber-fixed parameter.
Definition 7.1. Let f A : A r,n → N and f K : G r 2 ,n → N be two permutation statistics. The parameter f A is called fiber-fixed if for each π ∈ A r,n , (11) f K (p(π)) = f A (π).
By Observation 6.3(1), we have the following connection between the corresponding generating functions: Lemma 7.2. Let f_A : A_{r,n} → N and f_K : G_{r/2,n} → N be such that f_A is a fiber-fixed parameter. Then: Σ_{π∈A_{r,n}} q^{f_A(π)} = 2^{n−1} · Σ_{σ∈G_{r/2,n}} q^{f_K(σ)}.
We give two examples of fiber-fixed parameters: the first is the flag-inversion number, and the second is the right-to-left minimum.

7.2. The flag-inversion number. The flag-inversion number was introduced by Foata and Han [8,9]. Adin, Brenti and Roichman [1] used it as a rank function for a weak order on the groups G_{r,n}. We introduce it in G_{r,n}: Definition 7.3. Let π ∈ G_{r,n}. The flag-inversion number of π is defined as: finv(π) = r · inv(|π|) + csum(π).
In [10], the generating function of finv over G_{r,n} was computed: Σ_{π∈G_{r,n}} q^{finv(π)} = ∏_{i=1}^{n} [ri]_q.
We define here a version of the flag-inversion number for the alternating colored permutations whose generating function over A_{r,n} can be computed using the covering map p.
Definition 7.5. Let π ∈ A_{r,n}. Define: finv_A(π) = (r/2) · inv(|π|) + Σ_{i=1}^{n} (c_i(π) ⊘ 2). It is easy to see that the parameter finv_A is indeed fiber-fixed, in the sense of Equation (11); explicitly, for each π ∈ A_{r,n}, finv_A(π) = finv(p(π)). Consequently, we have: Σ_{π∈A_{r,n}} q^{finv_A(π)} = 2^{n−1} · ∏_{i=1}^{n} [(r/2) · i]_q.

7.3. The right-to-left minima. Another parameter which can be computed using the covering map is the number of right-to-left minima.
Definition 7.7. Let p = (a_1, . . . , a_n) be a word over an ordered alphabet (Σ, <). Then a_i, for i ∈ {1, . . . , n}, is a right-to-left minimum if for any j > i one has a_j > a_i. The number of right-to-left minima of p will be denoted by RtlMin(p).
Regev and Roichman [15] defined a version of the right-to-left minima for G_{r,n} as follows. Let π = a_1^[c_1] ··· a_n^[c_n] ∈ G_{r,n}. Define: RtlMin(π) = |{a_i | ∀j > i : a_j > a_i, c_i = 0}|.
We introduce here a version of the right-to-left minima for A_{r,n}. Definition 7.10. Let π ∈ A_{r,n}. Define: RtlMin_A(π) = |{a_i | ∀j > i : a_j > a_i, c_i ∈ {0, r/2}}|.
Chronic vitamin D insufficiency impairs physical performance in C57BL/6J mice
Vitamin D insufficiency (serum 25-OH vitamin D < 30 ng/ml) affects 70-80% of the general population, yet the long-term impacts on physical performance and the progression of sarcopenia are poorly understood. We therefore followed 6-month-old male C57BL/6J mice (n=6) consuming either sufficient (STD, 1000 IU) or insufficient (LOW, 125 IU) vitamin D3/kg chow for 12 months (equivalent to 20-30 human years). LOW-supplemented mice exhibited a rapid decline of serum 25-OH vitamin D levels by two weeks that remained between 11-15 ng/ml for all time points thereafter. After 12 months LOW mice displayed worse grip endurance (34.6 ± 14.1 versus 147.5 ± 50.6 seconds, p=0.001), uphill sprint speed (16.0 ± 1.0 versus 21.8 ± 2.4 meters/min, p=0.0007), and stride length (4.4 ± 0.3 versus 5.1 ± 0.3, p=0.002). LOW mice also showed less lean body mass after 8 months (57.5% ± 5.1% versus 64.5% ± 4.0%, p=0.023), but not after 12 months of supplementation, as well as greater protein expression of the atrophy pathway gene atrogin-1. Additionally, microRNA sequencing revealed differential expression of miR-26a in muscle tissue of LOW mice. These data suggest chronic vitamin D insufficiency may be an important factor contributing to functional decline and sarcopenia.
INTRODUCTION
Sarcopenia affects between 10% and 35% of individuals over the age of 65 and underlies functional decline and increases fall risk in older individuals [1,2]. Although the underlying causes of sarcopenia remain unknown, sarcopenia is associated with vitamin D insufficiency (25-OH vitamin D >10 ng/ml and < 30 ng/ml) [3,4]. Vitamin D insufficiency is also prevalent worldwide, affecting as many as 70% of the population across all age demographics [5,6], raising the likelihood that some individuals may be vitamin D insufficient for decades or more.
Vitamin D insufficiency has been associated with a range of physical performance impairments across multiple studies, including: poor grip and leg strength, lower scores in the short physical performance battery (SPPB), low activities of daily living (ADL) scores, and reduced physical activity in community dwelling older individuals [3,7,8]. Low serum 25-OH vitamin D (< 12.5 ng/ml) was also associated with worse physical performance as a composite score of gait speed, chair-stand, and grip strength in the Women's Health Initiative (WHI) study [9]. Low serum vitamin D (<12.0 ng/ml) has also been associated with poor physical performance, fracture risk, and fractures in the Longitudinal Aging Study Amsterdam (LASA) [10]. However, several studies involving older individuals have reported no association between vitamin D and physical performance [11][12][13], raising uncertainty regarding the role of vitamin D in physical performance.
Animal studies can provide insights free from the genetic and lifestyle factors that may confound the interpretation of human studies and lead to conflicting reports. Vitamin D receptor (VDR) knockout mice exhibit declines in several aspects of performance, including swimming endurance, rotarod, and open field activity [14][15][16]. Furthermore, VDR ablation in muscle stem cells has demonstrated its critical role in muscle metabolism, including muscle anabolic signaling [17,18], satellite cell proliferation [19], and regulation of the uptake of 25-OH vitamin D into muscle cells [20]. Recently, a study silencing VDR expression with siRNA in muscle stem cells (C2C12) demonstrated inhibition of myogenic differentiation [21]. Dietary elimination of vitamin D has also been reported to impair performance, including reduced grip strength [22] and swim endurance [23] in mice, while 1α-OH vitamin D treatment in ovariectomized rats was associated with increases in muscle strength, but not with muscle fatigue [24].
Animal studies rarely include the physiologically relevant condition of vitamin D insufficiency, opting instead for vitamin D receptor knockouts or complete dietary elimination. Furthermore, there are no rodent or human studies that follow the impacts of vitamin D insufficiency for long periods of time (≥1 year for mice, or about 20-30 years for humans), particularly while investigating physical performance. Therefore, we established groups of mice at serum vitamin D sufficient and insufficient levels, and followed changes in body composition and physical performance over a 12-month period. Our findings demonstrate that vitamin D insufficiency modulates muscle miRNA signaling, increases atrophy pathway proteins, and impairs specific aspects of physical performance.
Depletion and repletion of serum 25-OH vitamin D occurs rapidly in response to changes in the amount of vitamin D3 supplementation, and levels remain consistent relative to the amount of supplementation
To understand the potential impacts of chronic vitamin D insufficiency, we supplemented 6-month-old male C57BL/6J mice with standard (STD) facility levels of vitamin D3 in chow (1000 IU vitamin D3/kg) or a reduced amount (LOW, 125 IU vitamin D3/kg) to induce vitamin D insufficiency. We observed that the STD level of supplementation results in stable serum 25-OH vitamin D levels (30-40 ng/ml) (Figure 1A, black line). We further observed that LOW supplementation leads to a rapid decline in serum 25-OH vitamin D, reaching human-equivalent levels of vitamin D insufficiency after just two weeks, and remaining consistently between 10-15 ng/ml for the remainder of the experiment (Figure 1A, dark gray line). After 4 months, we tested the rate of vitamin D repletion by switching a group of mice receiving 125 IU vitamin D3/kg to 1000 IU vitamin D3/kg chow (Figure 1A, light gray line), and found repletion also occurs within two weeks, with resultant serum 25-OH vitamin D levels in these mice similar to levels in the STD group. After 12 months, there were no statistically significant differences in serum calcium or levels of the active metabolite of vitamin D, serum 1,25-(OH)2 vitamin D, between the STD and LOW groups (Figure 1B & 1C, respectively). We did observe a trend towards higher serum intact parathyroid hormone in vitamin D insufficient mice (STD: 175.2 ± 18.4 pg/ml versus LOW: 279.7 ± 125.4 pg/ml, p=0.10, Figure 1D).
Chronic vitamin D insufficiency did not affect body weight but accelerated the reduction in lean body mass and the increase in fat mass
We examined the impacts of chronic vitamin D insufficiency on body weight and composition. We found body weights to be similar between the two groups at all time points (Figure 2A), with equivalent overall weight gains (STD: 41.3% ± 10.6% versus LOW: 44.7% ± 15.6%, p=0.66), which is expected for mice of this age. Additionally, we examined lean body mass, fat mass, and bone mineral density using dual X-ray absorptiometry (DEXA). After 4 months of treatment (10 months of age), vitamin D insufficient mice trended towards a lower lean body mass and greater fat mass (p=0.08 for both, Figures 2B and 2C), and these differences were significant compared to STD mice after 8 months (14 months of age; lean mass - STD: 64.5% ± 4.0% versus LOW: 57.5% ± 5.1%, p=0.0231; fat mass - STD: 35.4% ± 4.0% versus LOW: 42.5% ± 5.2%, p=0.0243). However, there was no difference in either parameter after 12 months of treatment (18 months of age) (p=0.37). Bone mineral density of LOW mice was significantly lower than that of STD mice after 8 months (STD: 54.8 ± 0.7 mg/cm² versus LOW: 52.8 ± 1.3 mg/cm², p=0.0078, Figure 2D), but significant differences were not observed after 12 months.
Vitamin D insufficiency impairs physical performance across multiple domains
To examine whether vitamin D insufficiency impacts physical performance, and in what ways, we performed a range of behavioral assessments (Figure 3). We observed that grip strength did not significantly change over the 12-month period of our experiment in either cohort (Figure 3A). There was also no difference in rotarod fall latency between supplementation groups, an assessment of balance and coordination, although both groups exhibited declines with age (STD baseline time to fall: 272.0 ± 45.4 seconds versus endpoint: 183.6 ± 39.1 seconds, paired T-test: p=0.014; LOW baseline: 231.7 ± 36.6 seconds versus endpoint: 157.8 ± 54.7 seconds, p=0.005, Figure S1A). However, vitamin D insufficient mice showed a deficiency in grip endurance, as determined by both grip wire (STD: 65.8 ± 18.6 seconds versus LOW: 35.4 ± 6.7 seconds, n = 6 and 5, respectively, p=0.0039, Figure 3B) and grip grid latency (STD: 147.5 ± 50.6 seconds versus LOW: 34.6 ± 14.1 seconds, n = 6 and 5, respectively, p=0.001, Figure S1B). No difference was observed in treadmill endurance (Figure 3C), although a difference between the two groups was trending (p=0.11 at 8 months and p=0.06 at 12 months). The treadmill assessment was performed at 0° inclination, and gradual increases in speed likely imposed greater utilization of anaerobic respiration. However, to more completely examine whether vitamin D insufficiency impairs anaerobic performance, we devised an uphill treadmill assessment by setting the inclination to 25° and introducing periods of active recovery at low speed to maximize anaerobic response. We observed that after 48 weeks, vitamin D sufficient mice achieved greater time before exhaustion than did vitamin D insufficient mice (STD: 9.4 ± 1.7 mins versus LOW: 5.1 ± 0.8 mins, p=0.0007, n = 6 and 5, respectively, Figure 3D).
We further analyzed functional capacity by assessing open field activity. Although we did not observe a difference in open field exploration between the two groups (open field quadrant crossings: STD: 23.5 ± 12.4 versus LOW: 16.5 ± 11.0, p=0.33, n = 6 and 5, respectively, Figure 3E), we observed that vitamin D insufficient mice reared less often (STD: 13.8 ± 6.6 versus LOW: 4.4 ± 2.1, p=0.0141, n = 6 and 5, respectively, Figure 3E). Additionally, vitamin D insufficient mice exhibited significantly shorter stride length at 8 months (p=0.0318) and at 12 months (p=0.0018) (Figure 3F), and these mice also exhibited further decline in stride length from 8 to 12 months (LOW: 4.9 ± 0.3 to 4.4 ± 0.3 stride : femur length, p=0.0026, n=6), which was not observed in vitamin D sufficient mice (STD: 5.3 ± 0.2 to 5.1 ± 0.2, p=0.1757, n=5).

Vitamin D insufficient mice exhibit trends for lower muscle fiber size and myofibrillar protein content, in addition to greater protein expression of atrophy-associated atrogin-1

We next sought to identify factors that may be driving the observed differences in physical performance. NADH histological staining on quadriceps muscle (Figure 4A) showed a trend towards smaller cross sectional area (CSA) in light-stained fibers in vitamin D insufficient mice (LOW: 3,057.1 ± 708.3 versus STD: 4,046.7 ± 977.1 μm², p=0.13, n=4 and 5, respectively, Figure 4B). We further examined if differences occurred in the content of specific muscle fractions by isolating myofibrillar, sarcoplasmic, and mitochondrial proteins using differential buffers and centrifugation (Figure 4C). We did not observe differences in sarcoplasmic protein content, yet there was a trend towards lower myofibrillar protein content in vitamin D insufficient mice (LOW: 17.4 ± 6.3 μg/mg tissue versus STD: 23.1 ± 4.9 μg/mg, p=0.11). Although these data are only suggestive, they are consistent with studies that report muscle decline in vitamin D receptor knockout models [25,26]. As aberrant vitamin D signaling is associated with greater atrophy pathway signaling [27], we investigated the expression of atrogin-1 and found greater expression in the quadriceps muscles of the vitamin D insufficient mice (STD: 2.33 ± 0.52 versus LOW: 2.88 ± 0.24, p=0.0393, Figure 4D). We did not observe differences in mitochondrial protein content (p=0.38, Figure 4C), the ratio of mitochondrial : nuclear DNA content in soleus muscle (STD: 0.99 ± 0.06 versus LOW: 0.89 ± 0.17, p=0.18, Figure 4E), or mitochondrial complex IV activity (p=0.82, Figure 4F).
Vitamin D insufficient and sufficient mice exhibit a similar inflammatory profile
Both clinical [28] and cell culture studies [29,30] support a role for vitamin D in modulating inflammatory cytokines. We therefore set out to determine if 12 months of vitamin D insufficiency modulated serum cytokine levels (Figure 5A and Table 1). Surprisingly, we did not identify any serum cytokine concentrations as being significantly different due to treatment, which included IL-1α, IL-1β, IL-6, IL-10, IL-15, IL-18, MCP, and TNFα. We note that the serum level of IL-18 was trending lower in vitamin D insufficient mice (STD: 118.0 ± 9.9 pg/ml versus LOW: 109.6 ± 9.2 pg/ml, p=0.16). We further investigated tissue concentrations of IL-6 in brain, heart, and epididymal adipose tissue, but did not find any elevation in these tissues (Figure 5B). However, we also note that adipose tissue IL-6 trended higher in vitamin D insufficient mice (STD: 251.7 ± 75.5 ng/μg protein versus LOW: 425.5 ± 213.6 ng/μg protein, p=0.09).
Vitamin D insufficient mice exhibit a distinct muscle miRNA profile
MicroRNAs are powerful effectors underlying many physiological processes, including the pathophysiology of muscle mass decline [31]. To investigate whether vitamin D insufficiency modulates muscle miRNA signaling, we isolated RNA from tibialis anterior muscles and then prepared samples for next-generation RNA sequencing using an Illumina NextSeq 500 high-throughput sequencing system. RNA sequencing revealed an average of 11.7 ± 1.5 million and 11.9 ± 1.4 million total reads per mouse for STD and LOW, respectively. Of these, 4.8 ± 1.0 × 10⁵ and 4.9 ± 1.2 × 10⁵ reads for STD and LOW, respectively, were mapped to known miRNAs using miRBase v21. To avoid using insufficient data due to low expression, we removed any miRNA that did not reach at least 10 reads across 66% or more of the samples (per Jung et al. [32]), yielding a total of 202 unique miRNAs for further analysis. Our data revealed 12 miRNAs with potential for differential expression between STD and LOW (Figure 6 and Table 2, p<0.05); however, after false discovery rate correction we identified only one differentially expressed miRNA, miR-26a-5p, with a q-value = 0.0392.

Figure 3. Vitamin D sufficient (STD) and insufficient (LOW) mice were assessed across a range of physical performance domains that include: grip strength assessed every 4 months as the best 3 of 5 trials on a grip strength meter, n=6 (A); grip endurance as the best of two trials timed for latency to fall from a wire, n=5 (B); aerobic endurance assessed as a single trial for time before exhaustion on a mouse treadmill, n=6 (C); anaerobic endurance assessed as a single trial of increasing intensity intervals on an inclined (25°) mouse treadmill, n=6, 5 respectively (D); exploratory behavior as a count of quadrant crossings and rearings over 5 minutes in an open field arena, n=6, 5 respectively (E); and gait as assessed by measurement of stride length normalized to femur length determined using dual X-ray absorptiometry, n=5, 6 respectively (F). Statistical significance indicated by "*" p < 0.05, "**" p < 0.01, "***" p < 0.001, and "ns" indicating non-significance.
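To make the low-expression filter described above concrete, the sketch below shows one way the ≥10-reads-in-≥66%-of-samples rule could be applied to a count matrix; the file name and column layout are illustrative assumptions, not the authors' actual data, and the paper applied the filter to normalized counts.

```python
import pandas as pd

# Hypothetical (normalized) count matrix: rows = miRNAs, columns = samples.
# The file name and layout are assumptions for illustration.
counts = pd.read_csv("mirna_counts.csv", index_col=0)

MIN_READS = 10       # minimum reads per sample
MIN_FRACTION = 0.66  # fraction of samples that must pass

# Keep a miRNA only if >= 66% of samples have >= 10 reads,
# mirroring the filter applied before differential testing.
passing = (counts >= MIN_READS).mean(axis=1) >= MIN_FRACTION
filtered = counts[passing]

print(f"{passing.sum()} of {len(counts)} miRNAs retained")
```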
DISCUSSION
Vitamin D insufficiency is a prevalent condition for which the long-term impacts are poorly understood. Here we demonstrate that mice kept in a serum 25-OH vitamin D insufficient state for 12 months exhibit significant declines across multiple domains of physical performance. These domains include grip hang endurance, uphill sprint, and open field rearing, which together may be indicative of a decline in anaerobic capacity due to vitamin D insufficiency. The loss of anaerobic capacity is partially supported by a trend (p=0.13) towards smaller fast twitch fiber CSA in vitamin D insufficient mice, and by lower lean mass in vitamin D insufficient mice, although this latter difference was of limited duration.

Figure 4. To examine the impacts of vitamin D insufficiency on muscle biology, tissues were harvested following 12 months of sufficient (STD) or insufficient (LOW) supplementation. Quadriceps muscle was then analyzed with NADH staining (A) allowing for quantification of the cross sectional area (CSA) of light-stain fibers that corresponds to fast twitch fibers, n=5, 4 respectively (B). Gastrocnemius muscle was analyzed by differential centrifugation to determine myofibrillar, sarcoplasmic, and mitochondrial protein content, n=6 (C). Atrogin-1 expression was also determined in gastrocnemius muscle with western blotting and relative expression (atrogin / tubulin) was quantified using ImageJ software, n=6 (D). Mitochondrial biomass (E) and activity (F) in soleus muscle, n=6, were determined by quantitative PCR and biochemical assays, respectively. Statistical significance indicated by "*" p < 0.05 and "ns" indicating non-significance.
Belenchia et al. [33] also observed loss of lean mass due to approximately 40 weeks of dietary vitamin D deficiency (serum 25-OH vitamin D < 10 ng/ml) in female mice, initiated at 8 weeks of age.
We suspect our study was underpowered to identify such histological differences; however, the possibility that vitamin D insufficient mice exhibit smaller fast twitch fiber CSA is supported by our finding that vitamin D insufficient mice also exhibit greater expression of atrogin-1, which, along with other atrophy-associated proteins, has previously been linked to vitamin D signaling [27,34,35]. Thus, these data support the notion that long-term vitamin D insufficiency may contribute to the progression of sarcopenia. Our finding that vitamin D insufficient mice exhibit shorter stride lengths further emphasizes the potential contribution of vitamin D status, as such gait disturbances are an integral component of the functional capacity decline associated with sarcopenia [36,37].
Other areas of physical performance were not impacted in the time frame of our study. Surprisingly, we did not identify differences in grip strength in our mice, in contrast to multiple human studies reporting an association between vitamin D and strength [38][39][40][41], although not all studies agree [42,43]. However, we did observe a trend towards lower myofibrillar protein content (p=0.11), which may be indicative of future grip strength decline had we continued our experiment beyond 12 months. We also did not observe significant differences in treadmill performance, although vitamin D insufficient mice were trending towards lower treadmill performance at both 8 months (p=0.11) and 12 months (p=0.06). We believe this may be due to a greater anaerobic requirement as opposed to a possibility of aerobic deficit, as supported by our finding of differences when assessing treadmill performance using uphill intervals. Our findings that neither mitochondrial biomass nor activity were affected by vitamin D insufficiency further support the idea that both groups exhibit similar aerobic capacity.
Additionally, although the relationship between vitamin D and fall risk is suggestive, it remains inconclusive [44]. We anticipated vitamin D insufficient mice would exhibit worse rotarod performance, as rotarod is an indicator of balance and coordination [45]. Yet our data show both groups exhibit age-dependent declines. These findings are surprising in light of Sakai et al. [46] who reported vitamin D receptor ablation impairs rotarod performance, perhaps indicating that stark vitamin D deficiency or receptor ablation is necessary or that such differences would have appeared if we continued the experiment into advanced ages (> 24 months of age).
We found no long-term impact of serum 25-OH vitamin D on body weight, and only transient impacts on body composition, which is consistent with our 6-month study examining the impacts of alterations of serum 25-OH vitamin D levels in lean and obese mice [47]. However, Belenchia et al. reported declines in body weight in vitamin D deficient mice after 6 months until the endpoint of the study after 10 months, at which point these mice also exhibited decreased fat mass and lean body mass [33]. In contrast, our study identified a decrease in lean mass, and an increase in fat mass, after 8 months of insufficiency that was not observed after 12 months. However, the similarities and differences in our body composition findings compared to Belenchia et al. may be explained by the use of males versus females, insufficiency versus deficiency, and that our study continued two months longer. It was also surprising that our study failed to show differences in the inflammatory milieu in light of the reported relationships between serum vitamin D and chronic inflammation [28]. In particular, lower serum vitamin D was previously shown to be associated with increased IL-6 expression [28,48,49], yet we did not observe any increases in serum, brain, or cardiac tissues. We did observe a trend of greater IL-6 expression in adipose tissue of vitamin D insufficient mice (p=0.09), which would be consistent with mechanistic reports of the impacts of vitamin D in adipocytes [50].
Our analysis of miRNA sequencing revealed that a single miRNA, miR-26, was differentially expressed in vitamin D insufficient mice. miR-26 has previously been shown to be important for skeletal muscle differentiation [51,52], and was found in two human studies to be differentially expressed in response to exercise [53,54]. Additionally, miR-26a was differentially expressed in two separate pilot studies that included 5 subjects supplemented with high-dose vitamin D for a 12-month period [55]. However, the authors further reported that miR-26a was not differentially expressed in a larger-scale study [55], indicating the need for additional studies to better elucidate the impact of vitamin D status on miRNA profiles and expression. Our study also identified other miRNAs with reported roles in skeletal muscle biology, including miR-204 [56], miR-139 [57,58], miR-146 [58][59][60], and miR-30 [58,61], although none of these met the threshold for false discovery rate (q < 0.05). We think this may be in part due to insufficient power, which may have also prevented us from identifying differences in other parameters (i.e., fiber size, inflammation, physical performance). Additionally, our low sequencing depth (just under 500,000 mapped miRNA reads per sample) may also have restricted our ability to discern differences to only those miRNAs with high expression [62].
Our study also confirms the findings that altering vitamin D supplementation results in a rapid shift (both depletion and repletion) in serum 25-OH vitamin D levels (within 2 weeks) that is sustained relative to the amount of supplementation [33,47,63]. Our 12-month study was approximately 6 weeks longer than Belenchia et al. [33], which was performed in female mice, and together these studies demonstrate little to no impact of sex on the relationship between vitamin D supplementation and serum 25-OH vitamin D concentration. Interestingly, we did not observe differences in serum 1,25-(OH)2 vitamin D between 25-OH vitamin D sufficient and insufficient mice. Although this finding is consistent with our previous study when we induced vitamin D insufficiency for 6 months in mice [47], it remains surprising in light of the functional role of 1,25-(OH)2 vitamin D in skeletal muscle regulation [64][65][66]. Serum 1,25-(OH)2 vitamin D has been correlated with physical performance measures in older males and females, parameters that were not found to be correlated with serum 25-OH vitamin D [67]. Serum 1,25-(OH)2 vitamin D was also found to be correlated with low muscle mass in cross-sectional analyses of men and women aged 21-97 years, as well as knee extension force in women [11]. Additionally, both low serum 25-OH and 1,25-(OH)2 vitamin D were independently found to be associated with the incidence of sarcopenia at a 5-year follow-up in men >70 years of age [68]. Yet, Boonen et al. reported no association between 1,25-(OH)2 vitamin D and knee extension strength in women aged 70 to 90 [69], and Gielen et al. also reported no association between 1,25-(OH)2 vitamin D and physical performance, specifically grip strength and gait speed, in men aged 70 and older [70]. These studies were observational and did not involve long-term abatement of dietary vitamin D3. It is therefore possible that the duration and the degree of serum 25-OH vitamin D reduction may affect outcomes. Another possibility is that serum measures of 1,25-(OH)2 vitamin D in our year-long study did not reflect the actual 1,25-(OH)2 vitamin D within muscle, since skeletal muscle expresses the 25-hydroxyvitamin D3 1-alpha-hydroxylase [20,71].
With regards to other serum markers, Belenchia et al. reported changes in serum calcium levels in vitamin D deficient mice, which were not observed in this study, in our previous study [47], or by Mallya et al. [63]. Likewise, Belenchia et al. was the only study to report significant differences in PTH concentrations; however, intact PTH was trending towards elevation in our vitamin D insufficient mice (p=0.10), and our study may have been insufficiently powered to observe this. Despite these differences, the behavior of serum 25-OH vitamin D in response to altered vitamin D supplementation was consistent between these studies and supports the use of dietary supplementation and deprivation to examine serum 25-OH vitamin D related phenomena.
CONCLUSION
Serum 25-OH vitamin D declines rapidly and remains consistently depressed in response to low supplementation. Prolonged vitamin D insufficiency induces characteristics of sarcopenia that include poor anaerobic capacity, lower lean mass, and a trend towards smaller fast twitch fiber CSA, as well as gait disturbance. Vitamin D insufficient mice also exhibited increased expression of atrophy-associated atrogin-1 and differential expression of muscle-regulation-associated miR-26a. These data suggest a role for chronic vitamin D insufficiency in the development of sarcopenia, highlighting the need for further animal and human studies to investigate the impacts of vitamin D during aging.
MATERIALS AND METHODS

Animals
Twelve C57BL/6J mice (5 months old) were purchased from The Jackson Laboratory (Bar Harbor, ME). After 1 month, the mice were randomly sorted into groups (n=6) that received AIN-93G chow (Dyets Inc., Bethlehem, PA) supplemented with either the standard facility amount of 1000 IU vitamin D3/kg chow (STD) to maintain serum 25-OH vitamin D sufficiency or 125 IU vitamin D3/kg chow (LOW) to induce serum vitamin D insufficiency over a period of 12 months (Table 3). Additionally, eight mice were initially supplemented with 125 IU, but then switched after two months to 1000 IU to examine the rate of serum 25-OH vitamin D repletion. Food and water were provided ad libitum, and mice were housed in large shoebox animal cages containing 6 or 8 mice per cage. Lighting was on a 12-hour on / 12-hour off cycle, and cages were shielded to reduce exposure to facility lighting. Body weight was measured every two weeks. All studies and experimental protocols were approved by and in compliance with guidelines of the Miami VA and VA Western New York Animal Care and Use Committees.
ELISA and Colorimetric assays
Blood was collected through the sub-mandibular vein using a mouse lancet (MEDIpoint, Inc., Mineola, NY) into microcentrifuge tubes. Samples were held at room temperature for 10 minutes to allow coagulation and then centrifuged at 16,000 × g for 10 minutes at 4 °C to separate the serum. Analysis of serum was performed using ELISA kits for 25-OH vitamin D (ImmunoDiagnostic Systems, Inc., Scottsdale, AZ), 1,25-(OH)2 vitamin D (MyBioSource, San Diego, CA), and intact PTH (MyBioSource). Colorimetric assays were performed to assess serum calcium concentration (Biovision, San Francisco, CA) according to manufacturer protocols. Multiplex ELISA was performed using a multi-analyte ELISA plate (Biorad, Hercules, CA) that includes IL-1α, IL-1β, IL-6, IL-10, IL-15, IL-18, MCP, and TNFα, which was then analyzed using a Bio-plex Magpix (Biorad, Hercules, CA).
Dual-energy X-Ray Absorptiometry (DEXA)
Analysis of bone mineral density, body fat %, and lean mass was performed using a Lunar PIXImus II (GE Healthcare, United Kingdom). Animals were anesthetized using a ketamine/xylazine cocktail and then analyzed with a single scan after 4 months of treatment and every 4 months thereafter.
Physical performance assessments
A single investigator, blinded to the group designations of the mice, performed all animal assessments. Additionally, all experiments were performed during lighted hours and at the same time of day at each assessment time point. Protocols for each assessment were as follows:
Grip strength
Maximal grip strength data were generated using a Columbus Instruments grip force meter (Columbus, OH) as the average of the best 3 of 5 trials for each mouse. For each trial the mouse was held firmly near the base of the tail and placed with all four paws upon a metal grid attached to a force meter. The mouse was then pulled such that the body of the mouse was parallel to the ground until the mouse lost grip. Mice were given 10 seconds of rest between trials.
Treadmill endurance
The mice were given a single assessment at each time point on a Columbus Instruments treadmill set with no inclination (flat, 0°). In the trial, the belt slowly accelerates from 5 to 25 m/min over 60 minutes and the mouse is timed until exhaustion, defined as 10 visits to a shock pad (54 V, 0.72 mA), 20 total shocks, or having remained on the treadmill belt for 60 minutes. Prior to the initial assessment, mice were given 3 similar trials (separated by two weeks) to acclimate the animals to the device.
Uphill sprint assessment
To assess uphill sprint endurance, the treadmill was inclined at 25° and mice were given a warm-up period of 1 minute at 5 m/min. This was followed by work intervals that started at 10 m/min for 20 seconds and increased in 1 m/min increments after each 20-second active recovery period at 5 m/min. The mouse continued until exhaustion, defined as visiting the shock pad 5 times or receiving 10 total shocks.
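To make the interval structure explicit, the sketch below generates the speed schedule described above; the cap on the number of intervals is an arbitrary assumption, since in practice the trial ends at exhaustion.

```python
def uphill_sprint_schedule(max_intervals: int = 20):
    """Yield (duration_s, speed_m_per_min) pairs for the uphill protocol."""
    yield (60, 5)  # 1-minute warm-up at 5 m/min
    speed = 10
    for _ in range(max_intervals):
        yield (20, speed)  # 20-second work interval
        yield (20, 5)      # 20-second active recovery at 5 m/min
        speed += 1         # speed increases by 1 m/min each interval

for dur, spd in uphill_sprint_schedule(3):
    print(f"{dur:>3} s at {spd} m/min")
```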
Grip wire endurance
Mice were placed on a 5 mm thick neoprene wire and timed until fall. Dividers were placed on either side of the wire to prevent mice from leaving the apparatus. Mice were given three attempts to attain a minimum 15 seconds per trial, and the better score of two trials was used for each mouse.
Stride length
To measure stride length, the front paws of the mouse were coated in dye (Bradford reagent) and the mouse was then placed on one end of an apparatus (75 cm long × 30 cm wide) that was lined with paper. A mouse shelter from the home cage was inserted on the other end of the apparatus to motivate the mouse to walk. Stride length was measured as the average of 5-6 steps per mouse as the distance between the centers of the paws. The stride length was further normalized by femur length as determined by measuring images of the femur generated by DEXA.
Open field activity
To assess spontaneous activity the mouse was placed into a 90 cm x 90 cm apparatus that was divided equally into 4 quadrants. An investigator was positioned approximately 3 feet away and manually counted crossings into new quadrants and rearings (standing on hind legs) over a 5-minute period.
Muscle histology
NADH histological analysis of quadriceps muscle (rectus femoris) was performed as described previously [72]. Briefly, 10 µm frozen muscle sections were submerged in a solution containing 1 mg/ml NADH (Sigma, St. Louis, MO), 1 mg/ml Nitro Blue Tetrazolium (VWR #TCD0844), and 0.2 M Tris-HCl buffer at pH 7.4 for 45 minutes at 37 °C. Sections were then immersed in a series of acetone baths, rinsed in distilled water, and dehydrated by immersion in ethanol and xylene, before finally mounting on a cover slip with Cytoseal (Fisher #23-244257). An investigator, blinded to the identity of the mice, identified and tallied fiber types and also measured cross sectional area (CSA) of the fibers using Motic software (Motic, Hong Kong).
Analysis of muscle myofibrillar, sarcoplasmic, and mitochondrial fractions
Our methodology to assess relative percentages of protein fractions was adapted from previous studies [73][74][75]. Approximately 30-40 mg of gastrocnemius muscle was homogenized using a pestle tissue homogenizer in 3.0 mL of ice-cold analysis buffer (20 mM tris-HCl, 250 mM sucrose, 100 mM KCl, 5 mM EDTA, pH 6.8).
The homogenate was centrifuged at 1,000 x g for 15 minutes. To isolate the myofibrillar fraction, the pellet was washed twice with 5 mL of ice cold wash buffer (20 mM tris-HCl, 175 mM KCl, 5 mM EDTA, 0.5% (v/v) Triton-X100 pH 6.8) and centrifuged at 1,000 x g for 10 minutes at 4°C. The pellet was suspended in 1.0 mL of ice-cold analysis buffer for subsequent protein determination. To isolate the mitochondrial and sarcoplasmic fractions, the initial supernatant was centrifuged at 9,000 x g for 20 minutes at 4°C. The supernatant was then collected as the sarcoplasmic fraction for subsequent protein determination. The pellet was washed in 5 mL of ice cold SHE buffer (250 mM sucrose, 10 mM Hepes, 1 mM EGTA, pH 7.2) and centrifuged at 9,000 x g for 20 minutes at 4°C. The pellet was then suspended in 100 μL of ice-cold SHE buffer. Determination of protein concentration was performed using a Bradford assay and used to determine the protein content of each fraction normalized to mg of wet tissue weight.
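As a small illustration of the final normalization step, the snippet below converts a Bradford protein concentration and a fraction volume into μg of protein per mg of wet tissue; all numeric values are hypothetical, not measurements from the study.

```python
def protein_per_mg_tissue(conc_ug_per_ml: float, volume_ml: float,
                          tissue_mg: float) -> float:
    """Total fraction protein (concentration x volume) per mg wet tissue."""
    return conc_ug_per_ml * volume_ml / tissue_mg

# Hypothetical myofibrillar fraction: 700 ug/ml in 1.0 ml from 35 mg tissue.
print(f"{protein_per_mg_tissue(700.0, 1.0, 35.0):.1f} ug protein / mg tissue")
```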
MicroRNA analysis
Total RNA was harvested from tibialis anterior muscle using the Qiagen miRNeasy purification kit (Qiagen, Germantown, MD) according to the manufacturer's instructions. RNA libraries for sequencing were prepared using the NEBNext Multiplex Small RNA Library preparation kit, and the miRNA libraries were sequenced on the Illumina NextSeq 500, generating 76-cycle single-end reads. Demultiplexing was performed with Illumina's bcl2fastq version 2.17.1.14. General sequence quality was evaluated with FastQC, and reads were trimmed of adapters using Trim Galore v0.4.4. Subsequently, reads were aligned to the Ensembl GRCm38 genome build using bowtie2 v2.2.8 with the --very-sensitive-local parameter set [76]. Aligned reads were quantified using featureCounts [77] against the miRBase v21 miRNA database, and the resulting counts were tested in R using the Bioconductor package DESeq2 [78]. miRNAs with counts of less than 10 reads in 66% of the samples, post-normalization, were removed from the analysis. Statistical analysis was performed using DESeq2, which includes a Benjamini-Hochberg correction for false positives [79].
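As a rough outline of how the read-processing steps described above chain together, the sketch below drives the same tools from Python via subprocess. The file names, index basename, and annotation file are placeholders, and any flags beyond --very-sensitive-local are assumptions not stated in the text, so this is an assumption-laden sketch rather than the authors' actual pipeline.

```python
import subprocess
from pathlib import Path

# Placeholder inputs -- actual paths and index names are assumptions.
fastq = Path("sample_R1.fastq.gz")
bowtie2_index = "GRCm38"              # prebuilt bowtie2 index basename
mirbase_gff = "mmu_mirbase_v21.gff3"  # miRBase v21 annotation (assumed file)

def run(cmd):
    """Run a shell command, raising on failure."""
    print("->", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Adapter trimming (Trim Galore wraps cutadapt).
run(["trim_galore", "--gzip", str(fastq)])
trimmed = fastq.name.replace(".fastq.gz", "_trimmed.fq.gz")

# 2. Alignment with bowtie2 in local mode, as described in the text.
run(["bowtie2", "--very-sensitive-local", "-x", bowtie2_index,
     "-U", trimmed, "-S", "sample.sam"])

# 3. Count reads overlapping miRBase miRNA features.
run(["featureCounts", "-a", mirbase_gff, "-t", "miRNA",
     "-g", "Name", "-o", "mirna_counts.txt", "sample.sam"])
# Downstream testing (DESeq2 with Benjamini-Hochberg correction)
# would then be carried out in R on the resulting count table.
```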
Statistics
Statistical analysis was performed using XLStat statistical software (Addinsoft, New York, NY). A Student's t-test was used for all comparisons of standard (STD) supplementation versus insufficient (LOW) supplementation. All data were screened for outliers using a Grubbs outlier test with alpha equal to 0.05. The cut-off for significant comparisons was p < 0.05. All data are presented as mean ± standard deviation.
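A minimal sketch of this screen-then-test procedure is shown below, reimplementing the two-sided Grubbs test with scipy rather than XLStat; the grip-endurance values are purely hypothetical and serve only to illustrate the order of operations.

```python
import numpy as np
from scipy import stats

def grubbs_screen(x, alpha=0.05):
    """Iteratively remove single outliers via the two-sided Grubbs test."""
    x = np.asarray(x, dtype=float)
    while x.size > 2:
        mean, sd = x.mean(), x.std(ddof=1)
        idx = np.argmax(np.abs(x - mean))
        g = abs(x[idx] - mean) / sd
        n = x.size
        # Two-sided critical value for the Grubbs statistic.
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
        if g > g_crit:
            x = np.delete(x, idx)  # drop the outlier and re-test
        else:
            break
    return x

# Hypothetical grip-endurance times (seconds) -- not real study data.
std = grubbs_screen([140.0, 152.5, 98.7, 201.3, 145.0, 147.2])
low = grubbs_screen([35.1, 28.9, 40.2, 31.5, 37.6])

t_stat, p = stats.ttest_ind(std, low)  # Student's t-test, as in the paper
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```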
Rotarod balance and coordination
Mice are timed for ability to stay atop a spinning cylinder that increases in speed from 4 to 40 RPM over 5 minutes. Latencies to fall or a maximum time of 360 seconds are recorded in 3 trials. The mice tolerate the fall without harm.
Grip Grid
Mice are placed on a wire grid and then inverted over a 30 cm x 30 cm x 45 cm (L x W x H) box and timed until fall. Three timed trials are administered, and the trial is stopped if the mouse falls or maintains a grip for 600 seconds. The mice tolerate the fall without harm.

Figure S1. Physical performance in vitamin D sufficient and insufficient mice. Vitamin D sufficient (STD) and insufficient (LOW) mice were assessed across a range of physical performance domains that include: rotarod latency to fall as the best 2 of 3 trials (A), and grip endurance as the best of two trials timed for latency to fall from a grid apparatus (B). Statistical significance indicated by "**" p < 0.01.
Alcohol Diffusion in Alkali-Metal-Doped Polymeric Membranes for Using in Alkaline Direct Alcohol Fuel Cells
The alcohol permeability of anion exchange membranes is a crucial property when they are used as a solid electrolyte in alkaline direct alcohol fuel cells and electrolyzers. The membrane is the core component to impede the fuel crossover and allow the ionic transport, and it strongly affects the fuel cell performance. The aim of this work is to compare different anion exchange membranes to be used as an electrolyte in alkaline direct alcohol fuel cells. The alcohol permeability of four commercial anion exchange membranes with different structures was analyzed in several hydro-organic media. The membranes were doped using different types of alkaline doping agents (LiOH, NaOH, and KOH) and different conditions to analyze the effect of the treatment on the membrane behavior. Methanol, ethanol, and 1-propanol were analyzed. The study was focused on the diffusive contribution to the alcohol crossover that affects the fuel cell performance. To this purpose, alcohol permeability was determined for various membrane systems. The results show that membrane alcohol permeability is affected by the doping conditions, depending on the type of membrane and the nature of the alcohol. In general, heterogeneous membranes presented a positive correlation between alcohol permeability and doping capacity, with a lower effect for larger-size alcohols. A definite trend was not observed for homogeneous membranes.
Introduction
The great environmental damage caused by fossil fuels, coupled with their limited availability, has encouraged research into alternative energy sources [1]. One of the biggest promises in this area is membrane-based fuel cell technology [2]. This fact has encouraged the development of membranes with the appropriate characteristics to be used in this application [3,4], as well as in electrolyzers to generate green hydrogen [5,6].
Acid membranes, such as Nafion, are commonly used as polymer electrolyte membranes in fuel cells. However, in comparison to acid proton exchange membrane fuel cells (PEMFCs), the alkaline medium rendered by anion exchange membrane fuel cells (AEMFCs) presents advantages such as the potential to use non-precious-metal catalysts. Moreover, alkaline membrane water electrolysis is a relatively new technology with the advantages of both alkaline water electrolysis (AWE) and proton exchange membrane water electrolysis (PEMWE), overcoming some of their limitations [7]. This technology has been scarcely investigated so far [8]. For this reason, the design of new anion exchange membranes for AEMFC applications has attracted attention, and the number of published articles on membranes for AEMFCs' applications has increased continuously over the last decade [9][10][11][12].
Among the possible types of fuel cells, direct alcohol fuel cells (DAFCs) emerge as a good candidate for power sources for portable devices and household appliances [13][14][15]. Although methanol has been more extensively explored in DAFCs to replace hydrogen fuel because of its higher energy density, other fuels such as ethanol, n-propanol, or ethylene glycol are also possible alternatives to hydrogen with high energy density. Moreover, they are easy to store, transport, and handle [16]. Similarly, alcohol electrolysis using polymeric membranes as the electrolyte is a promising route for storing excess renewable energy in hydrogen [17,18]. In portable power applications, alkaline alcohol solution electrolysis can be suitable to obtain hydrogen quickly and with high purity at low temperature [19]. Moreover, alcohol electrolysis reduces the energy demand compared to water electrolysis, since the oxidation of the alcohol molecule takes place at a lower electrical potential than that required to achieve water splitting [20,21]. Furthermore, some results suggest that alcohol electrolysis is more efficient using OH-conducting membranes under appropriate operation conditions [22].
One of the main problems in a direct alcohol fuel cell is the transport of nonoxidized alcohol through the membrane and the dehydration of the typically used Nafion proton exchange membrane [23]. This transport is known as crossover and has two contributions. One is the alcohol diffusion from the anode to the cathode due to the existing concentration gradient, and the other is the electro-osmotic transport accompanying the charge carrier ions [24]. Exploring alkaline fuel cells with anion exchange membranes as the electrolyte to replace cation exchange membranes has been one of the suggested solutions [25]. Unlike in an acid cell, the electro-osmotic transport of alcohol is not an issue in an alkaline fuel cell, since the ionic flow is in this case due to hydroxide ions, and it occurs in the reverse direction to that in proton conduction systems. However, the alcohol diffusion causes conversion losses in terms of lost fuel and depolarization losses at the cathode, strongly affecting the fuel cell performance. The alcohol permeability is also a crucial factor in alkaline-exchange-membrane-based alcohol electrolysis. In this application, the membrane acts as a barrier to the passage of fuel, and its alcohol permeability is a fundamental issue. A common parameter to estimate the diffusion contribution to the crossover is the membrane permeability defined as the product of diffusivity and solubility of the membrane.
In a previous work [26], a correlation was observed between the alkali-doping capacity and the swelling properties of different commercial anion exchange membranes. An effect of the doping process on their alcohol permeability, and therefore on the diffusion contribution to the alcohol crossover, would be expected when these membranes are used as the electrolyte in direct alcohol fuel cells. The aim of this work was to study this effect for different alcohols and doping agents, and to analyse the influence of the membrane structure. This aspect does not usually receive much attention, yet it is an important issue, as the alcohol crossover is one of the main factors limiting fuel cell performance.
Materials
Four different commercial anion exchange membranes were tested in this study. The Ralex AM(H)-PES membrane (hereafter named PES) and AM(H)-PP membrane (hereafter named PP) are composites formed from ion exchange resins with a polyethylene binder and quaternary ammonium base groups. The two membranes have different reinforcing materials: the PES membrane uses a polyester fabric and the PP membrane a polypropylene fabric. The Neosepta AMX membrane (hereafter named AMX) is composed of styrene-divinylbenzene copolymers with tri-alkyl ammonium fixed-charge groups. It contains a reinforcing inert mesh. Fumasep FAP-450 (hereafter named FAP) is a non-reinforced, fluorinated anion exchange membrane.
With respect to their structure and preparation, the Ralex PES and PP membranes are considered heterogeneous membranes, whereas Neosepta AMX and Fumasep FAP are considered homogeneous membranes. Membranes were used as received, without any previous treatment. Table 1 shows some of their main properties. The doping solutions used in this study were selected taking into account the results obtained in a previous work with these same membranes [26]. In that work, it was observed that the heterogeneous membrane PP presented the maximum doping capacity, and the homogeneous non-reinforced FAP membrane only presented a significant doping capacity in 1-propanol media. For this reason, the PP membrane was selected for a more complete study, and the doped FAP membranes were only tested in 1-propanol media. The materials used in the experiments were water, methanol (MeOH), ethanol (EtOH), and 1-propanol (1-PrOH) as pure liquids, and water-alcohol mixtures of 1M concentration as solvents. Table 2 shows some properties of the pure liquids and mixtures used as solutions. LiOH, NaOH, and KOH of 1M concentration and NaOH of 2M concentration were used as alkaline salts. The alcohol present in the doping solutions used to dope the membranes with 1M alkaline salts was the same as that used in the diffusion process. Pure pro-analysis-grade chemicals and doubly distilled, degassed pure water were used.
Membrane Doping
Before the experiments, the membrane samples (Supplementary Materials Figure S1) were dried under vacuum for 24 h and weighed on a high-precision balance (±0.0001 g). After that, the samples were immersed in closed bottles containing the corresponding solution and allowed to equilibrate at controlled temperature by placing the bottles in a large controlled-temperature box. After a minimum of seven days of immersion, the swollen membranes were removed from the concentrated alkaline solution and washed in deionized water several times to remove the free alkali remaining in the membrane. Afterwards, the membranes were dried under vacuum for 24 h and weighed again. The doping capacity, which indicates the amount of alkaline agent retained by the membrane, was estimated from the alkali uptake as [29]:

$$\mathrm{DC} = \frac{m_{OH\text{-}d} - m_d}{m_d}\,, \qquad (1)$$

where $m_d$ is the mass of the nondoped dry membrane and $m_{OH\text{-}d}$ is the mass of the corresponding dry-doped membrane.
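For concreteness, a small helper like the following could compute the doping capacity from the two dry weights using the relation above; the sample masses are invented for illustration and are not measured values from the study.

```python
def doping_capacity(m_dry: float, m_doped_dry: float) -> float:
    """Relative alkali uptake (Equation 1): (m_OH-d - m_d) / m_d."""
    return (m_doped_dry - m_dry) / m_dry

# Hypothetical dry weights in grams (not measured values from the study).
dc = doping_capacity(m_dry=0.5120, m_doped_dry=0.5315)
print(f"Doping capacity: {dc:.3%}")
```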
Alcohol Permeability
The membranes were doped as was indicated in the previous section. In this case, after the membranes were removed from the solution and washed in deionized water, they were kept in closed glass containers with deionized water until used.
The experimental device used to measure the alcohol permeability of the membranes, shown in Figure 1, was similar to the one used in previous works [30]. It consisted of a diffusion cell with the membrane separating two chambers: the water chamber, initially containing pure water, and the alcohol chamber, initially containing a water-alcohol mixture of 50% wt. concentration. Two glass reservoirs of capacity 0.5 × 10⁻³ m³ contained the circulation solutions for both chambers. The corresponding solutions circulated from the thermostated reservoirs by means of a peristaltic pump. The circulation velocity of the solutions was set to 300 mL min⁻¹. The whole system was immersed in a large, controlled-ambient-temperature box. The temperature of the experiments was 25 °C. The effective area was 18.5 × 10⁻⁴ m². Pure water was introduced in one reservoir and a water-alcohol solution in the other one. When the temperature of 25 °C was achieved in both chambers, the solutions were circulated through the cell.
With methanol, a test was also carried out using pure alcohol as the initial solution in the alcohol chamber. In this device, with the membrane separating two solutions of different concentrations, the alcohol permeability can be modelled on the basis of Fick's law for diaphragm-cell diffusion if the experimental conditions allow the assumption of a pseudo-steady state and negligible concentration polarization effects; that is, a large volume of solution compared to the membrane volume and well-stirred solutions. The volume of solutions in the experiments was about 3 × 10⁻⁴ m³ in each chamber.
If we do not consider the existence of water transport through the membrane, the total flux is due only to the alcohol diffusion, and it can be estimated from the change with time of the concentration in one of the chambers, $c_1$, of the diffusion cell [31]:

$$J = \frac{V_1}{A}\,\frac{dc_1}{dt}\,, \qquad (2)$$

where $V_1$ is the volume of chamber 1 and $A$ is the effective membrane area. Under pseudo-steady conditions, the concentration change can be considered linear with time, and the following well-known expression can be used to estimate the alcohol permeability [30,32]:

$$c_1(t) = c_1^0 + \frac{P\,A\,(c_2^0 - c_1^0)}{V_1^0}\,t\,, \qquad (3)$$

where $V_1^0$ is the initial volume of the water chamber and $c_1^0$ and $c_2^0$ are, respectively, the initial concentrations in the water and alcohol chambers. In this case, the concentration was measured in the diluted chamber, which contained pure water at the initial moment of the process; thus, $c_1^0 = 0$ and $c_1(t) = \alpha t$. The alcohol permeability is then obtained as:

$$P = \frac{\alpha\,V_1^0}{A\,c_2^0}\,, \qquad (4)$$

in which the parameter $\alpha$ indicates the rate of change of the alcohol concentration in the corresponding chamber. This is one of the most usual methods to determine ex situ the alcohol permeability of a membrane [29][30][31][32][33][34].
To determine the parameter $\alpha$ in our device, the density of the chamber that initially contained pure water was measured every hour during the experiments. We took small solution samples from the water reservoir every hour during the six or seven hours of each experiment. The temperature of these samples was brought to 20 °C; then, the density measurements were made using an AP Paar density meter, model MDA58, with an accuracy of ±10⁻² kg m⁻³. Afterwards, we determined the concentration of the samples as a function of time using previously obtained calibration concentration-density curves [27,28,35-37]. For each experiment, the values were fitted to a straight line, whose slope allowed us to calculate the value of the parameter $\alpha$. This parameter was used to calculate the permeability by means of Equation (4).
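The sketch below illustrates this fitting procedure under Equation (4). The cell area and chamber volume match the figures quoted above, but the hourly concentration values and the feed concentration are invented placeholders, not data from the study.

```python
import numpy as np

# Cell geometry from the text; concentration values are hypothetical.
A = 18.5e-4    # effective membrane area, m^2
V1_0 = 3.0e-4  # initial volume of the water chamber, m^3
c2_0 = 500.0   # initial alcohol concentration in the feed chamber, mol/m^3 (assumed)

t = np.arange(0, 7) * 3600.0                         # sampling times, s (hourly)
c1 = np.array([0.0, 0.9, 1.8, 2.6, 3.5, 4.4, 5.2])   # mol/m^3, from density calibration

# Linear fit c1(t) = alpha * t gives the slope alpha (Equation 3 with c1_0 = 0).
alpha, intercept = np.polyfit(t, c1, 1)

# Equation (4): P = alpha * V1_0 / (A * c2_0).
P = alpha * V1_0 / (A * c2_0)
print(f"alpha = {alpha:.3e} mol m^-3 s^-1, P = {P:.2e} m/s")
```

With these placeholder numbers the result lands in the 10⁻⁸ m s⁻¹ range, consistent with the order of magnitude reported in the conclusions.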
Alcohol Permeability for Nondoped Membranes
We first carried out a preliminary study with the nondoped membranes, to compare the results with those subsequently obtained with doped membranes. To this purpose, experiments were carried out with pure methanol and 50% wt. alcohol-water mixtures, using methanol, ethanol, and 1-propanol as alcohols, in the concentrated chamber.
As an example, the results for the time dependence of the concentration in the water chamber are shown in Figure 2 for the case of methanol. We can observe that the alcohol concentration always increased in the diluted chamber, according to the diffusion process occurring from the concentrated to the diluted chamber. We observed that all the membranes looked, in general, deteriorated at the end of the experiments when pure methanol was used in the concentrated chamber. For this reason, no experiments were carried out with the other pure alcohols, and only 50% wt. water-alcohol mixtures were used as concentrated solutions in the rest of the experiments. Similar concentration-time curves were obtained for nondoped membranes using ethanol and 1-propanol. Different studies suggest that the molecular transport of organic solvent in a rubbery polymer system is controlled by a combination of sorption, diffusion, and permeation mechanisms [38]. With linear alcohols, the polymer matrix is usually unaffected by the diffusant, so that diffusion is expected to follow Fick's law [39], in agreement with the behaviour observed in the experiments.

Table 3 shows the alcohol permeability estimated for the nondoped membranes from the obtained values of parameter α and Equation (4) under the different studied conditions. For nondoped membranes, we can see that larger methanol permeabilities and a higher influence of the methanol concentration were observed for the homogeneous AMX and FAP membranes. Methanol permeabilities were found to be of similar values to those obtained for other membranes studied in the literature [31,33,40,41]. When we compare the alcohol permeability values obtained for different alcohols using a 50% wt. water-alcohol mixture, we can observe that the alcohol permeability decreases with the molar mass of the alcohol for all membranes. This is probably due to the hydrated molecular size increasing with increasing alcohol molar mass, leading to a relatively high resistance for alcohol transport across the membrane. A similar trend has also been observed with other kinds of membranes [40][41][42]. The highest influence of the membrane structure was observed with methanol. For ethanol, similar values were estimated for all the tested membranes. With 1-propanol, only the non-reinforced homogeneous FAP membrane presented a significant difference with respect to the other membranes.
Alcohol Permeability for Doped Membranes
In Figure 3, examples of the concentration in the water chamber as a function of time are shown for different doped membrane systems. We also included the values of the nondoped membranes in these figures for a better comparison. It can be observed that the behaviour was similar to that obtained with the nondoped membranes, with an increase in concentration in the diluted chamber. The most interesting aspect of these figures is that the membrane doping affects the membranes' alcohol diffusion properties.
As in the case of the nondoped membranes, parameter α was estimated from the concentration-time data and the value of the alcohol permeability was obtained using Equation (4). Tables 4 and 5 show the results obtained for the doped membranes; the values corresponding to the nondoped membranes are also included for a better comparison. Table 4 shows the values corresponding to the alcohol permeability of membrane PP with the different doping agents, and Table 5 shows the values of the alcohol permeability obtained for the doped PES, AMX, and FAP membranes.
The results presented in Tables 4 and 5 show that the effect of the doping process on the alcohol permeability is different for each membrane and it depends on the doping agent.
For a better overall view of the doping effect on the diffusion properties of the membranes, Figure 4 shows all the alcohol permeability estimates for the different membrane systems analysed.
Heterogeneous nondoped membranes presented in general lower alcohol permeability values. Nevertheless, when these membranes were alkali-metal-doped, their alcohol permeability increased. This effect was more pronounced for the PES membrane. However, the heterogeneous-doped PP membrane also presented, in general, lower values for the alcohol permeability than the homogeneous membranes, not showing a high influence of the doping agent on the alcohol permeability. The homogeneous AMX membrane also increased its alcohol permeability value after the doping process, with the exception of the sample doped with NaOH 2M, which reduced its methanol permeability. For the homogeneous FAP membrane, the doping process reduced the 1-propanol permeability, mainly when the membrane was doped in 1-propanol medium. Figure 5a shows the alcohol permeability as a function of the doping capacity for heterogeneous PP and PES membranes. For these membranes, a general increasing trend of the alcohol permeability with the doping capacity was observed. Previous results [26] showed that, for heterogeneous membranes, the doping process led to minor liquid uptakes in the doped membranes and to larger water affinities. Thus, the results would indicate that for the heterogeneous membranes, a lower water content favours the diffusion of alcohol through the membrane. Nevertheless, this effect decreases with increasing the viscosity of the alcohol. Heterogeneous nondoped membranes presented in general lower alcohol permeability values. Nevertheless, when these membranes were alkali-metal-doped, their alcohol permeability increased. This effect was more pronounced for the PES membrane. However, the heterogeneous-doped PP membrane also presented, in general, lower values for the alcohol permeability than the homogeneous membranes, not showing a high influence of the doping agent on the alcohol permeability. The homogeneous AMX membrane also increased its alcohol permeability value after the doping process, with the exception of the sample doped with NaOH 2M, which reduced its methanol permeability. For the homogeneous FAP membrane, the doping process reduced the 1-propanol permeability, mainly when the membrane was doped in 1-propanol medium. Figure 5a shows the alcohol permeability as a function of the doping capacity for heterogeneous PP and PES membranes. For these membranes, a general increasing trend of the alcohol permeability with the doping capacity was observed. Previous results [26] showed that, for heterogeneous membranes, the doping process led to minor liquid uptakes in the doped membranes and to larger water affinities. Thus, the results would indicate that for the heterogeneous membranes, a lower water content favours the diffusion of alcohol through the membrane. Nevertheless, this effect decreases with increasing the viscosity of the alcohol. These results seem to indicate that a key factor in the alcohol permeability of the more porous heterogeneous membranes is the membrane water content. The doping process affects the swelling properties of the membranes and, thus, their diffusive behaviour. Similar results have been found with alkaline-doped PBI membranes. However, for these membranes, the observed increase was explained taking into account the water uptake increase accompanied by the increased alkali uptake. In this case, the increased permeability resulted from the reduced intermolecular interaction due to the established ionic channels [29].
Alcohol Permeability and Doping Capacity
For homogeneous membranes, with lower doping capacity and water content, a definite trend was not observed (Figure 5b). Despite their low doping capacity, these membranes showed a great influence of the doping process on their alcohol permeability. It was observed in a previous work [26] that the doping process had low influence on the membrane water uptake. Thus, for denser homogeneous membranes, it cannot be the cause of the observed decrease in the alcohol permeability. The permeability obtained for Equation (4) results from the influence of the membrane thickness. As the doping capacity affects the expansion properties of the membranes, it is possible that the doping process could affect the membrane thickness and, thus, the alcohol permeability. For homogene- These results seem to indicate that a key factor in the alcohol permeability of the more porous heterogeneous membranes is the membrane water content. The doping process affects the swelling properties of the membranes and, thus, their diffusive behaviour. Similar results have been found with alkaline-doped PBI membranes. However, for these membranes, the observed increase was explained taking into account the water uptake increase accompanied by the increased alkali uptake. In this case, the increased permeability resulted from the reduced intermolecular interaction due to the established ionic channels [29].
For homogeneous membranes, with lower doping capacity and water content, a definite trend was not observed (Figure 5b). Despite their low doping capacity, these membranes showed a great influence of the doping process on their alcohol permeability.
It was observed in a previous work [26] that the doping process had low influence on the membrane water uptake. Thus, for denser homogeneous membranes, it cannot be the cause of the observed decrease in the alcohol permeability. The permeability obtained for Equation (4) results from the influence of the membrane thickness. As the doping capacity affects the expansion properties of the membranes, it is possible that the doping process could affect the membrane thickness and, thus, the alcohol permeability. For homogeneous membranes, it was observed that the presence of hydroxide in the solution had, in general, less effect on the membrane surface expansion [26]. It could indicate that the doping effect occurs mainly in the direction of membrane thickness. Further works would be necessary to clarify this statement.
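The thickness argument can be made concrete with a minimal numerical sketch. It assumes the textbook solution-diffusion estimate P = D·K/L (diffusivity times partition coefficient over membrane thickness), offered only as a stand-in rather than as the paper's Equation (4); all parameter values below are illustrative, not measured data.

```python
# Illustrative only: standard solution-diffusion estimate P = D*K/L.
# D, K and the thicknesses are assumed values, not data from this study.

def permeability(diffusivity_m2_s: float, partition_coeff: float,
                 thickness_m: float) -> float:
    """Membrane permeability in m/s under the solution-diffusion model."""
    return diffusivity_m2_s * partition_coeff / thickness_m

D = 1.0e-11   # alcohol diffusivity in the swollen membrane, m^2/s (assumed)
K = 0.5       # alcohol partition coefficient, dimensionless (assumed)

# A doping-induced thickness change directly rescales the permeability:
for L in (50e-6, 100e-6, 150e-6):  # membrane thickness, m
    print(f"L = {L*1e6:5.0f} um -> P = {permeability(D, K, L):.2e} m/s")
```

With these assumed values the sketch reproduces the 10^-7 to 10^-8 m/s order of magnitude reported below, showing how sensitive the estimate is to the thickness alone.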
Conclusions
The alkali-metal-doping effect on the alcohol diffusion properties of different anion exchange membranes was investigated. The membranes and the doping agents were selected on the basis of previous results on the membranes' doping capacity, choosing those with the higher doping capacities.
Alcohol permeability values of the order of 10^-7 to 10^-8 m s^-1 were estimated. In general, no definite trend between doping agent and permeability was observed; the effect depends on the particular membrane and type of alcohol. However, all the membranes showed a decrease in alcohol permeability with increasing alcohol molar mass, with the trend P_1-PrOH < P_EtOH < P_MeOH, independently of the membrane-doping process.
In general, heterogeneous membranes presented a positive correlation between alcohol permeability and doping capacity, but with a lower effect for larger-size alcohols. This may be due to the doping process causing a decrease in the membrane water content, favouring the alcohol diffusion through the membrane. A definite trend was not observed for homogeneous membranes.
The results obtained show that the doping process affects the diffusion properties of the membranes. In general, it increases the membrane alcohol permeability and would therefore favour the alcohol diffusion process. Only homogeneous membranes showed a decrease in alcohol permeability after the doping process: the AMX membrane doped with NaOH 2M, which reduced its methanol permeability, and the nonreinforced FAP membrane, which showed a decrease in 1-propanol diffusion after the doping process.
Only in two of the studied cases would the doping process be useful to reduce the diffusion contribution to the alcohol crossover in DAFCs. This is an important issue to take into account when doping processes are used to modify membranes, as membrane alcohol crossover is one of the most important aspects limiting fuel cell performance.

Funding: Financial support of this work by Banco de Santander and Universidad Complutense de Madrid within the framework of Project PR108/20-02 is gratefully acknowledged.
Institutional Review Board: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
Exploiting Nonnegative Matrix Factorization with Mixed Group Sparsity Constraint to Separate Speech Signal from Single-channel Mixture with Unknown Ambient Noise
This paper focuses on solving a challenging speech enhancement problem: improving the desired speech in a single-channel audio signal containing a high level of unspecified noise (possibly environmental noise, music, other sounds, etc.). Using a source separation technique, we investigate a solution combining nonnegative matrix factorization (NMF) with a mixed group sparsity constraint that allows exploiting a generic noise spectral model to guide the separation process. The experiment performed on a set of benchmarked audio signals with different types of real-world noise shows that the proposed algorithm yields better quantitative results in terms of the signal-to-distortion ratio than the previously published algorithms.
Introduction
Speech enhancement is the process of removing unexpected audio signals (noise) from their mixture with a desired speech signal. This subject has been widely studied for decades, as it has a huge impact in many different domains, such as communication, speech-based control systems, medical surveillance, audio post-processing in movies and entertainment, etc. [1]. Recent scientific research [2][3][4] has shown that the performance of speech recognition systems degrades dramatically in practical noisy and reverberant environments. This situation demonstrates the need for improving speech quality in such noisy recordings. Popular approaches for speech enhancement include beamforming [5,6], spectral subtraction [7], and source separation [8][9][10].
Considering speech and noise as two independent sources to be separated, audio source separation techniques can be used to isolate the desired speech from high-level noise. Some recent work has developed methods for single-channel speech enhancement based on, e.g., NMF [11,12], Gaussian mixture models (GMM) [13], or deep neural networks [14,15]. The two former methods first learn the characteristics of speech and noise signals; the learned models are then used to guide the signal separation process. Deep-learning-based approaches can learn the separation mask or the separation model by end-to-end training and have achieved significant impact. However, deep-learning-based systems require a lot of training data and processing power. For cases where only a few training examples are available, the work of Sun and Mysore [16] proposed the use of NMF [17] to establish a general spectral model for speech signals from some other voices. Studies of El Badawy et al. [18][19][20] employed similar NMF-based spectral models, learned from source examples obtained by a search engine, to guide the separation algorithm.
In this paper, we focus on a slightly different setting compared to the existing works [16][17][18], where the speaker is assumed to be known but the noise signal is non-deterministic. This speaker-dependent situation is very common in practice.

For instance, when speech is used to control robots or devices, the operator/speaker is often known, so that his/her voice can be collected in advance for training the system. Concerning noise, it is highly non-stationary, and if the operating environment changes (different moments or different locations), it will vary accordingly. Therefore, noise cannot be well identified in the training process. From this intuition, we propose a novel approach that first constructs a general spectral noise model from some noise examples in advance; such noise examples can easily be pre-collected in some environments. The general noise model is then used to guide the separation process. Within the considered NMF-based approach, we investigate the combination of the existing block sparsity proposed in [16] and the component sparsity proposed in [18] in order to improve the source separation performance. Developing further from our preliminary studies [21,22], this paper presents more detail about the algorithm and extends the experiments, using a large test database containing various types of noise signals to confirm the effectiveness of the proposed approach. Furthermore, we report an investigation of the algorithm's convergence and stability.
The paper is organized into five sections. We first summarize the baseline audio source separation algorithm using the NMF model in Section 2. We then present the proposed approach in Section 3. Section 4 discusses the experiment settings, algorithm analysis, and speech enhancement results. Finally, we conclude in Section 5.
Baseline Supervised NMF-based Speech Separation Method
To extract the desired speech signal from the single-channel noisy signal (referred to as the mixture), we consider the mixture as a signal created by mixing two audio sources: the desired speech and the noise. Noise can be environmental noise or any other unwanted sound. In general, the source separation processing is done in the time-frequency domain after the short-time Fourier transform (STFT), so that the 1D waveform is represented by a 2D spectrogram. This 2D spectrogram is then modeled by the NMF, which is a widely used model in audio signal processing in general and in audio separation in particular [23][24][25].
Let X ∈ C^{F×M}, Y ∈ C^{F×M}, and Z ∈ C^{F×M} be the complex-valued matrices of the short-time Fourier transform (STFT) coefficients of the observed mixture signal, the speech signal, and the noise signal, respectively, where F is the number of frequency bins and M is the number of time frames. The mixing model then writes

X = Y + Z.

Denote by V = |X|^{.2} the power spectral matrix of the mixture signal, where C^{.n} is the matrix whose elements are the entries of C raised element-wise to the power n. The NMF approximates this spectrogram as

V ≈ B * A,

where * is the normal matrix multiplication, B ∈ R_+^{F×K} is the spectral basis matrix whose column vectors are the spectral characteristics appearing in V, A ∈ R_+^{K×M} is the activation matrix whose row vectors encode the times of appearance of the spectral components in B, and K is the number of spectral components to be synthesized. Depending on the application and the properties of the input data, K is usually chosen such that B is able to represent most spectral characteristics of the input signal [26]. To estimate the latent matrices, B and A are initialized with random non-negative values and are updated in an iterative process such that the cost function (3), representing the divergence between V and B * A, is minimized:

D(V || B * A) = Σ_{f,m} d_IS(V_{fm}, (B * A)_{fm}),    (3)

where f and m denote the frequency bin index and the time frame index, respectively, and

d_IS(x, y) = x/y - log(x/y) - 1

is the Itakura-Saito divergence. This divergence is commonly used in audio source separation as it offers the scale-invariance property. In each iteration step, B and A are updated via the well-known multiplicative update (MU) rules [26] as

B ← B ⊙ [(V ⊙ (B * A)^{.-2}) * A^T] / [(B * A)^{.-1} * A^T],    (5)

A ← A ⊙ [B^T * (V ⊙ (B * A)^{.-2})] / [B^T * (B * A)^{.-1}],    (6)

in which C^T is the transposition of matrix C, ⊙ denotes the element-wise Hadamard product, and the powers and the division are also element-wise. Suppose that B_Y and B_Z are the spectral basis matrices of speech and noise, respectively. In the training process of the supervised approach, they are learned from the corresponding training examples by optimizing a criterion similar to (3); the spectral model for the two sources, B, is then obtained by concatenation:

B = [B_Y, B_Z].

In the speech enhancement process, this spectral model B is fixed, and the time activation matrix A is estimated via the MU rule by iterating (6). Note that A also consists of two blocks, A_Y and A_Z, characterizing the time activations of speech and noise, respectively:

A = [A_Y^T, A_Z^T]^T.

After the parameters B and A are obtained, the speech STFT coefficients are determined by Wiener filtering as

Ŷ = [(B_Y * A_Y) / (B * A)] ⊙ X,

where the division is element-wise. Finally, the estimated speech signal in the time domain is obtained via the inverse STFT.
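The updates (5) and (6) and the Wiener mask map directly onto a few lines of array code. The following is a minimal NumPy sketch, not the authors' implementation; the function names, the small constant eps, and the defaults are our own.

```python
import numpy as np

def is_nmf(V, K=None, n_iter=100, eps=1e-12, B=None, update_B=True, seed=0):
    """Itakura-Saito NMF, V (F x M, nonnegative) ~ B @ A, via MU rules (5)-(6).

    Pass a fixed B with update_B=False for the separation phase, where only
    the activations A are estimated.
    """
    rng = np.random.default_rng(seed)
    F, M = V.shape
    if B is None:
        B = rng.random((F, K)) + eps
    A = rng.random((B.shape[1], M)) + eps
    for _ in range(n_iter):
        Vh = B @ A + eps
        A *= (B.T @ (V * Vh**-2)) / (B.T @ Vh**-1)        # rule (6)
        if update_B:
            Vh = B @ A + eps
            B *= ((V * Vh**-2) @ A.T) / (Vh**-1 @ A.T)    # rule (5)
    return B, A

def wiener_speech(X, B_Y, B_Z, A_Y, A_Z, eps=1e-12):
    """Recover the speech STFT from the mixture STFT X by Wiener filtering."""
    num = B_Y @ A_Y
    den = num + B_Z @ A_Z + eps
    return (num / den) * X   # element-wise mask applied to the complex STFT
```

In the training phase is_nmf is run with update_B=True on each example's spectrogram; in the enhancement phase it is run with the concatenated B fixed, after which wiener_speech produces the speech estimate to be inverted by the inverse STFT.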
Proposed Method
In the unspecified noise scenario, a clean speech example from the desired speaker is assumed to be available a priori for training, but an exact noise example is not. However, some general noise examples can easily be collected from different noisy environments for training as well. For example, in order to separate speech from environmental noise, we collect some environmental sounds such as wind sound, street noise, cafeteria noise, etc. The global workflow of the proposed approach for speech separation is shown in Figure 1. In the following, we first present the training of both the speech spectral model B_Y and the generic noise spectral model B_Z in Section 3.1, and then describe the model fitting with the proposed mixed group sparsity constraint for the source separation process in Section 3.2.

Training Spectral Models for Speech and Noise

(i) Speech spectral model. Let V_Y = |Y|^{.2} be the spectrogram of a clean speech example obtained by the STFT. The speech spectral model B_Y is learned given V_Y by optimizing the divergence between V_Y and B_Y * A_Y as

min_{B_Y ≥ 0, A_Y ≥ 0} D(V_Y || B_Y * A_Y),

where A_Y is the time activation matrix.

(ii) Generic noise spectral model. Each available noise example p is handled in the same way, yielding a spectral model B_Z^(p) with its time activation matrix A_Z^(p). After all the spectral models B_Z^(p), p = 1, ..., P, are learned from the noise examples, the generic noise spectral model, denoted by B_Z, is constructed by concatenation as

B_Z = [B_Z^(1), ..., B_Z^(P)].

(iii) Spectral model for all sources. The spectral model for all speech and noise is computed by

B = [B_Y, B_Z].

In the speech enhancement phase, this spectral model B is fixed, and the time activation matrix A is estimated via the MU rule. Matrix A includes the speech activation matrix A_Y and the noise activation matrix A_Z as A = [A_Y^T, A_Z^T]^T.
Proposed Mixed Group Sparsity-Inducing Penalty for Noise Model Fitting
The generic spectral model for noise B_Z becomes a larger matrix as the number of noise examples P increases. Moreover, it is actually redundant when different examples share similar spectral patterns [27][28][29]. Thus, in the NMF model fitting for signal separation, a sparsity constraint is naturally needed so as to fit only a subset of the large matrix B_Z to the actual noise present in the mixture [28]. In other words, the mixture spectrogram V is decomposed by solving the following optimization problem:

min_{A ≥ 0} D(V || B * A) + λ Ω(A_Z),    (15)

where Ω(A_Z) denotes a penalty function imposing sparsity on the activation matrix A_Z, and λ is a trade-off parameter determining the contribution of the penalty.
Recent work in audio source separation has considered two penalty functions. The first one is the block sparsity-inducing penalty [16], formulated as

Ω(A_Z) = Σ_{p=1}^{P} log(ε + ||A_Z^(p)||_1),    (16)

where ε is a non-zero constant, A_Z^(p) is the subset of A_Z representing the activation coefficients for the p-th block, ||·||_1 is the ℓ1-norm operator, and P denotes the total number of blocks. In this case, a block represents one training example and P is the total number of noise examples. This penalty enforces the activation of relevant examples only, while omitting the poorly fitting examples, since their corresponding activation blocks will likely converge to zero. The second one is named the component sparsity-inducing penalty [18], formulated as

Ω(A_Z) = Σ_{k=1}^{K_Z} log(ε + ||a_Z^(k)||_1),    (17)

where a_Z^(k) denotes the k-th row of A_Z and K_Z is the number of such rows. This penalty is motivated by the fact that only a part of the spectral model learned from an example may fit well with the targeted source in the mixture, while the remaining components in the model do not. Thus, instead of activating a whole block, this penalty allows selecting only the more likely relevant spectral components from B_Z.
However, the component sparsity-inducing penalty also removes unsuitable parts quite slowly, because it carefully considers each row of the large matrix. Inspired by the advantages of these two state-of-the-art penalty functions, in our recent works [21,22] we proposed to combine them in the more general form

Ω(A_Z) = α Σ_{p=1}^{P} log(ε + ||A_Z^(p)||_1) + (1 - α) Σ_{k=1}^{K_Z} log(ε + ||a_Z^(k)||_1),    (18)

where the first term on the right-hand side of the equation presents the block sparsity-inducing penalty, the second term presents the component sparsity-inducing penalty, and α ∈ [0, 1] weights the contribution of each term. The proposed penalty function (18) can be seen as a generalization of (16) and (17) in the sense that when α = 1, (18) is equivalent to (16), and when α = 0, (18) is equivalent to (17). In order to derive the parameter estimation algorithm optimizing (15) with the proposed penalty function (18), one can rely on the MU rules and the majorization-minimization algorithm. The proposed algorithm is summarized in Algorithm 1, where E^(p) is a uniform matrix of the same size as A_Z^(p), and g^(k) a uniform row vector of the same size as a_Z^(k).
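As a schematic illustration of how (18) enters the updates, the sketch below adds the majorization-minimization weights of the block and component terms to the denominator of the MU rule for the activations. This is our own hedged reading of the approach, not a transcription of Algorithm 1; the block bookkeeping and the constants ε, λ, α are illustrative.

```python
import numpy as np

def update_activations_mixed(V, B, A, ky, blocks, lam=1.0, alpha=0.2, eps=1e-9):
    """One MU step for A under (15) with the mixed penalty (18) on A_Z.

    ky     : number of speech components (rows A[:ky] = A_Y, the rest = A_Z)
    blocks : list of row-index arrays into A_Z, one per noise training example
    """
    Vh = B @ A + eps
    num = B.T @ (V * Vh**-2)          # standard IS-NMF numerator
    den = B.T @ Vh**-1                # standard IS-NMF denominator
    pen = np.zeros_like(A)            # penalty weights, zero on speech rows
    A_Z = A[ky:]
    for p in blocks:                  # block sparsity term of (18)
        pen[ky:][p] += lam * alpha / (eps + np.abs(A_Z[p]).sum())
    for k in range(A_Z.shape[0]):     # component sparsity term of (18)
        pen[ky + k] += lam * (1 - alpha) / (eps + np.abs(A_Z[k]).sum())
    return A * num / (den + pen)      # penalized multiplicative update
```

Each log(ε + ||·||_1) term is majorized by its tangent, which is what turns the penalty into the simple additive weights in the denominator; blocks or rows whose total activation shrinks receive ever larger weights and are driven toward zero, matching the selection behaviour described above.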
Experiment
We start by describing the dataset and parameter settings in Section 4.1. We then describe the evaluation metrics in Section 4.2. The performance of the proposed speech enhancement algorithm and its sensitivity with respect to the choice of the hyper-parameters are presented in Section 4.3.
Dataset and parameter settings
To validate the performance of the proposed approach, we select noise examples from the DEMAND dataset for training the generic noise spectral model, and perform the test on the benchmarked dataset from the SiSEC campaign. These datasets were carefully designed by researchers in the audio source separation community and are widely used.
The training speech example is five seconds long and is spoken by the same person as the speech in the tested mixtures. We use five types of environmental noise (kitchen sound, waterfall, metro, field sound, cafeteria) to train the generic noise spectral model (see Section 3.1). They are extracted from DEMAND, with durations varying from 5 to 15 seconds. The performance of the proposed algorithm was evaluated over a test set containing 15 single-channel mixtures of two sources artificially mixed at 0 dB signal-to-noise ratio (SNR). Note that these 15 mixtures with various types of noise should be sufficient to assess the performance of the proposed algorithm. During the mixing process, we made sure that in all mixtures both sources appear all the time. The mixtures were sampled at 16000 Hz and their durations vary between 5 and 10 seconds. The speech samples include female and male speech in English; they were obtained from the SiSEC dataset. The noise samples were obtained from DEMAND, using one channel out of the 16 channels. Some of them mixed two noises, e.g., traffic + wind sound, ocean waves + birdsong, restaurant + guitar, forest birds + car, square + music, etc.
The parameters were set as follows. The STFT was calculated using a sliding window with a frame length of 1024 and 50% overlap. The numbers of NMF components were set to 32 and 16 for speech and noise, respectively. The number of iterations for the MU updates was 100 in the training step; in the testing step, values from 1 to 100 were tested in order to investigate the convergence of the algorithm.
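For concreteness, these settings could be realized with the following preprocessing sketch; librosa is our choice here, not necessarily the authors' toolchain, and the file name is a placeholder.

```python
import librosa
import numpy as np

# Mixture sampled at 16 kHz; frame length 1024 with 50% overlap (hop 512).
# "mixture.wav" is a placeholder file name, not part of the SiSEC test set.
x, sr = librosa.load("mixture.wav", sr=16000, mono=True)
X = librosa.stft(x, n_fft=1024, hop_length=512)  # F x M complex STFT
V = np.abs(X) ** 2                               # power spectrogram for NMF

K_speech, K_noise = 32, 16                       # NMF component counts above
```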
Evaluation method
We compare the separation performance obtained by the proposed algorithm with several state-of-the-art algorithms as follows: • Baseline NMF - without training: the NMF-based algorithm described in Section 2. This test did not use training data; instead, the spectral models for both speech and noise were initialized with random non-negative values and were iteratively updated via (5) and (6).

• Baseline NMF - speech training: the NMF-based algorithm described in Section 2. In this experiment, the spectral model for the speech signal was learned from a five-second speech example spoken by the same person as the speech in the tested mixtures. The spectral model for noise was initialized with random non-negative values and was iteratively updated via (5) and (6).

• NMF non-sparsity: the NMF-based algorithm described in Section 2. The spectral model for speech was likewise learned from a five-second example spoken by the same person as the speech in the tested mixtures. The noise spectral model was learned from one noisy file made by combining the five noise samples in the noise training set described in Section 4.1.
The separated speech results were evaluated using the source-to-distortion ratio (SDR), measuring overall distortion, as well as the source-to-interference ratio (SIR) and the source-to-artifacts ratio (SAR). They are measured in dB and averaged over all sources, the higher the better. These criteria, known as the BSS-EVAL metrics, are the most widely used in the source separation community [30].
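These BSS-EVAL criteria are implemented in standard toolboxes. A hedged usage sketch with the mir_eval package follows; the signals in it are synthetic placeholders, not the actual test data.

```python
import numpy as np
import mir_eval.separation

# Placeholder signals; in practice these are the true and separated waveforms.
rng = np.random.default_rng(0)
speech_true, noise_true = rng.standard_normal((2, 16000))
speech_est = speech_true + 0.1 * rng.standard_normal(16000)
noise_est = noise_true + 0.1 * rng.standard_normal(16000)

reference = np.stack([speech_true, noise_true])   # (n_sources, n_samples)
estimated = np.stack([speech_est, noise_est])

sdr, sir, sar, _ = mir_eval.separation.bss_eval_sources(reference, estimated)
print(f"SDR={sdr.mean():.1f} dB  SIR={sir.mean():.1f} dB  SAR={sar.mean():.1f} dB")
```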
Results and Discussion
The results, averaged over all 15 test mixtures, are reported for the six different algorithms in Table 1.
It is interesting to see in Table 1 that the results obtained by the "NMF non-sparsity" method were even lower than those of the "Baseline NMF - speech training" method. This reveals that the generic noise spectral model itself is redundant and contains some spectral patterns irrelevant to the actual noise in the mixture. The importance of the sparsity penalty is thus explicitly confirmed by the fact that the results obtained by the three algorithms based on NMF with group sparsity-inducing penalties were far better than those of the remaining three algorithms. It is also not surprising that the baseline NMF method yielded quite good results when using training data for the speech signal (i.e., the "Baseline NMF - speech training" method gained 5.8 dB SDR), but without training data the result is very low (i.e., the "Baseline NMF - without training" method gained -0.5 dB SDR). Finally, the "Proposed NMF - Mixed sparsity" algorithm offers the best speech enhancement performance in terms of SDR, SIR, and SAR compared to the five existing ones. More specifically, compared to the two algorithms based on NMF with group sparsity-inducing penalties, the proposed NMF - Mixed sparsity method gained 0.4 dB and 0.3 dB higher SDR than the "NMF - Block sparsity" method and the "NMF - Component sparsity" method, respectively. The proposed method's results were also far better than those of the first three methods. This proves the effectiveness of the combination of the two state-of-the-art group sparsity-inducing penalties we have proposed.
Investigating the convergence of the proposed method, Figure 2 shows that all measures (SDR, SIR, and SAR) increase with the number of MU iterations. This confirms that the derived algorithm converges correctly and saturates after about 20 MU iterations.
The average speech separation performance over all mixtures in the test set, as a function of λ and α, is shown in Figure 3. As can be seen, the proposed algorithm is less sensitive to the choice of α and more sensitive to the choice of λ. It is quite stable for small values of λ, and the result is best with 1 ≤ λ ≤ 25 and 0 ≤ α ≤ 0.4. Overall, the proposed algorithm is not very sensitive to the choice of these hyper-parameters, and thus in a practical implementation one can set them quite easily.
Conclusions
In this paper, we have presented a speaker-dependent single-channel speech separation method based on the matrix factorization framework. Our method employs several different noise signal files to build the general spectral model for noise. For the estimation of the speech and noise signals from their mixture, we proposed the combination of NMF with two types of sparsity constraints. Experimental results showed the effectiveness of the proposed algorithm. Our further investigation showed the algorithm's convergence and its robustness to the choice of the hyper-parameters λ and α. These properties are very useful for setting the parameters in practical deployments of the algorithm. Future work could be devoted to extending the work to the multi-channel case, where a spatial model for audio sources, such as the one considered in [31], is incorporated. Additionally, validating the effectiveness of the proposed denoising approach for automatic speech recognition (ASR) would be of particular interest.
Figure 1. General workflow of the proposed speech enhancement approach.
Table 1. Average performance of speech enhancement obtained on the test set (SDR, SIR, and SAR in dB; the component sparsity method [18] used λ = 1, α = 0, and the proposed NMF - Mixed sparsity method used λ = 1, α = 0.2).
Figure 2. Speech enhancement performance of the proposed method as a function of MU iterations.
Figure 3. Average speech enhancement performance of the proposed method as a function of λ and α.
A personal journey from the joint to the heart
Predicting complications of diseases such as rheumatoid arthritis (RA) as well as the efficacy and toxicity of drugs used to treat the disease based on an understanding of genetic differences is leading to the development of highly individualized, personal medicine. The prevention of cardiovascular complications of RA has assumed greater importance as our ability to treat the underlying joint disease has improved and it may be possible to predict which patients with RA are at greatest risk of developing cardiovascular disease.
With the availability of increased information about the role of common genetic polymorphisms in disease susceptibility, response to therapy and toxicity of therapies, the prospect of increasingly personalized medicine is becoming a reality. Palomino-Morales and colleagues [1] report the novel observation that a common polymorphism in the gene for methylene tetrahydrofolate reductase (MTHFR) that markedly reduces enzyme activity (by as much as 65% in homozygotes) may predispose to the development of atherosclerotic cardiovascular disease (ASCVD) in patients with rheumatoid arthritis (RA).
Although RA primarily affects the joints, it is a systemic disease that clearly contributes to a marked increase in the risk for development of ASCVD (for example, [2]). The increased risk of developing ASCVD has been attributed to generalized inflammation that enhances development of atherosclerosis, the use of steroids, changes in lipid profiles and other unknown mechanisms.
Elevated homocysteine levels have long been associated with ASCVD (for example, [3]) and homocysteine is a direct toxin for the vascular endothelium. Low dose methotrexate therapy, by inhibiting MTHFR, diminishes recycling of homocysteine to methionine, leading to increases in plasma homocysteine, although folic acid supplementation abrogates the increase in homocysteine levels in methotrexate-treated patients with RA [4]. One potential contributor to homocysteine elevation is genetic alteration of MTHFR; two common polymorphisms in this enzyme have previously been reported to alter MTHFR activity (C677T and A1298C).
The A1298C polymorphism is quite common and is present in as many normal individuals (40% were AC heterozygotes and 10% were homozygous CC) as RA patients (41% were heterozygotes and 10% were homozygous for the C allele). The striking finding by Palomino-Morales and colleagues was that RA patients with cardiovascular events were more likely to have the C allele of A1298C than those without (62% versus 50%, respectively), and the accumulated risk increase over time was strongly associated with the C allele of the A1298C polymorphism. It is interesting to note that the polymorphism associated with more marked declines in MTHFR activity, the C677T polymorphism, was not associated with a greater risk for cardiovascular disease. The numbers were probably too small to determine whether there was an interaction between these two common polymorphisms that further contributed to risk.
In some of the patients with RA the authors were able to directly probe the health of the endothelium by measuring the flow-dependent forearm vasodilatation, and found that the RA patients with the minority A1298C allele in MTHFR had diminished vasodilatory responses, consistent with a less healthy vascular endothelium.
Patients with the C677T polymorphism, but not the A1298C polymorphism, are at greater risk for developing complications of methotrexate therapy, and methotrexate therapy may ameliorate the risk of ASCVD in patients with RA [5,6]. Thus, it would be interesting to determine whether methotrexate therapy affected the frequency of cardiovascular events in this population and whether there was any interaction between methotrexate and genetic risk for cardiovascular disease in this population. Moreover, it would be important to know how many of these patients were taking folic acid supplements, since folic acid supplementation has previously been shown to lower homocysteine levels in patients taking methotrexate [4,[7][8][9]], presumably by providing higher levels of substrate for the enzyme, and it is possible that folic acid supplementation in this group might have had a greater effect in the patients with the polymorphism on reducing risk of cardiovascular events.
More often than not, candidate genetic association studies, such as that described here, are not reproducible [10], and it is possible that this study may share the fate common to so many of these types of candidate gene studies. Nonetheless, Palomino-Morales and colleagues have made an interesting observation that may suggest a contributing factor to the development of cardiovascular disease in patients with RA. Moreover, this study provides an even greater rationale for the addition of folic acid to the therapy for RA: prevention of cardiovascular disease.
Competing interests
BNC holds or has filed applications for patents on the use of adenosine A2A receptor agonists to promote wound healing and the use of A2A receptor antagonists to inhibit fibrosis; the use of adenosine A1 receptor antagonists to treat osteoporosis and other diseases of bone; the use of adenosine A1 and A2B receptor antagonists to treat fatty liver; and the use of adenosine A2A receptor agonists to prevent prosthesis loosening. Consultant (within the past 2 years): King Pharmaceutical (licensee of patents on wound healing and fibrosis above), CanFite Biopharmaceuticals, Savient Pharmaceuticals, Bristol-Myers Squibb, Roche Pharmaceuticals, Cellzome, Tap (Takeda) Pharmaceuticals, Prometheus Laboratories, Regeneron (Westat, DSMB), Sepracor, Amgen, Endocyte, Protalex, Allos, Inc., Combinatorx, Kyowa Hakka. Honoraria/Speakers' Bureaus: Tap (Takeda) Pharmaceuticals. Stock: CanFite Biopharmaceuticals, received for membership in the Scientific Advisory Board.
Adherence to Health-Related Lifestyle Behavior Recommendations and Association with Quality of Life among Cancer Survivors and Age-Matched Controls in Korea
Recently, the survival rate and survival time in cancer have been improving. Between 1999 and 2009, cancer incidence in Korea increased at an annual average of 3.4% as per statistics of the Korea Central Cancer Registry (KCCR). The survival rate has been consistently increasing, and it was reported that the five-year survival rate from 2005 to 2009 was around 64.1%, and that one out of every 100 people is a cancer survivor (Korea Central Cancer Registry, 2012). However, after treatment, cancer survivors usually face physical and psychological problems (Kye and Park, 2012) and are at a greater risk of cardiovascular disease, diabetes, osteoporosis, and decline of physical function than the general population (Demark-Wahnefried et al., 2009; National Cancer Institute, 2012). Therefore, cancer survivors are encouraged to practice health habits such as eliminating tobacco exposure, eating a nutritious diet, reducing alcohol intake, and increasing physical activity.
Introduction
Recently, the survival rate and survival time in cancer have been improving. Between 1999 and 2009, cancer incidence in Korea increased at an annual average of 3.4% as per statistics of the Korea Central Cancer Registry (KCCR). The survival rate has been consistently increasing, and it was reported that the five-year survival rate from 2005 to 2009 was around 64.1%, and that one out of every 100 people is a cancer survivor (Korea Central Cancer Registry, 2012).
However, after treatment, cancer survivors usually face physical and psychological problems (Kye and Park, 2012) and are at a greater risk of cardiovascular disease, diabetes, osteoporosis, and decline of physical function than the general population (Demark-Wahnefried et al., 2009; National Cancer Institute, 2012). Therefore, cancer survivors are encouraged to practice health habits such as eliminating tobacco exposure, eating a nutritious diet, reducing alcohol intake, and increasing physical activity to improve physical function, mental ability, and disease-free survival.

The American Association for Cancer Research (AACR) has estimated the percentage of cancer cases caused by the following identifiable and/or potentially preventable factors: tobacco (33%), excess weight and obesity (20%), diet (5%), lack of exercise (5%), and alcohol (3%) (American Association for Cancer Research, 2012). Danaei and colleagues reported in a review that smoking and drinking are associated with many kinds of cancer-related risk factors. Such mutual interaction may increase the cancer mortality rate (Danaei et al., 2005).
On the other hand, a team of international experts from the American Institute for Cancer Research (AICR) reported that physical activity has advantages in the prevention of cancer and in improving patients' psychological and physiological health during and after treatment. Advanced nations are leading large-scale studies and initiatives on different cancer types. Meanwhile, lifestyle intervention has become an important issue in cancer management and prevention (Secretan et al., 2009). Even though the government leads cancer survivorship programs such as the 10-year Cancer Plan in Korea, there are still some limitations. Most research has investigated the importance of lifestyle modification in cancer survival, but no such study has been conducted in a local Korean setting; thus, lifestyle guidelines for Korean cancer survivors remain undeveloped (Kim, 2010). We deem that there is a need for a cancer survivorship program and guidelines specifically tailored to the situation in Korea.
In this study, we assessed the health-related lifestyle of cancer survivors in Korea after exploring the relationship between quality of life (QoL) and lifestyle. In addition, we discussed the key issues of health care in cancer survivors. The findings from this study could be utilized for the development of a cancer survivorship program and guidelines for the promotion of a healthy lifestyle among Korean cancer survivors.
Data sources
The 4th Korea National Health and Nutrition Examination Survey (KNHANES IV) used in this article was performed by the Korea Centers for Disease Control and Prevention. It collected pertinent data regarding national health condition, health-related knowledge and behaviors, disease prevalence rates, and nutrition levels.

The KNHANES was first conducted in 1998, and the fifth survey is currently underway. Raw data from these surveys, except for the survey date and location, can be used for the development of health policies and for research. This study was based on the KNHANES IV.
Study population
The subjects, who were at least 40 years old at the time of the KNHANES, were divided into the cancer survivor (CS) and control (CG) groups. Of the 9,744 participants, the 471 (173 men, 298 women) who had previously or newly been diagnosed with cancer were allocated to the CS group. From the 8,290 participants who had never been diagnosed with cancer and had no physical activity limitations, 471 (173 men, 298 women) were selected for the CG group; the CG members were matched for age and sex with the CS.
The survey data were publicly available and the study design was approved by the Institutional Review Board of the Catholic University (MC12EASI0053).
General characteristics of subjects
The sample comprised 942 persons. Sex, age, education, economic status, occupation, and marital status are expressed in frequency (N) and percentage (%). The economic status of the participants was classified into three categories: high (top 25%), middle (top 26-74%), or low (bottom 25%). Occupation was classified into white collar (administrator, professional, and office worker); blue collar (manufacturing, mining, construction, mechanical work, maintenance, technical installation, and various other types of physical work); or non-employed, student, or housewife. Participants from the CS group were classified according to their type of cancer and length of survival after diagnosis.
Health-related behavior factors
Assessment of health behaviors, including drinking, smoking, and exercise, was performed. Drinking behavior was assessed using the Alcohol Use Disorders Identification Test (AUDIT; a score of ≥12 points indicates a heavy drinker, <12 points a moderate one). Current smokers and nonsmokers were identified in each group, as well as exercisers and non-exercisers; the exercisers were further divided into vigorous, moderate, and low-intensity exercisers. Vigorous-intensity exercise was defined as sessions of >20 minutes more than 3 times per week of jogging, climbing, bicycling (high speed), swimming (high speed), soccer, basketball, jump rope, squash, tennis (singles), or job activity (such as moving heavy loads). Moderate exercise was defined as sessions of >30 minutes more than 3 times per week of swimming (low speed), tennis (doubles), volleyball, badminton, table tennis, or job activity (such as light load moving). Low-intensity exercise was defined as walking or commuting for >30 minutes more than 3 times per week.
Quality of life (QoL)
Health-related QoL was assessed using the Euro QoL Questionnaire 5-Dimensional Classification (EQ-5D). It consists of 5 dimensions: mobility, self-care, usual activity, pain/discomfort, and anxiety/depression. Each dimension is classified into three categories: extreme problems, some problems, or no problems. Furthermore, overall health status is scored using the EQ-5D index: 0 as the worst health status and 1 as the best health status.
Analysis
SPSS 19.0 for Windows was used to analyze the demographic characteristics, percentages, means, and standard deviations.A chi-square test was used to determine associations in drinking, smoking, and exercise behaviors between groups.An independent t-test was used to determine differences in health-related QoL between groups.
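The analyses were run in SPSS 19.0; for readers who script their statistics, a hedged Python sketch of the same two tests is given below. All counts and scores in it are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Chi-square test of association between group (CS/CG) and smoking status.
# Counts below are illustrative approximations, not the published table.
table = np.array([[43, 428],    # CS: smokers, nonsmokers (assumed counts)
                  [66, 405]])   # CG: smokers, nonsmokers (assumed counts)
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Independent t-test on EQ-5D index scores, exercisers vs. non-exercisers.
rng = np.random.default_rng(1)
eq5d_exercisers = rng.normal(0.92, 0.08, 300).clip(0, 1)      # simulated
eq5d_nonexercisers = rng.normal(0.88, 0.10, 642).clip(0, 1)   # simulated
t, p = stats.ttest_ind(eq5d_exercisers, eq5d_nonexercisers)
print(f"t = {t:.2f}, p = {p:.3f}")
```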
Results

The cancer types of the subjects in CS are shown in Table 2. The majority of them had survived cancer for over 5 years, followed by those who had survived cancer for 1-2 years, 3-4 years, and <1 year. Gastric cancer accounted for the highest percentage of cancer cases (20.8%), followed by cervical, breast, colorectal, lung, liver, and others.
Health-related behaviors
Table 3 shows the results of the between-group comparison of health-related behaviors. The number of heavy drinkers was lower in CS than in CG (9.4% vs. 15.8%, p<0.01). There were also significantly fewer smokers in CS than in CG (9.1% vs. 14.0%, p<0.05). In contrast, the number of people engaging in exercise behavior did not differ significantly between the two groups.
Quality of life
An independent t-test was applied to determine whether QoL differs across persons engaging in each of the health-related behaviors studied (Table 4). Results showed that drinkers' and smokers' QoL scores did not differ significantly from those of non-drinkers and non-smokers, whereas exercisers showed significantly higher QoL scores than non-exercisers.
Discussion
This study investigated the relationship between QoL and lifestyle to establish a basis for the development of a cancer survivorship program and lifestyle guidelines for Korean cancer survivors, using the KNHANES. According to the National Cancer Center, the number of cancer survivors worldwide has increased rapidly, and 6 of 10 cancer patients will survive the disease for more than 5 years (Korea Central Cancer Registry, 2012).
Managing cancer survivors is a significant area of interest for national health services and a great calling of our time. The governments of advanced nations have already started interventions related to lifestyle factors (diet, exercise, smoking, and weight loss). In particular, the United States seems to have made more efforts to counteract the health problems of cancer survivors than other countries. The National Cancer Center was opened in 1937 (National Cancer Institute, 2012), and other national organizations such as the National Cancer Institute (NCI) and the National Coalition for Cancer Survivorship (NCCS) are already fully operational. These organizations strive to improve the health and welfare of cancer survivors (Kim, 2010).
South Korea also initiated the 10-year Cancer Plan on a national level to prevent and treat cancer in 1996-2005, and the second 10-year Cancer Plan is currently underway (Ministry of Health and Welfare, 2011). It attempted to coordinate various approaches, such as symposiums or workshops for cancer survivors in institutes and hospitals. However, cancer survivorship programs in Korea are disorganized and have not shown evidence of efficacy (Kim, 2010; Chung et al., 2011). Understanding cancer survivors is essential for their management, but even cohort studies on their health-related behavior have not been conducted. While the KNHANES IV was not specifically administered to cancer survivors, it is one of the few surveys that provide substantial information on cancer survivors on a national level.
This survey shows that 471 of the 9,744 participants were cancer survivors who had been diagnosed with or were undergoing treatment for cancer. However, the National Coalition for Cancer Survivorship (NCCS) defines a cancer survivor as a person who has survived for more than 5 years after cancer treatment. In a broader sense, a cancer survivor is also defined as a person who has experienced cancer (National Cancer Institute, 2011). The government of Korea has not precisely defined cancer survivors; hence, we used the broader definition in this study.
Cancer survivors consistently suffer from fatigue, depression, physical problems, and pain during or after treatment. For this reason, the QoL of cancer survivors was lower than that of non-cancer survivors in various studies (Kim, 2010; Bloom et al., 2012; Turkoglu and Kilic, 2012). Bloom et al. reported that we recognize health conditions differently according to individual expectations or hopes (Bloom et al., 2012). On this basis, even if cancer survivors make excellent progress after an operation, their QoL remains relatively poor, owing to the fear of recurrence, uncertainty, disappointment in body changes, depression, and fatigue. The key issue of healthcare in cancer survivors is understanding the problems that they are facing.
Health-related behaviors to improve QoL in cancer survivors play an important role in the prevention of cancer recurrence and the improvement of overall health. In the current study, the reported unhealthy habits (smoking, obesity or overweight, unhealthy diet, lack of exercise, and excessive alcohol consumption) were risk factors for cancer that should be modified. Furthermore, obesity and inactivity are associated with at least 8 kinds of cancer risk, alcohol intake is associated with at least 7 kinds of cancer risk, and smoking is known to cause 18 kinds of cancer (Secretan et al., 2009; American Association for Cancer Research, 2012).
Most people become aware of good lifestyle habits for cancer survival through health campaigns, the mass media, and the results of previous studies. Jo and Jung (2011) found that more than 80% of the respondents in their sample knew of 5 of the 10 codes of lifestyle-related knowledge for cancer prevention: (i) smoking and indirect smoking can lead to cancer (81.1%); (ii) eating enough vegetables and fruits is helpful in preventing cancer (87.9%); (iii) consumption of burnt or charred foods can cause cancer (88.7%); (iv) walking for at least 30 minutes five or more days per week helps prevent cancer (88.3%); (v) maintaining a normal body weight is helpful in preventing cancer (90.3%). The results from that study give an approximation of the level of people's knowledge of the association of smoking, diet, and exercise with cancer, although generalizing these results to the Korean population should be done cautiously.
Our study shows that cancer survivors seem to not only be aware of lifestyle factors but also voluntarily change their lifestyle. They drink and smoke less than non-cancer survivors. However, we did not find a difference between cancer survivors and non-cancer survivors in exercise behavior. Together, these results suggest that cancer survivors are indeed conscious about their diet and the health habits that are unsafe for their condition (e.g., smoking); however, they are not more aware of the benefits of exercise than non-cancer survivors.
Further, the EQ-5D score, which indicates QoL, had no relationship with drinking and smoking behavior; it was positively associated only with exercise behavior. Thus, cancer survivors did not modify their exercise behavior despite its strong relationship with QoL, even though they changed other health-related behaviors. This result is consistent with previous reports (Irwin et al., 2004; Holmes et al., 2005; Holick et al., 2008; Wang and Chung, 2012) in which a lack of exercise was also observed among cancer survivors. This raises the question of why they do not change their exercise behavior despite knowing that exercise is good for health.
Previous research (Irwin et al., 2004; Stevinson et al., 2009) on exercise or physical activity levels among cancer survivors showed that most cancer survivors were not meeting the physical activity guidelines (Tucker, Welk and Beyler, 2011) of "moderate-intensity physical activity for at least 30 minutes, five or more days of the week" or "vigorous-intensity physical activity for at least 20 minutes, five or more days of the week." A similar study (Blaney et al., 2013) revealed that the exercise barriers of cancer survivors in Northern Ireland were mainly health- or treatment-related factors (e.g., illness, pain, weakness, and fatigue) and environmental factors (e.g., weather and lack of facilities for cancer survivors). Loh et al. (2011) reported that barrier factors for physical activity did not substantially differ among China, India, and Malaysia.
Likewise, Wang and Chung (2012) reported that some breast cancer survivors who had not drunk alcohol or smoked had improved health, although they did not attempt to change their exercise behavior. Their study did not explore the reasons for such behavior. In practice, exercising regularly is relatively difficult because of certain constraints (costs, time, effort, etc.). Cancer survivors in general have not received proper education on exercise behavior. In particular, healthcare professionals dealing with cancer survivors have not adequately utilized the "teachable moment," which is the best moment for them to change their lifestyle and subsequently improve their health (Blanchard et al., 2008; Senore et al., 2012).
For decades, there has been a paradigm shift regarding exercise for cancer survivors, much as exercise therapy for cardiovascular diseases, once thought unnecessary, is now considered essential. In the late 1990s, experts advocated the importance of "rest" for cancer survivors (Lucia et al., 2003). In the early 2000s, evidence accumulated that exercise helps cancer survivors improve their health and prevents complications (Schmitz et al., 2010; Speck et al., 2010). Even the ACSM has integrated exercise into its guidelines for cancer survivors (Schmitz et al., 2010).
Unfortunately, specific exercise recommendations and precautions for the preparatory stage, screening tests, or treatment have not been properly formulated for each cancer type. Personalized care should be provided after a thorough assessment of the type and stage of cancer, with consideration for the patient's general health condition, exercise behavior, fitness level, and cancer complications. Future studies should attempt to personalize a cancer care system based on various factors, including the physical, emotional, and mental state of the individual. Establishing such a system also entails professionalism and cautionary measures.
However, this study is limited in that it cannot infer causality. Nevertheless, it is expected that these results can be used for the development of a health promotion program specifically tailored to Korean cancer survivors. In conclusion, we encourage cancer survivors to exercise for the maintenance of a healthy lifestyle and the improvement of their QoL.
Table 3. Comparison of health-related lifestyle of the subjects (a: calculated by a chi-square test).

Table 4. Differences in EQ-5D scores according to subjects' lifestyle characteristics (a: EQ-5D index scores; 0 = lowest possible health, 1 = best possible health).
Vocal range profile in elderly women with and without voice symptom
Purpose: to characterize and compare the vocal range profile in elderly women with and without voice symptoms. Methods: a total of 23 elderly women attending an elderly care public service participated in the study. They were divided into groups with and without voice symptoms, according to the results of the Voice Symptom Scale (VoiSS). All participants were submitted to a vocal range profile analysis by means of the Vocalgrama software. Appropriate statistical tests were applied, adopting a significance level lower than 0.05. Results: the values for vocal range profile were 3.74% (±1.56) and 3.62% (±1.95) in the groups with and without symptoms, respectively. There were no differences between the groups in the various parameters of the vocal range profile. Conclusion: in the elderly women studied, the vocal range profile was reduced, regardless of the presence of voice symptoms. The importance of the elderly investing in the possibilities of vocal training is highlighted, with emphasis on vocal flexibility, aiming at increasing vocal range in this population.
INTRODUCTION
Each cycle of life has its own particularities, and the phase of senescence is no exception. In this stage, declines in daily activities are perceived, interfering with quality of life, together with diminishing cognitive, sensorial and motor abilities 1,2. Among these changes, it is common to find, in speech-language-hearing clinics, voice problems resulting from the aging process, called presbyphonia.
Some characteristics related to voice aging stand out as inherent to the patient's age: decrease in respiratory capacity, decrease in maximum phonation time, hoarse-breathy vocal quality, decrease in vocal intensity [3][4][5], and changes in voice frequency [6][7][8][9][10]. Men's fundamental frequency settles around 147 Hz, which represents a high value for males; among females, this acoustic parameter often averages 195 Hz, which represents a low value for women.
The reduced vocal intensity is related to deficient respiratory support, due to the diminished elasticity and the stiffening of the respiratory musculature, as well as the decrease in vital capacity and maximum phonation time. This set of factors favors a decreasing infraglottic pressure and, consequently, a reduction in voice intensity 4,5.
Another relevant symptom in this population is difficulty in singing. The elderly often report difficulties in reaching extreme notes 4, which may be related to changes in vocal range. Some authors highlight the restricted tonal range as one of the acoustic changes present in this population 11. This characteristic may have a particularly negative influence on singing.
It is important to mention that the elderly are more and more integrated in community activities, such as singing groups or choirs, which favors a better quality of life in physical, psychological and social aspects, improving their well-being 12,13. However, the vocal characteristics often presented by the senescent interfere with their singing performance, giving rise to complaints concerning difficulty in singing 4. Furthermore, the presence of negative vocal symptoms, such as coughing, throat clearing, screaming, difficulty in speaking, and vocal fatigue, interfered with the vocal range performance of elderly women choristers, especially in the group presenting a greater occurrence of these vocal symptoms 14.
Hence, the elderly may feel disadvantaged when facing a vocal problem or limitation, no longer being able to join group activities that specifically use the voice, thus shying away and even isolating themselves.
For the assessment of intensity-related vocal range, i.e., analysis of the phonatory and dynamic ranges, there are acoustic measurement resources, such as phonetography and the Vocalgrama software, for example, which enable the measurement of the maximum and minimum frequency and intensity limits a person can reach, producing a graph whose area is measured in square centimeters or in percentage. Therefore, the greater the represented area, the better the vocal performance in relation to frequency and intensity control 15.
In this article, the purpose was to measure the vocal range profile in elderly women with and without vocal symptoms, considering the vocal and dynamic ranges, in order to verify the impact of age and vocal symptoms on these parameters. Such a purpose is justified by the fact that the results obtained may furnish means to propose vocal training alternatives for this population, as well as to monitor the effects of the training applied. The goal is to renew the possibilities of activities for the elderly in the social context, as in singing, for example, making improvements in their quality of life possible.
Thus, the initial hypothesis for this research was that elderly women without vocal symptoms would have better results in vocal range profile measurements when compared to those with symptoms. In addition, it was assumed that elderly women have a reduced vocal range in relation to the adult women population. Hence, this paper aimed at characterizing and comparing the vocal range profile in elderly women with and without voice symptoms.
METHODS
This is an analytical, comparative, cross-sectional study, characterized as a subproject of a broader study already submitted to and approved by the Research Ethics Committee of the Universidade Federal de Pernambuco, Brazil, under Certificate of Presentation for Ethical Consideration (CAAE) 447252215.6.0000.5208, evaluation report no. 1.076.660. The sample was collected from a public elderly care service, with the participation of 23 elderly women aged from 62 to 76 years, averaging 69.42 (±4.88) years old.
The individuals included in the research were women 60 years old or over, attending a public elderly care service at a public university. Elderly women with illnesses that could affect the proper functioning of the vocal tract, such as stroke, Parkinson's disease, dementia, and chronic respiratory tract diseases, were excluded from the sample, as were those who had undergone head and neck surgery. For this purpose, the patients' medical history was consulted in the healthcare unit.
Initially, the elderly women were informed about the study, and those who agreed to participate signed the Informed Consent Form (ICF). Then, the Voice Symptom Scale (VoiSS) protocol was applied in order to divide them into two groups, with and without vocal symptoms, according to the score achieved on the protocol; the cut-off score was set at 16 points 16 , as shown in Table 1. The group without symptoms was composed of 12 elderly women aged from 62 to 73 years (average: 69.57 ± 5.69 years). The group with vocal symptoms was composed of 11 elderly women aged from 63 to 76 years (average: 69.28 ± 4.33 years). It is noted that the groups differed from each other in all domains of the scale. Afterwards, the data related to the vocal and dynamic ranges of all participants were collected by means of the Vocalgrama software, which measures the frequency and intensity acoustic parameters at their maximum and minimum limits, and the voice range profile (VRP), in percentage.
The VRP enables the vocal limits of a person to be analyzed through the definition of the maximum and minimum frequency and intensity acoustic parameters. The VRP is obtained by means of the Vocalgrama software, made by CTS Informática. This program quantifies the person's range in exact figures and translates it into a graph. Its analysis consists of the following measures: Hertz (Hz) for the frequencies and pitch range; semitones for the vocal range; decibels (dB) for the maximum and minimum limits of voice intensity; and percentage for the graph area corresponding to the vocal range profile 15,17 .
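The conversion between a frequency range in Hz and a vocal range in semitones follows the standard musical relation of 12 semitones per octave. The short sketch below illustrates this relation; the function name and the input values are ours, chosen only to approximate the group averages discussed later in this paper, and do not reproduce Vocalgrama's internal computation.

```python
import math

def vocal_range_semitones(f_min_hz: float, f_max_hz: float) -> float:
    """Convert a frequency range in Hz to semitones (12 semitones per octave)."""
    return 12 * math.log2(f_max_hz / f_min_hz)

# Illustrative values close to the averages discussed below:
# ~154 Hz minimum frequency and ~380 Hz maximum frequency.
print(round(vocal_range_semitones(154.0, 380.0), 1))  # -> 15.6 semitones
```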
The data from the VRP were obtained through the vocal records identified in the program itself. The recordings were made with an HP Notebook PC connected to a Karsect HT-2 headset microphone and an Andrea PureAudio™ USB-AS Adaptor, which filters and reduces noise. The microphone was kept at a distance of approximately four centimeters from the mouth, in order to minimize interference in the records.
The vocal records of the VRP were made with the person in the sitting position. Each participant was instructed to perform a sustained emission of the phoneme /ε/ in upward and downward glissando, until reaching their maximum frequency limits, both for highs and for lows, first in the weakest intensity possible and then in the strongest. The graph result registered by the program was described quantitatively, in percentage. All the records were obtained individually at a previously scheduled appointment between researcher and participant, in a room reserved for data collection.
For the data analysis, the acquired values were numerically distributed. The variables included the maximum and minimum frequencies, the maximum and minimum intensities, the vocal range in number of semitones and in Hertz, and the area of the vocal range profile in percentage. Moreover, the presence or absence of vocal symptoms was taken into consideration. To investigate the normality of the studied population, the Shapiro-Wilk test was performed. Since most of the variables studied presented non-normal distribution, the Mann-Whitney non-parametric test was used for the comparison between averages. The SPSS software (Statistical Package for the Social Sciences), version 21.0, was used, and the significance level adopted was lower than 5%.
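For readers who wish to reproduce this kind of analysis, a minimal sketch of the normality check followed by the non-parametric group comparison is shown below using SciPy (the study itself used SPSS 21.0). The group values are hypothetical placeholders, not data from this study.

```python
from scipy.stats import shapiro, mannwhitneyu

# Hypothetical VRP area (%) per participant in each group.
with_symptoms = [3.9, 3.5, 4.1, 3.2, 3.8]
without_symptoms = [3.6, 3.4, 3.9, 3.3, 3.7]

# Normality check for each group (significance level of 5%, as in the study).
for name, sample in [("with", with_symptoms), ("without", without_symptoms)]:
    stat, p = shapiro(sample)
    print(f"Shapiro-Wilk ({name} symptoms): p = {p:.3f}")

# Non-normal distribution -> non-parametric comparison between the groups.
stat, p = mannwhitneyu(with_symptoms, without_symptoms, alternative="two-sided")
print(f"Mann-Whitney U: p = {p:.3f}")  # p < 0.05 would indicate a group difference
```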
RESULTS

Table 2 presents the average values of the vocal range profile variables in elderly women with and without vocal symptoms. The values obtained from the two groups were similar, with no significant differences registered.

DISCUSSION

This study aimed at characterizing and comparing the vocal range profile in elderly women, according to the presence or absence of vocal symptoms.

Regarding the minimum frequency measurements, the averages were found to be similar between the groups, agreeing with what was observed in another study on the vocal range profile in elderly women, using phonetography, whose average result was 154 Hz 18 . Such values are lower than those found in the young women population, whose average was 167.32 Hz in women aged from 18 to 29 years 19 . It is possible that the farther reach towards lower frequencies occurred due to the lowering of frequency commonly found in old-age women. With menopause, the hormonal fall may trigger vocal fold edema and, consequently, the lowering of elderly women's voices 6,7 . Moreover, it is interesting to notice that elderly chorist women may present an even lower reach, possibly resulting from training: one study presented average values of 134.82 Hz and 137.28 Hz in elderly chorist women of two different age groups 14 .

Concerning maximum frequency, this study presented an average value of 386.35 Hz for the group of elderly women with voice symptoms, and 374.65 Hz for the group of elderly women without voice symptoms.
Another study, on the other hand, presented an average value of 478.88 Hz, with most of the elderly women presenting frequencies of about 440 Hz 18 . In young women, these values are much higher, and the reach towards high notes goes as far as 1000 Hz, with an average value of 908.45 Hz for the maximum frequencies of women between 18 and 29 years 20 . The differences registered between young and elderly women are also expected, considering the loss of vocal flexibility due to laryngeal changes in old age 21,22 . In chorists, on the other hand, the registered average reach for younger elderly women was 349.96 Hz, whereas the group of older elderly women had an average value of 348.59 Hz 14 . Hence, the difficulty in reaching higher frequencies in old age is reinforced.
As for the pitch range in Hz and semitones, this study, in addition to not finding differences between the groups of elderly women with and without voice symptoms, presented results that diverged from those obtained from another group of elderly women, whose average was 324.05 Hz 18 , possibly because the elderly women in this study had reached lower values of maximum frequency, as mentioned above. Such results are also below the values found in the adult population, whose established normality is of at least 20 semitones 19,20,23 , with average records of 19.6 semitones in elderly women and an average of 28 semitones in young women. These findings may also be explained by the endocrinal factors related to the postmenopausal period in women aged 60 or over. It should be further noted that chorist and non-chorist elderly may have ranges of different numbers of semitones, and singing is a practice that may help to increase this measurement 24 .
As for the vocal range area, this study presented averages of 3.74% and 3.62%, respectively, in elderly women with and without voice symptoms, which is considered reduced in relation to the vocal range area of adult women 19 , as expected, considering the vocal changes commonly found in old-age individuals.
The minimum intensity values obtained from both groups are lower than those of adult women 19 . As for the maximum intensity, both groups presented lower averages in relation to those found in young adult women aged from 18 to 29, whose average was 113.14 dB 19 . Such results may be explained by the fact that adults generate greater subglottic air pressure during emission than the elderly women population. The elderly may suffer the influence of the loss of elasticity of lung tissue, with reduced subglottic air pressure and, consequently, reduced voice intensity. Furthermore, it is highlighted that intensity control is directly related to laryngeal tonicity and glottal resistance, which may be reduced in the elderly. Studies carried out with elderly women present either similar or lower averages 14,18 .
When the parameters of the vocal range profile of elderly women were compared according to the presence or absence of vocal symptoms, a similarity in average values was observed, without significant differences being recorded. It is possible that, regardless of the presence of vocal symptoms, the characteristics of voice range and dynamics are affected by changes in structural aspects of the vocal folds and the very condition of the mucosa in the senescence period. Moreover, it is important to consider that the similarity in results between the groups may be attributed to the fact that the vocal symptoms were self-reported.
The main limitation of this study refers to the number of participants. A larger sample would enable a better understanding of the difficulties related to vocal range in the elderly population.
CONCLUSION
It is possible to conclude that elderly women, in general, present a reduced vocal range, which justifies the difficulty in singing reported by some participants. On the other hand, the presence or absence of vocal symptoms does not seem to be related to the vocal range in elderly women. Attention is called to health programs for the elderly that may explore their vocal efficiency, resistance and flexibility, in order to renew the possibilities of activities for the elderly in the social context, as in singing, for example, thus improving their social integration and quality of life.
A Five Year Retrospective Study of Female Sexual Assault in Qaluybia Governorate, Egypt
ABSTRACT Background: Sexual assault has serious effects on society and on the victim's health. Aim: The aim of this work was to estimate the epidemiological features and characteristics of female sexual assault during the period from 2014 to 2018 in Qalyubia Governorate, Egypt. Subjects and Methods: This study was based on the collection and analysis of retrospective data during the period from the start of January 2014 to the end of December 2018 from the archives of the Qalyubia Medico-legal Department, Ministry of Justice, Egypt, with respect to demographic features, number of assailants, relationship between victim and perpetrator, time of reporting, and pattern of physical and genital injuries. Results: The total number of sexual assault cases was 145 within the studied period. Most cases (49%) were between 12-18 years old and came from urban areas (70.3%). Unmarried cases constituted 90.3%, and 94.5% of cases were of normal mentality. Sexual assaults occurred mostly in the spring (44.14%). Most of the assailants were extrafamilial and unknown to the victims (70.34%). Most cases were committed in the assailant's home (46.21%), and a single assailant was the perpetrator in 81.4% of the cases. Complete vaginal penetration was the most frequent type of assault (44.1%). The most common type of non-genital injury was abrasion (49.5%), the highest percentage of genital injury was lacerations (39.8%), and the most common site was the hymen (35.3%). Most cases (65.5%) were examined after the first month following the assault, and the shortest time between the alleged assault and the examination was within the first day, in 6.2% of cases. Conclusion: The highest percentage of sexually assaulted cases in this study was unmarried females under eighteen. Most cases were examined late, after the first month following the assault; the crime was mostly committed by one unknown assailant, with complete vaginal penetration, and hymen laceration was the most frequent type of genital injury.
I-INTRODUCTION
Sexual assault can be defined as any form of sexual contact or conduct without the consent of the victim (Ellison et al., 2008). In Egypt, the establishment of any sexual relationship outside of marriage is an unlawful relationship, criminalized by religions and the law, and it is a violation of the law, values and ethics recognized in Egyptian society (El-Elemi et al., 2011). Sexual assault is a crime against women all over the world that is met with silence most of the time (Haider et al., 2014). The rate of sexual assault against females is increasing all over the world, and the international statistics are shocking (Tamuli et al., 2013). Worldwide, about twenty percent of females have been sexually abused in childhood (Krug et al., 2002). Sexual violence includes acts that range from verbal harassment to forced penetration and physical force. Whatever the circumstances, sexual assault leaves a lasting negative effect on the mental state of the victims and their relatives (Bandyopadhay et al., 2013). Victims may be rejected by their relatives, may become unwanted by their communities and may be murdered by the assailants (Tamuli et al., 2013). The victims may suffer severe genital and general injuries, fatal outcomes, unintended pregnancy or sexually transmitted diseases (Drezett et al., 2012).
The number of cases of sexual assault may be higher than that recorded, as some victims may not report because they are ashamed or afraid of being blamed by their community (World Health Organisation, 2008), and some believe that it is a private matter and that no privacy will be offered to them if they report the assault. There is also an absence of specialists qualified to deal with these crimes, and some of them do not appreciate what the victims have been subjected to. Some victims are afraid of the perpetrator's revenge: even if the perpetrators are brought to justice, they may escape with a minimal punishment and return to take revenge (Egypt violence against women study, 2009). Thus, the examination of sexual assault cases is considered one of the most difficult jobs in forensic medicine, and the responsibilities of the doctor are very demanding (Green, 2000).
Accurate study of the incidence of sexual assault provides significant guidance and helps to identify the main causes of these situations, helping the community and the authorities to put in place suitable measures and programs to decrease these crimes. Therefore, the aim of this work was to estimate the epidemiological features and characteristics of female sexual assault during the period from 2014 to 2018 in Qalyubia Governorate, Egypt.
II-Subjects and Methods
This study included the collection of retrospective data on alleged female sexual assault in Qalyubia Governorate, Egypt, during the period from the start of January 2014 to the end of December 2018, from the archives of the Qalyubia Medico-legal Department, Ministry of Justice, after approval from the authorities to access the records and analyze all medico-legal reports related to these living victims. The data obtained from the records included the age of the victim, classified into four categories as follows: child (up to 11 years), adolescent (12-18 y), young adult (19-30 y), adult (older than 30 y); marital status (married or unmarried); mental state (normal or mentally ill); residence of the victim (rural or urban); season of occurrence; number of assailants (one or more than one); relation between assailant and victim (intrafamilial or extrafamilial); location of assault (victim's home, assailant's home, assailant's workplace, the fields or an isolated place, or other); the time elapsed between the assault and the medico-legal examination (within the 1st day, 2nd day up to 1 week, 1st week to the 1st month, or after the 1st month); type of assault (vaginal, anal or mixed); type of non-genital injuries (abrasions, contusions, bone fracture, joint sprain, head injury, broken teeth, bite marks or multiple types of injuries); and genital injuries (redness and swelling, abrasions, contusions, lacerations or multiple types of injuries). These data were collected and tabulated in a specially designed sheet (Appendix 1), and statistical tests were calculated.
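A small helper mirroring the age categories described above is sketched below; the function name and structure are ours, added only to make the classification explicit.

```python
def age_group(age_years: int) -> str:
    """Classify a victim's age into the four categories used in this study."""
    if age_years <= 11:
        return "child (up to 11 y)"
    elif age_years <= 18:
        return "adolescent (12-18 y)"
    elif age_years <= 30:
        return "young adult (19-30 y)"
    return "adult (older than 30 y)"

print(age_group(15))  # -> adolescent (12-18 y)
```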
Inclusion criteria were: females of all ages who were victims of sexual assault. Exclusion criteria were: males, and any cases other than sexual assault, such as head injuries, firearm injuries, abortion cases, infirmity cases, poisoning cases, etc.
Statistical Analysis
All the tested variables were expressed as numbers and percentages. The Chi-square test was used, and a P value of ≤ 0.05 was considered statistically significant. All analyses were performed using SPSS (Statistical Package for the Social Sciences) version 16 software (SPSS Inc., Chicago, IL) (Greenberg et al., 1996).
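As an illustration, a goodness-of-fit Chi-square test for one of the splits reported below (118 single-assailant versus 27 multiple-assailant cases) can be sketched as follows; the paper used SPSS 16, and SciPy is used here only to reproduce the same test.

```python
from scipy.stats import chisquare

observed = [118, 27]           # single vs. multiple assailants (Table 9)
stat, p = chisquare(observed)  # default: equal expected frequencies
print(f"chi2 = {stat:.2f}, p = {p:.3g}")  # p <= 0.05 -> statistically significant
```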
III-RESULTS
Out of 6061 cases recorded at the Qalyubia Medico-legal Department, Ministry of Justice, Egypt, during the period from January 2014 to December 2018, only 145 were victims of sexual assault.

As regards the age of the victims, Table 3 shows the classification of female sexual assault victims according to age group, where the age of the female victims was classified into 4 groups: the first up to 11 years, the second from 12 to 18 years, the third from 19 to 30 years, and the fourth older than 30 years. The most vulnerable age group to sexual assault, a highly statistically significant finding (P < 0.001), was the group of 12 to 18 years (71 cases, 49%); vulnerability was lowest among females aged older than 30 years (7 cases, 4.8%).

Table 6 shows that only 8 victims (5.5%) had mental illnesses, while the rest of the victims (137 cases, 94.5%) had normal mentality, which was highly statistically significant (P < 0.001).

According to the place of assault, most cases were committed in the assailant's home (67 cases, 46.21%), while eighteen cases (12.41%) occurred in the fields or an isolated place, 7 cases (4.83%) occurred in the victim's home, 6 cases (4.14%) occurred in the assailant's workplace, and 47 cases (32.41%) occurred in different other places; these differences were highly statistically significant (P < 0.001) (Table 8).

According to the number of assailants, one hundred and eighteen victims (81.4%) were exposed to sexual assault by only one assailant, while the rest of the victims were exposed to sexual assault by more than one assailant (27 cases, 18.6%); these differences were highly statistically significant (P < 0.001) (Table 9, Fig. 5).

As regards the victim-assailant relationship, the distribution of cases shows that one hundred and twenty-eight of the 145 assaults (88.27%) were extrafamilial, whereas 17 assaults (11.7%) were considered intrafamilial. Of the extrafamilial cases, 102 (70.34%) involved assailants unknown to the victims, and 26 (17.93%) involved known assailants. Of the intrafamilial cases, there were 7 cases of incest (4.83%), and 10 cases (6.9%) were sexually assaulted by relatives; these differences were highly statistically significant (P < 0.001) (Table 10).

As regards the time of the medico-legal examination after the assault (the time between the assault and the examination of the victims), 9 cases (6.2%) were examined on the first day after the assault, 14 cases (9.7%) were examined from the second day to the first week, 27 cases (18.6%) were examined from the first week up to the first month, and 95 cases (65.5%) were examined after the first month (Table 11).

As regards non-genital injuries, one hundred and three victims (71%) of the 145 cases showed evidence of general violence. When the injuries involved more than two places on the body, they were classified as multiple injuries. Among the sexually assaulted females with injuries, the most common type was abrasion (49.5%), while the lowest percentage was bone fracture (1 case, 1%). Contusions, bite marks, broken teeth, joint sprains, and head injuries constituted 9.7% (10 cases), 7.8% (8 cases), 5.8% (6 cases), 4.9% (5 cases), and 1.9% (2 cases), respectively.
Regarding the site and type of genital injuries, genital injuries were present in 91.7% of cases (133 of 145); they were most commonly located in the hymen (47 cases, 35.3%), followed by the posterior fourchette (38 cases, 28.6%), vulva (12 cases, 9%), vagina (8 cases, 6%), anus (5 cases, 3.8%), and more than one location (23 cases, 17.3%).
IV-DISCUSSION
The rate of sexual assault against females is increasing all over the world, and the international statistics are shocking (Tamuli et al., 2013). This retrospective study was conducted to estimate the epidemiological features and characteristics of sexual assault against females in Qalyubia Governorate, Egypt, from the start of January 2014 to the end of December 2018.
In the present study, 145 cases of female sexual assault were recorded over 5 years, and the most vulnerable age was between 12 and 18 years old (49%). The results of the present study were in agreement with the study by Sharaf El-Din et al. (2015), which reported that the highest percentage of victims was in the age group of 12 to 18 years, and the lowest was among females aged older than 30 years.
The results of the present study were also in line with the study of Sherif et al. (2018), which found that most cases (63.3%) were less than 18 years old. This finding was similar to that of the Kaushik et al. (2016) study, where incidences of alleged rape were most common amongst girls of 14-17 years (45.16% of cases), followed by 18-30 years (31.64% of cases). Pal et al. (2015) reported the age of 11-20 years as the most affected group. The age of 11-15 years was the most affected group according to the study conducted by Suri and Sanjeeda (2013). At the same time, the study of Tariq et al. (2014) reported that victims aged between 10-19 years constituted most of the rape cases. The young age group may be more vulnerable than adults, as they are weaker, can be easily grabbed and overcome, and are easily deceived (Felson and Cundiff, 2014). This can also be attributed to the fact that the young do not have sufficient awareness, in addition to a lack of knowledge about these situations and how to protect themselves (Hassan et al., 2007).
Regarding the age of the victims, the results of the present study found that the least vulnerable group was females aged more than 30 years (4.8%). This result was in contrast with the study by Bandyopadhay et al. (2013), who reported that the maximum number of their study population was aged between 21-30 years (40%). Also, Vertamatti et al. (2013) reported that 47.3% were between 20 and 39 years. This age group may be more vulnerable to sexual assault due to increased sexual attractiveness (Felson and Cundiff, 2014). The difference in the age of victims across studies may be a result of different cultures and customs (Karanfil et al., 2013).
According to the residence of the victims, the results of the present study revealed that most cases came from urban areas (70.3%). Regarding mental status, the results of the present study showed that only 5.5% had mental illnesses, while the rest of the victims (94.5%) had normal mentality. This result was in agreement with the study by Sharaf El-Din et al. (2015), who stated that only a small number of victims had mental illnesses, while the majority had normal mentality. In contrast to the result of the present study, Martin et al. (2006) and Brownlie et al. (2007) stated that victims with mental disability are at great risk for sexual abuse.
According to seasonal variation, the findings of the present work revealed that most cases occurred in spring (44.14%) and the lowest proportion occurred in winter (10.34%). This result is in agreement with the study by Sherif et al. (2018), which reported that most sexual assault cases (31.4%) occurred in the spring season. Also, Davies and Jones (2013) reported that most cases occurred in spring (28.2%). The high occurrence of sexual assault in spring may be due to the pleasant weather at that time of the year, which encourages people to spend most of their time outside, leading to increased vulnerability to sexual assault (Sivarajasingam et al., 2004). On the other hand, the study of Demireva et al. (2013) reported the highest incidence of sexual assault in summer (34.14%). Sukul et al. (2009) also reported that the majority of cases occurred during the summer months.
According to the place of assault, the present study found that most cases were committed in the assailant's home (46.21%). This result was in line with the study of Kaushik et al. (2016), who reported that the most common site of assault was the house of the perpetrator, in 33.33% of cases. The results of the present study were also consistent with the study of Pal et al. (2015). According to the number of assailants, the present work found that 81.4% of cases were exposed to sexual assault by only one perpetrator. The time between the sexual assault and the medico-legal examination is extremely important to prove the assault and secure the consequent rights of the victim and penalties for the perpetrator; however, most victims are late to report due to their fear of shame or because of society's perception of them (Kaushik et al., 2016). In the present work, as regards the time of the medico-legal examination after the assault, 65.5% of cases were examined after the first month, and the lowest percentage of cases (6.2%) were examined on the same day as the assault. This was in agreement with Sharaf El-Din et al. (2015). Furthermore, in early examinations, subcutaneous contusions may go unnoticed, as they appear after forty-eight hours (Haider et al., 2014).
Regarding the site and type of genital injuries, the present work found that total genital injury occurred in 91.7% of cases (133 of 145), mostly located in the hymen (35.3%). The most common type of genital injury was lacerations (39.8%), followed by multiple types of injuries (34.6%). This result was in agreement with Sharaf El-Din et al. (2015). In this work, children up to the age of 11 years were the most susceptible to anal penetration, while the age group between 12 and 30 years was the most susceptible to complete vaginal penetration. This result was in line with Sherif et al. (2018), who stated that most cases were victims of rape (54%) and that most cases under the age of 18 years (40%) were subjected to anal sex. Also, Sharaf El-Din et al. (2015) found that the highest percentage of anal penetration was in children up to the age of 12 years, while the age group between 12 and 30 years was the most susceptible to complete vaginal penetration.
V-CONCLUSION
The highest percentage of sexually assaulted cases was unmarried females under eighteen. Most cases were examined late, after the first month following the assault; the crime was mostly committed by one unknown assailant, with complete vaginal penetration, and hymen laceration was the most frequent type of genital injury.
VI-RECOMMENDATIONS
More research is needed to find out the real prevalence of female sexual assault in the different areas around Egypt, as this problem is underestimated. Early reporting of sexual assaults, without delay in examination, is needed to avoid loss of evidence and of the victim's rights. Protection programs should increase the awareness of children and their families about early reporting of these assaults and how to protect themselves against these situations.
Moving toward Narrowing the United States Gap in Assisted Reproductive Technology (ART) Racial and Ethnic Disparities in the Next Decade
The Disparities in Assisted Reproductive Technology (DART) hypothesis, initially described in 2013 and further modified in 2022, is a conceptual framework to examine the scope and depth of underlying contributing factors to the differences in access and treatment outcomes for racial and ethnic minorities undergoing ART in the United States. In 2009, the World Health Organization defined infertility as a disease of the reproductive system, thus recognizing it as a medical problem warranting treatment. Now, infertility care is largely recognized as a human right. However, disparities in Reproductive Endocrinology and Infertility (REI) care in the US persist today. While several studies and review articles have suggested possible solutions to racial and ethnic disparities in access and outcomes in ART, few have accounted for and addressed the multiple complex factors contributing to these disparities on a systemic level. This review aims to acknowledge and address the myriad of contributing factors through the DART hypothesis which converge in racial/ethnic disparities in ART and considers possible solutions to effect large scale societal change by narrowing these gaps within the next decade.
Introduction
In 2009, the World Health Organization defined infertility as a disease of the reproductive system, thus recognizing it as a medical problem warranting treatment [1]. Since then, infertility care has become recognized as a human right in the US and internationally [1][2][3]. However, disparities in Reproductive Endocrinology and Infertility (REI) care in the US persist [4]. In recent decades, inequities in infertility care have become a topic of actionable interest. In 2020, the American Society for Reproductive Medicine (ASRM) expressed concern regarding racial and ethnic disparities affecting patients seeking or in need of fertility care and, accordingly, created a dedicated diversity, equity and inclusion (DEI) taskforce [5]. While healthcare disparities in the US exist based on numerous sociodemographic factors, including but not limited to income, availability of insurance, geography, and sexual and gender identity, they are notably significant when broken down by race/ethnicity, rippling into multiple clinical contexts [6].

Racial/ethnic disparities in infertility care are multifactorial; they can often begin long before patients present for infertility care or qualify for treatment and funnel into Assisted Reproductive Technology (ART) outcomes [7]. This is evidenced by decreased contraceptive knowledge among Black and Hispanic veteran patients, particularly regarding an awareness of the irreversible nature of tubal sterilization, decreased rates of HPV vaccination, decreased anti-müllerian hormone levels/ovarian reserves at a given age, increased tubal factor disease and fibroid burden, and increased rates of sterilization at the time of cesarean section in racial/ethnic minorities [8][9][10][11][12][13]. Additionally, the rate of infertility is higher in minority populations, with Black women having at least 1.5 times the rate of infertility as White women, in addition to increased rates of comorbidities and risk factors associated with infertility [14]. In this review, we will focus specifically on ART outcomes, with the understanding that disparities are often at play prior to needing or meeting the criteria for specialized infertility care. As it currently stands, non-White race is an independent poor prognostic factor in infertility care. However, race is believed to be a social construct, and does not necessarily reflect biological differences between groups [15,16]. As such, this review offers a comprehensive series of suggestions by reviewing and discussing previous publications through the framework of the Disparities in ART (DART) hypothesis. Through this hypothesis, we focus on factors which contribute to racial/ethnic disparities, and discuss ways to potentially narrow the gap in access and treatment outcomes in ART within the next 10 years.
Current Disparities in ART
Prior to exploring new and previously proposed interventions intended to narrow the disparity gap in ART, it is prudent to discuss the pervasiveness of racial/ethnic disparities, specifically in ART access and treatment outcomes. Also, it is necessary to build this discussion by reiterating that rates of infertility are higher in minority populations, with Black women having the highest rates of infertility and, therefore, the highest need for infertility care [14].

Though use of in vitro fertilization (IVF) has tripled in the last 20 years, this trend is disparate for racial/ethnic minority groups [6,17]. In an observational study evaluating utilization of infertility services in the US by race/ethnicity, using data from the National Survey of Family Growth cycles from 2002, 2006-2010 and 2011-2013, based on participant responses regarding use of infertility services, the disparities in utilization were underscored. Despite higher rates of infertility, Black, Hispanic, and American Indian/Alaska Native patients are less likely to utilize medical assistance to achieve pregnancy compared to White patients. After adjusting for relevant covariates, this difference persists in Black patients only, with Black women exhibiting a 23% lower prevalence of medical assistance in becoming pregnant [18]. Differential utilization of ART can be quite stark, with some studies showing up to 70% lower prevalence in Black women in regional versus national studies, respectively [6,19]. In a retrospective cohort study examining the relationship between race/ethnicity and the utilization of different infertility treatments using the United States (US) birth data files from 2011 to 2019, non-Hispanic Black and Hispanic women had about a 70% lower likelihood of receiving any infertility treatment, compared to non-Hispanic White women. Furthermore, non-Hispanic White women were the most represented group for live births associated with any type of infertility treatment (53.2%) and non-Hispanic Black women were the least represented (3.7%) [6].

Oocyte cryopreservation, an REI outcome that is typically independent from an infertility diagnosis, is less common in racial and ethnic minorities. In a retrospective cohort analysis using the Society for Assisted Reproductive Technology Clinical Outcome Reporting System for patients undergoing oocyte cryopreservation from 2012 to 2016, oocyte cryopreservation was least common in Black and Hispanic patients [20,21]. Of the cycles with reported race/ethnicity data, 66.5% were performed in White patients, 9.6% in Asian patients, 7.1% in Black patients, and 4.5% in Hispanic patients. Interestingly, oocyte yield was comparable across ethnic groups, with the mean (standard deviation) of oocytes retrieved per cycle equaling 12.9 (9.7) for White patients, 13.2 (11.4) for Black patients, 10.6 (8.4) for Asian patients, 12.1 (9.9) for Hispanic patients, and 13.7 (10.3) for patients who identified as another race [21]. This re-demonstrates that disparities in REI care are present even prior to the diagnosis of infertility and the need for infertility treatment amongst racial/ethnic minorities.

For many patients, additional barriers emerge even after presenting to care which, in turn, limits the ability to provide quality care. In a retrospective study of 87 patients seeking fertility care at a single resident/fellow REI clinic in New York from 2020 to 2022, 88.5% of participants identified as non-White and most had Medicaid; 70-80% completed their lab evaluation; 59.8% were able to complete a scheduled HSG; and only 27.6% of patients' partners completed a semen analysis [22]. While more research is needed to fully elucidate the likely multi-factorial etiology of increased incomplete workups in racial/ethnic minority patients, transportation, cultural/social stigma, and financial constraints have been identified as contributing factors [23].

Accessing and completing care are certainly not the only barriers contributing to racial/ethnic inequities in ART utilization. Implicit bias, seen with differential fertility counseling in young Black and Latina women compared to White women newly diagnosed with cancer, and insurance reimbursement models or the lack thereof, are examples of pervasive systemic and structural bias contributing to this disparity [7,24,25]. While studies show referrals for fertility preservation are notably low in general, they vary by race/ethnicity, highlighting the role provider bias plays in perpetuating racial/ethnic disparities in ART. In a retrospective cohort study of women aged 18-42 years diagnosed with a new breast, gynecologic, hematologic or gastrointestinal cancer at a single institution between 2008 and 2010, the odds of a fertility preservation consultation referral were about two times higher for White women, compared to Black, Hispanic and Asian women [26]. Patients are aware and concerned about differential or inferior treatment, as studies show Black and Hispanic women face more difficulty finding a fertility physician with whom they feel comfortable, thus leading to delays in workup and treatment [27,28]. Provider bias and its detrimental effects on patient health continue to be evaluated and publicized [29]. In 2020, the American College of Obstetricians and Gynecologists published a collective action addressing racism as a joint statement recognizing historical, societal, institutional and practitioner factors contributing to inequities in obstetrics and gynecology [30].

Racial/ethnic disparities also persist in ART treatment outcomes [31]. For example, despite higher access and utilization of fertility care by Black women in a military population, lower ART success and decreased live birth rates were seen, which was, in part, attributed to an increased fibroid burden in Black women [32]. Additionally, in a study of women undergoing autologous in vitro fertilization (IVF) from 2010 to 2012, Black and Asian women had lower odds of clinical intrauterine pregnancy and live birth, and higher rates of spontaneous abortion compared with White women [33]. Similarly, studies consistently show a decrease in live birth rates for Black women undergoing autologous, non-donor, fresh embryo transfers compared to White women, after controlling for multiple factors. Live birth rates are additionally lower for Black patients undergoing frozen embryo transfers [34][35][36][37]. Racial differences are also seen with intrauterine insemination (IUI). In a retrospective analysis of patients undergoing IUI from 2007 to 2012, American Indian patients had 66% lower pregnancy rates compared to non-Hispanic Whites when patient and cycle characteristics were controlled for [38]. Differences in live birth rates are likely multi-factorial and, in part, attributable to other comorbid diseases that occur at a higher rate in minority populations, such as tubal factor disease, uterine factor disease, and elevated BMI. In a retrospective cohort study with 1110 patients undergoing 2254 autologous IVF cycles between 2014 and 2019 at an academic fertility center in the Southeastern United States, the neighborhood deprivation index, a proxy for socioeconomic and environmental factors, was not statistically significantly associated with the live birth rate. Live birth per cycle was lower among Black (24%) compared to White patients (32%), and the crude probability of miscarriage per clinical pregnancy was higher among Black patients (22%) compared to White patients (12%) [39].

Biological disparities are seen at the level of ovarian function, with decreased age-related ovarian reserves observed in Black women compared to White, Asian and Hispanic women [40,41]. Additionally, despite greater ovarian responsiveness, Black and Hispanic patients have lower live birth rates compared with White patients, though this was not statistically significantly different after adjusting for confounders [42]. Furthermore, birth rates remain lower for Black women, even when using vitrified oocytes from healthy donors [43]. Racial/ethnic differences are also seen in hormone production, metabolism, and endometrial receptivity, ultimately contributing to worse outcomes in minority populations. For example, in a retrospective cohort study of 3289 ART cycles conducted between 2009 and 2013 at the Shady Grove Reproductive Science Center, premature progesterone elevation on the last day of ART stimulation was shown to have a negative effect on live birth rates. Additionally, the prevalence of elevated progesterone on the last day of ART stimulation was higher in Black, Asian and Hispanic women, compared to White women [44]. Lastly, even when live births are achieved, perinatal outcomes are persistently worse for racial/ethnic minority groups, with higher rates of gestational diabetes, fetal growth restriction, preterm labor, preeclampsia and type II diabetes postpartum [45][46][47].
What Has Been Proposed?
Some of the previously suggested solutions/approaches to mitigating inequities in ART have focused on cost burden and legislation. In models where ART is more affordable, such as in the military, Black women demonstrated a fourfold increase in utilization of ART [48]. Currently, 21 states provide some amount of insurance infertility coverage in the US, which is frequently limited to infertility workup and evaluation. Treatment coverage is often limited in terms of amount and type of treatment, however. For instance, IVF is often excluded from these mandates, and when IVF is included, a trial of IUI is typically required before only a limited number of IVF cycles become eligible [49]. State insurance mandates have been suggested to make ART more affordable and thus more accessible, as comprehensive mandates have been associated with reduced disparities in ART utilization in Hispanic and non-Hispanic Black populations [35]. However, more recent evidence shows that while states with mandated coverage for fertility diagnosis and treatment have seen an increase in access to ART in all racial groups, especially for Asian patients, outcomes remain unchanged [50,51]. So far, the current state mandates for donor oocyte ART have been insufficient for decreasing racial/ethnic disparities [52,53]. This is in part due to exemptions, which in turn present additional obstacles for otherwise eligible patients. Certainly, this has been the case in Massachusetts, which has provided mandated coverage for IVF since 1987 [54]. Despite this, exemptions exist for those enrolled with self-insured, employer-sponsored health plans, Medicare and/or Medicaid, OPM-affiliated health plans, and TRICARE, making this benefit inaccessible to many patients in Massachusetts [54]. In fact, only 26.2-36.0% of Massachusetts-based reproductive-age women comprised eligible beneficiaries of the Massachusetts Infertility Insurance Mandate during the 2016-2019 study period.

Moreover, recent publications have suggested addressing the mismatch in supply and demand of the infertility provider pipeline in ART by recommending expansion of much-needed clinical services to other non-REI trained providers [55,56]. Additionally, telehealth utilization and resident/fellow-run fertility clinics have been suggested in previous studies and reviews as solutions to increase accessibility of infertility care and bridge disparities [57,58]. To narrow the existing national racial/ethnic disparities in access and treatment outcomes in ART within the next decade, we suggest possible solutions by approaching the challenges of disparity in care through the prism of the DART hypothesis.
Pathways for Accelerated Change-DART Hypothesis Revisited
The DART hypothesis in racial and ethnic disparities in access and outcomes of IVF treatment in the US was initially proposed by Seifer et al., 2013, in a book chapter entitled "Toward a Better Understanding of Racial Disparities in Utilization and Outcomes of IVF Treatment in the USA," and further revisited and revised in 2022 [59,60]. This approach calls for identifying, integrating, and addressing the multiple factors contributing to racial/ethnic disparities in ART.

The prohibiting factors at play prior to patients presenting for fertility care provide an opportune area for potential improvement. Educating patients about reproductive health, fertility, and the prevalence of age-related infertility, as well as proper utilization of ART, may help to mitigate stigma and shame, which likely contribute to delayed presentation in patients from racial/ethnic minority backgrounds [27,61]. This delay may exacerbate the already well-known present challenge of age-related infertility [62,63]. Timely referral by OB-GYN generalists and primary care to a fertility specialist is highly encouraged for women 35 or older after 6 months of unprotected intercourse, and immediate referral for women 40 or older, so as not to exacerbate the impact of age-related infertility. Beyond normalizing the timetable for referral for specialized fertility care, increased education may yield improved utilization due to a better understanding of reproductive health, disease prevention, and an increased awareness of potential insurance options available. Seeking an explanation of insurance benefits early on may assist those who have insurance to understand their options for pursuing appropriate treatment in a more timely manner. Possible time-sensitive intervention points may include high school, when sexual and reproductive health are often introduced into educational curricula, college, and various timepoints in community centers, as studies show there is a need for increased education among reproductive-aged women. In a cross-sectional study including 1127 participants, a validated fertility awareness survey entitled the Fertility & Infertility Treatment Knowledge Score was administered to 18-45 year old reproductive-aged women in the US, and revealed a mean score of 55.9%, indicating an overall low fertility awareness [64]. During educational interventions, emphasis should simultaneously be placed on the reproductive lifespan, diseases contributing to subfertility and infertility, treatment/prevention, and when to seek care with a fertility specialist, concurrently with general reproductive health education. Similarly, co-morbidities contributing to subfertility and infertility in racial/ethnic minority populations may necessitate aggressive treatment to address the higher disease burden contributing to poorer ART outcomes.

Moreover, recruiting fertility providers from diverse racial, ethnic, and cultural backgrounds is essential [65]. Healthcare studies show patients generally fare better when care is provided by more diverse teams. In a 2019 umbrella review of systematic reviews and meta-analyses, positive associations were noted between diversity, quality, and financial performance in the healthcare environment [66]. In addition to mitigating societal and educational disparities, this may assuage patient concerns regarding comfort with and understanding from providers, a known factor delaying presentation to care for racial/ethnic minorities, which likely exacerbates the negative impact of age-related infertility [27,59,67]. Additional benefits of increasing fertility provider diversity are a theoretical increase in availability and accessibility of providers, particularly in underserved areas, as travel and travel distance contribute to increased rates of discontinuation of care after unsuccessful IVF cycles in minority patients, even in the case of insurance coverage [68]. Lastly, recruitment of diverse fertility providers will help address the shortage of REI-trained physicians in the US. In a 2009 review of the economic impact of assisted reproductive technology, it was estimated that North America meets only 24% of its ART needs [69]. Increasing diversity within the REI workforce, therefore, will simultaneously address the national shortage of REI providers and the lack of diversity among REI providers. Leveraging other non-REI trained providers, including general obstetrician/gynecologists and advanced practice practitioners, has also been proposed to address the widening supply and demand mismatch for REI providers and delivers yet another possible solution for patients from racial/ethnic minority backgrounds to access timely care [55,56]. Educational tools and guidelines, such as practice bulletins (ACOG Committee Opinion No. 781) and hospital pathways, for non-REI trained providers may further bolster this possible option [70,71]. More research is indicated to assess the effectiveness of the above interventions in reducing racial/ethnic disparities in ART.

Additional actionable items emerge when patients overcome the obstacles of presenting to care. First, this includes using tailored evidence-based ART approaches for patients from racial/ethnic minority backgrounds, particularly when prior "standard of care" attempts have failed. Additional research in this area of racial/ethnic disparities is encouraged to further consider and customize care for patients who fail "standard of care" therapies. Second, ART methods with more equitable outcomes should be prioritized, when appropriate. For instance, in a retrospective cohort study including patients with infertility undergoing IVF with Intracytoplasmic Sperm Injection (ICSI), preimplantation genetic testing for aneuploidy (PGT-A), and an autologous single euploid embryo transfer (SEET) from 2015 to 2019 at a single private and academic assisted reproduction center, there was no difference in euploid or live birth rates based on self-reported race [72]. However, as is discussed below, these additional tests and methodologies can be costly and thus prohibitive. Therefore, such testing is of highest utility in applicable populations, and shared decision making with patients is recommended. Third, mandatory implicit bias training for providers is encouraged to improve the patient-provider relationship, particularly as it pertains to cultural competence, understanding and addressing patient mistrust, and combating implicit bias [5,30,73].

For many racial/ethnic minorities, the cost of IVF is prohibitive. IVF in the US is costly, with standard IVF cycles starting at $12,500 USD. Many patients are not prepared or able to spend 50% of their disposable income on IVF, which is often required to cover this cost [69]. In a systematic review using data from 40 studies in high-income countries from 2011 to 2022, ART interventions were examined using an economic evaluation component. Specifically, the study identified the most common high-cost interventions not necessarily adding to care. These included preimplantation genetic testing for aneuploidy (PGT-A) for the general population and ICSI for unexplained infertility [74]. Therefore, access to fertility care can additionally be expanded at the level of the provider or practice for racial/ethnic minority patients by avoiding unnecessary testing, where applicable.

Addressing systems-based, institutional and societal contributors to racial/ethnic disparities in ART is required for meaningful and durable change [75]. In addition to the above suggestions, funding and legislation changes are needed. Policy reform may be warranted to implement and expand state and federal mandates for insurance coverage of fertility care. Though insurance mandates alone have been demonstrated to be insufficient in bridging the gap in racial/ethnic disparities in ART, more comprehensive insurance mandates, as opposed to limited or no mandates, confer more equitable access [50,53,76,77]. Second, institutions providing fertility care should be encouraged to review outcomes stratified by sociodemographic factors, including race/ethnicity, to truly identify and address disparities on an institutional level. This will also facilitate introspection and self-remediation at an institutional level. Third, increased allocation of resources from the public and private sectors is of utmost importance to continue identifying, understanding, and ultimately narrowing the racial/ethnic disparities gap in ART. Lastly, currently only 66% of ART cycles have race/ethnicity information completed. Policies to strongly encourage and incentivize the recording of patient self-reported race/ethnicity data could be implemented, resulting in greater effort for annual reporting of ART cycles to SARTCORS and thus increasing opportunities for disparities research [35,78].
Conclusions
In summary, a multi-pronged approach is encouraged in the next decade to narrow and ultimately close the racial access and treatment disparity gap in reproductive medicine. Future efforts focused on enhancing provider cultural competency, patient and community education regarding timely referral for evaluation and the challenge of coping with age-related infertility, advocacy for broadening insurance coverage, and more favorable public healthcare policies are likely to narrow the racial/ethnic disparity gap in ART access and treatment outcomes in the next 10 years.
Future Directions
Racial and ethnic disparities in reproductive medicine are a result of multiple complex factors. Novel integrated and multifaceted solutions are needed to comprehensively address racial/ethnic disparities in ART. This review of the DART hypothesis provides one such framework of the myriad contributing factors converging in current racial/ethnic disparities in ART, and provides suggested solutions to achieve large-scale change and a narrowing of this gap in the next decade (Figure 1 and Table 1).
Figure 1. The DART hypothesis in racial and ethnic disparities in access and outcomes of IVF treatment in the US. Published with permission from Reproductive Sciences [60].

Table 1. List of suggested solutions to narrow the racial/ethnic disparity gap in ART access and treatment outcomes in the next decade.

- Financial obstacles leading to decreased care utilization: expand comprehensive insurance mandates to cover fertility care/treatment.
- Immediate referral to an REI specialist for women 35 or older who have had 6 months of unprotected intercourse, and immediate referral for women 40 or older.
- Increase funding from private and public sectors for early education at the high school, college and/or community level.
- Emphasize prevention and treatment of co-morbidities contributing to infertility.
- Prioritize evidence-based ART methods that work for racial/ethnic minorities when the "standard of care" fails.
- Encourage institutions providing fertility care to stratify outcomes by race/ethnicity to facilitate introspection and self-remediation.
- Continue to identify and address systemic and institutional contributors.
A Dose‐Confirmation Phase 1 Study to Evaluate the Safety and Pharmacology of Glucarpidase in Healthy Volunteers
Abstract Glucarpidase rapidly decomposes methotrexate. A phase 1 study of glucarpidase in an open-label, randomized parallel-group design was conducted to evaluate the safety, pharmacokinetics, and other pharmacologic effects in Japanese healthy volunteers without methotrexate treatment. A dose of 50 U/kg (n = 8) or 20 U/kg (n = 8) of glucarpidase was administered as an intravenous injection, with 1 repeated dose at 48 hours after the first dose. No dose-limiting toxicities, no significant clinical examination findings, and no clinically relevant differences between dose levels were observed. The pharmacokinetic parameters at a first dose of 20 or 50 U/kg were similar to those at a second dose and were as follows: half-life, 7.45 and 7.25 hours; area under the plasma concentration-time curve from time 0 to infinity, 8.25 and 19.05 μg·h/mL; total clearance, 4.85 and 5.47 mL/min; and volume of distribution during the elimination phase, 3.12 and 3.41 L, respectively. The area under the plasma concentration-time curve increased in a generally linear, dose-proportional manner. No ethnic specificity in the pharmacokinetic profile was observed in the Japanese volunteers. The serum folate concentration decreased after glucarpidase administration in all the volunteers. The production of anti-glucarpidase antibody was observed in many cases in both cohorts. Although the long-term effect of anti-glucarpidase antibody will need to be investigated in the future, the antibody did not influence the pharmacokinetics of glucarpidase within 96 hours after the first dose. The observed safety and tolerability, pharmacokinetics, and pharmacodynamics support the continued evaluation of glucarpidase in patients with lethal methotrexate toxicities.
for <5% of methotrexate. 6 High-dose methotrexate has been established as a standard component of combined chemotherapy for acute lymphoblastic leukemia, malignant lymphoma, osteosarcoma, and other conditions. Even with careful supportive care, delayed methotrexate elimination inevitably occurs in 1% to 10% of patients receiving high-dose methotrexate, and the accompanying nephrotoxicity can lead to the exacerbation of hepatotoxicity, serious mucositis, and pancytopenia. 7,8 Several treatments, such as plasmapheresis, dialysis, and high-dose leucovorin, can be used to prevent potentially lethal methotrexate toxicities, but the effectiveness of these treatments has been limited. 8 The accompanying renal disorder seems to arise from disturbances in the elimination of plasma methotrexate, which is mainly excreted in the urine.
Glucarpidase, which was originally isolated from Pseudomonas species, has been developed as an enzymatic drug (a 390-amino acid homodimer protein with a molecular weight of 83 kDa) that directly decomposes methotrexate into 2,4-diamino-N10-methylpteroic acid and glutamate to prevent potentially life-threatening toxicity. 9 Currently, glucarpidase is the only and de facto standard drug approved for delayed methotrexate excretion in Europe and the United States.
Glucarpidase rapidly mediates the degradation of methotrexate and reduces plasma methotrexate concentrations by >95% within 10 minutes to 1 hour. [10][11][12][13][14] A theoretical 50 U/kg dose of glucarpidase, which is sufficient to reduce a high methotrexate concentration, has been used for clinical treatment. In an open-label, single-site study, Phillips et al 15 reported that glucarpidase was safe and effective at a dose of 50 U/kg. Several case reports of glucarpidase treatment at a dose of 50 U/kg, which is recommended for compassionate treatment, have been published. Previous case reports have suggested the efficacy of lower doses of glucarpidase (15-70 U/kg), but formal dose-finding studies of glucarpidase have not been conducted in humans.
Rebound of the methotrexate concentration after the first administration of glucarpidase, prompting repeated administration of glucarpidase, has been reported. Repeated glucarpidase treatment is likely to be less effective, considering the immunogenicity of glucarpidase, and repeat administration of glucarpidase within 48 hours of the first dose during the same methotrexate course is not recommended. 9,10 On the other hand, continued monitoring of the methotrexate concentration for >48 hours is recommended to detect potential rebounds in the methotrexate concentration. 9 No pharmacokinetic studies have examined the repeated administration of glucarpidase.
Folate and folate rescue therapy are important for preventing adverse events associated with methotrexate, such as oral ulceration and oral mucositis. 16 However, no prospective studies have evaluated the pharmacology of glucarpidase, and its effects on folate and its derivative 5-methyltetrahydrofolate (5-MeTHF), as well as on the production of anti-glucarpidase antibody, remain unknown.
Little information is available regarding the safety and pharmacokinetic properties of glucarpidase. There is a need to clarify the pharmacokinetics of glucarpidase in plasma and urine across ethnic groups, and to confirm, through clinical trials evaluating the nondevelopment of dose-limiting toxicity (DLT) at 2 different dose levels, that a 50 U/kg dose of glucarpidase is necessary. We conducted and evaluated a phase 1 study of the safety, pharmacokinetics, and other pharmacologic effects of glucarpidase at 2 doses, with 1 repeated administration, in a clinical drug development trial in Japanese subjects.
Study Design and Subjects
This was an open-label, randomized parallel-group, phase 1 study in which subjects were randomized into 2 cohorts, with each cohort being allocated a different fixed dose (low dose or standard dose) of glucarpidase. The study took place between November 2011 and January 2012 in healthy Japanese adult volunteers at the Department of Pharmacology, Hamamatsu University School of Medicine, Shizuoka, Japan. Eligible participants were healthy men aged 20 to 45 years weighing 50 to 100 kg with a body mass index of 18.5-25.0 kg/m². Subjects were excluded if they had a history of clinically significant neurologic, cardiovascular, pulmonary, hematologic, gastrointestinal, hepatic, renal, endocrine, or adrenal function disease; a drug allergy; alcohol or drug abuse; or abnormal infectious disease blood test results (hepatitis B virus, hepatitis C virus, HIV, or a positive serologic test for syphilis). Each cohort contained 8 subjects. Written informed consent was obtained from each subject before participation in this study. This study was approved by the Human Institutional Review Board of Hamamatsu University School of Medicine and conducted in accordance with the ethical principles of the Declaration of Helsinki.
This investigator-initiated clinical trial was supported by the Center for Clinical Trials, Japan Medical Association. The study was registered with the Japan Medical Association Clinical Trial Registry (identifier: JMA-IIA00078).
Dosage and Administration
Glucarpidase (BTG International Ltd., London, UK) was administered as an intravenous injection over 5 minutes. Sixteen healthy young volunteers without methotrexate treatment were randomly assigned to receive 20 U/kg (low dose; cohort 1) or 50 U/kg (standard dose; cohort 2) of glucarpidase. In this study, a glucarpidase dose of 1 unit was equivalent to 2.2 mg in the prepared intravenous solution. The contents of the vial were reconstituted with 1 mL of saline for injection, and the intravenous solution was administered into a peripheral vein over 5 minutes. Blood samples were drawn from a contralateral venous site. Subjects with cancer were not included in this trial. In each cohort of 8 subjects, 2 doses of glucarpidase at the same dose level were administered at a 2-day interval.
Safety Analysis
All observed and self-reported safety parameters were evaluated for 7 days after the start of glucarpidase administration and were assessed by monitoring for adverse events, clinical laboratory data, and vital signs (performance status, body temperature, systolic and diastolic blood pressure, pulse rate), and subjective symptoms.
DLTs were defined by the occurrence of severe toxicities according to the National Cancer Institute's Common Terminology Criteria for Adverse Events classification version 4.0, Japanese translation by the Japan Clinical Oncology Group. The DLT criteria of this study were as follows: (1) a grade 2 adverse event with a duration of ≥2 days, (2) a grade 3 or grade 4 adverse event, and (3) death within 7 to 10 days after the first administration of glucarpidase.
Pharmacologic Effects
Plasma samples for pharmacokinetic assessments of glucarpidase were collected before dosing and at 5 and 15 minutes and 2, 8, 12, 24, and 48 hours after the start of the first dose of glucarpidase; 48 hours after the first dose, the second dose was administered, and plasma samples were collected before dosing and at 5 minutes and 8, 12, 24, and 48 hours after the second dose. Urine samples were collected during the following intervals: 0 to 2 hours, 2 to 12 hours, and 12 to 24 hours after the start of the first administration of glucarpidase only.
The serum folate and plasma 5-MeTHF profiles were evaluated before dosing and at 48 and 96 hours (48 hours after second dose) after the start of the first dose of glucarpidase. Subjects with low folate levels were monitored for 7 to 10 days with no specific treatment administered. Plasma samples for the analysis of the anti-glucarpidase antibody titer were taken before dosing and 4 to 6 weeks after glucarpidase administration. Subjects with positive titer values were monitored for 2 to 7 months thereafter.
Statistical Analyses
The pharmacokinetic parameters of glucarpidase were calculated using WinNonlin software (Pharsight Corp., Mountain View, California). The pharmacokinetic evaluations included maximum plasma concentration (Cmax), time to Cmax, area under the plasma concentration-time curve (AUC) from time 0 to 24 hours (AUC0-24), AUC from time 0 to infinity (AUC0-∞), total clearance, volume of distribution during the elimination phase, steady-state volume of distribution, and half-life. Each AUC was calculated using the linear trapezoidal rule. Concentrations below the lower limit of quantification (LLOQ) were recorded as below the limit of quantitation and represented as 0 in the calculations of the pharmacokinetic parameters.
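For readers without access to WinNonlin, the noncompartmental quantities above can be reproduced with a short script. The sketch below is illustrative only: the sampling times follow the first-dose schedule described in this paper, but the concentrations and the 10-mg dose are invented placeholders, not study data, and a validated analysis would handle below-LLOQ values and the selection of terminal-phase points more carefully.

```python
import numpy as np

# Post-dose sampling times (h) matching the first-dose schedule in the text;
# concentrations (ng/mL) are invented placeholders, not study data.
t = np.array([5/60, 15/60, 2, 8, 12, 24, 48])
c = np.array([2400.0, 2100.0, 1600.0, 950.0, 640.0, 200.0, 22.0])

cmax = c.max()                                   # maximum plasma concentration
tmax = t[c.argmax()]                             # time to Cmax

# AUC(0-tlast) by the linear trapezoidal rule, as stated above.
auc_0_tlast = np.trapz(c, t)                     # ng*h/mL

# Terminal rate constant from a log-linear fit of the last 3 points,
# then half-life and extrapolation of the AUC to infinity.
lambda_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # 1/h
half_life = np.log(2) / lambda_z                       # h
auc_0_inf = auc_0_tlast + c[-1] / lambda_z             # ng*h/mL

# Total clearance and the volume of distribution during the elimination
# phase follow from the administered dose (assumed 10 mg here).
dose_ng = 10e6
cl_ml_per_min = dose_ng / auc_0_inf / 60               # mL/min
vz_l = (dose_ng / auc_0_inf) / lambda_z / 1000         # L

print(f"Cmax={cmax:.0f} ng/mL, t1/2={half_life:.2f} h, "
      f"CL={cl_ml_per_min:.2f} mL/min, Vz={vz_l:.2f} L")
```

With these placeholder values the script returns a half-life of roughly 7.4 hours, in the same range as the study's reported 7.25-7.45 hours.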
Statistical analyses were performed using JMP Pro 15 (SAS Institute Inc., Cary, North Carolina). An unpaired t-test with Welch's correction was used for the statistical analyses of continuous variables in 2 independent groups. A P value <.05 was considered statistically significant.
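As a concrete illustration of the stated test, the following sketch applies an unpaired t-test with Welch's correction to two hypothetical cohort summaries. The paper used JMP Pro, but SciPy's `equal_var=False` option implements the same test; the per-subject values below are invented, not the study's individual data.

```python
from scipy import stats

# Hypothetical per-subject half-lives (h) for the two cohorts (n = 8 each);
# these numbers are illustrative, not study data.
cohort1 = [7.2, 7.6, 7.3, 7.8, 7.1, 7.5, 7.4, 7.7]   # 20 U/kg
cohort2 = [7.0, 7.4, 7.2, 7.3, 7.1, 7.5, 7.2, 7.3]   # 50 U/kg

# Unpaired t-test with Welch's correction (variances not assumed equal).
t_stat, p_value = stats.ttest_ind(cohort1, cohort2, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")  # p < .05 considered significant
```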
Bioanalyses
Blood samples were centrifuged for 15 minutes, separated immediately after centrifugation, and stored at -70°C until analysis. Plasma and urine concentrations of glucarpidase were determined using an enzyme-linked immunosorbent assay (ELISA).
In the analysis of glucarpidase, the ELISA plate was coated with goat purified antibody against glucarpidase overnight and incubated with calibration standards, quality controls, and study samples. After excess sample had been washed away 4 times, an affinity-purified anti-glucarpidase immunoglobulin G solution was added to the plate as a secondary antibody and incubated. The wells were then washed before the addition of horseradish peroxidase-conjugated goat anti-rabbit immunoglobulin G (H+L-chain specific). The optical density of each well was measured by the dual-wavelength method, using a detection wavelength of 450 nm and a reference wavelength of 630 nm. The calibration ranges for glucarpidase were defined by the LLOQ and the upper limit of quantification (ULOQ), with 7 calibration standards of different concentration levels, including the LLOQ and the ULOQ, and a correlation coefficient ≥0.999. The LLOQ and the ULOQ of the urine and plasma concentrations of glucarpidase were 1 ng/mL and 640 μg/mL, respectively. The interassay variability of the urine and plasma levels was ≤12.1% and ≤7.8%, respectively.
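The paper does not state the regression model behind its calibration curve. As a hedged sketch, the snippet below fits a four-parameter logistic (4PL) curve, a common choice for sandwich ELISAs, to invented standards spanning the stated range (1 ng/mL to 640 μg/mL) and back-calculates a sample concentration from its blank-corrected optical density; none of these values are the validated assay data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    # a: response at zero concentration, d: response at infinite
    # concentration, c: inflection point (EC50), b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Invented calibration standards (ng/mL) and blank-corrected OD450-OD630
# readings; illustrative only, NOT the validated assay data.
std_conc = np.array([1, 10, 100, 1_000, 10_000, 100_000, 640_000])
std_od   = np.array([0.05, 0.10, 0.35, 1.10, 2.20, 2.90, 3.10])

popt, _ = curve_fit(four_pl, std_conc, std_od,
                    p0=[0.02, 3.2, 2000.0, 1.0], maxfev=10_000)

def back_calculate(od, a, d, c, b):
    # Invert the 4PL to recover a concentration from a measured OD.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 1.5
print(f"estimated concentration: {back_calculate(sample_od, *popt):.0f} ng/mL")
```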
Plasma anti-glucarpidase antibody titers were determined using ELISA. In the analysis of anti-glucarpidase antibody, briefly, glucarpidase-coated ELISA plates were loaded with samples. After excess sample had been washed away, biotinylated glucarpidase was added to the plate, and the wells were washed again. A color reaction was elicited by adding peroxidase-labeled avidin D, using a detection wavelength of 450 nm and a reference wavelength of 540 nm. The interassay variability of the anti-glucarpidase antibody titer was ≤16.3%. A positive anti-glucarpidase antibody titer was judged as any value over the cut point, which was defined by the average value of drug-naive specimens obtained before the administration of glucarpidase. Serum concentrations of folate were measured by a chemiluminescent enzyme immunoassay using the Access Folate Reagent and UniCel DxH 800 (Beckman Coulter, Inc., Indianapolis, Indiana). The LLOQ and the ULOQ of the serum concentrations of folate were 1 ng/mL and 22 ng/mL, respectively. The interassay variability of the serum level was ≤2.2%.
The glucarpidase and 5-MeTHF levels and the anti-glucarpidase antibody titer in plasma or urine were analyzed by Shin Nippon Biomedical Laboratories, Ltd. (Wakayama, Japan). The serum folate level was analyzed by SRL laboratory, Inc. (Tokyo, Japan). All of the analytical methods were validated according to the Guidelines on Bioanalytical Method Validation in Pharmaceutical Development in Japan.
Safety
Sixteen randomized subjects (Table 1) who received glucarpidase were included in the safety and pharmacologic analyses. Two cohorts at repeated dose levels of 20 U/kg (cohort 1) and 50 U/kg (cohort 2) were examined; no DLTs or significant clinical examination findings were observed. Among the clinical laboratory parameters, some grade 1 events were reported. The rates of adverse events in cohorts 1 and 2 were 75.0% and 62.5%, respectively. No clinically relevant differences between glucarpidase dose levels (20 U/kg and 50 U/kg) were seen (Table 2).
Pharmacologic Analyses
Glucarpidase. The plasma concentrations after repeated dosing in cohorts 1 and 2 are shown in Figure 1. The pharmacokinetic parameters after the administration of glucarpidase in both cohorts are shown in Table 3. The AUC increased in a generally linear, dose-proportional manner in both cohorts and was similar for the first and second doses. No accumulation of glucarpidase at the time of the second administration was seen, consistent with the theoretical accumulation rate calculated using the half-life. Glucarpidase was not detected in the urine after administration in any of the subjects.
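The theoretical accumulation referred to above can be checked with a one-line calculation: for a drug dosed at interval τ with terminal rate constant λz, the predicted accumulation ratio is 1/(1 − e^(−λz·τ)). The half-life used below is an interpolation of the reported 7.25-7.45 hours, not a study-reported figure.

```python
import math

half_life_h = 7.35              # approximate mean of the reported 7.25-7.45 h
tau_h = 48.0                    # interval between the first and second doses
lambda_z = math.log(2) / half_life_h
accumulation_ratio = 1.0 / (1.0 - math.exp(-lambda_z * tau_h))
print(round(accumulation_ratio, 3))   # ~1.011, i.e., ~1% predicted accumulation
```

The ratio of roughly 1.01 implies essentially no accumulation over a 48-hour interval, matching the observation above.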
Anti-Glucarpidase Antibody. The anti-glucarpidase antibody titer was negative in all the subjects before dosing. In cohorts 1 and 2, positive anti-glucarpidase antibody results were obtained at 4 to 6 weeks after glucarpidase administration in 4 and 7 cases, respectively (Table 4). With the exception of 1 case, positive antibody titers continued to be observed in both groups for 5 to 7 months.
Folate. The effects of glucarpidase on the concentration profile for folate following the administration of glucarpidase are shown in Figure 2 for both cohorts. The mean (± standard deviation [SD]) concentrations of folate before dosing and at 48 and 96 hours were as follows: cohort 1: 5.19 ± 1.97, 3.40 ± 0.92, and 3.58 ± 0.88 ng/mL; and cohort 2: 4.19 ± 0.98, 2.86 ± 0.87, and 3.04 ± 0.78 ng/mL.

5-MeTHF. 5-MeTHF was detected in 4 of the 8 cases in cohort 1 and 2 of the 8 cases in cohort 2. The concentrations of 5-MeTHF decreased or were not detectable after the administration of glucarpidase (Table 5).
Discussion
This glucarpidase phase 1 study was conducted to confirm tolerability and the absence of DLTs and to evaluate the pharmacodynamics and pharmacokinetics of repeated administration at 2 dose levels. Glucarpidase was tolerated in both cohorts, with an acceptable safety profile, no DLTs, and no clinically relevant differences between dose levels (Table 2).
In the pharmacokinetics study, the total clearance and distribution volume of glucarpidase at steady state were similar between the 2 cohorts and between the first and second doses (Table 3). Proportional, dose-dependent increases in Cmax and AUC0-∞ were confirmed for both cohorts. An unchanged form of glucarpidase was not detected in any of the urine samples in the present study, supporting the finding reported by Phillips et al 15 that the pharmacokinetics were unaltered in patients with impaired renal function. Consequently, dose adjustments of glucarpidase are not required for patients with renal impairment. The pharmacokinetic parameters obtained at a dose level of 50 U/kg in the Japanese subjects were a Cmax of 2430 ng/mL and an AUC0-∞ of 20.88 μg·h/mL, which were similar to previously reported results for healthy subjects of White and African descent (Cmax, 2970 ng/mL; AUC0-∞, 22.4 μg·h/mL).
The estimated distribution volume of glucarpidase per body weight was 56.1 mL/kg, calculated using the volume of distribution during the elimination phase at the first 50 U/kg dose and individual body weight. This apparent distribution volume, similar to that in a previous study (steady state, 58.0 mL/kg), 15 suggests that glucarpidase, a protein of high molecular weight, is mainly distributed in the plasma. A linear correlation between blood volume and body weight in pediatric subjects has been reported, with blood volumes of 52.3 ± 8.3 mL/kg in boys and 47.9 ± 7.7 mL/kg in girls. 17 The metabolism of glucarpidase does not depend on organ function or pediatric growth. These findings provide important suggestions for the pharmacokinetics in children, which are thought to be similar to those in adults. Indeed, a pooled analysis of clinical trials with a median age of 20 years (range, 5 weeks to 84 years) showed a clinical efficacy of glucarpidase corresponding to a ≥99% sustained reduction in the methotrexate concentration. 18 Although the serum folate concentration decreased after glucarpidase administration in all the subjects (Figure 2), significant clinical symptoms were not observed, and the serum folate level was easily restored by regular food intake. Quantitative evaluations of the reductions in serum folate concentrations and plasma 5-MeTHF concentrations were difficult because of the low baseline concentrations. Restricted food intake often occurs during high-dose methotrexate therapy; therefore, the administration of intravenous folate is recommended on the day following glucarpidase administration.
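As a rough consistency check (an inference from the reported means, not a calculation given in the paper), dividing the mean elimination-phase volume of distribution at 50 U/kg from the abstract by the weight-normalized estimate above recovers a plausible mean body weight for the cohort:

```python
vz_ml = 3.41 * 1000      # mean Vz at the first 50 U/kg dose, from the abstract (L -> mL)
vz_per_kg = 56.1         # weight-normalized estimate reported above (mL/kg)
print(round(vz_ml / vz_per_kg, 1))   # ~60.8 kg implied mean body weight
```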
The production of anti-glucarpidase antibody was observed in many cases in both cohorts (Table 4). The high molecular weight of glucarpidase may result in strong immunogenicity, but a National Cancer Institute study reported that the prevalence of anti-glucarpidase antibody decreases after 6 months. Thus, a possible reduction in efficacy caused by the presence of a neutralizing antibody, and a subsequent allergic reaction, is unlikely if readministration occurs after a long interval.
The effect of anti-glucarpidase antibody on efficacy and safety during the second dose of a methotrexate course remains unknown. However, the pharmacokinetic parameters of a second dose administered at 48 hours after the first dose were similar to those of the first dose. Although the long-term effect of anti-glucarpidase antibody must be evaluated in future investigations, anti-glucarpidase antibody was not thought to affect the pharmacokinetics of glucarpidase within 96 hours after the start of the first dose.
Conclusion
No DLTs or significant clinical examination findings were observed in a repeated-dose phase 1 study of glucarpidase conducted at 2 dose levels in healthy Japanese adult subjects. The 50-U/kg dose of glucarpidase may be suitable as a safe intervention that is capable of achieving a maximum effect. From a pharmacokinetics point of view, repeat dosing with glucarpidase within 96 hours after the start of the first dose has the potential to reduce high methotrexate concentrations. The administration of intravenous folate on the day following glucarpidase administration is recommended for high-dose methotrexate therapy because of the degradation of serum folate. The observed safety and tolerability, pharmacokinetics, and pharmacodynamics support the continued evaluation of glucarpidase in patients with lethal methotrexate toxicities in cancer chemotherapy.
From anonymity to “open doors”: IRB responses to tensions with researchers
Background Tensions between IRBs and researchers in the US and elsewhere have increased, and may affect whether, how, and to what degree researchers comply with ethical guidelines. Yet whether, how, when, and why IRBs respond to these conflicts have received little systematic attention. Findings I contacted 60 US IRBs (every fourth one in the list of the top 240 institutions by NIH funding), and interviewed leaders from 34 (response rate = 55%) and an additional 12 members and administrators. IRBs often try to respond to tensions with researchers and improve relationships in several ways, but range widely in how, when, and to what degree (e.g., in formal and informal structure, content, and tone of interactions). IRBs varied from open and accessible to more distant and anonymous, and in the amount and type of “PR work” and outreach they do. Many boards seek to improve the quantity, quality, and helpfulness of communication with PIs, but differ in how. IRBs range in meetings from open to closed, and may have clinics and newsletters. Memos can vary in helpfulness and tone (e.g., using “charm”). IRBs range considerably, too, in the degrees to which they seek to educate PIs, showing them the underlying ethical principles. But these efforts take time and resources, and IRBs thus vary in degrees of responses to PI complaints. Conclusions This study, the first to explore the mechanisms through which IRBs respond to tensions and interactions with PIs, suggests that these committees seek to respond to conflicts with PIs in varying ways – both formal and informal, involving both the form and content of communications. This study has important implications for future practice, research, and policy, suggesting needs for increased attention to not only what IRBs communicate to PIs, but how (i.e., the tone and the nature of interactions). IRBs can potentially improve relationships with PIs in several ways: using more “open doors” rather than anonymity, engaging in outreach (e.g., through clinics), enhancing the tone as well as content of interactions, educating PIs about the underlying ethics, and helping PIs as much and proactively as possible. Increased awareness of these issues can help IRBs and researchers in the US and elsewhere.
only in limited ways. In July 2011, The US Office of Management and Budget released an Advance Notice of Proposed Rule Making (ANPRM) recommending changes to 45-CFR-46 (the so-called "Common Rule"), the federal regulations governing IRBs [8,9]. The ANPRM addresses several issues, reflecting in part researchers' complaints about IRBs. Specifically, the document seeks to increase central review; reduce variations between IRBs that can impede research; allow some minimal risk research to be "excused" from IRB review; and address challenges raised by biobanking. But whether any of these possible changes in formal structural elements of IRB reviews will be made, and if so, which, to what degree, in what form, and when, is unclear. Critical questions arise, too, of whether other changes may be needed or beneficial as well in improving the current system. Importantly, though PIs' complaints about IRBs have been described [1,2,6,10,11], little, if any, attention has been given to whether IRBs respond to these critiques, and if so, how. Yet researchers' beliefs that IRBs are unfair may dissuade these researchers from fully adhering to research ethics guidelines [12]. Logistical aspects of IRBs have been examined (e.g., sociodemographics of members, and time required for IRB approval [2,13]), but whether IRBs decide to address strains with researchers, and if so, how, and to what degree, have not been systematically examined.
As part of a qualitative, in-depth interview study of IRB chairs, focused on understanding their views, attitudes, and roles regarding research integrity (RI), broadly defined [14], many issues arose, e.g., concerning differences in how IRBs made decisions and interacted with PIs, and how they viewed and approached conflicts of interest [15], central IRBs [16], research in the developing world [17], and variations between IRBs [18]. Yet other separate issues arose concerning how IRBs addressed interactions and conflicts with PIs, and tried to improve these relationships. Since qualitative research allows for further probing of themes that arise, these interviews then explored these mechanisms in greater detail. Crucial questions emerged of how IRBs responded to PIs, and what approaches facilitated and/or impeded their relationships. This paper thus analyzes and explores these realms.
Methods
As described elsewhere [14], in-depth telephone interviews of approximately 1 to 2 h each were conducted with 46 chairs, directors, administrators, and members. The leadership of 60 IRBs (every fourth one in the list of the top 240 institutions by NIH funding) was contacted and IRB leaders from 34 of these institutions were interviewed, yielding a response rate of 55%. In certain cases, both a chair/director as well as an administrator from an institution were included (e.g., if the former thought that the latter could better provide detail about certain areas). Thus, in all, 39 chairs/directors and administrators from these 34 institutions were interviewed. To understand the impact of varying social and institutional milieus in these domains, institutions ranged in location, size, and public/private status. Every other interviewee was also asked to disseminate information about the study to their IRB members, in order to also recruit 1 member from each IRB. Seven other members were thus included, as well.
As summarized in Table 1, the 46 interviewees included 28 chairs/co-chairs; 10 administrators (including 1 director of a compliance office); and 7 members. In all, 58.7% were male, and 93.5% were White. Interviewees were distributed across geographic regions, and institutions by ranking in NIH funding. This study was approved by the Columbia University Institutional Review Board. All interviewees gave informed consent.
Appendix A presents relevant portions of the semistructured interview guide, which sought to elucidate aspects of interviewees' decisions, lives, and social situations by trying to grasp their own experiences and language, not by imposing theoretical structures [19]. The methods draw on elements from grounded theory [20].
After completion of all of the interviews, a trained research assistant (RA) and the principal investigator (PI) conducted additional analyses in two phases. In the first phase, each interview was read, and "core" codes or categories were assigned to blocks of text (e.g., instances of IRB interactions and tensions with PIs). Together, these independently-developed coding schemes were then reconciled. A coding manual was produced, listing and defining the codes. Any areas of disagreement were explored until consensus was reached. Issues that did not fit in the original coding manual were discussed, and modifications were made when necessary.
In the second phase of the study, the two coders independently content-analyzed the interviews, examining the main subcategories, and ranges of variation in each of the core categories. They reconciled sub-themes into a single set of "secondary" themes and an elaborated set of core categories. Sub-themes included, for example, specific types of interactions with researchers (e.g., use of memos, face-to-face meetings), PI reactions (e.g., PIs' complaints about the IRB to institutional leadership), and IRB efforts to reduce conflicts with PIs (e.g., changing the tone of memos sent).
Codes and sub-codes were then used in analysis of all of the interviews, with the two coders analyzing all interviews.
Results
As summarized in Figure 1, and described more fully below, IRBs face several choices regarding both the structure and content of interactions with PIs. These committees often try to respond to tensions with PIs, and improve relationships in various ways, but range in how, when, and to what degree. IRBs varied in the formal and informal social structures, and the content and tone of interactions with PIs, and confronted several challenges.
From anonymity to open doors
In general, IRBs differed in the types, amounts, and effectiveness of their efforts, and ranged across a spectrum from remaining distant and anonymous, to being open and accessible. IRB members often said they knew they were seen as "obstructionistic" by PIs, but these interviewees varied in how much they were troubled by, and responded to, these perceptions. Chairs generally said that they were supportive of PIs, but they differed widely in how, and to what degree they demonstrated that stance. IRBs often adopted approaches to try to reduce tensions, but these methods then had both strengths and limitations that IRBs confronted.
Some IRB leaders suggested that they were highly attentive to PI views, and that they tried to be "open" and helpful, while others expressed these concerns much less. For example, some IRBs tried to be very open in both the form and tone of the interaction.
Our approach here is: "Please call." Not: "We're out to get you." With a lot of IRBs, the relationship and rapport they have with the faculty causes the problem. PIs don't want to deal with them, as opposed to picking up the phone and trying to get some information. That's just a general philosophy, tone, and culture. IRB26

IRBs may thus differ in both their implicit and explicit attitudes and practices.
IRB anonymity
In contrast, other interviewees suggested that their committees remained more distant from PIs. Much of IRB work occurs behind closed doors, and chairs may try to shield IRB members from researchers' criticism by keeping these members anonymous. Such anonymity can result from several causes. For instance, a reviewer may have a relatively low position in the institutional hierarchy, and not want to offend superiors on whom his or her future may depend.
One scientist was new to the IRB, and was appalled at a colleague's poorly thought-out protocol, but unwilling to go to him outside of the meeting, and make suggestions. The PI probably wouldn't have listened, so we just kind of limped along with it. IRB26

This anonymity can thus create tensions, protecting the reviewer, but potentially delaying or hampering streamlining of a review. To avoid friction with PIs, many IRBs have reviewers of specific protocols remain anonymous.
At our institution, the cancer researchers know who the cancer reviewers are, from whom the feedback comes. However, friction from that confrontation is held to a minimum. We give members an option of whether to reveal themselves or not when communicating to PIs prior to committee meetings. We respect reviewers' desire for anonymity and confidentiality. If members have questions for a PI, they can go through IRB staff instead. Otherwise, our IRB membership lists are provided to PIs along with our correspondence, making the names of the reviewers known. IRB9

Regarding openness to PIs, IRBs may develop their own group processes and culture that can be difficult to change. A chair may try to alter an IRB by adding members who, he or she feels, will optimally perform their tasks. The chair can also dismiss other members. The interviewee above wants members to "reach out" to PIs, and he has:

...removed any member from the IRB who wants to remain permanently anonymous, because we require that board members make an effort to reach out to our PIs. That's just pure old marketing, and good will with our researchers. That's how we change the perception here of the IRB. IRB9

Some IRBs may thus see themselves as having to actively "market" their services (i.e., to engender support from researchers).
When he revamped the IRB, to improve relationships with researchers, this director asked a dozen members to leave.
When we reorganized, we removed 12 members, who had been on the IRB for many years, because most of them had no desire to communicate with investigators prior to meetings. One of the prerequisites to then becoming a member was willingness to reach out and contact PIs prior to committee meetings. At times, they kept confidentiality when it was a personal colleague, or friend, or someone they work with closely. That's OK. IRB9

IRB members can thus vary in their interest and willingness to communicate with PIs, i.e., the degrees to which they would "reach out." Questions arise of whether IRB decision-making processes should be more transparent to not only a particular PI, but more broadly. IRBs keep minutes private, along with all correspondence and decisions (except to the PI involved). Yet at times, certain interviewees felt that heightened transparency could potentially also improve perceptions of IRBs among PIs.
Minutes are not now publicly available, but should be. I don't see why not. I guess researchers may feel it's embarrassing to have your stuff rejected. IRB22

Some interviewees felt that redaction of details at any institution may be hard, but not impossible.
Outreach and public relations (PR) work
IRBs varied in the amount and type of "PR work" and outreach they do with PIs, with some boards working hard to convey the message that "we're not the enemy." As one chair said, "Our educational sessions have helped. Because the IRB members are human beings." (IRB4) IRBs may thus try to shape their image and alter the notion that they are a faceless bureaucracy, rather than consisting of fellow individuals.
At times, interviewees felt that PIs blamed IRBs for trying to make protocols conform to federal regulations. An IRB may try to establish "good PR" to help reduce this problem by making itself less impersonal and anonymous.
Our IRB has done years of public relations work all over campus saying, "We're not the enemy. We're not here to hurt you. We're here to help you, to talk with you. Tell us what problems you're having. How can we assist?" That hasn't always been the case here. IRB39

IRBs and institutions may also thus try to change their approaches over time.
Some IRBs go further, asking PIs how the IRB can improve its interactions and relationships. In these efforts, IRBs can be very strategic: being proactive, and targeting wary researchers.
We have identified departments that are particularly hard to work with, and said, "Can we come talk to you?... Let us know how we can work with you." We go make presentations to faculty meetings, project coordinators, or individual faculty members: "How can we make this better for you?" IRB39

Certain IRBs, recognizing that they are viewed warily by PIs, explicitly try hard to alter these negative perceptions, to give the message, "We're not devils. We're doing a good job" (IRB27), seeking to reverse this metaphor of embodying evil (i.e., "devils").
Chairs differ in these efforts, which can take time and effort. New chairs may adopt methods that are new to an institution. One started calling new PIs.
I had some tools that this campus hadn't seen before. I just picked up the phone and said, "Can I come and show you what this is all about? Because I know you're not going to know." And they were fine with it. I tell them it's going to make it a whole lot easier. IRB21

He also invites PIs to educate the IRB, which he thinks works well.
If we have new methodologies, protocols, where the committee is frankly just ignorant, they start out gray. We usually invite that investigator to come in, if they have an illustration or tools to help us understand. Then the committee can get educated. From a PR standpoint, that's helped, because investigators feel we're willing to be taught, and reach out. IRB21
Open door policies
Many chairs seek to improve relationships with PIs through the structure and amount of other kinds of interactions as well, striving to maintain "open door" policies, making themselves directly and highly accessible to PIs via email, phone, or cell phone. These types of physical and logistical structures of personal interaction may shape psychological and social attitudes and vice versa. But the nature, extent, and rationales involved varied.
Here, too, chairs may play critical roles, and prod their IRB to collaborate with PIs as much as possible. "We encourage the reviewers to work constructively with the reviewees, not have one of those hands-off things." (IRB33) Chairs may push for open doors because of their personal, political, and/or bureaucratic philosophy. Some chairs were wary of bureaucracies. ("We are easy, and try to be accessible." IRB5) Another chair, who is a lawyer, tries to avoid having to "police" researchers: "I really hate having to deal with compliance issues. I hate being the enforcer... That's not fun. That's not what I do this for." (IRB19) Other factors, such as structural space parameters, can also play important roles in shaping IRB-PI relationships. As one IRB administrator said:

At times, it has more to do with physical office design: we don't have a receptionist. When we had someone between us and the world, PIs said that I "suddenly started screening calls," or "didn't have an open door policy anymore." Now, researchers feel they can just come in and sit down, and we can talk about their protocol: what they haven't addressed, whether they can address it, and whether I can explain to the board why it's not there, that it's on its way. IRB13

At many institutions, interviewees complained that their IRBs invited PIs to meetings, but that these PIs often did not attend. These interviewees felt that such attendance could improve relationships, showing that IRBs are trying to be reasonable. Chairs may be surprised that researchers do not accept these invitations.
It's an open forum meeting. We tell all researchers to come. But they won't. It's a two or three-hour evening meeting, though they don't have to come to the whole thing. It would help if they could look around the table and see their colleagues, and people from the lay community, and clergymen: this isn't a group of cynics and big red pens. We really discuss the issues. Researchers just need to realize that this is what actually happens during the meetings, and we're all pretty reasonable. IRB27

Some chairs thus try to have PIs not see IRBs simply as faceless bureaucrats.
Such joint meetings can also prompt IRBs to be more sensitive to researchers.
It's tempting, and easier, to sit around a table and be really tough and critical and blunt when we have some anonymous individual; we've got a name there, but nobody knows them. When you have met the individual, you tend to temper those kinds of remarks. Even if ultimately the message is the same, we've backed off on the brute force. IRB21

It may be harder for IRBs to be overly harsh and callous to a PI when meeting him or her in person, as opposed to interacting only through memos.
Yet other IRBs may prefer to close their meetings, or struggle to determine what to do. One administrator said, "In the past, the IRB would have investigators go to the meeting. Now they've closed them, which is helpful." (IRB23) Several IRBs established other institutional structures as well, such as "clinics," to address PI concerns outside of formal reviews, per se.
We started doing clinics. Researchers can meet with a subset of the IRB, and talk about what they want to do, and how to write their protocol. That helped. It was in a sense a pre-review, but also worked through questions that stymied the investigator from putting the protocol in. Mostly, [researchers] were afraid it wouldn't get through. Everybody really appreciated it: both the IRB members (because they got better protocols), and the investigators (because they better understood the processes, and could then write better protocols). IRB28

Such meetings and workshops may enhance openness, transparency, and communication.
The clinics showed the investigators what we're thinking, and brought them into the process, rather than just dropping the protocol off in some box, with a closed door. IRB28

To improve relationships, other IRBs established online or in-print newsletters with updates, e.g., a "tip of the week." As one chair said:

We did a newsletter to keep people up-to-date, because they are not in the IRB world, and don't know something has changed until it gets sprung on them. They said, "We'd like to know ahead of time what's going on." So, we established "The Tip of the Week." It was easy to email: write it once, click a button, and send. IRB4

IRBs can also disseminate such "tips" in response to errors that PIs may make. Thus, such advice can also potentially help prevent problems. This chair continued:

A couple of investigators changed a study, and the IRB didn't become aware until the protocol came up for renewal. So, we sent out a "Tip of the Week" that said you can't do that! "But if a subject is on site, and a procedure needs to be repeated, and the PI's best medical judgment is that it needs to be repeated, then repeat it. Don't wait for IRB approval." We tried to put federal regulations into a real-world environment. IRB4

Such advice may thus be helpful in several ways. Yet these efforts can consume much of a chair's or administrator's time. A chair who spends a lot of time with PIs to diminish the IRB's reputation of being obstructionistic may find that these activities take more time than he or she is compensated for. The chair above said his time is billed at 20%, but is really 35-40%.
I spend more time with investigators than do any of our members, because I am so sick and tired of hearing that the IRB is a roadblock or a stumbling block. I don't like the committee having that reputation. So I work very hard with our investigators to make that not the case. IRB4

The time demands of such enhanced IRB availability can necessitate difficult tradeoffs, and cause tensions. IRB chairs and administrators can become overwhelmed, and need to weigh the advantages of open doors vs. limited resources.
Hence, while some chairs are highly concerned about PI complaints and try hard to reduce these, other chairs may be far less responsive or flexible. The latter may remain more removed from PIs, and interact with them more indirectly. IRBs may also struggle to achieve a balance. Boards may try to adopt an overall "open" policy, but adjust and limit their approach with difficult PIs over time.
Some people are chronically unhappy: complainers. No matter what we do, they're unhappy. So, we try to be diplomatic, gracious, and non-confrontational, but hold our ground. We're not going to cave in just because somebody is yelling at us. We can't turn the world on its head because a PI got his protocol in late! IRB40

Changing the tone and content of interactions

Helpful memos: the content of communications

IRBs often sought to improve, too, the quantity, quality, and helpfulness of written communication with individual PIs, but varied in how and to what degree they did so. Both the content and the tone of communication and interactions can be important. Some chairs write lengthy memos to PIs in response to submitted protocols, assisting these PIs in rewriting studies. Yet these activities can take time, and chairs and staff may therefore carefully choose which PIs and studies to devote such efforts to. As one administrator said, "Some people just want you to write the darn thing for them. That's not my job." (IRB23) Hence, if a PI has a track history of poorly written proposals, the IRB may invest less such effort.
For a very flawed study, I am much more likely to write back a three-page letter with 20 points, basically re-writing the protocol for them, than I am to disapprove it. I disapproved one from an investigator with a track record of submitting flawed protocols and sloppy work, making us do a tremendous amount of work. Instead of doing the work for him for the fifth time, I said, "No. Here are the major problems. Fix it." Instead of a three-page letter, I write a half-page letter, identifying the major issues, and put the ball back into their court. IRB40

However, for various reasons, PIs may not all respond to these efforts as IRBs expect or hope. Interviewees felt that PIs revise and resubmit most, but not all, protocols. After receiving the three-page memo mentioned above, this chair said that the PI never resubmitted the proposal.
I wrote an extensive, helpful letter, and he never responded. My guess is that he couldn't fix it, and was overwhelmed. He is not a good researcher. But, for 99% of researchers, the concerns are fixable. They fix them, respond, and move on. IRB40
Charm: establishing the right tone in communications
IRBs also attempt to mitigate friction with individual PIs through not only the content of communications, but the tone as well. Several chairs tried hard to use a respectful tenor that gave the message that the IRB wishes to be helpful, not obstructionistic; but such an approach was not always easy to establish and maintain.
Several interviewees described the importance of having what one chair calls a "deft touch":

I always fear that faculty feel worn down by regulations, and don't have enough time for anything. I just try to get them to keep true to the IRB's mission, and not just dismiss it as a whole set of hoops they have to jump through. Sometimes that just requires a deft touch. I don't know if I'm even good at it; I hope I am. But that's what I try to do. I'd much rather somebody get a phone call from me, since I'm also a doctor, and have done research, than just hear from somebody in the office that they didn't do something. IRB32

Chairs may thus also be uncertain as to how effective they are in these efforts. Achieving this tone, while simultaneously ensuring that PIs are following the regulations, can also be among the most difficult aspects of IRB work: doing PR while trying to protect subjects as much as possible.
The hardest part of being an IRB administrator is walking this fine line. We are facilitators as well as monitors, and maintain a positive PR: we're here to facilitate and help you with the process. At the same time, we're here to make sure you comply with the regulations. Keeping the line of communication and trust open is critical, and setting the tone. IRB16

IRB chairs may thus struggle to have their staff use "the right manner" to improve relationships with PIs, but establishing and maintaining such a tone may in part be an innate ability that not everyone equally shares.
It's a challenge not only to find folks who have the talent for IRB work, but to teach people to get the right tone and balance, not shaking a stick at researchers, but trying to be collegial, and knowing when and how to make exceptions, be flexible, or compromise. We don't want to be one of those IRB offices that people hate, and complain about all the time. IRB18

To strengthen relationships with PIs, and defuse friction, one chair tries to "say no with a smile" (IRB29) for unrealistic requests for rushed approval. An administrator, originally from the South, uses "Southern charm" and a sense of humor. When PIs don't turn in paperwork, she says she tries to take the blame, rather than confronting them with their negligence.
PIs will say they brought paperwork over, and I know they didn't. They are sure. So most of the time, if there was an error, or they didn't send something, I try to be first to say, "You brought that consent form to me the other day, and I have absolutely no idea what I've done with it. Could you send me another one, please?" It doesn't really bother me anymore. PIs will backdate memos to the IRB, but we have a time clock. I'll say: "I know you intended to get that over here. I'm so sorry. Can we deal with it now? How can we help you today?" I say that because I'm not saying that they didn't turn it in. I'm trying to give them an out so that they don't have to say, "I promised that, but have no idea what happened!" It works a lot better for me to say, "I'm so sorry: it was in my hands and I must have misplaced it. Do you have another copy?" rather than, "Oh, we never received it." They know they didn't bring it. But we say, "We have looked high and low for that, but if you bring us another copy this afternoon, we'll see if we can work it into the schedule." It just doesn't do any good to make demands. Let's just move forward and see what we need to do to get everything running again. IRB13

Thus, IRBs can seek to defuse potential tensions before these erupt.
Educating PIs
Interviewees felt that PIs and research staff range widely in quality and quantity of prior education concerning research ethics and IRB procedures. Interviewees thought that institutions vary in whether they require training in the Responsible Conduct of Research (RCR) for all staff involved in all research, and if so, how much. Several interviewees want the federal government to mandate comprehensive training more clearly, "requiring education in human subjects protection for everybody" (IRB26).
IRB members sensed deficits in investigators' training in these areas: "research ethics training," now formally required by many institutions, may often consist merely of relatively short online exercises. Key aspects of issues, such as definitions of "adverse events," may not be included in Good Clinical Practices or Responsible Conduct of Research courses. Yet interviewees felt that researchers may resent additional requirements.
Research conducted by trainees may pose particular challenges. Interviewees were often unsure where and to what degree junior PIs have learned about research ethics. Interviewees thought that at some institutions, residents and other trainees were mandated to do research, but may lack adequate methodological or research ethics training. IRBs often felt that some trainees may have deficient education in appropriate research design (e.g., as to whether the sample size is appropriate to warrant the study). "Residents say, 'My faculty mentor told me what I need for the IRB, ' and the mentor is somebody I've never heard of, who's never done research." (IRB13)
Showing PIs the regulations
Some IRBs try to explicitly show PIs the regulations, to demonstrate that the IRB is not arbitrary in its use of power.
When you can tell a researcher, "The FDA says, 'No, you can't do this,' or 'You should do this,' or 'have to do this,'" they understand. If you can show them the regs, they're even happier. IRB25
There's nothing worse than saying, "The regulation's required." You always have to tie it back to an ethical principle: say not only what they need to do, but why. It's not what you say, it's the way you say it. IRB26

Again, how IRBs communicate can thus play important roles, but can vary. Not all IRB staff may offer such broader explanations.
Interviewees also felt that challenges existed in getting PIs to appreciate the ethics underlying the regulations. At times, interviewees thought that researchers resisted, or were not interested in these explanations. The guidelines themselves could also shift over time.
Getting faculty or investigator buy-in, so that they understand the reason behind the regulation, is a challenge: being current on regulations and institutional and regulatory expectations. IRB28

IRBs often felt that many PIs simply completed the necessary paperwork as requirements, not thinking of these documents as concerning larger ethical principles.
They don't think through the reasons why there's compliance, or put it in an ethical context. Whether the answers are right or not may not matter to them. IRB28

Several interviewees felt that IRB procedures and protections can themselves evolve to defend the institution from liability more than protect the well-being of subjects, thereby undermining PI dedication to these processes as ethically important.
Optimally, there would be an ethics and integrity arm of research, and the compliance committees' forms under that, bathed in ethics, rather than in institutional protection. IRB28

Encouraging PIs to appreciate the principles underlying IRB concerns or requests regarding a particular study may also require resources that IRBs may lack. This former chair continued:

Everything would come out better if a staff person would spend time with each investigator to improve understanding of the ethics behind the regulation, and help them write protocols in a more informed way. The science would be better, the subjects would end up better, and the investigator would feel a lot more buy-in. IRB28
Conclusions
This study, the first to explore the mechanisms through which IRBs respond to tensions with PIs, suggests that these committees react to conflicts with PIs in a variety of ways, both formal and informal, involving both the form and content of interactions and communications. These boards differ in how and to what degree they explicitly address conflicts with PIs: from proactively helping researchers respond to ethical concerns, to being less involved, and more anonymous. These data suggest that IRBs can potentially improve relationships with PIs in several ways: using more "open doors" rather than anonymity, engaging in outreach (e.g., through clinics), enhancing the tone as well as content of interactions, educating PIs about the underlying ethics, and helping PIs as much and as proactively as possible. Still, these efforts can require resources, and encounter PI resistance.
While prior studies of IRBs have tended to be quantitative, and to view IRBs as static, the present data illuminate how these committees interact with PIs within the context of dynamic and evolving relationships with individual PIs. These data thus highlight how IRBs operate as part of complex social systems, and serve as critical mediators between federal regulations and individual researchers.
IRBs face tensions partly because PIs may in effect "blame the messenger" (the IRB) for the news (that these regulations need to be followed). As IRBs have certain power [21], they can thus encounter challenges in establishing the right balance in these relationships. Since IRBs monitor PIs, these committees' PR efforts may be suspect. Undoubtedly, PIs are also very busy, and confront many competing demands for their time, making it difficult for them to attend meetings to which IRBs invite them. Moreover, while many chairs would like PIs educated to "understand the ethics behind the regulations," researchers may disagree with an IRB's interpretation and application of these regulations. While a common adage is that "good ethics makes good medical care," some interviewees extend this notion to research as well, feeling that good research ethics also make good science. Yet whether this aphorism pertains to all interpretations and applications of research ethics is unclear. As bureaucracies, IRBs vary not only in the formal social structures they establish, but in their tone and attitudes, which can shape how communication occurs, both formally and informally. IRBs face choices concerning how, when, and to what degree to interact and communicate with PIs, monitoring and seeking to shape these researchers' attitudes and behaviours. A lack of transparency (vs. openness) can exacerbate PI frustrations with, and demonization of, IRBs. It may be easier to demonize an "anonymous" bureaucracy than fellow human beings. IRBs, more than researchers, may prefer the lack of transparency, and argue that keeping minutes private is easier than making these documents more available and redacting details. But that objection may not sufficiently offset the potential benefits of openness in fostering ethical behaviour. Redaction may be possible. These interviewees thus raise questions of how much anonymity is or should be permitted.
This study suggests the need for increased attention not only to what IRBs communicate to PIs, but to how they do so (i.e., the tone and the nature of interactions), and to examining the lived experiences of IRBs and PIs: the ways interactions about ethics are carried out, which can shape the effectiveness of these interactions.
Federal regulations do not explicitly discuss these issues, and some IRBs have developed their own approaches that can potentially be adopted more widely. In part, tensions may exist between IRBs and PIs because of underlying conflicting priorities: pursuing research vs. protecting subjects. The strategies presented here (e.g., "open doors") may not wholly eliminate tensions, but can help.
For several reasons, including desires to improve relationships between IRBs and PIs, the ANPRM seeks to make formal structural changes (e.g., increasing the use of central IRBs in multi-site studies) [6,9,22]. But the present data suggest that not only the formal structure, but the content and tone of interactions are crucial. For full board reviews of multisite studies, for instance, centralization alone may not resolve all tensions. To reduce these strains, these data suggest that informal behaviours and attitudes of both IRBs and PIs should also shift.
While limited resources may restrict these committees in certain ways, other IRB approaches to enhancing relationships with PIs require relatively little time and energy (e.g., adopting an effective tone, and sending newsletters or "tips"). Hence, the status quo can be improved by encouraging IRBs to try to ameliorate strains with PIs by adopting such approaches. Specifically, IRBs should realize more fully that they have a certain degree of latitude in their interactions with PIs, and can strengthen these relationships through not only what, but how, they communicate. If an IRB lacks resources to institute some of these practices, the chair can potentially present these possible approaches to institutional leaders, highlighting the benefits, in hopes of garnering additional funds.
These data thus underscore the importance of trying to enhance IRBs and their interactions with PIs not only through formal macro policy (e.g., altering federal regulations, and establishing centralized IRBs), but at more informal micro levels as well, at the level of daily interactions and lived experiences.
As a potential solution to many of these problems, the Institute of Medicine (IOM) report of almost 10 years ago supported IRB accreditation, which has since spread widely. Yet the current data suggest limitations in accreditation: it provides standards for formal mechanisms, but does not address many aspects of how IRB personnel in fact fill these functions. The present data suggest that implicit and explicit attitudes and tone need to be addressed as well, perhaps through further, finer-grained, and more nuanced education, and heightened awareness of these issues. Moreover, IRBs face challenges in part because the system is not static, but fluid, with new scientific methods and new trainees.
Future research is needed to examine more fully how IRBs work within these dynamic relationships with researchers: how often, when, in what ways, and to what degree IRBs in fact adopt the range of approaches described here; how successful each of these strategies is, and can be; and what factors are associated with IRB decisions to adopt or avoid these techniques. In part, IRB chairs may make these choices based on the degree to which they and/or their members are "pro-research," and on their prior attitudes and perceptions concerning relationships with PIs. Future studies can thus probe more fully how IRBs make these decisions, and whether and how the approaches presented here can lower tensions, and thereby enhance PI cooperation and compliance with regulations, improving human subject protection.
The potential benefits of creating cultures of "compliance" and of "conscience" have been described [23][24][25], but whether, when, how, and to what degree IRBs actually do or can achieve such goals has received little, if any, attention. The present data suggest that IRBs face a range of options for facilitating such aims, but do not always adopt or follow these methods.
These data also have important implications for the professional education of IRB chairs, members and staff, research investigators and staff, and trainees, to enhance their interactions as effectively as possible.
This study has several potential limitations. These interviews explored subjects' views now and in the past, but not prospectively over time, to explore changes. Participants' statements reflect their views and attitudes, and do not necessarily represent objective "fact" per se, but are nonetheless valuable in and of themselves. Interviews also did not include PIs at each of the institutions contacted; however, future studies can employ these approaches. These data are based on in-depth interviews with IRB chairs and members, and did not include direct observation of IRBs engaged in meetings, or of written IRB records. Future research can observe IRBs and examine such documents. Yet these added data may be hard to procure since, anecdotally, IRBs have often required researchers to obtain consent from all IRB members, the PIs, and protocol funders.
In sum, IRBs range considerably in whether, to what degree, and when they adopt strategies that may potentially reduce tensions with PIs. Federal regulations do not mention these approaches; yet increased awareness of the range and benefits of these strategies can potentially improve IRB interactions with researchers, thus enhancing the dual goals of promoting socially beneficial science, while protecting study participants.
Orbits of Hyades Multiple-Lined Spectroscopic Binaries. Paper 2: The Double-Lined System HD 27149
A new determination of the orbit of the Hyades double-lined spectroscopic binary HD 27149 is presented. The well-defined orbit provides the spectroscopic basis for an extremely accurate orbital parallax for the system -- in particular, the size of the relative orbit ($a\sin i = (a_1 + a_2)\sin i = (67.075 \pm 0.045) \times 10^6$ km) is accurate to $\pm 0.07$ %. The minimum masses for the primary and secondary -- $m_1\sin ^3i = 1.096 \pm 0.002 M_\odot$ and $m_2\sin ^3i = 1.010 \pm 0.002 M_\odot$ -- are unexpectedly large for the spectral types, thus suggesting the possibility of eclipses. Although the probability of eclipses is not large, the system being composed of G3V and G6V stars in a 75-day orbit, the possibility is of great interest. A rediscussion of a search for eclipses made by Jørgensen & Olsen$^1$ in 1972 shows that central eclipses can be excluded, but that shorter-duration off-centre eclipses cannot be ruled out. Ephemerides for possible primary and secondary eclipses are given.
Introduction
These papers report the determination of new orbits for double-lined spectroscopic binaries in the Hyades cluster. When complemented by visual orbits determined by means of optical interferometry, which may soon start becoming available, the results will provide accurate distances, by means of orbital parallax, for these systems and, thus, the Hyades cluster. In this paper we examine HD 27149. Basic data for HD 27149, which has the Hyades cluster designation vB 23, are as follows: α(2000) = 4h 18m 01.8s, δ(2000) = +18° 15′ 24″; spectral type G5 (from the HD catalogue); V = 7.53, B − V = 0.68. An earlier spectral type of dG3 is given by Wilson² in the General Catalogue of Stellar Radial Velocities. The secondary is not much fainter than the primary and, at the low or medium dispersions generally used for spectral typing, the two spectra are always blended, so the spectral type is much more uncertain than that of a single star and is not the spectral type of the primary, but is intermediate between those of the two components. The four individual velocities, given in the footnotes to the table, are all different and span a range of 21 km s⁻¹. (The dates of the observations are not given, but can be found in the later useful compilation of Abt¹¹.) The dispersions of Wilson's spectra, three of 80 and one of 36 Å/mm, were such that the primary and secondary spectra of HD 27149 would have been blended, so these velocities must have been measured from blended lines. The data were processed and wavelength-calibrated in a conventional manner with the IRAF package of programs. The spectra are double-lined with primary and secondary lines of similar strength and, at most orbital phases, the secondary lines are well separated from their primary counterparts. Fig. 1 shows an example in which we see primary and secondary Ni I and Fe I lines near 6400 Å.
The procedure used to measure the radial velocities was the same as in Paper 1 of this series and has been described there. To summarise: the wavelengths of well-defined primary and secondary lines were measured by fitting Gaussian profiles with the IRAF splot routine, the wavelength differences between the measured and rest wavelengths of the lines provided the topocentric radial velocities, telluric O 2 lines were measured in the same way so as to determine the wavelength offset between the stellar spectrum and its associated Th-Ar comparison spectrum, the stellar topocentric velocities were then corrected by subtracting from them the telluric line offsets in velocity form and, finally, the heliocentric correction led to the heliocentric radial velocities. The velocities are, thus, absolute velocities.
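The velocity arithmetic in this procedure is compact enough to sketch directly; the snippet below uses the non-relativistic Doppler formula with purely illustrative wavelengths (these are not measurements from this paper).

```python
# Sketch of the radial-velocity arithmetic described above; all
# numerical values are illustrative placeholders.
C_KM_S = 299_792.458  # speed of light, km/s

def doppler_velocity(lam_measured, lam_rest):
    """Non-relativistic Doppler velocity (km/s) from a line's measured
    and rest wavelengths (same units, e.g. Angstrom)."""
    return C_KM_S * (lam_measured - lam_rest) / lam_rest

# Topocentric velocity from one stellar line (e.g. an Fe I line near 6400 A):
v_topo = doppler_velocity(6400.85, 6400.00)            # about +39.8 km/s

# Telluric O2 lines give the zero-point offset between the stellar spectrum
# and its Th-Ar comparison; subtract this offset in velocity form:
v_offset = doppler_velocity(6302.05, 6302.00)
v_corrected = v_topo - v_offset

# Finally apply the heliocentric correction for the observation epoch
# (computed elsewhere, e.g. with astropy's radial_velocity_correction):
v_helio_corr = -12.3  # km/s, illustrative
v_heliocentric = v_corrected + v_helio_corr
print(f"heliocentric RV: {v_heliocentric:.2f} km/s")
```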
In Paper 1 a tiny additional adjustment of the stellar velocities was made in order to force the averages for similarly measured velocities from observations of the radial-velocity standard star taken with the two telescopes to agree. Here, it is evident ε Tau might be subject to similar radial-velocity variations, so the small difference between the velocities from the two telescopes might be real. There is thus no compelling reason for making an adjustment and so it was decided not to do so; the measured velocities from the 2.7-m and 2.1-m telescopes are used as they stand. Table I gives the UT dates, heliocentric Julian dates and heliocentric radial velocities for the McDonald observations. We now turn to the determination of the primary-secondary orbit.
The orbit
The method of differential corrections was used to determine the primary-secondary orbit from the primary and secondary velocities. A necessary preliminary to the orbit calculation was the assignment of suitable weights for the various velocities, namely: velocities for observations with blended primary and secondary spectra, velocities from the 2.7-m telescope versus those from the 2.1-m, and primary versus secondary velocities.
Observations with blended primary and secondary spectra: In a few observations the small wavelength separation between the primary and secondary spectra meant the primary lines and their secondary counterparts were blended, to a greater or lesser degree. For these observations the deblend option in splot had been used to fit double Gaussian profiles to the pairs of blended primary and secondary lines and successfully measure separate primary and secondary velocities. For three observations, however, the primary-secondary wavelength separations were so small and the blending so severe that it proved best to give zero weight to the velocities from these observations. In one of these cases the observation was so close to a single-lined phase that only a single velocity, representative of the blended primary and secondary spectra, could be measured (see Table I and Fig. 2). In the other two observations (see Table I) the primary and secondary lines were so nearly merged that the measured velocities were likewise given zero weight. The phases and velocity residuals for this solution are given in Table I, the orbital elements are given in Table II and Fig. 2 shows the observed radial velocities and calculated radial-velocity curves. We now examine the most interesting features of the orbit.
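As an aside for readers unfamiliar with the model being fitted, the sketch below evaluates the standard Keplerian radial-velocity curve that underlies the differential-corrections solution. All parameter values are illustrative placeholders, not the elements of Table II.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = np.array(M, dtype=float, copy=True)
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def radial_velocity(t, P, T0, e, omega, K, gamma):
    """Keplerian RV curve: P period (days), T0 time of periastron,
    e eccentricity, omega argument of periastron (rad),
    K semi-amplitude (km/s), gamma systemic velocity (km/s)."""
    M = 2.0 * np.pi * (((t - T0) / P) % 1.0)        # mean anomaly
    E = kepler_E(M, e)                              # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# Primary and secondary move in antiphase (omega differs by pi) and share
# gamma; the ratio K1/K2 equals the inverse mass ratio m2/m1. Illustrative:
t = np.linspace(0.0, 75.0, 200)
v1 = radial_velocity(t, P=75.0, T0=0.0, e=0.05, omega=1.0, K=31.0, gamma=39.0)
v2 = radial_velocity(t, P=75.0, T0=0.0, e=0.05, omega=1.0 + np.pi,
                     K=33.6, gamma=39.0)

# The size of the relative orbit then follows from
#   (a1 + a2) * sin(i) = (K1 + K2) * P * sqrt(1 - e**2) / (2 * pi),
# with K in km/s and P converted to seconds, giving a*sin(i) in km.
```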
Discussion
The new orbit confirms the characteristics of HD 27149 discovered by Batten and collaborators. Although it might appear that the next step is to remove the gravitational redshift and convective blueshift from the spectroscopic radial velocity of HD 27149 so as to obtain a modified spectroscopic radial velocity that is directly comparable with the astrometric radial velocity, one more factor must first be reckoned with. We recall that each individual spectroscopic radial velocity is derived from the differences between the measured and rest wavelengths for the set of measured stellar absorption lines and, furthermore, that the rest wavelengths adopted for the lines are themselves measured values. Further into the future the error will slowly increase as the accumulation of the period error makes its presence felt. Table III gives the ephemerides for possible primary and secondary eclipses. Of the three zero-weight observations, the one with only one velocity, representing the blended primary and secondary spectra, is single-lined and the other two are nearly so.
Table II. Orbital elements of HD 27149
Randomised, double-blind, placebo-controlled trial of oral budesonide for prophylaxis of acute intestinal graft-versus-host disease after allogeneic stem cell transplantation (PROGAST)
Background Gastrointestinal graft-versus-host disease (GvHD) is a potentially life-threatening complication after allogeneic stem cell transplantation (SCT). Since therapeutic options are still limited, a prophylactic approach seems to be warranted. Methods In this randomised, double-blind phase III trial, we evaluated the efficacy of budesonide in the prophylaxis of acute intestinal GvHD after SCT. The trial was registered at https://clinicaltrials.gov, number NCT00180089. Patients were randomly assigned to receive either oral budesonide (one 3 mg capsule three times daily) or placebo. Budesonide was applied as a capsule with pH-modified release in the terminal ileum. Study medication was administered through day 56; follow-up continued until 12 months after transplantation. If any clinical signs of acute intestinal GvHD appeared, an ileocolonoscopy with biopsy specimens was performed. Results The crude incidence of histological or clinical stage 3-4 acute intestinal GvHD until day 100 observed in 91 (n = 48 budesonide, n = 43 placebo) evaluable patients was 12.5% (95% CI 3-22%) under treatment with budesonide and 14% (95% CI 4-25%) under placebo (p = 0.888). Histologic and clinical stage 3-4 intestinal GvHD after 12 months occurred in 17% (95% CI 6-28%) of patients in the budesonide group and 19% (95% CI 7-32%) in the placebo group (p = 0.853). Although budesonide was tolerated well, we observed a trend towards a higher rate of infectious complications in the study group (47.9% versus 30.2%, p = 0.085). The cumulative incidence at 12 months of intestinal GvHD stage > 2 with death as a competing event (budesonide 20.8% vs. placebo 32.6%, p = 0.250), the cumulative incidence of relapse (budesonide 20.8% vs. placebo 16.3%, p = 0.547) and non-relapse mortality (budesonide 28%, 95% CI 15-41%, vs. placebo 30%, 95% CI 15-44%) showed no significant difference between the two groups (p = 0.911). The trial closed after 94 patients were enrolled because of slow accrual. Within the limits of the final sample size, we were unable to show any benefit for the addition of budesonide to standard GvHD prophylaxis. Conclusions Budesonide did not decrease the occurrence of intestinal GvHD in this trial. These results most likely imply that prophylactic administration of budesonide with pH-modified release in the terminal ileum is not effective.
Background
Acute intestinal GvHD is a frequent complication after allogeneic stem cell transplantation (SCT) and remains a major cause of morbidity and mortality. In spite of standard GvHD prophylaxis, between 20 and 80% of patients suffer from clinically relevant acute GvHD [1][2][3][4][5][6][7][8][9]. Whereas acute GvHD affecting the skin and/or liver rarely becomes life-threatening, acute intestinal GvHD represents one of the most frequent causes of death after allogeneic SCT. Severe cases may suffer from watery stools up to a volume of several litres, bloody stools or ileus. The median survival of patients with acute GvHD grade 3 and 4 is only two to three months [8]. Therefore a prophylactic approach seems to be warranted.
Budesonide has demonstrated its efficacy in the treatment of various chronic inflammatory bowel diseases [10][11][12][13][14][15][16][17][18][19][20]. It is a locally acting steroid derived from 16α-hydroxyprednisolone with strong anti-inflammatory, anti-exudative and anti-oedematous characteristics. The local effect of budesonide is comparable to that of prednisolone [11,15,21]. It undergoes a high first-pass effect in the liver and is therefore associated with fewer side effects than corticosteroids with systemic efficacy. Its bioavailability is 9 to 12%. Some reports on the effectiveness of budesonide and other locally acting steroids in acute GvHD already exist [22,23]. A study on the potential value of budesonide for the prophylaxis of intestinal GvHD has not previously been performed.
Study design and patients
The PROGAST trial, a study of budesonide as an agent for prevention of acute intestinal GvHD was a randomised, double-blind, placebo-controlled multicentre trial. The study was conducted at 3 centres from March 2003 through May 2007. The medical ethics committee of the TU Dresden and the ethics committee at Charité, Berlin approved the protocol, and all patients provided written informed consent. The trial was registered at https:// clinicaltrials.gov, number NCT00180089.
Eligible male and female patients were at least 12 years of age and in preparation for related or unrelated allogeneic SCT. The stem cell donors -related or unrelated-were selected based on the compatibility for 10 HLA alleles (HLA-A, −B, −C, DRB1 and DQB1) by high-resolution (2 digit for class I, 4 digit for class II) DNA typing. One single allele mismatch was allowed within the same broad serotype or within a cross-reactive group. GvHD prophylaxis regimes followed international standards with cyclosporine A or tacrolimus in combination with methotrexate, optionally combined with anti-thymocyte globulin (ATG) or alemtuzumab.
Patients who received a T-cell-depleted graft, or who had received budesonide within 4 weeks prior to transplantation, as well as patients with local gut infections, apparent infectious disease, portal hypertension, profound liver function impairment, liver cirrhosis or severe psychiatric diseases were excluded.
Study treatment and randomisation
Patients were randomly assigned to receive either oral budesonide (Budenofalk® 3 mg, Dr. Falk Pharma GmbH, Freiburg, Germany) at a daily dose of 9 mg (3 mg TID) or placebo. Budesonide was administered as a capsule. The galenical formulation assured drug release at pH > 6.4, which resembles the pH in the terminal ileum. Medication started one day before allogeneic SCT and was continued until day 56. Afterwards the patients entered a follow-up period lasting until 12 months after transplantation.
Randomisation was performed centrally with the use of a randomisation procedure stratified according to the relationship of the donor (related or unrelated), the conditioning regimen (reduced-intensity or intensive), and in-vivo T-cell depletion (with or without anti-thymocyte globulin (ATG)/alemtuzumab).
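A minimal sketch of how such stratified randomisation can be implemented is shown below; the permuted-block scheme, block size, labels, and seed are assumptions for illustration, not the trial's actual allocation procedure.

```python
import random
from collections import defaultdict

ARMS = ["budesonide", "placebo"]
BLOCK_SIZE = 4  # assumed; permuted blocks keep the arms balanced per stratum

_open_blocks = defaultdict(list)  # one running block per stratum
_rng = random.Random(42)          # fixed seed for reproducibility (assumed)

def assign(donor, conditioning, t_cell_depletion):
    """Permuted-block randomisation stratified by the three factors used
    in the trial: donor relation, conditioning intensity and in-vivo
    T-cell depletion (ATG/alemtuzumab or none)."""
    stratum = (donor, conditioning, t_cell_depletion)
    if not _open_blocks[stratum]:              # start a new block
        block = ARMS * (BLOCK_SIZE // len(ARMS))
        _rng.shuffle(block)
        _open_blocks[stratum] = block
    return _open_blocks[stratum].pop()

print(assign("unrelated", "intensive", "ATG"))
print(assign("unrelated", "intensive", "ATG"))
```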
Evaluation of efficacy and safety
GvHD evaluation was performed weekly starting from day 5 until day 56 after SCT and followed by visits in week 12, 16, 20, 24 and 56. Clinical signs of intestinal GvHD were classified according to Glucksberg-classification [24] of acute GvHD: occurrence of diarrhoea, bloody stools, abdominal pain, nausea and vomiting. If one of these symptoms emerged, a colonoscopy with specimens according to a standardized protocol was performed [25]. GvHD was histologically classified following Lerner's classification [26]. Monitoring for adverse events using common toxicity criteria (CTC) was performed until 12 months after transplantation.
Primary and secondary end points
The primary efficacy end point was the rate of patients with acute intestinal GvHD > stage 2 until day 100 after transplantation. Patients with histologic GvHD > grade 2 and patients with clinical signs of GvHD > stage 2 together with a positive histologic result for GvHD were classified as failures with respect to the primary end point. Secondary end points included the rate of patients with acute intestinal GvHD > stage 2 during follow-up until 12 months after transplantation, tolerability and safety of budesonide, severity of acute intestinal GvHD, incidence of chronic intestinal GvHD and infectious complications. Survival end points were overall and relapse-free survival, as well as relapse incidence and non-relapse mortality.
Study oversight
The study was jointly designed by haematologists and gastroenterologists of the University Hospital Dresden. A total of three centres participated in this trial (University Hospital Dresden, Charité-Campus Benjamin Franklin in Berlin, German Hospital for Diagnostics, Wiesbaden). Data was collected and analysed by the local Coordination Centre for Clinical Trials (KKS) in Dresden. The academic authors vouch for the veracity and completeness of the data and the data analyses.
Statistical analysis
For the primary end point, the rate of patients with acute intestinal GvHD > stage 2 after transplantation, a rate of 30% was assumed for the placebo group. The expected incidence of GI GvHD seems to depend mainly on the frequency of endoscopic investigations. In fact, Martin and coworkers [27] showed that the incidence of early stages of gut GvHD can be as high as 60%, especially in recipients of grafts from unrelated donors. It was calculated that 242 patients would be needed to provide a power of 80% in order to detect a difference in GvHD occurrence of 15% between the two groups (budesonide versus placebo). The sample-size calculation was performed with nQuery Advisor® 7.0.
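The normal-approximation arithmetic behind such a calculation can be sketched as follows; note that this simplified version yields roughly 60 patients per arm, so the protocol's figure of 242 evidently rests on additional design assumptions (e.g. dropout or a continuity correction) not reproduced here.

```python
# Sample size for comparing two proportions (30% vs. 15% intestinal GvHD),
# two-sided alpha = 0.05, power = 80%, via the arcsine effect size and a
# normal approximation. Illustrative only; see the caveat above.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.30, 0.15)   # Cohen's h, about 0.36
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80,
    ratio=1.0, alternative="two-sided",
)
print(round(n_per_arm))  # about 60 per arm under these simplifications
```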
The overall incidences of acute and chronic GvHD and of infectious complications were compared with the use of the chi-square test or, where appropriate, Fisher's exact test. Efficacy analyses were performed according to the intention-to-treat principle. All patients receiving at least one dose of the study drug were included in the safety analysis. Tolerability of budesonide was assessed by means of the CTC score. The comparison of overall and relapse-free survival was made with Kaplan-Meier survival analysis [28] and the log-rank test between the budesonide and the placebo group. The cumulative incidence of stage 3 to 4 GI GvHD until 12 months after transplantation was calculated with death without intestinal GvHD as a competing risk [29]. Relapse incidence and non-relapse mortality were considered as competing events. Cumulative incidences were compared with the Gray test. Cross tables were analysed by means of Fisher's exact test.
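As a concrete illustration of the survival comparison, the sketch below fits Kaplan-Meier curves and runs a log-rank test on synthetic data (the group sizes match the trial, but the event times are simulated, not trial data); a competing-risk cumulative incidence analysis would use an Aalen-Johansen-type estimator instead.

```python
# Kaplan-Meier comparison on synthetic data using the lifelines package.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_bud = rng.exponential(300, 48).clip(max=365)  # follow-up in days, simulated
t_pla = rng.exponential(250, 43).clip(max=365)
e_bud = t_bud < 365    # True = event observed, False = censored at 1 year
e_pla = t_pla < 365

kmf_bud = KaplanMeierFitter().fit(t_bud, event_observed=e_bud,
                                  label="budesonide")
kmf_pla = KaplanMeierFitter().fit(t_pla, event_observed=e_pla,
                                  label="placebo")

result = logrank_test(t_bud, t_pla,
                      event_observed_A=e_bud, event_observed_B=e_pla)
print(result.p_value)
```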
Patients
Due to a lack of sufficient patient recruitment, the protocol committee decided to terminate the study prematurely. Of the 94 patients who underwent randomisation, 48 received budesonide and 46 received placebo. Three patients assigned to the placebo group did not take any study medication (2 patients withdrew their consent; 1 patient died); thus, the ITT population consisted of 91 patients. The baseline disease characteristics were similar among the two groups (Table 1), except for multiple myeloma, which was more frequent in the placebo group. A total of 91 patients completed the trial (Figure 1).
Primary end point
Within the first 100 days after transplantation, a total of 6 patients (12.5%, 95% CI 3-22%) in the budesonide group and 6 patients (14%, 95% CI 4-25%) in the placebo group had experienced histologic or clinical acute intestinal GvHD > stage 2 according to the Lerner and Glucksberg classifications [24,26] of intestinal GvHD. There was no significant difference between the two groups (p = 0.888, Table 2). With the final sample size of 91 patients, the post hoc power to detect a 15% difference in a chi-square test was 31% (15% acute GvHD in the treatment arm compared to 30% acute GvHD in the placebo arm). The final sample size would only have permitted detection of a difference of 25% (5% acute GvHD in the experimental arm and 30% acute GvHD in the placebo arm) with a type I error of 5% and a power of 80%. Nevertheless, had the trial continued to enrol and had the results continued on their observed trajectory, the difference between the two groups (1.5%) would still not have been significant.
Secondary end points
The crude incidences of histologic and clinical stage 3-4 intestinal GvHD after 12 months observed in the 91 evaluable patients were 17% (95% CI 6-28%) in the budesonide group and 19% (95% CI 7-32%) in the placebo group (Figures 2 and 3). Through month 12, the incidences of adverse events (AE) and severe adverse events (SAE) were similar among the two groups (Table 3). Every patient in the budesonide group (100%) and 97.7% in the placebo group had at least one adverse event. Classified by the CTC score, budesonide was tolerated well. There was a statistically non-significant trend towards a higher rate of overall infectious complications until month 12 in the budesonide group (47.9%) compared to placebo (30.2%), but there was no difference in the rate of gastrointestinal infections (placebo 4.6% vs. budesonide 4.4%). Furthermore, a statistically non-significant trend towards a higher overall survival (72.9% budesonide, 62.8% placebo) was observed in the budesonide group.
The incidence of chronic intestinal GvHD was higher in the placebo group (budesonide 6.3%, n = 3; placebo 11.6%, n = 5), but the difference failed to reach significance.
The distribution of the severity stages of acute intestinal GvHD, as well as incidence and severity stages of skin, liver and grades of overall GvHD showed no difference between the two groups.
Discussion
GI GvHD remains a huge problem, with few therapeutic options. Acute GvHD is mediated by immunocompetent donor T cells, which migrate to lymphoid tissues soon after allogeneic stem cell transplantation. In the light of the high mortality of severe GI GvHD which often does not respond to steroid therapy, alternative treatments are being actively investigated. Besides the options of in-vivo or in-vitro T-cell-depletion, a prophylactic pharmacologic approach seems to be the most promising. Ideally, prophylactic treatment should not affect transplantation associated mortality and the incidence of relapse. T-cell-depleted grafts effectively reduce the risk of acute GvHD but are associated with a higher relapse rate because of the missing Graft-versus-leukemia effect. An intensified pharmacologic prophylaxis has been associated with an increase in the relapse rate, a higher rate of systemic infections and a late recurrence of GvHD.
A prophylactic approach with budesonide as a locally acting immunosuppressive treatment seems attractive because of a lower risk of systemic complications during therapy, as seen in patients with chronic inflammatory bowel disease. The underlying rationale is a strong anti-inflammatory local effect of budesonide on the one hand, and a high first-pass effect in the liver of more than 90%, which leads to negligible systemic effects, on the other.
Even though the present study suggests that oral budesonide is not effective for the prevention of acute intestinal GvHD, early intervention in patients with a high-risk of gastrointestinal GvHD still seems to be an attractive strategy.
Furthermore, it should be taken into account that the applied galenic formulation of budesonide has its maximum effect in the terminal ileum and right-sided colon. This is based on the pH-modified release of oral budesonide, and it is therefore not adequate for prophylaxis of intestinal GvHD in the jejunum or proximal ileum. Some open-label studies showed efficacy in distal ulcerative colitis, but this finding has to be confirmed in controlled trials [30]. Therefore it remains unclear whether budesonide has a sufficient effect in the prophylaxis of GvHD in the distal colon.
Besides, pharmacokinetic analyses suggest that a single daily dose of 9 mg budesonide may lead to a higher local concentration of budesonide than 3 mg three times per day [31], as administered in this study, and could therefore increase the therapeutic potential.
Overall infectious complications showed a trend towards a higher frequency in the budesonide group, but gastrointestinal infections were equal in both groups, which better reflects the potential side effects of a locally acting compound such as budesonide.
Regarding an additional secondary endpoint, there was also no difference in the incidence of liver involvement, a site where the first-pass effect would suggest local efficacy of the study compound.
Conclusion
In summary, this study failed to show a significant effect of prophylactic treatment with oral budesonide in preventing gastrointestinal GvHD. The study was closed prematurely because of slow accrual. Within the limitations of the sample size, no significant differences could be detected in the primary or secondary outcomes.
Prevalence of symptoms, comorbidities, fibrin amyloid microclots and platelet pathology in individuals with Long COVID/Post-Acute Sequelae of COVID-19 (PASC)
Background Fibrin(ogen) amyloid microclots and platelet hyperactivation previously reported as a novel finding in South African patients with the coronavirus 2019 disease (COVID-19) and Long COVID/Post-Acute Sequelae of COVID-19 (PASC), might form a suitable set of foci for the clinical treatment of the symptoms of Long COVID/PASC. A Long COVID/PASC Registry was subsequently established as an online platform where patients can report Long COVID/PASC symptoms and previous comorbidities. Methods In this study, we report on the comorbidities and persistent symptoms, using data obtained from 845 South African Long COVID/PASC patients. By using a previously published scoring system for fibrin amyloid microclots and platelet pathology, we also analysed blood samples from 80 patients, and report the presence of significant fibrin amyloid microclots and platelet pathology in all cases. Results Hypertension, high cholesterol levels (dyslipidaemia), cardiovascular disease and type 2 diabetes mellitus (T2DM) were found to be the most important comorbidities. The gender balance (70% female) and the most commonly reported Long COVID/PASC symptoms (fatigue, brain fog, loss of concentration and forgetfulness, shortness of breath, as well as joint and muscle pains) were comparable to those reported elsewhere. These findings confirmed that our sample was not atypical. Microclot and platelet pathologies were associated with Long COVID/PASC symptoms that persisted after the recovery from acute COVID-19. Conclusions Fibrin amyloid microclots that block capillaries and inhibit the transport of O2 to tissues, accompanied by platelet hyperactivation, provide a ready explanation for the symptoms of Long COVID/PASC. Removal and reversal of these underlying endotheliopathies provide an important treatment option that urgently warrants controlled clinical studies to determine efficacy in patients with a diversity of comorbidities impacting on SARS-CoV-2 infection and COVID-19 severity. We suggest that our platelet and clotting grading system provides a simple and cost-effective diagnostic method for early detection of Long COVID/PASC as a major determinant of effective treatment, including those focusing on reducing clot burden and platelet hyperactivation.
Introduction
Approximately 30% of COVID-19 patients infected with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continue, and in some cases begin, to suffer a variety of debilitating symptoms weeks or months after the acute phase of infection. The precise definition of this Long COVID/Post-Acute Sequelae of COVID-19 (here referred to as Long COVID/PASC) is rather unclear and in some instances even vague. This is because most of the pathophysiological mechanisms involved have not yet been fully identified, and many different symptoms have been reported. The most frequently reported symptoms persist for 6 months or longer after acute infection [1]. COVID-19 survivors complain of recurring fatigue or muscle weakness, being out of breath and sleep difficulties, and suffer from anxiety or depression [2]. Symptoms noted in Long COVID/PASC patients show numerous similarities to those seen in chronic illnesses, including Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) [3][4][5][6][7][8], Postural Orthostatic Tachycardia Syndrome [9] and Mast Cell Activation Syndrome [1,10]. In a large global survey of 3762 Long COVID/PASC patients from 56 countries it was found that nearly half still could not work full-time 6 months post-infection, due mainly to fatigue, post-exertional malaise, and cognitive dysfunction [11].
An important component of severe COVID-19 disease is virus-induced endotheliitis. This leads to disruption of normal endothelial function, initiating a state of failing normal clotting physiology. Massively increased levels of von Willebrand Factor (VWF) lead to overwhelming platelet activation, as well as activation of the enzymatic (intrinsic) clotting pathway. We have previously found persistent circulating fibrin amyloid microclots, resistant to fibrinolysis, in samples from acute COVID-19 patients [12,13]. We also, for the first time, reported on the minor effect of clotting in a patient with the Omicron variant [14]. Endothelial, microclot and platelet pathologies are also present in Long COVID/PASC patients [15,16]. In a recent study we identified numerous dysregulated molecules in circulation that might cause or reflect the lingering symptoms of individuals with Long COVID/PASC [15]. We used proteomics to study the proteins present in both the digested supernatant and the trapped persistent pellet deposits (after protein digestion via trypsin). Dysregulated molecules include the acute-phase inflammatory molecule Serum Amyloid A (SAA) and α(2)-antiplasmin (α2AP). We had previously discovered that in many chronic diseases fibrinogen can clot into an amyloid form that is resistant to fibrinolysis, and that these fibrin amyloid (micro)clots can be detected with a fluorogenic amyloid stain [12,[17][18][19][20][21][22][23][24]. Thus, we used fluorescence microscopy to report large amyloid microclots and hyperactivated platelets present in blood samples from Long COVID/PASC patients; we also showed that these deposits are highly resistant to fibrinolysis [15,25]. The plasmin-antiplasmin system plays a key role in blood coagulation and fibrinolysis [26]. Plasmin and α2AP are primarily responsible for a controlled and regulated dissolution of the fibrin polymers into soluble fragments such as d-dimer [26,27]. We also developed a platelet and microclot grading system to classify platelet and microclot pathology [28]. The grading system should ideally be applied as part of a multi-pronged approach, which in addition to appropriate anticoagulation may also include antiviral treatment to limit cell entry of SARS-CoV-2.
Differentiation of platelet and microclot pathology due to Long COVID/PASC from cardiovascular disease (CVD), hypertension, hypercholesterolaemia or diabetes, the main comorbidities associated with SARS-CoV-2 infection, is important in our search for ways to influence the underlying pathogenesis prophylactically. Many of the reported Long COVID/PASC symptoms are cardio-pulmonary in nature. In this work we present results from a cohort of 845 patients who completed an online South African Long COVID/PASC registry. In parallel, blood samples of 80 patients who visited the clinical practice of our clinical co-author were collected to report on the presence of microclots and platelet pathology associated with persistent symptoms after recovery from acute COVID-19. Before contracting acute COVID-19, these patients did not suffer from fatigue and the other symptoms that they subsequently reported, and which are typically associated with Long COVID/PASC. Thus, they were diagnosed as having Long COVID/PASC by means of eliminating all other common diseases, including heart failure.
Ethical clearance
Ethics approval for the study was obtained from the Health Research Ethics Committee (HREC) of Stellenbosch University, South Africa, with references B21/03/001_COVID-19, project ID: #21911 (long COVID registry data) and N19/03/043, project ID #9521 (Long COVID blood collection). The experimental objectives, risks, and details were explained to volunteers and informed consent were obtained prior to blood collection. Strict compliance to ethical guidelines and principles of the Declaration of Helsinki, South African Guidelines for Good Clinical Practice, and Medical Research Council Ethical Guidelines for Research were kept for the duration of the study and for all research protocols.
Data collection and analysis of patients who filled in the South African Long COVID/PASC registry
The South African Long COVID/PASC registry is an online platform where patients can self-report long COVID/PASC symptoms and previous comorbidities. Data were analysed for risk factors associated with developing Long COVID/PASC. All data were anonymised.
The statistical analysis of the South African Long COVID registry data was carried out in a Jupyter notebook environment [29] and the Pandas library [30] was employed for data manipulation and statistical analysis. With the aid of an interactive Python data library, Plotly (https://plot.ly), visualisations of the statistical analysis were drawn as Sankey plots. We have also used lattices (a technique based in knowledge representation and artificial intelligence) to visually represent the data. Lattices are less common in exploratory data science and artificial intelligence than other techniques, but often yield different insights and paths for further exploration [31]. An introduction to lattices in exploratory data science is given in [32], among others. The "conexp" software package (freely available from conexp.sourceforge.net) was used for preparing the lattices in this paper. The data of the 845 participants in the cohort were condensed into a single matrix mapping comorbidities to symptoms, in preparation for drawing the lattices. The input was a comma-separated values (CSV) file containing one patient per row, with entries of 0 or 1 (absence or presence) in columns, where the comorbidities and symptoms appear as individual columns. First, the percentage prevalence for each symptom was calculated by traversing all rows; this was subsequently used as a threshold vector. Next, for each comorbidity, the comorbidity-implied percentage prevalence was calculated for each symptom, giving a matrix of comorbidities (rows) versus symptoms (columns) with percentage entries. Finally, in order to draw easily visualised lattices with 0 and 1 entries, this last matrix was normalised based on the initially calculated threshold vector.
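The matrix-preparation pipeline just described is straightforward to express with pandas; the sketch below assumes a hypothetical CSV layout and column names, and reproduces the threshold-and-normalise steps.

```python
import pandas as pd

# Hypothetical layout: one patient per row, 0/1 entries, with comorbidities
# and symptoms as individual columns (all names here are placeholders).
df = pd.read_csv("registry.csv")
comorbidities = ["high_blood_pressure", "high_cholesterol", "type2_diabetes"]
symptoms = ["fatigue", "brain_fog", "shortness_of_breath"]

# Step 1: overall percentage prevalence of each symptom -> threshold vector.
threshold = df[symptoms].mean()

# Step 2: comorbidity-implied prevalence of each symptom, giving a matrix
# of comorbidities (rows) versus symptoms (columns).
implied = pd.DataFrame(
    {c: df.loc[df[c] == 1, symptoms].mean() for c in comorbidities}
).T

# Step 3: normalise against the overall prevalence to obtain the 0/1
# incidence matrix used for drawing the lattice (e.g. with conexp).
lattice_matrix = (implied >= threshold).astype(int)
print(lattice_matrix)
```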
Blood sample collection from the cohort of 80 patients
Blood was drawn from 80 patients (35 females and 45 males; mean ± SD age 48 ± 16 years) who visited our clinical collaborator's practice. Either a qualified phlebotomist or a medical practitioner drew citrated blood into sample tubes (BD Vacutainer®, 369714) via venepuncture, adhering to standard sterile protocol. Whole blood (WB) was centrifuged at 3000×g for 15 min at room temperature and the supernatant platelet-poor plasma (PPP) samples were collected and stored in 1.5 mL Eppendorf tubes at −80 °C until the analysis was performed. Haematocrit samples were analysed on the day of collection.
Long COVID/PASC diagnosis
Patients gave consent for their blood samples to be studied, following clinical examination. Participants who filled in the South African Long COVID/PASC registry gave consent on the platform for the team to use their de-identified data. Symptoms must have been new and persistent symptoms noted after acute COVID-19. The initial diagnosis of the 80 participants who visited our clinical collaborator's practice (and who gave a blood sample) was reached by exclusion, only after all other pathologies had been ruled out. This was done by taking a history of previous symptoms (before and after acute COVID-19 infection), clinical examinations, and investigations including: full blood counts; N-terminal pro b-type natriuretic peptide (NT-proBNP) levels (which, if raised, suggest cardiac damage); thyroid-stimulating hormone (TSH) and C-reactive protein levels. Lingering symptoms that can be ascribed to Long COVID/PASC were then assessed and included shortness of breath; recurring chest pain; lingering low oxygen levels; heart rate dysfunction (heart palpitations); constant fatigue (more than usual); joint and muscle pain; brain fog; lack of concentration; forgetfulness; sleep disturbances; and digestive and kidney problems. These had to be persistent, new symptoms that were not present before acute COVID-19 infection and that persisted for at least 2 months after recovery from acute (infective) COVID-19. This part of the examination was done only where the participants gave a blood sample.
Platelet pathology
Haematocrit samples of all 80 patients in the cohort were exposed to two fluorescent markers, CD62P (PE-conjugated) (platelet surface P-selectin) (IM1759U, Beckman Coulter, Brea, CA, USA) and PAC-1 (FITC-conjugated) (340507, BD Biosciences, San Jose, CA, USA). CD62P is a marker for P-selectin that is either on the membrane of platelets or found inside them [13,33]. PAC-1 identifies platelets by marking the glycoprotein IIb/IIIa (gpIIb/IIIa) on the platelet membrane. To study platelet pathology, 4 µL CD62P and 4 µL PAC-1 were added to 20 µL haematocrit, followed by incubation for 30 min (protected from light) at room temperature. The excitation wavelength band for PAC-1 was set at 450 to 488 nm and the emission at 499 to 529 nm; for the CD62P marker the excitation band was 540 to 570 nm and the emission 577 to 607 nm. Samples were viewed using a Zeiss Axio Observer 7 fluorescence microscope with a Plan-Apochromat 63x/1.4 Oil DIC M27 objective (Carl Zeiss Microscopy, Munich, Germany).
Platelet poor plasma (PPP) and the detection of amyloid fibrin(ogen) protein and anomalous microclotting
Microclot formation in the PPP samples from all 80 Long COVID/PASC patients was analysed. These patients were diagnosed by our clinical collaborators and had not yet been placed on any clinician-initiated treatment regimens. PPP was exposed to the fluorescent amyloid dye Thioflavin T (ThT) (final concentration: 0.005 mM) (Sigma-Aldrich, St. Louis, MO, USA) for 30 min (protected from light) at room temperature [20][21][22][23]. After incubation, 3 µL PPP was placed on a glass slide and covered with a coverslip. The excitation wavelength band for ThT was set at 450 to 488 nm and the emission at 499 to 529 nm, and processed samples were viewed using a Zeiss Axio Observer 7 fluorescence microscope with a Plan-Apochromat 63x/1.4 Oil DIC M27 objective (Carl Zeiss Microscopy, Munich, Germany) [12,13,25].
South African Long COVID/PASC registry
In Figs. 2, 3, 4, 5 and 6, the distribution of the South African Long COVID/PASC registry participant data (845 participants) was analysed according to the patients' gender, comorbidities, age group, initial COVID-19 symptoms, and Long COVID/PASC symptoms, using Sankey plots. The same participant versus comorbidity versus symptom data were further manipulated to produce a mapping between comorbidities and symptoms, represented as a matrix with comorbidities as rows and symptoms as columns. This was used to draw a lattice, giving insight into the implications (simple binary implications for visualisation) from comorbidities to symptoms. The corresponding lattices, with different components highlighted, correspond to the most prominent comorbidities emerging from the Sankey diagrams: high blood pressure, high cholesterol, type 2 diabetes, auto-immune disease, and previous blood clots. In the following figures (Figs. 1, 2, 3, 4 and 5), we review the implications in more detail. Figure 1 gives a general overview of the South African Long COVID/PASC registry. About 10% (i.e. 87) of the participants were not initially tested for SARS-CoV-2 using a PCR test, whereas in 90% (i.e. 758) of the patients a positive COVID-19 test was reported. Moreover, patients were also categorised according to gender. Thus, 70% and 30% (i.e. 593 and 252) of the study cohort were identified as female and male, respectively, in line with common observations [34,35]. The majority (i.e. 76%) of the participants were between the ages of 31-40, 41-50, and 51-60 years. We observed that participants with comorbidities such as high blood pressure, high cholesterol, type 2 diabetes, autoimmune disease, and previous blood clots were in the majority. Figure 2 shows the gender distribution for the participants with the Long COVID/PASC symptoms in more detail. In a similar trend, the common Long COVID/PASC symptoms were noted as fatigue; brain fog, loss of concentration, and forgetfulness; shortness of breath; as well as joint and muscle pains. Interestingly, Long COVID/PASC symptoms such as kidney problems, digestive problems, and low oxygen levels were less commonly reported. Figure 3 shows the age versus Long COVID/PASC symptom distribution of the participants. We note that the majority of participants were within the age ranges of 31-40, 41-50, and 51-60 years. Figure 4 shows a Sankey plot that illustrates the population distribution of participants' comorbidities versus Long COVID/PASC symptoms, while Fig. 5A, B shows representative lattice plots of high blood pressure and high cholesterol levels, confirming in more detail the correlations already shown in Figs. 3 and 6. Reading upwards in the lattice, for example in Fig. 5A, the "high blood pressure" node connects upwards through a highlighted network to a variety of symptoms, ranging from "digestive problems" (on the left) to "shortness of breath" on the right. The complexity/density of the blue highlighted network represents the prevalence of the comorbidity amongst the patients, as well as implications for a variety of symptoms.

Fig. 6 Fluorescence microscopy examples of the different stages of platelet activation and spreading, used to score platelet activation in the Long COVID patients: Stage 1, minimally activated platelets, seen as small round platelets with a few pseudopodia (healthy/control platelets), progressing to Stage 4, egg-shaped platelets, indicative of spreading and the beginning of clumping. Taken from [28] with permission
Blood analysis
We studied blood samples from 80 diagnosed Long COVID/PASC patients (mean ± SD age 48 ± 16 years; 35 females and 45 males). Microclot and platelet analysis showed the presence of microclots and platelet pathologies in all 80 patients. We used a platelet grading system to identify platelet pathologies, which we have developed and described previously [28] (see Figs. 6 and 7 and Table 1). Platelet and microclot grading were done on a subset of 30 of these individuals. We also used a clotting grading system that we have developed and published previously [28] (see Fig. 8 and Table 2). The scores for platelet pathology and PPP microclots were combined into a final score to determine the severity of the disease (Table 3).
Our overall platelet and microclot scoring results for 30 of the 80 patients were 7 ± 1.3, pointing to moderate activation. Figures 9 and 10 show representative micrographs of platelet activation and the presence of fibrin amyloid microclots in the blood samples of two of the Long COVID/PASC participants. Figure 11 shows representative micrographs from a patient who had been suffering from Long COVID/PASC for 11 months: Fig. 11A, B show microclots and hyperactivated platelets, and Fig. 11C shows a tile scan of cellular debris present in the haematocrit. Figure 11D, E show representative microclots and platelets from a patient who had been using aspirin only before sample collection. As anticipated, significant microclot formation was still seen, but the platelets were not significantly activated.
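The combined severity score is simple arithmetic; the sketch below assumes, as a simplification of the grading system in [28], that the platelet stage and the microclot stage are each graded 1-4 and summed, which is consistent with the reported mean of 7 ± 1.3 out of a maximum of 8.

```python
import statistics

def severity_score(platelet_stage, microclot_stage):
    """Combine the two 1-4 gradings into a single severity score (2-8).
    The summation rule is an assumption for illustration; the actual
    grading scheme is described in reference [28]."""
    for s in (platelet_stage, microclot_stage):
        if not 1 <= s <= 4:
            raise ValueError("stages are graded 1-4")
    return platelet_stage + microclot_stage

# Hypothetical subset of graded patients (not the study's data):
grades = [(4, 4), (3, 4), (4, 3), (3, 3), (4, 4)]
scores = [severity_score(p, m) for p, m in grades]
print(statistics.mean(scores), statistics.stdev(scores))
```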
Discussion
Here we report on the comorbidities and symptoms identified in a cohort of 845 South African Long COVID/PASC patients who filled in the South African Long COVID/PASC registry. We show that hypertension, high cholesterol levels (dyslipidaemia) and T2DM are important comorbidities that may play a significant role in the development of Long COVID/PASC in this cohort. (We also recognise that other comorbidities, such as previous viral infections, are also very important, but may manifest only in a larger cohort.) It is well documented that impaired endothelial function is associated with increased cholesterol and hypertension (which may be underpinned by genetic variation) due to increased vascular oxidative stress and inflammation [36,37]. Here we also showed the presence of fibrin amyloid microclotting and platelet pathologies in another cohort of 80 patients who visited a clinical practice complaining of persistent symptoms, where these patients were diagnosed with Long COVID/PASC. We found that in this cohort all of the patients did indeed have both increased amyloid microclotting and platelet pathologies, as assessed by a platelet and clotting grading system that we have developed previously [28].
Normal blood clotting goes through a variety of established mechanisms [33], a major step being the cleavage by thrombin of the complex fibrinogen molecule (roughly cylindrical, with a 5 × 45 nm size). This releases two fibrinopeptides and causes the thermodynamically favourable formation of fibrin macrofibres, typically 50-100 nm in diameter. These may be crosslinked by Factor XIII. The clots are usually removed by fibrinolysis, leading to the residual formation of d-dimer; its normally low background levels reflect this background activity. It was always assumed that the normal conformation of a protein is that of its lowest free energy, as per Christian Anfinsen's famous protein refolding experiments. However, this is not always the case. Many proteins can fold into a form of lower free energy while retaining the identical sequence. Some of these forms, containing ordered beta-sheet structures, are generically referred to as amyloids, and many are well known to be associated with certain diseases. Aβ in Alzheimer's disease and α-synuclein in Parkinson's disease are examples, with over 50 recognised [20]. Note, however, that almost any protein can form amyloid structures (e.g. recombinant insulin will do so over time, as will lysozyme held at an acid pH). Another well-known example of a class of proteins that can exist in two conformations of identical sequence is represented by prion proteins. The normal form with alpha-helices is called PrP^C and the amyloid one PrP^Sc, the latter being of lower free energy, i.e. more thermodynamically stable. It is also highly resistant to proteolysis. The transition between the two forms can itself be catalysed by PrP^Sc. The key point of importance for microclot formation in Long COVID/PASC is that fibrin(ogen) too can, in the presence of various trigger substances, fold into an amyloid form that has a very different macrostructure, characterised by different fibre diameters and pore sizes [38]. We noted for example that in type 2 diabetes mellitus (T2DM) such clots have a netlike appearance [39][40][41][42], while in Alzheimer's disease [18,[43][44][45] and Parkinson's disease the fibres may be larger in size [17,24]; they are also much more resistant to proteolysis [19], and are much more prevalent in the steady state.
We have been observing fibrin(ogen) changes generally for many years, initially via electron microscopy (e.g. [46][47][48][49][50][51], and many others). Since 2011 we have also studied the effect of fibrin(ogen) folding in the presence of various inflammatory molecules, such as iron ions [52], that can stimulate the anomalous fibrin form [53]. These anomalous structures could also be found in a variety of disease states [19], such as T2DM [40] and Alzheimer's disease [54]. In this earlier literature we often referred to these anomalous clots as 'dense matted deposits'. In 2016, we showed that this anomalous resistance to fibrinolysis arose because the anomalous structures were in fact amyloid in nature [38]. Such structures are easily observed under the optical microscope, and in particular may be stained with the fluorogenic dye thioflavin T and by the more recently developed oligothiophene dyes marketed by Ebba Biotech as Amytrackers [17,21,23].

Fig. 11 Microclots and platelets in two individuals with Long COVID/PASC. A Representative micrographs of microclots in an untreated patient diagnosed with Long COVID/PASC, suffering from the condition for 11 months; the plasma was stained with thioflavin T (ThT). B Platelet hyperactivation in the same patient, where PAC-1 and CD62P-PE were used to mark platelets. C In this patient, cellular debris in the haematocrit was noted; such cellular debris is shown here in a tile scan. D, E Microclot presence and platelets from a patient who was on self-administered aspirin (anti-platelet) treatment before blood collection, where major microclots were noted but the platelets were only minimally hyperactivated, possibly due to the use of anti-platelet therapy
Most recently, we have demonstrated this explicitly in COVID-19 patients [12,13]. Here, these microclots were observed without the addition of clotting agents, i.e. thrombin; therefore, these amyloid microclots were present in the plasma of the individuals at the time of sampling. In particular, it was found not only that the clots contained fibrin (fibrinogen being one of the most concentrated proteins in plasma), but that these microclots had entrapped alpha-2-antiplasmin [15] and a variety of other proteins and even antibodies, which therefore were not observed in plasma from which the microclots had been removed (so did not appear as biomarkers, even though they were present). Because these clots are insoluble and effectively inert, they do not contribute to plasma viscosity as determined via TEG®, whose values can thus appear normal in Long COVID/PASC (unpublished data). Another characteristic of COVID-19 is the extremely high level of activation of platelets [13]. Together with platelet pathology and the presence of microclots in the circulation, endothelial damage may be a key driver of persistent Long COVID/PASC symptoms [15]. See Fig. 12 for a snapshot of the interactions that platelets have with circulating blood cells and the various complexes they form (for a detailed review, see [55]).

Fig. 12 (1) After activation, platelets express P-selectin on their membranes, followed by platelet-T cell complex formation (2); P-selectin on platelet membranes is also recognized by macrophages, possibly via the Fcγ-receptor; clearance may result from either receptor binding or phagocytosis (3). CD40L is released from platelets and can migrate to membranes or be shed as soluble (s)CD40L (4). sCD40L can bind to both the αIIbβ3 or CD40 receptors (5). The P-selectin on the membranes of sCD40L-activated platelets can also form complexes with monocytes (6). Platelet-neutrophil complexes also form (7). Diagram created with BioRender (https://biorender.com/) and adapted from [55]
Conclusion
In the current study, we report data from the South African Long COVID Registry for the first time. It was noted that each of the 80 patients diagnosed with Long COVID/PASC who provided blood samples showed platelet hyperactivation and microclot formation. We suggest that a platelet and clotting grading system should be used as a simple and cost-effective diagnostic method for the early identification of Long COVID/PASC. Diagnosis of Long COVID/PASC requires exclusion of other pathologies and evaluation of the duration of symptoms (> 2 months after acute infection). If a bleeding tendency (not commonly seen) is a concern, a TEG® can be used as a safety net to avoid overtreating the patient. The exact combination and duration of treatment need further investigation and testing in randomized controlled trials. Unfortunately, treatment protocols are not yet widely available and current protocols are based on clinician-initiated approaches and experience in managing these patients. No clinical trials have been performed yet; however, this will be an important next step to consider urgently, in parallel to patient monitoring using a multimodal pathology-supported genetic testing approach [56]. For the implementation of personalised medicine it will be essential to bring together clinicians, researchers, patients and policy makers to advance healthcare while allowing for adjustment and flexibility in view of new discoveries [57]. The large number of affected individuals who develop Long COVID has major detrimental effects on public health and necessitates long-term follow-up and support, which can be mediated via the Long COVID Registry as a major strength of this study.
|
2022-05-21T13:09:24.350Z
|
2022-08-06T00:00:00.000
|
{
"year": 2022,
"sha1": "fc15ccbc9e98b81c692f7f8758031e8997b4d767",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "842d6b3d262ae9b2d010e0a5b4cb6081f7668742",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
231885796
|
pes2o/s2orc
|
v3-fos-license
|
A Memetic Algorithm for an External Depot Production Routing Problem
This study compares the results of a memetic algorithm with those of a two-phase decomposition heuristic on the external depot production routing problem in a supply chain. We modified the classical scheme of a genetic algorithm by replacing the mutation operator with three local search algorithms. The first local search exchanges two customers visited on the same day. The second attempts an exchange between two customers visited at consecutive periods, and the third removes a customer from its current tour for a better insertion in any tour of the same period. Tests carried out on 128 instances from the literature highlight the effectiveness of the memetic algorithm developed in this work compared to the two-phase decomposition heuristic: the results obtained by the memetic algorithm reduce the overall average cost of production, inventory, and transport by 3.65% to 16.73%, with an overall rate of 11.07%, with respect to the results obtained with the two-phase decomposition heuristic. These outcomes will be beneficial to researchers and supply chain managers in the choice and development of heuristics and metaheuristics for the resolution of production routing problems.
Introduction
In the supply chain field, simultaneously planning production, inventories and distribution while minimizing the overall cost of these operations is a complex exercise known as the production routing problem (PRP) [1]. The PRP is a combination of two well-known problems from the literature: the lot sizing problem (LSP) on the one hand and the vehicle routing problem (VRP) on the other. The LSP determines the best trade-off between production and storage operations; the aim is to simultaneously determine the production schedule, the quantities to be produced and the quantities to be stored so as to minimize the overall cost of production and inventory. See [2] for a more detailed literature review on the LSP.
The VRP is a difficult NP-hard problem, as it generalizes the traveling salesman problem (TSP), which is itself NP-hard [3,4]. It consists of building routes among the customers that must be visited, based on the number of vehicles available, and finding the best scheduling of visits to minimize the overall cost of transport over the planning horizon. Readers are invited to see [5][6][7] for a review of the mathematical formulations of the problem and of the methods and algorithms used to solve it.
In industrial practice, these two problems are analyzed separately and sequentially. However, the work of Chandra et al. [8,9] has shown that it is possible to achieve gains of 3% to 20% on the overall cost of production and distribution by integrating and coordinating the decisions of the LSP and the VRP. This integrated and coordinated PRP is an NP-hard problem since it contains the VRP. In an integrated concept of supply chain planning, the customer no longer has exclusive control over the decisions on his visit dates and the quantities to be received. The implementation of new practices such as vendor managed inventory (VMI) and of replenishment policies such as order-up-to-level (OU) and maximum level (ML) is necessary to achieve the goal of satisfying customer demand. VMI is a practice in which the supplier decides when and how much to deliver to the customer; he must also ensure that there is no stock shortage at the customer. In the OU replenishment policy, every customer has a maximum storage capacity and the amount delivered is such that the maximum storage level is reached at each delivery, whereas in the ML policy the quantity delivered is such that the maximum storage level is not exceeded at each delivery (0 ≤ q_it ≤ L_i); a small code sketch of this rule is given below. See [10][11][12][13] for more details on the practice of VMI and the use of the OU and ML replenishment policies. A detailed literature review of the PRP is provided by [1].
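To make the ML rule concrete, here is a minimal C++ sketch; the function and parameter names are illustrative assumptions, not taken from the cited papers, and the extra bounds (vehicle residual capacity, depot stock) reflect the EDPRP setting discussed later:

```cpp
#include <algorithm>

// Minimal sketch of the ML (maximum level) replenishment rule: the quantity
// delivered to customer i in period t must satisfy 0 <= q_it <= L_i - I_it,
// so the customer's maximum storage level is never exceeded. All names here
// are illustrative assumptions.
int mlDeliveryUpperBound(int L_i, int I_it, int vehicleResidual, int depotStock)
{
    int room = L_i - I_it;  // storage room left at the customer
    return std::max(0, std::min({room, vehicleResidual, depotStock}));
}
```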
Although the PRP is NP-hard, exact methods for its resolution have been proposed. Among these exact approaches, Lagrangian relaxation was one of the first methods proposed for solving the PRP [14]. A branch-and-price algorithm was proposed to solve a PRP with a homogeneous fleet of vehicles [15]. In a study using a single vehicle [16], the authors used a branch-and-cut (B&C) algorithm to solve the PRP. The B&C algorithm has also been proposed to solve the multivehicle version of the PRP [17].
Given the complexity of the PRP, several heuristics and metaheuristics have been developed for its resolution. In the first works on the PRP, introduced by Chandra et al. [9], an H1-type decomposition method was proposed. The authors decomposed the problem into a capacitated LSP and a VRP, followed by some local search heuristics. Test results showed that the integrated approach allows gains on the overall cost of operations ranging from 3% to 20%. Another two-phase decomposition method was presented for solving a PRP with a heterogeneous fleet of vehicles [18]. The authors used mixed integer programming (MIP) to solve the first phase of the problem, an LSP with direct shipment (DS). In the second phase, an efficient algorithm was used to solve the VRP. It is an H2-type algorithm because the DS decisions are incorporated into the LSP, which is not the case for the H1-type algorithm. The H2 type of two-phase decomposition heuristic (TPDH) has also been used to solve a PRP with an external depot (EDPRP) [19]. The authors used a MIP to solve the LSP with DS in the first phase and then a genetic algorithm to solve the VRP in the second phase.
Metaheuristics are algorithms used to solve difficult combinatorial optimization problems. They are master strategies that guide other heuristics to find a better approximation of the global optimum of an optimization problem, and they are mostly used when no more efficient classical method is known for the problem at hand. The high level of abstraction of metaheuristics makes them suitable for a wide range of problems; thus, they find application in various fields of scientific research.
Swarm intelligence (SI) is a family of metaheuristics that relies on nature, through the interactions of agents (ants, bees, etc.), to solve complex optimization problems. SI has been used in wind energy potential analysis and wind speed forecasting to reduce the operating cost of wind farms [20]. The authors used a hybrid algorithm combining the advantages of the genetic algorithm and adaptive particle swarm optimization (PSO) to optimize the weights and biases of the nonlinear network of extreme learning machines (ELMs), thereby efficiently improving the accuracy of the ELMs. Another SI method based on particle swarm optimization has been proposed for the analysis of handover in the field of spectrum sharing in mobile social networks [21]. With an increase in the globally adaptive inertia to 75.66% and an increase in the data transfer rate of 47.29% compared to the IEEE 802.16 protocol, the authors obtained a maximum mean signal-to-noise ratio of 14.8 dB, the overall optimum required during handover for any mobile social network. Thus, the authors claim that their algorithm outperforms those in the mobile network literature by 75% by optimizing various aspects of handover.
In the age of big data, analyzing data and extracting relevant information from it is a challenge. A very important issue in this area is the selection of the most informative features in a dataset. Given the success of SI in solving difficult NP-hard problems, it is increasingly used for selecting relevant features in data analysis and in machine learning algorithms. A detailed literature review on the use of SI in feature selection is provided by [22]. The authors presented a unified SI framework to explain the methods, techniques, and parameter choices in an SI algorithm for feature selection. They also presented the different datasets used to implement SI algorithms as well as the different SI algorithms themselves.
Another literature review, focusing on PSO algorithms and ant colony optimization (ACO) algorithms as representatives of SI, was presented by [23]. The authors gave a description of the PSO and ACO algorithms as well as a state of the art on other SI algorithms. They also presented a status report on the use of SI in solving real-life problems.
Regarding the use of SI in solving PRPs, a PSO algorithm has recently been proposed to solve the planning problem in an intelligent food logistics system model [24]. The authors formulated the problem as a mixed integer multi-objective linear program. Four objectives are addressed in this study: the minimization of the total system expense, the maximization of the average food quality, the minimization of CO2 emissions in transportation and production, and the minimization of the total weighted delivery time.
Tests conducted on small, medium, and large data sets resulted in cost reductions of 15.51% compared to a three-step decomposition method.
A self-adaptive evolutionary algorithm was proposed for berth planning in the operation of maritime container terminals [25]. The authors proposed a chromosome in which the crossover and mutation probabilities are themselves encoded. They focused their study on the comparison of several methods of parameter selection in evolutionary algorithms. On the one hand, there are parameter tuning approaches, in which the values of the selected parameters remain unchanged throughout the execution of the algorithm. On the other hand, there are parameter control approaches, in which the parameters of the algorithm are adjusted throughout its execution according to certain strategies; parameter control can be deterministic, adaptive, or self-adaptive. Test results show that the evolutionary algorithm with self-adaptive parameter control outperforms evolutionary algorithms using deterministic parameter control, adaptive parameter control and a parameter tuning strategy by 4.01%, 6.83%, and 11.84%, respectively, based on the value of the objective function.
Another evolutionary algorithm was used to address a VRP in a "factory in a box" setting [26]. This concept involves assembling production modules into containers and transporting the containers to different customer sites. The authors modeled the problem as a mixed integer program and used CPLEX to solve the mathematical model. In addition to the evolutionary algorithm, they also proposed three metaheuristics, namely variable neighborhood search (VNS), tabu search (TS), and simulated annealing (SA). The results of tests carried out on large-sized instances show that the evolutionary algorithm provides better results than the other metaheuristics (VNS, TS, SA) developed for the problem.
The genetic algorithm (GA) is a stochastic strategy for solving complex optimization problems that takes advantage of concepts from natural genetics and the theory of evolution. In [27], it was used to solve a production, inventory and distribution planning problem taking several products into account; the authors solved the integrated LSP and VRP problem. They also proposed an integer program to minimize the total cost of the system. Tests performed on small instances found an optimality deviation of 1.739% compared to the branch-and-bound (B&B) method. A multi-plant supply chain model with multiple customers was proposed by [28]. In this supply chain, products are delivered directly from the factory to the customers. The authors developed a hybrid GA to minimize the overall cost of production and distribution. They used a multi-point crossover operator, with the number of cut-off points equal to the length of the chromosome divided by 4 and a 15% probability of crossover.
A greedy randomized adaptive search procedure (GRASP) has been proposed for solving a PRP involving a single type of product over a multi-period planning horizon with a homogeneous fleet of vehicles [29]. The authors proposed three versions of GRASP for the integrated resolution of the PRP: a classical GRASP, an improved version of GRASP incorporating reactive mechanisms, and a version of GRASP with a path relinking process. Test results on 90 randomly generated instances with 50, 100 and 200 clients over 20 periods established the effectiveness of GRASP over the traditional method of decomposing the problem into sub-problems. These results also showed that GRASP with path relinking and GRASP with reactive mechanisms give better results than the traditional GRASP (without improvement). A GRASP has also been developed to solve a bi-objective production and distribution problem [30]. The objectives concern the minimization of the total production and distribution costs and the balancing of the total workload in the supply chain, with a homogeneous fleet of vehicles transporting the products. The results of tests on literature instances show that the GRASP developed obtains a relatively small number of non-dominated solutions in a very short computing time. Although the approximation of the Pareto front for each instance is discontinuous and not convex, it highlights the trade-off between the two objective functions.
A reactive tabu search (RTS) was presented for solving a PRP satisfying a time-varying demand with a homogeneous fleet of vehicles over a finite planning horizon [31]. The authors compared their test results with the results obtained with GRASP on instances of up to 200 clients and 20 periods. This comparison revealed an improvement in the results ranging from 10% to 20% on the overall cost of operations, at the price of an increase in computing time. A tabu search with path relinking (TSPR) was used for PRP resolution with a homogeneous fleet of vehicles [32]. Tests performed on datasets from the literature showed that TSPR gives better results than the memetic algorithm with population management (MA|PM), with an improvement ranging from 2.20% to 8.78% compared to RTS.
An adaptive large neighborhood search (ALNS) procedure was used to compute the lower bound in the formulation of the multivehicle production and inventory routing problem (MVPRP) [33]. The results of tests carried out on small, medium, and large size instances established the effectiveness of ALNS over GRASP, MA|PM, RTS, and TSPR. The same authors also used ALNS to compute upper bounds in a B&C procedure for solving the MVPRP [17].
The variable neighborhood search (VNS) was developed for PRP resolution with a homogeneous fleet of vehicles [34]. The test results for this algorithm show that it is as competitive as the ALNS algorithm.
Introduced by Moscato [35], the memetic algorithm (MA) is a more powerful version of the GA, based on the use of local search to intensify the search and thereby increase its efficiency. Preserving diversity is crucial in evolutionary algorithms in general [36] and in GAs in particular. Crossover and mutation operators are the best means of guaranteeing the diversity of the population from one generation to the next. A good strategy combining diversification and intensification is the key to success, which gives the MA a definite advantage over the GA.
In the area of supply chain planning, the MA has been used to solve the LSP: it has been proposed for a stochastic multi-product sequencing and lot sizing problem [37], for the LSP in soft drink plants [38] and for a multi-stage capacitated LSP [39]. For the VRP family, the MA has been proposed for the VRP with time windows (VRPTW) [40,41]; see also [42,43] for the capacitated VRP as well as [44,45] for the use of heterogeneous fleets of vehicles. The use of the MA is also very present in the optimization of integrated problems such as the PRP. An MA with population management (MA|PM) was developed by Boudia and Prins [46] to solve the integrated production, inventory, and distribution problem. The authors tackled the simultaneous minimization of three costs: the setup cost, the inventory cost and the distribution cost. Tests on 90 randomly generated instances showed the efficiency of MA|PM compared to the TPDH and GRASP. The MA was also used in a study aimed at minimizing the overall cost of setup, inventory, and distribution in a multi-plant supply chain in which a KANBAN information flow management system was implemented [47]. A real case study concerning the minimization of the overall cost of production and distribution was conducted in a large automotive company [48]. In this study, the authors modeled the problem as a non-linear mixed integer program and used a custom MA for its resolution.
The MA used in this work is an adaptation of the one proposed by Boudia and Prins [46] to solve an integrated production, inventory, and distribution problem.
The remainder of this work is organized as follows: Section 2 is a description of the EDPRP. Section 3 shows the details of the MA used for EDPRP resolution. Section 4 highlights the conditions for experimentation and the comparison of the results obtained by the MA and the TPDH of H2 type. Finally, Section 5 provides a conclusion on the comparative study of MA and TPDH concerning EDPRP.
External Depot Production Routing Problem (EDPRP)
The EDPRP is an extension of the classic PRP in which deterministic demands are met by a homogeneous fleet of vehicles over a discrete (multi-period) and finite planning horizon. In the EDPRP, the geographical position of a plant without storage capacity is different from that of the depot. The depot is supplied by the plant, and customer demand is satisfied only from the products stored at the depot. All vehicles start and end their trips at the depot. The collection of products from the plant to supply the depot is carried out either by vehicles that leave the depot, collect a quantity of products directly from the plant and return to the depot, or by vehicles that do so after satisfying the demand of one or more customers. In this extension of the PRP, products manufactured at the plant are not delivered directly to customers: they are first stored at the depot before being delivered. It is also important to note that the quantities of products received by the depot in period t are not distributed in the same period; these products must first be stored before distribution. Only the ML policy has been implemented in this work as the replenishment policy.
To describe the problem, the following elements have been defined: G = (N, A) is a complete graph in which N represents the set of nodes formed by the plant, the depot and the customers, with index i ∈ {0, . . . , n + 1}, and A = {(i, j): i, j ∈ N, i ≠ j} is the set of arcs in G. The plant is represented by n + 1 and the depot is indexed by 0. N_c = {1, . . . , n} is the set of customers, N_dc = {0, . . . , n} is the set made up of the depot and the customers, N_cu = {1, . . . , n + 1} is the set made up of the customers and the plant, and N = {0, . . . , n + 1} is the set consisting of the depot, the customers, and the plant. T = {1, . . . , l} is the set of periods (days) of the planning horizon and K = {1, . . . , m} is the set of vehicles. See [19] for the parameters, the variables and the MIP linking them. The product collection and distribution network of the EDPRP is summarized in Figure 1. The authors of [19] proposed a TPDH to solve the problem. In this work, we develop an MA to compare its results with those of the TPDH in the resolution of the EDPRP. The details of this MA are presented in the following section.
Encoding
We use the tuple (P, Y, X, R) to describe a complete solution to the problem. In this tuple, P is a list of the quantity produced in each period t of the planning horizon. Each quantity produced P_t (or P[t]) is an integer between 0 and the maximum production capacity of the plant (C). Y is a list of binary numbers over the periods t of the planning horizon: Y_t = 1 means that there is production at the plant on date t, and 0 otherwise. X is an (n + 1) × l matrix in which X_it represents the amount of product sent to the depot (i = 0) or to a customer (i ∈ N_c). X_it is an integer between 0 and the maximum storage capacity L_i of the node i ∈ N_dc and can be made up of several demands of the customer i. R is a list of integers with R_p ∈ {−1} ∪ N. It is a successive list of trips delimited by the symbol 0 when the trips belong to the same day, or by the symbol −1 when the trips belong to different days. Thus 0 and −1 also refer to the depot in the modeling of the solution. As the symbol −1 is also the day delimiter, a succession of this symbol indicates a period without a vehicle tour.
The knowledge of R, X and I_i0 (i ∈ N_c) makes it possible to compute I_it (i ∈ N_c) for all t ∈ T and to verify the constraints related to the storage capacity of the customers L_i (t ∈ T), the vehicle limit load (Q) and the limitation of the number of vehicles in the fleet (m). With the knowledge of Y, X, and I_00, it is also possible to compute P_t (t ∈ T) and I_0t (∀t ∈ T) and to check the constraint on the storage capacity of the depot (L_0). The dataset of Table 1 allows an example of a complete solution to be built for n = 10, l = 3, m = 2, C = 304 and Q = 198. This solution is described by Table 2, in which R is a succession of trips that begin at the depot (0 or −1) and end at the depot over a planning horizon of 3 days (three periods) delimited by the symbol −1. X_it is the amount sent to each customer or to the depot i in period t. The amount distributed to customers in the first period is 25 + 18 + 33 = 76. This means that the demand of the first period is met from the initial stock of the depot (I_00 = 76). No quantity is sent to the plant since it has no storage capacity. Y_1 = 1 means that there is production in the first period of T. This automatically leads to a replenishment of the depot with a quantity P_1 = 137 units of product. P and Y can therefore be represented by Y = [1, 0, 0] and P = [137, 0, 0]. Finally, with the knowledge of R, X, Y, and P, it is possible to evaluate a solution.
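To make the tuple concrete, the following C++ sketch mirrors the (P, Y, X, R) encoding; the struct layout and the trip decoder are illustrative assumptions, not the authors' code:

```cpp
#include <vector>

// Sketch of the (P, Y, X, R) encoding described above.
struct Solution {
    std::vector<int> P;               // P[t]: quantity produced in period t (0..C)
    std::vector<int> Y;               // Y[t]: 1 if there is production in period t
    std::vector<std::vector<int>> X;  // X[i][t]: quantity sent to node i in period t
    std::vector<int> R;               // flat trip list: 0 closes a trip within a day,
                                      // -1 closes a day (both symbols denote the depot)
};

// Illustrative decoder: splits R into one list of trips per day. A run of
// consecutive -1 symbols yields empty days (periods without any vehicle tour).
// It assumes R does not start with a day delimiter; adjust if the convention differs.
std::vector<std::vector<std::vector<int>>> decodeTrips(const std::vector<int>& R)
{
    std::vector<std::vector<std::vector<int>>> days(1);
    std::vector<int> trip;
    for (int s : R) {
        if (s == 0 || s == -1) {              // back at the depot: close the trip
            if (!trip.empty()) { days.back().push_back(trip); trip.clear(); }
            if (s == -1) days.emplace_back(); // -1 additionally closes the day
        } else {
            trip.push_back(s);                // customer (or plant, s == n + 1)
        }
    }
    if (!trip.empty()) days.back().push_back(trip);
    return days;
}
```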
Evaluation of the Solution
The cost (or fitness) Z of a solution can be determined by the following formula, assembled from the three cost components described below: Z = ∑_{t∈T} (u P_t + f Y_t) + ∑_{t∈T} ∑_{i∈N_dc} h_i I_it + ∑_{t∈T} ∑_{k∈K} ∑_{(i,j)∈A} c_ij x_ijkt, where x_ijkt = 1 if vehicle k traverses arc (i, j) in period t and 0 otherwise. In this three-part formula, ∑_{t∈T} (u P_t + f Y_t) denotes the total cost of production. This cost can be subdivided into two parts, namely the variable cost of production, defined by the sum of the production costs per period (u P_t), and the sum of the setup costs incurred when there is production on day t (f Y_t). Then we have the inventory cost, expressed by ∑_{t∈T} ∑_{i∈N_dc} h_i I_it, and finally the distribution (transport) cost, represented by ∑_{t∈T} ∑_{k∈K} ∑_{(i,j)∈A} c_ij x_ijkt. The parameter c_ij refers to the Euclidean distance between node i and node j.
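A direct transcription of this three-part formula into C++ might look as follows, reusing the Solution sketch above; the inventory matrix I and the distance matrix c are assumed to be precomputed, and it is assumed that each day's trips in R are closed by a depot symbol:

```cpp
#include <vector>

// Hedged sketch of the fitness Z = production cost + inventory cost +
// distribution cost, following the formula above. Names are illustrative.
double evaluate(const Solution& s,
                const std::vector<std::vector<double>>& c,  // c[i][j]: distances
                const std::vector<std::vector<int>>& I,     // I[i][t]: inventories
                const std::vector<double>& h,               // h[i]: holding costs
                double u, double f)                         // unit and setup costs
{
    double z = 0.0;
    for (std::size_t t = 0; t < s.P.size(); ++t)            // production cost
        z += u * s.P[t] + f * s.Y[t];
    for (std::size_t i = 0; i < I.size(); ++i)              // inventory cost
        for (std::size_t t = 0; t < I[i].size(); ++t)
            z += h[i] * I[i][t];
    int prev = 0;                                           // distribution cost:
    for (int sym : s.R) {                                   // walk the trip list
        int node = (sym <= 0) ? 0 : sym;                    // 0 and -1 = depot
        z += c[prev][node];                                 // c[0][0] == 0
        prev = node;
    }
    return z;
}
```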
Construction of the Initial Population
Let Pop_size be the size (number of individuals, i.e., solutions) of the initial population Pop_0. This population consists of Pop_size randomly generated solutions. Each individual of the initial population is generated in the three steps described below. Step 1: construction of Y.
The construction of Y consists in determining the production days, i.e., the days t for which Y_t = 1. To achieve this goal, we first determine the total quantity produced over the planning horizon. Let NP be this quantity of products. It is equal to the difference between the sum of the demands of all customers and the sum of the initial inventories at the depot and at the customers: NP = ∑_{t∈T} ∑_{i∈N_c} d_it − ∑_{i∈N_dc} I_i0. Once the total quantity to be produced is computed, we determine the number of days required to produce it. To avoid a hard bin-packing problem with the limited fleet, we limit the capacity of the fleet to 90%. Let c_f be the capacity of the fleet and NY = ∑_{t∈T} Y_t the number of days needed to produce NP. Then c_f and NY are computed as c_f = 0.9 × m × Q and NY = ⌈(∑_{t∈T} ∑_{i∈N_c} d_it − ∑_{i∈N_dc} I_i0) / min(C, c_f, L_0)⌉. Once NY is determined, NY days are randomly drawn in T \ {l}. It is not possible to produce on the last day of the planning horizon (l) because this production could not be distributed to customers. Here, P can already be initialized by assigning NP to P_t, where t is the smallest value of T having Y_t = 1, without considering the violation of the maximum production capacity (C).
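A hedged C++ sketch of this step, under the assumption that the ceiling is taken when computing NY (all names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <random>
#include <vector>

// Hedged sketch of Step 1: compute NP, c_f and NY, then draw NY production
// days at random in T \ {l}. d[i][t] are customer demands, I0 the initial
// inventories over N_dc (depot = index 0).
std::vector<int> buildY(const std::vector<std::vector<int>>& d,
                        const std::vector<int>& I0,
                        int C, int L0, int m, int Q, int l, std::mt19937& rng)
{
    long long NP = 0;                            // total quantity to produce
    for (const auto& di : d)
        NP += std::accumulate(di.begin(), di.end(), 0LL);
    NP -= std::accumulate(I0.begin(), I0.end(), 0LL);

    double cf = 0.9 * m * Q;                     // fleet capacity capped at 90%
    double cap = std::min({double(C), cf, double(L0)});
    int NY = int(std::ceil(double(NP) / cap));   // production days needed

    std::vector<int> days(l - 1);                // candidate days 1..l-1, never l
    std::iota(days.begin(), days.end(), 1);
    std::shuffle(days.begin(), days.end(), rng);

    std::vector<int> Y(l + 1, 0);                // 1-based: Y[t] = 1 on drawn days
    for (int k = 0; k < NY && k < int(days.size()); ++k)
        Y[days[k]] = 1;
    return Y;
}
```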
Step 2: P and X construction day after day.
For any customer i, X_i1 is initialized with the quantity necessary so that I_i0 and X_i1 together cover at least one demand (for each customer i, d_it). For example, if the demand d_i1 = 10, then for I_i0 = 5 we have X_i1 = 5, for I_i0 = 10 we have X_i1 = 0, and for I_i0 = 15 we have X_i1 = 0. For each period t, giving priority to customers who are out of stock, we aggregate the demands for each customer i without violating its storage limit capacity L_i or the fleet capacity c_f, in accordance with the quantity of products available at the depot the day before, I_0,t−1. The quantities produced are then adapted to the production capacity C, then to the capacity of the fleet c_f and to that of the depot L_0. This makes it possible to resolve the production capacity overruns authorized in Step 1. This approach is an adaptation of the dynamic programming (DP) method proposed by Wagner and Whitin [49] for the resolution of the classic LSP.
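The first-period initialisation rule illustrated above reduces to a one-liner; a minimal sketch:

```cpp
#include <algorithm>

// Sketch of the X_i1 initialisation: top the initial stock I_i0 up to the
// first-period demand d_i1, so that I_i0 + X_i1 >= d_i1. With d_i1 = 10:
// I_i0 = 5 gives X_i1 = 5; I_i0 = 10 or 15 gives X_i1 = 0.
int initialShipment(int d_i1, int I_i0)
{
    return std::max(0, d_i1 - I_i0);
}
```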
Step 3: Construction of R day after day.
After determining P and X, the quantities sent to customers in each period are successively aggregated up to the capacity of a vehicle. This operation is repeated until the number of vehicles necessary to deliver the products of each period is determined. This procedure is an adaptation of Clarke and Wright's savings algorithm [50] with the imposition of a limited fleet. At this stage, R does not yet contain the plant, represented by the node i = n + 1. The plant is added to R only in the periods t for which Y_t = 1. The number of n + 1 symbols added is equal to the number of vehicles necessary to send all the production of period t to the depot (for Y_t = 1 with t ∈ T \ {l}, the number of vehicles needed to transport P_t from the plant to the depot is ⌈P_t/Q⌉). Once the initial population has been constituted, a selection and crossover procedure is carried out to form the child population CHILDS_0. In this study, the population size of the children is set to Pop_size/2. The following section describes the procedure for selecting and crossing parents of the initial population to constitute CHILDS_0.
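The plant-visit count at the end of Step 3 is a ceiling division; a minimal sketch:

```cpp
// Sketch of the fleet sizing rule: on each production day t (Y_t = 1, t < l),
// the number of trips needed to haul P_t from the plant to the depot is
// ceil(P_t / Q), computed here with integer arithmetic (P_t >= 0, Q > 0).
int plantTrips(int P_t, int Q)
{
    return (P_t + Q - 1) / Q;
}
```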
Selection and Crossover Procedure
Each child of the initial generation CHILDS_0 is the result of the crossover of two parents selected by a binary tournament procedure. A selection by binary tournament consists of keeping the better of two solutions drawn at random from the population Pop_0. Let A be the parent selected by the first selection procedure and B the parent selected by the second. A two-point crossover of parents A and B is then carried out to determine a single child E. The crossover procedure can be described as follows (a code sketch is given after the repair cases below). Let us denote by (•) the membership operator: writing A•R means that R belongs to A. Let us denote by R_p or R(p) the element of rank p in R and by R_p→q or R(p→q) the sub-list of R going from p to q. In this work, the cutting points must correspond only to the day delimiter (−1). To determine the cutting points, two dates t_1, t_2 ∈ T \ {l} with t_1 ≠ t_2 are drawn at random. Then d_1 and d_2 are determined so that d_1 = min(t_1, t_2) and d_2 = max(t_1, t_2). Let pos(t) be the function that associates with each date its position in A•R (p = pos(t)). A•R is segmented into three parts: the left part, A•RL = A•R(0 → pos(d_1)); the middle part, A•RM = A•R(pos(d_1) + 1 → pos(d_2)); and the right part, A•RR = A•R(pos(d_2) + 1 → pos(l)). Similarly, parent B is segmented according to the same values of d_1 and d_2 determined for the A•R segmentation. The construction of child E from the crossover of A and B is done as follows: let E•RL, E•RM, and E•RR be the three parts of E•R. These three parts are initialized as E•RL = B•RL, E•RM = A•RM and E•RR = B•RR. After the initialization of E•R, a check is made to correct any anomalies in its construction. Contrary to Boudia and Prins' correction strategy of browsing the symbols in E•R [28], we adopt a simplified correction strategy of browsing the symbols in N_c. This strategy allows the missing customers in E•R to be identified quickly and the appropriate corrections to be made. The correction is as follows. Let i be the browsing index of N_c, TX_i the total quantity delivered to the customer i over T in E•X and TU_i the total quantity of product that was to be delivered to i over T. We have TX_i = ∑_{t∈T} X_it and TU_i = ∑_{t∈T} d_it − I_i0. Through a comparison between TX_i and TU_i, we group the customers into three subsets. The first subset (Case 1) contains the customers for whom TX_i = TU_i. The second subset (Case 2) contains the customers i for whom TX_i > TU_i, and Case 3 brings together the customers for whom TX_i < TU_i. In the process of repairing the chromosome, all customers affected by Case 2 are treated first, then the customers concerned by Case 3 and finally those concerned by Case 1. Each case is treated as follows. Case 1. If TX_i = TU_i, then there is nothing to do for the customer i: all demands (d_it, ∀t ∈ T) are already met by the initial stock (I_i0) and the quantities delivered over T (TX_i).
Case 2. If TX_i > TU_i, then a reverse browsing of T is made (from l to 1). E•X_it is successively reduced by one demand d_it at a time. For a given period t, if E•X_it falls to 0, then i is removed from its trip at date t in E•R. This correction stops at the period t at which TX_i = TU_i. Case 3. If TX_i < TU_i, then let τ be the first out-of-stock date at i if it is not supplied beyond the last date on which its initial stock covers its demands (with a constant per-period demand d_i, τ = ⌊(I_i0 + TX_i)/d_i⌋ + 1, as in the example below), and let TC_i = TU_i − TX_i be the quantity needed to complete TX_i to TU_i. Browsing T from τ to l, E•R is corrected as follows: (1) if E•X_it ≠ 0, then add TC_i to E•X_it; (2) if Q (the vehicle limit capacity) is thereby violated, then i is removed from its current tour. However, if E•X_it = 0, E•R is corrected by implementing steps (3), (4), and (5), the first of which consists of a best insertion of i into a tour of the period considered (see the worked example below).
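As announced above, here is a hedged C++ sketch of the two-point crossover on R; each parent is cut at its own positions of the delimiters closing days d_1 and d_2, and the helper posOfDay is an illustrative assumption:

```cpp
#include <vector>

// Returns the index in R of the day delimiter (-1) that closes day t.
static int posOfDay(const std::vector<int>& R, int t)
{
    int seen = 0;
    for (int p = 0; p < int(R.size()); ++p)
        if (R[p] == -1 && ++seen == t) return p;
    return int(R.size()) - 1;   // fallback; with well-formed R this is pos(l)
}

// Sketch of E.R = B.RL + A.RM + B.RR for cut days d1 < d2; the resulting
// child generally still needs the repair procedure described above.
std::vector<int> crossoverR(const std::vector<int>& A, const std::vector<int>& B,
                            int d1, int d2)
{
    int a1 = posOfDay(A, d1), a2 = posOfDay(A, d2);
    int b1 = posOfDay(B, d1), b2 = posOfDay(B, d2);
    std::vector<int> E(B.begin(), B.begin() + b1 + 1);          // B's left part
    E.insert(E.end(), A.begin() + a1 + 1, A.begin() + a2 + 1);  // A's middle part
    E.insert(E.end(), B.begin() + b2 + 1, B.end());             // B's right part
    return E;
}
```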
Example of Application of the Crossover and Repair Procedure
Consider the dataset (n = 10, l = 6, m = 2) described in Table 3. Let A and B be two solutions from the initial population. Chromosome A is represented by Table 4 and solution B by Table 5. Let t_1 = 3 and t_2 = 5 be the cut points of chromosomes A and B (t_1 and t_2 correspond to dates, or periods, of the planning horizon). Child C, represented in Table 6, consists of three parts: the left side C•RL = B•RL, the middle C•RM = A•RM and the right side C•RR = B•RR. After the C chromosome is formed, anomalies can be found. To repair the C chromosome, we go through the list of customers and make the following corrections, where TU_i is the total amount of products that should have been received by customer i and TX_i the amount actually received by customer i.
Customers for whom TX_i > TU_i: i ∈ {3; 4; 8}. For the customer i = 3, we have TX_i = 60 + 15 = 75 and TU_i = 15 × 6 − I_i0 = 90 − 30 = 60. By browsing T from t = 6 to t = 1, we subtract one demand from TX_i at the date t = 4 and we get TX_i = 75 − 15 = 60. Thus, the amount sent to i in period 4 becomes X_3,4 = 15 − 15 = 0 and i = 3 is removed from the tour of period 4. The result of this correction is shown in Table 7 (Correction for i = 3 in C). For the customer i = 4, we have TX_i = 49 and TU_i = 7 × 6 − 7 = 35. By browsing T from t = 6 to t = 1, we remove two demands from TX_i at the period t = 4 to get TX_i = 35 and X_4,4 = 21 − 7 − 7 = 7. See Table 8 for the C chromosome after the correction for i = 4. For the customer i = 8, we have TX_i = 104 and TU_i = 13 × 6 − 13 = 65. The successive subtraction of one demand at a time from TX_i until it equals TU_i gives the following results. Subtracting one demand at period 6 gives TX_i = 104 − 13 = 91 and X_8,6 = 0, so customer 8 is removed from the tour of period 6. Subtracting two demands from TX_i at period t = 4 gives TX_i = 91 − 13 − 13 = 65 and X_8,4 = 39 − 13 − 13 = 13. The result of this correction is presented in Table 9 (Correction for i = 8 in C). Customers for whom TX_i < TU_i: i ∈ {1; 2; 7}. For the customer i = 1, the tour of period 4 contains i = 1, so we apply step (1) by adding TC_i to X_1,4. We have TC_i + X_1,4 = 10 + 20 = 30. This changes the vehicle's load from 84 to 94 in period 4, for a maximum load of 198. The load of the vehicle is not violated, and the process stops for the customer i = 1. Table 10 illustrates the correction for customer i = 1. For the customer i = 2, we have τ = ⌊(30 + 15)/15⌋ + 1 = 4 and I_0,4 = I_0,3 + P_4 − 94 = 152 + 0 − 94 = 58 > TC_i. The tour of period t = 4 does not contain i = 2, hence we apply step (3) by making a best insertion of i = 2 into the tour of period t = 4. As a result of this insertion, the new load of the vehicle is 139 < 198. The procedure stops for i = 2. The resulting chromosome is shown in Table 11 (Correction for i = 2 in C). For the customer i = 7, we have τ = ⌊(110 + 0)/22⌋ + 1 = 6 and I_0,4 = I_0,3 + P_4 − 139 = 152 + 0 − 139 = 13; I_0,5 = I_0,4 + P_5 − 0 = 13 + 44 = 57; I_0,6 = I_0,5 + P_6 − (16 + 19) = 57 + 0 − 35 = 22 ≥ TC_i. The tour of period t = 6 does not contain i = 7, hence we apply step (3) by also making a best insertion of i = 7 into the tour of period 6. As a result of this insertion, the new load of the vehicle is 16 + 19 + 22 = 57 < 198. The procedure stops for i = 7. It is noticeable here that I_0,6 = 0 after the delivery to customer 7 (see Table 12). Customers for whom TX_i = TU_i: i ∈ {5; 6; 9; 10}. For customers 5, 6, 9, and 10 there is nothing to do because the theoretical amount to be received is equal to the amount received. The result of this repair procedure is presented in Table 13.
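The bookkeeping used throughout this example (computing TX_i and TU_i and assigning each customer to one of the three repair cases) can be sketched as follows; names are illustrative:

```cpp
#include <numeric>
#include <vector>

// Sketch of the repair bookkeeping: TX_i is the total amount delivered to
// customer i over T in the child's X; TU_i = sum_t d_it - I_i0 is the amount
// that should have been delivered. Returns 1, 2 or 3 for the repair case.
int repairCase(const std::vector<int>& Xi,   // Xi[t] = X_it for customer i
               const std::vector<int>& di,   // di[t] = d_it
               int I_i0)
{
    long long TX = std::accumulate(Xi.begin(), Xi.end(), 0LL);
    long long TU = std::accumulate(di.begin(), di.end(), 0LL) - I_i0;
    if (TX == TU) return 1;   // nothing to do
    return (TX > TU) ? 2 : 3; // 2: remove surplus; 3: re-insert TC_i = TU - TX
}
```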
Local Search Procedures
Three local search algorithms have been developed for the intensification phases of the MA. For all i, j ∈ N_c, these algorithms can be described as follows (a code sketch follows the descriptions below). SWAP1 (S1): a local search algorithm that exchanges the positions of two customers i and j within the same period t (∀t ∈ T), in accordance with the capacities of the vehicles assigned to transport the products. This exchange is only possible if customers i and j are visited on the same date t, regardless of whether they belong to the same trip.
BEST INSERTION (BI): BI consists of removing customer i from its current trip on date t and searching, among all existing trips of date t, for the position that offers the minimum transport cost in accordance with the capacity of the vehicles (∀t ∈ T). SWAP2 (S2): this algorithm exchanges two customers i and j visited at consecutive periods t and t + 1, respectively. If the exchange does not cause a stock-out and meets the conditions on the maximum production capacity C, the storage capacities L_i (i ∈ N_c) and L_0, and the vehicle capacity Q, the resulting solution is compared to the best current solution (∀t ∈ T \ {l}).
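As an illustration, here is a hedged C++ sketch of the feasibility test behind a SWAP1-style exchange; the cost comparison with the best current solution is left to the caller, and all names are assumptions:

```cpp
#include <utility>
#include <vector>

// Sketch of a SWAP1 move: exchange the customers at positions (a, pa) and
// (b, pb) among the trips of one day. When the two customers belong to
// different trips, the move is feasible only if both vehicles still respect
// the limit load Q after swapping the delivered quantities q[].
bool trySwap1(std::vector<std::vector<int>>& trips,  // trips[k]: customers of trip k
              std::vector<int>& load,                // load[k]: current load of trip k
              const std::vector<int>& q,             // q[i]: quantity for customer i
              int Q, int a, int pa, int b, int pb)
{
    int i = trips[a][pa], j = trips[b][pb];
    if (a != b) {                               // loads only change across trips
        int la = load[a] - q[i] + q[j];
        int lb = load[b] - q[j] + q[i];
        if (la > Q || lb > Q) return false;     // limit load would be violated
        load[a] = la;
        load[b] = lb;
    }
    std::swap(trips[a][pa], trips[b][pb]);      // perform the exchange
    return true;                                // caller compares transport costs
}
```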
Global Description of the MA
Let Pop_g be the population of generation g, with g ∈ {0, 1, . . . , Max_gen} (Pop_0 is the initial population), Max_gen the maximum number of generations, Childs the population of children and Term_Crit the termination criterion. The MA used in this work is described by Algorithm 1, which starts from the initialization g ← 0, Pop_g ← ∅ and Childs ← ∅.
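A self-contained C++ skeleton of this generational loop is sketched below. The genetic operators are passed in as callables, and the survivor rule shown (each child replaces the worst parent it beats) is an assumption, since the replacement scheme is not spelled out here:

```cpp
#include <algorithm>
#include <functional>
#include <random>
#include <utility>
#include <vector>

// Hedged skeleton of Algorithm 1: starting from g = 0, run Max_gen generations
// in which Pop_size/2 children are built by binary-tournament selection,
// two-point crossover with repair, and (with probability lsProb) a local
// search (SWAP1 / BI / SWAP2), before being merged back into the population.
template <typename S>
S runMA(std::vector<S> pop, int maxGen, double lsProb, std::mt19937& rng,
        std::function<S(const std::vector<S>&, std::mt19937&)> select,
        std::function<S(const S&, const S&, std::mt19937&)> crossAndRepair,
        std::function<void(S&, std::mt19937&)> localSearch,
        std::function<double(const S&)> fitness)
{
    std::bernoulli_distribution applyLS(lsProb);
    auto worse = [&](const S& x, const S& y) { return fitness(x) < fitness(y); };
    for (int g = 0; g < maxGen; ++g) {           // Term_Crit: g reaches Max_gen
        std::vector<S> childs;
        while (int(childs.size()) < int(pop.size()) / 2) {
            S e = crossAndRepair(select(pop, rng), select(pop, rng), rng);
            if (applyLS(rng)) localSearch(e, rng);   // intensification step
            childs.push_back(std::move(e));
        }
        for (S& c : childs) {                    // assumed replacement scheme
            auto w = std::max_element(pop.begin(), pop.end(), worse);
            if (fitness(c) < fitness(*w)) *w = std::move(c);
        }
    }
    return *std::min_element(pop.begin(), pop.end(), worse);
}
```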
Experimentation
The MA described in Algorithm 1 is implemented in C++ on a 64-bit Intel Pentium Dual Core 1.60 GHz personal computer with 4 GB of RAM. Details of the instances used in the various simulations can be found in [19]. In this work, the mutation operator is replaced by a local search and the selection of parents is done by a binary tournament procedure. Thus, the relevant parameters of the MA are the size of the population (Pop_Size), the maximum number of generations (max_gen) and the probability with which the local search is applied to each solution (LS_search_prob). A parameter tuning strategy based on experimental comparisons (testing combinations of three population sizes, three maximum numbers of generations and three local search probabilities) was carried out on the four classes of instances with 20 clients, six periods, and two vehicles. A total of 3 × 3 × 3 × 4 = 108 tests allowed the best parameters, presented in Table 14, to be retained. This parameter tuning strategy contrasts with other methods based on parameter control; for a better understanding of parameter handling, see [25]. The aim of this study is to compare the results of the MA with those obtained with the two-phase decomposition heuristic (TPDH). To this end, several tests are conducted to evaluate the effectiveness of each local search algorithm as well as of combinations of some of them.
Results
The tests performed on 128 instances have yielded the averages over the four classes of instances represented by the rows in Table 15.
In this table, %Diff denotes the percentage difference between the cost determined with the MA and the cost determined with the TPDH; this rate of change measures the relative evolution of the MA results compared to the TPDH. The column Instance, with the sub-columns n, l, m, designates the instances used for the tests, with n clients, l periods on the planning horizon and m vehicles. The columns SWAP1, BI, SWAP2, BI&SWAP2, SWAP1&SWAP2, and ALL contain the sub-columns of the total cost percentage difference and of the CPU time percentage difference for the corresponding local search algorithms.
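For reference, the %Diff statistic reduces to the usual relative change; a one-line sketch:

```cpp
// %Diff as used in Table 15: relative change of the MA result with respect
// to the TPDH result; a negative value means the MA improved on the TPDH.
double percentDiff(double ma, double tpdh)
{
    return 100.0 * (ma - tpdh) / tpdh;
}
```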
In Table 15, the combination of the local searches SWAP1 and SWAP2 (SWAP1&SWAP2) offers the best percentage difference, with an overall average decrease of 10.50% in the cost obtained with the MA compared to the TPDH. The second-best combination of local search algorithms is the use of BI and SWAP2 (BI&SWAP2), with an overall reduction of 10.23% in the total cost of production, storage, and vehicle tours. In this experiment, it was found that the combination of all local searches is less efficient than the combination of two of them. In terms of computation time, only the use of BI as the local search method offers a reduction, of 36.85% (%Diff CPU = −36.85% in Table 15). Although it produced the largest number of better solutions, the MA with the combination of the SWAP1 and SWAP2 local searches shows an overall relative increase of 1759.09% in computation time compared to the TPDH. Out of the 192 (32 × 6) results obtained over all tests, the TPDH obtained better results in only three tests, concerning the instances (n = 40, l = 3, m = 3) with the use of the local search methods BI and SWAP2 and the combination of SWAP1 and SWAP2, which gives a very low rate of 1.56% (3/192). The best solution for each of the 32 instances is obtained with the MA. Table 16 shows the distribution of the best solutions according to the local search method, or combination of local search methods, used, considering both the overall cost and the computation time. In Table 16, Nbr_BS refers to the number of best solutions obtained with each local search method in the implementation of the MA. According to Tables 15 and 16, the local search methods BI, SWAP1 (S1) and SWAP2 (S2), taken individually, yielded the same number of best solutions in the implementation of the MA: each produced 2 of the 32 average results, i.e., a rate of 6.25%. The combination of all the local search methods (ALL) also resulted in two best results, which again corresponds to a rate of 6.25% (100 × (2/32)). The highest number of best solutions was obtained with the combination of the local search methods SWAP1 and SWAP2; this combination alone accounts for 17 of the 32 best results in Table 16, a rate of 53.125%. The second-best combination is the one using BI and SWAP2, which yielded 7 best solutions out of 32, a rate of 21.875%. In Table 17, each row gives the details of the best solution for each instance of Table 15. The columns PROD, INV, and TRANS indicate, respectively, the average production, storage, and transport (vehicle trips) costs of the best solution obtained with the MA. COST is the average total cost of production, storage, and distribution (transport). RL refers to the local search used to obtain this result. Overall, Table 17 shows that the share of the production cost in the overall cost is 72.87% (100 × (72,758.91/99,847.04)). The overall cost of storage accounts for 11.39% (100 × (11,372.73/99,847.04)) of the overall cost and the cost of vehicle tours (distribution) accounts for 15.74% (100 × (15,715.41/99,847.04)). Tables 18-20 provide the details of the comparison of the production, storage, and distribution costs, respectively. In Table 18, all production costs obtained with the MA are less than or equal to those of the TPDH. The production costs of 16 cases solved with the MA are identical to those of the TPDH, which corresponds to 50% of the results in Table 18.
Overall, there is an average decrease of 5.18% in the cost of production obtained with the MA method compared to TPDH.
Regarding the inventory (storage) cost, there is an increase in the average inventory costs on all instances, with an overall average of 9887.34 for the cost obtained with the TPDH method against 11,372.73 for the MA. The overall average increase in inventory costs in the tests with the MA is 15.38% relative to the cost obtained with the TPDH method.
Contrary to the inventory costs, a decrease in the distribution cost is observed in all the results obtained with the MA compared to the costs computed with the TPDH: an overall average of 24,147.83 for the TPDH against 15,715.41 for the MA. The overall average percentage difference of the distribution cost computed with the MA relative to that of the TPDH is −35.60%, i.e., an overall average decrease of about 35% in the distribution cost obtained with the MA. Table 21 shows a comparison of the total cost, constituted by the production, inventory and distribution costs, obtained with the MA against the cost computed with the TPDH. With an average overall cost of 112,783.76 for the TPDH compared to 99,847.04 for the MA, there is a decrease ranging from 3.65% to 16.73% over all total costs computed with the MA, with an average overall decrease of 11.07%. With an overall average computation time of 459.51 s for the TPDH compared to 280.98 s for the MA, there is nevertheless an overall average increase of 1485.59% in the computation time reported for the MA; this apparent contradiction arises because the overall percentage is an average of per-instance relative differences and does not take the dispersion of the computation times across instances into account. Figure 2 highlights the effectiveness of the MA in reducing the overall cost of production, inventory, and distribution compared to the TPDH: the use of the MA reduces the overall cost of operations by between 3614 and 35,339. Figure 3 shows an increase in the computation time with the MA on all instances except instances 12, 20, 23, 24, and 28; for these instances, 95% of the computation time is consumed by the first phase, with the use of CPLEX to solve an LSP problem with direct delivery.
Figure 4 shows the general evolution of the MA algorithm for the EDPRP, similar to the one presented by [34,51]. In this figure, instances (1), (2), (3), and (4) are representative of the four classes of instances used to conduct the tests on the dataset. These instances, consisting of 40 clients (n = 40), six periods (l = 6), and three vehicles, highlight the general behavior of the MA. The figure shows that the memetic algorithm proposed for the EDPRP converges rapidly from relatively bad solutions to good solutions.
Conclusions
This paper focuses on the comparative study of the results obtained with a memetic algorithm (MA) and a two-phase decomposition heuristic (TPDH) for the management of an external depot in a production routing problem (EDPRP). In the MA developed in this work, three local search algorithms (LS) were used to intensify the search for the best solution on each instance adapted from the work of Adulyasak et al. [17]. The results show an improvement in the production cost computed with the MA compared to the TPDH ranging from 0% to 16.64%, with an overall average rate of 5.18%. Similarly, there is an improvement in the cost of transport ranging from 17.08% to 51.26%, with an overall average rate of 35.60%. However, the inventory cost increased by 5.69% to 32.10%, with an overall average increase rate of 15.38%. As regards the overall cost of production, inventory and distribution, there is a reduction ranging from 3.65% to 16.73%, with an overall rate of 11.07%. Such findings highlight the effectiveness of the MA relative to the TPDH.
Beyond the contributions developed in this work, many avenues of research remain to be examined. The MA using the local search method BI is the only one that provides a reduction (36.85%) in computation time compared to the TPDH, with an overall average time of 12.87 s. This implementation of the MA obtained two best results, with a decrease of 8.94% in the total global average cost compared to the TPDH. The use of the MA with the local search BI thus appears to be a very good algorithm for finding a good initial solution within the global scheme of a branch-and-cut algorithm.
In the model studied in this paper, the supply chain consists of a factory and a depot. Studying versions with several factories and/or depots would help to reflect certain realities of supply chain practice and management. Similarly, considering characteristics such as the use of heterogeneous vehicles, dynamic (stochastic) demands or delivery from the plant during production days would also highlight other aspects of real supply chain management.
A comparative study of the management of the MA parameters could reveal other potentials of this algorithm. For example, such a study would allow a comparison of the parameter tuning strategy with the three methods of parameter control: deterministic parameter control, adaptive parameter control, and self-adaptive parameter control.
|
2021-02-12T14:10:07.457Z
|
2021-01-19T00:00:00.000
|
{
"year": 2021,
"sha1": "4c9d926640485d672c3deca63f9a9ac3d2c409bc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4893/14/1/27/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6221f46ae17961e9466c99b2dd412206382b3d27",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
256866803
|
pes2o/s2orc
|
v3-fos-license
|
The Polish Society of Gynecological Oncology Guidelines for the Diagnosis and Treatment of Endometrial Carcinoma (2023)
Background: Due to the increasing amount of published data suggesting that endometrial carcinoma is a heterogeneous entity with possibly different treatment sequences and post-treatment follow-up, the Polish Society of Gynecological Oncology (PSGO) has developed new guidelines. Aim: To summarize the current evidence for the diagnosis, treatment, and follow-up of endometrial carcinoma and to provide evidence-based recommendations for clinical practice. Methods: The guidelines have been developed according to standards set by the guideline evaluation tool AGREE II (Appraisal of Guidelines for Research and Evaluation). The strength of scientific evidence has been defined in agreement with The Agency for Health Technology Assessment and Tariff System (AOTMiT) guidelines for scientific evidence classification. The grades of recommendation have been based on the strength of evidence and the level of consensus of the PSGO development group. Conclusion: Based on current evidence, both the implementation of the molecular classification of endometrial cancer patients at the beginning of the treatment sequence and the extension of the final postoperative pathological report with additional biomarkers are needed to optimize and improve treatment results as well as to pave the way for future clinical trials on targeted therapies.
Background
The Polish Society of Gynecological Oncology (PSGO) has developed the following recommendations for diagnosis, preoperative assessment for surgical treatment, radiotherapy, systemic treatment, treatment of recurrent disease and post-treatment surveillance of endometrial carcinoma. The development process included, among other steps, the evaluation of the guidelines by external reviewers and the integration of the external reviewers' comments with the original content of the guidelines.
The strength of scientific evidence was defined in agreement with The Agency for Health Technology Assessment and Tariff System (AOTMiT) guidelines for scientific evidence classification [2] (Table 1). The grades of recommendation were based on the strength of evidence and the level of consensus of the PSGO development group, as described in Table 2. Any clinician intending to apply the PSGO guidelines is expected to perform a careful medical evaluation of the individual clinical circumstances to determine the best course of patient care and/or treatment.
For the first time in gynaecologic oncology, recommendations were developed according to standards set by the guideline evaluation tool Appraisal of Guidelines for Research and Evaluation (AGREE) II.
The strengths of the guidelines are that they are comprehensive and up to date. Their limitation is the uncertainty as to how many of the authors applying the AGREE II tool were methodological experts.
Recommendations indicate an urgent need for changes in the system of financing health services in Poland.
Molecular Classification
The latest morphological and clinical studies have shown that both the traditional histological classification [3][4][5][6][7][8] and the two pathogenic types of endometrial carcinoma according to Bokhman [9] do not allow for a reliable assessment of prognosis and response to treatment. Obtaining this type of information is possible thanks to the molecular classification of endometrial cancer introduced in 2013 within The Cancer Genome Atlas (TCGA) project, sponsored by the National Cancer Institute (NCI) and The National Human Genome Research Institute. The aforementioned classification identifies four molecular subtypes of endometrial cancer (POLE, MMRd/MSI-H, TP53-mutated (abn) and TP53wt-NSMP) differing in mutation profile, immunogenicity and prognosis [10] (strength of evidence IVA) and requiring different management [10][11][12] (strength of evidence IVD, V and V) (Table 4).
Endometrial Biopsy
The biopsy of the endometrium (abrasion, aspiration biopsy or hysteroscopic biopsy) is recommended for: (1) women with postmenopausal bleeding whose endometrial thickness is >3 mm [13] (strength of evidence IIIA) (grade of recommendation 2A); (2) women with an adult granulosa cell tumour undergoing fertility-sparing treatment (with preservation of the uterus) [14] (strength of evidence IVA) (grade of recommendation 2B); (3) women with bleeding during tamoxifen treatment lasting up to 5 years [15] (strength of evidence IVA) (grade of recommendation 2B).
There are no data regarding the safety of such an approach for patients using tamoxifen for up to 10 years. Tamoxifen therapy beyond 5 years significantly increases the risk of endometrial cancer (HR-1.74) [16] (strength of evidence IIA). Caution is recommended in this group of women (transvaginal ultrasound evaluation of the uterus every 6 months) (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Regardless of the duration of tamoxifen use, special attention should be paid to menopausal women who can be asymptomatic due to stenosis of the cervical canal (grade of recommendation 1).
An endometrial thickness cut-off with acceptable sensitivity and specificity for biopsy in asymptomatic women of the general population (pre- and postmenopausal) and in those with an increased risk of endometrial cancer (PCOS, obesity, nulliparity or late menopause) * has not been established. Thus, an individual approach is recommended (expert opinion) (strength of evidence V) (grade of recommendation 2B). * Note: The management of endometrial hyperplasia (a precancerous condition) is a separate subject not covered by this recommendation.
The sensitivity of endometrial biopsy (cumulative value for abrasion and aspiration biopsy) is 89% and the false-negative rate is 10% [17] (strength of evidence IVA) [18] (strength of evidence IIIA).
The sensitivity of an adequate aspiration biopsy is significantly higher: 91% for premenopausal and 99.6% for postmenopausal women [19] (strength of evidence IIIA). However, tissue material adequate for histopathological assessment is obtained with this method in only 85% of samples [18] (strength of evidence IIIA).
For the reasons mentioned above, there is no preferred method of endometrial biopsy (grade of recommendation 2A).
The existing scientific evidence indicates (1) The advantage of the new molecular classification over the former based solely on the type of endometrial cancer and grading in making therapeutic decisions at the beginning of treatment [20][21][22] (strength of evidence IIIE, IIIA and IIA); (2) A significantly higher concordance between pre-and postoperative results for the new molecular classification based on the ProMisE classifier and/or sequencing compared to previously considered features (type and grading) [23][24][25][26][27][28] (strength of evidence IIIB, IIIB, IIID, IIID, IIID, IIIB), (3) High sensitivity and specificity of the ProMisE classifier [29,30] (strength of evidence IIID and IIIC), which is potentially realisable in most pathomorphology units. It is recommended that molecular classification be defined (at least a basic variant of ProMisE) at the initial diagnosis of endometrial cancer (biopsy), and if this is impossible, it should be performed at the latest before the decision on adjuvant treatment (grade of recommendation 2A).
CAUTION: Every woman with endometrial cancer for whom fertility-sparing treatment is being considered must obligatorily be subject to molecular classification (at least a basic variant of ProMisE). A similar requirement applies to high-risk patients with comorbidities who do not qualify for surgical treatment (grade of recommendation 2A).
A detailed description of the ProMisE classifier and comprehensive endometrial carcinoma diagnosis algorithm NGS+IHC is included in File S1.
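For orientation only, the sketch below encodes the published ProMisE triage order (POLE first, then MMR, then p53). The boolean inputs are hypothetical stand-ins for the sequencing and immunohistochemistry results described in File S1, and the function is illustrative rather than a clinical tool or the PSGO algorithm itself.

```python
def promise_subtype(pole_mutated: bool, mmr_deficient: bool, p53_abnormal: bool) -> str:
    """Assign a molecular subtype following the published ProMisE triage order.

    The fixed order (POLE, then MMR, then p53) matters: tumours positive for
    more than one marker are assigned to the first positive test.
    """
    if pole_mutated:       # pathogenic POLE exonuclease-domain mutation (sequencing)
        return "POLE"
    if mmr_deficient:      # loss of MMR protein expression on IHC
        return "MMRd/MSI-H"
    if p53_abnormal:       # aberrant p53 staining on IHC (TP53-mutated surrogate)
        return "TP53abn"
    return "TP53wt-NSMP"   # no specific molecular profile
```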
The centres where the diagnostic minimum (ProMisE molecular classification) cannot be performed may, in the transitional period, use existing criteria: type and grading (does not apply to the decision on fertility-sparing treatment and management of nonoperable cases) (grade of recommendation 3).
If molecular classification was not performed at the time of the biopsy, it must be performed for the final report (at least a basic variant of ProMisE) (grade of recommendation 2B).
Lymphovascular space invasion (LVSI) is a very important predictive factor indicating the individual risk of recurrence and a decisive factor in the choice of adjuvant therapy [31] (strength of evidence IIC).
The semiquantitative LVSI assessment system, which distinguishes focal and substantial * LVSI depending on the number of vessels involved, confirmed the high agreement of the results [24] (strength of evidence IIC).
* Substantial LVSI signifies the involvement of more than five lymphovascular spaces (LVSI) and does not include LVSI within the tumour and in the immediate vicinity of the tumour margin [32] (strength of evidence IA).
The presence of substantial LVSI [32][33][34] (strength of evidence IB, IIIA and IIIE) is both predictive and prognostic; therefore, when LVSI is present, the final histopathological result should indicate whether it is focal or substantial (grade of recommendation 1).
A detailed description of all clinically necessary elements of the histopathological report is included in File S2.
Imaging Prior to Treatment Decision
The best method of assessing the local advancement of endometrial cancer (the depth of the myometrial invasion and infiltration of the cervical stroma-pT2) is magnetic resonance imaging (MRI) with contrast [35,36] (strength of evidence V and IIIA). Expert ultrasound has a diagnostic value comparable to MRI in the assessment of myometrial infiltration but is significantly worse in the assessment of the T2 feature [35] (strength of evidence V). Computed tomography (CT) is only useful in assessing the spread of cancer beyond the pelvis. Radiological assessment of the pelvis by CT is inferior to MRI and expert ultrasound [36] (strength of evidence IIID).
Therefore, before deciding on the sequence of endometrial cancer treatment, clinical and radiological staging should be performed based on gynaecological examination, pelvic MRI and CT of the abdomen and the chest (grade of recommendation 2B).
In justified cases, expert ultrasound can replace magnetic resonance imaging in the assessment of changes in the pelvis (grade of recommendation 2B).
FIGO staging for endometrial carcinoma is shown in Table 5 [37].

Standard surgery is a simple total hysterectomy with bilateral salpingo-oophorectomy (BSO) **; peritoneal fluid sampling for cytological examination is not recommended [37,38] (strength of evidence V and IIIA) (grade of recommendation 2A).
As minimally invasive surgery (total laparoscopic hysterectomy (TLH) and total robotic hysterectomy (TRH)) does not compromise the prognosis and has a significant advantage in perioperative and postoperative outcomes over open surgery [39,40] (strength of evidence IIIA and IIIA), it is recommended where possible (grade of recommendation 2A).
Modified radical or radical hysterectomy (i.e., hysterectomy with a vaginal margin and/or partial/total parametrial resection) increases the number of complications and does not improve results [38] (strength of evidence IIIA). Therefore, performing such procedures and claiming them for medical reimbursement are not recommended (grade of recommendation 2A).
Exceptions are as follows: (1) Sparing treatment in women who want to preserve fertility and who have met the criteria * and achieved complete clinicopathological remission after hormone therapy [41] (strength of evidence IIIA) (grade of recommendation 2A). (2) No oophorectomy in women < 45 years old who have met the criteria ** [42][43][44] (strength of evidence IIIA, IIIA and IIIA) (grade of recommendation 2A).
* Criteria: no myometrial invasion (MRI with contrast) and exclusion of metastatic disease (CT of the abdomen and chest); bioptate G1 TP53wt or every G when POLE [45] (strength of evidence IIIA); and no contraindications for hormonal treatment and/or pregnancy (including age under 45) [41,46,47] (strength of evidence V, IIIA and IIIE) (a detailed algorithm of how to proceed is included in File S3: fertility-sparing treatment). ** Criteria: age < 45; FIGO I/II-necessary exclusion of FIGO IIIC by systematic lymphadenectomy or sentinel lymph node procedure in the group at high risk of metastases in radiologically negative lymph nodes (see indications for lymphadenectomy); and high risk of ovarian cancer excluded such as for BRCA1/2 mutation carriers or hereditary nonpolyposis colorectal cancer (HNPCC) (Lynch syndrome).
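As an illustration of how the * criteria combine, the following hedged sketch restates them as a single boolean check; the parameter names are hypothetical, and the function is not a substitute for the detailed algorithm in File S3.

```python
def eligible_for_fertility_sparing(age: int,
                                   myometrial_invasion: bool,
                                   metastases_excluded: bool,
                                   grade: int,
                                   tp53_wildtype: bool,
                                   pole_mutated: bool,
                                   hormonal_contraindications: bool) -> bool:
    """Boolean restatement of the * criteria above (illustrative only)."""
    # Bioptate G1 TP53wt, or any grade when the tumour is POLE-mutated.
    molecular_ok = pole_mutated or (grade == 1 and tp53_wildtype)
    return (age < 45                          # part of "no contraindications"
            and not myometrial_invasion       # confirmed on contrast-enhanced MRI
            and metastases_excluded           # CT of the abdomen and chest
            and molecular_ok
            and not hormonal_contraindications)
```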
A meta-analysis based on the results of a systematic review of randomised trials showed that the removal of lymph nodes in radiological FIGO I patients, regardless of risk factors (histological type and grade), does not affect overall survival or time-to-recurrence; it is, however, associated with a significant number of complications [50] (strength of evidence IA).
Therefore, lymphadenectomy should be considered in patients with a substantial * (high-intermediate and high) risk of metastases in radiologically negative lymph nodes (grade of recommendation 1).
The goal of lymphadenectomy is to rule out the highly probable FIGO IIIC in this group of patients. The potential up-stage change is of prognostic significance [37] (strength of evidence V) and influences the choice of adjuvant treatment (see FIGO I/II adjuvant treatment vs. FIGO III adjuvant treatment for details).
Because the new molecular classification better defines the risk groups of metastases to the lymphatic system [20][21][22] (strength of evidence IIIE, IIIA and IIA) and there is a significantly greater risk of error in making decisions about lymphadenectomy based on the existing criteria (histopathological type, grading and MI) [23][24][25][26][27][28] (strength of evidence IIIC-IIID), before deciding to perform a lymphadenectomy in patients with radiological FIGO I/II, it is recommended to determine the risk of metastasis based on molecular criteria (grade of recommendation 2A) or, if this is not yet possible, conditionally based on the existing pathological and radiological features (grade of recommendation 3).

* Significant risk factors for metastases to the lymphatic system (indications for lymphadenectomy) are as follows:

Furthermore, in the group of patients at high risk of metastasis, lymphadenectomy should include the removal of radiologically negative pelvic and para-aortic lymph nodes up to the left renal vein because it increases overall survival [53] (strength of evidence IIIE) (grade of recommendation 2A).
The sentinel lymph node procedure can be considered to assess the status of the nodes (pN) [54,55] (strength of evidence IIIB and IIIB) (grade of recommendation 2A).
Note: for the sentinel node procedure, pathomorphological ultrastaging of the excised lymph nodes is obligatory [55] (strength of evidence IIIB) (grade of recommendation 2A).
Greater Omentum
Infracolic omentectomy should be performed only in patients with serous or undifferentiated endometrial carcinoma due to the increased risk of omental metastasis that occurs in these histopathological types [56] (strength of evidence IIIA) (grade of recommendation 2A).
Adjuvant Treatment (Post-Surgery Patients with No Residual Disease R0)
Radiotherapy is the method of choice [57] (strength of evidence IIA). Chemotherapy is not recommended as the adjuvant treatment of FIGO I/II endometrial cancer (this also applies to the non-endometroid types, including serous carcinoma) [58] (strength of evidence IIIA) (grade of recommendation 1).
The decision to proceed with adjuvant radiotherapy should be made after determining the individual risk of recurrence based on molecular type, grading, LVSI and MI [59] (strength of evidence V).
For low-risk patients, observation is recommended [60] (strength of evidence IIA) (grade of recommendation 1).
For patients at high risk, brachytherapy (BT) and external beam radiation therapy (EBRT) are recommended (strength of evidence V) (grade of recommendation 1).
Clinicoradiological FIGO Stage I/II (Inoperable-Not Suitable for Surgery Due to Health Issues)
There is a lack of randomised studies comparing different methods of radiotherapy (EBRT/BT) in the radical treatment of inoperable FIGO stage I/II endometrial cancer.
Retrospective cohort studies have shown that the use of BT HDR in the radical treatment of inoperable endometrial cancer improves overall survival (OS) and progression-free survival (PFS) time [63] (strength of evidence IIIE). The prognosis of patients undergoing independent EBRT (without BT) may be worse [64] (strength of evidence IIIE).
Other studies suggest that adding EBRT to BT does not improve results [65,66] (strength of evidence IIIE, IIIE) and significantly increases the toxicity of treatment [66] (strength of evidence IIIE). However, these are retrospective studies assessing the treatment of patients with various doses at a time when modern irradiation techniques were not used.
There is a need to conduct prospective randomised trials comparing standalone BT, standalone EBRT and combo therapies: BT and EBRT.
Until an unequivocal answer is obtained as to which option is the most favourable, the safest treatment of choice is the BT HDR variant, supplemented with EBRT (using modern irradiation techniques) in the group at high risk of lymph node metastases.
Therefore, in patients who are not eligible for surgery (poor general condition or lack of consent to surgery) with significant risk factors for metastasis to the lymph nodes, the following are recommended:

In other cases, only BT should be used (expert opinion) (strength of evidence V) (grade of recommendation 2B).
It is recommended that the intensity-modulated radiation therapy (IMRT)/volumetric modulated arc therapy (VMAT) technique or conformal radiotherapy be used in the area of the reproductive organs and lymph nodes. HDR brachytherapy should be planned based on CT or MRI performed after the insertion of the applicator (expert opinion) (strength of evidence V) (grade of recommendation 2B).
FIGO IIIB, IVA-Infiltration of Structures Adjacent to the Uterus: Parametrium, Bowel and Bladder

Cytoreductive surgery in radiologically advanced endometrial cancer (FIGO IIIB and IVA) is allowed only when the patient's general condition is good (the patient qualifies for major surgery) and when the operator can perform complete cytoreduction (lack of residual disease R0-microscopically negative margins of resection) [67] (strength of evidence IIID) (grade of recommendation 2A).
If the disease extends beyond the uterus but does not exceed the pelvic boundaries (rectal/sigmoid infiltration and/or * parametrium), the recommended method of surgical treatment is en bloc cancer resection with reconstruction of the gastrointestinal tract or * radical hysterectomy (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Systematic lymphadenectomy, in these cases, is not recommended [49,50] (strength of evidence IIA and IA) (grade of recommendation 3).
FIGO IIIC-Pelvic and/or Para-Aortic Lymph Nodes Radiologically Suspected of Metastasis
Simple TAH with BSO is recommended without peritoneal fluid sampling for cytological examination and with a biopsy of the suspected lymph nodes or, where possible, a selective lymphadenectomy (authors' opinion) (strength of evidence V) (grade of recommendation 2B).
FIGO III Adjuvant Treatment (Post-Surgery with No Residual Disease R0)
CAUTION: This applies to preoperative apparent FIGO I/II cases in which FIGO IIIA or IIIC was diagnosed postoperatively as a result of surgical staging and successfully resected cases of operable FIGO IIIB.
Radiochemotherapy is a method of choice (operable FIGO IIIA, B and C endometrial carcinoma, every histopathological type) [68,69] (strength of evidence IIIA and IIA) (grade of recommendation 1).
The recommended sequence of radiochemotherapy is two cycles of cisplatin with EBRT followed by four cycles of carboplatin with paclitaxel [69] (strength of evidence IIA) (grade of recommendation 1).
It is allowed to reverse the sequence of the treatment regimen: four cycles of carboplatin with paclitaxel and then two cycles of cisplatin with EBRT (expert opinion-strength of evidence V) (grade of recommendation 2B). In POLE and MMRd endometrial cancer subtypes, EBRT alone is recommended because the addition of chemotherapy (CHT) before and during irradiation is of no benefit in this group of patients [69] (strength of evidence IIA) (grade of recommendation 1).
Clinical and Radiological FIGO Stage IIIA/B/C/IVA (Inoperable or Unresectable Locally Advanced Cancer)
In patients with locally advanced cancer without distant metastases (note: M1-FIGO IVB: the presence of metastases outside the pelvis or in the nonregional lymph node) who are not eligible for surgery (poor general condition, no consent to surgery and/or complete cancer resection is impossible), the method of choice is EBRT including uterus and lesions in the pelvis depending on their location (parametrium, adnexae and pelvic lymph nodes) and/or metastatic para-aortic lymph nodes combined with BT (uterus) (expert opinion) (strength of evidence V) (grade of recommendation 2B).
It is recommended that the IMRT/VMAT technique or conformal radiotherapy be used in the area of reproductive organs and pelvic lymph nodes. HDR BT should be planned based on CT or MRI performed after the insertion of the applicator. In cases where radical treatment is not feasible, the method of choice is systemic treatment with or without palliative radiotherapy (expert opinion) (strength of evidence V) (grade of recommendation 2B). The type of therapy should be selected individually considering the histological type, receptor status and/or molecular profile.
Variants of Systemic Treatment

Hormonotherapy
For patients with low-grade endometrioid carcinoma, it is recommended to determine the expression of E/P receptors (receptor status) because hormone therapy is the preferred option in this group (the objective response rate (ORR) is 21.6%: ER+ 26.5% and PR+ 35.5%) (does not apply to cases with rapid disease progression) [71] (strength of evidence IIIA) (grade of recommendation 2A).
Progestogens: megestrol acetate at a dose of 160 mg/day or medroxyprogesterone at a dose of 200 mg/day are recommended. Progestogens can be alternated with tamoxifen [70,71] (strength of evidence IIIA and IIIA) (grade of recommendation 2A).
The objective response rate (ORR) after the use of aromatase inhibitors is low (7%), but due to the high (44%) clinical benefit (stabilisation plus response) [77] (strength of evidence IID), their use may be considered in selected cases (grade of recommendation 2B).
First-Line Chemotherapy
For patients with high-grade endometrioid carcinoma and non-endometrioid carcinoma (serous, clear cell and carcinosarcoma), a carboplatin and paclitaxel regimen is recommended [72] (strength of evidence IIA) (grade of recommendation 1).
Trastuzumab in Serous Carcinoma
In a randomised phase II trial [78] (strength of evidence IIA), patients with advanced/metastatic (FIGO III-IV) or recurrent HER2-positive serous endometrial cancer received carboplatin and paclitaxel with or without trastuzumab and continued treatment with trastuzumab until disease progression or unacceptable toxicity.
A particular benefit of trastuzumab was observed in the first line of treatment with PFS of 17.9 vs. 9.3 months (HR 0.40, 90% CI 0.20 to 0.80; p = 0.013) [78] (strength of evidence IIA) and with a median OS not reached in the trastuzumab arm compared to 24.4 months in the control group (HR 0.49, 90% CI 0.25 to 0.97; p = 0.041) [79] (strength of evidence IIA). Adding trastuzumab to chemotherapy did not increase the toxicity of the treatment. Therefore, in patients with advanced/metastatic or recurrent serous cancer, it is recommended to determine the status of the HER2 receptor because the addition of trastuzumab to chemotherapy (carboplatin and paclitaxel) is the most favourable (recommended) therapeutic option in the group of HER2-positive patients (grade of recommendation 1).
The standards for determining the status of the HER2 receptor are included in (File S4).
Second-Line Chemotherapy
Based on the results of a randomised phase III trial [74] (strength of evidence IIA) indicating a significant advantage of immunotherapy (pembrolizumab plus lenvatinib) over chemotherapy (paclitaxel or doxorubicin), with a reduction in the risk of recurrence by 46% (HR of 0.54) and in the risk of death by 38% (HR of 0.62), it is recommended to use chemotherapy only in recurrent sarcoma (grade of recommendation 1).
In other histological types, the use of chemotherapy (paclitaxel or doxorubicin) may only take place in clinically justified situations or when there are limitations in the availability of immunotherapy (grade of recommendation 3).
When choosing chemotherapy, the patient should be informed of a significantly worse prognosis (expert opinion) (grade of recommendation 2B).
Immunotherapy
In the entire population of patients with recurrent or advanced endometrial cancer, regardless of MMR/MSI status, the use of a combination of a PD-1 inhibitor (pembrolizumab) with a multitargeted tyrosine kinase inhibitor against VEGFR1, VEGFR2 and VEGFR3 (lenvatinib) was superior to chemotherapy (reduction in the risk of recurrence by 46% (HR of 0.54) and in the risk of death by 38% (HR of 0.62)). The objective response rate (ORR) was 32%. The rate of serious adverse events was 89%, and 33% of patients discontinued therapy [74] (strength of evidence IIA).
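The percentages and hazard ratios quoted above are related by a simple identity: a hazard ratio HR corresponds to a (1 - HR) x 100% reduction in the hazard. A minimal check of the figures in the text:

```python
# A hazard ratio HR corresponds to a (1 - HR) * 100% reduction in the hazard.
for endpoint, hr in [("recurrence", 0.54), ("death", 0.62)]:
    print(f"HR {hr} for {endpoint} -> {(1 - hr) * 100:.0f}% risk reduction")
# HR 0.54 -> 46%; HR 0.62 -> 38%, matching the figures quoted above.
```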
In a single-arm study [73] (strength of evidence IID) and in a phase II study [75] (strength of evidence IIC), monotherapy with PD-1 inhibitors (dostarlimab and pembrolizumab) showed high effectiveness in the treatment of patients with recurrent, advanced or metastatic endometrial cancer with mismatch repair deficiency (MMRd)/high microsatellite instability (MSI-H) that have progressed during or after platinum-based therapy.
Therefore, in patients with incomplete resection of locally advanced cancer (FIGO III-IVA, R2) or disseminated cancer (FIGO IVB) or with unresectable recurrence who progressed after platinum-based chemotherapy (at least one cycle), the treatment of choice is immunotherapy [73][74][75] (strength of evidence IIA, IIC and IID) (grade of recommendation 1).
MMRd/MSI-H cancers may be treated with PD-1 inhibitors (dostarlimab or pembrolizumab) or with a combination of a PD-1 inhibitor (pembrolizumab) and lenvatinib (grade of recommendation 1). Due to the high percentage of treatment discontinuation observed with the combination therapy of pembrolizumab and lenvatinib, monotherapy with PD-1 inhibitors is the preferred option in this group of patients (expert opinion-strength of evidence V) (grade of recommendation 2B).
In the group of patients with a mismatch repair proficiency (MMRp), the treatment of choice is a combination of a PD-1 inhibitor (pembrolizumab) with lenvatinib (strength of evidence IIA) (grade of recommendation 1).
Follow-Up
There is no evidence that any post-treatment surveillance regimen for endometrial cancer improves patient survival time.
The randomised TOTEM study assessed the role of intensive (INT) vs. minimalist (MIN) surveillance after the treatment of patients with endometrioid carcinoma of the endometrium [80] (strength of evidence IIA). Patients were stratified into low- (FIGO IA, low-grade) and high-risk (≥FIGO IA, high-grade) groups for recurrence. There were no statistically significant differences in survival or time-to-recurrence between patients under minimal and intensive surveillance in either the low- or high-risk groups.
The TOTEM study looked only at endometrioid cancer and did not consider the molecular criteria for assessing the risk of disease recurrence, including TP53 status (Table 6).

Table 6. Risk groups for endometrial carcinoma (applies only to post-surgery patients with no residual disease (R0)); columns: Risk Group, Characteristics.

In the opinion of experts, patients with non-endometrioid carcinoma and those with endometrioid carcinoma in whom an abnormal/mutated P53 gene was detected should be subject to intensive surveillance (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Extrapolating the results of the TOTEM study [80] and expert suggestions, the recommended follow-up schedule adjusted to the current risk groups for recurrence [59] is described in Table 7 (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Lynch Syndrome

In this group, every 8th woman (5% of endometrial cancer cases) has a germline mutation in the MSH2, MLH1, MSH6 and PMS2 genes coding for DNA mismatch repair (MMR) proteins [81] (strength of evidence V) or EPCAM deletions (EPCAM-MSH2) and thus has Lynch syndrome [82] (strength of evidence IVA).
The lifetime risk of developing colorectal cancer in the female population is 4.7% [85] (strength of evidence V). In women with Lynch syndrome, the risk is on average 10 times higher and, depending on the mutation, it ranges between 10% and 45% [84,85] (strength of evidence IVA and V). The highest risk is for mutations in MLH1 (45%) and MSH2 (33%) and the lowest for mutations in MSH6 (26%) and PMS2 (10-12%) [86,87] (strength of evidence IIIB and IIIB).
The lifetime risk of developing endometrial cancer in the general population is 2.9% [85] (strength of evidence V). In women with Lynch syndrome, it is on average 10 times higher and, depending on the mutation, it ranges between 21% and 51% [85] (strength of evidence V). The highest risk is for carriers of the mutations in MSH2 (51%) and MSH6 (49%) and the lowest for the mutations in MLH1 (34%) and PMS2 (12-24%) [85][86][87][88] (strength of evidence V, IIIB, IIIB and IIIB).
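As a worked example of the relative risks quoted above, the snippet below divides the gene-specific lifetime risks by the 2.9% general-population baseline; using the midpoint of the PMS2 12-24% range is an assumption made for this illustration.

```python
baseline_pct = 2.9  # lifetime endometrial cancer risk, general female population (%)

# Gene-specific lifetime risks quoted in the text; PMS2 uses the midpoint
# of its 12-24% range (an assumption made for this illustration).
lifetime_risk_pct = {"MSH2": 51, "MSH6": 49, "MLH1": 34, "PMS2": 18}

for gene, risk in lifetime_risk_pct.items():
    print(f"{gene}: {risk}% lifetime risk, ~{risk / baseline_pct:.1f}x the baseline")
# MSH2 ~17.6x, MSH6 ~16.9x, MLH1 ~11.7x, PMS2 ~6.2x
```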
Hereditary endometrial cancer is diagnosed in the fourth and fifth decades of life. Carriers of the MSH2 mutation are affected the earliest (mean age 47) and MSH6 mutation carriers the latest (mean age 53) [85] (strength of evidence V).
Lynch syndrome also increases the risk of developing ovarian cancer (3%-20% compared to 1.3% risk in the general population) [89] (strength of evidence IIIB).
Genetic testing for Lynch syndrome should be performed in all cases of endometrial cancer with MMRd/MSI-H, regardless of age and histological type [90] (strength of evidence IIID) (grade of recommendation 2A) and regardless of the Amsterdam Criteria (expert opinion) (strength of evidence V) (grade of recommendation 2B).
The Amsterdam Criteria II (screening for Lynch syndrome in the general population) are as follows:
• At least three relatives with a Lynch-associated cancer (colorectal, endometrial, small intestine, ureter and renal pelvis), verified pathologically;
• One is a first-degree relative of the other two;
• At least two successive generations affected;
• At least one relative diagnosed before the age of 50;
• Familial adenomatous polyposis has been ruled out;
• Tumours should be verified by pathologic examination [91] (strength of evidence V).
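Since the Amsterdam Criteria II are a conjunction of six conditions, they can be restated as a simple boolean check. The sketch below is illustrative only (the parameter names are hypothetical) and does not replace formal genetic counselling.

```python
def meets_amsterdam_ii(n_affected_relatives: int,
                       one_is_first_degree_of_other_two: bool,
                       n_successive_generations: int,
                       youngest_diagnosis_age: int,
                       fap_excluded: bool,
                       tumours_pathologically_verified: bool) -> bool:
    """All six conditions must hold simultaneously."""
    return (n_affected_relatives >= 3
            and one_is_first_degree_of_other_two
            and n_successive_generations >= 2
            and youngest_diagnosis_age < 50
            and fap_excluded
            and tumours_pathologically_verified)
```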
Other rare hereditary syndromes predisposing to endometrial cancer are described below.
Cowden Syndrome
PTEN germline mutations (<1%) increase the risk of developing endometrial cancer by 25% [72] (strength of evidence V). It does not affect the risk of ovarian cancer [92] (strength of evidence IIIA).
Hereditary Breast Cancer Side Specific (HBss), Breast and Ovarian Cancer (HBOC) and Ovarian Cancer (HOC) Syndrome
Germline mutations in BRCA1/2 are detected in 1% of endometrial cancers. Carriers of the BRCA1 germline mutation often show a loss of the second copy of the gene, which causes total biallelic inactivation of the BRCA1 protein; this is manifested by a deficiency in DNA double-strand break repair by homologous recombination (HRD) and susceptibility to treatment with platinum compounds and PARP inhibitors. In contrast, biallelic inactivation of BRCA2 is rare [93] (strength of evidence IVA).
Among carriers of germline mutations in the BRCA1/2 genes, the absolute risk of developing endometrial cancer up to 75 years of age is 3% (similar to the lifetime risk in the general population of 2.9% [76] (strength of evidence V)), and for the serous type, 1.1% [93] (strength of evidence IIIA).
Confirmation of mutations in MMR genes or EPCAM deletions (EPCAM-MSH2) (Lynch syndrome), in BRCA1/2 (HBss/HBOC/HOC) or in PTEN (Cowden syndrome) should result in genetic testing among relatives to identify carriers of the germline mutation.
In the case of Lynch syndrome, each family member with a confirmed mutation (regardless of gender) should undergo colonoscopy screening according to the Polish Society of Oncological Surgery (PSOS) recommendations (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Women (Lynch syndrome and Cowden syndrome) should be encouraged to have children early and be offered prophylactic hysterectomy after the completion of reproductive plans [86,87,92] (strength of evidence IIIB, IIIB and IIIA) (grade of recommendation 2A).
Before a prophylactic hysterectomy, the presence of endometrial, ovarian and colorectal cancer should be excluded. For this purpose, endometrial biopsy, transvaginal ultrasound of ovaries with the determination of serum CA125 and colonoscopy are recommended (expert opinion) (strength of evidence V) (grade of recommendation 2B).
Prophylactic oophorectomy and salpingectomy are recommended in Lynch syndrome [86,87] (strength of evidence IIIB and IIIB) and in the case of detection of a germline BRCA1/2 mutation (HBss/HBOC/HOC) [94] (strength of evidence IIIA) (grade of recommendation 2A).
In the case of a PTEN (Cowden syndrome) germline mutation, prophylactic oophorectomy and salpingectomy are not recommended [92] (strength of evidence IIIA) (grade of recommendation 2A).
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12041480/s1, File S1: ProMisE classifier and comprehensive endometrial carcinoma diagnosis algorithm NGS+IHC; File S2: Postoperative final report; File S3: Sparing treatment for women who want to preserve fertility; File S4: Carcinoma of the endometrium-Determination of gene and HER2 receptor status. References [95,96] are cited in the Supplementary Materials.
Novel Coronavirus in the Light of Current Knowledge: Facts and Unknowns
Enfeksiyon Hastalıkları Anabilim Dalı, Sivas Cumhuriyet Üniversitesi Uygulama ve Araştırma Hastanesi, Sivas, Türkiye. Corresponding author: Aynur Engin, MD, Enfeksiyon Hastalıkları Anabilim Dalı, Sivas Cumhuriyet Üniversitesi Uygulama ve Araştırma Hastanesi, Sivas, Türkiye. E-mail: aynurum2000@yahoo.com. Received/Accepted: April 07, 2020 / April 07, 2020. Conflict of interest: There is no conflict of interest.
INTRODUCTION
In Wuhan, China, a novel and alarmingly contagious primary atypical pneumonia broke out in December 2019. Soon after, the causative agent was identified as a novel coronavirus by China's health authorities 1 . A novel coronavirus (CoV) is a new strain of coronavirus that has not previously been described in humans. In the early period of the outbreak, the new virus was provisionally named 2019-nCoV. The new virus quickly infected thousands of people, especially in China. The clinical picture of 2019-nCoV infection ranges from asymptomatic infection to severe pneumonia with acute respiratory distress syndrome, septic shock and multi-organ failure, which can result in death.
The rapid spread of the disease caused public panic, and some conspiracy theories about the new virus were produced. Misinformation spread rapidly on the internet. For example, in the early period of the outbreak, one paper wrongly claimed that amino acid residues in four inserts in the spike glycoprotein unique to 2019-nCoV were similar to those in HIV-1 gp120 or HIV-1 Gag. These claims resulted in considerable public panic and controversy in the community. Biologists rapidly refuted this, emphasising that the supposed similarities are present in many viruses. Soon after, the authors withdrew the now-discredited study that linked the coronavirus to HIV 2 .
In this article, we review the epidemiology, virologic and clinical features, diagnosis, management, and prevention of 2019-nCoV infection in the light of current knowledge.
VIROLOGY
Coronaviruses are a family of viruses that infect a wide range of species, including humans, cattle, pigs, chickens, dogs, cats and wild animals. Coronaviruses are enveloped positive-stranded RNA viruses whose name derives from their characteristic crown-like appearance in electron micrographs 3 . A schematic structure of the coronavirus is shown in Figure 1 4 . The coronavirus subfamily is further classified into four genera: alpha, beta, gamma and delta coronaviruses. Human coronaviruses (HCoVs) were first described in the 1960s in patients with the common cold 5 . The human coronaviruses fall into two of these genera: alpha coronaviruses (HCoV-229E and HCoV-NL63) and beta coronaviruses (HCoV-HKU1, HCoV-OC43, MERS-CoV, and SARS-CoV). Four of the HCoVs (HCoV-229E, NL63, OC43, and HKU1) are endemic globally and account for 10% to 30% of upper respiratory tract infections in adults 6 . The coronavirus genome encodes four or five structural proteins: S, M, N, HE, and E. The S, M, N, and E proteins are encoded by HCoV-229E, HCoV-NL63, and the SARS coronavirus, whereas HCoV-OC43 and HCoV-HKU1 also include a fifth gene that encodes the HE protein 3 . The spike (S) protein projects through the viral envelope and forms the characteristic spikes in the coronavirus "crown".
In the early period of the outbreak, the new virus was called 2019-nCoV: 2019 for the year of detection, n for novel (meaning new) and CoV for coronavirus. However, 2019-nCoV was not the official name of the new virus, and some people referred to it as the Wuhan coronavirus. Later in the outbreak, on February 12, 2020, the novel coronavirus was renamed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease associated with the new virus is now referred to as COVID-19, which is short for "coronavirus disease 2019".
The new name of the virus, SARS-CoV-2, was introduced only recently, and at present the former name is better known to the public. Therefore, the former name of the virus, 2019-nCoV, is used more commonly in this article. Full-genome sequencing and phylogenetic analysis indicated that 2019-nCoV is a betacoronavirus in the same subgenus as the SARS virus, but in a different clade. The MERS virus, another betacoronavirus, is more distantly related. It was reported that 2019-nCoV might be able to bind to the angiotensin-converting enzyme 2 (ACE2) receptor in humans 7 . Wan et al. reported that the novel virus's receptor-binding motif, which directly contacts ACE2, is similar to that of SARS-CoV 8 .
Coronaviruses can be of zoonotic origin. For example, it is known that SARS-CoV was transmitted to humans from civet cats and that MERS-CoV is transmitted from camels to humans. 2019-nCoV is closely similar to bat coronaviruses, so it is possible that bats are the primary source. However, whether 2019-nCoV is transmitted directly from bats or through some other mechanism (an intermediate host) is unknown. During the 2019-nCoV outbreak, different animals were suggested as a potential source or intermediate host. In a study by Liu et al., coronaviruses were detected as potential pathogens of Malayan pangolins. The authors reported that although a high species variety of coronavirus was detected, SARS-CoV was the most widely distributed, and they suggested that Malayan pangolins could be a host with the potential to transmit the SARS coronavirus to humans 5 . Interestingly, on 7 February, Shen Yongyi and Xiao Lihua, who worked at the South China Agricultural University, described the pangolin as the potential source of 2019-nCoV on the basis of a genetic comparison of coronaviruses obtained from the animals and from infected people. They reported at a press conference that the genetic sequences of viruses isolated from these animals are 99% similar to that of the circulating virus. However, some scientists considered the research far from robust and not scientific evidence, because the evidence for the potential involvement of pangolins in the outbreak had not been formally published, other than in a university press release. Some scientists in Beijing also claimed that snakes were the origin of 2019-nCoV, but that theory was rejected by other researchers 9 . After all, we still do not know whether 2019-nCoV is transmitted directly from bats or by means of intermediate hosts.
EPIDEMIOLOGY
To date, there have been three major outbreaks caused by coronaviruses: SARS-CoV, MERS-CoV and finally SARS-CoV-2 (formerly 2019-nCoV). The SARS outbreak, which also originated in China, killed 774 people worldwide; globally, 8096 SARS cases were reported (based on data as of 31 December 2003), and the case fatality ratio of SARS-CoV infection was 9.6% 10,11,13 . On January 7, 2020, Chinese scientists isolated the new type of coronavirus, and 5 days later, on January 12, 2020, China shared the genetic sequence of the novel coronavirus for countries to use in developing specific diagnostic kits 1 .
The initial source of 2019-nCoV still remains unknown. Initially, Chinese health officials declared that the outbreak originated at the Huanan Seafood Market. This theory is possible but not yet substantiated. In this market, there were outdoor stalls selling fish and meat, some of it from wildlife; the market also sold live rabbits, snakes and other animals. Some patients had worked at or visited the market. Thus, the Huanan Seafood Market was blamed as the most probable index source of zoonotic 2019-nCoV infections. The Huanan Seafood Wholesale Market in Wuhan city was closed by officials on January 1, 2020 for environmental sanitation and disinfection 1 . However, as the outbreak progressed, it was determined that many laboratory-confirmed cases had no contact with this market. Human-to-human transmission has been confirmed in China and in other countries.
We do not have enough knowledge about how 2019-nCoV spreads; our existing knowledge is largely based on what is known about other coronaviruses 14 . Person-to-person spread commonly occurs between close contacts (around 1 meter) and is thought to occur largely via respiratory droplets, which can be produced when an infected person coughs or sneezes. It is also possible that a person can acquire 2019-nCoV by touching a surface or object carrying the virus and then touching their own mouth, nose or, possibly, eyes 14 . The WHO has reported that, according to recent reports, people infected with 2019-nCoV can also be infectious during the asymptomatic period 15 .
On 13 January 2020, the Ministry of Public Health, Thailand reported the first imported case of laboratory-confirmed novel coronavirus (2019-nCoV) from Wuhan, China. On 15 January 2020, the Ministry of Health, Labour and Welfare, Japan reported an imported case of laboratory-confirmed 2019-nCoV from Wuhan, China. Cases from different countries then began to be reported, and the number of cases increased steadily. On 23 January 2020, the first case of 2019-nCoV was reported in the United States. The first confirmed case in Australia was reported on January 25, 2020, and the first case in Canada on January 27, 2020. On 30 January 2020, the International Health Regulations Emergency Committee of the World Health Organization declared the outbreak a "public health emergency of international concern" 16 . The first death outside of China was reported in the Philippines on 2 February 2020. The first case in Europe was confirmed in France on January 25, 2020 17 .
As of today (23 February 2020), the WHO has announced 78,811 confirmed cases of COVID-19 globally 18 . Most of these cases (77,042) are in China, where 2445 deaths have occurred, and the case count has been rising daily. According to the report released by the WHO, 17 deaths had been reported outside China as of February 23, 2020: from the Philippines (1), Japan (1), Italy (2), France (1), Iran (5), the Republic of Korea (5) and other (the Diamond Princess, a cruise ship currently in Japanese territorial waters) (2). As of February 23, 2020, the situation reports of the World Health Organization and the number of confirmed COVID-19 cases reported through these reports are shown in Table 1 17 . A map of countries, territories or areas with reported confirmed cases of COVID-19 is shown in Figure 2 18 .
Although the figures are very close, the numbers of COVID-19 cases reported by the WHO and the European Centre for Disease Prevention and Control (ECDC) differ slightly. According to the ECDC, the countries, territories or areas with reported confirmed COVID-19 cases and deaths worldwide are shown in Table 2, which presents data as of 23 February 2020 19 .
As of 23 February 2020, 121 cases and three deaths had been reported in the EU/EEA and the UK (EU: European Union, EEA: European Economic Area and UK: the United Kingdom) 20 . The distribution of laboratory-confirmed cases of COVID-19 in the EU/EEA and the UK, as of 23 February 2020, is shown in Table 3.
As of 20 February 2020, Egypt was the first country to report a confirmed case on the African continent. At present, cases have been reported on 5 different continents (Asia, Europe, America, Australia and Africa). As of 23 February 2020, cases had been reported from 28 countries in addition to China. There are also 634 cases identified on a cruise ship currently in Japanese territorial waters 18 .
According to data reported on 14 February 2020, health care workers (HCWs) account for 1716 confirmed cases of COVID-19, including six deaths, in China 21 . One of the deceased HCWs was Dr Li Wenliang, who was among the first to try to warn about the coronavirus outbreak in late December; he died on 7 February 2020 after becoming infected with the new virus. Dr Li, an ophthalmologist in Wuhan, caught the virus from an infected patient 22 .
CLINICAL FEATURES
The incubation period of 2019-nCoV is thought to be within 14 days following exposure. Pneumonia appears to be the most serious manifestation of infection, characterised primarily by fever, cough, dyspnea and bilateral infiltrates on chest imaging. However, patients can present with a spectrum of disease ranging from mild respiratory illness, including a runny nose, sore throat, cough and fever, to severe disease requiring intensive care. Mild disease is common in healthy younger adults and children. Although we still need to learn more about how the new virus affects humans, so far older people and people with comorbid medical conditions (such as diabetes and heart disease) seem to be at greater risk of developing severe disease 15 . Asymptomatic infection is possible, but its frequency is unknown. Approximately 20 percent of confirmed patients have had critical illness. The overall case fatality rate is uncertain because the outbreak is still in progress; with current numbers, the fatality rate for COVID-19 is approximately 3.1%.
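As a check on the approximately 3.1% figure, a crude case fatality rate can be computed from the cumulative counts quoted earlier in this article; note that such a naive ratio is biased during an ongoing outbreak, as the text itself cautions.

```python
# Crude case fatality rate from the cumulative counts quoted in this article
# (situation as of 23 February 2020); a naive estimate, biased mid-outbreak.
confirmed = 78_811        # WHO-reported confirmed cases worldwide
deaths = 2_445 + 17       # deaths reported in China plus deaths outside China
print(f"Crude CFR = {deaths / confirmed:.1%}")   # ~3.1%
```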
In a study including 62 patients with laboratory-confirmed 2019-nCoV infection, fever (77%) and cough (81%) were found to be the most common symptoms. In that study, on admission, bilateral involvement on chest radiographs was found in 84%, leucopenia in 31% and lymphopenia in 42% of the patients. Characteristic chest computed tomography findings of infected patients on admission were bilateral or multiple lobular or subsegmental areas of consolidation, or bilateral ground-glass opacity 23 .
While the outbreak continues, human-to-human transmission has been confirmed in different countries 15 . At present, because COVID-19 has also been seen in some countries outside China, clinicians should keep this disease in mind for any person with severe acute respiratory infection and a history of travel to areas with presumed ongoing community transmission. Any person meeting the criteria for a suspected case should be tested for SARS-CoV-2, which can be detected by polymerase chain reaction in a reference laboratory.
The WHO has made many tests available to WHO regional offices and national laboratories, and these tests are being shipped to laboratories across all WHO regions. In addition, the CDC has developed a real-time reverse transcription-polymerase chain reaction (rRT-PCR) test that can detect 2019-nCoV in respiratory samples from clinical specimens 26 .
For a definitive diagnosis, when possible, specimens from both the lower (i.e., bronchoalveolar lavage, endotracheal aspirate or expectorated sputum) and upper respiratory tracts (i.e., nasopharyngeal swab, oropharyngeal swab, nasopharyngeal aspirate or nasal wash) should be collected 27 . Induction of sputum is not indicated, and routine viral culture is not recommended for safety reasons 28 . Although there are some studies, the efficacy and safety of candidate drugs for 2019-nCoV still need to be confirmed by clinical trials.
PREVENTION
There is no approved vaccine for COVID-19 yet; therefore, prevention of the disease is very important both in community settings and in healthcare settings. Individuals with suspected infection in the community should wear a medical mask and seek care at a medical centre as soon as possible. The optimal way to avoid infection after having potentially touched a contaminated surface is still to avoid touching your face with your hands and to wash your hands with soap and water often. HCWs in contact with COVID-19 patients should use appropriate personal protective equipment such as eye protection, a face shield, a medical mask, a long-sleeved gown and gloves.
As respiratory hygiene measures:
- All patients should cover their nose and mouth with a tissue or their elbow when coughing or sneezing.
- Everyone should perform proper hand hygiene after contact with respiratory secretions.
- Patients with suspected 2019-nCoV infection are recommended to wear a medical mask while waiting in public areas or in cohort rooms.
HCWs should perform proper hand hygiene before touching a patient, before any clean or aseptic procedure, after exposure to body fluids, after touching a patient and after touching a patient's surroundings. Alcohol-based hand rubs are preferred if hands are not visibly soiled; otherwise, hands should be washed with soap and water.
Surfaces with which the patient is in contact should be routinely cleaned and disinfected. Generally, fully cleaning environmental surfaces with water and detergent and applying commonly used hospital-level disinfectants, such as sodium hypochlorite, is an effective and adequate procedure.
Everyone should apply contact and droplet precautions before entering the room where suspected or confirmed nCoV patients are admitted. Patients should be placed in sufficiently ventilated single rooms; when a single room is not available, the patient should be assigned to a cohort room. At least one meter of space should be maintained between the beds in a cohort isolation room. The doors to the isolation rooms should be kept closed, and there should be sufficient external ventilation. HCWs should use a medical mask.
Tracheal intubation, bronchial suctioning, non-invasive ventilation, tracheotomy, cardiopulmonary resuscitation, sputum induction, manual ventilation before intubation and bronchoscopy are aerosol-generating procedures. These procedures have been associated with an increased risk of transmission of coronaviruses such as SARS-CoV and MERS-CoV, so airborne precautions should be taken in these situations. HCWs performing aerosol-generating procedures should use a particulate respirator such as a US National Institute for Occupational Safety and Health (NIOSH)-certified N95, a European Union (EU) standard FFP2, or an equivalent.
Medical equipment should be either single-use and disposable or dedicated equipment. If equipment needs to be shared among patients, it should be cleaned and disinfected between uses for each individual patient (e.g., by using 70% ethyl alcohol).
Patients should stay in their rooms unless it is medically necessary to leave. If transport is required, predetermined transport routes should be used to minimise exposure for other people.
All samples collected for laboratory studies should be regarded as potentially infectious. Specimens for transport should be placed in leak-proof specimen bags (i.e., secondary containers) that have a separate sealable pocket for the specimen (i.e., a plastic biohazard specimen bag). Pneumatic-tube systems should not be used to transport specimens.
The ECDC has announced that, although there is so far no evidence of airborne transmission, it recommends a cautious approach due to the lack of studies excluding this mode of transmission 34 .
Staff providing care to confirmed 2019-nCoV cases, and staff who were exposed to cases before the implementation of infection control measures, should be alert for fever and any respiratory symptoms in the 14 days following the last exposure to a confirmed case.
The duration of infectivity of 2019-nCoV patients remains unknown; however, severely ill patients can shed 2019-nCoV for prolonged periods. Confirmed 2019-nCoV cases should remain in isolation until recovery from the clinical symptoms of 2019-nCoV, and viral detection tests should assist in the decision on when to discontinue additional precautions for hospitalised patients.
In conclusion, the 2019-nCoV outbreak has become an important problem of our time. Unfortunately, the outbreak is still in progress worldwide, and it is hard to say how many people will be affected by it in the future. The situation with the new coronavirus is changing rapidly, and new information about 2019-nCoV continues to emerge; this should be kept in mind when interpreting current knowledge.
Ultracold ion-atom experiments: cooling, chemistry, and quantum effects
Experimental setups that study laser-cooled ions immersed in baths of ultracold atoms merge the two exciting and well-established fields of quantum gases and trapped ions. These experiments benefit both from the exquisite read-out and control of the few-body ion systems as well as the many-body aspects, tunable interactions, and ultracold temperatures of the atoms. However, combining the two leads to challenges both in the experimental design and the physics that can be studied. Nevertheless, these systems have provided insights into ion-atom collisions, buffer gas cooling of ions and quantum effects in the ion-atom interaction. This makes them promising candidates for ultracold quantum chemistry studies, creation of cold molecular ions for spectroscopy and precision measurements, and as test beds for quantum simulation of charged impurity physics. In this review we aim to provide an experimental account of recent progress and introduce the experimental setup and techniques that enabled the observation of quantum effects.
Over the last decade, a new field of cold atomic physics has emerged in which single cold ions are immersed in ultracold atomic gases. These experiments were motivated by prospects of buffer gas cooling trapped ions, studying quantum chemistry between atoms and ions and investigating cold collisions between the particles. Moreover, they offer prospects for simulating quantum many-body physics and for applications in quantum information processing. In this review, we aim to give an up-to-date experimental account of where the field currently stands and where it may go, with a strong emphasis on the experimental techniques and results that led to the recent breakthroughs. Evidently, the merger of neutral and charged particles is by no means new in atomic and molecular physics, and a complete review is not our aim here. Instead, we make a clear distinction in terms of energy: we only consider the coldest systems available. In practical terms, this means that we omit the beautiful work done with atoms in magneto-optical traps and focus instead on atoms in optical dipole traps with temperatures in the (sub-)µK regime. These ultracold atoms are typically obtained through evaporative cooling, as their temperatures are sub-Doppler, and they are in or close to quantum degeneracy. Furthermore, we focus on one or a few ions trapped in a many-body bath of atoms.
The ultracold ion-atom systems we consider can be illustrated by Fig. 1, which shows the relevant energy scales of the Yb + -Li system studied in Amsterdam 1 . The collision energy between the atom and ion in this system was observed to reach the quantum, or s-wave, regime, in which the collisional angular momentum is quantized and only s-wave collisions are allowed. In this regime, the description of interacting atoms and ions requires quantum mechanics, and quantum effects can be observed. Each of the separate systems of atoms and ions has its own quantum limit. For the ion, the quantization of motion in its trap occurs once its energy drops below that of a single quantum of motion, ℏω, with ω the ion trap frequency and ℏ the reduced Planck constant. For the atoms, the crossover occurs once the system turns into a quantum gas. Then the atom-atom collisions are described by a single parameter, the s-wave scattering length, and the collisions require an explicit quantum mechanical treatment.

Figure 1: Possibilities of ultracold ion-atom mixtures when cooling further towards the quantum regime. The equi-collision-energy lines correspond to the combination 171 Yb + - 6 Li and the black dot represents the energy reported in Ref. (Feldker et al., 2020).

Cooling the atoms even further, such that the interparticle spacing is smaller than the de Broglie wavelength, quantum degeneracy is reached. In the case of bosons a Bose-Einstein condensate is formed, and for fermions a degenerate Fermi gas. The 'deep quantum regime', in which the atoms, the ions and their interactions are all in the quantum regime, has not been reached in a sustained manner and remains an outstanding goal for the future, with applications in e.g. quantum impurity physics.
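To make the energy scales in Fig. 1 concrete, the short sketch below converts the energy of a single motional quantum into a temperature for an assumed, illustrative trap frequency (the value is not taken from a specific experiment).

```python
import math

# Convert one motional quantum (hbar * omega) into a temperature scale.
# The trap frequency below is an assumed, illustrative value.
hbar = 1.054571817e-34          # J s
k_B = 1.380649e-23              # J / K

omega = 2 * math.pi * 500e3     # assumed ion trap frequency (rad/s)
T_quantum = hbar * omega / k_B
print(f"hbar*omega / k_B = {T_quantum * 1e6:.0f} uK")   # ~24 uK
```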
Since the onset, the holy grail of the field has been to observe quantum effects in these hybrid ion-atom systems and most of the mentioned applications benefit from or require reaching the so-called quantum limit (see Fig. 1). However, efforts to reach this regime have long been hindered and only recently have two groups reported observations of quantum effects in a trapped ion interacting with an ultracold cloud of atoms. In the first (Feldker et al., 2020), the collision energy of the particles was determined to be on the order of the so-called s-wave regime, in which the angular momentum in ion-atom collisions freezes out, and spectral features in ion-atom spin exchange allowed for a first estimate of ion-atom s-wave scattering lengths.
In the second, Feshbach resonances between atoms and trapped ions were observed for the first time (Weckesser et al., 2021b). Feshbach resonances are ubiquitous in ultracold neutral atom systems where they are used to tune the interaction between atoms, for the creation of ultracold molecules and in studies of quantum many-body physics (Chin et al., 2010). The observation of Feshbach resonances in ion-atom systems is important for extending these techniques to neutral-charged mixtures with exciting possibilities for e.g. quantum impurity physics, quantum chemistry and quantum simulation.
Here, we start by providing a detailed review of the most important experimental techniques used for the preparation of and measurements on ion-atom systems, followed by a discussion of the most recent experimental results. In Sec. 2, we introduce the ion-atom interaction and the collisions that play a role in ion-atom mixtures. In Sec. 3, we focus on the experimental techniques for the preparation and detection of the ion and the ultracold quantum gas, before discussing the ion-atom mixtures available today and the challenges that come with combining the two. This sets the stage for discussing the latest experiments with hybrid ion-atom systems, focusing especially on buffer gas cooling, chemistry, quantum effects and ion transport, in Sec. 4. We end the review with a short outlook on what is next.
Ion-Atom Interactions and Collisions
The opportunities offered by hybrid ion-atom systems depend greatly on the understanding and control of the interactions at play between the ion and the gas of atoms. The charge of the ion induces a dipole moment on the atom as the electric field of the ion distorts the distribution of charge in the atom. The amount of distortion depends on the polarizability α of the atom. This is the leading-order contribution to the ion-atom interaction potential. Next-order contributions come from the interplay between the charge of the ion and the induced electric quadrupole moment of the atom, as well as the dispersion interaction. Furthermore, if the ion and atom have nonzero orbital angular momentum, the interaction potential becomes more complex 2 .
The long-range behavior of the ion-atom interaction potential is typically described by

V(r) = -C_4 / r^4,    (1)

with r the distance between the ion and atom; the coefficient C_4 = α/2 in atomic units is related to the polarizability of the atom (the polarizability connects the induced dipole moment p to the electric field E of the ion). A repulsive potential is often added to capture the hard-core potential at short range; for example, a 1/r^6 term can be added, but this is only one model, and in accordance with the Lennard-Jones form of molecular potentials a 1/r^8 repulsive term could also be used. This ion-atom interaction potential is of intermediate range as compared to the van-der-Waals potential (∝ 1/r^6), the dipole-dipole potential (∝ 1/r^3) or the Coulomb potential (∝ 1/r). The real interaction potential is more complex and contains a multitude of effects. It requires both theory and experiment to be fully characterized. In particular, when the atom and ion are so close together that their electronic clouds overlap, the interactions should be obtained through quantum chemistry calculations which take into account all contributions from electronic correlation and exchange. These calculations provide the potential energy curves of the complex structure of ion-atom molecular states, which determine the reaction pathways and collision rates of the processes found in ion-atom systems. Experimental input is needed to benchmark these calculations and improve the accuracy of the interaction potentials, cross sections and rate constants of all the collisions and reactions that take place.

The ion-atom collisions in the system can be elastic, inelastic or reactive. This leads to changes of the initial colliding partners in momentum, quantum state or chemical species. In elastic collisions the kinetic energy is conserved and the momentum of the particles changes, while the internal state of the colliding particles does not change. In other words, the translational degrees of freedom are decoupled from the internal degrees of freedom of the colliding particles and the latter are kept unchanged. However, because of the momentum changes, elastic collisions can lead to heating of the atom cloud and atom loss from the trap, as well as cooling or heating of the ion. Away from the quantum regime, the elastic cross section was found to follow

σ_el(E_col) = π (4µC_4²/ℏ²)^{1/3} (1 + π²/16) E_col^{-1/3},

using a semi-classical approach describing many partial waves (Côté and Dalgarno, 2000). Here, µ = m_i m_a / (m_i + m_a) is the reduced mass and E_col the collision energy. In the relative frame of two colliding particles, the collision energy corresponds to the relative kinetic energy between an ion and an atom (Gerlich, 2008). Thus, the mean collision energy is given by

E_col = µ (E_i/m_i + E_a/m_a),    (2)

with E_i the ion energy and E_a the atom energy. (For a single collision, E_col = ½µ(v_i² + v_a² - 2 v_i v_a cos θ), with θ the angle between the colliding ion of velocity v_i and atom of velocity v_a; averaging gives E_col = ½µ(⟨v_i²⟩ + ⟨v_a²⟩) and thus Equation 2.) The atom energy is related to the temperature T_a of the atom cloud via E_a = (3/2) k_B T_a, with k_B the Boltzmann constant.
Inelastic and reactive collisions come with a change in the kinetic and/or internal energy of the particles. The internal and translational degrees of freedom are coupled and the internal states of the colliding particles can change. An exothermic reaction results in a release of energy when the reaction takes place, while an endothermic reaction requires energy to happen. The total energy during a reaction is conserved, so the difference in internal energy between the reactant and product states is translated into kinetic energy. Collisions can thus take away or add kinetic or internal energy to the system and lead to cooling or heating. For instance, an ion can relax from an excited hyperfine state back to the lower manifold, thereby releasing the hyperfine energy difference. Inelastic collisions lead to a change of the quantum state. For instance, spin-changing collisions can lead to both spin-conserving and spin-non-conserving changes in the spin states of the initial colliding partners. The former is commonly called spin exchange and the latter spin relaxation. Reactive collisions are chemical reactions, where the initial reactants are transformed into different final products. A prime example is resonant charge exchange, where the electron is transferred and the ion becomes an atom and vice versa via A + B⁺ → A⁺ + B. Another example is molecular ion formation, whereby A + B⁺ → AB⁺, which can happen radiatively or via three-body recombination, where a second atom carries away the released energy, i.e. A + A + B⁺ → AB⁺ + A. Both inelastic and reactive collisions can be radiative or nonradiative. They can happen by absorbing or emitting a photon or through adiabatic crossings and resonances. Especially radiative association and dissociation (A + B⁺ ↔ AB⁺ + γ), whereby a molecular ion is formed or the molecular ion breaks into an atom and ion, are aided by the light (γ) present for cooling and trapping of the atomic and ionic species.
Another common type of ion-atom collision is the quenching collision, where the internal states of the ion relax to energetically lower lying states. For collisions that involve spin states, this so-called spin relaxation depends on the ion-atom system and the coupling mechanisms between the atom and ion states. The rates were measured to be higher for Yb⁺-Rb (Ratschbacher et al., 2013) than for Yb⁺-Li and were found to be even smaller for Sr⁺-Rb. This could be explained by the difference in second-order spin-orbit coupling (Tscherbul et al., 2016), which is higher for Yb⁺-Rb than for Yb⁺-Li and Sr⁺-Rb. These measurements rely on the initial state preparation of the ion in a particular spin state as well as on the creation and preparation of spin-polarized atom clouds in various atomic spin states. Ion-atom collisions can be sorted into Langevin or glancing collisions based on how close the ion and atom encounter each other. A Langevin collision happens for a close encounter and comes with a large transfer of energy and/or momentum. Typically, charge exchange and molecular ion formation processes occur via a Langevin collision. A glancing collision is a far-away encounter and only leads to a slight deflection of the particles' trajectories. However, it can still facilitate processes like resonant charge exchange that lead to cooling in homonuclear mixtures (Ravi et al., 2012; Mahdian et al., 2021).
Most inelastic and reactive collisions in ion-atom mixtures happen through a close encounter and can be described by the Langevin model. This model assumes that, given the impact parameter b between the ion and the atom, a Langevin collision inevitably occurs when b < b_c and a glancing collision happens when b > b_c. The critical impact parameter b_c depends on the collision energy and the atom polarizability as b_c = (4C_4/E_col)^(1/4). The Langevin capture model predicts a Langevin rate which is independent of the collision energy. The Langevin rate constant, in units of m³ s⁻¹, is given by

Γ_L = σ_L v = 2π √(2C_4/µ),

whereby the cross section σ_L = 2π √(C_4/E_col) and v represents the relative velocity of the ion and atom. At large distances and for the kinetic energy dominating the interaction energy of the system, E_col = ½ µv². Filling this in leads to the right-hand side of the equation and the collision energy drops out. The total number of Langevin collisions that can take place between the ion and the atom cloud depends on the density of the atoms n_a, the Langevin rate and the interaction time τ for which the ion is immersed in the bath. The probability for a collision to happen is therefore given by P = 1 − e^(−Γ_L n_a τ).
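To make these magnitudes concrete, the Langevin rate constant and the resulting collision probability can be evaluated numerically. The following Python sketch uses the expressions above; the Li polarizability is an assumed literature value, and the atomic density and interaction time are illustrative placeholders rather than values from a specific experiment.

import numpy as np

# Constants (SI units)
eps0 = 8.8541878128e-12
e = 1.602176634e-19
u = 1.66053906660e-27
a0 = 5.29177210903e-11  # Bohr radius

# Yb+ - 6Li example; alpha(Li) ~ 164 a.u. (static polarizability, assumed)
alpha_au = 164.2
# C4 in SI units for V(r) = -C4/r^4, from the polarizability volume alpha*a0^3
C4 = alpha_au * a0**3 * e**2 / (2 * (4*np.pi*eps0))
m_i, m_a = 171*u, 6*u
mu = m_i*m_a/(m_i + m_a)

# Energy-independent Langevin rate constant: Gamma_L = 2*pi*sqrt(2*C4/mu)
k_L = 2*np.pi*np.sqrt(2*C4/mu)
print(f"k_L = {k_L:.2e} m^3/s")

# Probability of at least one Langevin collision during the immersion time tau
n_a, tau = 1e18, 1e-3   # hypothetical atomic density (m^-3) and time (s)
P = 1 - np.exp(-k_L*n_a*tau)
print(f"P(collision) = {P:.3f}")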
So far the collisions were described classically; however, in the quantum regime the classical capture picture dissolves and the reaction pathways occur via tunneling, reflection, and resonances. The quantum regime can be reached by cooling down both the ion and atom degrees of freedom (see Fig. 1). Classically, the ion-atom interaction is described by many partial waves, as the total scattering wave function can be decomposed into contributions of different partial waves, i.e. different angular momenta of the rotational motion. However, in the quantum regime all partial waves except the s-wave are frozen out. The s-wave regime is reached when the ion-atom collision energy of the system is smaller than the lowest centrifugal barrier. All partial waves, except for the lowest s-wave, have a centrifugal barrier in their potential. The range and energy of the centrifugal barrier for a partial wave with angular momentum l are given by

R_l = √(4µC_4/(l(l+1)ℏ²))  and  E_l = l²(l+1)² ℏ⁴/(16 µ² C_4).

Here, R_(l=1) defines the characteristic range R*, which e.g. for Yb⁺-Li and Ba⁺-Li is about 70 and 69 nm, respectively. This is an order of magnitude larger than the typical range of the van der Waals interactions for neutral atoms, e.g. for Rb and Li R_vdW = 4.4 and 1.7 nm (Chin et al., 2010). Similarly, E_(l=1) represents the p-wave centrifugal barrier and is called the characteristic energy E*. For collision energies below this energy all partial waves freeze out and only the isotropic s-wave contribution to the scattering survives. It is therefore also called the s-wave limit, i.e. E_s = ℏ⁴/(4µ²C_4). Below this limit, the ultracold collisions are entirely determined by single partial-wave scattering in the incoming collision channel and a quantum treatment of the interactions is necessary. Here, one expects to see quantum effects, such as deviations from the Langevin rate and Feshbach resonances (Sec. 4.3). The s-wave energy depends on both the reduced mass and the polarizability of the atoms, and thus depends greatly on the choice of ion-atom system, as we will discuss in Sec. 3.3.
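The characteristic scales R* and E* follow directly from µ and C_4 and can be evaluated for any mixture. A minimal sketch, again assuming a rough literature value for the Li polarizability; it reproduces the ~70 nm quoted above for Yb⁺-Li.

import numpy as np

hbar = 1.054571817e-34; kB = 1.380649e-23
u = 1.66053906660e-27; e = 1.602176634e-19
eps0 = 8.8541878128e-12; a0 = 5.29177210903e-11

def ion_atom_scales(m_i_amu, m_a_amu, alpha_au):
    """Characteristic range R* and energy E* of the -C4/r^4 potential."""
    mu = m_i_amu*m_a_amu/(m_i_amu + m_a_amu)*u
    C4 = alpha_au*a0**3*e**2/(8*np.pi*eps0)   # SI, for V(r) = -C4/r^4
    R_star = np.sqrt(2*mu*C4)/hbar            # R_{l=1}
    E_star = hbar**4/(4*mu**2*C4)             # p-wave barrier height
    return R_star, E_star

R, E = ion_atom_scales(171, 6, 164.2)  # Yb+ - 6Li, alpha(Li) assumed
print(f"R* = {R*1e9:.0f} nm, E*/kB = {E/kB*1e6:.1f} uK")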
Setup and Techniques
Creating hybrid ion-atom experiments requires the combination of a trapped-ion setup with that of a cold quantum gas machine. Both are ultra-high vacuum setups; however, the ion requires electric feedthroughs for the operation of the electrodes, whereas the atoms require a lot of optical access to fit in all the beams required for magneto-optical trapping, optical trapping and absorption imaging. Both atoms and ions can be cooled by laser light, however their frequencies differ, as do their detection requirements. Ions are detected via fluorescence imaging and require an efficient collection of the light to measure a single ion. In the ion-atom mixture, the ion represents a single well-controlled reaction center and most of the measurements rely on the exquisite control and read-out for which trapped ions are known. From the atom cloud, the relevant parameters are typically the temperature, atom number and density probed by the ion. The number of atoms in the cloud is determined by absorption imaging, where a shadow image of the cloud is made. This requires a good imaging system, as the width of the cloud is also needed to obtain the density and temperature of the atoms. Moreover, this is a destructive imaging method and afterwards a new atom cloud needs to be loaded. Thus, ultracold atom setups rely on the reproducibility of the atom cloud, while the same trapped ion can be probed numerous times. When designing these hybrid setups, an intricate balance between the various wishes needs to be found.
In this section we discuss the ion and atom preparation and detection as well as the experimental mixtures available and the challenges of combining ions and atoms. As an example of an ion-atom setup to keep in mind, Fig. 2a gives an overview of the design of the Yb⁺-Li setup. Here the atom part on the left consists of an oven followed by a Zeeman slower, before the atomic beam enters the core of the setup. The ion oven is located close to the Paul trap. The heart of the setup is the central chamber with various optical viewports for good optical access. Fig. 2b illustrates two key elements of a hybrid ion-atom experiment. Lasers are used for both ion and atom cooling as well as trapping, while electric fields aid in controlling the ion. Here, a radio frequency Paul trap for ion trapping is shown, which is the most commonly used trap. Other elements are magnetic coils, which can be used for atom trapping and to tune the atom-atom interactions via a Feshbach resonance (FR), and, finally, cameras to take fluorescence images of the ion (Fig. 2c) and absorption images of the atoms (Fig. 2d).

Figure 2: Schematics of the setup of an ion-atom experiment. a) Overview of the entire ultra-high vacuum setup. Shown are the atom part consisting of oven and Zeeman slower as well as the central part where the Paul trap is located. The main chamber is surrounded by various magnetic field coils, which can be used to tune the atom-atom interactions. b) The heart of the setup depicting the key elements of a hybrid ion-atom experiment: laser beams for atom trapping, laser beams for ion cooling and detection and a radio frequency Paul trap for ion trapping. Not shown are the camera to take fluorescence images of the ion and the camera and laser beam for taking absorption images of the atoms, which are shown in d and c, respectively. Adapted from Hirzler et al. (2020a) and Feldker et al. (2020).
Ion preparation and detection
Ions used in hybrid ion-atom experiments are usually singly charged alkaline-earth or alkali ions. The former have the electron structure of neutral alkali atoms, although the heavier ions distinguish themselves by featuring low-lying metastable ²D levels. The low energy levels of Yb⁺ are also alkali-like but feature in addition two metastable ²D levels and the extremely long-lived ²F_7/2 state (Lange et al., 2021). On the one hand, the metastable states complicate laser cooling since more repump lasers are required than for the neutral alkalis, but on the other hand the availability of narrow-linewidth quadrupole transitions, ²S → ²D, facilitates sensitive detection methods as described below.
The ions are typically created by photoionizing neutral atoms emerging from an oven in an ultra-high vacuum environment. Usually, two-photon ionization is employed. In this method, the isotope shifts in the ion species can be easily resolved such that the process is isotope selective. Since ion traps are usually extremely deep, precooling of the atoms is not needed. Doppler laser cooling after ionization localizes the ions in the center of the trap at temperatures of ∼ mK. Fluorescence during laser cooling can be collected on a camera or photon detector to infer the presence of the ions. In typical traps, the ions form crystalline structures due to their Coulomb interactions, with ion separations in the 2-20 µm range, which are easily resolved with a modest microscope. The number of ions can be readily controlled by inspection and turning off the photoionization laser. When too many ions are loaded by accident, temporarily lowering the trap depth allows for a surprisingly easy method to remove ions one by one with a relatively high success rate.
Further laser manipulation allows cooling the ions to near their motional ground state using e.g. resolved sideband cooling. Moreover, the electronic, hyperfine and Zeeman states of the ions can be prepared and read out with laser pulses as well. The closed-shell alkali ions do not feature practical optical transitions; they are usually created by ionizing an ultracold alkali atom and are detected on multichannel plates. For further reading, we refer the reader to the various detailed reviews on trapped ions showcasing the toolbox at hand, especially for their use as quantum simulators and qubits for quantum computation, e.g. (Leibfried et al., 2003; Monroe et al., 2021; Bruzewicz et al., 2019).
Paul trap
The most common method of trapping ions is by using a Paul trap (Paul, 1990). In it, the ions are confined by a combination of static and radio frequency electric fields, E_s and E_rf(t). The total electric field is given by

E(t) = E_s + E_rf(t).

In an ideal Paul trap, both electric fields vanish in the trap center and we can expand the fields to first order in the ion coordinate r around the origin: E(t) ≈ (G_s + G_rf(t)) · r. Here, G_s and G_rf(t) are the 3×3 matrices representing the gradients of the vector fields E_s and E_rf, with elements G_s,ij = ∂E_s,i/∂r_j and G_rf,ij(t) = ∂E_rf,i/∂r_j and i, j ∈ {x, y, z}. The Maxwell equation ∇ · E_s = 0 imposes the constraint Tr(G_s) = 0, suggesting that charged particles cannot be trapped by static fields alone, since the retaining force F = eE_s will have to be of the wrong sign in at least one direction. Here e is the elementary charge. The result is known as Earnshaw's theorem (Earnshaw, 1842); for a discussion and a full proof see Weinstock (1976). As we will see later in Section 3.3, the time-dependent field E_rf(t) can have serious consequences when combining ions with ultracold atoms. However, Earnshaw's theorem makes it impossible to simply switch it off, unless we consider other means of trapping ions such as optical trapping or Penning traps. The radio frequency electric field generally has a simple harmonic time dependence, G_rf,ij(t) = G_rf,ij cos(Ω_rf t), and we call Ω_rf the trap drive frequency, with the subscript rf denoting radio frequency. In experiments we typically have Ω_rf/(2π) between 100 kHz and 50 MHz. It is convenient to introduce unitless parameters a_ij and q_ij that relate the electric field gradients to Ω_rf such that G_s,ij = −(m_i Ω_rf²/(4e)) a_ij and G_rf,ij = −(m_i Ω_rf²/(2e)) q_ij, with m_i the mass of the ion. The parameters a_ij and q_ij are known as the static and dynamic stability parameters, respectively, because they determine whether an ion can be trapped stably. In case all a_ij = 0, the eigenvalues of q should all be < 0.908 (Ghosh, 1995) for the ion to be trapped. The more general case requires solving the coupled Mathieu equations m_i r̈(t) = e(G_s + G_rf(t)) · r, with e the charge of the ion.
The experimentally most relevant case is the so-called linear Paul trap. Here, the radio frequency field supplies confinement in the two radial directions, which we set as the x- and y-direction here, while the static field supplies confinement in the z-direction. We require q_x = −q_y = q and a_z = −a_x − a_y > 0 due to the Maxwell equations and to assure trapping in the z-direction. We only consider positively charged ions here and have assumed for simplicity that G_s and G_rf can be simultaneously diagonalized such that the equations of motion decouple in each of the independent directions of motion. For the linear Paul trap, the equations of motion take the form of Mathieu equations and can be solved analytically using a Floquet Ansatz (Ghosh, 1995; Leibfried et al., 2003). To lowest order in the stability parameters, the solutions can be approximated as:

r_j(t) ≈ A_j cos(ω_j t + φ_j) [1 + (q_j/2) cos(Ω_rf t)].

This equation describes harmonic motion with amplitude A_j and phase φ_j on top of a fast intrinsic micromotion (IMM) of amplitude A_j q_j/2 ≪ A_j and frequency Ω_rf. The secular trap frequencies are given, to lowest order, by ω_j ≈ (Ω_rf/2) √(a_j + q_j²/2) ≪ Ω_rf. Since the micromotion has a small amplitude as compared to the secular motion, it is often neglected in a process called the secular approximation. However, whether this approximation is justified depends a lot on the situation at hand. It is known that it can fail dramatically when the ion is interacting with neutral particles (Major and Dehmelt, 1968; Itano et al., 1995; Gerlich, 1995; DeVoe, 2009; Zipkes et al., 2011; Cetina et al., 2012; Chen et al., 2014; Höltkemeier et al., 2016; Meir et al., 2016; Rouse and Willitsch, 2017; Rouse and Willitsch, 2019).
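The quality of the lowest-order solution can be checked by integrating the Mathieu equation numerically. The sketch below (Python with SciPy; trap parameters are hypothetical) propagates the single-direction equation of motion following from the gradient definitions above, x''(t) = −(Ω_rf²/4)(a + 2q cos Ω_rf t) x, and compares the result to the approximate trajectory.

import numpy as np
from scipy.integrate import solve_ivp

# Ion motion in one radial direction of a linear Paul trap
Omega = 2*np.pi*2e6           # trap drive frequency (rad/s), hypothetical
a, q, A = 0.0, 0.2, 1e-6      # stability parameters and secular amplitude (m)

def eom(t, y):
    x, v = y
    return [v, -(Omega**2/4)*(a + 2*q*np.cos(Omega*t))*x]

t = np.linspace(0, 40e-6, 4000)
# Initial conditions chosen to match the approximate solution at t = 0
sol = solve_ivp(eom, (t[0], t[-1]), [A*(1 + q/2), 0.0], t_eval=t,
                rtol=1e-10, atol=1e-15)

# Lowest-order solution: secular motion plus intrinsic micromotion
omega = (Omega/2)*np.sqrt(a + q**2/2)
x_apx = A*np.cos(omega*t)*(1 + (q/2)*np.cos(Omega*t))
print(f"secular frequency: {omega/2/np.pi/1e3:.0f} kHz")
print(f"max deviation: {np.max(np.abs(sol.y[0] - x_apx))/A:.1e} of A")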
For the ideal linear Paul trap, q_z = 0, such that the ion motion in this direction does not have micromotion. In practical Paul traps, however, |q_z| > 0. This has an effect that needs consideration when minimizing so-called excess micromotion. Excess micromotion (EMM) is the additional micromotion in the trap that stems from imperfections in the electric field, as real setups deviate from the ideal linear Paul trap scenario. It can be well characterized and compensated to a certain degree, as described in the next section.
For non-negligible q, it is worthwhile to consider the next higher order in q to accurately describe the ion trajectories (Meir, 2016):

r_j(t) ≈ A_j cos(ω_j t + φ_j) [1 + (q_j/2) cos(Ω_rf t) + (q_j²/32) cos(2Ω_rf t)],    (6)

with the secular frequency ω_j acquiring corresponding higher-order corrections in q_j. In practical experiments, usually q < 0.5, because higher values may result in parametric excitation and heating of the ions. This effect results from anharmonic terms in the trapping fields that we neglected in the first-order expansion of the trapping field (Alheit et al., 1995; Pedregosa et al., 2010; Deng et al., 2015). Finally, let us consider the amount of energy that is stored in the micromotion. Using Eq. 6, we can obtain the average kinetic energy

⟨E_kin^j⟩ = ½ m_i ⟨v_j²⟩ ≈ ¼ m_i A_j² ω_j² + (1/32) m_i A_j² q_j² Ω_rf²,

in accordance with a normal harmonic oscillator. Here, we assumed a ≪ q², such that ω_x,y → Ω_rf q_x,y/(2√2), and v_j is the velocity in direction j. Therefore, the two directions with micromotion each add an additional ∼ k_B T/2 to the total kinetic energy, a result that was obtained in Berkeland et al. (1998). In summary, the total kinetic energy of the ion in a linear Paul trap is given by E_kin^j = E_sec^j + E_IMM^j + E_EMM^j for each direction. It consists of the secular energy and the energy due to the intrinsic and excess micromotion. Thus, micromotion is of importance when dealing with collisions between ions and neutral atoms. The relative contribution of micromotion along the z-axis is of order ∼ q_z²/(8a_z) ≪ 1 as long as a_z ≫ q_z². Lastly, note that intrinsic micromotion occurs mostly at frequencies Ω_rf ± ω_j, as is clear from Eq. 6. This distinguishes intrinsic micromotion from excess micromotion, which occurs at (multiples of) Ω_rf as described below.
Excess micromotion
Up until now, we have considered ideal Paul traps, but there are several experimental imperfections that need to be taken into account before we can look at realistic ion trapping. Here, the most important of these is without doubt excess micromotion. Intrinsic micromotion in an ideal Paul trap adds kinetic energy on the order of k_B T to the ion that needs to be taken into account when it interacts with another particle (Berkeland et al., 1998). Excess micromotion can further increase this energy, independently of T, and poses a serious restriction in studying collisions and chemistry between ultracold atoms and ions in Paul traps. There are three types of excess micromotion, commonly named radial, axial and quadrature (or phase) excess micromotion. All originate from deviations in the ideal electric field of the Paul trap, are independent of the secular motion amplitudes A_j and occur at (multiples of) the trap drive frequency Ω_rf.
Radial micromotion. Here, the positions at which the fields E_s and E_rf vanish do not coincide. Since the latter is the more important for radial confinement, we quantify the electric offset field as the static field E_offs = E_s(r = 0), where E_rf(r = 0) = 0. The offset field has the effect of pushing the ion slightly away from the radio frequency trap center, by an amount d_j ∼ eE_offs,j/(m_i ω_j²). Here, the ion undergoes excess micromotion of amplitude ∼ q_j d_j/2, corresponding to an average kinetic energy of

E_kin^rad ∼ ¼ m_i (q_j d_j Ω_rf/2)²

to lowest order, where we used q_x,y² ≫ |a_x,y|. Taking e.g. a ⁴⁰Ca⁺ ion with Ω_rf = 2π× 20 MHz, q = 0.2 and E_offs,j = 1 V/m, we obtain E_kin^rad ∼ k_B × 4 mK, an order of magnitude higher than the Doppler limit for laser cooling. By getting better control of the electric field or compensating the offset field down to < 0.1 V/m, excess micromotion energies in the µK regime are allowed.
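The quadratic scaling with the offset field is what makes compensation so effective. A minimal Python sketch following the lowest-order estimate above, with illustrative Yb⁺ trap parameters (numerical prefactors of order unity differ between conventions in the literature):

import numpy as np

u = 1.66053906660e-27; e = 1.602176634e-19; kB = 1.380649e-23

def radial_emm_energy(m_amu, Omega, q, E_offs):
    """Lowest-order radial EMM kinetic energy, assuming a << q^2 so that
    omega = q*Omega/(2*sqrt(2)). Prefactors vary between conventions."""
    m = m_amu*u
    omega = q*Omega/(2*np.sqrt(2))          # radial secular frequency
    d = e*E_offs/(m*omega**2)               # displacement from the rf null
    return 0.25*m*(q*d*Omega/2)**2          # time-averaged kinetic energy

# Illustrative 171Yb+ parameters (hypothetical trap)
for E_offs in (1.0, 0.1):
    E = radial_emm_energy(171, 2*np.pi*2e6, 0.26, E_offs)
    print(f"E_offs = {E_offs} V/m -> E_rad/kB = {E/kB*1e6:.0f} uK")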
Reducing radial excess micromotion can be done by tighter ion confinement, since E_kin^rad ∝ 1/ω_j². However, the other types of micromotion do not share this property and care needs to be taken when minimizing the total excess micromotion. Moreover, the effect of micromotion on ion-atom collisions also does not favor a tight confinement (Sec. 3.3).
Axial micromotion. Another type of excess micromotion occurs when there is no point where the axial rf field disappears, but rather there is a homogeneous rf field in the axial direction. Even with q = q_x = −q_y, this field is permitted by the Maxwell equations. This type of micromotion is best characterized by the strength of the oscillating field E_EMM^ax ∝ q. Axial micromotion has an amplitude of ∼ eE_EMM^ax/(m_i(Ω_rf² − ω_z²)) and an average kinetic energy of

E_kin^ax ∼ e²(E_EMM^ax)²/(4 m_i Ω_rf²)

for Ω_rf ≫ ω_z. Considering again ⁴⁰Ca⁺ with Ω_rf = 2π× 20 MHz, ω_z = 2π× 1 MHz and E_EMM^ax = 1 V/m, we get E_kin^ax ∼ k_B × 200 nK. Axial micromotion can occur for instance due to a small angle between the rf electrodes. The electric field that causes the axial excess micromotion is proportional to the trapping electric fields. Thus, reducing the effect can be done by using a low Ω_rf and q, which unfortunately leads to a larger effect of radial excess micromotion.
Quadrature micromotion. Phase or quadrature micromotion (Berkeland et al., 1998; Meir et al., 2017b) originates from a phase difference δφ_rf between the radio frequency voltages on different rf electrodes. This can be seen as a time dependence in the location of the rf null. It causes an out-of-phase micromotion of amplitude ∼ q_j R_trap δφ_rf, where R_trap is half the distance between the two rf electrodes. This results in an average kinetic energy of

E_kin^quad ∝ m_i Ω_rf² (q_j R_trap δφ_rf)².

Quadrature micromotion originates either from impedance or mechanical mismatches between the electrodes. For instance, a simple difference in the length of the wires supplying voltage to the electrodes results in a phase difference. Like with axial micromotion, the out-of-phase field is proportional to the trapping field, and reducing its effect can thus be done by using a low Ω_rf and q. Setting a low Ω_rf makes it easier to reduce δφ_rf because the wavelength of the rf increases. The scaling ∝ q_j R_trap δφ_rf is very unfavorable. To have a phase micromotion energy below the Doppler limit, δφ_rf ≲ 0.01° for the typical R_trap in the mm range; taking again ⁴⁰Ca⁺ with Ω_rf = 2π× 20 MHz and q = 0.2, this corresponds to path length differences much smaller than 0.5 mm. At the same time, phase micromotion cannot be inferred from indirect measurements, as is often done with radial micromotion and will be discussed next. Because of all of this, phase micromotion is often the dominant form of excess micromotion.
Prevention. Excess micromotion poses challenges to a wide variety of applications of trapped ions including for atomic clocks and in quantum computing (Berkeland et al., 1998;Keller et al., 2015) and a large amount of work has been done in either preventing or undoing its effects. In this section we describe a few known measures to prevent excess micromotion.
Radial excess micromotion occurs because of uncontrolled static electric fields in the experiment. The lasers used for creating, cooling and detecting ions typically operate in the UV and can easily free electrons from any metal that the beams hit (Harlander et al., 2010). These electrons are accelerated in the electric field of the Paul trap and may accumulate on areas that are non-conducting, causing electric offset fields. Therefore, non-conducting elements must be 'hidden' from the ion's line of sight as much as possible. In the case of hybrid ion-atom experiments, however, there is an additional danger. The atoms used in the experiment may land on the ion trap electrodes and can, after some time, oxidize to form a non-conductive layer on which electrons may stick and cause electric offset fields. In the group in Amsterdam, we periodically clean the trapping electrodes using a high-power IR laser to evaporate any material sticking to them, which eliminates the electric offset fields for a few weeks (Hirzler et al., 2020a). Large Paul traps help mitigate these problems, as the atoms are kept away from the electrodes as much as possible.
Phase and axial micromotion can be prevented by accurate mechanical and electrical construction of the Paul trap. This means that asymmetries, angles and differences in wire lengths need to be avoided. Adequate filtering can prevent radio frequency pickup in the electrodes. Numerical simulations of the trapping fields are a valuable tool for checking the resilience of the trap to realistic asymmetries and angles between the electrodes. Furthermore, optimization of the trap parameters (q and Ω_rf) can be helpful, as the various types of excess micromotion depend differently on them, as we previously showed. However, other demands also play a role when designing ion traps, such as optical access or the need to reach the so-called Lamb-Dicke regime for ion-laser interactions (see Sec. 3.1.4).
Detection and compensation. Radial offset fields that can cause excess micromotion are easily detected and compensated by applying additional electric fields. These fields typically originate from strategically placed dedicated compensation electrodes. While radial excess micromotion can be compensated with static electric fields, compensating axial and phase micromotion requires the application of time-dependent compensation fields with full phase control. This is typically quite challenging and so prevention is the preferred approach. Here, we describe the various techniques available to detect radial offset fields and micromotion in general.
For detecting radial offset fields, one often uses the fact that the ion position in the trap under the influence of an offset field depends on the trap frequency ω_j according to d_j ∼ eE_offs,j/(m_i ω_j²). The trap frequency may be controlled by changing the rf voltage, while the distance that a fluorescing ion moves as a function of it may be tracked with a camera. Crucially, both ω_j and the magnification of the camera (and thereby d_j) can be accurately determined by independent measurement. The former is measured by applying a small oscillating electric field and observing the fluorescence of the ion; upon hitting the resonance at ω_j, the fluorescence abruptly changes due to the Doppler effect as the ion motion gets excited. The magnification can be determined by loading two ions in the trap at a known axial trap frequency ω_z and noticing that the distance between them is given by (James, 1998):

s = (e²/(2πε₀ m_i ω_z²))^(1/3).

The radial excess micromotion may be compensated by applying additional electric fields until the ion does not move upon changing ω_j. In these camera-position measurements, a potential systematic error is the radiation pressure of the laser that makes the ion fluoresce. This causes a force of ∼ ℏk γ_ph, with γ_ph the photon scattering rate and k the wavenumber. For γ_ph ∼ Γ/2, with Γ the transition linewidth of typically Γ ∼ 2π× 20 MHz for alkaline-earth ions, and typical wavelengths λ ∼ 400 nm, the radiation pressure is equivalent to the effect of an offset field of ∼ 1 V/m. Its effect can be reduced by aligning the laser along an axis in which the ion does not move or by reducing its intensity to far below saturation.
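Both calibration relations are easily evaluated. The sketch below (illustrative ¹⁷¹Yb⁺ numbers) gives the two-ion separation used to calibrate the camera magnification, and the offset-field displacement as a function of secular frequency, showing the 1/ω_j² dependence exploited in the compensation procedure.

import numpy as np

u = 1.66053906660e-27; e = 1.602176634e-19
eps0 = 8.8541878128e-12

# Two-ion separation in a linear Paul trap (James, 1998):
# s = (e^2 / (2*pi*eps0*m*omega_z^2))^(1/3)
def two_ion_separation(m_amu, omega_z):
    m = m_amu*u
    return (e**2/(2*np.pi*eps0*m*omega_z**2))**(1/3)

# Offset-field displacement d = e*E_offs/(m*omega^2)
def displacement(m_amu, omega, E_offs):
    return e*E_offs/(m_amu*u*omega**2)

m = 171  # 171Yb+, illustrative
print(f"s = {two_ion_separation(m, 2*np.pi*1e5)*1e6:.1f} um")
for f in (1e5, 2e5, 4e5):  # secular frequencies (Hz)
    d = displacement(m, 2*np.pi*f, 1.0)
    print(f"omega = 2pi x {f:.0e} Hz -> d = {d*1e6:.2f} um")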
The technique does not work for ion movement in the direction of the camera. However, tricks are available to map the information of the ion position onto the internal state of the ion. This may for instance be done by applying a magnetic field gradient and detecting the energy shift between states with a magnetic moment as a function of ω_j (Hirzler et al., 2020a). One can also detect the position-dependent phase of a laser. These phase differences may in turn be mapped onto the internal states of ions (Higgins et al., 2021). These techniques have the benefit of not suffering from radiation pressure.
Another method to detect an offset between the static and rf fields is to mix a voltage at frequency ω_j into the rf signal. If the ion is not situated at a position where E_rf = 0, this field can parametrically excite the ion, which will be visible in the fluorescence (Ibaraki et al., 2011; Narayanan et al., 2011; Tanaka et al., 2012; Keller et al., 2015; Nadlinger et al., 2021).
The micromotion of the trapped ion can be directly probed by making use of the Doppler effect. The 'photon correlation method' works by detecting photons on e.g. a photomultiplier tube. Due to micromotion and the Doppler effect, the ion fluorescence shows a modulation at the trap drive frequency Ω_rf that can be made visible by averaging over many periods and making sure the detection and the trap drive are exactly in sync. The success of compensation methods can be checked by flattening this fluorescence curve. The method also gives access to the relative phase between the micromotion and the trap drive and can thus distinguish between phase and axial/radial micromotion (Berkeland et al., 1998; Keller et al., 2015).
Another method based on the Doppler effect is to look at the fluorescence in the spectral domain. In the rest frame of the ion, the laser field is frequency modulated with frequency Ω_rf and modulation index β = k x_EMM, with x_EMM the amplitude of the micromotion (Berkeland et al., 1998) and k the wavevector projected onto the direction of the micromotion. This causes sidebands at multiples of Ω_rf in the spectral response of the ion when probed with near-resonant light. The success of compensation methods can be checked by minimizing the strength of these sidebands. However, in ion-atom mixtures the sidebands are usually not resolved, as the ions typically have linewidths Γ ∼ 2π× 20 MHz on their main cooling transitions and these mixtures favor low Ω_rf. One solution is to rather perform the measurement on a weaker repump transition (Goham and Britton, 2021).
A widely applied variation is to make use of narrow-linewidth transitions and directly probe the coupling strength of the laser on a number of the sidebands (Berkeland et al., 1998). This method is sometimes called the Rabi method, since the coupling strengths are usually derived from measured Rabi flops. The Rabi frequencies on the carrier, Ω_car, and on the first micromotion sideband, Ω_sb, are related by

Ω_sb/Ω_car = J_1(β_mm)/J_0(β_mm),

with J_1(β_mm) and J_0(β_mm) Bessel functions of the first kind. The heavy alkaline-earth ions and Yb⁺ used in ultracold ion-atom experiments have narrow-linewidth quadrupole transitions available to the low-lying ²D states that can be used for this type of micromotion detection. Potential systematic shifts can originate from Stark shifts for weak sideband transitions (i.e. close to perfect compensation) and from modulation of magnetic substates due to magnetic fields generated by the rf trapping fields (Meir et al., 2017b).
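Inverting the Bessel-function relation gives the modulation index, and from it the micromotion amplitude. A minimal sketch (the measured ratio and transition wavelength are hypothetical numbers for illustration):

import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

# Infer the modulation index beta from the measured ratio of sideband to
# carrier Rabi frequencies: Omega_sb/Omega_car = J1(beta)/J0(beta).
def beta_from_ratio(ratio):
    return brentq(lambda b: j1(b)/j0(b) - ratio, 1e-9, 2.0)

ratio = 0.05                       # hypothetical measured ratio
beta = beta_from_ratio(ratio)
lam = 411e-9                       # illustrative quadrupole-transition wavelength
x_emm = beta*lam/(2*np.pi)         # micromotion amplitude from beta = k*x_EMM
print(f"beta = {beta:.4f}, x_EMM = {x_emm*1e9:.2f} nm")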
For ion-atom experiments, the particulars of the experimental setup allow for a very precise determination of excess micromotion. This is because the outcome of ion-atom collisions depends very strongly on the amount of excess micromotion, as we describe in Section 3.3. Excess micromotion can then be detected either by measuring the loss of atoms in energetic collisions (Mohammadi et al., 2019) or by measuring the energy of the ion after interacting with atoms (Hirzler et al., 2020a). Both methods are among the most accurate available for radial excess micromotion compensation. A trapped ion can also be transferred into an optical dipole trap; since the dipole trap is very shallow, any stray electric field will lead to ion loss, making the system an extremely sensitive probe for radial excess micromotion.
Background heating
Electric field noise, originating either from technical noise or from room temperature electrodes in the vicinity of the ions, may cause background heating rates that are independent of the presence of atoms. Unlike buffer gas cooling (see Sec. 4.1), this heating is nearly independent of the temperature of the ions in the cold regime. Simply equating the cooling and heating rates shows that background heating limits buffer gas cooling by ∆T_i = Γ_heat/γ_cool, with Γ_heat the background heating rate in K/s and γ_cool the buffer gas cooling rate, such that dT_i/dt = −γ_cool(T_i − T_∞) + Γ_heat, with T_∞ the equilibrium temperature of the ion in the buffer gas without background heating (Hirzler et al., 2020a). Note that this simple analysis can only be expected to hold in the large mass ratio limit, since otherwise the ion energy distribution will deviate from a thermal distribution (DeVoe, 2009; Zipkes et al., 2011; Chen et al., 2014; Meir et al., 2016; Höltkemeier et al., 2016; Rouse and Willitsch, 2019; Feldker et al., 2020; Pinkas et al., 2020). For low densities, the cooling force of the ultracold buffer gas is small and the background heating can be a limiting factor, with e.g. Feldker et al. (2020) citing ∆T_i = 20 − 40 µK. In general, background heating can be mitigated by employing large Paul traps, in which the ions are far separated from any surfaces (Brownnutt et al., 2015).
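The steady-state balance is a one-line calculation. A minimal sketch of the rate equation above, with hypothetical cooling and heating rates chosen only for illustration:

import numpy as np

# Steady state of dT_i/dt = -gamma_cool*(T_i - T_inf) + Gamma_heat:
# the ion settles a distance Delta T_i = Gamma_heat/gamma_cool above T_inf.
gamma_cool = 50.0        # buffer gas cooling rate (1/s), hypothetical
Gamma_heat = 2e-3        # background heating rate (K/s), hypothetical
T_inf = 5e-6             # buffer-gas-limited temperature (K), hypothetical

dT = Gamma_heat/gamma_cool
print(f"steady-state ion temperature: {(T_inf + dT)*1e6:.0f} uK "
      f"(offset {dT*1e6:.0f} uK)")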
State detection and thermometry
Detection of one-electron ions such as the alkaline-earth ions and Yb⁺ can easily be done by collecting laser-induced fluorescence on a camera or photon detector. With photon scattering rates of several MHz and collection efficiencies of up to ∼ 1% for nearby lenses and photon detectors, the presence of an ion can be determined within milliseconds (Leibfried et al., 2003).
Typically, the ions used feature multiple states that can be driven with laser fields to make it possible to read out the internal state of the ion (Dehmelt, 1975). For instance, one of the states may lead to fluorescence while the other(s) do not (Leibfried et al., 2003). The dark and bright states may correspond to hyperfine ground states of an ion and the optical transition used for detection may be closed via selection rules. The ions Ca + , Sr + , Ba + and Yb + also feature low lying 2 D 5/2 states that may remain dark during fluorescence detection. This feature can be used for state detection with a bright ion corresponding to an ion measured to be in the 2 S 1/2 ground state and a dark ion corresponding to 2 D 5/2 . Additional laser pulses operating on the narrow 2 S 1/2 → 2 D 5/2 quadrupole transition may be used to map other states or information about e.g. ion motion to the dark state. Using this electron shelving method electronic, hyperfine and Zeeman substates may be distinguished with very high accuracy (Myerson et al., 2008).
For closed-shell ions such as Rb⁺, which are typically created by ionizing a buffer gas atom or via charge transfer from a one-electron ion, there are no practical optical transitions available. In this case, the ions may be detected via a multichannel plate. Here, the application of extraction fields and time-of-flight analysis gives access to the energy and position of the ions (Dieterle et al., 2021). Another method to detect closed-shell ions is by detecting the back-action that they have on an atomic gas, for instance in the form of atom loss due to energetic collisions (Schmid et al., 2010).
Co-trapping of an ion. A very common method in ion trapping is to co-trap ions, or even molecular ions, with ions that allow for straightforward fluorescence detection. Sympathetic laser cooling allows the formation of an ion crystal, and the impurity ions show up as dark vacancies in the crystal. Moreover, the trap frequencies of the ion crystal depend on the masses and composition of the crystal. In a linear Paul trap, ω_z ∝ 1/√m_i, while in the radial directions ω_⊥ ∝ 1/m_i. This effect can be used to accurately determine the atomic mass of the impurity ions with a resolution on the order of, or even below, an atomic mass unit (Morigi and Walther, 2001; Schmid et al., 2010; Home, 2013). The motional modes of ion crystals containing ions with different masses have been described in Refs. (Morigi and Walther, 2001; Home, 2013). In a crystal of two ions with masses m_1 and m_2 aligned along the radio frequency null of a linear Paul trap, the distance between them is independent of their masses. In contrast, the axial eigenmode frequencies are related via

ω_±² = ω_z,m1² (1 + ν ∓ √(1 − ν + ν²)),

with ω_z,m1 the trap frequency of a single ion of mass m_1 in the trap and ν = m_1/m_2. In the case that m_1 = m_2, this leads to the well-known result that the eigenmode frequencies are given by ω_z and √3 ω_z, for the center-of-mass and relative motion, respectively (James, 1998).
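The eigenmode relation can be evaluated directly to see how a dark impurity shifts the mode frequencies. A minimal sketch; the 177 amu impurity mass is a hypothetical example (e.g. a molecular ion of similar mass), not a measured value.

import numpy as np

# Axial eigenmode frequencies of a two-ion crystal with masses m1, m2:
# omega_pm^2 = omega_z1^2 * (1 + nu -/+ sqrt(1 - nu + nu^2)), nu = m1/m2.
def axial_modes(f_z1, m1, m2):
    nu = m1/m2
    w1 = 2*np.pi*f_z1
    w_minus = w1*np.sqrt(1 + nu - np.sqrt(1 - nu + nu**2))
    w_plus = w1*np.sqrt(1 + nu + np.sqrt(1 - nu + nu**2))
    return w_minus/(2*np.pi), w_plus/(2*np.pi)

# Equal masses recover f_z and sqrt(3)*f_z:
print(axial_modes(1e5, 171, 171))   # ~ (1.00e5, 1.73e5) Hz
# A heavier dark impurity shifts both modes, which identifies its mass:
print(axial_modes(1e5, 171, 177))   # hypothetical 177 amu impurity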
Information about the motional state of the ions can be obtained by making use of the Doppler effect, which relates the photon scattering rate to the velocity of the ions. The first of these methods is known colloquially as the Doppler re-cooling method and was first proposed and used in Wesenberg et al. (2007). This method works over a wide range of energies and can even be used to probe large (∼ K) amounts of energy released, e.g. in a chemical reaction, in a single run by timing the onset of ion fluorescence, as was shown in Meir et al. (2017a). However, methods based on the Doppler effect have as a lower limit the Doppler limit, which typically lies in the 0.5 mK range for the strong cooling transitions in alkaline-earth ions and Yb⁺. Reaching the quantum regime for interacting ion-atom mixtures requires lower ion temperatures for all experimentally studied systems.
Measuring the secular energy. To analyse the kinetic energy of an ion in the ultracold regime, we can couple information about it to its internal state, followed by fluorescence detection. This not only gives access to the average ion energy, but can also be used to measure the energy distribution, the amplitude and phase of its motion and the probability distribution in phase space, see e.g. (Wallentowitz and Vogel, 1995; Meekhof et al., 1996; Leibfried et al., 1996, 1998, 2003; Lougovski et al., 2006; Santos et al., 2007; Lamata et al., 2007; Schmitz et al., 2009; Gerritsma et al., 2010; Zähringer et al., 2010; Flühmann and Home, 2020). These techniques were developed in the context of trapped-ion quantum computing and rely on laser-induced qubit-motion coupling. It is important to realise that since the readout relies on the measured value of a single qubit, little information is gained per measurement and typically a lot of averaging is required. Moreover, the creation of an ultracold gas of atoms relies on time-consuming evaporative cooling, such that typically a lot of measurement time needs to be reserved for measuring the motion of a single ion. These caveats notwithstanding, the tools are very powerful and allow for example for an accurate determination of the total ion energy (Feldker et al., 2020).
The secular energy can be determined via the 'Rabi method' of measuring the Rabi flops on a transition. Consider the Hamiltonian of a two-level system with levels |0⟩ and |1⟩ in a harmonic trap with a laser field of frequency ω_L and wavenumber k. This can be described as

Ĥ = Ĥ_0 + (ℏΩ/2)(σ̂_+ + σ̂_−)(e^(i(k·r̂ − ω_L t)) + e^(−i(k·r̂ − ω_L t))),

where we introduced the Rabi frequency Ω with a factor 2 in anticipation of the rotating wave approximation, and σ̂_+ and σ̂_− are the usual raising and lowering operators for the internal state. We can go into the interaction picture with the unitary transformation Û = e^(iĤ_0 t/ℏ), with Ĥ_0 = Σ_(j=1)^3 ℏω_j â_j†â_j + ℏω_0 |1⟩⟨1| the Hamiltonian in the absence of the laser and â_j† and â_j the creation and annihilation operators belonging to the motional mode j with frequency ω_j. Furthermore, we quantize r̂ with k·r̂ = Σ_(j=1)^3 η_j (â_j† + â_j), with the Lamb-Dicke parameters η_j = k_j l_j, where l_j = √(ℏ/(2m_i ω_j)) and k_j is the laser wavevector projected onto the direction of motion j. In cases of interest, η_j ≪ 1 and we make the Lamb-Dicke approximation.
All this results in the interaction Hamiltonian

Ĥ_int = (ℏΩ/2) σ̂_+ e^(−i∆_L t) [1 + i Σ_j η_j (â_j e^(−iω_j t) + â_j† e^(iω_j t))] + h.c.
Here, we introduced the detuning of the laser, ∆_L = ω_L − ω_0. Three options are of interest, commonly called carrier (∆_L = 0), red sideband (∆_L = −ω_j) and blue sideband (∆_L = +ω_j) transitions. When we set ∆_L = 0, we simply drive the transition |0⟩ ↔ |1⟩, while the motional-state-changing terms ∝ η_j are not resonant and can be discarded as fast rotating terms. It turns out that when we omit the Lamb-Dicke approximation, there are resonant terms coupling to the ion motion also on the carrier. These originate from terms such as ∝ η_j² â_j†â_j and ∝ η_j⁴ â_j†â_j â_j†â_j in the Lamb-Dicke expansion. The Rabi frequency can be generalised to Ω_(n_j) = Ω e^(−η_j²/2) L_(n_j)(η_j²), with n_j the motional quantum number of harmonic oscillator j and L_(n_j) a Laguerre polynomial. Measuring the coupling strength on the transition |0⟩ ↔ |1⟩ thus allows one to obtain information about the states of (thermal) motion. A much-employed technique is to vary the duration of the laser pulse such that Rabi flops are recorded. Since the Rabi frequency depends on the populations n_j in each run, the recorded flops will dephase at a rate that gives information about the statistical spread p(n_j). Note that the laser direction may be aligned such that it only couples to a single direction of motion, and each distribution of n_j may be interrogated separately. In this case, the Rabi frequency may be approximated as Ω_(n_j) ≈ Ω(1 − η_j² n_j). Obtaining the full energy distribution (and determining whether it is e.g. thermal or some other distribution such as a Tsallis distribution) is possible but relatively time consuming. This is especially true for large average populations n̄_j, since a sum of many frequencies needs to be sampled. When some thermodynamic assumptions are justified, such as equipartition of energy and a thermal energy distribution, p_therm(n_j) = n̄_j^(n_j)/(1 + n̄_j)^(n_j+1), average ion energies ('ion temperatures') can be obtained very efficiently (Feldker et al., 2020; Hirzler et al., 2020a).
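The dephasing of thermally averaged carrier flops is easy to simulate. The sketch below (illustrative Rabi frequency, Lamb-Dicke parameter and mean occupation) averages the flopping signal over a thermal distribution using the approximation Ω_(n_j) ≈ Ω(1 − η_j² n_j); fitting such curves to data is one way to extract n̄_j.

import numpy as np

Omega = 2*np.pi*50e3       # bare Rabi frequency (rad/s), hypothetical
eta, nbar = 0.1, 20        # Lamb-Dicke parameter and mean occupation, assumed
n = np.arange(0, 400)
# Thermal distribution p(n) = nbar^n/(1+nbar)^(n+1), evaluated in log space
p_n = np.exp(n*np.log(nbar) - (n + 1)*np.log(1 + nbar))
t = np.linspace(0, 200e-6, 400)
# Excited-state population averaged over the thermal distribution
P1 = np.array([np.sum(p_n*np.sin(Omega*(1 - eta**2*n)*ti/2)**2) for ti in t])
print(f"P1 after ~10 flop periods: {P1[-1]:.2f} (0.5 means fully dephased)")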
The 'Rabi method' discussed above requires η_j² n̄_j ∼ 1 to obtain a reasonable signal. In the quantum regime of ion-atom interactions, typically on the order of one quantum of motion remains on average. In this case, it can be more efficient to use a blue or red sideband pulse, for which ∆_L = ±ω_j. In this setting the measured direction of motion can be set with the frequency of the laser. The Rabi frequencies on the blue and red sideband can be approximated as Ω_(n_j)^bsb = Ω η_j √(n_j + 1) and Ω_(n_j)^rsb = Ω η_j √n_j.
Note that the latter equals zero for the ground state of motion (i.e. n_j = 0). Once more, the energy distribution may be obtained by varying the pulse width and measuring the probability of each state. Here, the recorded pulse-length scan is often Fourier transformed to obtain the frequency components in it. The sideband method has the benefit that none of the frequencies are commensurate, because of the square root in the Rabi frequency, making it easier to distinguish them. If only the values n̄_j are to be measured, a trick can be used that saves a lot of measurement time. Here, one only records the curvature of the probability of finding the ion in state |1⟩, starting from |0⟩. This curvature scales as ∝ n̄_j η_j² Ω² and can be measured quickly by sampling a handful of small pulse durations ≪ 1/(η_j Ω). Moreover, for small n̄_j, the strengths of the red and blue sideband may be compared in the frequency domain (Turchette et al., 2000), giving direct access to n̄_j.
Another elegant method of obtaining the average kinetic energy of the ion is by loading it into a shallow optical dipole trap. Here the survival probability is a direct probe of the ion energy (Schmidt et al., 2020b; Weckesser et al., 2021b).
Atom bath preparation and detection
In essence, the atom part of an ion-atom experiment concerns a cold cloud of one atomic species, trapped and cooled by laser light and observed by absorption imaging within an ultra-high vacuum setup. Therefore, the main elements of the atom side of the setup (see Fig. 3) are optical beams, magnetic field coils and cameras for absorption imaging. In hybrid ion-atom systems, alkali atoms (e.g. Li, Rb or Na) and alkaline-earth atoms such as Ca are commonly used. They are cooled to the s-wave regime, where the quantum mechanical underpinning of atom-atom interactions comes into play. The atoms interact with each other through the van der Waals interaction, which stems from an induced electric dipole moment. This second-order electric dipole-dipole interaction is short-ranged and isotropic. Beyond alkali atoms, for elements like erbium, dysprosium and chromium, which have a magnetic dipole moment, the dipole-dipole interaction also plays a role. Furthermore, ultracold gases offer an unprecedented control of the intra-species interaction strength via magnetically tunable Feshbach resonances (FRs).
Each measurement begins with the creation of the gas by cooling and trapping the atoms from an atomic beam and ends with the destructive imaging of the cloud of atoms. The techniques behind this make-probe-discard cycle are well established, as are the requirements for the experimental setup. The atomic isotopes selected determine the details of the optical and magnetic fields needed as well as the cooling and trapping cycle required to prepare the mixture at a specific temperature and density. Additionally, a radio-frequency (rf) antenna can be implemented to change the spin states of the atoms, create spin-polarized mixtures and do both frequency- as well as time-domain spectroscopy.
A quantum gas is a metastable state of matter and requires an ultra-high vacuum setup. It requires a wall-free trap for confinement, most commonly a magneto-optical, magnetic or optical trap. These traps prevent the nucleation on surfaces which triggers the phase transition of the gas into a solid. The lifetime of the gas in the trap is determined by two- and three-body loss processes. Through collisions the particles can gain enough kinetic energy to leave the trap, and this gives an upper bound on the densities that quantum gases can have. The rate of three-body recombination, where three atoms collide and form a bound molecule and a free atom that carries away the binding energy, scales as the density cubed. It is this inelastic loss process that drives the transition towards chemical equilibrium. A second constraint on the density comes from the elastic scattering between the particles, which enables the gas to rethermalize and reach a kinetic equilibrium. If the density is too low, collisions between particles take a long time to occur and thermalization might not happen within the lifetime of the gas. These constraints lead to the typical densities of about 10¹⁶ − 10¹⁹ m⁻³ most commonly seen in experiments with ultracold gases.
Degeneracy and interactions
When discussing ultracold interacting gases, three length scales matter. These are the interparticle distance d between the atoms, the thermal de Broglie wavelength λ_dB and the range r_0 of the atom-atom interaction. For alkali atoms at low temperatures, r_0 is given by the s-wave scattering length parameter a. The de Broglie wavelength changes with the atom gas temperature T_a according to its definition λ_dB = √(2πℏ²/(m k_B T_a)), where ℏ is the reduced Planck constant, m the mass of the particle and k_B the Boltzmann constant. It characterizes the wave nature of particles in the context of the wave-particle duality. For high temperatures, in the regime of λ_dB ≪ r_0 ≪ d, the system is described as a classical gas with only pairwise interactions. For colder temperatures, when the s-wave scattering picture applies, the collisions between particles are affected by quantum mechanics once a ≪ λ_dB ≪ d, and collisions require an explicit quantum mechanical treatment. Therefore, the s-wave regime can be reached once λ_dB > a, and the s-wave temperature for atom-atom interactions is given by 2πℏ²/(m k_B a²). For lithium this is about 50 mK.
Quantum degeneracy is obtained for a ≪ d < λ_dB, and the system can then be described as a weakly interacting Bose-Einstein condensate (BEC) or a degenerate Fermi gas (DFG) within the mean-field approximation. Here, the waves that describe each particle overlap and interfere, and the distinction between individual waves is lost. The gas becomes degenerate. In terms of phase-space density this happens for d⁻³ λ_dB³ > 1. Depending on the density, for lithium this regime is entered at temperatures of a few hundred nanokelvin.
When the particles are indistinguishable, the distinction between bosons or fermions is important. The wavefunction of a many-body system of bosons is symmetric under the exchange of particles, while that of fermions is antisymmetric. For fermions, this results in Pauli's exclusion principle, where no two identical fermions can occupy the same quantum state. Thus a system of N identical fermions will occupy N different quantum states. For temperatures close to zero, these fermions fill the energy levels up from the lowest level to the Fermi energy. The Fermi energy is the energy of the highest filled quantum state, which depends on the number of fermions in the system and benchmarks the DFG. On the contrary, all atoms in a system of N identical bosons can occupy the same quantum state. Moreover, the occupation of the same state is actually favored. When N bosons occupy the same state, the probability to get an additional boson in that state is enhanced by a factor of (N + 1). Thus, for temperatures close to zero a macroscopic occupation of a single quantum state occurs and a BEC is formed.
By tuning the interactions in an ultracold quantum gas, one can also reach the strongly interacting regime, where d < a < λ_dB. Here, the description of the degenerate many-body system as a single macroscopic wavefunction fails and rich physics and complex quantum phases can be expected. At the typical densities (n = d⁻³) of an ultracold gas, the temperatures required to obtain degeneracy are around a few hundred nanokelvin. Of course, increasing the density of the gas would put fewer constraints on the temperature, but this shortens the lifetime of the gas and thus the measurement time of an experiment.
Feshbach resonances provide the tunability of interactions for which ultracold gases are famous (Chin et al., 2010). A FR occurs when a molecular bound state of almost zero energy couples resonantly to the free state of two colliding atoms. (Depending on the channels involved, these FRs can be attributed to different partial waves and are called s-, p-, etc. wave FRs. For ultracold temperatures, the s-wave FRs are the most important; p-wave resonances are especially interesting for spin-polarized DFGs, as the Pauli principle prevents the atoms from interacting via s-wave scattering.) The difference in the magnetic moment between those two states can be used to tune the states in and out of resonance by changing the magnetic field. Close to the FR, the scattering length as a function of magnetic field is given by a(B) = a_bg(1 − ∆/(B − B_0)), with a_bg the background scattering length and ∆ the FR width. At the Feshbach resonance center B_0 the scattering length diverges. Due to a FR, the interaction between atoms can be tuned from weak to strong and from attractive to repulsive. The lifetime of a gas with strong interactions is limited by three-body recombination, and often atom loss measurements are used to characterize FRs. Feshbach resonances can occur between atoms in the same spin state, in different spin states and between spin states of different elements. On the repulsive side of a FR, a weakly bound state exists and this allows the creation of weakly bound pairs (Köhler et al., 2006; Ferlaino et al., 2009). Close to B_0, the binding energy of the dimer follows E_b = ℏ²/(2µa²), with µ = m_a/2 for a single-species atom bath.
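Both relations are simple to evaluate. The sketch below uses hypothetical resonance parameters, chosen only to illustrate the divergence of a(B) and the universal dimer binding energy; they do not correspond to a real resonance.

import numpy as np

hbar = 1.054571817e-34; u = 1.66053906660e-27
a0 = 5.29177210903e-11; h = 6.62607015e-34

# Scattering length near a magnetic FR: a(B) = a_bg*(1 - Delta/(B - B0)).
def a_of_B(B, a_bg, B0, Delta):
    return a_bg*(1 - Delta/(B - B0))

a_bg, B0, Delta = 100*a0, 500e-4, 10e-4   # (m, T, T), hypothetical values
B = 495e-4                                 # tuned to the repulsive side
a = a_of_B(B, a_bg, B0, Delta)
print(f"a({B*1e4:.0f} G) = {a/a0:.0f} a0")

# Universal dimer binding energy close to B0 (single-species bath, 6Li mass)
mu = 6*u/2
Eb = hbar**2/(2*mu*a**2)
print(f"E_b/h = {Eb/h/1e6:.2f} MHz")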
Laser cooling
To create an ultracold gas, a hot gas is formed and cooled down by laser cooling (Metcalf and van der Straten, 1999; Schreck and Druten, 2021). The atomic beam exiting the oven is slowed down either via a Zeeman slower or via a 2D-MOT stage, both relying on light scattering. The Zeeman slower uses a counterpropagating laser beam, and the magnetic field changes along the atomic pathway to compensate for the Doppler shift and keep the atoms resonant using the Zeeman effect. A 2D-MOT is similar to the typical magneto-optical trap (MOT); however, in one direction there is the possibility for the atoms to escape, which can be facilitated by using a pushing beam. It depends on the species used which of the two methods is favorable. This first stage typically slows the atoms down to a few tens of metres per second.
The atoms are subsequently caught in a magneto-optical trap and further Doppler cooled. The MOT is a combination of counterpropagating red-detuned optical beams from all three directions and a magnetic field gradient. Together they provide a trapping environment for the atoms, in which those travelling away from the trap center are brought back through light scattering.
As the lasers are tuned to a frequency below the transition frequency, high-velocity atoms are more likely to absorb photons from the optical beams counterpropagating to the atoms' direction of movement, which leads to both cooling and trapping. The temperatures of a MOT are typically limited by the Doppler temperature T_D = ℏγ/(2k_B), which depends on the linewidth γ of the optical transition used. For laser cooling of alkali atoms, this is most commonly the D2 transition (S_1/2 → P_3/2). The number of atoms in the MOT depends on the loading time, during which the atoms are captured from the atomic beam, and is limited by the loss of atoms due to photoassociation.
Although several sub-Doppler techniques exist, the most common next step is to load an off-resonant optical dipole trap (Grimm et al., 2000) or magnetic trap (Pérez-Ríos and Sanz, 2013) and perform evaporative cooling to further decrease the temperature. The ultimate limit for laser cooling is set by the recoil temperature T_r = h²/(2 k_B m λ²), which is 3.5 µK for lithium.
Optical trapping
In a far-off-resonant single-beam optical trap (Grimm et al., 2000), the atoms experience a trapping potential which depends on their polarizability α and on the waist w_0 and power P of the trapping laser of wavelength λ. The polarizability depends on the wavelength used for optical trapping and is typically calculated from theory and verified experimentally. Assuming a gaussian beam, the potential has a trap depth of

U_opt = −(1/(2ε_0 c)) Re(α) (2P/(π w_0²)),

with ε_0 the vacuum permittivity and c the speed of light. This leads to a trapping frequency in the radial direction of ω_r = (2/w_0) √(U_opt/m_a) and axially of ω_ax = √(2U_opt λ²/(m_a π² w_0⁴)). Most commonly a crossed-beam setup is used, such that the confinement in the axial direction can be increased. The optical trap can be fully characterized by measuring its trap frequencies. Besides the optical potential, the scattering rate of the dipole trap is also important. This is the rate at which the trapping light is absorbed by the atoms and re-emitted. The scattering rate for a gaussian beam is given by

Γ_sc = (1/(ℏε_0 c)) Im(α) (2P/(π w_0²)).

To keep the scattering rate as low as possible, traps that are far off resonance from optical transitions within the atom are typically used.
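For a feeling of the scales involved, the trap depth and frequencies can be computed from the beam parameters. A minimal sketch for ⁶Li in a 1070 nm beam; the polarizability value is a rough assumption, and the power and waist are illustrative.

import numpy as np

eps0 = 8.8541878128e-12; c = 2.99792458e8
kB = 1.380649e-23; u = 1.66053906660e-27
a0 = 5.29177210903e-11

# Re(alpha) in SI units, assuming ~270 a.u. for Li at 1070 nm
alpha = 270*4*np.pi*eps0*a0**3
P, w0, lam, m = 5.0, 30e-6, 1070e-9, 6*u   # illustrative beam parameters

I0 = 2*P/(np.pi*w0**2)                # peak intensity of the gaussian beam
U = alpha*I0/(2*eps0*c)               # trap depth (J); U_opt = -U
w_r = np.sqrt(4*U/(m*w0**2))          # radial trap frequency (rad/s)
w_ax = np.sqrt(2*U*lam**2/(m*np.pi**2*w0**4))
print(f"U/kB = {U/kB*1e6:.0f} uK")
print(f"f_r = {w_r/2/np.pi/1e3:.1f} kHz, f_ax = {w_ax/2/np.pi:.1f} Hz")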
Evaporative cooling (Ketterle and van Druten, 1996) takes place by lowering the power of the optical dipole trap, which reduces the trap depth $U_{opt}(0)$. It relies on removing energetic particles from the trap and letting the remaining particles rethermalize. The gas then acquires a lower temperature and a higher phase-space density. The limit of evaporative cooling is given by the ratio of inelastic to elastic collisions. Elastic collisions are needed for thermalisation, while inelastic collisions typically shorten the lifetime of the sample. The trap depth chosen to truncate an atom cloud of a certain temperature is given by the truncation parameter $\eta = U_{opt}(0)/(k_B T_a)$, which is typically kept constant and around 10. Typically a decrease of three orders of magnitude in atom number leads to an increase in phase-space density of six orders of magnitude. Evaporative cooling is therefore a powerful tool to reach quantum degeneracy and obtain ultracold samples. For a deep enough trap, the optical dipole trap seen by the atoms can be treated as a harmonic trap with trap frequencies $\omega_i^a$ for all three directions $i = x, y, z$. The density distribution in the trap is non-uniform, as the trapping potential is not a box potential (box potentials, which lead to a homogeneous density, are being explored, though; Novan et al., 2012). The density of a thermal cloud is then given by
$$n_t(\mathbf{r}) = \frac{N}{L_x L_y L_z}\exp\left(-\sum_i \frac{r_i^2}{2\sigma_i^2}\right),$$
with $L_i = \left(\frac{2\pi k_B T}{m(\omega_i^a)^2}\right)^{1/2}$, which is related to the width of the cloud by $\sigma_i = \left(\frac{k_B T}{m(\omega_i^a)^2}\right)^{1/2}$. The density depends on the mass of the atom, the total atom number $N$, and the temperature. The thermal peak density $\hat{n}_t$ in a harmonic trap is given by $\hat{n}_t = N\left(\frac{m(\bar{\omega}^a)^2}{2\pi k_B T}\right)^{3/2}$. Here, $\bar{\omega}^a$ is the geometric average of the trap frequencies and can be calculated as $\bar{\omega}^a = (\omega_x^a \omega_y^a \omega_z^a)^{1/3}$ for a cigar-shaped cylindrical trap. The mean density is given by $\bar{n}_t = \frac{1}{\sqrt{8}}\hat{n}_t$.
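A minimal evaluation of the peak- and mean-density expressions above, with placeholder values for atom number, temperature, and trap frequencies:

```python
import numpy as np
from scipy.constants import k as kB, u, pi

N = 1e5                                            # atom number (placeholder)
T = 2e-6                                           # temperature (K)
m = 6.015 * u                                      # 6Li mass (kg)
omega = 2 * pi * np.array([1.0e3, 1.0e3, 0.1e3])   # trap frequencies (rad/s)

omega_bar = omega.prod() ** (1 / 3)                # geometric mean
n_peak = N * (m * omega_bar**2 / (2 * pi * kB * T)) ** 1.5
n_mean = n_peak / np.sqrt(8)

print(f"peak density = {n_peak:.2e} m^-3")
print(f"mean density = {n_mean:.2e} m^-3")
```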
For a single-component Fermi gas in a harmonic trap and for a Bose-Einstein condensate, analogous density expressions hold, with $a_{bb}$ the Bose-Bose intraspecies scattering length entering in the condensate case. Usually the cold atoms are created far away from the ions in order not to contaminate the ion-trap electrodes, and a form of transport is needed to overlap the atoms with the ions. Typically this is done by optical transport, either by changing the focus of the optical trap or by using piezo mirrors to change the beam pointing.
Absorption imaging
The most important experimental parameters of the bath are the atom number, density, and temperature, which are commonly obtained by absorption imaging. An absorption image is taken by shining light resonant with an optical transition on the atoms for a fixed duration. The atoms absorb the light, which depletes the intensity of the probe beam according to the optical density OD of the cloud at each spatial position. Subtracting images of the recorded light intensities with and without the atoms present gives the absorption image and information on the optical density of the cloud. For low intensities, resonant light, and a closed optical transition, the optical density is related to the column density $n(x, y)$ by the resonant cross section $\sigma$, i.e. $OD = n(x, y)\sigma$. In this example, the imaging happens along the z-direction. The absorption images are then directly related to the column density of the atomic cloud, and by integrating along one direction a 1D density distribution is obtained. This density profile can then be fitted with the appropriate distribution function (Inguscio et al., 2008; Ketterle et al., 1999), which gives the atom number and width of the cloud. The 1D density profile of a thermal cloud is fitted with a Gaussian distribution function, for a BEC a parabola or bimodal distribution is used, and a DFG requires a polylogarithmic function. Integrating a second time yields the total atom number.
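A minimal sketch of this analysis chain is given below: it builds the OD from a pair of (here synthetic) camera frames, converts it to a column density with the standard resonant cross section $\sigma_0 = 3\lambda^2/(2\pi)$ for a closed transition, integrates to a 1D profile, and fits a Gaussian. All image data and numbers are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def optical_density(img_atoms, img_probe):
    """OD = -ln(I_atoms / I_probe), clipped to avoid log(0)."""
    return -np.log(np.clip(img_atoms / img_probe, 1e-6, None))

def gaussian(x, n0, x0, sigma, offset):
    return n0 * np.exp(-((x - x0) ** 2) / (2 * sigma**2)) + offset

# Synthetic stand-ins for the two camera frames (with and without atoms)
x = np.linspace(-1e-3, 1e-3, 200)            # pixel positions (m)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
img_probe = np.full_like(X, 1000.0)          # probe frame (counts)
od_true = 1.5 * np.exp(-(X**2 + Y**2) / (2 * (100e-6) ** 2))
img_atoms = img_probe * np.exp(-od_true)     # atom frame (counts)

sigma0 = 3 * (671e-9) ** 2 / (2 * np.pi)     # resonant cross section 3*lam^2/(2*pi)
n_col = optical_density(img_atoms, img_probe) / sigma0  # column density n(x, y)
profile = n_col.sum(axis=0) * dx             # integrate along y -> 1D profile

popt, _ = curve_fit(gaussian, x, profile, p0=[profile.max(), 0, 100e-6, 0])
atom_number = profile.sum() * dx             # integrate a second time -> N
print(f"fitted width = {popt[2] * 1e6:.1f} um, N = {atom_number:.2e}")
```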
To obtain the temperature, a time-of-flight measurement is taken. Here the atomic cloud is imaged at various times after it is released from the optical trap. The expansion of the cloud in the absence of any confining potential is related to its temperature. From the time-of-flight curves, assuming free expansion, the temperatures $T_x$ and $T_y$ as well as the in-situ widths of the cloud, $\sigma_x^0$ and $\sigma_y^0$, in both the x- and y-direction can be obtained. This follows from fitting the expansion of the width of the cloud as a function of expansion time $t$ using $\sigma_i(t) = \sqrt{(\sigma_i^0)^2 + k_B T_i t^2/m}$. For determining the width, it is important to measure the magnification of the camera. The density of the cloud can also be inferred from the time-of-flight measurements as $n = N/V = N/\left((2\pi)^{3/2}\sigma_x^0\sigma_y^0\sigma_z^0\right)$. Comparing this to Eq. 16, the trap frequencies can also be obtained as $\sigma_\alpha^0 = \left(\frac{k_B T}{m\omega_\alpha^2}\right)^{1/2}$. They can also be measured independently by exciting and measuring the frequency of the collective modes in the trap (Grimm, 2008).
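The time-of-flight fit itself is a one-liner once the expansion law above is written down; the sketch below fits synthetic width data to extract $T$ and $\sigma^0$, with all numbers being placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import k as kB, u

m = 6.015 * u  # 6Li mass (kg)

def tof_width(t, sigma0, T):
    """Free expansion of a thermal cloud released from a harmonic trap."""
    return np.sqrt(sigma0**2 + kB * T * t**2 / m)

# Synthetic width measurements: sigma0 = 50 um, T = 5 uK, 2% noise
rng = np.random.default_rng(0)
t = np.linspace(0, 2e-3, 10)   # expansion times (s)
sigma_meas = tof_width(t, 50e-6, 5e-6) * (1 + 0.02 * rng.standard_normal(t.size))

(sigma0_fit, T_fit), _ = curve_fit(tof_width, t, sigma_meas, p0=[40e-6, 1e-6])
print(f"sigma0 = {sigma0_fit * 1e6:.1f} um, T = {T_fit * 1e6:.2f} uK")
```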
Ion-atom mixtures
Several ion-atom experiments currently exist that look into the interplay between an ion and an ultracold (∼ µK) cloud of atoms. See Table 1 for an overview, including their latest experimental results. Here, we focus on the setups with ultracold atoms beyond the MOT phase, where the atom cloud is already in the regime where only the s-wave scattering length characterizes the atom-atom interactions. Most setups prepare the atoms away from the ion and then move the atoms on top of the ion or vice versa, e.g. (Hirzler et al., 2020a; Perego et al., 2020; Meir et al., 2017a; Schmid et al., 2012). The transport can be done optically, by changing the position of the atoms' optical dipole trap, or by using compensation electric fields to displace the ion. The separate preparation requires either a two-chamber or two-stage design of the experimental setup, which makes it easier to have good optical access facilitating large beams for the atomic MOT stage. While the ion can be reinitialized, the atom bath is usually discarded and reloaded for each new measurement.
In ion-atom mixtures, the ion is typically trapped in a radio-frequency Paul trap (Paul, 1990), which relies on a time-dependent potential (see Sec. 3.1.1).

Table 1: Overview of experimental ion-atom mixtures (columns: Ion, Atom) with ultracold atoms and their characteristics. Given are the ion-atom mass ratio $\xi$, the $C_4$ coefficient with definition $V_{ia}(r) = -C_4/r^4$, the s-wave energy $E_s$, and the most up-to-date experimental reference. The ion can be trapped either in a Paul trap (PT), in an ion-only optical dipole trap (IDT), or together with the atoms in a bi-chromatic optical dipole trap (Bi-ODT). The ion can also be unconfined, which is typical for creation via the ionisation of a Rydberg excitation (Ry). Moreover, the ion can be created from the atom bath by three-body recombination (TBR) and subsequent chemical reactions.

However, ions by themselves can also be trapped in a multipole trap (Wester, 2009) or in an optical dipole trap (Schaetz, 2017; Karpa, 2019). Combining any of these traps with an ultracold cloud of atoms leads to some difficulties. For multipole traps, reaching very low ion-atom collision energies is in principle easier compared to a Paul trap, as the ion can be confined in a region with little micromotion. On the other hand, this makes the ion more prone to stray electric fields that cause excess micromotion (Niranjan et al., 2021). Multipole traps have been used for studying ion-atom collisions and buffer gas cooling at K-mK temperatures (Wester, 2009; Asvany and Schlemmer, 2009; Nötzold et al., 2020). They allow the exploration of ion-atom systems beyond the critical mass ratio $\xi = m_a/m_i$ of about 1 (Höltkemeier et al., 2016). Optical trapping of the ion requires relatively high optical powers to create a trap deep enough to catch the ion, which is typically laser-cooled to the Doppler limit (ground-state cooling of an ion in an optical lattice is feasible and was demonstrated by Karpa et al. (2013)). The optical trap depth for the ion is given by the optical potential and is reduced by stray electric fields and static defocusing. Moreover, the atoms also react to this optical potential, which is typically very deep compared to what is normally used for optical atom trapping. This causes a very tight confinement of the cloud, where the high density leads to enhanced three-body losses as well as a high heating rate of the atoms by the ion's trapping light. Precooling the ion to sub-Doppler temperatures would reduce the optical potential needed and thus the effect on the atoms. Additionally, a second light field with an anti-confining frequency can be used to compensate for the detrimental effects of the ion's optical trap. This bi-chromatic trap creates a milder potential for the atoms, while still deeply trapping the ion (Karpa, 2021). For the Ba+-Rb mixture, a bi-chromatic optical trap of 532 and 1064 nm was demonstrated in which both atoms and ions could be confined (Schmidt et al., 2020b). Here, the 532 nm light is actually anti-trapping for the atoms, and this is compensated by the 1064 nm light. In Ba+-Li, an ion-only optical trap at 532 nm was used to capture the ion after precooling in the Paul trap and by the atom buffer gas. Both setups still require precooling of the ion in the Paul trap, to reduce its energy to trappable regimes. Developments for an electro-optical trap, which combines optical trapping with an electrostatic field (Karpa et al., 2013; Karpa, 2019), are also under way (Perego et al., 2020). Alternatively, an ion can be obtained following three-body recombination of the atoms into a molecular ion, which subsequently photo-ionizes and dissociates, creating a single ion, as was demonstrated for Rb.
Furthermore, ion-atom mixtures can be created by exciting atoms to a Rydberg state and ionizing the Rydberg atom. These ions are formed from the ultracold bath and lead to homonuclear ion-atom mixtures. Such ions could in principle be trapped as well, but at the moment they are studied as free ions moving through the cloud. Their interaction with the bath can be probed for durations of tens of microseconds and is limited by three-body recombination. For the Rb+-Rb mixture, an ion with an energy of about $k_B \times 50$ µK was created with a lifetime of ∼ 20 µs (Dieterle et al., 2021). The Rydberg blockade ensures that only a single ionic impurity is created (Balewski et al., 2013; Kleinbach et al., 2018). Rydberg atoms can also be coupled to either free or trapped ions (Ewald et al., 2019; Haze et al., 2019; Deiß et al., 2021; Zuber et al., 2021). These Rydberg atoms have a much larger polarizability and provide a strong interaction with the ion. Moreover, a trapped ion itself can be excited to a Rydberg state (Mokhberi et al., 2020); however, its influence on the interaction with the bath has not yet been studied.
Reaching the quantum regime is easier for ion-atom combinations with a heavy ion and light atoms. There, $\xi \gg 1$ and the reduced mass $\mu \approx m_a$. This leads to s-wave energies in the µK regime, as $E_s = \hbar^4/(4\mu^2 C_4)$. This energy needs to be compared to the collision energy, which also depends on the atom and ion masses as well as their energies (see Eq. 2). Ultracold atoms can be routinely cooled to the few-µK regime, whereas ions after Doppler cooling are typically at several hundreds of µK. Therefore, reducing the collision energy is most profitable when cooling the ion further, either by buffer gas cooling or by using sub-Doppler cooling techniques such as resolved-sideband or EIT cooling (Diedrich et al., 1989; Roos et al., 2000). The mass ratio plays a role not only for cooling, but also in the energy distribution of the ion. For a heavy ion with a light atom, the shape of the distribution remains largely unaffected. However, for equal masses a non-thermal Tsallis distribution was seen, and the mass ratio was predicted to influence the exact nature of this non-thermal energy distribution that appears after multiple collisions (Rouse and Willitsch, 2019; Pinkas et al., 2020). Large ion-to-atom mass ratios result in energy distributions that are indistinguishable from a thermal distribution (Feldker et al., 2020).
For ions in a Paul trap, the rf-induced heating caused by the coupling of the micromotion to the attractive ion-atom potential is a limiting factor. This heating sets a lower bound on the ion temperatures that can be reached and thus on the attainability of the quantum regime. For one dimension, the ion heating in a single collision between an ion and an atom at rest was derived by Cetina et al. (2012), which translates into a 3D heating energy $W_0^{3D}$. This expression has been confirmed by numerical simulations (Cetina et al., 2012; Pinkas et al., 2020). We can use it to compare the prospects of various ion-atom combinations in ideal Paul traps.
We can equate $W_0^{3D}$ with the s-wave energy $E_s = \hbar^4/(4\mu^2 C_4)$ to find out what restrictions we have in choosing $\Omega$ and $q$ such that the energy released in the first collision between a ground-state cooled ion and an ultracold atom remains in the s-wave regime. Setting $q = 0.2$ and working in the limit $|q_j^2| \gg |a_j|$, such that $\omega_j \approx |q_j|\Omega_{rf}/\sqrt{8}$, we show the results in Table 2. We see that combinations with a large ion-to-atom mass ratio allow for using Paul traps with $\Omega_{rf}/(2\pi) \lesssim 20$ MHz. On the other hand, smaller mass ratios require smaller drive frequencies, with already 23Na demanding $\Omega/(2\pi) \lesssim 1$ MHz. For such small drive frequencies, radial excess micromotion is expected to be the dominant limiting factor. Equating the s-wave limit with the contribution of the ion's micromotion energy to the collision energy allows us to estimate the requirements on static offset-field compensation. The collision energy (Eq. 2) of an ion in a bath of atoms in the limit $T_a \to 0$ is given by $(\mu/m_i)E_i$. This means that the ion can have a maximal energy of $E_i^{max} = (m_i/\mu)E_s$ in order for the ion-atom collision to be in the quantum regime, i.e. $E_{col} < E_s$. Using Eq. 9, we can further calculate the maximal radial offset field that can be present in the setup. Both values are listed in Table 2 for various ion-atom combinations. For Li and 23Na combined with heavy ions these values are experimentally challenging but within reach, while alkali atoms heavier than 23Na require micromotion compensation at a forbidding level and seem out of reach. Note that the present discussion only includes the first collision and cannot be used to predict the thermalization of the ion in a gas, or indeed whether it thermalizes at all.

Table 3: Heating and the final energy for realistic ion-atom systems (columns: Ion, Atom) in an rf Paul trap, with the static and dynamic stability parameters $a$, $q$ and trap drive frequency $\Omega$ taken from the literature. The thermal equilibrium energy of the ion is obtained from numerical simulations using the code of Trimby et al. (2022), taking into account a radial stray field of 0.05 V/m and an atom bath of 2 µK.
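The sketch below evaluates $E_s = \hbar^4/(4\mu^2 C_4)$ and the corresponding maximal ion energy $(m_i/\mu)E_s$ for two example pairs. The $C_4$ coefficients are assumed here from the usual $C_4 = \alpha/2$ conversion of literature atomic polarizabilities (in atomic units) and should be replaced by the Table 1 values; for Yb+/Li this reproduces the $k_B \times 8.6$ µK quoted in Sec. 4.1.

```python
from scipy.constants import hbar, k as kB, u

E_h = 4.3597e-18      # Hartree energy (J)
a0 = 5.2918e-11       # Bohr radius (m)

def s_wave_energy(m_ion_u, m_atom_u, C4_au):
    m_i, m_a = m_ion_u * u, m_atom_u * u
    mu = m_i * m_a / (m_i + m_a)
    C4 = C4_au * E_h * a0**4          # convert C4 from atomic units to J m^4
    E_s = hbar**4 / (4 * mu**2 * C4)  # s-wave energy
    return E_s, (m_i / mu) * E_s      # and maximal ion energy

# C4 = alpha/2 with alpha_Li ~ 164 a.u., alpha_Rb ~ 320 a.u. (assumed values)
for label, mi, ma, C4 in [("Yb+/6Li", 171, 6, 82.1), ("Ba+/87Rb", 138, 87, 160.0)]:
    E_s, E_max = s_wave_energy(mi, ma, C4)
    print(f"{label}: E_s = kB x {E_s/kB*1e6:.2f} uK, "
          f"E_i^max = kB x {E_max/kB*1e6:.1f} uK")
```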
Apart from heating the ion, atom baths also buffer-gas cool the ion, as their temperature is typically 2-3 orders of magnitude lower than that of the ion (see Sec. 4.1). The competition between heating and cooling determines the final temperature reachable in ion-atom mixtures. Using numerical simulations based on the code published by Trimby et al. (2022), we now proceed to obtain this equilibrium temperature of the ion for the various ion-atom systems discussed, in the limit of many collisions. We do this for a realistic experimental scenario: we assume an atom bath of 2 µK, the ion initially at rest, and a radial stray field of 0.05 V/m. The latter reflects the stray-field compensation achievable in ion-atom experiments with Paul traps. After several thousands of collisions of the ion with the atoms, we extract the final secular ion temperature given the realistic values of $q$ and $\Omega_{rf}$. The total equilibrium kinetic energy of the ion is then obtained using $E_\infty^{ion} = \frac{5}{2}k_B T_{sec}$ and is shown in Table 3. Here, we assume all three directions to contribute equally to the kinetic energy and the intrinsic micromotion to add kinetic energy on the order of $k_B T$ to the ion (Berkeland et al., 1998). The mixtures with a heavy ion and a light atom show the lowest final energies. By comparing the two Ba+-Rb setups, the influence of the trap parameters becomes clear.
Experimental Results
The recent experiments with hybrid ion-atom systems focus especially on cooling to the quantum regime, understanding the interactions and losses of the system, and observing how the ion behaves in a many-body environment of atoms. The colder the system, the more likely it is that quantum effects play a role. This requires the interactions to be treated quantum mechanically and sheds light on what happens beyond the typical Langevin picture of collisions. Here we discuss these recent experiments and the results that were obtained.
Buffer gas cooling
The idea of cooling one system by immersing it into another system which is much colder is a general concept. This buffer gas cooling or sympathetic cooling relies on the energy exchange between the hot system and the coolant. In ion-atom systems this method is used to reduce the energy of the typically much hotter ion by the atom bath. As it purely relies on collisions between the two components, it is a very versatile technique applicable to many atomic and ionic species. Unlike laser-cooling or magnetic evaporation, it does not require the atom or ion to have a particular resonant transition or dipole moment. Atom baths can be readily prepared at sub-Doppler temperatures in the 10-0.1 µK regime, while the ion's temperature is in the 10 − 0.1 mK regime. Buffer gas cooling was also proposed as an application in trapped ion quantum computing (Daley et al., 2004), where the superfluid coolant, e.g. a Bose-Einstein condensate, protects the ion-qubit. Besides reducing the ion's energy, buffer gas cooling is the way to reduce the collision energy and reach the quantum regime.
To determine the effect of buffer gas cooling, ion thermometry is used. Depending on the temperature regime, different methods are applied. For instance, in a first experiment with a 174Yb+ ion in a Rb BEC, the technique of Doppler recooling (Wesenberg et al., 2007) showed that atomic baths can cool the ion to below the K regime (Zipkes et al., 2010). With the same technique, in the 10 K to mK regime, buffer gas cooling in ion-atom systems was demonstrated with Ca+ (Haze et al., 2018) by a Li bath of about 4.5 µK. For detecting lower temperatures, thermometry using a narrow-line transition can be applied. For instance, carrier Rabi spectroscopy was used to detect the heating of a ground-state cooled 88Sr+ ion in a bath of 5 µK 87Rb atoms, below the mK regime. For this equal-mass ion-atom mixture, the ion's energy distribution was demonstrated to evolve, by interacting with the atom cloud, from a thermal (Maxwell-Boltzmann) to a power-law distribution best described by the Tsallis function. This is caused by the interplay between the rf fields and the ion-atom interaction, which depends on the mass ratio (Rouse and Willitsch, 2019). Alternatively, the ion can be captured in a shallow optical potential, which sets an upper bound on the energy of the ion (Schneider et al., 2010; Weckesser et al., 2021b).
Recently, for Ba+-Rb in optical traps, buffer gas cooling was measured to reduce the ion energy by 100 µK in a single collision (Schmidt et al., 2020b). This was shown using a bi-chromatic optical trap, which captured both Ba+ and Rb and could cool Ba+ to below its Doppler temperature of 370 µK. About 10 collisions between the ion and the atom cloud would be needed to reach thermal equilibrium. However, extending the measurements to cool the ion further down, to about the temperature of the bath, was not possible due to three-body losses and limits to the experimental stability. Especially the good relative alignment of the two beams of the optical trap and the presence of parasitic ions, triggered by multi-photon ionization and three-body recombination, remain ongoing challenges (Karpa, 2021). Loading the atoms not as an ensemble, but into individual sites of an optical lattice, could be a possible way out.
Moreover, buffer gas cooling to the s-wave regime was observed in the Yb+-Li mixture (Feldker et al., 2020). For cold collisions, one can decompose the total scattering wave function into contributions of different partial waves, i.e. different angular momenta of the rotational motion. For $E_{col} < E_s$, the ion-atom collisions can be called ultracold and are entirely determined by a single partial wave in the incoming collisional channel. At the temperatures of the Yb+-Li mixture, most partial waves are frozen out, and the quantization of the interaction leads to contributions of only the s-wave and p-wave molecular potentials to the collisions. When studying collisions, this system should therefore show deviations from the classical treatment, and indeed, quantum effects in the spin-exchange rate as a function of collision energy were seen (see Section 4.3).
By precise thermometry and taking into account the full energy budget of the ion, a collision energy of $E_{col} = 1.15(\pm 0.23)E_s$ was found after 1 s of interaction with a 2 µK cloud of Li. Yb+-Li has an s-wave energy of $k_B \times 8.6$ µK. The collision energy depends on the atom kinetic energy ($\frac{3}{2}k_B T_a$) and the total kinetic energy of the ion, which is made up of several components. For Yb+-Li, besides the secular temperature in the axial and radial directions, the intrinsic micromotion and excess micromotion in all three directions were determined to deduce the final collision energy. Furthermore, care was taken to compensate the excess micromotion as much as possible. The buffer gas cooling of Yb+ outperformed Doppler cooling by a factor of five. The cooling curve of the ion by a 10 µK 6Li bath with a peak density $n_a = 3.1(\pm 1.5)\times 10^{10}$ cm$^{-3}$ is shown in Figure 3. The secular radial temperature was obtained by measuring the laser excitation on the 411 nm $S_{1/2} \to D_{5/2}$ transition as a function of pulse width. These Rabi oscillations are shown as insets for both the hot and the cold ion. Both the frequency and the damping of the oscillations change with temperature, and by fitting these oscillations to a model assuming a thermal distribution, the mean motional quantum number and subsequently the secular temperature are obtained (Leibfried et al., 1996; Meir et al., 2016).

Figure 3: Buffer gas cooling to the s-wave regime in the Yb+-Li mixture. The data show the measured secular radial temperature of the ion for various ion-atom interaction times. The solid line represents an exponential fit to the data, while dotted (dashed) lines are molecular dynamics simulations without (with) the time dependence of the Paul trap. The insets show the Rabi oscillations on the 411 nm $S_{1/2} \to D_{5/2}$ transition used to determine the temperature. From Feldker et al. (2020).

The ion was cooled from $T_{sec}^{\perp} = 600$ µK, which is close to the Doppler temperature of 0.63 mK, to a final temperature of 98(±11) µK. Within the experimental uncertainties, the presence of the ion was not found to influence the atom temperature or number, as determined by time-of-flight analysis and spin-selective absorption imaging.
The ion temperature equilibrates to a value which is an order of magnitude higher than the temperature of the atom bath. This energy difference is confirmed by molecular dynamics simulations, as shown by the dotted and dashed lines in Figure 3. Taking only the secular description of the Paul trap into account (dotted line), a final temperature equal to that of the atoms is found. However, when including the time dependence (micromotion) of the Paul trap, the realistic excess micromotion of the setup, and the background heating of the ion without the bath, agreement with the data is found, as shown by the dashed line. The remaining mismatch in final temperature may be partially explained by temperature overestimation: it was assumed that the dephasing in the Rabi flops was fully caused by the motion of the ion, while other possible effects such as laser frequency and intensity noise were not considered. Another, more tantalizing explanation may be the onset of quantum effects (Oghittu et al., 2021). Nonetheless, the discrepancy between the ion's and the atoms' final temperatures has its origin mostly in the micromotion-induced heating caused by the ion-atom interaction, which is typical for ion-atom systems in Paul traps (DeVoe, 2009; Zipkes et al., 2011; Chen et al., 2014; Rouse and Willitsch, 2017; Höltkemeier et al., 2016). Using the same type of simulations, one can show that the final ion temperature reachable by buffer gas cooling could be even a factor of two lower through trap-parameter optimization.
Collisions and chemistry
The excellent control and read-out of the ion makes it a good reaction center with which collisions and chemical reactions can be studied. Collisions can lead to state or momentum changes, whereas chemistry leads to a change in the nature of the components, for instance by combining the reactants into a molecule or exchanging an electron between them. Moreover, because ion-atom systems are ultracold, they facilitate the exploration of chemistry and collisions at temperatures well below the mK regime. There, the classical picture of colliding particles and barriers needs to be replaced by a full quantum mechanical treatment, and measurements are necessary to shed light on the chemistry that goes on there (Heazlewood and Softley, 2021).
Observation of collisions and chemistry relies on the controlled preparation of the reactants and the detection of the energy, state, and/or kind of the products. In ion-atom experiments, the initial states of the atom and ion reactants can be readily prepared, and the final states of the products can be read out through state-selective atom and ion imaging (see Sec. 3). Moreover, the gain or loss in kinetic energy can be measured with ion thermometry or by time-of-flight analysis of the atom cloud. Another method is to look at atom or ion loss from the trap. Particles with a higher energy than the trap depth of the potential leave the trap, and this can be detected as loss, either by measuring the atom number or by observing the loss of ion fluorescence. As ion traps are very deep, the large energy released during an exothermic reaction will commonly not expel the ion from the trap. However, it will give the ion enough kinetic energy to make it off-resonant with the fluorescence light due to the Doppler shift. See Fig. 4a and b for an example fluorescence measurement of a bright and a dark ion.
Loss of fluorescence can also point to the creation of a different ion species that cannot fluoresce at the given wavelength: for instance, a molecular ion is formed, or charge exchange leads to the atom becoming ionized and the ion becoming neutral. As long as the mass of the new product still fulfils the trapping criteria of the ion trap, it will remain trapped after the reaction; however, the optical control and read-out are lost. The outcome of such a reaction can be detected by mass spectrometry (e.g. Härter et al., 2013; Schmidt et al., 2020a), which reveals the mass of the product, or through quantum logic spectroscopy (Schmidt et al., 2005) when co-trapping another ion.
Quantum logic spectroscopy (Wolf et al., 2016; Chou et al., 2017; Sinhal et al., 2020) provides a way to detect product ions of chemical reactions for which the optical control and read-out are inaccessible, either because the setup does not have the relevant lasers available or because the optical detection of this particular product has not yet been demonstrated. The power of this method was shown for isotopes of the Sr+ ion inside a Rb cloud (Katz et al., 2022). Here, 88Sr+ was used as the read-out ion. This logic ion was then co-trapped with another ion, called the chemistry ion, which was either 84Sr+, 86Sr+, 87Sr+, or another 88Sr+; the latter is used for calibration of the measurements. Isotopes can be selectively loaded by adjusting the photoionization frequency and are sympathetically cooled by the logic ion. The loading of the isotopes can be detected via mass spectrometry (Drewsen et al., 2004).
By preparing the Rb atoms in an excited hyperfine state (F = 2), the exothermic hyperfine- and spin-changing reactions between these atoms and various Sr ion isotopes could be studied with quantum logic spectroscopy (Katz et al., 2022). Through an ion-atom collision, a Rb atom can relax to the lower hyperfine state (F = 1), and the energy of the atom decreases by the hyperfine energy, which is about $k_B \times 328$ mK for Rb. Because of conservation of spin, the Sr ion will also change its state and energy, by about $k_B \times 1$ mK. The total internal energy released during the reaction, $\Delta E = \Delta E_a \pm \Delta E_i$, depends on whether the ion gains or loses internal energy. As energy is conserved, the released energy is distributed among the atom and the ion as kinetic energy. The ratio of distribution depends on their masses and, in the center-of-mass frame, the ion takes up $\frac{\mu}{m_i}\Delta E$. This energy gets divided over the six motional modes of the trap and corresponds to about $k_B \times 27$ mK per mode. The increase in the motion of the chemistry ion is sensed by the logic ion, as the two are coupled by the Coulomb force: any motional change in one influences the other. The logic ion can then be readily read out using ion thermometry (see Section 3.1.4); in the Sr+-Rb setup this is done by electron shelving. As a result, the hyperfine-changing collision rate for all isotopes could be determined, and interestingly, the odd isotope showed a rate twice as low. The precise determination of spin-exchange and spin-relaxation rates aids the ab initio calculations of the molecular potentials of intermediate complexes that can form during the reaction of Rb with Sr+ and gives insight into the dynamics of the system.
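The energy bookkeeping in this paragraph can be checked with a few lines; the 87Rb ground-state hyperfine splitting of about 6.835 GHz used below is an assumed literature value, not a number from the text.

```python
from scipy.constants import h, k as kB, u

nu_hf = 6.8347e9             # 87Rb ground-state hyperfine splitting (Hz), assumed
dE_atom = h * nu_hf          # energy released by the atom (J)

m_i, m_a = 88 * u, 87 * u    # 88Sr+ and 87Rb masses (kg)
mu = m_i * m_a / (m_i + m_a) # reduced mass

E_ion = (mu / m_i) * dE_atom # ion share in the center-of-mass frame
print(f"Delta E_a = kB x {dE_atom/kB*1e3:.0f} mK")     # ~328 mK
print(f"ion share = kB x {E_ion/kB*1e3:.0f} mK "
      f"-> {E_ion/kB*1e3/6:.0f} mK per motional mode")  # ~27 mK per mode
```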
Molecular ions
In ion-atom systems, molecular ions can be formed through spontaneous radiative association, photoassociation, and magnetoassociation, as well as through three-body recombination (Mohammadi et al., 2021) or chemical reactions between an ion and Feshbach dimers. Typically these molecular ions are still weakly bound and in highly excited states, which makes them very reactive. Once a molecular ion is formed, it too can collide with the atom cloud, and its survival depends on how hard it is to dissociate, either by collisions or assisted by light present in the setup. Especially the far-off-resonant light of the optical dipole trap for the atoms can be detrimental to the lifetime of molecular ions. Moreover, collisions can lead to electronic and vibrational state changes of the molecular ion. Here we discuss the recent studies on molecular ions performed using ion-atom mixtures with µK atoms in optical dipole traps. Molecular ions are, however, also widely studied in the field of trapped molecular ions (Sinhal et al., 2020; Katz et al., 2022).
For BaRb+, the question of how the molecular ion collides, relaxes, or dissociates within the cloud of atoms was addressed by studying the elastic, inelastic, and reactive processes (Mohammadi et al., 2021). The authors measured the rates for collisional and radiative relaxation as well as photodissociation, spin-flip collisions, and chemical reactions between this alkaline-earth and alkali species. Directly after formation, the collisions between BaRb+ and Rb lead mainly to vibrational relaxation. As the molecular ion becomes more deeply bound, the dominant mechanism shifts to radiative relaxation. Other collisional and dissociation processes led to the detection of Ba+, Rb+, and Rb2+. This maps out the possible pathways along the lifetime of a molecular ion.
The dissociation of the molecular ion into an atom and an ion typically leads to the observation of so-called recooling events; see Fig. 4c for an example. Here, the fluorescence returns after a certain cooling time $t_c$. As fluorescence is collected by shining resonant Doppler-cooling light into the setup, a 'hot' ion can be cooled back to a temperature regime where it fluoresces again. Because of their kinetic energy, 'hot' ions see a Doppler shift with respect to the imaging transition and remain dark until they are cold enough to be resonant with the light. Despite the Doppler shift, the light still Doppler cools the ion; for Doppler cooling, being red-detuned with respect to the transition is a prerequisite. The recooling time depends on the energy of the ion and could be traced back to the energy of the molecular ion. However, this requires a good knowledge of all contributions to the (molecular) ion's energy. These kinds of events are also used to measure the energy of the ion in so-called single-shot Doppler cooling thermometry (Meir et al., 2017a).
A novel way to form a molecular ion is through chemical reactions between the ion and Feshbach dimers (Hirzler et al., 2020b), which was recently observed with Yb+-Li. Using Feshbach resonances, weakly bound ultracold molecules in high vibrational states, so-called Feshbach dimers, can be created from an ultracold gas. In the 174Yb+-6Li system, the reaction Li2 + Yb+ → LiYb+ + Li was studied by looking at the probability of the ion going dark after interaction with the atom-dimer bath (see Fig. 4). The dimer density was at most 10% of the atom density, and the ion was prepared in its ground state (S1/2). The dark probability showed an anti-correlation with the atom density, excluding atoms from taking part in the reaction. This was further confirmed by exciting the ion to the P1/2 state and measuring the charge exchange rate, which is a good probe for the local density of the atomic cloud. The creation of the molecular ion was confirmed with mass spectrometry. Similarly to BaRb+, recooling events were seen, as depicted in Fig. 4c. The most likely cause here was photodissociation of the molecular ion by the 1064 nm light of the optical trap holding the atoms and dimers.
The molecular ion formation through ion-molecule collisions was confirmed by varying the number of dimers and demonstrating the sensing properties of an ion probe. Figure 4d shows the remarkable agreement between the probability of molecular ion formation and the number of dimers. The latter can be varied by varying the magnetic field at which the Feshbach dimers are formed through three-body recombination of the atoms. The blue shaded area shows the calculated probability based on atom data alone and relies on the assumption that each ion-molecule collision leads to a molecular ion, which agrees with the ion measurements. For a dark probability of $P_{dark} = 0.2$, about 50 dimers in a bath of 20000 atoms could still be detected. The ion was thus used as a probe for trace amounts of Li2 molecules, showing the sensing capabilities of an ion interacting with a many-body system.

Figure 4 (caption, continued): The blue line is a numerical solution to rate equations describing independently the number of dimers based on the atomic evaporation ramp which creates the dimers, the atom density, the temperature, and $B_{Li_2}$. The shaded regions correspond to a 20% error in the atom temperature. Adapted from .

The next step for molecular ions in hybrid ion-atom systems would be to resolve the rotational and vibrational states of the molecular ion and to gain control over the preparation of specific molecular states, for instance by using quantum logic spectroscopy (Wolf et al., 2016) with a single ion as the logic ion and the molecular ion as the chemistry ion. These techniques are being developed in the field of trapped molecular ions, e.g. for Ca+ and N2+ (Sinhal et al., 2020), and could also be applied to molecular ions in an atomic bath. Although dissociation by the trapping light of the atoms might limit the lifetime, this can be prevented by directly discarding the bath after the molecular ion is formed. A way to control the energy of the molecular ion is by controlling the binding energy of the Feshbach dimers that play a role in its creation (Hirzler et al., 2020b). The cold, controlled molecular ions could be used in combination with precision spectroscopy in the search for new physics (Safronova et al., 2018) or for laboratory astrochemistry. Furthermore, the ion-molecule collisions that can be studied within hybrid ion-atom systems give access to a new regime, as the molecules created from the ultracold gas are in the µK regime. Thereby they can complement the ion-molecule research done with Coulomb crystals (Heazlewood and Softley, 2015; Heazlewood, 2019) and molecular beams (Meyer and Wester, 2017).
Other reactions
When the ion is prepared in an excited state, charge-exchange reactions dominate the inelastic collisions for most ion-atom mixtures. For Sr+-Rb, however, two other mechanisms were measured to play a more important role: electronic excitation exchange and spin-orbit change (Benshlomi et al., 2020). The reason lies in the lack of avoided crossings between the potential energy curves of Sr+-Rb and Rb+-Sr. A distinction between the two mechanisms could be made by using single-shot Doppler cooling thermometry (Meir et al., 2017a) on the ion and measuring the ion's energy after a few collisions with the atoms. An excited Sr+ (D5/2 or D3/2) was measured to rapidly decay to Sr+ (S1/2) while simultaneously exciting Rb from the S1/2 to the P1/2 state and releasing the remaining energy into motion. Looking at the energy distribution of the ion, the two mechanisms could be distinguished, as they lead to distinguishable amounts of released energy. The cross sections of these processes for varying collision energy were subsequently measured by shuttling the atoms through the ion with an optical lattice. Depending on the speed with which the lattice was moved, the collision energy could be tuned from 0.2 to 12 mK$\times k_B$, and the measured cross sections followed the expected Langevin scaling of $E_{col}^{-1/2}$. By improving the long-term stability of this measurement method, significant deviations from this scaling could be observed, which would point towards quantum resonances (see Sec. 4.3).
Charge-exchange reactions are especially interesting in homonuclear ion-atom mixtures, where the charge gets swapped between the ion and the atom, i.e. A + A+ → A+ + A. In these systems, as a bonus, the charge exchange leads to swap cooling and provides a way to create a cold ion in a single step (Ravi et al., 2012). This was directly observed by comparing cooling in a Rb-Rb+ mixture to Rb-Ba+, where the atom bath had a temperature of $T_a = 600$ nK. In the homonuclear mixture a fast cooling of the ion was seen, whereas in the heteronuclear mixture it was absent. The mechanism behind this so-called swap cooling is resonant charge exchange, which happens through glancing collisions, limiting the application to ions with temperatures above about 200 µK. Below that, the buffer gas cooling due to Langevin collisions dominates and determines the cooling dynamics.
Quantum effects
Although attaining ion-atom collision energies close to the s-wave regime remains challenging, it is possible. In this regime, only a single partial wave describes the interaction, which can therefore be characterized by a single parameter, the ion-atom scattering length $a$. Furthermore, s-wave Feshbach resonances (FRs) can then be used to tune the ion-atom interaction (Idziaszek et al., 2011; Gacesa and Côté, 2017; Tomza et al., 2015), in analogy to ultracold quantum gases, where FRs are the workhorse of the field (Chin et al., 2010).
The characteristic energy below which the s-wave regime is reached is given by $E_s = \hbar^4/(4\mu^2 C_4)$. To know that the quantum regime has been attained, one can either look for quantum effects or measure the collision energy. The latter requires a well-rounded measurement of all energy contributions of the ion (secular motion, intrinsic and excess micromotion) and of the atom bath. Subsequently, the collision energy can be calculated and compared to the characteristic energy scale $E_s$.
With a favorable ion-atom mass ratio and good control over stray electric fields, the quantum regime can be accessed, and quantum effects have been seen. The first observation was a deviation from the classical Langevin rate for spin exchange at low collision energies in Yb+-Li. In Ba+-Li, Feshbach resonances were seen when approaching the s-wave limit. We discuss these discoveries below, together with the effects that can still be observed in the mK regime.
In the mK regime
Reminiscent features of the quantum regime can be observed in the mK regime when looking at the spin dependence of ion-atom collisions. This effect is called partial-wave phase locking and was observed for both Sr+-Rb and Yb+-Li (Côté and Simbotin, 2018). When ions and atoms collide, spin exchange or spin relaxation can happen, and the rate of these types of collisions depends on the initial states of the ion and the atom cloud. Because of the short-range nature of the spin-exchange interaction, the spin-exchange collision dynamics is governed by the singlet and triplet scattering lengths as well as the atomic polarizability. Even at high temperatures, i.e. when multiple partial waves are present, it is insensitive to the centrifugal barrier (Côté and Simbotin, 2018).
The spin-exchange measurements allow for an estimation of the difference between the singlet and triplet scattering lengths, which can be used to predict the magnetic fields at which ion-atom Feshbach resonances are expected to occur (Chin et al., 2010). Extending the measured spin-dependent rates to different isotope combinations enables a good comparison with coupled-channel scattering calculations. From matching theory and experiment, an estimate for the triplet ($a_T$) and singlet ($a_S$) scattering lengths can be inferred. For Yb+-Li, this resulted in the prediction that the scattering lengths are large and opposite in sign, i.e. $a_T \sim -a_S \sim R_4$. This is in good agreement with the later measured values of $a_S = 1.2(0.3)R_4$ and $a_T = -1.5(0.7)R_4$ (Feldker et al., 2020), when the s-wave regime was probed.
Spin exchange
With the coldest system, Yb+-Li, quantum effects in ion-atom collisions were observed by studying the spin-exchange dynamics as a function of collision energy. The latter was tuned by deliberately adding excess micromotion to the ion using compensation electrodes, which increases the ion's energy and thus the collision energy. In the classical regime, the spin-exchange rate is expected to be proportional to the Langevin collision rate $\Gamma = n_a\sigma v$, which depends on the atom density $n_a$, the relative velocity $v$, and the collision cross section $\sigma$, and is independent of the collision energy (see Section 2). In the quantum regime, however, deviations are expected. Here, the wave nature of the interaction matters. This leads to quantization of the angular momentum $l$, and the scattering of the particles is described by partial waves with $l = 0, 1, 2, \dots$, called s-, p-, d-, ...-waves. With each partial wave of $l > 0$, a centrifugal barrier can be associated. Particles can either tunnel through this barrier or be reflected from it, and the likelihood of each depends on the collision energy. Therefore, shape resonances and structure are expected to show up in the collision-energy dependence of the spin-exchange rate.
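For the $-C_4/r^4$ potential, the barrier of partial wave $l$ sits at the maximum of $V_l(r) = \hbar^2 l(l+1)/(2\mu r^2) - C_4/r^4$, which works out to $V_{l,max} = (\hbar^2 l(l+1))^2/(16\mu^2 C_4)$; note that for $l = 1$ this equals $E_s$ as defined above. The sketch below evaluates the barrier heights for the Yb+/6Li parameters assumed earlier (the $C_4$ value is again the assumed $\alpha_{Li}/2$ conversion).

```python
from scipy.constants import hbar, k as kB, u

mu = (171 * 6 / (171 + 6)) * u             # reduced mass of Yb+/6Li (kg)
C4 = 82.1 * 4.3597e-18 * (5.2918e-11)**4   # C4 = alpha_Li/2 (a.u.) -> J m^4

for l in (1, 2, 3):
    # Height of the centrifugal barrier for partial wave l
    V_max = (hbar**2 * l * (l + 1))**2 / (16 * mu**2 * C4)
    print(f"l = {l}: barrier height = kB x {V_max/kB*1e6:.0f} uK")
```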
The measured spin-exchange probability versus collision energy is shown in Figure 5, after a 10-ms interaction with an atom cloud of $n_a = 21(\pm)\times 10^{15}$ m$^{-3}$. A clear deviation from an energy-independent Langevin rate can be seen. The locations of the centrifugal barriers for the various partial waves are depicted by the dashed vertical lines. The red theory line represents a best fit to the data of multichannel quantum scattering calculations, which take into account the complete description of the molecular and hyperfine structure as well as the collision energy distribution of the ion. As fit parameters, the triplet and singlet scattering lengths, i.e. $a_S = 1.2(0.3)R_4$ and $a_T = -1.5(0.7)R_4$ with $R_4 = 70$ nm, and an average number of 1.2 Langevin collisions were found.
The observation of this quantum effect, demonstrated that the Yb + -Li mixture has access to the quantum regime. A next step is the search for Feshbach resonances, which have been predicted (Idziaszek et al., 2011;Tomza et al., 2015). These studies would be feasible when increasing the density of the atom cloud, which decreases the cycling time of the measurements and makes the detection of Feshbach resonances possible.
Feshbach resonances
With Ba+-Li, ion-atom Feshbach resonances were detected (Weckesser et al., 2021b) using a high-density spin-polarized Li gas with $n_a = 33\times 10^{17}$ m$^{-3}$. These resonances occur due to coupling between a molecular potential and a background continuum of states. Together with only a single partial wave contributing to the collision, a Feshbach resonance offers the prospect of controlled magnetic tuning of ion-atom interactions. In neutral atom mixtures (Chin et al., 2010), this control and tunability of interactions has already led to a wide exploration of both many-body (Bloch et al., 2008) and few-body physics (Wang et al., 2013). Around the center of a Feshbach resonance, the interaction strength increases and changes sign. The signatures are therefore an increase of the collision rate between the atoms and the ion, resulting in ion loss, or an increase in the cooling capacity of the atomic buffer gas.
The measurements looked at the ion survival probability after interaction with the atoms for various magnetic fields; they are shown in Figure 6. At a Feshbach resonance, three-body recombination is greatly enhanced, which leads to the observed ion loss. In the magnetic field range of 705-330 G, 11 loss features were seen. From fitting a Lorentzian to the features, the center and full width at half maximum were determined. The overall collision energy of the system was below 90 µK, which allows s-, p-, and d-wave resonances to be detected. An l-wave resonance is a Feshbach resonance where an l-wave molecular level couples resonantly to the entrance collision channel.
Four resonances, shown in red, could be attributed to s-wave Feshbach resonances through comparison with theoretical predictions (dashed vertical lines). The theory is based on ab initio electronic structure and multichannel quantum scattering calculations. In total, only 5 resonances were predicted when considering spin-projection-conserving electronic interactions ($\delta m_F = 0$). However, the s-wave resonances turned out to all be associated with $\delta m_F = 1$ interactions. This shows that coupling terms like second-order spin-orbit coupling need to be considered, which increases the number of expected resonances (Ticknor et al., 2004). The theory can predict the number of resonances, the distance between them, and their widths; however, it requires experimental input to determine the exact locations of the Feshbach resonances. Combining the data with the theory, the singlet and triplet scattering lengths were obtained, which pins down the Feshbach resonance locations. For Ba+-Li, $a_S = 0.236R_4$ and $a_T = -0.053R_4$, with $R_4 = 69$ nm, was found (Weckesser et al., 2021b). The assignment of the other resonances requires further theoretical investigation.
To further explore the Feshbach resonances found, the three-body recombination and the cooling properties of the gas were probed. Close to the s-wave FR at 296.13 G, the loss rate of the ion was determined for varying atom densities. It was found to follow $\Gamma_{Loss} \propto n_a^2$, which confirmed that the observed ion loss was caused by three-body recombination. Moreover, the increased cooling of the ion by the atoms was observed by determining the trapping probability of the ion in an optical trap for varying magnetic fields around the FR center. The shallow potential of the optical trap only holds the ion if it is cold enough compared to the trap depth. By holding the ion for a fixed interaction time in the cold atom bath, a different trapping probability was found for varying magnetic fields. For these observations, the stray electric fields were compensated down to 0.003 V/m. The next step would be to use the tunability of the ion-atom interaction through a Feshbach resonance to explore various interaction regimes, or even to fully switch off the interaction. Furthermore, a Feshbach resonance has a bound molecular state on its repulsive side, which, by ramping across the resonance, can be used for the association of molecular ions from the ion and the atoms themselves. Detecting these molecular ions would be an important step forward too. For Ba+-Li, these studies would be enabled by freezing out the other partial-wave contributions and reaching the s-wave regime.
Ion transport
Ion-atom collisions, both in the classical and the quantum regime, are of particular interest when looking at charge transport. An ion moving through a dense environment interacts with its surroundings, which affects the ion's motion and direction. To understand the dynamics of an ion travelling through a medium, insight into the collision behavior is therefore important. Here, experiments in hybrid ion-atom systems can contribute, especially when combined with a good spatial read-out of the ion's location in the cloud. In a radio-frequency or optical trap, the ion's motion is controlled by the trapping potential. However, a free ion in an atom cloud can be created by photo-ionizing an atom of the cloud via a Rydberg excitation.
Using a Rb+-Rb mixture, the transport of a single ion inside a Bose-Einstein condensate (BEC) could be measured (Dieterle et al., 2021) on the timescale of tens of microseconds. A single rubidium ion was created by applying a laser pulse, leading to a Rydberg excitation of a rubidium atom; subsequently, a small dc electric field pulse ionized the Rydberg atom. This two-step process leads to the deterministic creation of a single ion because of the strong Rydberg blockade (Balewski et al., 2013; Kleinbach et al., 2018; Engel et al., 2018). The free ion was not confined by any external electric or optical potential and had an initial energy of about 50 µK, coming from the small electric field that created it. To study the transport, the ion was subjected to an external force from a tunable homogeneous electric field of 1-6 mV/cm. For a period of 1-20 µs, the free Rb+ ion was transported through the Rb BEC ($n_a = 4\times 10^{14}$ cm$^{-3}$). The pull of the field accelerates the ion for a given time, after which the location of the ion is detected. The detection happens by turning on a strong electric field and accelerating the ion towards a multichannel plate through an ion lens. By measuring the time it took the ion to arrive at the detector after turning on the read-out fields, the location of the ion in the cloud was found. For comparison, the ion was also transported through a thermal gas with a 40 times lower density.
The ion transport in the BEC showed diffusive behavior, even though frictionless behavior might have been expected because of the superfluidity of the BEC. For the 40 times sparser gas, the transport was ballistic. This shows that frequent ion-atom collisions lead to the diffusive transport observed in the BEC, which was further confirmed by numerical simulations. Taking into account the variable density that the ion encounters when travelling through the BEC, the ion's motion was modeled by subsequent elastic Langevin collisions with ballistic motion in between (Zipkes et al., 2011; Chen et al., 2014). The ion-atom Langevin scattering rate $\gamma_L = 2\pi n_a\sqrt{C_4/\mu}$ set the time between collision events. Furthermore, glancing collisions were not included and, given the good agreement between data and simulations, were found to play a negligible role in the transport dynamics.
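For orientation, the sketch below evaluates this Langevin rate for a Rb+ ion at the quoted BEC density; $C_4$ is again taken from the assumed $\alpha_{Rb}/2$ conversion of the literature polarizability. The resulting microsecond-scale time between collisions is consistent with the diffusive transport observed over tens of microseconds.

```python
from scipy.constants import u, pi

n_a = 4e20                                  # BEC density (m^-3), i.e. 4e14 cm^-3
mu = (87 * 87 / (87 + 87)) * u              # reduced mass of Rb+/Rb (kg)
C4 = 159.5 * 4.3597e-18 * (5.2918e-11)**4   # C4 = alpha_Rb/2 (a.u.) -> J m^4

gamma_L = 2 * pi * n_a * (C4 / mu) ** 0.5   # Langevin collision rate (1/s)
print(f"gamma_L = {gamma_L:.2e} 1/s, i.e. one Langevin collision every "
      f"{1 / gamma_L * 1e6:.1f} us")
```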
Another mechanism encountered in the Rb+-Rb measurements was three-body recombination via Rb+ + Rb + Rb → Rb2+ + Rb (Dieterle et al., 2020). Because of its heavier mass, Rb2+ arrives later at the detector than Rb+, so the two ions can be discriminated. The electric field could also be used to destroy the ion dimer. The electric field strength needed for the dissociation reaction (Rb2+ + E → Rb+ + Rb) to happen was found to be a good measure of the binding energy of the dimer. The binding energy of the dimer could thus be read out by measuring the ratio of ions to ion dimers detected for varying strengths of the electric field (Dieterle et al., 2020). The longer the ion dimer resided within the atom cloud, the more deeply bound it became. This points to inelastic secondary collisions with the Rb atoms as the likely cause of the relaxation.
Looking at sub-microsecond timescales might provide access to the regime where quantum effects dominate and only a few partial waves contribute to the collisions that describe the transport of the ion. By improving the control of the electric fields and the spatial resolution of the detection scheme, single collisions could be detectable. This opens the possibility to probe the quantum regime of ion impurity transport (Côté, 2000), formation of mesoscopic molecular ions (Côté et al., 2002) and more specifically charged polaron physics (Casteels et al., 2011;Astrakharchik et al., 2021;Christensen et al., 2021;Oghittu et al., 2021). Here, high resolution microscope techniques and momentum spectroscopy as developed for quantum gases could be useful, e.g. (Veit et al., 2021;Geppert et al., 2021).
Prospects
With the observation of quantum effects in hybrid ion-atom systems, exciting possibilities for further exploration lie ahead. For large mass-imbalanced ion-atom systems, the tunability of ion-atom interactions is within reach, which enables full control of the ion-atom collisional properties and creates a versatile platform for state-to-state quantum chemistry, molecular ion creation and detection, as well as quantum simulation. However, there is still room to gain: the colder the ion and the atomic gas, the better the control over the inter- and intra-species interactions. Moreover, for the more equal-mass ion-atom systems the question remains how the quantum regime can be attained and probed. Here, we briefly outline the experimental directions and prospects of what might come next.
From the experimental perspective of probing the quantum regime, two angles are important: the development of new techniques to better control and probe the current systems at hand, and progress in new designs which allow other types of ion-atom systems to be studied at colder temperatures. Although for now the Paul trap offers the most direct way to the quantum regime, other trap designs are important for bringing more ion-atom systems into the quantum regime. Here, combinations of optical traps, tweezers, and combinations of static fields with optical fields are all on the table, e.g. (Schaetz, 2017; Schmidt et al., 2020b; Perego et al., 2020; Karpa, 2021). Each ion-atom combination provides its own set of promises and challenges for quantum simulation. In current setups, the experimental challenge lies in simplifying and standardizing the overall cooling, trapping, and detection sequences, in order to improve their stability and shorten the run time of the experiments. This provides room for extending the measurement cycle with additional cooling or trapping steps to further decrease the collision energy and increase the density, reaching deeper into the quantum regime. There, the ion-atom interaction can be tuned independently and becomes an experimental knob to turn.
With control come the opportunities of using ion-atom mixtures as test beds for the quantum simulation of charged-impurity physics. Of special interest is the ionic polaron, which, unlike its neutral-atom counterpart, is not short-ranged. This causes the typical polaron picture to break down, and measuring the ionic polaron's properties would shed light on this quasiparticle of intermediate range and its transport properties (Casteels et al., 2011; Astrakharchik et al., 2021; Christensen et al., 2021; Oghittu et al., 2021; Christensen et al., 2022). For the transport studies, improving the time resolution to look at shorter timescales is an important premise, as this is where quantum effects play a role. The ionic impurity-bath studies are also interesting in the light of quantum information, especially to see what happens to qubits in quantum baths, as exemplified by the measurement of decoherence of a spin qubit in a spin-polarized Bose-Einstein condensate (Ratschbacher et al., 2013). It would be interesting to see what happens in a degenerate Fermi gas and for varying ion-atom interaction strengths. Moreover, by going to larger systems of a few ions inside a bath, the prospect appears of looking into induced interactions mediated by the bath (Ding et al., 2022). Not only from the many-body aspect, but also from the few-body perspective, the impurity in a bath is of interest (Pérez-Ríos, 2021a). There is much more to explore on the front of ultracold chemistry. Especially when looking into the creation of molecular ions, promise lies in getting a better grip on the product-state distribution, for instance by resolving the rotational and vibrational states of the molecular ion and gaining control over the preparation of specific molecular states. Probing the energy of the molecular ion right after creation, e.g. by quantum logic spectroscopy, would give insights into the reaction path and possible ways to influence it. Here, better read-out and protection of molecular ions or intermediate product states would be greatly beneficial.
Moreover, to extend the ion-bath studies to different regimes, enhancing the complexity of the ion-atom system under study is another direction worth pursuing. For instance, an ion in a bath of dipolar atoms or molecules would open the possibility to look into baths with other controllable interactions beyond the van der Waals interaction. The ion can be used as a sensor of the many-body properties of the complex bath. The power of ion sensing was already demonstrated by measurements involving an ion in a BEC, which revealed that the ion could measure the density profile of the atom cloud (Schmid et al., 2010; Zipkes et al., 2010), or atom-dimer creation through the study of chemical reactions. Here, one can already use the tunability of the bath to sense different many-body environments with the ion. However, simultaneous control over the ion-atom interaction would further increase the flexibility of the system. So far, most ion-atom systems with an ultracold bath employ only singly charged atomic ions. This can be extended to probe the interactions between a bath and multiply charged ions or molecular ions. Several ion-MOT experiments exist in this direction (Hassan et al., 2022; Dörfler et al., 2019; Puri et al., 2019), and they could advance to the few-µK regime for the atom bath by implementing an optical dipole trap and evaporative cooling. Also the various Rydberg-bath hybrid systems (Ewald et al., 2019; Haze et al., 2019; Deiß et al., 2021; Zuber et al., 2021; Mokhberi et al., 2020) come to mind to shed light on the interaction of ions with a complex bath.
Altogether, these prospects, which rely on experimental control, design, and ingenuity yet to come, will advance our knowledge of ion-atom interactions from both the many-body and few-body perspectives. This is a prerequisite for defining the possibilities of ion-atom hybrid systems not only as quantum simulators but also in their applications to quantum technologies.
MACULAR ATROPHY INCIDENCE IN ANTI-VASCULAR ENDOTHELIAL GROWTH FACTOR–TREATED NEOVASCULAR AGE-RELATED MACULAR DEGENERATION
This post hoc analysis of 2 prospective studies, with identical criteria and protocol using either ranibizumab or aflibercept for neovascular age-related macular degeneration over 2 years, investigated the incidence of macular atrophy within this time frame. An inverse association with the number of injections was found, but not with treatment drug.
In the industrialized world, age-related macular degeneration (AMD) is the leading cause of severe visual loss in people older than 50 years, 1 because of neovascular AMD (nAMD) and geographic atrophy (GA). The current standard of care for nAMD consists of repeated intravitreal injections of anti-vascular endothelial growth factor (anti-VEGF) agents. These are drugs that inhibit the actions of vascular endothelial growth factor, and they have demonstrated similar improved visual outcomes with ranibizumab, 2,3 aflibercept, 4 and bevacizumab. 5 The introduction of anti-VEGF treatment has profoundly changed the visual prognosis of nAMD. However, concerns have been raised about progressive loss of the benefit in the long term, 6,7 which has been linked to the progressive appearance of atrophy in eyes treated with anti-VEGF for nAMD. 6,8,9 Atrophy is cited as the primary reason for visual acuity loss in patients with nAMD receiving anti-VEGF treatment. 8,9 Macular atrophy (MA) is a term encompassing GA and atrophy that forms in association with regressed neovascularization. To date, it is not entirely clear to what degree the atrophic changes in treated nAMD are due to the underlying degenerative process of AMD, are induced by the neovascular complex, or result from the anti-VEGF treatment. Reports showing higher MA incidence under ranibizumab compared with bevacizumab 10 and under monthly retreatment compared with pro re nata have raised concerns. 10,11 While a range of ocular baseline factors have been shown to be associated with MA incidence, 10-17 the role of anti-VEGF treatment in MA development remains controversial, both in terms of treatment frequency and treatment agent. In addition, the effect of aflibercept on MA development has not yet been investigated. Although the consensus remains that undertreatment, not overtreatment, poses the greater danger to vision in nAMD, a better understanding of the role of anti-VEGF in the development of MA is necessary. The aim of this study was to investigate treatment factors, along with ocular and systemic factors, for their association with MA incidence in eyes with nAMD treated with the anti-VEGF agents aflibercept or ranibizumab according to their individualized need.
Methods
Two subsequent prospective interventional 2-year studies served as the source of information for this post hoc analysis. Both original protocols were designed to investigate the usefulness of the Observe-and-Plan regimen, an individually planned, interval-based, variable dosing regimen, using ranibizumab 18,19 or aflibercept, 20 respectively, as the anti-VEGF drug to treat naive nAMD. These protocols were identical except for the treatment drug, and they were performed one after the other because of the later availability of aflibercept. The Observe-and-Plan regimen was shown to be safe and efficient, with the advantage of preserving clinical resources due to the preplanning of injections and only occasional monitoring visits. 18-20 For this post hoc analysis of MA incidence, only those eyes that completed the 2-year study protocol and showed no evidence of MA at baseline were included. In cases of bilateral eligibility for this MA analysis, only the right eye was selected to avoid bias due to intereye correlation.
The study was approved by the local ethics committee, and was performed in accordance with the ethical standards set by the Declaration of Helsinki.
Data Collection and Image Analysis
Baseline data collected for this analysis included age, sex, history of arterial hypertension, cardiovascular disorders, smoking, the best-corrected visual acuity on the Early Treatment of Diabetic Retinopathy Study (ETDRS) chart, and the type of anti-VEGF drug administered. The number of injections according to the Observe-and-Plan regimen over 2 years was recorded. Imaging data were collected from multimodal retinal imaging at baseline and at the end of the 2-year study protocol, including spectral domain optical coherence tomography, fundus color photography (Topcon TRC-50IX, Topcon, Tokyo, Japan), fundus autofluorescence imaging, fluorescein angiography, and indocyanine green angiography. The OCT machine used was the Heidelberg Spectralis (6 mm, 49 lines; Heidelberg Engineering, Heidelberg, Germany) or the Cirrus macular cube (512 × 126; Carl Zeiss Meditec, Inc, Oberkochen, Germany); the same machine was always used for each eye for the duration of the study. The machine used for fundus autofluorescence and angiography was either the Topcon TRC-50IX (Tokyo, Japan) or the Heidelberg Retina Angiograph (Heidelberg Engineering).
The presence of MA was based on a multimodal imaging definition (Table 1): a dark zone on fundus autofluorescence, with at least one of the following: increased visibility of the choroid on fluorescein angiography or color photography, or sharply demarcated increased choroidal reflectivity on spectral domain optical coherence tomography with absence of the retinal pigment epithelium (RPE) line. The diameter had to be at least 250 µm to be considered as MA.
Additional ocular baseline characteristics, which were graded on the various imaging modalities, included the co-localization of new atrophy with the baseline choroidal neovascularization (CNV) complex, the angiographic type of CNV, RPE detachment and its height, the presence or absence of reticular pseudodrusen, hyperpigmentation, depigmentation, epiretinal membrane, vitreomacular adhesion, the presence of intraretinal cysts, subretinal fluid, subretinal tissue complex, and subfoveal choroidal thickness. Table 1 shows the multimodal imaging definitions that were used for grading of each of these parameters. (Table 1 excerpt, SD-OCT definitions: subretinal fluid, the hyporeflective distance between the RPE and the photoreceptor layer; subretinal tissue complex, hyperreflective material located between the RPE and the photoreceptor layer. In both cases, the whole SD-OCT cube was scanned for the thickest point, which was measured in the 1:1 scan presentation, vertically to the orientation of Bruch membrane.) Quantitative measures on OCT were performed manually (PED height, choroidal thickness, subretinal tissue thickness) because the automatic measures are machine dependent and, therefore, linked to the treatment group, which was not acceptable in this analysis. For the same reason, we did not include automatic central retinal thickness measures.
In addition, the fellow eye was evaluated, under the condition that no CNV was present. The collected fellow eye data included the presence or absence of atrophy at baseline and at year two and its area.
The primary outcome measures were the factors associated with MA incidence. The secondary outcome measure was the degree of symmetry of MA presence with the nonneovascular fellow eye.
The Observe-and-Plan Regimen
The details of the regimen have been described in a previous publication. 19 Briefly summarized, the regimen started with three monthly loading doses of anti-VEGF, followed by a monthly observation period to determine the individual injection-recurrence interval (Figure 1). Active recurrence was defined as the presence of any intra- or sub-retinal fluid (no-tolerance regimen) or the presence of new hemorrhage. The observed interval from the last injection to the first reappearance of disease activity was then used to calculate the future treatment interval (half a month shorter, 3 months at longest). This was applied in a treatment plan including several injections (3 injections if the interval was ≤2 months, 2 injections if the interval was ≥2.5 months), followed by a monitoring visit after the injection series at the same time interval. The monitoring visit allowed for periodically adjusting the treatment interval according to the presence or absence of exudative signs on spectral domain optical coherence tomography. The patients remained on the same drug during the entire 2-year study period.

Figure 1. After the initial loading doses of three monthly injections, the monthly observations with optical coherence tomography (OCT) allow for defining the individual injection-recurrence interval. This interval, shortened by 2 weeks, is thereafter applied in an individually planned treatment schedule (*) of several injections. Regular monitoring visits after a series of injections allow for adjustment of the treatment interval: if the OCT shows a dry macula, the interval is lengthened; if fluid is present, the interval is shortened. Possible treatment plans (*) include 3 × 1, 3 × 1.5, 3 × 2, 2 × 2.5, and 2 × 3 months. If still dry at 3-month intervals, the next step is observation.
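The interval arithmetic of the regimen can be made concrete with a short sketch. The Python below implements the plan-update rules as described above; the function names and the half-month monitoring adjustment step are our illustrative assumptions rather than part of the published protocol.

```python
def plan_from_recurrence(recurrence_interval_months):
    """Derive the next treatment plan from the observed injection-to-recurrence
    interval, following the Observe-and-Plan rules stated in the text."""
    # Future treatment interval: half a month shorter than the observed
    # recurrence interval, capped at 3 months.
    interval = min(recurrence_interval_months - 0.5, 3.0)
    # Series length: 3 injections if the interval is <= 2 months,
    # 2 injections if it is >= 2.5 months.
    n_injections = 3 if interval <= 2.0 else 2
    return interval, n_injections


def adjust_interval(interval, macula_dry):
    """Monitoring-visit update; the 0.5-month step size is an assumption."""
    step = 0.5 if macula_dry else -0.5
    # Possible plans span 1 to 3 months in half-month steps.
    return min(max(interval + step, 1.0), 3.0)


# Example: recurrence observed 2.5 months after the last injection.
print(plan_from_recurrence(2.5))  # -> (2.0, 3): three injections, 2 months apart
```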
The results of the regimen have been published elsewhere. 18,19 The key reported results were a good and stable visual acuity improvement over 2 years (8.7, 9.7, and 9.2 letters at months 3, 12, and 24, respectively), achieved with a mean of 7.8 and 5.8 injections during years 1 and 2, respectively, and a mean of 4.0 and 2.9 ophthalmic examinations, respectively. The mean treatment interval (after the loading doses) was 2.0 months during year 1 and 2.2 months during year 2. 18
Statistical Analysis
Descriptive statistics were performed, and univariate and multivariate analyses served to identify risk factors associated with the incidence of MA. The statistical tests used included the two-sided t-test and chi-square contingency tables for continuous and categorical variables, respectively. The logistic model was used for the univariate analyses. Factors included in the multivariate model needed to show a P value < 0.2 in the univariate analysis. Owing to their particular significance for the scope of the study, the drug type and number of injections were planned to be included in the multivariate model, independent of their P value in the univariate analysis. The multivariate model was obtained using stepwise logistic regression for the dichotomous outcome of MA incidence. Statistical significance was evaluated using analysis of variance. For data analysis, a Microsoft Excel 2010 spreadsheet and JMP software for Windows (version 8.0.1, SAS Institute Inc, Cary, NC) were used. A 2-tailed probability of 0.05 or less was considered statistically significant.
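As a rough illustration of this pipeline, the sketch below reproduces the univariate screening at P < 0.2 and the forced inclusion of the two treatment variables; the file name and column names are hypothetical, and the stepwise selection used in JMP is replaced here by a single multivariate fit.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table: one row per study eye. Column names are
# illustrative and do not come from the original dataset.
df = pd.read_csv("ma_incidence.csv")
y = df["ma_incident"]  # 1 = de novo MA by year 2, 0 = no MA

# Univariate screening: one logistic fit per candidate factor, keep P < 0.2.
screened = []
for col in ["age", "n_injections", "depigmentation", "pseudodrusen", "baseline_va"]:
    fit = sm.Logit(y, sm.add_constant(df[[col]])).fit(disp=0)
    if fit.pvalues[col] < 0.2:
        screened.append(col)

# Drug type and injection count enter the multivariate model regardless of
# their univariate P values, as prespecified in the protocol.
for forced in ["drug_aflibercept", "n_injections"]:
    if forced not in screened:
        screened.append(forced)

# Single multivariate fit (the study used stepwise selection, omitted here).
print(sm.Logit(y, sm.add_constant(df[screened])).fit(disp=0).summary())
```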
Results
Of the 206 patients (227 eyes) included in the 2 prospective Observe-and-Plan trials, 186 patients (205 eyes) completed the 2-year study duration and had images available for this post hoc analysis. In 43 eyes, MA was found at baseline, and these were therefore excluded. The remaining 162 eyes belonged to 149 patients; thus, 13 patients had both eyes eligible, and the right eye was systematically chosen in these 13 patients. Finally, a total of 149 eyes (149 patients) were included in the present analysis: 70 eyes received aflibercept injections, and 79 eyes were treated with ranibizumab injections.
The percentage of women was 66%, and the mean age was 79.0 (SD 7.3) years. Of these patients, 63 eyes (42%) developed de novo atrophy by year 2, with a mean area of the new atrophy of 1.9 mm² (SD 0.2 mm²) and a median of 1.1 mm². The atrophic lesion area was ≤1 mm² in 44% of eyes and >5 mm² in only 11% of eyes. Of the 63 eyes with de novo atrophy, the atrophy was colocalized within the area of the baseline CNV complex in 48 eyes (76%), located purely outside the CNV complex in 6 eyes (10%), and mixed in location in 9 eyes (14%).
The univariate analysis examining risk factors for MA incidence is summarized in Table 2. Factors with P values between 0.05 and 0.2 in the univariate analysis that were included in the multivariate model were increasing age (P = 0.06), the drug type (aflibercept, P = 0.14), the presence of retinal hyperpigmentation (P = 0.07), increasing RPE detachment at baseline (P = 0.13), and thicker subretinal tissue complex at baseline (P = 0.11). After multivariate stepwise logistic regression analysis including parameters with a P value < 0.2 (continuous parameters were used if available), the final multivariate model was significant (P < 0.0001) and the R² value was 0.34. The model contained the following baseline factors as significantly associated with de novo MA incidence (Table 3): a lower number of injections within the 2 years of observation (P = 0.011), the presence of depigmentation (P = 0.0004), the presence of reticular pseudodrusen (P = 0.0005), lower baseline visual acuity (P = 0.0006), and the RAP type of neovascularization (P = 0.0011). The drug was not associated with MA incidence (P = 0.21).
Localization of New Macular Atrophy Regarding Choroidal Neovascularization Complex
De novo MA was observed completely outside the baseline CNV complex in 7 eyes (10.8%), completely within the baseline boundaries of the CNV complex in 48 eyes (73.8%), and in mixed localization in 10 eyes (15.4%). Performing the multivariate analysis after exclusion of MA purely within or purely outside the CNV complex, respectively, changed the final model in the following way (Table 4): the model including those eyes with some MA appearing within the baseline area of CNV (inside and mixed) showed a significant impact of fewer injections (P = 0.030) and intraretinal fluid (P = 0.034), and confirmed the factors of depigmentation (P = 0.016), reticular pseudodrusen (P = 0.001), and RAP (P = 0.011). However, the baseline visual acuity lost its significance (P = 0.07), although it was retained as a factor in the stepwise regression.
The model including those eyes with some MA appearing outside the baseline area of CNV (outside and mixed) confirmed the factors of fewer injections (P = 0.028), depigmentation (P = 0.009), and reticular pseudodrusen (P = 0.009). However, baseline visual acuity was not retained in the model after stepwise regression, and RAP lost significance in the final model (P = 0.054).
Comparison With the Nonneovascular Fellow Eye
Of the 149 study patients in this analysis, 93 patients had a fellow eye without neovascular complications during the study period. Of these, only 7 (8%) showed MA (GA) in the fellow eye at baseline of the study eye, and 86 (92%) had a symmetrical lack of atrophy in either eye at baseline. At year 2, 13 additional fellow eyes had developed MA, corresponding to an incidence rate of 15% in the nonneovascular fellow eyes. The intereye concordance of MA in the study eye and in the fellow eye at year 2 was highly significant (P = 0.0003). Forty-eight patients (52%) showed no MA in either eye, and 16 patients (17%) showed MA in both eyes. The number of patients showing atrophy in the study eye only or the fellow eye only was 25 (27%) and 4 (4%), respectively.
Regarding the MA incidence within the patient group with de novo MA outside the previous CNV complex in the study eye, there was no significant difference between the study eyes (14.9%) and the untreated fellow eyes (16.4%).
Discussion
In this study, we observed an MA incidence rate of 42% after 2 years of treatment under a variable dosing regimen with either ranibizumab or aflibercept for nAMD. The MA incidence associated risk factors were investigated using univariate and multivariate analysis, and intereye comparisons. The most intriguing (and new) finding was the association with fewer injections, whereas the other factors such as depigmentation, reticular pseudodrusen, RAP, lower baseline visual acuity, intraretinal cysts, and a high intereye correlation were expected findings based on the existing literature. Each point is separately discussed below.
In terms of incidence rates of MA in treated nAMD, the literature reports rates between 18% and 61%, depending on the imaging techniques and definitions applied. 10,11,13,21 All reports agree on the fact that de novo development of MA in anti-VEGF treated nAMD is frequent and multifactorial. 10,13 To date, most of the identified risk factors are ocular, with little evidence for influence of the treatment type and no evidence for systemic risk factors. However, our understanding of the risk factors is far from complete: multivariate models for MA incidence show only a weak-to-moderate goodness of fit (R 2 of 0.34 in our study). Thus, further investigations on the associated factors are important to create a comprehensive model.
The most clinically relevant factors are those that can potentially be modified. Therefore, our study particularly focused on treatment-related factors. The prespecified criteria of including in the multivariate analysis both the factors with univariate results of P < 0.2 and, in any case, the drug type and the number of administered injections reflect this particular interest in treatment parameters. In fact, in complex multifactorial disorders, the significance of some factors may only become apparent when controlling for all confounding factors, as can be done with multivariate analysis.
The number of injections in this study was dependent on individual treatment requirements. The Observe-and-Plan regimen indicated retreatment according to disease activity signs (intra- or sub-retinal fluid, retinal hemorrhage) and applied series of planned injections (2-3) until the next treatment plan was adjusted at periodic monitoring visits (every 3-6 months). 19 The analysis of association with MA incidence was an analysis within this variable dosing regimen, contrasting with the previously reported comparisons between fixed monthly and variable dosing pro re nata regimens. 10,11 Our results showed a higher risk for MA in eyes with lower treatment needs. Although the categorical analysis did not reveal a clear dose-dependent effect, the more reliable analysis with injections as a continuous variable did clearly show a significant association (Table 2), and it remained significant when including all other significant variables in the multivariate analysis (Tables 3 and 4). Initially, this may seem to contradict the previous studies that have reported higher risk in monthly versus variable dosing regimens. However, the "inversion" of the expected results is likely to be due to methodological differences; the comparison of fixed monthly retreatment with a variable dosing regimen as a category, as was performed in the previous reports, 10,11 is like comparing overtreatment with individually adjusted treatment. Evidence from basic science suggests that the complete absence of VEGF isoforms 120 and 164 leads, in mice, to an age-dependent degeneration of the RPE and choriocapillaris similar to MA in AMD. 22 Overtreatment associated with fixed monthly retreatment may result in complete and continuous VEGF suppression in at least a proportion of patients, thus explaining the higher risk for MA in such a regimen. By contrast, comparing eyes within a variable dosing regimen according to the eyes' treatment needs differs completely. The retreatment decision is based on disease activity signs and, thus, on verifiable periodic VEGF secretion, as was done in the Observe-and-Plan regimen of this study. The treatment frequency was aimed at controlling disease activity without overtreatment. Therefore, we compared participants based on their level of need for treatment (number of injections as a continuous variable compared with MA incidence). Biological plausibility for eyes with lower treatment needs being at higher risk for MA may be found in the following hypotheses: 1) If the RPE requires minimal VEGF for survival, 22 recurrences may provide the beneficial effect of transiently restoring adequate VEGF activity, helping to maintain the vitality of the RPE cells. This would favor more frequent recurrences, as is the case for eyes with higher treatment needs. 2) The degenerative process of AMD may be ongoing while the patient is treated with anti-VEGF for neovascular complications. This may become more evident when the degree of exudative activity is low, as in eyes with low treatment needs. 3) Despite the disastrous effect of the neovascular complex on retinal function, it may have some role in RPE survival. When the CNV complex is converted into scar tissue (low treatment need), its RPE-supporting function may also disappear and MA may appear.
However, the drug type did not show any association with MA incidence. To the best of our knowledge, this is the first comparison of MA incidence risk between treatment with ranibizumab and aflibercept. In terms of MA growth rate, however, Munk et al. 14 performed a retrospective study and described a lower growth rate outside the CNV boundaries during the ranibizumab period compared with the subsequent use of aflibercept. Basic research results are contradictory regarding their respective toxicities. For example, Julien et al. 23 reported that aflibercept induced a higher rate of protein complex formation, hemolysis in the choriocapillaris, and RPE cell death than did ranibizumab in monkeys, whereas Malik et al. 24 reported no relevant toxicity on RPE cell cultures with either aflibercept or ranibizumab. Our clinical results support that no important difference exists in terms of MA incidence between these anti-VEGF drugs. However, our results should be reevaluated in comparative trials with larger numbers of treated eyes, as with our sample size of 149 eyes, the power afforded is insufficient to claim parity between treatment types.
In the literature, numerous investigations comparing ranibizumab and bevacizumab can be found, with conflicting results. The analysis of the Comparison of AMD Treatments Trials indicated a higher MA risk for the ranibizumab group compared with the bevacizumab group, 10 but this was not confirmed by the Inhibition of VEGF in Age-related choroidal Neovascularisation (IVAN) trial, 11 nor by the meta-analysis that integrated both studies, 11 nor by the treat-and-extend management strategy in neovascular age-related macular degeneration (TREX-AMD) trial. 13 In terms of ocular risk factors, this study identified as independent factors the presence of depigmentation, reticular pseudodrusen, and lower baseline visual acuity, as well as the presence of RAP (or Type 3 neovascularization) and intraretinal cysts (in the subgroup with MA within the CNV area only). Depigmentation 25 and reticular pseudodrusen 26-29 have been previously reported as risk factors for MA in nonneovascular AMD, but so far had not been clearly identified in the context of treated nAMD. In our study, these parameters were also risk factors in the presence of neovascularization, a finding that is biologically plausible. These two factors probably correspond to the underlying degenerative process of AMD, rather than being related to neovascularization or its treatment. The relatively elevated odds ratios (OR 6.3 for depigmentation, OR 5.3 for reticular pseudodrusen) indicate their relevance for atrophy incidence. However, RAP, lower baseline visual acuity, and intraretinal cysts, which have all been previously reported as risk factors for MA in nAMD, 10 may be more closely associated with the neovascularization-related processes. This is in keeping with the observation that intraretinal cysts retained statistical significance only in the subgroup with MA appearing within the CNV complex area, and that RAP lost its significance in the subgroup with MA appearing outside the CNV complex area. However, RAP is also more frequently associated with MA in nonneovascular fellow eyes of RAP lesions, 27 indicating an underlying risk profile for this phenotype. The absence of baseline subretinal fluid 10 and subretinal tissue thickness, 10,13 which are previously identified risk factors for MA, were statistically correlated with MA in the univariate analysis but were not independent. We also observed that a thinner choroid had a significant association with MA incidence in the univariate analysis, which has not been previously reported. However, as this factor is correlated with reticular pseudodrusen, 30,31 it is not surprising that it did not retain statistical significance in the multivariate model. RPE detachment was not related to MA incidence, which corresponds well with previous reports. 10,32 The association with MA in the fellow eye 10 could not be included in the multivariate model because of the large reduction in sample size when restricted to eyes with nonneovascular fellow eyes. Thus, we approached this question in a separate subanalysis, which did indeed show an intereye correlation for MA presence and incidence. We therefore consider that it was justified to disallow inclusion of both eyes of an individual in this analysis, opting to systematically include the right eye only. Although some valuable information might be lost by this approach, we had to exclude only 13 eyes for this reason and gained statistical reliability for the results.
A few limitations of this study need to be acknowledged. First, although the studies that served as the source of data for this analysis were both prospective with identical protocols and regimens, small differences because of nonconcurrent enrollment exist (e.g., the study team). Second, the number of included eyes was limited, which would have reduced the statistical sensitivity for identifying less important factors. However, it did not influence the reliability of the significant results. Third, this was a post hoc analysis of prospective studies, not initially designed to address the issue of MA incidence.
However, this study also had several strengths: the well-documented baseline, the identical regimen with 2 different drugs, the context of prospective research, the identical treatment duration of 2 years, and the absence of selection bias other than informed consent for participation in the original studies.
In conclusion, this study found that the number of injections was inversely related to MA incidence, suggesting 1) that high treatment numbers, if individually required, are not a risk factor and 2) that MA appearance may co-occur with low-activity neovascular disease. Regarding reports in the literature, our results suggest that continuous and complete anti-VEGF suppression may be harmful, but that a number of injections adjusted to the individual's needs is not associated with increased MA risk. In addition, MA incidence was associated with a range of ocular factors, related both to the underlying degenerative process of AMD and to the neovascular complex with its exudative activity. Finally, no harmful effect was found for the drug type (aflibercept vs. ranibizumab). Although significant, the multivariate model was incomplete, and there is room for improved understanding. Further studies are required to confirm our findings.
Simulating the performance of a distance-3 surface code in a linear ion trap
We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain, with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer, including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of >99.9% for logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to approximate the logical error rates in this paper, to gain insight into which error sources are particularly detrimental to error correction.
Introduction
A quantum computer is a device engineered to utilize the complexity of a many-particle wavefunction for the purpose of solving computational problems. For specific problems, quantum algorithms are predicted to surpass the ability of classical information processing [1-6], but the space of problems solvable by quantum algorithms has yet to be rigorously explored due to the absence of a working physical architecture. Experimental implementations of small quantum algorithms in systems containing under 10 qubits have been exhibited in a variety of architectures [7-17]. However, realization of a large-scale algorithm consisting of hundreds or thousands of qubits will require protocols that protect the quantum states from sources of decoherence. Quantum error correction (QEC) is a viable method for protecting quantum states from sources of decoherence [18-20]. Error correction routines embed logical qubits into subspaces of a multi-qubit Hilbert space and use active feedback to remove entropy from the system. An enticing choice for an error correction protocol is the surface code [21], which exhibits an error correction threshold in the circuit model between 0.5% and 1% for depolarizing Pauli noise [22-25]. This threshold represents the error rate below which logical gates and memories can be made arbitrarily good by increasing the distance of the surface code. Here we examine the smallest surface code, with nine data qubits and eight ancilla qubits, known as surface-17 [26-28]. In principle, only a single ancilla qubit could be used over and over, but the gains from parallelism are apparent even in studies comparing 8 ancilla qubits to 6 ancilla qubits [28]. With 10-20 qubits, a number of QEC codes can be implemented fault-tolerantly, including the 5-qubit code [29], the Steane [[7,1,3]] code [30,31], the Bare [[7,1,3]] code [32], the Bacon-Shor [[9,1,3]] code [33,34], and the twisted surface code [35]. We chose to study the surface code because its memory pseudothreshold, the error rate below which the encoded qubit outperforms the physical qubit, is superior to those of the 5-qubit code, the Steane code, and the Bare code, and comparable to those of the Bacon-Shor and twisted surface codes [35].
Atomic ions have proven to be high-fidelity qubits for quantum information processing. The internal states of the ions are controlled by the application of electromagnetic radiation with lasers [36] or microwaves [37,38]. Two-qubit gates are performed by conditionally exciting the coupled motion of ions in the chain
The 17-Qubit Surface Code
The surface code allows for high-threshold fault-tolerant quantum computation in a two dimensional architecture [21][22][23][24][25]. The surface code is constructed by a square lattice arrangement of data qubits where the faces of the lattice represent the stabilizer generators of the error correcting code with X and Z type weight-4 stabilizers alternating in a checkerboard-like pattern throughout the lattice. In this arrangement, measurements are local and logical operators are non-local operators that span the surface and terminate on one of two types of boundaries, an X and a Z type, which label the type of stabilizers occupying the four terminating edges of the planar code. There are two choices of edge operators: weight-3 triangles or weight-2 links depending on how the bulk of the surface code is oriented [26]. The logical Z (X) operator spans the two Z (X) boundaries. The code distance, the weight of the lowest weight Pauli operator that maps elements of one logical basis state to another state, has an intuitive representation as the length (in number of data qubits) of the boundaries of the square lattice arrangement of the code.
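The stabilizer structure described above can be checked mechanically. The sketch below encodes one common labeling of the nine data qubits (0-8, row by row on the 3 × 3 grid) for a distance-3 surface code and verifies that the generators commute and that the logical operators behave as expected; this labeling is our illustrative choice and need not match the indexing of figure 1a.

```python
import numpy as np

N = 9  # data qubits 0..8, laid out row by row on a 3x3 grid (assumed labeling)
X_STABS = [(1, 2), (0, 1, 3, 4), (4, 5, 7, 8), (6, 7)]
Z_STABS = [(0, 3), (1, 2, 4, 5), (3, 4, 6, 7), (5, 8)]
LOGICAL_X, LOGICAL_Z = (0, 3, 6), (0, 1, 2)  # a column and a row of the grid

def symplectic(support_x, support_z):
    """Binary symplectic vector (x-part | z-part) of a Pauli operator."""
    v = np.zeros(2 * N, dtype=int)
    v[list(support_x)] = 1
    v[[N + q for q in support_z]] = 1
    return v

def commute(u, v):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    return (u[:N] @ v[N:] + u[N:] @ v[:N]) % 2 == 0

gens = [symplectic(s, ()) for s in X_STABS] + [symplectic((), s) for s in Z_STABS]
lx, lz = symplectic(LOGICAL_X, ()), symplectic((), LOGICAL_Z)

assert all(commute(g, h) for g in gens for h in gens)        # stabilizers commute
assert all(commute(g, lx) and commute(g, lz) for g in gens)  # logicals are in the normalizer
assert not commute(lx, lz)   # logical X and Z anticommute: one encoded qubit
print("surface-17 stabilizer group is consistent")
```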
The 17-qubit surface code is shown in figure 1a [27]. The white (black) circles represent data (ancilla) qubits and the dark (light) faces of the lattice dictate the X-type (Z-type) stabilizer generators of the code. The 13-qubit version of the surface code is constructed by removing the ancillary qubits on the boundaries of the 17-qubit code and scheduling stabilizer measurements in a manner that each of the ancillary qubits are used to measure both a weight-2 and weight-4 stabilizer [26]. This work focused on the 17-qubit version because the greater circuit depth for stabilizer measurement in the 13-qubit code adversely affects error correction [28].
The resource win for the surface codes is the ability to use bare ancilla for fault-tolerant measurement of the stabilizer generators. Scheduling the two-qubit gates in an N-like pattern about the face of a weight-4 Z stabilizer ensures that single-ancilla-qubit to two-data-qubit error propagation ("hook" errors) occurs in a direction perpendicular to the logical Z operator, as shown in figures 1b and 1c. This error is equivalent to a single-qubit error from the perspective of the logical Z operator, thus retaining fault-tolerance during syndrome measurement. Scheduling the two-qubit gates during the measurement of an X stabilizer in a Z-like pattern gives a similar result for the logical X operator. Many other error correction routines require the use of many-qubit ancillary states to ensure fault-tolerance. Shor error correction requires many-qubit states, known as cat states, to fault-tolerantly measure stabilizers, which increases the number of gates and requires a number of ancilla equivalent to the sum of the operator weights of all the stabilizers to perform measurements in parallel [70]. This would require 20 ancillary qubits to perform error correction in parallel with the surface code. Steane [71] and Knill [72] error correction both require an ancillary logical state for fault-tolerance, requiring 17 ancillary qubits. Recent work has shown that the use of "flag" ancillary qubits can reduce the number of ancillary qubits required for fault-tolerance [73,74], but would still require additional ancillary qubits, corresponding to 12 ancillary qubits (in parallel) in our surface code setting. However, a variation on the surface code, the twisted surface code, implements flag qubits and is constructed in a manner that requires only 15 total qubits, with a small loss in pseudothreshold relative to the surface code [35]. We chose to focus on gate fidelities and pseudothresholds; thus our choice of code.
Mapping the Surface Code to an Ion Chain
To perform error correction with the surface code, the required operations are single-qubit gates (H), two-qubit gates (CNOT), state initialization (the $|0\rangle$ state), and measurement (Z-basis). Single-qubit gates are performed by the application of laser fields [36] or microwave radiation [37,38] to manipulate the hyperfine states of trapped $^{171}$Yb$^+$ (the $^2S_{1/2}|F=0; m_F=0\rangle \leftrightarrow {}^2S_{1/2}|F=1; m_F=0\rangle$ transition), which can drive arbitrary single-qubit rotation gates. High-fidelity (compared to other schemes), fast two-qubit gates are performed by the application of counter-propagating laser fields, achieving entanglement through the coupling of the internal states with the motional modes of the ion crystal via a method known as the Mølmer-Sørensen gate, which engineers an XX entangling gate [17,75,76]. Controlled-NOT (CNOT) gates can be built from Mølmer-Sørensen gates and available single-qubit rotations [77] (see figure 8). State initialization and measurement are performed by applying laser beams resonant with the $^2S_{1/2} \leftrightarrow {}^2P_{1/2}$ transition. For $|0\rangle$ state preparation, qubits are optically pumped out of the $^2S_{1/2}|F=1\rangle$ state into the $^2P_{1/2}|F=1\rangle$ manifold, from which they fall, with high probability, into the $^2S_{1/2}|F=0\rangle$ state [59,78,79].

Figure 1: a) Planar layout of the 17-qubit surface code. White (black) circles represent data (ancilla) qubits and dark (light) faces denote X (Z) stabilizer generators. b) A single error on an ancilla propagating to a two-qubit error on the data; typically not fault-tolerant. c) The ancillary error from b) propagates in a direction perpendicular to the direction of the logical operator, which is equivalent to a single-qubit error from the perspective of the logical operator, retaining fault-tolerance with bare ancilla. All images were adapted from Tomita and Svore [28].

Figure 2: The SPAM zone is where state preparation and measurement are performed, scattering photons. The Storage zone does not have a unique task but serves to sufficiently distance qubits from the SPAM zone.
For measurement in the Z-basis, a $^2S_{1/2}|F=1\rangle \leftrightarrow {}^2P_{1/2}|F=0\rangle$ cycling transition is induced, where the discrepancy between the scattered photon counts of the two qubit states serves as readout [59,78,79]. Note that the state preparation and measurement processes scatter photons that should not interact with surrounding ions. This requirement introduces an additional operation, ion shuttling [80-85], which will be used to separate qubits in memory from the scattered photons during measurement/preparation. An alternative approach would be to use two ion species so that the data ions are insensitive to the fluorescence of the measurement ions [86,87], but we avoided this method due to technical issues in shuttling mixed-ion crystals. There exist many ion trap architectures containing both one-dimensional and two-dimensional ion layouts. For a first-generation implementation of a logical qubit consisting of atomic ions, a trapped linear chain of ions was favored over two-dimensional architectures due to technological challenges in the latter, such as additional ion heating through shuttling junctions in traps [80,81,84,85], high idle ion heating rates [88], and single-ion addressing/readout issues in two-dimensional trap layouts. The linear trap is composed of at least three zones: a Logic, a State Preparation and Measurement (SPAM), and a Storage zone (figure 2). Ion shuttling along the axial direction of the trap allows the 17-ion chain to be arbitrarily split into three separate linear chains of ions inhabiting each of the three zones. The Logic zone is where all single- and two-qubit gates are applied. The central SPAM zone is where state preparation and measurement operations are performed. The Storage zone serves the purpose its name implies and is required due to the geometric constraint of having the ions confined in a linear chain. In addition to these three zones, one or two additional zones may be capped at the ends of the trap that hold atomic ions for sympathetic cooling of the motional modes of the qubit ions [89-91]. Now we illustrate how a round of stabilizer measurements would proceed in such an architecture. Initially, all of the qubit ions would be prepared and cooled in the SPAM zone, as an initialization step. After initialization, all 17 ions are shuttled into the Logic zone. In the Logic zone, the circuit implementing the measurement of the stabilizers of the surface-17 code would be executed in, ideally, a parallel fashion. After the application of all the gates, groups of ancillary qubits would be shuttled to the SPAM zone for measurement. During the measurement, only ancillary qubits occupy the SPAM zone, and any data qubits are stored in either the Logic or Storage zone, sufficiently far away from the SPAM zone. The ancillary qubits in the SPAM zone are measured in parallel and in sets dictated by the data/ancilla assignment of the qubits in the ion chain. Following readout of all ancillary qubits, all qubits are shuttled back to the Logic zone and the process is repeated. Such an implementation raises the question: how should the qubits in the surface code be assigned to the linear chain of ions? We are particularly interested in configurations that minimize the gate times (errors) of the error correction circuit. To proceed, we must first discuss two-qubit gates.
For computation of the two-qubit gate times, current gate protocols [17] and motional decoupling techniques [92] were modeled; the latter contributes significantly to the distance dependence of the gate times. The calculation of the gate time of an ion pair is outlined below. In the weak trap limit, a Paul trap can be well approximated by a pseudo-harmonic potential (see e.g. Ref. [93]). Here we consider ions in a linear Paul trap along the z direction ($\omega_z \ll \omega_x, \omega_y$). With a harmonic trap potential, the spacing between ions in the chain will be nonuniform, which can lead to an undesired transition into a zigzag shape [94,95], as well as difficulty in cooling many low-frequency modes. To overcome this problem, an additional quartic potential can be added in the z direction [96], giving the total potential energy

$V = \sum_{i=1}^{N} \left( \alpha_2 z_i^2 + \alpha_4 z_i^4 \right) + \sum_{i<j} \frac{e^2}{4\pi\epsilon_0 \, |z_i - z_j|},$

where $\alpha_2, \alpha_4 > 0$ are two parameters characterizing the strength of the quadratic and the quartic potentials. The ion configuration is then fully determined by a length unit $l_0 \equiv (e^2/4\pi\epsilon_0\alpha_2)^{1/3}$ and a dimensionless parameter $\gamma_4 \equiv \alpha_4 l_0^2/\alpha_2$. For N = 17 $^{171}$Yb$^+$ ions, we choose $\gamma_4 = 0.86$ to minimize the relative standard deviation of the ion spacings. An average ion distance of about 8.2 µm can then be realized by setting $l_0 = 25$ µm. The equilibrium configuration of the 17 ions is shown in figure 3.
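As an illustration, the equilibrium configuration can be reproduced by minimizing the dimensionless energy that follows from the potential above, with lengths measured in units of $l_0$; the initial guess and optimizer choice below are our own, while N = 17, $\gamma_4 = 0.86$, and $l_0 = 25$ µm are the values quoted in the text.

```python
import numpy as np
from scipy.optimize import minimize

N, gamma4, l0_um = 17, 0.86, 25.0  # parameters quoted in the text

def energy(u):
    # Dimensionless energy: quadratic + quartic trap terms plus the mutual
    # Coulomb repulsion, with all lengths in units of l0.
    trap = np.sum(u**2 + gamma4 * u**4)
    d = np.abs(u[:, None] - u[None, :])[np.triu_indices(N, k=1)]
    return trap + np.sum(1.0 / d)

# Ordered initial guess avoids coincident ions; BFGS with numeric gradients.
res = minimize(energy, np.linspace(-3, 3, N), method="BFGS")
z_um = np.sort(res.x) * l0_um
print(np.diff(z_um))  # nearest-neighbour spacings: nearly uniform, ~8 um
```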
The two-qubit entangling gate is implemented with a spin-dependent force on the two ions via the transverse collective modes. For example, we can use the transverse modes in the x direction, whose k-th normalized mode vector is denoted as $b_j^k$ with a mode frequency $\omega_k$, where the index j runs over all ions (j = 1, 2, ..., N). The creation and annihilation operators corresponding to this collective mode are denoted as $\hat{a}_k^\dagger$ and $\hat{a}_k$, respectively. The transverse trap frequency is set to a typical value $\omega_x = 2\pi \times 3$ MHz and the temperature is set to $k_B T = \hbar\omega_x$, giving an average phonon number of $\bar{n} \approx 0.5$ for each transverse mode. This can easily be achieved with Raman sideband cooling. The spin-dependent forces are generated by counter-propagating laser beams on the two ions that we choose to entangle (see figure 4). The Hamiltonian, in the interaction picture, can be represented as

$H = \hbar \sum_{j} \tilde{\Omega}_j \cos(\mu t) \sum_{k} \eta_k b_j^k \left( \hat{a}_k e^{-i\omega_k t} + \hat{a}_k^\dagger e^{i\omega_k t} \right) \hat{\sigma}_x^j,$

where we further define the Lamb-Dicke parameter $\eta_k \equiv \Delta k \sqrt{\hbar/2m\omega_k}$, $\Delta k$ is the difference in the wavevectors of the counter-propagating Raman beams, $\mu$ is the two-photon detuning, and $\hat{\sigma}_x^j$ is the $\hat{\sigma}_x$ Pauli matrix on ion j.

(Figure caption: Thanks to the nearly uniform ion spacings, the required gate times for ion pairs at the same distance are roughly the same. In general, an ion pair at a larger distance requires a longer gate time τ due to the weaker coupling.)
For the $^{171}$Yb$^+$ qubit transitions, the laser beams have wavelengths around λ = 355 nm [97], and for counter-propagating pairs $\Delta k = 2k$; hence the Lamb-Dicke parameter $\eta_k \approx 0.111$. In the above equation, $\tilde{\Omega}_j$ is the effective Rabi frequency of the Raman transition pairs shown in figure 4 ($\tilde{\Omega}_j \approx \Omega_1\Omega_3/\Delta = \Omega_1\Omega_2/\Delta$, where $\Delta$ is the single-photon detuning from the excited state). From now on we will drop the tilde notation for simplicity. Note that one of the laser beams contains two frequency components, and we assume that the two Raman transition pairs have the same effective Rabi frequency $\Omega_j$, opposite detunings $\pm\mu$, and opposite wavevector differences $\pm\Delta k$. This is known as the phase-insensitive geometry [98]. The time evolution under the above Hamiltonian can be written as [96,98]

$U(\tau) = \exp\left[ \sum_{j} \sum_{k} \left( \alpha_j^k(\tau)\, \hat{a}_k^\dagger - \alpha_j^{k*}(\tau)\, \hat{a}_k \right) \hat{\sigma}_x^j + i\, \Theta_{ij}(\tau)\, \hat{\sigma}_x^i \hat{\sigma}_x^j \right].$

The parameters $\alpha_j^k$ and $\Theta_{ij}$ are numbers related to the phase-space displacement of the motional states after the gate and the angle of the entangling gate, respectively. For the following calculations, we assume that $\Omega_j$ is the same for both ions and we divide it into segments of equal duration; that is, a piecewise-constant $\Omega(t)$ (see figure 5). With a suitable choice of detuning $\mu$, gate time $\tau$, and Rabi frequency $\Omega(t)$, we can suppress all the $\alpha_j^k(\tau)$ terms and realize an ideal entangling gate $e^{\pm i\pi\hat{\sigma}_x^i\hat{\sigma}_x^j/4}$ with high fidelity. Here, we focus on the intrinsic gate infidelity caused by the residual coupling to multiple phonon modes after the entangling gate. Other noise sources from technical control errors are not included in this calculation. Figure 5 shows example calculations of the gate sequences for different ion pairs: two nearest-neighbor pairs and another two separated by 7 ion spacings. Because the ion spacings have been configured to be nearly uniform, the gate times do not vary much for ion pairs with the same separation. In figure 6a, we show the calculated minimal gate times for ion pairs at distances of 1, 3, 5, 7, 9, and 11 ion spacings. To find these "minimal" gate times, we searched different detunings and numbers of segments with a step of 10 µs and screened for solutions with an infidelity below $3 \times 10^{-6}$. We further require the effective Rabi frequency $\Omega_j(t)$ to be below $2\pi \times 1$ MHz, which is comparable with the value in current experiments. We note that the gate time limits here are not fundamental and alternative approaches could lead to faster two-qubit gates [99,100].
The underlying connection graph of a trapped linear ion chain is a fully connected graph [101]. Therefore, there are many ways to map the surface-17 code to the linear ion chain. A natural mapping is to split the ion chain into groups of data and ancillary qubits, which appears advantageous for minimizing ion shuttling times, since all measurements can be performed in parallel and the storage zone is not required. However, the data-ancilla distance between ions in this configuration is larger, which, as we have shown above, results in slower two-qubit gates. We fit the gate time as a function of the ion distance (figure 6a) to a linear relation $t_g = a \cdot d + b$, where $t_g$ is the gate time (µs) and d is the ion distance. As we can see from figure 5, the nearest-neighbor calculated gate time (ion pair 8 and 9) corresponds to d = 1, which gives $t_g \approx 50$ µs. Single-qubit gates can be performed in parallel with a gate time of 10 µs. With this relationship between ion distance and gate times, we screened for the optimal ion chain configurations using a simulated annealing algorithm that minimized several parameters of interest, as sketched below. Three parameters were minimized: the maximum ion distance between entangled ions (M), the average ion distance between entangled ions (A), and the total time for one round of syndrome measurement in parallel (T), corresponding to the second letter of the labels in figure 7.
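A minimal version of such an annealing search, using the serial total gate time as the objective (the "T" criterion), is sketched below; the linear coefficients and annealing schedule are placeholders (only $t_g(d=1) \approx 50$ µs is pinned down above), and the list of entangled pairs would come from the surface-17 stabilizer circuit.

```python
import math
import random

def gate_time_us(d, a=50.0, b=0.0):
    # Linear fit of two-qubit gate time vs. ion distance d (figure 6a);
    # a and b are placeholder coefficients chosen so that t_g(1) = 50 us.
    return a * d + b

def total_time(perm, pairs):
    # Serial Logic-zone time for one syndrome round, given an assignment
    # perm[qubit] = position in the ion chain, and the entangled pairs
    # (qubit_i, qubit_j) of the stabilizer circuit.
    return sum(gate_time_us(abs(perm[i] - perm[j])) for i, j in pairs)

def anneal(pairs, n_ions=17, steps=20_000, t0=500.0):
    perm = list(range(n_ions))
    cur = best = total_time(perm, pairs)
    best_perm = perm[:]
    for s in range(steps):
        temp = t0 * (1 - s / steps) + 1e-9       # linear cooling schedule
        i, j = random.sample(range(n_ions), 2)
        perm[i], perm[j] = perm[j], perm[i]      # propose a swap
        new = total_time(perm, pairs)
        if new < cur or random.random() < math.exp((cur - new) / temp):
            cur = new                            # accept the move
            if new < best:
                best, best_perm = new, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]  # revert
    return best_perm, best
```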
In addition, the optimizations were performed with the constraint that the data and ancilla qubits are either kept separate (S) or allowed to be mixed together (M), corresponding to the first letter of the labels in figure 7. The corresponding connection graphs for two optimized chains (SM and MT) are shown in figure 6b. The ion splitting time was assigned to be 100 µs between neighboring zones [69,82,83]. Therefore, splitting ions off the chain in the Logic zone and shuttling them to the SPAM or Storage zone requires a time of 100 µs or 200 µs, respectively. This time is built from an assumption of a 200 kHz lowest axial frequency, which implies that splitting/merging of subsets of ions in the chain can occur at a rate of almost this frequency. For splitting the ion chain, the transport is expected to be limited to 7 m/s, assuming a 50 kHz update rate in the transport waveforms [85]. Therefore, the remaining 95 µs allows for the chains to be separated by a distance of 665 µm, which is an excellent separation between the detection lasers and the data qubits. It is assumed that operations can happen in parallel, so part of an ancillary ion subchain can be shuttled to the Storage zone while the other ions within the same subchain remain in the detection zone.

Figure 7: Trap operation times and ion arrangements optimized for an array of parameters. The first letter of the label refers to S=separate and M=mixed arrangements of data and ancilla qubits. The second letter refers to the parameter minimized, with M=maximum distance between entangled ions, A=average distance between entangled ions, and T=parallel total gate time. All values are reported in microseconds; numbers in roman and italics refer to the gate times of operations performed in serial and parallel, respectively. Parallel operations allow for two simultaneous two-qubit gates exciting the independent x and y radial modes and fully parallel single-ion operations. Single-qubit gates, parallel measurement/state preparation, and shuttling between neighboring zones require 10 µs, 100 µs, and 100 µs (5 µs split and 95 µs shuttle time), respectively.
The rejoining of the ions shuttled to the detection zone is assumed to occur in parallel with the next splitting operation, which leads to a fixed cost of 100 µs for rejoining the chain as well. The final assumption is that a three-way split requires the same amount of time as a single splitting operation of the ion chain due to parallelism, which is reasonable given the small contribution of splitting operations to the total zone-to-zone movement time. The measurement time was also fixed at 100 µs, which is a lax requirement on the experimental apparatus and will allow for high-fidelity state detection [53].
The time required to measure the error syndrome for the different optimized configurations is shown in figure 7. The gate times (Logic) for the chain configurations where the data and ancilla qubits are separate (labels SM and SA) are substantially longer than for the mixed configurations. The mixed configurations (MM, MA, and MT) have longer chain manipulation and measurement times. The longer times are due to the inability to perform all measurements in parallel for a mixed arrangement; only subchains consisting of neighboring ancilla can be measured in parallel. An example of a parallel step for the mixed configuration is shown in figure 2, where the ion configuration corresponds to the ion chain label MT. Ions with labels 11, 12, and 14 in the surface code are measured in parallel in this measurement step. The neighboring data qubits on the ends of the subchain consisting of the three ancillary qubits restrict measurement on other ancillary qubits in this architecture. The entanglement gates outlined above allow for parallel implementation: two simultaneous entanglement gates can be performed on two independent pairs of ions by exciting the x and y radial modes, respectively, for each pair. Single-qubit operations are completely parallel for both the serial and parallel implementations. The parallel operation times are shown in italics in figure 7. For the detailed calculations below, we chose the ion chain configuration that gives the minimal total syndrome measurement time (serial or parallel), MT. Note that the MM and MA configurations have shorter serial Logic times, so they will perform better than the MT configuration under the gate-based error model outlined below.
Modeling Ion Trap Error Sources
For accurate assessment of error correction in an ion trap quantum computer, appropriate error models must be developed to simulate noise sources in the physical architecture. This section provides the components for building up such complexity. The Kraus operator representation will be used to describe the components of the quantum error channel. A graphical representation of the full ion trap error model is shown in figure 9.
Depolarizing Error Model
The depolarizing error model is a standard error model used in simulations of quantum error correcting codes. In the quantum circuit implemented to measure the stabilizers, an element sampled from the one-qubit (two-qubit) Pauli group is applied after each single-qubit (two-qubit) gate.
The one- and two-qubit Kraus channels are of the form

$\mathcal{E}_1(\rho) = (1-p)\,\rho + \frac{p}{3}\sum_{P \in \{X,Y,Z\}} P\rho P, \qquad \mathcal{E}_2(\rho) = (1-p)\,\rho + \frac{p}{15}\sum_{P \in \mathcal{P}_2 \setminus \{I\otimes I\}} P\rho P,$

where p is the error rate of the error channel and $\mathcal{P}_2$ denotes the set of two-qubit Pauli operators. Furthermore, the application of perfect two-qubit gates still allows certain errors to propagate from single-qubit to two-qubit errors. For measurement of the stabilizers, the CNOT (controlled-X) is the two-qubit gate, and it transforms two-qubit Pauli errors in the following manner:

$X \otimes I \rightarrow X \otimes X, \quad I \otimes X \rightarrow I \otimes X, \quad Z \otimes I \rightarrow Z \otimes I, \quad I \otimes Z \rightarrow Z \otimes Z,$

where the first (second) operator acts on the control (target). The rules for Y errors can be built from the relation Y = iXZ. The stabilizer circuits in this work are built using only the CNOT as the two-qubit gate. This error model allows for errors on both the data and ancilla qubits, which translate into errors in the measurement of stabilizers during syndrome extraction. Furthermore, preparation and measurement errors are modeled by the application of a single-qubit depolarizing error channel after preparation gates and before measurement. This model serves as a baseline error model for the assessment of error correction.
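A minimal sketch of this baseline model, tracking Pauli errors only up to phase (a Pauli-frame simulation), is given below; the function names are illustrative.

```python
import random

# Conjugation of Pauli errors through a CNOT, tracked up to phase:
# X on the control copies onto the target, Z on the target copies back
# onto the control; the Y rules follow from Y = iXZ.
_TIMES_X = {"I": "X", "X": "I", "Y": "Z", "Z": "Y"}  # multiply by X (mod phase)
_TIMES_Z = {"I": "Z", "Z": "I", "X": "Y", "Y": "X"}  # multiply by Z (mod phase)

def propagate_cnot(ec, et):
    """Map the error (ec on control, et on target) through an ideal CNOT."""
    if ec in ("X", "Y"):   # control carries an X component
        et = _TIMES_X[et]
    if et in ("Z", "Y"):   # target carries a Z component (unchanged by the X update)
        ec = _TIMES_Z[ec]
    return ec, et

def sample_depolarizing_2q(p):
    """With probability p, draw one of the 15 non-identity two-qubit Paulis."""
    if random.random() >= p:
        return "I", "I"
    pairs = [(a, b) for a in "IXYZ" for b in "IXYZ" if (a, b) != ("I", "I")]
    return random.choice(pairs)

# Examples: an X error on the control spreads to both qubits; likewise Z on the target.
assert propagate_cnot("X", "I") == ("X", "X")
assert propagate_cnot("I", "Z") == ("Z", "Z")
```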
Coherent Over-Rotation of the Mølmer-Sørensen Gate
The first step in adding complexity to the error model entails compiling the two-qubit quantum logic gates in the abstract quantum circuit into experimental entangling gates. The Mølmer-Sørensen (MS) entangling gate [75,76] was chosen for this purpose due to its faster gate times and higher gate fidelities relative to other entangling gate schemes [17]. The MS gate uses a bichromatic laser field, tuned close to the upper and lower motional sidebands of a qubit transition, to induce a two-photon transition that couples the $|00\rangle \leftrightarrow |11\rangle$ and $|10\rangle \leftrightarrow |01\rangle$ qubit states [75,76]. In the computational basis, the unitary operator associated with the Mølmer-Sørensen gate is

$U_{MS}(\chi) = \begin{pmatrix} \cos\chi & 0 & 0 & i\sin\chi \\ 0 & \cos\chi & i\sin\chi & 0 \\ 0 & i\sin\chi & \cos\chi & 0 \\ i\sin\chi & 0 & 0 & \cos\chi \end{pmatrix},$

where the parameter χ depends on the gate time applied to the specific ion pair [17]. The absolute value of the angle, |χ|, may be set to any real number between 0 and π/2 by varying the power of the laser in the experiment [17]. The sign of χ depends on the laser detuning, which is chosen from the normal modes of the ion pair [17]. The CNOT gate can be achieved by assigning χ = ±π/4 and sandwiching the two-qubit unitary between single-qubit gates as shown in figure 8 [77]. The Mølmer-Sørensen unitary implemented during the CNOT can equivalently be written as

$U_{MS}(\chi) = e^{i\chi\,\hat{\sigma}_x^i \hat{\sigma}_x^j},$

where we attempt to assign χ = π/4 with the laser field. However, due to experimental error, a small over-rotation (with angle α) may be applied about the XX axis, with the real gate applied in equation 8 having an angle of χ + α. This error will be simulated by a probabilistic error channel of the form

$\mathcal{E}(\rho) = (1-p_{xx})\,\rho + p_{xx}\,(\hat{\sigma}_x^i \hat{\sigma}_x^j)\,\rho\,(\hat{\sigma}_x^i \hat{\sigma}_x^j),$

where the probability of the channel above is a function of the over-rotation angle. For example, one possible relation between $p_{xx}$ and α is obtained by the Pauli twirled approximation, which results in $p_{xx} = \sin^2(\alpha)$ [102]. It is also possible to choose $p_{xx}$ such that the Pauli approximation to the over-rotation satisfies additional constraints [103,104].

Figure 8: The construction of a CNOT logic gate from a Mølmer-Sørensen entangling gate and single-qubit gates, following [77]. The quantity s is ion-specific and equal to the sign of the experimental interaction parameter χ.
also suffer over-rotations, although typically to a much lesser degree. The over-rotations can be modeled in an analogous way, giving three distinct gate-dependent error channels:

$$\mathcal{E}_X(\rho) = (1-p_x)\,\rho + p_x X\rho X, \qquad \mathcal{E}_Y(\rho) = (1-p_y)\,\rho + p_y Y\rho Y, \qquad \mathcal{E}_Z(\rho) = (1-p_z)\,\rho + p_z Z\rho Z,$$

which are applied after every single-qubit rotation gate R_X(θ), R_Y(θ), and R_Z(θ), respectively. For simulations, the error rates for the single-qubit gates are a factor of 10 lower than those corresponding to two-qubit gates, representing observed single- and two-qubit gate fidelities [49,50].
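As an illustration of how these over-rotation channels can be realized in a simulation, the sketch below (our own, hedged example; only the relation p_xx = sin²(α) comes from the text) converts an over-rotation angle into a stochastic Pauli error.

```python
import math
import random

def p_xx_from_overrotation(alpha):
    """Pauli-twirled approximation of an XX over-rotation exp(-i*alpha*XX):
    the coherent error is replaced by a stochastic XX flip with
    p_xx = sin^2(alpha), as quoted in the text."""
    return math.sin(alpha) ** 2

def apply_overrotation_channel(pauli, p):
    """Probabilistically apply the given Pauli error ('XX', 'X', 'Y', or 'Z')."""
    return pauli if random.random() < p else "I" * len(pauli)

# Example: an assumed 1% fractional over-rotation of the angle chi = pi/4
alpha = 0.01 * (math.pi / 4)
p_xx = p_xx_from_overrotation(alpha)          # ~6.2e-5
err = apply_overrotation_channel("XX", p_xx)  # 'XX' with prob p_xx, else 'II'

# The gate-dependent single-qubit channels are sampled the same way, with
# error rates p_x, p_y, p_z taken a factor of 10 below the two-qubit rate.
```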
Motional Mode Heating
In addition to control errors, the applied field from the Mølmer-Sørensen gate can result in motional heating of the ions, which impacts the fidelity of the two-qubit gate. Modeling heating as a coupling of the motional states of the ions to an infinite temperature bath [105], Ballance et al. characterized the impact of motional heating on the error of a two-qubit entangling gate, ε_h, giving:

$$\epsilon_h = \frac{\dot{n}\, t_g}{2K},$$

where ṅ is the average change in the thermal occupation number of the gate mode, t_g is the gate time, and K is the number of loops in phase space traversed by the ions during the gate [49]. We chose to study the low K limit (K = 1, 2) of equation 11, modeling heating errors with the Kraus operators

$$E_0 = \sqrt{1 - p_h}\; I\otimes I, \qquad E_1 = \sqrt{p_h}\; X\otimes X,$$

which are applied after every MS gate, where the probabilities are ion-dependent: p_h = r_heat × t_MS, where r_heat is the heating rate and t_MS is the time of the Mølmer-Sørensen gate. It is important to note that this model is pessimistic with respect to ion heating, even in the low K limit, and the choice of coupling modes can increase K by 1–2 orders of magnitude [106,107].
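A minimal sketch of the heating error probability, assuming the ε_h = ṅ t_g/(2K) scaling reconstructed above and purely illustrative (not experimentally reported) parameter values:

```python
def heating_error_prob(n_dot, t_gate, K):
    """Heating-induced error probability for one MS gate, using the
    epsilon_h = n_dot * t_gate / (2K) scaling discussed in the text.
    n_dot:  motional heating rate of the gate mode (quanta/s)
    t_gate: MS gate time (s)
    K:      number of phase-space loops traversed during the gate"""
    return n_dot * t_gate / (2 * K)

# Assumed numbers: a 100 us gate on a mode heating at 100 quanta/s, K = 2
p_h = heating_error_prob(n_dot=100.0, t_gate=100e-6, K=2)  # 2.5e-3
```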
Background Depolarizing Noise
For the stable "clock" states of the hyperfine qubits, errors arise from the application of gates. In addition to systematic over/under-rotations of the applied laser field, instabilities in the control of the qubits (laser field drifts, magnetic field fluctuations, etc.) can lead to stochastic error processes, which we model with a depolarizing error channel. One such natural stochastic process that has been shown to be a contributing source of error is scattering during the application of a gate [108,109]. To model the effects of spontaneous Raman and Rayleigh photon scattering, we apply a single-qubit depolarizing channel (equation 5) to every qubit involved in a gate (single- or two-qubit).
Dephasing Errors
While the ions are located in the trap where the DC electric fields vanish, the ions may still be exposed to oscillating electric fields from blackbody radiation, laser fields, or motion around the field-free point in the oscillating trap field [110]. The oscillating electric field shifts the energy of each of the states of the two-level qubit system by the AC Stark effect, which introduces dephasing errors in the applied gates. This effect is observed for both single- and two-qubit gates. We choose to model these dephasing errors as a single-qubit channel of the form:

$$\mathcal{E}_d(\rho) = (1-p_d)\,\rho + p_d\, Z\rho Z,$$

where the channel is applied to each qubit involved in single- and two-qubit gates and p_d = r_d × t_g for each gate, where r_d is the dephasing rate and t_g is the time of the applied gate. We make the approximation that single- and two-qubit dephasing errors occur at a constant rate. This is certainly not true: the dephasing rates will be gate dependent between two-qubit gates and will likely not match the single-qubit dephasing rate. But, given that single-qubit gates have higher fidelities relative to two-qubit gates, this serves as a pessimistic approximation which is consistent with our level of abstraction.
Ancilla Preparation and Measurement Errors
For the ion trap error model, measurement errors were modeled by a single-qubit depolarizing channel applied before the measurement with a probability equivalent to that of the single-qubit over-rotation errors of the single-qubit gates. Preparation errors were modeled with a single-qubit depolarizing channel applied immediately after the preparation of the state but with a probability equivalent to the background depolarizing channel. All states are prepared and measured in the +Z basis, which can be performed with high-fidelity [111]. Note that this implementation is not ideal given that both state preparation and state readout rely on the same scattering processes. However, the preparation and measurement errors should not be the dominant source of failure in the simulations consistent with single-qubit gate, preparation, and readout fidelities of ≥ 99.9 % [111]. Furthermore, state preparation/measurement is a high-fidelity operation (relative to two-qubit gates) so the inflated state preparation errors will give a pessimistic simulation of the fault-tolerance of the surface code on ion traps relative to the physical architecture. These claims are reinforced in section 5.4.
Error Correction for Ion Trap Errors
To perform error correction on the surface code, classical decoding algorithms have been developed to determine the most appropriate correction operation to perform given the limited information about the encoded state from the syndrome. Various decoders are available that trade off classical efficiency and observed error threshold. We apply a few decoders for error correction on the surface code below and discuss their performance. For all simulations, we implemented a Monte Carlo simulation of the surface code using an importance sampling method described in Ref. [32].
Single-Qubit Errors
Two-Qubit Errors
Integration into Ion Trap Hardware
When choosing a decoding method to integrate into a physical architecture, there is much to consider that extends beyond the (pseudo)threshold. Processing, memory, and runtime requirements of the decoder play a role in the feasibility of implementing error correction with an experimental control system.
Lookup Table Decoder
The simplest decoder is a lookup table that maps a syndrome configuration to the lowest weight Pauli error corresponding to the syndrome. We may represent an error configuration e as a binary (row) vector

$$e = (e_X\,|\,e_Z) \in \mathbb{F}_2^{18},$$

where e_X and e_Z mark the qubits supporting the X and Z components of the error (a Y error sets both). Given two matrices, H and G^T, that correspond to the binary representation of the X-type and Z-type stabilizers, respectively, one may define a mapping matrix T between error configurations e and binary syndrome (column) vectors s:

$$s = T e^{T}, \qquad T = \begin{pmatrix} 0 & H \\ G^{T} & 0 \end{pmatrix},$$

so that the X-type checks flag the Z components of the error and the Z-type checks flag the X components. Iterating over all elements of $\mathbb{F}_2^{18}$ and applying T, we constructed a lookup table Tab[s] = e, where e is the minimum-weight error configuration corresponding to the syndrome string s. With a slight abuse of notation, we denote |·| as the Hamming weight of the error string e, with the caveat that Y-type errors are evaluated as the same weight as X- and Z-type errors. Those familiar with the CSS construction of quantum error correcting codes will recognize H and G^T as the parity check matrices of C and C⊥ of a classical linear error correcting code used to construct the 17-qubit surface code [112]. All of the rules of the full lookup table (Tab[s]) can be constructed with two 16-element tables, each with keys corresponding to the X-type and Z-type stabilizer measurements, respectively.
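A sketch of the lookup table construction described above. The parity-check matrices H and G^T are left as inputs (the surface-17 matrices are not reproduced here), and the convention that X-type checks detect Z components is our assumption, consistent with the CSS construction:

```python
import itertools
import numpy as np

def build_lookup_table(H, G_T):
    """Build Tab[s] = minimum-weight error for each syndrome s.

    H, G_T: binary numpy parity-check matrices of the X- and Z-type
    stabilizers (placeholders: the actual surface-17 matrices are not
    reproduced here). Errors are (e_X | e_Z); a Y on qubit i sets both.
    """
    n = H.shape[1]
    table = {}
    # Iterate errors in order of increasing weight so that the first error
    # stored for a syndrome is a minimum-weight representative (Y counts
    # as weight 1, matching the caveat in the text).
    for wt in range(n + 1):
        for support in itertools.combinations(range(n), wt):
            for types in itertools.product("XYZ", repeat=wt):
                e_x = np.zeros(n, dtype=np.uint8)
                e_z = np.zeros(n, dtype=np.uint8)
                for q, t in zip(support, types):
                    if t in "XY": e_x[q] = 1
                    if t in "ZY": e_z[q] = 1
                # X-type checks (H) detect Z components; Z-type (G_T), X
                s = tuple(np.concatenate([H @ e_z % 2, G_T @ e_x % 2]))
                table.setdefault(s, (tuple(e_x), tuple(e_z)))
    return table
```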
For circuit-level noise, the lookup table above is not sufficient for fault-tolerance. A set of syndrome processing rules must be imposed to ensure that measurement errors do not result in faulty corrections that introduce errors onto the data qubits. An example of a typical set of rules is shown below (a, b, and c are syndrome outcome strings):

    measure a; measure b
    if a = b: correct based on a
    else: measure c; correct based on c

where two rounds of stabilizer measurement are performed and, if the first two measurement outcomes disagree, a third round of stabilizer measurement is performed. Correction is applied based upon the final measurement performed. We chose to employ a different set of fault-tolerant syndrome processing rules that can, on average, reduce the depth of the circuit required to perform a fault-tolerant correction by one round of stabilizer measurement. The routine:

    measure a
    if a ≠ 0: measure b; correct based on b

performs one round of stabilizer measurements and performs a correction based on the following round of stabilizer measurements (b) only if the first round was non-trivial (a ≠ 0). These two sets of rules yield equivalent results for the 17-qubit surface code under circuit-level depolarizing noise.
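The two processing rules can be phrased as short routines; the sketch below is our paraphrase of the logic described above, with `measure_syndrome` and `decode` as assumed callables:

```python
def three_round_correction(measure_syndrome, decode):
    """Conventional rule: repeat the measurement and use a tie-breaking
    third round only when the first two rounds disagree."""
    a, b = measure_syndrome(), measure_syndrome()
    return decode(a) if a == b else decode(measure_syndrome())

def adaptive_correction(measure_syndrome, decode):
    """Reduced-depth rule used in the text: correct from a second round
    only when the first round is non-trivial."""
    a = measure_syndrome()
    if all(bit == 0 for bit in a):   # trivial first round: no correction
        return None
    b = measure_syndrome()           # second round filters measurement faults
    return decode(b)
```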
Decoder            Level-1 Pseudothreshold    Computational Time (s)
Lookup Table       3.0 × 10⁻³                 1.1 × 10⁻⁷
Matching (table)   5.5 × 10⁻³                 1.43 × 10⁻⁶

Figure 10: Performance of the two lookup table style decoders considered for implementation into a near-term quantum error correction experiment. Lookup table style decoders were chosen due to their easy integration into the control software of an ion trap system.
Minimum Weight Perfect Matching
For topological codes, minimum weight matching algorithms have been shown to be a useful heuristic technique for performing error correction [113-115]. For the distance-3 surface code, the minimum weight perfect matching rules can be encoded into a lookup table that presents a correction operation based on three rounds of syndrome measurement (for circuit-level depolarizing noise) [28]. Figure 10 shows the performance of the two lookup table style decoders, standard lookup and matching-rule-derived lookup, considered for implementation in a near-term experimental quantum error correction routine. Lookup table decoders were chosen for their easy integration into existing ion trap experimental controls, which have restricted logic/memory available relative to other techniques, such as maximum likelihood [116,117] or deeper memory-step matching algorithms [113-115], which would require additional processing power to implement and integrate into an experiment. The lookup table decoder was favored over the matching table because it requires one less round of stabilizer measurement to perform fault-tolerant error correction, with a comparable level-1 pseudothreshold to the matching table (figure 10). Because current estimates of the syndrome extraction indicate it is relatively slow (figure 7), the ability to choose a correction fault-tolerantly from a minimal number of experimental operations is important to maintain coherence of the encoded information. The lookup table was implemented in all further simulations because of its ease of integration into ion trap controls while requiring at most two syndrome measurements to fault-tolerantly perform error correction.
Error Correction on Ion Traps
Now that a fast, light-memory, high-performance decoder has been identified, we will switch attention to using such a method to apply error correction on the 17-qubit surface code under the influence of ion trap errors. First, we must map the abstract quantum circuit used for error correction in the surface code to a circuit that implements gates that would be available in an ion trap quantum computer; specifically, single-qubit rotations and Mølmer-Sørensen gates. Next, we will discuss the influence of the individual ion trap error sources (outlined in section 4) on the fault-tolerance of the surface code mapped to a linear ion chain, highlighting the experimental parameter regimes which would allow for fault-tolerance for the surface code implementation. Finally, we analyze the error subset probabilities from the importance sampling simulations to understand the roles of the competing error sources and gain insight into the error sources that are most influential/detrimental to the error correcting properties of the code.
Surface-17 Syndrome Extraction Circuit Gate Compilation
As shown in figure 8, the two-qubit gates in the syndrome extraction circuit for the 17-qubit surface code must be decomposed into single-qubit rotation gates and two-qubit Mølmer-Sørensen gates. In addition, Hadamard gates are required during the measurement of the X-type stabilizers, which can be decomposed into rotation gates in two equivalent ways (up to a global phase):

$$H \simeq R_X(\pi)\, R_Y(\pi/2) \simeq R_Y(-\pi/2)\, R_X(\pi).$$

Note that the implementation of the rotation gates constructing the CNOT gate allows for some freedom in the direction of the rotation, which can be used to reduce the number of primitive gates (an outline of the ion trap compilation techniques can be found in [77]). The parameter s ∈ {+1, −1} in the circuit is dictated by the sign of the interaction parameter χ between two ions, which is determined by the experimental apparatus. At our layer of abstraction, the value of s is left as a free parameter. Applying such a compilation method allowed for the reduction of the number of single-qubit gates from 48 in the naive implementation to 30 in the compiled circuit; the number of entangling gates cannot be reduced in the error correction routine, leaving 24 Mølmer-Sørensen gates as well. A representation of the compiled syndrome extraction circuit is shown in figure 11, where the ancillary wires have been suppressed. This circuit was used for all further results.
Single Error Source Dominant Effects
In this section, we characterize the influence of the error sources in the limit where each error type is the dominant source of error. The simulations that generate the following pseudothresholds therefore have varying single- and two-qubit error rates (recall that p_x = p_y = p_z = p_xx/10) and constant heating, depolarizing, or spin dephasing error rates during simulations. Our goal is to find a parameter range under which, again in this limit of a dominant error source, fault-tolerant retention of the encoded information would be possible. In all instances, a two-qubit gate fidelity of ≥ 99.9% and an error source error rate below a critical rate are necessary to allow for fault-tolerance (see figure 12). We discuss those critical rates for each error source below. Ion heating was characterized by a parameterized representation of the heating rate ṅ/2K, where ṅ is the heating (in quanta/s) of the gate motional mode and K is the number of loops in phase space traversed by the Mølmer-Sørensen gate. As shown in figure 12a, fault-tolerance is not achieved at a heating rate above 25 quanta/s, which corresponds to a motional mode heating rate during the gate of 100 and 200 quanta/s for K = 2 and K = 4, respectively. A heating rate (ṅ) of about 58 quanta/s has been observed for a single ⁹Be⁺ ion on a room temperature surface trap [118], and a silicon-based trap in a cryogenic environment used to trap individual ⁴⁰Ca⁺ ions exhibited heating rates as low as 0.33 quanta/s (0.6(2) quanta/s on average) [119]. Note that macroscopic traps exhibit significantly lower heating rates relative to surface traps; for instance, a single trapped ¹¹¹Cd⁺ ion exhibited a heating rate of 2.48 quanta/s in a room temperature macroscopic trap [120]. However, additional difficulties arise for macroscopic traps in engineering a system that allows for the ion separation, addressing, and detection required for an error correction protocol. Also, the use of sympathetic cooling ions has been shown to reduce motional mode heating effects on T₂* [121]; a method which could reduce the heating rates of the idle computational qubits during the error correction routine.
The depolarizing error channel was applied to simulate stochastic error processes. One such process of interest is spontaneous Raman and Rayleigh scattering, which results in single- and two-qubit gate errors. Figure 12b displays an upper limit on the per-gate scattering rate of 8 × 10⁻⁴ to allow for fault-tolerance when scattering errors dominate. Ozeri et al. have shown that gate errors due to Raman scattering occur at a rate less than 10⁻⁴ for single-qubit gates, but two-qubit gates have scattering rates on the order of 10⁻² for their experimental setup for various species of trapped ions [108]. These achieved scattering rates are still above the theoretical lower bound on the scattering rates for single- and two-qubit gates for ¹⁷¹Yb⁺ by 3 and 7 orders of magnitude, respectively [108], showing potential for improvement, especially in the two-qubit scattering case. Rayleigh scattering errors are less substantial, resulting in error rates per gate orders of magnitude below the Raman scattering error rates for heavy ions such as ¹⁷¹Yb⁺ [108].
Spin dephasing was modeled assuming a constant dephasing rate that scales linearly with the time of the applied gate. The upper bound on the error rate (figure 12c) corresponds to a dephasing rate of 15 s⁻¹. These values are related to T₂* [121]. Note that the use of magnetic clock transitions [111,122], decoherence-free subspaces [123], or sympathetic cooling ions [121] during computation has been observed to increase the T₂ coherence times of the qubits to the order of seconds. A 5-qubit system that has implemented small quantum algorithms [17] and the [[4,2,2]] error detection code [58] with hyperfine qubits exhibits a T₂* of ≈ 0.5 s [45], but further magnetic field stabilization could improve this, as shown in [121], which exhibits over a 10 minute coherence time for trapped ¹⁷¹Yb⁺ ions.
Competing Error Sources: Dominant Errors
To characterize the dominant error sources contributing to the logical error rate in the 17-qubit surface code in the case where multiple error sources are competing, we take advantage of the importance sampling technique. We will briefly outline the importance sampling method, highlighting the use of error subsets that will be independently analyzed to gain insight into the effect of each error source on the logical error rate of the encoded state. This will then be followed by an analysis of the statistically significant error subsets, which will be used to characterize the most malignant errors contributing to the failure rate of the error correcting circuit.
Importance Sampling
This method is an adaptation of the method from [32], but extended to the case where multiple error sources are available during the simulation. The method relies on approximating the logical error rate as a sum of statistically weighted logical error rates of error subsets. For low enough physical error rates, few subsets need to be sampled in order to obtain an accurate approximation, which makes the approach considerably more efficient than standard direct Monte Carlo sampling. The subsets are indexed according to the number of errors present in the circuit. For instance, for the standard depolarizing error model, the subsets would be indexed according to the number of single- and two-qubit errors present in the circuit. Sampling error configurations corresponding to the number of errors for this subset and calculating the fraction of configurations resulting in a logical error gives an effective subset error rate A_{s,t}. Multiplying this subset logical error rate by the total statistical weight of the error subset will provide the subset's contribution to the total logical error rate. Computing the statistical weight is done as follows:

$$W_{s,t} = \binom{n_s}{s}\, p_s^{\,s}\,(1-p_s)^{n_s-s}\, \binom{n_t}{t}\, p_t^{\,t}\,(1-p_t)^{n_t-t},$$

where s and t are the number of single- and two-qubit errors in the circuit being considered, respectively. These are also the indices of the subset. The values n_s and n_t are the number of single- and two-qubit fault points in the circuit, respectively. The values p_s and p_t are the single- and two-qubit error channel probabilities, and |e| denotes the weight of the error. Estimating the logical error rate then constitutes calculating the following sum:

$$p_L \approx \sum_{s,t\,:\,W_{s,t} \geq W} W_{s,t}\, A_{s,t},$$

where subsets with statistical weights below a chosen cutoff value, W, are omitted from the sum. Note that, with this method, the sampling of each error subset only needs to be performed once to generate a logical error curve. We altered the method above to handle situations where errors of equivalent types have different error rates; such is the situation for our ion heating and dephasing error models with ion-dependent gate times, which influence the error rate per qubit. To motivate this point, consider the quantum circuit in figure 13. This circuit contains three two-qubit gates, a, b, and c, with different error rates p_a, p_b, and p_c, respectively.

Figure 13: Circuit containing three two-qubit gates, labeled a, b, and c, with error rates p_a, p_b, and p_c, respectively.
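For the uniform-rate case, the subset weight and the truncated sum for p_L can be computed directly; the sketch below follows the binomial form given above (our implementation, not the authors' code):

```python
from math import comb

def subset_weight(s, t, n_s, n_t, p_s, p_t):
    """Statistical weight of the (s, t) error subset for uniform rates."""
    return (comb(n_s, s) * p_s**s * (1 - p_s)**(n_s - s) *
            comb(n_t, t) * p_t**t * (1 - p_t)**(n_t - t))

def logical_error_rate(A, n_s, n_t, p_s, p_t, cutoff=1e-6):
    """p_L ~= sum of W_{s,t} * A_{s,t} over subsets above the weight cutoff.
    A: dict mapping (s, t) to the sampled subset logical error rate."""
    total = 0.0
    for (s, t), a_st in A.items():
        w = subset_weight(s, t, n_s, n_t, p_s, p_t)
        if w >= cutoff:
            total += w * a_st
    return total
```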
The weight of the (0, 2) subset would then be:

$$W_{0,2} = p_a\, p_b\,(1-p_c) + p_a\,(1-p_b)\, p_c + (1-p_a)\, p_b\, p_c,$$

so the two-qubit subset calculation requires one more ingredient: we need to sum over all n-tuple error configurations (f_n) during the subset weight calculation:

$$W_{s,t} = \binom{n_s}{s}\, p_s^{\,s}\,(1-p_s)^{n_s-s} \sum_{f_t} \prod_{i \in f_t} p_i \prod_{j \notin f_t} (1-p_j),$$

where f_t runs over the t-element configurations of two-qubit fault points. When we adapt this approach to heating errors in an ion trap circuit, we get the following calculation of the subset weight:

$$W_h = \sum_{f_h} \prod_{i \in f_h} p_h^{(i)} \prod_{j \notin f_h} \bigl(1-p_h^{(j)}\bigr),$$

where p_h^{(i)} and p_h^{(j)} are the individual error rates of the heating error channels for each two-qubit configuration on which an entangling gate is and is not applied in the simulation, respectively. We have taken into account the influence of the different rates for the calculation of the subset weights, but this also has an influence on the sampled subset logical error rates, as certain error configurations will be more probable than others.
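The heterogeneous-rate weight requires enumerating the error configurations explicitly. A brute-force sketch (our illustration, adequate for the small circuits considered here; `rates` holds one probability per fault point):

```python
from itertools import combinations

def subset_weight_hetero(k, rates):
    """Weight of the k-error subset when each fault point has its own rate,
    summing over all k-element error configurations (the f_n in the text).
    For the figure-13 example: subset_weight_hetero(2, [p_a, p_b, p_c])
    = p_a*p_b*(1-p_c) + p_a*(1-p_b)*p_c + (1-p_a)*p_b*p_c."""
    n = len(rates)
    total = 0.0
    for cfg in combinations(range(n), k):
        w = 1.0
        for i in range(n):
            w *= rates[i] if i in cfg else (1 - rates[i])
        total += w
    return total
```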
Because the heating error rates are linearly proportional to the gate times in our error model, we have chosen to sample heating error configurations from a gate-time-weighted distribution of error configurations, giving a corresponding logical error rate of A_{s,t,h}. With the new subset weights and subset logical error rates, the estimation of the total logical error rate naturally extends from equation 17. Note that heating adds an extra subset label: (s, t, h). The indices s, t, and h represent the number of single-qubit gate, two-qubit gate, and heating errors sampled, respectively. Recall that the ion trap error model from section 4 contains 5 distinct error sources. Therefore, we extended the concepts from equations 17 and 20 to calculate the logical error rate of the 17-qubit surface code under the influence of single-qubit gate, two-qubit gate, ion heating, background depolarization, and dephasing errors. The analysis below will include 5-index subsets, ordered with the indices listing the number of single-qubit gate, two-qubit gate, heating, background depolarization, and dephasing errors sampled in the circuit, in that order.
Competing Error Sources: Sampling Subset Analysis
For the importance sampling simulations of the 17-qubit surface code, a subset weight cutoff of W > 10⁻⁶ was used and 30,000 samples were collected for the calculation of each subset's logical error rate A_{s,t,h,dep,z}. This weight cutoff corresponds to events expected to be sampled at least once out of a million randomly sampled events, which is sufficient for near-term error correction experiments. To calculate the statistical weights of the subsets, a single-qubit gate error rate (p_x = p_y = p_z), two-qubit gate error rate (p_xx), rate of heating (r_heat), background depolarizing noise error rate (p_dep), and rate of dephasing (r_d) of 10⁻⁴, 10⁻³, 25 quanta/s, 8 × 10⁻⁴, and 15 s⁻¹ were chosen, respectively, which correspond to the error rates that exhibit a logical error rate equal to the two-qubit gate error rate (see the green curves in figure 12). The logical error rates and statistical error weights calculated for each subset are presented in figure 14. An expanded number of subsets beyond this cutoff were run and are shown in figure 16 (see Supplementary Material). The goal of this analysis is to parse out situations where certain error sources show a dominant contribution to the failure rate of the quantum error correcting circuit.
The logical error rates for each of the subsets sampled are shown in blue in figure 14. The error subsets containing two-qubit gate or heating errors tend to have higher logical error rates than other subsets containing a comparable number of errors. This occurs due to the ability of errors of this type to generate measurement faults in the circuit. The Mølmer-Sørensen gate transforms single-qubit data errors in the following manner: Z⊗I ↔ Y⊗X and Y⊗I ↔ −Z⊗X, where the data and ancilla qubit errors are the first and second elements, respectively. A two-qubit gate or heating error makes preexisting errors undetectable, which is a particularly malignant case. The tendency towards measurement errors in the ion trap error model indicates that implementing a decoder that makes a correction based on more syndrome measurement rounds may show an above average performance boost in error correction. Error subsets containing single-qubit gate errors tend to have lower logical error rates than other subsets with a comparable number of errors. To understand why this is the case, we explore the effect the errors have on error correction. Figure 15 shows how single-qubit gate errors transform preexisting errors in the circuit. The particularly malignant case is when there is a measurement error, which can introduce errors into the code. For each single-qubit fault point, there are only two elements of the two-qubit Pauli group that are transformed in a manner that would result in a measurement error. In fact, half of the elements of this group result in single-qubit errors (or no errors) on data that can be readily decoded in the following step of stabilizer measurement (see figure 15). The remaining errors are detectable but not necessarily corrected properly (this depends on the location at which the fault occurs). However, these errors do alert the decoder to the location of an error on the code, which is favorable, and the faulty correction on these qubits will not propagate errors in a malignant manner given that the next round of stabilizer measurement is correct. Take note that one of the malignant errors transformed in figure 15b (an R_X(±π/2) gate error) is XX, which is the form of the two-qubit gate and heating errors. Therefore, compiling out the R_X(±π/2) gates seems to have also boosted the efficiency of the decoder in decoding two-qubit gate and heating errors, in addition to the obvious performance boost from fewer general fault points in the compiled circuit. Another alarming malignant configuration in figure 15c is the ZX error, which is the result of the Mølmer-Sørensen transformation of Y⊗I (a single-qubit data error). However, this fault requires two single-qubit Y errors, which have a low statistical weight of occurrence (see figure 14).

Figure 14: The subset logical error rates and subset statistical weights above the cutoff of 10⁻⁶, corresponding to events expected to be sampled from a random distribution at least once out of a million samples. The data is separated into four plots according to the order of magnitude of the subset statistical weights (e.g. panel (d) covers 10⁻⁶ ≤ W < 10⁻⁵), which are plotted in red. The logical error rates for the subsets are plotted in blue. Note that the product of the subset weight and its corresponding logical error rate dictates the subset's contribution to the total logical error rate of the code. For calculation of the statistical weights of the subsets, the single-qubit gate error rate (p_x = p_y = p_z), two-qubit gate error rate (p_xx), rate of heating (r_heat), background depolarizing noise error rate (p_dep), and rate of dephasing (r_d) were 10⁻⁴, 10⁻³, 25 quanta/s, 8 × 10⁻⁴, and 15 s⁻¹, respectively, which correspond to the parameters allowing for the logical error rate equivalent to the unencoded two-qubit gate error rate in section 5.4. A plot containing similar subset information for data beyond the subset cutoff is shown in figure 16.
How else can we use this information to improve error correction? One obvious extension is to use such statistics to develop decoders targeted at such errors. For instance, this information about the failure rates at the logical level may be used to bias transition matrix elements of a maximum likelihood decoder to include information about the influence of error cosets on the code's performance, instead of only considering the statistical weights of the error cosets [124]. Code considerations when optimizing the ion chain layout could also serve to bound the effects of the gate-time-dependent error sources. Specifically, optimizing the ion chain to assign the most distant qubits (with the longest gate times) to weight-2 stabilizers can reduce the influence of anisotropic error sources. Consider a scenario where two-qubit gate and heating errors dominate; then assigning the faultiest gates to the weight-2 X-type stabilizers would bound the influence of the most probable heating errors to single-qubit X errors on the data. While this does not apply to our current error model, where dephasing (Z) and heating (XX) errors both have the same dependency on gate time, this is probably not the case experimentally and, the greater the anisotropy of the errors, the more their effect can be reduced.
Figure 15: The errors in rotation gates will transform the faults that exist after a two-qubit gate. Panel (a): faults existing after a Mølmer-Sørensen gate are transformed by single-qubit rotation gates and their errors. The transformation of an existing two-qubit Pauli error (ignoring the phase) after a single-qubit gate error on wires that contain an R_X(±π/2) and an R_Y(±π/2) gate is shown in (b) and (c). The first and second element of the Pauli error correspond to the error on the data and ancilla qubit, respectively. There are two types of single-qubit over-rotation errors, X and Y, which transform Pauli errors according to (b) and (c), respectively. Measurement errors can introduce errors into the code through faulty correction when errors occur in the following round of stabilizer measurement. Flagged errors are detectable errors that are not always corrected properly, but the error stays local to the qubit given that the following syndrome measurement is accurate. Single data errors are favorable because the error stays local to the data qubit and can be more easily corrected in the next round of measurement, since no faulty information was sent to the decoder. Self-correction may occur as well for specific errors. Applying the transformation II ↔ ZI to the E₂ values in (c) gives the resulting error on wires containing only R_Y(±π/2) gate errors.
The subset statistical weights (probabilities of occurrence) are shown in red in figure 14. These statistical weights give insight into the likelihood of sampling particular error events. Recall that a subset's contribution to the total logical error rate of the code (used to generate the pseudothreshold plots in figure 12) is the product of the subset weight and the subset logical error rate (see section 5.5.1). Only ten points above the subset weight of 10⁻³ (figure 14a) have a significant contribution to the total logical error rate; that is, this small collection of subsets can be used to completely recreate the pseudothreshold plots in figure 12. In fact, the two subsets, (0, 0, 1, 0, 1) and (0, 0, 1, 1, 0), have the largest contribution to the encoded logical error rate and bound the logical error rate to p_L ≈ A_{s,t,h,dep,z} × W ≈ 1 × 10⁻³, which corresponds to a two-qubit gate fidelity of 99.9% (recall that the gate error rate for calculation of the subset weights was 10⁻³). This essentially recreates our calculation of a 99.9% two-qubit gate fidelity for fault-tolerance that used more subsets. Therefore, changes in the statistics of the dominating subsets have significant influence on the observed pseudothreshold of the quantum error correcting code and can be considered when implementing a decoding algorithm. This also illustrates the concern that a success metric such as the (pseudo)threshold only represents the mean statistics of an underlying error model [125].
Conclusions
We studied the feasibility of implementing quantum error correction with the 17-qubit surface code on a linear chain of atomic qubits. Optimization of the ion chain showed a preference for mixed data/ancilla configurations to reduce the gate times for syndrome extraction. We showed that the 17-qubit surface code contains enough structure to allow for the use of two 16-key lookup tables for error correction, with a respectably high pseudothreshold of 3 × 10⁻³ for circuit-level depolarizing noise. The lookup table decoder is easily integrated into the logic of the ion trap controls and decodes at a rate much faster (O(ns)) than any physical operation on the qubits. When modeling ion trap error sources, it was shown that a two-qubit gate fidelity of ≥ 99.9% is required in the cases where ion heating, scattering, or spin dephasing are the dominant error sources. Furthermore, the parameter regimes that allow for fault-tolerant error correction are not outlandish for such error sources. Finally, we took advantage of the error subset data required for our simulations to parse out trends that occur when multiple error sources act during error correction. We found that two-qubit gate and heating errors are the most malignant error sources, and that single-qubit gate errors are manageable in our ion trap error model. We also speculate on how this subset information can be used to reduce the influence of malignant error sources on error correction. Similar calculations have recently been done for the Steane [[7,1,3]] code in a linear chain of ions that also allows for the rotation of ion crystals. The calculations presented in [69] use a different Pauli error model for ion trap errors that emphasizes memory errors. The key difference in approach is that our model includes enough ancillae such that a full syndrome measurement takes place during a single measurement time. The serialization of measurements in [69], when combined with intrinsic memory errors, requires a lower physical qubit error rate in order to achieve a break-even logical error rate.
To better assess the performance of quantum error correcting codes in real systems, more detailed physical error models are warranted [102]. A promising approach is to use realistic error channels with quantum trajectories to avoid simulating the entire density matrix. As recently shown for superconducting systems and the surface-17 code, a smart choice in circuit representation allows the entire 17 qubit system to be modeled with only 10 active qubits [126]. In the future, we plan to apply this technique to experimentally derived error models for ion traps to help assess which coherent errors have the most deleterious effect.
Supplemental Information
For the importance sampling method, error subsets were labeled according to the number of errors due to single-qubit gates, two-qubit gates, ion heating, depolarizing, and dephasing processes, which were modeled with Pauli errors in the manner shown in figure 9. For each subset, no more than 6 errors of one type were introduced into the circuit due to the total subset weight cutoff introduced. A natural method of partitioning such a data set is to represent each error subset as a base-7 integer in the following manner:

$$\text{index} = n_s \cdot 7^0 + n_t \cdot 7^1 + n_{heat} \cdot 7^2 + n_{dep} \cdot 7^3 + n_z \cdot 7^4,$$

where digits are read from left to right. For instance, the subset (1, 5, 2, 1, 2) = 5279. This representation clusters the subset data into a readable format shown in the plots below.
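The base-7 packing and its inverse are straightforward to implement; the sketch below (our own) reproduces the worked example (1, 5, 2, 1, 2) → 5279:

```python
def subset_to_index(counts):
    """Encode (n_s, n_t, n_heat, n_dep, n_z) as a base-7 integer,
    least-significant digit first: (1, 5, 2, 1, 2) -> 5279."""
    return sum(c * 7**i for i, c in enumerate(counts))

def index_to_subset(idx, n_types=5):
    """Invert the encoding back to the error-count tuple."""
    counts = []
    for _ in range(n_types):
        idx, d = divmod(idx, 7)
        counts.append(d)
    return tuple(counts)

assert subset_to_index((1, 5, 2, 1, 2)) == 5279
assert index_to_subset(5279) == (1, 5, 2, 1, 2)
```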
Serum cholesterol-lowering efficacy of stanol ester incorporated in gelatin capsules
Background: The cholesterol-lowering properties of plant sterols have been known since the 1950s. In most clinical studies the efficacy of plant stanol ester has been studied when incorporated into mayonnaise, regular or low-fat spreads or yoghurt. Objective: The purpose of this study was to confirm the cholesterol-lowering efficacy of plant stanol ester when incorporated in capsules as part of a normal everyday diet. Design: The study had a randomized double-blind parallel design with an intervention period of 3 weeks. Forty-two subjects were randomized to receive either the active capsule (2 g of stanol) or the placebo capsule. Results: Stanol ester capsules effectively decreased the low-density lipoprotein (LDL)-cholesterol level by 8.5% [95% confidence interval (CI) 4.1 to 13.0%, p < 0.001 vs baseline, p < 0.05 vs placebo]. Total cholesterol concentration was reduced by 4.6% (95% CI 1.3 to 8.0%, p < 0.01) and the apolipoprotein B level by 6.5% (95% CI 2.0 to 11.0%, p < 0.01) versus baseline. The LDL/HDL ratio was reduced by 10.6% (95% CI 5.2 to 16.0%, p < 0.001) versus baseline. Conclusions: Plant stanol ester reduces LDL-cholesterol effectively, even when provided in a fat-free matrix such as capsules.
Introduction
The cholesterol-lowering properties of plant sterols have been known since the 1950s (1). Plant sterols decrease the absorption of both biliary and dietary cholesterol from the small intestine (2). The saturated form of plant sterols, plant stanols, decreases the serum levels of both cholesterol and plant sterols, and stanols are not absorbed to any significant degree (3).
Fat-soluble plant stanol esters were developed to be used as an ingredient in different food products to achieve significant reduction in serum total cholesterol and low-density lipoprotein (LDL)-cholesterol levels. Plant stanols are currently recommended for cholesterol lowering by several authoritative bodies, e.g. the US National Cholesterol Education Program recommends plant stanols or plant sterols as part of therapeutic lifestyle changes when normal dietary measures are not effective enough (4). According to a meta-analysis of clinical trials, the optimum daily intake of plant stanols is 2–3 g as plant stanol ester (5). So far, the cholesterol-lowering efficacy of plant stanol ester alone has been confirmed in more than 40 clinical studies. In most of the clinical studies the efficacy of plant stanol ester has been studied when incorporated into mayonnaise, regular or low-fat spreads or yoghurt (6–8). Previous studies have shown that incorporation of unesterified stanols in capsules does not have a significant cholesterol-lowering effect (9).
The purpose of this study was to evaluate whether esterification of plant stanol restores the cholesterol-lowering potential of plant stanol when administered as capsules.
Subjects
In total, 45 subjects with normal serum cholesterol to mild hypercholesterolaemia (serum total cholesterol between 4.5 and 7 mmol l⁻¹, and total triglycerides below 3.0 mmol l⁻¹) were recruited to the study at the Nokia Ltd occupational health-care centre in the city of Salo (south-west Finland). Other inclusion criteria were age 35–55 years; normal liver, kidney and thyroid function; no lipid-lowering medication; and no history of unstable coronary artery disease (myocardial infarction, coronary artery bypass graft or percutaneous transluminal coronary angioplasty within the previous 6 months), diabetes, transient ischaemic attack or malignant diseases. Subjects with alcohol overconsumption (> 2 portions per day or more than 16 portions per week) or pregnancy were excluded. Subjects using plant stanol- or plant sterol-containing products were excluded, although those who discontinued the consumption of these products a minimum of 3 weeks before the beginning of the study could be included. The inclusion criteria were checked during the screening visit before randomization. Of the recruited subjects, 42 (20 men and 22 women) were eligible to participate in the study.
The subjects gave written informed consent before initiation of the study. The protocol, the subject information leaflet and the informed consent form were approved by the ethics committee of the Health Care District of South-west Finland.
Study design
The study was carried out from the beginning of February to the end of March 2005, applying a randomized, double-blind, parallel design with an intervention period of 3 weeks. Randomization was stratified by serum cholesterol level and gender, i.e. men with serum total cholesterol ≤ 5.7 and > 5.7 mmol l⁻¹, and women with serum total cholesterol ≤ 5.7 and > 5.7 mmol l⁻¹, separately, to ensure the comparability of the groups. The total cholesterol used in randomization was measured from the first blood sampling of the study at the screening visit. After the screening phase all subjects were randomized into one of two groups. One of the groups received placebo capsules and the other group used similar capsules containing plant stanol ester (2 g of plant stanol; Raisio plc, Raisio, Finland).
Routine laboratory measurements were taken to ensure normal health status at the first and last visit of the study. In addition, health history, current medication, and alcohol and tobacco consumption were explored through interview by a structured questionnaire on the first visit of the study. Fasting blood samples were taken at the beginning of the pretrial period (week −1), at the beginning of the experimental period (week 0), and twice at the end of the study period (3 weeks; variation between blood samples 2–5 days). Body weight was recorded at the beginning (week −1) and end of the study (3 weeks). Possible adverse effects and symptoms were recorded in a diary during the study.
Test products and diet
Experimental products were capsules containing 830 mg of plant stanol ester per capsule, corresponding to 490 mg of stanol in one capsule. Placebo capsules contained 830 mg of rapeseed oil. The stanol was derived from soya oil sterols and the fatty acid ester from rapeseed oil fatty acids.
The targeted daily intake of plant stanols from the stanol ester capsules was 2.0 g. Therefore, the targeted daily number of capsules was four. The subjects were advised to take capsules with a meal: either four capsules with one meal (preferably breakfast) or two capsules with two different meals.
Diet was monitored by food frequency questionnaires at the start and end of the study to follow possible changes in the consumption of foodstuffs that influence serum cholesterol levels. A food frequency list was checked immediately in the presence of the subjects by study personnel for incomplete filling or other deviations. In addition, subjects recorded the daily number of capsules and time of consumption, as well as possible side-effects, in a daily diary, and they were asked not to change their habitual diet or physical activity habits during the intervention.
Laboratory measurements
All blood samples were collected after a 12 h overnight fast. Blood samples were not taken on Mondays, to minimize the effect of dietary changes at the weekends. Blood count, serum γ-glutamyltransferase (s-GGT) and serum alanine aminotransferase (s-ALT) were measured at the beginning and end of the study to ensure normal health status of the subjects. Serum high-density lipoprotein (HDL)-cholesterol, total cholesterol and total triglycerides were analysed by enzymic photometric methods using commercial kits (Thermo Clinical, Finland). LDL-cholesterol estimates were calculated by Friedewald's equation. Apolipoproteins A and B were analysed immunonephelometrically (Dade Behring BNA, Marburg, Germany).
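For reference, Friedewald's equation in the units used here (mmol l⁻¹) estimates LDL-cholesterol as total cholesterol minus HDL-cholesterol minus triglycerides/2.2; a minimal sketch with illustrative (not study) values:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL-cholesterol (mmol/l) by Friedewald's equation:
    LDL = TC - HDL - TG/2.2. Valid only for TG below ~4.5 mmol/l;
    this study excluded subjects with TG above 3.0 mmol/l."""
    return total_chol - hdl - triglycerides / 2.2

# e.g. TC 5.7, HDL 1.4, TG 1.5 mmol/l -> LDL ~= 3.6 mmol/l
ldl = friedewald_ldl(5.7, 1.4, 1.5)
```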
Statistical analysis
The power calculations were done for LDL-cholesterol as the primary outcome variable. It was assumed that the treatment effect would be at least a 10% decrease in the mean value of LDL (5,9). It was also assumed that in each of the experimental groups the baseline values for mean and standard deviation of LDL would be about 4.5 and 0.7, respectively (5,9). The values of LDL were the average of two recordings. If the standard deviations of the two initial recordings were 0.7, the standard deviation of their average is 0.49. The treatment effect of a 0.5 unit decrease in the mean value of LDL would correspond to a decrease of 11%. There was expected to be clear correlation within the subjects between the repeated measurements at the baseline and at the end of follow-up. This correlation was assumed to be 0.60. Comparison of the treatments was carried out by testing the difference in the mean LDL changes during the follow-up, using two-sided, two-sample t-tests. The power was chosen to be 85% in the calculations.
The power calculations showed that if there was a 0.5 unit difference in the mean value of the treatment group compared with the placebo group, it would be significant at the level of 0.017 if the number of subjects in each group was 15. This significance level corresponds to α = 0.05 after Bonferroni correction for three comparisons. Because of possible dropouts, the number of subjects was fixed to be at least 20 in both experimental groups.
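A hedged reconstruction of this power calculation using a plain two-sample t-test (our approximation: the authors' calculation also exploited the assumed 0.60 within-subject correlation, which this simple form does not model):

```python
from statsmodels.stats.power import TTestIndPower

# SD of the average of two recordings, as stated in the text (~0.49)
sd_avg = 0.7 / 2**0.5
# Standardized effect size for a 0.5 mmol/l treatment effect
effect_size = 0.5 / sd_avg

# Power of a two-sided, two-sample t-test at alpha = 0.017, n = 15 per group
power = TTestIndPower().power(effect_size=effect_size, nobs1=15,
                              alpha=0.017, alternative='two-sided')
```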
The statistical significance of differences between the treatment groups in the baseline measurements and in the baseline characteristics was evaluated using the chi-squared test for categorical variables and a two-sample t-test for continuous variables. Wilcoxon's rank sum test was used for comparisons between treatment groups in the difference of the number of days between visits. The treatment effect on each lipid and blood variable was analysed by studying the change in values recorded before and after the treatment period. To eliminate random fluctuation in any one laboratory recording for each subject, the average value of visit 1 and visit 2 was taken to represent the value before the treatment period, and the average value of visit 4 and visit 5 was taken to represent the value after the treatment period. Then, for each subject, the change was calculated by subtracting the before-treatment value from the after-treatment value. Percentage changes were calculated as the size of the change relative to the value of the variable before the treatment period.
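A minimal sketch of the per-subject change computation described above, with illustrative (not study) values:

```python
def treatment_change(v1, v2, v4, v5):
    """Per-subject change as described in the text: average the two
    baseline recordings (visits 1 and 2) and the two end-of-treatment
    recordings (visits 4 and 5), then take the difference."""
    before = (v1 + v2) / 2
    after = (v4 + v5) / 2
    change = after - before
    pct_change = 100 * change / before
    return change, pct_change

# e.g. LDL recordings 4.6, 4.4 mmol/l at baseline and 4.1, 3.9 after
change, pct = treatment_change(4.6, 4.4, 4.1, 3.9)  # -0.5 mmol/l, ~-11%
```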
Statistical analysis of intragroup changes in the lipid and blood recordings was carried out with paired t-tests. Two-sample t-tests were used to compare the mean values of the change between the treatment groups. As well as p-values, the 95% confidence intervals (CIs) corresponding to the tests are reported. Statistical computations were performed with the SAS System for Windows, release 9.1.3/2004. p-values < 0.05 were considered statistically significant. All p-values were two-tailed.
Results
There were no differences in baseline characteristics between stanol ester and placebo groups. Serum lipid values between study groups before the treatment period were closely similar (Table 1). Other recorded blood values (blood count, s-GGT, s-ALT) were well within normal values and no differences between groups were found (data not shown).
The mean ± SD values for the lipid measurements before and after treatment for the stanol ester and placebo groups are shown in Table 2. As shown in Fig. 1, stanol ester capsules effectively decreased total cholesterol concentrations by 4.6% (95% CI 1.3 to 8.0%, p < 0.01) and decreased LDL-cholesterol by 8.5% (95% CI 4.1 to 13.0%, p < 0.001) versus baseline.
Subjects were following their habitual diet, and food and nutrient intake did not change during the trial (Table 3). The subjects took the capsules two or four at a time with a meal, as advised. No side-effects on health or lifestyle parameters were reported by the subjects in their daily diaries during the study.
Discussion
In the present study, ingestion of plant stanol ester as capsules for 3 weeks reduced LDL-cholesterol effectively in healthy, slightly hypercholesterolaemic adults. In addition, the apolipoprotein B level was reduced from baseline during plant stanol ester use. No effects were seen in HDL-cholesterol or triglyceride levels. The cholesterol-lowering properties of plant sterols were first reported in animal experiments some 50 years ago by Peterson (1). Plant sterols and stanols reduce cholesterol absorption from the small intestine (3) and thereby serum cholesterol concentration.
Early studies with free, insoluble forms of sterols and stanols failed to show any significant efficacy, even with large doses. Denke incorporated sitostanol in capsules dispersed in oil, but failed to show cholesterol-lowering efficacy with such treatment (9). By esterification with fatty acids, stanols are rendered fat soluble and show significant cholesterol-lowering efficacy when ingested as foods (10). The first commercial food application for esterified stanol was margarine. Several clinical studies have since shown the efficacy of stanol ester in reducing serum cholesterol levels in both short- and long-term studies (6–8, 11). They not only play an important part in the control of cholesterol levels in healthy subjects, but also have additive effects with statins in lowering LDL-cholesterol levels in patients with coronary heart disease (CHD) (11,12). A meta-analysis of trials conducted with plant sterols and plant stanols concluded that 2 g of plant stanols or sterols daily reduces the LDL-cholesterol concentration on average by 10% (5).
Early studies investigated the effect of plant stanols when ingested as several daily doses. A study on consumption frequency drew attention to the mechanism of reducing cholesterol absorption (13). It had been assumed previously that the inhibition of cholesterol absorption is based on stanol replacing cholesterol from the mixed micelles in the upper part of the small intestine. The consumption frequency study showed, however, that administering stanols once a day or three times a day was equally effective in reducing serum cholesterol level (13). This finding suggests that stanols have an additional, longer lasting effect on intestinal mucosal cells. Indeed, it has been shown that the uptake of cholesterol and other sterols into the intestinal mucosa is a rapid process and these substances are also rapidly resecreted back into the intestinal lumen (14). Recent findings suggest that apart from the micellar exclusion of cholesterol absorption there is an additional process by which stanols actively influence the cellular cholesterol metabolism within intestinal enterocytes, i.e. via activation of transporter proteins that resecrete cholesterol back to the intestinal lumen (15).
Today, there are several options for treating high cholesterol levels with pharmaceuticals. According to all international guidelines for treatment of hyperlipidaemia, emphasis should be put on diet to provide an option for individuals with lesser elevations of serum cholesterol. Restriction of saturated fatty acids and cholesterol and increasing the intake of fibre and unsaturated fatty acids reduce CHD risk. However, compliance with dietary recommendations remains a major problem. Cholesterol reductions of 15–20% have been reported (4). This study shows that the food vehicle does not have to have a high content of fat to deliver the cholesterol-reducing effect. So far, the efficacy of plant sterols and stanols has been shown in high-fat products such as margarines (6), high-protein products such as milk and yoghurt (18–22), and carbohydrate-rich products such as pasta, bread and cereals (18,23,24). However, in a study comparing the effect in milk-based versus carbohydrate-based products, the efficacy was superior in milk products (18). The timing of the meal is also apparently of importance. It is important to note that to gain the full cholesterol-lowering effect, esterified stanol has to be hydrolysed in the upper small intestine to enter into, and replace cholesterol from, the micelles. For hydrolysis of the ester bond, pancreatic esterase is needed; this is achieved only through a stimulus by a proper amount of food entering the digestive tract. Therefore, with food vehicles that are low in fat or are only small snacks or capsules, as in this trial, it is important to consume the stanol ester product with a proper meal. In fact, a recent study showed that when a yoghurt drink with sterol ester was drunk on an empty stomach, its cholesterol-lowering efficacy was only half that when the same drink was consumed with a meal (19). In addition, one study with free and esterified sterols suggested that consumption of such a product for breakfast may not be effective (25), whereas a breakfast study with stanol ester showed a proper effect (22).
The full benefit from consuming plant stanols can be achieved when their consumption is regular and of a sufficient amount. It has been estimated that maintaining the lipid-lowering effect for a minimum of 2 years may be associated with reduced mortality from heart disease (26). Several food forms are available in which plant stanols have been incorporated to offer consumers a wide choice over several dietary patterns. Capsules with esterified plant stanols present consumers with another, easy-to-dose alternative for the incorporation of plant stanols for significant cholesterol-lowering efficacy.
Inhibitory Effect of a Microecological Preparation on Azoxymethane/Dextran Sodium Sulfate-Induced Inflammatory Colorectal Cancer in Mice
This study aims to investigate the antitumor effect and the possible mechanism of a microecological preparation (JK5G) in mice. The mice treated with AOM/DSS were randomly divided into two groups, the model group and the JK5G group, and a blank control group was included. Fecal samples were used for liquid chromatography–mass spectrometry and 16S rRNA gene sequencing analyses to reveal metabolic perturbations and gut flora disorders and thereby demonstrate the effects of JK5G. Compared with the mice in the control group, the weight and food intake of mice after JK5G treatment were both upregulated. Moreover, JK5G could inhibit the growth of colon tumors and prolong the survival of mice, as well as inhibit the levels of cytokines in serum. The proportions of lymphocytes, T cells, CD3+CD4+ T cells, and CD3+CD8+ T cells in the spleen of the JK5G mice were all significantly increased compared to those in the control group (p < 0.05). Similarly, compared with the model group, the proportions of lymphocytes, B cells, T cells, natural killer T cells, CD3+CD4+ T cells, and CD3+CD8+ T cells in the intestinal tumors of the JK5G mice were significantly increased (p < 0.05). Furthermore, 16S rRNA high-throughput sequencing data revealed that Alloprevotella in the JK5G group was significantly upregulated, and Ruminiclostridium, Prevotellaceae_UCG_001, and Acetitomaculum were significantly downregulated. Fecal and serum metabolite analysis detected 939 metabolites, such as sildenafil and pyridoxamine, as well as 20 metabolites, including N-Palmitoyl tyrosine and dihydroergotamine, which were differentially expressed between the JK5G and model groups. Integrated analysis of 16S rRNA and metabolomics data showed that there were 19 functional relationship pairs, including 8 altered microbiota, such as Ruminiclostridium and Prevotellaceae_UCG_001, and 16 disturbed metabolites, between the JK5G and model groups. This study revealed that JK5G treatment was involved in inhibiting the growth of colorectal cancer, which may be associated with the role of JK5G in improving the nutritional status of mice and regulating the tumor microenvironment by regulating the changes of intestinal microbiota and metabolites acting on different pathways.
INTRODUCTION
Intestinal flora is one of the most complex microbial systems in the human body. It exists in a dynamic balance of symbiosis, coexistence, and co-prosperity with its human host, and has very important physiological functions for human health (1,2). A growing number of studies have found that intestinal flora plays an important role in the occurrence of tumors, while digestive system tumors, among the most common malignant tumors in the world, pose a serious threat to human health and constitute a major source of economic burden in the field of world public health (3,4). Colorectal cancer (CRC), one of the most common malignancies of the digestive system, is the fourth deadliest cancer in the world, accounting for about 881,000 deaths worldwide in 2018. Most CRCs are characterized by an orderly carcinogenic process in which mutations accumulate over an average of 10 to 15 years (5,6).
In recent years, the clinical application of microecological preparations in CRC has been increasing gradually (7). In the perioperative period of CRC, disorders of the intestinal microecological environment (intestinal flora imbalance, intestinal barrier function damage, and bacterial flora migration) and decline of cellular immune function often occur; the former is not conducive to postoperative intestinal function recovery, while the decline of cellular immune function provides an opportunity for tumor recurrence (3,8,9). However, some microecological nutrition agents can promote the recovery of intestinal mucosal barrier function, reduce the incidence of postoperative complications, and shorten the length of hospital stay in patients with CRC during the perioperative period. Usually, the microecological balance can be adjusted and maintained through enzyme action, antibacterial action, adhesive colonization, and a biological barrier, while improving the health of the host. Previous studies have reported that the main functions of microecologics can be summarized as protection, immunity, bacteriostasis, balance, nutrition, antitumor effects, liver protection, and lowering of blood glucose.
With the development of high-throughput sequencing technology, many studies have shown that microbial metabolites have important effects on host physiology. Notably, the 16S rRNA sequencing technique is a fast and well-tested method widely used to analyze the differential abundance of microbial communities and their correlation with environmental factors (10,11). Moreover, liquid chromatography–mass spectrometry (LC-MS)-based metabolomics is emerging as a valuable tool for the targeted profiling of numerous small-molecule metabolites. Intestinal dysfunction affects the body's absorption and metabolism, resulting in metabolic disorders. The combination of microbiological and metabolomic techniques suggests that the imbalance of intestinal flora is directly related to the imbalance of various metabolites in many diseases (12)(13)(14).
The azoxymethane/dextran sodium sulfate (AOM/DSS) mouse model stands as a relevant preclinical inflammation-associated CRC model with characteristic histological and phenotypic features. AOM/DSS simulates the physiological and pathological process of cancer induced by chronic intestinal inflammation, and therefore it has been widely used to study the formation mechanism of inflammation-related cancer (15,16). The purpose of this study was to investigate the antitumor effect of a microecological preparation, JK5G, by constructing this tumor model.
Animals and Experimental Design
Thirty specific pathogen-free C57BL/6J mice (male, aged 4-5 weeks) were obtained from Changzhou Cavens Laboratory Animal Co. Ltd. All animals were housed with free access to water and food under standard laboratory conditions. A total of 20 mice were randomly selected to generate the AOM/DSS model of CRC. The mice treated with AOM/DSS were then randomly divided into two groups, the model and JK5G groups. Briefly, the mice were injected intraperitoneally with AOM (10 mg/kg). A week later, DSS (molecular weight 36,000-50,000) was dissolved in sterilized water to prepare a 2.5% DSS solution, which was provided as drinking water; the mice were then given regular sterile water for 14 days. Water intake was not restricted. Additionally, another 10 mice were set as the control group and were injected with an equal volume of normal saline intraperitoneally. The JK5G group was administered 10 mg of JK5G (Japan Kyowa Industrial Co., Ltd.) once a day by oral gavage, starting from the first injection of AOM. Microecological preparation JK5G is a high-concentration complex that is rich in bacteria and their metabolites, including lactococcus and 21 kinds of compound lactobacillus bacteria, as well as peptidoglycans and metabolites from the lactococcus cytoderm.
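As a quick sanity check on the dosing described above, the arithmetic can be scripted. This is a minimal sketch; the 20 g body weight and the 500 ml batch volume are illustrative assumptions, not values from the study.

```python
# Back-of-the-envelope dosing arithmetic for the protocol above.
# The 20 g body weight and 500 ml batch are assumed illustrative values.

def aom_dose_mg(body_weight_g: float, dose_mg_per_kg: float = 10.0) -> float:
    """Per-mouse AOM dose (mg) for an intraperitoneal injection."""
    return dose_mg_per_kg * body_weight_g / 1000.0

def dss_grams(volume_ml: float, percent_w_v: float = 2.5) -> float:
    """Grams of DSS needed for a given volume of a w/v solution."""
    return percent_w_v / 100.0 * volume_ml

if __name__ == "__main__":
    print(f"AOM per 20 g mouse: {aom_dose_mg(20.0):.2f} mg")           # 0.20 mg
    print(f"DSS for 500 ml of 2.5% solution: {dss_grams(500):.1f} g")  # 12.5 g
```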
At 0 and 8 weeks, fecal samples were collected and then stored at −80°C before the analysis of metabolic profiling and microbial community. The animal protocol was approved by the Animal Ethics Committee of Xuzhou Medical University. At the end of the experimental procedure, 10 mice from each group were randomly selected for the measurement of hind leg muscle circumference. Additionally, the level of albumin in serum was quantified using ELISA kits (R&D Systems Europe, United Kingdom) in accordance with the instructions provided by the manufacturer. Moreover, mice were anesthetized by intraperitoneal injection of 3% sodium pentobarbital (40 mg/kg). Then, the blood, spleen, and intestinal tissue samples were collected for data analysis. All samples were stored at −80°C until additional analyses.
Extraction of Single-Cell Suspension From Intestinal Tumor or Spleen
To extract single cells from intestinal tumors, 1-2 cm of cecum tumor tissue was placed in 10 ml of 1640 medium containing 10% fetal bovine serum. After that, the tissue samples were added to solution A [50 ml of PBS, 3 ml of 0.5 mM ethylenediaminetetraacetic acid, 500 µl of 1 M 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, and 25 µl of 2 M dithiothreitol] and shaken vigorously at 37°C for 10 min to remove intestinal epithelial cells. Subsequently, the tissue samples were transferred to a new 50-ml centrifuge tube filled with 10 ml of solution B [50 ml of PBS, 3 ml of 0.5 mM ethylenediaminetetraacetic acid, and 500 µl of 1 M 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid] and shaken vigorously for 10 min. Following washing with 10% fetal bovine serum, type VIII collagenase (200 U/ml, 1 mg/ml) was added for 1-1.5 h. The cell precipitate was resuspended with 4 ml of 40% Percoll solution and 2.5 ml of 80% Percoll solution.
To extract single cells from spleen tissues, the spleen was ground with the blunt end of the syringe, and the cell suspension was absorbed by the dropper and filtered through a 100-µm filter into a 50-ml centrifuge tube. After centrifugation, the supernatant was removed, and 2 ml of erythrocyte lysis buffer was added at room temperature for 2 min. Then, 2 ml of PBS solution was added and centrifuged at 400 g for 5 min to remove the supernatant, which was repeated twice to obtain the single-cell spleen suspension.
ELISA
The levels of inflammatory factors, including tumor necrosis factor (TNF)-α, interleukin (IL)-2, IL-4, IL-6, IL-10, and interferon-γ in the serum, were measured using commercial mouse ELISA kits (Quanzhou Konodi Biotechnology Co., Ltd.). The absorbance of the final products was detected on a microplate reader at the wavelength of 450 nm.
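Converting such A450 readings into concentrations typically runs through a standard curve. The sketch below fits a four-parameter logistic (4PL) curve and inverts it; the curve model, standard concentrations, and absorbance values are all illustrative assumptions, since the kit documentation is not reproduced here.

```python
# Hypothetical 4-parameter logistic (4PL) standard-curve fit for ELISA readouts.
# The standard concentrations and absorbances are made-up illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration, d: response at saturation,
    # c: inflection point (EC50), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500])    # pg/ml
std_od   = np.array([0.12, 0.20, 0.35, 0.60, 1.02, 1.55, 2.10])  # A450

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 100.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.8, *params))  # interpolated cytokine concentration, pg/ml
```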
Flow Cytometry
The cells were harvested from the mouse spleen and tumor tissues with 10% PBS and 0.5 mM EDTA. Digested tissues were centrifuged at 2,000 rpm for 7 min and resuspended using 40% Percoll (GE, 17-0891-09). The cells were centrifuged at 2,500 rpm for 20 min again and then incubated with antibodies (the exact antibody information is shown in Supplementary Table S1). A BD LSRFortessa X-20 (BD Biosciences, San Jose, CA, United States) was used for flow cytometry analysis. CD4+CD25+Foxp3+ characterization was performed according to the eBiosciences kit (San Diego, CA, United States).
Histological Examination
The tissue samples were fixed in neutral formalin [10% (v/v)], embedded in paraffin, and sectioned into 5-µm slices. Then, the sections were stained with hematoxylin and eosin, and the infiltration of immune cells in the tissue sections was examined under light microscopy.
DNA Extraction and Sequencing
The microbial DNA was extracted from 50 mg of thawed fecal sample with the E.Z.N.A. stool DNA kit. Using the diluted genomic DNA as the template and based on the selected sequencing region, barcoded specific primers and Takara Ex Taq high-fidelity polymerase (Takara) were used for PCR to ensure amplification efficiency and accuracy. The V3 and V4 hypervariable regions of the 16S rRNA gene were amplified by polymerase chain reaction employing the forward primer (5'-TACGGRAGGCAGCAG-3') and reverse primer (5'-AGGGTATCTAATCCT-3'). Purified libraries were constructed following the manufacturer's instructions, and the Illumina MiSeq platform (Illumina, San Diego, CA, United States) was used for sequencing. There were six samples for each group.
USEARCH software was used to demultiplex the raw reads (17). Operational taxonomic unit (OTU) picking was performed with the Quantitative Insights Into Microbial Ecology (QIIME) bioinformatics pipeline (18). The 16S rRNA gene sequences were clustered at 97% similarity using UCLUST (17). The indexes of alpha diversity were calculated with MOTHUR (3). The rarefaction curve and bar graphs were drawn with the R vegan package (19). Beta diversity was estimated and visualized using principal coordinate analysis.
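To make the beta-diversity step concrete, here is a minimal sketch of a principal coordinate analysis on an OTU table. The tiny count table is fabricated for illustration, and Bray-Curtis is an assumed dissimilarity metric, since the text does not state which distance was used.

```python
# Minimal beta-diversity sketch: Bray-Curtis distances + PCoA on an OTU table.
# The OTU table is fabricated; Bray-Curtis is an assumed metric.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa

samples = ["ctrl_1", "ctrl_2", "model_1", "model_2", "jk5g_1", "jk5g_2"]
otu_counts = np.array([
    [120,  30,  5, 60],
    [110,  25,  8, 55],
    [ 40, 100, 50, 10],
    [ 35,  90, 60, 12],
    [ 80,  60, 20, 40],
    [ 75,  65, 25, 35],
], dtype=float)

# Relative abundances, then pairwise Bray-Curtis dissimilarities
rel = otu_counts / otu_counts.sum(axis=1, keepdims=True)
dm = DistanceMatrix(squareform(pdist(rel, metric="braycurtis")), ids=samples)

ordination = pcoa(dm)
print(ordination.proportion_explained[:2])   # variance captured by PC1/PC2
print(ordination.samples[["PC1", "PC2"]])    # sample coordinates for plotting
```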
To predict metagenomic function, OTUs were selected at 97% similarity with the closed-reference OTU picking protocol (QIIME) against the Greengenes database (20). Reconstruction of the metagenome was conducted with the Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) software (21). Predicted functional genes were cataloged into the KEGG orthology and then compared between different groups (model vs. control, JK5G vs. model) using Kruskal-Wallis tests. Pathways with p < 0.05 were considered significant.
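The per-pathway comparison boils down to one rank-based test per predicted pathway. Below is a minimal sketch, assuming a table of predicted pathway abundances with six samples per group; the abundance values are simulated placeholders.

```python
# Sketch of the per-pathway group comparison: a Kruskal-Wallis test on predicted
# KEGG pathway abundances, one test per pathway. All data here are simulated.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_pathways = 5
# Rows: pathways; columns: samples per group (6 per group, as in the study)
model_abund = rng.gamma(2.0, 1.0, size=(n_pathways, 6))
jk5g_abund = rng.gamma(2.5, 1.0, size=(n_pathways, 6))

for p in range(n_pathways):
    stat, pval = kruskal(model_abund[p], jk5g_abund[p])
    flag = "significant" if pval < 0.05 else "n.s."
    print(f"pathway {p}: H={stat:.2f}, p={pval:.3f} ({flag})")
```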
LC-MS Metabolomics Processing
Fecal samples for metabolomics were analyzed using the LC-MS platform. The UPLC system was coupled to an AB Sciex Triple TOF 5600 mass spectrometer. Briefly, the samples were thawed at room temperature, and then each sample was transferred to a 1.5-ml centrifuge tube. After adding 10 µl of internal standard, each sample was vortexed for 30 s and then centrifuged at 12,000 rpm at 4°C for 15 min. Subsequently, the supernatant (200 µl) was transferred to a new vial. Mass spectrometer (MS) detection was conducted with the quadrupole time-of-flight mass spectrometry system with electrospray ionization in positive ion mode, and the specific conditions were as follows: desolvation temperature, 500°C; declustering potential, 80 eV; ion spray voltage, 5,500 V; collision energy, 10 eV; and desolvation gas flow rate, 50 ml/min.
Metabolomics Data Analysis
To process the metabolomics data, principal component analysis, partial least squares discriminant analysis, and orthogonal partial least squares discriminant analysis (OPLS-DA) were conducted on the raw LC-MS data. At the beginning of the analysis, and after every five samples thereafter, quality control samples were injected and tested to assess LC-MS stability throughout the collection process. Features detected in fewer than 50% of the quality control samples or fewer than 80% of the biological samples were removed. The ions were identified by combining the retention time and m/z data. The significant differences between groups (model vs. control, JK5G vs. model) in metabolites were analyzed by Student's t-tests, with a p-value < 0.05 and an OPLS-DA VIP > 1 considered to indicate statistical significance. Metabolic pathway analysis was conducted for the significantly differentially expressed metabolites using the Kyoto Encyclopedia of Genes and Genomes database. The p-values were adjusted for multiple testing (Benjamini-Hochberg), and an adjusted p-value < 0.05 was considered statistically significant.
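A minimal sketch of this screening rule follows, assuming per-metabolite intensity matrices for the two groups and VIP scores already extracted from a fitted OPLS-DA model; the data below are simulated.

```python
# Sketch of the differential-metabolite screen: per-metabolite Student's t-test
# between two groups, Benjamini-Hochberg adjustment, and a VIP > 1 filter.
# The VIP scores are placeholders assumed to come from a fitted OPLS-DA model.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_metabolites = 200
group_a = rng.normal(0.0, 1.0, size=(n_metabolites, 6))   # model group
group_b = rng.normal(0.3, 1.0, size=(n_metabolites, 6))   # JK5G group
vip = rng.gamma(2.0, 0.6, size=n_metabolites)              # placeholder VIPs

_, pvals = ttest_ind(group_a, group_b, axis=1)
_, p_adj, _, _ = multipletests(pvals, method="fdr_bh")

selected = (pvals < 0.05) & (vip > 1.0)
print(f"{selected.sum()} metabolites pass p < 0.05 and VIP > 1")
print(f"{(p_adj < 0.05).sum()} survive Benjamini-Hochberg at 0.05")
```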
Integrated Analysis of 16s rRNA and Metabolomics Data
To detect the relationships between the gut microbiota and metabolites, pathways that were common or similar between the two data sets were selected. Furthermore, co-expression relationship pairs of 16S rRNA and metabolomics data in the model versus control groups and the JK5G versus model groups, respectively, were screened. The relationship pairs with correlation coefficients |r| > 0.8 and p < 0.001 were used for exploring the correlation of fecal microbiota and metabolites.
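The pair screen itself is a brute-force loop over (taxon, metabolite) combinations. A minimal sketch using Spearman correlation, as named in the Results; the abundance matrices are simulated placeholders.

```python
# Sketch of the taxon-metabolite pair screen: Spearman correlation for every
# (taxon, metabolite) pair, keeping |r| > 0.8 with p < 0.001. The matched
# per-sample abundance vectors below are simulated placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_samples = 12
taxa = rng.random((8, n_samples))          # e.g., 8 differential taxa
metabolites = rng.random((16, n_samples))  # e.g., 16 differential metabolites

pairs = []
for i, t in enumerate(taxa):
    for j, m in enumerate(metabolites):
        r, p = spearmanr(t, m)
        if abs(r) > 0.8 and p < 0.001:
            pairs.append((i, j, r, p))

print(f"{len(pairs)} relationship pairs pass |r| > 0.8, p < 0.001")
```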
Statistical Analysis

SPSS 19.0 and GraphPad Prism 7.0 were used for statistical analysis. Survival analysis was conducted by the Kaplan-Meier method. The differences between two groups were evaluated with the two-tailed Student's t-test, and the differences among three groups were analyzed using one-way analysis of variance.
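As an illustration of the survival comparison, the sketch below fits Kaplan-Meier curves and runs a log-rank test with the lifelines package. The death counts match those reported below (0/10, 4/10, and 1/10), but the individual death times are fabricated, since only the final counts are given.

```python
# Kaplan-Meier sketch for the three-arm survival comparison. Death counts match
# the study (0/10 control, 4/10 model, 1/10 JK5G); death times are fabricated.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

durations = np.array([70]*10 + [35, 42, 50, 58] + [70]*6 + [55] + [70]*9)
events    = np.array([0]*10 + [1, 1, 1, 1]      + [0]*6 + [1]  + [0]*9)
groups    = np.array(["control"]*10 + ["model"]*10 + ["JK5G"]*10)

kmf = KaplanMeierFitter()
for g in ("control", "model", "JK5G"):
    mask = groups == g
    kmf.fit(durations[mask], events[mask], label=g)
    print(g, kmf.survival_function_.iloc[-1].values)  # final survival fraction

result = multivariate_logrank_test(durations, groups, events)
print("log-rank p =", result.p_value)
```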
General Observation
As shown in Figure 1A, after 25 days of intervention, the weight of mice in the control group increased slowly, but the weight of mice in the model group and the JK5G group decreased gradually. Moreover, the weight of mice in the JK5G group was always slightly higher than that in the model group. At the end of the experiment, the weight of the control, model, and JK5G groups was 19.38 ± 1.52, 16.44 ± 0.99, and 17.56 ± 0.91 g, respectively, and the differences were statistically significant. In addition, when compared with the control group, the hindlimb muscle circumference and serum albumin levels of mice in the model group and the JK5G group were significantly decreased (Figures 1B,C). The hindlimb muscle circumference of the JK5G group was higher than that of the model group; however, the difference was not significant.
Additionally, at 3 weeks of the experiment, the food intake of mice in the model group and the JK5G group decreased significantly; however, the food intake of mice in the JK5G group was always higher than that in the model group (Figure 1D).
Furthermore, during the intervention period, no mice died in the control group (0/10), four mice died in the model group (4/10), and one mouse died in the JK5G group (1/10). The mortality rates of mice in the three groups were 0, 40, and 10%, respectively. Survival analysis showed that there were significant differences in survival rates among the three groups (p = 0.028; Figure 1E).
JK5G Could Inhibit the Growth of Colon Tumor
The colon tissue was collected from mice at the end of the study, and tumor growth was observed. The results showed that there was an observable difference in colon tumor number among the three groups (p < 0.001; Figure 1F). The mean number of colon tumors in the JK5G group was observably lower than that in the model group (p < 0.05), suggesting that JK5G could inhibit the growth of colon tumors. Furthermore, the colon of mice in the model group was observably shorter than that in the control group (p < 0.001; Figure 1G). However, there was no observable difference in colonic length between the JK5G group and the model group.
More importantly, in the middle of the experiment, hematoxylin and eosin staining of the colon tissues showed that there was no normal intestinal gland structure in the intestinal mucosa of the model group and the JK5G group; instead, there were irregular tubular glands and focal glandular hyperplasia, formed by the arrangement of tall columnar tumor cells and protruding into the lumen (black arrow). However, the structural damage of the intestinal glands in the model group was more serious than that in the JK5G group. Moreover, at the end of the experiment, tumor tissues of the colon in the model group and the JK5G group grew compressing the surrounding tissues, from which they were clearly separated. Meanwhile, there was a small amount of mitosis (red arrow) and a small amount of inflammatory cell infiltration in the interstitial connective tissue (yellow arrow) (Figure 1H). All these data further confirmed that JK5G could inhibit the growth of colon tumors induced by AOM/DSS treatment.
FIGURE 1 | Mouse weights, general nutritional status, food intake, and survival rate and growth of colorectal cancer over time. When compared with the model group, the differences in weight (A), food intake (D), and survival rate (E) of the microecological preparation group were significant (p < 0.05). The muscle circumference (B) and serum albumin (C) of the hindlimb of the microecological preparation group were higher than those of the model group, but the difference was not significant (p > 0.05). Compared with the model group, the tumor size (F) and number (G) were lower in the microecological preparation group (p < 0.05). Hematoxylin and eosin staining of colon tissues showed that the tissue in the model group exhibited an obvious boundary between the tumor and surrounding tissues, with inflammatory cells infiltrating into the submucosa (H). *p < 0.05; **p < 0.01; ***p < 0.001.

JK5G Reduces Serum IL-6, IL-10, and TNF-α in AOM/DSS-Treated Mice

To further detect the effects of JK5G on CRC carcinogenesis, the serum cytokine levels were measured by ELISA. As illustrated in Figure 2, when compared with the control group, the IL-2, IL-4, IL-6, IL-10, TNF-α, and interferon-γ levels in the model group were all upregulated. Furthermore, TNF-α, IL-2, and IL-6 levels in the JK5G group were observably reduced in comparison to those in the model group (p < 0.05).
Subsequently, the immune cells in the spleen, including lymphocytes, B cells, natural killer cells, and T cells, as well as helper T cells (CD3+CD4+ T cells), cytotoxic T cells (CD3+CD8+ T cells), and regulatory T cells (Tregs), were further investigated. The results revealed that, in comparison to the model group, the proportions of lymphocytes, CD3+CD4+ T cells, and CD3+CD8+ T cells in the JK5G group were observably upregulated (Figure 3A and Supplementary Figure S2; all p < 0.05). In addition, the proportions of CD4+ interferon+ T cells in the spleen of mice in the JK5G group were observably higher than those in the model group (p < 0.001; Figure 3B and Supplementary Figure S3).
Likewise, the proportions of lymphocytes, B cells, total T cells, CD3+CD4+ T cells, natural killer T cells, and CD3+CD8+ T cells in the spleen and tumor tissues of the JK5G group were observably upregulated compared to those in the model group (all p < 0.05; Figure 4).
JK5G-Induced Changes in the Intestinal Bacterial Microbiome
The results of the alpha diversity analysis showed that the species accumulation curves in this experiment tended to be flat, indicating the adequacy of sampling (Supplementary Figures S1A-C); furthermore, according to Supplementary Figures S1D, S2F, the samples of the same group were basically gathered together, indicating that the differences within each group were small and the experiment was reliable.
Differential flora analysis was conducted on the model versus control groups and the JK5G versus model groups at the phylum and genus levels, respectively (Figure 5). The significant differences were detected using a Kruskal-Wallis test, and flora with a p-value < 0.05 were considered significantly different. As shown in Supplementary Table S2A, the phyla Verrucomicrobia and Actinobacteria were overrepresented in the model group compared to the control group. At the genus level, the 16S rRNA expression of Prevotellaceae_UCG_001, Bifidobacterium, and Akkermansia was observably increased in the model group (p < 0.05) in comparison to the control group, while Lachnospiraceae_UCG_008, Ruminococcaceae_UCG_005, and Lachnospiraceae_UCG_001 levels were observably downregulated in the model group. Similarly, Supplementary Table S2B shows the top 10 differential microbiomes between the JK5G and model groups. Alloprevotella was found to be overrepresented in the JK5G group as compared to the model group. Conversely, the 16S rRNA levels of Ruminiclostridium, Prevotellaceae_UCG_001, and Acetitomaculum in the JK5G group were observably downregulated compared to those in the model group (all p < 0.05).

FIGURE 2 | Microecological preparation reduces interleukin (IL)-6, IL-10, and tumor necrosis factor-alpha (TNF-α) levels in AOM/DSS-treated mice. IL-6, IL-10, and TNF-α levels in the JK5G group were significantly reduced compared to those in the model group. *p < 0.05; **p < 0.01; ***p < 0.001.
Using the Phylogenetic Investigation of Communities by Reconstruction of Unobserved States software, the differential pathways between the model versus control groups, as well as the JK5G versus model groups, were screened. Briefly, a total of 20 pathways differed between the model and control groups (Supplementary Table S3), and 7 pathways, such as ether lipid metabolism and linoleic acid metabolism, differed between the JK5G and model groups. More importantly, as illustrated in Figure 5, the differential pathways could significantly distinguish the two groups.
JK5G-Induced Gut Fecal Metabolic Profiling Analysis
Principal component analysis showed that the fecal metabolite quality control samples were closely clustered together, indicating that the experiment was stable and reproducible (Figures 6A,B). A total of 172 differentially expressed metabolites (106 downregulated and 66 upregulated) were obtained between the model and control groups, such as carboxylic acid, 3'-galactosyllactose, indole, lenticin, and bassic acid. Moreover, 939 metabolites (423 downregulated and 517 upregulated) were found to be differentially expressed between the JK5G and model groups (Figures 6C,E), including pyridoxamine, stevioside, isoleucyl-tyrosine, dinorcapsaicin, 2-hydroxycinnamic acid, L-valine, and serotonin. The top 20 differentially expressed metabolites for each comparison are summarized in Supplementary Table S4.
To further explore the mechanism of action of the differentially expressed metabolites, a KEGG database pathway analysis was performed. According to Figure 6D, 30 pathways were enriched between the model and control groups, such as synaptic vesicle cycle, central carbon metabolism in cancer, biosynthesis of amino acids, choline metabolism in cancer, mineral absorption, and so on, and 11 pathways were enriched by the differentially expressed metabolites between the JK5G and model groups, including cholinergic synapse, synaptic vesicle cycle, gastric acid secretion, regulation of actin cytoskeleton, and so on.
JK5G-Induced Serum Metabolic Profiling Analysis
Twenty differentially expressed metabolites were obtained between the model and control groups (Supplementary Table S5), such as hydroxydecanoic acid, kynurenine, 11Z-hexadecenoic acid, dehydrovomifoliol, noravicholic acid, and pyranomammea C. Moreover, 20 metabolites were found to be differentially expressed between the JK5G and model groups, including N-palmitoyl tyrosine, dihydroergotamine, liquoric acid, sulfacetamide, kolanone, eszopiclone, glucoside, flutamide, and L-urobilin. Cluster analysis showed that these differentially expressed metabolites could be distinguished significantly in the different groups (Figures 7A,B). Furthermore, according to Figures 7C,D, 26 pathways, such as central carbon metabolism in cancer, linoleic acid metabolism, the glucagon signaling pathway, and choline metabolism in cancer, were enriched between the model and control groups, and 28 pathways were enriched by the differentially expressed metabolites between the JK5G and model groups, including central carbon metabolism in cancer, mineral absorption, biosynthesis of amino acids, and bile secretion. Furthermore, compared to the results of the fecal metabolic pathways, there were 10 common pathways, such as central carbon metabolism in cancer, choline metabolism in cancer, glycerophospholipid metabolism, and sphingolipid metabolism, in the model versus control groups, and two common pathways (protein digestion and absorption, and glycerophospholipid metabolism) in the JK5G versus model groups.
Correlation Between the Gut Microbiome and Metabolome
To further explore the relationship between the metabolites and intestinal flora, the common or similar pathways were screened according to the integrated analysis. In the model versus control comparison, the results showed that sphingolipid metabolism, the sphingolipid signaling pathway, and glycosphingolipid biosynthesis (lacto and neolacto series) affected the disease at the metabolomic level and were also closely related to the disease at the level of the intestinal flora. However, there were no pathways with the same or similar functions between the JK5G and model groups in the integrated analysis of 16S rRNA and metabolome data.
According to the Spearman correlation coefficient threshold of |r| > 0.8 (p < 0.001), 209 relationship pairs, including 24 differential microbiota and 159 differential metabolites, were screened in the model versus control comparison (Figure 8A). In addition, Figure 8B revealed that there were 19 functional relationship pairs, including eight altered microbiota, such as Ruminiclostridium and Prevotellaceae_UCG_001, and 16 disturbed metabolites, such as C10H11NO3 and C21H32O8, following JK5G application.
DISCUSSION
Despite the advancement of therapeutic strategies for CRC, the long-term prognosis of patients with the disease remains unsatisfactory; therefore, it is still urgent to identify useful drugs without side effects for the treatment and prognosis of CRC. This study, based on the AOM/DSS-derived CRC mouse model, indicated that JK5G suppresses the growth of CRC, resulting in a better survival rate of the mice; moreover, JK5G treatment has significant effects on intestinal microbiota and metabolites by regulating the different pathways involved in CRC.
The combination of microecologics and the intestinal mucosa can prevent the reproduction and invasion of bacteria, strengthening the barrier function of the intestinal mucosa and the antibacterial effect. Medina et al. (22) have found that Bifidobacterium and Lactobacillus have the function of regulating immunity, and some living bacteria and their metabolites can induce the production of interferon, enhance the body's immunity, and play antitumor and anti-allergy roles. A study has found that adding beneficial bacteria to traditional enteral nutrition can inhibit the growth of pathogenic bacteria, improve the distribution of intestinal flora, and improve the body's immunity and anti-infection ability (23). In the present study, the results showed that JK5G treatment could significantly inhibit the growth of colon tumors, as well as the levels of cytokines in serum, such as IL-2, IL-6, and TNF-α. Moreover, the proportions of lymphocytes, T cells, CD3+CD8+ T cells, and CD3+CD4+ T cells in the JK5G group were observably upregulated compared to those in the control group. Usually, plasma TNF-α, IL-2, and IL-6 have been regarded as important cytokines related to the progression of CRC (24,25). Moreover, CRC is associated with increased concentrations of IL-2, IL-6, and TNF-α. Furthermore, increases in Tregs, lymphocytes, T cells, and CD3+CD4+ T cells have been widely observed in CRC patients (26,27). The data suggested that JK5G treatment could suppress the progression of CRC. In this study, CD4+ and CD8+ T cells were increased while Treg and NK cells were not changed. We speculate that NK and Treg cells were non-significantly decreased in the JK5G group, which may raise the possibility of a combined hyperplasia of CD4+ and CD8+ T cells.

FIGURE 8 | Red circles indicate upregulated differential bacteria, blue circles indicate downregulated differential bacteria, green squares represent downregulated differential metabolites, and pink squares represent upregulated differential metabolites.
Microecologics have been used in feed, agriculture, medicine, health care, and food. As pollution-free preparations that follow the natural circulation laws of the ecological environment, microecologics will be a development trend in the additive industry in the future. The gut flora is composed of different bacteria that produce a wide variety of metabolites. The gut is specifically responsible for the selection of these microorganisms, which play important roles in metabolic functions (28). Factors such as nutritional intervention, host condition, and toxicological injury can induce microbial regulation and affect host health. Hence, a comprehensive analysis of intestinal flora and metabolites is helpful for understanding the changes of the host physiological state under the above types of interference (29). In the present study, we found that Alloprevotella was overrepresented, while Ruminiclostridium, Prevotellaceae_UCG_001, and Acetitomaculum in the JK5G group were observably downregulated in comparison to those in the model group. Furthermore, the integrated analysis of 16S rRNA and metabolomics data showed that there were 19 functional relationship pairs, including eight altered microbiota (e.g., Ruminiclostridium and Prevotellaceae_UCG_001) and 16 disturbed metabolites (e.g., C10H11NO3 and C21H32O8), between the JK5G and model groups. Actually, Alloprevotella has been regarded as a cancer-related bacterium in colon cancer and visceral leishmaniasis (30,31). Moreover, a previous study showed that the abundance of the Ruminiclostridium genus was related to the enhancement of animal immune responses to spore adsorbent antigen and probiotics (32). Similarly, Prevotellaceae_UCG_001 was overrepresented in AOM/DSS mice in comparison to controls (33). Hence, we speculate that microecologics play a role in regulating the tumor microenvironment (immunity and inflammation) through the intestinal flora and its metabolites. More importantly, pathway analysis in this study revealed several differential pathways, such as the ether lipid metabolism and linoleic acid metabolism pathways. In fact, ether lipid metabolism has been reported in oral squamous cell carcinoma (34); moreover, linoleic acid metabolism was found to play an important role in hepatocarcinogenesis (35). However, their roles as activated by the intestinal bacteria in CRC remain unclear, and whether microecologics affect intestinal flora and metabolites by regulating these pathways needs to be verified in a future study.
In conclusion, the inhibition of CRC development by JK5G may be associated with improving the nutritional status of mice and regulating the tumor microenvironment (immunity and inflammation) by modulating the proportions of intestinal microbiota and metabolites.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by The Animal Ethics Committee of Xuzhou Medical University. Written informed consent was obtained from the owners for the participation of their animals in this study.
AUTHOR CONTRIBUTIONS
HS, MM, WY, WZ, and CR conceived and designed the experiments. JZ, ZC, and SW performed the experiments and analyzed the data. JZ and WY drafted and revised the manuscript. All authors read and approved the final manuscript.
A threshold phenomenon for embeddings of $H^m_0$ into Orlicz spaces
We consider a sequence of positive smooth critical points of the Adams-Moser-Trudinger embedding of $H^m_0$ into Orlicz spaces. We study its concentration-compactness behavior and show that if the sequence is not precompact, then the liminf of the $H^m_0$-norms of the functions is greater than or equal to a positive geometric constant.
Introduction and statement of the main result
Let $\Omega \subset \mathbb{R}^{2m}$ be open, bounded and with smooth boundary, and let a sequence $\lambda_k \to 0^+$ be given. Consider a sequence $(u_k)_{k\in\mathbb{N}}$ of smooth solutions to (1). Assume also that (2) holds. In this paper we shall prove

Theorem 1. Let $(u_k)$ be a sequence of solutions to (1), (2). Then either

(i) $\Lambda = 0$ and $u_k \to 0$ in $C^{2m-1,\alpha}(\overline{\Omega})$, or

(ii) we have $\sup_\Omega u_k \to \infty$ as $k \to \infty$. Moreover there exists $I \in \mathbb{N}\setminus\{0\}$ such that $\Lambda \ge I\Lambda_1$, where $\Lambda_1 := (2m-1)!\,\mathrm{vol}(S^{2m})$, and up to a subsequence there are $I$ converging sequences of points $x_{i,k} \to x^{(i)}$ and of positive numbers $r_{i,k} \to 0$, the latter defined by (3), such that the following is true:

1. If we define the rescaled functions as in (4), they converge to the function $\eta_0$ discussed below.

2. For every $1 \le i \ne j \le I$ we have (5), where $C$ does not depend on $x$ or $k$.
Solutions to (1) arise from the Adams–Moser–Trudinger inequality [Ada]:

$$\sup_{u\in H^m_0(\Omega),\ \|u\|_{H^m_0}^2\le \Lambda_1} \int_\Omega e^{mu^2}\,dx = c_0(m) < +\infty, \qquad (6)$$

where $c_0(m)$ is a dimensional constant, and $H^m_0(\Omega)$ is the Beppo-Levi space defined as the completion of $C^\infty_c(\Omega)$ with respect to the norm $\|u\|_{H^m_0} := \|\Delta^{m/2}u\|_{L^2(\Omega)}$, where we used the notation $\Delta^{m/2}u := \nabla\Delta^{\frac{m-1}{2}}u$ for $m$ odd. In fact (1) is the Euler-Lagrange equation of a functional (where $\lambda = \lambda_k$ plays the role of a Lagrange multiplier), which is well defined and smooth thanks to (6), but does not satisfy the Palais-Smale condition. For a more detailed discussion, in the context of Orlicz spaces, we refer to [Str1].
The function $\eta_0$ which appears in (4) is a solution of the higher-order Liouville equation

$$(-\Delta)^m \eta_0 = (2m-1)!\, e^{2m\eta_0} \quad \text{on } \mathbb{R}^{2m}.$$

We recall (see e.g. [Mar1]) that if $u$ solves $(-\Delta)^m u = V e^{2mu}$ on $\mathbb{R}^{2m}$, then the conformal metric $g_u := e^{2u} g_{\mathbb{R}^{2m}}$ has $Q$-curvature $V$, where $g_{\mathbb{R}^{2m}}$ denotes the Euclidean metric. This shows a surprising relation between Equation (1) and the problem of prescribing the $Q$-curvature. In fact $\eta_0$ has also a remarkable geometric interpretation: $e^{2\eta_0} g_{\mathbb{R}^{2m}}$ is the pull-back of the round metric $g_{S^{2m}}$ on $S^{2m}$ under stereographic projection. Then (10) implies $\int_{\mathbb{R}^{2m}} (2m-1)!\, e^{2m\eta_0}\, dx = \Lambda_1$. This is the reason why $\Lambda \ge I\Lambda_1$ in case (ii) of Theorem 1 above; compare Proposition 7.
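As a concreteness check on the last identity, the short LaTeX sketch below records the standard computation, assuming the explicit spherical solution $\eta_0(x) = \log\frac{2}{1+|x|^2}$; this closed form is the classical one from the literature and is stated here as an assumption.

```latex
% Sketch: integrating the conformal factor of the assumed spherical solution
% \eta_0(x) = \log\frac{2}{1+|x|^2} recovers the constant \Lambda_1.
\begin{align*}
\int_{\mathbb{R}^{2m}} (2m-1)!\, e^{2m\eta_0}\, dx
  &= (2m-1)! \int_{\mathbb{R}^{2m}} \left(\frac{2}{1+|x|^2}\right)^{2m} dx\\
  &= (2m-1)!\,\mathrm{vol}(S^{2m}) = \Lambda_1,
\end{align*}
% since e^{2\eta_0} g_{\mathbb{R}^{2m}} is the pull-back of the round metric
% g_{S^{2m}} under stereographic projection, so e^{2m\eta_0}\,dx is precisely
% the spherical volume element.
```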
Theorem 1 has been proved by Adimurthi and M. Struwe [AS] and Adimurthi and O. Druet [AD] in the case $m = 1$, and by F. Robert and M. Struwe [RS] for $m = 2$, and we refer to them for further motivations and references. Here, instead, we want to point out the main ingredients of our approach. Crucial to the proof of Theorem 1 are the gradient estimates in Lemma 6 and the blow-up procedure of Proposition 7. For the latter, we rely on a concentration-compactness result from [Mar2] and a classification result from [Mar1], which imply, together with the gradient estimates, that at the finitely many concentration points $\{x^{(1)}, \ldots, x^{(I)}\}$, the profile of $u_k$ is $\eta_0$, hence an energy not less than $\Lambda_1$ accumulates at each of them. As for the gradient estimates, if one uses (1) and (2) to infer $\|\Delta^m u_k\|_{L^1(\Omega)} \le C$, then elliptic regularity gives $\|\nabla^\ell u_k\|_{L^p(\Omega)} \le C(p)$ for every $p \in [1, 2m/\ell)$. These bounds, though, turn out to be too weak for Lemma 6 (see also the remark after Lemma 5). One has, instead, to fully exploit the integrability of $\Delta^m u_k$ given by (2), namely $\|\Delta^m u_k\|_{L(\log L)^{1/2}(\Omega)} \le C$, where $L(\log L)^{1/2} \subset L^1$ is the Zygmund space. Then an interpolation result from [BS] gives uniform estimates for $\nabla^\ell u_k$ in the Lorentz space $L^{(2m/\ell,2)}(\Omega)$, $1 \le \ell \le 2m-1$, which are sharp for our purposes (see Lemma 5). We remark that when $m = 1$, things simplify dramatically, as we can simply integrate by parts in (2). In the case $m = 2$, F. Robert and M. Struwe [RS] proved a slightly weaker form of our Lemma 6 by using subtle estimates in the $BMO$ space, whose generalization to arbitrary dimensions appears quite challenging. Our approach, on the other hand, is simpler and more transparent.
Recently O. Druet [Dru], for the case $m = 1$, and M. Struwe [Str2], for $m = 2$, improved the previous results by showing that in case (ii) of Theorem 1 we have $\Lambda = L\Lambda_1$ for some positive $L \in \mathbb{N}$.
In the following, the letter C denotes a generic positive constant, which may change from line to line and even within the same line.
I'm grateful to Prof. Michael Struwe for many useful discussions.
From now on, following the approach of [RS], we assume that, up to a subsequence, $\sup_\Omega u_k \to \infty$, and show that we are in case (ii) of the theorem. In Section 2.1 we analyze the asymptotic profile at blow-up points. In Section 2.2 we sketch the inductive procedure which completes the proof.
Analysis of the first blow-up
This, with the boundary condition and elliptic estimates (see e.g. [ADN]), gives

Lemma 2. We have:

Proof. Assume for the sake of contradiction that, up to a subsequence, the contrary holds. Then, passing to a further subsequence, $\Omega_k \to \mathcal{P}$, where $\mathcal{P}$ is a half-space. It follows that, up to a subsequence, $u_k \to u$ in $C^{2m-1,\alpha}_{\mathrm{loc}}(\mathcal{P})$. By (12) and the Sobolev embedding $H^{m-1}(\Omega) \hookrightarrow L^{2m}(\Omega)$, we find $\nabla u \equiv 0$, hence $u \equiv \mathrm{const} = 0$ thanks to the boundary condition. That contradicts $u(0) = 1$.
Lemma 3. We have:

Proof. Assume that $m > 1$. By (12) and the Sobolev embedding, (15) holds. Fix now $R > 0$ and write $v_k = h_k + w_k$, where $\Delta^m h_k = 0$ and $w_k$ satisfies the Navier boundary condition on $B_R$. Then (14) gives (16); this, together with (15), implies a bound on $w_k$. Then, since $\Delta^{m-1}(\Delta h_k) = 0$, we get from Proposition 12 and (18) a corresponding bound on $h_k$. Again by Proposition 12 it follows that (19) holds. By the Ascoli-Arzelà theorem, (16) and (19), we have that, up to a subsequence, $v_k$ converges, where $\Delta^m v \equiv 0$ thanks to (14). We can now apply the above procedure with a sequence of radii $R_k \to \infty$, extract a diagonal subsequence $(v_{k'})$, and find a limit function $v$. By Theorem 13 and (20), $v$ is a polynomial of degree at most $2m-2$. Then (20) and (21) imply that $v$ is constant, hence $v \equiv v(0) = 0$. Therefore the limit does not depend on the chosen subsequence $(v_{k'})$, and the full sequence converges, as claimed. When $m = 1$, Pizzetti's formula and (14) imply the conclusion at once.

An immediate consequence of Lemma 3 is the following

Corollary 4.

Proof. We first show the claim for $f_k$. Indeed, set $\log^+ t := \max\{0, \log t\}$ for $t > 0$. Then, using the simple inequalities
$$\log(2+t) \le 2 + \log^+ t, \qquad \log^+(ts) \le \log^+ t + \log^+ s, \qquad t, s > 0,$$
one gets the desired bound. Then, since $f_k \ge 0$, we have the claim by (2). Now (24) follows from Theorem 10.
Remark. The inequality (24) is intermediate between the $L^1$ and the $L\log L$ estimates. Indeed, the bound $\|f_k\|_{L^1} \le C$ yields weaker estimates (see e.g. [Hél, Thm. 3.3.6]), but that is not enough for our purposes (Lemma 6 below). On the other hand, were $f_k$ bounded in $L(\log L)$, we would have stronger estimates (see e.g. [Hél, Thm. 3.3.8]). But we know that this is not the case in general.
The following lemma replaces and sharpens Proposition 2.3 in [RS].
To see that, observe the following. The term $2u_k(-\Delta)^m u_k$ is bounded in $L^1$ thanks to (2). The other terms on the right-hand side of (26) are bounded in $L^1$ as well. Thanks to [DAS, Thm. 12], the claimed estimate holds. Let $\mu_k$ denote the probability measure proportional to $|f_k(y)|\,dy$. By Fubini's theorem the desired bound follows. To conclude the proof, observe that Lemma 3 yields the corresponding estimate on $B_{Rr_k}(x_k)$. Integrating over $B_{Rr_k}(x_k)$ and using the above estimates, we conclude.
Proposition 7. Let $\eta_k$ be as in (22). Then, up to selecting a subsequence, $\eta_k \to \eta_0$.

Proof. Fix $R > 0$, and notice that, thanks to Lemma 3 and (23), (28) holds, where $V_k$ and $a_k$ are as in Corollary 4, and $o(1) \to 0$ as $k \to \infty$.
Step 1. We claim that the convergence holds. Then, letting $R \to \infty$ in (28), from Corollary 4 and Fatou's lemma we infer the energy bound.

Let us prove the claim. Consider first the case $m > 1$. From Corollary 4, Theorem 1 in [Mar2], and (28), together with $\eta_k \le \log 2$ (which implies that $S_1 = \emptyset$ in Theorem 1 of [Mar2]), we infer that up to subsequences one of three cases (i), (ii), (iii) occurs. Since $\eta_k(0) = \log 2$, (ii) can be ruled out. Assume now that (iii) occurs. From Liouville's theorem and (30) we get $\Delta\varphi \equiv 0$, hence for some $R > 0$ we have (31). On the other hand, we infer from Lemma 6 an estimate contradicting (31) when $\ell = 2$, therefore proving our claim. When $m = 1$, Theorem 3 in [BM] implies that only Case (i) or Case (ii) above can occur. Again Case (ii) can be ruled out, since $\eta_k(0) = \log 2$, and we are done.
Exhaustion of the blow-up points and proof of Theorem 1
For $\ell \in \mathbb{N}$ we say that $(H_\ell)$ holds if there are $\ell$ sequences of converging points $x_{i,k} \to x^{(i)}$ satisfying the corresponding conditions. We say that $(E_\ell)$ holds if there are $\ell$ sequences of converging points $x_{i,k} \to x^{(i)}$ such that, if we define $r_{i,k}$ as in (3), the conditions $(E^1_\ell)$–$(E^3_\ell)$ hold true. To prove Theorem 1 we show inductively that $(H_I)$ and $(E_I)$ hold for some positive $I \in \mathbb{N}$ (with the same sequences $x_{i,k} \to x^{(i)}$, $1 \le i \le I$), following the approach of [AD] and [RS]. First observe that $(E_1)$ holds thanks to Lemma 2 and Proposition 7. Assume now that for some $\ell \ge 1$ $(E_\ell)$ holds and $(H_\ell)$ does not. Choose $x_{\ell+1,k} \in \Omega$ such that (35) is satisfied, and define $r_{\ell+1,k}$ as in (3). It easily follows from (35) that (36) holds. Moreover, thanks to $(E^2_\ell)$ and (35), we also have a corresponding bound. We now need to replace Lemma 2 and Lemma 3 with the lemma below.
Lemma 8. Under the above assumptions and notation, we have (37) and (38).

Proof. To simplify the notation, let us write $y_k := x_{\ell+1,k}$ and $\rho_k := r_{\ell+1,k}$.
Evaluating the right-hand side of (35) at the point $y_k + \rho_k x$, we have that (39) holds, where $o(1) \to 0$ as $k \to \infty$ locally uniformly in $x$, as (36) immediately implies. Then (37) follows as in the proof of Lemma 2, since (39) implies (40), where $o(1) \to 0$ as $k \to \infty$ locally uniformly in $\mathbb{R}^{2m}$.
Define now $v_k(x) := u_k(x_{\ell+1,k} + r_{\ell+1,k}x) - u_k(x_{\ell+1,k})$, and observe that, thanks to (35) and (36), $v_k$ satisfies a bound analogous to (14). This and (40) imply that we can replace (14) in the proof of Lemma 3 with the corresponding estimate for $v_k$. Then the rest of the proof of Lemma 3 applies without changes, and also (38) is proved.
Still repeating the arguments of the preceding section with $x_{\ell+1,k}$ instead of $x_k$ and $r_{\ell+1,k}$ instead of $r_k$, we define $\eta_{\ell+1,k}$ analogously, and we have

Proposition 9. Up to a subsequence, (41) holds.

Summarizing, we have proved that $(E^1_{\ell+1})$, $(E^2_{\ell+1})$ and (41) hold. These also imply that $(E^3_{\ell+1})$ holds, hence we have $(E_{\ell+1})$. Because of (2) and $(E^3_\ell)$, the procedure stops in a finite number $I$ of steps, and we have $(H_I)$.
Finally, we claim that $\lambda_k \to 0$ implies $u_k \rightharpoonup 0$ in $H^m(\Omega)$. This, (5) and elliptic estimates then imply the conclusion. To prove the claim, we observe that for any $\alpha > 0$ an estimate holds with a constant $C_\alpha$ depending only on $\alpha$. Letting $k$ and $\alpha$ go to infinity, we infer (42). Thanks to (12), we infer that up to a subsequence $u_k \rightharpoonup u_0$ in $H^m(\Omega)$. Then (42) and the boundary condition imply that $u_0 \equiv 0$; in particular the full sequence converges to 0 weakly in $H^m(\Omega)$. This completes the proof of the theorem.
Other useful results
A proof of the results below can be found in [Mar1]. The following lemma can be considered a generalized mean value identity for polyharmonic functions.
Lemma 11 (Pizzetti [Piz]). Let $u \in C^{2m}(B_R(x_0))$, $B_R(x_0) \subset \mathbb{R}^n$, for some positive integers $m$, $n$. Then there are positive constants $c_i = c_i(n)$ such that

$$\frac{1}{|B_R(x_0)|}\int_{B_R(x_0)} u\, dx = \sum_{i=0}^{m-1} c_i\, \Delta^i u(x_0)\, R^{2i} + c_m\, \Delta^m u(\xi)\, R^{2m},$$

for some $\xi \in B_R(x_0)$.
The remaining terms are bounded in $L^1$ thanks to Lemma 5 and the Hölder-type inequality of O'Neil [O'N]. Hence (25) is proven. Now set $f_k := (-\Delta)^m(u_k^2)$, and for any $x \in \Omega$, let $G_x$ be the Green's function for $(-\Delta)^m$ on $\Omega$ with Dirichlet boundary condition.
Phages Affect Gene Expression and Fitness in E. coli
Synopses of Research Articles
Cell membranes are largely made of proteins, and membrane proteins account for about a third of all genes. Despite their importance, they are devilishly hard to isolate and stabilize, and therefore are hard to study. The problem lies in their structure: membrane proteins have at least one hydrophobic domain, composed of a stretch of water-repelling amino acids, which holds the protein snugly in the lipid membrane. Purifying such a protein in an aqueous medium makes the hydrophobic parts aggregate, destroying the protein's delicate three-dimensional structure and often disrupting its function. The alternative is to extract the protein with a detergent, a two-headed "Janus" molecule with both hydrophobic and hydrophilic ends. The protein remains surrounded by the hydrophobic ends, while water clusters at the hydrophilic ends, easing the protein out of the membrane and into solution, where it can be studied.
To date, though, relatively few complex membrane proteins have been successfully purified with available detergents. In this issue, Shuguang Zhang and colleagues show that a simple amino acid-based detergent can successfully stabilize the dauntingly large protein complex photosystem I (PS-I), an integral part of the photosynthetic machinery.
The molecule they made, abbreviated A6K, links six units of the hydrophobic amino acid alanine to one of the hydrophilic amino acid lysine. The authors used it to stabilize PS-I and then attached the detergent-protein complex to a glass slide, allowed it to dry, and examined the stability of PS-I by testing its fluorescence. Intact PS-I emits red light with a characteristic peak wavelength; as it degrades, this peak subsides and is replaced by another, bluer peak. Even the two best standard detergents did poorly at maintaining the red peak. In contrast, the spectrum after A6K extraction was almost a perfect match for the normal one, indicating the complex was largely intact after drying. Furthermore, the complex appeared to remain stable for up to three weeks on the glass slide.
The potential applications of this work are severalfold. PS-I itself remains to be fully characterized, and this stabilization technique offers new means to explore its properties. In addition, an isolated and stabilized form of PS-I may hold some promise as an alternative energy source, since it generates an electric current in sunlight. Perhaps most importantly, the full potential of such simple amino acid-based detergents has only begun to be explored. It is likely that either this one, or others like it, can be used to isolate and stabilize hundreds of other membrane proteins, allowing them to be studied in detail for the first time.

In recent years, the control of gene expression by small RNA molecules has emerged as a major new mechanism for gene regulation. The small RNAs interfere with the expression of their target gene by reducing its transcription, triggering the destruction of the gene transcript, or inhibiting its translation into a protein. This discovery has not only altered views of gene regulation, but also provided molecular geneticists with powerful new tools with which to study and manipulate the function of any gene. The biology of these small RNAs is, therefore, under intense scrutiny.

Small RNAs are generated by specific pathways, the elements of which are being rapidly discovered. In this issue of PLoS Biology, two groups have identified a missing piece in one such pathway, in the fruit fly Drosophila. The pathway under investigation leads to the production of a type of small RNA called a microRNA (miRNA). These are 21-23 nucleotides in length, and are involved in regulating the expression of many genes. miRNAs start life as a much bigger transcript called a pri-miRNA, which is processed in two steps. First, it is converted into a shorter pre-miRNA, by the action of two proteins: Drosha, an RNAse III enzyme; and Pasha, which contains double-stranded RNA binding domains (dsRBDs). The pre-miRNA is then transported to the cytoplasm and is trimmed again into a double-stranded miRNA by a different RNAse III enzyme called Dicer-1.
Small RNAs are generated by specifi c pathways, the elements of which are being rapidly discovered.In this issue of PLoS Biology, two groups have identifi ed a missing piece in one such pathway-in the fruitfl y Drosophila.The pathway under investigation leads to the production of a type of small RNA called a microRNA (miRNA).These are 21-23 nucleotides in length, and are involved in regulating the expression of many genes.miRNAs start life as a much bigger transcript called a pri-miRNA, which is processed in two steps.First, it is converted into a shorter pre-miRNA, by the action of two proteins: Drosha, an RNAse III enzyme; and Pasha, which contains doublestranded RNA binding domains (dsRBDs).The pre-miRNA is then transported to the cytoplasm and is trimmed again into a double-stranded miRNA by a different RNAse III enzyme called Dicer-1.
In a separate pathway, RNAs called small interfering RNAs (siRNAs) depend on the Dicer-2 RNAse III and a dsRBD protein called R2D2 for their function.These pathways are also conserved in other organisms.Thus, a pattern emerges: the functions of small RNAs tend to require the combined actions of an RNAse III and a dsRBD protein.But why doesn't Dicer-1 have a partner?The answer, provided by the two studies from the labs of Phil Zamore and Haruhiko and Mikiko Siomi, is that we just hadn't found it yet.
The two groups took different approaches to fi nding Dicer-1's partner.Zamore's group looked for genes resembling other dsRBD-encoding genes, while the Siomi lab did a functional screen for new genes specifi cally implicated in miRNA processing.They both homed in on a new gene with great similarity to R2D2, and showed that loss of function of the gene results in the accumulation of pre-miRNAs-very similar to loss of Dicer-1 function, which suggests that the two genes act together in the same pathway.The new potential partner of Dicer-1 was given the name loquacious (loqs), because failure to process the miRNAs in turn causes increased levels of expression of the target genes for the miRNAs.For most healthy individuals, infection triggers a rapid immune response that repels the invaders.But for those rare individuals born without the immune system cells (lymphocytes) that recognize and kill pathogens, bacterial, viral, or fungal encounters can result in recurrent infections that are more life-threatening and less responsive to treatment than similar infections in normal infants.In the past, all that could be done for children with severe combined immunodefi ciency (SCID) was to protect them from infections by cocooning them in sterile plastic bubbles, which gave the disease its common name: bubble-boy syndrome.Nowadays, the treatment of choice, provided a suitable donor is available, is bone-marrow or stem-cell transplantation, which provides SCID children with a functioning immune system.
Mutations in at least nine genes can cause human SCID, but 20% of cases are caused by a deficiency of the enzyme adenosine deaminase. This enzyme, which is present in all organisms, converts adenosine and deoxyadenosine to inosine and deoxyinosine, respectively. When adenosine deaminase is missing, its substrates (adenosine and deoxyadenosine) accumulate, and this is thought to cause the complete breakdown in immune defense characteristic of SCID.
To explore the role of adenosine deaminase in a tractable model system, Peter Bryant and his colleagues have now developed a Drosophila model by disabling the expression of a protein, called adenosine deaminase-related growth factor A (ADGF-A), that serves as a major adenosine deaminase in the fly. In flies lacking ADGF-A enzymatic activity, adenosine and deoxyadenosine concentrations increase in the larval hemolymph, the circulatory fluid or "blood" of insects. Lack of the enzyme, the researchers report, caused larval death associated with the disintegration of the fat body (the adipose tissue spread throughout the body of the insect), melanotic tumors, and delays and defects in development.
The first two effects, fat body disintegration and the presence of melanotic tumors, are directly attributable to dysregulation of hemocytes (fly blood cells) in the mutant animals. It turns out that in adgf-a-mutant larvae, hemocytes are released prematurely from the lymph glands (the organs where hemocytes are produced and stored). These prematurely released hemocytes then cause fat body disintegration and formation of melanotic tumors; however, if ADGF-A expression is selectively restored in the lymph glands, then hemocytes are not prematurely released, and the larvae survive and develop without the tumors or fat body disintegration.
Bryant and his colleagues reasoned that the elevated adenosine might have direct effects on fly development aside from the dysregulation of hemocytes, so they examined the development of adgf-a mutant flies that also lacked a functional adenosine receptor (adoR mutants). They found that adgf-a/adoR mutant larvae were able to survive and continue development to adulthood, although these animals still experienced fat body disintegration and some melanotic tumors. These results suggest that the second consequence of ADGF-A deficiency, delayed development, is caused by the elevated adenosine in the animals signaling through adenosine receptors.
Altogether, these results establish adgf-a flies as a useful model system for unraveling the many effects that adenosine and deoxyadenosine have on cellular physiology in general and on the immune system in particular. Because hemocyte release from lymph glands and delays in development also occur in response to infection, the authors hypothesize that adenosine might be involved in controlling hemocyte release and postponing development when fly larvae are challenged by microbial attacks. Future experiments in this model system should provide important clues to the pathology of adenosine deaminase deficiency-associated SCID and should also advance our understanding of how adenosine acts as a stress hormone during infections in individuals with normal immune systems.

Both groups also show that Loqs and Dicer-1 exist in a complex within the cell, and that the complex is able to process pre-miRNA into its mature form. The Siomis' lab went on to show that the complex contains a protein called Ago-1, which hints that the complex might also be involved in the action of miRNAs on their target genes, as well as in miRNA processing itself. Both groups also point out the similarity between Loqs and a human dsRBD protein called TRBP, which has been implicated in the response to infection by HIV.
There seems little doubt, then, that Dicer-1's partner has been found, and that the combined action of an RNAse III and a dsRBD protein is a consistent theme in the function of miRNAs and siRNAs. The identification of Loqs will help to refine our views of how miRNAs are processed, as well as how they can be manipulated. The connections made with processes such as stem cell maintenance (identified by the Zamore lab) and viral infection in these new studies also emphasize that gene regulation by small RNAs is relevant to a broad range of cellular physiology.

Like many animal viruses, SFV enters its host cells using clathrin-mediated endocytosis. One well-established way to study this process is to attach a fluorescent tag to individual virus particles and observe their travels through the cell. Vonderheit and Helenius now track this journey in greater detail than ever before by attaching different colored fluorescent tags to SFV and to protein markers of early and late endosomes. They then use video-enhanced triple-color microscopy to follow all the markers as they move through living cells. This analysis reveals that the virus is initially present in endosomes containing only proteins associated with early endosomes. Then, Rab7, a late endosome marker that is involved in transport of cargo from early to late endosomes, appears in distinct domains of these early endosomes. Finally, the viral cargo is transferred to a detached organelle that contains Rab7 but no early endosome markers. The researchers show that SFV transport to late endosomes requires Rab7 and the presence of intact microtubules, which often serve as a highway network along which vesicles travel.
The researchers conclude that, at least for SFV, the mechanism underlying sorting and transport from early to late endosomes falls somewhere in between the two existing models for clathrin-mediated endocytosis. Early endosomes, they postulate, have to acquire some characteristics of late endosomes before SFV can be transported to late endosomes in Rab7-positive vesicles. But other cargos, the authors point out, may follow different pathways through the cell.

When Robert Hooke first looked at cork bark with a light microscope in 1655, he saw small empty chambers, reminiscent of monastery cells. We now know that living cells are full of organelles: specialized subcompartments surrounded by membranes in which different cellular life functions occur. This complex organization raises major transport and sorting problems similar to those encountered in a large city in which trains and trucks carrying different cargos arrive at peripheral distribution centers. The cargos must be sorted and transported to individual factories where goods are made for delivery to other city destinations or for export. At the same time, the different areas of the city produce waste products that also need to be sorted and transported correctly. Somehow, thousands of cargos must end up in exactly the right place in both the city and the cell.
One cellular system that sorts and transports cargos is the clathrin-mediated endocytic pathway. Endocytosis, the ingestion of materials into the cell, is important for the interaction of cells with the environment because it allows the uptake of nutrients (the equivalent of the raw materials brought into the city) and signaling molecules (the letters brought in by the mail service). In clathrin-mediated endocytosis, materials arriving at the outside surface of the cell are engulfed in special areas of membrane known as coated pits, which pinch off to form intracellular vesicles. These lose their clathrin coat and other molecules involved in their formation to become early endosomes, a specific sort of intracellular vesicle. The cargos are then transferred to late endosomes, which have different proteins and functions than early endosomes. From endosomes, cargo can go either to lysosomes, where they are degraded, or to the Golgi apparatus, which sends cargo back to the cell surface.
Although many details of clathrin-mediated endocytosis have been uncovered, cell biologists still hotly debate whether early endosomes mature into late endosomes or whether transport vesicles take cargos from early to late endosomes. Unraveling such details will improve our understanding of normal cellular processes and should help in the design of intracellularly targeted drugs. Andreas Vonderheit and Ari Helenius now provide new insights into this controversy by examining how Semliki forest virus (SFV) is sorted and transported to late endosomes.

Mobilizing an army to march into battle requires the increased activity of hundreds of people, from the quartermaster to the gunnery captain. The stately march of the cell cycle, from the first growth phase, through DNA synthesis, to the second growth phase, and on to mitosis and cell division, also demands increased activity, but of hundreds of genes, from histones to protein kinases. And just as the army must coordinate the shipment of C rations with the movement of its troops, so must the cell coordinate its genetic activities to ensure that raw materials and regulatory molecules are present where and when they are needed. In this issue, Janet Leatherwood, Bruce Futcher, and colleagues describe the waves of gene activity that accompany the phases of the cell cycle in the yeast Schizosaccharomyces pombe.
Using microarrays, the authors examined the expression level of 5,000 genes over the course of the cell cycle. They found that well over 2,000 of these genes undergo slight but observable and statistically meaningful oscillations. Of these, they chose to examine the top 750, an admittedly arbitrary cutoff that nonetheless highlights those whose expression levels rise and fall the most. They identified two broad waves of oscillation, one peaking in early to mid-G2 (the second growth phase) and the other late in G2 at the transition to mitosis. These two peaks were seen even in the 4,000 least cyclic genes, suggesting that many genes may be slightly upregulated not for adaptive purposes, but simply because some transcription factors inevitably go astray whenever there are lots of them around.

Any cell that receives a dose of radiation is placed in a dangerous situation. The DNA damage resulting from exposure to such radiation (or any other mutagen) can cause massive rearrangements of genetic information and potentially kill the cell. Bacteria have learned to cope with this threat by activating genes that repair DNA damage and by preventing a cell from dividing before these repairs are completed. In the bacterium Escherichia coli, these repair genes form what is known as the SOS response.
The E. coli SOS response has been used to study DNA repair for decades, and a great deal is known about how the more than 30 genes involved in the response function. Two proteins figure prominently in this response. The LexA protein acts as a repressor and inhibits the expression of SOS genes under normal conditions; in the event of DNA damage, the protein RecA inactivates the LexA repressor by enhancing its autocleavage into two fragments, which initiates the SOS response. While these initial stages are well understood, how all the SOS genes are coordinated, and ultimately turned off, is only beginning to be explored.
In a new study, Joel Stavans, Uri Alon, and colleagues have closely followed the SOS response in individual E. coli cells to investigate its dynamics. Previous studies, which monitored the temporal pattern of activation of entire populations of cells, found that SOS genes turned on in one peak upon DNA damage. But Friedman et al. found that SOS genes in individual bacteria respond to DNA damage in three precisely timed phases. This observation reveals the importance of examining complex processes at the level of single cells, while furthering our understanding of how the SOS response is structured in E. coli.
Such broad waves of upregulation are likely due to a simultaneous increase in the activity of multiple clusters of genes, each controlled by separate groups of transcription factors. A variety of cell culture manipulations allowed the researchers to identify eight clusters of genes, the activity of whose members was tightly co-regulated. (In this case, "cluster" refers not to genes physically grouped together on a chromosome, but to genes that are regulated similarly.) Scouring the promoters of these genes confirmed that each cluster was characterized by unique transcription factor binding sites. They also discovered that, as a group, these promoters tended to be longer than average, suggesting they may be more complex than those in non-oscillating genes.
The number of genes within each cluster ranged from only a few to over 100. The largest of them, the Cdc15 cluster, contains genes involved in mitosis, cytokinesis, and formation of the septum that separates the daughter cells, as well as genes for other functions. Other clusters regulate DNA replication, cell separation, synthesis of the histone proteins that act as spools on which DNA is wound, protein folding and stress response, ribosome biogenesis, and other aspects of the cell cycle.
Two other recent studies in S. pombe have found broadly similar patterns, and have identified 407 and 747 genes, respectively, as strong oscillators. There was a heartening degree of overlap, with 171 genes identified by all three studies, and 360 more found in two of the three. Even genes that made the cut in only one study were found likely to oscillate in the other two. Follow-up studies to further explore genes that are coordinated during the cell cycle march should help us to understand how this army of molecules exerts such fine control over both normal and abnormal cell growth and proliferation.
Friedman et al. monitored the SOS response by attaching a green fluorescent protein (GFP) to the promoters (the sections of DNA responsible for activating a gene) of three SOS genes (lexA, recA, and umuDC). Bacteria expressing these promoter-GFP fusions became fluorescent within minutes of being exposed to UV radiation, visualized using time-lapse fluorescence microscopy. Since GFP fluorescence is directly correlated with the expression of each of the chosen genes (i.e., their promoter activity), the authors could gauge the SOS response rate upon DNA damage.
To induce the SOS response, the authors exposed E. coli cells to UV radiation. By monitoring individual cells at two-minute intervals after this dose, Friedman et al. found up to three peaks of promoter activity at roughly 30, 60, and 100 minutes. Although the amount of this activity and the average number of peaks varied between cells, the timing was always similar in different cells, suggesting a highly structured, timed response. When the authors averaged this response over the population, it "washed out" into a single peak, which explains why the three peaks of expression were not previously detected.
A deeper look into the dynamics of the SOS response in single E. coli cells showed that it did not correlate with cell size, suggesting the SOS response is not synchronized with the cell cycle. In addition, Friedman et al. repeated their experiments in a bacterial strain lacking the SOS response gene umuDC. The peak pattern was altered in this mutant strain, and the precision in the appearance of the peaks was reduced. By re-examining the SOS response in single cells, Friedman et al. have visualized an accurately timed and synchronized DNA repair process. Modulations in response to DNA damage have also been observed recently in individual mammalian cells. Future experiments in E. coli, one of the most genetically tractable model systems, should help explain how this timed response is related to the different pathways of DNA repair and shutoff of the response.

A map of a cell's metabolic pathways looks like an airline route map on steroids, with hundreds of reactions forming a complex and interconnected network. And just as a few cities serve as destination hubs for many different flights, a few metabolites, such as ATP and NADH, form biochemical hubs within the metabolic network. These molecules are involved in far more reactions than the average, and thereby serve to couple otherwise unrelated reactions within the cell. How did this hub-shaped network arise? In this issue, Thomas Pfeiffer and colleagues employ a computer simulation of a simplified metabolic system to show that two key features in the evolution of a hub system are enzyme specialization and the transfer of chemical groups between metabolites.
The authors created "molecules" from all possible combinations of seven "groups." They began the simulation with seven "enzymes" that catalyze the transfer of one group from one molecule to another. Initially, each enzyme was a generalist: it could take a group from any molecule and donate it to any other. This mirrors one plausible scheme for the actual evolution of cellular biochemistry. Over the course of the simulation, enzymes could mutate to preferentially increase their affinity for one substrate at the expense of others. The number of enzyme types could be increased by "gene duplications." Other parameters allowed the simulated cell to take up and excrete metabolites, and to grow.
As the system evolved, enzymes proliferated and became more specialized, until the final mix included about two dozen enzymes, each of which catalyzed only one or two reactions. As a consequence, some metabolites fell out of use, and the final number of metabolites dropped from 128 to 33. While most took part in only two or three reactions, a few emerged as hubs, participating in eight or more separate reactions. While this mathematical distribution did not match that found in the metabolic network of a whole real cell, it did approximate that of similar-sized sub-cellular metabolic networks. The central importance of group transfer to this structure was brought out when the authors reran the simulation without group transfer. When reactions simply added or removed a group, without transferring it to another molecule, a much simpler network without hubs evolved instead. This is not the last word on biochemical evolution, but it does show how an initially generalist metabolism can evolve into the highly specialist system found in all existing cells. Further experiments that model known reactions more closely may be useful in elucidating more details of the evolutionary process that led to the emergence of the biochemical "hubbub" that characterizes life today.

Phages Affect Gene Expression and Fitness in E. coli

Life is hard for bacteria. Not only must they constantly compete against their comrades for resources and living space, they're also subject to infection by pathogens, viruses called bacteriophages, which can affect their ability to survive and prosper. Two types of bacteriophages threaten bacteria: lytic phages and lysogenic (or temperate) phages. Acquisition of a lytic phage (for example, T2, T4, or T6) is an immediate death sentence for the bacterium; upon infection, a lytic phage subverts the bacterium's biochemical machinery to make copy after copy of itself until the bacterium bursts, or lyses, from the burden. In contrast, a temperate phage (for example, λ phage) can lie dormant for many generations before it co-opts the bacterium's machinery to reproduce, but eventually it, too, lyses the bacterial cell as it releases a host of new phages. From the perspective of the bacterium, it is better to be infected by a temperate phage than a lytic phage because infection with a lytic phage means instant death, while a temperate phage may lie dormant long enough for the bacterium to reproduce.

Temperate phages achieve dormancy by producing a phage gene product (in the case of λ phage, called cI) that represses the production of other phage genes; phage reproduction ceases as long as this repressor is produced. Once infected by a temperate phage, bacteria are protected from secondary infections by various other phages, because the temperate phage prevents the others from becoming established in the cell. But might temperate phage infection confer other advantages on bacterial survival? Edward Cox's group at Princeton University examined this question by looking for evidence that temperate phage infection triggers changes in bacterial behavior. Working with λ phages, the authors studied how phage infection affects the regulation of genes that might impact the bacterium's survival by comparing the constellation of genes expressed in uninfected E. coli bacteria to those in E. coli carrying a dormant λ phage.

Are We Underestimating Species Extinction Risk?

Aside from global climate change, loss of biodiversity poses one of the greatest threats to the planet. Last year, the World Conservation Union reported an unprecedented decline in biodiversity, with nearly 16,000 species facing extinction. The biggest threat to the vast majority of these species is loss of habitat. And as habitat loss and degradation proceed nearly unabated, the need to accurately predict the population dynamics and extinction risk of potentially endangered species has never been greater. In a new study, John Drake tests models traditionally used to estimate the likelihood of extinction and shows that because the models ignore a critical parameter in projecting risk, they underestimate extinction rates.
Standard models for predicting extinction assume that population growth and decline are governed by random, or stochastic, variables. The models typically incorporate two major contributors to random variation in population growth rates: changes in environmental conditions, and chance fluctuations in population size (caused by variations in individual fitness, random mating behavior, and events that affect just one individual) that are referred to as demographic stochasticity. But since few scientists have tested these models with empirical data, the question remained whether the models were accurately predicting population fluctuations and extinction risk.
To test the reliability of standard stochastic models, Drake used data from experiments with water fleas. He found that the models could accurately predict extinction risk only when there was enough information about variation in individual fitness to account for demographic variability, a finding that undercuts the conventional wisdom that demographic stochasticity is unimportant. Some traditional models do not even include demographic stochasticity.
It's generally assumed that fluctuating environments, a given in the natural world, increase a species's chance of extinction. Drake tested this notion in experiments by manipulating the available food sources in 281 populations of water fleas. The flea populations received either low, medium, or high amounts of food, and Drake kept daily tallies of population number and extinctions. When he tried to predict the extinctions using traditional models, he couldn't.
To account for the discrepancy between model and data, Drake began to investigate a possibility raised by recent theoretical research: that population density and individual interdependence might affect a major component of the model, demographic stochasticity. The idea is that if organisms interact in their environments (which of course they do), then these interactions will likely affect an individual's probability of dying or reproducing, which ultimately affects species survival. Drake calls this variable density-dependent demographic stochasticity.
Drake used half of the experimental data generated from testing the effects of environmental variability on water flea survival to select his models and estimate the range of parameters that might affect extinction, and the other half to test the models' reliability. From the estimated parameters, Drake wrote a computer program to simulate all the possible population outcomes and predict extinction rates.

They found that λ phage caused reduced expression of the bacterial gene pckA, which codes for an enzyme that helps bacteria grow on carbon sources (fuels) other than glucose; without functioning pckA, bacteria grow normally in an environment containing glucose, but grow only slowly in an environment containing alternative carbon sources such as succinate. E. coli carrying λ phage fail to make the pckA gene product because the pckA gene is turned off by the virally encoded repressor cI. Interestingly, the researchers found evidence that the repressors made by other temperate phages may also be able to turn off pckA expression, and that the pckA genes of other bacteria related to E. coli might also be regulated by temperate phage repressors.
The fact that this relationship between temperate phage repressors and regulation of the pckA gene is so well conserved argues that the ability to turn off this gene might be positively selected; therefore, pckA repression must confer some sort of survival benefit on the bacterium. It's not clear what this benefit might be, but one explanation is that slowing bacterial growth in glucose-poor environments might help the bacterium elude detection by the immune system of any animal it invades, increasing its chances of survival. Alternatively, slower bacterial growth might slow down the onset of viral reproduction and eventual lysis. Regardless, it is clear that there is a strong relationship between temperate phages and the bacteria they colonize. These results have significant implications for the evolution of fitness in these bacterial populations.

Biologists have developed ever more sophisticated ways to find molecular traces of natural selection. These traces, which occur as variations in DNA sequence, or polymorphisms, between and within species, are thought to harbor the genetic basis of adaptive events.
The study of natural selection at the molecular level has long been dominated by Kimura's theory of neutral evolution, which argues that most polymorphisms (in both DNA and protein sequence) have minor or no selective effect and are governed by random, not selective, processes. The strength of this theory is that it leads to clear predictions that can be tested to identify those polymorphisms that really are subject to selection. Even though there's a large body of literature devoted to the statistical testing of selective neutrality, these tests are generally based on theoretical models, the assumptions of which have largely been untested. Only recently have the necessary quantities of data for testing the merits of these models become available, thanks to high-throughput genotyping and sequencing technologies.
Working with Arabidopsis thaliana, the first plant to have its genome sequenced, Magnus Nordborg, Joy Bergelson, and their colleagues conducted a global survey of polymorphism patterns in the genomes of 96 plants. The scale of their study affords robust insights into the genomic pattern of polymorphism of the plant and sheds light on its demographic history. The results also lay the foundation for future work on the genetic basis of A. thaliana variation, while challenging the assumptions of standard mathematical models for determining whether a gene is under natural selection in the plant.
Nordborg and colleagues sequenced 876 short genome fragments of 96 A. thaliana plants from both worldwide natural populations and laboratory stocks. In total, they described 44,000,000 DNA bases of genetic material, which revealed 17,000 polymorphisms, either in the form of single changes in DNA sequence (single nucleotide polymorphisms, or SNPs) or as losses or additions of DNA sequence between individual plants.
The level of polymorphism in A. thaliana is unexpectedly high for a plant that is highly self-fertilizing. To see if these polymorphisms were uniformly shared across plant populations, or had a distinct structure, Nordborg and colleagues grouped a subset of the A. thaliana plants into populations based on their geographic origin. Although they found that individuals within a population harbor much of the variation that is typical of the species worldwide, it appeared that some of the variation was specific to particular geographic regions. Furthermore, closely related plants were almost always from the same local region. The authors suggest that this is what would be expected for a sexually reproducing species found worldwide.
With such a large data set, it also becomes possible to see whether the underlying assumptions of mathematical models commonly used for determining whether a gene is under selection are appropriate for A. thaliana. Nordborg and colleagues found that the patterns they observe do not fit the standard neutral model of evolution, which is expected to explain most genetic variation. This model is the benchmark against which researchers pinpoint the signature of selection at particular genes. The authors caution that "commonly used 'tests of selection' are simply not valid in A. thaliana." Nordborg and colleagues have provided a wealth of detail to our understanding of genetic variation in Arabidopsis on a genome-wide scale. Future research can now begin to use this Arabidopsis genetic footprint to find the exact variations that contribute to useful plant traits, and to plumb its genome for evolutionary clues.
In April, the United States Centers for Disease Control and Prevention released a study challenging the conventional wisdom that eating less promotes longevity. The study found that the very thin run roughly the same risk of early death as the overweight. And now the tide seems to be turning against a common explanation for the longstanding observation that restricting food in lab organisms from yeast to mice prolongs life.
Many studies have indicated that it's calorie reduction, rather than the specific source of calories, that increases longevity. That this effect occurs in such diverse organisms suggests a common mechanism may be at work, though none has been definitively characterized. And while calorie restriction enhances longevity in mice, it has not always done so in rats. In a new study, William Mair, Matthew Piper, and Linda Partridge show that flies can live longer without reducing calories but by eating proportionally less yeast, supporting the notion that calorie-restriction-induced longevity may not be as universal as once thought.
Dietary restriction in Drosophila involves diluting the nutrients in the fly's standard lab diet of yeast and sugar to a level known to maximize life span. Since both yeast (which contributes protein and fat) and sugar (carbohydrates) provide the same calories per gram, the authors could adjust nutrient composition without affecting the calorie count, allowing them to separate the effects of calories and nutrients. The standard restricted diet had equivalent amounts of yeast and sugar (65 grams each) and an estimated caloric content of 521, while the yeast-restricted (65 g yeast/150 g sugar) and sugar-restricted (65 g sugar/150 g yeast) diets each had just over 860 calories. The control diet for the flies had equivalent amounts of sugar and yeast (150 grams each), amounting to an estimated 1,203 calories.
First, the authors had to make sure the flies didn't change their eating behavior to make up for a less nutritious diet. (They didn't.) Reducing both nutrients increased the flies' life spans, but yeast had a much greater effect: reducing yeast from control to dietary-restriction levels increased median life span by over 60%.
In a previous study, Mair et al. showed that flies that were switched from dietary-restricted diets to control diets soon began to die at the same rates as flies accustomed to the control diet. In this study, the authors studied the effects of switching yeast and sugar. Forty-eight hours after being switched from normal diets to yeast-restricted diets, flies were no more likely to die than flies fed the yeast-restricted diet from the beginning. In contrast, those switched from the standard restriction diet to the sugar-restricted diet began to die at the same rate as flies on the control diet.
The authors also ruled out the possibility that bacteria, attracted to high-nutrient food, might be influencing fly survival. Altogether these results make a strong case that calories per se are not the salient factor in prolonging life, at least in fruit flies. The dramatic impact of reducing yeast suggests that protein or fat plays a greater role in fly longevity than sugar. This in turn suggests, the authors argue, that yeast and sugar trigger different metabolic pathways with different effects on life span.
Why might different factors promote longevity in flies and rats? It could be that the caloric-restriction/longevity paradigm needs more rigorous review, though a vast body of literature does support it. Or it may be that the animals use the same strategy for dealing with food shortages (shifting resources from reproduction to survival, for example) but have evolved different mechanisms for doing so that reflect each species's life history, diet, and environment. Whatever explains the disparity, this study should give researchers interested in caloric restriction plenty to chew on.

In human-pathogen encounters, the battle for advantage plays out at the level of gene expression. Hosts will stand a better chance of surviving if their genes confer resistance to diverse pathogens, while pathogens need genes that promote virulence and infection. To understand this game of evolutionary one-upmanship, biologists study the genetic basis of resistance and infection by investigating how changes in an organism's genetic makeup, or genotype, affect its physiological and physical makeup, or phenotype.
In a new study, Scott Nuismer and Sarah Otto detail how host-parasite interactions shape the changes in gene expression that alter an organism's ability to induce or resist infection. They find that gene expression for host and parasite follows quite different evolutionary paths: hosts express as many different gene variants, or alleles, as possible, while parasites express very few alleles. It's in the host's interest to have as many genetic weapons as possible that can recognize a foreign invader, while it's in the parasite's interest to reduce the number of recognizable molecules for a host to latch on to and destroy. Even though these results are intuitive, this phenomenon had not been shown before.
To model the evolution of gene expression levels in a host-parasite interaction, Nuismer and Otto started with a single gene A in hosts and a single gene B in parasites, with two alleles each (A or a, and B or b) that are involved, respectively, in resistance in the host and in promoting infection by the parasite. Their model allows gene expression levels to be regulated by an additional modifier gene in hosts and parasites. The two variants of the modifier gene (M and m) alter the expression of the host gene A or the parasite gene B. As a result of host-parasite interactions, the alleles at the modifier gene either evolve to increase expression of only one allele or to co-express both alleles. When parasite and host are allowed to interact, the model shows that host resistance alleles typically evolve toward coexpression while parasite infection alleles evolve toward single expression. By expressing more than one gene at a time, the host can recognize a greater diversity of parasites. But what's good for the host is bad for the parasite. Hosts benefit from a wider array of parasite recognition systems, while parasites benefit from expressing a narrow range of antigens to evade the host recognition system.
Human immune cells, for example, can recognize billions of different antigens, which then triggers an immune response against the foreign substance and increases the chance of surviving the infection. Parasites, however, generally express only one of many possible antigen alleles. The parasite responsible for African sleeping sickness expresses only one of thousands of surface receptor genes, which offers the host fewer opportunities for detection.
Nuismer and Otto's model provides a framework for understanding empirical observations of allele expression in known host-parasite interactions and may well help explain similar modifications of allele expression in other systems. Because the model also provides testable predictions, it should be useful in interpreting data from a wider range of species and interactions, furthering our understanding of the evolutionary forces that shape infection and resistance and ultimately influence how genes evolve.

Most of us don't have much trouble recognizing what we see. Whether it is a face in a crowd, a bird in a tree, or papers on a desk, our brains expertly distinguish the target from the clutter. It is a simple skill most of us take for granted, but object recognition is not hard-wired. As we navigate our environment, the brain's visual centers continually reorganize themselves, classify novel features, and learn to pick out important objects from the background. Just how the human brain does this is not well understood, but new research by Zoe Kourtzi and colleagues may have uncovered some important clues.
To investigate how the human brain learns to separate targets (signal) from noise, Kourtzi et al. showed subjects pictures of novel shapes embedded in a cluttered background and asked the subjects to determine whether or not the shapes were symmetrical. The researchers recorded the subjects' responses while using functional magnetic resonance imaging (fMRI) to measure neuronal activity in brain regions associated with visual processing. Each subject was tested using two sets of novel shapes: high-salience shapes (shapes easily distinguished from the background), and low-salience shapes (shapes camouflaged by the background). After the initial testing, the subjects were trained to recognize a subset of the new shapes from each group, and then re-tested.
Visual input is thought to go through a hierarchy of processing centers that transform retinal images into complex objects and scenes. Kourtzi et al. recorded responses from both early (V1, V2, Vp, and V4) and late (lateral occipital cortex) stages of visual analysis in 26 subjects. The authors found that subjects demonstrated an increased number of correct responses for shapes they encountered during the training sessions, regardless of the type of background the shapes were presented on. By contrast, the fMRI responses differed dramatically, depending on whether the surroundings made the shapes easy or difficult to detect. Low-salience shapes triggered an increased fMRI response across all brain regions following training; high-salience shapes precipitated a decrease in fMRI response in the regions of the lateral occipital cortex, but produced no change in any of the early visual areas (V1, V2, Vp, and V4). These results demonstrate that the ability to learn to detect novel shapes is independent of the degree of difficulty, but suggest that the brain employs different mechanisms of perceptual learning depending on whether the objects stand out from their surroundings or are obscured by them. Learning to detect highly camouflaged shapes results in increased brain activity levels that are presumed to reflect an increase in signal processing at the level of both the early visual areas and higher levels of cortical analysis. On the other hand, the reduction of neural activity that occurs during learning of more distinctive shapes likely reflects efficient neural coding of the critical features for their recognition at later stages of visual analysis.
According to Kourtzi and her colleagues, their results provide evidence that the visual brain is capable of tailoring the mechanism of perception to best suit the task. When the signal is weak, as in the case of viewing camouflaged targets, learning amplifies neural responses to the target shapes and drowns out the noise. But when the signal is strong, as in the case of viewing easily distinguishable, highly salient targets, neural activity in the visual cortex is reduced, possibly because training engages smaller populations of neurons that respond much more selectively to distinctive features of the stimulus.
In other words, all visual stimuli are not treated equally, and with just cause: the brain's unique ability to treat ambiguous signals differently than robust ones likely allows it to optimize neural coding, and in doing so, learn to increase detection of a broad spectrum of visual signals.
Förstemann … complex in Drosophila cells. DOI: 10.1371/journal.pbio.0030235

Sorting and Transporting a Viral Cargo: The Role of the Rab7 Protein. DOI: 10.1371/journal.pbio.0030260. [Figure 10.1371/journal.pbio.0030260.g001: Immunofluorescence showing intracellular compartments containing Rab7 (red), EEA1 (green), and Semliki Forest Virus (blue).]

Transcriptional Waves in the Yeast Cell Cycle. DOI: 10.1371/journal.pbio.0030243. Oliva A, Rosebrock A, Ferrezuelo F, Pyne S, Chen H, et al. (2005) The cell cycle-regulated genes of Schizosaccharomyces pombe. DOI: 10.1371/journal.pbio.0030225

Three New Phases of Repairing DNA Damage in E. coli. DOI: 10.1371/journal.pbio.0030239. [Figure 10.1371/journal.pbio.0030239.g001: Genes involved in the SOS response to DNA damage are expressed in three precisely timed phases.] Friedman N, Vardi S, Ronen M, Alon U, Stavans J (2005) Precise temporal modulation in the response of the SOS DNA repair network in individual bacteria. DOI: 10.1371/journal.pbio.0030238

…a Hub-Shaped Cell Metabolic Network. DOI: 10.1371/journal.pbio.0030261. [Figure 10.1371/journal.pbio.0030261.g001: For modeling purposes, metabolites can be denoted as binary strings of biochemical groups; enzymes catalyze the transfer of the groups.]

Phages Affect Gene Expression and Fitness in E. coli. DOI: 10.1371/journal.pbio.0030258. [Figure 10.1371/journal.pbio.0030258.g001: The bacterial virus λ integrates into the E. coli genome, where it shuts down the cell's ability to grow on poor carbon sources.] …fitness and the regulation of Escherichia coli genes by bacterial viruses. DOI: 10.1371/journal.pbio.0030229

Are We Underestimating Species Extinction Risk? DOI: 10.1371/journal.pbio.0030253. [Figure 10.1371/journal.pbio.0030253.g001: Experiments with Daphnia magna, the water flea, show that traditional extinction models may be underestimating extinction risk.]

…Lived Flies, It's Calorie Quality, Not Quantity, That Matters. DOI: 10.1371/journal.pbio.0030237. [Figure 10.1371/journal.pbio.0030237.g001: Contrary to popular belief, life span extension by dietary restriction in Drosophila is not explained by calories.]

Host-Parasite Battles Shed Light on the Evolution of Gene Expression. DOI: 10.1371/journal.pbio.0030252. [Figure 10.1371/journal.pbio.0030252.g001: Interactions between hosts and parasites shape changes in gene expression, potentially maximizing parasite recognition for hosts while minimizing detection for parasites (such as red blood cells and trypanosomes).] Nuismer SL, Otto SP (2005) Host-parasite interactions and the evolution of gene expression. DOI: 10.1371/journal.pbio.0030203

Cutting through the Clutter: How the Brain Learns to See. DOI: 10.1371/journal.pbio.0030256. [Figure 10.1371/journal.pbio.0030256.g001: The human brain learns to detect the contours of target objects in cluttered scenes by recruiting early and higher centers of visual analysis.]
Klein-like tunneling of sound via negative index metamaterials
Klein tunneling is a counterintuitive quantum-mechanical phenomenon, predicting perfect transmission of relativistic particles through higher-energy barriers. This phenomenon was shown to be supported at normal incidence in graphene due to pseudospin conservation. Here I show that a Klein tunneling analogue can occur in classical systems and, remarkably, without relying on mimicking graphene's spinor wavefunction structure. Instead, the mechanism requires a particular form of the constitutive parameters of the penetrated medium, yielding transmission properties identical to the quantum tunneling in graphene. I demonstrate this result by simulating tunneling of sound in a two-dimensional acoustic metamaterial. More strikingly, I show that by introducing a certain form of anisotropy, the tunneling can be made unimpeded for any incidence angle, while keeping most of its original Klein dispersion properties. This phenomenon may be termed omnidirectional Klein-like tunneling. The new tunneling mechanism and its omnidirectional variant may be useful for applications requiring lossless and direction-independent transmission of classical waves.
I. INTRODUCTION
The idea of guiding classical waves by mimicking quantum-mechanical wave phenomena has received major interest in recent years. This is enabled by the striking analogy between the electronic band structure of solids and the frequency dispersion of classical systems [1]. For example, a great deal of attention was devoted to mimicking quantum topological phenomena [2][3][4][5] in acoustic and elastic media, realized using architectured materials or metamaterials. The topological properties of the band structure were exploited to achieve unique functionalities that are uncommon for sound and vibration, such as narrow beam-like waves that are immune to backscattering from corners, bends, and structural defects [6][7][8][9][10][11][12][13][14][15][16][17][18].
However, an entire class of quantum-mechanical phenomena related to tunneling remains considerably under-explored for classical waveguiding. These phenomena include Klein tunneling of relativistic particles [19][20][21][22][23][24], tunneling of particles across the event horizon of black holes [25], tunneling of electron pairs through superconducting junctions [26], and more. The common property of these effects, which constitutes the essence of tunneling, is an unusual and counterintuitive ability of particles to cross gaps, barriers or interfaces, despite this crossing being forbidden in a sense by dynamical or energetic considerations. Translating this exciting property into the classical realm holds the potential to substantially advance waveguiding capabilities in classical systems. In this work the focus is on Klein tunneling.
II. THE ORIGINAL QUANTUM EFFECT
Quantum tunneling described by the Klein paradox [19] is a phenomenon in which relativistic particles unimpededly cross a potential barrier regardless of its height and width, Fig. 1(a). [Fig. 1: (b) Klein tunneling in graphene, illustrating the particle transition to the lower Dirac cone with the addition of a potential V0 > E. (c) A wave ψI (energy E, momentum k1) incident in graphene at φ > 0, between domains 1 and 2 that differ by the energy V0; the transmitted wave ψT (energy E, momentum k2) is at angle θ, negative for E < V0 (the tunneling case) and positive for E > V0; a reflected wave ψR exists. (d) Klein tunneling in graphene at normal incidence φ = 0; the transmission is reflectionless for any V0.] The fact that this crossing has a unity transmission probability when the barrier energy V0 is higher than the particle energy E is counterintuitive, as one would expect the transmission probability to decay with increasing barrier height, as in the non-relativistic scenario.
A similar effect was predicted [20], observed [21,22], and analyzed [23,24] for Dirac electrons in graphene between two domains that differ by a constant electrostatic potential V(x) = V0. The underlying principle of tunneling in graphene was shown to originate from the two-component structure of its wavefunctions, which resembles Dirac spinors and features Dirac-like cone dispersion, Fig. 1(b). At the transmission to the higher-potential side, the electron of energy E and momentum k1 is shifted to the lower band with the same energy but a different momentum k2, keeping its velocity direction but flipping its momentum direction.
In Figs. 1(c),(d) a step potential of height V0 and infinite width is considered. The domains with V(x) = 0 at −∞ < x < 0 and with V(x) = V0 at 0 ≤ x < ∞ are labeled 1 and 2. For a wave ψI incident from domain 1 and a wave ψT transmitted to domain 2, Fig. 1(c), the incidence and transmission angles φ and θ constitute the phases between the two components of the respective wavefunctions, and are associated with sublattice pseudospin. For φ > 0 a reflected wave ψR exists. Momentum equity in the y direction in both domains, together with wavefunction continuity at the domains' interface, referring to pseudospin conservation [23], yields

sin θ / sin φ = E/(E − V0),   (1a)

1 + R = T,  (1 − R) cos φ = T cos θ,   (1b)

where R and T are reflection and transmission amplitudes. The relation in Eq. (1a) implies that for a given φ, the relative 'heights' of E and V0 are translated to the sign of θ (and k2), which is positive for E > V0 and negative for the tunneling case E < V0. The tunneling effect is manifested in Eq. (1b) at normal incidence (φ = 0), where the transmission becomes unimpeded irrespective of E and V0, implying R = 0 and T = 1, as depicted in Fig. 1(d). The exotic properties of Klein tunneling inspired the search for analogies in other systems [27][28][29], but these were exclusively based on mimicking graphene or graphene-like lattices. Next I demonstrate that a tunneling effect with properties identical to Eqs. (1a)-(1b) can occur in inherently classical systems without a restriction to graphene's particular wavefunction structure and dispersion.
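To make the angle dependence concrete, the following minimal Python sketch evaluates the step relations as reconstructed above (the Snell-like law of Eq. (1a) together with the matching conditions of Eq. (1b)). The function name and amplitude conventions are illustrative and not taken from the original works; with these conventions, the transmitted flux probability is 1 − R².

```python
# Sketch: angles and amplitudes for a Dirac electron hitting a sharp step.
import numpy as np

def klein_step(E, V0, phi):
    """Return (theta, R, T) from sin(theta) = E*sin(phi)/(E - V0) and the
    matching 1 + R = T, (1 - R)*cos(phi) = T*cos(theta); valid while the
    transmitted wave is propagating (|sin(theta)| <= 1)."""
    theta = np.arcsin(np.sin(phi) * E / (E - V0))   # carries the sign of E - V0
    R = (np.cos(phi) - np.cos(theta)) / (np.cos(phi) + np.cos(theta))
    T = 2.0 * np.cos(phi) / (np.cos(phi) + np.cos(theta))
    return theta, R, T

# Normal incidence: reflectionless for any barrier height (Klein tunneling).
theta, R, T = klein_step(E=1.0, V0=3.0, phi=0.0)
print(theta, R, T)   # -> 0.0, 0.0, 1.0; transmission probability 1 - R**2 = 1
```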
III. A NON-SPINOR CLASSICAL ANALOGY
A. Effective medium model

To this end I consider the system in Figs. 1(c),(d) to represent a continuous acoustic medium defined by the pressure field p(x, y, t) and flow velocity field v(x, y, t). Domain 1 is a uniform acoustic medium of mass density m0 and bulk modulus b0. Domain 2 is a complex medium, described by a dynamical mass density m0M(ω) and bulk modulus b0B(ω), where ω is the sound wave frequency. The constitutive parameters M(ω) and B(ω) play a crucial role in reproducing Klein-like tunneling in this system. Assuming longitudinal wave propagation [30] and time-harmonic dependence pj(x, y, t) = Pj(x, y)e^(−iωt), vj(x, y, t) = Vj(x, y)e^(−iωt), with j = 1, 2 indicating the domain number, this system is governed by

∇Pj = iω m0 Mj(ω) Vj,   (2a)

∇·Vj = iω Pj/(b0 Bj(ω)).   (2b)

Here M1(ω) = B1(ω) = 1, M2(ω) = M(ω) and B2(ω) = B(ω). I now consider a pressure wave of amplitude PI incident from domain 1 at angle φ, a reflected wave PR, and a wave PT transmitted to domain 2 at angle θ, which respectively stand for ψI, ψR and ψT in Fig. 1(c). Employing horizontal stratification, continuity of pressure, and continuity of normal flow velocity along the domains' interface (derivation details appear in Appendix A) gives

k1 sin φ = k2 sin θ,   (3a)

1 + R = T,  (1 − R)(cos φ)/z1 = T(cos θ)/z2,   (3b)

with zj the specific acoustic impedance of domain j. These classical concepts of wave propagation between media, Snell's law of refraction in Eq. (3a) and Fresnel's reflection and transmission coefficients R and T in Eq. (3b), are strikingly similar to the quantum tunneling properties in Eqs. (1a) and (1b). Using the mapping E ↔ ω², matching Eqs. (1a)-(1b) with Eqs. (3a)-(3b) requires

√(M(ω)/B(ω)) = k2/k1 = 1 − V0/ω²,   (4a)

√(M(ω)B(ω)) = z2/z1 = 1.   (4b)
This determines the mass density and bulk modulus as

M(ω) = 1 − V0/ω²,  B(ω) = (1 − V0/ω²)^(−1).   (5)

The particular combination of the parameters in Eq. (5) creates acoustic tunneling with properties identical to the tunneling of electrons in graphene, although the underlying continuous-field physics in Eqs. (2a)-(2b) is fundamentally different from the quantum Dirac physics. In Eq. (4a), √(M(ω)/B(ω)) = k2/k1 is the ratio of the domain-2 and domain-1 wavenumbers, resulting in

k2 = (ω/c)(1 − V0/ω²),  sin θ = sin φ/(1 − V0/ω²),   (6)

with c = √(b0/m0) the speed of sound in domain 1. Both the wavenumber ratio k2/k1 and the transmission angle θ in Eq. (6) perfectly coincide with the corresponding values of the quantum Klein tunneling in graphene [20,23,24]. In Eq. (4b), √(M(ω)B(ω)) = z2/z1, with z1 = (m0b0)^(1/2), is the respective ratio of the specific acoustic impedance [31]. Eq. (4b) indicates that the impedances of domains 2 and 1 are matched at all frequencies. This implies that a normally-incident acoustic wave for any ω²/V0 (and an obliquely-incident wave for the particular case ω²/V0 = 1/2) will penetrate domain 2 completely free of backscattering, i.e. with R = 0, similarly to the quantum tunneling. For all other values of ω²/V0, R ≠ 0 at oblique incidence. However, impedance matching alone is not enough for the analogy; the particular dispersion of Eq. (6) is required in domain 2.
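A short numerical sketch of Eq. (5), assuming the matched-Drude forms written above, makes the sign structure explicit: below ω = √V0 both parameters are negative, while the impedance ratio stays unity at every frequency. The sampled frequencies are arbitrary.

```python
# Sketch: sign structure and impedance matching of the Eq. (5) parameters.
import numpy as np

V0 = 1.75e8                                  # threshold [rad^2/s^2], Sec. III B
w = np.array([0.5, 0.8, 1.2, 2.0]) * np.sqrt(V0)

M = 1.0 - V0 / w**2                          # relative dynamic mass density
B = 1.0 / M                                  # relative bulk modulus
k_ratio = M                                  # k2/k1 = sqrt(M/B), with M's sign
z_ratio = np.sqrt(M * B)                     # z2/z1

print(np.round(M, 3))        # negative below sqrt(V0): double-negative medium
print(np.round(k_ratio, 3))  # negative wavenumber in the tunneling band
print(np.round(z_ratio, 3))  # all ones: impedance matched at every frequency
```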
Here, k2 in Eq. (6), and M(ω) and B(ω) in Eq. (5), are positive for ω² > V0 and negative for ω² < V0. The notion of a negative wavenumber, resulting from simultaneously negative constitutive parameters, is a celebrated concept in the research of wave propagation in electromagnetic and acoustic systems. It indicates antiparallel phase and group velocities, leading to extraordinary phenomena unavailable in natural materials [32][33][34][35][36][37]. In fact, the expressions in Eq. (5) coincide with the so-called matched Drude model [32] or left-handed electric networks [33], in the electromagnetic terminology. In this section I showed that Eq. (5) constitutes an exact classical analogue of Klein tunneling. Next I propose its realization using an acoustic metamaterial.
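The reflectionless cases quoted above can be checked directly from Eqs. (3b) and (6); the sketch below does so for a set of tunneling-band frequencies (values assumed, chosen so that the transmitted wave is propagating at φ = 28°). R vanishes at normal incidence for every frequency, and at oblique incidence exactly when ω²/V0 = 1/2, where θ = −φ.

```python
# Sketch: interface reflection versus frequency in the tunneling band.
import numpy as np

gam = np.array([1/3, 0.4, 0.5, 0.6, 2/3])    # gamma = w^2/V0 samples
M = 1.0 - 1.0 / gam                          # k2/k1, Eq. (6)
for phi_deg in (0.0, 28.0):
    phi = np.deg2rad(phi_deg)
    theta = np.arcsin(np.sin(phi) / M)       # negative refraction angle
    R = (np.cos(phi) - np.cos(theta)) / (np.cos(phi) + np.cos(theta))
    print(phi_deg, np.round(R, 3))           # R = 0 at phi = 0; R = 0 at gam = 0.5
```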
B. Acoustic metamaterial realization
The proposed metamaterial is illustrated in Fig. 2(a). Domain 1 is a waveguide of area L1 × h, consisting of two rigid parallel plates gapped by a distance d. Domain 2, of area L2 × h, is a matrix of a × a × d cuboids, Fig. 2(a) inset, with elastic membranes (blue circles) of radius R and stiffness Bm (½Bm for a unit cell) mounted in the walls, and an open side-branch cavity resonator of length l and radius r [31] (red cylinder) at the top. The external walls are sealed, with an array of acoustic actuators (grey circles) at the left wall, producing source waves (black arrow).
The membranes create an effective mass density of the Drude form m0M(ω), with M(ω) = 1 − V0/ω², whereas the resonator creates an effective bulk modulus b0B(ω), with B(ω) = (1 − V0/ω²)^(−1), as derived in Appendix B. The graphene potential V0 thus translates into a function of the metamaterial's constitutive parameters and geometry, and unlike in the quantum system it does not represent any physical addition. It indicates the threshold between a double-positive and a double-negative index acoustic medium. The particular value of V0 depends on the desired ratio γ = ω²/V0. The wavelength in domain 2, λ2 = 2π/|k2|, is determined from Eq. (6). For a ≪ λ2, the collective unit cell dynamics turns the metamaterial into an effectively continuous material with properties determined by Eq. (5). To obtain ω0² < V0, as required for tunneling, we set, e.g., γ0 = ω0²/V0 = 2/3. This results in V0 = 1.75·10⁸ rad²/s², and by Eq. (S5) in k2 = −k1/2 and λ2 = 2λ1. The pressure fields, obtained by the finite difference time domain (FDTD) method (simulation construction details appear in Appendix D), are plotted for two cases. In the first, Fig. 2(b), the source beam incidence is normal, φ = 0°, resulting in perfectly unimpeded tunneling. In the second, Fig. 2(c), the beam is incident at φ = 28°, resulting in tunneling at a negative angle θ ≈ −70° and a partial reflection. In both cases the refraction is negative, with the phase and group velocities vp2 = −vg2 = −2c. The simulated transmission angles, wavelengths and wave velocities are in full agreement with the theoretical expectations from Eqs. (4)-(5).
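The quoted numbers follow directly from Eqs. (5)-(6); the short sketch below (with hypothetical variable names, and the stated V0 and γ0) reproduces k2 = −k1/2 and the refraction angle near −70° for φ = 28°.

```python
# Sketch: checking the isotropic simulation numbers of Fig. 2.
import numpy as np

V0, gamma0 = 1.75e8, 2.0 / 3.0
w0 = np.sqrt(gamma0 * V0)                 # working frequency, ~1.08e4 rad/s
M = 1.0 - V0 / w0**2                      # = -1/2 -> k2 = -k1/2, lambda2 = 2*lambda1
phi = np.deg2rad(28.0)
theta = np.arcsin(np.sin(phi) / M)        # Snell-like law, Eq. (6)
print(round(w0), M, round(np.rad2deg(theta), 1))   # ~10801, -0.5, ~ -69.9 deg
```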
IV. OMNIDIRECTIONAL KLEIN-LIKE TUNNELING

A. Anisotropic medium design
It would be exceptionally interesting to discover conditions under which the Klein-like tunneling defined by Eq. (4) becomes unimpeded regardless of the incidence angle, for any ω²/V0. This could be useful for applications that require navigating detection beams of arbitrary incidence angles and frequencies around an object without backscattering (acoustic camouflaging, for example). Since Eqs. (4a) and (4b) uniquely determine the metamaterial parameters, an additional degree of freedom in the design is required. This can be obtained by introducing anisotropy [38,39] to the effective mass density, with Mx(ω) in the x direction and My(ω) ≠ Mx(ω) in the y direction. The system is then described by Eq. (2) with two distinct equations in Eq. (2a). A possible realization in an acoustic metamaterial is illustrated in Fig. 3, which is similar to the one in Fig. 2(a). Requiring reflectionless transmission at any angle (Appendix C) yields

tan θ = tan φ/Mx(ω),   (7a)

R = 0,  T = 1.   (7b)

The relations in Eqs. (7a)-(7b) resemble Eqs. (3a)-(3b), yet are essentially different. Eq. (7a), which may be considered a modified Snell's law of refraction, indicates that there is no critical angle for any γ = ω²/V0 and φ, yet the refraction angle θ is positive for γ > 1 and negative for γ < 1, as in the original effect (Fig. S3 in Appendix C). Eq. (7b) indicates unimpeded transmission to domain 2 for any γ and φ. I denote this effect omnidirectional Klein-like tunneling.
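A brief sketch of the modified Snell's law, using Eq. (7a) as reconstructed above, illustrates the absence of a critical angle: θ remains finite for every incidence angle, and flips sign with γ − 1. The sampled angles and γ values are arbitrary.

```python
# Sketch: the modified Snell's law tan(theta) = tan(phi)/Mx has no critical angle.
import numpy as np

def theta_omni(phi, gamma):
    """Refraction angle from Eq. (7a), with Mx = 1 - 1/gamma."""
    Mx = 1.0 - 1.0 / gamma
    return np.arctan(np.tan(phi) / Mx)

phi = np.deg2rad(np.array([15.0, 28.0, 35.0, 60.0, 85.0]))
for gamma in (1/3, 1/2, 2/3, 2.0):
    print(gamma, np.round(np.rad2deg(theta_omni(phi, gamma)), 1))
    # theta is finite for every phi; negative for gamma < 1, positive for gamma > 1
```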
The parameter My(ω) can be of any form, provided the overall system is dynamically stable, where the condition B(ω)My^(−1)(ω) = 1 is applied at a specific working frequency ω0, i.e. My(ω0) = ω0²/(ω0² − V0). For example,

My(ω) = 1 − α/ω²,  α = γ0V0/(1 − γ0),   (8)

where γ0 = ω0²/V0 < 1. For γ0 = 1/2, My(ω) in Eq. (8) retrieves M(ω) of Eq. (4a), i.e. α = V0. The anisotropic medium defined by Eq. (8) thus supports unimpeded transmission for any incidence angle φ and a particular frequency ω = ω0. To support a different frequency, the parameter α in My(ω) needs to be adjusted accordingly. The constitutive parameters determine the medium's dispersion relation, which then takes the form

k2x = Mx(ω)k1x = (ω/c)(1 − V0/ω²) cos φ,   (9)

with k1y = k2y = (ω/c) sin φ. This relation captures the underlying mechanism of the omnidirectional tunneling. In fact, Eq. (9) is the x axis projection of the original Klein dispersion in Eq. (6), indicating that the omnidirectional k2 has the same Klein-like dispersion as in the angle-dependent case, just scaled by the positive constant cos φ/cos θ. Contrary to quantum graphene, for which the dispersion in the vicinity of the Dirac points consists of two touching cones in both domains 1 and 2, with a constant shift of V0 in domain 2, Fig. 1(b), the situation for the omnidirectional acoustic analogue is quite different, as discussed next.
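The tuning rule of Eq. (8) as reconstructed here can be verified numerically: the sketch below checks that My(ω0) = ω0²/(ω0² − V0) for several γ0, and that γ0 = 1/2 indeed retrieves α = V0 (the substitution also used in Appendix D).

```python
# Sketch: verifying the working-frequency condition of Eq. (8).
import numpy as np

V0 = 1.75e8

def alpha_for(gamma0):
    return gamma0 * V0 / (1.0 - gamma0)       # Eq. (8), as reconstructed

for gamma0 in (2/3, 1/2, 1/3):
    w0sq = gamma0 * V0                        # omega_0^2
    My_w0 = 1.0 - alpha_for(gamma0) / w0sq    # M_y evaluated at omega_0
    print(gamma0, np.isclose(My_w0, w0sq / (w0sq - V0)))   # -> True for each

print(np.isclose(alpha_for(0.5), V0))         # gamma0 = 1/2 retrieves alpha = V0
```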
B. Tilted cones dispersion and dynamical response
The dispersion of the anisotropic effective medium designed in Sec. IV A is depicted in Figs. 4(a)-(d). In domain 1 the dispersion is a single circular cone, Fig. 4(a). With the introduction of the potential, expressed through the anisotropic constitutive parameters of domain 2, this cone transforms into three surfaces, the form of which depends on γ0, as depicted in Figs. 4(b),(c),(d) for γ0 = 2/3, 1/2 and 1/3. The different values of γ0 indicate different working frequencies, ω0² = γ0V0, highlighted by a purple, green and yellow curve, respectively. V0 is kept constant at the value set in Sec. III B (1.75·10⁸ rad²/s²).
For 1/2 < γ0 < 1 and 0 < γ0 < 1/2, Figs. 4(b) and (d), the middle surface consists of two tilted cones with a hyperbolic isofrequency cross-section. The upper and lower cones have elliptic cross-sections, respectively forming tilted Dirac-like cones with the middle surface [40]. At γ0 = 1/2 the middle surface degenerates, as captured by the transparent sheet in Fig. 4(c), and is no longer part of the solution. The top and bottom surfaces become regular circular cones, touching at the origin.
For any γ0, the lower surface corresponds to the wave transition to domain 2, similarly to the electron transition from the upper to the lower cone in quantum graphene. For 1/2 < γ0 < 1 (0 < γ0 < 1/2) the major axis of the elliptic lower cone is along k2x (k2y). This polarization flipping, as illustrated by Fig. S3 in Appendix C, corresponds to the interplay of the tunnelled wave's group and phase velocity directions, respectively given by the lower-cone dispersion gradient in Figs. 4(b)-(d) and the transmission angle θ in Eq. (7a).
The omnidirectional tunneling is demonstrated in the dynamical FDTD responses of the anisotropic homogenized effective medium for three different working frequencies, γ0 = 2/3, γ0 = 1/2, and γ0 = 1/3, respectively depicted in Figs. 4(e)-(g) (simulation details and responses of an actual discrete structure appear in Appendix C, Fig. S5). The simulated systems feature the same bulk modulus B(ω) and x-direction mass density Mx(ω) = M(ω) as in the isotropic simulation in Fig. 2 (the actual metamaterials will comprise the same resonator geometry and x-direction membrane stiffness as in the isotropic case), keeping V0 = 1.75·10⁸ rad²/s² in all three simulations. To accommodate the different working frequencies implied by the different γ0, the y-direction mass density My(ω) is changed according to Eq. (8). The incidence angle is φ = 28° in all three cases. Simulations for other incidence angles are given in Fig. S4 of Appendix C.
The resulting acoustic pressure fields demonstrate complete transmission from medium 1 to medium 2 (up to minor numerical reflections). The phase and group velocity direction polarization is respectively illustrated by the grey and white arrows at angles θ and θg. For γ0 = 2/3 the source frequency is ω0 = (γ0V0)^(1/2) ≈ 1.08·10⁴ rad/s, and correspondingly ω0 ≈ 9.35·10³ rad/s for γ0 = 1/2 and ω0 ≈ 7.64·10³ rad/s for γ0 = 1/3.
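The phase and group directions can be estimated from the anisotropic dispersion as reconstructed in Appendix C; the sketch below does this numerically for γ0 = 2/3 and φ = 28°, using an implicit-function form G(ω, kx, ky) = 0 for the dispersion surfaces. The sound speed value is an arbitrary assumption (both angles are independent of it), and the printed numbers are illustrative of this reconstruction rather than values extracted from the figures.

```python
# Sketch: phase vs. group direction of the tunnelled wave, gamma0 = 2/3.
import numpy as np

c, V0, gamma0 = 340.0, 1.75e8, 2.0 / 3.0
alpha = gamma0 * V0 / (1.0 - gamma0)
w0 = np.sqrt(gamma0 * V0)
phi = np.deg2rad(28.0)

Mx = 1.0 - V0 / w0**2
kx = Mx * (w0 / c) * np.cos(phi)          # Eq. (S16): k2x = Mx * k1x
ky = (w0 / c) * np.sin(phi)               # Snell: k2y = k1y

def G(w, kx, ky):
    """Implicit anisotropic dispersion, from Eq. (S23) as reconstructed."""
    return (w**2 * kx**2 / (w**2 - V0) + w**2 * ky**2 / (w**2 - alpha)
            - (w**2 - V0) / c**2)

eps = 1e-6
dGdkx = (G(w0, kx + eps, ky) - G(w0, kx - eps, ky)) / (2 * eps)
dGdky = (G(w0, kx, ky + eps) - G(w0, kx, ky - eps)) / (2 * eps)
dGdw = (G(w0 + eps, kx, ky) - G(w0 - eps, kx, ky)) / (2 * eps)

vg = -np.array([dGdkx, dGdky]) / dGdw     # group velocity direction
theta = np.rad2deg(np.arctan(np.tan(phi) / Mx))       # phase angle, Eq. (7a)
theta_g = np.rad2deg(np.arctan2(vg[1], vg[0]))        # group angle
print(round(theta, 1), round(theta_g, 1))  # roughly -47 and -15 degrees here
```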
V. CONCLUSION
This work provided an exact analogue of the quantum Klein tunneling phenomenon in an inherently classical acoustic medium, not by mimicking graphene's spinors, but by tailoring the constitutive parameters according to Eq. (5). A realization of these parameters in the acoustic metamaterial of Fig. 2(a) was suggested. Furthermore, the anisotropic design of Fig. 3, with the tuning parameter in Eq. (8), enabled the sound to tunnel independently of the incidence angle and the frequency-potential ratio, obeying the modified Snell's law in Eq. (7a) and the unique three-surface dispersion in Figs. 4(b)-(d). This new phenomenon can be denoted omnidirectional Klein-like tunneling. Due to the general effective medium formalism in Eq. (2), this strategy offers a platform for omnidirectional unimpeded wave transmission in diverse classical systems.
ACKNOWLEDGEMENT

I thank Yair Shokef, Yoav Lahini, Roni Ilan and Moshe Goldstein for useful discussions.
Appendix A: Derivation of Eqs. (3a)-(3b)

Writing Eq. (2) per domain,

∇Pj = iω m0 Mj(ω) Vj,   (S1)

∇·Vj = iω Pj/(b0 Bj(ω)),   (S2)

leading to the total wave equations

∇²P1 + (ω²/c²)P1 = 0   (S3)

and

∇²P2 + (ω²/c²)(M(ω)/B(ω))P2 = 0.   (S4)

Substituting traveling wave solutions P1(x, y) ∝ e^(i(k1x x + k1y y)) and P2(x, y) ∝ e^(i(k2x x + k2y y)) for the corresponding pressure fields in Eqs. (S3) and (S4) gives

k1x² + k1y² = ω²/c²,  k2x² + k2y² = (ω²/c²)M(ω)/B(ω),   (S5)

so that the total wavenumbers become

k1 = ω/c,  k2 = (ω/c)√(M(ω)/B(ω)).   (S6)

Substituting Eq. (S6) into the horizontal stratification condition k1y = k2y, i.e. into k1 sin φ = k2 sin θ, results in Eq. (3a).
The first part of Eq. (3b), 1 + R = T, does not depend on the constitutive parameters of domain 2. It is the direct result of continuity of pressure at x = 0, P1(x = 0, y) = P2(x = 0, y), or PI(x = 0, y) + PR(x = 0, y) = PT(x = 0, y), where PI, PR and PT are the incident, reflected and transmitted fields, explicitly defined as PI(x, y) = P0 e^(i(k1x x + k1y y)), PR(x, y) = P0 R e^(i(−k1x x + k1y y)) and PT(x, y) = P0 T e^(i(k2x x + k2y y)). The second part of Eq. (3b) does depend on M(ω) and B(ω). The requirement of continuity of normal flow velocity, V1x(0, y) = V2x(0, y), or VIx(0, y) + VRx(0, y) = VTx(0, y), by Eqs. (S1) and (S2) implies

∂x(PI + PR)|x=0 = (1/M(ω)) ∂x PT|x=0.   (S7)

Differentiating PI, PR and PT, and using k1y = k2y, k1x = k1 cos φ and k2x = k2 cos θ in Eq. (S7), gives

(1 − R) k1 cos φ = T (k2/M(ω)) cos θ,   (S8)

which, by Eq. (S6), is the second part of Eq. (3b).

Appendix B: The metamaterial effective parameters

The physics of an acoustic cavity-on-neck resonator, aka Helmholtz resonator, as well as sound wave transmission through an elastic membrane, is well known [34]. However, their collective dynamic behavior in the metamaterial setting, producing Eq. (5), requires some derivation. The derivation here includes the dissipation that naturally exists in both membranes and cavities. To this end, the schematic of Fig. S1 is considered, which represents a unit cell of length a in a channel of the metamaterial in Fig. 2(a). This channel has a cross-sectional area Ac = ad. The resonator, here closed, can be regarded as an air mass per unit area Mh [kg/m²] attached to an air spring per unit area Bh [N/m³], with dissipation Dh, where the neck of area An = πr² stands for the mass and the cavity of volume Vol for the spring; both are given by

Mh = m0 l′,  Bh = b0 An/Vol,   (S9)

with l′ the effective neck length. The connection of the resonator to the tube can thus be represented by a serial connection of a dynamic impedance zh(ω) = Mh iω + Dh + Bh/iω with the air impedance B0/iω, B0 = b0/a, leading to an effective bulk modulus of the form

B(ω) = [1 + (B0 An/Ac)/(Bh + iωDh − ω²Mh)]^(−1);   (S10)

in the lossless open-resonator limit (Bh, Dh → 0) this reduces to B(ω) = (1 − V0/ω²)^(−1) of Eq. (5), with V0 = B0An/(AcMh). An analogous treatment of the series membranes yields the effective mass density m0M(ω) of Eq. (5).

Appendix C: The anisotropic medium

The time-harmonic constitutive equations of the anisotropic medium in domain 2 take the form

∂P2/∂x = iω m0 Mx(ω) V2x,  ∂P2/∂y = iω m0 My(ω) V2y,   (S11)

∂V2x/∂x + ∂V2y/∂y = iω P2/(b0 B(ω)),   (S12)

and the total wave equation becomes

(1/Mx) ∂²P2/∂x² + (1/My) ∂²P2/∂y² + (ω²/c²)(1/B) P2 = 0.   (S13)

With P2(x, y) ∝ e^(i(k2x x + k2y y)), Eq. (S13) yields the dispersion relation

k2x²/Mx + k2y²/My = (ω²/c²)(1/B).   (S14)

Continuity of normal flow velocity at x = 0 reads

(1 − R) k1x = T k2x/Mx.   (S15)

For angle-independent perfect transmission it is then required to set

k2x = Mx(ω) k1x,   (S16)

which implies 1 − R = T. Together with the continuity of pressure requirement, which does not change in the anisotropic regime and yields 1 + R = T, Eq. (7b) is retrieved. Now, substituting Eq. (S16) into the dispersion relation Eq. (S14), and using Snell's law k2y = k1y, gives

Mx k1x² + k1y²/My = (ω²/c²)(1/B).   (S17)

This needs to retrieve the uniform dispersion relation of domain 1, which leads to the conditions

Mx(ω) B(ω) = 1,  B(ω)/My(ω) = 1.   (S18)

On the other hand, using the explicit form of k2y = k1y,

ω² sin²φ = c² k2² sin²θ = (c² k2x² + ω² sin²φ) sin²θ.   (S19)

Solving for k2x gives

k2x = (ω/c) sin φ (cos θ/sin θ),   (S20)

which together with Eq. (S16) and k1x = (ω/c) cos φ results in Eq. (7a). Substituting Mx = 1 − V0/ω², B = ω²/(ω² − V0) and My = 1 − α/ω² into Eq. (S14) then gives the dispersion surfaces

c²[ω² k2x²/(ω² − V0) + ω² k2y²/(ω² − α)] = ω² − V0,   (S23)

where γ = ω²/V0 for the general frequency ω, and γ0 = ω0²/V0 for the specific working frequency ω0 (which sets α = γ0V0/(1 − γ0)). The relation in Eq. (S23) represents the dispersion plots in Figs. 4(b)-(d).
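As a consistency check of the reconstructed algebra above, the following symbolic sketch verifies two things: that, in the impedance-matched case z2 = z1, the Fresnel coefficients R = (cos φ − cos θ)/(cos φ + cos θ) and T = 2cos φ/(cos φ + cos θ) satisfy both continuity conditions of Eq. (3b); and that imposing My = B and k2x = Mx k1x reduces the anisotropic dispersion of Eq. (S14) to the free dispersion of domain 1.

```python
# Sketch: symbolic checks of the reconstructed Appendix A and C relations.
import sympy as sp

phi, theta = sp.symbols('phi theta', real=True)
R = (sp.cos(phi) - sp.cos(theta)) / (sp.cos(phi) + sp.cos(theta))
T = 2 * sp.cos(phi) / (sp.cos(phi) + sp.cos(theta))
print(sp.simplify(1 + R - T))                                   # -> 0 (pressure)
print(sp.simplify((1 - R) * sp.cos(phi) - T * sp.cos(theta)))   # -> 0 (velocity)

w, V0, c = sp.symbols('w V_0 c', positive=True)
Mx = 1 - V0 / w**2
B = 1 / Mx
My = B                                   # working-frequency condition B/My = 1
k1x = (w / c) * sp.cos(phi)
k1y = (w / c) * sp.sin(phi)
k2x = Mx * k1x                           # Eq. (S16)
lhs = k2x**2 / Mx + k1y**2 / My          # Eq. (S14) with k2y = k1y
print(sp.simplify(lhs - w**2 / (c**2 * B)))                     # -> 0
```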
In Fig. S3, isofrequency contours of the dispersion surfaces of Figs. 4(b)-(d) are depicted. For $1/2 < \gamma_0 < 1$ and $0 < \gamma_0 < 1/2$, the isofrequency cross-sections of the middle surface are hyperbolic, forming tilted Dirac-like cones with the top or bottom surfaces, which have elliptic cross-sections. At $\gamma_0 = 1/2$ the middle surface degenerates, and the top and bottom surfaces become regular circular cones, touching at the origin. The polarization of the elliptic cones is flipped between $1/2 < \gamma_0 < 1$ and $0 < \gamma_0 < 1/2$, with the bottom cone indicating the interplay between the group and phase velocity directions of the tunneled wave.
To illustrate the angle independence of the omnidirectional Klein-like tunneling, dynamical simulations of the anisotropic medium for two additional incidence angles, $\varphi = 15°$ and $\varphi = 35°$, are depicted in Fig. S4.
Finally, the homogenized medium response for the incidence angle of $28°$ is compared in Fig. S5 to the response of the actual discrete metamaterial with unit cell size of $a \approx \lambda/8$. The computational algorithm was based on coupled spatial and temporal iterative schemes, where the spatial scheme extends in both the x and y directions. The update in time was carried out using a step of $dt = 10^{-6}$. Space was discretized with a step of $dx = \lambda/40$ in the continuous medium case, and $dx = \lambda/8$ in the discrete structure case.
The source function was a continuous sinusoidal wave generated at the left boundary, $x = 0$. To localize the wave to a finite section and thus create a finite-width beam, the wave was truncated along the vertical y axis by a Gaussian of width proportional to $\sigma^2$. The tilt angle $\varphi$ of the beam was created by inducing a targeted phase shift in the source as a function of the distance y, $\Delta\phi(y) = (2\pi a/\lambda_1)\sin\varphi$. The overall source function, for the discretization step $a$ and incidence domain (medium 1) wavelength $\lambda_1$, was therefore given by $f(y,t) = e^{-(y\sigma)^2/2}\,\sin(\omega t + \Delta\phi(y))$.
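Purely as an illustration of this source construction (not code from the paper), a minimal Python sketch of the Gaussian-truncated, phase-tilted boundary signal could look as follows; the grid size, wavelength, σ and angle values are arbitrary placeholders, and the per-step phase shift is accumulated over the distance y.

```python
import numpy as np

# Hypothetical grid and source parameters (not the paper's values)
ny, a = 400, 1.0e-3                 # number of y grid points, discretization step
lam1 = 8 * a                        # incidence-domain wavelength, here lambda_1 = 8a
omega = 2 * np.pi * 343.0 / lam1    # angular frequency for c = 343 m/s
sigma = 25.0                        # Gaussian truncation parameter
phi = np.deg2rad(28.0)              # beam tilt (incidence) angle

y = (np.arange(ny) - ny / 2) * a    # vertical coordinate along the x = 0 boundary

def source(t):
    """f(y,t) = exp(-(y*sigma)^2/2) * sin(omega*t + dphi(y)): a Gaussian-truncated
    sinusoid whose y-dependent phase shift tilts the emitted beam by phi."""
    dphi = 2 * np.pi * (y / lam1) * np.sin(phi)   # accumulated phase shift over y
    return np.exp(-0.5 * (y * sigma) ** 2) * np.sin(omega * t + dphi)

f0 = source(0.0)  # boundary values injected at x = 0 for the current time step
```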
For domain truncation the absorbing boundary conditions (ABCs) technique was used, which matches the medium impedance $z_0$ to that of the boundary. In general, this is not trivial when the medium is dispersive, since the effective medium impedance is frequency dependent. In our case, however, the impedance $z_2(\omega) = z_0$ implied the ABCs $\partial p(x,y,t)/\partial t = -c\,\partial p(x,y,t)/\partial x$ along the $x = L$ boundary. In the y direction an extended domain was used to allow the wave to hit the $x = L$ boundary. The implementation of the frequency dispersion was a three-step process: (i) translating the frequency domain expressions to time domain differentiation operators (e.g. $-\omega^2 p \Leftrightarrow \partial^2 p/\partial t^2$), resulting in higher order partial differential equations (PDEs), (ii) converting the PDEs into an auxiliary system of first order equations, and (iii) augmenting the iteration scheme accordingly with the spatially discretized version of the auxiliary equations.
The frequency domain functions are $P(x,y)$, $V_x(x,y)$ and $V_y(x,y)$, whereas the corresponding time domain functions are $p(x,y,t)$, $v_x(x,y,t)$ and $v_y(x,y,t)$. Considering the dispersion of the anisotropic case defined above, the governing equations respectively become higher order PDEs in time, Eq. (S26). Defining auxiliary variables, the high order PDEs in (S26) can be rewritten in the form of the first order system (S28)-(S29). The system of first order PDEs in (S28)-(S29), once spatially discretized and rearranged for the update in time, constitutes the finite difference time domain scheme used in the simulations of Fig. 4(e)-(g). The simulations of the isotropic case, Fig. 2(b),(c), were carried out using the same scheme, by substituting $\alpha = V_0$. Two types of dissipation were modeled. One was an overall dissipation $\zeta$, which affects the entire pressure field as $\zeta\,\partial p/\partial t$ and exists in both medium 1 and medium 2. The other type was a dissipation $\zeta_m$ caused by particular features of the metamaterial cells in medium 2, thus directly affecting the effective constitutive parameters. The latter, taken with equal values of 0.005 for brevity, was incorporated directly into the effective constitutive parameters.
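Since the auxiliary equations are not reproduced above, the following Python sketch shows only the non-dispersive skeleton of such a pressure-velocity leapfrog scheme on a staggered grid (the isotropic, dispersion-free limit); all grid and medium values are illustrative assumptions, and the dispersive auxiliary-variable updates and the driven boundary would be added where indicated.

```python
import numpy as np

# Illustrative parameters (not the paper's values)
nx = ny = 200
dx = 1.0e-3                        # spatial step
c, rho = 343.0, 1.2                # speed of sound, density of air
dt = 0.4 * dx / (c * np.sqrt(2))   # CFL-limited time step for a 2D grid
B0 = rho * c**2                    # bulk modulus

p = np.zeros((nx, ny))             # pressure at integer time steps
vx = np.zeros((nx + 1, ny))        # x-velocity at half time steps (staggered)
vy = np.zeros((nx, ny + 1))        # y-velocity at half time steps (staggered)

for n in range(2000):
    # velocity update: dv/dt = -(1/rho) grad p
    vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    vy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    # pressure update: dp/dt = -B0 div v
    p -= dt * B0 / dx * (vx[1:, :] - vx[:-1, :] + vy[:, 1:] - vy[:, :-1])
    # a dispersive medium would add its auxiliary-variable updates here, and
    # the x = 0 boundary row would be driven by the tilted Gaussian source
```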
Metabolic Shock in Elderly Pertrochanteric or Intertrochanteric Surgery. Comparison of Three Surgical Methods. Is One Much Safer?
Abstract

Introduction: Trochanteric fractures are a major trauma in the elderly population and represent a significant part of public health spending. Various fixation devices are used as treatment for these fractures. This study aimed to evaluate three surgical methods in the treatment of pertrochanteric femoral fractures.

Materials and methods: From January 1, 2013, to December 31, 2014, 43 patients were divided into 3 groups. Fifteen patients were treated with osteosynthesis by reamed intramedullary nailing (RIMN), 15 patients were treated with unreamed intramedullary nailing (UIMN), and 13 patients were treated with dynamic hip screw (DHS) plate osteosynthesis. All patients were >75 years of age. They were evaluated with clinical-radiological follow-up and laboratory examination (LDH, CPK, IL-1β, IL-8, TNF-α, alpha-1-acid glycoprotein, D-dimer, fibrinogen, ESR, CRP, and procalcitonin).

Results: IL-8, TNF-α, fibrinogen, D-dimer, and alpha-1-acid glycoprotein levels were higher in the DHS group compared with the other two groups at 1 month after surgery (P<0.05). LDH, IL-1β, and IL-6 levels were higher in the DHS group compared with the other two groups at 3 months after surgery (P<0.05). From 3 to 6 months after surgery, the TNF-α level was high in the DHS and RIMN groups (P<0.05). Infection markers did not demonstrate a difference among the 3 groups. Twelve patients died during the 12-month follow-up. Regardless of the method used, morbidity and mortality are linked to the inflammatory state and comorbidities rather than to surgery within 48 hours after the trauma.

Conclusions: From our study, we can affirm that the values of cytokines and interleukins observed remain high during the 12-month follow-up, regardless of which fixation device or surgery type was used within 48 hours of injury. Inflammatory markers are higher in patients in the DHS group. This can probably be explained by the fact that the DHS technique is performed by open surgery, which can create greater inflammation of soft tissue. Mortality is reduced in the first 30 days after surgery if patients are mobilized early. Therefore, mortality in our study population of patients aged >75 years is linked more to the chronic inflammatory state and comorbidities than to the fixation device or surgical type used. However, future studies are needed to answer further questions that go beyond the scope of our study.
Introduction
Pertrochanteric or intertrochanteric fractures are increasing in the population with an incidence of 57.5% [1]. Therefore, they are a common cause of hospitalization, disability, and mortality. In fact, 29% of patients lose the capacity to independently perform activities of daily living and 56% lose walking ability [2].
In 2014, Barbour et al. reported that the highest levels of inflammatory markers are correlated with an increased risk of fracture in elderly people [3]. At present, various surgical techniques are available to treat pertrochanteric or intertrochanteric hip fractures, such as reamed intramedullary nailing (RIMN), unreamed intramedullary nailing (UIMN), and dynamic hip screw (DHS) fixation [4]. The objective of this article is to analyze the positive and negative outcomes of these techniques and to determine whether metabolic shock is related to a specific technique.
Materials and Methods
From September 2016 to May 2018, 43 patients (Table 1) were enrolled in this study. These patients were divided into three groups according to surgical technique. Fifteen patients were treated with RIMN, 15 patients were treated with UIMN, and 13 patients were treated with DHS (Table 1). All treated patients were at least 75 years old; had fractures classified as A1, A2, or A3 according to the AO classification; and had undergone the operation within 48 hours of the trauma. Patients younger than 75 years and those who used corticosteroids, immunosuppressants, anticoagulants, or antiplatelets were excluded from the study. The patients in our study exhibited several comorbidities, such as cardiovascular disease, respiratory disease, renal disease, stroke, diabetes mellitus, rheumatoid disease, Parkinson's disease, severe mental deterioration, and Paget's disease; some were current smokers. In all three groups, 50% of patients had up to three comorbidities. These comorbidities, together with the older age of the patients, conferred a high anesthetic risk (Table 2). All patients were informed in a clear and comprehensive manner about the three types of treatment and the corresponding surgical alternatives. Patients were treated according to the ethical standards of the Declaration of Helsinki and were invited to read, understand, and sign the informed consent form. All patients had a follow-up of 1 year, with clinical examinations and radiographic control evaluated at 1, 3, 6, and 12 months. We analyzed metabolic shock with LDH and CPK; inflammation with IL-1β, IL-8, TNF-α, and alpha-1-acid glycoprotein; thrombosis with D-dimer and fibrinogen; and the risk of infection with ESR, CRP, and procalcitonin. We did not consider the units of transfused blood, because transfusion must be performed according to the specific needs of the patient, and not to standardized levels.
Results
Initial measurement of inflammatory markers in the emergency room revealed no statistically significant difference among the three groups. However, all parameters analyzed were higher than baseline at the 12-month follow-up. The value of LDH was higher in the DHS group compared with the other two groups (Table 3). CPK did not show a significant difference among the three groups during the follow-up (Table 4). IL-1β and IL-6 levels (Tables 5 and 6) were higher in the DHS group at 3 months after surgery (P<0.05). IL-8 level was higher in the DHS group compared with the other two groups at 1 month after surgery (Table 7). In the DHS group, TNF-α level was higher compared with the other two groups at 1 month after surgery (Table 7). From 3 to 6 months after surgery (Table 8), the TNF-α level was high in the DHS and RIMN groups (P<0.05). There were slightly higher values of alpha-1-acid glycoprotein in the DHS group at 1 month after surgery (Table 9). Fibrinogen and D-dimer levels were higher in the DHS group at 1 month after surgery (P<0.05) (Tables 10 and 11). Infection markers did not demonstrate a difference among the three groups (Tables 12-14). Because of the short-term antibiotic therapy, fixation only caused an increase in inflammatory markers postoperatively. Twelve patients died during the 12-month follow-up. The patients treated with intramedullary nailing were able to walk, on average, 2.3 (range: 2-4) days after the surgery, whereas the patients treated by DHS walked, on average, 17.3 (range: 14-20) days after the surgery.
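The statistical test behind the reported P-values is not named in the paper; purely as an illustrative sketch (hypothetical placeholder values and an assumed nonparametric Kruskal-Wallis comparison, not the authors' analysis), a three-group marker comparison could be run as follows.

```python
from scipy import stats

# Hypothetical 1-month IL-8 values (pg/mL) for the three fixation groups
il8_rimn = [12.1, 10.4, 11.8, 9.9, 13.0]
il8_uimn = [11.5, 10.9, 12.3, 10.1, 11.0]
il8_dhs  = [16.2, 15.1, 17.8, 14.9, 16.5]

# Kruskal-Wallis H-test: do the three groups share the same distribution?
h, p = stats.kruskal(il8_rimn, il8_uimn, il8_dhs)
print(f"H = {h:.2f}, P = {p:.4f}")  # P < 0.05 would mirror the reported group difference
```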
Western blotting analysis
The extraction of total proteins was performed with cells at a concentration of about 2.5 × 10⁵ cells/mL. Briefly, the cells were centrifuged at 1,200 rpm for 5 min at 4°C in order to remove the culture medium, and the pellet was washed with 1 mL of PBS 1×. It was then lysed with about 200 μL of lysis buffer (50 mM Tris-HCl, 150 mM NaCl, 10 mM NaF, and NP-40 1%) in the presence of a protease inhibitor cocktail (PIC; Sigma-Aldrich) and 200 μM of phenylmethylsulfonyl fluoride (PMSF; Sigma-Aldrich). Finally, the samples were centrifuged for 15 min at 13,000 rpm at 4°C. The antibodies used were TNF (Santa Cruz) and IL-6, IL-8, and IL-10 (Abcam); Erk (Santa Cruz) was used to normalize the results. Protein quantification was performed following the Bradford method. The Bio-Rad protein assay is based on the change in the color of the dye Coomassie Brilliant Blue G-250 in response to various concentrations of protein. The reagent binds mainly to aromatic amino acid residues or basic residues, such as arginine; a solution of bovine serum albumin of known concentration was used as standard. The reading was performed spectrophotometrically.
Discussion
There are many different implants and operative procedures available to treat pertrochanteric or intertrochanteric fractures; however, choosing the best treatment is a topic of debate. Current data show an increasing use of cephalomedullary nails despite evidence in the literature suggesting the dynamic hip screw to be superior [5]. Intertrochanteric hip fractures in the elderly are frequent [6]. Unstable fracture patterns (fractures 31-A2 and 31-A3, AO/ASIF classification) occur more frequently with increased age and low bone mineral density [6]. Unstable fractures can be difficult to manage, particularly in noncompliant patients with implant failure [6]. We used implants that are common and widely used in orthopedics and traumatology within our study [6,7]. There is considerable debate about the optimal timing of hip fracture repair and whether delaying the repair affects outcomes.

Postoperative rates of medical complications and death are high for many reasons. Current guidelines [8] recommend that surgeons perform hip fracture surgery within 48 hours of injury. Observational studies suggest earlier surgery is associated with better functional outcome and lower rates of nonunion, shorter hospital stays and duration of pain, and lower rates of complications and mortality. In published series, patients who underwent surgery earlier had lower rates of nonunion, avascular necrosis of the femoral head, urinary tract infections, decubitus ulcers, pneumonia, venous thromboembolism, and death, and better long-term functional status than those who underwent surgery later [8]. In addition, delaying surgery prolongs the patient's pain and suffering. In a recent prospective cohort study involving 1,206 patients, those who underwent surgery within 48 hours had significantly fewer days of severe and very severe pain and shorter lengths of hospital stay. Higher pain ratings in patients with hip fracture are associated with longer postoperative lengths of stay, delayed postoperative rehabilitation, and increased risk of delirium, which increases mortality and complications in elderly hospitalized patients [9]. If a fractured femur in an elderly patient is considered as polytrauma or with comorbidity, it receives an average score of 15-17 points on the Injury Severity Score scale, which may lead to multiple organ failure (MOF), acute respiratory distress syndrome (ARDS), and other serious complications [10]. In addition, the window of opportunity in patients with multiple trauma or multiple comorbidities to avoid a second hit caused by the surgery is between 4 and 7 days from the trauma [11]. In fact, the mortality was 9.6% at 30 days after the surgery and 33% at 1 year from the surgery. Patients aged >75 years who need surgery to repair a hip fracture have a critically higher incidence of congestive heart failure [12]. In order to minimize complications or mortality in surgery, a kind of damage control can be carried out using gastric protectors, using antithrombotic therapy, and maintaining a good hemodynamic equilibrium with hemoglobin values >8 g/dL [12]. However, 1-month mortality is reduced if patients are operated on within 48 hours after the trauma [13]. Unfortunately, after 80 years of age, surgery for hip fracture is carried out more for the purpose of pain reduction than to alter a patient's perspective of life. Hezman et al. [14] analyzed 9,157 intramedullary nail procedures and 27,687 plate fixation procedures from 1998 to 2007. During this period, the proportion of intertrochanteric hip fractures treated with intramedullary nail fixation, instead of plate fixation, increased from 3.3% to 63.1% [14]. Patients treated with an intramedullary nail had a higher adjusted risk of pulmonary embolism at 3 months after surgery (39%) and mortality at 1 year after surgery (9%) compared with those treated with plate fixation [14]. However, patients treated with an intramedullary nail had a 22% lower adjusted risk of conversion to hip replacement at 1 year after surgery. On the basis of the subgroup analysis of 4,074 patients treated with an intramedullary nail and 2,869 patients treated with plate fixation from 2006 to 2007, the lower adjusted risk of conversion to hip replacement at 1 year after surgery was still observed with intramedullary nails (~36%).
The previously mentioned higher risk of pulmonary embolism and mortality associated with treatment with an intramedullary nail in the larger study group was not found in the subgroup analysis.
Of the selected complications, deep venous thrombosis (4%) and mortality (25%) were the most frequently reported complications at 90 days and 1 year after surgery [14]. In our study, mortality is higher in the DHS group compared with the other groups (Table 3). However, at the end of the year of follow-up, there was not a statistically significant difference among the three groups, but the DHS group demonstrated slightly higher mortality compared with the other two groups.
On the basis of our results, we believe that mortality is linked with the comorbidity of the patients instead of the surgical procedure. Introducing early weight bearing activity may assist in decreasing the mortality rate. According to Carretta et al. [15], patients with a hip fracture should have surgery within 2 days from admission in order to reduce 30-day mortality.
Hip fractures have long been associated with significant morbidity and mortality, regardless of the surgical mode of treatment, with 1-year mortality rates reported to range from 13% to 32.5% [16-22]. In elderly patients, a pertrochanteric fracture might have an important impact on the entire body, and may present like polytrauma for some patients [23]. For these patients, mainly elderly with multiple comorbidities, surgery represents the "second hit" after the first fracture trauma. Minimally invasive methods for indirect reduction and fixation [24] try to minimize the impact of this second hit. Unfortunately, to date, the expected benefit (decreased additional tissue damage) of these minimally invasive techniques [25] has not been objectively measurable. Elderly patients who have a pertrochanteric or intertrochanteric fracture and associated comorbidity may exhibit an injury severity score >15 and, therefore, are at risk for developing MOF or ARDS, as in patients with polytrauma [26]. In our data, the values of infection-risk markers, such as ESR and CRP, were higher in all three groups during the perioperative period, but these are nonspecific markers. Procalcitonin, which is a more specific marker of sepsis and infection, was high in all three groups (P>0.05) in the perioperative period during the antibiotic prophylaxis, as reported in the literature [27].
A meta-analysis of polytrauma has shown that the increased risk of thrombosis is primarily due to medical conditions, such as a high ISS, blood transfusions, neurological deficit, hip injuries, and head trauma [28]. Many patients in our groups had these risk factors. In a meta-analysis of 27,441 patients with hip fracture, 449 (1.6%) developed deep venous thrombosis (DVT) or pulmonary embolism (PE). There was a significant difference in the rates of DVT/PE based on surgery type (P = 0.015). Patients undergoing intramedullary nailing of inter-/peri-/subtrochanteric femoral fractures had the highest rates of DVT/PE (2.06%). However, the multivariate analysis revealed that renal failure and recent surgery were significant risk factors for DVT/PE [29].
In opposition to the previous literature, we demonstrate that for the first 3 months after surgery, the risk of DVT/PE was greater in the DHS group (P<0.05) compared with the other groups. However, the values remained high throughout the follow-up. Perhaps this result is due to the prolonged recovery of patients in the DHS group compared with the other two groups. Many authors argue that the shock is due to a chaotic imbalance between the demand and the supply of energy substrate at the cellular level [26]. In fact, LDH was higher in the DHS group the day after the surgery. This may be due to the invasive nature of the surgery or to patients' comorbidities. CPK values were higher than normal because of the presence of sarcopenia syndrome in elderly patients [30]. There are several interrelationships between bone and muscle, and when the aging process affects one of these, the functionality of the other is compromised [30].

In experimental studies, it has been shown that the cytokines IL-6, IL-1β, and TNF-α exert effects on skeletal homeostasis [31]. In particular, IL-1β stimulates the proliferation of osteoblasts and production of mineralized bone matrix and inhibits the proliferation and differentiation of chondrocytes. It has also been reported that levels of IL-1β are higher in stabilized, rather than non-stabilized, fractures of the tibia in mice [31]. According to our data, the values of IL-6 and IL-1β are higher in the DHS group compared with the other two groups until the sixth month after surgery, because the failure to engage in early weight bearing is correlated with a delay in bone healing [32]. TNF-α is a cytokine involved in the systemic response and a member of a group of cytokines that stimulate the acute-phase reaction. It is mainly produced by macrophages, although it can be produced by CD4+ T lymphocytes, NK cells, neutrophils, mast cells, eosinophils, and neurons. It promotes bone healing and increases protein breakdown in muscle and fat breakdown in the adipose tissue. The high postoperative values at the 6-month follow-up in the RIMN group, compared with the UIMN group, are due to the insult of reaming the marrow [33]. We are not certain as to the explanation behind the trend observed for the values of the DHS group. The secretion of IL-8 is increased by oxidative stress, which in turn causes the recruitment of inflammatory cells and a further increase in mediators of oxidative stress, making it a key parameter in localized inflammation. IL-8 is also known to be a potent promoter of angiogenesis [34]. IL-8 seems not to be useful for our purpose of quantifying surgical tissue damage [31] but may be useful to understand the oxidative stress of the body and to quantify its ability to repair bone via angiogenesis. Alpha-1-acid glycoprotein is too nonspecific a marker to be taken into account. In fact, it turns out to also be present in inflammatory and microtraumatic states, such as Legg-Calve-Perthes disease [35]. Alpha-1-acid glycoprotein has been identified as one of four potentially useful circulating biomarkers for estimating the 5-year risk of all-cause mortality (the other three are albumin, very low-density lipoprotein particle size, and citrate) [36]. Given the values in Table 15, perhaps we can allude to why a higher mortality was observed in the DHS group compared with the other two groups, despite the absence of any statistical difference among groups.
Conclusions
From our study, we can affirm that the values of cytokines and interleukins observed remain high during the 12-month follow-up, regardless of which fixation device or surgery type was used within 48 hours of injury. Mortality is reduced in the first 30 days after surgery if the patients are mobilized early. Therefore, mortality in our study population of patients aged >75 years is linked more to the chronic inflammatory state and comorbidities than to the fixation device or surgical type used. However, future studies are needed to answer further questions that go beyond the scope of our study.
Limitations in Investigational Methodology
The limitations of the current study were the limited number of patients, the nonprobability convenience sample drawn from few centers, and the level 1 trauma center setting. Another limitation is that part of the population comes from a retrospective study and another part from a continuative case series. Selection of patients may be biased, making generalization of results difficult. It may be unclear whether the confluence of findings is merely a chance occurrence or is truly characteristic of a new disease or syndrome. Case series studies have no comparison group and can only be used to generate hypotheses about the relationship between an exposure and an outcome. Another limitation was that the measurements and intervention were made without randomization of the researcher to the experimental groups, which has the potential for bias. Finally, other limiting factors of the study acknowledged by the authors are the potential for regression to the mean, the presence of temporal confounders, and the use of subjective scores.
CONFLICT OF INTEREST STATEMENT:
All authors disclose any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work. Examples of potential conflicts of interest include employment, consultancies, stock ownership, honoraria, paid expert testimony, patent applications/ registrations, and grants or other funding.
HUMAN AND ANIMAL RIGHTS:
For this type of study, any statement relating to studies on humans and animals is not required. All patients gave informed consent before being included in the study. All procedures involving human participants were in accordance with the Declaration of Helsinki 1964 and its later amendments.
Implementation, relevance, and virtual adaptation of neuro-oncological tumor boards during the COVID-19 pandemic: a nationwide provider survey
Purpose Neuro-oncology tumor boards (NTBs) hold an established function in cancer care as multidisciplinary tumor boards. However, NTBs predominantly exist at academic and/or specialized centers. In addition to increasing centralization throughout the healthcare system, changes due to the COVID-19 pandemic have arguably created advantages through conducting clinical meetings virtually. We therefore asked about the experience and acceptance of (virtualized) NTBs and their potential benefits. Methods A survey questionnaire was developed and distributed via a web-based platform. Specialized neuro-oncological centers in Germany were identified based on the number of brain tumor cases treated in the respective institution per year. Only one representative per center was invited to participate in the survey. Questions targeted the structure/organization of NTBs as well as changes due to the COVID-19 pandemic. Results A total of 65/97 institutions participated in the survey (response rate 67%). In the context of the COVID-19 pandemic, regular conventions of NTBs were maintained by the respective centers and multi-specialty participation remained high. NTBs were considered valuable by respondents in achieving the most optimal therapy for the affected patient and in maintaining/encouraging interdisciplinary debate/exchange. The settings of NTBs have been adapted during the pandemic with the increased use of virtual technology. Virtual NTBs were found to be beneficial, yet administrative support is lacking in some places. Conclusions Virtual implementation of NTBs was feasible and accepted in the centers surveyed. Therefore, successful implementation offers new avenues and may be pursued for networking between centers, thereby increasing coverage of neuro-oncology care. Supplementary Information The online version contains supplementary material available at 10.1007/s11060-021-03784-w.
Introduction
Treatment of patients with cancers affecting the central and peripheral nervous system is complex and requires a coordinated team of specialists. Multidisciplinary tumor boards (MTBs) form the foundation for highly specialized (neuro-)oncology care and the continuous maintenance of the highest quality in cancer care [1]. The benefits of MTBs include efficient collaboration of multiple providers, communication between treatment teams, continuous education, increased adherence to treatment guidelines, and access to clinical trials [2,3]. Moreover, MTBs not only provide an opportunity for consensus building on the best possible treatment regimens for individual patients, but also make a significant contribution to collegial agreement in individual case decisions. However, a study by Snyder et al. showed a high degree of heterogeneity in the implementation, proceedings, and documentation of such MTBs [4]. Going beyond the implementation of general MTBs, Robin et al. recommend the establishment of a multidisciplinary brain tumor board led by neuro-oncologists for patient-centered neuro-oncology treatment planning and management to address the multiple demands of neuro-oncology patients [5]. Appropriately, Snyder and colleagues state that the implementation and delivery of neuro-oncology MTBs, including those at nonacademic centers, is critical for nationwide quality assurance of neuro-oncology patient care [4]. In addition, highly specialized neuro-oncology tumor boards (NTBs) focusing solely on neuro-oncological patients cannot easily be established at every center. Nevertheless, the survival advantage of neuro-oncological patients treated at high-volume and/or academic centers seems evident [6,7]. The survey by Snyder et al. revealed that although the academic tumor centers polled had a desire to review external cases, only a quarter also experienced involvement from affiliated satellite centers [4]. To address the issue of nonparticipation of external centers, both teleconferencing options and increased use of virtual platforms have been proposed but rarely implemented to date [8,9]. However, in the context of the COVID-19 pandemic, numerous efforts to increase digitalization/virtualization, particularly in healthcare, have accelerated [10,11]. This digitalization leap has not just created new challenges but, conversely, is also fostering new opportunities for expert networking in (neuro-)oncological tumor care [12].
The aim of the study was to identify the implementation modalities in academic and non-academic hospitals regarding NTBs in Germany in order to use this knowledge to further improve current practice and, if possible, to take advantage of the trends towards increasing digitalization.
Survey population-neuro-oncological specialty centers
The surveyed neuro-oncology centers in Germany were identified via Germany's leading provider transparency portal WeisseListe.de (WL.de) (https://www.weisse-liste.de). The portal WL.de has become the largest public portal for quality reports in healthcare in Germany [13]. The collected information originates from the statutory quality reports of over 2,000 hospitals in Germany [14]. Since these quality reports are also based on the hospitals' accounting data, it was possible to record the number of reported brain tumor cases for each hospital using ICD-10 coding. For each hospital reporting more than 50 cases with ICD-10 code C71 (malignant neoplasm of the brain) in a year, the authors' team manually identified one person responsible for neuro-oncology therapy using address lists. In this way, only one response per hospital was collected for the survey.
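As a minimal sketch of this selection step (the quality-report extract, file name, and column names are hypothetical assumptions, since the underlying data are not distributed with the paper), a pandas filter could look like this:

```python
import pandas as pd

# Hypothetical extract of the statutory hospital quality reports
reports = pd.read_csv("quality_reports.csv")  # assumed columns: hospital, icd10, cases, year

# Keep hospitals reporting more than 50 ICD-10 C71 (malignant brain tumor) cases in a year
c71 = reports[(reports["icd10"] == "C71") & (reports["cases"] > 50)]
centers = c71["hospital"].drop_duplicates()   # one contact person identified per center
print(f"{len(centers)} candidate neuro-oncology centers")
```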
Survey questionnaire
Adhering to the design of a cross-sectional study, an online survey was created via a web-based platform (SurveyMonkey Inc.; San Mateo, California, USA; www.surveymonkey.com). The survey consisted of 24 questions formulated after discussion among the authors (supplementary appendix). The questions were grouped into 4 categories: (1) structure, (2) function/implementation, (3) changes due to the COVID-19 pandemic, and (4) impact of NTBs on clinical practice. The questions on structure included information on the surveyed institutions and the format of the NTB. Questions on the function/implementation of the NTB addressed information on meeting tasks, activities, and staff composition, as well as the respective implementation of the NTB in the participating centers. Changes caused by the COVID-19 pandemic were asked about in a separate section (with an explicit focus on the expected increase in virtualization). Impact questions addressed the individual value of meetings as well as barriers. The survey was conclusively reviewed internally by a multidisciplinary group of physicians involved in neuro-oncology care at the University Hospital Bonn. The study was approved by the Institutional Review Board of the Medical Faculty of Bonn (no. 063/21).
Data collection
The survey was sent by email in March 2021 to the responsible medical staff involved in neuro-oncology at each hospital that had previously been identified using the described pathway. A total of 97 hospitals in Germany were contacted. No rewards or incentives for participation in the survey were offered, and those who refused to participate and/or did not complete the survey (more than three questions were missing) were considered non-responders. After one week, a reminder was sent in the same way to increase the response rate. Unique visitors were identified based on IP addresses and were used to prevent multiple entries from the same individual. The survey was available for a total duration of 3 weeks.
Data analysis
Survey responses were collected, downloaded, and converted into a dataset for further analysis. Summary statistics, simple and stratified, were compiled using SPSS (version 25, IBM Corp., Armonk, NY).
Respondent characteristics
Of the 97 centers invited, 65 (response rate 67%) responded to this survey. With the exception of one federal state, ≥ 50% of centers per federal state in Germany were represented (Fig. 1). Of the responding centers, 53% were a part of a university hospital, 29% were a part of a municipal hospital, and 18% were a part of a hospital with a private carrier. Overall, 5% of surveyed centers stated that they did not implement or were not affiliated with an NTB.
Characteristics of the neuro-oncological tumor boards
All centers that implemented NTB did so on a weekly schedule. The practice of conducting an NTB had been established for > 3 years in 89% of the participating centers. 10% of centers had been conducting or participating in an NTB for 1-3 years, whereas 2% had been doing so for < 1 year. Details on the participating specialties of NTBs are given in Table 1.
Regarding the diagnoses discussed in the NTB, all centers reported discussing patients with primary brain tumors (100%). Brain metastases were also discussed in 95% of the centers and spinal tumor cases in 86%. 47% of the participating centers also consulted on the treatment of paraneoplastic disease within their NTB. 86% reported performing collaborative preliminary discussion of cases with lesions of unknown etiology. Presentation of the cases to be discussed is performed in 79% by the treating physician, in 56% also by a resident physician, and in only 5% by recitation of text-only information about the patient. 84% of responding centers reported receiving detailed information about patient comorbidities as part of the case presentation. In 82% of the cases, the NTBs also directly address the inclusion possibilities of potential clinical trials.
Case presentation includes active demonstration of radiological imaging in 98%, and description of histological findings by (neuro)pathology in 82%. In 89% of NTBs, the results of additional molecular pathology investigations are also part of the individual case discussion. 21% state that, especially in rare cases, a summary of the current research and/or a literature review is included in the case briefing. In 20%, the demonstration of clinical cases is additionally complemented by histopathological images.
Documentation of NTB consultation results is done digitally in the patient's record in 91% of centers, including a separate report in 27% of cases, while in 9% of cases documentation is handled in the paper-based medical record. 73% of responding centers conduct regular morbidity and mortality (M&M) conferences focusing on patient safety and quality improvement within their NTB, of which 88% do so at least twice a year.
Changes during the COVID-19 pandemic
The circumstances of the COVID-19 pandemic, including the contact restrictions required, have led to increased use of virtual technology in 68% of participating centers. 23% of centers are now operating their NTB completely virtually, while 48% are using a partially virtual environment. In free-text response options regarding subjective benefits of virtual NTBs, the centers that have completed a virtual NTB conversion report improved integrability into daily clinical practice, significant time savings, better integration of external specialties/colleagues, higher numbers of participants, and significantly enhanced flexibility.
Value of neuro-oncological tumor board
All respondents consider improved communication among medical colleagues to be a significant benefit of NTBs (100%). 88% consider the NTB an opportunity to jointly achieve optimal/improved standardization and quality of care. 78% of the responding centers perceive an advantage in the continuous medical education stimulated by the NTB. 62% of the participants thought that the interdisciplinary meeting of the NTB enables more treatment options for the individual patient. 50% also regard the possibility of increased recruitment of patients for clinical neuro-oncological studies as an advantage of the NTB.
The major obstacle to conducting a weekly NTB is perceived by 35% of respondents to be the high number of cases, while 25% consider it difficult to integrate the NTB into their daily clinical duties, 28% complain about a lack of support from the clinical administration and/or information technology (IT) services, and 27% identify the absence of individual specialty departments during the NTB and/or within their center as a major constraint. According to respondents, 10% of centers experience scheduling conflicts between multiple MTBs. Within the scope of the free-text answers, complaints concern the long duration of the weekly NTB, the resulting inconsistency with stipulated work hours, and the sometimes deficient preparation/presentation of the cases to be discussed.
Discussion
Given the increased sub-specialization in the context of increasingly individualized tumor management, the establishment of specialized tumor boards (such as neuro-oncological tumor boards) seems worthwhile [15,16]. The respective benefits for both treatment and outcome of the patient, the continuing education and training of the treating physicians, the continuous self-reflection on applied treatment strategies, and the possibility of enrolling patients in clinical trials are proven advantages of NTBs [4,17,18]. Nevertheless, a continuous centralization of treatment can also be observed for neuro-oncological conditions, with a consequential reorganization of the hospital landscape towards the development of regional specialty centers/large-volume centers [19]. In addition to lower morbidity and mortality after (neuro)surgical procedures, improvements in overall survival have been demonstrated for various cancer types in large-volume centers due to the often improved inner-hospital multidisciplinary networking [7,20-22]. Furthermore, corresponding accrediting authorities also set certain thresholds for the number of cases to be treated as part of the certification process for specialty cancer centers. To what extent this progressive centralization of neuro-oncology treatment can also ensure nationwide coverage is as yet unknown.
For this reason, our survey was addressed to German institutions with experience in neuro-oncology based on the number of diagnoses of malignant primary brain tumors per year. Given the thorough and stringent selection of survey participants, as well as the response rate of more than 60%, we believe that the results presented here are robust for representative interpretation. Notably, many of the respondents were non-academic institutions, mirroring a broad neuro-oncological care network in Germany given the low incidence rates of malignant primary brain tumors. Among the national facilities surveyed, NTBs are generally accepted and represent a worthwhile component of daily clinical routine in the view of the survey participants. Respondent institutions reported a high level of experience with NTBs, with 89% having established NTBs for more than 3 years. Only 5% of the responding centers currently have no NTB implemented. This finding is underlined by the way cases are presented during these NTBs. The reported predominant participation and presentation by residents/treating physicians may indicate true accountability of treating physicians and allows for detailed case discussions as well as study recruitment. Further, the estimated value of NTBs may increase willingness to participate. The vast majority rated the opportunities for communication, implementation of optimal patient care, and continuing medical education as beneficial. As expected, participation in NTBs is perceived to be time-consuming and may therefore interfere with other duties in clinical practice.
COVID-19 pandemic leading towards virtualization of NTBs
The tremendous impact on daily living during the COVID-19 pandemic resulted primarily in shifts in medical care [23-26]. However, the results of our survey impressively demonstrate that NTBs were maintained during this period. This indicates the great efforts of local institutions and departments at this time, as well as the medical need for NTBs. Moreover, this occurred without a lower participation rate of individual disciplines, as demonstrated by the outstanding reported participation rates of neurosurgeons, radiation oncologists, medical oncologists, and (neuro)radiologists throughout this straining pandemic. A reason for this achievement may be the successful transition from face-to-face meetings to virtual meetings. This switch was reported as technically feasible and even resulted in a better integration of NTBs into daily practice routine. Interestingly, given the plethora of applicable software for virtual NTBs, the majority of institutions chose to use software from Zoom, Cisco, and Skype. In doing so, the virtual nature of the meetings seemed to facilitate the participation of the various disciplines, which would mean a significant benefit and increase in the quality of NTBs held virtually. Another advantage of a virtual implementation of NTBs seems to be easier NTB participation of local and/or external guests, which allows for more in-depth individual case discussion and also cross-regional networking of all participating physicians [27]. Nevertheless, increasing virtualization also poses a challenge to strict compliance with data protection laws. Particularly in the case of cross-regional NTBs, this requires close coordination between the respective data protection officers in accordance with the prevailing federal state data protection laws.
Value and relevance of NTBs
The multidisciplinary approach in neuro-oncology implies non-delegable tasks of neurosurgeons and radiation oncologists as well as shared topics such as diagnostics and/or medical treatment, supportive and palliative care. Between these different disciplines, a leadership role for patients and caregivers is needed to ensure fixed contact partners and a managing/coordinating department. Especially patients with primary brain tumors, as well as their caregivers, might need a designated site for contact due to the high amount of psycho-social distress and impairment in physical and cognitive functioning [28]. Neuro-oncological neurologists, as representatives of a non-interventional discipline, may be predisposed to this role. As a noble goal, they might hold the reins, thereby coordinating necessary further medical/surgical consultations and providing additional (neurologically skilled) supportive therapy to assist patients and their families throughout the course of their neuro-oncological disease. Obviously, these efforts, which demand great commitment, could also be assumed by either of the partnering disciplines. Nevertheless, only 4 of 5 participating institutions report that neurologists attend their NTBs, which makes increased engagement within neuro-oncology by neurology specialists desirable, given the medical care that must be provided to patients with tumors of the central nervous system.
Apart from patient management, the majority of respondents agree that the NTBs established at their centers contribute to improved physician communication, better interdisciplinary networking, and thus more optimal treatment of the patients entrusted to them. Resulting from the necessary contact restrictions in the context of the COVID-19 pandemic, the increased virtualization of NTBs promises to have a positive effect on improving attendance even for specialties not residing at the same hospital. Furthermore, due to the growing centralization in the healthcare system, a significantly improved integration of low-volume/remote hospitals might become feasible and thus contribute to a comprehensive, highly-specialized and optimal treatment of patients with CNS tumors.
Limitations
The main limitation of the present work, common to all surveys, is the inability to generalize the results based on the selected surveyed sample. Due to the described selection procedure of the recipients of the survey, there is also a risk of selection bias. Nevertheless, the selection approach reduces the risk of multiple responses per center, and the overall survey coverage achieved is likely to reflect a reliable sentiment regarding the implementation of NTBs in Germany. Furthermore, the selection of one contact person per center naturally negates the discipline-specific characteristics, which, on the other hand, were not addressed by the survey itself.
Conclusions
Increasing centralization in the healthcare system also affects patients suffering from neuro-oncological tumors. The enormous efforts of healthcare providers in the context of the COVID-19 pandemic, including the augmented virtualization of neuro-oncological tumor boards, could help to implement optimal care for neuro-oncological patients even in remote hospitals and thus nationwide.
Author contributions All authors contributed to the study conception and design. Material preparation was performed by all authors. Data collection and analysis were performed by NS and PS. The first draft of the manuscript was written by NS and PS, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL. The authors did not receive support from any organization for the submitted work.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Code availability A web-based platform provided by SurveyMonkey Inc. (San Mateo, California, USA; www.surveymonkey.com) was used. For data analysis we used SPSS version 25 (IBM Corp., Armonk, NY).
Conflict of interest
The authors declare that the article content was composed in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.
Ethical approval The study was approved by the Institutional Review Board of the Medical Faculty of Bonn (no. 063/21).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Prognostic values and clinical relationship of TYK2 in laryngeal squamous cell cancer
Supplemental Digital Content is available in the text
Introduction
Laryngeal cancer has, in recent years, been the second most common head and neck cancer, and over 95% of histological types are laryngeal squamous cell cancer (LSCC). [1,2] According to the Global Cancer Observatory report, there were a total of 177,442 new cases of laryngeal cancer, and about 10,000 patients died of this disease in 2018. [3] In 2020, an estimated 12,370 new cases of laryngeal cancer will be diagnosed in America, and approximately 3,750 patients will die of it. [4] Admittedly, both the socioeconomic loss and the medical burden are enormous. Tobacco and alcohol consumption are generally accepted as the most significant risk factors for LSCC, and human papillomavirus is also, to some extent, involved in tumorigenesis. [5] Approximately half of the patients already have stage III or IV disease when first diagnosed. [6] Although there are currently several effective treatments for the management of LSCC, including surgery, radiation therapy, and chemotherapy, LSCC is unfortunately one of the few tumors in which the 5-year survival rate has decreased over the past 40 years, from 66% to 63%. [4,7] Therefore, in order to improve the therapeutic effect and the survival rate of patients with LSCC, it is necessary to conduct further research to find novel biomarkers for tumor detection with prognostic value.
The tyrosine kinase 2 (TYK2) gene, located on human chromosome 19p13.2, is a protein-coding gene. [8] TYK2 plays important roles in various biological processes, including cytokine activation, growth factor response, and immune or inflammatory reactions, to name a few. [9,10] It's reported that TYK2-deficient patients are vulnerable to viral, fungal, or bacterial infections. [11] With further research, previous studies declared that TYK2 is an essential molecule involved in tumor immunosurveillance. [12] Notably, there is a possibility that TYK2 is, to some extent, implicated in the pathogenesis of cancer. Recently, the function of TYK2 in various cancers, such as prostate, ovarian, and breast tumors, has been widely reported. [13][14][15][16] However, the correlation between TYK2 and the prognosis of LSCC has not yet been reported. In this study, we performed bioinformatic analyses to identify the TYK2 expression difference between normal and tumor samples using high-throughput RNA-sequencing data from the Gene Expression Omnibus (GEO) and The Cancer Genome Atlas (TCGA) databases. Also, survival analyses were performed based on the TCGA profile. Therefore, the main aim of the present study was to evaluate the potential prognostic value and clinical correlation of TYK2 expression in LSCC. What's more, gene set enrichment analysis (GSEA) was performed to gain further insight into the biological pathways involved in LSCC pathogenesis related to the TYK2 regulatory network.
Data acquisition and bioinformatics analysis
The data of microarray dataset GSE59102 from the GEO database were chosen for analysis (https://www.ncbi.nlm.nih.gov/geo/). There were 13 normal samples and 29 tumor samples in dataset GSE59102 (last update date: Jan 23, 2019; platform: GPL6480 Agilent-014850 Whole Human Genome Microarray 4x44K G4112F). In total, 12 normal samples and 111 LSCC samples were obtained from the TCGA database. Notably, the data category was transcriptome profiling, and the data type was gene expression quantification. The experimental strategy was RNA-Seq and the workflow type was HTSeq-Counts. Clinical characteristic data, including gender, age, and tumor stage, were downloaded at the same time. We used R software (v.4.0.3) to test whether a statistical difference (P-value <.05) in the expression of TYK2 existed between the normal and tumor samples.
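The comparison itself was run in R (v.4.0.3); a functionally equivalent Python sketch, with a hypothetical handful of expression values standing in for the GEO/TCGA downloads, is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical log2-scale TYK2 expression values (placeholders, not the real data)
tyk2_normal = np.array([5.1, 4.8, 5.3, 4.9, 5.0])   # 12 normal samples in the real TCGA set
tyk2_tumor  = np.array([6.2, 6.8, 5.9, 7.1, 6.5])   # 111 LSCC samples in the real TCGA set

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test between the two groups
u, p = stats.mannwhitneyu(tyk2_tumor, tyk2_normal, alternative="two-sided")
print(f"U = {u:.1f}, P = {p:.3e}")  # P < .05 indicates differential expression
```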
GEPIA validation and statistical analysis
The Gene Expression Profiling Interactive Analysis (GEPIA) database (http://gepia.cancer-pku.cn/), a newly opened interactive web server for cancer and normal gene expression profiling and interactive analyses, includes 9,736 tumor and 8,587 normal samples from the TCGA and GTEx projects. [17] We further estimated the differences in TYK2 expression between normal and tumor tissue based on the GEPIA database. What's more, according to the median expression value of TYK2, we divided the tumor samples into 2 groups (high expression of TYK2 and low expression of TYK2). The survival analysis of TYK2 was performed by the Kaplan-Meier method and the log-rank test. The outcome of the survival analysis was then verified in the GEPIA database. The Wilcoxon signed-rank test was used to evaluate statistical differences between clinical pathologic features and TYK2. Univariate Cox regression analysis was applied to identify single clinical characteristics that were strongly correlated with survival. Besides, we also implemented multivariate Cox regression analysis in order to observe the impact of both TYK2 expression and other clinical characteristics on survival. All statistical analyses were performed with R software (v.4.0.3). Furthermore, a P-value <.05 was regarded as significant in all statistical analyses.
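A condensed Python analogue of these analyses (median split, Kaplan-Meier curves with a log-rank test, and Cox regression) is sketched below using the lifelines package; the original work used R, and the file and column names here are assumptions, so this is an illustration rather than the authors' code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("lscc_clinical.csv")  # assumed columns: time, event, tyk2, age, stage

# Median split into TYK2-high and TYK2-low groups
high = df["tyk2"] >= df["tyk2"].median()

# Kaplan-Meier estimates for each group
km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
km_hi.fit(df.loc[high, "time"], df.loc[high, "event"], label="TYK2 high")
km_lo.fit(df.loc[~high, "time"], df.loc[~high, "event"], label="TYK2 low")

# Log-rank test between the two survival curves
lr = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                  df.loc[high, "event"], df.loc[~high, "event"])
print(f"log-rank P = {lr.p_value:.4f}")

# Multivariate Cox regression: TYK2 adjusted for clinical covariates
cph = CoxPHFitter()  # stage assumed numerically encoded
cph.fit(df[["time", "event", "tyk2", "age", "stage"]],
        duration_col="time", event_col="event")
cph.print_summary()
```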
Gene set enrichment analysis (GSEA)
GSEA is a computational tool and method that can be used to investigate whether an a priori defined set of genes shows statistically significant, concordant differences between 2 biological states. [18] We applied GSEA software (version 3.0) for the functional enrichment analysis. First, genes were ranked in GSEA based on the correlation between their expression and the expression of TYK2. Subsequently, GSEA was carried out to identify significant signaling pathways between the low and high TYK2 expression data sets. The annotated gene set files (c2.cp.kegg.v7.0.symbols.gmt and h.all.v7.2.symbols.gmt) served as references. Gene set permutations were performed 1,000 times for each analysis. The phenotype label was the expression level of TYK2. What's more, a signaling pathway was considered statistically significant if the nominal P-value was <.05 and the false discovery rate q-value was <0.25.
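GSEA itself was run with the desktop software; purely to illustrate the ranking step described above, the following Python sketch orders genes by the correlation of their expression with TYK2 (a hypothetical random expression matrix and Pearson correlation are assumed).

```python
import numpy as np
import pandas as pd

# Hypothetical genes-by-samples expression matrix, with TYK2 among the rows
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.random((1000, 111)),
                    index=[f"GENE{i}" for i in range(999)] + ["TYK2"])

# Rank genes by correlation of their expression with TYK2 expression,
# mirroring how GSEA orders the gene list before enrichment scoring
tyk2 = expr.loc["TYK2"]
ranking = expr.drop(index="TYK2").apply(lambda g: g.corr(tyk2), axis=1)
ranked = ranking.sort_values(ascending=False)   # ranked list fed to enrichment analysis
print(ranked.head())
```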
TYK2 expression comparison and patient clinical characteristics
In order to explore whether a difference in the expression level of TYK2 existed between LSCC tissue and normal tissue, we employed the GEO, TCGA, and GEPIA databases to analyze its expression level. As shown in Figure 1A, the expression of TYK2 in tumor tissue samples (n = 29) was significantly higher than in normal tissue samples (n = 13) in GSE59102 (P = 1.683e−04). Significantly higher expression of TYK2 in the TCGA dataset was observed in 111 LSCC patients than in 12 normal samples (P = 1.365e−04, Fig. 1B). Also, we observed the same trend in the GEPIA database (P < .01, Fig. 1C) between tumor samples (n = 111) and normal samples (n = 12). What's more, the clinical characteristics of the 111 LSCC patients from TCGA are exhibited in Table 1.
Survival outcomes and Cox analysis
As exhibited in Figure 2A, the result of the Kaplan-Meier survival analysis indicated that tumor tissue with low expression of TYK2 was considerably associated with a worse overall survival (P < .001). Similarly, the survival analysis in the GEPIA database also showed that low expression of TYK2 in tumor tissues was associated with a worse overall survival than high TYK2 expression (P = .00027, Fig. 2B).
Discussion
LSCC is the second most common head and neck cancer, with increasing mortality in the United States. [7] As a result, it is necessary to find new biomarkers benefiting diagnosis, treatment, and prognosis assessment. To our knowledge, the function of TYK2 and its potential prognostic impact on LSCC have not yet been reported. The gene TYK2 is located on chromosome 19p13.2. [8] The protein TYK2 belongs to the JAK family, which structurally has 4 functional domains: the FERM domain, the SH2-like domain, the JH1 domain, and the JH2 domain. [19,20] The N-terminal FERM and SH2 domains, important mediators of peptide interactions, facilitate the binding of JAK to cytokine receptors. [19] The JH1 domain is a tyrosine kinase, and JH2 is regarded as a kinase-like, or pseudokinase, domain that can inhibit the kinase activity of JH1. [20] Notably, TYK2 is closely related to a variety of cytokines and immune cells. In studies of TYK2 knockout mice, TYK2-deficient mice became more susceptible to infection, which can be explained by impaired Th1 and Th17 development. [9,21,22] Conversely, low TYK2 levels led to enhanced lung inflammation because of an enhanced Th2 response and elevated IL-4 levels. [23] Moreover, TYK2 plays a critical role not only in effector Th cell signaling but also in NK cells and dendritic cells. The presence of TYK2 is necessary for dendritic cells to prime CD8+ T cells to generate IFN-γ, [24] and in the absence of TYK2, Th1 differentiation and IFN-γ generation by NK cells are abrogated. [25] Besides, previous research has established that TYK2 inhibitors show exciting preclinical efficacy in various autoimmune diseases, such as psoriasis, lupus, and inflammatory bowel disease. [26] To date, the expression and functions of TYK2 associated with or causative for tumorigenesis, including in prostate, ovarian, cervical, and breast cancer, to name a few, have attracted increasing attention and are being actively investigated. [13][14][15][16] The ability of TYK2 to regulate the invasiveness and metastasis of cancer cells has been widely reported. High levels of TYK2 show a positive correlation with the invasion and metastasis of prostate cancer via the gastrin-releasing peptide receptor. [13,27] TYK2 is also involved in the urokinase-type plasminogen activator receptor system, which is central to tumor cell migration and metastasis. [28,29] Intriguingly, it is a relatively lower TYK2 level in tumor samples that has been regarded as an unfavorable prognostic marker. [30] It has previously been shown that a normal or higher expression level of TYK2 is significantly associated with longer survival, according to a meta-analysis of hepatocellular cancer patients. [31] Consistent with this, our results also demonstrate that tumor tissue with low expression of TYK2 was considerably associated with worse overall survival. The mechanisms underlying these conflicting reports remain unclear. One possible explanation is that TYK2 inhibits or activates BCL-2 family members, which can protect tumor cells from death. [32,33] Moreover, the study led by Li et al [34] demonstrated a dual role of TYK2 in the regulation of Connexin43 (Cx43), with both pro- and antitumorigenic functions: on the one hand, TYK2 can decrease Cx43 stability via phosphorylation; on the other hand, TYK2 can increase Cx43 levels in a STAT3-dependent manner.
[35,36] Admittedly, the above findings provide constructive references for further research into the mechanism of TYK2 in LSCC.
Our study pointed to the fact that the expression of TYK2 in LSCC not only probably correlates with tumorigenesis but also, to some extent, could predict prognosis. Consequently, we can speculate that TYK2 contributes to the pathogenesis and metastasis of LSCC. To further investigate the functions of TYK2 in LSCC, GSEA was implemented using TCGA data; it showed that the JAK-STAT signaling pathway and the P53 signaling pathway are differentially enriched in the TYK2 high-expression phenotype.
Both the JAK-STAT signaling pathway and the P53 signaling pathway are well-known tumor-associated pathways. Originally, Leonard et al [37] suggested that constitutive activation of JAKs and STATs was highly associated with malignancy. Subsequently, the JAK-STAT pathway gradually came to be regarded as one of the most important cancer pathways [38] that directly contributes to tumorigenesis, progression, invasion, and metastasis. [39] Various cancers, such as lung cancer, oral cancer, and pancreatic cancer, are closely related to the JAK-STAT signaling pathway. [40][41][42] Furthermore, the P53 signaling pathway also has a notable effect on tumorigenesis. Previous studies have demonstrated that p53 overexpression was obviously associated with LSCC as well as head and neck squamous cell carcinoma in immunohistochemical analyses. [43,44] A recent study additionally reported that lupeol can regulate neoplastic growth and apoptosis in laryngeal cancer through a p53-mediated antitumor effect. [45] The aforementioned discoveries shed light on further study of the mechanism of TYK2 in LSCC.
Additionally, this study has notable limitations. Inevitably, the sample size of this analysis was relatively small. Moreover, we did not conduct experimental studies to explore the potential carcinogenic mechanism of TYK2 in the development of LSCC. Admittedly, additional studies are needed to elucidate the exact functional mechanisms of TYK2 in LSCC. Finally, the information available in the databases used was limited; improvement of these databases should yield more varied and credible outcomes.
Conclusion
In conclusion, this work postulated that TYK2 is probably a good prognostic factor in LSCC patients. The expression level of TYK2 decreased with the progression of the tumor. Besides, the JAK-STAT signaling pathway and the p53 signaling pathway might be the key pathways associated with TYK2 in LSCC (Table 3). Admittedly, it is necessary to perform further experimental validation, including studies of the molecular mechanism and deeper genomic research, to prove the biological impact of TYK2.
Achieving the Goals of Dementia Plans: A Review of Evidence-Informed Implementation Strategies.
A 2019 report by the Canadian Academy of Health Sciences identified the importance of evidence-informed implementation strategies in reforming dementia care. Such implementation strategies may be relevant to changing clinical practice in the wake of Canada's impending federal dementia plan (initiated by Bill C-233). As this federal dementia plan is elaborated, there may be value in looking ahead to some of the implementation challenges likely to be faced “on the ground” in healthcare settings. We thus conducted a rapid review of provincial and national dementia plans from high-income countries and reviewed studies on implementation strategies for dementia care. We advance seven key implementation strategies that may be useful for future dementia care reform.
The Alzheimer Society of Canada (2010) reports that by 2038 over 1.1 million Canadians will have dementia. This represents 2.8% of the total Canadian population, with 9% of Canadians over age 60 and 50% of Canadians over age 90 having dementia (Alzheimer Society of Canada 2010). Ultimately, this prevalence of dementia will lead to a cumulative economic burden of $293 billion per year by 2040 (Alzheimer Society of Canada 2018). In response to rising global dementia rates, the World Health Organization (WHO 2012) has identified dementia as a global health priority. In Canada, this priority has been addressed provincially: beginning with Ontario in 1999 (MOHLTC 1999), provinces have gradually developed plans to address the overwhelming scale, impact and cost of dementia. While provincial stewardship in this arena is logical (Flood and Choudhry 2002), calls for a federal dementia strategy that is complementary to provincial stewardship - involving investment in research, increasing awareness of dementia risk factors and supporting and inspiring local clinicians to improve care practices for dementia - persist (Alzheimer Society of Canada 2018).
Canada's recent passage of Bill C-233, an Act respecting a national strategy for Alzheimer's disease and other dementias, suggests that a federal dementia plan may soon be established. Bill C-233 identified five priorities for dementia care reform: (1) developing national objectives, (2) encouraging investment in research, (3) coordinating with international bodies (e.g., WHO), (4) assisting provinces with the development and dissemination of diagnostic treatment guidelines and best practices for dementia care management, and (5) making recommendations for standards of care. A National Dementia Conference (PHAC 2018) and a report conducted by the Canadian Academy of Health Sciences (CAHS 2019) were organized in response to Bill C-233. Both the conference and report allowed for diverse stakeholders to share perspectives on dementia care and support, research and public education. They also suggested that implementing a dementia strategy is easier said than done. Accordingly, the CAHS recommended that evidence-informed implementation strategies be considered to achieve stated goals of dementia care reform (CAHS 2019). To respond to this final recommendation - and to support the clinic-level objectives identified by Bill C-233 and the National Dementia Conference - a synthesis of existing implementation strategies specifically relevant to dementia care is needed.
In this article, our aims are (1) to highlight why implementation strategies are essential components downstream of any dementia plan, (2) to examine the implementation strategies referenced in dementia plans of peer high-income countries and provinces; and (3) to review and propose evidence-informed implementation strategies that national and provincial governments in Canada may use as they further reform dementia care at the clinical level. To do so, we conducted a rapid review as defined by Tricco et al. (2016), examining provincial and national dementia plans from around the world. In addition, we reviewed studies on implementation strategies that are specific to dementia care reform. Note that while a dementia plan should ideally be broad, including supportive housing, community programs, caregiver support, dementia-friendly cities, transportation and anti-stigma campaigns, this paper will specifically focus on the healthcare delivery system for dementia care.
Why Implementation Strategies Matter
The inclusion of implementation strategies in dementia care reform is important for countries to reap the benefits -improved care and reduced cost -of dementia plans (Milstein and Shortell 2012). Studies have shown that the dissemination of healthcare initiatives is challenging. For example, Damschroder et al. (2009) report that only one-third of healthcare improvement initiatives successfully transition from adoption to sustained implementation across organizations. Even if implementation strategies to change clinical practice are only enacted after high-level policy is negotiated, understanding implementation challenges likely to be faced by healthcare professionals is relevant to the negotiation of funding mechanisms and resource allocation by federal and provincial governments.
Whereas many implementation strategies are applicable to any healthcare policy, specific implementation strategies matter for dementia because of the complex nature of dementia diagnosis, care and affected population. First, dementia is notoriously underdiagnosed in primary care, with between one-half (Bradford et al. 2009) and two-thirds (Valcour et al. 2000) of cases going undetected. The challenges of primary care physicians in diagnosing dementia stem from a lack of confidence (Foley et al. 2017) and/or uncertainty about whether the diagnosis of an incurable disease such as dementia will improve the care or quality of life of a patient (Borson and Chodosh 2014). Second, optimal dementia care requires a wide range of personnel and services, which change as the needs of dementia patients evolve (Borson and Chodosh 2014). Third, patients with dementia suffer from high degrees of comorbidity, with one-third of patients experiencing five or more additional chronic conditions (Mondor et al. 2017). Acute exacerbations of these co-existing diseases often make dementia care too low a priority. Finally, optimal dementia care requires engaging both the patient and their caregiver(s), which is specific to dementia care (Borson and Chodosh 2014).
Shedding Light on the Lack of Implementation Strategies in Published National and Provincial Plans for Dementia
National and provincial plans for dementia have been published in 29 countries and eight Canadian provinces, according to Alzheimer's Disease International (2018). We analyzed the 24 strategies that were written in either English or French (16 countries plus all eight Canadian provinces). These reports generally share a common form: the reports define dementia and describe its prevalence and impact, underscore the purpose for a national or provincial dementia strategy and outline strategic priorities for dementia reform. These priorities typically include (1) increasing awareness and understanding of dementia, (2) promoting timely diagnosis through workforce development; and (3) improving dementia management and care. Of the 24 national and provincial plans for dementia examined, only 12 addressed the implementation strategies for the programs. The plans either introduce implementation strategies throughout the documents (i.e., tying individual strategies to specific objectives) or through explicit "stand-alone" chapters on implementation strategies, typically located towards the conclusion of the documents (Table 1).
More critically, even among the national and provincial plans for dementia that include sections on implementation strategies, very few plans actually articulate strategies for the diffusion or implementation of dementia care reform. They tend to state objectives but not how such objectives will be achieved or measured (e.g., "educating more people earlier about the risks of developing dementia"). The few implementation strategies that have been articulated remain vague. Strategies like "investing in research" (United Kingdom) (United Kingdom Department of Health 2009), "diversifying pedagogical approaches" (France) (Ministère des Affaires sociales, de la Santé et des Droits des femmes 2014) and "involving individuals living with dementia and their caregivers" (Switzerland and Malta) (Office fédéral de la santé publique 2013; Scerri 2014) form inadequate foundations upon which governments can orchestrate targeted and consequential steps towards achieving dementia plan goals.
A Review of Successful Implementation Strategies in Dementia Care
The literature suggests that any implementation of dementia reform, like any innovation, should target both individual adopters (healthcare professionals and informal caregivers) and whole organizations (Greenhalgh et al. 2004). Individual adopters benefit from pragmatic guidelines that target the confidence and expertise of individuals, address their concerns and encourage them to engage with dementia reform over an extended period. Implementation strategies should also be conceived at the organizational level, where integrating reforms with the current organizational context, identifying and valorizing a "champion" of dementia reform and providing additional resources and incentives may facilitate improved dementia care.
Disseminating pragmatic guidelines and training through active, concise and varied formats
Traditional didactic and passive strategies (lecture-style meetings, printed materials and guidelines) are usually ineffective strategies for increasing healthcare professionals' knowledge of dementia and their confidence in managing patients (Aminzadeh et al. 2012; Burgio et al. 2001; Gifford et al. 1999). Healthcare professionals benefit most from problem-based and solution-focused dementia training (Yaffe et al. 2008). Whatever the intervention, strategies that focus on pragmatic benefit and usability should be developed (Aminzadeh et al. 2012). Guidelines must recognize the importance of the patient-caregiver dyad, which is specific to dementia (CAHS 2019). For example, caregivers benefit from specialized training including practice opportunities, personalized feedback and collaboration with practitioners (Chesney et al. 2011; Mazmanian and Davis 2002; Soumerai 1998). Guidelines to healthcare professionals and informal caregivers should be communicated in succinct and synchronized trainings to minimize "guideline fatigue" (Aminzadeh et al. 2012). These guidelines should also include recent recommendations from the Fourth Canadian Consensus Conference on the Diagnosis and Treatment of Dementia (Gauthier et al. 2012). Finally, guidelines should be encompassing of the comorbidity associated with dementia that often compounds physicians' difficulty with diagnosing and providing care for dementia and patients' difficulty with living with the disease while managing other chronic conditions (Borson and Chodosh 2014; Mondor et al. 2017).
Promoting confidence and expertise
Implementation strategies must be designed to target the confidence of healthcare professionals who feel ill-equipped to diagnose and care for dementia in Canada (Aminzadeh et al. 2012). Confident healthcare professionals are more likely to take a keen interest in dementia and dementia care reform and to diagnose dementia in a timely way (Aminzadeh et al. 2012; Moore and Cahill 2012). Confidence and expertise may be self-initiated, but governments can also furnish this capacity by providing funding and resources to train additional staff, such as geriatric nurses, who can collaborate and mentor closely with other clinicians (Aminzadeh et al. 2012).
Addressing concerns of potential adopters
Similarly, many healthcare professionals approach dementia diagnosis and care from a nihilistic perspective (Pentzek et al. 2009). Family physicians are concerned about whether a diagnosis will improve the quality of life of a patient (Borson and Chodosh 2014) and whether dementia care interventions will result in improved care (Black and Fauske 2007; Netting and Williams 1999; Seddon and Robinson 2001). Studies show that when healthcare professionals maintain negative attitudes towards dementia interventions, the interventions are less likely to be adopted (Khanassov et al. 2014). A final unique barrier remains the reluctance of some family physicians to be trained in dementia care by non-physicians (Cameron et al. 2010).
Encouraging adopters to engage with the intervention over an extended period
Interventions take time to implement, and practices take time to change. This is especially true in dementia care, which mobilizes multiple health and social service organizations. Accordingly, benefits of dementia diagnosis and management take time to emerge. Persistence with interventions is thus particularly important in the context of dementia care. When healthcare professionals engage with new dementia programs for longer durations, their adherence to, and confidence in, the interventions increases (Cherry et al. 2004; Gladman et al. 2007; McCrae and Banerjee 2011; Netting and Williams 1999; Van Eijken et al. 2008). Eventually, as outcomes become perceivable, healthcare professionals feel increased self-worth and accomplishment (Grinberg et al. 2008).
Successful Strategies at the Organizational Level: Teamwork and Resources
Integration with current context
Dementia interventions that are implemented in ways that are compatible with the current healthcare structure are more likely to be well-received by healthcare professionals (Khanassov et al. 2014). This can be challenging, since dementia care is often time-consuming, especially for solo practitioners (Hinton et al. 2007). Team-based care, with a clear division of labour, is needed. For example, nurses (referred to as infirmières pivots, "pivot nurses") are particularly suited to conduct cognitive screening, assessment and functional evaluation (Bergman 2009).
Identifying and valorizing a "champion" of dementia reform
As is usually the case for any policy or program implementation, a critical predictor for the successful implementation of a strategy is the presence of a physician or nurse who serves as a "clear champion" for dementia reform (Gifford et al. 1999). This champion, who recognizes the potential benefits of new recommendations, including timely diagnosis of dementia and interdisciplinary management, takes an active role in convincing other colleagues to use the guidelines (Gifford et al. 1999). If the champion is knowledgeable in dementia management, they may also provide support and guidance to peers. Championing dementia reform can be individual- or team-based.
Resources, incentives and culture
Governments must also fund and support dementia-specific resources beyond the clinic: home-based care, community services, transportation, long-term care and assistive devices. Healthcare professionals should be trained to know which of these options or services are available in the region, how efficient and organized these resources are and how to refer patients to them (Yaffe et al. 2008). Governments should also consider personal incentives (such as remuneration and other motivations) and cultural differences (unique perceptions of dementia and caregiving, especially in rural, Northern or immigrant communities) when developing strategies for implementation (Braun and Browne 1998; Khanassov et al. 2014; Martindale-Adams et al. 2017).
Limitations
This rapid review serves as a brief overview of the current state of dementia plans, vis-à-vis implementation strategies, across Canada and other high-income countries. However, our analysis is limited. First, untranslated dementia plans (written in languages other than English or French), or those not available in the public domain, were not examined. Also, this review was limited to national and provincial plans. Grey literature (including future policy enforcement documentation) was not examined. Accordingly, we may have missed more applied guidelines (including implementation strategies) in subsequent years.
Summing Up: Implementation Strategies for Dementia
Even if implementation strategies are not included in national and provincial dementia plans, they will ultimately be relevant to transforming dementia care practice "on the ground." This article advances several dementia-specific implementation strategies that can be leveraged to improve the diagnosis and management of dementia. These strategies should be considered as future dementia plans are translated from policy to action.
Q-Factor Optimization of Modes in Ordered and Disordered Photonic Systems Using Non-Hermitian Perturbation Theory
The quality factor, Q, of photonic resonators permeates most figures of merit in applications that rely on cavity-enhanced light–matter interaction such as all-optical information processing, high-resolution sensing, or ultralow-threshold lasing. As a consequence, large-scale efforts have been devoted to understanding and efficiently computing and optimizing the Q of optical resonators in the design stage. This has generated large know-how on the relation between physical quantities of the cavity, e.g., Q, and controllable parameters, e.g., hole positions, for engineered cavities in gapped photonic crystals. However, such a correspondence is much less intuitive in the case of modes in disordered photonic media, e.g., Anderson-localized modes. Here, we demonstrate that the theoretical framework of quasinormal modes (QNMs), a non-Hermitian perturbation theory for shifting material boundaries, and a finite-element complex eigensolver provide an ideal toolbox for the automated shape optimization of Q of a single photonic mode in both ordered and disordered environments. We benchmark the non-Hermitian perturbation formula and employ it to optimize the Q-factor of a photonic mode relative to the position of vertically etched holes in a dielectric slab for two different settings: first, for the fundamental mode of L3 cavities with various footprints, demonstrating that the approach simultaneously takes in-plane and out-of-plane losses into account and leads to minor modal structure modifications; and second, for an Anderson-localized mode with an initial Q of 200, which evolves into a completely different mode, displaying a threefold reduction in the mode volume, a different overall spatial location, and, notably, a 3-order-of-magnitude increase in Q.
■ INTRODUCTION
The interaction of light and matter in structured optical environments that tailor the local density of optical states is at the core of fields such as cavity electrodynamics, 1−3 nonlinear optics, 4−6 and optomechanics. 7,8 In many of these fields, the use of photonic crystals, their band gaps, and engineered defects within them, such as cavities and waveguides, is widespread. 9 However, the translational order that underpins such synthetic materials is not necessary, and disordered systems can expand the parameter space for several applications due to the plethora of design freedom. Moreover, disordered photonic media made of random distributions of pointlike scatterers with controlled scattering properties have also been shown to block, guide, and tightly confine light. 10−13 In addition, the nontrivial interplay of order and disorder can also drastically reshape light transport, with strong Anderson localization of light as an emblematic example. 14 This has fostered the vision of a vast landscape from order to disorder with engineered disordered systems as a complementary alternative to their fully ordered counterpart. 15 While the mechanisms governing light transport in ordered and disordered environments may differ, their fitness as light–matter interfaces is ultimately determined by their ability to sustain photonic modes with large optical energy densities, i.e., through spectral and spatial light confinement. A paradigmatic way of doing so 16 is via high quality factor, Q, and low mode volume, V, optical cavities, with the latter figure of merit taking a different expression depending on the interaction at hand. 17 Given the generalized role of Q, 18 extensive efforts have been put into improving the designs and top-down nanofabrication. While enhancements of various orders of magnitude in Q can be achieved through intuition-based approaches 19 and radiation-limited Qs as high as 9 million have been demonstrated in optimized two-dimensional photonic-crystal cavities, 20 progress in the case of random photonic systems has been more limited. 21 Such an issue has been addressed at the ensemble-average level by introducing short-range correlations, 22−24 but the Qs of Anderson-localized modes are only on par with engineered cavities in the case of slow-light photonic-crystal waveguides subjected to minute fabrication disorder. 25 On the other hand, the alternative problem of optimizing the Q of a single localized photonic mode in a random system, i.e., to engineer it, has not been tackled. In the more general picture of wave-matter science, while the optimization of ordered systems can be considered unambiguous, engineering and optimizing performances of single realizations of disordered systems is more difficult. Several approaches have tackled this challenge, for example, connecting wave-physics to network science, and succeeded in establishing clear interplays between physical quantities and tunable parameters. 26−28 In the absence of absorption, the Q of a cavity mode is determined by radiation losses at the boundaries of the domain. Due to its compatibility with conventional planar semiconductor technology, the preferred geometry is a dielectric slab: this leads to a heuristic distinction between in-plane and out-of-plane losses, respectively, gauged by Q ∥ and Q ⊥ . The possibility of increasing the former by increasing the footprint in the slab plane has implied that most efforts to maximize Q have been devoted to maximizing Q ⊥ .
This boils down to modifying the momentum-space representation of the resonant modes via either first-principles group symmetry arguments, 29 the direct observation of the smoothness of the field envelope, 30 real-space analysis of the leaky components, 31 or semianalytic formalisms that tackle the problem as a reverse design one. 32,33 However, while they allow a pathway for iterative optimization, these approaches are supervised, and their extension to the case of random modes is not trivial. In parallel, rapid growth of computational resources has helped the development of both gradient-free and gradient-based automated optimization methods such as nature-inspired search algorithms, 34,35 machine learning, 27,36,37 and density-based topology optimization. 38 In particular, gradient-based inverse design, which is transforming the paradigm of high-efficiency component design in nanophotonics, 39 uses adjoint sensitivity analysis to efficiently compute gradients of a wide variety of objective functions. Traditionally used in finite difference and finite element solvers, 40,41 the adjoint method has recently been extended to mode-expansion solvers through automatic differentiation techniques. 42 Among the many desired functional characteristics, these methods have been employed to optimize the Q of a photonic mode. 42 We note, however, that these have rarely relied on directly solving Maxwell's eigenproblem with radiation boundary conditions, 43 where Q emerges as a natural quantity through the complex eigenfrequencies of quasinormal modes (QNMs). 44 Here, we propose a gradient-based automated optimization approach to maximize the Q of optical resonances in ordered and disordered dielectric slabs. The method uses first-order non-Hermitian perturbation theory 45 to efficiently compute the gradients of the Q-factor of a single QNM relative to arbitrary material boundary displacements, i.e., it optimizes the position and shape of material boundaries. First, we exploit the method on L3 cavities surrounded by photonic crystals of different spatial extensions, i.e., of different footprints, and evidence how it naturally optimizes for both Q ⊥ and Q ∥ . Then, we employ it to optimize the Q of an Anderson mode supported by a dielectric slab with a random distribution of etched holes 46 and demonstrate the optimization process to produce a 3-order-of-magnitude enhancement of its Q. By monitoring the spatial distribution of the mode along the optimization, we observe the central location and spatial distribution of the mode to change dramatically, with a final spatial localization comparable to the one achieved in engineered photonic-crystal cavities.
■ Q-FACTOR OPTIMIZATION METHOD
Resonant electromagnetic fields in plasmonic and dielectric resonators are unbounded; this gives rise to, e.g., an exponential decay of the resonating field after an excitation is switched off or to lineshapes of finite linewidth in scattering spectra. From a modeling perspective, these resonances are well described within the theoretical framework of QNMs, which are the solutions to the source-free Maxwell wave equation with a radiation boundary condition. 44,45 The resulting eigenvalue problem admits solutions with complex eigenfrequencies ω̃ n = ω n + iγ n , from which the Q-factor of the n-th mode is found as Q n = ω n /2γ n . As a consequence of the radiation condition, the QNM fields diverge in the far field, which invalidates common energy normalization approaches in Hermitian systems. This is circumvented through alternative normalization approaches that regularize the QNM behavior. 47 In this work, we use the so-called perfectly matched layer (PML) normalization 48
∫ V T [E n (r)·(∂(ωε(r))/∂ω)·E n (r) − H n (r)·(∂(ωμ(r))/∂ω)·H n (r)] dr = 1 (1)
where {E n , H n } is the electromagnetic field of the QNM and the integral is carried out over the volume V T = V ∪ V PML , which includes the volume surrounding the cavity, V, and importantly, the volume V PML occupied by the PML used for the numerical implementation of the radiation condition. In recent years, various QNM expansion techniques have been used to model light-scattering problems 49,50 and light–matter interaction 48,51−53 whenever photonic or plasmonic resonances (or both) are involved. In addition, perturbation theories have been adequately generalized to open resonators using QNMs 54 and their predictions experimentally tested. 55,56 Here, we consider the effect of shifting the boundaries between two materials (labeled 1 and 2). The first-order complex shift to the complex eigenfrequency ω̃ n of a QNM is given by 45,58
Δω̃ n = −(ω̃ n /2) ∮ S [(ε 1 − ε 2 ) E n ∥ (r)·E n ∥ (r) − (ε 1 −1 − ε 2 −1 ) D n ⊥ (r)·D n ⊥ (r)] [s(r)·n(r)] dA (2)
where ε 1 and ε 2 are the permittivities of materials 1 and 2, and E n and D n are the normalized (according to eq 1) complex electric and displacement fields of the QNM, respectively; the superscripts "∥" and "⊥" denote field components, respectively, parallel and perpendicular to the shifted boundary S, the displacement of which is given by s(r) and its normal by n(r) pointing from material 1 to material 2 (see Figure 1a). The expression in eq 2 generalizes the formula in ref 57 to open resonators and has been recently employed to calculate dissipative optomechanical coupling rates, 58 the sensitivity of ultra-low mode volume dielectric bowtie nanocavities, 59 and the effect of surface roughness in plasmonic resonators. 60 Even if the use of the QNM perturbation theory for shape deformations has been proposed to optimize Q, 61 a systematic study evidencing such use is still missing.
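To make the bookkeeping of eq 2 concrete, a minimal numerical sketch is given below: it evaluates the first-order eigenfrequency shift on a discretized boundary mesh and converts a complex eigenfrequency into Q. The array names, the field sampling, and the mesh layout are illustrative assumptions; the actual computation in this work is carried out on the finite-element mesh within COMSOL.

```python
import numpy as np

def quality_factor(omega_tilde):
    """Q from a complex QNM eigenfrequency, Q_n = omega_n / (2 gamma_n)."""
    return omega_tilde.real / (2.0 * abs(omega_tilde.imag))

def delta_omega_boundary(omega_tilde, E_par, D_perp, eps1, eps2, s_dot_n, dA):
    """First-order shift of the complex eigenfrequency (eq 2) for a small
    boundary displacement, evaluated on a discretized surface mesh.
    E_par, D_perp : complex, shape (M, 3); normalized QNM fields sampled on
                    the M surface elements (parallel / perpendicular parts).
    s_dot_n       : shape (M,); displacement projected on the normal n(r),
                    with n pointing from material 1 to material 2.
    dA            : shape (M,); surface-element areas.
    Note the unconjugated products E.E and D.D, as required for QNMs."""
    ee = np.sum(E_par * E_par, axis=-1)      # E_par . E_par (no conjugate)
    dd = np.sum(D_perp * D_perp, axis=-1)    # D_perp . D_perp (no conjugate)
    integrand = (eps1 - eps2) * ee - (1.0 / eps1 - 1.0 / eps2) * dd
    return -0.5 * omega_tilde * np.sum(integrand * s_dot_n * dA)
```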
In this work, we study the photonic modes of dielectric slabs with n vertically etched void features, an example of which is an L3 photonic-crystal cavity, 19 whose geometry is shown in Figure 1b. We validate eq 2 by computing the QNM associated with the fundamental mode (the so-called Y mode) of an L3 cavity as a function of a symmetric and rigid shift s = (S 1x ,0) in the position of the two holes bounding the cavity along its axis. We use a commercial finite-element complex eigensolver (COMSOL Multiphysics 62 ) and reduce the computational size by employing the appropriate boundary conditions for the symmetry of the Y mode. Figure 1c compares the finite-difference numerical derivatives to the result given by the perturbation theory of eq 2 for both the resonant wavelength and the quality factor of the QNM of interest, which show clear quantitative agreement. We observe that, for small values of S 1x , the displacement leads to a red shift, as expected from the increased effective refractive index, and to an increase of Q, as evidenced earlier in ref 19. By mapping out the real (Δω) and imaginary (Δγ) parts of the integrand of eq 2 for a displacement set S = {(S ix , 0)|i ∈ [1, N]} in the unaltered L3 cavity (S 1x = 0 nm), as shown in Figure 1d, it also becomes apparent that most holes around the cavity region produce considerable changes simultaneously to the loss rate γ and the frequency ω, warranting automated optimization of Q with respect to the position of all holes. In the following section, we report on the gradient-descent optimization of photonic cavities, where the objective function is the quality factor Q of a single QNM and where eq 2 is used to estimate the gradients relative to the in-plane position of all holes (see Supplementary Section S1 for details). Although the literature on optimal line search methods is vast, we employ here a simple line search direction along the gradient and a step length set to η∇ S Q/|∇ S Q|, with η chosen to produce sufficiently smooth convergence (see Supplementary Section S2 for a study on the effect of η). We note that no constraints are imposed on the performed optimizations, although inequality constraints to limit wavelength excursions can be readily implemented with the real part of eq 2 and additional constraints might be incorporated with adjoint-based sensitivity analysis.
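A schematic version of this optimization loop, with solve_qnm() and grad_Q_boundary() as hypothetical wrappers around the finite-element eigensolver and the eq 2 gradient (neither is the authors' code), could be organized as follows:

```python
import numpy as np

def optimize_Q(hole_xy, eta=2.0, max_iter=2000, rel_tol=2e-3, window=100):
    """Normalized-gradient ascent on Q over the in-plane hole positions.
    hole_xy: (N, 2) array of hole centers. solve_qnm() and
    grad_Q_boundary() are hypothetical wrappers around the eigensolver and
    the eq 2 perturbation formula (not the authors' implementation)."""
    history = []
    for it in range(max_iter):
        omega_tilde, fields = solve_qnm(hole_xy)          # tracked QNM
        Q = omega_tilde.real / (2.0 * abs(omega_tilde.imag))
        history.append(Q)
        # dQ follows from d(omega_tilde) through Q = omega / (2 gamma):
        # dQ = d(omega)/(2 gamma) - omega d(gamma)/(2 gamma^2), with
        # d(omega_tilde) given per hole by the eq 2 surface integral.
        g = grad_Q_boundary(omega_tilde, fields, hole_xy)  # (N, 2) array
        hole_xy = hole_xy + eta * g / np.linalg.norm(g)    # fixed step eta
        # Stopping rule used in the text: relative change in Q over the
        # last `window` iterations below 0.2%.
        if it >= window and abs(Q - history[it - window]) / Q < rel_tol:
            break
    return hole_xy, history
```

The fixed step length along the normalized gradient trades convergence smoothness for speed, which is why the text monitors a windowed relative change in Q rather than the last iterate alone.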
■ RESULTS AND DISCUSSION
Most previous research on photonic-crystal slab cavities has focused on maximizing Q ⊥ as Q ∥ scales with the size of the etched pattern around the cavity defect, i.e., the number of Bragg mirrors. However, the optimization of Q for a mode in an ungapped system (see the case of a random system later) requires an optimization approach that can simultaneously address Q ⊥ and Q ∥ . To evidence the versatility of the method proposed to optimize for both, we perform a systematic study of the L3 cavity studied in Figure 1 (S 1x = 0 nm) for varying footprints, gauged via the domain radius R (in units of a) within which circular holes are considered. Figure 2a,b summarize the results of the Q-factor optimization for R = 9a, including the evolution with iterations of Q, the loss rate γ, the resonant wavelength, the mode volume (calculated at the center of the cavity 44 ), and the position of the holes (from red to blue in Figure 2b). The Q of the initial unoptimized L3 cavity is considerably limited by out-of-plane radiation as evidenced by the value of Q ∥ , obtained by integrating the radiated power over the slab thickness at the edge of the PML-backed domain, 63 which is much higher than that of Q ⊥ . Therefore, an initial drop in Q ∥ is observed, but both Q ∥ and Q ⊥ grow steadily after 20 iterations, indicating that the optimized configuration naturally accounts for both loss pathways, which for the final configuration in R = 9a are approximately of equal importance. We also observe that the minimum in Q ∥ is accompanied by a maximum in the evolution of the resonant wavelength, for which we observe a 50 nm deviation between the initial resonant wavelength, λ i , and the final one, λ f . On the other hand, V slightly increases, but the 2-order-of-magnitude improvement in Q largely overcomes that uncontrolled increase in V in terms of the achieved Purcell factor. The evolution of the circular-hole positions with iterations evidences that while the optimization displaces the holes bounding the defect, i.e., those considered in previous attempts to optimize the Q of this mode, 34 the position of all holes along and around the 30.7° diagonal and up to the PML evolves during optimization. Such a direction nearly corresponds to that with the largest Bragg length in triangular-lattice photonic crystals with circular holes, indicating that in-plane losses are, by construction, integral to the automated optimization strategy presented here.
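The Q ∥ diagnostic used above amounts to a ratio of stored energy to in-plane radiated power. A hedged numpy sketch of that bookkeeping, with all inputs (fields sampled on a lateral surface spanning the slab thickness, the stored energy W) assumed precomputed on the mesh, is:

```python
import numpy as np

def q_parallel(omega, W, E_side, H_side, normals, dA):
    """In-plane quality factor Q_par = omega * W / P_par, with P_par the
    time-averaged Poynting flux through a lateral surface spanning the
    slab thickness at the domain edge. W is the stored electromagnetic
    energy; E_side, H_side (complex, shape (M, 3)) are fields sampled on
    the M surface elements. All inputs are illustrative assumptions."""
    S = 0.5 * np.real(np.cross(E_side, np.conj(H_side)))    # Poynting vector
    P_par = np.sum(np.einsum("ij,ij->i", S, normals) * dA)  # outward flux
    return omega * W / P_par
```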
We optimize the L3 cavities with different R using the same finite-element mesh sizes and fixed optimizer parameters, i.e., η = 2, with the stopping criterion being the point at which the relative variation between the Q of the running iteration and the Q 100 iterations before is less than 0.2%. Such a stopping criterion is used to account for the noisy nature of the evolution of Q as the number of iterations becomes large, which stems from the large value of η (see Supplementary Section S2). The effect of domain size on the optimized quality factor, Q f , which is shown in the top panel of Figure 2c for values of R varying from 6a to 12a, is pronounced. The transition from geometries limited by in-plane losses to those limited by out-of-plane losses is clear from an evaluation of the ratio Q ∥,f /Q ∥,i . Specifically, the large ratios for small R indicate that the dominating source of losses is in-plane losses, while the drop to 1 for large values of R indicates that the spatial extent of the photonic-crystal cladding already provides enough in-plane loss suppression and therefore the optimization is, in practice, optimizing Q ⊥ . As a consequence, this leads to only minor modifications around the defect for large R and produces only a small wavelength blue shift, as shown in the bottom panel of Figure 2c, where the final wavelength λ f and mode volume V f (solid-dotted lines) are compared to their initial values (dashed lines) for every value of R. We observe an increasing blueshift of λ f relative to λ i for decreasing R. We also report in Figure 2d the spatial profiles of the y-component of the electric field E y of the optimized modes in the plane z = 0 as well as the position of the hole boundaries. While the final configuration of the holes can deviate considerably from the initial one, e.g., R = 6a or R = 8a, the modal structure is preserved regardless of R. This stems from the fact that the boundary conditions determine field orientations on the symmetry axis and that the single QNM tracked is well-isolated spectrally and spatially.
On the contrary, random systems typically exhibit a large spatial and spectral density of (localized) modes in a given physical domain, which, for example, is used to alleviate issues in spectro-spatial matching to solid-state light emitters. 21 Therefore, the implications of using the QNM perturbation theory to optimize the Q of a single QNM in a disordered system are far from obvious and can eventually lead to a strong variation in the mode structure, including its spatial location and confinement level, as we demonstrate here. We apply the optimization method to an Anderson mode supported by a gallium arsenide slab (n GaAs = 3.46) of thickness d = 180 nm, size 36 μm 2 , and including N = 260 etched holes of radius R = 110 nm (see Supplementary Section S3 for details on the distribution of the position of the holes, e.g., the structure factor). The particular QNM we optimize, whose electric field intensity distribution is reported in the first map of Figure 3a, has an initial Q of 200, λ = 1273 nm and mode volume V = 1.22(λ/n GaAs ) 3 , and is selected among the many other modes supported by the structure because it is spatially isolated from the rest and it is the highest Q in a close spectral neighborhood (see Supplementary Section 4 for visualization of other QNMs). The latter facilitates tracking of the QNM of interest as iterations evolve. The optimization process is run for 5000 iterations, and the evolution of Q, the resonant wavelength, and the hole positions are summarized in Figure 3b,c. The value of Q grows at a rather (average) constant pace and reaches Q = 10 5 after 5000 iterations, which constitutes, to the best of our knowledge, the highest Q reported in a purely random system on a slab. We note that the steady increase in Q is also accompanied by considerable fluctuations, which originate because of a too large choice for η (η = 5) (see Supplementary Section 2). Fluctuations are also observed for the resonant wavelength of the mode although no significant drift is observed in this case. We attribute this to the random nature of the design that allows the holes to shift in any direction in the plane. Interestingly, monitoring how the spatial profile of the mode evolves as the Q-factor increases evidences that the mode location and spread evolve and therefore that the initial QNM chosen should be considered just as a seed for the optimization, contrary to the L3 cavity case. The three panels of Figure 3a highlight two specific configurations in addition to the initial one, corresponding, respectively, to Q = 2000 and Q = 10 5 . The middle configuration is chosen to highlight that the final one, despite the dramatic change in the spatial profile, is linked to the initial one, since the intermediate-case profile still preserves a tail corresponding to the original hotspot. The final optimized mode is located in a completely different position, and by tracking also the evolution of the mode volume V (light-blue dots in Figure 3b), we observe that it exhibits a much tighter localization (V = 0.4(λ/n s ) 3 ), leading to a Q/V = 5 × 10 6 μm −3 . This corresponds to an increase of the Purcell factor from 12 to 18 600, a final value typical of the best photonic-crystal cavities. 
19 Interestingly, the optimized configuration exhibits peculiar properties of both order and disorder; despite the uncorrelated disorder environment, a high-Q Anderson mode with a tight spatial localization (typical of point defects in a perfect photonic order) is displayed, in a system with a high spectral density of modes (typical of random photonic patterns). In order to numerically test the general validity of the optimization approach for random media, we apply the method to different initial QNMs of the same disordered system and to a photonic mode supported by a photonic crystal with a certain degree of disorder, i.e., based on a quasiordered distribution of holes. The results are shown in Supplementary Sections 5 and 6.
We evaluate the in-plane losses in the initial and optimized configuration and report that Q ∥ increases from Q ∥ = 1.3 × 10 5 (unperturbed mode) until it reaches a value of Q ∥ = 2.8 × 10 5 . This, similar to the case in Figure 2, demonstrates that in the initial configuration Q is strongly limited by the out-of-plane losses, which are then optimized at the end of the process, for which Q ∥ ∼ Q. To understand the outcome of the optimization not only in terms of the single QNM we optimize but also in
terms of the local density of optical states in the frequency range around it, we investigate the spectral response of the system in the presence of a single point-like electric dipole. To do this, we employ commercial finite-difference time-domain (FDTD) software (Lumerical 64 ) and use a spectrally broad (δλ = 200 nm and pulse length 7.28 fs) electric dipole located at the brightest spot of the explored QNM, as highlighted for initial and final configurations with a green cross in the zoomed-in field maps of Figure 3d. We report the spectrum of the structure for both the initial and final configurations in Figure 3d. The FDTD method confirms the stability of the mode central wavelength during the optimization process and the increase of the total Q by 3 orders of magnitude. Interestingly, the high density of modes typical of random systems prevails after the optimization as can be deduced from the presence of many other less prominent peaks in the emission spectrum. This evidences that the optimization of Q does not occur through the formation of a band gap as it is achieved in other disordered systems. 22−24 This is further corroborated by the presence, in the final configuration, of other QNMs in spatial and spectral proximity (see Supplementary Figures S4 and S5) and by the very limited change to the hole statistics (see Supplementary Figure S3). The possibility of achieving a Q/V comparable to photonic-crystal cavities while preserving the high density of modes in a small spatial footprint might pave the way to the engineering of multiple Anderson modes in the same structure once the appropriate constraints are provided.
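Extracting Q from such a dipole-probed spectrum amounts to fitting the linewidth of a resonance peak. The following self-contained sketch does this on synthetic data shaped like the initial mode (λ ≈ 1273 nm, Q ≈ 200); it illustrates the principle and is not the authors' Lumerical post-processing.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, lam0, fwhm, amp, offset):
    """Lorentzian lineshape as a function of wavelength."""
    return amp * (fwhm / 2) ** 2 / ((lam - lam0) ** 2 + (fwhm / 2) ** 2) + offset

# Synthetic stand-in for the dipole-probed spectrum around the initial
# mode: lambda0 = 1273 nm and Q = 200 imply FWHM = lambda0/Q ~ 6.4 nm.
rng = np.random.default_rng(2)
lam = np.linspace(1250e-9, 1296e-9, 600)
spec = (lorentzian(lam, 1273e-9, 1273e-9 / 200, 1.0, 0.02)
        + 0.01 * rng.normal(size=lam.size))

# Fit the peak and read off Q as center wavelength over linewidth.
popt, _ = curve_fit(lorentzian, lam, spec, p0=[1272e-9, 5e-9, 1.0, 0.0])
lam0, fwhm = popt[0], popt[1]
print(f"Q ~ {lam0 / fwhm:.0f}")
```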
■ CONCLUSIONS
In conclusion, we have proposed a gradient-based automated shape optimization approach to maximize the quality factor Q of optical resonances. The method, which employs first-order non-Hermitian quasinormal mode (QNM) perturbation theory for shape deformations, allows the efficient computation of the gradients of Q relative to small material boundary displacements without the need for solving additional (non)linear algebraic systems. Due to the free-form and boundary-conformal meshes employed in finite-element method simulations, the additional calculations are also trivial, making the actual calculation of the QNMs the only time- and memory-consuming step. Although the cases considered here are limited to hole displacements in dispersion-less and absorption-less dielectrics, the approach naturally extends to absorptive media 44,58 and arbitrary (down to the mesh size) boundary deformations. We benchmarked our method with the optimization of cavity modes in dielectric slabs with either ordered or disordered patterns of scatterers. By simulating a standard L3 photonic-crystal cavity, we demonstrated that the approach can simultaneously take into account in-plane and out-of-plane losses and therefore truly optimize Q for a given domain size, circumventing issues found in other methods based on mode-expansion techniques. 34 Such optimized low-footprint cavities may play a prominent role in applications where compactness determines functionality, such as spatial light modulators 65 or electrically driven nanolasers, 66 and enable optical interconnects for on-chip electronic−photonic integration, where size discrepancy has slowed down developments. 67 While single QNM perturbation theories are more intuitively suited to systems with well-isolated QNMs, the method is also successfully employed on a random system with a large density of optical modes around the targeted initial QNM. We optimize the Q of an Anderson-localized mode, for which we obtain an increase of 3 orders of magnitude. The optimized mode also exhibits a decrease of the mode volume and an unchanged resonant wavelength, leading to a Q/V of 5 × 10 6 μm −3 , on par with photonic-crystal cavities. 19 Our result might be relevant for the employment of random structures for lasing 68−70 and sensing 71 applications but also for the basic physical insights it can provide on light confinement in random systems. We foresee that the optimization approach in a random system of larger size might unveil novel features of engineered disordered systems such as hole structural correlations that are yet unexplored.
■ ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsphotonics.3c00510. Details on the simulation models, the role of the gradient-descent η parameter, the statistical properties of the hole positions in the random systems, the QNMs of the initial and optimized random systems, and results of the optimization for different QNMs (PDF)

Funding: European Union's Horizon 2021 research and innovation program under the Marie Skłodowska-Curie Action (Grant No. 101067606 - TOPEX).
A Therapeutic Education Program for patients who underwent temporary tracheotomy and total laryngectomy: leading to an improved "Diagnostic, Therapeutic and Assistance Path"
Background and aim of the study: Therapeutic education helps patients with a chronic disease acquire and maintain the ability to live their life while handling their illness. Patients with a temporary medium-term tracheotomy or a permanent tracheostomy need to acquire skills to handle the stoma, the tracheal tube, related issues, and other apparatuses. This was the purpose of our therapeutic education program, which aimed to bring patients and caregivers to an efficient level of self-care. Methods: In 2018, a CME-accredited (Continuing Medical Education) "Workplace-based Learning Project" was created, involving all the nurses of the Otolaryngology Head and Neck Operational Unit along with different specialists of the Disease Management Team, thereby forming an "Improvement Group". We established parallel workgroups for bibliographic research on databases such as PubMed, CINAHL, Cochrane, and Google Scholar, in order to obtain the information needed to write a shared document. Results: We wrote an Operational Protocol which aligned nursing skills – when handling patients with a medium-term tracheotomy or a tracheostomy – with scientific evidence. Our standard educational plan (customizable, based on each patient's characteristics) promotes the patient's learning with respect to self-care. Conclusions: This project has set the basis for improving the quality of assistance given to patients and the therapeutic education provided to them. It has encouraged the development of the skills of the nurses involved, along with their motivation and their integration in the Disease Management Team. However, the effectiveness of the program in terms of self-care will need to be further evaluated in the future. (www.actabiomedica.it)
Introduction
At this moment in time, health services are experiencing increasing demands for assistance. This reflects the fact that people are living longer, often to advanced old age, accompanied by chronic health problems (1,2). This evolution led to reflection on the various aspects of nursing and care-taking entities with respect to the professional skills which enable them to respond in an appropriate manner to the needs of their citizens (2).
Due to the above, attention now shifts from the hospital, which handles acute cases, to locations which deal with chronic stages. The medium- and long-term problems found here create the need for an integrated care pathway which connects them (2). In the intensive care hospitals, the acute phases are treated; they are a resource to be used only when strictly necessary. This is where the patient, with his/her specific health problem, is taken in charge by an integrated multidisciplinary team to face his/her specific pathology.
With this model, a different response mode comes to the fore. Appropriate technologies and skills provided by the appropriate quantity and quality of personnel are assigned in differing degrees to clinical instability and its accompanying complexity in terms of assistance. This combination gives the patient the most appropriate and timely of responses (2)(3)(4).
In this setting, a Diagnostic-Therapeutic-Assistance Path standardizes processes using scientific evidence. It is designed to ensure professional integration and coordination, guaranteeing adequate and equal clinical outcomes, even though it does not depend wholly on professionals. Each professional, with his/her own specific competences, contributes to achieving the patient's goals, which have been identified in a shared manner (1)(2)(3)(4)(5).
Thus, it becomes essential to involve the patient in the decisions that concern him/her (1,2,5,6). Carrying this concept further, investment must be made in the patient's therapeutic education. This then becomes part of the process which helps sick people acquire and maintain their ability to best conduct their own lives while living with the illness itself. This, in turn, enhances the effect of other therapeutic measures (5,7).
The patient's family is also encouraged to participate. The context of the patient's lifestyle and experiences is also considered. And content is designed to stimulate learning how to promote empowerment and efficient self-care especially when dealing with chronic illness (5). This outcome is central to nursing assistance, and it encompasses the other goals (8).
In this manner, therapeutic education becomes a fundamental process in a structured health context. The multidisciplinary team, on a clinical assistance path with case/care management, places the patient (and his/her specific condition) at the center, while the related outcomes to pursue are studied (5). This involves the entire professional team. They interact in a focused way to guarantee coordinated and timely assistance, thereby increasing patient satisfaction and the effectiveness of the services (5).
The Middle-Range Theory of Self-Care in Chronic Pathologies analyzes the characteristics and the factors necessary to make self-care efficient. The goal is to enable the patient to (9):
• Better understand his/her illness, treatments, and complications
• Handle his/her new condition in a competent way, having been given information and know-how
• Avoid complications by reforming existing behavior modes
Scientific literature has shown that training that employs active involvement can produce better results in terms of learning and provide positive practical effects. For this to occur, three essential elements are involved: concrete problems to resolve, interactivity, and direct involvement in favorably organized contexts (10). In optimal situations, one's own work context provides both training needs and satisfaction (10).
To support this, in 2003 the CME (Continuing Medical Education) National Commission introduced what was called "Workplace-based Learning". This was a new mode that totally integrated the work environment and clinical-assistance procedures. As a result, the added value was actually determined by a motivational push which led professionals to carry out individual or group investigations/research, finding solutions for concrete problems (10).
"Workplace-based Learning" emerged from an intentional and well-organized search for solutions to real problems. And since its origins lie in real problems, monitored over time, it is clear that the evaluation of organizational change is as decisive as the learning process. And therefore, project methodology must be rigorous enough to guarantee the quality of the results while maintaining the right flexibility for the context (10).
One way to carry out this type of training uses "The Improvement Group". This was created to show the concept of change and multidisciplinary training in one's own work environment (10).
"Improvement groups use multi-professional and multidisciplinary activities in the workplace to promote health; the continual improvement of clinical assistance, management, or organizational processes; and the consequent accreditation or certification of the health structure involved. Here the learning process A. Spito, B. Cavaliere 40 occurs through the integration of a group of equals." (10,11) Improvement groups give operators responsibility for their own training (self directed learning), thereby encouraging colleagues to reflect upon their own work. Exchanges and reciprocal learning are promoted by sharing. Though always retaining methodological rigor, this mode encourages the possibility of incurring changes in the overlying organization. It enables it to meet the needs of the professionals involved, thereby encouraging their participation (10).
The process used to design, implement, and carry out an improvement group is divided into the following phases (10):
1. Once a problem is identified, along with the corresponding aspects that need to be faced, the head of the project writes out a program which identifies goals, participants, work phases and their duration, and a way to evaluate the project's success. He then proposes it to the training service.
2. This document is evaluated by a special multidisciplinary and multi-professional committee consisting of health professionals, who guarantee its appropriateness.
3. Once modified or approved by the committee, the project may start. The head of the project must guide the work done by the participants, assign responsibility, and work on making the project transferrable, watching out for the effects. In addition, the head formulates the final report, giving extra credit to those whose efforts stood out.
Training of this type -fine-tuned starting from 2005 by the Training Service of the Azienda Sanitaria Provinciale in the Province of Trento -was then initiated in 2017 at the San Martino Polyclinic Hospital in Genoa. Its usefulness was revealed as a way to focus on skills, performance, and health successes regarding the patient (10).
Method
In agreement with the considerations described above, our project was designed for the Otolaryngology Head and Neck Unit of the San Martino Polyclinic Hospital (Genoa, Italy). It used workplace-based learning carried out by an improvement group, and it was conducted by the nurse responsible for the Diagnostic-Therapeutic-Assistance Path for oncologic head and neck disease, who worked with the other members of the multidisciplinary team.
Patients who undergo open surgery, which creates medium-term temporary tracheotomies or permanent tracheostomies, need to be provided with specific skills which allow them to handle -on their own -the stoma, the tracheal tube, and all that which is affected by this and other devices, to guarantee air passage in their airways and prevent the onset of complications.
A customized therapeutic education program was set up to take patients and their related caregivers to the point of efficient self-care. Up till now, there had been no clear operational methodology defining the contents, instruments, methods, times, places, actions, and roles involved, and this lack of standardization also made it difficult to track and record the outcomes. Confirming this, both the day clinic nurses who participated in check-ups and follow-ups after the patient was released, and the speech therapists who then worked with this type of patient, noted that the patients lacked the ability for self-care and needed further explanations.
The most critical phase has been identified as that immediately after discharge. This is the point where patients leave a protected environment -inside of which they are safe and receive answers to all their needs -to reenter a context in which they must measure themselves against their own ability for self-care and management. And yet, these abilities are consolidated only by continual practice and the gradual acquisition of experience.
As a result, the therapeutic education carried out in the recovery phase, needs to be as complete and efficient as possible. It must focus in particular on the prevention of more serious complications (i.e. airway obstruction, infections, and hemorrhages).
By planning early and providing follow-up meetings with discharged patients, we have found that this can help them and their caregivers to acquire additional skills and greater confidence. This in turn allows the health professionals to monitor their learning process and intervene in an appropriate manner where necessary. Patients need to be able to take care of their daily needs, carrying out the main procedures in a safe and efficient way. Additional education can then be given on an outpatient basis, and its evolution can be monitored over time. The educational program that we designed serves to reach this goal.
But there is a difference between the training to be given to a patient who underwent a temporary tracheotomy and a patient with a tracheostomy after a total laryngectomy. This is due to the fact that while a tracheotomy is temporary, the latter permanently modifies the upper airways. Thus, in the first case, autonomous management is more limited: professional experts intervene in the execution of some risky maneuvers, done in the hospital. The opposite holds true with regard to total laryngectomy. In this case, the patient learns to handle and live with it, becoming the main subject of the therapeutic education program of self-care.
Our goal was to draw up an educational project using this scenario, an intermediate phase with respect to the whole process. Only after having done so would we create a data bank to verify the efficiency of the training program provided.
The project was divided into five phases. The first phase directly involved the coordinator of the Otolaryngology Head and Neck Unit of the San Martino Polyclinic Hospital in Genoa Italy and her promoter. It provided a detailed educational plan which was shared with the Chair of the Operational Unit and with the Director of the Health Professions Operational Unit.
The project was presented in December 2017 to the Scientific Committee through the Simple Departmental Structure for Training and Communication to enable a training program that would be accredited by Continuing Medical Education, carried out in accordance with its "Workplace-based Learning" methodology. Approval was obtained in February 2018.
In March, the head (and coordinator) of the project, along with a representative of the nursing group, attended a course designed to supply methodological support for the development and implementation of this type of program. This involved teaching experts and support from a distance. Once the training was completed, the project was begun.
A work team was set up as an "Improvement Group". It consisted of 25 nurses and 13 DMT (Disease Management Team) specialists for oncologic pathologies of the cervico-facial district (two otolaryngologists, a radiotherapist, an oncologist, two psychologists, a physiatrist, a physiotherapist, two speech therapists, two dieticians, and a health assistant), actively working as experts. An infection control nurse was also involved.
Though well aware that the number of participants was rather high, it was decided to involve the whole nursing group - the principal students of the training course addressed by the proposed project - in order to permit the project to become a source of motivation, producing consolidation and development of skills through the training.
In this specific context it is, in particular, the nurse who works the most to activate the educational program with the patient and/or caregiver.
Seeking our goal in self-care terms, we felt that it was necessary to work on updating nursing skills with regard to recent scientific research when dealing with patients with tracheotomies or tracheostomies. We also felt it necessary to plan and extend the program of therapeutic education by specifying its subjects, methodology, and tools. A major role was also played by the dieticians and speech therapists, working more independently. The time required for each of these, including nursing, was 25 hours in total.
The second phase was begun in April 2018. It established three secondary groups of nurses, working in parallel, who researched:
1. Scientific evidence regarding the handling of and the necessary devices for tracheotomy-tracheostomy.
2. Therapeutic education concepts and related methods, along with the identification of tools for recording the educational actions carried out and an evaluation of their effectiveness in teaching the patient/caregiver, the concept of self-care, and the quality of life of patients who have undergone tracheostomy.
3. Brochures and booklets providing information for laryngectomized patients, in order to write a specific booklet to be given to the user and/or caregiver at the start of the learning period.
These groups worked independently, though under the guidance and supervision of the head and coordinator of the project. Some work was done at home, and then discussed with the group.
As for point 1, the inclusion criteria were: documents in both English and Italian, covering the years 2008 to 2018, all dealing with adult patients. Ten guidelines were analyzed, along with 2 operational protocols, 1 policy, 1 procedure, 7 articles, and 3 manuals. We did not include publications which did not provide detailed procedures for the handling of tracheotomy-tracheostomy or those which did not seem pertinent (Table 1, Figure 1).
As for point 2, the inclusion criteria were: publications, in both English and Italian, dating from 1998 to 2018, that discuss therapeutic education, self-care in general, and tracheostomized patients and their quality of life.
We consulted 5 books, 1 guideline, 3 operational protocols, 18 articles, and 1 training program for educators. We excluded documents describing educational methodologies that were not readily applicable in our specific context of reference, those that were too generalized, and texts that did not focus directly on patient learning (Table 2, Figure 2). As for point 3, just a few booklets were consulted, without doing an in-depth search, because we had decided to write our own. Our booklet would discuss basic subjects and supply useful information that our patients, having undergone a total laryngectomy, would need.
The product of the work groups' research was shared on many occasions. On the basis of the data that emerged, we made decisions regarding the next phases to be taken, until a final document was approved by all.
The third phase began in May 2018. It drew up an operational protocol which included both the skilled procedures for nurses handling a tracheotomy-tracheostomy and the therapeutic education program for the patient or his/her caregiver in the recovery phase. A booklet for laryngectomized patients was also written, to be provided to them at the start of the training program. These tools were the specific goal of the training project and will be described in detail in the results.
The experts involved in this phase were various specialists from the Disease Management Team. They were chosen on the basis of their subject and specific competences, and greatly helped us write the information material given to the patient. Each specialist contributed with his/her specialty. The final documents were then shared among the operators.
The fourth phase was begun in September 2018. In this phase, both the Chair of the Otolaryngology Head and Neck Unit and the Director of the Health Professions Unit checked and approved the operational protocol and the information booklet. Publication on the official website was requested after an evaluation carried out by the Clinical Risk Management Unit, Quality Accreditation, and Public Relations Office.
The fifth phase, started in January 2019, is still ongoing and involves the application of the operational protocol created.
Results
The guidelines and the operational protocols that we chose were considered to be specifically instituted for tracheotomy-tracheostomy, and provided detailed information for the various procedures. These documents were carefully evaluated, and from them our operational protocol, described below, was extracted.
With regards to the theme of therapeutic education and self-care, we selected documents which provided definitions, methods, tools, and variables which could affect results. We also considered articles related to the quality of life of patients with tracheostomy. Our research found only two instruments to register the occurrence of educational interventions, and none related to the traceability of the self-care level evaluation.
Four instruments were found to evaluate the skills of health service operators for the various procedures to assist patients with tracheostomies. One evaluated the patient upon his/her arrival in the hospital with a checklist of the procedures and evaluations to be carried out. Five forms recorded the evaluation of the patient's condition and/or the presence of apparatus. And one was a checklist of the material and equipment available to the patient upon release from the hospital. But we did not analyze them, as they were not on topic with regard to our project. An operational protocol was created, called "Assistance and Therapeutic Education for the Laryngectomized or Tracheotomized Patient Belonging to the Diagnostic-Therapeutic-Assistance Path for Oncologic Head Neck Disease". Its goal was to align nursing skills with recent scientific evidence - while also standardizing behavior inside the group - and to set up a standard educational plan, customized for each patient/caregiver, to promote his/her self-care.
The document was divided into two parts. It also had two attachments which were equivalent to self-standing documents:
• PART I - A detailed description was given of all the procedures that nurses carry out - completely on their own, or in the presence of an otolaryngologist - related to the handling of a medium-term tracheotomy or a tracheostomy, the tracheal tube, and any other apparatuses, devices, or dispositions designed to guarantee airway patency and prevent the onset of complications. The second part of the operational protocol is based on this.
• PART II - After the concepts of therapeutic education and self-care were introduced, a description of the methodology regarding both the patient and the caregiver was presented. Next, the subjects were described in detail: the specific goals, contents, methods, roles, instruments employed, and the modes used to evaluate the patient's understanding. A distinction was made between the patient who has undergone a total laryngectomy and the patient with a temporary tracheotomy, as some of the procedures can be done autonomously in the former case, but not in the latter. Evaluation of the patient's learning is a rather difficult process. It is necessary to use validated and objective methods and instruments addressed to the patient and/or the caregiver. The former is totally or partially unable to express himself/herself in words during the days that follow the surgery. And so, we decided to verify the patient's understanding of our explanation of procedure techniques by simply observing the patient/caregiver, using the procedures summarized in this part of the protocol. But the evaluation of the patient's ability to handle this new condition - namely the level of self-care reached - is much more complex, and therefore a more appropriate instrument should be used in the future (12).
• ATTACHMENT I - "Form used to record tracheostomized patient education". We created a form which records the training given to patients and/or caregivers. It is intended to highlight and record the subjects explained, along with the extent to which they are able to understand what is proposed to them. We used as a reference one of the two forms that we had found during our bibliographical research, because it conformed with our needs and our context (13). The top of the form provides space for the user's personal data (that of the patient or the caregiver). Next comes the identification of the subjects dealt with by the nurses. For each of these, the date of the training session is to be entered, along with the name of the operator responsible for it, data regarding the extent of the patient's learning and comprehension, and whether the set goal has been achieved or not.
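To make the structure of this traceability form concrete, the sketch below models its fields as simple records. It is a minimal illustration in Python; every field, class, and method name is our own assumption rather than the wording printed on the actual form.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class EducationSession:
    """One training session on a single subject, as recorded on the form."""
    session_date: date
    operator: str                  # nurse responsible for the session
    comprehension: str             # extent of learning/comprehension observed
    goal_achieved: Optional[bool]  # None while the goal is still being worked on

@dataclass
class EducationRecord:
    """Traceability record for one patient or caregiver on one subject."""
    person: str                    # patient or caregiver personal data
    subject: str                   # e.g. "tracheal tube handling" (illustrative)
    sessions: List[EducationSession] = field(default_factory=list)

    def goal_reached(self) -> bool:
        # Several observations may be recorded on the same subject;
        # the goal counts as reached once any session confirms it.
        return any(s.goal_achieved for s in self.sessions)
```

This mirrors the paper's point that the plan is customizable: the same subject can accumulate several dated observations before the goal is marked as achieved.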
Given the importance of customizing the training plan and the need to give the patient and/or caregiver time to assimilate the information provided and the procedures explained, several observations can be recorded on the same subject. Space is not included for observations related to speech therapists and dietitians, as these are to be recorded in a counseling report. The second attachment is the information booklet for laryngectomized patients, to be given to the patient or caregiver at the start of his/her education. It reinforces the training and acts as a useful reference upon the patient's return home. It supplies general and procedural information, but also helps the patient and family members become familiar with care and assistance upon discharge. The subjects dealt with during this educational phase are summarized in the booklet, designed for consultation when in need. The subjects are as follows: 1. Anatomy and physiology of the upper aero-digestive tract. Once the documentation was produced and approved by both the Chair of the Operational Unit and by the Director of the Health Professions Operational Unit, we proceeded to make it official on the hospital website and moved on to the implementation of the educational program that we had created.
These last stages also concluded the related training project. The documents produced and the final results were sent to the Simple Departmental Structure for Training and Communication. This led to Continuing Medical Education accreditation, along with recognition given to all the participants.
Conclusion
This project combined training, research, and organization with assistance for the patient. It provides food for thought as a starting point for improving normal procedures. By procedures we mean both the handling of tracheotomies-tracheostomies and the implementation of therapeutic education for the patient/ caregiver.
This subject demands the confirmation that one can and should change one's own behavior. "Workplace-based Learning" can provide this. Through this methodology the nurses of the "Improvement Group" have increased their knowledge of subjects of daily interest, aligning their skills. Moreover, it gave them the opportunity to take part in a training exercise organized "ad hoc", with the additional intention of minimizing personal discomfort while still guaranteeing preset objectives within set times.
Only too often, nurses who wished to participate in training courses related to their specialty found it difficult to attend - due to family reasons and personal inhibitions deriving from the need to respect the rules inherent in work schedules.
Even the other specialists showed themselves to be favorable and available to undertake this route of improvement. Their involvement also encouraged the integration of the multidisciplinary and multi-professional team, creating yet more added value and giving birth to valuable collaborations.
Surely this project has shown one of the many contributions that the nurse can make during the patient's pathway. The nurse's work thereby becomes more visible. This is a starting point which should lead to active participation in Disease Management Team activities, also through the establishment of a Case/Care Manager.
Discussion
This project stimulated reflection on some themes and proposals, such as:
• Adding at least one indicator of the nursing process to the budget
• Organizing training courses for all those who carry out therapeutic educational activities in their own work contexts, to help them acquire adequate skills
• Organizing Continuing Medical Education accredited seminars for external health operators, with the goal of optimizing the handling of this type of patient in a home environment, and also encouraging integration with the hospital
• Instituting an outpatient nursing clinic dedicated to continuing therapeutic education after the patient's discharge from the hospital, monitoring his/her learning until the patient/caregiver is completely autonomous
• Formally setting up a group of nurses who can provide their colleagues in other operational units with advice regarding tracheostomy patients, upon request
• Further research into appropriate instruments to evaluate patient self-care
In a health context which deals, to an ever increasing degree, with chronic illness, it is essential to be able to face patients' various needs for assistance. It becomes ever more important to be able to recognize and evaluate the contribution of each single professional, the skills that they have, and their personal ability to develop and apply them.
To do so, training, an organization able to support the various processes, and the motivation of professionals to bring out their value are all necessary. Conditions must also be created to help them work at their best, guaranteeing positive results for the users (4,14,15).
For some time now, researchers have been occupied with outcomes, evaluating the effect of nursing activities on patients. Successes have been identified that relate directly to this. They focus on all the tools which allow them to be measured and monitored, and on all the factors which influence their creation (8,14,15).
In addition, scientific literature shows that quality assistance is reached in settings in which there is a high degree of satisfaction both on the part of the patient and on the part of the doctors and nurses (14,15).
Nurses have a greater impact when they are given charge of the patient. They need to feel they have more autonomy and control over practices; that they can influence decisions, participate in the logic behind them and the handling of priorities, even while fully participating in organizational choices. Coordinators and managers at various levels need to take action to make this occur. And the training project we have initiated intends to work precisely on all these aspects.
We set the promotion of Self-Care as our overall goal, as the main palpable outcome of nursing a patient afflicted by a chronic illness. We managed to act on both the quality of assistance given to the patients with medium-term tracheotomy or laryngectomy and on the skills of the nurses involved, along with their motivation and multidisciplinary and multi-professional integration on the Disease Management Team.
"Workplace-based Learning" methodology has surely shown itself as added value. It has indirectly allowed for the achievement of organizational-related goals in addition to ones regarding training. And so, to continue to improve quality, it is not just a point of arrival, but also a point of departure towards a further implementation of the culture of taking charge of a patient. We feel that the real difficulty was not in the making of this project, in spite of its complexity, but will be in the guaranteeing of the constant application of the operational protocol which we have created. Many different critical points are present in the organizational context. They must be taken into consideration in order to produce real change. This study should not remain a work unto itself.
Furthermore, though training shows its efficiency by enabling learning - seen as a process which leads to a change in the learner's way of thinking, feeling, and acting (16) - we feel that it is important to identify an appropriate mode for evaluating the skills acquired by health operators at the end of this path. These skills, in turn, strongly affect the quality of the assistance given and its outcomes.
Limits
• The modes used to evaluate how the patient or caregiver learns to deal with a new psycho-physical condition after surgery could be improved by using more appropriate tools.
• The large size of the work group created the need for a great deal of time and coordination.
• Five of the nurses (20%) were not able to finish the training process for personal reasons.
• The bibliographic searches carried out were not systematic reviews of the literature.
Persistent Facial Swelling and Tinnitus Complicating Septorhinoplasty
T. Van der Zijden 1, J. Claes 2,3, F.M. Vanhoenacker 4,5, G. Claes 6

Septorhinoplasty (SRP) is commonly performed for correcting nasal bony and cartilaginous deformities. A traumatic arteriovenous fistula (AVF) is often seen at specific anatomic locations and has rarely been associated with SRP. We report such an unusual case where an AVF developed from a terminal branch of the facial artery. After septorhinoplasty, a patient reported pulsatile tinnitus starting one day after surgery. Swelling on the left side of the nasal pyramid was still present two weeks after the procedure. Clinically, a traumatic AVF was suspected, which was confirmed by subsequent Doppler ultrasound examination and angiography. The lesion had developed an important venous pouch, and the arterial contribution was from the internal carotid as well as the external carotid system bilaterally. Complete resection was done by external approach.

Keywords: arteriovenous malformations - nose.

From: 1. Dept of Medical Imaging, University Hospital Antwerp, Antwerp, 2. ENT Dept University Hospital Antwerp, 3. ENT Dept AZ St Maarten Duffel, 4. Dept of Medical Imaging University Hospitals Antwerp and Ghent, 5. Dept of Medical Imaging AZ St Maarten Duffel, 6. ENT Dept University Hospital Antwerp, Belgium. Address for correspondence: Dr T. Van der Zijden, Dept of Medical Imaging, University Hospital Antwerp, Wilrijkstraat 10, B-2650 Edegem, Belgium. E-mail: Thijs.Van.der.Zijden@uza.be

JBR-BTR, 2013, 96: 65-68.
Septorhinoplasty (SRP) is a commonly performed procedure for correction of esthetic and/or functional deformities of the nasal bony and cartilaginous structures. Complications after SRP are relatively uncommon and most often have an infectious origin or result from inadequate surgical planning or from poor technique. The formation of a traumatic arteriovenous fistula (AVF) is often seen at specific anatomic locations (e.g. caroticocavernous fistulae) and has rarely been associated with SRP. We report such an unusual case of an AVF originating from a terminal branch of the facial artery.
Case report
A 47-year-old male underwent endonasal SRP for correction of an obstructing nasal pyramid and nasal septal deviation. The patient's history included hiatus hernia, aspirin intolerance and bronchial hyperreactivity without evidence of sinonasal inflammatory disease. Oral omeprazole and salmeterol/fluticasone were taken as chronic medication. No other systemic or vascular conditions were present.
The surgical procedure involved a septal correction with osteotomy of the maxillary crest, removal of a cartilaginous and bony hump, and paramedian and lateral osteotomies involving external stab incisions midway between the medial canthus and nasal dorsum. Afterwards, nasal splints and a thermoplastic external nasal splint were placed.

The immediate postoperative course was uneventful and the patient was discharged the next day.

At postoperative control one week later the splints were removed, and symmetrical, regressing facial swelling and hematoma were visible. However, the patient mentioned a discrete, pounding tinnitus that had been present since day one after surgery. Since no clinical explanation could be found at that time, no specific action was undertaken until the next control visit one week later. At that time, and after further regression of the facial soft tissue swelling, a localized swelling at the left medial canthus had become evident. The swelling was well-confined, painless, without signs of inflammation or hematoma (Fig. 1). In addition, the presence of a pulse-synchronous pulsation of the swollen area was very striking. Rhinoscopy and nasal endoscopy were normal. Clinically, an arteriovenous fistula formation was suspected and the patient was referred for Doppler ultrasound evaluation, confirming the vascular nature of the lesion with mixed arteriovenous flow (Fig. 2). The high diastolic flow in the arterial component of the lesion reflected a potential communication with the internal carotid artery ("low-resistance cerebral circulation").

For optimal therapy planning the patient underwent conventional angiography. Selective angiography of both external and internal carotid arteries showed the vascular anatomy of the AVF, with the arterial feeders and venous pouch with draining veins (Fig. 3). On the left side the AVF was fed from the left internal maxillary artery by a hypertrophied infraorbital artery and from the left ophthalmic artery by the dorsal nasal artery. Across small right-to-left collaterals the feeding left angular artery was supplied by the right facial artery and right ophthalmic artery. The venous outflow was mainly through the right angular vein and frontal veins.
Because of the easy surgical access and the difficult direct arterial approach for femoral (transarterial) treatment, surgery was performed to treat this condition.
Through an external skin incision the venous pouch was exposed in the subcutaneous plane. Several feeding arteries were coagulated and transected, and the pouch was removed (Fig. 4). Pathologic examination of the resected tissue confirmed the vascular nature of the lesion. The further postoperative course was uneventful and the patient remained without complaints until the last follow-up visit four months after the surgical procedure.
Discussion
Arteriovenous fistulas (AVFs) are uncommon vascular lesions with abnormal communications between arteries and veins resulting in shunting of blood. Mostly, they are acquired after trauma (including surgery), due to rupture of an arterial aneurysm or due to erosion in neoplasms.

Most of the AVFs in the head and neck region are intracranial, i.e. caroticocavernous fistulas and dural AVFs. AVF formation in the facial area after surgery has been described (1, 2). We did not, however, find a case similar to the current one after a Medline search using the search terms "rhinoplasty, septoplasty, rhinosurgery, rhino surgery" combined with "arteriovenous". Descriptions exist of AVF formation through direct damage of the anterior ethmoidal artery (2). This, in our opinion, was not the underlying mechanism in our case, since there were no SRP-related peroperative or postoperative signs suggestive of any other than preseptal localization of vascular trauma.

Caroticocavernous AVF has been described as an unusual and dramatic complication of nasal surgery (3). It is clear from the clinical presentation and angiographic findings in our case that it is not comparable to a traumatic caroticocavernous AVF.

We believe that in our case a direct trauma of the left angular artery was the primary vascular lesion, caused by the transcutaneous left lateral osteotomy.

Pulsatile tinnitus is a known early sign of AVF of the midface, nose or sinuses (4). The tinnitus in our case was also the first sign, and we believe it is explained by bony sound conduction of the turbulent flow at the AVF.

The development of the venous pouch and its typical clinical presentation in our case have possibly been delayed by the use of external nasal splints and masked by the immediate normal postoperative swelling.

The arterial contribution to the fistula from both the internal and external carotid artery is striking. Several anastomotic routes between the extracranial and intracranial circulation exist. One of these possible collateral routes is the orbital plexus, which connects the ophthalmic artery with the facial, middle meningeal, maxillary and ethmoidal arteries (5). It is known that in some cases of internal carotid artery occlusion, collateral connections between the external carotid artery and the intracranial and orbital circulation may develop. The blood supply to the ipsilateral eye and even to the ipsilateral brain hemisphere can depend solely on retrograde filling of the ophthalmic artery (6). During embolization procedures or due to high-flow shunts, these potential collateral pathways can become more prominent (7).

In the case of clinical suspicion of a superficially located AVF, due to its low cost and wide availability, color Doppler ultrasound is the first-line imaging technique to confirm the vascular nature of the lesion. It also demonstrates arterial flow in the feeding arteries, turbulent flow at the junction between artery and vein, and high-velocity arterialized flow within the draining veins (8). However, catheter angiography is often needed for more precise vascular mapping of feeding arteries, nidus and draining veins (1). Angiography is very useful in showing the sometimes complex anatomy of the AVF in order to plan an adequate treatment strategy. It is very important to visualize the entire nidus with all feeding vessels, including other possible collateral feeders, and the draining vessels. A proper angiography protocol in the case of a facial AVF includes arteriograms of both the external and internal carotid arteries (9, 10).

A specific treatment choice is always a trade-off between benefits and risks. A femoral transarterial embolization is preferred in a case with good arterial access, with no dangerous interconnections with the internal carotid artery system, and easily accessible collateral feeders. Another embolic therapeutic approach could be a direct or femoral transvenous approach. The embolization could be done with glue, non-adhesive liquid embolic agents, coils, particles or a combination of the aforementioned embolic agents. In comparison with surgery, endovascular therapy has higher rates of recurrence. Direct transcutaneous injection of a sclerosing agent is possible in selected cases as well. Surgical treatment of a sinonasal AVF is a valuable option whenever its location allows radical resection. It may be the only treatment option in those cases where embolization is not feasible or has failed. Resection of the venous pouch and all feeding arteries is of utmost importance, since reformation of the AVF has been described after incomplete resection (10).
In conclusion, an AVF developing from a terminal branch of the facial artery after SRP is unusual. Feeders were recruited from both the internal and external carotid arteries. Doppler ultrasound is an excellent first-line imaging technique for confirming the vascular nature of a lesion. For treatment strategy planning, catheter angiography, including angiograms of both the external and internal carotid arteries, provides more precise vascular mapping. Visualizing the entire nidus with all feeding and draining vessels is very important.
Fig. 1. - Clinical picture of the patient showing a well-defined, pulsatile soft tissue swelling at the left medial canthus (white arrow).

Fig. 2. - Doppler ultrasound of the nose. Color Doppler ultrasound image (A) obtained at the level of the clinical swelling confirms the vascular nature of the lesion. The high diastolic flow in the arterial component on pulsed Doppler (B) reflects a potential communication with the internal carotid artery system.

Fig. 3. - Selective angiography (internal carotid artery, ICA, and external carotid artery, ECA) shows the vascular anatomy of the AVF with arterial feeders (thin black arrows) and venous pouch (thick white arrows) with draining veins. The AVF is fed from the left internal maxillary artery through a hypertrophied infra-orbital artery (A - left ECA injection, lateral view) and from the left ophthalmic artery through the dorsal nasal artery (B - left ICA injection, lateral view). Right ECA injection (C - anteroposterior view) shows feeding from the left angular artery through the right facial artery, using right-to-left collaterals. The venous outflow of the fistula is mainly through the right angular vein and frontal veins (D - anteroposterior view, left ECA injection, late arterial phase).

Fig. 4. - Peroperative images during (A) and after resection (B) of the lesion, showing the venous pouch measuring about 1 cm with the coagulated entry points of the feeding arteries.
Experimental Investigation of Bearing Capacity of Screw Piles and Excess Porewater Pressure in Soft Clay under Static Axial Loading
In this study, the behavior of screw pile models with a continuous helix was studied by conducting laboratory experimental tests on single screw piles with several aspect ratios (L/D) under static axial compression loads. The screw piles were inserted in a soft soil with a unit weight of 18.72 kN/m3 and a moisture content of 30.19%. The soil has a liquid limit of 55% and a plasticity index of 32%. A physical laboratory model was designed to investigate the ultimate compression capacity of the screw pile and to measure the porewater pressure generated during the loading process. The bedding soil was prepared according to the field unit weight and moisture content, and the failure load was taken as the load corresponding to a settlement equal to 20% of the helix diameter. The ultimate compression capacity of screw piles is higher than that of ordinary piles and increases with decreasing aspect ratio. The ultimate bearing capacity of the flexible screw pile (L/D = 20) is greater than that of the ordinary pile by 59.5%, while for the rigid screw piles (L/D < 20) the ultimate bearing capacity could reach 250% of that of the ordinary pile. Also, the estimated ultimate compression capacity of flexible screw piles agreed well with the experimentally measured values, but a large difference was noted for rigid screw piles.
Introduction
In recent years, there has been increasing demand for engineering-efficient solutions to soil problems that take into account economic suitability, ease of implementation and limited damage to the environment. Screw piles have emerged as such a solution, characterized by wide field use in many applications, such as stabilizers, supporting the foundations of masts and energy transmission towers, underpinning damaged old foundations, and sustainable energy projects by fixing the bases of solar panels. Moreover, they have the advantage of being re-usable and lightweight compared to other types of piles, making them very convenient in marshy soil conditions, fine marine mud and areas with restricted access. The characteristics of soft soils pose a difficult challenge for geotechnical engineering, where the design of foundations in compressible soils cannot be based solely on bearing capacity theory; it is generally governed by the deformation of the soil and the behavior of the structure. Thus, the design can become more complicated when the environment presents extremely unfavorable conditions [1,2].
Soils that are normally consolidated, under-consolidated, or slightly over-consolidated, whose structure is formed mostly of fine grains, and which have a very soft to soft consistency are known as soft soils. Soft soil depths vary, sometimes reaching 30 meters below the natural ground surface. Occasionally, sand and silt layers coincide with contact with post-glacial deposits such as lacustrine clays, which may maintain pore pressure, which in turn causes instability of the soft soil when excavated [1,2]. Soft soils are characterized by their flat laminate surface and are alluvial deposits that may be traced back nearly 10,000 years. This type of soil can be identified by its high compressibility (Cc ranging from 0.19 to 0.44) and low undrained shear strength (cu less than 40 kPa). However, the undrained shear strength of this soil type is significantly affected by changes in its moisture content: it becomes harder on drying and weaker as its moisture content increases. This variation causes stability problems [3].
The screw pile is a well-known solution for supporting light structures, and helical piles are a valuable component in the geotechnical tool belt. Screw piles provide support for different types of structures and adapt to difficult underground conditions, and their speed of installation leads to overall cost savings. Moreover, screw piles are easy to install and provide capacities with a high degree of certainty, and from the public perspective they are interesting because they are considered an innovative, environment-friendly solution [4,5]. The bearing capacity of micro screw piles under axial compression, pullout, and lateral loading was investigated by conducting several field tests, with the screw piles inserted in clayey and sandy soils. The authors concluded that increasing the number of helices increases the bearing capacity of screw piles. Also, the screw piles in clayey soil exhibited higher bearing capacity than in sandy soil [6].
Mukhlef et al. [7] investigated the behavior of screw piles with several aspect ratios (L/D) inserted in gypseous soils and subjected to axial compression loading. The tests were conducted in dry and soaked conditions to evaluate the effect of gypsum dissolution on the bearing capacity of screw piles. The results showed an increase in the bearing capacity of screw piles with decreasing aspect ratio, while soaking the soil significantly decreased the bearing capacity of screw piles in gypseous soil. The present study focuses on investigating the ultimate bearing capacity of screw piles with several aspect ratios inserted in soft soil and subjected to axial compression loading. The screw piles can be divided into two groups: flexible piles with L/D>20 and rigid piles with L/D<20. Also, the porewater pressure generated during the loading tests was measured using porewater pressure transducers. The experimental results for ultimate bearing capacity were compared with those calculated theoretically, to evaluate the validity and applicability of existing theoretical equations for estimating the ultimate bearing capacity of screw piles in soft soils.
Soil Sampling and Geotechnical Properties
Soil sampling. Soil samples were obtained from a quarry of raw materials for the Kufa Cement Factory, located west of the Sudair district, 20 km south of Al-Diwaniyah Governorate. The soil samples were obtained from a depth of 4.25-4.5 meters below the natural ground level and below the groundwater level. The groundwater table lies at approximately 4 meters below the ground surface.
Geotechnical properties. In order to determine the geotechnical properties of the soil used in this study, the samples were subjected to a program of tests that included determining the field density and moisture content, which are the main factors in preparing the bedding soil in the physical model. The physical and mechanical properties of the soil are summarized in Table 1, and the chemical properties of the soft clay used in the tests are given in Table 2.
Physical Model and Procedure of Testing
Model piles. Three screw pile models with aspect ratios (L/D) of 20, 13.33, and 10 were used, along with one ordinary pile of circular cross-section with an aspect ratio of 30.77. A scale ratio of 1:10 was adopted to correlate the laboratory models with field piles. The screw piles had a constant embedded length (L) of 400 mm and variable configurations of helix diameters and spacing between helices. A constant ratio of helix diameter (D) to shaft diameter (d) of 2.5 was used. Details and dimensions of the piles used in the tests are given in Table 3 and shown in Figure 1.

Physical model. The model consists of two main parts. The first part is a steel container that holds the soil and consists of separate, removable parts. It was made of 8 mm thick iron plates with dimensions of 70 cm length, 70 cm width, and 70 cm height, and is seated on an iron base plate. At the center of the base, the container has an opening of 1.25 cm in diameter connected to a valve. This valve is connected to a tank used to saturate the soil from the bottom up; the water level in the reservoir is kept 20 cm above the soil surface to ensure saturation. The container is coated with anti-rust paint and two layers of oil paint to avoid corrosion during the test period. A schematic diagram of the physical model is shown in Figure 2. The second part consists of a hydraulic piston used to apply the load to the pile head. The magnitude of the applied load was measured by a load cell fixed to the head of the hydraulic piston, and the vertical displacement of the pile was measured using a linear variable differential transformer (LVDT). The instruments used in the tests are shown in Figure 3. Moreover, the porewater pressure was monitored throughout the test period using a porewater pressure transducer (Model 4500DP).
The transducer is made of high-quality stainless steel and is designed to handle pressures from -50 to 4000 kPa, so it can measure negative porewater pressures down to -50 kPa. A liquid level sensor, also known as a hydrostatic level sensor, measures the level by converting the fluid pressure, based on its height above the sensor and its density, into a linear output signal. The signals generated by the LVDT, load cell, and porewater pressure transducer are received and converted into values of settlement and applied load, with the corresponding porewater pressure, using the Arduino software together with the LabVIEW program, through which the outputs of each test are controlled.

Soil bedding. The soil bed was prepared according to the field unit weight of 18.72 kN/m3 and the corresponding moisture content of 30.19%. The soil bed in the iron box was placed in six layers so that the thickness of each layer after compaction did not exceed 9 cm, giving a final soil height in the container of 60 cm. A 3 cm thick layer of sand was placed at the bottom of the container to protect the bottom of the soil from disturbance during the saturation process, which continued for 48 hours, and to ensure that saturation occurred smoothly and completely. A torque was applied gradually to the head of each screw pile to insert it at the center of the soft soil bed, and the torque was stopped after reaching the required depth of 40 cm. Sufficient care and control were taken to keep the screw pile vertical.
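As a rough illustration of the acquisition chain just described (load cell, LVDT, and porewater pressure transducer read through Arduino/LabVIEW), the sketch below converts raw sensor counts into engineering units. All calibration constants and names are hypothetical placeholders, not values from the actual setup.

```python
# Hypothetical calibration constants (placeholders, not the actual setup values).
LOAD_N_PER_COUNT = 0.5      # load cell: newtons per ADC count
LVDT_MM_PER_COUNT = 0.002   # LVDT: millimetres per ADC count
PWP_KPA_PER_COUNT = 0.1     # porewater pressure transducer: kPa per count
PWP_OFFSET_KPA = -50.0      # transducer range starts at -50 kPa

def convert_reading(load_counts: int, lvdt_counts: int, pwp_counts: int):
    """Convert raw ADC counts from the three channels into load (N),
    settlement (mm) and porewater pressure (kPa)."""
    load_n = load_counts * LOAD_N_PER_COUNT
    settlement_mm = lvdt_counts * LVDT_MM_PER_COUNT
    pwp_kpa = PWP_OFFSET_KPA + pwp_counts * PWP_KPA_PER_COUNT
    return load_n, settlement_mm, pwp_kpa
```

In the actual rig these conversions are handled by the Arduino/LabVIEW software; the snippet only makes explicit the kind of linear scaling such a chain performs.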
Testing Procedure
The axial compression load tests on screw piles were conducted in accordance with ASTM D1143-07 [8]. A compression test was performed on a single pile embedded in the prepared soft soil; the loading and measurement system tools were then tightened and, after ensuring that the soil was saturated, the compression load test was performed as described in the following steps:
1) The compression-load test system is prepared by equipping a hydraulic piston, supported from the top and fixed to the loading frame, and connected from the bottom to the load cell. The load cell is mechanically connected to the head of the screw pile by special tools, and is connected to an Arduino programmer that works with the LabVIEW program to record the increments of the applied load.
2) The compression load is applied and controlled by the hydraulic piston and load cell. The piston moves downward and applies a compression load to the load cell, which is connected to the pile cap and to the LabVIEW program that displays the values of the compression load.
3) The compression load is applied gradually and incrementally; the average downward movement of the screw pile under each increment is measured using the LVDT, which is connected to the screw pile cap and to the LabVIEW program that displays the values of downward movement.
4) The test continues until failure of the pile is reached. The failure criterion adopted in this study defines the ultimate bearing capacity of the pile as the load corresponding to a displacement equal to 20% of the helix diameter, as illustrated in the sketch below.
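A minimal sketch of this failure criterion follows, assuming the load-settlement curve is available as two parallel lists; the function name and the example readings are hypothetical.

```python
def ultimate_capacity(loads, settlements, helix_diameter_mm):
    """Ultimate capacity per the 20%-of-helix-diameter criterion:
    linearly interpolate the load at a settlement of 0.2 * D.
    `loads` and `settlements` are parallel, monotonically increasing lists."""
    target = 0.2 * helix_diameter_mm
    for (s0, q0), (s1, q1) in zip(zip(settlements, loads),
                                  zip(settlements[1:], loads[1:])):
        if s0 <= target <= s1:
            # Linear interpolation between the two bracketing readings.
            return q0 + (q1 - q0) * (target - s0) / (s1 - s0)
    raise ValueError("test did not reach 20% of the helix diameter")

# Example with made-up readings for a 40 mm helix (target settlement = 8 mm):
q_ult = ultimate_capacity([0, 150, 300, 420], [0, 2, 6, 10], 40.0)
print(q_ult)  # 360.0 (load units) at 8 mm settlement
```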
Load Transmission Mechanism
The mechanism of transferring the load from the pile to the soil involves two basic components in calculating its capacity: skin friction and end bearing, as shown in Figure 4. The ultimate compression capacity of screw piles can be estimated using one of the formulas suggested in the literature [9,10]. The general equation used for this purpose can be written as:

Qult = Qhelix + Qbearing + Qshaft    (1)

where Qult is the ultimate compression capacity of the screw pile; Qhelix is the shearing resistance mobilized along the cylindrical failure surface; Qbearing is the end bearing of the pile; and Qshaft is the resistance developed along the pile shaft. A second formula (Eq. 2), based on the cylindrical shear failure mode, can be used to estimate the ultimate bearing capacity of the screw pile. The failure of this type of pile follows the cylindrical failure pattern when the ratio of the spacing between helix centers to the helix diameter does not exceed one [4]. It is evident from Eq. 1 and the more detailed boundary coefficients defined in Eq. 2 that the main parameters that must be provided for the theoretical estimation of the ultimate load are the undrained cohesion and the adhesion strength between the soil and the surface of the pile shaft. The adhesion strength was considered equal to the cohesion strength of the soil (adhesion factor equal to unity), since the soil condition is mainly compatible with the reconstituted conditions of soft to medium-stiff clay. For uniform clay, compacted in the same manner in layers and having the same moisture content, it can be assumed that C'a = Ca = Cp = cu [9,10].
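Since Eq. 2 itself is not reproduced in the text, the sketch below should be read only as a generic cylindrical-shear estimate consistent with Eq. 1 and the assumption Ca = cu. The component expressions, the bearing factor Nc = 9, and all function and parameter names are our assumptions, not the paper's Eq. 2.

```python
import math

def screw_pile_capacity(cu_kpa, helix_d_m, shaft_d_m, helix_span_m,
                        shaft_len_m, nc=9.0, alpha=1.0):
    """Estimate ultimate compression capacity (kN) as
    Qult = Qhelix + Qbearing + Qshaft (Eq. 1), assuming a cylindrical
    shear surface between the helices and adhesion Ca = alpha * cu.
    The component expressions below are a common cylindrical-shear
    formulation, assumed here since Eq. 2 is not reproduced."""
    q_helix = cu_kpa * math.pi * helix_d_m * helix_span_m           # shear on cylinder
    q_bearing = nc * cu_kpa * math.pi * helix_d_m ** 2 / 4.0        # bottom helix bearing
    q_shaft = alpha * cu_kpa * math.pi * shaft_d_m * shaft_len_m    # shaft adhesion
    return q_helix + q_bearing + q_shaft
```

Setting alpha = 1.0 reproduces the paper's simplification that the adhesion factor equals unity for the reconstituted soft clay.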
Results and Discussion
The comparison was performed using an ordinary pile of constant section without helical fins, in order to observe its performance and evaluate the efficiency gained by adding helix plates to the screw piles, as well as the effect of the ratio of embedded length to helix diameter. The variation of the applied axial compression loads on the screw piles and the ordinary pile with the resulting settlement is shown in Figure 5. The test results showed a significant difference in the response of the screw piles compared with the ordinary pile, despite the smaller shaft diameter of the screw piles. In general, the bearing capacity of screw piles increases with decreasing aspect ratio (L/D): the ultimate compression capacity of the screw pile is greater than that of the ordinary pile by about 59.5% when L/D = 20, and this gain increases with decreasing L/D, so that for the rigid screw piles the ultimate compression capacity varied from 150% to 250% as L/D varied from 13.33 to 10. The increase in ultimate bearing capacity results from the increased area of interaction between the soil and the helices of the screw piles. The friction resistance of screw piles is higher than that of ordinary piles due to the larger perimeter of the slip surface, whose diameter equals the helix diameter. In general, the failure pattern changed from local shear failure to general shear failure with decreasing aspect ratio: the flexible screw piles (L/D>20) failed by local shear failure, while the rigid screw piles (L/D<20) failed by general shear failure.

The porewater pressure (PWP) increases with increasing applied load, as shown in Figure 6. At the early stage of loading, the PWP grows rapidly to a maximum value and then begins to decrease or stabilize at a certain limit, depending on the diameter of the pile. Strikingly, the rate of change in PWP varied inversely with the helix diameter, showing a rather large change for the piles of small diameter (SD13 and SD20) and quite the opposite for the piles of large diameter (SD30 and SD40). It was observed that the maximum PWP coincides with a pile movement of 3% of the helix diameter; the PWP then begins to decline as the pile reaches its ultimate capacity, corresponding to a displacement of 20% of the helix diameter.

Based on the results obtained from loading the screw piles up to failure, the failure pattern in all tests is the cylindrical failure pattern, due to the close spacing between the helix centers, which is mostly less than the helix diameter. These conditions cause overlapping of stresses in the soil between the helices, resulting in cylindrical failure, as shown in Figure 7. There are several approaches and criteria used for conceptual design that must be compared with the ultimate capacities of the screw piles obtained from the load-settlement tests. The load corresponding to a displacement of 20% of the helix diameter was used as the criterion to determine the ultimate bearing capacity of the screw piles, and was used in this study for comparison with the capacities estimated by theoretical methods. The ultimate bearing capacities of the screw piles obtained from the experimental tests are compared with the corresponding values calculated from the theoretical equations in Table 4.
There is a noticeable discrepancy between the experimental results and the values estimated from Eq. 2, and this discrepancy increases as the (L/D) ratio decreases.
The experimentally obtained load capacity is nearly twice the estimated value, and this discrepancy appears to be related to the stiffness of the pile. Piles are classified by aspect ratio (L/D) into two major categories: rigid piles (aspect ratio less than 20) and flexible piles (aspect ratio more than 20) [11,12]. Comparing the experimental results with the values estimated using Eq. 2 shows that the results for flexible piles agree reasonably well, but they diverge strongly for rigid piles, where the estimated capacity becomes much lower than the actual capacity. It is therefore reasonable to use Eq. 2 for flexible piles, but its use with short rigid piles must be reconsidered.
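As a small illustration of this classification rule, the sketch below flags when the Eq. 2 estimate should be trusted; the L/D = 20 threshold follows [11,12], and the example geometries are hypothetical:

```python
def eq2_applicability(embedded_length, helix_diameter, rigid_limit=20.0):
    """Classify a screw pile by aspect ratio L/D and flag whether the
    cylindrical-shear estimate (Eq. 2) is expected to be reliable,
    following the rigid/flexible split at L/D = 20 [11,12]."""
    aspect = embedded_length / helix_diameter
    kind = "flexible" if aspect > rigid_limit else "rigid"
    # Per the test results, Eq. 2 strongly underestimates rigid-pile
    # capacity (experimental capacity approached twice the estimate).
    reliable = (kind == "flexible")
    return aspect, kind, reliable

print(eq2_applicability(0.80, 0.03))   # hypothetical: L/D ~ 26.7 -> flexible
print(eq2_applicability(0.40, 0.03))   # hypothetical: L/D ~ 13.3 -> rigid
```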
Conclusions
This study investigated the behavior of flexible and rigid screw piles in soft clay under axial compression loading. Based on the results of the experimental tests, the following conclusions can be drawn. Screw piles are easy and simple to install in soft soils, and piles of this type can be reused in construction. The results showed that this type of pile is very effective in resisting axial compression loads in soft cohesive soils. The failure pattern associated with these piles is the cylindrical pattern, because the spacing between the helices is short and less than the helix diameter. The tests showed increasing gains in the ultimate bearing capacity of the screw pile as a result of adding the helical plates, with the ultimate bearing capacity of the screw piles increasing as the pile aspect ratio (L/D) decreases. In general, the failure of flexible screw piles (L/D > 20) can be classified as local shear failure, whereas rigid screw piles (L/D < 20) fail by general shear. The porewater pressure generated during loading reaches a maximum at a pile displacement of about 3% of the helix diameter and then decreases continuously as the pile keeps moving under load. The estimated and experimental ultimate capacities converged for flexible screw piles, but diverged widely for rigid screw piles.
|
2021-11-10T16:17:46.477Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "1db0bd70af0bdda32d3b2817052bfd04e4256331",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/94/e3sconf_icge2021_01001.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c20cbb77d36204412fd5964a31f453a8f5e94c62",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
}
|
119296257
|
pes2o/s2orc
|
v3-fos-license
|
Detecting quantum gravitational effects of loop quantum cosmology in the early universe?
We derive the primordial power spectra and spectral indexes of the density fluctuations and gravitational waves in the framework of loop quantum cosmology (LQC) with holonomy and inverse-volume corrections, by using the uniform asymptotic approximation method to its third-order, at which the upper error bounds are $\lesssim 0.15\%$, and accurate enough for the current and forthcoming cosmological observations. Then, using the Planck, BAO and SN data we obtain the tightest constraints on quantum gravitational effects from LQC corrections, and find that such effects could be well within the detection of the current and forthcoming cosmological observations.
Introduction.-Quantization of gravity has been one of the main driving forces in physics in the past decades [1], and various approaches have been pursued, including string/M-Theory [2], loop quantum gravity [3], and more recently the Horava-Lifshitz theory [4]. However, it is fair to say that our understanding of it is still highly limited, and none of the aforementioned approaches is complete. One of the main reasons is the lack of evidence of quantum gravitational effects, due to the extreme weakness of gravitational fields. This situation has been dramatically changed recently, however, with the arrival of the era of precision cosmology [5]. In particular, cosmic inflation [6], which is assumed to have taken place during the first moments of time, provides the simplest and most elegant mechanism to produce the primordial density perturbations and gravitational waves. The former is responsible for the formation of the cosmic microwave background (CMB) and the large-scale structure of the universe [7]. Current measurements of the CMB [8] and observations of the large-scale distributions of dark matter and galaxies in the universe [9] are in stunning agreement with it. On the other hand, since inflation is extremely sensitive to Planckian physics [7,10], it also provides opportunities to gain deep insight into the physics at energy scales that cannot be reached by any man-made terrestrial experiments in the near future. In particular, it provides a unique window to explore quantum gravitational effects from different theories of quantum gravity, whereby one can falsify some of these theories with observational data of unprecedented accuracy [11], and obtain experimental evidence and valuable guidelines for the final construction of the theory of quantum gravity.
In this Letter, we shall study the quantum gravitational effects of LQC in inflation [12], and show explicitly that these effects could be well within the detection of the current and forthcoming cosmological experiments [11]. Such effects can be studied by introducing appropriate modifications at the level of the classical Hamiltonian, very much similar to those studied in solid state physics [12]. It was found that there are mainly two kinds of quantum corrections: the holonomy [13][14][15][16] and inverse-volume [17][18][19] corrections. These corrections modify not only the linear perturbations, but also the space-time background.
In particular, for a scalar field φ with potential V(φ), the holonomy corrections modify the Friedmann and Klein-Gordon equations to the forms

H² = (8πG/3) a² ρ_φ (1 − ρ_φ/ρ_c),   φ'' + 2Hφ' + a² V_{,φ}(φ) = 0,   (1)

where a denotes the expansion factor, H ≡ a'/a, and a prime denotes the derivative with respect to the conformal time η (≡ ∫dt/a(t)). ρ_c is a constant that characterizes the energy scale of the holonomy corrections. Clearly, the big bang singularity normally appearing at ρ_φ = ∞ is now replaced by a big bounce occurring at ρ_φ = ρ_c. In the infrared (IR) we have ρ_φ/ρ_c ≪ 1, and Eq. (1) reduces to that of general relativity (GR). The evolution of the anomaly-free cosmological scalar and tensor perturbations is described by the mode function μ_k(η), satisfying the equation [14,15]

μ_k'' + (ω_k²(η) − z''/z) μ_k = 0,   (3)

where ω_k²(η) = Ω(η)k² with Ω(η) ≡ 1 − 2ρ/ρ_c. The background-dependent function z(η) is given by z_S (≡ aφ'/H) for the scalar perturbations and z_T (≡ a/√Ω) for the tensor ones. To first order in the slow-roll parameters and δ_H (≡ ρ/ρ_c ≪ 1), the inflationary spectra and spectral indexes with the holonomy corrections have recently been obtained, by further assuming that the slow-roll parameters and δ_H are all constants [16].
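As an illustration of the bounce implied by Eq. (1), the sketch below numerically checks the standard effective-LQC background solution for a massless scalar field, a well-known exact solution, in units G = 1; the setup is illustrative and not part of the analysis in this Letter:

```python
import numpy as np

# Effective LQC bounce for a massless scalar field (p = rho), units G = 1.
# The exact background a(t) = a_b * (1 + 24*pi*rho_c*t**2)**(1/6) should
# satisfy H**2 = (8*pi/3) * rho * (1 - rho/rho_c), with
# rho(a) = rho_c * (a_b/a)**6, so that rho -> rho_c and H -> 0 at t = 0.
rho_c, a_b = 1.0, 1.0
t = np.linspace(-5.0, 5.0, 2001)

a = a_b * (1.0 + 24.0 * np.pi * rho_c * t**2) ** (1.0 / 6.0)
H = np.gradient(a, t) / a                    # numerical Hubble rate
rho = rho_c * (a_b / a) ** 6
H2_lqc = (8.0 * np.pi / 3.0) * rho * (1.0 - rho / rho_c)

print("max |H^2 - RHS| :", np.max(np.abs(H**2 - H2_lqc)))  # ~0 (grid error)
print("rho at bounce   :", rho[t.size // 2])               # -> rho_c
print("H at bounce     :", H[t.size // 2])                 # -> 0
```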
With the inverse-volume corrections, on the other hand, the Friedmann and Klein-Gordon equations are modified by correction functions of the form 1 + α₀δ_Pl and 1 + ϑ₀δ_Pl [18], with δ_Pl ∝ a^(−σ), where α₀, ϑ₀ and σ are constants. [Note that here we use ϑ instead of the ν adopted in [18], and reserve ν for other uses.] The values of α₀ and σ are currently subject to quantization ambiguities, while the magnitude of δ_Pl is unknown, as so far we have no control over the details of the underlying full theory of quantum gravity [18]. However, when σ takes values in the range 0 < σ ≤ 6, the size of δ_Pl does not depend on α₀ and ϑ₀, and can be written in the form δ_Pl ≡ (a_Pl/a)^σ, where a_Pl is another arbitrary constant. The constant ϑ₀ is related to α₀ and σ via the consistency relation ϑ₀(σ − 3)(σ + 6) − 3α₀(σ − 6) = 0. To make the effective theory viable, we shall assume δ_Pl(η) ≪ 1 at any given moment, so we can safely drop all second- and higher-order terms in δ_Pl(η). This assumption also guarantees that the slow-roll conditions can be imposed even after the inverse-volume corrections are taken into account.
With the above assumption, Bojowald and Calcagni (BC) [18] studied the scalar and tensor perturbations with the inverse-volume corrections, and found that the corresponding mode function μ_k(η) can also be cast in the form (3), but now with a δ_Pl-corrected dispersion relation, where β₀ ≡ σϑ₀(σ + 6)/36 + α₀(15 − σ)/12. With such modified dispersion relations, BC calculated the corresponding power spectra and spectral indexes to first order in the slow-roll parameters, from which, together with Tsujikawa, they found [19] that the LQC effects are distinguishable from those of noncommutative geometry or string theory, since the latter manifest themselves at small scales [20], while the former do so mainly at large scales. To find explicitly the observational bounds on the inverse-volume quantum corrections, they considered the CMB likelihood for the potentials V(φ) = λ_n φⁿ and V(φ) = V₀ e^(−κλφ), using the WMAP 7yr data together with the large-scale structure, the Hubble constant measurement from the Hubble Space Telescope, supernovae type Ia, and big bang nucleosynthesis [21], the most accurate data available to them at the time, and obtained various constraints on δ(k) for different values of σ at the pivots k₀ = 0.002 Mpc⁻¹ and k₀ = 0.05 Mpc⁻¹, where δ(k) = α₀δ_Pl(k) for σ ≠ 3, and δ(k) = ϑ₀δ_Pl(k) for σ = 3. An interesting feature is that the constraints are very sensitive to the choice of the pivot k₀, especially when σ is large (σ ≥ 2), but insensitive to the form of the potential V(φ).
In this Letter our goals are two-fold. First, we calculate the scalar and tensor power spectra, the spectral indexes and the ratio r to second order in the slow-roll parameters, for both the holonomy and the inverse-volume corrections, so that they are accurate enough to match the accuracy required by the current and forthcoming experiments [11]. This becomes possible due to the recent development of the powerful uniform asymptotic approximation method [22][23][24], which was designed specifically for studies of inflationary models once quantum gravitational effects are taken into account. Up to third-order approximations in terms of the free parameter (λ⁻¹) introduced in the method, which is independent of the slow-roll inflationary parameters mentioned above, the upper error bounds are less than 0.15% [24]. Second, we use the most recent observational data to obtain new constraints on δ(k₀) for the power-law potential V(φ) = λ_n φⁿ, where n is chosen so that r ≲ 0.1. With such constraints, we prove explicitly that the quantum gravitational effects from the inverse-volume corrections are within the range of detection of the forthcoming experiments, especially the Stage IV ones [11].
Inflationary Spectra and Spectral Indexes.-To apply the uniform asymptotic approximation method, we first rewrite Eq. (3) as d²μ_k(y)/dy² = [λ²ĝ(y) + q(y)]μ_k(y), where y ≡ −kη, and λ is a large constant used to trace the order of the approximations. With the holonomy corrections, we have λ²ĝ(y) + q(y) = z''/(k²z) − Ω(η). To minimize the errors, q(y) cannot be arbitrary; it was found that it must be taken as q(y) = −1/(4y²) [23]. Then, it can be shown that in the quasi-de Sitter background the equation ĝ(y) = 0 has only one real root, say y₀. For the case with only one real root, the general expressions for the mode function, power spectra and spectral indexes up to third-order approximations (in terms of λ⁻¹) were given explicitly in [24]. Applying them to the case with the holonomy corrections, we find the corrected spectra and spectral indexes [25], where D_p = 67/181 − ln 3, D_n = 10/27 − ln 3, Δ₁ = 485296/98283 − π²/2, Δ₂ = 9269/589698, and a subscript ⋆ denotes quantities evaluated at the horizon crossing a(η⋆)H(η⋆) = √Ω(η⋆) k. The ε_n denote the slow-roll parameters, defined as ε₁ ≡ −Ḣ/H², ε_{n+1} ≡ ε̇_n/(H ε_n) (n ≥ 1). Note that in the above expressions we have ignored terms of orders higher than O(ε³, ε²δ_H). To first order, it can be shown that our results are consistent with those presented in [16].
In the case with the inverse-volume corrections, we have λ²ĝ(y) + q(y) = k⁻²(z''/z − ω_k²(η)), where ω_k²(η) is given by Eq. (6), with z_s(η) ≡ (aφ'/H)[1 + (1/2)(α₀ − 2ϑ₀)δ_Pl] and z_t(η) ≡ a(1 − α₀δ_Pl/2), respectively. To minimize the errors, q(y) must again be chosen as in the previous case, and then it can be shown that ĝ(y) = 0 has only one real root. As a result, the general expressions for the mode function, power spectra and spectral indexes given in [24] are also applicable to this case, yielding the corrected spectra presented in [25]. Note that we parametrize δ_Pl(η) = (a_Pl/k)^σ (−aη)^(−σ) y^σ ≡ ε_Pl κ y^σ, with ε_Pl ≡ (a_Pl/k)^σ and κ ≡ (−aη)^(−σ). In Table I, we list the values of the coefficients Q^(s) for different values of σ, as they represent the dominant contributions. The remaining terms in the above expressions are subdominant and are not given here; they appear explicitly in [25]. When σ = 3, Q = −(9π/64)ϑ₀. We emphasize that the modified power spectra and spectral indices are now explicitly scale-dependent because ε_Pl ∼ k^(−σ).
Before considering the observational constraints, let us first note that in [18,19] the observables n_s, n_t and r were calculated up to first order in the slow-roll parameters. Comparing their results with ours, after writing all expressions in terms of the same set of parameters, we find that our results differ from theirs. A closer examination shows that this is mainly due to the following. (a) In [18] the horizon crossing was taken as k = H. However, due to the quantum gravitational effects the dispersion relation is modified to the form (6), so the horizon crossing should be at ω_k = H. (b) In [18] the mode function was first obtained in the two limits k ≫ H and k ≪ H, and then matched at the horizon crossing where k ≃ H. This may lead to large errors [26], as neither limit provides a good approximation to the mode function μ_k at the horizon crossing. The above arguments can be seen further by considering the exact solution of μ_k for the case σ = 2, which is given in terms of the WhittakerW function W(b₁, b₂, z), with ν = 3/2 + ε₁ + ε₂/2 for the scalar perturbations and ν = 3/2 + ε₁ for the tensor ones. Matching it to the Bunch-Davies vacuum solution at k ≫ H fixes the integration constant c₁. With the above mode function, the power spectra and spectral indexes can be calculated, and are found to be the same as those given in [25], but different from those of [18,19].
Detection of Quantum Gravitational Effects.-The contributions to the inflationary spectra and spectral indices from the holonomy corrections are introduced through the parameter δ_H, which is of the order of 10⁻¹² for typical values of the parameters [16]. Hence, with the current and forthcoming observations [11], such effects are very difficult to detect.
On the other hand, for the inverse-volume corrections, let us consider the power-law potential V(φ) = λ_n φⁿ, for which we find η_V = 2(n − 1)ε_V/n and ξ²_V = 4(n − 1)(n − 2)ε²_V/n², where ε_V = M²_Pl n²/(2φ²). Thus, without the inverse-volume corrections (δ_Pl = 0), we have n_s = n_s(ε_V) and r = r(ε_V), and up to second order in ε_V the relation [27]

Γ_n(n_s, r) ≡ (n_s − 1) + (2 + n)r/(8n) + (3n² + 18n − 4)(n_s − 1)²/(6(n + 2)²) = 0   (10)

holds precisely. The results from Planck 2015 are n_s = 0.968 ± 0.006 and r_{0.002} < 0.11 (95% CL) [8], which yields n ≃ 1. In the forthcoming experiments, especially the Stage IV ones, the errors of the measurements of both n_s and r are ≤ 10⁻³ [11], which implies σ(Γ_n) ≤ 10⁻³. On the other hand, when the inverse-volume corrections are taken into account (δ_Pl ≠ 0), we have n_s = n_s(ε_V, ε_Pl) and r = r(ε_V, ε_Pl), and Eq. (10) is modified to

Γ_n(n_s, r) = F(σ) δ(k)/ε_V,   (11)

where δ(k) ≡ α₀ ε_Pl H^σ and F(σ) ∼ O(1) [25]. Clearly, the right-hand side of the above equation represents the quantum gravitational effects from the inverse-volume corrections. If it is equal to or greater than O(10⁻³), these effects will be well within the detection of the current or forthcoming experiments. It is interesting to note that the quantum gravitational effects are enhanced by a factor of order ε_V⁻¹, which is absent in [18]. In the following, we run the Cosmological Monte Carlo (CosmoMC) code [28] with the Planck [29], BAO [30], and Supernova Legacy Survey [31] data for the power-law potential with n = 1, which can be naturally realized in the axion monodromy inflation motivated by string/M theory [32]. To compare our results with those obtained in [19], we carry out our CMB likelihood analysis as closely to theirs as possible. In particular, we assume the flat cold dark matter model with the effective number of neutrinos N_eff = 3.046 and fix the total neutrino mass Σm_ν = 0.06 eV. We vary seven parameters: (i) the baryon density parameter Ω_b h²; (ii) the dark matter density parameter Ω_c h²; (iii) the ratio of the sound horizon to the angular diameter distance, θ; (iv) the reionization optical depth τ; (v) δ(k₀)/ε_V; (vi) ε_V; and (vii) Δ²_s(k₀). We take the pivot wave number k₀ = 0.05 Mpc⁻¹ used in Planck to constrain δ(k₀) and ε_V. In Fig. 1, the constraints on δ/ε_V and ε_V are given for σ = 1 and σ = 2, respectively. In particular, we find that δ(k₀) ≲ 6.8 × 10⁻⁵ (68% CL) for σ = 1, and δ(k₀) ≲ 1.9 × 10⁻⁸ (68% CL) for σ = 2, which are much tighter than those given in [19]. The upper bound on δ(k₀) decreases dramatically as σ increases [19,25]. However, for any given σ, the best-fit value of ε_V is about 10⁻², which is rather robust in comparison with the case without the quantum gravitational effects [29]. It is remarkable that, despite the tight constraints on δ(k₀), because of the ε_V⁻¹ enhancement in Eq. (11), such effects can be well within the range of detection of the current and forthcoming cosmological experiments [11] for σ ≲ 1. Note that small values of σ are also favorable theoretically [18].
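For a concrete sense of the numbers, the sketch below evaluates the consistency relation Eq. (10) for n = 1 with the Planck central value of n_s quoted above, together with the order-of-magnitude detectability estimate; F(σ) ≈ 1 is assumed:

```python
# Consistency check of Eq. (10) for V ∝ φ^n. Given n and n_s, solve
# Γ_n(n_s, r) = 0 for the GR-predicted r; a measured deviation Γ_n of
# order δ(k)/ε_V would signal the inverse-volume LQC corrections.
def gamma_n(n, n_s, r):
    return (n_s - 1.0) + (2.0 + n) * r / (8.0 * n) \
        + (3.0 * n**2 + 18.0 * n - 4.0) * (n_s - 1.0)**2 / (6.0 * (n + 2.0)**2)

n, n_s = 1.0, 0.968                      # Planck 2015 central value
# Gamma_n is linear in r, so solve for r directly:
r_gr = -((n_s - 1.0)
         + (3*n**2 + 18*n - 4) * (n_s - 1.0)**2 / (6*(n + 2)**2)) \
       * 8.0 * n / (2.0 + n)
print("GR-predicted r for n = 1:", round(r_gr, 4))   # ~0.084, below r < 0.11
print("consistency check:", abs(gamma_n(n, n_s, r_gr)) < 1e-12)

# Detectability: with sigma(Gamma_n) ~ 1e-3 and eps_V ~ 1e-2, a value
# delta(k0) ~ Gamma_n * eps_V / F(sigma) ~ 1e-5 is within reach.
print("delta(k0) reachable:", 1e-3 * 1e-2)
```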
Conclusions.-Using the uniform asymptotic approximation method developed recently in [23,24], we have accurately computed the power spectra, spectral indices and the ratio r of the scalar and tensor perturbations of inflation in LQC to second order in the slow-roll parameters, after the holonomy [13][14][15][16] and inverse-volume [17][18][19] corrections are taken into account. The upper error bounds are ≲ 0.15%, which is accurate enough for the current and forthcoming experiments [11]. Utilizing the most accurate CMB, BAO and SN data currently publicly available [29][30][31], we have carried out the CMB likelihood analysis and found constraints on (δ(k₀), ε_V), the tightest ones obtained so far in the literature. Even with such tight constraints, the quantum gravitational effects due to the inverse-volume corrections of LQC can be well within the range of detection of the current and forthcoming cosmological experiments [11], provided that σ ≲ 1.
It should be noted that in our studies of the holonomy corrections, the effects of the bounce of the universe are insignificant, as we implicitly assume that inflation occurred long after the bounce. This is the same assumption as considered in [13][14][15][16]. Thus, quantum gravitational effects from these corrections are expected to be negligible. However, when the whole bouncing process is properly taken into account, such effects may not be small at all [33]. It would be very interesting to revisit the observational aspects of these effects.
|
2015-07-31T23:13:58.000Z
|
2015-03-23T00:00:00.000
|
{
"year": 2015,
"sha1": "84b1e4437e1d5215f2d0799ff5f0ad40d10b8425",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1503.06761",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "84b1e4437e1d5215f2d0799ff5f0ad40d10b8425",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
53452256
|
pes2o/s2orc
|
v3-fos-license
|
Simulation of a 10 kW Photovoltaic System in Areas with High Solar Irradiation
Problem statement: This study investigates the design and the simulation of a 10 kW photovoltaic (PV) system in areas of high solar irradiation. The importance of this study is to explore the feasibility of connecting the PV system with a grid to generate electricity at the campus of Hashemite University (Jordan), whose yearly global irradiation is 2000 kWh/m². Approach: In order to determine the size and the number of PV modules needed to achieve the energy needs of the campus, we apply both the METEONORM and the PV*SOL simulation software. This study calculates the cost of one kWh generated by the PV system and then compares it with the public electricity tariff. The study also presents a comparison between the performances of different PV panel sizes with different inclination angles. Results: METEONORM software data prove to be accurate and reliable for use in this study. The 300 W panel size shows the maximum energy generation. The optimal inclination angle, which gives the maximum power generation, is 27°. Conclusion: The calculations show the cost of PV generation is $0.18 per kWh without public subsidy, compared to $0.086 per kWh from the national electric power company. A total of 356 modules is calculated to meet the power needs of the campus.
INTRODUCTION
Jordan imports approximately ninety-seven percent of its primary energy, of which thirty percent is used to generate electrical energy (Hanitsch et al., 1998). Renewable-energy sources are becoming more and more attractive, especially with the constant fluctuation in oil prices (Shafie-Pour et al., 2008). The photovoltaic (PV) system is considered one of the important alternative sources in this regard (Ibrahim et al., 2009). This clean and environment-friendly energy source looks very useful for utilization in Jordan, where solar global radiation is one of the highest in the world (International Energy Agency, 2006).
The sun is the largest regenerative source of energy in our world. It is estimated that the annual sun exposure amounts to 3.9×10²⁴ J = 1.08×10¹⁸ kWh. This corresponds to more than 10,000 times the present world energy needs (Quaschning, 2005).
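As a quick arithmetic check of the quoted conversion (all values from the sentence above):

```python
# 3.9e24 J expressed in kWh: 1 kWh = 3.6e6 J.
J_PER_KWH = 3.6e6
annual_solar_J = 3.9e24
print(annual_solar_J / J_PER_KWH)   # ~1.08e18 kWh, as stated
```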
The history of photovoltaic goes back to the year 1839, when Becquerel discovered the photovoltaic effect, but no technology was available in the 19th century to exploit this discovery. The semiconductor age only began about 100 years later. After Shockley had developed a model for the pn junction, Bell Laboratories produced the first solar cell in 1954; the efficiency of this, in converting light into electricity, was about 5% (Luque and Hegedus, 2003).
To understand PV operation, note that solar cells are made of semiconductor materials, which have some weakly bonded electrons. Electrons and holes usually appear in pairs within the solid, and the characteristics of the semiconductor material make it easy for incoming photons of sunlight to release electrons from the electron-hole binding. Leaving the holes behind, the released electrons can move freely within the solid material (Quaschning, 2005; Luque and Hegedus, 2003). However, these electron movements have no preferred direction; therefore, to generate electricity, it is necessary to collect the electrons. The semiconductor material is therefore doped with impurity atoms. Two different kinds of atoms produce an n-type and a p-type region inside the semiconductor, and these two neighboring regions generate an electric field, as shown in Fig. 1. This field collects electrons, drawing the free electrons released by the photons into the n-type region, while the holes move in the opposite direction, into the p-type region (Quaschning, 2005; Luque and Hegedus, 2003).
However, not all of the energy from the sunlight can generate free electrons. There are several reasons for this. Part of the sunlight is reflected at the surface of the solar cell or passes through the cell. In some cases, electrons and holes recombine before arriving at the n-type and p-type regions. Furthermore, if the energy of the photon is too low (as is the case with light of long wavelengths, such as infrared), it is not sufficient to release the electron. On the other hand, if the photon energy is too high, only part of its energy is needed to release the electron, and the rest is converted to heat. Figure 1 shows these processes in a photovoltaic (PV) cell.
In this study, a 10-kW grid-connected PV-system is investigated as a case study in Hashemite University of Jordan. Many factors have motivated this study. Firstly, the campus is located in a desert area where the global radiation numbers are ones of the highest in the world. Secondly, the campus has plenty of safe building flat-roof areas to install solar panels. Thirdly, most of the power demand is during the daytime so that implementing grid-connected PV system would save considerably in the electric bill (Rehman et al., 2005). Finally, constant increase of electric bill cost is due to the increase of oil prices.
Photovoltaic system: In general, PV electrical power generation can be divided into two categories: stand-alone PV systems and grid-connected PV systems. The first category is used in remote areas that are too expensive to reach with the public grid system. A major disadvantage of this system is the use of batteries for nighttime supply, since battery energy losses are too high (Hanitsch et al., 1998; Marouani and Mami, 2010; Aliman et al., 2007).
The second category is the grid-connected PV system, where the generated electricity is used directly and there is no need for storage. This study investigates this category, since the Jordanian national public grid covers 99.8% of the populated areas in the country (National Electric Power Company, 2009). Figure 2 shows the main components of a grid-connected PV system. The connection to the public grid is achieved by using proper inverters, and care must be exercised to choose inverter units with the highest efficiency. During the daytime, the solar generator provides power for the electrical equipment and lighting, and excess energy is supplied to the public grid; during the nighttime, the load draws its electricity from the public grid (Luque and Hegedus, 2003).
Grid-connected PV systems can be installed in different establishments whose power needs range from watts to megawatts. This can be achieved by installing enough PV generators for the different establishments.
System description:
The study is based on the Hashemite University campus, located 25 km north of Amman, Jordan. This is a desert area where radiation is among the highest in the world, which makes solar energy an attractive source of energy. The Jordan Meteorological Department operates around fifteen weather stations in the country that collect regular meteorological data, including sun radiation. The Al Azraq station was chosen for this study since its weather conditions are close to those at the university campus (Royal Jordanian Geographic Centre, 2001). The campus obtains its electrical power from a public grid that is shared with other industrial and residential consumers, and peak consumption on the campus occurs during the daytime, when PV power can conveniently be generated. The university power consumption for the year 2009 was 6106 MWh at a cost of $529,000, i.e., $0.087 per kWh. Furthermore, the unused building flat-roof areas amount to close to 53,600 m², providing safe locations for a large number of PV panels.
The principal objective of this simulation study is to provide a modular 10 kW PV system that will be connected to the public power grid as shown in Fig. 2. Additional PV modules can be added in the future as needed to meet campus needs.
The software used for the simulation is PV*SOL (Valentin Energy Software, 2003). This simulation program is used to design and perform calculations of grid-connected and stand-alone systems. It calculates the output of a PV system, depending on its location and determines its economic efficiency.
Monthly meteorological data, global radiation and temperatures, were obtained from the Jordan Meteorological Department for the last eight years. Figure 3 shows the yearly global-radiation figures, which exceed 2000 kWh/m². The actual PV system yields can vary due to variations in weather conditions and in module and inverter efficiencies. In addition, the data fed to PV*SOL must be hourly; however, the data obtained from the Jordan Meteorological Department are monthly averages and not hourly. Therefore, the METEONORM software was used to obtain the hourly data needed for the simulation.
METEONORM 5.0 is a comprehensive meteorological reference, incorporating a catalogue of meteorological data and calculation procedures for solar applications and system design at any desired location in the world.
RESULTS
Global-radiation and temperature data from the Jordan Meteorological Department were compared with the METEONORM data in Fig. 4 and 5 and showed very small differences. Therefore, the METEONORM data prove accurate and reliable for use in this study. The configuration of the PV system is illustrated in Fig. 2. The proposed 10 kW PV system will be made up of a number of panels. Initially, it is important to determine the optimal panel size; many panel sizes were tried using the PV*SOL software to choose it. Table 1 shows the simulation results of one of the best runs, in which the 300 W panel size shows the maximum energy generation.
One important factor in PV generation is the panel inclination angle. A simulation was performed to obtain the optimal inclination angle; as shown in Fig. 6, an angle of 27° gives the maximum power generation.
Another aspect of PV generation is choosing a suitable inverter from the inverter database available in PV*SOL. The selection of the inverter is based on the operating power and the highest possible efficiency; the list reflects standard inverters available on the market. The SUNWAYS AG NT 4000 was chosen as the best match for this application, and Table 2 shows the inverter data sheet. The inputs needed to run the final simulation are the module power, PV-module size, inverter type, location, and angle of inclination. The resulting 10 kW PV-system configuration is a PV generator composed of 33 panels and 3 NT 4000 3.4 kW inverters, as shown in Table 3. As shown in Table 3, the system efficiency, the percentage of PV-generated energy relative to the PV radiation, is 9.5%, leaving considerable room for improvement. In addition, the system cost is $61,200, of which 70% goes to the solar panels (Luque and Hegedus, 2003). Furthermore, the required area is 80 m², and plenty of safe areas are available on the campus.
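The reported efficiency and avoided-emission figures can be cross-checked with a short sketch. The annual AC yield and the grid emission factor below are assumptions chosen to be consistent with the 9.5% efficiency and the 15,148 kg/year CO2 figure cited from Table 3; neither value is stated explicitly in the text:

```python
# Rough back-of-envelope check of the reported 10 kW system figures.
area_m2 = 80.0                 # array area from Table 3
irradiation = 2000.0           # yearly global radiation, kWh/m2
annual_yield_kwh = 15_200.0    # assumed annual AC output of the system
emission_factor = 1.0          # assumed kg CO2 per kWh of displaced grid power

efficiency = annual_yield_kwh / (area_m2 * irradiation)
co2_avoided_kg = annual_yield_kwh * emission_factor

print(f"system efficiency : {efficiency:.1%}")     # ~9.5%, as reported
print(f"CO2 avoided       : {co2_avoided_kg:,.0f} kg/yr")
```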
CONCLUSION
We discussed the optimal configuration of a modular PV system at the Hashemite University of Jordan. The calculations show that the cost of PV generation is $0.18 per kWh without public subsidy, compared to $0.086 per kWh from the national electric power company (Jordan Meteorological Department, 2002; National Electric Power Company, 2008; 2004).
However, most of the PV-system cost is the upfront hardware cost, part of which might be covered by public subsidies in the future. Furthermore, hardware costs are decreasing with time while oil prices are increasing, so this cost ratio between PV generation and grid supply might be reversed in the near future.
We focused on implementing the PV system in 10 kW modules; however, more modules can be connected to the grid to meet additional needs. Note that a total of 356 modules is calculated to meet the power needs of the campus.
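A minimal sketch of the sizing arithmetic, using only the campus demand and module count quoted above; the per-module yield is derived from those two numbers rather than taken from the paper:

```python
# Implied annual yield per 10 kW module, from the quoted campus figures.
campus_demand_mwh = 6106.0     # 2009 campus consumption (MWh)
n_modules = 356                # number of 10 kW modules reported

implied_yield_mwh = campus_demand_mwh / n_modules
print(f"implied annual yield per 10 kW module: {implied_yield_mwh:.1f} MWh")
# ~17.2 MWh, i.e. roughly 1,700 kWh per installed kW per year, which is
# plausible for a site with ~2,000 kWh/m2 yearly global radiation and is
# somewhat above the yield implied by the Table 3 efficiency figure.
```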
Furthermore, considering the environmental impact, the PV system produces no CO₂ emissions and helps maintain a clean and healthy environment. As shown in Table 3, this PV module avoids the emission of 15,148 kg of CO₂ per year. In addition, it is important to mention that global radiation in Jordan is much higher than in Europe; for example, the yearly global radiation in Germany is around 1000 kWh/m², compared to 2000 kWh/m² in Jordan.
Future work might include improving the system efficiency and optimizing the system components. This might be achieved by investigating a new mechanism that keeps the solar panels perpendicular to the incident sun radiation, or by using newer technologies for solar cells and inverters.
|
2018-10-31T00:31:45.466Z
|
2011-02-28T00:00:00.000
|
{
"year": 2011,
"sha1": "52c2022cd04a7ca0a2938c5638020a812ad24880",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajassp.2011.177.181",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c8d62af2c5d2750464249b0a0930446006bbdf82",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
5350404
|
pes2o/s2orc
|
v3-fos-license
|
Urological complication following aortoiliac graft: case report and review of the literature
ABSTRACT CONTEXT: Ureteral stenosis and ureterohydronephrosis may be serious complications of aortoiliac or aortofemoral reconstructive surgery. CASE REPORT: A 62-year-old female patient presented with a six-month history of left lumbar pain. She was a smoker, and had mild chronic arterial hypertension and Takayasu arteritis. She had previously undergone three vascular interventions. In two procedures, Dacron prostheses were necessary. Excretory urography showed moderate left ureterohydronephrosis and revealed a filling defect in the ureter close to where the iliac vessels cross. This finding was compatible with ureteral stenosis, and the aortoiliac graft may have been the reason for this inflammatory process. The patient underwent laparotomy, which showed that there was a relationship between the ureteral stenosis and the vascular prosthesis. Segmental ureterectomy and end-to-end ureteroplasty with the ureter crossing over the prosthesis anteriorly were performed. There were no complications. The early and late postoperative periods were uneventful. The patient evolved well and the results from a new excretory urogram were normal. We concluded that symptomatic ureterohydronephrosis following aortoiliac graft is a real complication and needs to be quickly diagnosed and treated by urologists.
INTRODUCTION
Ureteral stenosis and ureterohydronephrosis may be serious complications of aortoiliac or aortofemoral reconstructive surgery. Ureteral lesions from vascular surgery are believed to account for 0.8% of lesions recognized at the time of surgery and 2.2% of complications observed later. [1] Ureteral lesions are sometimes not iatrogenic. They are related to the aneurysmal form of aortic disease, and some develop from secondary inflammatory reactions. Today, intraoperative ureteral injury, secondary retroperitoneal fibrosis, residual hematomas, false aneurysms after surgery, graft placement anterior to the ureter and graft infection are the main causes of urological complications following vascular reconstructive surgery. [1] We report a case of a late complication of an aortoiliac graft in a symptomatic patient that required surgical intervention. We also present a brief review of the literature on this subject.
CASE REPORT
A 62-year-old Caucasian female presented with a six-month history of left lumbar pain. She had no history of fever, hematuria, dysuria, frequency, urgency, polyuria, nocturia, incontinence or difficulty in voiding. The patient was a smoker, and she had mild chronic arterial hypertension and Takayasu arteritis. Both of these diseases were under control with medications. She had previously undergone three vascular interventions because of complications from the Takayasu arteritis. In 1984, she underwent superior mesenteric artery endarterectomy. In 1986, she underwent aortobicarotid graft surgery with implantation of a Dacron prosthesis, and in 1991 she underwent aortobiiliac graft surgery, also with a Dacron prosthesis.
Physical examination revealed only mild pain in the left flank. The patient was negative for Giordano's sign. Urine culture and urine cytological tests were negative. Biochemical parameters were within normal limits. Abdominal ultrasonography (US) revealed a cyst in the left kidney, and abdominal computerized tomography (CT) merely confirmed the diagnosis. The patient underwent US-guided aspiration puncture, followed by ethanol sclerosis.
The pain kept bothering the patient, and CT was performed again. This showed moderate left ureterohydronephrosis and a stop point in the mid-portion of the left ureter. Intravenous urography confirmed the moderate left ureterohydronephrosis and showed a filling defect in the ureter close to where the iliac vessels crossed (Figure 1). This was compatible with ureteral stenosis, and the aortoiliac prosthesis may have been the reason for this inflammatory process. The patient underwent laparotomy, which showed that there was a relationship between the ureteral stenosis and the vascular prosthesis. Segmental ureterectomy and end-to-end ureteroplasty with the ureter crossing over the prosthesis were performed. Some fat tissue was interposed between the ureter and the prosthesis, and a double-J catheter was implanted.
There were no complications. The early and late postoperative periods were uneventful. The results from a new intravenous urogram were normal, i.e., contrast was eliminated through the ureter without filling defects. At a 24-month follow-up, the patient remained asymptomatic.
DISCUSSION
The first case of hydronephrosis secondary to placement of an aortic prosthesis was reported by Jacobson et al. in 1962. [2] Since then, a few other cases have been reported in the literature. We performed a search in the PubMed, Embase (Excerpta Medica), Cochrane Library and Lilacs (Literatura Latino-Americana e do Caribe em Ciências da Saúde) databases (Table 1) and found 13 case reports and five case series, each with no more than three cases, reporting occurrences of obstructive uropathy following vascular graft placement. Cases with ureteral fistula were not included.
Six retrospective studies [1,3-7] attempted to estimate the incidence of this pathological condition, but the results were dissimilar. Wright et al. [1] reported 33 years of experience with 58 ureteral complications in 50 out of 3580 patients who had undergone aortoiliac reconstructive surgery. Only 42 patients presented hydronephrosis, revealing a very low incidence. Gil-Salom et al. [4] reported one case of hydronephrosis out of 50 aortobifemoral bypass procedures. However, Frusha et al. [5] reported an incidence of 14% among 50 patients who underwent aortobifemoral bifurcation grafts, and Heard et al. [6] noted an incidence of 10% of the patients and 7% of the ureters in a study of 20 patients.
Three prospective studies [8][9][10] attempted to discover the real incidence of this complication. Goldenberg et al. [8] performed serial ultrasound examinations on 93 patients who had undergone aortofemoral or aortoiliac reconstructive surgery (one week, three months and one year postoperatively) and found that hydronephrosis developed in 11 patients (12%). The obstruction resolved spontaneously in 10 of these patients within three months, and only a single case persisted for one year. In a similar study, Daune et al. [9] reported no cases of symptomatic early or late hydronephrosis among 30 patients who underwent aortobifemoral grafts. In another prospective study, Henriksen et al. [10] did not find any patients with signs of ureteral obstruction among 56 patients who underwent aortic reconstruction.
After reviewing both the retrospective and the prospective studies, we observed that the real urological complications following vascular grafts, i.e., those requiring surgical intervention, were described well only in case reports. We also concluded that well-designed studies are needed in order to validate the diagnosis and management.
Several pathogenic mechanisms have been suggested for the development of hydronephrosis. Early reports considered that placement of the graft anterior to the ureter, thereby entrapping the ureter between the graft and the native artery, was the factor responsible. Other mechanisms that have been implicated include mechanical compression of the ureter by an iliac or proximal anastomotic pseudoaneurysm, ureteral fibrosis due to constant microtrauma caused by graft pulsation, or direct injury to the ureter during surgery. However, it seems that in the majority of cases, the etiology of the obstruction is postoperative retroperitoneal fibrosis caused by tissue reaction to the implanted graft. The degree of the reaction correlates with the severity of the surgical trauma and the residual hematoma. [11] Treatment aims to restore urinary tract continuity and preserve kidney function. The type of therapy is chosen taking into account the type of lesion, the time of occurrence, the functional capacity of the corresponding kidney and the patient's status. A conservative approach is recommended for incidental asymptomatic cases of early postoperative hydronephrosis, since spontaneous resolution may occur in a high proportion of these patients. [8] However, when symptoms are present or severe and late hydronephrosis is found, an operative procedure tends to be performed. The approaches used range from ureterolysis to nephrectomy. Ureterolysis with transection of the ureter and end-to-end anastomosis is an option of interest. Some authors prefer to divide and reanastomose the graft, in order to avoid opening the ureter and to prevent extravasation of potentially infected urine and graft sepsis. [12] In our opinion, it is a good alternative in cases of associated graft complications.
CONCLUSION
Symptomatic ureterohydronephrosis following aortoiliac graft is a real complication and it needs to be quickly diagnosed and treated by urologists.The approach should be selected based on the particular features of each case.
Figure 1. Intravenous urography showing moderate left ureterohydronephrosis and a filling defect in the ureter close to where the iliac vessels cross.
|
2017-06-27T16:03:26.677Z
|
2010-05-01T00:00:00.000
|
{
"year": 2010,
"sha1": "330c03536480229cf04f9484f2e0cc4d3ae2ced6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1590/s1516-31802010000300010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a6ddff7dbf0678bdb440e2a2f5a296783bc3c39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
220647843
|
pes2o/s2orc
|
v3-fos-license
|
Knowledge, Attitude, and Practice of Pediculus Capitis Prevention and Control and Their Predictors among Schoolchildren in Woreta Town, Northwest Ethiopia, 2018: A School-Based Cross-Sectional Study
Background Pediculus capitis is a human head lice infestation, a major public health issue that is most prevalent in resource-limited countries globally. The current study aimed to assess the knowledge, attitude, and practice of pediculus capitis prevention and control and their predictors among schoolchildren in Northwest Ethiopia. Methods About 402 randomly selected schoolchildren from three schools in Woreta town participated in the study from April to June 2018. The outcomes of this study were knowledge, attitude, and self-reported practice of schoolchildren about pediculus capitis prevention and control. We used EPI Info 7.1 and SPSS 21 software for data entry and analysis, respectively. Binary logistic regression was employed to test the association of covariates with the outcome/response variables. Variables with a p value <0.2 during the bivariable binary logistic regression analysis were included in the multivariable binary logistic regression analysis. Variables with a p value <0.05 were declared significantly associated with the outcomes. Results The mean age of the study participants was 10.19 (±1.62) years. About 58.8%, 45.8%, and 78.6% of the schoolchildren had better self-reported pediculus capitis prevention knowledge, attitude, and practice, respectively. Age of children [9 to 11 years (AOR = 2.24, 95% C.I (1.10, 4.55)) and >12 years (AOR = 3.84, 95% C.I (1.56, 9.46))], better practice (AOR = 2.93, 95% C.I (1.39, 6.18)), and not being infested (AOR = 2.25, 95% C.I (1.14, 4.44)) were predictors of knowledge regarding pediculus capitis prevention. Better practice (AOR = 4.33, 95% C.I (1.69, 11.09)) and absence of infestation (AOR = 2.97, 95% C.I (1.64, 5.36)) were predictors of the attitude of schoolchildren about pediculus capitis prevention. Number of students in a class [51 to 56 students per classroom, AOR = 4.61, 95% C.I (1.83, 11.67); 57 to 58 students per classroom, AOR = 8.18, 95% C.I (2.73, 24.46)], a family size of less than five (AOR = 2.37, 95% C.I (1.24, 4.54)), better knowledge (AOR = 2.93, 95% C.I (1.32, 6.50)), desirable attitude (AOR = 4.24, 95% C.I (1.60, 11.23)), and absence of infestation (AOR = 3.52, 95% C.I (1.22, 10.15)) were predictors of self-reported pediculus capitis prevention practice. Conclusion The knowledge, attitude, and practice of schoolchildren regarding pediculus capitis prevention and control were not satisfactory. To bring change, intensive efforts on the factors associated with knowledge, attitude, and practice should be encouraged.
Efforts to reduce the morbidity and prevalence of pediculus capitis include strategies targeted at increasing knowledge, changing attitudes and behaviors, and improving personal hygiene practice [6]. Even though pediculus capitis has been very common in Africa from earlier times until now [12][13][14][15], there are only a few studies on knowledge, attitude, and practice regarding head lice infestations, especially in sub-Saharan Africa [16]. Above all, the majority of the previous studies focused on the knowledge, attitude, and practice of parents, nurses, and teachers only [17][18][19][20][21]. Knowledge about human head lice and their control, limited even in high-income countries, is even more restricted among schoolchildren in resource-constrained settings [16,19,20,22], and is insufficient even among health professionals [23][24][25]. The major setbacks for ineffective control of pediculus capitis include lack of knowledge, undesirable attitude towards control and prevention of head lice, and inadequate personal hygiene practice [26][27][28].
Therefore, the present study was undertaken to assess the level of knowledge, attitude, practice, and their associated factors regarding pediculus capitis prevention and control among schoolchildren in Woreta town, northwest Ethiopia.
Study Design and Setting. This was a school-based cross-sectional study conducted from April to June 2018 among schoolchildren in Woreta town primary schools. The town is located 589 km from Addis Ababa, the capital city of Ethiopia. During the study period, there were three primary schools in the town with a total of 3239 students. The prevalence of pediculus capitis during the study period was 65.7% (95% CI 60.01-70.3%) [29].

Figure 1: Knowledge, attitude, and practice towards Pediculus capitis prevention and control among schoolchildren, Woreta, 2018 (n = 402).

For the present study, the sample size was determined using the single population proportion formula [30],

n = z²_{α/2} p(1 − p) / d²,

with the following assumptions: p = 50 percent to allow maximum variation (as there was no previous national estimate of the proportion of knowledge, attitude, and practice of pediculus capitis prevention and control), a 95 percent confidence level, z the standard normal tabulated value, α the level of significance, and a margin of error d = 0.05. The final total sample size was 402 after adding the estimated nonresponse rate of 5 percent. Study participants were chosen using a simple random sampling technique and distributed proportionally across the three schools, based on the number of students in each school.
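A minimal sketch of this calculation, assuming the conventional z = 1.96 for a 95% confidence level:

```python
import math

# Single population proportion sample size: n = z^2 * p * (1 - p) / d^2,
# inflated by the anticipated nonresponse rate. p, d and the 5% nonresponse
# rate are from the text; z = 1.96 is the usual 95%-confidence choice.
z, p, d, nonresponse = 1.96, 0.50, 0.05, 0.05

n0 = (z**2) * p * (1 - p) / d**2
n = math.ceil(n0 * (1 + nonresponse))
print(n0, n)   # ~384.16 -> 404 with 5% nonresponse (the study used 402)
```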
Data Collection Tool and Quality Control Procedures.
A pretested, semistructured questionnaire was used that included sociodemographic variables and knowledge, attitude, and practice items relevant to infestation with pediculus capitis. Two Environmental Health bachelor's degree students conducted the interviews and observations after receiving training on the data collection methods, techniques, study purpose, and ethical considerations.
A detailed explanation of the data collection method and its reliability based on the pretest results is given elsewhere [29].
Variable Measurement
2.4.1. Pediculosis. A child with at least one head louse detected by wet combing was assumed to be infested with pediculus capitis [14].
2.4.2. Schoolchildren. In the current report, children attending classes from grades 1 to 4 were considered schoolchildren [29].
2.4.3. Knowledge. Knowledge was measured by 10 yes/no category knowledge items.
Students who scored at or above the mean of the knowledge questions were deemed to have good knowledge [29].
2.4.4. Attitude. Attitude was assessed by 8 Likert-scale attitude questions (1 = strongly disagree to 5 = strongly agree). Children who scored at or above the mean of the attitude questions were considered to have a good attitude [29].
2.4.5. Practice. Children were asked five specific questions regarding pediculus capitis prevention behavior. Children who scored at or above the mean of these questions were considered to have good practice [29].
2.5. Data Analysis. The completeness and consistency of the data were reviewed regularly. Epi Info version 7.1 was used for data entry, and analysis was performed using SPSS version 21. We calculated counts, means, and standard deviations for the descriptive results.
Bivariable binary logistic regression was fitted first, and in the final model we used multivariable binary logistic regression to test for significant association the variables with a p value <0.2 in the bivariable analysis. Variables with a p value <0.05 in the multivariable binary logistic regression analysis were declared significantly associated with the dependent variables (i.e., knowledge, attitude, and practice).
Crude and adjusted odds ratios and 95% confidence intervals were calculated. Model fitness was tested using the Hosmer-Lemeshow goodness-of-fit test at p > 0.05.
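A minimal sketch of this two-stage screening in Python with statsmodels; the dataset and variable names are simulated stand-ins, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Two-stage screening as described: one bivariable logit per candidate
# predictor, keeping those with p < 0.2, then one multivariable logit
# reporting adjusted odds ratios (AOR) with 95% CIs.
rng = np.random.default_rng(0)
n = 300
age = rng.integers(7, 15, n)
family_size = rng.integers(2, 9, n)
infested = rng.integers(0, 2, n)
# Simulated outcome loosely tied to age and infestation status:
p_good = 1.0 / (1.0 + np.exp(-(0.4 * (age - 10) - 0.8 * infested)))
df = pd.DataFrame({"good_knowledge": rng.binomial(1, p_good),
                   "age_years": age, "family_size": family_size,
                   "infested": infested})

kept = []
for var in ["age_years", "family_size", "infested"]:
    fit = sm.Logit(df["good_knowledge"], sm.add_constant(df[[var]])).fit(disp=0)
    if fit.pvalues[var] < 0.2:            # bivariable screen at p < 0.2
        kept.append(var)

final = sm.Logit(df["good_knowledge"], sm.add_constant(df[kept])).fit(disp=0)
aor = np.exp(final.params).rename("AOR")  # adjusted odds ratios
ci = np.exp(final.conf_int())             # 95% CI on the OR scale
print(pd.concat([aor, ci], axis=1))
```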
2.6. Patient and Public Participation. The current research did not involve patients or the public, but included students with and without pediculus capitis infestation.
Results
3.1. Socio-Demographic Characteristics. Four hundred and two students were included in the current study. More than half (53.7%) of the study participants were female. The study participants' mean age was 10.19 (±1.62) years. Almost all (93%) of the students attended schools with no water access (Table 1).
Of the total participants, 78.6% had good self-reported practice, 58.8% had good knowledge, and 45.8% had a desirable attitude (Figure 1).
Correlation between Knowledge, Attitude, and Practice.
The correlation between knowledge, attitude, and practice was assessed using Spearman's rank correlation coefficient and interpreted as 0-0.25 = poor, 0.25-0.5 = average, 0.5-0.75 = good, and above 0.75 = excellent, based on criteria developed for the study of statistical power in the behavioral sciences [31]. There was a significant correlation among knowledge, attitude, and practice (Table 2).
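For illustration, a short sketch of this correlation check with simulated scores; the scoring ranges are assumptions based on the item counts described above:

```python
import numpy as np
from scipy.stats import spearmanr

# Spearman rank correlation between two of the three scores, with the
# interpretation bands used in the text. Scores here are simulated.
rng = np.random.default_rng(1)
knowledge = rng.integers(0, 11, 402)                  # 10-item score
practice = np.clip(knowledge // 2 + rng.integers(0, 3, 402), 0, 5)

rho, p = spearmanr(knowledge, practice)

def band(r):
    r = abs(r)
    return ("poor" if r < 0.25 else "average" if r < 0.5
            else "good" if r < 0.75 else "excellent")

print(f"rho = {rho:.2f}, p = {p:.3g}, strength: {band(rho)}")
```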
Factors Associated with Knowledge regarding Pediculus capitis Prevention. Age of the child, paternal and maternal education, number of students per class, family size, water accessibility, students' attitude towards pediculus capitis prevention, and practice towards pediculus capitis prevention were the factors eligible for the multivariable analysis (Table 3). Students' knowledge of pediculus capitis prevention, practice towards pediculus capitis prevention, and previous history of pediculus capitis infestation were the factors associated with attitude about pediculus capitis prevention in the multivariable logistic regression (Table 4).
Grade level, paternal and maternal educational status, number of students per classroom, family size, knowledge and attitude towards pediculus capitis prevention, and history of infestation were the factors eligible (p < 0.2) for the multivariable analysis in the final model. Only the number of students per classroom, family size, knowledge, attitude, and history of infestation were associated with pediculus capitis prevention practice in the multivariable analysis. Class size was associated with practice regarding pediculus capitis prevention: students attending larger classes had better prevention practice than those in smaller classes.
Discussion
In this study, we evaluated the level of knowledge, attitude, and practice regarding pediculus capitis prevention and control among schoolchildren in Woreta town. This study is part of a project on a school health assessment program, and detailed information about the prevalence and risk factors of pediculus capitis is published elsewhere [29]. In the current manuscript, we assessed the knowledge, attitude, and practice regarding pediculus capitis prevention and control and their associated factors. Knowledge was a significant predictor of infestation status in previous studies [19,29,32]. This may be because a lack of knowledge can result in insufficient capacity to handle lice infestation. Several interventional studies concentrated on health education interventions to improve the knowledge of school teachers, guardians, and students [33,34]. Deficiencies in knowledge may indicate an inability to manage infestation. In the current analysis, the positive associations between knowledge and attitude, knowledge and practice, and attitude and practice indicate the relationship among knowledge, attitude, and practice regarding pediculus capitis prevention. Sufficient knowledge will result in a positive attitude, leading to better practice. This is in line with previous studies [35,36]. Students of higher age, with better attitude and practice, and with no previous history of infestation had better adjusted odds of knowledge regarding pediculus capitis prevention. Students of higher age were less likely to be infested [5]. In the current study, students of higher age were found to have better infestation prevention and control knowledge. This is in line with earlier studies [37].
Family size less than five, better knowledge and desirable attitude, and not being infested by pediculus capitis were among factors associated with practice regarding pediculus capitis prevention among schoolchildren. Previous research showed that children from lower socioeconomic classes and those with lower-educated parents were more frequently infested [38][39][40][41][42][43]. Greater family size was identified as a determinant for pediculus capitis in several earlier studies [44][45][46].
In the current study, students with better knowledge and a desirable attitude were more likely to report better pediculus capitis prevention and control practice. Health practice is shaped by the knowledge and attitude of an individual or the public [37].
This research was, ultimately, not without limitations. No evaluation was made of the knowledge, attitude, and practice of school teachers and parents. The scarcity of previous studies on knowledge, attitude, practice, and associated factors among children made comparison of results difficult. Additional limitations of this study are the inherent weakness of the cross-sectional research design in determining cause-effect relationships, recall and social desirability bias, and limited generalizability, as the analysis was performed in a single city.
Conclusion
Head lice infestation is a major public health concern, and the national and regional health authorities need to advocate awareness-raising programs that target mothers and to prepare strategies for improving knowledge, attitude, and practice.
AOR:
Adjusted odds ratio COR: Crude odds ratio CI: Confidence interval EPI Info: Epidemiological information SPSS: Statistical Package for Social Sciences.
Data Availability
The datasets are available from the corresponding authors upon reasonable request.
Ethical Approval
Ethical approval was secured from the Department of Environmental and Occupational Health and Safety ethical committee. Written consent was obtained from parents and school directors. Assent was obtained from the study participants. Any possible identifiers were avoided to ensure confidentiality. Health education about pediculus capitis prevention and control, transmission routes, and personal hygiene was delivered to school teachers, students, and directors after the completion of data collection. School directors were advised to regularly check the personal hygiene of students. Students with pediculus capitis were advised to visit nearby clinics.
Conflicts of Interest
None of the authors declared any competing interests.
Authors' Contributions
HD designed and supervised the study, analysed the data, and wrote the draft manuscript. WWY participated in the design of the study and critically revised the draft manuscript for methodological and intellectual soundness. AT, AAB, ZA, and BD were involved from proposal writing through data collection to the manuscript writing phase. All contributors read and approved the final manuscript.
The Biodegradation of Soil Organic Matter in Soil-Dwelling Humivorous Fauna
Soil organic matter contains more carbon than global vegetation and the atmosphere combined. Gaining access to this source of organic carbon is challenging and requires at least partial removal of polyphenolic and/or soil mineral protections, followed by subsequent enzymatic or chemical cleavage of diverse plant polysaccharides. Soil-feeding animals make significant contributions to the recycling of terrestrial organic matter. Some humivorous earthworms, beetles, and termites, among others, have evolved the ability to mineralize recalcitrant soil organic matter, leading to their tremendous ecological success in (sub)tropical areas. This ability largely relies on their symbiotic associations with a diverse community of gut microbes. Recent integrative omics studies, including genomics, metagenomics, and proteomics, provide deeper insights into the functions of gut symbionts. In reviewing this literature, we emphasize that understanding how these soil-feeding fauna catabolize soil organic substrates not only reveals the key microbes in the intestinal processes but also uncovers potentially novel enzymes of considerable biotechnological interest.
INTRODUCTION
Soil organic matter (SOM) is massive, representing a major organic carbon pool on the planet, and is considered an essential agent in maintaining ecosystem productivity and sustainability through its physical, chemical, and biological properties. More specifically, soil organic matter not only retains nutrients that improve plant growth but also enhances soil physicochemical properties such as infiltration, water-holding capacity, and aggregation (Lehmann and Kleber, 2015). To date, researchers estimate that SOM makes up approximately 5% or less of global soil dry weight (Stevenson, 1972; Liang et al., 2017). Soils are also an important source of aquatic and atmospheric carbon; moreover, the diverse living organisms within soils are considered a major driving force of carbon cycling in biogeochemical processes. Collectively, organic matter in the soil represents the most abundant source of organic carbon and has unparalleled ecological and economic impacts on the Earth (Bolan et al., 2011).
The formation and turnover of soil organic matter is a continuum of progressively decomposing processes. Biological, physical, and chemical transformation processes convert dead plant material into organic products that form intimate associations with soil minerals (Lehmann and Kleber, 2015). Plant fragments are often first broken up into small pieces at the beginning of decomposition by soil fauna. The plant residues are further degraded by exo-enzymes derived from surrounding microorganisms, which break them down to a relatively small size. The organic compounds generated at various stages of decay represent not only energy-rich spots in the soils but also relatively recalcitrant components. For instance, polyphenols in soils exist either in a dissolved form that moves freely in the soil solution, in a sorbed form that reversibly binds to soil particles or proteins, or in a polymerized form that constitutes humic substances (Min et al., 2015). Among them, lignin is one of the most recalcitrant carbon compounds and can bind with proteins, thereby immobilizing nitrogen (Gentile et al., 2011). Increasing evidence shows that soil-dwelling fauna and their gut microbial symbionts can decompose these persistent materials even more quickly than previously recognized (Coleman and Wall, 2015). In this review, we provide an overview of recent omics-based research, including genomic and metagenomic studies of soil-dwelling fauna and their associated gut bacteria, which has led to a deeper understanding of soil organic matter degradation processes and uncovered the presence of only recently recognized microbial symbionts and relevant degradative enzymes.
THE CHEMICAL COMPLEXITY AND RECALCITRANCE OF SOIL ORGANIC MATTER
Soil organic matter consists of heterogeneous complexes with a variety of chemical components. Although the definite chemical structures remain contentious, it is generally accepted that humic substances consist of polyphenols, peptides, lipids, and polysaccharides (Figure 1) (Gerke, 2018). This supramolecular network formed by complex carbohydrates and aromatic polymers provides the SOM complexes with sufficient stability, but it also makes the SOM a major barrier to accessing the stored hydrolysable aliphatic components (Horwath and Paul, 2015). The substantial ether- and carbon-carbon interunit linkages between aromatic units possess an inherent chemical recalcitrance. At the same time, the SOM often interacts chemically with inorganic soil colloids, including mineral or clay particles, to form dense aggregates, which provide further physical protection against decomposition (Oades and Waters, 1991). Specifically, owing to the stimulation of microbial activity and microbe-derived carbon, plant residue starts to form aggregates when it enters the soil. As decomposition proceeds, plant residues and other particulate organic matter gradually become encrusted with clay particles and microbial byproducts to form the cores of stable microaggregates. Consequently, mineral crusts interacting with microbial byproducts form recalcitrant organo-mineral complexes (Six et al., 2004).
The stability of soil organic matter components, including peptides, amino acids, and polysaccharides, is strongly related to the presence of humic substances, largely owing to the polymerization of aromatic units during humification (Shan et al., 2010). Modern analytical approaches for characterizing biomolecules in microbial cells and soils now suggest a direct and rapid contribution of microbial cell walls to soil organic matter protection when associated with model polyphenolic components. The emerging concept of soil organic matter as a continuum spanning the full range from intact plant material to highly oxidized carbon in carboxylic acids is becoming the more widely held view (Lehmann and Kleber, 2015).
BIODEGRADATION MECHANISM OF SOIL ORGANIC MATTER
Soil organic matter degradation mechanisms in natural systems have remained poorly understood because of the structural complexity of SOM, and a suite of ligninolytic enzymes is likely engaged in the process.

FIGURE 1 | Traditional view of the chemical complexity of soil organic matter or humic substances. Modified from Wagner and Wolf (1998).
Among organisms, actinobacteria and fungi are best known to be capable of degrading humic substances. The fungi involved in humic acid degradation are usually white-rot fungi capable of lignin degradation (Dashtban et al., 2010; Datta et al., 2017). Extracellular enzymes including laccase and ligninolytic peroxidases are involved in the cleavage of aromatic rings; among them, manganese peroxidase is the most investigated (Nousiainen et al., 2014). Proteases, lipases, and various carbohydrases may also be involved in the degradation of aliphatic structural components (peptides, lipids, polysaccharides, etc.) (Holtof et al., 2019). Enzymatic degradation of protein from humic acids has been demonstrated; meanwhile, the release of amino acids from humic substances by chemical autoxidation has also been observed (Kappler and Brune, 1999).
THE MECHANISM AND PROCESS OF SOIL-DWELLING MACROFAUNA BREAKING DOWN THE SOIL ORGANIC MATTER
Some soil fauna feed on soil organic matter, relying exclusively on soil organic matter in an advanced stage of humification (Briones, 2014). Donovan et al. (2001) defined feeding groups of soil fauna based on the humification stage of their gut contents, including: 1) feeding on wood, litter, and grass; 2) feeding on very decayed wood and/or soil with high organic content; and 3) feeding only on organic soil (so-called true soil feeders). The mineralization of SOM components throughout the guts of soil-feeding fauna has a significant impact on carbon cycling globally. Indeed, several soil-dwelling fauna have evolved the capacity to efficiently utilize the organic carbon stored within soil organic matter (Jiang et al., 2018). Given the independent evolution of different soil-dwelling fauna, diverse bioprocessing mechanisms of the soil organic matter-based diet have been established across these organisms. The major innovation in soil fauna is the variety of microbes and relevant enzymes engaged in these biodegradation processes, which either hydrolyze residual polysaccharides or degrade polyphenolic components of soil organic matter.
In natural ecosystems, there is a diverse population of soil-dwelling fauna; among them, most research concentrates on earthworms, beetles, and termites (Swift et al., 1979; Li et al., 2021). Organic matter transformation is affected by soil macrofauna directly, through the incorporation and redistribution of various materials, and indirectly, through their use of the microbial community by both constructive and destructive means (Wolters, 2000; Lavelle et al., 2001; Liu et al., 2019). These representative soil organisms ingest a mixture of organic matter, soil components, and microorganisms adhering to mineral particles (Mcquillan and Webb, 1994; Lavelle et al., 1997; Brauman et al., 2000). A highly compartmentalized gut structure, an extremely alkaline gut microenvironment, hydrolytic enzymes, and specialized gut microbiota are the key elements in the digestion of organic matter by soil-dwelling fauna.
THE CONVERSION OF SOIL ORGANIC MATTER IN EARTHWORMS
Earthworms live in diverse types of soil, ranging from the top of the soil in surface litter, rotting logs, and the axils of tree branches to the moist soil surrounding natural freshwater bodies (Reynolds, 1994). Earthworms have substantial ecological impacts by modifying soil structure. For example, the tropical earthworm Reginaldia omodeoi can take up to 30 times its own biomass of soil per day through its simple, tubular gut (Figure 2A) (Blouin et al., 2013). In temperate ecosystems, earthworms also ingest large amounts of material, approximately 2-15% of organic matter inputs (Whalen and Parmelee, 2000). Earthworms live in the soil, ingest a mixture of soil and organic matter, and finally excrete organo-mineral feces. Some species are dwellers and transformers of litter, living in organic soil horizons in or near the surface litter, with a diet of coarse particulate organic matter; these species take in large amounts of undecomposed litter and excrete holorganic fecal pellets (Dominguez and Edwards, 2010). Consequently, the incorporation of organic matter into soil and the formation of macroaggregates are accomplished through the burrowing, consumption, and egestion activities of earthworms (Guggenberger et al., 1996; Blanchart et al., 1997). After digestion, nitrogen is also reused by plants, so that in the presence of earthworms nitrogen mineralization increases, either directly through the release of nitrogen from their metabolic products and dead tissues or indirectly through changes in soil physical properties, fragmentation of organic material, and interactions with other soil organisms (Lee, 1985; Bityutskii et al., 2002).
Research on the degradation of soil organic matter by earthworms currently focuses on the degradation and transformation of plant-derived materials, such as cellulose, lignin, and other components of plant litter (Angst et al., 2021). Early feeding experiments on earthworms using 14C-labeled lignin substrates indicate that the effect of earthworms on the degradation of cellulose and lignin has two distinct aspects: promotion of initial biodegradation and inhibition of later biodegradation (Scheu, 1993). In holocellulose mineralization, earthworm processing causes a two-phase alteration: mineralization rates were initially increased for 6-15 weeks but decreased later in the experiment. Overall holocellulose mineralization in the soil of the 6- and 13-year-old fallows was increased by factors of 1.5 and 1.4, respectively, due to earthworm processing, whereas in wheatfield and beechwood soil the effects were only slight. In the case of wheatfield soil, earthworm processing caused a two-phase alteration in the rate of lignin mineralization: mineralization rates were increased for about 10 weeks but decreased afterward. Moreover, these earthworms have a much higher degradation capacity for cellulose than for lignin (Scheu, 1993).
In earthworms, the gut community is dominated by the Proteobacteria, Acidobacteria, Actinobacteria, Firmicutes, and Verrucomicrobia taxa within three genera of earthworms: Aporrectodea, Allolobophora, and Lumbricus (Sapkota et al., 2020). Microbiome analyses of different earthworms indicate that Proteobacteria is likely the most abundant taxon in the gut microbiota (Knapp et al., 2009; Liu et al., 2011; Liu et al., 2018), consistent with early reports that Proteobacteria might be involved in the fermentation, digestion, and absorption of food for the earthworm host (Flint et al., 2012).
SELECTIVE DIGESTION OF POLYSACCHARIDES OF SOM IN HUMIVOROUS LARVA OF BEETLES
Among beetles, most larvae feed on fresh or decomposing vegetable materials (Wolters, 2000). In the case of the scarabaeid beetle Pachnoda ephippiata, the larvae are considered almost entirely herbivorous or saprophagous (Crowson, 2013). The intestinal tract of scarabaeid beetle larvae is mainly composed of two enlarged components, a long tubular midgut and a paunch-like hindgut, together with a poorly developed foregut (Figure 2A) (Cazemier, 1999). It has been observed that in saprophagous beetle larvae the gut contains not only a large amount of humic material and plant residues but also fungal hyphae and other microbes (Bauchop and Clarke, 1977; Crowson, 2013). In the Scarabaeidae, as in many soil-feeders, an alkaline pH (>10) is always found in the midgut. The recalcitrant chitin and peptidoglycan, which are also major structural polymers in soil organic matter, are co-polymerized with polyphenols and are thereby more resistant to microbial degradation in soil. Early feeding experiments revealed that P. ephippiata larvae can selectively digest these two polysaccharides despite the protection conferred by the polyphenols (Li and Brune, 2005).
Studies of the bacterial community structure of the P. ephippiata larval gut indicate the presence of a dense and diverse microbiota that is considerably different from that of the surrounding soil (Lemke et al., 2003). One of the dominant bacterial species isolated from the hindgut of the larvae, Promicromonospora pachnodae, is capable of reducing iron and degrading (hemi)cellulose (probably simultaneously), which indicates that dissimilatory iron reduction is involved in the degradation of organic matter in the intestinal tract. Other substantial cellulolytic bacteria, hemicellulolytic bacteria, and methanogenic archaea have also been found in the intestinal tract (Bayon and Mathelin, 1980). In some dung beetles, microbiome studies of Onthophagus beetles reveal that Enterobacter and Serratia are the dominant genera in adults, while Dysgonomonas and Parabacteroides dominate in the larval and pupal stages (Suárez-Moo et al., 2020). Similarly, the genus Dysgonomonas is more abundant in the larval stage of E. intermedius and E. triangulatus (Shukla et al., 2016) and in the gut microbiota of two Pachysoma MacLeay desert dung beetle species (Franzini et al., 2016).
MOBILIZATION AND TRANSFORMATION OF NITROGENOUS COMPONENTS WITHIN SOM IN HUMIVOROUS HIGHER TERMITES
Termites comprise seven families and are phylogenetically classified into lower termites, with six families, and higher termites, with just one family (Noirot, 1992). In the wood-feeding "lower" termites, cellulolytic protozoa and bacteria contribute to plant biomass digestion. The evolutionarily derived "higher" termites, which completely lack protozoa, have an extensive dietary diversity ranging from wood, grass, bark, lichen, and decayed litter to organic soil (Wood, 1978). Among them, soil-feeding species are found in three subfamilies of higher termites and constitute approximately 67% of all genera (Brauman et al., 2000). Soil-feeding termites are considered important contributors to biogeochemical cycles, especially those of carbon, methane, and nitrogen (Sugimoto et al., 2000; Ji and Brune, 2006). In the tropical savanna, termites have been estimated to be directly responsible for up to 20% of total carbon mineralization.
In soil-feeding termites, the hindgut is elongated and highly compartmentalized, being divided into five sections, P1 to P5 (Figure 2A) (Brune and Kühl, 1996). In comparison with the generally tubular compartments of wood feeders, humivorous higher termites are characterized by dilated P1 compartments of increased length and volume, which allow a sequential transit of long duration. Notably, the pH increases sharply in the mixed segment, and the resulting alkalinity in the anterior hindgut of soil feeders is among the highest values reported for biological systems (Wang, 2018).
Early studies have already demonstrated strong mineralization of carbon and nitrogen in the gut of soil-feeding termites, even though the overall information on humivorous termites is still limited. Ji and Brune (2005) found that the soil-feeding termite Cubitermes orthognathus efficiently mobilizes and digests the peptidic components within soil organic matter through a combination of proteolytic activities and extreme alkalinity in its intestinal tract (Ji and Brune, 2005). Using pyrolysis-GC-MS, Griffiths et al. (2013) further confirmed that, in comparison to wood-feeding termites, the soil-feeding Cubitermes termites efficiently digested peptides and other nitrogenous residues of soil organic carbon, such as chitin and peptidoglycan, rather than polyphenols (Griffiths et al., 2013). Interestingly, these nitrogenous components are derived from microbial biomass and are generally protected from degradation by covalent linkage to polyphenols and an intimate association with clay minerals. The ability to mobilize such recalcitrant humus constituents is accompanied by an even more pronounced elongation and extreme alkalization (to >pH 12) of the anterior hindgut, which remains a mystery.
Diverse and unique microbial populations exist in the hindgut of soil-feeding termites. Termites largely depend on these complex microbial communities to digest and utilize soil organic matter, including highly recalcitrant lignocellulose and other organic matter in advanced stages of humification (Ohkuma and Brune, 2011). It has been demonstrated that alkali-active proteases are relatively enriched in the P1 section and that ammonia accumulates to high concentrations in the posterior hindgut. The magnified abundance of alkali-adapted Firmicutes belonging to the clostridia in their hindguts may satisfy this metabolic requirement (Mikaelyan et al., 2015). However, the concrete roles played by the intestinal microbiota in the digestive process are still unclear.
To date, there have been numerous gut microbiome studies across the feeding groups of termites. The overall pattern indicates a prevalence of Fibrobacteres and Spirochaetes in wood feeders, whereas humus feeders, soil feeders, and fungus feeders share similarities in community structure, with large proportions of Firmicutes, Bacteroidetes, and Proteobacteria. Furthermore, the soil feeders also harbor a larger proportion of Actinobacteria (Schloss et al., 2006; Dietrich et al., 2014; Su et al., 2016; Bucek et al., 2019; Hu et al., 2019). The latest work, a large-scale metagenomic analysis of 145 termite species, revealed a correlation between host phylogeny and the functionalities of their microbiota (Arora et al., 2021).
MICROBIOME OF SOIL-DWELLING HUMIVOROUS FAUNA AND BIOENERGY APPLICATIONS
As one of the largest carbon pools, soil organic matter represents a complex and recalcitrant form of carbon with an inherent resistance to decomposition, largely owing to the protection provided by soil minerals and a variety of aromatic biopolymers (Kleber, 2010). The ability of decomposer soil fauna to access the stored carbon of soil organic matter with remarkable efficiency has fascinated biologists for more than a century. By comparison, current industrial saccharification, the breakdown of complex polysaccharides into monosaccharides, employs strategies combining chemical pretreatment and enzymatic hydrolysis to obtain simple sugars for subsequent fermentation (Hafid et al., 2017). This depolymerization process remains technically challenging and not yet economically viable.
Notably, soil-dwelling fauna are widespread on Earth; for example, soil-feeding termites inhabit approximately 75% of the terrestrial soil surface and consume wood and litter in different stages of decay and humification (Noirot, 1992; Li et al., 2017). The microbiome of soil-dwelling humivorous fauna represents a particularly vast and promising source of novel cellulolytic enzymes, or enzyme cocktails, for industrial cellulosic biofuel production. Yet we have only begun to understand the ecological impacts. Work on the core and functional bacterial lineages, their related microbial enzymes, and genomic investigations has led to discoveries of novel and diverse microbe-derived enzymes. To further explore these biological systems, it is essential to develop a fuller understanding of the chemistry of all organic matter in soil. An integrative analysis chemically tracking the fate of soil organic matter through soil-dwelling humivorous fauna is urgently needed.
AUTHOR CONTRIBUTIONS
All the authors drafted the outline of this review; XuL and HL wrote the manuscript.
FUNDING
This study was funded by the Zhejiang Provincial Natural Science Foundation Project (LR21C160001) and the National Natural Science Foundation of China (Grant Project 32171796 and 31500528).
Simplicity: Web-Based Visualization and Analysis of High-Throughput Cancer Cell Line Screens
High-throughput drug screens are a powerful tool for cancer drug development. However, the results of such screens are often made available only as raw data, which is intractable for researchers without informatics skills, or as highly processed summary statistics, which can lack essential information for translating screening results into clinically meaningful discoveries. To improve the usability of these datasets, we developed Simplicity, a robust and user-friendly web interface for visualizing, exploring, and summarizing raw and processed data from high-throughput drug screens. Importantly, Simplicity allows for easy recalculation of summary statistics at user-defined drug concentrations. This allows Simplicity's outputs to be used with methods that rely on statistics being calculated at clinically relevant doses. Simplicity can be freely accessed at https://oncotherapyinformatics.org/simplicity/.
Introduction
In the past decade, multiple institutions have generated publicly available datasets for hundreds of compounds screened in hundreds of cancer cell lines (CCLs) [1]. Substantial efforts have been made to harmonize and distribute data from these datasets via both programmatic [2] and web-based [3,4] interfaces. However, programmatic access is challenging for researchers who lack coding or bioinformatics experience, and web-based interfaces for these datasets do not currently provide users with the means to summarize drug efficacy at specific drug concentrations or concentration ranges.
Given recent evidence that CCL screening data should be analyzed at clinically achievable drug concentrations to generate clinically relevant findings [5] and the recent deployment of a web-based interface for utilizing CCL screening data to predict drug combination efficacy in a dose-dependent fashion [6], we developed the Simplicity (Simplified Interface to Manipulate Preclinical Information for Cancer In vitro TherapY) web-interface to enable researchers without programming experience to easily perform dose-dependent calculations with CCL screening data.
Materials and Methods
Raw screening data was obtained from four large CCL screening datasets:

1. The Cancer Therapeutics Response Portal v2 (CTRPv2) [7][8][9]
2 & 3. Genomics of Drug Sensitivity in Cancer 1 & 2 (GDSC1 and GDSC2) [10][11][12]
4. PRISM Repurposing [13]

CTRPv2 was generated at the Broad Institute between 2012 and 2013 and contains data for 544 compounds screened in 887 cell lines. GDSC1 was generated by Massachusetts General Hospital and the Wellcome Sanger Institute between 2010 and 2015 and contains data for 343 compounds screened in 987 cell lines, with a follow-up screen (GDSC2) performed by Sanger between 2015 and 2017 for 192 compounds in 809 cell lines. PRISM Repurposing was published by the Broad Institute in 2020 and contains screening data for 1446 compounds in 481 cell lines. Further details for these screens can be found in the "Data Explorer/Explore Datasets" tab of Simplicity or in their respective publications.
Full details of how these datasets were harmonized and quality controlled are included in the Supplemental Methods; a very brief description of this process is as follows. Initial cell line and compound harmonization tables were taken from our prior harmonization efforts [1,5], which included harmonized cell line and compound IDs for CTRPv2 and GDSC1. Data was further harmonized and annotated using a mix of manual curation as well as data from Cellosaurus (https://www.cellosaurus.org/), the BROAD Drug Repurposing Hub (https://www.broadinstitute.org/drug-repurposing-hub), and webChem (https://webchem.org/). Raw data from each dataset was then quality controlled, and dose-response curves were fit to the harmonized and quality-controlled data. A user interface for exploring and manipulating this data was created using the shiny package [14] in R [15]. This interface, Simplicity, was then deployed on scalable cloud-based infrastructure.
Validation of data quality
To validate the quality of Simplicity's refitted dose-response curves, cross-dataset agreement was measured for shared compounds and cell lines under the hypothesis that compound/cell-line pairs screened in multiple screens should yield similar AUC values across the same dose range in both screens. As such, a high correlation in drug sensitivities measured between two screens should indicate that dose-response curves have been appropriately fit, while lower correlations may indicate inferior curve-fitting approaches.
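As an illustration of this kind of agreement check, the sketch below computes one Spearman's rho per shared compound across the cell lines common to two screens. It is a minimal sketch in R (the language Simplicity itself is built in); the table layout and column names (compound_id, cell_line_id, auc) are illustrative assumptions, not Simplicity's actual data schema.

```r
# Per-compound cross-dataset agreement: one Spearman's rho per compound,
# computed over the cell lines shared by two screens.
cross_dataset_rho <- function(screen_a, screen_b) {
  shared <- merge(screen_a, screen_b,
                  by = c("compound_id", "cell_line_id"),
                  suffixes = c("_a", "_b"))
  sapply(split(shared, shared$compound_id), function(d) {
    if (nrow(d) < 3) return(NA_real_)  # too few shared lines to correlate
    cor(d$auc_a, d$auc_b, method = "spearman")
  })
}

# Example with hypothetical toy data:
a <- data.frame(compound_id = "drugX", cell_line_id = c("c1", "c2", "c3", "c4"),
                auc = c(0.2, 0.5, 0.7, 0.9))
b <- data.frame(compound_id = "drugX", cell_line_id = c("c1", "c2", "c3", "c4"),
                auc = c(0.25, 0.45, 0.75, 0.85))
cross_dataset_rho(a, b)  # named vector of per-compound rho values
```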
We took data from three sources of harmonized data for the drug screens included in Simplicity and sought to ensure that the cross-dataset agreement in Simplicity was not inferior to that of other available sources. These three sources were: Simplicity, Corsello et al. [13], and PharmacoGx [2]. Cross-dataset correlations were similar when using any of the three data sources, with larger variations between sources noted when comparing drug sensitivities measured in PRISM-Repurposing to other screens (Figures S1-S3). Despite similar performance between data sources, a few compounds were much more or less correlated between screens with Simplicity than with other datasets. To understand these situations, we plotted PRISM-Repurposing vs. CTRPv2 AUC values for the top eight compounds for which PharmacoGx had higher cross-dataset correlations than Simplicity (Figure S4) and the top eight compounds for which Simplicity had higher cross-dataset correlations than PharmacoGx (Figure S5). This data suggests that the majority of compounds showing large differences in Spearman's rho values between data sources are compounds with low efficacy in most tested cell lines, resulting in relatively little variation in measured drug sensitivities. While the curve-fitting approach used by Simplicity may perform worse or better for specific compounds than the approaches used by other data sources, average performance across all tested compounds is very similar. This gives us confidence that the new functionalities provided by Simplicity to non-computational users of these datasets do not come at the cost of reduced data quality. These functionalities are described in the following sections.
Visualizing screening data with Simplicity
Simplicity allows users to generate customized plots to easily visualize information such as:

(1) Ancestry (Figure 1A), age, gender, and cancer types across specific CCL populations (not shown). This can facilitate rapid intuition around how well a set of CCLs represents a researcher's patient cohort of interest.

(2) Summary statistics of drug sensitivity across many CCLs for a single drug or across many drugs for a single CCL (Figure 1B). This enables users to quickly identify which cell lines are most or least sensitive to a given drug, or to identify the drugs to which a given cell line shows exceptional sensitivity/resistance.

(3) Raw data for a given drug/CCL pair's dose-response curve (Figure 1C). This allows users to directly visualize the quality of a given dose-response curve, as well as to determine the level of reproducibility for a given drug/CCL pair across different datasets and replicates.

(4) Relevant background information for the results being plotted, such as information about variations in assay conditions between different CCL screens and different experimental runs within a given screen (Figure 1D). This allows users to easily visualize how factors such as cell seeding density, plate format, assay reagent, and treatment duration influence dose-response curves.

Customization of these plots is achieved via searchable drop-down menus and slider bars, which allow filtering based on characteristics such as CCL disease type, age, gender, and ancestry makeup, or compound molecular target, mechanism of action, or clinical phase.
Calculating custom summary statistics with Simplicity
To enable researchers to easily generate dose-specific metrics of drug efficacy from these screens, Simplicity provides the "Calculate Custom Statistics/AUC Values" and "Calculate Custom Statistics/Viability Values" tabs to calculate AUC and viability values at custom concentrations or concentration ranges using a simple graphical user interface (Figure 1E). The interface provides the same searchable drop-down menus and slider bars present throughout the rest of the app to allow easy selection of compounds and CCLs of interest. The results of these calculations are provided as downloadable tables, with an option to automatically format the output for direct use with the IDACombo web application, which uses dose-specific estimates of monotherapy drug efficacy to predict drug combination efficacy across different doses of combined drugs [6].
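To make the idea of a dose-range-restricted AUC concrete, the sketch below integrates a fitted dose-response curve over a user-chosen concentration window. The four-parameter Hill-type curve, the log10-dose integration, and all parameter names are illustrative assumptions; the paper does not specify Simplicity's internal curve model or AUC convention.

```r
# Illustrative four-parameter dose-response curve (viability vs. concentration).
viability <- function(conc, lower, upper, ec50, slope) {
  lower + (upper - lower) / (1 + (conc / ec50)^slope)
}

# AUC expressed as mean viability over log10-concentration in [cmin, cmax].
auc_custom <- function(cmin, cmax, pars) {
  f <- function(x) viability(10^x, pars$lower, pars$upper, pars$ec50, pars$slope)
  integrate(f, log10(cmin), log10(cmax))$value / (log10(cmax) - log10(cmin))
}

# Example: a hypothetical compound with EC50 = 1 uM, evaluated over 0.1-10 uM.
auc_custom(0.1, 10, list(lower = 0.1, upper = 1, ec50 = 1, slope = 1))  # ~0.55
```

Restricting the integration window in this way is what lets a summary statistic reflect only clinically achievable concentrations rather than the full tested dose range.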
Accessing bulk data through Simplicity
Simplicity also provides bulk data downloads for researchers who wish to use Simplicity's harmonized data with their own informatics tools. These can be accessed via the "Download Bulk Data" tab. Available data includes:

a. Harmonized CCL and compound names between the included datasets.
b. Clinically relevant concentrations for 143 clinically tested compounds that are included in Simplicity.
c. AUC and IC50 values for the CCL-compound pairs tested in each screen.
d. Raw viability values from each screen following compound and CCL name harmonization.
Summary
Simplicity provides a graphical web user interface which allows users to easily visualize and manipulate data from high-throughput CCL drug screens. Notably, Simplicity provides the ability to query viability and AUC values at custom doses/dose ranges, enabling analyses to be conducted with clinically relevant concentrations without the need for coding or informatics experience. It is our hope that this will remove a significant barrier for non-computational scientists who wish to use these datasets to conduct such dose-dependent studies. A video tutorial on the use of Simplicity is available at https://www.youtube.com/watch?v=oNuwRDs_5DQ.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Figure 1: Example functionality of Simplicity. Plots, tables, and interfaces from Simplicity. (A) Ancestry plot for glioblastoma (GBM) cell lines tested with 5-Fluorouracil in GDSC1, as provided by the "Data Explorer/Explore Compounds" tab. (B) Examples of drug- and cell-line-level summaries produced by Simplicity. Left panel: plot showing measured sensitivities (IC50s) of Tozasertib in GBM cell lines in the PRISM-Repurposing dataset, as provided by the "Data Explorer/Explore Compounds" tab. Cell line names and exact IC50 values can be obtained by hovering over each data point. Right panel: plot showing relative sensitivity of the NKM-1 cell line to FDA-approved (Launched) compounds tested in GDSC2, as measured by IC50 percentile relative to all other cell lines tested with each compound in GDSC2, as provided by the "Data Explorer/Explore Cell Lines" tab. Higher percentiles indicate NKM-1 was more sensitive to a given compound relative to other tested lines. Direct IC50 values can be obtained by hovering over each data point or by downloading the summary statistics tables provided in the "Download Bulk Data" tab of Simplicity. Note that infinite IC50 values occur when fitted dose-response curves have a lower asymptote above 50% viability. This can occur when the data directly implies an asymptote above 50% viability or when the tested compound shows no efficacy at any tested dose, such that the fitted dose-response curve is simply a flat line at 100% viability. (C) Calculated dose-response curves for cisplatin in the NKM-1 cell line in both GDSC1 and GDSC2, as provided by the "Data Explorer/Plot Dose-Response Curves" tab. (D) Table of experimental conditions used in the experiments shown in panel C. (E) User interface for calculating viability values at specified concentrations. The interface allows users to easily select compounds, cell lines, and concentrations of interest using a graphical user interface. A similar interface is also available for calculating area under the curve (AUC) values at custom concentration ranges.
Effectiveness of combined intermittent preventive treatment for children and timely home treatment for malaria control
Background While awaiting the arrival of an effective and affordable malaria vaccine, there is a need to make use of the available control tools to reduce malaria risk, especially in children under five years and pregnant women. Intermittent preventive treatment (IPT) has recently been accepted as an important component of the malaria control strategy. This study explored the potential of a strategy of intermittent preventive treatment for children (IPTC) and timely treatment of malaria-related febrile illness in the home in reducing the parasite prevalence and malaria morbidity in young children in a coastal village in Ghana. Methods The study combined home-based delivery of IPTC among children aged six to 60 months and home treatment of suspected febrile malaria illness within 24 hours. All children between six and 60 months of age received intermittent preventive treatment using amodiaquine and artesunate, delivered by community assistants every four months (three times in 12 months). Malaria parasite prevalence surveys were conducted before the first and after the third dose of IPTC. Results Parasite prevalence was reduced from 25% to 3% (p < 0.001, Mann-Whitney) one year after the inception of the two interventions. At baseline, 13.8% of the children were febrile (axillary temperature greater than or equal to 37.5 degrees Celsius) compared to 2.2% at evaluation (post IPTC3 combined with timely home management of fever) (p < 0.001, Mann-Whitney). Conclusion The evaluation result indicates that IPTC given three times in a year, combined with timely treatment of febrile malaria illness, impacts significantly on the parasite prevalence. The marked reduction in parasite prevalence with this strategy points to the potential for reducing malaria-related childhood morbidity and mortality, and this should be explored by control programme managers.
Background
Malaria is estimated to cause between 300 and 500 million clinical cases, with about 700,000 to 1.6 million deaths, every year. About 94% of these deaths are believed to occur in sub-Saharan Africa, which is also the most resource-limited continent on the globe [1,2]. Malaria remains a major cause of morbidity and mortality in Ghana, accounting for over 40% of outpatient clinic visits and about 20% of deaths in children under five years of age [3]. The use of artemisinin-based combination therapy (ACT) for the treatment of uncomplicated falciparum malaria has been shown to provide effective treatment against falciparum malaria and to slow down the spread of drug resistance [1]. In line with this recommendation and the increasing treatment failure rate of chloroquine [4], the Ghana National Malaria Control Programme (GNMCP) adopted artesunate and amodiaquine for the treatment of uncomplicated malaria in the country in January 2005. This became necessary because, by 2003, parasitological responses to chloroquine were below 50% in some areas of the country. Similar studies on the efficacy of sulphadoxine/pyrimethamine (SP) have shown 0-36% RII resistance and 0-9% RIII resistance [5,6].
Malaria control in Ghana relies on early diagnosis and prompt treatment of suspected cases (fevers), and the home is where early recognition and, in most cases, prompt treatment is initiated [7,8]. However, the current combination therapy is not widely available for home management. Reluctance to make combination therapy available for home management stems from concerns that making these drugs widely available may lead to abuse and, therefore, increased drug pressure on the parasites, which could lead to the emergence of Plasmodium falciparum resistance to these anti-malarial drugs, as happened with chloroquine and sulphadoxine/pyrimethamine. Prompt and efficacious treatment of malaria cases is an effective strategy for malaria control. It is especially important for vulnerable groups, including children under five years of age and pregnant women. IPT has also recently been accepted as an important component of the malaria control strategy following the demonstration, in areas of perennial malaria transmission, that IPT given with childhood vaccinations reduced the incidence of first episodes of malaria. It also reduced severe anaemia by more than 50% during the first year of life [9][10][11]. IPTC studies were carried out in areas with seasonal malaria transmission and were found to be effective [9,12]. The use of bed nets is one of the mainstays of malaria control in Africa, and to promote the use of treated bed nets, the Ghana Health Service has made them available at the community level. These bed nets were highly subsidized for children under five years and pregnant women.
Rationale
Although the Abuja target of treating 60% of suspected malaria cases in children under five within 24 hours of symptom onset by 2005 was reduced by the Ghana Health Service to 55% in 2007, this goal has not been met, and the question remains as to how it will be achieved. Improved access to IPTC, especially in communities with no health facilities, is one way to reduce the prevalence of infection and associated morbidity. IPTC studies were carried out in areas with seasonal malaria transmission and were found to be effective [9,12], but the question is whether it will be effective in areas with perennial malaria transmission, especially when combined with timely home management. This study was designed to test the potential of a strategy of IPTC delivered at home, combined with timely treatment of febrile illness, to reduce parasite prevalence and malaria morbidity in children under five years of age in rural coastal Ghana.
Objectives
The goal of this study was to demonstrate the feasibility of controlling febrile malaria illness at the community level through the combined interventions of intermittent preventive treatment and timely febrile malaria illness management in children between six and 60 months of age in southern Ghana. Specifically, the study was designed to deliver intermittent preventive treatment to children (IPTC) at home and to provide timely treatment for febrile malaria illness in children within 24 hours of symptom onset.
Study sites
The study was conducted in the Shime sub-district of the Keta district in the Volta region of Ghana, where malaria accounts for over 40% of outpatient clinic attendance in the district (District Annual Report, 2006). Malaria transmission in the district is perennial, with higher transmission occurring from March (the beginning of the rainy season) to November (the end of the rainy season). The Shime sub-district is one of the four administrative sub-districts recognized by the Ghana Health Service in the Keta district. It is the most deprived area in the Keta district: electricity supply is limited, it has no secondary school, and the road network is very poor. During the rainy season most parts of the area are cut off from the rest of the district. The land is marshy and covered with streams, lagoons, and creeks. The entire sub-district has a population of 19,972. The study took place in an area with a population of 8,000, about 99% of whom are Anlo-Ewe-speaking people whose main occupations are farming and fishing.
The sub-district is served by two health centres, one of which is located within the study area. Almost all the communities in the area have pipe-borne water, but because of the 'cash and carry' system, where residents pay for every bucket of water at the point of fetching, the people continue to use unsafe water from the many surrounding water bodies to complement the safe supply. Four of the 36 communities in the sub-district participated in the study, selected mainly because of geographical accessibility.
Study participants
Community entry activities, involving meetings and durbars with the chiefs and people of the study communities, were carried out at the onset of the project. After gaining the support and willingness of the chiefs and people to participate in the study, the community registers of eligible children aged ≤60 months were updated. After obtaining consent from caretakers/parents, children aged six to 60 months were enrolled in the study. No caretaker refused to participate. All children aged six to 60 months, including those on antiretroviral treatment, were included in the study.
Questionnaire survey
All caretakers of children selected to participate in the baseline parasite survey were selected as respondents for a semi-structured questionnaire interview. The questionnaire was designed to solicit information on the demography of caretakers, all-cause mortality, morbidity, and bed net usage, among other topics. The short interview guide was translated into the local language and back-translated into English to ensure that it conveyed the intended meanings. The questionnaire was pre-tested under field conditions in a community with characteristics similar to those of the study community. The interviews were conducted in the local Ewe language by the first author.
Intermittent preventive treatment for children
IPTC was delivered every four months beginning in July 2007 for one year (July and November 2007 and March 2008), and the strategy was evaluated the following July, four months after the last round of IPTC and just before the next round. Children received 10 mg/kg body weight of amodiaquine (AQ) daily and 4 mg/kg of artesunate (AS) daily (given as a single dose) over three days. All treatments were given under direct observation, and children were observed for five minutes after drug administration to ensure that they retained the medication. Children who vomited within the five-minute observation period received a repeated full dose of the medication. Children were followed up on days 1, 2, 3, and 7 after treatment (the day of the first dose was counted as day 0) to document adverse events reported by caretakers, using the adverse event report form.
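As a worked example of the weight-based dosing just described, the sketch below computes the daily doses for a child of a given weight. This is a minimal illustration in R; the function and its name are hypothetical and not part of the study protocol.

```r
# Daily IPTC doses per the regimen above: 10 mg/kg amodiaquine (AQ) and
# 4 mg/kg artesunate (AS), each given daily for three days.
iptc_dose <- function(weight_kg) {
  data.frame(
    drug       = c("amodiaquine", "artesunate"),
    mg_per_day = c(10, 4) * weight_kg,
    days       = 3
  )
}

iptc_dose(12)  # e.g., a 12 kg child: 120 mg AQ and 48 mg AS daily for 3 days
```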
Timely treatment of suspected febrile malaria in children
Any child with a febrile illness suspected by the caretaker to be malaria was reported to the fieldworkers, who then evaluated the child and treated those who met the protocol criteria for suspected febrile malaria illness. Those who did not meet the criteria were referred to the health post located within the sub-district for further evaluation and treatment. In short, any child with an axillary temperature of ≥37.5 degrees Celsius whose caretaker suspected febrile malaria illness was treated with the full three-day course of artesunate and amodiaquine.
Laboratory methods
A diagnosis of malaria infection was made by light microscopy of thick and thin blood smears. Finger-prick peripheral blood was taken by trained technologists from the Noguchi Memorial Institute for Medical Research for the preparation of the thick and thin blood smears. All specimens were taken in the field. The thick and thin blood smears were stained with 3% Giemsa for 30 minutes. Parasite density was determined by counting the number of asexual parasites per 200 white blood cells and calculated per μL assuming a white blood cell count of 8,000 cells per μL. Sexual parasites were counted per 1,000 white blood cells. A smear was declared negative when examination of 100 thick-film fields did not reveal the presence of asexual parasites. For quality control, a second microscopist read a random 10% sample of both negative and positive slides to confirm the absence or presence of parasites. Discrepant results were read by a third microscopist, and the majority result was taken as the final result. Haemoglobin concentration was determined from finger pricks using a portable automated Hemocue® photometer (Leo Diagnostics, Sweden) at the field site.
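The density calculation described above reduces to a single scaling step, shown here as a minimal R sketch (the function name is illustrative):

```r
# Parasite density per uL: asexual parasites counted against 200 white blood
# cells, scaled to the assumed WBC count of 8,000 cells per uL.
parasite_density <- function(parasites_counted, wbc_counted = 200, wbc_per_ul = 8000) {
  parasites_counted / wbc_counted * wbc_per_ul
}

parasite_density(35)  # 35 parasites per 200 WBC -> 1,400 parasites per uL
```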
Sample size
Every qualifying child living in the study community was enrolled with the consent of his or her caretaker, mostly parents. For the prevalence survey, a sample size of 174 was needed to estimate a malaria parasite prevalence of 20% in the study population. Also, to be able to detect at least a 10% reduction in malaria parasite prevalence in the study population at evaluation, we needed to enrol about 360 children into the IPTC intervention programme [13]. At evaluation, however, after examining 174 randomly selected children and detecting a very low parasite prevalence, all participants available at the time of the evaluation were examined. This led to the examination of over 80% of the children enrolled in the study.
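A standard sample-size formula for estimating a single proportion, n = Z²p(1−p)/d², approximately reproduces the reported figure of 174 when the expected prevalence p = 0.20 is paired with an absolute precision d of about 6 percentage points. The paper does not state the precision or confidence level the authors assumed, so the values below are illustrative assumptions:

```r
# Sample size to estimate a proportion p with absolute precision d:
# n = Z^2 * p * (1 - p) / d^2
prevalence_n <- function(p, d, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)  # 1.96 for 95% confidence
  ceiling(z^2 * p * (1 - p) / d^2)
}

prevalence_n(p = 0.20, d = 0.06)  # ~171, close to the reported 174
```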
Data analysis
Data were analysed to compare pre-intervention findings with post-intervention findings. All quantitative analyses were done using EpiInfo version 3.4.1. Proportions of pre- and post-intervention clinical findings and parasite levels were compared using Mann-Whitney or chi-squared tests. Statistical significance was set at p < 0.05.
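For readers who want to reproduce the headline comparison, the sketch below applies a two-sample test of proportions to the parasite prevalence counts reported in the Results (44/174 positive at baseline vs. 10/357 at evaluation). This uses base R rather than EpiInfo, and a proportion test rather than the Mann-Whitney test the authors report, so it is an illustrative alternative, not the authors' exact analysis:

```r
# Baseline vs. evaluation parasite prevalence (counts from the Results section).
prop.test(x = c(44, 10), n = c(174, 357))
# A very small p-value here is consistent with the significant reduction
# reported in the paper.
```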
Respondents (98.0% at baseline and 100% at evaluation) reported that they and their families usually sleep under bed nets. Insecticide-treated net usage increased significantly, from 38.5% to 60.0%, in the study community within one year of project implementation. This resulted from the continual information provided by community assistants to caretakers that treated bed nets are an effective way to reduce febrile malaria illness; participants were also encouraged to take advantage of the highly subsidized treated bed net programme available at the health facility for children under five years of age and pregnant women. All respondents (baseline and evaluation) said it was important for children under five years to sleep under bed nets.
The mortality data generated were too few for meaningful analysis, but it is important to report that, at baseline, five (3.0%) caretakers reported that they had lost a child within the 12 months preceding the interview. The conditions that killed these children were reported as Asra (febrile malaria) (2), convulsion (1), "swelled up" (1), and could not tell (1). Also, at baseline, 35 (20.1%) respondents said they knew a close relative who had lost a child under five years of age within the past 12 months. No caretaker, however, had lost a child, or had a relative who had lost a child, during the twelve months of project implementation.
Parasite prevalence and haemoglobin surveys
During the prevalence surveys (baseline and evaluation), a number of febrile illnesses in the children were reported (within the seven days prior to data collection) (Table 1). On the day of examination, 42% (baseline) and 7.6% (evaluation) of caretakers reported that their children had had fever within the past seven days, an approximate reduction of 80% in the point prevalence of fever cases in the study community (p < 0.001, Mann-Whitney test).
At baseline, five (2.9%) caretakers reported that their children had been treated for febrile malaria within the seven days prior to data collection. These five were reportedly treated with anti-malarials such as chloroquine, malaherb (a local herbal-based anti-malarial preparation), and Kinaquine® (a locally manufactured brand of chloroquine). None of these five children were taken to a clinic or hospital. During the evaluation, on the other hand, only two caretakers reported that their children had been treated within the seven days prior to data collection, and these were treated by the community assistants working with the project using amodiaquine and artesunate. Over the 12 months of study implementation, the community assistants referred a total of 29 children whose conditions they suspected not to be febrile malaria-related (mostly children with running nose/catarrh, stomach problems, ear infections, eye problems, watery diarrhoea, and fast breathing, among others) to the health post (clinic) for further investigation and treatment.
Parasite prevalence
Of the 174 children tested for malaria parasites at baseline, 44 (25%) were positive, compared with only 10 (3%) of 357 at evaluation, a significant reduction of over 80% (p < 0.001, Mann-Whitney test). All positive infections were P. falciparum, the dominant species in Ghana. Detailed clinical and parasitological findings are presented in Table 2. One child (at evaluation) with a very high parasite density was found to have been taken out of the study community after receiving the first IPTC dose and brought back to the community a few days before the evaluation survey.
Anaemia
In this study, anaemia was defined as haemoglobin < 10 g/dl. There was a significant drop in anaemia in the study population one year into the intervention (p < 0.004, Mann-Whitney test): at baseline, 27.6% of the children were anaemic, compared to 16.8% recorded at evaluation (Table 2).
There was no relationship between gender and anaemia in the study population. There was a significant relationship between reported fever in the seven days prior to the baseline survey and anaemia (χ² = 5.5309, p = 0.01), but this was not the case at evaluation. Also, at baseline there was a significant relationship between parasitaemia and anaemia (χ² = 5.2028, p = 0.02), but this was not the case at evaluation. There was a significant relationship between febrile status and anaemia at evaluation (p = 0.01, Fisher's exact test), but this was not the case at baseline.
There was a significant relationship between febrile status and parasitaemia at both baseline and evaluation (χ² = 49.3643, p < 0.001 at baseline; p < 0.001, Fisher's exact test, at evaluation). There was no gender difference in parasitaemia in the study population.
Intermittent preventive treatment for children (IPTC)
IPTC was delivered to children aged six to 60 months in the study community three times (July and November 2007 and March 2008). There were marginal increases in the number of children at each subsequent treatment round (IPTC1 = 413, IPTC2 = 420, and IPTC3 = 433) (Table 3). Only 22 and 18 children had received IPTC twice and once, respectively, at the time of evaluation, largely because of the movement of these children out of the study community. However, the numbers who received IPTC once or twice were too small to permit any meaningful comparative analysis against those who received IPTC three times.
At each IPTC round, a number of febrile illnesses occurring within the seven days prior to treatment were reported by caretakers (Table 3). On the day of treatment, 15.50% (IPTC1), 10.20% (IPTC2), and 2.50% (IPTC3) of caretakers reported that their children had had fever within the past seven days, showing a steady drop in fever cases in the study community at each subsequent treatment.
Virtually no serious adverse events were reported after drug administration. However, caretakers of 5 (1.21%) children after IPTC1, 4 (0.95%) after IPTC2, and 7 (1.65%) after IPTC3 reported that their children were weak for about two to four hours after taking the drugs. None of these cases warranted any medical intervention.
Only five (1.2%) febrile malaria-related illnesses were treated in the study communities between IPTC1 and IPTC2, while 39 (9.3%) were treated between IPTC2 and IPTC3 and 17 (3.9%) between IPTC3 and the evaluation.
Informal information, education and communication
Throughout the 12-month intervention period, community assistants continued to informally advise community members on the importance of IPTC, the need for timely treatment of suspected febrile malaria illness in children within 24 hours of symptom onset, and the need to replace the predominantly untreated bed nets with treated ones. Community members were encouraged to take advantage of the highly subsidized treated bed nets available at the health centre for children under five years and pregnant women. Pregnant women were also encouraged, and in some cases assisted, to attend antenatal clinics.
Feasibility of combining IPTC and timely home management for malaria control
The study clearly demonstrates that it is possible to train community assistants to deliver IPTC and timely home treatment to children aged six to 60 months. Visits were made to the community twice a month to collect timely home treatment forms, and during each IPTC round the first author spent three days in the community to supervise the research assistants.
Discussion
This study looked at the effectiveness of intermittent preventive treatment for children (IPTC) combined with timely treatment at home for malaria control, targeting children aged six to 60 months in a year-round malaria-endemic area in Ghana. The main finding of the study was a reduction of about 88.0% in the prevalence of malaria parasite infections in the target population (from 25.0% at baseline to 3.0% at evaluation) within one year of project implementation. It may be argued that the increase in the use of insecticide-treated nets (from 38.5% to 60.0%) could be implicated in the reduction of malaria parasite prevalence in the study population. However, it is unlikely that the entire reduction of over 80.0% was due to the 55.8% increase in ITN use alone. Several IPT intervention studies measured clinical incidence rather than prevalence and found reductions of between 20% and 86.0%, with strong variations depending on transmission duration and intensity, target population and intervals between treatments [9][10][11][12]14,15]. In this study, prevalence was measured because it is easier to measure when the interventions are delivered by community assistants. The second reason for measuring prevalence was the combination of IPTC with home treatment, which required that suspected cases be treated once they met the inclusion criteria; for ethical reasons, the community assistants were not trained to take blood for examination before treating the children. However, this may be seen as a weakness of the study.
Anaemia in the children, defined as haemoglobin <10 g/dl, improved from 27.6% to 16.8%, 12 months after the intervention was implemented. This compares well with a recent randomized trial in Tanzania, which showed that IPT given to infants at the time of childhood immunization reduced the incidence of the first episode of malaria and anaemia by more than 50.0% during the first year of life [10,11].
There was a noticeable reduction in malaria-related morbidity in the study population, as expressed by fever and other malaria-related signs and symptoms reported in the seven days before prevalence surveys or IPTC administration. The reduction could also be seen from the treatment reportedly sought for the children within the seven days prior to prevalence surveys or IPTC administration. Timely treatment of febrile malaria cases in the community did not follow any pattern, as can be seen from the results. However, it could be argued that because the community assistants were on hand to deliver effective treatment in a timely fashion, this could have contributed to the marked reduction of parasitaemia seen at the evaluation survey. The community assistants also contributed to heightened health information in the study community through informal education on the use of treated bed nets and timely treatment. This should encourage malaria control programmes to have confidence in community assistants to deliver timely treatment and IPTC to children at the community level, once they are well trained by the programme and provided with a reference manual for easy and quick referencing when in doubt.
Findings reported here present a challenge to existing practice, especially in most sub-Saharan African countries, where malaria diagnosis is based on presumption without confirmation. As community intervention or treatment increases, malaria infections may become fewer, and presumptive diagnosis may then lead to over-diagnosis and treatment with expensive drugs for people who do not need them. When this happens, control programmes will have to invest in rapid diagnostic test kits where microscopy is not possible. As reported by Zikusooka et al [16], this may lead to cost savings because anti-malarial drugs are expensive; Goodman et al [17] also make this point. Furthermore, rational use of anti-malarials will reduce the potential for adverse reactions.
The sharp increase in the number of febrile malaria cases treated at home between IPTC2 and IPTC3 might have been largely due to increased confidence of caretakers in the ability of community assistants to treat their children with suspected febrile malaria effectively, and to caretakers' recognition of the importance of timely treatment. The decrease in the numbers between IPTC3 and evaluation could be attributed to the reduction of malaria prevalence in the study community as a result of the interventions implemented.
The use of IPTi, which is similar to IPTC in principle, was found to reduce malaria incidence in infants [10]. Although this study cannot determine the separate contributions of IPTC and timely treatment at home to the protection offered to the children, because the two interventions were delivered concurrently, together they offered major protection against malaria in children, reducing prevalence from 25% to 3%.
The results indicate that it is possible to deliver IPTC and timely home treatment to children between six and 60 months old. Since there were no timely treatment forms to collect at some of the biweekly visits to the community, it should be possible to reduce the visits to once a month to reduce supervision costs.
This indicates that it is possible to reduce malaria prevalence, which may in turn reduce malaria-related childhood morbidity and mortality; this should be explored by control programme managers as one of the effective options available for the fight against malaria, especially in sub-Saharan Africa.
Sulfur dioxide emissions curbing effects and influencing mechanisms of China’s emission trading system
The emissions trading system, a crucial and fundamental system reform in the environmental resources field of China, was established to promote the continuous and effective reduction of total emissions of major pollutants. In this context, based on panel data for 285 Chinese cities (excluding Tibet) from 2004 to 2018, this paper uses the quasi-experimental Difference in Differences method to assess the effect of the emissions trading system on China's sulfur dioxide emissions and the transmission mechanisms involved. The article generates several intriguing findings. (1) The emissions trading system has a significant suppressive effect on sulfur dioxide emissions. (2) Mechanism tests show that the emissions trading system can effectively suppress sulfur dioxide emissions by reducing government intervention, stimulating green patent innovation, and improving resource use efficiency, although green utility patents have a masking effect. (3) Across the east, central and west divisions, the emissions trading system has a significant suppression effect on sulfur dioxide emissions in the eastern and central regions, with the eastern region outperforming the central region. (4) In terms of factor endowment, the emissions trading system has a significant suppression effect on sulfur dioxide emissions in both resource-based and non-resource-based cities, with non-resource-based cities outperforming resource-based cities; within resource-based cities, the effect exists only in regenerating cities. (5) The emissions trading system has a significant suppression effect on sulfur dioxide emissions in both old and non-old industrial base cities, and the suppression effect in non-old industrial base cities is better than that in old industrial base cities. This paper provides empirical evidence for evaluating the emissions trading system at the provincial level in China and suggests policy recommendations for selecting government tools to effectively curb sulfur dioxide emissions. Although the emissions trading system has made an outstanding contribution to sulfur dioxide emission reduction, there is still much space for further realization of its potential emission reductions.
Introduction
As one of the major air pollutants [1], sulfur dioxide (SO 2 ) emissions are the main cause of acid rain, fog, and haze [2], which pose a great threat to economic development, human health and ecosystem security [3]. If this status quo persists, in the long run the deterioration of economic performance due to the growth of SO 2 emissions will overshadow any enhancement of economic performance [4]. Environmental pollution caused by excessive SO 2 emissions is a classic negative externality, and in the face of such problems it is particularly important to impose control measures on emitters [5], especially through climate policy tools [6]. In addition, Coase's theorem analyzes transaction costs and their relationship with property rights arrangements, pointing out that, because of transaction costs, different definitions and allocations of rights lead to different efficiencies of resource allocation. Dales J. H. (1986) applied Coase's theorem to point out that pollution is a property right given by the government to the emitting firm, and this discharge right can be transferred by market means, so that market instruments can be used to improve the efficiency of environmental pollution control [7]. However, comparing the influence of public and government environmental awareness, the government has more influence than the public and industrial enterprises, so the government should intervene appropriately while using the market [8]. The emissions trading system is the product of the combination of government and market, and a major innovation in environmental regulation. The 1990 amendments to the U.S. Clean Air Act first introduced the emission trading system (ETS) [9], which not only reduced U.S. SO 2 emissions by a total of 40% from 1980 levels but also provided new ideas for countries looking for effective ways to reduce SO 2 emissions [10,11]. Developed countries have borrowed from the U.S. approach to implement the ETS [12]. While improving air quality and safeguarding health [13], the potential of the ETS to promote SO 2 emission reduction is gradually emerging [14].
Currently, Shi Wenming et al. (2021) show that China is the second-largest emitter of SO 2 , and finding more effective control methods has become a pressing task in promoting SO 2 emission reduction in China [15]. In the face of the harm caused by SO 2 emissions, the Chinese government has repeatedly emphasized SO 2 emission reduction since 2006, setting reduction targets of 10%, 8%, and 15% in 2006, 2011, and 2016, respectively. During this period, in 2007, China launched ETS pilot projects in 11 provinces and municipalities (Jiangsu, Zhejiang, Tianjin, Hubei, Hunan, Chongqing, Hebei, Henan, Shanxi, Inner Mongolia, and Shaanxi), with the expectation of using market-based instruments to promote the sustained and effective reduction of SO 2 emissions. Jiang Lei et al. (2020) showed that central and provincial governments play a pivotal role in solving China's environmental pollution problems and that governmental awareness of environmental protection contributes to significant reductions in SO 2 emissions [16]. In this context, the effect of ETS implementation on SO 2 emission reduction in China and its effective transmission pathways deserve further study. At the same time, ETS is an innovative environmental regulation tool. Identifying the paths through which ETS affects SO 2 can help China find a practical way to manage SO 2 and help production units establish the value of green and clean production, thus boosting sustainable development.
Therefore, based on the panel data of 285 Chinese cities (except Tibet) from 2004 to 2018, this paper investigates the effect of the ETS introduced in 2007 on SO 2 emissions in China and the transmission mechanism using the difference-in-differences (DID) empirical approach and the mediating effect model.
The research of this paper is divided into six parts. The first part describes the reasons for the government to adopt environmental policies to curb SO 2 and the current status of ETS implementation. The second part reviews the current research results on the effects of ETS. The third part puts forward the research hypotheses on the effect of ETS on SO 2 in China based on relevant theories and models. The fourth part establishes the models required to test the transmission mechanism of the effect of ETS on China's SO 2 and lists the data sources. The fifth part uses the collated data to test the research hypotheses and conduct analyses. The sixth part puts forward targeted policy recommendations on the specific challenges faced by China in promoting ETS at this stage.
The innovative points of this paper are as follows. First, the effect of ETS on SO 2 is discussed at both national and regional levels by using the empirical method of DID. Second, city-level data are used for the analysis to make the results closer to the actual effect. Third, the transmission mechanism of the effect of ETS on SO 2 is analyzed in terms of technological innovation and energy efficiency. Fourth, the robustness of the DID model is strengthened by the prior trend test, the placebo test, the exclusion of policy interference test, and the propensity score matching (PSM) test. It is worth mentioning that the research framework of this paper is not only applicable to Chinese ETS, but also to international ETS studies.
Literature review
As a typical model of market-based regulation, ETS has received much attention from international scholars in its creation and promotion. For the United States, the first country to implement ETS, Conrad Klaus and Kohn Robert (1996) found that a market approach is more cost-effective than command-and-control regulations for pollution control [10], having reduced U.S. SO 2 emissions by 40% from 1980 levels [17]. The ETS is ecologically efficient and economically effective; it plays a vital role in developing the future market for tradable permits [18]. However, there are some problems in ETS implementation: compliance costs are not minimized, and the market is still not mature enough [19]. As the largest operator of an ETS, the effect of ETS implementation in the E.U. has also received a high level of attention from scholars. Swedish participation in the ETS has been enthusiastic, but in the absence of a market instrument that pays close attention to the pricing mechanism, the system's efficiency may be severely impaired [20]. In addition, Rogge Karoline et al. (2012) show that, because of the lack of rigor and predictability in the E.U. trading system, coupled with environmental constraints, the ETS may not provide sufficient incentives for fundamental changes in firms' innovation activities at a level that ensures the achievement of long-term political goals [21]. Arto Iñaki et al. (2009) point out that Spain, Austria, Italy, the U.K., and Sweden are better suited to the equalization scheme, which would greatly benefit the textile, non-metallic minerals, and paper industries, but would be particularly detrimental to the chemical, non-ferrous and other metals, and engineering industries [22].
Current research on ETS in China focuses on two main parts: the direct effect of ETS implementation and the transmission mechanism of ETS. Scholars studied the feasibility of ETS implementation in China before its launch and noted that cap-and-trade offers the most cost-effective approach [23]. Shuangqing Xu and Bin Liu (2012) compared pollution control cases in the United States and China and pointed out that the command-and-control model can achieve efficient and cost-effective results in cases of market failure and an immature market environment [24].
Research on the direct effects of China's ETS pilot can be divided into two main categories: economic effects and environmental effects. In terms of environmental benefits, in the early stages of ETS implementation, scholars questioned the effectiveness of the ETS in China. Zhen Lu (2011) pointed out that a national emissions trading scheme may not be suitable for China in the short term [25]. Zhengge Tu and Renjun Shen (2014) demonstrate, based on the PSM-DID method, that ETS fails to reduce the average pollution abatement cost [26]. Guang Ming Shi et al. (2016) use a game theory approach to show that the pollution charging system (PCS) and ETS are not effective in motivating firms to reduce SO 2 emissions [27]. As the policy has advanced, the research sample has extended in time and space, and inconsistencies in research findings have emerged. At the regional level, some scholarly studies have pointed out that ETS can not only effectively promote SO 2 emission reduction [12] and green development [28] in the pilot provinces, but also improve environmental efficiency [29] and raise environmental awareness in the surrounding areas [16]. On the contrary, Bingqing Hou et al. (2020) showed, based on urban panel data, that ETS implementation hurts green development [30]. At the firm level, Tang Maogang et al. (2021) showed that ETS makes a significant contribution to SO 2 emission reduction based on firm data [31], while Sousa Rita et al. (2020) showed that this contribution does not apply to resource-intensive industries [32]. In terms of economic benefits, the implementation of ETS can stimulate strategic behaviors such as technological innovation [33], production restructuring, and industrial restructuring [34], which not only improve total factor productivity [35] but also increase the energy efficiency of the company [36]. Zhang H. and Fan L.W. (2019) use provincial data to show that ETS stimulates provincial energy efficiency improvements, but not as strongly as thought, and suggest that it is more appropriate to implement ETS in the industrial or transportation sectors than across all national sectors [37]. In addition, other scholars point out at the firm level that ETS still has difficulty improving the productivity of firms [38] and their green total factor productivity [30].
After confirming the existence of direct effects of ETS on the environment, scholars began to explore the transmission mechanism between ETS and the environment, which is the core concern of this paper. Scholars mainly analyze the transmission channels of ETS to the environment from two perspectives: internal to the ETS and external to it. Gao Shuai and Wang Can (2021) showed that designing the ETS quota allocation on a uniform basis and allowing firms to bank quotas can effectively encourage firms to participate in ETS and thus achieve environmental optimization [39]. Zhang Bing et al. (2010) show that transaction costs reduce market efficiency and that China's coal price system is also a major factor affecting the performance of the SO 2 emissions trading market [40]. Ju Yiyi and Fujikawa Kiyoshi (2019) showed that, through a cost transmission mechanism, the additional abatement costs caused by the national ETS drove an increase in household consumption in several energy-intensive industries [41]. Zhang Bing et al. (2013) showed that the cost savings and number of transactions were less than expected at the beginning of the project, and that enhanced policy interaction could effectively address this issue and thus contribute to SO 2 reduction [42].
Turning to extrinsic mechanisms, Zhang Jingxiao et al. (2021) showed that ETS can curb SO 2 by increasing the efficiency of industrial green innovation, and that unbalanced economic and social growth could undermine any initial permit allocation scheme that might become a cornerstone of ETS [43]. Wang Lianfen et al. (2021) showed that policy implementation can promote the optimization of industrial structure through technological innovation, but this effect is influenced by economic development and the level of technology [44]. Zhang Shengling et al. (2021) showed that ETS achieves environmental optimization by increasing green total factor productivity (GTFP) and reducing investment in pollution-intensive industries in pilot provinces, confirming the existence of Porter's hypothesis and the investment transfer transmission path [45]. Yang S. et al. (2021) showed that ETS promotes urban SO 2 reduction by stimulating green technology innovation, promoting industrial restructuring, and directing investment toward green assets; among these, the mediating effect of directing investment toward green assets is the strongest [46]. Because of the spatial spillover effect of the harm caused by accelerating SO 2 emissions [47], Du Gang et al. (2021) showed that ETS significantly stimulated green utility innovation in and around the pilot areas and that there was a masking effect of government intervention in the process [48].
In general, the shortcomings of the current research results are as follows. First, the discussion of ETS remains controversial and lacks heterogeneity analysis at the regional level. Second, most scholars study policy effects at the provincial level, which can bias policy effect estimates. Third, scholars have studied the direct effect of ETS on environmental optimization, but less attention has been paid to the transmission mechanism between ETS and environmental effects. Fourth, the existing literature is also crude in terms of data interpretation and model robustness tests.
The effect of ETS on SO 2
The ETS, which combines market mechanisms with government regulation, is a significant innovation in environmental regulation. The core idea is to set a maximum amount of pollution emissions in a region and allocate a share of the emissions to each enterprise [49], which is directly linked to enterprises' production costs. Firms that emit less than the allotted amount can sell their excess share in the emissions trading market and receive a profit, reducing the production costs of low-polluting firms. In contrast, enterprises whose emissions exceed the allocated amount must buy the corresponding share in the emissions trading market, which raises the production costs of highly polluting enterprises to some extent. Cost-benefit theory holds that financial decisions should be based on the principle that benefits outweigh costs; a decision is feasible only when the input costs are less than the benefits brought by the output. Enterprises with strong pollution control ability will then adjust their production mode more actively because of the benefits, achieving a sustained and effective reduction of emissions, while firms with weak pollution control ability will also adapt their production mode because of the increased cost, reducing their negative externalities on the environment. As SO 2 is the main component of pollutants in China, hypothesis H1 is proposed in this paper. H1: ETS directly affects SO 2 and can significantly suppress SO 2 .
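To make this cost-benefit logic concrete, the following sketch works through a hypothetical two-firm example; all numbers (quota, permit price, marginal abatement costs) are illustrative assumptions, not values from this study.

```python
# Hypothetical illustration of the ETS cost-benefit logic; all numbers are assumed.
quota = 100.0            # tonnes of SO2 allocated to each firm
permit_price = 500.0     # market price per tonne of emission rights (assumed)
mac_low, mac_high = 300.0, 800.0  # marginal abatement costs (assumed)

# A low-polluting firm emitting 80 t abates 20 t below quota and sells the surplus:
seller_profit = (quota - 80.0) * permit_price - (quota - 80.0) * mac_low
print(f"seller nets {seller_profit:+.0f} per period")   # +4000: abating pays

# A high-polluting firm emitting 130 t compares buying permits with abating:
buy_cost = (130.0 - quota) * permit_price    # 15000 via the market
abate_cost = (130.0 - quota) * mac_high      # 24000 via in-house abatement
print(f"buyer saves {abate_cost - buy_cost:.0f} by trading")  # trading is cheaper
```

Under these assumed numbers, both the seller and the buyer are better off trading, which is exactly the incentive channel hypothesis H1 relies on.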
Path of government intervention (GOV)
Because the environment is a public good characterized by externalities, and because property rights over it are difficult to define clearly, environmental issues require a joint role of government and markets [50]. Traditional environmental regulation tends to be dominated by administrative force, i.e., local governments tend to use command-and-control environmental schemes to restrict corporate emission behavior. This kind of regulation creates barriers to inter-regional factor flows and intensifies market fragmentation, harming eco-efficiency [51]. At the same time, the trade-off between governance costs and rent-seeking costs can lead to low awareness of cleaner production and insufficient incentives for energy conservation and emission reduction among enterprises [52]. In addition, information asymmetry often leads to the failure of command-and-control environmental regulation [53]. Unlike traditional environmental schemes, ETS, as a market-based one, cedes part of the government's power to the market by establishing an emissions trading market, which fully utilizes the decisive role of the market in resource allocation and speeds up the elimination of backward production capacity. In addition, the revenue gained in the trading process by enterprises with strong pollution control ability also has a certain incentive effect. Thus this paper proposes hypothesis H2. H2: ETS indirectly affects SO 2 and can suppress SO 2 by reducing GOV.
Path of green technology innovation (GTI)
Based on Porter's hypothesis (1995), appropriate environmental regulation can lead to more innovative activity by firms, which will increase their productivity [54], thus offsetting the costs of environmental protection and improving their profitability and product quality; this may give domestic firms a competitive advantage in the international market and, at the same time, may increase industrial productivity. As a new environmental regulation tool, ETS has changed the perception of emissions from "punishable" to "tradable for profit", which puts pressure on production units over their discharges and signals that high value-added, low-pollution, green and sustainable production is the future direction of economic development. This approach motivates production units to engage actively in GTI to achieve effective long-term reductions in pollutant emissions and thus meet the reduction targets of environmental policies. Therefore, this paper proposes hypothesis H3.
H3: ETS indirectly affects SO 2 and can suppress SO 2 through GTI.
Path of resource utilization efficiency (RUE)
The implementation of ETS puts pressure on production units not only in terms of emissions but also in terms of intensified market competition. Market competition is characterized by the flow of factors from inefficient sectors to efficient sectors, achieving a last-place elimination that often shows a positive correlation with firm productivity [55]. Therefore, firms should reduce their pollution emissions and improve the efficiency of resource use to avoid being eliminated in the process of market competition. Under a perfectly competitive market, emission shares can be traded according to firm demand. In reality, however, Tombe and Winter (2015), who first combined environmental regulation with resource allocation, point out that environmental regulation set on the basis of pollution intensity carries significant information asymmetry [56]. This asymmetry may lead to resource misallocation. As a result, a few firms may purchase and store emission rights far beyond their emission quotas to gain monopoly revenues or for future use, with consequences such as a decrease in product market output [57] and higher compliance costs for buyers of emission shares. To thrive under the pressure of environmental regulation, production units will increase the efficiency of resource use to reduce production costs. Therefore, this paper proposes hypothesis H4.
H4: ETS indirectly affects SO 2 and can suppress SO 2 by enhancing RUE.
Based on the above theoretical analysis, this paper studies the transmission mechanism of the effect of ETS on SO 2 in China from the perspectives of GOV, GTI, and RUE; the structure of the study is shown in Fig 1. DID is a commonly used quasi-experimental method for the assessment of policy effects. The sample is divided into treatment and control groups and combined with a fixed-effects model, and the data are then compared between the two periods. This approach yields relatively accurate results for assessing policy effects, largely avoiding endogeneity problems and omitted variable bias. For this study, a difference-in-differences model is developed, as shown in Eq (1).

\[ \ln SO_{2,it} = \alpha_0 + \alpha_1 (Policy_{it} \times Year_t) + \alpha_2 Control_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{1} \]
In Eq (1), the explained variable SO 2it is sulfur dioxide emissions; Policy it and Year t are 0-1 dummy variables reflecting regional policy implementation, where i indexes the city and t the year. Based on the government work report, 2007 is taken as the policy implementation year, and the values for the treatment and control groups are shown in Fig 2. Control it is a series of control variables, μ i and λ t are individual fixed effects and time fixed effects, and ε it is a random error term influenced by the province-specific time trend.
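A minimal sketch of estimating Eq (1) as a two-way fixed-effects regression is shown below, assuming a pandas DataFrame df with columns lnso2, policy, post, city, year and a few controls; the column names are placeholders, not the authors' code.

```python
import statsmodels.formula.api as smf

# df: city-year panel; policy = 1 for pilot cities, post = 1 for years after 2007
df["did"] = df["policy"] * df["post"]

# Two-way fixed effects DID: city and year dummies absorb mu_i and lambda_t
fit = smf.ols(
    "lnso2 ~ did + economy + income + structure + population + C(city) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})  # cluster SEs by city

print(fit.params["did"], fit.pvalues["did"])  # alpha_1: the estimated policy effect
```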
Parallel trend test model
In assessing policy effects with the DID method, the common trend is a crucial assumption that largely determines the accuracy of the model assessment. Therefore, this paper conducts two-period and multi-period parallel trend tests when analyzing the effect of ETS on SO 2 emissions, and a multi-period parallel trend test model is established as shown in Eq (2).

\[ \ln SO_{2,it} = \beta_0 + \sum_{t} \beta_1^{t} (Policy_{it} \times d_t) + \beta_2 Control_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{2} \]
In Eq (2), d t is a time dummy variable: if t > 2007, then Policy it × d t = 1; if t ≤ 2007, then Policy it × d t = 0. β 1 is the coefficient of the cross-product term, i.e., the average treatment effect. Theoretically, the policy effect is credible if the average treatment effect is insignificant before the policy is implemented but significant afterwards.
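The multi-period test can be implemented as an event-study regression in which the pilot dummy is interacted with each year relative to a base period; the sketch below uses 2004 as the base, mirroring the paper's choice, with all variable names assumed.

```python
import statsmodels.formula.api as smf

# Interact the pilot dummy with every year, omitting the 2004 base period,
# so each coefficient traces the treatment-control gap year by year.
fit = smf.ols(
    "lnso2 ~ policy:C(year, Treatment(2004)) + C(city) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})

# Pre-2007 interaction terms should be insignificant (parallel trends);
# post-2007 terms should turn significantly negative.
event = {k: v for k, v in fit.params.items() if "policy" in k}
print(event)
```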
Intermediary effect model.
Based on the hypotheses in section 3, this paper analyzes the effect of ETS on SO 2 from three perspectives, GOV, GTI, and RUE, using the mediating effect test proposed by Baron and Kenny (1986) [58]. M is used to represent all mediating variables involved in this paper. The mediating effect test model is shown in Eqs (3)-(5).

\[ \ln SO_{2,it} = \alpha_0 + \alpha_1 (Policy_{it} \times Year_t) + \alpha_2 Control_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{3} \]
\[ M_{it} = \beta_0 + \beta_1 (Policy_{it} \times Year_t) + \beta_2 Control_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{4} \]
\[ \ln SO_{2,it} = \gamma_0 + \gamma_1 (Policy_{it} \times Year_t) + \gamma_2 M_{it} + \gamma_3 Control_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{5} \]
The test steps are as follows. In the first step, test whether the interaction term coefficient α 1 in Eq (3) is significant; if not, the causal effect is not apparent and the mediating effect test is abandoned. The second step tests whether β 1 in Eq (4) is significant; if so, continue to the third step, otherwise the mediating effect is not significant and the test is terminated. The third step tests whether γ 1 and γ 2 in Eq (5) are significant: if only γ 2 is significant, it is a complete mediating effect; if both γ 1 and γ 2 are significant, it is a partial mediating effect. In addition, if β 1 × γ 2 and γ 1 have the same sign, the variable acts through a mediating effect, and if β 1 × γ 2 and γ 1 have different signs, it acts through a masking effect.
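The three-step procedure can be sketched as three fixed-effects regressions; the example below uses government intervention (gov) as the mediator, with all column names assumed rather than taken from the authors' code.

```python
import numpy as np
import statsmodels.formula.api as smf

def twfe(formula, df):
    """Two-way fixed-effects OLS with city-clustered standard errors."""
    return smf.ols(formula + " + C(city) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["city"]}
    )

# Step 1: total effect of the policy on emissions (Eq 3)
m3 = twfe("lnso2 ~ did + economy + income", df)
# Step 2: effect of the policy on the mediator (Eq 4)
m4 = twfe("gov ~ did + economy + income", df)
# Step 3: emissions on policy and mediator jointly (Eq 5)
m5 = twfe("lnso2 ~ did + gov + economy + income", df)

indirect = m4.params["did"] * m5.params["gov"]   # beta1 * gamma2
direct = m5.params["did"]                        # gamma1
kind = "mediating" if np.sign(indirect) == np.sign(direct) else "masking"
print(indirect, direct, kind)
```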
Data selection
Explained variable. In order to investigate the impact of ETS on China's SO 2 emissions, this paper selects SO 2 emissions in each region as the explained variable. The data source is the China City Statistical Yearbook, and the data description is shown in Table 1. The acronyms involved in this paper are listed in Table 2.
Control variables. In order to avoid the influence of omitted variables, this paper adopts the following control variables. (1) Economic development (economy): Grossman and Krueger (1995) argue that regional economic growth may be accompanied by excessive consumption of environmental resources [59,60], exacerbating environmental degradation [61], but the willingness to trade environmental resources for economic growth gradually declines once economic development reaches a certain level. (2) Per capita income (income): the environmental Kuznets curve posits an inverted U-shaped relationship between per capita income and environmental pollution, and SO 2 generally satisfies the environmental Kuznets curve [62]. (3) Industrial structure (structure): as an essential link between human economic activity and air quality, changes in industrial structure can affect the environment [63]; for example, a decrease in the share of secondary industry output in GDP can significantly reduce SO 2 pollution [63]. (4) Population size (population): according to Dietz T. and Rosa E. (1997), the environmental impact of population growth is characterized by significant diseconomies of scale [64]. On the one hand, population expansion brings direct environmental pressure, increasing the consumption of exhaustible resources and leading to environmental degradation; on the other hand, population growth intensifies the demand for industrial products, and the expansion of industrial production brings higher levels of pollutant emissions. (5) Electricity efficiency (electricity): as the economy grows and the scale of production increases, changes in electricity efficiency can affect SO 2 [19]. (6) Infrastructure (infrastructure): with economic development, urban infrastructure supports more active production and life, which has a certain impact on the environment [65]. (7) Technology investment (technology): government spending on technology helps finance research and development units and contributes to the development of science and technology, which, guided by the value of green development, tends to favor environmentally beneficial technologies and thus optimizes environmental quality [66]. Among these variables, GDP is deflated with 2003 as the base period. The raw data for the above indicators are obtained from the China Statistical Yearbook, and the data description is shown in Table 1.
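As a small illustration of the deflation step, the sketch below converts nominal GDP to constant 2003 prices using a price index rebased so that 2003 = 1; the column names (gdp_nominal, price_index) are placeholders, not from the paper's data.

```python
# Sketch only: convert nominal GDP to constant 2003 prices, per city.
base = df.loc[df["year"] == 2003, ["city", "price_index"]].rename(
    columns={"price_index": "index_2003"}
)
df = df.merge(base, on="city")
df["deflator"] = df["price_index"] / df["index_2003"]  # 2003 = 1 by construction
df["gdp_real"] = df["gdp_nominal"] / df["deflator"]    # GDP at constant 2003 prices
```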
Mediating variables. This paper analyzes the effect of ETS on SO 2 from three perspectives: GOV, GTI, and RUE.
In terms of GOV, local governments can direct the flow of resources through investment and fiscal spending, affecting production and thus environmental quality. Fiscal policy is an effective tool for the government to address resource allocation. Therefore, this paper draws on the study of Du Gang et al. (2021) and uses the ratio of government fiscal expenditure to GDP for the year to measure the level of government intervention [48]. In terms of GTI, since there is a certain time lag between the application for and the granting of green patents, some of the green patents granted in the year of policy implementation were applied for before the policy took effect, and this part of the applications was less influenced by policy factors. In addition, patent grants are influenced by multiple factors. Therefore, this paper selects the number of GTIA (green invention patent) applications and the number of GTIB (green utility model patent) applications to measure green technology innovation. The data source is the National Bureau of Statistics. In terms of RUE, this paper analyzes the mediating effect of resource use efficiency from two perspectives: ECS and TFP. First is the energy efficiency perspective. The ratio of GDP to energy consumption is used as the indicator of energy efficiency, where GDP data are obtained from the China Urban Statistical Yearbook and deflated with 2003 as the base period. Energy consumption is estimated using nighttime lighting data. Numerous scholars have demonstrated a correlation between nighttime lights and energy consumption [67][68][69][70]: the basic logic is that higher light levels at night indicate more economic activity, implying a higher level of economic development and correspondingly higher energy consumption. The Chinese scholar Wu Jiansheng et al. (2009) examined exponential, linear, and logarithmic relationships between nighttime lighting and energy consumption in China and found the linear relationship to be the most robust [71]. Xiao Hongwei (2018) showed that the accuracy of this estimation method is as high as 99% [72]. Therefore, this paper uses the linear correlation between energy consumption and total nighttime lights in 30 provinces of China to establish a linear simulation model of energy consumption by region through regression analysis. Considering the accuracy of downscaling model inversion, a linear model without intercept is adopted, and its formula is shown in Eq (6).

\[ E_{it} = k_t \times DN_{it} \tag{6} \]
In Eq (6), E it is the statistical value of energy consumption in province i in year t; k t is the coefficient for year t; DN it is the sum of the grayscale values of all rasters in province i in year t.
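A minimal sketch of fitting the no-intercept model in Eq (6) year by year is shown below; the closed-form least-squares slope through the origin is used, and the DataFrame and column names are assumptions for illustration.

```python
import numpy as np

def fit_k(dn: np.ndarray, e: np.ndarray) -> float:
    """Least-squares slope of E = k * DN with no intercept: k = (DN.E)/(DN.DN)."""
    return float(dn @ e) / float(dn @ dn)

# panel: an assumed province-year DataFrame with columns year, dn_sum, energy
k_by_year = {
    year: fit_k(g["dn_sum"].to_numpy(), g["energy"].to_numpy())
    for year, g in panel.groupby("year")
}
# City-level energy is then inverted from city lights: E_city = k_t * DN_city
```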
Second is the total factor productivity perspective. Total factor productivity (TFP) measurement methods can be broadly classified into stochastic frontier analysis (SFA) and data envelopment analysis (DEA). Unlike SFA, which imposes an idealized functional relationship among variables, DEA derives the optimal weights directly from the actual input-output data of decision units and is therefore more objective [73]. Therefore, this paper uses capital, labor, and energy as inputs and GDP as output to estimate the allocation effect. The capital data are estimated using the perpetual inventory method, labor data are the average number of employees per year, and energy is the total energy input per year. The data source is the China City Statistical Yearbook. Among these variables, those affected by price factors are deflated using 2003 as the base period.
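The perpetual inventory step can be sketched as follows; the depreciation rate and the initialization rule are common choices in the Chinese capital-stock literature, assumed here rather than taken from the paper.

```python
import numpy as np

def perpetual_inventory(invest: np.ndarray, delta: float = 0.096) -> np.ndarray:
    """K_t = (1 - delta) * K_{t-1} + I_t, with K_0 = I_0 / (g + delta),
    where g is the average growth rate of real investment (a common rule)."""
    growth = np.mean(np.diff(invest) / invest[:-1])
    k = np.empty_like(invest, dtype=float)
    k[0] = invest[0] / (growth + delta)
    for t in range(1, len(invest)):
        k[t] = (1.0 - delta) * k[t - 1] + invest[t]
    return k

# invest: a real (2003-price) fixed-asset investment series for one city
capital = perpetual_inventory(np.array([120.0, 135.0, 150.0, 168.0]))
```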
Fundamental analysis
Before taking logarithms of SO 2 emissions, we conducted a fundamental analysis of SO 2 emissions, as shown in Fig 3. From Fig 3, before the implementation of ETS in 2007, overall sulfur dioxide emissions in China increased, while they showed a decreasing trend after 2007. In addition, the growth rate of sulfur dioxide emissions changed from positive to negative, which shows that China effectively reduced sulfur dioxide emissions during this period. However, the contribution of ETS, as the environmental regulation policy tool introduced in this period, cannot be judged directly from this trend.
Baseline model test results
Before conducting the baseline regression, this paper used correlation coefficient analysis to test the control variables for multicollinearity. As can be seen from Table 3, the variance inflation factors (VIFs) are below 10 and the tolerances are above 0.1, so there is no multicollinearity among the control variables in this paper [74,75]. In addition, the variables were subjected to the Hausman test, which significantly rejected the null hypothesis, so in the baseline regression this paper selects the DID model with two-way fixed effects for time and city to assess the effect of ETS on SO 2 .
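A short sketch of the VIF screen is given below, using statsmodels; the control-variable column names are placeholders.

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

controls = ["economy", "income", "structure", "population",
            "electricity", "infrastructure", "technology"]
X = sm.add_constant(df[controls])

# VIF_j = 1 / (1 - R_j^2); tolerance is its reciprocal. Rule of thumb: VIF < 10.
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1.0 / vif:.3f}")
```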
For the study of the effect of ETS on SO 2 , this paper adopts the stepwise regression method to present the effect of ETS on SO 2 in China; the baseline regression results are shown in Table 4. From Table 4, the effect of ETS on SO 2 is significant: compared with the control group, the SO 2 emissions of the treatment group are reduced by 26.44%. Thus, hypothesis H 1 , that ETS directly affects SO 2 emissions and can effectively suppress SO 2 emissions in China, is confirmed.
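Since the dependent variable enters in logs, a DID coefficient translates into a percentage change via the exact formula below; the coefficient value shown is back-solved from the reported 26.44% for illustration, not quoted from Table 4.

\[ \%\Delta SO_2 = \left(e^{\hat{\alpha}_1} - 1\right) \times 100\% \]

For example, a coefficient of \( \hat{\alpha}_1 \approx \ln(1 - 0.2644) \approx -0.307 \) corresponds to the reported 26.44% reduction.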
Parallel trend test results
Considering that the implementation of the ETS pilot is a continuous, dynamic adjustment process, it is necessary to further consider the dynamic marginal effect of ETS on SO 2 emission reduction in China. In the test, 2004 is taken as the base period of policy implementation, and the results are shown in Fig 4. From Fig 4, the coefficient of the interaction term changes from positive to negative after the implementation of the policy. After ETS was implemented in 2007, it significantly suppressed SO 2 emissions at the 90% confidence level, especially after 2012, when the confidence level stabilized above 95%. The insignificance in 2011 may be explained by the government's announcement in that year of a plan for another market-based environmental regulation policy, the carbon emissions trading policy, which added pressure to reduce carbon emissions and distracted production units, so that the effect of ETS was somewhat disturbed.
Analysis of ETS transmission mechanism
5.4.1 GOV conduction pathway. Based on the theoretical hypothesis proposed in section 3, the mediating effect of GOV is tested, and the results are shown in Table 5. From Table 5, the mediating effect of GOV does exist in the process of SO 2 reduction by ETS: ETS can suppress SO 2 by reducing GOV, which confirms the validity of hypothesis H 2 .
This paper offers the following explanation for this phenomenon. As a market-based, incentive-oriented environmental regulation tool, ETS builds a platform for enterprises to trade emission shares by establishing an emission rights trading market. On the one hand, the seller of pollution shares gains revenue from trading and then invests the earned income into production, which stimulates the seller's incentive to treat pollution and relieves the pressure on government subsidies, so the government can reduce its investment accordingly. On the other hand, for the share buyer, the addition of a market mechanism raises the buyer's production costs and enhances the buyer's awareness of self-driven emission reduction under market competition. Therefore, the government will streamline administration and delegate power, ceding more allocation and regulation to the market, and enhance enterprises' awareness of and capacity for pollution control through market competition in all respects, thus effectively reducing pollution emissions and achieving environmental optimization.
GTI conduction pathway.
Based on the Porter hypothesis discussed in section 3, this paper tests the mediating effect of GTI. Since GTI comprises GTIA and GTIB, the mediation path of GTI is tested from these two perspectives, and the results are shown in Table 6. Among them, Table 6-(2) and Table 6-(3) show the test results for GTIA, and Table 6-(4) and Table 6-(5) show the test results for GTIB. From Table 6, it is clear that the mediating effect of GTIA does exist in the process by which ETS achieves SO 2 emission reduction, while GTIB has a masking effect on the process.
The interpretation of this result is as follows. Under ETS, production units need to purchase emission credits to meet their emission demand, which ties their emission volume directly to their production costs. Under the dual pressure of environmental regulation and industrial development, production units inevitably have to find a balance between the two. At the same time, since the research behind GTI requires a large amount of capital and funding is guided by the values of low pollution and low emissions, capital is more inclined toward units with clean production. Therefore, production units will continue to increase GTI, and GTIB, which takes less time, consumes fewer resources, and is highly practical, may be preferred. However, a utility model patent is mainly a practical new technical solution for a product's shape, structure, or a combination thereof, not a technological innovation that transforms the production method. Therefore, utility model patent innovations that follow previous production methods do not significantly suppress SO 2 . In contrast, GTIA, as green innovation in production methods, can effectively curb SO 2 emissions, although it is complex and has a long cycle time. Production units will also focus on increasing investment in GTIA for long-term development, thus achieving a change in production methods. In addition, it cannot be excluded that the inconsistency of the research and application cycles of GTIA and GTIB makes the stimulating effect of ETS on GTIB more evident than that on GTIA.
RUE conduction pathway.
In order to investigate whether ETS can achieve SO 2 emission reduction through RUE, this paper measures RUE through two indicators, ECS and TFP, where ECS is energy use efficiency and TFP is total factor productivity; the test results are shown in Table 7. Table 7 shows that ETS can suppress SO 2 emissions through both ECS and TFP. Thus, hypothesis H 4 , "ETS can curb SO 2 emissions in China by improving the efficiency of resource utilization," is confirmed.
In response to this empirical result, this paper offers the following explanation. ETS is implemented by setting a maximum amount of pollution emissions in a region and assigning the right to discharge emissions to each production unit. Unlike traditional environmental regulation, ETS combines government regulation with market mechanisms to create a market for emissions trading among producers. Production units can buy or sell their share of emissions in the emissions trading market according to their production. The market mechanism often leads to a reallocation of factors, driving the flow of factors from inefficient to efficient sectors [55]. Moreover, information asymmetries may lead to distortions in factor allocation and thus affect production [57]. Therefore, during ETS implementation, production units will improve energy use efficiency to reduce SO 2 emissions while maintaining production. At the same time, enterprises will carry out internal business model innovation, using all factors more efficiently to improve overall resource utilization efficiency. This behavior not only strengthens enterprises' ability to obtain more resources in market competition but also enables them to obtain more revenue in the emissions trading market, forming a virtuous circle.
Robustness tests
5.5.1 Propensity score matching test. In this paper, a counterfactual research sample was constructed using the PSM method, and its findings were used as evidence for robustness testing. Samples from non-overlapping regions are dropped to satisfy the common support hypothesis, and a balance test of the matching variables is conducted. The test results are shown in Fig 5. The matching balance test results show that the standardized deviations of the main variables of the treatment and control groups after matching are less than 10%, indicating that the matched estimation results are valid and reliable. In addition, this paper adds two further estimation methods, radius matching and kernel matching, to the nearest neighbor matching. The balance tests of these two matching methods and the resulting PS plots are shown in Figs 6 and 7. The ETS estimation results obtained by these three matching methods are shown in Table 8. From Table 8, the effect of ETS on SO 2 is significantly negative at the 1% level, indicating that ETS has a significant suppression effect on SO 2 . This result is consistent with the baseline regression, which further enhances the robustness of the empirical results.
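A minimal nearest-neighbor PSM sketch is shown below, assuming a logistic propensity model over the matching covariates; scikit-learn is used for illustration, and all column names are placeholders rather than the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X = df[["economy", "income", "structure", "population"]].to_numpy()
treat = df["policy"].to_numpy()

# 1) Propensity scores from a logit of pilot status on covariates
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# 2) Common support: keep units inside the overlap of the two score ranges
lo = max(ps[treat == 1].min(), ps[treat == 0].min())
hi = min(ps[treat == 1].max(), ps[treat == 0].max())
support = (ps >= lo) & (ps <= hi)

# 3) One-to-one nearest-neighbor matching on the score
ctrl = np.where((treat == 0) & support)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl].reshape(-1, 1))
_, pos = nn.kneighbors(ps[(treat == 1) & support].reshape(-1, 1))
matched = ctrl[pos.ravel()]  # control rows matched to each treated row
# The DID of Eq (1) is then re-estimated on the treated + matched sample.
```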
Placebo test.
In the placebo test, counterfactual tests were conducted from both the implementation-area and implementation-time perspectives, and the results are shown in Table 9. Table 9-(1) shows the counterfactual test for implementation time: implementing ETS at a non-actual pilot time significantly promoted SO 2 emissions, the opposite of the baseline regression result. Table 9-(2) shows the counterfactual test for the implementation area: ETS in the non-pilot regions has a significant promotion effect on SO 2 , again the opposite of the baseline regression result. Therefore, the placebo tests for both implementation time and implementation area are passed, further enhancing the robustness of the empirical results.
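Beyond the counterfactual time and region tests used here, a common complementary placebo randomly reassigns pilot status many times; a sketch of that variant is given below, with assumed variable names.

```python
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
cities = df["city"].unique()
n_pilot = df.loc[df["policy"] == 1, "city"].nunique()

fake_coefs = []
for _ in range(500):
    fake = set(rng.choice(cities, size=n_pilot, replace=False))
    df["fake_did"] = df["city"].isin(fake).astype(int) * df["post"]
    m = smf.ols("lnso2 ~ fake_did + C(city) + C(year)", data=df).fit()
    fake_coefs.append(m.params["fake_did"])

# The true estimate should sit far in the tail of this placebo distribution.
print(np.percentile(fake_coefs, [2.5, 97.5]))
```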
Removal of policy interference.
In the robustness test excluding other policy interference, this paper excludes representative policies affecting SO 2 emissions so as to observe the approximate effect of ETS alone. The two control zones established in 2000 and the pilot emissions trading scheme of 2002 were selected as the "interfering policies." In 2000, the government implemented the two control zones to control acid rain formation and SO 2 pollution; the total area of China's two control zones is about 1.09 million square kilometers, with the acid rain control area covering approximately 800,000 square kilometers and the sulfur dioxide pollution control area about 290,000 square kilometers. Moreover, before 2007, as early as 2002, China had implemented an emissions trading policy in four provinces (Shandong, Shanxi, Jiangsu, and Henan) and three cities (Shanghai, Tianjin, and Liuzhou). Because of the large area occupied by the two control zones, and to prevent estimation bias, this paper divides the model into two parts according to the implementation of the two control zones; the specific grouping idea is shown in Fig 8, and the test results are shown in Table 10. Table 10-(1) shows the policy effect of ETS in the two control areas, and Table 10-(2) shows the policy effect of ETS in the non-two-control areas. From Table 10, we find that ETS has a significant suppression effect on SO 2 emissions regardless of the grouping. This result is consistent with the benchmark regression, which further enhances the robustness of the empirical results. Meanwhile, from the test results in Table 10, it is found that the suppression effect of ETS on SO 2 is more significant in the two control areas than in the non-two-control areas after excluding the 2002 emissions trading policy. Since the two control areas have been conscious of SO 2 emission reduction since 1998, they have relatively rich experience and infrastructure for SO 2 management. Therefore, after implementing ETS, the two control areas can use existing resources to achieve SO 2 emission reduction more effectively.
After implementing ETS in 2007, China introduced the carbon emission trading policy (CES) in 2013, and ZhiQing Dong et al. (2020) showed that this policy also has a significant effect on SO 2 emissions [76]. Therefore, this paper uses the same approach to test the effect of ETS on SO 2 after excluding CES, and the test results are shown in Table 11. Table 11-(1) shows the effect of ETS on SO 2 in the two control areas based on Table 11-(2), and Table 11-(4) shows the effect of ETS on SO 2 in the non-two-control areas based on Table 11-(2). From the results in Table 11, it is found that ETS still has a significant suppression effect on SO 2 emissions after successively removing the interfering policies; this result is consistent with the benchmark regression, which further enhances the robustness of the empirical results.
Heterogeneity analysis
5.6.1 Heterogeneous impact of east, central and west. The above analysis verified the policy effect of ETS on SO 2 emission reduction in China at the national level, but China is a developing country with uneven development, and there is heterogeneity in policy implementation at the regional level. Therefore, this paper analyzes cities in the east, central and west separately according to the classification criteria of the National Bureau of Statistics, and the test results are shown in Table 12. From Table 12, it can be seen that there is regional heterogeneity in the suppression effect of ETS on SO 2 , ranked from strong to weak as eastern, central, and western.
The eastern, central, and western regions have different industrial structures owing to their geographical locations and energy endowments. The eastern region, with its inherent locational advantage, can better attract advanced talent and resources in economic development and, having encountered new green industries earlier than the central and western regions, has established the concept of green and sustainable development. In promoting ETS, the east can effectively use this resource advantage to achieve SO 2 emission reduction. The central region also benefits from the advanced resources of the east because of its proximity. However, compared with the east, the central region has a relatively large share of industry and strong resource dependence, and its SO 2 emissions are relatively high. In implementing ETS, it can make better use of existing advanced resources and actively pursue technical and technological innovation to effectively suppress regional SO 2 emissions. The western region is located inland, and its inherent geographical disadvantages leave it near the bottom of the country in terms of labor resources, knowledge content, level of openness, foreign trade dependence, and fixed capital stock. These objective conditions limit the promotion of the concept of green development in the region, even though its resources are abundant: Xinjiang, Gansu, and Qinghai are rich in China's oil resources, and the northwest holds 38% of China's power coal. These advantageous factor endowments leave such regions with low factor utilization rates and severe environmental pollution because of the mismatch between technology levels and industrial layout in production and processing. In recent years, final demand for construction and heavy industry in the less developed provinces of central and western China has driven an increase in SO 2 emissions [77]. In addition, under the strategy of industrial relocation, the western region has taken on more traditional, highly polluting industries from home and abroad. Therefore, ETS did not significantly suppress SO 2 emissions in the western region.
Heterogeneity analysis of different types of resource-based cities.
Resource-based cities are cities whose leading industries involve the mining and processing of natural resources such as minerals and forests; they are an essential strategic guarantee base for China's energy resources. The National Plan for Sustainable Development of Resource-based Cities (2013-2020) identifies 126 resource-based cities and classifies them into four types, growing, mature, declining, and regenerating, according to their resource security capacity and their capacity for sustainable economic and social development. According to the principle of factor endowment, resource-based cities have a certain impact on the environment through their unique innate advantages. Therefore, we analyze the heterogeneity of the effect of ETS on SO 2 from the perspective of resource-based cities, and the test results are shown in Table 13. Table 13-(1) to Table 13-(6) show the DID test results for all resource-based cities, growing cities, mature cities, declining cities, regenerating cities, and non-resource-based cities. From Table 13, we can see that ETS has a significant suppression effect on SO 2 in both resource-based and non-resource-based cities, and the effect in non-resource-based cities is stronger than in resource-based cities. In addition, among resource-based cities, ETS implementation has a significant suppression effect only on regenerating cities. The explanations for this phenomenon are as follows. First, the cause of the differential effect of ETS between resource-based and non-resource-based cities: resource-based cities are resource-rich, and on factor endowment theory their production is more resource-dependent than that of non-resource-based cities. In addition, according to Hotelling's rule [78], the cost of extracting some resources will increase in the future because of their non-renewable nature. The introduction of ETS conveys the value of clean production to production units, and energy suppliers, anticipating that clean energy will occupy a larger market in the future, will extract energy early and sell it at a low price. Low-priced energy attracts energy-dependent industries and thus hinders the process of SO 2 reduction through ETS. Therefore, compared with resource-based cities, non-resource-based cities show stronger emission reduction effects. Second, among resource-based cities, ETS has a significant suppression effect on SO 2 only in regenerating cities. Regenerating cities have freed themselves from resource dependence; their economy and society have begun to enter a virtuous development track, and they are the pioneer areas for resource-based cities transforming their economic development model. Therefore, in ETS implementation, regenerating cities can use their advantages to accelerate the move away from energy dependence and find clean production methods, thus significantly curbing SO 2 emissions. Growing cities are in the rising stage of resource development, while mature cities are in the stable stage; both have high potential for resource security and substantial economic and social development. These two types of cities are at a stage of strong resource dependence, high resource consumption, and abundant resource supply. Therefore, ETS has a suppressive effect on SO 2 emissions in these two types of cities, but it is not significant. Since cities in the growth stage have a smaller industrial scale and are easier to transform than cities in the mature stage, the suppression effect in the growth stage is better than in the mature stage, although neither is significant. Declining cities have entered the declining phase of resource development, with stronger resource dependence but insufficient resource supply. Unlike mature cities, declining cities missed the critical period of transformation, and ETS did not suppress SO 2 in declining cities because of the difficulties of emission reduction created by their infrastructure and industrial accumulation.
Heterogeneity analysis of old industrial base cities.
Old industrial bases are industrial areas with relatively complete and concentrated industrial categories, formed through state investment and construction during the planned-economy era. The National Plan for the Adjustment and Transformation of Old Industrial Bases (2013-2022) identifies 120 old industrial base cities. Some old industrial bases are important national energy bases and usually supply major technical equipment or products related to people's livelihoods. This paper therefore analyzes the heterogeneity of the effect of ETS on SO2 from the perspective of old industrial bases; the test results are shown in Table 14, where column (1) reports the results for old industrial bases and column (2) those for non-old industrial bases. The results show that ETS has a significant suppression effect on SO2 regardless of whether a city belongs to an old industrial base, and that the effect is stronger for non-old industrial bases.
This result can be explained as follows. The old industrial bases within the planning area are characterized by a robust industrial foundation, large industrial scale, abundant natural resources, and high technological potential, so their development conditions are good. However, their development methods remain relatively crude. A long-accumulated industrial structure and production mode make old industrial bases more energy- and technology-dependent, characterized by high energy consumption and high pollution. As essential pillars of urban economic development, these regions historically faced relatively few demands on environmental quality. The implementation of ETS requires cities in old industrial bases to control emissions while sustaining their development, which puts emission pressure on them and weakens their motivation to participate in emissions trading. According to our data, the SO2 emission intensity of old industrial bases is 1.5 times the national average, making them key transformation areas for achieving SO2 reduction at the national level. The state therefore grants subsidies and other preferential policies to old industrial bases to mobilize their participation in ETS and to help them change their production methods and upgrade their industries, thereby reducing SO2 emissions. In contrast, non-old industrial bases are mostly economic development areas with stronger claims on environmental quality and higher motivation to participate in emissions trading. In addition, non-old industrial bases have relatively lower industrial energy dependence and a stronger awareness of cleaner production. As a result, ETS significantly suppresses SO2 emissions in both old and non-old industrial bases, with a stronger suppression effect in non-old industrial bases.
Discussion
Sections 5.1 to 5.4 constitute the empirical confirmation of the four hypotheses, while the sections from 5.5 onward present robustness tests (PSM-DID, a placebo test, and a test excluding interfering policies) that further confirm the robustness of the results. Each hypothesis is addressed as follows. Hypothesis 1: Emissions trading policy can effectively reduce SO2 emissions. This hypothesis is confirmed by the baseline regression in 5.2, and the parallel trend test in 5.3 further strengthens the reliability of this conclusion. Because ETS is an environmental regulation tool, regulated sources transfer emission allowances to one another through monetary exchange, thus reducing emissions. Linking emissions to costs motivates enterprises to participate and thereby achieves SO2 emission reduction.
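For concreteness, the following is a minimal sketch of the kind of two-way fixed-effects DID regression that underlies such a baseline test. It is illustrative only: the file name, the column names (city, year, treated, post, ln_so2), and the omission of control variables are assumptions made for this sketch, not the paper's actual specification; the heterogeneity tests discussed above amount to estimating the same model on subsamples of cities.

```python
# Minimal sketch of a two-way fixed-effects DID estimate (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel_2004_2018.csv")   # hypothetical city-year panel
df["did"] = df["treated"] * df["post"]          # 1 for pilot cities after ETS launch

# Regress log SO2 on the DID term with city and year fixed effects,
# clustering standard errors by city.
model = smf.ols("ln_so2 ~ did + C(city) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]}
)

# A significantly negative coefficient on "did" corresponds to the
# suppression effect reported for Hypothesis 1.
print(model.params["did"], model.pvalues["did"])
```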
Hypothesis 2: Emissions trading policy promotes SO2 reduction by reducing government intervention. This hypothesis is confirmed by the estimated mediating effect of government intervention in 5.4.1. By establishing a trading market in which pollution shares circulate as commodities, ETS releases some of the government's environmental regulatory power and relieves some of the pressure on government financial subsidies, so that enterprises can act on their environmental awareness by trading emission shares.
Hypothesis 3: Emissions trading policy can achieve SO2 emission reduction by promoting technological innovation; this is confirmed by the estimates for technological innovation in 5.4.2. Because ETS is a traded policy, environmentally conscious units can gain additional revenue and invest it in technological R&D. The government also subsidizes units that redirect their processes and production toward environmental protection, which stimulates innovative research and development in this field.
Hypothesis 4: Emissions trading policy promotes SO2 emission reduction by improving energy efficiency; this hypothesis is confirmed by the estimated mediating effect of energy use efficiency in 5.4.3. The environmental pressure created by ETS makes enterprises pay more attention to energy use efficiency, which greatly reduces energy waste and promotes the rational and effective allocation of energy among production units, thereby achieving SO2 reduction.
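The mediating-effect estimates behind Hypotheses 2 to 4 can likewise be sketched as a three-step regression sequence. The mediator name (gov) and the simplified specification are again illustrative assumptions; the paper's actual mediators are the variables GOV, GTIA/GTIB, and RUE, with their own definitions and controls.

```python
# Illustrative three-step mediation test on the same hypothetical panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel_2004_2018.csv")
df["did"] = df["treated"] * df["post"]
fe = "C(city) + C(year)"                                       # two-way fixed effects

step1 = smf.ols(f"ln_so2 ~ did + {fe}", data=df).fit()         # total effect on SO2
step2 = smf.ols(f"gov ~ did + {fe}", data=df).fit()            # effect on the mediator
step3 = smf.ols(f"ln_so2 ~ did + gov + {fe}", data=df).fit()   # direct + mediated effect

# A significant step-2 coefficient together with a shrunken (still negative)
# DID coefficient in step 3 indicates that part of the ETS effect runs through
# the mediator; a sign reversal would suggest a masking effect, as reported
# for GTIB.
print(step1.params["did"], step2.params["did"], step3.params[["did", "gov"]])
```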
In addition, because of regional differences, heterogeneity analysis is conducted in 5.6, along two dimensions: resource-based versus non-resource-based cities, and old industrial bases versus non-old industrial bases. The heterogeneity analysis shows that resource endowments and regional industrial-base advantages make resource and heavy industries the pillars of the regional economy, which also puts pressure on the regional environment. Consequently, in regions with rich resource endowments or strong industrial bases, promoting emission reduction through ETS is relatively more complex.
Compared with previous research, this paper confirms some existing results while making three advances. First, the research sample is city-level. Second, the method includes detailed robustness tests, which previous studies have neglected. Third, in terms of perspective, existing scholars analyze the transmission path from ETS to SO2 from the energy and government angles, whereas this paper additionally conducts heterogeneity analysis from the perspectives of resource-based cities and old industrial base cities, thereby enriching research in the field of ETS.
Conclusions
This paper examines the effect of ETS on SO2 emissions in China and its mechanisms of influence, based on data for 285 cities (excluding Tibet) from 2004 to 2018. The conclusions are as follows. First, ETS has a significant suppression effect on SO2 in China, and the results still hold under the parallel trend test and the placebo test.
Second, the mechanism tests show that ETS can effectively suppress SO2 by reducing GOV, stimulating GTIA, and improving RUE, while GTIB has a masking effect. Third, across the East, Central, and West regions, ETS has a significant suppression effect on SO2 emissions in the Eastern and Central regions, with a stronger effect in the East. Fourth, in terms of factor endowment, ETS significantly suppresses SO2 in both resource-based and non-resource-based cities, with non-resource-based cities outperforming resource-based cities; within resource-based cities, the effect exists only in regenerating cities. Fifth, in terms of industrial base zoning, ETS significantly suppresses SO2 in both old industrial base cities and non-old industrial base cities, with a stronger suppression effect in the latter.
Based on the above findings, this paper puts forward targeted policy recommendations addressing the specific challenges that China's ETS faces in curbing SO2 emissions at this stage.
First, ETS is an essential environmental regulation tool for managing SO2 emissions. The government should refine the design of pricing and quota systems to raise the efficiency of market transactions. At the same time, it should enhance the price regulation mechanism and rationalize the pricing of trading shares, providing a sound market environment and trading platform for ETS implementation. In addition, the government should gradually expand the scope of markets and participants in the emissions trading system and bring more pollutants into the scope of trading.
Second, in terms of government intervention, the government should improve the market mechanism and strengthen market regulation. On this basis, it should strengthen information disclosure, promote healthy competition, and reduce unnecessary market intervention, so that ETS can give full play to the role of the market mechanism in environmental governance.
Third, regarding the green technology innovation path, the government should strengthen intellectual property protection, establish a green technology standard system and whole-life-cycle product management, and enhance risk identification and control, thereby creating a good innovation environment. At the same time, it should use government incentives to motivate enterprises to participate actively, increase their R&D investment, and develop new low-pollution, clean technologies that improve green innovation, while increasing subsidies for units that make outstanding contributions in the field of GTI.
Fourth, in terms of resource utilization efficiency, the government should strengthen regulation of the energy supply industry and energy prices, encourage energy suppliers to adjust their supply structure, and prevent predatory extraction by energy suppliers. Under the premise of controlled environmental risks, targeted utilization measures should be implemented for energy-consuming enterprises, and production units should be encouraged to reduce energy dependence. At the same time, cooperation with universities and research institutes should be developed fully, relevant environmental technologies and talent introduced vigorously, and social funds encouraged and guided to participate in ETS implementation.
Fifth, in response to uneven regional development, the government should pay attention to regional differences in SO2 reduction effects during ETS implementation, fully consider each region's development characteristics, and promote policy implementation in a targeted manner. Across the East, Central, and West, the government should actively guide capital into the western region and encourage the West to adopt new industrial models and development concepts, accelerating the adaptation of technology levels to regional industrial structures and thereby making more effective use of the region's rich natural factor endowments. For resource-based cities, the government should encourage regenerating cities to eliminate energy dependence and promote new industries, actively help mature and growing cities realize economic transformation, and guide declining cities to introduce low-pollution, high-value-added industries. For old industrial bases, ETS implementation should be strengthened in highly polluting areas, encouraging them to use their own scientific and technological potential to accelerate the commercialization of achievements and change their production methods.
Places to Intervene in a Socio-Ecological System: A Blueprint for Transformational Change
The scientific community and many intergovernmental organizations are now calling for transformational change to the prevailing socioeconomic systems, to solve global environmental problems, and to achieve sustainable development. Leverage point frameworks that could facilitate such transformative system change have been created and are in use, but major issues remain. Scholars use the leverage point term in multiple contradicting ways, often confusing it with system outcomes or specific interventions. Accordingly, the underlying structural causes of unsustainability have received insufficient consideration in the proposed actions for transformational change. In this work, I address these issues by clarifying the definition for leverage points and by integrating them into a new blueprint for transformational change, with clarified structure and clearly defined transformational change terminology. I then theoretically demonstrate how the nine phases of the blueprint could be applied to both plan and implement transformational change in a socio-ecological system. Although the blueprint is designed to be applied for socio-ecological systems at national and international scales, it could also be applied to plan and implement transformational change in various sub-systems.
Introduction
Human actions throughout time, particularly after the industrial revolution, have placed ever-growing pressure on environmental systems and, as a consequence, some planetary boundaries have already been exceeded [1,2] and the Earth system is now on the precipice of exceeding several environmental tipping points [3,4]. Many of nature's supporting and regulatory functions have been compromised, which has led to global climate change [5] and the sixth mass extinction event in Earth's history [6]. Over the years, scientists have recognized and anticipated these ongoing global environmental problems and given repeated warnings calling for environmental action [7][8][9][10][11].
In response, nations have sought to address the environmental concerns concurrently with addressing issues related to human development, such as poverty and inequality, through the 2000-2015 Millennium Development Goals [12] and the successive 2030 Agenda for Sustainable Development [13]. Despite these efforts, the 2019 Global Sustainable Development Report (GSDR) on the progress towards the Sustainable Development Goals (SDGs) concluded that "the world is not on track for achieving most of the 169 targets that comprise the Goals" [14]. Climate change and biodiversity loss, together with rising inequalities and increasing waste outputs, were identified as issues "with cross-cutting impacts across the entire 2030 Agenda" that nonetheless "are not even moving in the right direction" [14].
In the latest global assessment report on biodiversity and ecosystem services, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) argued that "current structures often inhibit sustainable development and actually represent the indirect drivers of biodiversity loss", which is why "fundamental, structural change is called for" [15]. In this context, "structural" refers to the organization of system parts, processes, and rules, not to the sectoral composition of economies. This is also what the word "structural" refers to in the work at hand.
The dictionary definition of the word "transform" includes three interpretations: (1) "to change in composition or structure"; (2) "to change the outward form or appearance of"; and (3) "to change in character or condition" [20]. Therefore, transformational change (TC) can be understood as change that influences the system composition or structure, which then changes the character and condition of the system, which in turn changes its outcomes and outward appearance. This helps complement the IPBES definition, which lacked such causality. Hence, I define TC as fundamental and comprehensive structural change that influences the components and functions of a system, thereby changing its emergent outcomes. The components and functions that affect system outcomes include all technological, economic, and social factors (including values and goals).
Not only is it important to recognize the need for TC, but it is also necessary to understand how TC in socio-ecological systems could be achieved by utilizing specific points of leverage. Consequently, leverage points (LP) for transformational system change have been the focus of much research as of late, e.g., [21][22][23][24][25], and Chan et al. [24] have incorporated their version of the LPs in a framework for TC that is currently informing the IPBES [26,27] and the Convention on Biological Diversity (CBD) [18].
However, several major issues remain to be addressed. Scholars use the LP and other terms related to TC in multiple contradicting ways [22,23] and the underlying structural causes of unsustainability have received insufficient consideration, as too much emphasis is given to values and goals over structural realities.
In this theoretical work, I seek to address these issues and demonstrate how the LP approach could be improved, with literature-based and logical argumentation. In Section 2, I compare two influential LP frameworks, those of Meadows [28] and Chan et al. [24], to illustrate prevailing issues, to clarify the definition of LPs, and to demonstrate how the LPs could be applied to facilitate change in socio-ecological systems. In Section 3, I will integrate the LPs into an improved blueprint for TC, with clarified structure and TC terminology. I will then theoretically demonstrate how the nine phases of the TC blueprint could be applied to plan and implement TC in a socio-ecological system. In Section 4, I will discuss the findings of this work, giving particular focus to the role of values and goals in system change and the potential uses of the new blueprint. In the concluding Section 5, I emphasize the importance of clearly defining terms used in sustainability research and the importance of identifying and addressing the underlying causes of systemic problems to achieve global sustainability.
Leverage Points for Socio-Ecological System Change
An influential (and, to the best of my knowledge, the first) LP framework for system change was created by Donella Meadows in 1999, who defined LPs as "places within a complex system (a corporation, an economy, a living body, a city, an ecosystem) where a small shift in one thing can produce big changes in everything" [28]. With this framework, Meadows focused on all systems at an abstract level, identifying common shared properties that could be used to change the dynamics or outcomes of a system and ranking these properties based on their potential power over the system.
Another influential development was made by Chan et al. in 2020 [24], who were inspired by the LPs defined by Meadows but considered them to be ill-suited for addressing complex global socio-ecological system change that has multiple contesting purposes [15,24]. Instead, through a process of iterative expert deliberation, the authors identified eight LPs for societal transformation, which they defined as places "where to intervene to change social-ecological systems", and five levers, which they defined as "the means of realizing these changes, such as governance approaches and interventions", for implementing TC [24]. Table 1 presents a comparison of the items identified by Meadows and Chan et al.

Table 1. A comparison between the leverage points identified by Meadows [28] and the levers and leverage points identified by Chan et al. [24]. The leverage points of Meadows are presented in order of decreasing importance.

Meadows [28], leverage points:
1. The power to transcend paradigms
2. The mindset or paradigm out of which the system (its goals, structure, rules, delays, parameters) arises
3. The goals of the system
4. The power to add, change, evolve, or self-organize system structure
5. The rules of the system (such as incentives, punishments, constraints)
6. The structure of information flows (who does and does not have access to what kinds of information)
7. The gain around driving positive feedback loops
8. The strength of negative feedback loops, relative to the impacts they are trying to correct against
9. The lengths of delays, relative to the rate of system change
10. The structure of material stocks and flows (such as transport networks, population age structures)
11. The sizes of buffers and other stabilizing stocks, relative to their flows
12. Constants, parameters, numbers (such as subsidies, taxes, standards)

Chan et al. [24], leverage points (priority points for intervention):
1. Visions of a good life
2. Total consumption and waste
3. Latent values of responsibility
4. Inequalities
5. Justice and inclusion in conservation
6. Externalities from trade and other telecouplings
7. Responsible technology, innovation, and investment
8. Education and knowledge generation and sharing

Chan et al. [24], levers (management interventions):
A. Incentives and capacity building
B. Coordination across sectors and jurisdictions
C. Pre-emptive action
D. Adaptive decision-making
E. Environmental law and implementation

Being a part of the influential IPBES report [26], the framework of Chan et al. was also embraced by the CBD, whose Global Biodiversity Outlook 5 report stated that these levers and LPs "may be targeted by leaders in government, business, civil society and academia to spark transformative changes towards a more just and sustainable world" [18]. More recently still, the framework of Chan et al. was also used in the IPBES-IPCC co-sponsored workshop report on biodiversity and climate change [27].
Chan et al. referred to their work as a "framework of interventions" [24], and as such it has value. The authors detailed several areas where TC is needed, what actions could be taken to address specific problems, and they gave evidence-based guidance for decision makers on how these practices should be implemented [24].
My point of contention is with none of the above, but specifically with the way the LP term was used, and with how the provided TC framework placed too much emphasis on values and gave too little consideration to other underlying structural causes of unsustainability in the socio-ecological system. The rest of this paper seeks to demonstrate why these issues should be addressed and how, starting with the LPs.
Clarifying the Definition of Leverage Points
Whereas the LPs of Meadows [28] were intentionally so broad that they could be applied to any social system at any scale, Chan et al. focused their LPs on the socio-ecological system primarily at the global to national scale [24]. However, unlike the list of Meadows, most of the LPs listed by Chan et al. are not points of leverage that address the properties (components or processes) of the system; instead, they list some of the important outcomes of the system and areas where TC interventions are needed. For example, "total material consumption and waste" does not directly identify anything about the system itself that would need to change, or any specific actions. Instead, it implies that something needs to happen to reinforce and enable controlled changes to the levels of consumption and waste. Although Chan et al. accurately argue that total volumes of consumption and production must decrease among the wealthier countries and economic classes and increase among the more disadvantaged [24,29,30], they specified no actions or processes that could allow these changes to take place, beyond changes in values.
I have chosen to focus on the work of Chan et al., but they are by no means alone in misusing the LP term. In a recent special issue on "Leverage Points for Sustainability Transformations", Leventon et al. confirmed that authors use the term in multiple contradicting ways, writing: "It is evident through this collection that the papers do not always agree with how systems are (or should be) framed, nor use the same terminology to describe the fundamental components of the leverage points framework: the system, the lever, the leverage points, the interventions, etc. What is a leverage point for one author, is a system or an intervention for another" [22]. It is my view that this creates unnecessary confusion, which not only reduces the quality of the science but can also lead to adverse real-world consequences when the "LPs" are used to guide decision makers.
Meadows defined LPs as places "where a small shift in one thing can produce big changes" in the whole system [28]. Similarly, Chan et al. defined LPs as priority points for intervention, "where to intervene to change social-ecological systems" [24], and recently Linnér and Wibeck proposed to define LPs as "The part of the system that can be influenced for a proportionally greater effect on the whole system" [23]. However, these broad definitions may not be clear enough. When taken out of context, they can be interpreted as referring to anything small that can change something big. This is surely not what Meadows meant, which is why she defined the 12 specific LPs (Table 1). Authors should, therefore, always seek to refer to these 12 points, not just the overall definition. Furthermore, an intervention is not an LP. An intervention can tap into an LP when that intervention influences one or more of the specific system properties (LPs).
With the goal of clarifying this concept, I have rephrased the broad definition for LPs to the following form: LPs are key system properties where focused interventions can give rise to large changes in the behavior of a system. Here, the "key system properties" refer to the 12 points of Meadows [28].
An LP perspective can help in understanding how to create fundamental systems change towards sustainability [22], but only when that perspective correctly interprets what LPs are and what they are not, i.e., when the term is clearly defined and used. Defining terms clearly and using them consistently is what allows scientists to communicate effectively, within and outside academia. Using the LP term without defining it, or having multiple conflicting definitions for it, leads to problems. There is even a danger of turning a useful term into a buzzword devoid of any functional meaning.
Applying Leverage Points for Transformational Socio-Ecological Change
Chan et al. [24] decided to diverge from the LP typology of Meadows [28] because they deemed her typology ill suited to the context of complex global socio-ecological systems. However, in the following paragraphs, I show that the earlier typology of Meadows [28] can be used for this purpose, and that applying it reveals many valuable points and insights missing from the newer framework.
When comparing the framework of Chan et al. [24] to the LPs of Meadows [28], it seems that only two or three actual points of leverage are addressed by the newer framework of Chan et al. Firstly, "visions of a good life" can be seen to correspond to the paradigm shift in values that Meadows had high in her list (Table 1). Since societies have emerged from the interactions of minds, with each other and with the external world, and continue to be maintained by these minds, envisioning a new paradigm that facilitates the achievement of a good life in a new way can be a powerful factor in enabling socioeconomic changes that lead to justice and inclusion, changes in the levels of total material consumption and waste, reduced inequalities, and so forth. Secondly, "education and knowledge generation and sharing" is important in changing the minds of people and, thus, shifting shared societal goals towards achieving the new paradigm. These two LPs of Chan et al. address the first two LPs of Meadows. Thirdly, the call to "unleash latent capabilities and relational values" may be interpreted as an inference to enabling a positive feedback loop that supports TC.
The highest of the unaddressed LPs was the goals of the system (LP 3 of Meadows). System goals are different from shared societal goals in that they emerge from the incentive structure of the system, not from the will (or values) of people; the two can conflict. To understand the structural incentives, it is important to identify the underlying structural causes that reinforce harmful or restrict beneficial behaviors within the system. Addressing such structural constraints is crucial for being able to utilize the other points of leverage covered by Meadows, such as applying critical changes to key parameters like taxes, subsidies, other policies, and the rules of the system. Key parameters are those that can influence the underlying structures and mechanisms, and critical means changes that surpass the normal range of variation for the parameter values, going beyond the status quo and leading to large changes in the whole system [28].
The points of Chan et al. (Table 1) can be seen as identifying some of the important areas such critical changes should seek to address. For example, "Externalities from trade and other telecouplings" can be addressed by implementing new negative feedback loops (LP 8 of Meadows), such as strong enough ("critical") cap-auction-trade systems for environmentally harmful inputs and outputs to help internalize externalities [31,32], and by adding new rules or changing existing parameters (LPs 5 and 12 of Meadows), such as ecological tariffs that influence trade [31,32]. Adding missing negative feedback loops helps balance the system into a new, more sustainable, state.
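To illustrate how a cap-auction-trade mechanism acts as such a balancing feedback, the toy sketch below allocates a fixed emission cap among hypothetical bidders. The firms, bids, and numbers are invented for illustration, and a real system would also include secondary trading and price formation; the point shown is only that the cap bounds aggregate emissions regardless of demand.

```python
# Toy cap-auction illustration: total emissions cannot exceed the cap,
# regardless of demand. Firms and numbers are hypothetical.
def auction_allowances(cap: float, demands: dict) -> dict:
    # demands: firm -> (quantity wanted, willingness to pay per unit)
    allocation = {}
    remaining = cap
    # Serve the highest bidders first, a stand-in for a uniform-price auction.
    for firm, (qty, _bid) in sorted(demands.items(),
                                    key=lambda kv: kv[1][1], reverse=True):
        allocation[firm] = min(qty, remaining)
        remaining -= allocation[firm]
    return allocation

firms = {"steel": (60.0, 12.0), "cement": (50.0, 9.0), "power": (40.0, 15.0)}
alloc = auction_allowances(cap=100.0, demands=firms)
print(alloc, sum(alloc.values()))   # total allocated emissions == 100.0 == cap
```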
Restructuring or improving the information flows can also be relevant for transforming complex global socio-ecological systems (LP 6 of Meadows). Restructuring information flows means increasing information availability where it is relevant for the well-being of people and nature, to strengthen feedback, accountability, and public engagement. As Meadows identified, "Missing feedback is one of the most common causes of system malfunction" [28]. Due to it being relatively cheap and easy compared to other LPs, restructuring information flows should be combined with the other interventions early on in the transformation [28]. However, information alone is not enough unless it can affect influential feedback loops. For example, every year, societies are reminded earlier and earlier about the Earth Overshoot Day, but this has no visible impact on the system functions because the adaptive and correcting mechanisms of the prevailing system structure are too weak compared to the strong self-reinforcing feedbacks that maintain the prevailing harmful behavior.
The critical changes to key parameters and rules, implementation of new negative feedback loops, and information sharing can be supported by recognizing and targeting existing positive feedback loops in the system that act to reinforce old unsustainable patterns, which can create obstacles to change. One example of such is the positive "success to the successful" loop [28], which can lead to inequality and regulatory capture. With regulatory capture, agencies that should regulate the market become dominated by the industries they are supposed to regulate, with lobbying being one of the most visible manifestations of this process [15,19,33]. This creation and empowerment of vested interests [33,34] is just one way the system reinforces itself and creates resilience against change.
Even more common is the resistance of ordinary people to change, owing to how their livelihoods are often tied to the old unsustainable patterns, which creates a positive feedback loop where outdated structures support the short-term security and gain of people, who in turn wish to maintain the old structures. This discrepancy between long-term interests at the societal level and the short-term rewards at the individual level has long been recognized as a "social trap" [35][36][37]. In addition to material needs, the psychological needs and worldviews of people and businesses are also tied to the status quo, which has created a "social logic of consumerism" and materialism [38]. Not all positive feedback loops are harmful, however. They can also be strategically utilized, as Chan et al. [24] recognized, by creating new feedback loops that reinforce new sustainable patterns (LP 7 of Meadows).
Addressing stocks and flows, such as infrastructure and networks, is also relevant when seeking changes to the socio-ecological system (LP 10 of Meadows), as is ensuring that stabilizing buffers, such as the extent and capacity of social security, are sufficient to abate the impacts of TC on employment levels and livelihoods (LP 11 of Meadows). Accounting for the length of delays relative to the rate of system change (LP 9 of Meadows) can also help prevent over- and understeering TC [28].
Changing the underlying structures (LP 4 of Meadows) with the help of higher order LPs is required to implement changes through the other LPs relating to system feedbacks, rules, stocks, buffers, and so on, and to align the goals of the system with the new societal goals. Such fundamental TC also provides an opportunity to add self-organizational capacity to the system that allows the system to adapt and evolve, increasing resilience and facilitating sustainable development in the long-term [28]. After a systemic and structural transformation, the policies that influence the levels of stocks, flows, constants, and (other) parameters can be optimized to increase socio-ecological-economic fairness and efficiency.
Summarizing and Focusing the Leverage Points
To provide a clear list of LPs that could be specifically used to guide transformational socio-ecological change in the context of complex international and national socio-ecological systems, I have reorganized and reduced Meadows's list of twelve LPs to the following five, based on the above discussion:
1. Societal goals: to lead and motivate transformation;
2. Structural goals: to address the underlying causes of problems;
3. Key parameters: to redirect the whole system with critical changes;
4. Information flows and feedback loops: to facilitate change and help overcome obstacles;
5. Flows, constants, and other parameters: to optimize a transformed system.
These are the places to intervene in a socio-ecological system, in order of decreasing importance. It must be emphasized that this ranking of LPs is based on their relative power over the system, not the sequence in which the points should be utilized. In fact, change makers may not have access to the higher LPs like structural goals right away, which is why they might have to start with information sharing and feedback loops first, thereby influencing key parameters and societal goals. After a sufficient demand (critical mass) is created, the structural goals can finally be addressed, which ultimately determine the system outcomes. As Fischer and Riechers [25] have emphasized, LPs can interact with each other, and sometimes deeper changes are needed for less powerful actions to work, whereas other times shallower changes can be used to pave the way for deeper changes.
A New Blueprint for Transformational Change
As a part of the IPBES report [26], Chan et al. [24] utilized their list of LPs to create an iterative framework for achieving TC (Figure 1). In it, they classified drivers of environmental problems and recognized various interventions and decision-making approaches that could be utilized to transform the socio-ecological system into one that is sustainable. The way Chan et al. visualized TC as an iterative process of interventions is valuable, as it demonstrates the dynamic and interlinked process of socio-ecological change (Figure 1). Importantly, Chan et al. also correctly identified that to achieve TC, focus should be expanded from direct drivers to indirect drivers [24].
This important and influential framework of Chan et al. [24] could be improved by adding in the missing LPs of Meadows and by clarifying the terminology in two ways: first, by making a clear separation between outcomes, specific interventions, and LPs, and second, by not calling the interventions "LPs", as argued in the previous section. Other shortcomings can also be identified: the framework lacks sufficient consideration for the underlying causes of global environmental problems, and it does not consider potential obstacles for TC. Lastly, the terms used in each step of the framework have not been clearly defined.

Figure 1. The iterative framework for transformational change of Chan et al. [24] and IPBES [26]. Everything else follows Chan et al. [24] except the box with drivers and human activities, as this part included more details in the earlier version [26]. Besides adding the word "Outcomes", which was signified with an illustration in the original figures, the text has not been altered in any way and retains the original emphasis. Chan et al. [24] explained the emphasis in the leverage point box by writing that "At the leverage points (bolded), we have specified actions consistent with transformative change to sustainability (unbolded)". I have simplified the figure style to facilitate comparisons with Figure 2. Modified from [24,26].
In this section, I address the needed improvements by creating a new version of the framework of Chan et al. [24,26]. With this new "TC blueprint" (Figure 2), I add focus on the structural underlying causes of problems and the obstacles to change, beyond simply "values and behaviors". In addition to clarifying the structure and terminology, I also increase the applicability of the framework by dividing it into nine distinct and clearly defined phases, which are generalized enough to be applicable to any socio-ecological system in any context and at different scales.
The benefit of this new blueprint is that it clearly separates LPs from decision-making and actions, while also clearly categorizing and defining direct and indirect causes into threats, pressures, drivers, and the key underlying causes of problems. This schematic also helps visualize how "leveraging" change is not just a simple linear process. Instead, the process of this blueprint, with iterative feedback, helps account for irregularities, feedbacks, and other complex interactions of non-linear socio-ecological systems [23].
Comparing my blueprint (Figure 2) to the framework of Chan et al. (Figure 1), the general order in which stakeholders and decision makers use LPs to implement actions that influence indirect and direct causes of problems remains, as does the feedback between system outcomes and management (the "iterative learning loop"). However, this new blueprint is divided into nine specific phases, and the terminology is clarified in several respects compared to the earlier framework.
Firstly, I have included the list of five LPs summarizing Meadows's LP typology, and these are separated from interventions (phase 4) and outcomes (phase 1). I abstain from using the word "lever" alongside LPs in an effort to avoid unnecessary confusion arising from the similarity of the terms and their overlapping meanings in the English language. Instead, the five levers of Chan et al. are included in phase 2, "decision-making and management".
Figure 2. A new blueprint for transformational change. The nine phases guide the implementation (counterclockwise) or planning (clockwise) of directed and effective change to socio-ecological systems, using leverage points and addressing the underlying structural causes that can restrict system outcomes. Tables 2 and 3 provide definitions for the terms used in this blueprint.

Whereas Chan et al. used "multi-actor interventions" as another way to refer to their levers, in this new blueprint the word "intervention" refers only to actions, not decision-making and management (phase 2). Phase 2 is meant to organize the overall implementation of TC, whereas the actions (phase 4) specify the needed interventions that utilize the LPs of phase 3 to target phases 5-9.
When applying my blueprint in practice to a specific socio-ecological system, the needed interventions would be listed in the "Actions" phase, and the "LPs" of Chan et al. [24] could be used to identify and categorize important areas where actions are needed. Following the system dynamics terminology, in phase 3, I use the term "flows" to refer to the movement of matter, energy, or information through the system. "Constants" refer to variables that are set to some value and remain the same throughout time, whereas "parameters" are variables that can be altered or fine-tuned to guide the system behavior, such as subsidies, taxes, standards, and rules [28].
Table 2. Definitions of the terms used in each phase of the blueprint (Figure 2).

Phase 1. Socio-ecological outcomes: Social, economic, and environmental outcomes of the system.
Phase 2. Decision-making and management: Organization of the implementation of transformational change in a way that seeks to ensure sustainable and desirable outcomes for the socio-ecological system.
Phase 3. Leverage points: Key system properties where focused interventions can give rise to large changes in the behavior of a system.
Phase 4. Actions: Specific interventions that influence or change the feedbacks, components, or processes of the system.
Phase 5. Obstacles: Feedback loops that oppose changes to the system components and processes.
Phase 6. Underlying causes: Key functions of specific structural components of the socioeconomic system that either cause drivers to lead to negative outcomes or that impede the operation of balancing feedback loops. I use the word "structural" to refer to the organization of system parts, processes, and rules, not to the sectoral composition of economies.
Phase 7. Drivers: The structural components of the socioeconomic system and their functional organization that influence the formation and intensity of pressures, and conditions (system dynamics) that restrict capacity for action.
Phase 8. Pressures: Socioeconomic processes or conditions that reinforce behavioral patterns that lead to direct threats.
Phase 9. Threats: Direct causes of negative outcomes.

Table 3. Decision-making and management (phase 2) terms from the IPBES report [15,26], with new clarified definitions and reasoning.
Pre-emptive and adaptive: The potential outcomes of the planned changes are evaluated in advance, and the actual outcomes are used to inform consequent actions.
Precautionary and just: Precautionary means that decisions are reviewed and made with caution, avoiding unnecessary risks, so as not to carelessly apply new innovations that may prove harmful in the long term. Just means conforming to a standard of morality and correctness as defined by the group applying the blueprint (such as a nation).
Integrative, inclusive, and informed: Decision-making that is integrative and inclusive takes into consideration different perspectives and allows everyone to contribute to TC. Informed means that decision-making considers the latest scientific knowledge of various disciplines and follows the best available multidisciplinary advice.
Multi-actor response at different levels: TC can be planned and applied by individuals, businesses, NGOs, governments, and other groups, each strengthening the overall societal effort to transform.
Coordination across stakeholders and locations: Response can be more effective when people and areas work together under shared overall goals.
Mainstreaming across sectors: Different governmental, societal, and economic subdivisions all integrate the same goals or practices of transformation into their agendas.
Capacity building: A process that retains or improves the human, social, built, and natural capital needed to competently achieve the needed changes, allowing more to be done at greater efficiency.
Enforcing rules: The creation, modification, and implementation of laws, policies, and other guidelines that are needed to change the behavior of systems and people.
From Drivers to Underlying Causes
Importantly, with this new blueprint, I provide separate definitions for threats, pressures, drivers, and underlying causes (Table 2). In several previous studies, the word "driver" has been used both in the context of direct and indirect influences [24,26,39]. While not strictly incorrect, it is better to clearly differentiate between the different levels of directness, because that can help reveal causalities and prioritize actions. In my blueprint, underlying causes are the most indirect, followed by drivers and then pressures, whereas threats are the only direct influences. The threats could also be referred to as direct drivers, as the prior studies have done, but this can create unnecessary confusion, which should be avoided as the blueprint is meant to guide not only academics, but also decision makers, who in turn may use it when communicating with the general public.
Previous research has identified several (indirect) drivers. For example, the IPBES [26] listed the following: demographic and sociocultural, economic and technological, institutions and governance, and conflicts and epidemics. The WWF [39] has similarly identified consumption, economics, institutions, governance, conflicts, technology, demographics, and epidemics as drivers, stating that "In the last 50 years our world has been transformed by an explosion in global trade, consumption and human population growth, as well as an enormous move towards urbanization. These underlying trends are driving the unrelenting destruction of nature". The formation and intensity of pressures, such as agricultural expansion and the use of non-renewable forms of energy, are influenced by these drivers.
The common characteristic of drivers is that they identify structural and functional properties of the socio-ecological system that can create harmful outcomes to people and nature, or more specifically, how the structural components of the socioeconomic system (e.g., people, businesses, governments) and their functional organization (interactions through institutions, patterns and levels of consumption, etc.) influence the formation and intensity of pressures. This includes conflicts and epidemics, although they could be seen as exceptional larger order system dynamics that only intermittently constrain the capacity of societies to work towards sustainability, albeit often severely.
So far, there has been little focus on the relative importance of the drivers, and even less on finding what structural components of the socioeconomic system cause drivers to lead to negative outcomes in the first place. For example, what causes consumption levels to exceed the limits of social and ecological sustainability? Why do governments subsidize harmful practices? Why is economic growth needed? Why does the global population keep growing? Why is technology used to exploit instead of regenerate? Instead of seeking structural answers to questions like these, studies tend to treat some drivers, such as consumption, technology, or population growth, as the underlying causes, while explaining that the ultimate cause for them all is simply "values and behaviours" [15,24,26,39,40] (Figure 1). The conflation of LPs with system outcomes or specific interventions may have contributed to this insufficient consideration of the structural underlying causes in prior research.
By placing values and behaviors before drivers, the previous studies have inadvertently overlooked the importance of identifying what the key underlying structural causes are. Focusing on values and behaviors as the ultimate drivers of environmental problems can also have the effect of directing blame towards individual choice and away from structural realities [41], although problems such as overconsumption are driven by both outdated worldviews and structural requirements and reinforcements [38,42]. Considering the structural and underlying causes helps direct focus to the context in which behaviors occur.
By iteratively asking "why" the important indirect drivers exist and "what" causes them to lead to harmful outcomes, it is possible to identify and define a set of key underlying causes behind socio-ecological unsustainability. For example, the structural reliance on, or "societal addiction" to, economic growth [43,44] has been identified as an underlying cause that not only maintains harmful behavioral reinforcements and restricts the available solution space, but also prevents the application of critical policy changes that would internalize externalities and correct telecouplings [31,38,40,44,45]. Similarly, the reliance of countries on international trade has been recognized as an underlying cause that drives down ecological and social standards, creates global inequality, and prevents countries from acting on sustainability [14,45]. It is due to the amplifying influence of these and other specific underlying causes that drivers like governance failures, overconsumption, and population growth occur and lead to harmful outcomes. In my blueprint, underlying causes influence drivers, which create pressures, which in turn create direct threats (Figure 2). Furthermore, obstacles can exist that interfere with any action, regardless of what kind of "driver" the actions are directed to address (Figure 2). These are sometimes called "barriers" to change, but I have opted to use "obstacles" instead, which better connotes something that can be overcome, even if it interferes with or slows down progress.
Decision-Making and Management Terminology
The items of phase 2 (decision-making and management) seek to aid decision makers and managers to organize the implementation of actions in a way that considers LPs and allows iterative TC to take place. All of these items directly correspond to those addressed and detailed in the IPBES report [15,26], although not all of them were included into the earlier figures (Figure 1). With my definitions (Table 3), and the new organization in the blueprint, I have merely sought to provide clarity to the points of this phase, to facilitate their implementation as best practice guidelines if the blueprint is used for planning and implementing TC.
Applying the New Blueprint
In theory, the nine phases of the blueprint could be applied to both plan and implement the transformational change of a socio-ecological system. Going through the nine phases of my blueprint, first the negative outcomes of the system are recognized in phase 1, which leads to a collective envisioning of new desired outcomes and an organization of a response in phase 2. Phase 2 is when the implementation of transformational change is planned and organized.
In this planning phase, the blueprint is applied clockwise to determine what directly threatens (phase 9) the desired outcomes, what pressures (phase 8) lead to the threats, what drivers (phase 7) cause the pressures, what ultimate and specific underlying causes (phase 6) cause the drivers to lead to pressures, and what specific actions (phase 4) would be needed to address the problems at each level, taking advantage of the LPs (phase 3). Then, potential obstacles (phase 5) are identified for each specified intervention, and actions are prioritized to address the obstacles first.
In this planning phase, the TC blueprint can be used as a part of a backcasting scenario building approach. In backcasting, criteria for a desirable future are defined first, after which a feasible and logical path is built from that future state to the present, which can help create alternatives otherwise not available through the forecasting of prevailing trends [46,47]. This makes backcasting particularly useful for considering how TC could be achieved [46].
When it comes time to implement the blueprint in practice, it is applied counterclockwise so that the system outcomes (phase 1) are used to justify and motivate the creation of new societal goals (phases 2 and 3), and the implementation of critical transformational actions (phase 4). First, the actions seek to overcome obstacles (phase 5) and fix the underlying root causes of problems (phase 6) that reinforce and necessitate behaviors that lead to the negative outcomes. Then, policies that directly address the drivers, pressures, and direct threats (phases 7-9) that impact people or nature in a negative way can be applied effectively and optimized. If the changes applied at each phase end up removing the threats to the desired social, ecological, and economic outcomes (phase 1), decision-making and management in phase 2 continues to enforce the new rules and maintain the new parameter values. If new threats emerge, the loop starts over and keeps going until the outcomes of the system are desirable.
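The iterative logic of this implementation loop can be made concrete with a short sketch. Everything in it, the class, the placeholder actions, and the stopping rule, is a hypothetical illustration of the process logic described above, not an operational tool.

```python
# Minimal, purely illustrative encoding of the blueprint's implementation loop.
# Phase contents and the "desirable outcomes" rule are placeholders.
from dataclasses import dataclass, field

PHASE_ORDER = ["obstacles", "underlying causes", "drivers", "pressures", "threats"]

@dataclass
class Blueprint:
    societal_goals: list                 # phases 2-3: goals that motivate the actions
    addressed: set = field(default_factory=set)

    def outcomes_desirable(self) -> bool:
        # Phase 1 evaluation: here, simply "every level has been addressed".
        return set(PHASE_ORDER) <= self.addressed

    def iterate(self) -> None:
        # Phases 4-9: apply actions to the earliest unaddressed level first,
        # i.e., obstacles and underlying causes before drivers and threats.
        for target in PHASE_ORDER:
            if target not in self.addressed:
                self.addressed.add(target)   # stand-in for real interventions
                return                        # feed outcomes back to phase 1

plan = Blueprint(societal_goals=["stay within planetary boundaries"])
while not plan.outcomes_desirable():          # the iterative learning loop
    plan.iterate()
print(plan.addressed)
```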
This has to be the general order of action, because addressing the later phases (threats, pressures, and drivers) without first fixing the earlier phases (underlying causes and obstacles) is like swimming against a strong current [43]. Adding ad hoc fixes that work against the structural incentives creates inefficiency and wasted resources at best, and an unsustainable fix at worst. So far, socio-ecological change has neither been sustainable nor transformational, because societies have neglected the most influential deep LPs [21,25] (1-4 in my summarized list) and actions have been directed to the threats, pressures, and drivers only, without addressing their underlying causes or properly accounting for the obstacles that create opposition to change.
Discussion
In this theoretical work, I provided clarification and improvements to existing LP frameworks that are currently informing international TC discourse and developed a new blueprint for TC. These contributions help provide more clarity on LPs and bring attention to the key underlying causes that cause drivers to lead to negative socio-ecological outcomes. Considering how the key underlying structural mechanisms behind unsustainable behaviors continue to be largely unaddressed in the sustainability discourse, not to mention in practice, it is unsurprising that efforts to achieve sustainability have so far not succeeded. The underlying causes need to be explicitly addressed and researched. Unless the TC discourse addresses this problem, it is unlikely to differ much from the sustainable development discourse in its success.
The key to creating transformational solutions that address the underlying causes is to first recognize, admit, and agree on the underlying causes, points of leverage, and the needed actions. Then, research efforts can be directed to identify potential obstacles for change and to model the likely outcomes of planned interventions. The nascent field of ecological macroeconomics has started to provide examples of relevant modelling work that can test the social, ecological, and economic outcomes of TC policies, e.g., [48][49][50][51]. The actions have to be directed to overcome the strong self-reinforcing feedbacks that maintain the prevailing harmful behavior, and to implement new adaptive and correcting mechanisms in the system structure.
Given the global inequalities in environmental impacts between higher- and lower-income nations [19,45,52,53], it is particularly important to weaken the societal reliance on consumerism and growth in high-income nations while increasing the weight given to ecological and societal well-being considerations in decision-making at all levels [40,43,[54][55][56][57]. In terms of sustainability, it is also worth emphasizing that for national and international decision-making to be "just", it must not further disadvantage the underprivileged or favor those already in positions of advantage, which would amplify harmful inequalities [11,15,26,58].
As a first step of the needed post-growth transformations in high-income nations, it is important to identify new measures for monitoring progress [55,59]. This can help societies evaluate if the system outcomes are within the "doughnut" in which social needs are met without exceeding planetary boundaries [60,61]. However, it must be emphasized that new indicators belong to phase 1 of the transformation and, although they are necessary, they alone are not sufficient for creating TC.
A shared characteristic in the frameworks of Meadows [28] and Chan et al. [24] was the emphasis on the importance of values. Chan et al. placed "Visions of a good life" and "Latent values of responsibility" high on the list of LPs. Similarly, second highest in Meadows's list was "The mindset or paradigm out of which the system-its goals, structure, rules, delays, parameters-arises". However, for the new values to have positive influence, they must be directed towards solving the underlying structural causes of problems.
Even if people understood that consumption and growth do not add to well-being beyond a certain point [59,62] and even if they embraced relational values and felt responsibility for taking care of the environment, that still would not be enough to implement TC, unless people feel a need to change the familiar but harmful socioeconomic structures. The new sustainable value systems can be viable in both appearance and practice only when they explicitly consider the underlying structural problems and how they could be solved without risking the security and well-being of citizens. Otherwise, people might not embrace the needed value shifts and solutions. Promoting a new ecologically and socially sustainable value system could even lead to an increase in paralyzing forms of eco-anxiety or eco-anger [63], if the new value system does not direct people towards recognizing, demanding, and creating structural solutions to the underlying causes.
This important point can be clarified by modifying Meadows's bathtub analogy of a system. Consider the socio-ecological system as a bathtub that has too much hot water. Previously, when the water was considered too tepid, a system developed that effectively incentivized everyone to only run hot water. Consequently, a structural constraint was created to the faucet, which meant that later generations could not simply adjust the water to colder temperatures when the bath started to be too hot for comfort, even as the values and goals changed. The discord between observed reality and desired system state creates anguish and despair. Only when the structural problems with the faucet are fixed, can the lower temperatures (new goals) be achieved. Until then, attempts to change parameters (policies, taxation) can only determine whether the water is going to keep increasing in temperature rapidly or a bit slower, and new information and changing preferences can only keep increasing anxiety about the worsening situation, unless the emerging values are directed towards solving the structural problem.
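To make the analogy concrete, the following toy stock-and-flow simulation is my own illustrative construction, not taken from the cited frameworks; all parameter values are arbitrary. It shows why parameter changes alone cannot reach the new goal while the structural constraint on the faucet persists:

```python
# Toy bathtub model: temperature is a stock; the faucet adds heat each step and
# passive heat loss removes some. A structural constraint sets a floor on the
# faucet's inflow, so "parameter" policies can only slow the heating.

def simulate(structural_min_inflow, policy_inflow, steps=50, start_temp=40.0, heat_loss=0.5):
    temp = start_temp
    for _ in range(steps):
        inflow = max(policy_inflow, structural_min_inflow)  # faucet cannot go below its floor
        temp += inflow - heat_loss
    return temp

# Policy change alone: the structural floor (1.0) exceeds heat loss (0.5),
# so the tub keeps heating despite the lowered policy setting.
print(simulate(structural_min_inflow=1.0, policy_inflow=0.2))  # 65.0: hotter than before
# Structural fix: remove the floor, and the same policy now cools the tub.
print(simulate(structural_min_inflow=0.0, policy_inflow=0.2))  # 25.0: the new goal is reachable
```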
Since all systems must adapt to TC, the blueprint offered here has a wide range of potential applications, and the fact that I have separated the LPs from specific outcomes or interventions improves the applicability of this blueprint compared to the earlier framework of Chan et al. [24]. One particularly important application for the blueprint would be to define the causal hierarchy of global environmental problems, focusing on establishing the underlying causes that countries would need to address. The TC blueprint could also be applied to solve problems in the social sphere: considering the specific threats and pressures that are reducing human well-being [13,60], listing the drivers that influence the formation and intensity of those threats and pressures, identifying the key underlying causes that necessitate or reinforce the drivers, and then forming solutions that address those problems following the phases of the blueprint.
In addition, for socioeconomic systems to stay within environmental carrying capacity, the CBD [18] expects transitions in land-use, forestry, freshwater systems, fisheries and the use of oceans, agriculture and food systems, infrastructure, climate action, and health systems. The blueprint could be applied for planning and implementing TC in each of these sub-systems. Although solutions at the level of sub-systems cannot influence the underlying incentive structures of the socio-ecological system, the blueprint could be used to plan and implement changes that help improve the sub-systems and conform them to the larger TC occurring at the societal and international scales.
Conclusions
Since the late 20th century, we have been living in a "full world" where every additional unit of nature appropriated for human use presents trade-offs with dangerous consequences, which imposes new rules on socioeconomic systems [32,35,64]. To quote IPBES [34], "it is increasingly clear that structural, systemic change is necessary, and continuing along current trajectories increases the likelihood of disruptions, shocks and undesired systemic change".
In this work, I outlined how the LP frameworks for socio-ecological systems could be improved. The LP concept has been identified to be a "boundary object", which can provide an entry point for transdisciplinary and multi-stakeholder collaboration on complex system change [25]. To avoid creating further confusion, not only among scholars but also among the decision makers whom sustainability research aims to help, authors should always clearly define the terms they use. Specifically, they should clearly separate interventions and actions from LPs. Furthermore, reviewers of scientific manuscripts should make sure that if authors claim that some intervention addresses a LP, they must also argue how that intervention relates to the actual points of leverage originally recognized by Meadows, i.e., the key system properties where focused interventions can give rise to large changes in the behavior of a system. The LP term has a specific meaning and should not be used as a buzzword.
After outlining the needed improvements to LP frameworks, I integrated them into a new blueprint for TC, with clarified terminology and structure. I then used the TC blueprint to theoretically demonstrate how its nine phases could be applied to plan and implement TC in a socio-ecological system. The blueprint is an improvement on previous frameworks, due to its clarified structure and terminology, and although it was designed for socio-ecological systems, it could also be applied to plan and implement TC in various sub-systems at different scales. I propose that the terminology I have clarified and defined in Tables 2 and 3 should become the new standard for TC discourse when addressing socio-ecological systems, which might make TC plans more approachable for a wider range of stakeholders.
Any set of solution proposals that seek to make the socioeconomic system ecologically and socially sustainable, seeking true TC, must systemically and successfully identify and address the underlying causes of the global problems. The blueprint I have developed could help academics and societies achieve this, helping to balance the social, ecological, and economic net-benefits of consumption, production, and trade, thereby bringing the scale of economies into balance with Earth's carrying capacity. When combined with the policies and modelling tools developed in the field of ecological economics, this blueprint could help achieve the targets set for mitigating global environmental problems and for achieving sustainable development goals.
PACAP-PAC1 receptor inhibition is effective in opioid induced hyperalgesia and medication overuse headache models
Summary Opioids prescribed for pain and migraine can produce opioid-induced hyperalgesia (OIH) or medication overuse headache (MOH). We previously demonstrated that pituitary adenylate cyclase activating polypeptide (PACAP) is upregulated in OIH and chronic migraine models. Here we determined if PACAP acts as a bridge between opioids and pain chronification. We tested PACAP-PAC1 receptor inhibition in novel models of opioid-exacerbated trigeminovascular pain. The PAC1 antagonist, M65, reversed chronic allodynia in a model which combines morphine with the migraine trigger, nitroglycerin. Chronic opioids also exacerbated cortical spreading depression, a correlate of migraine aura; and M65 inhibited this augmentation. In situ hybridization showed MOR and PACAP co-expression in trigeminal ganglia, and near complete overlap between MOR and PAC1 in the trigeminal nucleus caudalis and periaqueductal gray. PACAPergic mechanisms appear to facilitate the transition to chronic headache following opioid use, and strategies targeting this system may be particularly beneficial for OIH and MOH.
INTRODUCTION
Opioid analgesics in current use (e.g., hydrocodone, oxycodone, meperidine) act primarily at the mu opioid receptor (MOR) and are still commonly prescribed for migraine. 1,2 Although opioids may provide acute relief, chronic use results in increased severity and progression of migraine from an episodic to a chronic state 3,4 ; a phenomenon known as medication overuse headache (MOH). 5 Furthermore, excessive use of prescription opioids by migraine patients is also a significant public health issue. [6][7][8] Over 50% of patients in a recent study were prescribed opioids at some point for headache, 2 and a large-scale epidemiological study published in 2020 found that 36% of migraine patients continued to use opioids, or keep them on hand, for headache management. 1 In another study, opioids were administered in over half of all emergency room visits for migraine, and repeat visits to the emergency room were associated with opioid prescription. 9 Even after successful initial opioid withdrawal, 20-50% of patients relapsed within the first year, and most within the first 6 months. 10 The continued prescription of opioids for migraine puts a substantial number of patients at risk of developing MOH and potentially of prescription drug misuse and abuse. 11,12 There is thus a desperate need to find effective therapies for opioid-induced MOH.
Pituitary adenylate cyclase-activating polypeptide (PACAP) has emerged as a therapeutic target for migraine. [13][14][15][16] This neuropeptide is found in two forms: a 38 amino acid peptide (PACAP38) and a truncated version, PACAP27. 17 PACAP38 is the predominant form, making up 90% of the circulating peptide, and is found in both the peripheral and central nervous system. 18 PACAP38 can bind to one of three Gs G-protein coupled receptors (GPCRs): VPAC1, VPAC2, or PAC1. 19 However, its affinity for PAC1 is 100-fold higher relative to the other two receptors. 20 In clinical studies, direct infusion of PACAP produces headache in healthy subjects and migraine in migraine patients. 21,22 Further strengthening the relationship between PACAP and migraine, PACAP38 is increased in the ictal phase of migraine patients 23 ; and the migraine therapeutic, sumatriptan, was correspondingly associated with reduced PACAP38 levels. 24 Preclinical work has likewise implicated PACAP signaling in key headache processing regions such as the trigeminal nucleus caudalis (TNC). 27 PACAP knockout mice also have reduced susceptibility to the effects of nitroglycerin (NTG), a known human migraine trigger commonly used to model migraine-associated effects in rodents. 27 The PACAPergic system may play a distinct role in opioid-induced MOH. Our lab recently performed an unbiased large-scale peptidomic study comparing mouse models of opioid-induced hyperalgesia (OIH) and chronic migraine-associated pain. 29 PACAP was one of the few neuropeptides with altered expression in both models. In confirmation experiments, we found that the PAC1 inhibitor, M65, blocked the development of cephalic allodynia in an NTG model of migraine-associated pain, as well as in a model of OIH using chronic escalating doses of morphine. 29 PACAP may act as a mechanistic bridge between opioid use and migraine chronification.
Although PACAP has been investigated as a target for migraine, to the best of our knowledge it has not been considered for opioid-induced MOH; and one of the aims of the current study was to explore this role further. To date, most preclinical studies model either chronic migraine or OIH. To better reflect clinical MOH, we first developed two models of opioid-exacerbated migraine phenotypes. We modified the commonly used NTG model of chronic migraine, in which high doses of NTG result in the development of chronic allodynia. [30][31][32][33][34] We found that much lower doses of NTG did not produce chronic hypersensitivity unless combined with chronic morphine. Furthermore, we investigated the effect of morphine on a mechanistically distinct model of migraine: cortical spreading depression (CSD), a physiological correlate of migraine aura. 35 We found that repeated escalating doses of morphine also increased CSD events. We next tested the PAC1 inhibitor M65 within these models of opioid-exacerbated migraine-associated pain and aura. M65 effectively reduced MOH-associated symptoms in both models. Finally, we investigated the cellular expression of the PACAPergic system relative to the mu opioid receptor (MOR) using in situ hybridization; and observed co-expression of MOR with PACAP or PAC1 in key headache processing regions. Together our results establish two new models of opioid-induced MOH and demonstrate that inhibition of the PACAPergic system may be an effective therapeutic strategy for this disorder.
RESULTS
Repeated opioid administration exacerbates cephalic allodynia induced by low dose NTG

MOR agonists are still prescribed for the treatment of headache disorders despite the potential to cause MOH. 4,36,37 In this case opioids are overlaid on top of migraine pathophysiology, and our first aim was to establish migraine models that reflect this interaction. To this end we repeatedly treated animals with morphine (10 mg/kg, SC) or vehicle for 11 days and paired it with an intermittent low dose of the human migraine trigger NTG (0.01 mg/kg, IP) starting on day 3 (Figure 1A). We tested cephalic mechanical thresholds across the 11-day period; responses were assessed before daily treatments (Basal Responses, Figure 1B) and 2 h following the NTG/Veh injection (Post-treatment Responses, Figure 1C). These doses of NTG or morphine alone did not produce chronic allodynia, but the combination of the two resulted in significant cephalic allodynia, as observed on days 7 and 11 (Figure 1B). The low dose of NTG produced an acute allodynia 2 h post-injection (Figure 1C, open squares). This hypersensitivity was blocked by morphine on the first day of NTG treatment (Figure 1C, filled square, day 3), but the anti-allodynic effects of morphine were lost by day 7. We also separated these data by sex, and did not observe any significant differences between males and females (Figure S1). These data suggest that morphine tolerance occurs in this model, or that the combination of NTG and morphine facilitates signaling mechanisms that promote allodynia insensitive to opioids. In sum, chronic opioid treatment exacerbates the effect of low dose NTG to produce chronic cephalic allodynia.
Opioid induced hyperalgesia model results in increased cortical spreading depression
Approximately one-third of migraine patients experience migraine aura as part of their symptoms, and cortical spreading depression (CSD) is thought to be the electrophysiological correlate of aura. 35 To examine if chronic MOR agonist treatment results in increased susceptibility to CSD we subjected mice to an established OIH model shown to cause cephalic allodynia. [38][39][40][41][42] Mice were treated with morphine or vehicle twice daily for 4 days (Figure 2A, 20 mg/kg/injection on days 1-3 and 40 mg/kg/injection on day 4). As reported previously, 38,41,42 this dosing regimen produced significant cephalic allodynia (Figure 2B). We tested mice in the CSD model 30,38,43,44 on day 5, 16 h after the final morphine treatment. Mice were recorded for an hour during continual administration of 1M KCl directly onto the dura. The total number of CSD events was counted based on visual reflectance changes and local field potential (LFP) recordings (Figure 2C). Chronic opioid treatment significantly increased CSD events relative to vehicle-treated controls (Figure 2D). This regimen of morphine did not alter the amplitude of CSD (Figure S2). We also tested the milder morphine regimen used in the mixed NTG model. In this case, mice were treated with vehicle or morphine (10 mg/kg SC) daily for 11 days and CSD was tested on day 12. This dosing regimen of morphine does not cause cephalic allodynia (Figures 1B and 1C), and did not significantly exacerbate CSD (Figure S3). These data indicate that OIH not only increases cephalic hyperalgesia but also increases the frequency of CSD events.
PAC1 antagonist inhibits allodynia induced by the mixed morphine-NTG model
We next determined if inhibition of the PACAPergic system could block cephalic allodynia in our opioid-facilitated migraine model. Mice were treated in the opioid-facilitated migraine model as described in Figure 1 (Figure 3A). On day 12, 24 h after their final drug treatment, mice were injected with vehicle or the PAC1 receptor antagonist, M65 (Figure 3A, 0.1 mg/kg IP) 45 and tested for cephalic thresholds 30 min post-treatment. At this time point, mice treated with morphine and NTG continued to show significant allodynia (Figure 3B, morphine-NTG-vehicle). Acute treatment with M65 completely inhibited allodynia in this group (Figure 3B, morphine-NTG-M65). Of importance, we also determined the effect of olcegepant (1 mg/kg IP, 2 h post-injection), a calcitonin gene-related peptide (CGRP) receptor antagonist, in this model. Olcegepant did not significantly inhibit allodynia induced by chronic morphine plus NTG (Figure 3B, morphine-NTG-OLC). These data suggest that inhibition of the PACAPergic system specifically can ameliorate opioid-facilitated migraine.
PAC1 antagonist inhibits exacerbation of CSD by chronic morphine
We tested if PAC1 inhibition could affect CSD and/or opioid exacerbation of CSD. Mice were treated in the 4-day OIH regimen described above (Figure 3C), and on day 5 were tested in the CSD paradigm. Mice were treated with vehicle or M65 (0.1 mg/kg IP) 400 s following the beginning of KCl stimulation and were recorded for an additional 3600 s. As was observed in Figure 2, chronic morphine resulted in a significant increase in CSD events in vehicle-challenged mice (Figure 3D, left panel). Intriguingly, M65 did not reduce the number of CSD events in chronically vehicle-treated animals, but it significantly reduced the exacerbation of CSD by morphine (Figure 3D, right panel). These data suggest that a PACAPergic mechanism may mediate the facilitation of CSD by chronic opioid treatment.
MOR is co-expressed with PACAP and/or PAC1 in migraine processing regions
We investigated if there was evidence for direct interaction between MOR and the PACAP-PAC1 receptor system. We used fluorescent in situ hybridization to map transcripts for Oprm1 (MOR, pink), Adcyap1 (PACAP, blue), and Adcyap1R1 (PAC1, green); and examined the following migraine/pain processing regions: trigeminal ganglia (TG), trigeminal nucleus caudalis (TNC, layers 1-4), somatosensory cortex (SSC, layers 4-5), and lateral-ventrolateral periaqueductal gray (PAG) 46,47 (Figure 4A). We also included nuclear DAPI staining (dark blue) for anatomical context. In the TG, approximately 42% of MOR+ cells also expressed PACAP, whereas only 23% expressed PAC1 (Figure 4B). In contrast, in the TNC almost all MOR+ cells also co-expressed PAC1, whereas only 22% expressed PACAP (Figure 4C). In the SSC, MOR was highly co-expressed with both PACAP and PAC1, 57% and 67%, respectively (Figure 4D). Similarly, in the PAG we observed 50% co-expression of MOR and PACAP, and almost 100% co-expression of MOR+ cells with PAC1 (Figure 4E). We also performed an initial study to determine if MOR and PAC1 transcripts were expressed in neurons. We found that MOR and PAC1 broadly co-localized with the neuronal marker, NeuN, in all four regions (Figure S4). These results demonstrate that there is very high cellular co-expression between MOR and components of the PACAPergic system.
DISCUSSION
Opioids are still commonly prescribed for the treatment of headache disorders including migraine. 1,2,36 Although opioids can provide limited relief during the acute migraine attack phase, repeated use can result in the transition to MOH. 3,4 To better understand the drivers of opioid-induced MOH we developed two models which reflect opioid exacerbation of migraine-associated pain and aura. Chronic morphine treatment combined with a low dose of the human migraine trigger, NTG, resulted in chronic cephalic allodynia not observed with either treatment alone. We also demonstrated that a traditional OIH paradigm not only resulted in cephalic allodynia but also increased CSD events. We identified the PAC1 receptor as a promising therapeutic target for opioid-induced MOH, as PAC1 inhibition blocked opioid facilitation in both of these models. Importantly, CGRP blockade was ineffective in reducing opioid-induced MOH. This finding suggests that opioid-induced MOH is distinctly regulated by the PACAPergic system, and not just by migraine mechanisms more generally. This study also demonstrates that MOR, PACAP, and PAC1 are co-expressed in numerous sites associated with head pain processing, further supporting the idea of direct interaction between these two systems.
Clinically, OIH is treated by opioid taper and cessation, which can be difficult to implement because patients are reluctant to stop using drugs which they believe are treating their pain, and because of opioid withdrawal. 48 Even after successful initial opioid withdrawal, 20-50% of headache patients relapse within the first year, and most of those patients relapse within the first 6 months. 10 A greater understanding of the pathophysiology underlying MOH would allow for discovery of more targeted treatments for this disorder. One of our aims was to produce preclinical models that reflected the mechanistic interactions between chronic opioid treatment and migraine pathology. NTG evokes migraine in migraine patients, 31 and has been used as a human experimental model of migraine. 49 Our laboratory has used chronic intermittent administration of higher-dose NTG (10 mg/kg) to model chronic migraine-associated pain. 50,51 In the current study we found that a much lower dose of NTG (0.01 mg/kg) induced acute cephalic allodynia but did not result in chronic hypersensitivity. Similarly, frequent and escalating doses of morphine result in OIH, 40,52,53 however, the opioid regimen used in our mixed NTG-opioid model was insufficient to produce OIH. Only when low-dose NTG and morphine were combined did we observe the development of chronic cephalic allodynia. MOH patients show increased pain sensitivity in cephalic and extra-cephalic regions, 54 and in epidemiological studies MOH is associated with increased pain severity and cutaneous allodynia. 1,3 Therefore, the allodynia observed in this animal model of opioid-facilitated migraine pain reflects clinical observations and may be of translational significance.
CSD captures a mechanistically different aspect of migraine relative to the NTG model. CSD is an electrophysiological correlate of migraine aura and widely held to be the cause of aura symptoms. 35 Susceptibility to CSD has been linked to increased cortical excitability. Genetic knock-in models of familial/monogenic forms of migraine show increased susceptibility to CSD. 55,56 Furthermore, multiple migraine preventives with diverse sites of action inhibit CSD, 57,58 and this model is used to screen novel preventives. We show that pretreatment of mice with an opioid paradigm shown to cause OIH 29,38,40,42,53 also increases the number of CSD events in response to KCl administration. Increased CSD events were also observed following chronic treatment with paracetamol, 59 and chronic sumatriptan exposure resulted in a decrease in the stimulation threshold required to generate a CSD event. 60 Together with our findings, these studies show that the mechanisms underlying CSD are sensitive to medication overuse, and a combined drug-CSD model can be used to model MOH.
In the NTG model, a less severe dosing regimen of morphine (10 mg/kg daily for 10 days) was sufficient to exacerbate NTG-induced chronic allodynia. In contrast, this dosing regimen did not increase CSD events. Although NTG and CSD are considered migraine triggers, they are mechanistically distinct. For example, NTG infusion in migraine patients does not evoke aura, even in migraine with aura patients. 32 Our results suggest that mechanisms regulating migraine-like pain are more sensitive to opioid administration. Considering the high expression of MOR in pain processing regions, over-stimulation of these receptors would directly feed into mechanisms regulating allodynia and hyperalgesia. Outside of headache, many clinical studies also indicate that chronic opioids can result in increased hyperalgesia and allodynia, including in patients suffering from peripheral pain or opioid use disorders. 48,[61][62][63] In a previous study we identified PACAP as a potential bridge between chronic opioid treatment and migraine chronicity. 29 We followed up on these findings and demonstrated that PAC1 inhibition effectively blocked both the chronic allodynia induced by NTG-morphine treatment, as well as the facilitatory effects of morphine on CSD. The latter findings are particularly intriguing as M65 did not itself decrease CSD events (i.e., in vehicle-treated mice), as has been demonstrated for migraine preventives. 57,58 PAC1 inhibition only decreased the augmentation of CSD induced by chronic morphine. These results support the idea that PACAPergic mechanisms are specifically enhanced in response to chronic opioid treatment, which can feed into migraine pathophysiology. A number of studies indicate that MOR, PACAP, and PAC1 are expressed at similar anatomical sites, including the PAG, trigeminal complex, and somatosensory cortex. 13,64 Our in situ hybridization studies confirmed expression in these regions and show that at a cellular level MOR is abundantly co-expressed with PACAP and PAC1. Transcript expression does not always correspond with protein expression or functional interaction, and future studies will address these issues more specifically. Nevertheless, these findings are the first step to showing that chronic opioid action at MOR could directly impact PACAP and PAC1 expression and function.
Targeting PACAP and its receptor PAC1 is under investigation as a potential migraine therapy. 13,16 A rodent-specific PAC1 antibody was able to inhibit evoked nociceptive activity in rats, 65 and we have shown that M65 can block the development of chronic migraine-associated pain induced by NTG. 29 AMG-301, a human monoclonal antibody targeting the PAC1 receptor, recently completed a Phase II clinical trial for migraine, but did not show efficacy over placebo. 66 Although disappointing, the antibody was well tolerated with minimal adverse effects, and there may still be a use for this strategy in the treatment of opioid-induced MOH or OIH. These results may also point to the need to target PACAP itself, which can also bind to VPAC1 and VPAC2. Anti-PACAP antibodies developed by Lundbeck and Lilly may address this question. Future clinical studies will also have to consider that PACAP levels increase in lactating women and that PACAP is expressed in breast milk; therefore these strategies may be contraindicated for this population. In addition, peripheral PAC1 inhibition may be insufficient for efficacy in migraine. In our peptidomic study, PACAP levels were most altered in the PAG following chronic NTG or morphine treatment, 29 supporting the role of central PACAPergic mechanisms in migraine and MOH.
The work presented in the current study suggests that PAC1 may be a particularly effective target for opioid-induced MOH. M65 effectively blocked chronic cephalic allodynia induced by morphine-NTG treatment, an effect not observed with olcegepant. Both PACAP and CGRP are implicated in migraine mechanisms, and both evoke migraine in migraine patients. 67 How MOR regulates CGRP and its receptor is relatively underexplored. Immunohistochemical analysis has shown co-expression between MOR and CGRP in trigeminal ganglia in rats, 68 but another study found very little expression of MOR in rat dura, and even rarer co-expression with CGRP. 69 In higher order brain regions, high co-expression of MOR and CALCA transcripts was observed in the parabrachial nucleus. 70 These expression patterns may change following chronic opioid or pain conditions, and this will be explored in future studies. Headache patients are heterogeneous and spontaneous migraines may be endogenously generated in each individual by different combinations of pro-migraine peptides and signaling molecules. Our findings suggest that opioid-induced MOH may be particularly weighted toward PACAPergic mechanisms; and that PACAP-PAC1 is a therapeutic target for this disorder.
Limitations of the study
In this study we found that the PACAP-PAC1 receptor system facilitated behaviors associated with opioid-induced MOH. This work suggests that PACAP- or PAC1-targeting therapies would be beneficial to patients suffering from this disorder. However, MOH patients usually have a long history of headache and have likely tried or continue to use or overuse many different therapies. Therefore, the mechanisms regulating their headaches are expected to be due to adaptations at many different levels in the central and peripheral nervous system. This heterogeneity is not captured in our preclinical study. Technically, this study was limited in that we only examined transcript expression of MOR, PACAP, and PAC1, which cannot fully reflect where the translated receptors and peptide are expressed in the cell. For example, proteins can be made in one part of the neuron but expressed or released in far-reaching projection sites. Unfortunately, antibodies targeting PACAP and the PAC1 receptor are not always selective under standard immunohistochemistry protocols. Future studies will test new antibodies to overcome this limitation.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
DECLARATION OF INTERESTS
The authors declare no competing interest.
INCLUSION AND DIVERSITY
We support inclusive, diverse, and equitable conduct of research.
We worked to ensure sex balance in the selection of non-human subjects.

One or more of the authors of this paper self-identifies as an underrepresented ethnic minority in their field of research or within their geographical location.
One or more of the authors of this paper received support from a program designed to increase minority representation in their field of research.
While citing references scientifically relevant for this work, we also actively worked to promote gender balance in our reference list.
Data and code availability
- No new code was used in these studies.
- No large dataset was generated through these studies.
- All other data are available upon request through the lead contact.
EXPERIMENTAL MODEL AND SUBJECT DETAILS

Animals
An equal number of adult male and female C57BL6/J mice (Jackson Laboratories, Bar Harbor, ME, USA, RRID: IMSR_JAX:000664) were used in this study, except for CSD experiments, where only females were used, as they are more sensitive to CSD induction. 71 Mice were aged between 9 and 16 weeks. All mice were group housed on a 12 h:12 h light-dark cycle, in which lights were turned on at 07:00 and turned off at 19:00. Food and water were available ad libitum and mice weighed between 20 and 30 g. On test days weights were recorded and the experimenters were blinded. All experimental procedures were approved by the University of Illinois at Chicago Office of Animal Care and Institutional Biosafety Committee, in accordance with Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) International guidelines and the Animal Care policies of the University of Illinois at Chicago. All results are reported according to Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. All animals were monitored continuously throughout experiments and no adverse effects were observed. All injections were in a volume of 10 mL/kg.
Key resources: the biological samples were brain and trigeminal ganglia tissue from the C57 mice used in the studies described in the Animals section of the methods.

Sensory sensitivity testing

At the beginning of each experiment a basal test for mechanical threshold was recorded, and mice were counterbalanced into groups based on this measurement. All sensory sensitivity tests were conducted in the same behavior room. The behavior room is separated from the vivarium and has low light (35-50 lux) and low noise conditions. All tests were performed during the light cycle between 08:00 and 17:00. The testing rack consisted of individual plexiglass boxes with a 4 oz paper cup in each box. Mice were habituated to the testing racks for 2 consecutive days before the initial test day. On test days mice were again habituated to the racks 20 minutes before the first test measurement. Mice were tested while in the paper cup. The periorbital region, caudal to the eyes and near the midline, was tested to assess cephalic mechanical thresholds. Manual von Frey hair filaments were used in an up-and-down method. 72 The filaments had a bending force ranging from 0.008 g to 2 g. The first filament used was 0.4 g. Following the first filament, if there was no response a heavier filament was used (up); a response resulted in the use of a lighter filament (down). Responses were defined as shaking of the head, repeated pawing, or cowering away from the filament following a bend. Following the first response, the up-and-down method was continued for 4 additional filaments. Mice were tested with the PAC1 inhibitor, M65 (0.1 mg/kg IP); the CGRP receptor antagonist, olcegepant (1 mg/kg IP); or vehicle (0.9% saline) on day 12, 24 h after their final morphine/NTG injection.
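To illustrate the filament-selection rule just described, the following is a minimal sketch of my own (not the authors' code); the filament set and the `responds` oracle are placeholders, and the subsequent conversion of the response pattern to a 50% threshold is not shown:

```python
# Up-and-down von Frey testing: start at 0.4 g; a response moves to the next
# lighter filament (down), no response moves to the next heavier one (up).
# After the first response, four additional filaments are applied.

FILAMENTS = [0.008, 0.02, 0.04, 0.07, 0.16, 0.4, 0.6, 1.0, 1.4, 2.0]  # grams; illustrative set

def up_down_trials(responds, start=0.4, n_after_first=4, max_trials=20):
    """responds(force_g) -> bool (head shake, repeated pawing, or cowering)."""
    i = FILAMENTS.index(start)
    trials, remaining = [], None
    while (remaining is None or remaining > 0) and len(trials) < max_trials:
        force = FILAMENTS[i]
        r = responds(force)
        trials.append((force, r))
        if remaining is None and r:
            remaining = n_after_first  # first response seen: 4 more filaments follow
        elif remaining is not None:
            remaining -= 1
        i = max(i - 1, 0) if r else min(i + 1, len(FILAMENTS) - 1)
    return trials  # the pattern is then converted to a 50% withdrawal threshold
```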
Model of opioid exacerbated migraine
Nitroglycerin (NTG) was purchased at a concentration of 5 mg/mL, in 30% alcohol, 30% propylene glycol and water (American Regent, NY, USA). NTG was diluted on each test day in 0.9% saline to make a working solution of 1 mg/mL. This solution was further diluted in saline for a final dose of 0.01 mg/kg (0.001 mg/mL). Morphine (10 mg/kg) or vehicle (saline) was administered once daily subcutaneously for 11 days. Beginning on day 3, 30 minutes after the morphine/vehicle injection, animals received NTG (0.01 mg/kg, IP) or vehicle (saline). Basal mechanical thresholds were assessed on days 1, 3, 7, and 11 prior to that day's morphine/vehicle treatment. A post-treatment measurement was also taken 2 hours after the NTG/veh injection on days 3, 7, and 11.
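As a quick arithmetic check of the dilution scheme (my own illustration; the 10 mL/kg injection volume is taken from the Animals section above):

```python
# The final NTG concentration follows from the target dose and injection volume.
working_mg_per_ml = 1.0    # working solution after first dilution in saline
dose_mg_per_kg = 0.01      # target dose
volume_ml_per_kg = 10.0    # injection volume

final_mg_per_ml = dose_mg_per_kg / volume_ml_per_kg
print(final_mg_per_ml)                      # 0.001 mg/mL, matching the stated final concentration
print(working_mg_per_ml / final_mg_per_ml)  # 1000-fold dilution of the working solution
```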
Model of opioid exacerbated aura
Before CSD measurement, mice underwent an OIH protocol in a similar manner to previous studies in our lab and others. 29,73,74 Only female mice were used in this model. Mice received morphine (Sigma Chemical, St. Louis, MO) 20 mg/kg SC twice daily, once in the morning around 09:00 and again at 17:00, on days 1-3. On day 4 the dose was escalated to 40 mg/kg SC/injection, which was also given twice that day. Vehicle (saline) was similarly administered to control mice. Mice were tested in the CSD model on day 5, 18-20 h following their final morphine/vehicle injection. In a pilot experiment we also tested another morphine paradigm (10 mg/kg SC daily for 11 days, with CSD on day 12), which did not significantly exacerbate CSD (Figure S3).
Cortical spreading depression model
The model of cortical spreading depression/depolarization (CSD) used in these studies is based on previously published work. 30,38,75 Mice were randomly grouped into vehicle or M65 (0.1 mg/kg, IP) groups. For the CSD procedure mice were anesthetized with isoflurane (induction 3-4%; maintenance 0.75 to 1.25%; in 67% N2/33% O2) and placed in a stereotaxic frame on a homeothermic heating pad. Core temperature (37.0 ± 0.5 °C), oxygen saturation (≥99%), heart rate, and respiratory rate (80-120 bpm) were continuously monitored (PhysioSuite; Kent Scientific Instruments, Torrington, CT, USA). Mice were repeatedly tested with tail and hind paw pinch to ensure proper anesthetic levels were maintained.
CSD was verified in two ways, optical intrinsic signal (OIS) imaging and electrophysiological recordings, as has been shown in similar studies. 30 Two burr holes were drilled lateral to the thinned window around the midpoint of the rectangle. The burr holes were drilled deeper than the thinned skull portion such that the dura was exposed, but not so deep that the dura was broken. Local field potentials (LFPs) were recorded using a pulled glass pipette filled with saline and attached to an electrode, which was further connected to an amplifier. The electrode was placed inside the lateral burr hole such that it was inside the cortical tissue. A separate ground wire was placed underneath the skin caudal to the skull, which was used to ground the LFPs. After set up, the LFP was recorded for an hour to allow for stabilization in case a CSD occurred during the placement of the electrode or the thinning surgery. After stabilization a second pulled glass pipette was filled with 1M KCl and placed into the rostral burr hole, ensuring there was no direct contact with the brain or surrounding skull. Once placed, an initial flow of KCl was started and an even flow was maintained so that a constant small pool of KCl filled the burr hole. Any excess liquid was removed with tissue paper. After initial KCl administration mice were recorded for 400 seconds, after which mice were treated with M65 or vehicle and then recorded for a further 3,600 seconds, for a total recording time of 4,000 seconds. Animals were only included in the final analysis if at least 2 CSD events occurred within the first 400 s of recording. No animals were excluded in this study. Following the recording, the video and LFP were analyzed and used to count the number of CSD events that occurred during the recording. Following the procedure mice were euthanized by anesthetic overdose followed by decapitation.
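As one hypothetical illustration of how CSD events could be counted programmatically (this is not the authors' analysis pipeline; the amplitude and spacing thresholds are placeholder assumptions), the slow negative DC shifts characteristic of CSD can be detected as well-separated peaks in the inverted, smoothed LFP:

```python
import numpy as np
from scipy.signal import find_peaks

def count_csd_events(lfp_mv, fs_hz, min_amplitude_mv=5.0, min_interval_s=60.0):
    """Count CSD-like events as large, well-separated negative LFP deflections."""
    win = int(fs_hz)  # 1 s moving average suppresses fast spiking activity
    smoothed = np.convolve(lfp_mv, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(-smoothed,
                          height=min_amplitude_mv,               # assumed amplitude cutoff
                          distance=int(min_interval_s * fs_hz))  # assumed minimum event spacing
    return len(peaks)
```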
RNAScope fluorescent in situ hybridization
The RNAscope kit was purchased from Advanced Cell Diagnostics (ACD Bioscience). C57Bl6/J mice were anesthetized, and brain and TG were collected and immediately frozen. Frozen tissue was cut on a cryostat at 14 μm, collected on slides, and processed per the manufacturer's protocol. Every 4th section was quantified for each brain region. The probes used were targeted against the mouse genes Oprm1, Adcyap1, and Adcyap1r1.
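The co-expression percentages reported in Figure 4 are per-cell tallies. A minimal sketch of that kind of tally (my own illustration, assuming boolean positivity calls per segmented cell rather than the authors' actual quantification pipeline):

```python
import numpy as np

def pct_coexpression(mor_positive, other_positive):
    """Percentage of MOR+ cells that are also positive for a second transcript."""
    mor = np.asarray(mor_positive, dtype=bool)
    other = np.asarray(other_positive, dtype=bool)
    n_mor = mor.sum()
    return 100.0 * (mor & other).sum() / n_mor if n_mor else float("nan")

# e.g., in the TG: fraction of Oprm1+ cells co-expressing Adcyap1 (PACAP)
# pct_coexpression(oprm1_calls, adcyap1_calls)
```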
Statistical analysis
The sample size needed for each experiment was either based on similar previous experiments or calculated through power analysis where the minimal detectable difference in means = 0.3, expected standard deviation of residuals = 0.2, desired power = 0.8, and alpha = 0.05. Each experiment was replicated with separate cohorts of animals to ensure reproducibility. All data were analyzed using GraphPad Prism (GraphPad, San Diego, CA). The level of significance (α) for all tests was set to p < 0.05. An unpaired, two-tailed t-test was performed to determine the effect of morphine on CSD; a 2-way ANOVA was used to determine the effect of M65 on morphine-CSD. Three-way ANOVA was performed for allodynia experiments. Post hoc analysis was conducted using the Holm-Sidak method to correct for multiple comparisons. Post hoc analysis was only performed when F values achieved p < 0.05. All values in the text are reported as mean ± SEM.
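For reference, the stated power-analysis parameters can be reproduced as follows; this is a sketch under the assumption of a two-group t-test framing, which the text does not specify:

```python
from statsmodels.stats.power import TTestIndPower

d = 0.3 / 0.2  # minimal detectable mean difference / residual SD = Cohen's d = 1.5
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(n_per_group)  # roughly 8 animals per group under these assumptions
```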
Three-way repeated-measures ANOVA was performed in Figure 1; following a significant interaction, post hoc analysis was performed using the Holm-Sidak test. Figure 1B: ***p < 0.001; ****p < 0.0001 relative to the vehicle-vehicle group on the same test day. Figure 1C: ++++p < 0.0001 vehicle-NTG group compared to vehicle-vehicle group; ****p < 0.0001 morphine-NTG group compared to vehicle-vehicle group on the same day. For Figure 2, a two-way ANOVA was performed followed by post hoc analysis using the Holm-Sidak test: **p < 0.01, ***p < 0.001 morphine compared to vehicle on Day 1; an unpaired t-test was also used in Figure 2D, *p < 0.05. Figure 3 used a three-way ANOVA and post hoc analysis with the Holm-Sidak test: ****p < 0.0001 morphine-nitroglycerin-vehicle group compared to the vehicle-vehicle-vehicle group; ++p < 0.01 morphine-nitroglycerin-M65 group compared to the morphine-nitroglycerin-vehicle group. For the CSD data, a two-way ANOVA and post hoc analysis with the Holm-Sidak test were used: **p < 0.01 morphine-vehicle group compared to the morphine-M65 group.
Workplace happiness, well-being and their relationship with psychological capital: A study of Hungarian Teachers
Happiness and well-being at work has been an increasingly popular topic in the past two decades in academic and business contexts alike, along with positive psychology, through which organizations aim to find out what makes working environments engaging and motivating. Few studies have focused on education, however, especially from a solution-focused perspective, even though it is a sector where employees are highly exposed to stress and burnout. Accordingly, the purpose of this study was to investigate the relationship between teachers' psychological resources, conceptualized as psychological capital, workplace well-being and perceived workplace happiness. We used both qualitative (open-ended question) and quantitative (test battery) methods to examine the relations between the various factors. Content analysis of responses in our qualitative research suggests that the main pillars of teachers' workplace happiness were realization of goals, feedback, finding meaning in work and social relationships. The results of our quantitative study indicated that workplace well-being and happiness correlated with inner psychological resources, hope and optimism in particular. We conclude that the future focus on employee well-being must take into account positive contributing factors and adopt a positively-oriented approach to promoting well-being. Suggestions for practical implications are also discussed.
Introduction
The mental health and well-being of employees are crucial factors in an organization's performance and success (Page and Vella-Brodrick 2009). The dynamics of employee well-being at work are pivotal for understanding the different components that affect their health, work behaviour and performance. There are resources at individual, group, managerial and organizational levels that are strongly related to employee well-being (Nielsen et al. 2017). Subjective well-being is connected with levels of workplace stress, absenteeism, intrinsic motivation, commitment, innovation, and satisfaction. Work-related well-being and workplace happiness have been identified as important factors in performance, job satisfaction (Crede et al. 2007; Fisher 2010), and susceptibility to burnout (Iverson et al. 1998). Organizational programmes designed to reduce negative workplace outcomes like burnout or stress are often risk-based, problem-focused and negatively framed approaches to mental health (Page and Vella-Brodrick 2012; LaMontagne et al. 2007). There are far fewer positively framed programs or interventions that aim to promote and improve positive and inner aspects of employees' well-being at the primary level (LaMontagne et al. 2014; Luthans 2002a). Thus, the contributory factors of employee workplace well-being and happiness should be considered very important components of mental health and subjective well-being per se.
Well-Being and Happiness
Previous literature on subjective well-being proposed that well-being should be considered a broader phenomenon that involves affective, cognitive and behavioural aspects (Ryff 1989; Ryff and Keyes 1995; Seligman 2011). There are two main approaches to the concept of well-being: subjective well-being and psychological well-being. Subjective well-being is often used as an umbrella term covering various factors. Although there is a consensus that well-being is a multidimensional construct, different theoretical interpretations of the components have been proposed. Constructs of happiness and subjective well-being focus mainly on hedonic aspects of well-being, striving for the maximisation of pleasure and positive emotions, but other constructs include eudaimonic aspects as well, such as autonomy and self-actualization (Fisher 2010).
Several theories of psychological well-being fall under the broader concept of eudaimonic well-being. One of them is the self-determination theory (SDT) formulated by Ryan and Deci (2000), who came to the conclusion that there are three basic psychological needs: autonomy, competence and relatedness. When these needs are satisfied, they foster well-being. Ryff (1989) analysed various approaches to happiness in different subfields of psychology and introduced a six-dimensional model of well-being comprising the following factors: self-acceptance, environmental mastery, autonomy, positive relations with others, personal growth, and purpose in life. Csikszentmihalyi's concept of the autotelic personality also fits the notion of eudaimonic happiness. Autotelic individuals often engage in meaningful activities for their own sake (Csikszentmihalyi 1990; Baumann 2012). Finally, in a positive psychological context, Seligman's (2002) authentic happiness model distinguishes three types of lives that together make up an all-round happy life: a pleasant life, an engaged life and a meaningful life. Later Seligman (2011) revised his early model of happiness and in a new well-being theory proposed five pillars of human flourishing: Positive emotions, Engagement, positive Relationships, Meaning, and Accomplishment (PERMA as an acronym). According to Seligman (2011), each of these five elements contributes to well-being and is pursued for its own sake.
As can be seen above, multidimensional approaches to well-being may not only provide a more precise interpretation of well-being but may also provide a better basis for the design of interventions aimed at improving well-being and happiness. Notably, psychological well-being is a concept that can also be interpreted and measured in organizational context (Dagenais-Desmarais and Savoie 2011).
Well-Being and Happiness at Work
The past few decades have seen an explosion in research on workplace happiness. Many authors have attempted to identify the sources of happiness and each of them has found important but different determinants (e.g. Diener 1984; Freedman 1978; Argyle 1987; Csikszentmihalyi 1990; Emmons 1986). Subjective well-being is often equated with happiness. Happiness is one of the most studied facets of well-being, but is only one of the various aspects that researchers have considered (Jayawickreme et al. 2012).
Workplace happiness is a term that describes the experience of employees who are energized by and enthusiastic about their work, find meaning and purpose in their work, have good relationships at their workplace, and feel committed to their work. Overall or global workplace happiness refers to how employees evaluate their work life in general and most studies rely on global reports of this kind (e.g. Kahneman et al. 2004). Most studies have examined objective variables that influence well-being and happiness, but happiness can also be interpreted through a subjectivist approach, which considers happiness from the individual's own perspective, and this notion has led to the self-report measurement of global happiness (Lyubomirsky and Lepper 1999).
Dolan and his colleagues (Dolan et al. 2008) studied 19 cross-national, major national datasets that included measures of subjective well-being and tried to identify all the potential influencers of well-being. Their analysis revealed seven broad categories: 1. income, 2. personal characteristics (age, personality), 3. socially developed characteristics (education, unemployment), 4. how we spend our time (e.g. caring for others, hours worked), 5. attitudes and beliefs toward self/others/life (religion, political persuasion), 6. relationships (intimate relationship, having children), and 7. the wider social, economic and political environment (degree of democracy, welfare system). Dolan et al.'s review highlighted the most frequently measured factors associated with well-being and was concerned chiefly with the impact of objective and subjective variables which in combination influence overall well-being. In our study, we adopt a subjectivist interpretation where overall subjective happiness denotes a broader and more global psychological phenomenon.
Many studies have shown that overall well-being and happiness at the workplace can be highly beneficial for organizations (Seligman 2002). Research has shown that happier individuals tend to have better physical and psychological health and live longer (Roysamb et al. 2003; Lyubomirsky et al. 2005a, b), perform better, can cope better with stressful events (Wood and Joseph 2010), have more positive workplace relationships and are more satisfied with their jobs (Boehm and Lyubomirsky 2008; Connolly and Viswesvaran 2000). Individuals with higher levels of well-being perform better at work, are more cooperative (George 1991), have more satisfying relationships, stronger immune systems, fewer sleep problems, lower levels of burnout, greater self-control, better self-regulation and coping abilities, and are more prosocial (Diener and Seligman 2002; Chida and Steptoe 2008; Seligman and Schulman 1986; Seligman et al. 1990; Kubzansky et al. 2001; Fredrickson and Joiner 2002; Howell et al. 2007; Lyubomirsky et al. 2005a, b; Segerstrom 2007; Williams and Shiaw 1999).
Teachers' Well-Being and Happiness
The psychological capacities of individuals can be especially important in a professional context. Teaching is a profession that is highly associated with stress-related outcomes, and teachers' stress and burnout are popular topics of research. Nevertheless, studies primarily focus on problems like burnout, stress, frustration, anxiety, and attrition (e.g. Singh and Billingsley 1996; Kyriacou 2001; Brouwers and Tomic 1999; Trent 1997; Macdonald 1999; Ramsey 2000), while there are fewer solution-focused, positively framed approaches that build on teachers' strengths or intrinsic resources linked to well-being. The workplace well-being and happiness of teachers have been investigated less often than other factors, although there are some relevant and remarkable results (Calabrese et al. 2010; Hoy and Tarter 2011; Benevene et al. 2018; Chan 2009; Chan 2010; Lavy and Bocker 2018). According to Hoy and Tarter (2011), positive psychology can be a new frame within which educational staff's well-being can be improved.
In the context of teaching, research has shown that teachers' happiness and well-being correlate with students' well-being and performance. Briner and Dewberry (2007) found a relationship between staff well-being and student SATs (Statutory Assessment Tests), although a causal relationship could not be proved. Some studies show that teachers' well-being is closely related to students' school performance and happiness (Lyubomirsky et al. 2005a, b; Jennings and Greenberg 2009). Other studies have revealed that teachers' happiness has an impact on students' happiness (Bakker 2005). Jennings and Greenberg (2009) emphasized the importance of teachers' well-being in developing and maintaining a positive classroom climate and the teacher-student relationship. Roffey (2012) argued that focusing on teacher well-being also promotes student well-being and performance. Spilt et al. (2011) discussed empirical evidence for the influence of teacher-student relationships on teacher well-being.
Despite limited research results regarding the link between teachers' well-being and students' attainment, there is a reasonable expectation that such a relationship exists. Teachers with a high sense of well-being would presumably perform better and display better educational outcomes, leading to happier and more motivated students. Bricheno et al. (2009) highlighted that more research needs to be conducted into the link between teacher well-being and student educational attainment and well-being. Teacher well-being should be actively supported in schools.
There is a substantial body of evidence relating to factors that enhance well-being. To date, many studies have mainly targeted the negative side of the teaching profession including work stress and burnout. This focus on the negative aspects of teachers' mental health provides no guidance on how to promote and develop overall well-being. By adopting a positive psychological approach, a new perspective opens as the primary focus shifts to revealing and developing potential and inner resources rather than treating problems or negative consequences.
Positive Psychological Resources
Positive psychology offers a new perspective on improving well-being and happiness without solely focusing on deficits and disorders. This notion includes building upon existing resources and strengths that individuals possess such as optimism, hope, resilience, or gratitude, that can help to sustain good mental health (Seligman et al. 2005;Peterson and Seligman 2004;Luthans et al. 2006). Most of the research focused on employee well-being originates from Positive Organizational Scholarship (POS; Cameron et al. 2003) and Positive Organizational Behaviour (POB; Luthans 2002b). POS and POB research have demonstrated that well-being at work is more than the result of job satisfaction and commitment, and that well-being includes "positively oriented human resource strengths and psychological capacities" (Luthans 2002a, p. 59). The concept of psychological capital (PsyCap) refers to an individual's positive psychological state of development characterized by hope, self-efficacy, resilience and optimism (often referred as the HERO model from the acronym of the components) (Luthans et al. 2007). PsyCap is malleable, and therefore open to development. This notion also opens new opportunities for workplaces to enhance employee well-being (Avey et al. 2011). This positive approach suggests that revealing and promoting resources and strengths that individuals already have can improve their overall well-being and happiness, as well as having a positive impact on the organization's productivity and on other (not necessarily measurable) outcomes (e.g. workplace climate, cooperation between employees, trust in leaders). The more inner resources individuals have, the more likely it is that they will experience higher levels of motivation, satisfaction and well-being (Xanthopoulou et al. 2007).
Some remarkable research results in recent years have demonstrated the relationship between PsyCap and well-being. For example, Culbertson et al. (2010) found that PsyCap was related to both (hedonic and eudaimonic) types of well-being. This finding suggests that PsyCap may be a core element in the application of positive psychology in organizations, and that improving employees' PsyCap may be one of the most effective ways to enhance workplace well-being. Research has shown that micro-interventions or even Internet-based interventions (aimed at developing hope, self-efficacy, resilience and optimism) can improve PsyCap (Luthans et al. 2006; Luthans et al. 2008). These encouraging results may provide a suitable starting point when considering broader well-being interventions in organizations.
We approach teacher well-being within the positive psychology framework to give us a more comprehensive understanding of what factors may play significant roles in the formation of teachers' well-being. For this reason, we use the multidimensional PERMA model as a basic framework. In this study, we explore teachers' psychological capital in relation to workplace well-being and happiness. We assume that different PsyCap factors relate distinctly to different facets of well-being and overall workplace happiness. Applying the PsyCap model and the multidimensional PERMA model can help us to understand which aspects are the most relevant for overall happiness. We aim to reveal the unique contribution of each PsyCap component to each of the PERMA factors, as well as to overall workplace happiness. Our research was guided by the following research questions:

1. What are the sources of teachers' workplace happiness? What are the elements that contribute to overall happiness at work?
2. How do different PsyCap factors relate to each of the PERMA facets? Do some factors of PsyCap have a stronger link to overall workplace happiness?
3. Which dimensions of PERMA play a role in overall workplace happiness? Do some facets of PERMA have a stronger link to overall workplace happiness?
Method
Participants
297 participants completed the workplace well-being and happiness questionnaires (201 female, 93 male; three respondents did not indicate their gender). The mean age of the participants was 41.4 years, with a standard deviation of 7.81, and most participants were between 36 and 45 years of age. All the participants had university or college degrees. The relationship status of participants was also recorded: 207 were married, 50 single, and 36 divorced or widowed (4 participants did not respond). 65 of the participants had one child, 123 had two, 40 had three, 6 had four, 1 had five, and 60 had no children (2 persons did not respond). The participants were enrolled in Educational Leadership training at the Budapest University of Technology and Economics. This is a two-year postgraduate course leading to a qualification in the field of leadership, which is necessary to become a principal of any educational institution. Participants were recruited before the training session started. The researcher personally informed them about the aim of the research and the opportunity to participate; taking part was voluntary. Volunteers filled in the questionnaires after the training session had ended.
Measures
As a pilot study for future international research, the participants were asked to rate their workplace happiness, well-being and psychological capital on a set of scales. The results of both quantitative and qualitative measures were then analysed.
Qualitative Method -Content Analysis
The aim of this research was to explore the determinants of workplace happiness via content analysis of teachers' answers written to the open-ended question: "When do you feel/experience happiness at work?" Given that happiness is defined as a subjective phenomenon, our aim was to explore what teachers identified, in their own words, as the main factors that determine their workplace happiness experience, and accordingly this open-ended question was asked before the rest of the test battery. The first questionnaire, the Subjective Happiness Scale, included an instruction for participants to record responses according to their current workplace and full-time job context.
Quantitative Measurements
Seligman's PERMA model was used as a framework for measuring workplace well-being. We used a measure developed on the basis of this model (Kun et al. 2017). Although the PERMA model focuses on positive aspects of well-being, this measure also captures the negative side of workplace well-being. The Workplace PERMA Questionnaire comprised 6 dimensions: Positive emotions, Engagement, positive Relationships, Meaning of work, Accomplishment, and Negative aspects of work. Respondents had to record their answers on a 5-point Likert scale. The scale had a Cronbach α of .87, which indicated that the scale was reliable for the sample.
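For readers unfamiliar with the reliability coefficient reported throughout this section, the short sketch below shows how Cronbach's α can be computed from an item-by-respondent score matrix. The data shown are illustrative only and are not taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative 5-point Likert responses (4 respondents x 3 items), not study data.
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 3],
])
print(round(cronbach_alpha(scores), 2))
```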
Psychological capital was measured using four different questionnaires. Resilience was measured using the Brief Resilience Scale (BRS; Smith et al. 2008). For the Hungarian sample, our own translation of this scale was used. The scale contained 6 items, of which 3 are positive and 3 are negative items. Respondents recorded their answers on a 5-point Likert scale. The scale's Cronbach α of .80 showed that the scale was reliable for the sample. Self-efficacy was measured using the General Self-Efficacy Scale (GSE; Schwarzer and Jerusalem 1995), for which we also used our own translation. The scale consisted of 10 items, answered on four-point Likert scales. The scale's Cronbach α of .82 showed that the scale was reliable. The validation of the Hungarian versions of the BRS and GSE scales is currently in progress. After translating these two scales into Hungarian, we back-translated the completed translations into English, compared them with the original scales, and finally reconciled any meaningful differences between the two versions.
Optimism was measured with the Life Orientation Test - Revised (LOT-R; Scheier et al. 1994), the Hungarian translation of which was developed by Bérdi and Köteles (2010). The scale consists of 10 items, including reverse items for pessimism. Respondents had to record their answers on a 7-point Likert scale. The scale's Cronbach α was below .70, therefore the 8th item had to be omitted, after which the scale had an α-value of .70 and therefore proved to be reliable. Hope was measured on the Adult Hope Scale (AHS; Snyder et al. 1991). The scale contains 12 items and two subscales: Agency (motivation to reach a goal, and the amount of energy focused on the goal) and Pathway (the personal ability to find solutions, and whether the goal is seen as a stressor or a challenge), each measured by four items. Four distractor items were also added to the scale. The Hungarian translation was developed by Martos et al. (2014). Respondents had to record their answers on an 8-point Likert scale. The Pathway subscale had a Cronbach α of .71, while the Agency subscale had an α of .70. The overall scale had a Cronbach α of .80, confirming its reliability. Finally, overall workplace happiness was measured by means of the Subjective Happiness Scale (SHS; Lyubomirsky and Lepper 1999), which consists of four items. As a Hungarian translation was not available at the time of the research, our own translation was used, following the same translation process mentioned above. Respondents had to record their answers on a 7-point Likert scale, where only the highest and lowest values were labelled within each item. In the instructions, participants were asked to answer questions specifically regarding their workplace experiences. The scale proved reliable in translation and as applied to the sample, with a Cronbach α of .81.
The last section of the set contained socio-demographic questions on gender, age, relationship status, number of children, highest level of education and employment status, as well as single questions about work and life satisfaction, stress at work and overall health status. The questions about life and work satisfaction and workplace stress were measured on a 3-point scale, while overall health status was measured on a 5-point scale. These questions were included to further explore whether the (positive or negative) correlations between these factors and happiness could be confirmed. Previous Hungarian studies had examined the relation of these factors to overall well-being among healthcare professionals and educators (Deutsch et al. 2015; Holecz and Molnár 2014), but not specifically the happiness experienced by teachers. However, since overall well-being and happiness are both positive constructs, we expected similar results. The surveys were conducted at the training venue.
Content Analysis -Exploring Workplace Happiness Factors
In the content analysis procedure, a teacher's answer to the open-ended question was defined as a text unit, and "that part of the text unit to which coding categories or dimensions are applicable" (Smith 2000, p. 320) was defined as a coding unit.
We used linguistically defined segments (sentences, clauses, phrases, words) as coding units. The most important aspect of the text used as a coding unit is the theme, that is, a single idea or statement about the topic (here: about workplace happiness). A theme may be expressed in a few words, a phrase, or sentences. Each theme in a text unit was identified by its properties and then classified. We did not have a priori categories; instead, inductive, empirical ones were created for every theme that appeared to warrant classification as a new one. By using an inductive approach, categories emerge from the material without any preconceptions. Since our qualitative research tends to be exploratory, this approach seemed to be the most appropriate for our research question.
Two coders analysed the written answers from 297 respondents independently to create coding categories, with the criteria of uni-dimensionality, comprehensiveness, mutual exclusivity and independence (Smith 2000). Frequency scores for categories were calculated by adding the number of themes that represented a given category. Categories were retained if the intercoder agreement was at least 80% and if each category had more than five recorded (coded) units. To avoid overestimation of the agreement, we included both occurrences and non-occurrences in the frequency counts. We ensured the validity of the content analysis along three criteria: 1. Closeness of categories: the two coders independently coded the study sample of 297 workplace happiness descriptions, and empirical categories were retained only if the coders arrived at an agreed-upon definition of each specific category. Categories that were not agreed upon were excluded from the final analysis. 2. Data validity: our data accurately reflect the range of content being studied; our measuring procedure represented the intended, and only the intended, concept. 3. Internal validity: (a) our categories are exhaustive and mutually exclusive, and (b) we measured our workplace happiness concept with categories at the highest level of measurement possible.
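As a minimal illustration of the retention rule above, the sketch below computes percentage agreement for one category over a set of coding units, counting both occurrences and non-occurrences as the text describes; the codings themselves are hypothetical, not taken from the study.

```python
# Hypothetical presence/absence codings for one category (1 = present in a unit).
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Agreement counts both occurrences (1,1) and non-occurrences (0,0), which
# avoids overestimating agreement driven by frequent categories alone.
matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)  # 0.90 here -> category retained (>= 0.80)
print(f"intercoder agreement: {agreement:.0%}")
```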
The answers given to the workplace happiness open-ended question varied in length from one to 48 words. We summarized a total of 762 responses (sentences, phrases, words) given to the open question, and finally, 673 coded units were classified into categories. As a result, four distinct main dimensions, and within them, 21 subcategories were defined (Table 1). Subcategories describe the content of the main dimensions in a more detailed way. The four main dimensions are as follows:
1. Results and success
2. Assessment of and feedback on the work
3. Meaning at work
4. Social relationships

Results and Success - Past, Present, Future

The first subcategory refers to success, mainly to results achieved by the teachers ("The students understood the curriculum."). These results are the confirmation of good work and enhance teachers' well-being, and as positive results they affect not only their emotional experiences but also serve as a basis for future motivation and successes (Tadić et al. 2013). In the second subcategory, teachers have a sense of inner, subjective competence and self-efficacy, as they feel that they are able to perform tasks that match their skills and motivate them to perform well ("Using my skills and knowledge I did the best I could."). All of these ensure the sense of achievement and enjoyment of success and in turn contribute to the sense of efficacy (Friedman and Kass 2002). Not surprisingly, performance and success are also specifically related to children's success. This is the third subcategory within the category of success. The answers here cover past, present and future: it is not only about the work done in the present but also covers feedback from children on past performance ("...my old students come back to school and tell me how valuable the knowledge they gained from me is"), and the hope that can be linked to the future (e.g. that a child will be successful in an entrance exam). This time continuity is ensured by a sense of success and effective work that includes not only retrospective but also positive and future-inspired confidence. All these aspects are significant contributors to the sense of personal well-being (Zee and Koomen 2016). In the fourth subgroup, there is a general motive for success, and another related to the professional side ("I did a very good job professionally and methodologically."). These can be related to the concrete results achieved, joint work with their students and colleagues, and success experienced at a subjective level ("I have a sense of success."). A separate dimension of successes and achievements concerns school life, such as social events, applications for funding or projects, competitions, celebrations, and other programs ("My students won prizes in a contest.", "Our school won a tender."). The presence of this category clearly indicates that the (workplace) community is an important element of personal well-being (Gross and John 2003), as no one works in isolation but on a collective level, and shared goals and experiences are as positive as achieving personal goals. Individual prosperity and happiness can only be imagined in an organization that is supportive and, in addition to individual endeavours, provides strong attachments and builds a community. Experiences shared by colleagues and teams may enhance the individual's level of well-being (Greller et al. 1992; Sonnentag 1996).
In the main category of success, the next subgroup is also related to children and includes one of the important goals of teachers' work: the development of children. This refers both to work with children in usual, everyday educational situations and to children with problematic abilities, and can also be considered when a child with 'problematic personality traits' changes in a positive way ("I taught a little kid everyone had given up on to read."). The importance of developing not only the abilities but also the personality of children is part of this category, and this development-centred work can be an indicator of teachers' work and a sense of personal well-being. The last key element of the first major dimension is the realization and implementation of goals and plans, which may be short- or long-term, and may be more general or specific ("When I can effectively implement my plans.", "I am satisfied when each work phase works as I planned."). These are direct, sometimes measurable indicators of success, as they demonstrate the individual's competence, efforts and commitment to goals, which, as a source of motivation (Locke and Latham 2002), provide a further basis for future goals and aspirations.
Assessment of and Feedback on Work
The first major sub-category of the second dimension is feedback, and most of all, positive feedback on job performance. The number of responses about positive feedback (Mitchell et al. 1982) is outstanding, since they account for more than half of the total number of responses in this dimension (140). Its robustness is also reflected in the fact that not only global, generally work-related feedback, but also positive, confirming, acknowledging feedback is determinative. Further analysis revealed that teachers stress the importance of feedback from students, parents, colleagues, and leaders alike ("I get positive feedback from children, parents, and colleagues."). Feedback can be direct (verbal or nonverbal) or indirect (e.g. the success of a child, or a child winning a contest, can be considered the result of the teacher's contribution). The next subcategory concerns moral recognition and the appreciation of teachers' work. As with the previous category, there is a general level of recognition from students, colleagues, parents, and leaders ("I get appreciative words from my colleagues or from the school principal."). The value of moral recognition and appreciation is obvious, and studies have shown that it can have a much stronger effect on performance and commitment (to the workplace) than financial benefits (e.g. Brun and Dugas 2008). The third category within this main dimension is the satisfaction expressed by others (superiors, colleagues, students, etc.), but this sub-category also represents personal, subjective satisfaction ("I am satisfied.", "My colleagues are satisfied with my job."). Satisfaction has a striking effect on workplace happiness because of its emotional component, since it directly contributes to the subjective sense of well-being of individuals (Bowling et al. 2010). The coding units of the next sub-category, labelled praise, appear not only in general terms but also on a specific level, specifying who (leader, co-worker, parents) the praise is given by. Praise is also a type of social reinforcement, which has a direct impact on motivation since it is the direct and socially awarded expression of maximum recognition (Deci and Ryan 1985; Sansone et al. 1989). It is a highly rated form of the reinforcement of good performance. The penultimate sub-category within the second main dimension contains coding units that confirm the respondent as a member of a particular workplace or organization that counts on him/her, considers him/her an important part of it, and takes his/her opinion into account. This sub-category was labelled organizational citizenship, referring to the fact that the employer considers the employee a full member of the workplace and appreciates his/her contributions to the organizational goals and values ("They consider my job important and make me feel that what I do is required for the success of the school."). Organizational citizenship is an important key factor of workplace engagement (Bhatnagar and Biswas 2010), and without it, workers feel as if they are only a tiny cog in the machine which, if lost, can easily be replaced by another. Finally, the last sub-category includes financial recognition and reward, which naturally includes salary ("I would be lying if I said finances do not cause happiness, but it is not what matters the most.").
It is worth mentioning that there were only a few coding units in this last category, and this sharp difference is notable in itself: 29 coding units refer to the importance of moral recognition and only 6 to the importance of financial recognition. Apart from social incentives, it is understandable that financial incentives are also important, but it is worth pointing out that teachers are not exclusively or primarily driven by financial incentives; instead, moral and social motivational factors play an essential role in the formation of their well-being and happiness.
Meaningful Work -Meaningfulness and Emotion
The first and most significant among the five subcategories of this dimension is the well-being, joy and happiness of the children. This refers to the times when children enjoy the lesson, learning, and tasks, and when they give positive feedback to the teacher ("I see the joy on the children's faces.", "Students give feedback that they had a good time in my class."). Positive emotions (pleasure, enthusiasm, excitement, enjoyment) expressed by children and shared with the teacher are significant components of a teacher's well-being (Spilt et al. 2011). The second subdimension, also related to the children, concerns rises in children's motivation and interest ("I see genuine interest on my students' faces."), and includes changes in attitude (from passive to active) and an increased desire to be praised and to learn. This is one of the main missions of pedagogical work, as one of the greatest challenges for teachers is to raise children's interest, curiosity and motivation to learn. The next subcategory, representing an important determinant of a teacher's happiness, is that his/her work had or has meaning. The most essential aspect of this category is that the teacher does not perform irrelevant, often unnecessary (e.g. administrative) tasks, but instead carries out real and valuable pedagogical work. An important criterion for meaningful work is that the employee feels that his/her work is significant, useful, and influences others in a positive way, and is also undertaken for unselfish reasons (Pratt and Ashforth 2003). This subjective meaning and personal significance are very important determinants of an individual's work-related well-being, as is evident here. The transfer and sharing of knowledge, defined as a separate sub-category, is one of the key tasks of pedagogical work, which includes the success of knowledge and information transfer processes during lessons, when children understand the material being transmitted. This sub-category also includes success in approaching, working on and completing the tasks teachers are set. The last category covers the special state and characteristics associated with teachers' experiences while conducting everyday classroom work, which we could actually call flow at work. Flow is an optimal psychological state where attention is undivided and action is motivated by the goal of expressing the self (Csikszentmihalyi 1990). On the one hand, this is when a teacher experiences flow in his/her work ("I work with intrinsic motivation and experience flow."); on the other, it is when a teacher triggers a flow-like experience in children. In the happiness texts we can detect features of flow: work is carried out smoothly and almost unnoticed ("The whole class becomes one unit; no child is left out of attention."); time passes unnoticed ("Time spent together flies."); tasks are enjoyed ("Kids enjoy the task."); there is a chance for relaxed and creative manifestation; and there are a lot of smiles and laughter. In a state of flow, teachers have a sense of self-efficacy and feel that they are at the apex of their abilities, professionally and methodologically.
Social Relationships
The three sub-categories of Social relationships display the highest weighted means. The first subcategory covers the love, emotional attachment, and positive emotional feedback children express toward teachers ("I feel the love, trust, and attachment of the children"). Positive emotional reinforcements from others are essential building blocks of subjective well-being and are important determinants of happiness (Ryff 1989). The more commonly individuals experience positive emotions, the more they feel others' acceptance and support, the better their personal well-being, and the more efficiently they form and maintain their relationships (Kahn et al. 2003). In the case of teachers, it is no coincidence that social reinforcements from children, who are at the centre of their activities, play such an important role. The second subcategory involves teachers' relationships with colleagues and the workplace climate. Like the previous child-centred category, relationships are still in focus. The coding units here cover positive, balanced, well-functioning relationships with colleagues ("I am happy to work with my colleagues; they are like a second family to me.") and, on an organizational level, a positive, supportive climate and a positive emotional milieu ("The workplace atmosphere is good, relaxed, and cheerful."). The quality of relationships at work often represents an important reason for retention, as well as contributing to job satisfaction and workplace commitment (Crosby 1982; Venkataramani et al. 2013), and as a whole, provides a strong basis for individual and collective achievements and successes. The third and last sub-category of this main dimension concerns social support and co-operation ("We help each other if a colleague has a problem.", "We work together in a good mood to achieve our goal."). Helping and supporting colleagues, as well as sympathy and acceptance from colleagues, are an essential part of this category. Selfless help and cooperation in problem management or in finding solutions provide shared (emotional, behavioural) experiences that keep the community together and provide a safe, trusting milieu for the individual. Helping others is not only a pleasure for an individual but gives a sense of meaningful contribution to a community, which is also an important building block of subjective well-being (Mitchell et al. 1982; Aknin et al. 2013a, b).
Quantitative Analyses
Descriptive Statistics
Table 2 presents the descriptive information for each of the scales included in the study (N = 297). As can be seen, participants scored above the neutral level on all measurements except the optimism scale (LOT-R). At the time of data analysis, reference means were not available for all instruments. Some of the measurements (BRS, GSE, SHS) have not yet been validated in Hungarian, therefore we used our own translations, and the Workplace PERMA questionnaire (published in 2017 by Kun et al.) is relatively new, therefore it still lacks reference means. Two scales have been validated in Hungarian; the reference means for these questionnaires are as follows: LOT-R: 3.25 (Bérdi and Köteles 2010), AHS: 5.79 (Martos et al. 2014). As can be seen, our particular sample's means differ slightly from these values.
Correlation Analyses
Bivariate Pearson correlations (two-tailed) were deployed to test our research questions 2-3. Correlational analyses were conducted between each dependent variable, including all scales of PsyCap, and the two subscales of AHS. All correlations between variables can be found in Table 3. Our second research question concerned the relationships between the PsyCap and PERMA subscales and SHS. Moderate positive correlations were found between the PsyCap and the overall PERMA scale (r = .52), and between the PsyCap and all the PERMA subscales (.23 < r < .55). Positive Emotions and Achievement showed the strongest relationships with PsyCap subscales, with Positive Emotions having a moderate correlation with LOT (r = .44) and weak correlations with AHS (r = .34) and the Agency subscale of AHS (r = .32). Achievement showed moderate correlations with GSE (r = .45), LOT (r = .40), AHS (r = .55), and the Agency (r = .56) and Pathway subscales of AHS (r = .41).
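As a brief illustration of the analysis reported above, the sketch below computes a two-tailed bivariate Pearson correlation with SciPy. The data are simulated for demonstration and do not reproduce the study's scales or results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
psycap = rng.normal(3.5, 0.5, 297)              # synthetic scale means, N = 297
perma = 0.5 * psycap + rng.normal(0, 0.4, 297)  # induce a moderate association

# pearsonr returns the correlation coefficient and a two-sided p-value.
r, p = pearsonr(psycap, perma)
print(f"r = {r:.2f}, p = {p:.3g}")
```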
PsyCap in general showed connections with different subscales of PERMA, displaying moderate correlations with Achievement (r = .56) and Positive Emotions (r = .44), and weak correlations with Engagement (r = .34), Meaning (r = .32), and Positive Relationships (r = .23). The results also suggested that PsyCap was positively and significantly related to SHS (r = .50). Thus, the findings indicate that Psychological Capital factors have a positive relationship with all the workplace well-being factors and overall workplace happiness.
Our third research question concerned the relationship between PERMA and SHS. A positive and significant relationship was found between them (r = .47), thus supporting our third research question. SHS also showed a moderate correlation with Positive Emotions (r = .46) and a weak correlation with Achievement (r = .36) and Meaning (r = .31).
Discussion
Teachers are important persons who contribute to student achievement and success (Stronge et al. 2004). Teachers' workplace happiness and well-being is therefore a critical factor in positive education (Ross et al. 2012). The aim of this study was to reveal the most relevant workplace happiness factors and to understand teachers' well-being in detail in the framework of the PERMA model and Psychological Capital theory.
We used both qualitative and quantitative methods. After coding nearly 300 respondents' written workplace happiness texts, it became clear that a wide range of essential factors contributed to workplace happiness. Answers were organized into four main dimensions and 21 subcategories. Most of the responses referred to results (e.g. realization of goals and plans) and experiences of success, such as successful work, the success of children, and the success of the school. This implies that the success of others is as important for teachers as their own individual success in producing workplace happiness. Respondents frequently mentioned the importance and significance of assessment of and feedback on their work. This second main category included moral and financial recognition, praise, and other people's satisfaction with the teachers' work, as well as their own feelings of satisfaction. Numerous studies have revealed and confirmed the decisive role played by feedback on work, and its impact on performance, commitment to the organization, satisfaction and motivation (Kluger and DeNisi 1996; Eccles and Wigfield 2002). As regards the third main category, teachers' answers supported the importance of meaningful work in relation to the sense of workplace happiness. Intrinsic reasons for working and finding meaning in work have a positive effect on subjective well-being (Winefield and Tiggemann 1990). Our results have demonstrated that perceiving work as meaningful appears to play an important role in teachers' happiness. Meaning is derived from different aspects of teachers' work, such as knowledge sharing, interesting and motivating the children, and experiencing flow in their work. The last main dimension referred to social relationships with children, colleagues, and parents. Analysis of teachers' responses revealed that not only good personal relationships but also a positive overall workplace climate is necessary for workplace happiness. Social relationships, then, are necessary for happiness (Diener and Seligman 2002). The people around teachers provide social support, and teachers place high importance on relationships as sources of happiness.
Our study used six additional questionnaires to explore the relationship between teachers' well-being, happiness and their inner psychological resources. The data answered our research questions. Correlations showed significant relationships between the variables, with the findings revealing that workplace happiness relates positively with all psychological capital factors (hope, self-efficacy, resilience, and optimism; .23 < r < .57) and all well-being dimensions (positive emotions, engagement, positive relationships, meaning, and achievement; .19 < r < .46). Optimism (of PsyCap) and positive emotions (of PERMA) were the most relevant factors in relation to workplace happiness (r = .57 and r = .46, respectively). In terms of Fredrickson's (2001) broaden-and-build theory of positive emotions, these findings support the notion that the implementation of interventions to improve optimism and positive emotions may increase happiness. According to this theory, positive experiences and emotions create a positive spiral, generating more positive thoughts, experiences and feelings that are beneficial for well-being and happiness. Experiencing positive emotions regularly can produce long-term changes in individuals' personal resources.
We also found that two of the PERMA subscales, positive emotions and achievement, have the strongest relationship to psychological capital. More precisely, positive emotions are strongly related to optimism, while achievement is related to self-efficacy and the two subscales of hope (agency and pathway). This result is consistent with the foundation of hope theory (Snyder 2002), which claims that a pathway involves the future potential to achieve goals, and agency involves motivation for movement along a pathway toward achieving them. Not surprisingly, teachers' achievement is essential for their subjective well-being and happiness at work, as our content analysis has also confirmed above. Our results strongly support the Acton and Glasgow (2015) approach to teacher well-being, which is defined as "an individual sense of personal professional fulfilment, satisfaction, purposefulness and happiness, constructed in a collaborative process with colleagues and students" (p. 101).
Practical Considerations
Our research suggests that several important factors can influence happiness and well-being in the workplace specifically. In the light of positive psychology and the potential role of 'positivity' as an added value, it is worth reviewing some of the potential activities that may be useful for teachers in order to improve their overall well-being. These possible activities of intervention are selected from the toolkit of positive psychological interventions (PPIs).
There are many forms of positive psychological interventions aimed at improving happiness and well-being, including in the context of work and organizational settings (Layous et al. 2014; Lyubomirsky et al. 2005a, 2005b; Seligman et al. 2005). The aim of PPIs is to identify, develop and broaden individual trait-like characteristics, such as the elements of psychological capital, and to promote well-being. Interventions to increase psychological capital are assumed to lead to higher efficiency and performance (Luthans and Youssef 2004). The development of hope can be used to build positive well-being through identifying personally important goals, goal design, pathway generation, identifying resources and possible obstacles in achieving goals, reframing barriers, etc. Hope interventions help individuals to set realistic goals that can boost their well-being as they achieve them. Optimism is another component of psychological capital development. Optimism can be enhanced through various types of interventions, resulting in increased well-being. Research findings suggest, for example, that gratitude interventions may enhance optimism and well-being (Emmons and McCullough 2003; Froh et al. 2008). Optimistic thinking includes positive expectations for the future and viewing negative life events as temporary, external, and limited to the immediate incident (Seligman et al. 1995). Thus, activities targeting optimistic thinking may prevent teachers from burning out and suffering stress, and may have positive effects on their self-efficacy and well-being.
Further practices can be applied in the context of work in order to improve workplace happiness and well-being. Our suggestions are:
1. The use of strengths-based feedback (Roberts et al. 2005; Aguinis et al. 2012; Herman et al. 2012) instead of weakness-focused feedback on performance.
2. Matching work tasks and employee characteristics using the 'job crafting' approach (Wrzesniewski and Dutton 2001), resulting in work becoming more meaningful for the person, which has a positive effect on well-being and productivity.
3. Using solution-focused brief coaching: this type of coaching focuses on solutions, individual strengths, personal resources, and the future instead of causes and problems. This technique has a positive impact on psychological well-being, strengthens hope and spurs further efforts to achieve the goal (Green et al. 2006; Sherlock-Storey et al. 2013).
The above-mentioned research suggests that increasing well-being through intentional activities has multiple effects on employees. Workplace positive psychology interventions are relatively few in number, and many previous studies have not focused much on what individuals can do to enhance their own well-being themselves (Sin and Lyubomirsky 2009; Mazzucchelli et al. 2010). PPIs are simple and time-saving self-guided interventions that can improve well-being in today's work environment (Meyers et al. 2013).
Our findings seem promising in regard to determining the main intervention fields for enhancing teachers' well-being. Considering our main results linked to our research questions, we think that PsyCap can be one of the core constructs of interventions. As our results indicated, all five well-being factors of PERMA were related to overall PsyCap, and two elements, Positive Emotions and Achievement, showed the strongest relationship with it. Our findings also showed that optimism (of PsyCap) and Positive Emotions (of PERMA) were the most relevant factors in relation to overall workplace happiness. Considering these results, we recommend developing programs aimed at the PsyCap components (hope, self-efficacy, resilience, and optimism) that can have a positive effect on overall well-being (Luthans et al. 2006), and we also suggest putting the main focus on Achievement and Positive Emotions.
Currently little is known about which interventions impact which elements of PERMA, but a few studies provide the first results about techniques that increase positive emotions, enhance achievement, and raise global happiness. These studies have used different activities, for example, working on personal goals, committing acts of kindness, visualizing one's best possible self, or remembering one's best achievement (Fordyce 1983; Sheldon and Lyubomirsky 2006; Pham and Taylor 1999; Sheldon et al. 2002). In order to design specific and relevant interventions for improving workplace happiness, our qualitative research may help to determine their specific areas and content. For example, strengthening the sense of competence and pursuing personal goals can contribute to the sense of success (Sheldon et al. 2002), and the job crafting technique may help teachers to find or reshape the meaning of their work (Wrzesniewski and Dutton 2001).
We conclude that the two models of PsyCap and PERMA and our qualitative analysis are a good starting point in developing PPIs in order to improve teachers' workplace happiness and well-being. An additional future direction could involve developing programs for teachers specifically but more research is needed to work toward recommendations regarding this issue.
Limitations and Future Research
It is important to identify several limitations of our study. First, our research was conducted on a sample of Hungarian teachers, and therefore the results cannot be generalized to the wider population. Also, teachers in our sample were affiliated with different types of educational institutions, and we did not control for contingency and organizational variables. Another limitation of this research is that the constructs were measured via self-reported questionnaires, and the cross-sectional nature of the data does not allow us to infer causal relationships.
For future directions, it would be interesting to study the level of well-being and happiness among different age (Avanzi et al. 2012;Kinnunen et al. 1994) and occupational groups of teachers or in different educational institutes. We believe that research questions raised in this study deserve further research attention, along with applied PPIs practices in work environment and other relevant constructs that can serve as new resources for workplace well-being and happiness. We hope our results hold implications for the future of positive psychological research for teachers and school settings.
Right Transcephalic Ventriculo-Subclavian Shunt in the Surgical Treatment of Hydrocephalus—An Original Procedure for Drainage of Cerebrospinal Fluid into the Venous System
The objectives of this article are to present an original surgical procedure for the temporary or definitive resolution of hydrocephalus in the case of repeated failure of standard treatment techniques, and to present a case that was resolved using this surgical technique. Materials and methods: We present the case of a 20-year-old male patient with congenital hydrocephalus who underwent 39 shunt revisions, given the repetitive dysfunctions of various techniques (ventriculo-peritoneal shunt, ventriculo-cardiac shunt). The patient was evaluated with the distal end of the ventricular catheter externalized, and it was necessary to find an emergency surgical solution, considering the imminent risk of meningitis. The patient also presented with acute lithiasic cholecystitis. Results and discussions: The final chosen solution, right ventriculo-venous drainage using the cephalic vein, was intended as a temporary surgical solution, but there are signs that this procedure can provide long-term ventricular drainage. Conclusions: Transcephalic ventriculo-subclavian drainage represents an alternative technical option, which can be used when established options become ineffective.
Introduction
Identifying the etiology of hydrocephalus, and the medical or surgical treatment of its cause, represent objectives that can only be achieved in a small number of cases [1,2]. The surgical treatment alternative is represented by ventricular drainage to structures or cavities that can take over variable amounts of cerebrospinal fluid (CSF) [3,4].
The technical variants adopted for the evacuation of excess CSF from the ventricular cavities are very numerous and have appeared as a necessity, given the imperfections of the reported methods [5,6]. Improving the results of these techniques has been a constant concern, given the inconveniences or complications specific to each method applied. Improving the quality of the tubing or pressure-modulating valves has improved postoperative outcomes [7][8][9][10].
The choice of the type of drainage takes into account a number of parameters: age, type of hydrocephalus, the severity of the condition, associated disorders, operative risks, possible inconveniences and complications, temporary or definitive nature of the chosen surgical solution, etc. [18,19].
Unfortunately, there are situations in which the chosen surgical procedure cannot ensure the long-term drainage of excess CSF, either due to malfunctions of the materials used (tubing, valves), due to the onset of complications in the structures that receive the excess CSF, or of complications at the level of cerebral structures (epi-and subdural hematomas, chronic subdural hygromas, septic complications, pneumoencephaly, postshunt craniostenoses, etc.) [20][21][22][23][24][25][26][27][28].
Currently, the gold standard for the surgical resolution of hydrocephalus is the ventriculo-peritoneal shunt. An efficient and well-tolerated procedure, drainage of CSF into the peritoneal cavity has spread widely and represents, in many cases, the first therapeutic option. Unfortunately, the range of complications specific to this technique is quite extensive: obstruction or disconnection of the distal catheter, septic complications in the peritoneum or intraperitoneal organs, CSF pseudocyst, CSF ascites, bowel obstruction, inguinal hernia and hydrocele, visceral perforations, peritoneal metastases from central nervous system tumors, etc. [29][30][31][32][33][34][35][36]. Failure rates of ventriculo-peritoneal shunts have been estimated at between 11% and 25% within the first year after initial shunt placement [37][38][39][40], with most references reporting a significantly higher number of shunt revisions among pediatric patients compared to adults [29,39,40].
The existence of such repetitive complications requires shunt revisions or the choice of another surgical technique, with a progressive reduction in the chances of achieving long-term functionality. A series of technical variants has already been tried, proving of limited efficiency, associated with severe complications, and used only in certain particular cases (ventriculo-pleural shunt, ventriculo-gallbladder shunt, ventriculo-ureteral shunt, lumbo-ureteral shunt, ventriculo-mastoid shunt, etc.) [54][55][56][57][58].
Materials and Methods
We present the case of a 20-year-old male patient diagnosed with congenital hydrocephalus, treated by a surgical procedure (ventriculo-peritoneal shunt) for the first time six months after birth. During the next four years, the functionality of the drainage was good but, later, a CSF pseudocyst occurred. This complication led to surgical evacuation of the pseudocyst and repositioning of the distal catheter. Such episodes recurred at various time intervals, forcing the surgical team to change the type of placement to a biventriculo-atrial shunt with a low-pressure valve at the age of 17. Unfortunately, 13 months later, the patient presented with headache, vomiting and unsystematized static and dynamic balance disorders. The cerebral CT scan performed at admission revealed enlarged cerebral ventricles. Due to this situation, a new surgery followed in order to convert the drainage to a unishunt biventriculo-peritoneal system.
In the next three years, new surgical procedures followed; in 12 situations the shunt was externalized for several days, the time needed to rest the peritoneum, perform investigations on the quality of the CSF, and establish the subsequent therapeutic strategy. Thus, about 38 ventriculo-peritoneal shunt revisions and a ventriculo-cardiac shunt took place in a relatively short interval.
The recurrence of the intracranial hypertension symptoms after this long succession of surgeries brought the patient, once again, to the emergency department. Upon hospital admission, the imaging exams showed the occurrence of a new CSF pseudocyst (Figure 1A-D) and acute lithiasic cholecystitis. The neurosurgical team externalized the distal end of the ventricular drainage catheter from the peritoneum as an emergency procedure. The surgical evaluation confirmed the existence of the two pathological entities requiring surgical resolution. Although the rate of success after evacuation of the pseudocyst and repositioning of the intraperitoneal catheter was relatively low, this was attempted considering the laparoscopic approach required to treat the acute lithiasic cholecystitis. On this occasion, we observed extensive bowel adhesions (predominantly in the inframesocolic space) and performed adhesiolysis, retrograde laparoscopic cholecystectomy, evacuation of the pseudocyst, and repositioning of the distal tip of the catheter in the lower abdomen, after reconnection to the drainage system. The subhepatic drainage tube was removed 24 h postoperatively.
The immediate postoperative evolution was favorable, both surgically and neurologically, for approximately 14 days, until the intracranial hypertension phenomena reappeared, the patient presenting with headache, vomiting, fever, altered state of consciousness, drowsiness and divergent strabismus. A cerebral CT scan and Magnetic Resonance Imaging (MRI) showed active hydrocephalus (Figures 2 and 3), which again required the externalization of the distal end of the ventricular catheter. Faced with this therapeutic impasse, we found the solution of catheterizing the right cephalic vein, a tributary of the superior cava system, as the equivalent of the ventriculo-cardiac shunt. The vein was identified in the right deltopectoral space and catheterized with the distal end of the ventricular tubing (Figures 4A,B and 5A,B), with CSF flow modulated by the right retroauricular valve. The distal (intraperitoneal) end of the previous drainage duct was initially left in place and later removed. The clinical and imaging (Figure 6A,B) outcome was favorable four months after the surgery.
Results
The evacuation of excess CSF from the ventricular system represents an objective that can be achieved by medical or surgical means, in order to avoid severe intracranial hypertension and its consequences for the brain parenchyma. Due to the low efficiency of medication, it is used solely in mild forms and as a preoperative preparation [59].
The surgical treatment uses three leading solutions:
− endoscopic internal drainage (third ventriculostomy with/without aqueductoplasty, perforation of the lamina terminalis, posterior ventriculostomy).
The main beneficiaries of these procedures are patients with obstructive hydrocephalus due to posterior cerebral fossa tumors, those with CSF circulation disorders (aqueduct of Sylvius stenosis, Dandy-Walker malformations) and skull base malformations. The multiple advantages of these surgical procedures are overshadowed by: the limitation of applicability to the aforementioned pathologic conditions, bleeding from the choroid plexus, risk of injury to the basilar artery branches, closure of the surgically created communication, or subdural hematoma [16,[60][61][62][63][64][65][66][67][68].
− external drainage of the CSF and its collection in an external reservoir is acceptable as a temporary solution in the case of association with meningitis or intraventricular bleeding [69][70][71].
− extracranial drainage of the CSF is the most widely used method and is applied to all patients, with the choice of the technical solution depending on several criteria: age, generating cause, associated disorders, operative risks, temporary or definitive nature of the procedure, etc. [4,11,12,18,72,73].
The efficiency of drainage into the cephalic vein (which can be extended to the superior vena cava and right heart, respectively), with results and possible complications similar to those of ventriculo-cardiac drainage but using another approach, confirms the validity of the procedure and offers an alternative for the surgical treatment of hydrocephalus when complications related to established surgical procedures appear.
In the period elapsed since our surgical team applied this novel therapeutic solution, no complications related to the neurological condition or the operative act have been found.
Discussion
Although physiological considerations suggest that the drainage of excess CSF from active hydrocephalus should be directed into a venous segment, the well-established surgical method is the ventriculo-peritoneal shunt. The peritoneum provides a generous resorption surface, which tolerates CSF well. The resorptive function of the peritoneum can be cancelled or reversed by certain factors that are only partially known: immunological, septic, mechanical, allergic factors, etc. Under these conditions, the ventriculo-peritoneal shunt becomes non-functional due to the appearance of an intraperitoneal complication: CSF pseudocyst, CSF ascites, plastic peritonitis, etc. Thus, a ventriculo-peritoneal shunt, well tolerated for long periods of time, can become quite suddenly ineffective. Reinterventions involving shunt revision, with possible resolution of intraperitoneal surgical complications, represent the solution, but with a diminished chance of achieving a long-term result using the same surgical procedure [29,33,[74][75][76][77][78].
This problem prompts the search for another surgical solution (endoscopic internal drainage or ventriculo-cardiac shunt). The alternative methods mentioned above resolve the situation, but they are also burdened by certain complications, as previously noted. Thus, in many cases a therapeutic impasse can appear that endangers the neurological condition and even the patient's life. Finding alternative solutions has been a permanent concern, but unfortunately no superior treatment options have been identified.
Analyzing the particular situation of our patient, and considering that iterative repositioning of the catheter was doomed to failure, that intraperitoneal drainage was no longer an option, and that the transjugular path (ventriculo-cardiac shunt) was exhausted, we identified the solution of placing the ventricular catheter in the right cephalic vein, thus accessing the superior cava system, with the possibility of progression up to the level of the right side of the heart.
The reasons considered in choosing this surgical approach were: the sufficient pressure gradient between the cerebral ventricular system (even more so in conditions of hydrocephalus) and central venous pressure, the convenience of access (the catheter crosses a short subcutaneous segment to the deltopectoral space, where the surgeon will find the right cephalic vein), reduced surgical risks, physiological considerations (the procedure is an equivalent of a ventriculo-cardiac shunt using another access route), the ease of re-accessing the valve and the venous approach (if necessary), the excellent tolerance of CSF in the venous system, etc.
The pressure gradient ensures adequate drainage (under 20 mmHg in the ventricular system in the absence of hydrocephalus [79] and 2-6 mmHg in the upper cava system [80]). The much higher values in hydrocephalus cause an unhindered discharge of excess CSF (but modulated by the pressure valve) into a venous bloodstream, which can take up any amount.
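As a rough numerical illustration of this rationale, the short sketch below compares the driving pressure gradient with an assumed valve opening pressure. The specific values are hypothetical and chosen only to fall within the ranges cited above; they do not describe the actual patient or valve used.

```python
# Hypothetical pressures (mmHg), consistent with the ranges cited in the text.
icp_hydrocephalus = 25.0   # raised intracranial pressure in active hydrocephalus
venous_pressure = 4.0      # mid-range central venous pressure (2-6 mmHg)
valve_opening = 10.0       # assumed opening pressure of a medium-pressure valve

# CSF flows through the valve only while the gradient exceeds its opening pressure.
gradient = icp_hydrocephalus - venous_pressure
print(f"driving gradient: {gradient} mmHg "
      f"({'flow' if gradient > valve_opening else 'no flow'} through the valve)")
```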
The catheter inserted into the cephalic vein can be advanced to the level of the large veins tributary to the superior vena cava, the superior vena cava itself, or up to the level of the right heart, thus obtaining an equivalent of the classic ventricular shunt. In our case, the catheter was placed in the right subclavian vein, using the right transcephalic route.
Some pathologic circumstances may limit the use of this access route to the superior vena cava system, such as previous use of the cephalic vein for angioaccess procedures, thrombosis of the superficial veins of the right upper limb, tumoral disorders, keloid scars of the deltopectoral space, orthopedic disorders of the shoulder, etc. In such cases, the contralateral cephalic vein can be used, although the drainage tubing route is longer, with more risk of torsion and drainage malfunction.
Another circumstance that may contraindicate this procedure is the occurrence or persistence of complications related to a ventriculo-cardiac shunt previously performed by the transjugular route (as mentioned above). The concern is that the drainage of excess CSF by the transcephalic/transsubclavian pathway may reactivate or worsen a previously observed pathological condition (e.g., shunt nephritis).
In the event of a dysfunction of this type of shunt, revision takes little effort to perform, with convenient access to the ventricular catheter, valve, or distal segment of the drainage tubing.
In the literature, the ventriculo-subclavian shunt has been described by a series of authors. Matsuoka et al., in 1993, published two case reports, of a 64-year-old male and a 65-year-old male, respectively, both of whom had their subclavian vein punctured through the infraclavicular approach, with positive results [81]. Another case, reported by Evangelos et al. in 2017, presented a 4-year-old child with multiple ventriculo-peritoneal shunt revision surgeries and ventriculo-atrial failure due to distal catheter malfunction, which was treated with the percutaneous placement of the peripheral catheter in the subclavian vein [82].
Using the right cephalic vein as the anatomic site of insertion of the ventricular shunt into the venous system is the innovative step of our procedure. The proposed surgical procedure did not raise any particular technical problems, and the intervention was carried out without complications. We did not study the flow of the cephalic vein preoperatively (Doppler ultrasound, computed tomography, phlebography), but the lack of other therapeutic options led us to use the cephalic vein as an access path to reach the subclavian vein, whose venous flow we considered sufficient to absorb the excess CSF; the favorable clinical and imaging evolution of the patient showed that this assumption was correct.
The test of time will prove whether the proposed method will be imposed as a therapeutic alternative to the well-established techniques, in the event of a therapeutic impasse, or as a first-option solution.
Conclusions

1. The ventriculo-subclavian shunt is an easy surgical procedure.
2. It is a solution for cases where the variants of standard surgical treatment have been exhausted.
3. It is a drainage solution for excess CSF into the superior vena cava system (equivalent to the established ventriculo-cardiac shunt).
4. It uses an access path that does not have anatomical/functional disadvantages.
5. Depending on the patency of the method and possible late complications, it can become a variant of a first-option treatment.

Funding: The publication of this paper was supported by the University of Medicine and Pharmacy "Carol Davila", through the institutional program "Publish not Perish".
The Potential Use of Sialic Acid From Edible Bird’s Nest to Attenuate Mitochondrial Dysfunction by In Vitro Study
Edible bird’s nest (EBN) is one of the expensive functional foods in herbal medicine. One of the major glyconutrients in EBN is sialic acid, which has a beneficial effect on neurological and intellectual capability in mammals. The aims of this research were to study the effects of sialic acid from EBN on cell viability and to determine its effect on mitochondria membrane potential (MtMP) in Caco-2, SK-N-MC, SH-SY5Y, and PC-12 cell lines. Fourteen samples of raw EBN were collected from four different states in Malaysia. The confluency of the epithelial monolayers measurement of the tight junction for all the cell lines was determined using transepithelial electrical resistance (TEER), and the sialic acid uptake study in cell lines was determined by using ultra-high performance liquid chromatography (UHPLC). The MTT assay was conducted for cell viability study. The MtMP in cell lines was determined using the Mito Probe JC-1 Assay by flow cytometer analysis. We have recorded a statistically significant difference between the uptake of sialic acid from EBN and the standard solution. A higher amount of sialic acid was absorbed by the cells from extract of EBN compared to the standard solution. The amounts of sialic acid uptake in Caco-2, SK-N-MC, SH-SY5Y, and PC-12 cell lines were (0.019 ± 0.001), (0.034 ± 0.006), (0.021 ± 0.002), and (0.025 ± 0.000) µmol/L, respectively. The MTT results indicated that the concentration of sialic acid increased the cell viability and showed no cytotoxicity effects on cell lines when they were exposed to the sialic acid extract and sialic acid standard at all the tested concentrations. The number of active mitochondria was found to be significantly higher in SH-SY5Y cell lines with a 195% increase when treated with sialic acid from EBN. Although many researchers around the globe use SH-SY5Y and SK-N-MC for Alzheimer’s disease (AD) study, based on our finding, SH-SY5Y was found to be the most suitable cell line for AD study by in vitro works where it has a known relationship with mitochondrial dysfunction.
INTRODUCTION
The human body is composed of many vital organs, of which one of the largest and most complex is the brain (Jawabri and Sharma, 2020). Under normal circumstances, brain aging among healthy adults involves various structural, chemical, functional, and neuronal alterations. Several of these changes are indicated by a decrease in overall brain size as well as declining neurotransmitter systems (Marsman et al., 2013). Although all healthy adults are subject to these changes, it is important to highlight that age-related neurodegenerative disorders are not part of regular healthy aging. These mainly refer to Alzheimer's disease (AD) and other forms of dementia (Dobrowolski, 2014).
Alzheimer's disease or a related form of dementia is estimated to affect approximately 44 million people worldwide (Duthey, 2013). To date, the mechanism involved in AD pathogenesis has yet to be fully elucidated. The amyloid cascade hypothesis is the most widely recognized mechanism among the numerous hypotheses that have been proposed. It puts forward the role of neurotoxic β-amyloid proteins deposited within the brain, whose presence instigates pathological changes that include aggregation of amyloid plaques and intracellular neurofibrillary tangles (Murphy et al., 2010). Apart from this hypothesis, there are also studies that suggest an interrelation between mitochondrial damage and AD (Wang et al., 2020). The idea behind this is that healthy mitochondria are vital for neuronal activity and also act as a protection mechanism to minimize possible oxidative damage (Wang et al., 2020); damaged mitochondria are therefore believed to interfere with these essential roles.
Mitochondria are essential for several roles, their main function being energy production. The synthesis of the high-energy molecule adenosine-5′-triphosphate (ATP) is associated with the presence of mitochondria. The mechanism involves conversion of metabolite energy to reduced nicotinamide adenine dinucleotide (NADH), followed by electron transfer from NADH while protons are pumped across the inner membrane into the intermembrane space. This creates a transmembrane potential that drives the reflow of protons across the inner membrane through ATP synthase, and this energy finally drives the phosphorylation of adenosine diphosphate (ADP) to ATP (Nicholls, 2010; Rich and Maréchal, 2010; Divakaruni and Brand, 2011; Bonora et al., 2012). Three factors have been proposed as possible causes of mitochondrial dysfunction: inability to provide the required substrates, insufficient mitochondria, and failure of the electron transport and ATP synthesis machinery (Nicolson, 2014).
The presence of sialic acid is deemed essential for brain development; it participates in gangliosides with a specific role in enhancing learning capability and memory (Tram et al., 1997; Wainscot, 2004). Cognitive ability in mammals is related to variations in brain sialic acid concentration. In young mammals, concentrations of ganglioside- and protein-bound sialic acid were enhanced upon exogenous supplementation (Oliveros et al., 2018). Moreover, exogenous sialic acid localizes to the synapse, and sialic acid supplementation influences the movement of neurotransmitters, transmitter release, and the alteration of existing synaptic morphology (Morgan and Winick, 1979). Dietary sialic acid has a role in brain development. A previous study by Sprenger and Duncan showed a significant rise in sialic acid concentration in brain gangliosides and glycoproteins following oral administration of sialic acid during the initial postnatal period in rodents (Sprenger and Duncan, 2012). Another study found that a decline in exogenous sialic acid concentration in the brain leads to irreversibly decreased cognitive function, whereas supplementary sialic acid improves the learning process (Tram et al., 1997). Thus, nutritional intervention research is important to evaluate the benefits in the digestion and absorption systems associated with neurodevelopmental function. This allows detailed analysis of cognitive function and behavior at numerous stages of development and shows the association between dietary sialic acid supplementation and the development of cognitive function (Wang, 2012).
Sialic acid can be found in edible bird's nest (EBN); it is one of the eight glyconutrients and has been shown to increase cell and tissue repair (Kong et al., 1987) and to promote cell division and proliferation (Aswir and Wan Nazaimon, 2010). EBN has also been reported to be effective in the treatment of neurodegenerative disorders affecting hippocampal and cortical neurons in the brain, such as AD and Parkinson's disease (PD) (Yew et al., 2014; Zhiping et al., 2015), and to improve physiological human health (Guo et al., 2006; Matsukawa et al., 2011). However, extensive research is required to verify the effective levels and safety of EBN before it can be marketed more widely and consumed by humans. This is important because EBN is considered very precious, and high consumption might not necessarily benefit the body.
Given the restricted access to human brain tissue in neuron-based disorders, most research has been channeled toward in vitro cell lines. One of the most commonly used is the neuroblastoma (SH-SY5Y) cell line, which has been applied as a prototype for Aβ cytotoxicity in AD (Xie et al., 2010). Another option is human induced pluripotent stem (iPS) cells isolated from familial AD (FAD) patients; as differentiated cells, these are suitable for evaluating presenilin mutation effects (Penney et al., 2020). In addition to these two cell lines, immortal rat hippocampal (IRH) cell lines obtained from embryonic rat hippocampus are also deemed vital, as hippocampal neurons are accountable for both cognitive and memory ability (Eves et al., 1992). Furthermore, a previous study by Gilbert and Ross noted that this cell line is more beneficial, given its malignant nature and lack of cell lineage specificity, compared to tumor cells (Gilbert and Ross, 2009).
As mentioned above, the progression of β-amyloid may result in loss of memory function; thus, the objective of our study was to examine the effect of sialic acid on mitochondrial dysfunction using several types of cell lines.
EBN and Sialic Acid Extract Source
A total of 14 raw, unprocessed EBN samples were collected from four states of Peninsular Malaysia, representing each region. The samples were collected during the breeding season of the edible-nest swiftlet, from April to August 2016, manually cleaned, and finely ground using a grinder. The sialic acid was extracted from the raw EBN samples at SIRIM Berhad, Malaysia, using high-performance liquid chromatography (Siti Khadijah et al., 2019).
Cell Lines and Culture Conditions
Four different types of cell lines were used in this study: colorectal adenocarcinoma (Caco-2; ATCC cat. no. HTB-37), neuroepithelioma (SK-N-MC; ATCC cat. no. HTB-10), neuroblastoma (SH-SY5Y; ATCC cat. no. CRL-2266), and pheochromocytoma (PC-12; ATCC cat. no. CRL-1721). All cell lines were purchased from the American Type Culture Collection (ATCC, United States). Each cell line was seeded and maintained in 25 cm² culture flasks (Costar, Cambridge, MA) until use.
The PC-12 and SH-SY5Y cells were grown in Dulbecco's Modified Eagle's Medium (DMEM; GIBCO, New York, United States) supplemented with 20% and 15% v/v fetal bovine serum (FBS), respectively, while SK-N-MC cells were grown in Eagle's Minimum Essential Medium (EMEM; GIBCO, New York, United States) supplemented with 20% v/v fetal bovine serum. All media contained 1% v/v nonessential amino acids (GIBCO, New York, United States), 1% antibiotic (penicillin-streptomycin) (GIBCO, New York, United States), and 1% v/v L-glutamine (GIBCO, New York, United States). Only the PC-12 medium was additionally supplemented with 15% horse serum (GIBCO, New York, United States). All cells were maintained under the same conditions, at 37°C in an incubator with 5% carbon dioxide, 95% humidity, and an air atmosphere. The medium was replaced every 2-3 days. The cells were maintained until they reached 80% confluency.
Transepithelial Resistance Values Measurement
The cells reach maximal levels of differentiation after several days in culture. To confirm this process, transepithelial electrical resistance (TEER) values can be monitored using the EVOM2™, because fully differentiated polarized cells have tight junctions with a TEER of >200 Ω·cm² (MacCallum et al., 2005). The EVOM2™ measures cell monolayer health and cellular confluence via qualitative and quantitative measurement, respectively. Before measurement, STX2 electrodes (World Precision Instruments, New Haven, CT, United States) were equilibrated and sterilized according to the manufacturer's recommendations. Two hundred microliters of culture medium was added to the upper compartment of the cell culture system. The ohmic resistance of a blank (culture insert without cells) was measured in parallel. The blank value was subtracted from the total resistance of the sample to obtain the sample resistance. The final unit-area resistance (Ω·cm²) was calculated by multiplying the sample resistance by the effective area of the membrane (0.33 cm² for 24-well Millicell insert plates).
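For illustration only (this is not code from the study), the calculation described above can be expressed as a short sketch; the resistance readings are hypothetical.

```python
# Minimal sketch of the TEER calculation: blank-corrected sample resistance
# multiplied by the effective membrane area, with the >200 ohm*cm^2
# tightness threshold from the text. All readings are hypothetical.

MEMBRANE_AREA_CM2 = 0.33  # effective area of a 24-well Millicell insert


def unit_area_resistance(total_ohm: float, blank_ohm: float,
                         area_cm2: float = MEMBRANE_AREA_CM2) -> float:
    """Return the unit-area resistance in ohm*cm^2."""
    return (total_ohm - blank_ohm) * area_cm2


teer = unit_area_resistance(total_ohm=1050.0, blank_ohm=120.0)
print(f"TEER = {teer:.1f} ohm*cm^2 ->",
      "tight junctions" if teer > 200 else "not yet confluent")
```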
Cell Viability
The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay is commonly used to measure cell viability and proliferation (Ahmad et al., 2006). Adherent cells were dissociated from their substrate by trypsinization or scraping. The cells were then centrifuged at 830 × g for 5 min, the supernatants were discarded, and the cell pellet was resuspended in DMEM and seeded at a density of 20,000 cells/cm² into a 96-well plate. The cells were incubated overnight in an incubator with 5% carbon dioxide (CO₂), 95% air atmosphere, and 95% relative humidity at 37°C. The next day, the medium was changed to FBS-free medium before exposure to (or treatment with) sialic acid at the required concentration (20, 40, 60, 80, and 100 μg/ml), and the cells were incubated for 24 h. MTT reagent (10 µL at a concentration of 5 mg/ml) was added to each well and incubated for 3-4 h until the purple precipitate was visible. The medium was then aspirated from the wells as completely as possible without disturbing the formazan crystals and cells on the plastic surface. Next, 100 µL of dimethyl sulfoxide (DMSO) was added to each well, followed by agitation on a plate shaker for 5 min; finally, the optical density was read at 570 nm. The number of surviving cells is directly proportional to the amount of formazan product created (Van de Loosdrecht et al., 1994).
The absorbance value for the blanks should be 0.00 ± 0.1 optical density (O.D.) units, and the absorbance range for untreated cells should typically be between 0.75 and 1.25 O.D. units. Selecting a cell number that provides values within this range allows for the measurement of both stimulation and inhibition of cell proliferation. If the absorbance values of the experimental samples are higher than the control, this indicates an increase in cell proliferation; if they are lower than the control, this indicates a reduction in the rate of cell proliferation or in overall cell viability.
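A minimal sketch (our own illustration, not the authors' analysis) of how such absorbance readings are typically converted into viability relative to the untreated control; all OD values are hypothetical.

```python
# Hypothetical OD readings at 570 nm; viability is expressed relative to the
# untreated control after blank subtraction. Values > 100% indicate
# stimulated proliferation, values < 100% indicate reduced viability.
OD_BLANK = 0.05


def percent_viability(od_sample: float, od_control: float,
                      od_blank: float = OD_BLANK) -> float:
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)


control = 1.02                              # untreated cells, within 0.75-1.25 O.D.
treated = {20: 1.10, 60: 1.21, 100: 0.83}   # dose in ug/ml -> OD reading

for dose, od in treated.items():
    print(f"{dose} ug/ml: {percent_viability(od, control):.0f}% of control")
```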
Sialic Acid Uptake Studies in Caco-2, SK-N-MC, SH-SY5Y, and PC-12 Cell Lines

For the purpose of the sialic acid uptake studies, the cells were grown on transwell plates for 21 days. The culture medium was removed before adding 2 ml of uptake buffer (140 mM NaCl, 5 mM KCl, 1 mM NaH2PO4, 10 mM MES, 0.5 mM MgCl2, and 1.0 mM CaCl2 at pH 6.0) and incubating at 37°C for 2 min. Later, 1 ppm (∼3 μmol/L) of sialic acid extract was added to each well and incubated for 15 min at 37°C. The buffer was then aspirated, and the cells were rinsed three times with cold buffer before adding 1 ml of 200 mM NaOH to solubilize the cells, which were left overnight at 4°C. After the overnight incubation at 4°C, the cells were removed, and the levels of sialic acid in the buffer and cells were determined using UHPLC. This procedure follows the method of Derakhshandeh and colleagues with slight modification (Derakhshandeh et al., 2011).
Mitochondrial Membrane Potential
This procedure was performed using the MitoProbe JC-1 Assay Kit for flow cytometry (Life Technologies, United States). The kit contains 30 µg of powdered dye, DMSO, carbonyl cyanide 3-chlorophenylhydrazone (CCCP), and 10× phosphate-buffered saline (PBS). The cell lines were treated with 150 µL of sialic acid extract or standard for 24 h prior to staining for flow cytometric measurement. In brief, all reagents were equilibrated at room temperature before beginning the experiment. A 200 µM JC-1 stock solution was prepared by dissolving the contents of the vial in 230 µL of DMSO. Then, 1 × 10⁶ cells/mL of each cell line were collected by scraping and centrifuged in warm PBS before being incubated at 37°C for 5 min. To provide the positive control, 1 µL of 50 mM CCCP (50 µM final concentration) was added to the tube and incubated for 5 min. After the incubation, 10 µL of JC-1 stock solution (2 µM final concentration) was added to all tubes and incubated for another 30 min at 37°C. All cells were then washed with warm PBS and centrifuged to obtain the cell pellet, after which the cells were resuspended in 500 µL of PBS per tube. All tubes were analyzed using a flow cytometer (BD FACSAria™) with 488 nm excitation, using emission settings for Alexa Fluor® 488 dye, and fluorescence microscopy. JC-1 detection used bandpass filters centered around 529 nm (green fluorescence) and around 595 nm (orange fluorescence). Logarithmic signal amplification was used, with typical green-orange electronic signal compensation near 4% and orange-green signal compensation around 10% (Cossarizza and Salvioli, 2001). Depolarization of mitochondria in cells is indicated by a decreasing ratio of fluorescence intensity (red JC-1 aggregates/green JC-1 monomers).
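To illustrate the readout logic only (this is not the instrument's software), per-event red and green intensities can be reduced to a ratio, with depolarised events flagged by a low red/green ratio; the values and cut-off below are purely illustrative.

```python
import numpy as np

# Hypothetical per-event fluorescence intensities from the two channels.
red = np.array([850.0, 900.0, 120.0, 95.0])     # ~595 nm, JC-1 aggregates
green = np.array([300.0, 310.0, 400.0, 380.0])  # ~529 nm, JC-1 monomers

ratio = red / green   # falls when mitochondria depolarise
active = ratio >= 1.0  # illustrative cut-off, e.g. calibrated on the CCCP control
print(f"Active mitochondria: {100 * active.mean():.0f}% of events")
```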
Statistical Analysis
All data were analyzed using SPSS software version 16 (IBM Software, Inc., New York, United States). The viability of the cell lines induced with sialic acid was analyzed using one-way analysis of variance (ANOVA) with post hoc Tukey's test to identify significant differences between groups. p < 0.05 was considered statistically significant.
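The same analysis can be reproduced outside SPSS; the sketch below uses SciPy and statsmodels on made-up triplicate readings for three hypothetical treatment groups.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate viability readings for three treatment groups.
control = [0.92, 0.88, 0.95]
extract = [1.41, 1.35, 1.48]
standard = [1.12, 1.08, 1.15]

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(control, extract, standard)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test for pairwise group differences at alpha = 0.05.
values = np.array(control + extract + standard)
groups = ["control"] * 3 + ["extract"] * 3 + ["standard"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```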
Transepithelial Resistance Values Measurement
The confluency stage of the cell cultures for all cell lines, determined using the EVOM2™, is shown in Figures 1 and 2. The Caco-2 and SK-N-MC cell lines reached 50% confluency at day 12 with 289.91 ± 137.89 Ω·cm² and 292.71 ± 80.61 Ω·cm², respectively, while the SH-SY5Y and PC-12 cell lines reached 50% confluency at day 15 with 305.91 ± 80.61 Ω·cm² and 280 ± 127.98 Ω·cm², respectively. A Pearson rank correlation analysis was performed to determine the relationship between the days of culture and the TEER reading. Based on the results in Figure 2, a strong positive correlation between culture duration and TEER was recorded for all cell lines (p < 0.01).
Cell Viability

Table 1 shows the effect of sialic acid extract and standard at concentrations of 20, 40, 60, 80, and 100 μg/ml on cell viability in serum-free medium, which represents the conditions of the sialic acid uptake study. The cell viability results showed no cytotoxic effects in any of the neuroblastoma and epithelial cell lines when exposed to sialic acid at concentrations of 60 μg/ml or below. All cell lines showed significant differences in cell viability (p < 0.05).
The percentage of cell viability in the epithelial and neuroblastoma cell lines was significantly higher when induced with sialic acid extract compared to sialic acid standard. Above a concentration of 60 μg/ml, all cell lines showed a negative effect of sialic acid on cell viability.
Sialic Acid Uptake Studies in Caco-2, SK-N-MC, SH-SY5Y, and PC-12 Cell Lines

Table 2 shows the concentration of sialic acid absorbed by each cell line after the selected concentration of sialic acid was added to the cells. For the cell monolayer, all cell lines showed significant differences between the extract and the control (p < 0.05) and between the extract and the standard (p < 0.05). However, only the Caco-2 cell line showed a significant difference between the control and the standard (p < 0.05). For the basal solution, the mean absolute difference (MD) of sialic acid uptake was significant between the control and the extract in all cell lines; the MD of sialic acid uptake was also significant between the extract and the standard. In contrast, only the SH-SY5Y cell line recorded a difference in sialic acid uptake between the control and the standard.

Table 3 shows the percentage of active mitochondria after treatment with sialic acid in the cell lines, determined by flow cytometry. Figures 3-6 present the excitation peaks of all cell lines exposed to JC-1 dye at a 488 nm wavelength. One-way ANOVA showed a significant difference in the number of active mitochondria between all treatment groups (p < 0.05). The Tukey post hoc test revealed that the number of active mitochondria in SH-SY5Y was significantly higher when induced with sialic acid compared with the control (p = 0.000). However, even without sialic acid treatment, all other cell lines had a higher number of active mitochondria at the start of the experiment.
DISCUSSION
In studies that involve cell lines, it is important to control possible variations, especially by standardizing the optimal duration after seeding, as this ensures the generation of differentiated cultures with excellent uniformity. Furthermore, the development of tight junctions in these cells is highly associated with the culture duration (Srinivasan et al., 2015). As an example, Caco-2 cells are derived from colorectal adenocarcinoma cells and spontaneously undergo differentiation between 0 and 21 days; it is also important to note that Caco-2 cells are not identical to normal duodenal enterocytes (Mahraoui et al., 1994; Sharp et al., 2002; Mariadason et al., 2000). Based on the reported literature, fully differentiated polarized Caco-2 cells have tight junctions with a TEER value of >200 Ω·cm², and as such the differentiation process is confirmed by careful monitoring of the cells' polarization (Eves et al., 1992). On the other hand, fully differentiated neuroblastoma cells (SH-SY5Y) express

(FIGURE 2 | Scatter plot showing the correlation between cell lines and day of culture. Values are given as 1 × 10⁶ (mean ± SEM) for three independent biological determinations.)
TEER measurement is subject to a certain level of variation, especially among distinct groups in different studies. The variations observed are possibly contributed by a number of factors, including differences in actual junction tightness, temperature, cell handling technique during measurements, and differences in measuring equipment (e.g., chopstick or cup electrodes, impedance measurements). In addition, translating TEER into a functional estimate of tightness is difficult, because the composition of the tight junction complexes and the size of the compound of interest are the underlying factors that influence endothelial monolayer tightness (Srinivasan et al., 2015; Helms et al., 2016). In our study, we found that the human epithelial cell line (Caco-2) and the human brain barrier cell lines (SK-N-MC, SH-SY5Y, and PC-12) reached more than 300 Ω·cm² at 21 days. Compared with the review by Helms and colleagues, our finding is higher than the reported standard range of TEER values for blood-brain barrier cell cultures, which is around 40-200 Ω·cm² (Helms et al., 2016).
Compared with in vivo studies, in vitro cytotoxicity and cell viability assays have been shown to be advantageous in terms of speed, lower cost, and automation potential. For these reasons, studies conducted with human cells are deemed more appropriate than in vivo research (Chrzanowska et al., 1990). The MTT assay is a colorimetric assay that assesses cell metabolic activity. Before any dietary components are investigated for their effects on iron uptake in Caco-2 cells, it is prudent to evaluate cell viability under the incubation conditions; the reduction of tetrazolium salts, as part of the MTT assay, is now recognized as a safe and accurate test of cell viability (Yew et al., 2014). NADPH and NADH synthesized by dehydrogenase enzymes in metabolically active cells are responsible for this conversion (Xu et al., 2015). We found that the viability of the Caco-2, SK-N-MC, PC-12, and SH-SY5Y cell lines increased alongside the steady increase in the concentration of sialic acid in the media. At 1 × 10⁶ cells, neither sialic acid extract nor sialic acid standard showed any cytotoxic effect on cell viability up to a concentration of 60 μg/ml. However, above 80 μg/ml, there was a reduction in cell viability for all cell lines. This result is similar to our previously published finding on cell proliferation, which concluded that stimulating cells with different concentrations of sialic acid from EBN and sialic acid standard gives rise to a dose-dependent increase in cell viability (Aswir and Wan Nazaimon, 2011).
The cell viability of the Caco-2 and SH-SY5Y cell lines showed a remarkable difference when treated with sialic acid extract compared to the standard sialic acid. This discrepancy could be due to variation in the extraction process. Besides, many different types of standard sialic acids are commercially available, containing mixtures of the sialic acids found in humans and animals. The extraction of standard sialic acids also varies, using sulfuric acid, phosphoric acid, acetic acid, trifluoroacetic acid (TFA), or HCl. None of the sialic acid on the market is obtained from bird's nests. Since bird's nests naturally contain more bioactive compounds, including sialic acid, this may be one of the reasons why sialic acid obtained from bird's nest has a slightly higher absorption rate than the commercially available sialic acid. In addition, the increase in cell viability at higher concentrations might be influenced by mitogenic properties of the sialic acid extract from EBN that promote cell growth, as manifested in previous studies showing enhanced cell division in rabbit corneal keratocytes treated with EBN (Zainal Abidin et al., 2011; Yew et al., 2014). Although the sialic acid standard also increased cell viability, this effect was slightly lower than that of the sialic acid extract, possibly due to low activities associated with the varied extraction and treatment preparation processes mentioned previously (Yew et al., 2014). Moreover, the absorbance signal created depends on numerous parameters, including the concentration of MTT reagent, the incubation period, the number of viable cells, and their metabolic activities (Riss et al., 2016).
In general, sialic acid rarely exists free in nature and is usually available as a component of the oligosaccharide chains of mucins, glycoproteins, and glycolipids. In terms of its position, sialic acid usually occupies terminal, nonreducing positions that are highly exposed and functionally essential. These commonly refer to nonreducing positions of the oligosaccharide chains of complex carbohydrates on both outer and inner membrane surfaces, linked mainly to galactose, N-acetylgalactosamine, and other sialic acid moieties. The highest concentration of sialic acid is present in the mammalian central nervous system, where the majority is found in gangliosides (65%), followed by glycoproteins (32%), while the remainder exists as free sialic acid (Brunngraber et al., 1972).
To date, there are limited findings on the digestion and mechanisms involved for sialic acid compounds. It has been reported that rat intestinal walls are highly permeable to free sialic acids. In addition, it has been highlighted that sialidases of bacterial origin could cleave sialic acid residues from milk oligosaccharides in the colon; however, it is not evident whether sialic acid can be absorbed across the colonic mucosa (Wang and Brand-Miller, 2003). Hence, a cell model was established to understand and evaluate the mechanisms involved in sialic acid transport across cellular barriers. One relevant cell model is Caco-2 epithelial cells, which are used to study transport from the gastrointestinal (GI) lumen, where orally taken sialic acid arrives, into the blood (Wilson et al., 1990). In this study, we determined the concentration of sialic acid absorbed by the cells using UHPLC instead of radiolabeled isotopes. A monolayer of Caco-2 cells was applied as the model to demonstrate sialic acid uptake across the GI epithelium. The same model was applied for sialic acid uptake in the brain through monolayers of the SK-N-MC, SH-SY5Y, and PC-12 cell lines.
Based on the literature, the normal range of total sialic acid (TSA) levels in serum or plasma is 1.58-2.22 mmol/L, of which only about 0.5-3.0 μmol/L corresponds to the free form of sialic acid (Sillanaukee et al., 1999). Thus, we used 3 μmol/L of sialic acid extract to mimic the normal range of free sialic acid in the human body. Our study recorded sialic acid uptake by the cell lines. A number of transport mechanisms across the cytoplasmic membrane have evolved for sialic acid transport in bacteria; the tripartite ATP-independent periplasmic (TRAP), ATP-binding cassette (ABC), major facilitator superfamily (MFS), and sodium solute symporter (SSS) transporter families are among the commonly identified mechanisms (Vimr and Troy, 1985; Allen et al., 2005; Post et al., 2005; Severi et al., 2010; North et al., 2017; Wahlgren et al., 2018). Based on our findings, there is a high possibility of the presence of a sialic acid transporter in the monolayer cell lines that helps transport sialic acid extract across the membrane to the basal chamber. From the original concentration of 3 μmol/L, we found a higher concentration of sialic acid in the neuroblastoma (SK-N-MC, SH-SY5Y, and PC-12) cell lines than in the epithelial (Caco-2) cell line. Although we did not study the sialic acid transporter itself, this finding sheds some light on, and indirectly gives insight into, the reason behind the high sialic acid concentration in the brain compared to other parts of the body. A previous study by Bardor and colleagues (Bardor et al., 2005), which reported that human neuroblastoma cell lines could incorporate sialic acid with efficiency comparable to Caco-2 cells, also supports our findings. This suggests that sialic acid uptake can also occur in other human cells, but to varying degrees.
Eukaryotic cells contain several types of organelles, which may include the nucleus, mitochondria, chloroplasts, the endoplasmic reticulum, the Golgi apparatus, and lysosomes. Each of these organelles performs a specific function critical to the cell's survival. Since cell lines are also derived from eukaryotic cells, they have the same organelles. The study of mitochondria gained attention in the 1970s, when research focused on the mechanism of energy conservation and ATP synthesis. At the same time, the chemiosmotic theory of oxidative phosphorylation was established, which led to the Nobel Prize in Chemistry being awarded to Peter Mitchell in 1978 (Brand et al., 2013). Any part of the body can be affected by mitochondrial diseases, including the cells of the brain, nerves, muscles, kidneys, heart, liver, eyes, ears, and pancreas. Mitochondrial dysfunction arises upon failure of the mitochondria to synthesize ATP due to certain underlying conditions or diseases, which can lead to secondary mitochondrial dysfunction in conditions including, but not limited to, Alzheimer's disease, muscular dystrophy, diabetes, and cancer (Brand et al., 2013). In this study, JC-1 was used because it is more specific for measuring changes in mitochondrial membrane potential (MtMP) (De Biasi et al., 2015). It also responds consistently to the depolarization of the mitochondria compared with other cationic dyes, such as Rhodamine-123 and 3,3′-dihexyloxacarbocyanine iodide (DiOC6), which are toxic to mammalian cells (Shapiro, 1994). Carbonyl cyanide 3-chlorophenylhydrazone (CCCP) was used as a control to confirm that the JC-1 dye responds to membrane potential fluctuations, and also to determine the compensation percentage necessary to quantify 488-excited J-aggregates accurately (Perelman et al., 2012; Sivandzade et al., 2019). Throughout oxidative phosphorylation, the essential components required for energy storage are the MtMP derived from the proton pumps, specifically Complexes I, III, and IV (Zorova et al., 2018). Based on our findings, the Caco-2, PC-12, and SK-N-MC cell lines showed a high number of active mitochondria compared to the SH-SY5Y cell line in the control group (Table 3). The sialic acid extract produced fewer active mitochondria in Caco-2 cells than the sialic acid standard in response to the sialic acid stimulus. The effect was upregulated by almost 100% when sialic acid was added to the cells compared to the control.
A previous study showed that mitochondrial functionality can be restored by natural products such as herbal medicines, which help preserve dopaminergic neurons in Parkinson's disease (Kim et al., 2004); in this study, sialic acid caused depolarization of the mitochondria. The percentage of active mitochondria increased slightly when sialic acid standard was added to media containing Caco-2 cells compared with the control; however, the percentage was reduced significantly when sialic acid extract was added. Although these findings are interesting, the root cause of this phenomenon is unknown. Conversely, in the SK-N-MC cell line, the percentage of active mitochondria was slightly higher with sialic acid extract (94.80%) than with sialic acid standard (93.51%). This is because SK-N-MC is a human brain cell line and, according to Schauer (Schauer, 1982), neural cell membranes contain 20 times more sialic acid than other types of membrane, clearly indicating that sialic acid is of utmost importance in neuronal development. The results are similar to those of Rosenberg (Rosenberg, 1995), who showed that gangliosides in nervous tissue contain sialic acid as glycosphingolipids present at high concentration in the cerebral cortex of the human brain. Intriguingly, both sialic acid extract and standard were able to significantly increase the number of active mitochondria in SH-SY5Y cells compared with the control. With SH-SY5Y cells originally derived from a metastatic bone tumor biopsy, the number of active mitochondria in these cells appears to change effectively when sialic acid is added to the media. Although many types of cell lines are available to represent AD in vitro, the SH-SY5Y cell line is preferable if the study focuses on mitochondrial dysfunction. Apart from its suitability for AD, it is also applicable for studies focused on mitochondrial dysfunction in other diseases.
CONCLUSION
Our findings indicate that MtMP measurement can be used to study mitochondrial dysfunction in vitro. The increase observed in mitochondrial membrane potential in all cell lines subjected to sialic acid treatment indicated the presence of healthy cells, which is vital for studying the effect of sialic acid on mitochondrial dysfunction through MtMP measurement. Sialic acid uptake was also observed to occur to varying degrees across the cell lines. Based on our findings, among all the tested cells, SH-SY5Y was the most suitable cell line, especially for studies focused on the expression of active mitochondria.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
A new species of Hornstedtia and a new species record of Globba (Zingiberaceae) from Palawan, Philippines
During recent botanical exploration in the province of Palawan, Philippines, specimens were collected of a new species, Hornstedtia crispata Docot, and of a new species record for the Philippines, Globba francisci Ridl., both from the ginger family Zingiberaceae. The new species is described and illustrated here, along with an assessment of its conservation status.
Introduction
Palawan is an archipelagic province comprising approximately 1,780 islands and islets covering about 14,897 km², making it the largest province in the Philippines (Fernandez et al., 2002). About 48% of the province is covered with vegetation, including tropical lowland evergreen rainforest, lowland semi-deciduous (seasonal/monsoon) forest, montane forest, and forest-over-limestone (PCSDS, 2015). Within this remaining forest are unique species of terrestrial flora and fauna, including 1,700−3,500 angiosperms, of which 15−20% are endemic to the country (Sopsop & Buot, 2009).
There is strong evidence that Borneo and Palawan were once connected by a land bridge, based on bathymetric data (Woodruff, 2010) and animal distributions (Heaney, 1986). This hypothesis is further supported by plant distributions (e.g. Poulsen & Docot, 2018 for Etlingera sessilanthera R.M.Sm.) and through biogeographical studies using molecular data to explain colonisation of the Philippines from Borneo via Palawan (e.g. Brown & Guttman, 2002 for Rana), or the other way around (e.g. Hughes et al., 2015 for Begonia L. sect. Baryandra A.DC.). Despite these efforts, further studies are still needed to support this hypothesis.
Our knowledge of the Zingiberaceae of the Philippines has been updated as a result of recent botanical explorations focused on the collection of gingers (e.g. Naive, 2017; Ambida et al., 2018; Poulsen & Docot, 2018; Acma et al., 2019; Docot et al., 2019a, b). Ridley (1905) enumerated 19 species of Philippine Zingiberaceae, but 122 species are now known, of which 68% are endemic (Pelser et al., 2011 onwards; Zingiberaceae Resource Centre, 2019). Thirteen of these species occur in Palawan, five of which are endemic to the province (see Table 1 for a list of species in Palawan).

Hornstedtia Retz., as currently circumscribed, comprises c. 40 species distributed from the Himalayas to Queensland, with a centre of diversity in continental Southeast Asia and Malesia. Sixteen species are found in Borneo (Lamb et al., 2013; Zingiberaceae Resource Centre, 2019). The genus can be easily recognised by the involucre of tightly overlapping sterile bracts that enclose the flowers near or at the uppermost part of the corolla tube, and by the flat receptacle or condensed rachis (Smith, 1985). In the Philippines, Hornstedtia is represented by six species, of which five are endemic and one, Hornstedtia havilandii (K.Schum.) K.Schum., is also found in Borneo (Pelser et al., 2011 onwards).

Globba L. species are distributed from Sri Lanka to Australia, with a centre of distribution in monsoonal continental Southeast Asia (Williams et al., 2004). Members of this genus are easily distinguished by their small, delicate flowers with a small labellum and an elongated, arched stamen (Smith, 1988; Leong-Škorničková & Newman, 2015). A total of eight species of Globba are currently known in the Philippines, all of which are endemic except Globba marantina L., a species naturalised in tropical regions around the world (Pelser et al., 2011 onwards).
Botanical fieldwork focused on collecting gingers conducted by the authors in Palawan in 2017−2018 led to the collection of unidentified Globba and Hornstedtia species. After careful morphological comparison with known Philippine species and those occurring on neighbouring islands (e.g. Borneo and Sulawesi), the authors concluded that the Hornstedtia species is new to science, and the Globba species is Globba francisci Ridl., a species hitherto endemic to Borneo (Lamb et al., 2013). The new species is described and illustrated here along with an assessment of its conservation status. It is likely that there are still numerous species awaiting discovery in Palawan and many species already known from Borneo will eventually be discovered in Palawan or vice versa.
Materials and methods
Herbarium collections, including high-resolution images of specimens, from BISH, BM, BO, E, F, FEUH, G, GH, K, L, MO, NY, P, PNH, S, SING, U, US, USTH and Z, along with published morphological descriptions of the most similar species (e.g. Smith, 1988; Newman, 1995; Ye et al., 2018), especially those from the Philippines and neighbouring islands, were examined and compared to our recently collected specimens. Specimens seen only as a digital image available online are denoted with an asterisk (*). The herbarium acronyms follow Thiers (continuously updated). The description of the new species follows the style of recently published species of Hornstedtia (e.g. Leong-Škorničková et al., 2016; Ye et al., 2018) and the terminology of Beentje (2012).
The extent of occurrence (EOO) and area of occupancy (AOO) of the new species were calculated using the Geospatial Conservation Assessment Tool (GeoCAT) (Bachman et al., 2011: www.geocat.kew.org). These data were then compiled to assess its conservation status using the International Union for Conservation of Nature criteria (IUCN, 2016). Furthermore, the coordinates of the localities, based on information from Elmer (1915) and on herbarium labels, were collected and plotted in QGIS v. 2.18 (Quantum GIS Development Team, 2016) to create a distribution map (Fig. 4).
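As a rough illustration only (not GeoCAT itself), the AOO used under IUCN criterion B2 is estimated by snapping occurrence points to a 2 km grid and counting the occupied cells; the coordinates below are hypothetical projected eastings/northings in metres.

```python
# Hypothetical projected coordinates (metres) of occurrence records.
records_m = [(520100.0, 965300.0), (520900.0, 965700.0), (527400.0, 971900.0)]

CELL_M = 2000.0  # IUCN-recommended 2 km x 2 km grid cell


def aoo_km2(points):
    """Count occupied 2 km grid cells; each contributes 4 km^2 to the AOO."""
    cells = {(int(x // CELL_M), int(y // CELL_M)) for x, y in points}
    return len(cells) * (CELL_M / 1000.0) ** 2


print(f"AOO = {aoo_km2(records_m):.0f} km^2")  # first two points share a cell
```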
Distribution and habitat. Hornstedtia crispata is known only from Mount Mantalingajan, Brooke's Point, Palawan, where it inhabits slopes of primary forest at 1000−1300 m.
Phenology. Both flowering and fruiting individuals were observed in late September.
Etymology. The specific epithet refers to the crispate anther crest.
Provisional IUCN conservation assessment. Based on the IUCN Red List categories and criteria (IUCN, 2016), Hornstedtia crispata is categorised as Endangered, EN B2ab(iii). The area of occupancy is estimated to be less than 500 km² (total area of occupancy is c. 16 km²), and the species is known from only two locations on Mount Mantalingajan, which is fortunately a protected area. Although the observed populations are within a protected area, the species may decline significantly if mining activities and the conversion of forest into agricultural land (e.g. rice plantations) continue within the area.

Notes. No common names or uses were reported by our Palaw'an tribe local guides. The most morphologically similar species is Hornstedtia sanhan M.F.Newman of Vietnam (west of Palawan). Although of similar habit (leafy shoots of both species can reach at least 4 m in length), the lamina of Hornstedtia crispata has a 15−25 mm long petiole while that of H. sanhan is sessile. Moreover, Hornstedtia crispata has a significantly wider lamina (10−16 cm) than H. sanhan (5−7 cm), and the base and apex are rounded and acute rather than narrowly cuneate and acuminate as in H. sanhan (see Table 2 for the full morphological differences between H. crispata and H. sanhan). The main differences between the two species are in the floral morphology. Both species have bright red, ovate sterile bracts, but the hairs in the lower half are white in Hornstedtia crispata and rufous (red) in H. sanhan. The most obvious similarity between the two species is the petaloid and crispate margin of the labellum (Fig. 1D & 2J). This labellum morphology can also be observed in Hornstedtia gracilis R.M.Sm. of Sabah, Borneo (south of Palawan), which is distinct in its slender peduncle that can reach up to 1 m long (vs 2−4 cm long in H. crispata). It is also found in Hornstedtia hainanensis T.L.Wu & S.J.Chen of Hainan, China (northwest of Palawan), which is distinct in its pink and bifid labellum (vs white and entire in H. crispata) and the absence of a bracteole and anther crest (vs present in H. crispata). In Hornstedtia gracilis, H. hainanensis and H. sanhan, however, only ½ or less than ¾ of the total length of the labellum has a petaloid and crispate margin, in contrast to H. crispata, in which it is present throughout its length. As a result, the labellum of Hornstedtia gracilis, H. hainanensis and H. sanhan appears spathulate rather than oblong as in H. crispata. Hornstedtia crispata also has a petaloid and crispate anther crest (Fig. 1E & 2K), which is entirely absent in H. hainanensis and H. sanhan and present in H. gracilis as small lobes on each theca. Although the fruits are fully embedded in a juicy aril, the local people of the Palaw'an tribe do not consume them, unlike the fruits of Hornstedtia hainanensis and H. sanhan, which are gathered as edible and medicinal fruits in the regions where they occur (Newman, 1995; Ye et al., 2018).

Notes. Globba francisci (Fig. 3A & C) is usually confused with G. aurea Elmer (Fig. 3B & D) in the forests of Palawan because both species have yellow flowers. Both are distributed from central to southern Palawan (Fig. 4) and usually grow together in wet rock crevices beside streams and ravines up to 1000 m. Unfortunately, the type of Globba aurea (A.D.E.
Elmer 13243) cannot be located in any herbarium, but our collections perfectly match the original description of the species by Elmer (1915), including the purplish-spotted sheath and the petaloid lateral staminodes.
Sex Talk: Designing for Sexual Health with Adolescents
In this paper, we describe a user-centred design process in which we engaged with 58 adolescents over an 18-month period to design and evaluate a multiplayer mobile game that prompts peer-led interactions around sex and sexuality. Engagement with our design process, and response to our game, has been enthusiastic, highlighting the rich opportunities for HCI to contribute constructively to sexual health in adolescents. Based on our experiences, we discuss three lessons learnt: lightweight digital approaches can be extremely successful at facilitating talk among young people about sex; sharing control of the conversation between all stakeholders is a fair and achievable approach; and even problematic interactions can be opportunities to talk about sex.
INTRODUCTION
Adolescents have emerged as a priority in public health in recent years. In a recent commission on adolescent health and wellbeing, The Lancet [26] reports that although adolescence is often considered the healthiest period of individuals' lives, its significance in global health is increasing. Partly, this is because of relative decreases in this population's overall health and wellbeing; moreover, health in this population is a good predictor of health trajectories across the life course [12].
Digital technologies are repeatedly highlighted as holding some of the greatest possibilities for improving health outcomes for adolescents [26]. Mobile content has been identified as one of the primary sources through which young people access health information [22] and, as such, the novel communication and networking opportunities presented by the digital, particularly in increasing health literacy, have been emphasized as an under-considered area for health promotion in adolescents [16].
Sexual and reproductive health is a key priority for this population, particularly given changing patterns of risk with regard to sexually transmitted infections and unplanned pregnancy [4]. Changes in sociocultural, political and legal contexts have been shown to play a key role in these new vulnerabilities [26], and here digital technologies have also played no small role. Popular representations in new social media are argued to be changing young people's attitudes around sex and sexuality, particularly around casual sex. For example, 'new' public health problems such as young people taking and sending sexually explicit photographs of themselves, or 'sexting' as it has been termed, have been described by some as a new public health 'epidemic' amongst young people [33].
The provision of sexual health information and sex education is seen to come from two countering perspectives. On the one hand, there is a perceived need for access to 'correct' or 'trustworthy' authoritative information, often with the overriding objective of reducing sexual activity amongst young people (sometimes referred to as a 'restrictive' approach to sexual health and sex education). On the other, a 'permissive' approach argues that we should acknowledge young people as sexual beings and put their needs and perspectives at the fore [1,13,14]. In grappling with this tension, our paper details our digital response to adolescent sexual and reproductive health: a game we designed in conjunction with young people to promote 'healthy' discussions around sex and sexuality. We detail how, whilst proving popular with our participants, use of the game in youth group settings reproduced many of the tensions characterising this space. We pose these as lessons learnt for IDC, and HCI more broadly, in responding meaningfully to the complex and multifaceted design space of sexual health in adolescents. We propose the benefits of lightweight digital approaches for face-to-face interaction, suggest how all stakeholders can control the level of these communications, and suggest that when problematic interactions arose, these presented opportunities for our agenda of promoting discussion about sex and sexuality.
Sexual and Reproductive Health in Adolescents
Adolescents are defined by the WHO as individuals between 10 and 24 years old. This population poses an important yet challenging setting for sexual and reproductive health. The age bracket covers a key transitionary period across the life course, particularly from a legal perspective, spanning from official 'childhood' to 'responsible adulthood'. Furthermore, perspectives from young people within this bracket can vary drastically, with adolescents often maturing sexually at very different times, influenced largely by socioeconomic factors [26]. This has resulted in the majority of sexual health interventions focusing on the biological 'facts' of reproduction and the risks and dangers of unprotected sex, often with a focus on abstinence [13].
This 'restrictive' approach has been widely denounced by scholars working in critical sexuality [1]. It has been argued that a focus on abstinence reproduces unhelpful constructions of male and female sexuality, for example the view that men are the active, desiring sexual agents in sexual relationships (see 'the male sex drive discourse' [17]), which in turn means that (heterosexual) women are required to protect themselves against men's sexual desire. There is also evidence to suggest that a focus on the risks and dangers surrounding sex, such as unwanted pregnancies and STDs, does not reduce sexual activity amongst young people, only that it discourages contraception use when young people do come to have sex [21]. Moreover, by focusing on the mechanics of 'sexual intercourse' we privilege a 'heteronormative' model of sex education, promoting heterosexual sex as the only legitimate form of sexuality [12]. Taylor [28] has therefore argued that by rejecting the idea of young people as sexual beings, we are harming young people's overall sexual health. A preoccupation with the physical 'act' of sex, often reduced to the insertion of a penis into a vagina [24], also prevents a focus on the matters surrounding sex, such as relationships and intimacy [30].
Sex is not a Natural Act
Leonore Tiefer has made the influential claim that, contrary to how sex is culturally constructed, 'sex is not a natural act' [30]. By this she argued against the typical rhetoric of sex as innate, and against the idea that there is a standardized or inbuilt model of sexual response. Instead, she argues, there is very clear evidence that sexual behavior varies hugely from person to person. Far from being an inbuilt biological urge, cross-cultural studies have shown us that sex is fundamentally shaped by its social context [21].
To these ends, Tiefer argues that the construction of sex as an inbuilt biological entity has resulted in most cultures simply not talking to young people about sex, with a "history of silence and embarrassment" based on the assumption that nature will simply 'take its course'. In [31], Walker suggests that young people in particular desire to talk about sex with their elders, yet often find they are unable to have open and frank conversations about sex and sexuality, due to all parties, be it parents, schools or siblings, 'offsetting' responsibility for these conversations to others. She argues that the result is that needed conversations simply do not happen, with two-fold consequences. Firstly, she argues that this prevents people from having a fulfilled sex life: the literature surrounding couples' sexual difficulties suggests that the major obstacle in couples' sex lives is simply being able to talk about sex, a topic we have been taught is 'embarrassing' or 'dirty' from an early age [20]. More pressingly, she argues, a consequence of embarrassment is people being exploited: by classing sex as a topic we don't talk about, conversations about consent, sexual violence and exploitation are pushed to the margins.
In contrast, cultures where sex is talked about more openly boast better overall sexual health. The Netherlands has some of the lowest rates of teen pregnancy and STIs, with research also suggesting that young people there are more likely to delay sexual activity than those in the US or UK. A cross-cultural study of UK and Dutch sex education materials showed that the Dutch model teaches about sex at a much earlier stage than in the UK [21], and that it also covers the pleasurable aspects of sex. In response, Fine and McLelland [14] advocate 'a discourse of desire' in relation to young people and sexuality, acknowledging aspects of sexual pleasure, recognizing young people (particularly young women) as sexual agents in their own right, and being inclusive of sexual minorities. To these ends, we suggest that permissive, positive discussion for young people around sex and sexuality is an important goal for improving overall sexual health and, as will now be discussed, an opportunity for HCI.
Sexual Interactions and HCI
Digital technology provides clear opportunities for having conversations about sex. Previous research in HCI has examined the role that digital self-presentation plays in interaction around intimacy and sexuality. This has included how romantic relationships can be supported by technology, from their initiation [23] to their sustainment over time and distance [25]. The anonymity technology offers has been explored by [19] in investigating how explicit talk about 'making love' was expressed in various ways through a dedicated anonymous posting website, whilst [2] has investigated the (often) sexual content on anonymous Facebook 'Confession Boards' [5]. Research examining location-based social networks such as Grindr and Tinder argues that self-presentation and anonymity become complicated in these online spaces [7], with [6] arguing that the prominence of these apps should now lead us to consider sex as a significant motivator in itself for interaction with technology. An examination of these existing systems indicates how technology can create new or distinct ways for people to have interactions around sex, yet despite this HCI has made little headway in scoping a design space or response to these identified opportunities.
Sex Education and Technology
"Serious Games" have been one of the few HCI responses to sex education [29], using a computer game, some 20 years ago, to "increase [young people's] skill and selfefficiency", while more recently [3] using a gameshow format to reduce 'risk' of sexual coercion. Design concepts from social gaming may also have good application to the context of young people's sexuality; for example, 'play' has been used extensively in therapy settings to improve wellbeing, through 'playing out' concerns and anxieties [9]. Yet an objective of 'playfulness' has also become a more common focus for HCI research in recent years [8]. Specifically, humour and play have been evidenced as promising strategies for 'taboo' design [10], with Almeida et al. [2] arguing that humour provides an effective tool for designers wishing to diminish social awkwardness around sensitive areas.
This positioning underpinned the perspective of our work. We wished to respond to the design space of young people's sexual health with a playful and permissive approach, orientated around young people's perspectives. Moreover, we wished to explore the opportunities of digital play and humour in this context, seeking to encourage 'positive' interactions around the topic.
TALKING ABOUT SEX
We therefore identified 'talking about sex' as an agenda for inclusive, permissive sex education, and identified games and play as a suitable mode of response. The development of our design concept was a collaboration between the authors. The first author is a sex and sexuality researcher with experience in working with young people. The second author is a games designer with interests in designing for improvised play and in card-based playful interactions. The third author is an interaction designer who focuses on designing for digital health and wellbeing. The work was informed by the critical literature around sex and sexuality, and by 12 design workshops in which we workshopped and tested several playful techniques for promoting interaction about sex and sexuality with young people.
As previously mentioned, the WHO defines adolescents as individuals between 10 and 24 years old. Since research around this topic with such a diverse age group would be a considerable ethical challenge, the decision was made, in the first instance, to work alongside local-authority-led youth groups that work with 13-19-year-olds. Although this is a wide age bracket, all the young people in these groups knew each other and regularly came together to talk about a range of social issues, including sexual health. These groups were also organised age-appropriately, with only young people of similar ages participating in the same group as one another. This made them an ideal starting point for this design work, as we sought to develop a design which could be extended to adolescents' discussions about sex and sexuality across a broader context.
Our engagements with the youth groups started with the youth group leaders. We had several meetings where we discussed the nature of their engagements with adolescents, and how they ran their sessions. We then conducted 3 design workshops with 4 youth groups, comprising adolescents from both urban and rural environments and from a range of socio-economic backgrounds. Altogether we engaged 21 adolescents in these sessions.
Building on the existing activities used by youth workers, we developed a series of design activities to trial with young people, each designed to promote discussions about sex and sexuality. These were (1) a body mapping activity where young people were asked to plot their ideas of sex and sexuality onto inflatable mannequins, (2) an activity where young people were asked to timeline when they learnt about sex and sexuality and where from, and (3) an activity using Lego where participants were asked to design some sexual health interventions. We analysed these workshops broadly for the types of interactions found amongst our participants, and identified two sensitising concepts: inclusivity and digital playfulness.
Inclusivity
The workshops covered a broad diversity of topics. Although our sessions were orientated around generating conversation about sex and sexuality, these engagements led to topics, as directed by participants, around body image, appearance concerns and mental health, alongside many other areas. Whilst a single design response could not address all these complex matters, this breadth did prompt us to extend our conception of sexuality for young people. While some young people presented themselves as experienced sexual beings, e.g. "I know what I'm doing!", others presented themselves as uninterested in sex, e.g. "Still now, I don't find sex appealing at all". Ideas of sexuality and intimacy often presented themselves in subtler ways, such as talking about the role of friendships or in using social media.
Digital Playfulness
The use of digital technology, particularly social media, was a prominent part of these youth group settings. This was illustrated most strikingly in a visit to one group where first arrivals immediately logged into Facebook on the available computers. Mobile phone use was a prominent part of these workshops, with participants regularly taking pictures on their phones, messaging friends, playing music from their phones and using social media. Although occasionally disruptive, e.g. "Youth worker: Come on, get with the programme!", we were particularly interested in how digital technology organically introduced opportunities for social play in these community settings, for example sharing pictures of artefacts they had produced in the workshop on social media. This reinforced our assertion that mobile technologies were a particularly suitable medium in which to focus our prototype.
The Prototype
Building on these insights from the workshops, in conjunction with the dialogue in the critical sexuality literature, our prototype was the result of numerous design sessions and discussions, paper-based prototyping and bodystorming. 'Talk About Sex' is a multiplayer game, developed for iOS, designed initially for young people to play together. Using a peer-to-peer network over Wi-Fi or Bluetooth on players' devices, the game begins by instructing all players to turn their phones face down. After a three-second pause, one player's phone vibrates and makes a short sound, indicating that it is their turn. Once they turn their phone face up, it presents the player with a task (Figure 1). To progress, all players must return their phones face down, after which the process is repeated with the next player. This continues until all tasks have been played through. Figure 2 shows screenshots of how these tasks were presented to players. Further details about our game design can be found in [32].
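The turn-taking protocol amounts to a simple loop: pause, signal one device, reveal a task, and wait until all phones are face down again. Purely as an illustration, the Python sketch below mimics that loop; the console-based signalling and all names are hypothetical stand-ins, as the actual game was built for iOS over a peer-to-peer network.

    import random
    import time

    # Illustrative sketch of the 'pause and reveal' turn-taking loop.
    # The task texts are examples quoted from the paper; everything else
    # (function names, console signalling) is a hypothetical stand-in.
    TASKS = [
        "Blow a kiss to another player",              # task 2
        "Take a selfie on someone else's phone",      # task 21
        "Shine the light to illuminate a body part",  # task 23
    ]

    def play_round(players, tasks, pause_seconds=3):
        """Deal each task to one player at a time, pausing between turns."""
        for task in tasks:
            time.sleep(pause_seconds)          # all phones are face down
            player = random.choice(players)    # one device signals its owner
            print(f"*buzz* -> {player}: {task}")
            input("Press Enter once all phones are face down again... ")

    if __name__ == "__main__":
        play_round(["P1", "P2", "P3"], TASKS)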
The set of tasks for our initial prototype was devised by the authors, informed by the above findings of inclusivity and playfulness. Due to the diversity of perspectives presented to us in the workshops, it was important that our tasks retained a sense of inclusivity. None of the tasks explicitly referred to sex, instead using more ambiguous terms such as 'moment' (tasks 4, 7) or 'tickly bits' (16). Additionally, it was important that our tasks had an element of playfulness, particularly through digital play, such as using the phone's camera or drawing functionality, but also through tasks which explicitly encouraged head up [27] interactions, playing with the social setting of the game.
GAME PLAYING SESSIONS
We conducted two phases of game playing sessions with young people. In phase 1, we presented young people with the game with tasks generated by us, to explore broadly how the game was appropriated and played in youth group settings. We then conducted phase 2 where, after playing through the tasks as devised by us and describing our rationale to the young people, we invited participant-led content for the game's tasks.
We have played 'Talk about Sex' with a total of 46 young people across these two phases of game playing sessions. Four groups participated in the first phase (n=24) and three in the second phase (n=22). These sessions have been conducted in the three different locations where we held the initial workshops, and in one additional setting. All young people participating in the design workshops were invited to this gameplay phase, but most players were new to the project. Each of the sessions was audio recorded, and we made field notes around the interactional qualities of the gameplay. After play, all groups were also asked about their experiences of playing the game. This data was then analysed using thematic analysis, which we organised into three 'themes' around the most pertinent interactional elements in these settings: 'physical' play, arguably 'problematic' play, and 'exclusion' & a lack of direction.
Overview
Overall, the game was met with enthusiasm by most participants. At the young people's request, the game was often played multiple times, starting with a different player each turn: "I really want to play the game again!" / "I'd like to do it again". On one occasion, the youth workers had trouble getting young people to stop playing the game and move on to the group discussion: "Come on, put it down now!" When we asked how and where this game could be played, one participant told us: "I could imagine all our friendship group at school playing this game", and another: "This would be brilliant cos we have like free periods where we basically should be doing work but instead we get games on our phones that everyone can play". We were particularly pleased by how the young people framed this game as 'not' education, i.e. they "should be doing work", as it was our aim to distance our prototype from the traditional, restrictive discourse of sex education. Youth workers were also positive about the game: "If you could get them to sit that long and do that it says a lot about the resource".
At the same time, some, typically older, participants (16+) were more cautious of the game, responding less enthusiastically than younger players. For instance, in post-game discussions a 17-year-old player commented: "I just think that I wouldn't play it, because (.) I just think I'm a bit old". Similarly, in almost all groups, at least one player suggested that the game "wasn't really talking about sex" or could go further in how 'extreme' it was. Furthermore, some 16+ young people and youth workers appeared somewhat unsure about the 'purpose' of the game, suggesting it should focus more directly on the delivery of information. Here, we consider how the game was experienced by both enthusiastic participants and those more cautious, before indicating some of the challenges and opportunities encountered in our second phase of game playing, where we invited participant-led game tasks.
Phase 1: Gameplay
There was almost always a palpable sense of curiosity as the game began, and in the livelier groups participants were often excitable, turning their phones over prematurely to see what might have happened. When the first device indicated a player's turn, there was often a tentative negotiation over whose device had buzzed. Play in all groups began hesitantly, as the players got familiar with the protocol of turning over their phone, completing the task, and then placing it back face down. Players showed signs of anticipation before their go, often showing visible signs of apprehension before turning over their phone, e.g. "oh shit!" / "I'm scared of what it's going to say!" Participants also often non-verbally enacted shock, embarrassment, surprise or confusion as the tasks were revealed. As the game then progressed onto the second and third turns, play became noticeably more relaxed.
Physical Play
Younger groups (under 16) typically displayed enthusiasm when joining in activities together. Although many tasks were individual, often all players wanted to play the turn, such as all shouting names for body parts (task 5), and players often took full opportunity to play in physical space. For instance, when the game asked for all but two players to leave the room (task 7), in one lively group of young men a participant shouted: "Right, everyone out!", while other members of the group attempted playfully to hide under the table. However, in a group of older teenagers this task was met with a 'sigh' ("Do we really have to leave the room?"), and the group collectively changed this task to "two people leave the room" as some players didn't want to get up.
As such, it was the more physical activities such as swaying, dancing and singing that were most often 'passed'. While some participants took to these requests enthusiastically, all joining in singing a popular pop song for example, participants did sometimes skip these or completed them warily, reluctantly humming a nondescript tune as a 'sexy theme tune' for example. On one occasion, a young man repeatedly turned the device face up and face down many times (tasks 12 through 16) to find a task that "wasn't rubbish". Our attempts to utilize digital play also had mixed success. Whilst almost all groups were happy and excited to "take a selfie on someone else's phone" (task 21), the task to "mark on Google Maps where you've had a 'moment'" (task 4) noticeably held up the rhythm of the gameplay, as participants navigated to the app and took time in finding a location. For other tasks, navigating outside the app was dismissed as "pointless", for instance in the task 'Use a Google image search to find a picture of a romantic location' (task 9), where on two occasions participants changed the task themselves to "just name a place" as that was "easier".
Perhaps one of the most successful tasks was task 2, blow a kiss to another player, which most groups played as a humorous display of affection, but which also introduced a surprising dialogue around sexuality between a young man, who didn't want to complete the task, and the youth worker: "Just cos you're blowing a kiss to someone doesn't mean it always has to be a sexual thing". It was therefore often the simplest tasks which led to the most successful gameplay. Whilst more complex activities using maps (task 4) and image searches (task 9) stalled gameplay, the 'selfie' task (task 21) and 'take a picture of a body part' (task 5) were typically more successful, where the process was less involved.
'Problematic' Play
Our invitations for digital play also led to some difficult scenarios, most prominent in our evaluations with groups of young men. In swapping phones with another player (task 19), one participant repeatedly entered the wrong passcode into his friend's phone, temporarily locking the friend out of his own device and resulting in mild upset. In another group of young men, the task to 'take a photo of a body part' (task 5) was met with the suggestion to "take a photo of your penis brah!" As the young men got to their feet, suggesting they may do something inappropriate, slight chaos ensued as the youth workers intervened: "Seriously, not your penis" / "If the police caught you with that".
The conversations and interactions that the game initiated were broad and far ranging, from the sexually explicit to discussions that avoided the topic of sex altogether. For instance, responses to tasks where players were requested to share a 'moment' ranged from the noncommittal, e.g. "There we go", to the very sexually upfront: "The first anal in my life". We witnessed some conversations about participants' first kiss: "I remember mine, it was quite embarrassing", or relationships: "Was that your boyfriend? How long have you been together?" Yet overall, conversations did not extend far beyond the tasks set. Moreover, in some instances our 'ambiguity' resulted in participants going somewhat 'off topic'. The 'pause' we inserted into the game (task 14), intended to prompt reflection on the gameplay, often resulted in conversations around other things: "I'm really tired" / "I'm getting my nails done tomorrow". Likewise, when one participant commented that they "don't have a moment" in response to task 4, their conversations occasionally forayed into the obscure: "P1: Just make up one! P2: Right, there was a donkey, it turned into a unicorn before my very eyes".
'Exclusion' and a lack of direction
Some participants had difficulty interpreting some of the language in the game, such as the word 'poignant' (task 7), and although the term 'moment' was intended to "mean anything", as one participant acknowledged, the lack of direction meant some thought the task didn't apply to them: "I don't have one" / "I haven't done anything". Some participants expressed frustration at this lack of direction, with one commenting that task 20 "didn't tell me what kind of message to record". The game was therefore sometimes accused of "not talking about sex" or of not telling players "what to do".
Simultaneously, however, some expressed our game had gone too far. One (older) participant refused to read out the 'bad sex' paragraph (task 3): "Oh gosh! Oh no, I don't want to read it", commenting that the text was "So dirty, how are 14 year olds going to cope with this?" On another occasion, a participant exclaimed "I am not doing that! Take a photo of a body part, as if!" Although another member of the group reflected to the participant the ambiguity of the task: "It could be any body part!", this vagueness, particularly surrounding the taking of photographs, was clearly less than ideal, as our 'problematic' example illustrated earlier.
Additionally, the seemingly innocuous task "Write the name of your first kiss" implies some level of experience, and indeed sexuality, which was flagged as potentially difficult by some participants: "[our friend] hasn't had her first kiss yet, it's quite a big deal for her".
Summary
Reflecting on these initial play sessions, the premise of the game appeared to have promising design elements which were interesting and led to good gameplay. Asking players to interact with each other's phones during gameplay drew on broad ideas of intimacy and trust, flipping the device before revealing tasks added anticipation and momentum, and tasks around physical play were generally received positively, particularly by younger players. Yet there were also several problems with the tasks we drew up. Activities had mixed rates of success when they were perceived to have higher 'barriers to entry', whilst others appeared to legitimize arguably problematic behaviour. Curiously, in taking an indirect approach with the hope that participants would mediate these interactions at a pace comfortable to them, we had managed to be simultaneously too tame, with tasks "not talking about sex" or "not telling me what to do", yet also too extreme: "She hasn't had her first kiss yet" / "That's so dirty, how are 14 year olds going to cope with this?" Based on these findings, we felt we could involve our participants more through a second round of gameplay and design sessions.
A further challenge in these play sessions was the unpredictable nature of youth groups, meaning the environment was less than ideal for a multi-device networked game, particularly one played on young people's own phones. Young people often joined and left the game haphazardly, meaning the ad-hoc networking was disturbed and the flow of the game interrupted. Additionally, we had underestimated just how much young people relied on their phones. Notifications came through young people's devices at an often-rapid rate, causing further disruptions, and young people indicated that even ten minutes was a long time to go without access to their phone's functionality. Due to these factors, although gameplay always began on individual devices, it often continued on a single device shared by players, who used their own devices to complete tasks. Therefore, in this second round of evaluations, we decided to present the game as a single-device experience.
Phase 2: Participant Generated Content
This second set of play sessions followed a process where young people played through our set of tasks on a tablet or phone, and used their own devices to complete tasks. After reflecting on the gameplay, the group was then asked for their suggestions. We explained our rationale for creating tasks, that we had tried to make them playful, inclusive and make use of the digital affordances of the mobile phone, but we did not dictate these as conditions for their tasks. We asked participants to imagine playing the game with their friends, and to write down on cards either specific tasks they thought the game could play through, or more general topics/areas they thought the game should address. Where the group was big enough, we split the group in two so that each half could play through the other half's suggestions. The workshops were audio recorded and observational notes were taken. Participants' suggestions resulted in 67 participant-driven tasks, which were collated and analysed thematically into "Personal sharing", "Playful tasks" and "Health orientated tasks", discussed below.
Play on a single device
In comparison to the networked gameplay on individuals' own mobile phones, we saw the single-device version leading to more flexible gameplay, meaning everyone, no matter what their make of device, could use their own phone to complete tasks. It also meant young people could spend more time on them, such as recording a message for a friend, whilst gameplay continued centrally. This led to faster, uninterrupted gameplay, and tasks revealed in the centre were seen by everyone, meaning completion of them was more collective. This also resulted in turn-taking negotiation by players, which was typically policed rigidly, and led to instances of players 'trading' tasks: "I did the last one, now it's your go!" In this more 'public' version of the game, young people also often insisted that the youth workers joined in as well.
Playful tasks
A minority of players' suggestions (11) had a 'playful' element. Some were like our task, 'Blow a kiss to another player' (task 2): "Say I love you to a friend" / "Say 'you are beautiful' to someone", whilst others introduced a guessing element: "Get a friend to guess your crush". There were also some suggestions for 'physical' tasks, particularly around movement, such as "Do Gangnam style" / "'Dab' [dance] with your friends". Yet other tasks did start to verge on something that might be inappropriate: "Take off one piece of clothing" / "touch a body part of your choice". The latter task was commented on specifically by a youth worker as something he couldn't do in this setting: "It would have undermined my safeguarding role in the group". Only three tasks suggested use of mobile phones: "Text from another player's phone", "[give a player your] unlocked phone" and "let someone send one message".
Personal sharing tasks
More tasks (21) requested a level of personal sharing. Young people's suggestions were generally more upfront than our 'set'. The use of our word 'moment' was interpreted more specifically, to "share an embarrassing moment" or "school moment", and was also extended to "tell the group a once in a life experience you have had". Requests to share also became more specific, to "share something you regret" or to "tell a story about your first kiss", whilst others became darker, e.g. "Who do you hate?" The 'act' of sex was focused on more specifically by some, typically older, members, such as more vaguely suggesting tasks around "your ideal first time", or perceptions around "first time - hurt?" Other tasks started to verge into close-ended 'truth or dare' territory, again especially around the act of sex: "How many times have you had sex?", "have you had sex while drunk?", "what age did you 'lose it'?", "Name one famous person you would have sex with". The topic of 'talking about sex' was also touched on in some tasks, rather than giving more specific suggestions for activities or conversations: "Do you talk about sex? If so, who with?". The tasks which prompted some sense of personal sharing were perhaps the most successful when played through, prompting several conversations around celebrity crushes and regrettable experiences, e.g.: "Oh man I've got so many!" / "You've got to name yours now."
Health orientated tasks
The tasks suggested by youth workers, and some 16+ young people were largely 'health' orientated, or around the provision of information. One youth worker in particular, "Joel" (pseudonym) was seemingly unsatisfied with the game simply being a playful experience, asking, "What is it that the game really supposed to do? [...] I think it should be about misconceptions about sex". Countering our 'playful' approach, Joel suggested two knowledge based tasks: "Explain the C-Card scheme [UK condom distribution scheme]", and "What is the legal age of consent in the UK?". He also suggested that the game could instead be a 'fact or fiction' game around specific statements, a suggestion given by a few youth workers and health professionals in response to seeing our game. This rhetoric was supported by a minority of young people, typically older teenagers, who also suggested some knowledge testing tasks such as "What does STI stand for?", and questions more focused around morality such as "what do you think of teen pregnancy?" Some young people expressed dissatisfaction at these 'health orientated tasks', particularly Joel's suggestions, with one young person suggesting: "That sounds boring!" in response. This dialogue mirrors debates in sex education and, as we will discuss, the game embodied such tensions around what role technology for young people's sexuality 'should' have.
Playing Young Person-Led Tasks
In most workshops, numbers were sufficient to enable us to split the group in half so that young people could play through each other's tasks. Tasks were placed in the centre and participants played through these as if they were a card game. Participants typically took to playing each other's tasks with considerable interest. Sometimes the tasks were questioned by each half of the group, for example: "Can I just ask, boys, who wrote 'remove one item of clothing'?" However, some of the older players who were more hesitant when playing through our initial set of activities took to these 'user-centred' tasks more enthusiastically. This was particularly evident with those that required a level of personal sharing, and in some of the 'moral' questions such as 'What do you think of teen pregnancy?': "I have some serious opinions on that, don't get me started!"
DISCUSSION
We present our process of user-centred design as a successful enquiry between young people, researchers and youth workers on the topic of young people and sexuality. 'Talk About Sex' has resulted in enthusiastic, lively and fun gameplay, particularly from younger participants, with tasks providing a focus for interactions. Nevertheless, we have also encountered ongoing challenges working within this space. Gameplay sessions did at times have a lack of focus, while some tasks led to exclusion and legitimized 'problematic' behaviour. Moreover, the presence of technology in these settings epitomized many of the tensions around young people's sexualities, and the role technology is perceived to have in them. We present these as lessons learnt, suggesting adolescents' sexuality as a fruitful, if challenging, design space for HCI.
Lesson Learnt: Lightweight Digital Play
'Talk About Sex' began as paper-based prototyping, and participant-generated tasks were played through as a card game in the latter stages of our process. Our game is therefore in some ways like analogue-based discussion games, such as 'spin the bottle' or 'truth or dare'. Despite this, we argue there are many benefits to our game as a digital experience. Not insignificantly, the very act of delivering a game through a piece of technology provided a focus for some participants, with one youth worker commenting it said "a lot about the resource" that it "could get them to sit that long", while another jokingly remarked that she'll "do all my sessions on an iPad now!" Moreover, the 'pause and reveal' mechanism provided rhythmic gameplay and a sense of anticipation before each task, which was lacking in our early prototyping sessions. Indeed, one of our early testers commented "it's way better on phones than cards!" This was particularly evident when the game was played uninterrupted on a single device, where the timing of the game provided a fast, but clear, directive pace to these interactions. With the game audibly indicating a player's turn, all players were given the opportunity to share, as directed through the device.
The digital medium also gave novel opportunities for play, such as introducing a timed 'pause' in the middle of the game (task 14), and requests to use the digital functionality of the phone. Tasks using the camera (task 21), messages (task 24), maps (task 4) and image search (task 9) all gave opportunity to explore how uses of mobile technologies intersect with intimacy. Yet we found that digital play was received most successfully when simple and easy to understand. For example, 'take a selfie on someone else's phone' (task 21), or 'shine the light to illuminate a body part' (task 23) were more successful in comparison to tasks which required more involvement, such as 'mark on Google Maps where you've had a moment' (task 4). It was also notable that one of our most successful tasks was the non-digital 'blow a kiss to another player'. Therefore, in a game where we were seeking to encourage interaction between players, it was much more important to seek broader ways of promoting 'head up' [26] interaction through the device, rather than focus on more granular digital interactions.
Despite this, a card game also possesses qualities that we are now looking to explore in a further iteration of this game. The analogue nature of physical games means they are easily reproducible by individuals wishing to use the ideas in their own practice, and traditional games come with established rules, which make external facilitation less necessary. In our deployments, the researchers and youth workers very much facilitated play of this game, whereas with a card game, play sessions facilitated by young people themselves may be more easily enabled. The tangible quality of cards in a game is also preferred by some. The interrelation of traditional card games and games with digital elements, and how to utilise the affordances of both effectively, is an aspect we are exploring in further work.
Lesson Learnt: Share Control of the Conversation
We took two different approaches to involving young people in our process of user-centred design, spanning the different levels of involvement Druin highlights in 'The Role of Children in the Design of Technology' [11]. In the first stage of our research we treated young people as 'research informants', whereas in the second stage of our research we treated participants more as 'design partners' by inviting their suggestions for tasks. We found that in each case these approaches had individual benefits and drawbacks.
The initial set of tasks we developed for this game were based around insights interpreted from our initial engagements and our interests in digital play. This resulted in several tasks that were successful, e.g. 'take a selfie on someone else's phone' (task 21), but also tasks that were too fiddly, e.g. 'mark on Google Maps where you've had a moment' (task 4), and tasks that had problematic elements, e.g. 'take a photo of a body part' (task 5). Different tasks also had varying degrees of success with different participant groups. While one younger group took enthusiastically to our request for all but two to leave the room (task 7), some older groups were more cautious, i.e. "do we really have to leave the room?" Moreover, in all the groups, one or more players commented that our tasks "weren't really talking about sex", and some of our tasks were interpreted as exclusive: "I don't have a moment" / "[our friend] hasn't had her first kiss yet".
Taking a participant-led approach to the devising of tasks avoided some of these problems. In general, the tasks that participants wrote were more specific and had an element of personal sharing; for example, "share something you regret" and "celebrity crush" resulted in some of the liveliest conversations. Equally, however, some of these tasks had problematic aspects. Only a minority of tasks had 'playful' elements, and many were closed-ended, e.g. 'How many times have you had sex?'. In most cases, this resulted in play which was rather more static, lacking an 'energy', and closed-ended tasks which didn't invite further discussion. Moreover, some of the tasks were defined by youth workers as actively problematic within a youth group setting, for example 'touch a body part'.
Many of the youth workers and some older young people often suggested tasks around health promotion, for example: "Explain the C-Card scheme" / "What services could you access?" This reflects arguments within the sex education literature discussed earlier, centred on the debate between a 'restrictive' discourse of sexuality seeking to control young people's sexual activity and a 'permissive' approach seeking to legitimize and acknowledge sexuality [14]. When Joel, one of the youth workers, explained why he thought the game should have an explicit educative purpose, he said "because that's my job". Yet this was simultaneously seen as a problem by a young person, who described his tasks as sounding "boring".
This variety of perspective indicates the complexity of young people and sexuality as a design space. None of these approaches is the 'correct' approach to take; rather, we argue the standpoints of these stakeholders need to be balanced and shared. If we were, as is planned, to hand over control of this game, our users, be they young people or youth workers, are likely to use the game for their own purposes. This could be 'truth or dare' style tasks, closed tasks around specific sexual acts, or use as a test of adolescents' knowledge. It is a benefit that stakeholders are able to utilise the tool for their own purposes, yet in treating young people (and youth workers) as 'design partners', we argue researchers should be aware that in doing this, the outcome of user-centred design may no longer align with the agenda it was originally envisaged with. In this case, the game may no longer possess the playful and inclusive agenda it was designed with.
As discussed, the 'agenda' of our game was broad, in that we wished to open a dialogue about sex and sexuality with young people through the medium of a game. We have discussed the extent to which this was successful, with discussions breaking out in both helpful and perhaps less helpful ways. Also, the necessity of human facilitation, given the nature of the game, meant this inevitably shaped the conversations, such as through the power dynamics between youth workers and young people. These are aspects that we are looking to explore in further work.
Lesson Learnt: Problems can be Opportunities to Talk
On several different occasions, our own conceptions of child sexuality, and of how young people should behave in these settings, were challenged. One task appeared to legitimize unkindly locking another player out of his phone (task 19), and another prompted a player's threat to produce child pornography (task 5). We were also more than a little alarmed, as our readers may be, to hear an underage young man describe "first anal" as a 'moment'. Yet the fact that these uncomfortable issues were raised reinforces them as legitimate areas of enquiry in young people's sexuality, and highlights the importance of talk. Many young people did present themselves as mature sexual beings, and some youth workers reflected that young people's responses to the tasks reflected the complicated reality of sexuality, particularly in relation to technology. Discussing the incident around a player threatening to take a picture of his penis (task 5), one youth leader mentioned "that's a part of their life now, taking photos…so maybe do keep it [the task] in", and with one young man locking a player out of his phone, matters of intimacy and trust were actively played out even more than we were expecting. Therefore, although some of these tasks may appear on the surface problematic, they were perhaps one of the most meaningful sources of conversation with these young people, touching on the self-production of child pornography, and notions of friendship and trust.
CONCLUSION
In this paper, we extend previous work in HCI around sexuality by suggesting young people, sexuality and technology as an agenda for the field. We have shown how, in utilising insights from play and social gaming, and through taking an extended, multi-layered process of user-centred design, we were able to produce work that distanced itself from HCI's more traditional, restrictive and problematic discourses around sex and sexuality [19]. Our approach took a 'permissive' stance [14], prioritised young people's perspectives, and respected their sexual agency, something regularly lacking from 'interventions' in this area [21], particularly when sexuality is considered in conjunction with technology [29].
Young people's sexuality is a contentious topic, dominated by adult opinion, with conflicting views over how the topic should be approached. This research explored how this might be counteracted through a process of user-centred design. Our findings have highlighted the value of engaging all concerned stakeholders in this process of design, and suggested that even when problems arise in the process, these can themselves become productive and meaningful opportunities to 'Talk About Sex'.
|
2017-06-28T05:55:50.655Z
|
2017-06-27T00:00:00.000
|
{
"year": 2017,
"sha1": "9db23e02d5eb555bdf70adb2a183c6b5e13b8a91",
"oa_license": "CCBY",
"oa_url": "http://nrl.northumbria.ac.uk/id/eprint/30738/1/paper258.pdf",
"oa_status": "GREEN",
"pdf_src": "ACM",
"pdf_hash": "7593852694f2ea21c4ef4234a356d3352fec330d",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Computer Science"
]
}
|
264487572
|
pes2o/s2orc
|
v3-fos-license
|
Inter-rater reliability of a novel objective endpoint for benign central airway stenosis interventions: Segmentation-based volume rendering of computed tomography scans
Objectives To evaluate the reliability of a novel segmentation-based volume rendering approach for quantification of benign central airway obstruction (BCAO). Design A retrospective single-center cohort study. Setting Data were ascertained using electronic health records at a tertiary academic medical center in the United States. Participants and inclusion Patients with airway stenosis located within the trachea on two-dimensional (2D) computed tomography (CT) imaging and documentation of suspected benign etiology were included. Four readers with varying expertise in quantifying tracheal stenosis severity were selected to manually segment each CT using a volume rendering approach with the available free tools in the medical imaging viewing software OsiriX (Bernex, Switzerland). Three expert thoracic radiologists were recruited to quantify the same CTs using traditional subjective methods on a continuous and categorical scale. Outcome measures The interrater reliability for continuous variables was calculated by the intraclass correlation coefficient (ICC) using a two-way mixed model with 95% confidence intervals (CI). Results Thirty-eight patients met the inclusion criteria, and fifty CT scans were selected for measurement. The most common etiology of BCAO was iatrogenic in 22 patients (58%). There was an even distribution of chest and neck CT imaging within our cohort. The average ICC across all four readers for the volume rendering approach was 0.88 (95% CI, 0.84 to 0.93), suggesting good to excellent agreement. The average ICC for thoracic radiologists for subjective methods on the continuous scale was 0.38 (95% CI, 0.20 to 0.55), suggesting poor to fair agreement. The kappa for the categorical approach was 0.26, suggesting a slight to fair agreement amongst the raters. Conclusion In this retrospective cohort study, agreement was good to excellent for raters with varying expertise in airway cross-sectional imaging using a novel segmentation-based volume rendering approach to quantify BCAO. This proposed measurement outperformed our expert thoracic radiologists using conventional subjective grading methods.
Introduction
Benign central airway obstruction (BCAO) comprises a complex and multifactorial set of conditions [1]. Patients typically present with signs and symptoms of airflow limitation (dyspnea, cough, wheezing, stridor). However, given the relatively late onset of symptoms, up to half of patients present in respiratory distress [2]. The most common etiology is post-traumatic from prolonged endotracheal intubation or tracheostomy [3]. With the recent worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), there have been reports of increasing BCAO cases following prolonged intubation [4,5], with an expected increase in the coming years.
The burden of BCAO on patients and the healthcare system has recently been examined [6][7][8][9]. One study examining healthcare records reported that patients with tracheal stenosis from prolonged intubation had an increased hospital stay (6.3 days; 95% CI, 6.0 to 6.3), in addition to increased hospital costs ($10,375; 95% CI, $9,762 to $10,988). Another study reported that patients with post-intubation tracheal stenosis (PITS) who underwent nonsurgical treatments (Montgomery T-tube, silicone stent, or tracheostomy) had a decreased quality of life in the domains of physical limitation and bodily pain, and increased emotional distress following the procedure. Thus, early identification and shared decision-making regarding management are vital to prevent further patient morbidity.
However, during the last decade, comparative effectiveness studies evaluating novel therapeutic interventions for BCAO have been lacking, with management currently established on expert opinion, small retrospective cohort studies, and case reports [10][11][12]. In addition, these studies have primarily used conventional subjective grading and classification systems as endpoints of disease recurrence, making interpretation of the results challenging due to uncertain reliability. The field urgently needs more objective methods to assess airway luminal narrowing that are reliable and not overly complex.
OsiriX (Bernex, Switzerland) is a Digital Imaging and Communications in Medicine (DICOM) viewer with the ability to perform advanced post-processing techniques on two-dimensional (2D) computed tomography (CT) scans. In recent years, the ability of the software to create three-dimensional (3D) reconstructions of solid organs with volume rendering and segmentation-based techniques, free of charge, has generated excitement for preoperative planning and trainee simulation [13][14][15]. However, its use within the trachea has not been well described. Currently, it is being explored as an objective endpoint to quantify stenosis recurrence in an ongoing pilot randomized clinical trial (NCT04996173) evaluating the utility of adding spray cryotherapy to standard-of-care interventions in BCAO.
The objectives of our study are twofold. First, we sought to evaluate the reliability of this novel objective manual segmentation-based volume rendering approach to quantify BCAO in raters with varying expertise in cross-sectional airway imaging. Second, we aimed to evaluate agreement amongst expert thoracic radiologists using conventional, subjective methods for assessing stenosis severity currently used in clinical practice and research.
Study subjects
We utilized an ongoing interventional pulmonary procedural database at Vanderbilt University Medical Center (VUMC) to identify patients for inclusion. Demographic data, including age, sex, smoking status, etiology of stenosis, CT imaging type (chest or neck), and axial slice thickness, were ascertained from the electronic health record (EHR). We included only patients with airway stenosis localized to the trachea and documentation of a suspected benign etiology within six months of the selected CT scan. The deidentified data were collected and managed using the Research Electronic Data Capture (REDCap) system [16,17]. This study was approved by our local institutional review board (IRB #211567).
CT image acquisition, segmentation, and volumetric analysis
All images were uploaded from the local picture archiving and communication system (PACS) to version 12.0 of OsiriX. After reviewing multiple sets of imaging before collection, we determined that a 3 cm measurement adequately captured the entire length of a stenotic segment in 95% of patients. The nadir stenosis point was identified in the soft tissue window and marked in the sagittal plane. We then measured 1.5 cm above and below this point. The airway lumen boundaries were then circumferentially marked using the closed polygon tool in the axial window. Four additional levels were manually segmented: the proximal end, one-third down the stenotic segment, two-thirds down the stenotic segment, and the most distal end (Fig 1). Finally, using the built-in repulsor function, the boundaries of the missing segments were manually adjusted to achieve a luminal fit. The resulting volumetric reconstruction was then generated and could be manipulated in 3D space with the resulting volume measurement (Fig 2).
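To make the volumetric idea concrete, the following sketch estimates luminal volume by interpolating cross-sectional areas between the five segmented levels and integrating along the airway axis. It illustrates the principle only; the area values are hypothetical, and OsiriX performs its own interpolation and rendering internally.

    import numpy as np

    # Positions (mm from the proximal end of the 3 cm segment) of the five
    # manually segmented levels: proximal end, one-third, nadir, two-thirds,
    # distal end. Areas (mm^2) are hypothetical stand-ins.
    positions = np.array([0.0, 10.0, 15.0, 20.0, 30.0])
    areas = np.array([120.0, 60.0, 35.0, 55.0, 110.0])

    # Interpolate area along the superior-inferior axis, then integrate
    # with the trapezoid rule to approximate the luminal volume.
    z = np.linspace(positions[0], positions[-1], 301)
    area_z = np.interp(z, positions, areas)
    volume_mm3 = float(((area_z[:-1] + area_z[1:]) / 2 * np.diff(z)).sum())

    print(f"Estimated luminal volume: {volume_mm3 / 1000:.2f} cm^3")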
Interrater reliability
To assess the reliability of this novel endpoint, we identified four clinicians with different levels of expertise in interpreting airway cross-sectional imaging to test the ability of this approach to quantify airway stenosis severity. At the time of data collection, observer 1 (AR) was a pulmonologist, observers 2 (ES) and 3 (LB) were medicine residents, and observer 4 (KP) was a radiologist. AR gave each observer a thirty-minute introduction to the software, with training on measurement and rendering of a test image. To ensure consistency of measurements, AR marked the nadir point of stenosis on each image to be used as a reference. A screen recording was also available to all readers during the study as a reference. All readers were expected to start measuring within 24 hours of the training and to complete their measurements within a two-week timeframe. To compare the reproducibility of our segmentation-based approach with more subjective quantification methods, we recruited three expert thoracic radiologists (AG, KS, and TM) to read the same CT images and give their opinion of stenosis severity on both a continuous scale from 0-100% and a categorical scale using the Cotton-Myer grading system (grade 1: 0-50%; grade 2: 51-70%; grade 3: 71-99%; grade 4: no detectable lumen). In contrast to the objective method, no nadir point of stenosis was pre-identified in the subjective approach.
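The Cotton-Myer cut-offs quoted above translate directly into a small helper function. The sketch below is a minimal illustration of that mapping, assuming a continuous stenosis estimate as input; it is not part of the study's pipeline.

    def cotton_myer_grade(percent_stenosis: float) -> int:
        """Map a continuous stenosis estimate (0-100%) to a Cotton-Myer grade,
        using the cut-offs stated above (grade 4 = no detectable lumen)."""
        if percent_stenosis >= 100:
            return 4
        if percent_stenosis >= 71:
            return 3
        if percent_stenosis >= 51:
            return 2
        return 1

    assert cotton_myer_grade(45) == 1 and cotton_myer_grade(85) == 3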
Statistical analysis
Descriptive statistics are presented, including means, medians, interquartile ranges (IQR), standard deviations, and ranges for continuous parameters, and percentages and frequencies for categorical parameters. The interrater reliability for continuous variables was calculated by the intraclass correlation coefficient (ICC) using a two-way mixed model with 95% confidence intervals (CI). Fleiss's kappa was used to determine the reliability between groups of categorical variables. Guidelines for the interpretation and reporting of ICC and kappa have been previously described [18,19]. No correction was made for missing data. All analyses were performed by an independent statistician using R software, version 4.2.0 (R Foundation for Statistical Computing, www.r-project.org).
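The study's analyses were performed in R; purely as an illustration of the two reliability statistics, the Python sketch below computes a two-way mixed-effects ICC via the pingouin package and Fleiss's kappa via statsmodels, on made-up ratings.

    import pandas as pd
    import pingouin as pg
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Long format: one row per (scan, rater) volume measurement (made-up data).
    df = pd.DataFrame({
        "scan":   ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
        "rater":  ["r1", "r2", "r3"] * 3,
        "volume": [3.1, 3.0, 3.3, 1.2, 1.4, 1.1, 2.5, 2.6, 2.4],
    })

    # Two-way mixed-effects ICC; pingouin reports this as ICC3/ICC3k.
    icc = pg.intraclass_corr(data=df, targets="scan", raters="rater",
                             ratings="volume")
    print(icc[icc["Type"].isin(["ICC3", "ICC3k"])][["Type", "ICC", "CI95%"]])

    # Fleiss's kappa for categorical grades (rows = scans, columns = raters).
    grades = [[1, 1, 2], [3, 3, 3], [2, 1, 2]]
    table, _ = aggregate_raters(grades)
    print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))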
Results
Thirty-eight patients met the inclusion criteria, with fifty CT scans between 2009 and 2021. Twenty-two (58%) were labeled iatrogenic (post-intubation or post-tracheostomy), 10 (26%) idiopathic, and the remaining six were due to other suspected etiologies (Table 1). Twenty-seven (71%) patients were women, and most were never-smokers (68%). The resolution of the scans ranged from 1 to 5 mm, with a median of 2 mm (IQR, 1.25 to 3). Our cohort had an even distribution of CT neck and chest imaging. The calculated airway luminal volume means and standard deviations for each rater are displayed in Table 2. The average ICC across all four readers was 0.88 (95% CI, 0.84 to 0.93), suggesting good to excellent agreement. The average ICC for the thoracic radiologists on the continuous grading scale for the subjective approach was 0.38 (95% CI, 0.20 to 0.55), suggesting poor to fair agreement. The average Fleiss kappa for the categorical Cotton-Myer grading system was 0.26, suggesting slight to fair agreement amongst raters.
Discussion
In this retrospective cohort study, we show that clinicians with varying expertise in airway cross-sectional imaging have good to excellent agreement when using a novel, objective segmentation-based volume approach to quantifying BCAO. Further, we demonstrate that when a group of expert thoracic radiologists evaluates the same images using traditional, more subjective approaches, they have poor overall agreement with both continuous and categorical measures.
Existing classification and grading systems [18][19][20][21] for BCAO are almost never used in clinical practice and only inconsistently in research. We believe the reasons include 1) heavy reliance on a subjective interpretation of airway luminal narrowing, 2) poor external validation limiting generalizability, 3) lack of reproducibility, 4) poor standardization across and within specialties, and 5) poor correlation with physiological markers of disease activity and patient-reported outcomes (PRO). For example, changes in peak expiratory flow, which have previously been shown to predict disease recurrence in patients with BCAO, were recently shown to correlate poorly with stenosis severity graded using the Cotton-Myer classification system, with an overall kappa of 0.37 [22,23].
For several reasons, classification and grading systems for BCAO that rely heavily on subjective evaluation measures prove to be the most problematic. First, they may not correctly characterize complex lesions, such as lesions with a more significant vertical extent (≥ 1 cm in length), those that invade surrounding cartilaginous structures, and those with dynamic collapse from underlying malacia. Second, reproducibility amongst providers is challenging, as assessment of luminal narrowing is often "eye-balled", with difficulty interpreting when a transition state occurs (e.g., 49% versus 50% stenosis). Finally, most systems that use a visual grading of stenosis require direct visualization, with inherent procedural and anesthesia risks.
We believe that an approach using volumetric assessment to quantify airway stenosis from readily available 2D CT imaging may address some of these challenges. By visualizing a stenotic segment in multiple dimensions, one can get a more comprehensive sense of the lesion's vertical and structural extent. This can prove advantageous in decision-making regarding early referral for surgical resection, as this has been shown to be the most definitive treatment in patients with BCAO who are suitable candidates [24]. Additionally, objective data following an endoscopic therapeutic intervention allow for better identification of disease recurrence and the need for a repeat procedure. Finally, little expertise in measurement is required, as we have shown that readers with minimal training and expertise in airway cross-sectional imaging were able to reach strong agreement. This contrasts with our radiologists' subjective grading, highlighting the challenges with current approaches in everyday practice.

This study has several notable strengths. Multiple etiologies of BCAO were included, highlighting the generalizability of these findings. Our reported ICC suggests that this novel measurement is reliable regardless of the underlying type of CT performed (chest or neck). The patients in our cohort had a variety of stenotic lengths and severities of luminal narrowing, highlighting the ability of our readers to agree on lesions of different complexities. The ability to analyze these images using the free-of-charge tools in OsiriX with relatively brief training suggests that this approach could be widely adopted with minimal cost or effort. However, it is essential to consider the inherent trade-off of the increased time required to perform such measurements compared to traditional subjective methods.
Limitations of this study include a modest sample size and testing limited to the confines of the trachea. The a priori identification of the point of nadir stenosis may have introduced bias, improving recognition of the stenotic area. However, this approach was chosen to minimize ambiguity in identifying structural abnormalities and prioritize accurate measurements. As the dataset was retrospective, direct correlation of tracheal volume with stenosis severity or quality-of-life measures was not possible. Future research could explore establishing thresholds or criteria for tracheal volume indicative of stenosis severity or its impact on quality of life. Additionally, the study was limited in assessing dynamic imaging or individuals with underlying malacia.
In conclusion, we report the reliability of a novel objective measure for quantifying airway stenosis based on a straightforward volume rendering approach in the open-source medical imaging viewer OsiriX. This measure holds promise as an objective research endpoint for assessing airway luminal narrowing and may serve as an accurate assessment of disease recurrence in studies testing new therapeutic interventions in BCAO.
Fig 1. The trachea is manually segmented for 3 cm along the superior-inferior axis in the axial view. (A) Represents the most proximal segmentation. (B) Shows segmentation at the focal nadir point of stenosis. (C) Shows the two-thirds segmentation point. (D) Represents the most distal portion of the stenotic segment to be measured.
Table 1. Baseline characteristics, with median (IQR) for continuous variables and number of patients with relative frequencies (%) for categorical variables. † Iatrogenic includes post-intubation and post-tracheostomy-induced stenosis. § Idiopathic includes no formal etiology given within a six-month time frame of the subject CT.
|
2023-10-27T05:09:25.344Z
|
2023-10-25T00:00:00.000
|
{
"year": 2023,
"sha1": "f6e9cac4865a894b8e252ec1736b13e68028c4a2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f6e9cac4865a894b8e252ec1736b13e68028c4a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
92722318
|
pes2o/s2orc
|
v3-fos-license
|
Joint effects of five environmental factors on the growth of cyanobacterium Microcystis aeruginosa
In many lakes and reservoirs, Microcystis aeruginosa is one of the dominant bloom species. Five environmental factors, including nutrients and physical factors, were selected to evaluate their effects and interactions on the growth of M. aeruginosa (FACHB-905) by joint analysis in a laboratory batch culture. The results indicated that all five factors affected the growth rate alone or in combination, and that their interactions were complex. This cyanobacterium strain preferred higher water temperature and alkaline conditions, while not requiring high illumination or high concentrations of nitrogen and phosphorus. Owing to these features, blooms of this cyanobacterium appear easily in nature. The form of nitrogen (nitrate or ammonium) also affected the assessment of M. aeruginosa bloom. The possibility of an M. aeruginosa bloom would still exist even if the phosphorus concentration in the water column was very low. The results provide a good basis for the analysis and prediction of M. aeruginosa blooms in terms of environmental assessment, because a joint analysis of multiple factors offers more valuable information than a univariate analysis.
INTRODUCTION
Harmful algal blooms (HABs) in freshwater have become a hot topic across the world. They result in a deterioration in the quality of water resources, and also create adverse conditions that affect the growth and development of aquatic organisms in lakes or reservoirs (Kameyama et al. ; Backer et al. ). The formation mechanisms of HABs, as well as how to forecast or control the blooms, have therefore become a worldwide concern and a subject of serious debate (Tayaban et al. ). It is well known that an excess of green-blue algae (also usually known as cyanobacteria) is responsible for such blooms. Previous studies have applied orthogonal experimental designs to investigate several factors together; their results gave more information than univariate experiments. However, the limitations of orthogonal experiments, which cannot include many factors or levels, meant that few factors were included in those analyses. Studies using many experimental treatments would incur too much expenditure and workload, while those using fewer factors or levels offer only a limited explanation of blooms. Quiblier et al. () also conducted multi-factor studies in batch culture, using natural water with added nutrients; their main subject was the phytoplankton community. It has thus been shown that more studies including more factors are essential to bloom analysis.
The Uniform Design method (UD) was devised by Professors Fang and Wang in 1978, and is an important method for both virtual (simulated) and physical experimental designs; UD has become a standard tool in experimental design over the last two decades (Fang & Ma ; Winker & Lin ).
Compared with other statistical methods, UD reduces the number of experiments in a multiple-dimension optimization and allows the largest possible number of levels for each factor (Wu et al. ). It has been used successfully in many condition-optimization experiments (Peng et al. ). Compared with the common orthogonal design method, this method has some particular advantages. First, only one experiment is needed for each level of each factor, so the number of experimental runs equals the number of levels; the lower number of experimental treatments reduces cost and workload significantly. Secondly, it is both convenient for analyzing interactions among experimental factors and helpful in developing a mathematical model. Owing to these advantages, UD has hitherto been successfully applied in many research areas (Liang et al. ; Mehri & Ghazaghi ).
Five environmental factors with effects on the growth of M. aeruginosa were chosen for a basic integrative study. The five factors were nitrogen, phosphorus, temperature, pH and illumination. The joint effect of these factors on the growth of M. aeruginosa was the main aim of this study, but we also aimed to compare the impacts of ammonium and nitrate. We hope that the optimal conditions obtained from this study will be helpful in understanding the complexity of cyanobacterial blooms.
Cyanobacterium strain and culture conditions
The cyanobacterium strain Microcystis aeruginosa (FACHB-905) was bought from the Freshwater Algae Culture Collection of the Institute of Hydrobiology, CAS (http://algae.ihb.ac.cn). The cyanobacterium seed was cultured in BG-11 medium in the culture box prior to the experiments. Culture conditions for the seed were as follows: water temperature 25 ± 0.5 °C; pH 7.5-8.5; illumination 28.5-34.2 μmol photons m⁻² s⁻¹; and photoperiod 14 h:10 h (light:dark).
Experimental materials, equipment and environments were sterilized or disinfected before the experiments.
M. aeruginosa was cultured under axenic conditions. All operations and counting processes were performed under aseptic laboratory conditions.
Experimental designs
The five environmental factors selected for the study were nitrogen (nitrate in one experiment and ammonium in the other), K2HPO4, illumination, water temperature, and pH. The optimal table of the Uniform Design, U*12(6² × 3² × 4), was adopted (Table 1). Twelve treatments were set for each experiment, as well as three repetitions for each treatment. The batch culture method was used for both experiments, i.e., no nutrients were added during the experiments.
(1) An appropriate amount of M. aeruginosa in its exponential growth stage was transferred into a 5 L triangular flask, and all the components of the BG-11 culture medium were added except the nitrogen and phosphorus sources. After being shaken gently, 100 mL of mixed liquid with well-distributed M. aeruginosa was pipetted into a 250 mL triangular flask, which was marked as one repetition of an experimental treatment.
(2) Nitrate and K2HPO4 were added to each treatment, according to the concentrations presented in Table 1.
Three repetitions were specified for each treatment.
(3) After controlling the pH of each treatment, the flasks were placed under their temperature and lighting conditions, as listed in Table 1. For the second experiment, the above steps were repeated, but using ammonium instead of nitrate.
Data collection and analysis
The densities of M. aeruginosa were measured microscopically in the blood-cell-counting chamber every two days.
These flasks were shaken twice a day during the experiments. The growth rates (μ) of this cyanobacterium were calculated using the formula

$$\mu = \frac{\ln N - \ln N_0}{t}$$

where t indicates the number of days in the exponential growth period, N is the density on day t, and N₀ is the initial density at the beginning of the exponential growth period. DPS software (Tang & Feng) was used for the subsequent statistical analysis and data fitting. The software OriginPro was used to plot the figures.
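As a quick numerical illustration of this formula, the specific growth rate can be computed from two density readings; the densities and duration below are hypothetical, not measurements from this study.

```python
import math

def growth_rate(n0, nt, t_days):
    """Specific growth rate mu = (ln N - ln N0) / t over the exponential phase."""
    return (math.log(nt) - math.log(n0)) / t_days

# Hypothetical example: density rises from 1.0e5 to 8.0e5 cells/mL over 6 days
mu = growth_rate(1.0e5, 8.0e5, 6)  # ~0.347 per day
```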
Brief introduction to the growth of M. aeruginosa
In both experiments, minor increases and lower final densities were detected for most treatments. M. aeruginosa increased greatly and reached the highest final densities under the conditions of treatment C04 in both experiments. In the ammonium experiment, treatment C04 showed the best growth of M. aeruginosa, outperforming the other treatments, with a rapidly rising growth curve. Among the other treatments, C02 showed better growth, but its cell density was close to that in treatments C08, C11 and C12. In the nitrate experiment, cell densities increased sharply in both treatment C04 and treatment C09, although C04 reached a slightly higher density. Some treatments showed minor increases, and others showed nearly no increase. The final densities under the two nitrogen forms were quite different; the difference between the maximum and minimum was large (Figure 1).
Comparing the final densities across treatments (Table 1), the differences clearly indicated the preferred growth conditions of M. aeruginosa. This cyanobacterium preferred higher water temperatures and an alkaline environment, but did not require higher illumination or higher concentrations of nutrients (nitrogen and phosphorus). Phosphorus was not added in treatments C08 and C11 in this study; however, the cyanobacteria in them kept growing to some extent (Figure 2). The growth rates of C09 differed greatly between the two nitrogen forms, which was consistent with the analysis of final density.
The effect of temperature was also obvious in this study. The growth rates in the high-temperature group (e.g. C02, C04, C05 and C12) were higher than those of the low-temperature group (e.g. C01, C06, C07 and C10). Low temperature was an important limitation for the bloom (Figure 2).
Regression equations
The stepwise regression of a quadratic multinomial to the growth rate was performed for both experiments, in order to derive optimal growth conditions and understand the relationships among these factors.
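A minimal sketch of fitting such a quadratic multinomial in Python follows; the factor settings and growth rates are random placeholders, not the study's data. With only 12 treatments the full 20-term quadratic model is underdetermined, which is exactly why a stepwise procedure (as implemented in DPS) must prune insignificant terms.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(size=(12, 5))        # hypothetical settings of N, P, T, pH, light
y = rng.uniform(0.1, 0.4, size=12)   # hypothetical growth rates (per day)

# Expand to linear, squared, and pairwise interaction terms
quad = PolynomialFeatures(degree=2, include_bias=False)
Xq = quad.fit_transform(X)
fit = LinearRegression().fit(Xq, y)  # a stepwise routine would drop weak terms here

names = quad.get_feature_names_out(["N", "P", "T", "pH", "L"])
coefficients = dict(zip(names, fit.coef_))
```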
Optimal growth conditions of M. aeruginosa
The optimal values indicated that this cyanobacterium preferred higher water temperatures and an alkaline environment, but did not need higher illumination or higher concentrations of nitrogen and phosphorus. That is to say, this cyanobacterium strain has a highly adaptive ability in the environment. This result is consistent with some reports about cyanobacteria (Dokulil & Teubner).
Cyanobacteria proliferations have been reported in oligotrophic and mesotrophic freshwater bodies (Jacquet et al.).
A strong M. aeruginosa proliferation was observed in the Djoudj pond with higher nitrogen concentrations, whereas soluble reactive phosphorus (SRP) concentrations were always low (Berger et al. ; Quiblier et al. ).
The results of this study were quite similar to the field observations. In addition, the designed N/P (nitrogen/phosphorus) ratio in treatment C04 was 15:1, which is close to the Redfield ratio and similar to other reports (Yi et al.; Zhang & Hu). This ratio is also a favorable condition for the growth of microalgae.
The regression equations showed that all five environmental factors affected the growth rate, alone or in interaction. Generally speaking, the difference between the univariate and multivariate results was not large (Zhang et al. a, b, c), but the multivariate result is more suitable for bloom assessment. The fitted model is also useful for forecasting cyanobacterium growth under various conditions. In addition, blooms may be limited by factors that were not tested in our study, so further studies including more key factors should be conducted in the laboratory and in the field. This work provides a good basis for bloom analysis.
Influence of nitrogen forms
The results of this study not only suggested the influence of environmental factors on the growth of M. aeruginosa, but also reflected the different effects of the nitrogen forms: as mentioned before, the only difference between the two experiments was the nitrogen form, yet the optimal conditions and the factor interactions were not the same (Figure 2). The nitrogen forms in an aquatic ecosystem are changeable; transformation between nitrogen forms occurs frequently owing to biological processes. A combined analysis including various nitrogen forms in the water column would be useful for phytoplankton studies.
About phosphorus storage
Obviously, the zero phosphorus concentration in this study was a calculated result, which did not mean that this cyanobacterium did not need phosphorus (Wang et al.). Owing to intracellular phosphorus storage, even when little phosphorus remained in a water column, a bloom was still possible. This may also affect the analysis of the N/P ratio.
Enlightenments gained from the physical factors
The current results suggest that water temperature had a strong influence on the growth of M. aeruginosa. This is consistent with monitoring results in situ, with many cyanobacterial blooms occurring when water temperature increases during the summer months (Zheng et al.). This primary study is a good basis for bloom analysis; experiments including more environmental factors will give us a better understanding of bloom occurrence.
Anglicism in Indonesian
This article discusses a language phenomenon currently occurring in Indonesia related to borrowing English words with the addition of the prefix ng-/nge- in Indonesian. The purpose of this article is to show how some English words are borrowed in Indonesian and what changes occur within this borrowing process at two linguistic levels (phonological and semantic). The data were collected through observation, either in written form found in social media or in oral form used in daily conversations. The interim results show that, phonologically, the loan words generally follow Indonesian phonological rules, with some divergence in certain cases. From the semantic analysis it was found that these Anglicism words can be divided into three categories based on their meanings: restriction, expansion and static.
Introduction
The influence of English on other languages in the world began in the 17th century, along with British colonial expansion. Since then, there has been an ongoing linguistic relationship between English and other languages. Basically, this phenomenon occurs because of the cultural, political and trade contacts between Britain and America and other countries.
The influence of English on Indonesian itself reached its peak with the era of globalization. To this day, Indonesian faces many problems due to the presence of English with respect to the language's development and existence. The use of English in public has become an inevitable habit. This has resulted in a reluctance to use the Indonesian language and culture, which slowly but surely has made English a more preferred language in this country. Alwi et al. (eds.) (2003, p. 9) state that the inclusion of elements of English is considered by some as a contamination of the authenticity and purity of Indonesian as a whole. This is the cause of interference. Chaer (1994, p. 66) defines interference as the entrance of other language elements into a language being used in a community, triggering a deviation from the rules of that language. In addition to interference, integration is also regarded as a contaminant of the Indonesian language. Chaer (1994, p. 67) states that integration occurs when elements of another language are considered, treated, and used as part of the language they enter. This integration process certainly takes a long time, because the integrated elements have been adjusted in pronunciation, spelling, and form.
The way a language borrows foreign words is actually quite a complex process. In Indonesian, English words arrived through the written and spoken media that influence it. This follows the pattern that linguistic borrowing between two languages can occur when there is close contact between the speakers of both languages, reinforced by the use of the internet, especially social media. English terms associated with computers, technology and even sports such as football can be easily adopted into Indonesian because Indonesians feel that their language does not have exact equivalents for the concepts. Bojčić et al. (2012, p. 2) state that the use of an English word in another language is called Anglicism. Basically, word absorption/loan (loanwords) can also be referred to as Anglicism, given the facts that the original words are English words, that the words are taken from English, and that the words refer to objects or ideas that come from English and are closely related to the lifestyle and culture of the English and Americans.
One of the language phenomena that caught the attention of the writer in relation to Anglicism is the emergence of English words that enter Indonesian through direct absorption, in both form and pronunciation, combined with informal Indonesian affixation, especially the prefix ng-/nge-, as in ngorder, ngeblock, ngebully, etc. These words are often found in social media and are actively used by young people. The use of these words is very interesting in terms of phonology and semantics, since Indonesian and English are genetically different languages. Based on this fact, the writer intends to examine whether the addition of prefixes to English words follows Indonesian phonological rules in general, and whether there is a difference in meaning from English. In this borrowing process, it seems that foreign words should be adapted to Indonesian in order to function properly and not lose meaning in their new language. In short, the purpose of this article is to show how some English words are borrowed in Indonesian and what changes occur in this borrowing process at both linguistic levels (phonological and semantic).
Method
The method used in this research is descriptive, as it describes and examines a language phenomenon by looking at Indonesian phonological rules and the semantic changes of the Anglicisms. The data were taken from social media and conversations among youngsters through note-taking and recording. However, not all the data are used in this research. The data were selected carefully based on the category of loan words required: full absorption (form and pronunciation) of English words.
Results
There are two types of results in this research. The first is the result of the phonological analysis, as seen in Table 1. The second is the result of the semantic analysis, illustrated in Table 2. Table 1 consists of the base word and prefix (meng- and its allomorphs), and the formal, informal and phonetic forms of the words. The formal form is the grammatically accepted (standardized) form, while the informal form is the culturally accepted form (mostly used in oral communication). The informal form is actually a simplified version of the formal form, created by deleting parts of the prefix. Table 2 presents the meanings of the bases and of the Anglicism words. The meanings of the base words are taken from the Oxford online dictionary (2017), while the meanings of the Anglicisms are derived from the distribution of the words in written and spoken sentences.
Discussion
Borrowing is a phenomenon which may throw light on the internal organization of language (Hudson, 1980, p. 61). It is common for foreign sounds in borrowed words to be replaced by native sounds, or for natives to adopt foreign sounds that did not previously exist in their phonological system. This is an extremely common phenomenon in many languages. Borrowing is not restricted to words only: Bynon (1977, p. 255) (as cited in Hudson, 1980, p. 60) showed that borrowing is also possible at the inflectional level. In Indonesian, borrowing can occur with base words combined with an Indonesian prefix, here the informal form of the prefix meng- (ng-/nge-).
The prefix meng- in Indonesian has the allomorphs me-, mem-, men-, meny- and menge-, while the prefix ng-/nge- is an informal affix in Indonesian which generally represents the formal prefixes meng- and menge-, although it can also replace the other allomorphs. This is evident in cases such as me-lihat (see), whose non-formal form becomes nge-liat, whereas me-minum (drink) in non-formal use simply becomes minum. At this phonological level, the analysis is done by looking at the initial letters of the base words and trying to apply the rules for attaching the prefix meng- (formal and informal forms) to fully absorbed English words.
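As a rough illustration of the allomorph selection described above, the Python sketch below maps the initial segment of a base to a formal prefix form. It simplifies the rules discussed in this article (sound loss for k, p, t, s; menge- for monosyllables) and deliberately ignores the loanword exceptions examined in the following subsections.

```python
def attach_meng(base, monosyllabic=False):
    """Simplified formal meng- allomorphy for native Indonesian bases."""
    if monosyllabic:              # monosyllabic bases take menge-
        return "menge" + base     # e.g. bom -> mengebom
    c = base[0].lower()
    if c in "aeiough":            # vowels, g and h keep meng-
        return "meng" + base
    if c == "k":                  # voiceless k drops
        return "meng" + base[1:]
    if c in "bf":
        return "mem" + base
    if c == "p":                  # voiceless p drops
        return "mem" + base[1:]
    if c in "dcj":
        return "men" + base
    if c == "t":                  # voiceless t drops
        return "men" + base[1:]
    if c == "s":                  # s drops and is replaced by ny
        return "meny" + base[1:]
    return "me" + base            # l, m, n, r, w, y take plain me-

print(attach_meng("lihat"))   # melihat
print(attach_meng("pukul"))   # memukul
```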
The data used as examples in this article are words with the same form and pronunciation whether functioning as Indonesian words or English words. Table 1 shows that, in general, the phonological rules for the prefix meng- are applicable to Anglicism words. However, there are some words for which the rules are hard to apply: add, hack, like and post. Below is the phonological analysis of these words, for which the Indonesian phonological rules do not hold.
Add
Judging by its first letter [a], the word add should receive the prefix meng-, becoming meng-add in the formal form and *ngadd in the informal form. This follows the rule that when the prefix meng- is added to words beginning with vowels, the prefix takes its original form. To illustrate this, consider the Indonesian words asap, ekor, iris and oceh: asap → mengasap/ngasap, ekor → mengekor/ngekor, iris → mengiris/ngiris, oceh → mengoceh/ngoceh. Nonetheless, this rule is unworkable for the word add in non-formal contexts, because this word consists of only one syllable. This is also the reason why the affix ng-, when attached to the word add, cannot comply with the Indonesian phonological rules; therefore the accepted non-formal affix for the word add is nge- (add → ngeadd).
Hack
The word hack begins with the sound [h], so according to the Indonesian phonological rules it should receive the affix meng-. For words borrowed from English, however, this rule does not seem to hold. The [h] sound, a fricative, should disappear when the word takes the prefix ng-; consider the Indonesian examples hilang and hisap for illustration (hilang → ngilang, hisap → ngisap). In this case, however, it did not. In fact, the word hack has to take the informal prefix nge- instead of ng-, because hack consists of only one syllable, which makes it impossible for the word to follow the ng- prefixation rule for words beginning with the letter h.
Like
For words beginning with the letter l, the phonological rules for adding the prefix meng- in Indonesian are as follows: langkah → melangkah / ngelangkah, as in "Kalau ngelangkah mesti hati-hati." ("Be careful when you take a step."); lapor → melapor / ngelapor, as in "Resi suka sekali ngelapor ke gurunya." ("Resi likes to report things to her teacher."). In contrast, for the word like, the addition of the prefix me- is unacceptable: even though the written form *me-like looks acceptable, the pronunciation is not quite correct. The addition of the sound [ŋ] to the transitive form of like is needed because the sounds [ə] and [l] require a sound between them that can give emphasis to the first syllable and separate the two sounds. The non-formal form of the word like (ngelike), however, follows the non-formal rules as in the examples above.
Post
For words with initial p, the addition of the prefix meng- in the formal form follows the phonological rule whereby the base word receives the prefix mem- and the [p] sound disappears, whereas for the non-formal form two phonological rules were found, the first of which allows the use of the affix while the second does not. Unlike native Indonesian words, the word post cannot follow the formal rule: when post takes the prefix mem-, the [p] sound does not disappear, and likewise when it is attached to the non-formal affix nge- (ngepost).
Note that some Anglicism words (marked *) have phonologically unacceptable formal forms. Those forms are never used by Indonesian speakers because they sound awkward or abnormal. Uniquely, although these words cannot be used in the formal form, the non-formal forms are used widely.
Semantic Analysis
Table 2 consists of 18 Anglicism words. It shows that the prefix ng-/nge- can be attached to the English word classes of verbs, nouns and adjectives, with verbs dominating. The syntactic function of adding this prefix to Anglicism words is to form active verbs. Another interesting aspect that can be analyzed in addition to the phonological rules is the semantic aspect of these words. This is done by examining whether there is a change in meaning from the base to the Anglicism.
"The most basic function of language is to communicate, as a means of association and communication among fellow human beings, so as to form a social system or society" (Nababan, 1984, p. 2).In addition, Bloomfield (as cited in Alwasilah, 1993, p. 37) says that the language community is a group of people who use the same system of signs and speech.The principle of language society is formed by mutual intelligibility, mainly because of the existence of togetherness in linguistic codes.
Based on the statement above, the meaning of a word can be formed through mutual agreement among its speakers. Often the same word has different meanings in different places, because the meaning of a word is based on elements of the linguistic knowledge possessed by its speakers, a notion introduced by Chomsky (2000, p. 48) as the concept of language. This is why semantic analysis of loan words is considered necessary, to see whether the meanings of the words undergo extension, restriction, or remain static when they become Anglicisms.
Looking at the overall data and by comparing it with the original meaning of the words taken from Oxford online dictionary (2017), the meaning of Indonesian words (Anglicism) can be categorized into restriction, expansion, and zero semantic extension (Bojčić, et al., 2012, p. 9-10).
Restriction of Meaning
Restriction of meaning occurs in words like ngeadd, ngeblank, ngegame, ngelike, ngorder, ngepost, and ngerequest, which can be shown by looking at the distribution of the words in sentences.

1. "I'd like to add her on facebook."
2. Ga tau kenapa pas di depan tiba-tiba saya ngeblank. "I don't know why, when in front (of the class), I suddenly have no idea."
3. Pacarku suka sekali ngegame di warnet. "My boyfriend loves to play online games in an internet cafe."
4. Kok ga ada yang ngelike status saya ya? "Why isn't anybody liking my status?"
6. "Wait a minute, I want to post this on instagram."
7. Dia sudah ngerequest di facebook sih tapi ga saya respon. "He's been requesting to be my friend on facebook but I don't give a response."

The sentences above demonstrate the restriction of meaning, because in Indonesian the words can only be used in contexts related to certain things such as social media, technology, mental conditions, or matters closely related to the lives of young people. For example, the word ngeblank, derived from the word blank (empty), can only be associated with the absence of an idea or thought (a mental state), whereas the base word blank can be associated with objects such as paper (blank paper) and generally refers to empty space. Likewise, the word ngegame shows a restriction from the base word game, which means "any kind of game", to "playing an online game only".
Expansion of Meaning
One example of an expansion of meaning from a basic English word can be seen in the word ngeblunder. Look at the following sentences:

a. Kipernya semalem ngeblunder makanya kita kalah. "The reason why we lost last night was because the goalkeeper blundered."
b. Kok permasalahannya jadi ngeblunder gini ya. "The problem seems to be mixed up/unclear now."

Sentence (a) indicates that the word ngeblunder has the same meaning as the base word blunder (making a big mistake). However, this exact meaning is known only to people who like to watch football, because the term is often used by football commentators. For people not involved in the world of football, the word ngeblunder is often understood as "mixed-up or unclear", as can be seen in sentence (b). The use of the word blunder in this sense seems to be associated with the word blender, probably because blunder and blender bear a resemblance in sound, distinguished by only one vowel, and the word blender itself is very familiar to Indonesians.
Another example that falls into this category is the word ngefly. Derived from the base word fly, ngefly has experienced an expansion of meaning to "being flattered" and "using drugs". Below are examples of the word in sentences:

a. Duh pujian dia buat gue ngefly deh. "His compliment makes me flattered."
b. Tiap hari dia ngefly mulu deh. Kapan sadarnya ya? "Every day he's using drugs. When will he ever stop?"

The expansion of meaning in the word ngefly in both sentences actually has a connection with the original meaning of the base word. In sentence (a), ngefly describes the feeling of someone who seems to "fly" because of the happiness of receiving a compliment. In sentence (b), ngefly refers to the state of an unconscious person after using a drug, as if "flying" into his subconscious. In both cases the connection between ngefly and fly lies in the mental condition or state of a person.
Zero Semantic Extension
The data also contain some Anglicism words that do not change in meaning, i.e., they have the same meaning as the base words. Regardless of the change in word class, the meanings of these Anglicism words do not differ from those of the original (base) words. In such cases, the direct borrowing of the word (in form and pronunciation) also borrows the concept carried by the word. Uniquely, although Indonesian equivalents of some of these words exist (ngedownload = mengunduh, ngupdate = memperbaharui), people tend to be more comfortable using the loan words due to familiarity and practicality.
Conclusion
Indonesian is one of the languages most open to foreign influences. The language phenomenon related to the borrowing of English words (Anglicism) needs special attention in order to see the forms and tendencies that allow these words to enter and become part of the Indonesian language. As we know, a large number of loan words have lately been borrowed from English because of the constant changes in all areas, especially in technology.
From the results of the analysis above, it was found that, phonologically, Anglicisms generally follow the rules of the Indonesian language, although there are some words that do not conform to them. Semantically, however, it was discovered that although most of the data show no change in the meaning of the base words, there seems to be a tendency toward changes in meaning, both restriction and expansion, which is of interest and needs to be studied further using a larger data corpus.
Table 1. Result of Phonological Analysis

Table 2. Result of Semantic Analysis
Value-Based Care for Nonoperative Management of Hip and Knee Osteoarthritis: Current Landscape Not Ripe for Implementation
As the most expensive health-care system in the world, a central focus of health-care reform in the United States has been on delivering value-based care. Within orthopedics, joint arthroplasty has been the primary subject of this policy shift. A number of bundled or alternative payment models (APMs) have been implemented, starting with the 2009 Acute Care Episode Demonstration and leading to the 2018 Bundled Payments for Care Improvement Advanced. While APMs have been shown to decrease the length of stay and nonhome discharge after total joint arthroplasty (TJA), other studies have shown similar improvements in patient-reported outcomes and rates of 90-day unplanned readmissions, emergency department visits, and mortality relative to nonbundled procedures [1-4]. Furthermore, while there is evidence demonstrating a positive impact of APMs on TJA cost containment [5-7], this outcome has not been universal, with some institutions reporting significant losses after the implementation of Bundled Payments for Care Improvement Advanced [8]. One of the challenges of APMs is that target costs are often based on historical references. Failure to achieve those often "low" target values can result in a penalty, causing some institutions to withdraw from APMs [2].
The Centers for Medicare and Medicaid Services is currently considering the expansion of payment reform to the nonoperative management of osteoarthritis (OA). It is estimated that over 32.5 million adults in the United States are affected by hip and knee OA with mean outpatient costs estimated at $7840 per person [9,10]. Apart from curbing the costs of care for one of the most expensive chronic conditions, the proposed longitudinal OA bundle would also complement our traditional problem-focused approach by attending to more holistic aspects including lifestyle modifications, patient education, and counseling on pain-coping skills [11]. These interventions could modulate the course of OA burden and optimize outcomes for patients who eventually undergo surgery. Bundled payment programs that focus only on surgical procedures and the immediate postoperative period are inherently limited because they do not address factors that could preoperatively improve patients' outcomes before the disease has progressed to the point of needing surgery [11].
Nonsurgical management of OA is currently reimbursed on a fee-for-service basis, which is dependent on the quantity rather than the quality of care. Value-based payment programs, on the contrary, may have the potential to promote evidence-based, cost-effective care, increase care coordination among different medical specialists, and optimize outcomes for patients who eventually undergo TJA. One example is the Australian Osteoarthritis Chronic Care Program, which is funded by the Ministry of Health. The program provides comprehensive nonoperative care including exercise, weight loss, pharmacologic management, and psychological management for 1 year and is coordinated by dedicated musculoskeletal specialists. This resulted in an 11% decrease in TKA utilization and a 4% decrease in THA over 1 year because of successful nonoperative management [11]. In the United States, a pilot program is underway at the University of Texas in Austin. The program consists of an integrated group of orthopedic surgeons, advanced practice nurses, nutritionists, and behavioral health-trained social workers. After enrollment in the program, 65% of patients achieved a minimum clinically important difference in their hip and knee disability and osteoarthritis outcome scores (HOOS, JR and KOOS, JR) at their first follow-up visit. For patients who progressed to needing surgery, a decrease in surgical length of stay and an increased rate of discharge to home were also observed [11]. A national value-based care model for nonoperative management of OA would require a similar arrangement to be effective. In particular, APMs need to be based on the provision of evidence-based care and timely referral for joint arthroplasty when conservative management has failed. Otherwise, primary care physicians and other nonorthopaedic care providers participating in bundled care would be incentivized to perform non-evidence-based treatments (e.g., viscosupplement injections) and not refer patients for surgery.
As previously stated, a fundamental component of value-based care is the provision of evidence-based treatment. Specific to hip and knee OA, the American Academy of Orthopedic Surgeons (AAOS) has published clinical practice guidelines (CPGs) for nonoperative management of these conditions [12,13]. For hip OA, the AAOS CPGs provide strong recommendations for use of physical therapy, intra-articular corticosteroids, and nonsteroidal anti-inflammatory drugs (NSAIDs). In contrast, interventions such as intra-articular hyaluronic acid and glucosamine sulfate are not recommended. Regarding knee OA, the AAOS CPGs provide strong recommendations for use of NSAIDs and physical rehabilitation. Moderate recommendations are provided for weight loss, lateral wedge insoles, and needle lavage. In contrast, interventions such as glucosamine, chondroitin sulfate, acupuncture, and viscosupplement injections are not recommended. Electrotherapeutic modalities, manual therapy, knee braces, acetaminophen, opioids, pain patches, biologic injections, and intra-articular steroid injections have inconclusive recommendations for their use.
Outside of orthopedic surgery, we are aware of only one other major medical sub-specialty that has published similar CPGs for hip and knee OA, namely the American College of Rheumatology [14]. Overall, the AAOS and American College of Rheumatology largely agree on treatment recommendations for hip and knee OA. Both recommend the use of NSAIDs, physical therapy, weight loss, and intra-articular corticosteroids. The primary points of difference lie in the strength of these recommendations and whether they apply to hip or knee OA. Still, despite the existence of AAOS CPGs, adherence in our field has been poor [15]. For example, even with a strong recommendation against hyaluronic acid injection, this therapy remains commonly used by orthopedic surgeons [12], with a mean cost of $1128 for one injection series [16]. The underlying reasons for noncompliance are unclear but are hypothesized to include either a lack of CPGs from governing medical societies or a lack of awareness of those CPGs. Apart from orthopedics and rheumatology, patients with OA are often seen by a variety of other medical specialties, most commonly family medicine, geriatrics, internal medicine, and physical therapy. Remarkably, the American Academy of Family Physicians, the American Geriatrics Society, the American College of Physicians, and the American Physical Therapy Association do not currently have published treatment recommendations for hip and knee OA. As members of those medical societies are often the gatekeepers for patients with OA, it is essential that they are aware of the optimal treatment modalities to ensure consistent, evidence-based management. If patients visit multiple care providers from different specialties and receive inconsistent recommendations, this can foster an environment of noncompliance and lead to a negative and costly overall patient experience. Noncompliance can lead to patients seeking care with multiple care providers, virtually negating any benefit that bundled payments may provide and further increasing the costs of care.
An additional benefit of multidisciplinary standardization of nonoperative care is minimizing health-care disparities. Several studies have shown disproportionately low TJA utilization rates among racial and ethnic minorities despite similar or even higher OA burden [17-19]. Differences in preoperative OA management and access to specialized care are potential contributing factors for such disparities. Communities with higher concentrations of Black residents, for example, tend to have fewer surgeons per capita and fewer external ties for referrals to specialists [20]. Studies across several medical and surgical specialties have shown that Black patients do not receive timely referrals to specialists [21-25]. In addition, minority patients undergoing TJA have a higher overall comorbidity burden, including obesity and diabetes mellitus [26], and thus may experience delays in receiving surgery. In addition to promoting high-quality care before patients progress to the point of needing surgery, value-based care should also include appropriation for preoperative optimization and coordination of care. Otherwise, minorities and high-risk patients may be excluded from value-based payment programs, further perpetuating health disparities. Minority and medically complex patients may require more care coordination visits and thus could cost more in a nonoperative bundle. Therefore, risk-adjusted bundles are necessary to prevent such patients from being denied care for fear that they would be "bundle busters" due to higher medical complexity. Anemia, malnutrition, opioid use, and tobacco smoking are a few examples of the modifiable risk factors that can increase the rate of complications after surgery [27,28]. Management of modifiable risk factors is a time-consuming process which requires multidisciplinary care, and bundling costs could help streamline this process to decrease the rate of surgical site infections, length of stay, readmission rates, and overall costs of care. Improving patients' overall health may also promote increased hospital participation in current TJA bundled payment programs.
Payment reforms have been proven to be effective at reducing costs of surgical care without compromising outcomes. Our next challenge as a community is to take these principles and apply them to nonoperative management of common chronic conditions, such as OA. Increasing value of care is a worthwhile goal, but it cannot be accomplished until our evidence-based CPGs are familiar to and followed by all health-care providers who would be providing nonoperative management for our patients. This has important implications beyond value, and it extends to providing equitable care to all patients. Regardless of where patients enter the healthcare system, receiving consistent and evidence-based care is critical.
Engineering of TEV protease variants with redesigned substrate specificity
Due to their ability to catalytically cleave proteins and peptides, proteases present unique opportunities for the use in industrial, biotechnological, and therapeutic applications. Engineered proteases with redesigned substrate specificities have the potential to expand the scope of practical applications of this enzyme class. We here apply a combinatorial protease engineering‐based screening method that links proteolytic activity to the solubility and correct folding of a fluorescent reporter protein to redesign the substrate specificity of tobacco etch virus (TEV) protease. The target substrate EKLVFQA differs at three out of seven positions from the TEV consensus substrate sequence. Flow cytometric sorting of a semi‐rational TEV protease library, consisting of focused mutations of the substrate binding pockets as well as random mutations throughout the enzyme, led to the enrichment of a set of protease variants that recognize and cleave the novel target substrate.
INTRODUCTION
Catalyzing the hydrolysis of peptide bonds and thereby activating or inactivating proteins makes proteases ubiquitous regulators of physiological processes in all domains of life. [1] Due to this feature, proteases have applications in industry, research, and therapy. Highly substrate-specific viral proteases like tobacco etch virus (TEV) protease and human rhinovirus (HRV) 3C protease are used in a variety of research applications. [2,3] Supplementation of natural proteases is a treatment option for a number of human diseases caused by deficiencies or abnormalities in protease activity. For example, the proteases urokinase (uPA), tissue plasminogen activator (tPA), Factor IX, and Factor VIIa are approved drugs for the therapeutic modulation of blood clotting in stroke, heart attack, or hemophilia. [4] While the aforementioned examples all rely on the native function of proteases to cleave their physiological substrates, the scope of practical applications of proteases could be greatly expanded by generating enzymes with novel protein cleavage specificities. Early efforts to redesign protease specificity focused on structure-guided, rational design of the active site and surface loops of proteases. [5,6] However, this did not provide proteases with truly novel specificities that are not known among naturally occurring proteases. [7,8] As an alternative, directed evolution through screening of large libraries of protease variants has emerged to create highly active proteases that recognize and cleave truly novel substrates. [9-12] However, proteolytic degradation of a target protein of interest often requires more substantial changes of specificity. Packer et al. described the evolution of a protease that cleaves a novel substrate differing at six of seven positions from its consensus wild-type (wt) substrate. [13] We have developed a high-throughput, cell-based assay for combinatorial protease engineering that links protease activity to the solubility of a fluorescent reporter protein in the cytoplasm of Escherichia coli. [14] To evaluate the efficiency of our method, we here applied it to identify variants of TEV protease (TEVp) that recognize and cleave the novel substrate EKLVFQA, which differs at three of seven positions from the TEV consensus substrate sequence. It exhibits high sequence similarity (5 out of 7 aa) to the aggregation-inducing hydrophobic core region of the human amyloid beta (Aβ) peptide associated with the neurodegenerative disorder Alzheimer's disease. [15] Flow cytometric sorting of a semi-rational TEVp library yielded a set of protease variants with proteolytic activity on the target substrate EKLVFQA. The results thus indicate that our method can be applied to generate proteases with novel specificities that have been altered at several substrate positions in order to cleave a target protein of interest.
Construction of reporter plasmids
Genes encoding the substrate sequences were purchased from IDT (Integrated DNA Technologies Inc., Coralville, USA). These were subcloned into a previously constructed reporter plasmid derived from the pRha67K vector (Xbrane Bioscience, Stockholm, Sweden) using standard molecular cloning techniques. [14] The resulting plasmids express a reporter protein consisting of N-terminal Aβ42(Q15L/E22G), followed by a peptide linker containing the respective substrate sequence, and enhanced green fluorescent protein (EGFP) at the C-terminus.
Sequence verification was carried out by Sanger DNA sequencing (Microsynth AG, Balgach, Switzerland).
Supersteve library construction
A saturation mutagenesis library of the TEVp active site, as described in the results section, was purchased from TWIST (TWIST Biosciences, South San Francisco, USA). Regions annealing to the gene encoding TEVp were included upstream and downstream of the active site.
We first amplified the C-terminal region of the gene encoding TEVp by PCR using Q5 high-fidelity DNA polymerase (New England BioLabs) according to the manufacturer's protocol. An internal forward primer (5′CGAGTGTCCCGAAGAACTTCATGG3′) and a reverse primer (5′CACCGCGGCCGCTGA3′), annealing downstream of the TEVp gene, were used together with the previously generated pBAD TEVp-mCherry plasmid as template DNA. [14] The resulting PCR product was "sewed" to the active site library by overlap extension PCR over 10 cycles. The DNA product was purified by gel extraction using the QIAquick Gel Extraction Kit (Qiagen GmbH, Hilden, Germany). This DNA product was then used as a reverse primer in PCR, with the pBAD TEVp-mCherry plasmid as template DNA and a forward primer annealing upstream of the TEVp gene (5′GCTTGAATTCTCTAGATTAAAGAGGAGAAAGGTACCATGG3′). The resulting PCR product, constituting the active site library cloned into a wt TEV backbone, was purified by gel extraction using the QIAquick Gel Extraction Kit (Qiagen GmbH).
Finally, one round of error-prone PCR was carried out on the purified DNA using the GeneMorph II Random Mutagenesis Kit (Agilent Technologies, Santa Clara, USA) according to the manufacturer's protocol. The final PCR product was purified and subcloned into the pBAD plasmid. [16] The resulting Supersteve library plasmid was used to electroporate electrocompetent E. coli One Shot™ TOP10 cells (Thermo Fisher Scientific). Twenty electroporation reactions were carried out in total and the cells were transferred to S.O.C. medium (Thermo Fisher Scientific) and incubated for 1 h at 37 °C and 150 rpm. This was inoculated into 500 mL LB medium (Becton, Dickinson and Company [BD], Franklin Lakes, USA) containing chloramphenicol and grown at 37 °C and 150 rpm overnight. The number of transformants, determined via dilution spreading on agar plates containing chloramphenicol, was calculated to be 9 × 10⁸. The Supersteve library plasmid DNA was isolated from the culture using a QIAprep Spin Miniprep Kit (Qiagen GmbH). Sequence verification was carried out by Sanger DNA sequencing (Microsynth).

Chemically competent One Shot™ TOP10 (Thermo Fisher Scientific) cells were transformed by heat shock with the reporter plasmid harboring the EKLVFQA substrate sequence and spread on agar plates containing kanamycin. A sequence-verified colony was used to generate electrocompetent cells. [17] The recovered Supersteve library was used to electroporate the prepared electrocompetent E. coli cells containing the EKLVFQA reporter plasmid. Twenty-one electroporation reactions were carried out in total and the cells were transferred to S.O.C. medium (Thermo Fisher Scientific) and incubated for 90 min at 37 °C and 150 rpm. This was inoculated into 500 mL LB medium containing chloramphenicol and grown at 37 °C and 150 rpm overnight. The number of transformants, determined via dilution spreading on agar plates containing kanamycin and chloramphenicol, was calculated to be 1 × 10⁹. Aliquots were stored with 25% glycerol at −80 °C.
Cell sorting of Supersteve library
The transformed library was cultured overnight in LB medium supplemented with kanamycin and chloramphenicol at 37 °C, shaking at 250 rpm. Three rounds of sorting were performed with progressively tighter stringency. The final sort yielded 2000 cells. A sample of the cells was spread on kanamycin/chloramphenicol agar plates, with the rest inoculated into LB medium containing antibiotics. Forty-eight single colonies were picked from the agar plates and the sequences of the reporter plasmids and protease plasmids were analyzed by Sanger DNA sequencing (Microsynth). Individual colonies of the final sort were analyzed by flow cytometry. For each clone, a control culture was prepared in which only the reporter plasmid was induced (i.e., no induction with L-arabinose). The whole-cell EGFP signal was analyzed using a Gallios flow cytometer (Beckman Coulter).
In vitro protease characterization
Selected Supersteve variants and wt TEVp were amplified from the pBAD plasmid via PCR. The genes were subcloned as N-terminal fusions to a hexahistidine tag into a pET-26b(+) vector (Merck KGaA) lacking the pelB signal sequence. The proteins were expressed as previously described and purified using HisPur™ Cobalt Resin (Thermo Fisher Scientific) according to the manufacturer's protocol. [18] An expression vector containing a reporter fusion protein, consisting of N-terminal maltose binding protein (MBP) fused via a peptide linker containing EKLVFQA to the Z domain (Z) of staphylococcal protein A, with an albumin binding domain (ABD) at the C-terminus (MBP-EKLVFQA-Z-ABD), was purchased from GenScript (GenScript, Piscataway, USA). The reporter protein was expressed and purified as previously described. [19] All purified proteins were >95% pure as determined by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) with Coomassie staining.

In vitro activity assays were carried out in a buffer containing 25 mM NaPi, 125 mM NaCl, and 7.5 mM DTT, pH 7.4. 5 µM MBP-EKLVFQA-Z-ABD was incubated with 0.1 µM of the respective purified protease at 37 °C for 300 min. Samples of the reaction were taken after 0, 30, 60, 90, 120, and 300 min, immediately quenched with Laemmli buffer containing β-mercaptoethanol, incubated at 95 °C for 5 min, and stored at −20 °C until analysis. Cleavage of the reporter protein was monitored by SDS-PAGE. A different soluble reporter fusion protein, Z-ENLYFQG-Z-ABD, was used to evaluate the proteolytic activities of evolved protease C03 and wt TEVp on the consensus TEV substrate ENLYFQG. The cleavage reaction was performed as described above over a 24 h time period and monitored by SDS-PAGE.
Choice of target substrate and protease
We have previously reported a screening method that links protease activity to the solubility and correct folding of a fluorescent reporter protein. [14] The reporter protein consists of an N-terminal, highly aggregation-prone mutant of the Aβ42 peptide, followed by a protease substrate sequence, and EGFP at the C-terminus. Upon expression in E. coli, misfolding of the reporter protein yields a low whole-cell EGFP fluorescence signal detected by flow cytometry. Co-expression of a protease with activity on the substrate sequence leads to proteolytic separation of the Aβ42 peptide from EGFP, proper folding of EGFP, and restored whole-cell fluorescence (Figure 1A). This method has been used to enrich protease variants with improved proteolytic activity on a native substrate sequence from a combinatorial protein library. [14] Here, we apply the method to redesign the specificity of a protease with the aim of cleaving a novel substrate sequence. We targeted TEVp as its biochemistry and structure are extensively studied.
Previous directed evolution campaigns have demonstrated that it is possible to alter the substrate specificity of TEVp. [9,12,13] We first determined the proteolytic activity of wt TEVp on substrate peptide sequences containing relevant substitutions. To this end, we generated five different reporter protein constructs, harboring the consensus TEV site (ENLYFQG) or the following divergences: EKLYFQA, EKLVFQA, QKLVFQA, and Aβ15-21 (Figure 1B). Plasmids encoding the five different reporter proteins were separately co-transformed into E. coli with a plasmid encoding wt TEVp. Proteolytic activity was determined through co-expression of the reporter proteins and wt TEVp in the cytoplasm of E. coli, followed by flow cytometric analysis of the whole-cell EGFP signals of 200,000 cells. Figure 1C depicts the proteolytic activity on the different substrates, normalized to the activity on the consensus substrate. Introduction of the P3 (Y → V) substitution drastically decreased proteolytic cleavage. This mutation has previously been reported as the most important TEVp specificity determinant, together with glutamine (Q) at position P1. [20] To challenge our platform, we chose EKLVFQA as the target substrate. This substrate exhibits high sequence similarity to the aggregation-inducing hydrophobic core region of the Aβ peptide QKLVFFA. [15]
TEV protease library design
To generate TEVp variants with modified specificity, we designed a library of the substrate binding pockets of TEVp. From the crystal structure of TEVp in complex with a peptide substrate (PDB entry 1LVM) we identified TEVp residues T146, D148, N171, N174, and N176 as forming interactions with the specificity-determining substrate residues P6, P3, and P1. TEVp residue N177 also emerged as a mutational hotspot in previous TEVp engineering campaigns. [12,13] These six residues were subjected to full amino acid randomization, excluding cysteine, via trinucleotide gene synthesis. Based on results from a previously published study, we included an S170A substitution in the TEVp sequence (Figure 1D). [13] We subcloned the library and performed error-prone PCR to generate a semi-rational TEVp library denoted Supersteve (Substrate specificity by rational study of the TEV enzyme). The Supersteve library was subcloned into an intracellular expression plasmid as an N-terminal fusion to the fluorescent protein mCherry, enabling cell-to-cell normalization of the protease activity (EGFP signal) to the protease variant's expression level (mCherry signal) in dual-color flow cytometry (Figure 1A).
Selection of TEVp variants with proteolytic activity on the substrate EKLVFQA
E. coli cells containing the triple-substitution EKLVFQA reporter plasmid were co-transformed with the Supersteve library, resulting in approximately 1.0 × 10⁹ colony-forming units. Three consecutive rounds of flow cytometric sorting were carried out to collect cells exhibiting a high whole-cell EGFP signal normalized to the mCherry signal, sampling the library with 10-fold coverage (Figure 2A). From the final sort, 46 clones were analyzed by DNA sequencing, revealing that 42 of 46 TEVp sequences in the sort output were unique. Among the isolated clones, one protease variant was identified three times, and two variants were identified twice. In position 176, Thr was overrepresented, while position 177 was enriched for aromatic amino acids and Met. We also observed enrichment of mutations at positions 135, 192, and 193 outside the active site, originating from the random mutagenesis by error-prone PCR.
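For intuition on what 10-fold coverage buys during sorting, a Poisson sampling argument gives the probability that any given library member is examined at least once. This back-of-the-envelope sketch is our own illustration, not a calculation from the study.

```python
import math

def p_sampled(coverage):
    """Probability that a given variant is sampled at least once when the
    library is oversampled `coverage`-fold (Poisson approximation)."""
    return 1 - math.exp(-coverage)

print(p_sampled(10))  # ~0.99995: 10-fold oversampling leaves few variants unseen
```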
We evaluated the proteolytic activity of three evolved TEVp variants, denoted A01, C03, and E05, as well as wt TEVp, on the EKLVFQA reporter protein using the intracellular assay. Mean EGFP fluorescence intensities of cells co-expressing the protease variants and the reporter protein were compared to those of cells expressing only the reporter protein (Figure 2B). We observed a 3.87-, 4.51-, and 3.43-fold increase in EGFP fluorescence signal upon expression of A01, C03, and E05, respectively.
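A minimal sketch of how such fold-increases can be computed from flow cytometry events is shown below. The lognormal event distributions are synthetic stand-ins, with parameters chosen only so that the ratio lands near the reported magnitudes.

```python
import numpy as np

def fold_increase(sample_egfp, control_egfp):
    """Fold change in mean whole-cell EGFP versus the reporter-only control."""
    return np.mean(sample_egfp) / np.mean(control_egfp)

rng = np.random.default_rng(1)
control = rng.lognormal(4.0, 0.3, 200_000)  # synthetic reporter-only population
variant = rng.lognormal(5.5, 0.3, 200_000)  # synthetic co-expression population
print(round(fold_increase(variant, control), 2))  # ~4.5-fold
```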
In vitro characterization of evolved TEV protease variants
We generated a soluble reporter fusion protein to assess the proteolytic activities of the evolved protease variants and wt TEVp on the novel substrate in vitro (Figure 3A). It consisted of MBP, [21] the substrate sequence EKLVFQA, the Z domain of staphylococcal protein A, [22] and an ABD. [23] The three selected protease variants, wt TEVp, and MBP-EKLVFQA-Z-ABD were expressed and purified. The soluble protease variants and MBP-EKLVFQA-Z-ABD were mixed in a 1:50 molar ratio, incubated at 37 °C, and the cleavage reaction was analyzed over a 5-h time period (Figure 3B-E). We observed only minimal cleavage of the EKLVFQA substrate by wt TEVp (Figure 3B). All three evolved TEVp variants exhibited significant cleavage already after 30 min, with almost complete substrate turnover for variants C03 and A01 after 5 h (Figure 3C-E). Variant C03 and wt TEVp were also tested against the wt TEV substrate, where the expected cleavage by wt TEVp was observed, as well as efficient cleavage by C03 (Figure S1).
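Where a quantitative comparison of variants is wanted, densitometry of such an SDS-PAGE time course can be reduced to a pseudo-first-order depletion constant. The intact-substrate fractions below are invented for illustration and do not reproduce the gels in Figure 3.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 90, 120, 300], dtype=float)   # min, sampling times of the assay
frac = np.array([1.00, 0.60, 0.38, 0.25, 0.16, 0.02])  # hypothetical intact fractions

decay = lambda t, k: np.exp(-k * t)                    # pseudo-first-order depletion
(k,), _ = curve_fit(decay, t, frac, p0=[0.01])
print(k, np.log(2) / k)                                # rate constant (1/min) and half-life
```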
DISCUSSION
Our group has previously described a method for combinatorial engineering of protease activity and shown that it is efficient for improving the catalytic activity of a protease. [14] Here, we have evaluated the method for engineering the substrate specificity of TEVp to recognize and cleave the novel substrate EKLVFQA, which differs from the TEV consensus cleavage site at three positions. Substitutions at positions P5 and P1′ reduced the activity of the wt enzyme; however, the addition of the third substitution at P3 extinguished activity (Figure 1). The loss of activity caused by the P3 alteration suggests that this substrate position plays a critical role for wt TEVp.
A semi-rational TEVp library was designed with the expression of protease variants correlated to the fluorescent output of mCherry.
Activity of the variants against the substrate was determined by the release, and subsequent folding and fluorescence, of EGFP from an aggregating Aβ subunit (Figure 1A). The library was sorted for activity on the target substrate EKLVFQA, and an overall increase in EGFP signal was observed, indicating enrichment of EKLVFQA-cleaving variants (Figure 2). We identified a broad panel of new mutants with relatively high sequence diversity in the sort output of the third round, which could assist the design of a maturation library for screening protease activity on future substrates with those modifications, including the Aβ substrate QKLVFFA. A subset of enriched protease variants from the sort output was investigated for proteolytic activity on EKLVFQA using the intracellular method. The investigated variants exhibited proteolytic cleavage restoring EGFP signals, whereas wt TEVp did not (Figure 2B). Proteolytic activity of the enriched TEV variants on the EKLVFQA substrate was confirmed in an in vitro assay (Figure 3). Variant C03 was also tested against the TEV wt substrate, showing retained activity (Figure S1). Adding negative selections could help control for substrate selectivity. One way is to introduce counter-selections against, for example, the wt substrate, including plasmid preparation, transformation into cells containing the wt substrate, and sorting for low GFP fluorescence. Another alternative could be a triple-fluorescence system, with the negative substrate fused to a third fluorescent protein for simultaneous positive and negative selection in the flow cytometer. The ranking of cleavage efficiency found with the intracellular method was corroborated by that of the in vitro assay, indicating that the intracellular method is a robust and reliable initial screen of proteolytic activity. In follow-up studies, we aim to design a second-generation library based on the reported sequences and screen for activity on Aβ. Such a design may prove to be of therapeutic interest, as a successful protease toward unaggregated Aβ has the potential to mitigate disease progression.
troporate electrocompetent E. coli One ShotTM TOP10 cells (Thermo Fisher Scientific).Twenty electroporation reactions were carried out in total and the cells were transferred to S.O.C. medium (Thermo Fisher Scientific) and incubated for 1 h at 37 • C and 150 rpm.This was inoculated into 500 mL LB medium (Becton, Dickinson and Company [BD], Franklin Lakes, USA) containing chloramphenicol and grown at 37 • C and 150 rpm overnight.Number of transformants was determined via dilution spreading on agar plates containing chloramphenicol.The number of transformants was calculated to be 9 × 10 8 .The Supersteve library plasmid DNA was isolated from the culture using a QIAprep Spin Miniprep Kit (Qiagen GmbH).Sequence verification was carried out by Sanger DNA sequencing (Microsynth).
prepared electrocompetent E. coli cells containing the EKLVFQA reporter plasmid.Twenty-one electroporation reactions were carried out in total and the cells were transferred to S.O.C. medium (Thermo Fisher Scientific) and incubated for 90 min at 37 • C and 150 rpm.This was inoculated into 500 mL LB medium (Becton, Dickinson and Company [BD], Franklin Lakes, USA) containing chloramphenicol and grown at 37 • C and 150 rpm overnight.Number of transformants was determined via dilution spreading on agar plates containing kanamycin and chloramphenicol.The number of transformants was calculated to be 1 × 10 9 .Aliquots were stored with 25% glycerol at −80 • C.
37• C shaking at 250 rpm.Three rounds of sorting were performed with progressively tighter stringency.The final sort yielded 2000 cells.A sample of the cells was spread on kanamycin/chloramphenicol agar plates with the rest inoculated into LB medium containing antibiotics.Forty-eight single colonies were picked from the agar plates and the sequences of the reporter plasmids and protease plasmids were analyzed by Sanger DNA sequencing (Microsynth).Individual colonies of the final sort were analyzed by flow cytometry.For each clone, a control culture was prepared in which only the reporter plasmid was induced (i.e., no induction with L-arabinose).The whole-cell EGFP signal was analyzed using a Gallios flow cytometer (Beckman Coulter).
dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) with Coomassie staining.In vitro activity assays were carried out in a buffer containing 25 mM NaPi, 125 mM NaCl, and 7.5 mM DTT pH 7.4.5 µM MBP-EKLVFQA-Z-ABD was incubated with 0.1 µM of the respective purified protease at 37 • C for 300 min.Samples of the reaction were taken after 0, 30, 60, 90, 120, and 300 min.The samples were immediately quenched with Laemmli buffer containing β-mercaptoethanol, incubated at 95 • C for 5 min, and stored at −20 • C until analysis.Cleavage of the reporter protein was monitored by SDS-PAGE analysis.A different soluble reporter fusion protein, Z-ENLYFQG-ZABD, was used to evaluate the proteolytic activities of evolved protease C03 and wt TEVp on the consensus TEV substrate ENLYFQG.The cleavage reaction was performed as described above over a 24 h time period and monitored by SDS-PAGE analysis.
We first evaluated wt TEVp activity on substrate peptide sequences containing relevant substitutions. To this end, we generated five different reporter protein constructs harboring the consensus TEV site (ENLYFQG) or the following divergences: EKLYFQA, EKLVFQA, QKLVFQA, and Aβ 15-21 (Figure 1B). Plasmids encoding the five different reporter proteins were separately cotransformed into E. coli with a plasmid encoding wt TEVp. Proteolytic activity was determined through co-expression of the reporter proteins and wt TEVp in the cytoplasm of E. coli, followed by flow cytometric analysis of the whole-cell EGFP signals of 200,000 cells.
Figure 1C depicts the proteolytic activity on the different substrates, normalized to the activity on the consensus substrate.Introduction of the P3 (Y → V) substitution drastically decreased proteolytic cleavage.This mutation has previously been reported as being the most important TEVp specificity determinant, together with glutamine (Q)
FIGURE 2 Flow-cytometric sorting of the Supersteve library toward activity on EKLVFQA and intracellular proteolytic activity of evolved TEVp variants. (A) Representative dot plots during the three rounds of flow-cytometric sorting. The sorting gates applied are indicated. (B) Histograms of cells co-expressing Aβ42(Q15L/E22G)-EKLVFQA-EGFP and the respective protease variant are depicted in green. The red histograms represent a negative control of cells expressing Aβ42(Q15L/E22G)-EKLVFQA-EGFP only. Aβ, amyloid beta; EGFP, enhanced green fluorescent protein; TEVp, tobacco etch virus protease.
FIGURE 3 In vitro soluble protein cleavage assay. (A) Schematic representation of the MBP-EKLVFQA-ZABD reporter protein and its cleavage products. (B-E) MBP-EKLVFQA-ZABD and the respective protease were mixed in a 1:50 molar ratio and incubated at 37 °C. Samples were taken at the indicated time points and analyzed through SDS-PAGE. MBP, maltose binding protein; SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis.
Financial sector readiness to support economic activities under COVID-19: The case of African continent
Organizational readiness for change is considered a critical precursor to the successful implementation of complex changes. Indeed, some suggest that failure to establish sufficient readiness accounts for one-half of all unsuccessful, large-scale organizational change efforts. The African economy, and the world at large, is on the brink of economic depression as a result of the devastating effects of COVID-19. It is therefore important to determine the readiness of countries to absorb the economic shock of the pandemic. Using the model of Battese and Coelli (1995), a translog production frontier was adopted to estimate the technical efficiency of the continent's financial sectors. The 24 African countries were selected based on the availability of data covering our variables of interest for the period 2000 to 2018. The findings show that the continent's financial sectors performed above average (72%) over the study period and are therefore only moderately ready to support the ailing economy. Lower-middle-income countries, in particular, are likely to face greater difficulties with the pandemic. Moreover, the probability of the continent not plunging into economic depression with the support of the financial sector is 0.42, which is not encouraging. It is recommended that policies addressing interest margin, liquidity and market concentration be managed properly to improve the technical efficiency of the financial sectors on the continent.
INTRODUCTION
Readiness is the state of preparedness of organizations to meet a situation and carry out a planned sequence of actions. Organizational readiness for change is considered a critical precursor to the successful implementation of complex changes. Indeed, some suggest that failure to establish sufficient readiness accounts for one-half of all unsuccessful, large-scale organizational change efforts. Drawing on Lewin's three-stage model of change, change management experts have prescribed various strategies to create readiness by 'unfreezing' existing mindsets and creating motivation for change. These strategies include highlighting the discrepancy between current and desired performance levels, fomenting dissatisfaction with the status quo, creating an appealing vision of a future state of affairs, and fostering confidence that this future state can be achieved.
The future economic environment is very difficult to predict; hence, it is incumbent on every economy to utilize its resources efficiently. For this reason, countries and firms must always strive to make the best of the available resources so as to be able to weather future storms. A country or firm that has performed well in the past stands a better chance of supporting its economy when the need arises. During an economic depression, countries that have been doing well over the years are in a better position to provide stimulus packages to support livelihoods and to bring the economy back on track in time. For this reason, the readiness of an economy to fight economic depression depends on the efficiency of the economy over the years. Efficient economies are able to save enough to cover unforeseen contingencies.
In the 1980s, most emerging economies, through directives from the International Monetary Fund, were taken through different forms of Financial Sector Adjustment Program (FINSAP). The objective was to address the institutional deficiencies of the financial system and to develop money and capital markets (Brownbridge and Gockel, 1996). According to Bikker (2010), banks contribute to investment, employment creation and the process of economic growth and development. They are the cornerstone of a nation's economy (Omankhanlen, 2012; Athanasoglou et al., 2006; Sergeant, 2001). However, the countries of the African continent are usually referred to as low-income or developing countries. Since the mid-1990s, the continent has seen steady economic growth but has struggled to sustain that development. The United Nations Economic Commission for Africa (UNECA) is predicting a fall in the growth rate from 3.2 to 1.8% for emerging economies due to the novel COVID-19 virus (IMF, 2020). The existence of a solid and efficient financial system is a crucial condition for sustainable economic growth on the African continent.
The pandemic has posed a major disruption to economic activity across the world. The World Bank estimates that economic growth in sub-Saharan Africa will decline from 2.4% in 2019 to between -2.1 and -5.1% in 2020, the first recession in the region in 25 years. Even though events in the coronavirus pandemic are still unfolding, a preliminary analysis of the impact of the coronavirus menace on the real sector for most, if not all, countries shows that the 2020 projected real GDP growth rate could decline significantly (AUC, 2020).
The effect will also depend on how countries such as India, China and the USA perform in dealing with the crisis. It is equally important, however, to determine the readiness of countries to absorb the economic shock of the pandemic.
According to the WHO, as of 4 August, the confirmed cases by WHO region were: Americas 9,741,727; Europe 3,425,017; South-East Asia 2,242,656; Eastern Mediterranean 1,574,551; Africa 825,272; and Western Pacific 332,754. This indicates that Africa has so far been relatively less hard-hit by the virus, although the virus is novel. On 17 April, the World Health Organization (WHO) warned that Africa could be the next epicentre of the coronavirus. It is worth noting that South Africa has the fifth-highest number of COVID-19 cases in the world. Bloomberg estimates that in a pandemic situation, COVID-19 could cost the world economy $2.7 trillion, equivalent to the size of the UK's economy. This implies that developing countries and emerging economies will be significantly impacted without support; the global financial crisis of 2008, after all, did not spare emerging economies either. Every single person on the continent must exercise the highest level of self-discipline, which is evidently reflected in the economic numbers: the greater the level of self-discipline and civic responsibility, the greater the chances of avoiding mass job losses and their concomitant hardships.
According to the African Center for Economic Transformation (ACET), the continent needs growth with diversification, export competitiveness, productivity increases, technological upgrading and improved human well-being (DEPTH). Achieving DEPTH requires the mobilization of savings and the channeling of financial resources to loan seekers (Gulde et al., 2006). In this era of COVID-19, the efficiency of the financial sector becomes even more important to the economic development of the continent. The financial sector is crucial to the economies of various countries, and banks remain the core of the sector, especially in emerging economies where the capital market is not strong enough (Matthew and Laryea, 2012). Delis and Papanikolaou (2009) indicated that an efficient financial sector will be better able to withstand negative shocks and contribute to the stability of the financial system. Thus, it is crucial to analyze the efficiency performance of banks and the factors behind it.
This article seeks to determine the readiness of the financial sector on the African continent to withstand the effects of COVID-19, and in particular how market concentration, market liquidity and net interest margin account for the efficiency of the sector. The main objective of our study is to identify the determinants of technical efficiency, because what happens in the financial sector has an impact on the productive capacity of a nation's economy. Reliefs and grants from the World Bank, the IMF, governments and other organizations and associations to members and various economic sectors will be implemented through the financial sectors of the various countries; hence, the sector needs to be relatively efficient in its activities with its stakeholders to ensure value for money.
Economic impact of COVID-19
The COVID-19 pandemic thus poses important risks, not only for people's health but also for their economic well-being. Already disadvantaged groups will suffer disproportionately from the adverse effects. Low-income earners, especially informal workers, who earn a living on a day-to-day basis and have limited or no access to healthcare or social safety nets, are severely hit.
Informal sector businesses are like any other businesses. With COVID-19 bringing transportation and market demand to a halt, informal sector businesses such as drinking and chop bars, small retail shops, hairdressers and taxi drivers will see a reduction in customers because of the pandemic, and they form the majority of businesses in Africa.
Africa's GDP growth in 2019 was 3.6%. This is not sufficient to accelerate economic and social progress and reduce poverty. Since 2000, Africa's GDP growth has largely been driven by domestic demand (69% of the total) rather than increases in productivity. Africa's labour productivity as a percentage of the US level stagnated between 2000 and 2018, and the Africa-to-Asia labour productivity ratio has decreased from 67% in 2000 to 50% today (AUC, 2020; OECD, 2020). Global markets account for 88% of Africa's exports, mostly in oil, mineral resources and agricultural commodities.
At the onset of the crisis, prospects differed across economies. Some were displaying high growth rates, in excess of 7.5% (Rwanda, Côte d'Ivoire and Ethiopia), but Africa's largest economies had slowed down. In Nigeria (GDP growth of 2.3%), the non-oil sector has been sluggish; in Angola (-0.3%) the oil sector remained weak; while in South Africa (0.9%) low investment sentiment weighed on economic activity.
In response to the pandemic, most countries on the continent have come up with relief packages to support businesses and individuals. Côte d'Ivoire, Ethiopia, Nigeria, Senegal and Uganda are spending CFAF 96 billion, USD 1.64 billion, N 500 billion, CFAF 1,000 billion and USD 7 million, respectively, to support their economies. In the case of South Africa, the government is assisting companies and workers facing distress through the Unemployment Insurance Fund (UIF) and special programmes from the Industrial Development Corporation. A new 6-month COVID-19 grant has also been created to cover unemployed workers who do not receive grants or UIF benefits, and the number of food parcels for distribution was increased. Funds are available to assist SMEs under stress.
Liquidity and technical efficiency
Liquidity plays a central role in a firm's successful functioning as a profitable entity, and it is of major importance to both shareholders and potential investors. The goal of liquidity management should be to enable a firm to maximize the profits of its operations while meeting both short-term debt and upcoming operational expenses, that is, to preserve liquidity (Panigrahi, 2014). Excessive investments in liquidity may lead managers to make investments toward maximizing their own utility, to the detriment of profitability (Fama and Jensen, 1983). Ensuring adequate liquidity is essential in banking operations because of the financial intermediation role of banks. This intermediation role exposes banks to an inherent liquidity risk, which can have dire consequences for their earnings and solvency (Berger and Bouwman, 2009). The ability to meet obligations as and when they fall due is an important determinant of bank efficiency. The literature on banking failures has provided sufficient evidence that undue liquidity shortage has a predominant impact on the efficiency and solvency of banks. This determinant, called liquidity, is measured by the ratio of loans to assets: the lower the ratio, the higher the bank's liquidity, and vice versa. The article expects liquidity to have a positive impact on the efficiency of banks (Nassreddine et al., 2013). Tochkov and Nenovsky (2009) also revealed that liquidity has a positive effect on financial sector efficiency.
Market concentration and technical efficiency
Market concentration is an important indicator of banking sector stability in financial terms. Shim (2019) examined the effect of loan portfolio diversification and market concentration on banks' financial stability, finding that market concentration is negatively associated with the financial strength of banks; additionally, banking firms operating with more diversified loan portfolios in highly concentrated markets are more stable than those operating in less concentrated markets. Other studies have also contributed to the literature on market concentration and financial stability in the banking sector (Ghosh et al., 2018; Syadullah, 2018; Cheng et al., 2018; Omodero and Ogbonnaya, 2018; Hallunovi and Berdo, 2018; Elkhayat and ElBannan, 2018). The article proxies market concentration with the Herfindahl-Hirschman Index (HHI), which is preferable to the market shares of the largest three (CR3) or five (CR5) banks because it reflects the degree of market share inequality across the different banks in the banking industry (Schumann, 2011). The article expects that highly concentrated markets lower the cost of collusion and foster tacit and/or explicit collusion on the part of banks; hence, all banks in the market earn monopoly rents. The variable ranges from zero to one, where a large number of banks, each with a small share, produces an HHI close to zero, while a single monopolist bank with a 100% share produces an HHI of one.
Interest margin and technical efficiency
Seelanatha (2012) evaluated the drivers of technical efficiency with different approaches and concluded that gross interest margin has a positive impact on technical efficiency. However, Řepková (2015), evaluating the determinants of banking efficiency in the Czech banking sector over the period 2001 to 2012, found that the interest rate has a negative impact on technical efficiency. Shanmugam and Das (2006) also revealed that the technical efficiency of raising interest margin varied widely across sample banks and is time-invariant; even though several reform measures have been introduced since 1992, they have not so far helped the banks in raising their interest margin.
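As a quick illustration of the HHI bounds described under market concentration above, the following is a minimal sketch (not from the article), computed on the zero-to-one scale used here:

```python
# Minimal sketch (not part of the article): the Herfindahl-Hirschman Index
# (HHI) as the sum of squared bank market shares, on the 0-1 scale.
def hhi(shares):
    """HHI = sum of squared market shares; shares are assumed to sum to 1."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to one"
    return sum(s ** 2 for s in shares)

# Many small banks -> HHI near zero; a single monopolist -> HHI of one.
print(hhi([0.01] * 100))  # 0.01 (100 equal-sized banks)
print(hhi([1.0]))         # 1.0  (monopoly)
```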
METHODOLOGY
Our sample covers a balanced panel dataset of 486 observations over the period 2000 to 2018 with 24 groups, the groups being the financial sectors of 24 African countries. The selection of these countries was based on the availability of data. The data on the financial sectors of these countries were collected from the Global Findex Database 2018, and the unit of analysis is the financial sector.
The parametric methods used in the literature for estimating bank efficiency are the Stochastic Frontier Approach (SFA), the Distribution Free Approach (DFA) and the Thick Frontier Approach (TFA). The non-parametric methods that have been employed are Data Envelopment Analysis (DEA) and Free Disposable Hull (FDH). The SFA and DEA are the most commonly used parametric and non-parametric methods, respectively. The SFA is a stochastic technique that integrates random errors but requires the predefinition of the functional form (Eisenbeis et al., 1999). This article uses stochastic frontier efficiency estimates, since they produce more precise estimates, and adopts the standard translog functional model for multiple products in estimating efficiency. This is specified as follows:

$$\ln y_{it} = \beta_0 + \sum_{k} \beta_k \ln x_{kit} + \frac{1}{2}\sum_{k}\sum_{j} \beta_{kj} \ln x_{kit} \ln x_{jit} + v_{it} - u_{it} \quad (1)$$

$$u_{it} = \{\exp[-\eta(t - T)]\}\,u_i, \qquad t = 1, 2, \ldots, 18 \quad (2)$$

where $y_{it}$ is the dependent variable of a country in the t-th time period; $x_{kit}$ are the independent variables of the t-th time period; $v_{it}$ are random variables which are assumed to be iid $N(0, \sigma_v^2)$ and independent of the $u_{it}$; the $u_{it}$ are non-negative random variables which are assumed to account for time-varying technical inefficiency in production and are assumed to be iid truncations at zero of the $N(\mu, \sigma_u^2)$ distribution; $v_{it} - u_{it}$ is the composed error (disturbance) term; the $\beta_k$ are unknown parameters to be estimated, k = 0, 1, ..., 23; and $\eta$ is an unknown scalar parameter to be estimated.
The mean of the truncated-normal technical inefficiency effect, E[u_it], is the "mean inefficiency" at any time t. The article fits the SFA using the maximum likelihood estimation technique.
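To make the estimation step concrete, the following is a minimal illustrative sketch of maximum likelihood estimation for a simplified cross-sectional normal-half-normal frontier on synthetic data. It is not the article's estimation code, and the article's actual model additionally carries the translog terms and the time-decay inefficiency of Equation (2):

```python
# Illustrative sketch (not the article's code): ML estimation of a simplified
# cross-sectional normal-half-normal stochastic production frontier,
# ln y = b0 + b1*ln x + v - u, with v ~ N(0, sv^2) and u ~ |N(0, su^2)|.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
lnx = rng.normal(1.0, 0.5, n)
v = rng.normal(0.0, 0.2, n)
u = np.abs(rng.normal(0.0, 0.3, n))          # one-sided inefficiency term
lny = 0.5 + 0.8 * lnx + v - u                # synthetic "true" frontier

def neg_loglik(theta):
    b0, b1, ln_sv, ln_su = theta
    sv, su = np.exp(ln_sv), np.exp(ln_su)    # enforce positive std. deviations
    sigma = np.hypot(sv, su)                 # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = lny - b0 - b1 * lnx                # composed error v - u
    # Aigner-Lovell-Schmidt (1977) log-likelihood for a production frontier
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 1.0, -1.0, -1.0], method="Nelder-Mead")
b0, b1, ln_sv, ln_su = res.x
sv, su = np.exp(ln_sv), np.exp(ln_su)

# Jondrow et al. (1982) point estimate of u_i, then TE_i ~ exp(-E[u_i|eps_i])
eps = lny - b0 - b1 * lnx
sigma2 = sv**2 + su**2
mu_star = -eps * su**2 / sigma2
s_star = sv * su / np.sqrt(sigma2)
E_u = mu_star + s_star * norm.pdf(mu_star / s_star) / norm.cdf(mu_star / s_star)
print(f"mean technical efficiency: {np.exp(-E_u).mean():.3f}")
```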
To test for the presence of technical inefficiency effects, the article employs the one-sided generalized likelihood-ratio test, expressed as LR = -2[ln L(H0) - ln L(H1)], where ln L(H0) and ln L(H1) denote the log-likelihood values of the restricted (no inefficiency) and unrestricted frontier models, respectively.
It is important to conduct a number of diagnostic checks to see whether the stochastic frontier model is a relevant model. To do so, the article estimates the share of the total variance attributable to inefficiency, γ = σu²/(σu² + σv²), or uses the likelihood-ratio statistic recommended by Kumbhakar et al. (2015). When the statistic of Kumbhakar, Wang and Horncastle is computed, the article compares it with the critical values reported in Kodde and Palm (1986).
The efficiency score lies between zero (0) and one (1), with scores closer to one indicating high technical efficiency and scores closer to zero implying low technical efficiency. The article uses the efficiency scores to estimate the probability of economic depression.
Data presentation and analysis
To analyze the readiness of African countries to provide relief support and manage the economy, our sample covers a balanced panel dataset of 486 observations over the period 2000 to 2018 with 24 groups. The countries selected can be categorized into low-income countries (12), lower-middle-income countries (8) and upper-middle-income countries (4). The database was extracted from the Global Findex Database 2018. Adopting Sealey and Lindley (1977), the article considers the financial sector as an intermediary that uses inputs such as deposits and overhead costs to generate assets. Through the efficiency scores, the article identifies the differences between countries as well as economic blocks over the period 2000 to 2018. Table 1 shows all the variables predicted to influence efficiency in the model presented. For each variable, the mean, standard deviation and min-max range are given. The standard deviations are given in three categories: overall, between and within. The between variation refers to variation across countries, while the within variation refers to variation of the same variable over time within a country. The mean of the interest margin (IM) variable is 7.085, the minimum is 1.162 and the maximum is 28.982. The overall variation is 3.472, with a between variation of 2.77 and a within variation of 2.15, which implies that between-country variation dominates the variation in interest margin. For the variables concentration and liquidity, between variation likewise dominates the variation in the data, as shown in Table 1. Figure 1 shows the trends of net interest margin, bank concentration, liquid liability and technical efficiency over the period 2000 to 2018. On average, net interest margin and bank concentration are both on a downward trend, which is good for the continent. Also worth noting are the trends for liquid liability and technical efficiency, which on average have been positive, indicating that the financial sectors on the continent are in a position to pay their liabilities as and when they fall due and are efficiently managing resources to generate returns. However, Figure 1D shows that most of the selected countries have their technical efficiency dipping from around 2016 onwards, even though the average for the period is on an upward trend.
The trends of net interest margin, liquid liability and concentration converge over time, as shown in Figure 1E. This is indicative of the similarities among the sampled countries and also of the readiness of the sector to support economic growth on the continent.
RESULTS AND DISCUSSION
The version of the stochastic model used in this article is the true fixed-effects model. The Wald χ² of the whole model was 21.3, with a probability > χ² of 0.000 and a log-likelihood of -298.3405, indicating that the model is well fitted.
In the frontier regression model, all the interaction variables and the squared variables were dropped because of multicollinearity problems, with the exception of the products of technology and the other variables. The variables credit (C), deposit (D) and overhead cost were all significant at various significance levels in explaining the stochastic production frontier used to determine the efficiency scores. The trend variable (T), which proxies for technology, was positive and significant at 1%, as shown in Table 2. The signs of the other variables conform to theory.
The second part of Table 2 is the regression of technical inefficiency on concentration, liquidity and interest margin; the variables for this second regression start with the prefix "mu", and its dependent variable is the technical inefficiency term derived from the first regression. Interest margin is significant at 10% with a coefficient of -0.019. The negative sign means that interest margin has a negative effect on technical inefficiency; put differently, as interest margin improves or increases, the technical efficiency of the financial sector on the African continent improves.
Market concentration is also significant, at 5%, and its coefficient of 0.001 implies that an improvement or increase in market concentration affects the technical efficiency of the financial sector on the continent. This also conforms to theory.
To validate whether the stochastic model is relevant, the article conducts a number of diagnostic checks. From the regression output, the article computes the ratio of the variance of technical inefficiency to the total variance of the error term; this statistic measures the share of the variation in output accounted for by technical inefficiency and ranges between zero and one. If the statistic is close to one (1), much of the variation in output is accounted for by technical inefficiency, implying the stochastic model is the appropriate one. From the regression output for Table 2, the estimated standard deviation of Usigma is 0.2518312, with a variance of 0.063419, and that of Vsigma is 0.1247323, with a variance of 0.015558, giving a total variance of the model of 0.078977.
The proportion of the technical inefficiency variance to the total variance is therefore 0.803 (= 0.063419/0.078977). Since this statistic is close to one (1), it indicates that the stochastic model is relevant; in other words, technical inefficiency accounts for about 80% of the variation in the model.
The article also computes the likelihood-ratio statistic recommended by Kumbhakar et al. (2015), which in this case is 1370.376. To validate whether the stochastic frontier model is relevant, the article computes the statistic recommended by Kumbhakar, Wang and Horncastle and compares it with the critical values reported in Kodde and Palm (1986). From the unrestricted and restricted regressions, the maximized log-likelihoods are -290.4436 and -505.95362, respectively, giving the statistic LR = -2[(-505.95362) - (-290.4436)] = 431.02. Comparing the statistic 431.02 with the critical values of the mixed chi-square distribution, it is realized that at one degree of freedom and the 2.5% significance level the statistic is greater than the critical value of 3.841; hence, the article can reject the null hypothesis that the stochastic frontier model is not appropriate. In other words, the article strongly rejects the null hypothesis of no technical inefficiency.
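Both diagnostics are simple arithmetic on the reported estimates; the short sketch below reproduces them from the numbers in the text:

```python
# Reproducing the reported diagnostics from the estimates quoted in the text.
u_sigma, v_sigma = 0.2518312, 0.1247323       # reported sigma_u, sigma_v
var_u, var_v = u_sigma**2, v_sigma**2         # 0.063419, 0.015558
gamma = var_u / (var_u + var_v)
print(f"inefficiency share gamma = {gamma:.3f}")   # ~0.803

ll_unrestricted, ll_restricted = -290.4436, -505.95362
lr = -2 * (ll_restricted - ll_unrestricted)
print(f"LR statistic = {lr:.2f}")                  # ~431.02
```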
Since the stochastic model is relevant, the article predicts technical efficiency from the model. The average technical efficiency for the period 2000 to 2018 is 0.718. Within variation dominates the variation in technical efficiency; the minimum and maximum within variations are 0.0781 and 0.9798, respectively, as depicted in Table 3. The average technical efficiency of 0.718 indicates that the financial sectors of the sampled economies were operating at about 72% efficiency and were thus reasonably well positioned to help their economies bounce back from the COVID-19 shock as at 2018. In terms of the readiness of the continent, this 72% efficiency translates into a readiness of 42% against the consequences of COVID-19 for economic activity. This implies that policy makers will have to work hard to improve the net interest margin, liquid assets and market concentration of the financial sectors on the continent if the sector is really to help economic activity bounce back. In other words, the continent faces a probability of only 0.42 of preventing economic depression. Figure 2 shows the trend of the predicted technical efficiency for the sampled African countries. From the trend it can be deduced that most of the sampled countries had technical efficiency below 0.5 before 2006, and after 2006 most countries had technical efficiency above 0.5. The bar graph in Figure 3 confirms that, for the period under study, all the sampled countries on the African continent have average technical efficiency greater than 0.5, indicating that the financial sectors in these countries were all technically efficient on average. Appendix B depicts the rankings of the sampled countries in terms of technical efficiency; the first six countries are Mauritius, South Africa, Botswana, Malawi, Madagascar and Kenya.
Countries on the continent that are likely to face challenges with technical efficiency in the financial sector are Zambia, Nigeria, Burkina Faso, Côte d'Ivoire, Cameroon and Gabon, with Zambia being the worst performer in terms of technical efficiency.
This article ranks the South African financial sector among the best performing on the continent. However, according to S&P Global Ratings, South African banks are unlikely to boost lending in 2020 amid economic uncertainty. This is because real GDP growth is expected to average 1.6% in 2020-2021 and credit losses are expected to increase by 1.2% in 2020; hence, banks will extend credit slowly. For this reason, the ratings of all top-tier banks in South Africa were assigned negative outlooks. This indicates that, as leaders on the continent fight COVID-19, pragmatic solutions should also be tailored to the financial sector to ensure the necessary support for the economy. The continuous efficient operation of a financial sector is necessary if an economy is to save enough for unforeseen contingencies. Botswana, on the other hand, was ranked 3rd on the continent. This is not surprising, since there has been a series of projects intended to position the country to serve the international financial sector. In relation to Botswana's readiness to absorb the shock of the pandemic, our results put the country at 54% ready.
Antwi-Asare and Addison (2000) concluded that Ghana's banks have performed poorly in the past, offering only a narrow range of financial instruments with inefficient delivery, being strongly biased in their reach toward the urban formal sector and, in particular, toward holding low-risk government paper, and being largely non-competitive in structure. These results indicate that banks in Ghana pass on their inefficiencies to their customers by raising their lending rates and lowering their deposit rates.
Technical efficiency of financial sector by income grouping
Taking into consideration the World Bank (WB) income groupings of countries, this article considers low-income, lower-middle-income and upper-middle-income countries. The mean technical efficiencies for upper-middle-income (0.7934) and low-income (0.7113) countries were as expected according to theory. However, the mean technical efficiency for lower-middle-income countries (0.6898) is not as expected, as shown in Appendix A. The implication is that upper-middle-income and low-income countries are more technically efficient than lower-middle-income countries, and are thus relatively more resilient in withstanding the economic shock resulting from COVID-19. The variation in technical efficiency for the various income groups is more prominent between countries within each group than within countries. However, Figure 4 shows that technical efficiency improved significantly for upper-income countries in 2005, and since then their financial sectors have been doing very well. This is also the case for lower-middle-income countries; the difference is that their financial sectors were really not doing well from 2000 to 2005. Afterwards there was some appreciable improvement in the sector, with the net effect that their technical efficiency performance lags behind the other groupings of low-income and upper-middle-income countries. The implication is that lower-middle-income economies are more at risk from COVID-19 than low-income and upper-middle-income countries on the African continent.
Conclusion
Organizational readiness for change is considered a critical precursor to the successful implementation of complex changes; failure to establish sufficient readiness accounts for one-half of all unsuccessful, large-scale organizational change efforts. The readiness of an economy to withstand the economic depression of COVID-19 depends on how efficiently that economy has performed over the years. An economy that has been operating on the efficient frontier is likely to be in a better position to fund relief packages for social interventions and to oil the economic engine of the country so that it can bounce back. Countries have had to shut down their economic activities in an attempt to fight this pandemic, and countries that have been efficient over the years are likely to manage the post-COVID-19 crisis better, because, having operated efficiently for some years, they are likely to have more reserves to rely on in times of need. In this light, the financial sector of Africa is about 72% ready to support the economy, given the efficiency level of the continent's financial sectors. In other words, there is a probability of 0.42 that the economy of the African continent will not plunge into economic depression. This is far below average, and by implication Africa should brace itself for a severe economic shock if proper, effective relief programs are not put in place. It is recommended that policies addressing interest margin, liquidity and market concentration be managed properly to improve the technical efficiency of the financial sectors on the continent. In terms of income groupings on the continent, the lower-middle-income countries are relatively challenged by technical efficiency in impacting positively on economic development. In light of this, it is proposed that, as the IMF and other international financial institutions plan to support economies, the African continent should be given consideration.
Ectopic expression of a Brassica rapa AINTEGUMENTA gene (BrANT-1) increases organ size and stomatal density in Arabidopsis
The AINTEGUMENTA-like (AIL) family plays a central role in regulating the growth and development of organs in many plants. However, little is known about the characteristics and functions of the AIL family in Chinese cabbage (Brassica rapa L. ssp. pekinensis). In this study, a genome-wide analysis was performed to identify the members of the AIL family in Chinese cabbage. We identified three ANT genes and six ANT-like genes of Chinese cabbage, most of which were differentially expressed in different organs or tissues. Furthermore, compared with the wild-type line, the size of different organs in the 35S-BrANT-1 line was significantly increased by promoting cell proliferation. Meanwhile, over-expression of BrANT-1 also increases the stomatal number and delays the leaf senescence. Transcriptome analyses revealed that a set of cell proliferation and stoma development genes were up-regulated, while the senescence-associated genes were down-regulated, suggesting these genes may be involved in BrANT-1 regulated processes for controlling organ size, stomatal density and leaf senescence. In summary, this study offers important insights into the characteristics and functions of the ANT genes in Chinese cabbage, and provides a promising strategy to improve yield or head size in Chinese cabbage breeding programs.
Ectopic expression of ANT results in the production of organs with a large size 7,17,18, while loss-of-function mutations of the ANT gene produce a smaller organ size 4,5,7. Additionally, many studies have revealed that ANT regulates organ size by changing the total cell number 7. Furthermore, cells ectopically expressing ANT in fully differentiated organs exhibit neoplastic activity by producing calli and adventitious roots and shoots 7. These studies strongly suggest that ANT most likely maintains ongoing cell proliferation coordinately with cell growth 19. However, to our knowledge, the AIL family of Chinese cabbage has not been characterized. Therefore, identifying and analyzing the AIL family in Chinese cabbage is of great interest.
In this study, we investigated the BrAIL family members in Chinese cabbage through genome-wide bioinformatics analysis, including the identification and characterization of the AIL family members, gene structural analysis, phylogeny and motif analysis. The expression patterns of these genes were characterized in detail in response to the naphthaleneacetic acid (NAA) treatment in different tissues by quantitative real-time PCR (qRT-PCR). Our results show BrANT-1 had the highest sequence similarity to AtANT (AT4G37750), and hence was chosen for further functional analysis. Over-expression of BrANT-1 enhanced organ size by increasing cell number and stomatal density and delaying leaf senescence. Furthermore, by analyzing the potential pathways where BrANT-1 participates, it is possible to understand how the gene regulates Chinese cabbage yield and/or head size.
Results
Identification of AIL family members in Chinese cabbage. A total of nine BrAIL family members were identified in the Chinese cabbage genome according to the taxonomy of the AP2 superfamily (Supplementary Tables S1 and S2). These included three ANT genes and six AIL genes (Table 1). The sequences of each member were downloaded from the Brassica database (http://brassicadb.org/brad/) 20, and the two AP2 domains were confirmed according to the SMART database (http://smart.embl-heidelberg.de/). All nine BrAIL family members were named after their A. thaliana orthologs based on the sequence similarity of the protein sequences; the detailed sequence similarities between Arabidopsis thaliana and Brassica rapa are shown in Supplementary Table S3. When more than one Chinese cabbage gene mapped to one homologous gene in A. thaliana, an additional number was added to the end of the gene name 21. For example, the three ANT genes Bra017852, Bra011782 and Bra010610 were homologs of AtANT (AT4G37750) and were accordingly named BrANT-1, BrANT-2 and BrANT-3, respectively. The BrAIL family members were distributed across different chromosomes of B. rapa (chromosomes 01, 02, 03, 06, 08 and 10): chromosomes 02 and 03 contained three and two BrAIL genes, respectively, while chromosomes 01, 06, 08 and 10 each contained only one BrAIL gene. Subsequent sequence analysis showed that the coding sequences of these nine BrAIL genes ranged from 1323 to 1725 bp in length, encoding peptides of 440 to 574 aa. The encoded proteins had predicted molecular weights varying from 49.6 to 64.1 kDa and theoretical isoelectric points (pI) varying from 6.09 to 7.81.
Phylogenetic relationships and gene structure of the AIL proteins in Arabidopsis, rice and Chinese cabbage. In order to understand the classification of the BrAIL genes in Chinese cabbage, the AIL genes from two other model plants were selected for comparative analyses: a model C3 monocotyledonous plant (rice) and a model eudicot plant (Arabidopsis). A phylogenetic tree was constructed based on the full-length protein sequences of Chinese cabbage, rice and Arabidopsis using the bootstrap neighbor-joining method. The AIL genes of the other two species were obtained from previous reports on Arabidopsis 22 and rice 23. According to the phylogenetic analysis, the 21 members were divided into seven groups (Class A-Class G) (Fig. 1A). Low bootstrap values were obtained because the differences among the AP2 domain sequences of these three species were small, indicating that the BrAILs have high similarity to the AtAILs; in contrast, the BrAIL proteins were only remotely related to the OsAIL proteins. To further explore the evolutionary relationships of the coding sequences, structural analyses of the introns/exons in the three species were performed with the online tool GSDS (http://gsds.cbi.pku.edu.cn/). It was found that most of the AIL family members in the three species had at least three introns, except OsAP2/EREBP#058, while the number of introns in the BrAIL genes ranged from six to eight (Fig. 1B); most of the genes (7 out of 9) had six introns, except BrAIL1 and BrAIL5, which contained seven and eight introns, respectively.
Syntenic analysis of the AIL family members in Chinese cabbage. To understand the evolutionary mechanism of the BrAIL genes in Chinese cabbage, the syntenic genes between A. thaliana and B. rapa were analyzed with the BRAD program (http://brassicadb.org/brad/searchSynteny.php) 24. The results showed that the nine BrAIL genes derived from five blocks of four translocation Proto-Calepineae Karyotype (tPCK) chromosomes of the ancestor. All genes were uniformly distributed over the three subgenomes, namely less fractionized (LF), more fractionized 1 (MF1) and more fractionized 2 (MF2). Additionally, two sets of triplicated genes were found in the Chinese cabbage genome: BrANT-1, BrANT-2 and BrANT-3 in the U block, and BrAIL6-1, BrAIL6-2 and BrAIL6-3 in the R block (Table 2). The other genes (BrAIL1, BrAIL5 and BrAIL7) were singletons. Interestingly, no tandemly duplicated genes were found in the BrAIL family. Details of the two AP2 domains of the BrAIL proteins are provided in Supplementary Table S4; additionally, the length of the linker region was conserved (30 aa). The motifs of the BrAIL members were analyzed using the online tool MEME (meme.nbcr.net/meme/intro.html), and ten motifs were identified: motifs 1, 2, 3, 4 and 10 were present in all nine BrAIL proteins, motifs 5 and 6 in eight, motif 8 in seven, and motifs 7 and 9 in three BrAIL proteins (Supplementary Fig. S2).
Expression patterns of the BrAIL family members in various organs.
Accumulating evidence has shown that the AIL genes are expressed in multiple tissues and are involved in regulating the developmental processes of different tissues, such as the root 25, shoot 7, floral organ 26-28, leaf 29 and seed 18. To explore the potential roles of the BrAIL genes in regulating the growth and development of Chinese cabbage, the expression patterns of the BrAIL genes were investigated in the root (R), dwarf stem (DS), old leaf (OL), young leaf (YL) and blooming flower (BFL) of Chinese cabbage by qRT-PCR. Eight genes were detected in the different tissues, the exception being BrAIL1, which was undetectable in all tissues (Fig. 2). Four BrAIL family members, including BrANT-2, BrANT-3, BrAIL6-1 and BrAIL6-2, showed higher expression levels in R than in other tissues; three BrAIL genes (BrANT-1, BrAIL6-3 and BrAIL7) were mainly expressed in DS; and BrAIL5 was mainly expressed in YL. Moreover, none of the genes were detected in old leaves.
Expression profiles of the BrAIL family members under NAA treatments. Previous studies have
shown that auxin plays an important role in plant growth and developmental processes 30. In addition, the auxin signal may regulate organ growth by modulating ANT expression 10. To understand the expression responses of the BrAIL members to auxin, we assessed the transcript levels of the BrAIL genes upon NAA treatment by qRT-PCR. As shown in Fig. 3, the transcript levels of most genes were induced by the NAA treatment, except BrAIL1. The expression level of BrAIL5 was only up-regulated at 3 h. Additionally, six genes were up-regulated at both 1 h and 3 h compared with the untreated control: BrANT-1, BrANT-3, BrAIL6-1, BrAIL6-2, BrAIL6-3 and BrAIL7. The expression level of BrANT-2, however, decreased at 1 h and then increased at 3 h.
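For reference, relative transcript levels from qRT-PCR experiments of this kind are commonly computed with the 2^(-ΔΔCt) method; the sketch below assumes that method and uses hypothetical Ct values, since the exact quantification procedure is a detail of the article's methods:

```python
# Minimal sketch (hypothetical Ct values): relative expression by the
# 2^(-ddCt) method, comparing an NAA-treated sample with an untreated
# control, each normalized to a reference (housekeeping) gene.
def relative_expression(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    d_ct_trt = ct_target_trt - ct_ref_trt   # normalize treated sample
    d_ct_ctl = ct_target_ctl - ct_ref_ctl   # normalize control sample
    dd_ct = d_ct_trt - d_ct_ctl
    return 2 ** (-dd_ct)                    # fold change relative to control

# e.g., a BrAIL gene induced ~4-fold after 3 h of NAA (illustrative numbers)
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```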
Over-expression of the BrANT-1 gene caused a variety of phenotypic changes in Arabidopsis.
Compared with the other BrAIL family members, BrANT-1 had the highest sequence similarity to Arabidopsis thaliana ANT (AT4G37750), while differing from the other AtAIL genes (Supplementary Table S5). Moreover, BrANT-1 was expressed in different tissues, suggesting that it might play an important role in the growth and development of Chinese cabbage. To explore the potential function of BrANT-1, Arabidopsis plants were transformed with BrANT-1 under the control of the CaMV 35S promoter. A series of pleiotropic effects significantly distinguishable from the wild type (WT) appeared at 7 days after sowing, with the 35S-BrANT-1 transgenic seedlings exhibiting longer roots (Fig. 4A). Additionally, the hypocotyl length of the transgenic seedlings showed a >24% increase compared with that of the WT (Fig. 4C). At 15 days after acclimating to the nutrient soil conditions, the seedlings of the transgenic plants were bigger than those of the WT (Fig. 4E). At 40 days after sowing, the leaf dimensions of the transgenic plants were significantly increased compared with those of the WT (Fig. 5). Rachis length, flower dimensions, seed size and siliques were also enlarged (Fig. 6), while the seed number per silique of the 35S-BrANT-1 transgenic line was slightly higher than that of the WT (Fig. 6D). Furthermore, SEM was used to compare the epidermal cells on the adaxial side of mature leaves of 35S-BrANT-1 and WT plants. The results showed that the number of adaxial epidermal cells in the 35S-BrANT-1 transgenic line increased by >50%, while the cell size was modestly decreased (<6%) compared with the WT (Fig. 5E). The slightly smaller cell size cannot account for the larger leaf size; therefore, we conclude that the enlarged leaf area was caused by the increase in cell number (Fig. 5B). All phenotypic analysis data are shown in Supplementary Table S6.
Additionally, at 40 days after sowing, the transgenic lines had a delayed leaf senescence phenotype, compared with the WT, in that the fifth rosette leaf (numbered from the bottom) became senescent, while the second rosette leaf was still green (Fig. 5A).
In addition, other interesting phenotypes were observed. For example, the stomatal density of the 35S-BrANT-1 line was significantly increased in the mature leaves compared with that of the WT (Fig. 5B, detailed in Supplementary Fig. S3). Consistently, the stomatal conductance (Gs) and transpiration rate (Tr) of the 40-day 35S-BrANT-1 plants were significantly increased compared with those of the WT, while the intracellular CO2 concentration (Ci) was low. Differentially expressed genes (DEGs) between the 35S-BrANT-1 and WT lines were identified by transcriptome analysis (Supplementary Table S7). Additionally, ten DEGs were randomly selected for further verification by qRT-PCR; all exhibited the same expression tendencies as in the original data (Fig. 8). According to the Arabidopsis genome annotation, the DEGs could be assigned to different families, such as MADS-box, TCP, extensin, expansin, early auxin-responsive genes (small auxin-up RNA, SAUR), VQ, STOMAGEN, SAGs and transcription factors. The DEGs related to plant growth and development were selected and characterized (Table 3). Further analysis indicated that many genes up-regulated in 35S-BrANT-1 were mainly involved in cell proliferation (MADS-box protein, AT1G59920; TCP21, AT5G08330) and stoma development (STOMAGEN, AT4G12970). On the other hand, genes inhibiting plant growth (SAUR33, AT3G61900; VQ22, AT3G22160; VQ3, AT1G21326; ANAC036, AT2G17040) and promoting leaf senescence (AtNAC2, AT5G39610; SAUR36, AT2G45210; SAG13, AT2G29350) were down-regulated in the 35S-BrANT-1 line.
Discussion
AIL gene duplication in Chinese cabbage. The genome of the mesopolyploid crop species Brassica rapa has undergone whole genome triplication (WGT) since its divergence from Arabidopsis thaliana, resulting in a significant increase in the number of duplicated genes 31. However, only nine AIL family members were identified in Chinese cabbage, including three BrANTs and six BrAILs, which is only 1.5-fold more than in Arabidopsis, suggesting that many genes were lost during genome duplication 32. Similar results have been found for other gene families, such as the BrGRF and BrVQ genes, which are about 1.89- and 1.9-fold more numerous than in Arabidopsis, respectively 13,33. Additionally, the expansion of the BrAIL family members mainly depended on segmental duplication, because no tandemly duplicated pairs were found. Many studies on genome duplications have shown that genes involved in transcription, protein binding, response to biotic stimuli and signal transduction pathways are preferentially retained through segmental duplication 34-36. Duplication events within a genome can result in paralogs, and these genes may show different expression patterns following duplication, indicative of subfunctionalization 37. For example, the two duplicated genes BrVQ22-1 and BrVQ22-2 are differentially expressed in different tissues 33. In this study, the triplicated genes BrAIL6-1/-2/-3 and BrANT-1/2/3 also exhibited different expression patterns in different organs and in response to the auxin treatment. Similar cases have also been reported in the AP2/ERF family in Chinese cabbage 38,39.
BrAIL family members were expressed in various tissues. By quantitative real-time PCR, we have shown that most of the BrAIL family members were differentially expressed in both vegetative and reproductive tissues (Fig. 2). Similar to the AtAIL genes, the BrAIL genes showed higher expression in young tissues (seedlings and roots) and lower or absent expression in mature leaves 40. The expression of multiple BrAIL genes in different tissues suggests that the BrAIL genes play different roles in different organs, which has been confirmed in other plants. For example, ANT and AIL6 are involved in the regulation of flower or seed development in Arabidopsis and Medicago truncatula 9,17,18, and the VviANT1 gene plays an important role in the regulation of berry size 28. Additionally, we observed that the BrAIL genes were lowly expressed in blooming flowers. A previous study also found that, in grapevine, the AIL genes are mainly expressed at the advanced stage B (B2), in flowers from inflorescences at stage G (G), and at the early stage H (H1), while their expression is low in blooming flowers 28. Besides, AtANT, AtAIL5, AtAIL6 and AtAIL7 exhibit the same expression tendency during flower development 40.
Transgenic lines over-expressing BrANT-1 had enlarged organs by increasing the cell number.
Generally, the AIL genes play an important role in regulating organ growth through increasing cell number 7 or cell size 29 . In the present study, the cell number in the leaf was increased in the 35S-BrANT-1 transgenic lines, implying that BrANT-1 positively regulates cell proliferation. This result was consistent with the function of the AtANT gene in the leaf or floral organ 7 . However, it is different from the roles of PnANTL1 and PnANTL2 genes, which increase leaf length through increasing cell size in tobacco 29 .
Recently, Liu et al. 41 found that the OsMADS1 gene can positively regulate cell proliferation in rice. Therefore, the most up-regulated MADS-box gene (AT1G59920) among the DEGs may play a critical role in increasing the cell number in the 35S-BrANT-1 line. Additionally, previous studies have shown that the TCP genes can be divided into two classes (I and II): class I genes such as TCP20 function as positive regulators of cell growth 42, while class II genes such as the Antirrhinum gene CINCINNATA (CIN) function as negative regulators of cell growth 43. Here, AtTCP21 (AT5G08330), a member of the class I genes 44, was found to be up-regulated, suggesting that it may partially promote cell proliferation in the 35S-BrANT-1 line. Apart from the up-regulated genes, some down-regulated genes, such as SAUR36 (AT2G45210) 45, VQ22 (AT3G22160) 46 and ANAC036 (AT2G17040) 47, might also play an important role in increasing organ size in the 35S-BrANT-1 line, as previous studies have shown that some SAUR, VQ and NAC genes are involved in the negative control of plant growth 46-50. Interestingly, some genes positively controlling cell size, such as LRX5 (AT4G18670) 51 and expansinB1 (AT2G20750) 52, were also identified among the up-regulated DEGs, consistent with our histological results, which showed a slight reduction in cell size in the 35S-BrANT-1 plants compared with the WT. This result is consistent with our previous studies on the BrARGOS gene, which regulates the ANT gene 10 and may also promote the transcription of AtEXP10 in transgenic Arabidopsis 53. This is probably because, when meristematic competence is disrupted locally, cell division gradually ceases and differentiation begins with the expansion of the postmitotic cells 19. In addition, cell proliferation is coupled with a limited amount of cell expansion during the proliferation phase 54. Therefore, we speculate that the extensin and expansin proteins promote cell expansion following the significant increase in cell proliferation, and that the modest reduction in cell size may compensate for the increase in cell number 19.
BrANT-1 might regulate leaf senescence. Leaf senescence constitutes the final phase of leaf development and is a highly complex but genetically programmed process involving the expression of many senescence-associated genes (SAGs) 55,56. Our study shows that, compared with the wild type, leaf senescence in the 35S-BrANT-1 transgenic line was delayed, consistent with observations in Arabidopsis over-expressing the AtANT gene 57. Additionally, a number of SAGs, such as AtNAC2 (AT5G39610) 58, AtSAUR36 (AT2G45210) 45 and AtSAG13 (AT2G29350) 59, were prominently down-regulated in the 35S-BrANT-1 line. AtSAUR36 (or SAG201), a member of the early auxin-responsive gene family, is remarkably up-regulated during leaf senescence; in Arabidopsis, a saur36 knockout line shows a delayed leaf senescence phenotype, whereas transgenic plants overexpressing SAUR36 display the opposite phenotype 45. In addition, SAG13, an early senescence marker, is strongly induced before visible yellowing 60. Accordingly, we speculate that BrANT-1, similar to the AtANT gene, is one of the negative factors that prevent premature senescence.
BrANT-1 positively increased the stomatal number in Arabidopsis.
Stomata are an important part of the epidermal tissue of leaves in plants; they control gas exchange through paired guard cells and participate in the global carbon cycle 61. Stomatal development in the leaf is positively regulated by the signaling factor STOMAGEN (At4g12970) through interaction with the cell-surface receptor TOO MANY MOUTHS (TMM) 62. In this study, it was found that the number of stomata significantly increased in the 35S-BrANT-1 line compared with the WT, and the expression level of STOMAGEN (At4g12970) was significantly up-regulated in the transgenic line. Overexpression of STOMAGEN has been reported to produce many clustered stomata in mature leaves of Arabidopsis 62; however, regulation of stomatal development by a homolog of AtANT has not been reported. Additionally, previous studies have shown that the stomatal density in Arabidopsis influences the leaf photosynthetic capacity by regulating gas diffusion 63. In this study, we assessed the leaf photosynthetic capacity of the BrANT-1-overexpressing transgenic and wild-type lines. The results indicated that, with the increase in stomatal number, the stomatal conductance (Gs) and transpiration rate (Tr) of mature leaves were significantly increased in the 35S-BrANT-1 transgenic plants. However, the net photosynthetic rate (Pn) was only slightly increased in the 35S-BrANT-1 plants, probably due to the low intracellular CO2 concentration (Ci); severe stomatal patchiness results in the underestimation of Pn due to the lack of a uniform Ci in the leaf 64. Taken together, these results indicate that BrANT-1 might be involved in the regulation of stomatal development by enhancing the expression level of STOMAGEN (At4g12970).
In summary, we identified three BrANT and six BrAIL proteins in Chinese cabbage, which can be classified into four subgroups. Phylogenetic analysis showed that the AIL genes of Chinese cabbage have high sequence similarity with those of Arabidopsis. Furthermore, multiple sequence alignment suggests that the nine genes belong to the euANT subgroup, according to the conserved motifs in Arabidopsis. Finally, ectopic expression of BrANT-1 in Arabidopsis controlled organ size by regulating cell number, and BrANT-1 regulated stomatal density and leaf senescence by increasing the expression of STOMAGEN and reducing the expression of SAGs. Taken together, these results not only enhance our understanding of the role of the AIL genes in controlling organ size and the growth of other tissues, but also provide a promising tactic for Chinese cabbage molecular breeding programs.
Methods
Identification and analysis of the AIL genes in Chinese cabbage. The gene and amino acid sequences of the AIL family members were retrieved from the genome of the B. rapa line Chiifu (http://brassicadb.org). Sequence differences among the AIL genes were analyzed using DNAMAN 6.0.40 (Lynnon Biosoft, USA). The Gene Structure Display Server (GSDS) (http://gsds.cbi.pku.edu.cn/) was used to perform the intron/exon structure analysis. Phylogenetic trees were constructed from the amino acid sequences using the neighbor-joining method in MEGA 5.0 65 . The physicochemical properties of the AIL proteins were calculated using the ProtParam tool (http://web.expasy.org/protparam/). MEME (meme.nbcr.net/meme/intro.html) was used to analyze the conserved motifs of the AIL proteins in Chinese cabbage 66 . The AP2 domain sequences of the AIL proteins from Chinese cabbage were identified with SMART (http://smart.embl-heidelberg.de/).
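For orientation, a minimal sketch of the neighbor-joining step is given below in Python with Biopython rather than MEGA 5.0 as used in the study; the input file name and the identity-based distance model are illustrative placeholders, not the settings of this work.

```python
# A minimal neighbor-joining sketch with Biopython, assuming a pre-computed
# multiple sequence alignment of the AIL proteins in FASTA format. The file
# name and the 'identity' distance model are placeholders; the study itself
# used MEGA 5.0.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("ail_proteins_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")       # pairwise distance matrix
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)            # neighbor-joining tree

Phylo.draw_ascii(tree)                            # quick text rendering
```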
Plant materials, growth conditions and plant phytohormone treatments. The wild-type and 35S-BrANT-1 transgenic Arabidopsis plants were pre-treated for 3 days at 4 °C in the dark before being transferred to pots with nutrient medium (Sheng Xiang Agricultural Science and Technology Co, China; peat: vermiculite: perlite = 1: 1: 1). The plants were cultivated in a growth room with a light period of 16 h and a dark period of 8 h at a constant temperature of 19-23 °C. The Chinese cabbage line "Guangdongzao" was used for all stress experiments, and the growth conditions were the same as those for Arabidopsis.
Chinese cabbage young seedlings at the four-leaf stage (21 days after sowing) were used for the phytohormone treatments, during which the plant leaves were treated with 100 μM naphthaleneacetic acid or distilled water (DW), respectively. Plant leaves were harvested 0, 1 and 3 h after the phytohormone treatment.

Vector construction and plant transformation. For ectopic expression of BrANT-1, the 35S promoter was used instead of the intrinsic promoter. The product of the polymerase chain reaction (PCR) was cloned into the binary vector pCAMBIA2300-35SOCS using the XbaI and SalI restriction sites. The constructed vector was used to transform Arabidopsis plants using the in planta Agrobacterium-mediated method 67 .

Sequencing and data processing. The mature leaves of the WT and 35S-BrANT-1 transgenic Arabidopsis plants (40 days after sowing) were used for RNA-seq, with three biological replicates per line. The sequencing of the six cDNA libraries was performed at the Beijing Genomics Institute (BGI, Shenzhen, China) using an Illumina HiSeq 2000 sequencing platform (Illumina Inc., San Diego, CA, USA). Clean reads were obtained from the raw reads by removing adaptor sequences and low-quality reads (reads with > 10% ambiguous "N" bases or reads in which > 50% of the bases had a quality score ≤ 5). The clean reads were mapped to the reference genome using the rapid short-read mapping program SOAPaligner/soap2 68 ; reads with more than two mismatches were discarded in the sequence alignment. Sequencing quality was controlled by quality assessment of the reads, alignment statistics, sequencing saturation analysis and randomness assessment.
Identification and functional annotation of differentially expressed genes. The reads per kilobase per million mapped reads (RPKM) method 69 was used to calculate the expression level of each unigene. The RPKM values can therefore be directly used to compare gene expression between the WT and 35S-BrANT-1 lines. The DEGs between these two lines (six cDNA libraries) were selected for further analysis according to the criteria of a false discovery rate (FDR) ≤ 0.01 and an absolute value of the log2 ratio ≥ 1.
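As an illustration of the two steps described above, the sketch below computes RPKM values and applies the FDR and log2-ratio cut-offs; the read counts, gene length and FDR value are invented for illustration and are not data from this study.

```python
# A minimal sketch of the RPKM calculation and the DEG screening criteria
# described above (FDR <= 0.01 and |log2 ratio| >= 1). All numbers are
# hypothetical and serve only to illustrate the arithmetic.
import math

def rpkm(mapped_reads: int, gene_length_bp: int, total_mapped_reads: int) -> float:
    """Reads per kilobase of transcript per million mapped reads."""
    return (mapped_reads * 1e9) / (gene_length_bp * total_mapped_reads)

def is_deg(rpkm_wt: float, rpkm_tg: float, fdr: float) -> bool:
    """Apply the screening criteria: FDR <= 0.01 and |log2(tg/wt)| >= 1."""
    return fdr <= 0.01 and abs(math.log2(rpkm_tg / rpkm_wt)) >= 1.0

# Hypothetical gene: 2 kb long, ~20 million mapped reads per library.
wt = rpkm(250, 2000, 20_000_000)    # wild type
tg = rpkm(1100, 2000, 20_000_000)   # 35S-BrANT-1 line
print(wt, tg, is_deg(wt, tg, fdr=0.002))
```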
Quantitative RT-PCR (qRT-PCR) analysis.
To confirm the quality of the sequencing data and to determine expression levels, the Chinese cabbage AIL genes were subjected to qRT-PCR analysis. qRT-PCR was performed under the following conditions: 94 °C for 2 min, followed by 45 cycles of 94 °C for 20 s and 60 °C for 34 s. The actin gene was used as a constitutive expression control. qRT-PCR was performed on an IQ5 Real-Time PCR System (BIO-RAD, Hercules, CA, USA). The specific primers used for qRT-PCR are listed in Supplementary Table S8.

Data availability. The RNA-seq raw data are deposited in the Sequence Read Archive (SRA) under the accession number "SPR136061". Phenotype datasets are available in this article and its supplementary files.
A NEW APPROACH TO WESTINGHOUSE TEMPO RATING SYSTEM WITH FUZZY LOGIC
When calculating the production cost of a product, two basic items are taken into consideration: the material cost and the labor cost. Material costs can be calculated precisely, without personal judgment, because the amount of material used for the production of the product is known. However, the calculation of labor costs is not always as simple and precise. The amount of product produced per unit time directly affects product costs; therefore, to calculate costs correctly, the number of parts produced per unit time must be determined correctly. At this point, production times per product gain importance. Using the raw values read from the chronometer is not considered sufficient when calculating the production times of parts or products. Standard production times are calculated by adding performance and tolerance values to the chronometer values. Worker performance varies from person to person, depending on the worker's ability, effort, consistency and working environment. This difference in performance is very important in manual production processes. In automatic production processes, it matters especially during the disassembly and assembly of parts on the automatic machines. Tempo rating systems have been proposed to standardize worker performance and to minimize the negative effects of performance differences between individuals. The Classical Westinghouse Method (CWM) is one of the most commonly used tempo rating systems. In the CWM, the performance of the worker is calculated using skill, effort, environmental conditions, and consistency values, which decision-makers obtain by observation. In this study, a fuzzy rule-based Westinghouse method was developed because decision-makers use linguistic expressions when evaluating worker performance. The proposed Fuzzy Westinghouse Method is applied to a part produced by a company that manufactures automatic cutting machines in order to demonstrate its validity. As a result of the study, it was determined that the proposed model produces more sensitive values.
Introduction
The main purpose of every profit-oriented business is to make a profit by producing products and/or services and offering them to the end user. In the simplest terms, profit is obtained by subtracting production costs from the sales price of the product. At a fixed profit margin, a rise in production costs results in higher prices; if the market determines the price of the product, then rising costs reduce the profit margin. In today's competitive business world, companies need to keep production costs as low as possible in order to sustain their market presence. First of all, it is very important to calculate the cost of parts correctly. For example, if the cost of a product is underestimated, this will result in a low price recommendation to the customer; in this case, profits will be lower and financial losses may even occur. Conversely, if the costs are calculated higher than they actually are, the product price will be increased and fewer customers will be reached. Unit product costs are influenced by different factors such as product quality, the labour and technology used, and product design. Accordingly, production costs can be reduced by increasing labour productivity through standardization, simplification, and the elimination of activities that do not add value. The work-study approach has been developed for this purpose.
The work study consists of two main techniques: method study and time study. The purpose of the method study is to eliminate non-value-added processes and to obtain the fastest and most accurate method of performing value-added processes. Time study is a methodology in which the duration of each activity of a subject is recorded in order to map the workflow and achieve efficiency and effectiveness by eliminating waste and simplifying work (Chebolu-Subramanian, Sule, Sharma, Mistry 2019). First, a process should be standardized with the method study; then the production times should be determined by the time study. In this study, the proposed method is developed for use in time study activities.
The main activities of the work-study method are shown in Figure 1. Firstly, the product or part on which the work-study method will be applied is determined. When determining this part, it is useful to give priority to the problematic parts that create bottlenecks in production. The second stage is a critical examination of the production stages of the product in full detail within the scope of the method-study. The most suitable method for the production of the related product is determined by developing solutions for the problems occurring during the production phase. Then the method is documented and standardized.
After the most appropriate method has been determined and non-value-added processes have been eliminated by the method study, the time study phase begins. In the time study method, the observer first measures the production times. These times are called chronometer values (CV) because they are read directly from the chronometer; performance and tolerance coefficients have not yet been added. In the next step, the performance value (P) of the observed worker is determined using a tempo rating method. The Normal Time (NT) is obtained by multiplying this performance value by the chronometer value measured by the observer (Eq. 1): NT = CV × P.
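A minimal sketch of this arithmetic is given below; the chronometer reading and performance rating are illustrative values, and the 115% tolerance factor is taken from the company policy described later in the application section.

```python
# Normal and standard time from a chronometer reading (a minimal sketch;
# the input values are illustrative only).
def normal_time(chronometer_value: float, performance: float) -> float:
    """NT = CV x P (Eq. 1)."""
    return chronometer_value * performance

def standard_time(nt: float, tolerance: float = 1.15) -> float:
    """Apply the tolerance (allowance) factor; 115% matches the company
    policy described later in the application section."""
    return nt * tolerance

nt = normal_time(25.0, 1.05)   # 25 s reading, 105% performance rating
print(nt, standard_time(nt))   # -> 26.25, 30.19 (approximately)
```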
This study proposes an improvement to the tempo rating method used to determine performance in the time study. Tempo rating is the evaluation of the working speed of the worker against the observer's concept of standard speed. This standard is the normal speed that qualified workers can achieve under normal working conditions (Bircan and İskender 2005).
The Classical Westinghouse Method (CWM) is one of the tempo rating methods. The CWM is based on four factors: skill, effort, environmental conditions, and consistency (Lehto and Landry 2012). In the CWM, numeric values are used instead of linguistic variables when determining the performance value; however, these numerical values are defined as discrete, intermittent values. For instance, according to the skill factor, a worker should have a value between +0.15 and +0.13 to be rated "Perfect", and a value between +0.11 and +0.08 to be rated "Excellent". As can be seen in Table 1, there is no linguistic equivalent for the value +0.12, which lies between "Perfect" and "Excellent" for the skill factor in the CWM. The aim is to obtain an alternative to the CWM that can produce more precise results by accounting for such intermediate values.
In the proposed method, intermediate values are taken into consideration using a fuzzy logic approach. Thus, the aim is to obtain more realistic values for each limit value between the "Excellent" and "Poor" levels of the four factors used in the CWM. This study aims to standardize the linguistic expressions of the evaluator in ambiguous situations by using the fuzzy logic technique. There are two studies in the literature that handle the CWM with a fuzzy approach. Çevikcan et al. (Çevikcan, Kılıç, Zaim 2012) proposed a fuzzy model with two factors of the CWM (skill and effort) for determining performance values. In another study, Çevikcan and Kılıç (Çevikcan and Kılıç 2016) proposed a fuzzy model with three factors (skill, effort and environmental conditions). In this study, a more holistic model (including all four factors of the CWM) with fuzzy rules is proposed to determine the performance values.
Performance Rating
While the operator is working on a task, evaluation of the speed, pace or tempo (the terms are used synonymously to refer to the speed of movement) is an important part of the time study. The operator's speed must be judged by the analyst during the working time; this is called performance rating. Several methods are used for performance rating (Lehto and Landry 2013). These are briefly described below (Barnes 1980; Suwittayaruk, Van Goubergen, 2011).
Synthetic Rating: This method attempts to provide a rating that is not influenced by human judgment by using predetermined time values to evaluate an operator's speed. The performance rating factor is found as the ratio of the predetermined time value to the actual time value of an element. The formula for computing the performance rating factor is given below (Eq. 3) (Barnes 1980; Matias 2001):

R = P / A

where R = performance rating factor, P = predetermined time for the element, and A = average actual time for the same element. The factor thus determined can be applied to the other manual elements being studied.
Objective Rating: The purpose of this method is to reduce the amount of judgement in the time study. The procedure consists of two steps (Barnes 1980; Matias 2001): a. The observed speed is rated against an objective speed standard that is the same for all jobs. In this rating, no attention is paid to the difficulty of the work and its limiting effect on the possible speed; therefore, a single tempo standard can be used instead of multiple mental concepts.
b. Not all jobs can be performed at the standard pace, because in practice they will all be more or less difficult than the job for which the standard pace was created. The factors affecting the pace of work, as indicated by experimental or practical evidence, are: the total amount of the body involved in the element, foot pedals, bimanualness, eye-hand co-ordination, handling requirements, and the weight handled or the resistance encountered.

Performance Rating: This method is also known as "Speed Rating" or "Effort Rating". In this method, the time study analyst takes into account only the success rate per unit time. The analyst compares the performance displayed with his or her own concept of normal performance for the operation under examination (Barnes 1980; Matias 2001).
Physiological Rating: Physiologists have demonstrated the validity of using the oxygen consumption rate to measure energy consumption. Subsequent studies have shown that the change in heart rate is also a reliable measure of physical activity. In this method, the changes in heart rate and oxygen consumption of a worker while performing a job determine the tempo (Barnes 1980; Matias 2001).
In this study, the performance rating approach of the CWM is discussed. The CWM was developed by the Westinghouse Company in 1927 (Bircan and İskender 2005). Full understanding and adequate training are strongly emphasized in the use of the technique to achieve consistent and accurate results. The system assigns numerical weights to skill, effort, conditions, and consistency as found during a study. Knowing to what extent these variables affect the productivity of an employee helps the analyst make a more precise overall assessment. It is important to understand the concepts of skill, effort, conditions and consistency before applying the method (Matias 2001).
Skill, defined as "the ability to follow a given method", can be further explained by associating it with the craftsmanship shown through the proper coordination of mind and hands. Effort can be described as "readiness to work"; the willingness of an employee to spend energy on doing a job effectively is a complex of human behavior that deserves close attention from industrial engineers. People work in conditions or environments that directly affect their productivity. The important point about conditions is not their absolute values, but whether the conditions are normal and the best that can be achieved for the work being done. Consistency draws attention to the extent of variation in the worker's times; rather than merely classifying it, it is better to identify and correct its cause. Consistency may carry a smaller weight compared with the shares of the other factors.
As the time study analyst observes the operator, the analyst notes the skill, effort, consistency, and conditions for each job along with their codes. The numerical degrees of the factors are then algebraically added to the nominal 100 percent to produce the final rating (Matias 2001). The linguistic variables of the rating factors and their performance values are given in Table 1.
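The additive rating step can be sketched as follows; the four sample factor values below are illustrative Table 1 entries, not ratings from the case study.

```python
# A minimal sketch of the CWM rating arithmetic: the four factor values
# read off Table 1 by the observer are added algebraically to the nominal
# 100%. The sample values below are illustrative, not from the case study.
def cwm_performance(skill: float, effort: float,
                    conditions: float, consistency: float) -> float:
    return 1.0 + skill + effort + conditions + consistency

# e.g. skill +0.08, effort +0.05, conditions 0.00, consistency +0.01
print(cwm_performance(0.08, 0.05, 0.00, 0.01))   # -> 1.14, i.e. 114%
```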
The Proposed Method
The Fuzzy Westinghouse Method (FWM) includes 4 input parameters and 1 output parameter. The input parameters of the FWM are "Skill", "Effort", "Working Conditions" and "Consistency", matching the CWM. The output parameter of the FWM is "Performance". Because consistency is also considered, all parameters of the CWM contribute to the "Performance" value, which is determined from these 4 main input parameters as shown in Figure 2. In this study, the Mamdani fuzzy inference system (Mamdani and Assilian 1975) is used and the model is created using the fuzzy logic toolbox of the Matlab software. Due to the lack of a training set and the characteristics of the CWM, the Mamdani model structure is developed on the basis of expert knowledge and training. In the proposed model, the "and"/"or" operators between fuzzy rules (Eq. 4, Eq. 5) are implemented with the "min"/"max" operators (Eq. 6, Eq. 7), respectively (Pourjavad and Mayorga 2019).
There are many defuzzification methods, such as center of gravity, bisector of area, smallest of maximum, mean of maximum and largest of maximum. In the model, "min" is specified for implication, "max" for aggregation, and "largest of maximum (LOM)" for defuzzification. LOM, which selects the value with maximum membership in the overall implied fuzzy set, is preferred in this study; with this defuzzification method, the crisp value is calculated as in Eq. 8 (Namazov and Bastürk 2010). The structure of the proposed model is shown in Figure 3.
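For readers without Matlab, an analogous Mamdani sketch in Python with the scikit-fuzzy package is shown below. The universes, membership functions and rules are placeholders rather than the FWM's expert rule base, and only two of the four inputs are wired up to keep the sketch short.

```python
# An analogous Mamdani system in Python with scikit-fuzzy (the study used
# Matlab's fuzzy logic toolbox). Universes, membership functions and rules
# below are placeholders, not the FWM's expert rule base; only two of the
# four inputs are included for brevity.
import numpy as np
from skfuzzy import control as ctrl

skill = ctrl.Antecedent(np.arange(-0.22, 0.16, 0.01), "skill")
effort = ctrl.Antecedent(np.arange(-0.17, 0.14, 0.01), "effort")
performance = ctrl.Consequent(np.arange(0.6, 1.41, 0.01), "performance")
performance.defuzzify_method = "lom"   # largest of maximum, as in the FWM

skill.automf(3, names=["poor", "average", "good"])
effort.automf(3, names=["poor", "average", "good"])
performance.automf(3, names=["low", "normal", "high"])

rules = [
    ctrl.Rule(skill["good"] & effort["good"], performance["high"]),   # "and" -> min
    ctrl.Rule(skill["poor"] | effort["poor"], performance["low"]),    # "or" -> max
    ctrl.Rule(skill["average"] & effort["average"], performance["normal"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["skill"] = 0.08
sim.input["effort"] = 0.05
sim.compute()
print(sim.output["performance"])   # crisp performance value via LOM
```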
Application of the Proposed Method
The proposed FWM is applied in a machine manufacturing company. The work-study process is carried out on an important part of the machine. This part, which is a connection element of the machine, is called the sheet bar; the part and its technical drawing are presented in Figure 7. Ethics committee approval was not required for the study, and research and publication ethics were followed. The part is assembled into the product after 5 operations, starting from its arrival at the plant as raw material. These operations are cutting, universal milling, CNC processing, universal milling, and metal blackening, respectively. The number of observations should be determined before starting the observation process. The number of observations is calculated in the work-study field with Eq. 9 (5% tolerance and 95% confidence interval) (Kanawaty 1992).
where x_i : the value of the i-th trial observation of the related operation; n : the number of trial observations; N : the number of observations required to estimate the time within the desired sensitivity and confidence interval. With these symbols, Eq. 9 takes the standard work-study form N = (40·√(n·Σx_i² − (Σx_i)²) / Σx_i)². Trial observations are performed to determine the number of observations required. In the literature, no fixed amount is prescribed for the number of trial observations; in this study, 5 trial observations were made. The values of the 5 trial observations are given in Table 3. Table 3 also identifies the number of required observations for each process; these numbers vary between 5 and 13. Accordingly, 15 observations were taken for each process in this study. In the next step, the performance values of the workers in each process were determined through observation. The performance values for the FWM were determined with the Matlab software (Table 4). 15 observations were made for each process and the standard times were calculated. According to company policy, the tolerance value was set at 115% for each process. The number of observations, the numerical values of each factor affecting performance, the performance value, the normal time and the standard time for the CWM and the FWM for each process are shown in Table 5-Table 9. In Table 5, the values of the 15 observations for the cutting process are given. For example, while the first observation is calculated as approximately 27 seconds according to the CWM, this value is calculated as 26.3 seconds according to the FWM. The difference between the averages of the 15 observations is calculated as 0.705 seconds; this difference is very small since cutting is a short process. In Table 9, the values of the 15 observations for the metal blackening process are given. For example, while the first observation is calculated as 132.5 seconds according to the CWM, this value is calculated as 131.2 seconds according to the FWM. The difference between the averages of the 15 observations is calculated as 1.5 seconds.
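Returning to the sample-size step (Eq. 9), a minimal sketch of the calculation is given below; the five trial readings are invented for illustration.

```python
# A minimal sketch of Eq. 9 (ILO/Kanawaty form, 95% confidence, +/-5%
# precision). The five trial readings are invented for illustration.
import math

def required_observations(trial_times: list[float]) -> float:
    """N = (40 * sqrt(n * sum(x^2) - (sum(x))^2) / sum(x))^2"""
    n = len(trial_times)
    s = sum(trial_times)
    s2 = sum(x * x for x in trial_times)
    return (40.0 * math.sqrt(n * s2 - s * s) / s) ** 2

print(required_observations([24.0, 27.5, 25.8, 28.9, 26.1]))  # -> ~6 observations
```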
Conclusion
The main cost items of a company are material costs and labor costs. The daily production amount affects the labor cost of a part. Proper evaluation of the performance of the worker involved in a process is beneficial in many respects. The first is the correct calculation of labor costs, daily production quantities and deadlines, and the fair management of wage policies. As a result of performance evaluations, low-performing workers can be trained to increase their performance levels. Calculating the operator's performance with minimum error is an advantage for companies because it enables correct salary management (salary policy, overtime work) and correct calculation of production volume (supply-demand balance). However, tempo rating methods may not be sufficient to reflect the actual performance of the workers. Especially in methods based on the observation of the decision-maker, finding the numerical equivalent of linguistic expressions and adding them to the calculations can be challenging. Thanks to the fuzzy logic approach, errors that may arise from the individual opinion of the observer can be minimized.
In this study, the performance rating approach of the CWM is handled with a fuzzy logic approach. The proposed FWM is applied in a company that produces automatic cutting machines in order to demonstrate its validity. Obtaining more sensitive values with the proposed method leads to more accurate calculations in the studies where these values are used. For example, when Table 7 is examined, the difference between the average of the 15 observations according to the CWM and the average according to the FWM is approximately 15 seconds; with the FWM approach, a standard time about 15 seconds lower is obtained. Although this seems a small difference for 1 product, its effect grows as the number of products increases. In addition, differences arising from the other operations increase the amount of error in the total standard time. In the study, performance values are calculated separately for each process. However, the tolerance value is pre-determined as 115% and is used in the calculation of standard times; this approach was adopted to delimit the scope of the study. In reality, tolerance values vary according to the working position of the operators, the weight of the work piece, environmental conditions, and the mental and physical fatigue the work creates. In cases where the tolerance value is calculated separately, sounder results can be obtained thanks to the fuzzy logic approach. In a future study, a fuzzy-based approach may be proposed for calculating tolerance values; in this way, process-based tolerance values can be determined.
Discovery of the cyclotide caripe 11 as a ligand of the cholecystokinin-2 receptor
The cholecystokinin-2 receptor (CCK2R) is a G protein-coupled receptor (GPCR) that is expressed in peripheral tissues and the central nervous system and constitutes a promising target for drug development in several diseases, such as gastrointestinal cancer. The search for ligands of this receptor over the past years has mainly resulted in the discovery of a set of distinct synthetic small-molecule chemicals. Here, we carried out a pharmacological screening of cyclotide-containing plant extracts using HEK293 cells transiently expressing the mouse CCK2R, with inositol phosphate (IP1) production as a readout. Our data demonstrate that cyclotide-enriched plant extracts from Oldenlandia affinis, Viola tricolor and Carapichea ipecacuanha activate the CCK2R as measured by the production of IP1. These findings prompted the isolation of a representative cyclotide, namely caripe 11, from C. ipecacuanha for detailed pharmacological analysis. Caripe 11 is a partial agonist of the CCK2R (Emax = 71%) with a moderate potency of 8.5 µM, in comparison to the endogenous full agonist cholecystokinin-8 (CCK-8; EC50 = 11.5 nM). The partial agonism of caripe 11 is further characterized by an increase in basal activity (at low concentrations) and a dextral shift of the potency of CCK-8 (at higher concentrations) following its co-incubation with the cyclotide. Therefore, cyclotides such as caripe 11 may be explored in the future for the design and development of cyclotide-based ligands or imaging probes targeting the CCK2R and related peptide GPCRs.
Cyclotide molecules can bind to and activate G protein-coupled receptors (GPCRs) 17 , inhibit serine proteases 18 , inactivate vascular endothelial growth factor 19 , and stimulate angiogenesis 20 . Therefore, cyclotides represent an ideal scaffold for drug discovery with potential for development as imaging probes or drug lead candidates 21 . In recent years, numerous studies have unveiled the potential of nature-derived peptides as an extensive source for GPCR ligand design 22,23 . These peptide GPCR ligands have been isolated from various organisms including plants 22 . Accordingly, over 50 peptides targeting GPCRs have been approved as drugs 24 . Cyclotides are a rich source for GPCR ligand discovery, and they have previously been demonstrated to modulate the oxytocin/vasopressin V 1a receptors 25 , the corticotropin-releasing factor type 1 receptor 6 , and the κ-opioid receptor 26 . Here, we explored yet another GPCR target of cyclotides, namely the cholecystokinin-2 receptor (CCK 2 R). This receptor belongs to an important neuroendocrine system comprising the peptide hormones cholecystokinin (CCK) and gastrin, which mediate their physiological actions through two closely related receptors, i.e. the cholecystokinin-1 receptor (CCK 1 R) and the CCK 2 R (also referred to as CCK A R and CCK B R) 27,28 . Both receptors are known to be involved in various physiological processes, including the regulation of food intake, increasing pancreatic enzyme secretion and delaying gastric emptying 28 . Importantly, the CCK 2 R has been suggested to participate in tumor development and progression 27,28 . It is often overexpressed in cancer tissue, in particular in gastrointestinal stromal tumors, medullary thyroid cancers, small cell lung carcinomas and insulinomas 29 . Therefore, it is not surprising that the CCK 2 R has been explored as a possible drug target for cancer treatment, since reducing the intrinsic activity or blocking of the receptor has yielded promising results in human studies 30 . In this study, we (i) screened three cyclotide-enriched plant extracts of Oldenlandia affinis, Viola tricolor, and Carapichea ipecacuanha for modulation of CCK 2 R signaling, (ii) isolated a particular cyclotide (caripe 11) from C. ipecacuanha 6 and (iii) characterized its pharmacodynamic properties in HEK293 cells overexpressing the CCK 2 R.
Materials and methods
Plant material. Plant specimens of C. ipecacuanha (Brot.) L.Andersson and V. tricolor L. were purchased as powdered material from Alfred Galke GmbH (Germany; catalogue no. 66804 and 13804, respectively). O. affinis DC. was grown from seeds (derived from glasshouse-grown plants) obtained as a gift from D. Craik (Australia) 25 .
Peptide extraction and purification. Dried and powdered C. ipecacuanha, O. affinis, and V. tricolor were extracted using dichloromethane:methanol (1:1, v/v) at a ratio of 1:10 (w/v) under permanent stirring at 25 °C for 18-24 h. After filtration, 0.5 volume of ddH 2 O was added to the extract and the aqueous methanol phase was separated by liquid-liquid extraction in a separation funnel. The lyophilized extract was dissolved in ddH 2 O, and peptides were pre-purified in batch using C 18 silica resin (40-60 µm, ZEOprep 60, ZEOCHEM, Switzerland) activated and equilibrated with methanol and solvent A (0.1% trifluoroacetic acid, TFA, in ddH 2 O, v/v), respectively. The resin was washed with 30% solvent B (90% acetonitrile, ACN, 9.9% ddH 2 O and 0.1% TFA, v/v/v) and eluted with 80% solvent B. The eluate was lyophilized and stored at -20 °C until further use. The native cyclotide (caripe 11) was purified by reversed-phase high performance liquid chromatography (RP-HPLC) as previously described 6,25,26 . The purity was assessed on an analytical RP-HPLC column (250 × 4.6 mm, 5 µm, 100 Å; Kromasil) at a flow rate of 1 mL/min with a gradient of 5-65% solvent B, and by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) as previously described 6,25,26 .

Peptide quantification. Peptide quantification was carried out by measuring the absorbance at 280 nm using a nanodrop instrument and the Beer-Lambert equation. The molar extinction coefficient (Ɛ) for each peptide was determined according to the equation Ɛ 280 = nC × 120 + nW × 5690 + nY × 1280 [M −1 cm −1 ], where nC, nW and nY are the numbers of cysteine, tryptophan and tyrosine residues, respectively.
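The quantification arithmetic can be sketched as follows; the peptide sequence and absorbance reading below are arbitrary placeholders, not the caripe 11 sequence or data from this study.

```python
# A minimal sketch combining the extinction-coefficient formula above with
# the Beer-Lambert law, c = A / (epsilon * l). The sequence and absorbance
# are illustrative placeholders, not data from the study.
def extinction_coefficient_280(sequence: str) -> int:
    """epsilon_280 = nC*120 + nW*5690 + nY*1280 [1/(M*cm)]."""
    return (sequence.count("C") * 120
            + sequence.count("W") * 5690
            + sequence.count("Y") * 1280)

def concentration_molar(a280: float, epsilon: float, path_cm: float = 1.0) -> float:
    return a280 / (epsilon * path_cm)

seq = "GCSCKAYNCGVPCTTC"                 # arbitrary placeholder sequence
eps = extinction_coefficient_280(seq)    # 5 Cys + 1 Tyr -> 600 + 1280 = 1880
print(eps, concentration_molar(0.25, eps))
```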
Cloning of a CCK 2 R-encoding gene and expression vector preparation. A tagged ORF clone encoding the mouse CCK 2 R was purchased from OriGene (CAT#: MR222564; Germany). Restriction sites for the NheI and XhoI endonucleases were introduced into the CCK 2 R cDNA, which was amplified using the forward primer 5′-aaaaaagctagcATG GAT CTG CTC AAG CTG AACCG-3′ and the reverse primer 5′-aaaaaactcgagGCC AGG CCC CAG CGT-3′ (restriction sites are underlined; receptor-specific sequence in capital letters). The amplified and digested PCR product was cloned into the pEGFP-N1 plasmid and transformed into competent E. coli XL1 cells. Following selection of positive transformants, the plasmid was prepared and extracted using the NucleoBond Midi kit (Macherey-Nagel, Germany), quantified using a nanodrop protocol, and its sequence confirmed by DNA sequencing. This plasmid produced a receptor with a C-terminal GFP tag; adding a stop codon to the reverse primer yielded an untagged receptor, which was used for control studies (data not shown).
Cell culture and transfection. Human embryonic kidney (HEK293) cells (Ref. 31 ) were maintained in fresh Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 µg/mL streptomycin in a humidified atmosphere of 5% CO 2 at 37 °C. 2 mL of cell suspension was seeded into each well of a 6-well plate and incubated overnight at 37 °C. After reaching a confluency of 70-80%, the cells were transfected with the plasmid encoding EGFP-tagged CCK 2 R using the jetPRIME transfection reagent according to the manufacturer's instructions (Polyplus-transfection, USA).
Cell viability assays. The effect of the plant extracts on cell viability was measured in the HEK293 cell line using the Cell Counting Kit-8 (VitaScientific, USA). Briefly, 100 µL of medium containing 1 × 10 4 cells was seeded into each well of a 96-well plate and incubated at 37 °C for 24 h. Afterwards, 10 µL of various concentrations of extract (0-300 µg/mL) or caripe 11 (0.3, 1, 3, 10, 30, 100 µM) was added to each well and incubated at 37 °C for 2 h. Finally, 10 µL of the kit reagent (final concentration of 10%, v/v) was added to each well and incubated at 37 °C for 3 h. Absorbance was measured at 450 nm using a FlexStation 3 multi-mode microplate reader (Molecular Devices, USA). Triton X-100 and medium were used as positive and negative controls, respectively. The cell viability (CV) percentage was calculated using the equation CV (%) = (A S /A C ) × 100, where A S and A C are the absorbances of the sample and the negative control at 450 nm, respectively.
Data analysis.
All experiments were performed in triplicate, analyzed using the GraphPad Prism software (GraphPad Software, USA) and expressed as mean ± SD (standard deviation). Pharmacological data of concentration-response curves were normalized to the maximal response of CCK-8 detected at the highest concentration. The potency (EC 50 ) and maximum efficacy (E max ) were calculated from concentration-response curves fitted with a three-parameter non-linear regression, with the top and bottom constrained to 100% (for CCK-8) and 0, respectively. Single-concentration efficacy data were normalized to baseline (assay buffer) and expressed as IP1 formation (fold difference over baseline).
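For illustration, the constrained fit can be reproduced with a short script; the data points below are invented, and the study itself used GraphPad Prism.

```python
# A minimal sketch of the constrained three-parameter fit described above:
# top fixed at 100%, bottom at 0, leaving logEC50 as the only free
# parameter. The data points are invented; the study used GraphPad Prism.
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_conc, log_ec50):
    return 100.0 / (1.0 + 10.0 ** (log_ec50 - log_conc))

log_conc = np.array([-10.0, -9.0, -8.0, -7.0, -6.0])   # log10 molar
response = np.array([4.0, 22.0, 55.0, 88.0, 98.0])     # % of max CCK-8

(log_ec50,), _ = curve_fit(logistic, log_conc, response, p0=[-8.0])
print(f"EC50 = {10 ** log_ec50:.2e} M")
```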
Ethics statement.
The study was carried out in accordance with the relevant guidelines and regulations.
Results
Preparation and analysis of cyclotide-containing extracts. For this study we chose three representative cyclotide-producing plants, namely O. affinis (Oaff), V. tricolor (Vitri) and C. ipecacuanha (Caripe). Cyclotides elute late in RP chromatography due to their hydrophobic surface properties and exhibit a molecular mass between m/z 2700-3500 in MALDI-TOF MS 6 . The initial aqueous extracts of Oaff, Vitri and Caripe were prepared by maceration and pre-purified with C 18 solid-phase extraction to remove polar compounds. Afterward, the presence of cyclotides in these extracts was confirmed and analyzed using RP-HPLC and MALDI-TOF MS (Fig. 1); cyclotides were assigned by molecular weight according to previous studies 6, [31][32][33] (Table 1). Next, we set out to determine the GPCR-modulating activity of the cyclotide extracts at the CCK 2 receptor. Since the extracts still contain many peptides and 'contaminating' small molecules, we first determined suitable concentrations for the pharmacological experiments by performing cell viability assays.
Cell viability assays of cyclotide extracts. The viability of the HEK cells used for the IP1 second messenger accumulation experiments was determined with the extracts of Caripe, Oaff and Vitri. The Caripe extract had no major influence on the viability of HEK293 cells up to 100 µg/mL, with > 65% of cells remaining viable at 300 µg/mL (Fig. 2a). The Oaff extract showed no effect at concentrations of < 100 µg/mL, while concentrations of up to 300 μg/mL led to a decrease in viability after 2 h of incubation (Fig. 2b).
The Vitri extract exhibited no major effects on viability up to concentrations of 300 µg/mL (Fig. 2c). Therefore, concentrations of up to 300 µg/mL of Caripe and Vitri extracts, and 100 µg/mL of Oaff extract were chosen for further analysis in the IP1 assay.
Modulating effects of cyclotide plant extracts at CCK 2 R. Cyclotide-containing extracts of Caripe, Oaff and Vitri were screened for their pharmacological activity at the CCK 2 R to detect agonism and antagonism using a commercially available IP1 second messenger assay and HEK293 cells transiently expressing EGFP-tagged CCK 2 R. The pharmacological effect of the Caripe extract was determined at two concentrations of 100 and 300 µg/mL, respectively, whereas one concentration was used for the Oaff (100 µg/mL) and Vitri (300 µg/mL) extracts. The functionality of the receptor was validated by CCK-8, the endogenous CCK 2 R peptide ligand. As expected, CCK-8 produced a concentration-dependent increase in IP1 (Fig. 3), an effect that was absent in non-transfected HEK293 cells (data not shown). All three plant extracts produced a moderate accumulation of IP1 (Fig. 3).

Purification of caripe 11. Previously, we have demonstrated the pharmacology-guided isolation of cyclotides from C. ipecacuanha and other cyclotide-containing plants 6,33 . C. ipecacuanha is known to contain several cyclotides with closely related sequences (Table 1) 6,33 . Therefore, we isolated a representative cyclotide, namely caripe 11, to determine its ability to modulate CCK 2 R signaling. The cyclotide was purified using preparative RP-HPLC 6 and its purity and identity were determined using analytical RP-HPLC and MALDI-TOF MS, respectively, which yielded caripe 11 with 99.8% purity and a molecular weight of 3281.3 Da (Fig. 4). For cell-based assays, the concentration of a caripe 11 solution was calculated using the absorbance at 280 nm (extinction coefficient: 2000 M −1 cm −1 ). Furthermore, we determined its effect on cell viability; concentrations of ≤ 30 µM of caripe 11 did not have a pronounced effect, but a concentration of 100 µM slightly decreased HEK cell viability (Fig. 2d). Therefore, concentrations of up to 30 µM of caripe 11 were considered appropriate for further analysis.

Caripe 11 is a partial agonist of the CCK 2 R. Partial agonists of GPCRs are defined as ligands that trigger submaximal efficacy at the receptor as compared to full agonists 35 . To determine the activity of caripe 11, we measured concentration-response curves of the CCK-8 control agonist and caripe 11 using HEK293 cells transiently expressing the CCK 2 R. By definition, CCK-8 is an endogenous peptide ligand that fully activates the receptor. Our results demonstrate that CCK-8 activates the receptor with a potency (EC 50 ) of 11.5 nM. Caripe 11, on the other hand, activates the receptor with submaximal efficacy (E max = 71%) and a potency of 8.5 µM compared with CCK-8 (Fig. 5a and Table 2). This suggests that caripe 11 is a partial agonist of the CCK 2 R with moderate potency. To confirm the pharmacological mechanism of caripe 11, we determined the effects of three concentrations of caripe 11 upon pre-treatment of cells with an EC 80 concentration of CCK-8. As expected, caripe 11 concentration-dependently decreased the EC 80 effect of CCK-8 when both ligands were co-incubated. Accordingly, caripe 11 (30 µM) was able to reduce the EC 80 effect (70 nM) of CCK-8 by ~ 20% (Fig. S1a). Furthermore, in the concentration-response curve the basal activity of CCK-8 was increased (0 to 17%) upon co-treatment with caripe 11 (10 µM) (Fig. 5b), a phenomenon commonly observed for partial agonists 36 .
Moreover, the curve of CCK-8 in combination with caripe 11 was shifted to the right; this led to a nearly sixfold decrease in the potency of CCK-8 from an EC 50 CCK-8 of 12.9 nM to an EC 50 CCK-8+caripe11 of 71 nM (Fig. 5b). These characteristics suggest that caripe 11 is a partial agonist of the CCK 2 R.
Discussion
Cyclotides are cyclic plant peptides with a unique structural topology that is currently being explored in drug discovery and development. Cyclotides and many other nature-derived peptides occupy a chemical space that is different from that of small molecules, and therefore they may be able to interact with proteins that are otherwise difficult to target with small molecules. The use of plants in traditional medicine for the discovery of new pharmaceuticals and lead compounds is one of the central dogmas of ethnopharmacology and pharmacognosy 1,25 . The prototypical cyclotide plant O. affinis, known for its traditional use in childbirth and post-partum care, was the source of the first nature-derived cyclotide/GPCR ligand, kalata B7, which acts as a partial agonist at the oxytocin and vasopressin V 1a receptors 25 . Meanwhile, a number of different cyclotide GPCR ligands have been discovered and synthesized 23 , and hence in this study we aimed to increase the repertoire of GPCR-modulating cyclotides by exploring the CCK 2 R. GPCRs are one of the largest groups of membrane proteins in the human body, with over 800 unique receptor sequences known to date 37 . The CCK 2 R is a class A GPCR that is relevant in many physiological and pathological processes; for example, it is involved in several metabolic and gastric disorders 30 , as well as cancer 29 . Based on the therapeutic potential of the CCK 2 R and therefore the need for new ligands targeting this receptor, in this study we screened cyclotide-containing extracts of O. affinis, C. ipecacuanha, and V. tricolor to explore their ability to modulate CCK 2 R signaling. The content and nature of cyclotides in these extracts were analyzed and confirmed by RP-HPLC and MALDI-TOF MS. In line with the defining criteria of cyclotides, the peptides identified in the three plant extracts were late-eluting in RP-HPLC and exhibited a molecular mass between 2700 and 3500 Da 38 . We pharmacologically screened these cyclotide-enriched extracts in a functional IP1 assay using HEK293 cells overexpressing the mouse CCK 2 R. All three tested extracts exhibited the ability to modulate Gq-dependent CCK 2 R signaling. To demonstrate the specificity of these effects we tried co-treatment with two competitive human CCK 2 R inhibitors, YM-022 39 and LY225910 40 . Unfortunately, it was not possible to block the agonist effects of the extracts (data not shown). Possible reasons are discussed in the following: (i) the antagonists were specifically designed against the human receptor, whereas in our study we used the mouse receptor; indeed, LY225910 did not displace the effects of the endogenous ligand CCK-8, and to demonstrate an antagonistic effect of YM-022 we had to use high concentrations (> 500 nM) despite its reported picomolar affinity (Fig. S1b). (ii) Some compounds in the extracts (which contain up to hundreds of peptides and other molecules) 38 may interfere with the activity of the antagonist, and (iii) cyclotides may form a stable complex with the receptor that cannot easily be displaced by the antagonist; this phenomenon has been observed for other GPCRs previously (summarized in Ref. 41 ). For instance, partial agonists of the adenosine receptor such as LUF7746 bind covalently and cannot be displaced by antagonists 42,43 , and the same is true for the cannabinoid 1 receptor ligand AM841 44 .
To identify the cyclotides responsible for the modulation of CCK 2 R signaling, we next isolated several cyclotides from a representative cyclotide extract of C. ipecacuanha. At least one of these peptides, namely caripe 11, exhibited the ability to partially activate the CCK 2 R. This partial agonism of caripe 11 was further analyzed by co-incubation of CCK-8 with the cyclotide, which led to a dextral shift of the CCK-8 concentration-response curve (i.e., a decrease of CCK-8 potency). These findings are in line with our previous studies that identified the cyclotide kalata B7 as a partial agonist of the oxytocin and vasopressin receptors 25 . Furthermore, caripe cyclotides, first reported by Koehbach et al. 33 , have been demonstrated to function as antagonists at the corticotropin-releasing factor type 1 receptor 6 and as agonists of the κ-opioid receptor 26 .
Because partial agonists of GPCRs trigger submaximal effector coupling and thus induce less receptor desensitization as compared to full agonists, they provide opportunities to develop pharmacotherapies with improved side effect profiles 45 . A prime example is salmeterol, a partial agonist of β 2 -adrenoceptor 46 in clinical use for treatment of asthma and chronic obstructive pulmonary disease 47 . Furthermore, buprenorphine displays partial agonist activity at the µ-opioid receptor yet exerts pharmacological effects similar to an antagonist 48 . In fact, these characteristics of buprenorphine make it an attractive compound for clinical use in pain management 49 and opioid dependence 50 .
The unique features of cyclotides have led to their use in the design of cyclotide-based drugs with improved pharmacological properties. Due to their capability to accommodate structural variations, cyclotides have been extensively utilized as molecular scaffolds to design new molecules and ligands with interesting biological features by applying 'molecular grafting' 51 . For instance, Camarero et al. designed and synthesized a cyclotide-based antagonist of HIV-1 viral replication that targets the chemokine receptor CXCR4 52 . In addition, the grafted cyclotide MCo-CVX-5c was used as a template for designing and synthesizing MCo-CVX-6D (in which the Lys residue in loop 1 has been conjugated to DOTA), which exhibited affinity for CXCR4 in the sub-nanomolar range 53 . These studies demonstrated the feasibility of grafting GPCR-binding peptide motifs into the cyclotide framework. The grafted CXCR4 cyclotides are important and stable cancer imaging tools 53 . Accordingly, the affinity of the cyclotide caripe 11, identified in this study, may be exploited as a scaffold for molecular grafting to design CCK 2 R ligands with improved pharmacological properties, e.g., enhanced potency/affinity and stability. Given the therapeutic potential of the CCK 2 R in the treatment of cancer 27,29 , grafting the endogenous CCK-8 sequence into the cyclotide scaffold and conjugating an imaging reagent could, for instance, yield a high-affinity cancer imaging probe. In this study, we did not determine the activity of caripe 11 at the related CCK 1 R, and therefore we cannot address receptor-subtype selectivity. Proof-of-principle for such a grafting application, and an investigation of receptor selectivity, will have to be provided in future studies. At a more general level, our work provides yet another example that cyclotides are capable of modulating GPCR signaling 23 . Their functional diversity, structural plasticity and high stability make them suitable scaffolds to develop new GPCR-targeting ligands with unique pharmacological properties 23 . In fact, cyclotides are amenable to molecular grafting, which facilitates the engineering of chemical probes and ligands of GPCRs 23,53,54 . Here, we discovered for the first time a cyclotide ligand of the CCK 2 R that may, for instance, be utilized as a stable labelled ligand in imaging applications, as a gut-stable probe, or as a scaffold for designing stabilized peptide ligands of the CCK 2 R. Thus, our study adds to the rich diversity of cyclotides as GPCR ligands and points to their potential use as starting points for the design of cyclotide-based ligands targeting the CCK 2 R to treat human illnesses such as cancer.
Conclusion
GPCRs remain privileged drug targets. The CCK 2 R is an example of a GPCR with therapeutic potential for the treatment of gastrointestinal disorders, including cancer. Cyclotides are nature-derived peptides that represent an emerging class of GPCR modulators. In this study, we identified a cyclotide that modulates CCK 2 R signaling as a partial agonist. Therefore, cyclotides may be utilized as templates for designing new GPCR ligands with unique pharmacological properties 25 .
Hypovitaminosis D and anthropometric measurement association in young healthy females of Karachi
Background: Vitamin D deficiency is recognized as a pandemic problem around the globe. In the last few decades, hypovitaminosis D has severely affected both genders in Pakistan. Due to economic hurdles and sociocultural practices, the prevalence of hypovitaminosis D is much higher in females, which leads to age-related chronic bone and skeletal deformities. The objective was to determine the vitamin D profile and its association with anthropometric measurements in young healthy Pakistani females. Methodology: 115 healthy female participants were recruited for the study. The demographic profile, physical activity status, dietary habits and anthropometric measurements of the participants were collected by means of a questionnaire designed for the study. Participants were classified into two groups: a vitamin D deficient (VDD) group and a vitamin D adequate (VDA) group. Anthropometric measurements of both groups included body mass index (BMI), waist-hip ratio (WHR), mid-upper arm circumference (MAC), triceps skinfold (TSF), corrected mid-upper arm muscle area (CMUAMA) and the circumferences of the thigh, neck, biceps and wrist. Data were analyzed using Microsoft Office 2013. A t-test was applied to test the associations, and a p-value < 0.05 was considered significant. Results: The p-values for all anthropometric measurements were found to be non-significant. Unhealthy dietary habits were much more common in the VDD group than in the VDA group, and a high prevalence of hypovitaminosis D was found in young Pakistani females. Conclusion: Hypovitaminosis D has a great influence on the physical activity, anthropometric measurements and dietary intake of an individual.
Introduction
Hypovitaminosis D is defined as a serum 25(OH)D level below 20 ng/mL 1 . Vitamin D plays an important role in bone mineralization through the metabolism of calcium and phosphate, and it maintains normal cell proliferation, reducing the risk of cancer 1,2 . Many contributing factors affect the level of vitamin D, including ethnicity and dark skin pigmentation (melanin), which reduce the production of vitamin D; female gender, since women are more prone to hypovitaminosis D due to wearing the abaya or covering the body for religious and cultural reasons 2 ; and the winter season, which affects the amount and quality of sunlight. Further risk factors include low sunlight exposure, a low-calcium diet, obesity, a sedentary lifestyle, geographical location 3 and air pollution 4 . All these factors initiate the later consequences of vitamin D deficiency, such as osteoporosis (breakdown of bone), osteomalacia (softening of bone), osteoarthritis (cartilage deformities), different types of cancer such as prostate and breast cancer, autoimmune disorders such as diabetes, multiple sclerosis, Crohn's disease and rheumatoid arthritis, many types of infection such as urinary tract infections and tuberculosis, and different psychiatric conditions such as depression and schizophrenia 5 . Nowadays, hypovitaminosis D is recognized as a global health problem which affects millions of individuals around the world. Before the 20 th century, no prevalence data had been reported. In the 20 th century, the first available statistics on the prevalence of vitamin D deficiency were reported for the Indian population and showed that 90% of nearly all age groups of the Indian population suffered from vitamin D deficiency 6 . A study conducted in Karachi found that 90% of the study sample had low serum 25(OH)D levels; the study also revealed an inverse relation between the serum 25(OH)D level and the serum parathyroid hormone level 7 . A cross-sectional study conducted in Islamabad showed that females have a higher incidence of hypovitaminosis D than the male population irrespective of age 8 . Vitamin D is a fat-soluble vitamin which is present naturally in some foods and in milk products. Sunlight is the major source of vitamin D. Activation of vitamin D requires hydroxylation in two steps (Figure 1).
The major driver of vitamin D production is ultraviolet radiation of 290-320 nm wavelength, which is required for the production of provitamin D in the skin; thirty minutes of sun exposure twice per day is recommended 9 (Figure 1). Artificially, vitamin D2 is synthesized from yeast ergosterol with the help of ultraviolet radiation, while vitamin D3 supplements are prepared by chemical conversion of cholesterol or by irradiation of 7-dehydrocholesterol from lanolin. The two forms of vitamin D are similar except for their side chains 10 . Because the amounts obtainable from these natural sources of vitamin D are very low, it is often better to consume a vitamin D supplement to reach the recommended vitamin D level.
Obesity is one of the important contributing factors to hypovitaminosis D. A higher body mass index, together with an inactive lifestyle and unhealthy dietary intake, contributes to vitamin D deficiency. A series of studies has proposed different mechanisms to explain the pathway that leads to hypovitaminosis D: in obese people, the lower penetration of ultraviolet rays through the skin, the reduced capacity to absorb sunlight and the reduced production of vitamin D3 have all been put forward. Different studies have shown an inverse relation between vitamin D status and adiposity and body fat 11 . The present study investigates the possible relation of hypovitaminosis D and anthropometric measurements in a young healthy population.

Results

Table 2 shows that around 100% and 81% of vitamin D deficient participants were tea/coffee consumers (cups/day) and carbonated beverage consumers, respectively, compared with 88% and 66.6% of vitamin D adequate participants. Table 3 shows the differences in anthropometric measurements between vitamin D deficient and adequate participants. The p-values of all comparisons are greater than 0.05; that is, the differences are not statistically significant.
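As a sketch of the group comparison, a two-sample t-test can be run as below; the BMI values are invented, and the study's analysis was actually carried out in Microsoft Office 2013.

```python
# A minimal sketch of the between-group comparison described above, using
# Welch's two-sample t-test on an anthropometric measure. The BMI values
# are invented; the study itself analyzed the data in Microsoft Office 2013.
from scipy.stats import ttest_ind

bmi_vdd = [23.1, 25.4, 22.8, 26.0, 24.7, 23.9]   # vitamin D deficient group
bmi_vda = [22.5, 24.1, 23.3, 21.9, 24.8, 22.0]   # vitamin D adequate group

t_stat, p_value = ttest_ind(bmi_vdd, bmi_vda, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")    # p > 0.05 -> not significant
```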
Discussion
Vitamin D deficiency is recognized as a major health problem around the globe. Studies have shown that people with high body fat (obesity) have low serum 25(OH)D levels 13 . In the present study, there is a difference between vitamin D deficient and vitamin D adequate participants: according to their anthropometric measurements, vitamin D deficient participants had a higher BMI, waist-to-hip ratio, and mid-upper arm circumference than vitamin D adequate participants (Table 3). Different studies have shown a negative relation between BMI and vitamin D status; vitamin D insufficiency strongly influences BMI and body weight 14 . Adiposity is associated with low blood 25(OH)D levels because, in obese people, vitamin D is stored in adipocytes owing to its lipophilic nature 15 . A strong inverse relation between serum 25(OH)D levels and obesity has been seen in children and adolescents 16 . Anthropometric measurements are also influenced by calcium availability in the diet. Studies have shown an inverse relation between calcium intake and body weight; the reason is that the intracellular calcium concentration regulates lipid (triacylglycerol) storage and adipocyte lipid metabolism 17 . The present study showed that vitamin D adequate participants had lower body weight and better dietary status than vitamin D deficient participants. Physical activity is associated with calcium and phosphate balance and increases bone mass. Physical activity helps to enhance the 1,25(OH)2D level in the blood, which increases the efficiency of calcium absorption in the intestine. A recent study showed that self-reported moderate-to-vigorous activity per day was associated with an increase in circulating vitamin D levels 20 .
The present study shows that sedentary, resting behaviour was much more common among vitamin D deficient participants than among adequate participants (Table 2). Similarly, a population-based survey showed that a higher 25(OH)D level was associated with better neuromuscular function, and exercise improved muscle strength, balance, and mobility 21 .
Nutritional factors that can negatively impact bone health include binge drinking, caffeinated beverages and carbonated beverages, owing to their interference with mineral absorption, as can excessive dietary fibre. A study conducted on adolescents showed that a high rate of consumption of carbonated soft drinks may lead to different disease conditions such as anaemia and bone weakness, as well as increases in blood sugar level and body weight 22 . Similarly, in the present study a large proportion of vitamin D deficient participants consumed carbonated beverages compared with participants with adequate vitamin D levels (Table 2).
Conclusion
Our study showed a high prevalence of hypovitaminosis D. Conditions such as dietary insufficiency, reduced sun exposure, and low physical activity are contributing factors in the development of hypovitaminosis D, which is associated with increased BMI, mid-upper arm circumference, and waist-to-hip ratio, thereby increasing the risk of cardiovascular disease. Pakistan needs fortification of food products with vitamins, and the government should develop food policy strategies for fortification. An active lifestyle, outdoor activity, sun exposure of at least 10-30 minutes, and sufficient dairy product intake make for a stronger and healthier life.
Conflicts of interest
None.
Inspiratory Airway Resistance in Respiratory Failure Due to COVID-19
OBJECTIVES: To measure inspiratory airflow resistance in patients with acute respiratory distress syndrome (ARDS) due to COVID-19. DESIGN: Observational cohort of a convenience sample. SETTING: Three community ICUs. SUBJECTS: Fifty-five mechanically ventilated patients with COVID-19. INTERVENTIONS: Measurements of ventilatory mechanics during volume control ventilation. MEASUREMENTS: Flow-time and pressure-time scalars were used to measure inspiratory airways resistance. RESULTS: The median inspiratory airflow resistance was 12 cm H2O/L/s (interquartile range, 10–16). Inspiratory resistance was not significantly different among patients with asthma or chronic obstructive pulmonary disease compared with those without a history of obstructive airways disease (median 12.5 vs 12 cm H2O/L/s, respectively; p = 0.66). Survival to 90 days among patients with inspiratory resistance above 12 cm H2O/L/s was 68% compared with 60% for patients below 12 cm H2O/L/s (p = 0.58). Inspiratory resistance did not correlate with C-reactive protein, ferritin, Pao2/Fio2 ratio, or static compliance. CONCLUSIONS: Inspiratory airflow resistance was normal to slightly elevated among mechanically ventilated patients with ARDS due to COVID-19. Airways resistance was independent of a history of obstructive airways disease, did not correlate with biomarkers of disease severity, and did not predict mortality.
Most descriptions of the pulmonary physiology of COVID-19 acute respiratory distress syndrome (ARDS) have focused on static respiratory system compliance and lung recruitability. Because autopsies of patients with COVID-19 show evidence of airway injury (1), we hypothesized that patients with ARDS due to COVID might have elevated airflow resistance.
To test this hypothesis, we measured inspiratory airway resistance in a convenience sample of 55 intubated and mechanically ventilated COVID patients meeting the Berlin criteria for ARDS (2) in three ICUs between March and September 2020. The study was approved with a waiver of informed consent by the Institutional Review Boards (IRBs) and Research Committees of Louisiana State University, University Medical Center of New Orleans, and Ochsner Medical Center (IRB protocol 1224). Patients were identified based solely on the availability of one of the investigators to photograph ventilator waveforms during the routine assessments of ventilatory mechanics.
All patients were intubated with an oral endotracheal tube greater than or equal to 7.0 mm internal diameter and were ventilated in the volume-assist-control mode with a target tidal volume of 6 mL/kg predicted body weight. Positive end-expiratory pressure (PEEP) and Fio2 combinations were protocolized (3). Sedation, analgesia, and neuromuscular blockade were controlled by the primary treatment team. Patients were suctioned prior to the measurements, and the heat and moisture exchanger (HME) was changed regularly. During a period of passive ventilation, photographs were obtained of pressure-time and flow-time scalars for offline analysis using a HIPAA compliant software program (Haiku, Epic Systems, Verona, WI). During each assessment, inspiratory flow (F_inspir) was set to 60 L/min using a square-wave flow pattern with an end-inspiratory pause of 0.3 seconds. Waveforms that demonstrated active or reverse triggering were excluded from analysis (approximately 5% of the sample). Measurements of peak inspiratory airway pressure (P_peak) and plateau airway pressure (P_plat) were used to calculate inspiratory airflow resistance (R_inspir) using the following formula:

R_inspir = (P_peak − P_plat) / F_inspir

Deidentified patient demographics, smoking status, history of chronic lung disease, admission laboratory values, COVID-specific treatments, vital status at 90 days, and dates of first positive severe acute respiratory syndrome coronavirus-2 test, intubation, ventilator liberation, and hospital discharge were captured. Baseline characteristics are described as means ± sd, medians (interquartile range [IQR]), or percentages. R_inspir was compared between patients with and without chronic obstructive pulmonary disease (COPD)/asthma using the Mann-Whitney U test. Spearman correlation was used to determine associations between R_inspir and inflammatory markers (C-reactive protein [CRP] and ferritin) as well as the Pao2/Fio2 ratio and static respiratory system compliance (C_stat). Survival to 90 days was calculated, and the proportion of survivors above or below the median R_inspir was compared.
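As a quick illustration of this calculation, the minimal Python sketch below applies the formula directly; the function name and example pressures are hypothetical, chosen so the output matches the cohort's median resistance.

```python
# Minimal sketch of the resistance calculation described above:
# R_inspir = (P_peak - P_plat) / F_inspir, with flow converted to L/s.

def inspiratory_resistance(p_peak_cmh2o: float,
                           p_plat_cmh2o: float,
                           flow_l_per_min: float = 60.0) -> float:
    """Return inspiratory airflow resistance in cm H2O/L/s."""
    flow_l_per_s = flow_l_per_min / 60.0  # 60 L/min -> 1 L/s
    return (p_peak_cmh2o - p_plat_cmh2o) / flow_l_per_s

# Example: P_peak = 32, P_plat = 20 at 60 L/min gives R = 12 cm H2O/L/s,
# matching the cohort's median value.
print(inspiratory_resistance(32.0, 20.0))
```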
Patient characteristics are displayed in the table. The time from intubation to the waveform was 7 days (IQR, 2-12 d). The median R inspir was 12 cm H 2 O/L/s (IQR, 10-16). R inspir was not significantly different between patients with asthma or COPD compared with those without these diagnoses (median 12.5 vs 12 cm H 2 O/L/s, respectively; p = 0.66) or between patients who received either remdesivir or steroids and those who did not (13 vs 12 cm H 2 O/L/s, respectively; p = 0.83). Furthermore, R inspir did not correlate with CRP, ferritin, Pao 2 /Fio 2 , or C stat . Thirty-nine percent of the cohort survived to 90 days. Survival among patients with R inspir greater than 12 cm H 2 O/L/s was 68% compared with 60% among patients with R inspir less than 12 cm H 2 O/L/s (p = 0.58).
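For readers who want to reproduce this style of comparison, here is a minimal, hypothetical sketch of the Mann-Whitney U test used above; the resistance values are invented for illustration and are not the study data.

```python
# Illustrative group comparison in the spirit of the analysis above
# (hypothetical resistance values in cm H2O/L/s; not the study data).
from scipy.stats import mannwhitneyu

r_obstructive = [11, 12, 13, 15, 12.5, 14]  # hypothetical: asthma/COPD group
r_no_history = [10, 12, 12, 16, 11, 13]     # hypothetical: no obstructive disease

u_stat, p_value = mannwhitneyu(r_obstructive, r_no_history,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.2f}")
```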
Our investigation has shown that inspiratory airflow resistance was normal to only slightly elevated (12 cm H 2 O/L/s; IQR, 10-16) among a convenience sample of critically ill, mechanically ventilated patients with COVID-19 ARDS compared with reported normal values (4). Our study can be compared with two prior investigations of inspiratory airways resistance in ARDS. In the first, Wright and Bernard (5) reported a mean inspiratory airflow resistance of 6.15 cm H 2 O/L/s in 10 patients with non-COVID ARDS versus 0.88 in three control subjects. However, this previous study measured airway pressures below the tip of the endotracheal tube and referenced them to pleural pressure.
In a second more recent study of patients with COVID-19 ARDS, Koppurapu et al (6) used methods similar to ours. They reported a higher mean inspiratory airflow resistance (20 cm H 2 O/L/s) than that observed in our study. Although our study is larger and, therefore, perhaps less susceptible to sampling error than that of Koppurapu et al (6), approximately 50% of our patients were receiving remdesivir, systemic glucocorticoids, beta-agonists, and/or anticholinergics. In addition, the median time from intubation to assessment was only 1.7 days in the study by Koppurapu et al (6) versus 7 days in our study, which could have led to reduced airways resistance in our cohort.
There are limitations to our study. First, we did not measure expiratory airways resistance that would have required a measurement of intrinsic PEEP. In addition, similar to the study by Koppurapu et al (6), we were unable to determine the small contribution of the endotracheal tube and HME to R inspir .
We conclude that patients with COVID-19 ARDS generally have only minimally increased airways resistance and that inspiratory airflow resistance was not correlated with a history of asthma or COPD, the duration of mechanical ventilation, or mortality.
Legitimacy of the Malays as the Sons of the Soil
This paper presents evidence to defend the Malays as legitimate sons of the soil. The arguments are supported by linguistic, archaeological, paleoanthropological, prehistorical, botanical, genetic and forensic evidence. The bulk of the sources on the indigenous concept of the sons of the soil are the Malay classical texts. Based on those sources, it is argued that the Malays are entitled to be regarded as legitimate sons of the soil because, firstly, their ancestors were not migrants but originated from the Nusantara region; secondly, their ancestors were the first to constitute a political establishment, or effective administration, in Nusantara in general and in Malaysia in particular; thirdly, the status of the Malay masses as the sons of the soil had been legitimized by the Malay Sultanates, the single supreme authority over all matters of Malay sovereignty from earliest times until today; and, fourthly, the Malays themselves constituted the concept of the sons of the soil, and the geo-political entity called Tanah Melayu (the Malay Land), long before the coming of foreign influences.
Introduction
Hitherto there have been challenges by non-Malays against the Malays who regard themselves as the sons of the soil of this country (Malaysia). The most classic example of this was from the noted agitator Lim Ching Ean/Yan, a member of the Straits Settlements Legislative Council (1929-1933), who spoke to a Penang Chinese association in 1931 as follows: Who said this is a Malay country? When Captain Light arrived, did he find Malays, or Malay villages? Our forefathers came here and worked hard as coolies-weren't ashamed to become coolies-and they didn't send their money back to China. They married and spent their money here, and in this way the Government was able to open up the country from jungle to civilization. We've become inseparable from this country. It's ours, our country (cited from Roff 1967: 209).
In 1933, during his second term, he was hailed as a folk hero among the Chinese when he walked out of the council chamber in protest against the government's refusal to fund Chinese and Tamil vernacular education, the British regarding the Chinese and Indians as non-natives (Tan Kim Hong 2007: 142; The Star, 7 February 2002). He is still celebrated: for instance, a subscriber to Malaysiakini wrote about his 'contribution' under the title "Sacrifice a must on the road to unity" (http://www.malaysiakini.com/letters/95022, retrieved 10 May 2012), and The Star (7 February 2002) regarded him as a Distinguished Son of Penang.
The last decades have seen a surge of more intensive challenges to the Malay position as the sons of the soil, especially among the so-called 'colonial knowledge' reviewers (e.g. Kahn 2006, Vickers 2004, Nah 2003, Fernandez 1999). By using words such as 'contesting' and 'deconstruction' as their tools, in the guise of the post-structural school of thought, they have attempted to deny or challenge the status of the Malays as the sons of the soil. For instance, while in January 1948 Malcolm MacDonald, the Governor General of Singapore and Malaya, announced that "The Malays are the truest sons and daughters of the Malayan soil" (Broadcast Speech by the Governor-General, 4 January 1948. NAA A1838 413/2/1/4 Part 1, BTSEA - Malayan Constitutional Reforms), Sornarajah (2010: 107) says, "Bumiputera, literally, the 'children of the land.' The Malays are not indigenous to the land. There are the Orang Asli to Malaysia, whom the Malays themselves regard as the indigenous people." They seem to be more 'colonial minded' than the real pre-independence colonialists. For instance, speaking on the origin of the concept of Tanah Melayu (Malay Land): while Valentyn (1726, ed. 1885: 64-65), writing before the coming of the British to this country, said "This country has generally been known since that time, during the early Melaka Sultanate, by the name 'Tanah Malayu'"; while Crawfurd (1856: 251) in the nineteenth century said "The Malays themselves call the peninsula Tanah Malayu, that is, the 'Malay land, or country of the Malays'"; and while Winstedt (1966: 8) also said "By Malays it came to be termed 'Malay land' (tanah Melayu), though parts of Sumatra and Borneo are also 'Malay land' the continent of Europe still calls the country the Peninsula of Malacca," those 'colonialism minded speculators' insist that this concept was invented by the colonial British. Fernandez (1999: 47), for instance, writes: It was the British too who gave geopolitical connotations to Malaya by calling it 'Malay land' i.e. Tanah Melayu, a term adopted by the Malays later. The term Tanah Melayu, to mean land belonging to the Malays, was not a native construct. Early Malay texts had no such conception of statehood to encompass Malaya. It would seem to me that prior to European colonialism these places were geographic rather than geo-political entities.
The same goes for the treatment of the term sons of the soil. While Parmer (1957: 151) says, "the British favored the Malays on the theory that they are the sons of the soil to whom the British had treaty obligations," and while Winstedt (1966: 16) also says, "The Malays have at least as much right to be regarded as the aboriginal people of Malaya as the English have to be called the aborigines of England," Nah (2003: 531) says: I argue, is anxiety-producing for the 'Malay' new-Self, for its claims to being essentially 'indigenous' come under serious questioning when there is evidence that the Orang Asli have had deep(er) historical continuities in the land.
We deliberately use the word 'reviewers,' albeit they are scholars, because, though seemingly critical in their attempts to contest and deconstruct the Malay identity, none of them base their arguments on primary sources. They seem quite sophisticated in 'name dropping,' quoting huge numbers of secondary sources from the published literature (books and journals). For example, in their argumentation on the Malay origin they are overwhelmed by migration theories, drawing heavily on, in fact expanding, the pre-war models. Fernandez (1999: 46), for instance, says: The Malay history of origin was one of common history, on the migration of Malays to Malaya in two waves, the first being the proto-Malays (i.e. those we call the Orang Asli or aborigines) and the second, the deutero-Malays, the present Malays. The importance of a common history as the British wrote it clearly became central to the later Malay understanding of the civilization in the region. Most probably, uncritical readers would be satisfied with this explanation. However, as will be shown in this paper, this model is an outdated version of the theories of Malay origin. In fact, most scholars in Malay Studies have declared that those migration theories lack scholarly rigour, were unprofessionally done, and thus contradict scientific knowledge. This paper, a rebuttal to those contestations and deconstructive speculations, offers concrete historical evidence to demonstrate that the Malays are entitled to be regarded as the legitimate sons of the soil of Nusantara in general, and of Malaysia in particular. Our argument is based on the following: 1) the Malay ancestors originated from Nusantara, and for that we present evidence to reject the migration theories; 2) the Malays were the first people to constitute a political establishment with effective administration, from the prehistoric period to the present-day Malay Sultanates; 3) the legitimacy of the Malay masses' claim to be regarded as the sons of the soil had been constituted by the legitimate, supreme indigenous institution, the Malay Sultanates; and 4) the Malays had constituted the geo-political entity called Tanah Melayu (the Malay Land) on their own, long before the coming of foreign influences. With all the evidence that will be presented, we can charge those 'colonial knowledge' speculations with being totally ignorant of the basic historical facts of Malay history.
Malays Are Not Migrants
Our first argument in defence of the Malays as legitimate sons of the soil is based on the fact that their ancestors were not migrants, but originated from the Nusantara region. Our assertion is based on the following evidence. Firstly, the ancient primates and fossils found in Nusantara are much older than those found in other regions, whether in Mainland Asia or in Oceania. Secondly, Nusantara is one of the regions located in the Equatorial Belt, the most likely place for human settlement during the Pleistocene and Holocene periods because it was the only zone with the thinnest layer of ice during those periods. Other regions located in the Equatorial Belt during those periods were the Arabian continent, central Africa and the Amazon basin. This is evidenced by the high concentration of oil wells in the Equatorial Belt: scientifically, oil wells were formed from living organisms, a sign of human existence in those regions. Geological studies, by contrast, show that regions to the north of the Tropic of Cancer and to the south of the Tropic of Capricorn were covered by ice to a depth of some kilometers.
Thirdly, it would be plainly unacceptable to assume there were no humans in the whole Nusantara region before the arrival of human beings from other regions. Fourthly, there is not a single piece of evidence showing that the centre (core) of Austronesian speakers was ever in any part of the world other than Nusantara.
Fifthly, archaeological artifacts such as ancient bronzes (drums and axes) and stone tools, as well as linguistic data, fossils and botanical remains, should not be taken as reliable scientific evidence for the claim that the ancestors of the Austronesian- and Austroasiatic-speaking groups originated from southern China, because they are too few and too fragmented to represent human civilization in the context of the vast Nusantara region and of a period thousands of years ago. In fact, we wonder whether they are truly representative of prehistoric civilization, hence of the ancient people of the whole Southeast Asian region, or whether they only represent fringe areas.
Rejection of the Migration Theories
Those acquainted with the works of Solheim II (1970 & 1982), Lamp (1964), Benjamin (1987 & 2002), Hiroki Oota et al. (2001), Detroit (2004), Terrell (2004), Mohamed Anwar Omar Din (2011), and, more importantly, Oppenheimer (1999, 2004 & 2006), would probably be aware that the migration theories of the Southeast Asian peoples have long been heavily criticized as invalid and obsolete, for they do not seem to make any sense and are almost absolutely opposed to scientific and empirical evidence. One of the most radical criticisms is by Benjamin (1987: 109 & 2002: 20), who says that the theory of the constituent layers of a gigantic kueh lapis (layer cake), with the Negritos at the bottom, then the Senoi, followed by the so-called 'Proto-Malays,' and the final and topmost layer supposedly made up of the 'Deutero-Malays,' much too often repeated in the history and geography textbooks of the last half-century, no longer seems acceptable to any professional prehistorian. Lamp (1964: 102) arrives at the same conclusion: the spread of archaeological artifacts (such as Dong Son drums and bronze axes) over such a wide Nusantara area indicates some form of commercial contact with the southern fringes of the Chinese world, rather than being evidence of human migrations. Indeed, as a matter of fact, bronze Dong Son drums for commercial purposes are still produced today by the people of the Red River Delta of northern Vietnam. Those who used Dong Son drums as evidence of human migrations never referred to the Heger Type Classification to justify the validity of their theories.
The migration theories also exhibit enduring methodological defects, because they are all inherently laid, deliberately or accidentally, on the basis of Sapir's paradigm (Sapir 1916), which assumes that the place judged by observers to have comparatively thicker, denser and more diversified remains of archaeological, linguistic (cognate), fossil and botanical specimens should be considered the homeland of the people studied. It is on this basis that the migration theories pointed to Yunnan, or to Tonkin, or to Taiwan, or to the Southern Ocean Sea as the homeland of the Austronesian ancestors. For example, Comrie (2001: 28) writes: The internal diversity among the Formosan languages is greater than that in all the rest of Austronesian put together, so there is a major genetic split within Austronesian between Formosan and the rest. Indeed, the genetic diversity within Formosan is so great that it may well consist of several primary branches of the overall Austronesian family. This paradigm, however, can never yield conclusive and definite scientific answers, since its results depend entirely on how the observers determine and gauge the thickness, density and variety of those remains. Since there will always be observers who claim, more often than not, that the place where they started doing their research has comparatively thicker, denser and more varied archaeological, linguistic (cognate), fossil and botanical remains than other places, the results could equally point in the reverse direction of migration, for example to Southeast Asia, or the Nusantara region, as the homeland of the Austronesians, had the observers started their studies in this region. It is out of these differences in determining and gauging the thickness, density and variety of those remains that a never-ending race has arisen between the 'Express Train from Taiwan' (represented by Bellwood 1998) on one side and the 'Slow Boat from Polynesia' (represented by Oppenheimer 2006) on the other.
On the other hand, Sapir's paradigm potentially yields results that are opposed to obvious historical facts. For instance, had Sapir's paradigm, with its keywords 'thicker,' 'denser' and 'higher variety' of remains, been used as a standard research procedure to explain the origin of the indigenous people of the African continent in the same way it has been used to explain the origin of the Southeast Asian people, it could imply that the indigenous people of Africa originated from the ghettos of Manhattan (United States of America), because even a single ghetto in that American metropolis has a much thicker, denser and higher variety of whatever specimen than any place in Africa.
There is another serious defect in the migration theories: they operate on the premise that the Malays (in the guise of Austronesian speakers) came from a single ancestral stock, whereas in reality the Malays have always been polygenic, that is, not from a single ancestor. In fact, there is no clear-cut division between the so-called Austroasiatic speakers and the so-called Austronesian speakers, either in their cultural-linguistic or in their genetic elements (Benjamin 1987: 109 & 2002: 2-4; Maruyama et al. 2010: 165). The most recent genetic study, by an independent Japanese team (which did not involve a single Malay), undertaken among the Malays in Kuala Lumpur in 2010, reaffirmed that the assumption that the Malays are homogeneous, or from a single stock, is misleading (Maruyama et al. 2010: 165). The result of that study would probably hold true for many other Malays in Sumatera, the whole Peninsula, Kalimantan, the Philippines, in fact Nusantara as a whole.
The polygenic character of the Malays holds not only for the present but also throughout Malay historical times. For instance, the founder of the Melayu-Srivijaya kingdom, Dapunta Hyang Çri Yacanaca, could hardly be regarded as having fully Austronesian or Austroasiatic blood, but rather mixed native Nusantaran, Javanese and Indian blood. This hypothesis is based on his very name: 'Hyang' is a Nusantaran name, 'Dapunta' is Javanese and 'Çri Yacanaca' is Indian. The same goes for the founder of the fifteenth-century Melaka Sultanate, Parameswara, and also for the most prestigious Malay scholar, Tun Seri Lanang. These are only a few examples of the mixed genes of the Malays, as there are so many that they could not easily be quantified. Even all the Prime Ministers of Malaysia are of that kind.
In fact, the historical facts in general lead to the conclusion that genetic studies are as much a matter of speculation as other disciplines. Though their methodologies seem sophisticated, with scientific laboratory apparatus, specimens and technical jargon, genetic studies stumble over two fundamental shortcomings: 1) obtaining indisputable testimony and general consensus on the genetic prototype (DNA or gene sample) of who the natives are, and, in the case of Nusantara, who the real native Malays are; and 2) there is no direct connection between genetics and the homeland (Terrell 2004: 586-587). In practice, scientists in genetic studies rely on, and test their findings heavily against, the results of linguistic, archaeological, paleoanthropological, prehistorical, botanical and forensic studies (Terrell 2004: 586-587). Terrell's latter point is very much true of genetic studies in Nusantara. It has been the usual practice that, when they find that the genetics of the people in Nusantara share features with the people of southern Taiwan or southern China, the scientists in genetic studies refer to what has been stated by the experts in linguistics, archaeology, paleoanthropology, prehistory and botany. That is to say, one should not take for granted that a genetic test can tell us the homeland or place of origin of the people of Southeast Asia. Genetic studies cannot stand on their own, but have to be supported by linguistic, archaeological, paleoanthropological, prehistorical and botanical findings, all of which operate on Sapir's paradigm (Sapir 1916) as well.
The First to Establish Effective Administration
The second argument in defence of the Malays as legitimate sons of the soil is based on the fact that their ancestors were the first people to constitute a political establishment, or effective administration, in Nusantara in general and in Malaysia in particular. This is supported by the third-century B.C. ancient Indian texts Ramayana and Vayu Purana, which recorded an entity called 'Malayadvipa' in Nusantara. There is other evidence supporting this argument: the second-century A.D. Greek text of Ptolemy's Geography also recorded an entity called Malayu-kulon, and later, the 672/692 A.D. account of I-Tsing (Yijing) recorded the existence of the Malay political establishment called 'Melayu-Srivijaya' (I-Tsing 1896, transl. by Takakusu). This kingdom was later succeeded by the fifteenth-century Melaka Sultanate. According to the Portuguese sources Afonso De Albuquerque (transl. Birch 1774) and Tome Pires (1512/1944), the founder of the Malay Melaka kingdom, Parameswara, married a daughter of the ruler of Pasai (Acheh) in 1414 and converted to Islam. During the reigns of Sultan Mansor Shah, Sultan Alauddin Shah and Sultan Mahmud Shah I, with Tun Perak as their Bendahara (Prime Minister), the Melaka Sultanate's influence appears to have extended over the neighboring kingdoms of Bengkalis, Bintang, the Carimon Islands and Muar; the Sumateran states of Kampar, Siak, Aru, Rokan and Indragiri; and, on the northern peninsula, Kelantan, Manjong, Bruas, Terengganu, Perak, Pahang and Kedah. To date there is no evidence that a Chinese or Indian political establishment ever existed in this region before or after those periods, except, of course, the Republic of Singapore in the 1960s.
It is a universal truth, or at least generally accepted, that those who first establish political control are entitled to claim sovereignty over the land. As the first to do so, the Malay political establishments are entitled to be regarded as the legitimate sovereign institutions in this region. Hitherto this position has been well recognised and respected by the international community, including the British during their administration of the Peninsula, even though they sometimes used the terms 'British Possession' (Cameron 1865) and 'British Malaya' (Swettenham 1906). For example, in the nineteenth century Cameron (1865: 125-126) stated: The Malays are entitled to be looked upon as the first rulers and the representative people of Singapore and the Malay peninsula; for the aborigines were never numerous, nor do they appear at any time to have raised up a system of government, but only to have wandered about in scattered tribes; and though their traditions point to a time when they checked the Malayan invasion, it seems to me that this was in all likelihood only the driving back of a few stranger prahus, and not the repelling of an invasion. It will be seen from the short sketch which I have given of the early history of Singapore, that there at all events the Malays were met by no resistance, and as they had greatly increased in numbers before they were driven from the island (Palembang) by the Javanese to seek a new settlement on the mainland of the peninsula near Malacca, it is highly improbable that their landing there could have been seriously opposed by a few rudely armed tribes possessing no organization.
In essence, Cameron was referring to the coming of Parameswara and his ruling house from Palembang, Sumatera, to Melaka, which resulted in the establishment of the Melaka Sultanate in the early fifteenth century. Cameron (1865: 126-127) detailed the formation of the Malay political establishment as follows: The Malays that we find in Singapore and the British possessions in the Straits (viz. the Malay States of the Peninsula) are but in part the representatives of the entire race. The independent native states of the peninsula are entirely peopled by them, and from these and from Sumatra, constant additions are being made to the Malay population of the British possessions.
Unlike the nomadic tribes of the Aborigines, the Malays of the peninsula have always been lovers of good order and an established government. In their independent states they have first a Sultan, who is all powerful; under him are datuhs, or governors, selected from among the men of rank, and under these again there are pangulus, or magistrates, all standing very much in their relation to the people as our own nobility stood in feudal times to the people of England. They are, therefore, easily governed, and, sensible of the benignity of English law, they form the most peaceable and probably the most loyal portion of our native population.
The Masses Had Been Legitimized by the Malay Sultanates
The third argument in defence of the Malay position as legitimate sons of the soil is based on the fact that the status of the Malay masses as the sons of the soil had been legitimized by the Malay Sultanates. By virtue of having been the first to constitute a political establishment with prolonged effective administration, the Malay Sultanates had been recognised by the international community as the sovereign and single legitimate indigenous political institution.
Brown's translation of MS Raffles No. 18 reads as follows: "the Malays are your clay as the Tradition says, 'Al-'abdu tinu'l-murabbi', which being interpreted is 'the slave is as it were (? the clay of) his master'" (Malay Annals, transl. by C.C. Brown 2009, first published in 1952: 124). The usage of the word 'tanah' in the Undang-Undang Melaka can be found in the phrase "hamba itu tanah tuannya"; in Hikayat Merpati Mas dan Merpati Perak in "hamba dengan bumi tanah hamba"; and in Adat Raja Melayu in "Mangkubumi itu, artinya ialah yang memerintah, memberi kebajikan atas bumi yang dilenggarai raja itu." These phrases, therefore, qualify the Malays as the sons of the soil. For reasons of limited space, we shall analyze only the Malay Annals. The keyword is 'the soil.' In essence it concerns the relationship between the subjects and the Ruler. Literally, the phrase "Segala Melayu itu tanahmu" in the Malay Annals could be translated as 'all Malays are thy soil,' and the phrase "yang hamba itu, tanah tuannya" as 'the subjects are the soil of their owner.' In the first phrase, the word 'soil' clearly means 'the subjects,' and the word 'mu,' which means 'thy' in English, is a possessive pronoun referring to the Ruler. When it says 'tanahmu' ('thy soil') it refers to the people (the subjects) who were owned by the Ruler. Thus, the whole phrase "Segala Melayu itu tanahmu" could be translated as 'all the Malays are thy soil,' with 'thy' referring to the Sultan.
In the second phrase the word 'soil' also means 'the subjects.' However, this phrase has a possessive noun, 'tuannya,' which means 'the owner of the soil.' Thus, the phrase "yang hamba itu, tanah tuannya" could be translated as 'the subjects are the soil owned by its owner.' The owner here refers to the Malay Sultan.
Both terms, 'the soil' and 'its owner,' can be read as referring to 'the subjects of the Ruler.' It is this connotation that establishes the legitimate criteria of the ordinary people as sons of the soil. In a nutshell, sons of the soil among the masses refers to persons who are the subjects of the Malay ruler, on the one hand, and who are accepted by the Malay rulers as their subjects, on the other. It is in this vein that the ordinary people, the masses, were constituted as the sons of the soil. In essence, the sons of the soil in the Malay context are characterized by their loyalty and readiness to menyembah (to bow to) the Sultan. Menyembah (to bow), or to be hamba raja, is here meant in the sense of becoming an obedient subject of the Malay Sultanate, not a slave of the Sultan.
Sons of the Soil and Tanah Melayu Phrases
The fourth argument in defence of the Malay position as legitimate sons of the soil is based on the fact that the Malays themselves constituted the concept of the sons of the soil, and the geo-political entity called Tanah Melayu (the Malay Land), long before the coming of foreign influences. It is a grave mistake to speculate that these concepts are recent, modern European inventions, as there are abundant indigenous sources revealing that these terms existed and were well known among the Malay natives long before the coming of the Western powers. The proof of this assertion lies in the accounts left by De Eredia (1613), Valentyn (1884), Marsden (1811 & 1812), Raffles (1830 & 1817), Crawfurd (1819, 1852 & 1856), Hamilton (1721/1970), Logan (1848), Wallace (1869) and Howison (1801). For instance, Valentyn (1726, ed. 1885: 64-65) writes: The Malays crossed under this Prince (SIRI TOORI BOWNA) from the island of Sumatra to the opposite, now the Malay Coast, and more especially to its North-East point, known as 'Oejong Tanah,' that is, "the extremity of the country," and known among geographers as 'Zir Baad' which means in Persian 'below wind' (to leeward), hence receiving a long time afterwards also the new name 'the people below wind' (to leeward), or else 'Easterlings' (above all the other nations in the East), from this name having been given afterwards also to some of their neighbours or other Easterlings. This country has generally been known since that time by the name 'Tanah Malayu,' i.e. 'the Malay territory' or else 'the Malay Coast,' comprising in a larger sense all the country from that very point or from the 2nd degree till the 11th degree North latitude and till Tenasserim, though, taking it in a more limited sense, only that country is understood, which now belongs under the governorship and jurisdiction of Malacca and its environs; they are, therefore, also called 'Orang Malayu,' i.e. the Malays, whilst all the other Malays, either closely or far, as those of Patani, Pahang, Peirah (Perak), Keidah (Kedah), Djohor, Bintan, Lingga, Gampar, Harus, and others in this same country or on the islands of Bintang, Lingga, or Sumatra, are also called Malays, but always with the addition of the name of the country where they come from, as for instance: Malayu-Djohor, Malayu-Patani, &c., &c.
John Crawfurd (1856: 251), a contemporary of other early nineteenth-century European observers such as Marsden, Leyden and Raffles, writes: The Malays themselves call the peninsula Tanah Malayu, that is, the 'Malay land, or country of the Malays;' and they designate its wild inhabitants, speaking the Malay language, as the Orang banua, literally 'people of the soil;' or as we should express it, 'aborigines.' The term 'land of the Malays' is, however, given to the Peninsula by civilised Malays, perhaps only on account of its being the only country almost exclusively peopled by Malays; whereas in Sumatra and Borneo they are intermixed with other populations. The term 'son of the soil,' applied by these civilised Malays may in the same manner, be used by them only to distinguish the rude natives from themselves claiming to be foreign settlers. The expressions, however, would seem to imply that the civilised Malays considered the wild tribes, speaking the same language with themselves, as the primitive occupants of the land. But the same wild tribes, speaking the Malay language, although not distinguished as 'son of the soil,' exist also in Sumatra, and more especially on its eastern side opposite to the Peninsula: and they are found also, in several of the islands lying between those countries, extending even to Bancoa and Billiton.
Both writers precisely informed us that the concepts of the sons of the soil and Tanah Melayu were invented by the indigenous Malays themselves, not by foreigners. These accounts sufficiently reflect the truth, as both writers were writing when Western knowledge had not yet penetrated the minds of the indigenous people in this part of the world.
It is another mistake to assume that early Malay texts had no such concept of statehood to encompass Malaya (as claimed by Fernandez 1999: 47). As shown in Table 1, even an uncritical eye would admit that there is an abundance of Malay classical texts which used the term Tanah Melayu (the Malay Land) long before the coming of Western influences. A seventeenth-century Portuguese observer, De Eredia (1613: 8), also gave the same account: "Throughout all this continental territory of Ujontana the Malaya language is used, and the natives describe themselves as 'Malayos.'" Thus, even though some observers may take 'Ujontana' to refer to the southern part of the Peninsula, in essence De Eredia did not mean that there was no unified Malay political establishment called Tanah Melayu: ..starting point by the Island of Pulo Catay in the region of Pattane (Pattani), situated in the east coast in 8 degrees of latitude, the pass round to the other or western coast of Ujontana, to Taranda and Ujon Calan situated in the same latitude in district of Queda (Kedah): this stretch of territory lies within the region of "Malayos" and the same language prevail throughout (De Eredia 1613: 8).
Actually, he was using the old name for the Malay Peninsula, as the term 'Ujontana' had been in use before the Melaka Sultanate expanded its influence and constituted the Malay identity.
Conclusion
We think we have presented enough evidence to establish the legitimacy of the Malays as the sons of the soil. Taking into consideration all the accounts presented here, namely: that nowhere else in the Asian region have older ancient primates and fossils been found than in Nusantara; that Nusantara lay in the Equatorial Belt, the most suitable place for humans to settle during the Pleistocene and Holocene periods; that it is illogical to imagine the whole of Nusantara was totally uninhabited before the arrival of human beings from outside; that there is not a single piece of evidence showing that the centre of Austronesian speakers was ever in another part of the world; that there are fundamental defects in all the migration theories; and that the Malays were the first to constitute prolonged political establishments as well as the geo-political entity Tanah Melayu (the Malay Land), we have ample evidence to conclude that the Malays are no less native, no less indigenous, and no less sons of the soil than the Aborigines or any other natives in this region, such as the Negrito (Kintaq, Lanok, Kensiu, Jahai, Mendriq and Bateq), Senoi (Temiar, Semai, Mahu Meri, Che Wong, Ja Hut and Semoq Beri) and Malayu (Orang Selatar, Jakun, Orang Kuala, Orang Kanaq, Temuan and Semelai) in the Peninsula; the Iban, Bidayuh, Bisaya, Kayan, Kedayan, Kelabit, Kenyah, Melanau, Murut, Penan and Punan in Sarawak; and the Kadazandusun (Kadazan + Dusun), Murut, Bajau, Bisaya, Kadayun, Orang Sungai, Orang Laut and Brunei in Sabah.
If all the accounts presented in this paper are perceived as not really the earliest, then what else could be considered the earliest in the Southeast Asian context, as far as written and other sources are available? If one goes on to argue on linguistic, archaeological, paleoanthropological, prehistorical, botanical, genetic and forensic grounds, we then revert to nothing because, as stated earlier, the migration theories yield nothing other than half-baked, conflicting theories.
Table 1. Tanah Melayu (the Malay land) in indigenous classical texts (pre-industrial eras). Source: Malay Concordance Project (http://mcp.anu.edu.au/cgi-bin/tapis.pl) [Retrieved 8 April 2012]. [The table's entries are not recoverable from this copy.]
Note: Those texts confirm what had been recorded by Valentyn (1726) and Crawfurd (1856), as mentioned above. In fact, as their quotations above show, these eighteenth- and nineteenth-century European observers implicitly stated that it was the Europeans who took the words from the locals and translated them into English, not the other way round. Of course, not all the occurrences of the term Tanah Melayu in those Malay classical texts refer to the Malay Peninsula; some refer to Melaka. However, most of them refer to the present Malay Peninsula. For example, the phrase "tanah Melayu" in Hikayat Hang Tuah, which appears 46 times, means the Malay land and refers to the Malay Peninsula (see also Reid 2002).
Gravitational Wave signatures of inflationary models from Primordial Black Hole Dark Matter
Primordial Black Holes (PBH) could be the cold dark matter of the universe. They could have arisen from large (order one) curvature fluctuations produced during inflation that reentered the horizon in the radiation era. At reentry, these fluctuations source gravitational waves (GW) via second order anisotropic stresses. These GW, together with those (possibly) sourced during inflation by the same mechanism responsible for the large curvature fluctuations, constitute a primordial stochastic GW background (SGWB) that unavoidably accompanies the PBH formation. We study how the amplitude and the range of frequencies of this signal depend on the statistics (Gaussian versus $\chi^2$) of the primordial curvature fluctuations, and on the evolution of the PBH mass function due to accretion and merging. We then compare this signal with the sensitivity of present and future detectors, at PTA and LISA scales. We find that this SGWB will help to probe, or strongly constrain, the early universe mechanism of PBH production. The comparison between the peak mass of the PBH distribution and the peak frequency of this SGWB will provide important information on the merging and accretion evolution of the PBH mass distribution from their formation to the present era. Different assumptions on the statistics and on the PBH evolution also result in different amounts of CMB $\mu$-distortions. Therefore the above results can be complemented by the detection (or the absence) of $\mu$-distortions with an experiment such as PIXIE.
Introduction
Massive primordial black holes (PBH) could constitute the dominant component of present dark matter, thus resolving one of the remaining mysteries of modern cosmology; see Ref. [1] for a recent review. Several mechanisms have been proposed for their origin and evolution. The most compelling possibility is related to high peaks in the primordial curvature power spectrum, originating from quantum fluctuations during inflation, which backreact on space-time, producing large amplitude curvature fluctuations. These large fluctuations collapse during the radiation era to form black holes with masses of the order of that within the horizon at reentry. If the peak in the curvature power spectrum is broad, subsequent large fluctuations enter close to each other and there is a higher probability that nearby horizons collapse to form PBH, so they are predicted to be clustered today. In this scenario, only a very small fraction of all causal domains collapse to form black holes. The PBH thus produced constitute only a small fraction of the total energy density during the radiation era, but their relative contribution over radiation grows (nearly) as the scale factor, and comes to dominate at matter-radiation equality. Such PBH could then constitute a considerable fraction of the present matter component. The probability of collapse during the radiation era is determined by the amplitude of the curvature perturbations at reentry. A PBH abundance compatible with that of the present dark matter requires perturbations that are significantly greater than the ones measured at Cosmic Microwave Background (CMB) scales. While, in typical models of inflation, CMB modes were generated approximately 60 e-folds before the end of inflation, fluctuations that can lead to present PBH dark matter were generated around 40-to-20 e-folds before the end of inflation, depending on the precise PBH mass distribution. Various mechanisms for producing PBH have been proposed in the literature, including: the use of a scalar field, coupled to the inflaton, with a symmetry breaking potential, triggering a rapid growth of modes during inflation [2][3][4]; the presence of an inflection point in a single-field inflationary potential [5][6][7][8]; domain walls [9]; Q-balls [10,11]; sourced vector perturbations, amplified by a rolling axion [12][13][14][15][16][17]; or multiple stages of inflation with an intermediate violation of the slow-roll conditions [18].
As a result of these different mechanisms, the amplified density perturbations have a different statistical nature (i.e. single-field inflationary models with special features in the potential typically obey Gaussian distributions, but the sourced perturbations may have non-Gaussian, more specifically $\chi^2$, properties). If the source for the amplified quantum perturbations results from a higher-order interaction, one can have a distribution of the form $\zeta \propto \mathcal{G}^n - \langle \mathcal{G}^n \rangle$, where $\mathcal{G}$ denotes a Gaussian random variable. With higher values of the exponent n, the PBH production efficiency increases, since the probability distribution function becomes more spread out, as in Critical Higgs Inflation [5,6] (i.e. the region under the tail of the distribution grows). Since PBH are the result of the very end tail of the probability distribution, smaller amplitude curvature perturbations can produce the same amount of PBH in the case of non-Gaussian vs. Gaussian statistics [19].
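The effect of the statistics on the tail can be illustrated with a small Monte Carlo sketch (not from the paper): at fixed variance, a $\chi^2$-like variable $\zeta = g^2 - \langle g^2 \rangle$ exceeds a high threshold far more often than a Gaussian one, so a smaller amplitude suffices to produce the same PBH abundance. All numerical values below are illustrative.

```python
# Monte Carlo sketch: tail probabilities of Gaussian vs chi^2-like zeta
# at fixed variance. Threshold and amplitude values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000_000
sigma = 0.1                       # illustrative rms amplitude

gauss = rng.normal(0.0, sigma, n)
g = rng.normal(0.0, sigma, n)
chi2 = g**2 - sigma**2            # zeta = g^2 - <g^2>, zero mean
chi2 *= sigma / chi2.std()        # rescale to the same variance as 'gauss'

zeta_c = 0.4                      # illustrative collapse threshold
print("P(zeta > zeta_c), Gaussian:", np.mean(gauss > zeta_c))
print("P(zeta > zeta_c), chi^2   :", np.mean(chi2 > zeta_c))
# The chi^2-like tail probability comes out orders of magnitude larger.
```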
As noted in Refs. [20][21][22], scalar and tensor modes couple to each other at second order in perturbation theory. Even though this coupling is suppressed by the Planck scale, the enhancement of the scalar perturbations required to produce PBH can induce a significant amount of gravitational waves (GW). This stochastic GW background (SGWB) is unavoidably present in all models that result in PBH. Its amplitude is very sensitive to both the amplitude of the scalar perturbations and their statistics: for an equal abundance of PBH at formation, a smaller SGWB is obtained in the case of non-Gaussian vs. Gaussian primordial curvature modes. The simultaneous detection of the present PBH mass distribution and of the SGWB signal could therefore provide crucial information on the statistics of these modes, and can help discriminate between different models for their production.
For definiteness, we compare the case of a localized distribution of PBH masses originating from primordial perturbations that obey $\chi^2$ statistics (as obtained from the specific model [16]) vs. the case of a Gaussian distribution. In the former case, we also include the SGWB produced during inflation by the gauge fields that also source the curvature perturbations, and we find that this dominates over the SGWB produced by the curvature modes at reentry. We focus our attention on two mass ranges where, in light of the current uncertainties on the PBH limits, a distribution of PBH masses could account for the present dark matter. The most interesting range is that of $M_{\rm PBH} \sim \mathcal{O}(10)\,M_\odot$ (where $M_\odot \simeq 2 \times 10^{33}$ g is the solar mass), since collisions of PBH in this mass range may be responsible for the GW signals observed at LIGO [4,26,27]. In this case, the SGWB produced at reentry is peaked at Pulsar Timing Array (PTA) frequencies, $f_{\rm peak} \sim$ few nHz, where the experimental sensitivity is expected to strongly improve with the Square Kilometer Array (SKA) experiment [28]. In this work we show that PTA data can significantly probe this mass range of PBH dark matter.
The relevance of this SGWB at PTA frequencies has also been recently emphasized in Refs. [16, 29-31]. The two works [29,30] study the SGWB sourced at reentry at PTA scales emerging from different inflationary models, assuming Gaussian statistics of the curvature perturbations. Ref. [31] computed the peak of this SGWB assuming also non-Gaussian statistics, showing how this leads to a decrease of the GW amplitude (at a fixed amount of PBH). Contrary to the present work, Ref. [31] did not compute the scale dependence of this SGWB. All these works assume a trivial evolution of the PBH from their formation to the present time. In our analysis, we stress the fact that this SGWB probes the PBH mass distribution at the time of its formation. Therefore, as already mentioned in Ref. [16], the measurement of this signal, together with detailed information on the PBH mass function today, obtained from frequent BH binary (BHB) mergers and close hyperbolic encounters [32], can reveal precious information about the PBH evolution and environment.
Another experimental handle that can be used to discriminate between the different assumptions (statistics of the fluctuations and PBH evolution) is given by the amount of CMB µ-distortions generated in these PBH models [3,33]. This amount is strongly sensitive to the amplitude and the scale of the bump in the spatial curvature perturbations. We show that the detection (or absence) of a significant µ-distortion in an experiment such as the Primordial Inflation Explorer (PIXIE) [34] or the Polarized Radiation Imaging and Spectroscopy Mission (PRISM) [35] can complement what we could learn from the SGWB detection.
The organization of the paper is as follows. In Section 2 we discuss the current bounds on PBH. In Section 3 we discuss the significance of the statistical properties of the perturbations, specifically for Gaussian and $\chi^2$ distributions, and introduce two models realizing these distinct statistics. Section 4 is devoted to a detailed analysis of the contributions to the SGWB from sourced and induced tensor modes produced by these two models. While the production from a Gaussian distribution has been well studied in the literature, the detailed spectrum produced in the non-Gaussian model is an original result of this work. In Section 5, we compare the GW backgrounds of the Gaussian and non-Gaussian models against the expected sensitivity at PTA and LISA scales. We continue with a study of evolutionary effects in Section 6. Finally, Section 7 presents our conclusions.
Summary of bounds on PBH
The left panel of Figure 1 is a compilation of current bounds on the fraction $f_{\rm PBH}$ of dark matter in PBH as a function of PBH mass (with the x-axis ranging from $10^{-15}$ to $10^{4}\,M_\odot$). Different mass scales have been constrained by different experiments, as we discuss below. The right panel shows the corresponding limits on the fraction β of regions (of a given size, corresponding to a given black hole mass) that collapse to form a black hole (see Eq. (6.1) for the relation between $f_{\rm PBH}$ and β). [Footnote, partially truncated in this copy: "... become an irreducible GWB for LISA [25]. Furthermore, another class of stochastic GWB exists, due to the non-spherical collapse of PBH, which has a peak around similar frequencies but with a smaller magnitude; see Refs. [1,31]."] We note that evolutionary effects, such as PBH accretion from the surrounding plasma and merging of PBH, are not included when producing the limits shown in the right panel. Let us review the various constraints, from smaller to larger mass. The femto-lensing ("FL") line at the lowest masses shown in the figure is due to the lack of femto-lensing detection of gamma-ray bursts by Fermi [36]. The "Star Formation" limits are obtained from the capture of PBH dark matter by a star during its formation [37]. As discussed in [38], there are large uncertainties on these limits, and for this reason we do not include them in our discussion below, where we assume that this mass window can be compatible with PBH being a significant fraction, or the totality, of the dark matter. The "Kepler" line is due to the non-observation of microlensing events at Kepler [39]. The "Micro Lensing" bounds in the $10^{26} \lesssim M({\rm g}) \lesssim 10^{35}$ range come from observations of microlensing events by the MACHO and EROS Collaborations [40,41]. Both experiments lasted for about six years and, as a consequence, cannot constrain higher mass objects. The mass range $10^{35} \lesssim M({\rm g}) \lesssim 10^{37}$ is mostly constrained by the existence of a stellar cluster near the center of the ultra-faint dwarf galaxy ("UFD") Eridanus II [42], a bound which has been shown in [43] to be weakened if there is an intermediate mass black hole of a few thousand solar masses at its center, as the clustered PBH scenario predicts [1]. Therefore, we do not include this bound in our analysis. Finally, the mass range $M({\rm g}) \gtrsim 10^{35}$, tagged as "CMB", is constrained by the lack of spectral distortions in the CMB spectrum resulting from the radiation emitted due to accretion on PBH [45,46].
Gaussian vs. Non-Gaussian Primordial Overdensities
The efficiency of PBH formation is strongly dependent on the statistical properties of the primordial overdensities. Therefore, the PBH bounds given in Figure 1 turn into very different upper bounds on the primordial scalar power spectrum, depending on the assumed distribution of the perturbations at those scales.

[Figure 2 caption: bounds on $P_\zeta$ derived from the PBH limits of Figure 1, using Eqs. (3.1). The power spectrum is much more constrained in the case of $\chi^2$ statistics, as the perturbations in the tail of the distribution lead to a greater amount of PBH with respect to the Gaussian case.]

For instance, the fraction β of causal regions collapsing onto primordial black holes is related to the power $P_\zeta$ of primordial scalar perturbations by [47,48]

$\beta(N) \simeq \mathrm{Erfc}\left[\dfrac{\zeta_c}{\sqrt{2\,P_\zeta(N)}}\right]$ , Gaussian statistics ,

$\beta(N) \simeq \mathrm{Erfc}\left[\sqrt{\dfrac{1}{2} + \dfrac{\zeta_c}{\sqrt{2\,P_\zeta(N)}}}\,\right]$ , $\chi^2$ statistics , (3.1)

where $\zeta_c$ is the threshold for collapse and $\mathrm{Erfc}(x) \equiv 1 - \mathrm{Erf}(x)$ is the complementary error function (see for instance Appendix A.2 of [16] for a detailed discussion of the above relations). In these relations, N denotes the number of e-folds before the end of inflation at which the density mode that eventually collapses to form a PBH left the horizon during inflation. It is related to the PBH mass through Eq. (A.2).
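A minimal numerical sketch of Eq. (3.1), as reconstructed above, using scipy's complementary error function; the threshold and power values are illustrative only.

```python
# Collapse fraction beta for Gaussian vs chi^2 statistics, Eq. (3.1)
# (as reconstructed in the text). Parameter values are illustrative.
import numpy as np
from scipy.special import erfc

zeta_c = 0.5          # illustrative collapse threshold
P_zeta = 1e-3         # illustrative curvature power at the peak scale

beta_gaussian = erfc(zeta_c / np.sqrt(2.0 * P_zeta))
beta_chi2 = erfc(np.sqrt(0.5 + zeta_c / np.sqrt(2.0 * P_zeta)))

print(f"beta, Gaussian : {beta_gaussian:.3e}")
print(f"beta, chi^2    : {beta_chi2:.3e}")
# The chi^2 case yields a vastly larger beta at equal power, i.e. a much
# smaller power suffices for the same PBH abundance.
```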
A given value of β corresponds to a very different power in the two cases considered in (3.1). The term 1/2 in the argument of the second complementary error function can be disregarded for ζ_c² ≫ P_ζ, which is always satisfied to very good approximation, leading to

P_ζ^(χ²) ≃ 4 (P_ζ^(G))² / ζ_c² .        (3.2)

This equation relates the values of the power in the two cases that result in the same value of β. Using Eqs. (3.1) we can translate the bounds given in Figure 1 into bounds on P_ζ. The resulting limits are shown in Figure 2. The two lines satisfy the relation (3.2) with great accuracy.

Different inflationary mechanisms considered in the literature leading to PBH are characterized by different statistics of the scalar perturbations. For instance, the perturbations are Gaussian in the mechanism of Ref. [6], where a suitable inflaton potential provides an enhanced scalar spectrum at some specific scale in a model of single-field inflation. In the case of hybrid inflation models of PBH production [2,3], the statistics of the peak fluctuations deviates from Gaussian due to quantum diffusion. On the other hand, a χ² statistics is obtained in the mechanism of scalar field perturbations arising from the coupling between a rolling axion (different from the inflaton) and a vector field during inflation [16,49]. The motion of the axion induces a gauge field amplification, which in turn sources scalar primordial perturbations and primordial gravitational waves. The axion is assumed to roll only for a finite number of e-folds ∆N during inflation, resulting in amplified scalar perturbations only at the scales that left the horizon during this period. This provides a localized "bump" in the primordial perturbations, with a width related to the model parameter ∆N. These perturbations are not statistically correlated with the vacuum ones (those produced by the expansion of the universe in the absence of this axion-gauge field coupling), resulting in a primordial scalar power spectrum of the form [49]

P_ζ(k) = P_ζ^(0)(k) + P_ζ,peak exp[ − ln²(k/k_s,peak) / (2σ_s²) ] .        (3.3)

In this expression, the first term is the power spectrum of the vacuum perturbations, for which we assume a standard power-law scaling, with amplitude and tilt given by CMB observations. The second term is the sourced signal resulting from the axion-gauge field coupling. It is characterized by three parameters: the position k_s,peak of the peak in (comoving) momentum space, the height P_ζ,peak of the signal at the peak, and the width σ_s. The sourced scalar perturbations are assumed to be dominant around k_s,peak, but negligible away from it (in particular, they are negligible at CMB scales). They are responsible for the formation of PBH, since the vacuum signal is too small. The gauge field also sources a "bump" in the tensor perturbations, resulting in a GW power spectrum

P_t(k) = P_t^(0)(k) + P_t,peak exp[ − ln²(k/k_t,peak) / (2σ_t²) ] .        (3.4)

This expression is analogous to Eq. (3.3), and the position and width of the scalar and tensor bumps are comparable to each other. In Appendix B we summarize the model of Ref. [49] and the precise relations between the model parameters and the scalar and tensor power spectra. We denote this model as the "rolling-axion bump model". The two key points for the phenomenology of this model are that (i) the scalar perturbations obtained from this mechanism obey a χ² statistics; (ii) the bump in the scalar spectrum is accompanied by a correlated bump in the tensor spectrum. To assess the relevance of these two features we compare results obtained for this model with results for a model in which:

1. the scalar perturbations have a power spectrum still given by Eq. (3.3), but obey Gaussian statistics;
2. there is no corresponding bump in the tensor spectrum; namely, the tensor perturbations obey Eq. (3.4) with P_t,peak = 0.
For brevity we denote this model as the "Gaussian bump model".
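To make the shape of these spectra concrete, here is a minimal sketch of Eq. (3.3) (a log-normal bump on top of a power-law vacuum spectrum). All parameter values are assumptions chosen purely for illustration, not the values used in the paper's figures.

```python
import numpy as np

A_s, n_s, k_cmb = 2.1e-9, 0.965, 0.05    # vacuum amplitude/tilt at a CMB pivot (assumed)
P_peak, k_peak, sigma_s = 1e-2, 1e6, 1.0  # bump height, position (1/Mpc), width (assumed)

def P_zeta(k):
    """Scalar power spectrum of Eq. (3.3): vacuum power law plus sourced bump."""
    vacuum = A_s * (k / k_cmb) ** (n_s - 1.0)
    bump = P_peak * np.exp(-np.log(k / k_peak) ** 2 / (2.0 * sigma_s ** 2))
    return vacuum + bump

for k in (k_cmb, 1e3, k_peak):
    print(f"k = {k:.2e} 1/Mpc  ->  P_zeta = {P_zeta(k):.2e}")
```

At CMB scales the bump is negligible, while at k ≈ k_peak it dominates by many orders of magnitude, which is the regime relevant for PBH formation.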
Primordial vs. Induced Gravitational Waves
We identify three distinct populations of GW associated with PBH. In order of their formation, they are:

1. The GW produced during inflation by the same mechanism that produces the enhanced scalar perturbations that later become PBH at re-entry. We refer to this population as the "primordial GW", and we denote it as h_p.
2. The GW sourced by the enhanced scalar perturbations. This gravitational production is maximized when the scalar modes re-enter the horizon during the radiation-dominated era. We refer to this population as the "induced GW", and we denote it as h_i.
3. The GW produced by the merging of PBH binaries, from their formation until today [23,24].
In this work we study the first two populations, in the context of the Gaussian bump model and of the rolling axion bump model introduced in the previous section.
The Gaussian bump model assumes that no significant primordial GW are produced. The induced GW are produced by the scalar curvature modes through standard nonlinear gravitational interactions, through a process diagrammatically shown in Figure 3. The gravitational interaction is schematically of the type h ζ², where h is a tensor mode of the metric (the GW) and ζ is the scalar curvature (in this schematic discussion we do not indicate the tensorial indices, nor the spatial derivatives acting on ζ, which characterize the interaction). The tensor mode sourced by this interaction obeys a differential equation that can be solved through a Green function G(η, η′), schematically

h_i(η) ∼ ∫ dη′ G(η, η′) ζ²(η′) ,        (4.1)

where η is (conformal) time, and where the right-hand side also contains a convolution in momenta. This leads to a contribution to the GW power spectrum, schematically

⟨h_i h_i⟩ ∼ ∫ dη′ ∫ dη″ G(η, η′) G(η, η″) ⟨ζ⁴⟩ .        (4.2)

The two expressions (4.1) and (4.2) are diagrammatically shown in Figure 3.

[Footnote 6: In addition to the signals considered here, there is also the stochastic background from the non-spherical collapse of PBH [1]. This background can be estimated as Ω_nsc,0 = E · β · Ω_rad,0, where E indicates the efficiency of converting the horizon mass during the formation of PBH into GW and β is the fraction of causal domains that collapse into a PBH. Using the bound β ≲ 2 × 10⁻⁸ from Figure 1, we can estimate Ω_nsc,0 h² ≲ 10⁻¹² · E, which is much smaller than the signals studied here, and is thus ignored.]
[Footnote 7: These are not the vacuum tensor fluctuations produced during quasi-de-Sitter inflation, which are negligible on these scales.]
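For orientation, the Green function entering (4.1) takes a particularly simple form during radiation domination. This is a standard textbook result, quoted here as a reminder rather than as the paper's exact convention:

```latex
% In the radiation era a(\eta) \propto \eta, so a''=0 and the rescaled mode
% u_k \equiv a\, h_k obeys a driven oscillator equation with source S_k:
\begin{align}
u_k'' + k^2 u_k = a\, S_k(\eta)\,, \qquad
G_k(\eta,\eta') = \frac{\sin\!\big[k(\eta-\eta')\big]}{k}\,\theta(\eta-\eta')\,.
\end{align}
```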
Adding up the two GW polarizations (the induced GW are unpolarized, since they are sourced by the scalar ζ), the total explicit expression corresponding to (4.2) is Eq. (4.3) [21], where p is the loop momentum, z is the cosine of the angle between k and p, and where the functions F, T, and T̃ appearing in it are defined in Eq. (4.5). Let us now turn our attention to the rolling axion bump model. In this case, both primordial and induced GW are present. Figure 4 shows how the GW are produced from the vector field A amplified by the rolling axion. The primordial GW are produced by the vector fields during inflation. The autocorrelation ⟨h_p h_p⟩ is of the form (3.4). This correlator was computed in [16,49], and it is given by the first diagram of Figure 5.
The induced GWB is produced during the radiation-dominated era (mostly at horizon re-entry) by the scalar perturbations that were sourced by the vector fields during inflation. The induced GW signal in this model had never been computed before, and it is one of the original results of the present work. Since both h_p and h_i originate from the vector field perturbations, the total power spectrum ⟨(h_p + h_i)²⟩ also contains a mixed-term contribution, given by the second and third diagrams of Figure 5.
The presence of h_p therefore provides additional contributions to the GW power, which are typically disregarded in works on GW from PBH. Disregarding this signal may not always be a proper assumption, since the production of PBH requires a mechanism that enhances the scalar perturbations during inflation, and this mechanism can in principle also enhance the primordial GW. The relevance of h_p over h_i is particularly important in the case in which the scalar perturbations obey Non-Gaussian statistics, as we will show below. The reason for this is that PBH bounds constrain the scalar power much more in the case of Non-Gaussian vs. Gaussian statistics (see Figure 2). This then limits the amount of induced GW which are sourced by these scalar modes. In fact, we will see that h_p dominates over h_i in the rolling axion bump model.

[Figure 5 caption: One- and two-loop contributions to the GW signal in the rolling axion bump model. These diagrams give the amplitude of the primordial GW, and of the cross-correlation with the induced GW. Intermediate solid (resp. wiggly) lines represent scalar (resp. gauge field) perturbations.]
[Figure 6 caption: Auto-correlation of the induced GW signal in the rolling axion bump model. Intermediate solid (resp. wiggly) lines represent scalar (resp. gauge field) perturbations.]

Even if we ignore h_p, the study that we perform here constitutes, to the best of our knowledge, the first attempt to fully compute the ⟨h_i h_i⟩ auto-correlation in a Non-Gaussian model in which the source of the enhanced scalar perturbations is completely specified. In the previous literature, when studying the induced GW in the context of PBH formation, the scalar perturbations are typically assumed to be Gaussian, so that the source term ⟨ζ⁴⟩ in ⟨h_i h_i⟩ = ∫ dη′ dη″ G² ⟨ζ⁴⟩ can be written as the product of two-point functions, P_ζ², see Eq. (4.3). In the present context, this Gaussian contribution corresponds to just the first diagram of Figure 6. The other two diagrams only emerge when a concrete model is considered, and analogous additional diagrams could be present also for different concrete mechanisms, where e.g. more fields are involved.
In general, the 4-point correlator ⟨ζ⁴⟩ cannot be expressed completely in terms of products of 2-point correlators ⟨ζ²⟩, and the expression (4.3) must be replaced by the full expression (4.6), where F, T, and T̃ are given in Eq. (4.5) and cos θ_kp = k̂ · p̂. Evaluating the ⟨ζ⁴⟩ correlator in the rolling axion bump model gives rise to the three diagrams shown in Figure 6. The three diagrams are evaluated in Appendix D. We denote the first diagram as "Reducible", since in this case the ⟨ζ⁴⟩ correlator can be reduced to the product of two scalar power spectra. Using the analytic result (3.3) for the scalar power spectrum, this diagram can be evaluated through a one-loop computation. The other two diagrams, which we denote, respectively, as "Planar" and "Non-Planar", must instead be evaluated through a 3-loop computation. We evaluate them under the approximation of "zero-width" gauge modes, namely replacing the gauge mode functions by δ-functions with support at p = p_c (see Eq. (D.5)), where p_c is the momentum at which the exact amplitude (D.1) of the gauge fields is peaked. Each diagram is therefore proportional to four δ-functions, reducing the number of integration variables and allowing for a reliable evaluation of the integral. The results of these diagrams are presented in the red, green, and blue lines, respectively, of Figure 7. For the Reducible diagram, we show both the exact result (obtained using the exact gauge field modes) and the approximated one (using the "zero-width" approximation). This allows one to quantify the goodness of the approximation: the approximated signal has a peak which is about 2 times greater than the exact one, and centered at a value of k which is about 2 times smaller than the exact one. We therefore expect the approximate results to provide a reliable estimate of the exact ones, up to order-one factors.
The total GW spectrum is given by the sum of the auto-correlator P_h_i, which we just discussed, the auto-correlator P_h_p, given in Eq. (3.4), and the cross-correlation between the + polarizations of the primordial and induced waves,

P_h = P_h_p + P_h_i + 2 P_(h_p h_i) .

The mixed term corresponds to the second and third diagrams of Figure 5, and it is evaluated in Appendix E through a two-loop computation that uses the exact expression (D.1) for the gauge field modes. The result is also shown in Figure 7 (where we show it multiplied by −1, as the cross-correlation is negative and the plot has logarithmic axes). As can be seen from the figure, the total GW power spectrum is dominated by the primordial one. The Reducible and Planar diagrams provide contributions comparable to each other, which are about 10% of the primordial power spectrum. The Non-Planar diagram and the cross-correlation term are further suppressed. As mentioned above, the "zero-width" approximation squeezes the GW spectrum at smaller frequencies than the exact one, due to the fact that, if we assume that all gauge field momenta are precisely p_c, it provides a sharp cut-off on the maximum possible frequency of the produced GW. Therefore, we cannot use this approximation to infer the precise spectrum of the total GW signal, and the broadening of the total GW signal at small frequencies visible in Figure 7 should be understood as an artifact of the approximation. We also note that the log-normal shape (3.4) of the primordial GW power spectrum is a fit which is extremely good at the peak of the sourced signal, but which does not appropriately fit the tail of the bump [49]. We verified that the IR tail of the bump scales as k³.

[Footnote 9: The background inflationary evolution assumed in the computation leading to these results is characterized by the slow-roll parameters ε = 6.25 × 10⁻⁴ (giving the tensor-to-scalar ratio r = 0.01 at CMB scales) and η = −0.015 (giving the Planck [50] central value n_s ≃ 0.965). We then choose δ = 0.2 in the axion potential, and ξ_* = 5.1 (see Appendix B). The axion evolution is chosen so that the axion acquires maximum speed at about 40 e-folds before the end of inflation, producing a peaked GW signal at PTA frequencies.]

The main conclusions of this study (namely, the hierarchy between the various GW signals shown in Figure 7) are unchanged if the spectrum is peaked at different scales, as we discuss at the end of this section. More in general, the ratio between the induced and the primordial GW power spectra is an increasing function of the peak amplitude of the gauge field modes. This amplitude grows exponentially with the parameter ξ_* (see Appendix B). A growing gauge field amplitude also increases the primordial scalar perturbations, P_ζ ∝ A⁴, which is limited by the PBH bounds given in Figure 2. In Figure 7 we chose ξ_* = 5.1, which is the largest value of ξ_* allowed by the PBH limits for a bump at the chosen scale. We find that the ratio P_h_i / P_h_p could be of order one only for an increase in gauge field production that would lead to an increase of the scalar power spectrum by about a factor of 10. This would violate the PBH bounds at all scales shown in Figure 2. We conclude that in the rolling axion bump model, the primordial GW always dominate over the induced ones.
Stochastic GW spectra as a probe of PBH dark matter
We now use the results of the previous section to understand what can be learned from the observation of a bump of primordial and induced GW associated with the PBH. We divide the discussion into two subsections, devoted to the study of a signal at PTA and at LISA scales, respectively.

GW at PTA scales

PTA experiments are most sensitive at frequencies f = O(nHz), corresponding to modes that left the horizon about 40 e-folds before the end of inflation, and hence to PBH masses in the M ∼ 1−100 M_⊙ range. As discussed in [16], PTA measurements can provide useful information on PBH of such masses.
We quantify this in the context of the Gaussian vs. Non-Gaussian (rolling axion) bump models studied in the previous sections. In Figure 8 we show a distribution of current PBH masses that saturates the PBH limit in this mass range and that constitutes all of the dark matter of the universe. In the left panel of Figure 9 we show the bump in the primordial scalar curvature required to produce this distribution, both in the case of the Gaussian peak and of the rolling axion peak models. We note that the required distribution of P_ζ in the Non-Gaussian case is much smaller, and narrower, than the required distribution in the Gaussian case. Nevertheless, they result in the same f_PBH(M), due to the very different relations (3.1) for the PBH formation fraction β.
In both models, this bump in the scalar modes is accompanied by a GW bump at PTA frequencies. In the Gaussian bump model, the GW signal is sourced by the scalar perturbations at horizon re-entry. In the Non-Gaussian rolling axion bump model (denoted as χ² in the figure), the GW signal is dominated by the primordial GWB produced during inflation, by the same mechanism that produced the bump in the scalar modes. As we already discussed in the previous section, the induced GW signal is much smaller in the Non-Gaussian vs. the Gaussian model, since the PBH bound on the scalar perturbations is much more stringent in the former case (a more constrained ζ implies a more constrained induced ζ + ζ → h_i signal). The magnitude of this GW signal is shown in the right panel of Figure 9, where it is compared with the present PTA bounds [51-53], as well as the forecast bounds for the forthcoming Square Kilometer Array (SKA) experiment [28,54]. While consistent with the current bounds, both models produce a GW signal well within the reach of SKA.

[Footnote 11: We stress that this discussion ignores any possible merging and accretion of the PBH after their formation. For a proper discussion, see the next section.]
Besides the PBH limit shown in Figure 2, the spatial curvature perturbations are also constrained by µ and y CMB distortions. Of relevance for the present discussion (see also Ref. [3]), the µ distortion is given by [55,56]

µ ≃ −3 × 10⁻⁹ + 2.3 ∫_(k̂₀)^∞ (dk̂/k̂) P_ζ(k̂) W_µ(k̂) ,

where k̂ ≡ k · Mpc and k̂₀ = 1. In this expression, the primordial curvature power spectrum is multiplied by a window function W_µ with its main support at wavenumbers 50 ≲ k̂ ≲ 2 × 10⁴. Assuming N_CMB = 60 at the scale k̂_CMB = 0.002, this corresponds to modes that left the horizon between approximately 45 and 50 e-folds before the end of inflation. The Gaussian and χ² distributions shown in Figure 9 lead, respectively, to the distortions µ ≃ 3.6 × 10⁻⁵ and µ ≃ 3 × 10⁻⁸. Both values are below the current bound |µ| ≲ 10⁻⁴ from the COBE/FIRAS experiment [57,58]. The CMB distortion obtained in the Gaussian bump model is well within the reach of a PIXIE-like experiment, which has an estimated sensitivity |µ| = O(10⁻⁷) [59]. The rolling axion model leads instead to a value below this sensitivity, and only slightly greater than that of the scale-invariant case (a scale-invariant spectrum corresponding to that of Figure 9, with no bump, leads to µ ∼ 10⁻⁸).

[Footnote 12: On the other hand, y-distortions are mostly sensitive to modes 1 ≲ k̂ ≲ 50, which roughly corresponds to 50 ≲ N ≲ 54. These scales are significantly larger than those considered in this work.]
We also see from the figure that the Gaussian bump model results in a much greater GW signal than the rolling axion bump model, and that the Gaussian bump case shown in the figure is only barely compatible with the present bounds. It is therefore important to understand how this conclusion is affected if we modify the PBH distribution with respect to the one shown in Figure 8. The most important factor in this discussion is the relation between the peak frequency of the GW signal and the peak mass of the PBH distribution. The peak frequencies of the scalar and GW signals are equal to each other, up to an order-one factor. Therefore, the peak frequency of the GW signal potentially detectable in PTA experiments scales with the peak mass of the PBH distribution as f_peak ∝ M_peak^(−1/2). In Figure 10 we compare the GW signal generated by the Gaussian bump model described above with that generated by a different Gaussian bump model, peaked at a smaller PBH mass. More precisely, the second model (shown in dashed green lines in Figure 10) is peaked at M ≃ 2 M_⊙, a factor ∼41 smaller than the peak value M ≃ 83 M_⊙ of the first model (shown in solid blue lines). The second model produces a GW signal peaked at f ∼ 2.3 nHz, in a region where the PTA bounds are strongest. This frequency is a factor ∼6.2 greater than the peak frequency f ∼ 0.37 nHz of the GW signal generated in the first model, in very good agreement with the f_peak ∝ M_peak^(−1/2) scaling. Despite the fact that the second PBH distribution only accounts for ∼16% of the dark matter of the present Universe, the shift in frequency causes it to be already ruled out by the PTA data.
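A one-line numerical check of this scaling, using only the numbers quoted in the text (the small mismatch with the quoted ∼2.3 nHz reflects the order-one factor mentioned above):

```python
import math

M1, M2 = 83.0, 2.0   # peak PBH masses of the two models, in solar masses
f1 = 0.37            # peak GW frequency of the first model, in nHz

print(f1 * math.sqrt(M1 / M2))   # ~2.4 nHz, close to the quoted ~2.3 nHz
print(math.sqrt(M1 / M2))        # ~6.4, close to the quoted factor ~6.2
```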
It is also important to stress that the examples studied in Figures 9 and 10 assume ζ_c = 1 in Eq. (3.1). This quantity is the estimated threshold that a scalar perturbation must reach in order to form a PBH. Theoretical and numerical studies [60-64] indicate that this quantity is ζ_c = O(0.05 − 1). Since the amount of PBH is controlled by the ratio √P_ζ / ζ_c, a decrease of ζ_c by a factor r leads to the same PBH abundance provided that P_ζ is decreased by r². This effect decreases the GW signal by r² in the rolling axion bump model (in which both P_ζ and P_GW are proportional to the same power of the sourcing fields), and by r⁴ in the Gaussian model (in which P_GW ∝ P_ζ²). Therefore, for all values of ζ_c = O(0.05 − 1), the GW produced by these models will be testable at PTA-SKA frequencies.
We conclude that a significant dark matter component in the form of PBH with masses in the range M ∼ 1−100 M_⊙ is compatible with the current PTA bounds for the rolling axion bump model, and barely compatible or excluded for the Gaussian bump model, depending on the precise peak PBH mass and on the value of the threshold parameter ζ_c. The forthcoming improvement of several orders of magnitude in the PTA bounds from the SKA experiment will allow us to conclusively probe both models. In Section 6 we discuss how this conclusion is modified by a nontrivial evolution (via accretion and merging) of the PBH distribution after formation.
GW at LISA scales
Here we study the implications of LISA measurements for PBH physics. The LISA experiment will be most sensitive at frequencies f ∼ few mHz, see Ref. [25]. This corresponds to modes that left the horizon about N ∼ 25 e-folds before the end of inflation. From Eq. (A.2), we see that scalar overdensities produced at N ∼ 25 collapse into primordial black holes of mass M ≃ few × 10⁻¹² M_⊙. Therefore, LISA measurements can provide useful information on PBH of such small masses.
Analogously to the previous subsection, in the left panel of Figure 11 we show a bump in the primordial curvature perturbations that saturates the present PBH bounds, given by neutron star capture [65]. The curves shown in the figure correspond to a present PBH dark matter fraction equal to one (this mass range was also recently considered in Ref. [66]). In the right panel of Figure 11 we show the corresponding bump in the GW spectrum, as compared with the forecasted LISA sensitivity curve "N2A2M5L6" given in Ref. [25].
As seen from the right panel, the GW signal from the Gaussian model (resp., from the rolling axion model) is about five orders of magnitude (resp., three orders of magnitude) stronger than the best sensitivity curve of LISA. As discussed in the previous subsection, the GW signal can be decreased, while keeping the same amount of PBH, if the threshold for formation ζ_c is lowered with respect to the value ζ_c = 1 assumed in Figure 11. We find that the GW signal is below the LISA sensitivity curve only for ζ_c ≲ 0.07 in the Gaussian bump model, and for ζ_c ≲ 0.03 in the rolling axion bump model. Therefore, if PBH in the mass range M ∼ 10⁻¹² M_⊙ constitute a significant fraction of the dark matter, the associated stochastic background of primordial or induced GW produced by the bump models will be observed by LISA.

[Footnote 14: The star formation limits of [37] …]
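These thresholds can be roughly cross-checked against the r² and r⁴ scalings of the previous subsection. A sketch, assuming the signal sits exactly 5 and 3 orders of magnitude above the LISA curve:

```python
# Lowering zeta_c from 1 by a factor r rescales the GW signal by
# r^4 (Gaussian bump, P_GW ~ P_zeta^2) and r^2 (rolling axion, primordial GW).
r_gauss = 10 ** (-5.0 / 4.0)   # signal ~5 orders above the LISA curve
r_axion = 10 ** (-3.0 / 2.0)   # signal ~3 orders above the LISA curve
print(f"Gaussian bump:  zeta_c <~ {r_gauss:.2f}")   # ~0.06, cf. the quoted 0.07
print(f"rolling axion:  zeta_c <~ {r_axion:.2f}")   # ~0.03, cf. the quoted 0.03
```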
Evolution of the PBH Mass Function
In this section we discuss the effects of accretion and merging on the PBH mass function, and how they can impact the observation at PTA scales of the primordial and induced GW signals studied in the previous sections. In Appendix F we obtain the present-time (t₀) PBH fraction f_PBH(M) as a function of the fraction β at formation, Eq. (6.1). This relation was derived in [67] in the case of no accretion (A = 1) and no merging (M = 1) of the PBH after their formation. The coefficient A in (6.1) accounts for the fact that the surrounding plasma can accrete onto a PBH after its formation. We parametrize this by an increase of each individual PBH mass; analogously, we parametrize merging by the factor M, so that the present peak mass is related to the one at formation M_f by

M(t₀) = A M M_f .        (6.2)

This rough parametrization does not account for the fact that, in reality, different PBH masses will merge and accrete differently, and that this will in general change the shape of the original distribution. This distortion will be subdominant if the initial PBH distribution is sufficiently peaked (since in this case, only one mass scale is relevant). As we now discuss, the main impact of accretion and merging on PTA observations is simply due to a shift of the PBH peak mass (6.2), and the precise shape of the final PBH distribution plays a far less relevant role.
This can be understood from the examples shown in Figure 12. In the left (respectively, right) panel of the figure we assume the Gaussian bump (resp., rolling axion bump) model, namely a Gaussian (resp., χ²) statistics for the primordial perturbations. In each panel the PTA limits are compared with three GW distributions, obtained in the case of trivial PBH evolution (red curve, M = A = 1), of very strong merging and no accretion (blue curve, M = 10⁵ and A = 1), and of very strong accretion and no merging (purple curve, M = 1 and A = 10⁵). We expect that realistic amounts of accretion and merging should lie between these extreme cases. The GW signals are obtained as follows. In all cases we assume a present PBH mass function given by Figure 8. This present distribution is peaked at M ∼ 83 M_⊙, and it accounts for all the dark matter of the universe. We then find the corresponding value of the PBH fraction at formation β, according to Eq. (6.1), accounting for the different values of M and A that characterize each case. We then compute the primordial curvature power spectrum P_ζ leading to this fraction, in the two different cases of Gaussian vs. χ² distribution. Finally, we compute the corresponding amount of primordial and induced GW associated with this distribution. The various GW signals obtained in this way are plotted in the various curves of Figure 12.

The main feature that emerges when comparing the merging or accretion cases with the trivial-evolution case is the increase of the peak frequency of the GW signal. The reason is the following: the present distribution (Figure 8) probes the current PBH mass M. On the contrary, the primordial and induced GW signals shown in Figure 12 probe the PBH mass at formation, which is smaller than the current one by the factor A M, so that their peak frequency is increased by the factor √(A M). This shift of the GW distribution is the main factor in determining whether the GW signal can be probed at PTA scales. We see that, for the case of a χ² bump, an accretion or a merging by a factor ∼10⁵ would shift the GW signal towards too high frequencies to be observed in these experiments. The Gaussian model results instead in a greater and wider GW signal, which can be observed at PTA scales also for these large amounts of accretion and merging.
It is also interesting to compare the signal in the case of accretion and no merging vs. the case of merging and no accretion. In the case of only accretion, the primordial signal must have a smaller amplitude with respect to the case of only merging. This results in smaller primordial and induced GW, as clearly visible in the figure. The difference in the amplitude of the two signals is however rather limited. This is due to the strong sensitivity of the PBH abundance to small changes of the primordial curvature (see Eq. (3.1) of the present work, and Figure 9 of [16]). If we want to produce the same PBH abundance today, a 10⁵ accretion requires that the initial PBH abundance be decreased by a 10⁵ factor. This however only requires a small decrease of the primordial curvature spectrum, so that the primordial and induced GW signals decrease by less than one order of magnitude.

[Footnote 15: We find that the widths of the χ² distributions are δ ≃ 0.4 in the case of trivial evolution, δ ≃ 0.3 in the case of strong merging, and δ ≃ 0.2 in the case of strong accretion. The width of each Gaussian bump is given by the width of the corresponding χ² distribution times √2.]

Finally, let us comment on the µ-distortion obtained in the case of large accretion and/or merging. At equal present PBH distributions, large accretion and merging imply a smaller formation mass M_f. This in turn implies a shift of the primordial curvature modes to smaller scales (A.1), and a later formation time (smaller number of e-folds N) during inflation (A.2). For the cases of M = 10⁵ and A = 10⁵ studied in this section, we find that, due to this shift, the bump does not induce additional µ-distortions with respect to a scale-invariant spectrum.
To conclude, the main effect of merging and accretion on the detectability of the GW signal at PTA scales is that they increase the frequency of the GW signal by the collective factor √(A M). For realistic values of merging and accretion, we expect that a significant PBH distribution at O(10) M_⊙ will be associated with a GW signal visible at PTA-SKA frequencies. Interestingly, the comparison between the peak of the present PBH distribution and the peak of this GW signal will allow us to obtain information on √(A M), and therefore on the evolution of the PBH subsequent to their formation.
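A minimal sketch of this collective shift, assuming the parametrization of Eq. (6.2) (present peak mass fixed, formation mass reduced by A·M):

```python
import math

def gw_peak_boost(A=1.0, M_merge=1.0):
    """Factor by which the GW peak frequency increases relative to the
    trivial-evolution case (A = M = 1), at fixed present peak mass."""
    return math.sqrt(A * M_merge)

for A, Mm in [(1.0, 1.0), (1.0, 1e5), (1e5, 1.0)]:
    print(f"A = {A:.0e}, M = {Mm:.0e}  ->  f_peak boosted by ~{gw_peak_boost(A, Mm):.0f}x")
```

Either extreme case (A = 10⁵ or M = 10⁵) boosts the peak frequency by a factor of a few hundred, which is why the χ² signal in Figure 12 is pushed out of the PTA band.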
Conclusion and Outlook
Cosmic Microwave Background [50] and Large Scale Structure (LSS) [68] measurements strongly support the paradigm of primordial inflation. However, they only probe the range of wavenumbers 10⁻⁴ Mpc⁻¹ ≲ k ≲ 0.1 Mpc⁻¹, corresponding to about 7 e-folds of inflation. We currently have little or no direct experimental information on the physics of inflation at later times / smaller scales, apart from the limits on the amount of scalar perturbations resulting from bounds on PBH and ultracompact minihalos [69,70]. If present, PBH can provide a new experimental window on smaller scales than those relevant for CMB and LSS observations [2]. Of particular interest is the range M ∼ 1−100 M_⊙ of PBH masses. PBH in this range may be responsible for the GW signal observed at LIGO [4,26,27]. Moreover, the revision of the CMB bounds on PBH in this mass range [46], and uncertainties on the Eridanus II limits [43], allow for the possibility that a distribution of PBH masses in this range could constitute all of the dark matter in the Universe.
This mass interval corresponds to modes that left the horizon about 40 e-folds before the end of inflation (see Eq. (A.2), where we assume that N_CMB = 60). This scale corresponds to a present frequency of f = O(nHz), which can be probed at PTA. The sensitivity of PTA measurements will soon dramatically improve with the SKA experiment.
There are two mechanisms responsible for a SGWB at these scales, associated with PBH. One is the production of GW from the enhanced curvature perturbations when they re-enter the horizon after inflation. This is a completely general mechanism, present for all models of PBH. The second mechanism is production during inflation, from the same inflationary physics that was responsible for the scalar curvature modes. Both these GW signals are generated at scales corresponding to those of the enhanced scalar modes. Their precise location and amplitude, when compared with the location and amplitude of the PBH mass distribution, can offer experimental information on the specific inflationary mechanism responsible for the PBH generation, as well as on the PBH evolution (via merging and accretion) after they are formed.
The amplitude of the GW signal is a direct probe of the statistics of the scalar perturbations produced during inflation. In this work we considered two different models that produce the same current PBH distribution (see Figure 8). One model is characterized by a peak of scalar and primordial GW modes that originates from a rolling axion during inflation, where the scalar curvature modes obey a χ² statistics. The second model is characterized by Gaussian scalar perturbations, and by the absence of primordial GW. The same PBH distribution at present can be obtained from a much smaller amplitude of scalar perturbations in the Non-Gaussian than in the Gaussian case. As a consequence, the power of the induced GWB (which is proportional to the square of the power of the curvature modes) is much smaller in the Non-Gaussian model. We found that the present PTA sensitivity can soon be used to discover a PBH distribution that significantly contributes to the dark matter of the universe, depending on the precise location of its peak. On the other hand, for the rolling axion model, the induced GW signal is dominated by the primordial one, which is below the current bounds for all values of the peak mass. Most importantly, we found that both the Gaussian and Non-Gaussian models lead to a signature well above the expected PTA-SKA sensitivity, and therefore could be efficiently used for the detection of this new SGWB from PBH formation.
The location of the SGWB peak, when compared to that of the PBH mass distribution from BHB merging at higher frequencies, will provide important information on the evolution, via merging and accretion, of the PBH mass distribution. While the primordial and induced GW signals probe the perturbations before and during the PBH formation, the current PBH distribution is a function of both the distribution at formation and the subsequent evolution of the PBH masses. The main components of this evolution are: the mass accretion of each individual PBH from the surrounding plasma, and the primordial black hole merging. We parametrized these effects as a change of the position of the peak mass of the distribution (due to both accretion and merging) as well as an increase of the total mass in PBH (due to accretion). The increase of mass implies that the original PBH distribution was peaked at smaller masses than today. The peak frequency of the primordial and induced GW signal is proportional to the inverse square root of the PBH peak mass at the moment of their formation (see Eq. (5.3) and the subsequent discussion). Therefore, an increase of the peak mass due to accretion and merging implies a smaller original peak mass, and a greater frequency of the primordial and induced GW signal (see the examples shown in Figure 12 for a measure of this effect, and the corresponding discussion).
Finally, we also computed the amount of CMB µ-distortion produced by the curvature perturbations responsible for the PBH. In the case of trivial PBH evolution (neither merging nor accretion), the large curvature bump required in the Gaussian case produces an amount of distortion below the current COBE/FIRAS limits, but well within the reach of an experiment such as PIXIE. The χ² distribution instead produces a distortion which is only marginally greater than that obtained in the absence of a bump, and below the expected PIXIE sensitivity. Accretion and merging imply smaller-scale primordial perturbations, thus decreasing the amount of µ-distortion.
To conclude, PBH offer an exciting new possibility for dark matter, with several experimental consequences [1]. They offer a unique window on inflation at scales well below those probed by LSS and CMB observations. The most promising ranges for PBH dark matter are accompanied by an induced (and, depending on the model, primordial) GW signal at either PTA or LISA scales, well above the sensitivity of these forthcoming experiments. This detection will provide invaluable experimental information on the specific inflationary mechanism responsible for the PBH generation, as well as on the subsequent PBH evolution.

The work of C.U. is supported by a Doctoral Dissertation Fellowship from the Graduate School of the University of Minnesota.
Appendices

A The M − N relation
In this Appendix we derive the relation between the mass of a PBH formed by an overdensity mode produced during inflation and the number of e-folds before the end of inflation at which this mode exited the horizon.
By employing entropy conservation, we can relate the mass M of a PBH to the wavenumber k of the density mode that collapsed to form it:

M ≃ 20 M_⊙ (γ/0.2) (10.75/g_*)^(1/6) (10⁶ Mpc⁻¹ / k)² .        (A.1)

See for instance [31] for the derivation of this equation. The density mode collapses into a PBH when it re-enters the horizon during the radiation-dominated stage. The quantity γ is the ratio between the mass collapsing into the PBH and the total mass associated to that mode within the horizon. In our plots we use the numerical value γ = (1/3)^(3/2) ≃ 0.2, suggested by the analytic computation of [60] for a gravitational collapse in the radiation-dominated era (see [67] for a discussion). We stress that the mass value in Eq. (A.1) disregards any PBH mass growth due to merging or accretion. These effects are discussed in Section 6. We can then relate the wavenumber k of the density mode to the number of e-folds N before the end of inflation at which this mode exited the horizon, leading to Eq. (A.2).

[Footnote 17: Our numerical value 20 is about 18% greater than the one of [31], since we are taking Ω_rad h² ≃ 4.2 × 10⁻⁵, in contrast to the value Ω_rad h² ≃ 3.6 × 10⁻⁵ used in [31]. Our numerical value follows from the ratio Ω_m h²/(1 + z_eq), with the two quantities taken from Table 4 of [71]. As in [31], we take g_* = 10.75 for the number of degrees of freedom in the thermal bath when the mode responsible for the PBH re-enters the horizon. In principle, this value does not apply to the full range of PBH masses that we are considering; however, the value of M in (A.1) scales as g_*^(−1/6), so fixing g_* = 10.75 introduces a negligible error in comparison with the uncertainties associated to PBH collapse.]
[Footnote 18: Our derivation, and Eq. (A.2), assumes that φ̇ is constant from N_CMB to N. The largest value for this interval considered in this work corresponds to modes at LISA scales, for which N_CMB − N ≃ 35. The parameter …]
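As a quick numerical sketch of the reconstructed Eq. (A.1) (its form is rebuilt here from the surrounding text, so treat the exact coefficient as indicative):

```python
def pbh_mass_solar(k_inv_mpc, gamma=0.2, g_star=10.75):
    """PBH mass in solar masses from the comoving wavenumber k (in 1/Mpc)
    of the collapsing mode, following the reconstructed Eq. (A.1)."""
    return 20.0 * (gamma / 0.2) * (10.75 / g_star) ** (1.0 / 6.0) \
           * (1e6 / k_inv_mpc) ** 2

print(pbh_mass_solar(1e6))   # 20 solar masses for k = 1e6 / Mpc
print(pbh_mass_solar(5e5))   # ~80 solar masses: longer modes give heavier PBH
```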
B The rolling axion bump model
In this Appendix we summarize the model introduced in [49] and used in [16] to produce PBH from inflation. We refer the interested reader to these works for more details. The Lagrangian of the model is

ℒ = −½ (∂φ)² − V_φ(φ) − ½ (∂σ)² − V_σ(σ) − ¼ F_µν F^µν − (α/(4 f_σ)) σ F_µν F̃^µν ,        (B.1)

where φ is the inflaton field, σ is a pseudo-scalar (axion) spectator field (i.e. different from the inflaton and subdominant in energy density) whose motion leads to gauge field amplification, f_σ is a mass scale (often denoted as the axion decay constant), and α is a dimensionless parameter. The potential for σ is chosen to be the simplest one typically associated to a pseudo-scalar, V_σ(σ) = (Λ⁴/2) [1 + cos(σ/f_σ)]. The curvature of this potential provides a mass m_σ = Λ²/(√2 f_σ) in the minimum, and we parametrize by δ the ratio δ ≡ m_σ²/(3H²), where H is the Hubble rate during inflation, and we fix δ ≲ 1. It is easy to verify that, for this choice, the axion has a significant evolution in this potential for a number of e-folds equal to ∆N ≃ 1/δ [49].
The motion of the axion amplifies gauge field modes that exit the horizon during this time interval. These modes source both scalar and tensor primordial perturbations of comparable wavelength. This results in a bump in the scalar and tensor perturbations, near the modes that left the horizon in this period, well fitted by the second terms in the relations (3.3) and (3.4). The spectra are characterized by three free parameters of the model: (i) the time during inflation at which the roll of σ occurs; this is immediately related to the positions of the peaks k_s,peak and k_t,peak in (comoving) momentum space; (ii) the number of e-folds ∆N ≃ 1/δ for which the roll takes place; this is immediately related to the widths σ_s² and σ_t² of the two peaks; (iii) the combination ξ_* ≡ α |σ̇_*|/(2 f_σ H), to which the heights of the two peaks are exponentially sensitive; in this relation, σ̇_* is the maximum speed attained by σ during its slow roll. Typically, the PBH limits are saturated for ξ_* ∼ 5 − 6. The explicit relations between the model parameters and the power spectra (3.3) and (3.4) are given in Eqs. (4.5) and (4.6) of [16].
Ref. [72] studied under which conditions this mechanism is under perturbative control. Only the two cases δ = 0.2 and δ = 0.5 were studied in that work. It was found that perturbativity requires the bounds collected in Eq. (B.2), where Ω_GW is the fractional energy density in the sourced GW per logarithmic k interval. We see that the case δ = 0.2 is the more restrictive of the two. In general, we expect the lower bound to be a smooth function of δ. In this work we consider values of δ in the [0.2, 0.5] interval. All the cases we studied satisfy the condition (B.2) (in fact, they all satisfy the first condition among (B.2), which is the stronger one in this interval).
C Stochastic GW Backgrounds (SGWB)
We denote by h_ij the transverse and traceless spatial components of the metric perturbations. These modes, once inside the horizon, satisfy k² ⟨|h_k(η)|²⟩ = ⟨|h′_k(η)|²⟩ (where a prime denotes a derivative with respect to conformal time), and their energy density per logarithmic wavenumber is given in Eq. (C.1). In this expression, ⟨f(t)⟩ denotes the time average of the oscillating function f(t) over one period. We have also decomposed the GW into positive and negative helicities h_λ (the explicit expression for the Π_ij,λ projectors can be found for instance in Ref. [49]).
Once evaluated at the current time η₀, the expression (C.1) gives the present fractional energy density Ω_GW(k) in terms of the GW power spectrum P_h(k). In models of PBH, various physical processes contribute to the stochastic GW background, as discussed at the beginning of Section 4. Throughout this work, we mostly concentrate on the "primordial" GW h_p produced during inflation, and on the "induced" GW h_i sourced by the scalar primordial perturbations when they re-enter the horizon during the radiation-dominated era. Accounting also for the (negligible) vacuum GW signal h_v produced during inflation, this results in

P_h = P_h^(v) + P_h^(p) + P_h^(i) + 2 P_h^(pi) .

We note that the vacuum and the induced GW signals are helicity-independent. On the contrary, one helicity of the primordial GW is much smaller than the other one, and can be disregarded. The cross-correlation P_h^(pi) includes only the dominant helicity.
D Auto-correlation ⟨h_i h_i⟩ and the zero-width approximation
In this section we compute the diagrams shown in Figure 6, contributing to the power spectrum (4.6) of the induced GW. In the first of the three diagrams we note the presence of two internal power spectra of the sourced scalar perturbations, given by the second term in (3.3). Using this result, the diagram becomes a one-loop diagram, which we evaluate numerically through Eq. (4.3). The other two diagrams are genuinely 3-loop diagrams, so in addition to the two time integrals shown explicitly in Eq. (4.6), they involve 8-dimensional integrals in momentum space. Rather than performing a 10-dimensional integration, we estimate the last two diagrams by exploiting the fact that the sourcing gauge fields are very peaked in momentum space. The exact amplitude of the vector fields in the rolling axion bump model is given by [49,72]

A_+(τ, k) ≃ N[ξ_*, x_* ≡ k/k_*, δ] (−τ/(8 k ξ(τ)))^(1/4) exp[−2√(−2 ξ(τ) k τ)] ,        (D.1)

where τ is conformal time (which is negative during inflation, τ ≃ −1/(aH)) and τ_* is the time at which the axion rolls fastest, and we have used ξ(τ) = α|σ̇|/(2 f_σ H), with ξ_* ≡ ξ(τ_*). The normalization factor can be well fitted by a log-normal shape, as shown in [72],

N[ξ_*, x_*, δ] ≃ N^c(ξ_*, δ) exp[ − ln²(x_*/q_A^c) / (2σ_A²) ] ,        (D.2)

and the functional dependence of N^c, σ_A², and q_A^c on ξ_* and δ is given in Ref. [72]. The parameter q_A^c is an order-one number, and therefore the gauge field amplitude (D.2) exhibits a peak in momentum space at k ∼ k_*, namely at the scales that left the horizon when the axion rolled fastest. As the gauge fields source both scalar and tensor modes, this peak is the origin of the bumps in the scalar perturbations and gravitational waves produced in this model.
In the zero-width approximation, we replace the exact gauge field profile with a Dirac delta function with support at k = q_A^c k_*. More concretely, the approximation is done after writing the gauge field propagators. Each propagator is proportional to N², and we approximate

N²(p/k_*, ξ_*, δ) → N²_app(p/k_*, ξ_*, δ) ≡ Ñ²(ξ_*, δ) δ[ln(p/(q_A^c k_*))] ,        (D.3)

where p is the magnitude of the internal momentum of the corresponding gauge field, and where Ñ² must be chosen so that the integral over p using the approximate propagator gives a good estimate of the exact integral. The amplitudes can be written as the appropriate power of the external momentum times dimensionless ratios involving external and internal momenta. As all the gauge field wavefunctions are peaked at p = q_A^c k_*, all the internal momenta in the diagrams are comparable to each other, and comparable to the external momentum. Therefore each internal momentum integral has a measure of the type dp/p times an order-one factor. We use this measure to determine Ñ. Specifically, we require that

∫ (dp/p) N²(p/k_*, ξ_*, δ) = ∫ (dp/p) N²_app(p/k_*, ξ_*, δ) .        (D.4)

Using the expression (D.2), the integration on the left-hand side gives √π (N^c)² σ_A. The integral on the right-hand side is by definition Ñ². Therefore, our zero-width approximation consists in writing the exact amplitude, and then replacing

N²(p/k_*, ξ_*, δ) → π^(1/2) (N^c)²(ξ_*, δ) σ_A(ξ_*, δ) δ[ln(p/(q_A^c k_*))]        (D.5)

in each gauge field propagator. As there are four gauge field propagators in each diagram, this introduces four δ-functions, which considerably simplify the 8-dimensional momentum integration.
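A quick symbolic check of the normalization used in (D.4)-(D.5), assuming the log-normal fit (D.2):

```python
import sympy as sp

l, sigma_A, Nc = sp.symbols("l sigma_A N_c", positive=True)

# With l = ln(p / (q_A^c k_*)), Eq. (D.2) gives N^2 = N_c^2 exp(-l^2 / sigma_A^2),
# and the measure dp/p becomes dl.
integral = sp.integrate(Nc**2 * sp.exp(-l**2 / sigma_A**2), (l, -sp.oo, sp.oo))
print(sp.simplify(integral))   # sqrt(pi) * N_c**2 * sigma_A, as stated above
```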
The prescription (D.5) is just one simplifying procedure among many possible ones. Eventually, its goodness can only be assessed by comparing exact and approximate results. We can do so for the Reducible diagram (the first in Figure 6), for which we have the exact result (obtained by evaluating the one-loop expression (4.3)) as well as the approximate one (given in the next subsection). The exact and the approximate results for this diagram are shown in Figure 7, where one can see that they lead to two bumps in the GW signal that differ from each other by a factor of ∼2 (both in the amplitude and in the position of the peak). This allows us to conclude that the zero-width approximation (D.5) can indeed be used to estimate the order of magnitude of the second and third diagrams of Figure 6.
The remainder of this appendix is divided into three parts, in which we give the explicit expression of the integrand of the three diagrams of Figure 6 in the zero-width approximation.
D.1 Reducible diagram
We now give the explicit expression of the integrand of the Reducible diagram in the zero-width approximation. Referring to the first diagram in Figure 6, we label the external momentum as k (going from left to right in the diagram), and we label the internal momenta in the first, second, and third gauge field propagators (from top down) as u, n, and v, respectively (all going from left to right in the diagram). All the other momenta in the diagram can be obtained from these, using momentum conservation at each vertex. With this parametrization we can immediately perform the three integrals ∫ du dn dv using the Dirac δ-functions in (D.5).
F Simple parametrization of the PBH evolution and present abundance
In this Appendix we derive the relation (6.1) of the main text. We follow the computations of [67] up to Eq. (F.5), and we then introduce two parameters to account for the nontrivial PBH evolution (accretion and merging after their formation). We start by relating time and temperature during radiation domination. In a radiation-dominated universe the Hubble rate H is related to time by H = 1/(2t). From the Friedmann relation H² = ρ/(3M_p²), and from the energy density of a thermal bath of temperature T, ρ = (π²/30) g_* T⁴, where g_* is the number of effective bosonic degrees of freedom at the temperature T, one obtains

t = (1/2) (90/(π² g_*))^(1/2) M_p / T² .
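A numeric sanity check of this t(T) relation (a sketch; taking M_p as the reduced Planck mass is an assumption, consistent with the Friedmann relation as written above):

```python
import math

M_P = 2.435e18               # reduced Planck mass in GeV (assumed convention)
GEV_INV_TO_SEC = 6.582e-25   # conversion factor: 1/GeV expressed in seconds

def time_of_T(T_gev, g_star=10.75):
    """Cosmic time (seconds) at temperature T during radiation domination."""
    t_gev_inv = 0.5 * math.sqrt(90.0 / (math.pi**2 * g_star)) * M_P / T_gev**2
    return t_gev_inv * GEV_INV_TO_SEC

print(time_of_T(1e-3))   # T = 1 MeV -> ~0.7 s, the usual BBN-era benchmark
```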
Humanin and diabetes mellitus: A review of in vitro and in vivo studies
Humanin (HN) is a 24-amino acid mitochondrial-derived polypeptide with cytoprotective and anti-apoptotic effects that regulates mitochondrial functions under stress conditions. Accumulating evidence suggests a role for HN against age-related diseases, such as Alzheimer's disease. The decline in insulin action is a metabolic feature of aging; thus, type 2 diabetes mellitus is considered an age-related disease as well. It has been suggested that HN increases insulin sensitivity, improves the survival of pancreatic beta cells, and delays the onset of diabetes, actions that could be deployed in the treatment of diabetes. The aim of this review is to present the in vitro and in vivo studies that examined the role of HN in insulin resistance and diabetes, and to discuss its newly emerging role as a therapeutic option against those conditions.
INTRODUCTION
Twenty years ago, three independent laboratories discovered humanin (HN) (MTRNR2), the first mitochondrial small open reading frame (sORF)-encoded microprotein found to have biological activity. The Hashimoto laboratory discovered HN while searching for survival factors in the unaffected brain section of an Alzheimer's patient [1]. The investigators identified a cDNA fragment that mapped back to the mitochondrial 16S rRNA. This microprotein was named humanin because it displayed protection against Alzheimer's disease (AD)-related neurotoxicity, an action that the original authors thought could potentially restore the "humanity" of patients suffering from dementia. Second, Ikonen et al [2] found that HN bound insulin-like growth factor binding protein 3 (IGFBP3) using a yeast two-hybrid screening system and intensified the protective effects of IGFBP3 against amyloid-β (Aβ) toxicity. Also, Guo et al [3] showed that HN can bind and suppress the apoptotic protein BAX and, subsequently, alleviate cell apoptosis.
Physiologically, HN is produced by tissues in several organs, including the kidney, skeletal muscle, brain, heart, and liver [4-6]. Subsequently, it is secreted into the blood circulation and transported to various target cells, protecting them against several diseases strongly associated with oxidative stress, mitochondrial dysfunction, and cytotoxicity [7]. Beyond cytoprotection, HN possesses a key role in cell metabolism and mediates the production and secretion of endocrine/paracrine/autocrine protective stress-response factors [8]. Additionally, it plays a role in age-related diseases and several metabolic disorders (e.g., cardiovascular diseases [CVD], memory loss, stroke, and type 2 diabetes mellitus [T2DM]).
Diabetes is a chronic disease that occurs either due to autoimmune destruction of the pancreatic beta cells, leading to absolute insulin deficiency (T1DM), or due to progressive attenuation of insulin secretion on a background of insulin resistance, resulting in relative insulin deficiency (T2DM). The number of people with diabetes rose from 108 million in 1980 to 422 million in 2014. Prevalence has been increasing faster in low- and middle-income countries than in high-income countries. The rising burden of T2DM is a major concern in health care worldwide. In 2017, 6.28% of the worldwide population was affected by T2DM. It is disconcerting that the burden of the disease is rising globally, and at a more rapid rate in developed regions such as western Europe [9]. As for T1DM, its incidence is estimated at 15 per 100,000 people and its global prevalence at 9.5% [10]. Since diabetes and its complications affect individuals' functional capacities and quality of life, leading to significant morbidity and premature mortality, effective agents are required for its treatment.
STRUCTURE OF HUMANIN PEPTIDE
HN is encoded by a sORF within the gene for the 16S ribosomal subunit in the mitochondrial genome [11]. HN has a positively charged N-terminus (Met-Ala-Pro-Arg), a central hydrophobic region (Gly-Phe-Ser-Cys-Leu-Leu-Leu-Leu-Thr-Ser-Glu-Ile-Asp-Leu), and a negatively charged C-terminus (Pro-Val-Lys-Arg-Arg-Ala) [1]. The last three amino acid residues of the C-terminus are considered dispensable, because the 21- and 24-amino-acid-long peptides have indistinguishable intracellular and extracellular effects [12]. Thirteen nuclear-encoded HN isoforms have been identified; the HN-like ORFs have been named MTRNR2L1 to MTRNR2L13 after the original humanin MTRNR2 gene in the mitochondrial genome. MTRNR2L1-MTRNR2L10 are expressed in most human tissues, with MTRNR2 being expressed in a higher proportion than the other isoforms. Molecular manipulations of HN at key amino acids lead to changes in its chemical characteristics. Additionally, single amino acid substitutions can significantly modify its biological functions and potency [13].
MECHANISMS OF ACTION
HN exerts its functions after binding to either intracellular molecules or cell membrane receptors (Figure 1). Immediately after HN binds its receptor, extracellular signal-regulated kinase 1/2 (ERK1/2) phosphorylation increases [14]. Once ERK1/2 is phosphorylated, it separates from its anchoring proteins and transfers to other subcellular compartments. ERK1/2, a member of the mitogen-activated protein kinase (MAPK) pathway, participates in several essential cellular processes such as cell proliferation, survival, differentiation, mobility, and apoptosis [15,16]. HN binds two different types of receptors: the seven-transmembrane G protein-coupled receptor formyl peptide receptor-like 1 (FPRL1), which mediates the cytoprotective properties of HN, and a trimeric receptor, consisting of the ciliary neurotrophic factor receptor (CNTFR), the cytokine receptor WSX-1, and the transmembrane glycoprotein 130 (GP130) (CNTFR/WSX-1/GP130), which is essential for HN activity and its neuroprotective effects [17]. As regards GP130, it is a transmembrane protein that acts as the signal transduction unit of the IL-6 receptor family [18]. Dimerization of GP130 receptors provokes the stimulation of Janus kinases (JAK1 and JAK2), which subsequently activate signal transducer and activator of transcription 3 (STAT3) and STAT1 [19]. The dimerized STATs move to the nucleus and control transcription. The second signaling pathway directed by GP130 recruits SHP-2. SHP-2 is phosphorylated by JAK and interacts with growth-factor receptor-bound protein 2 (Grb2), which allows the activation of MAPK [19].
HN is regulated by insulin-like growth factor 1 (IGF-1) and growth hormone (GH). Especially concerning diabetes, HN provides protection against apoptosis by binding the pro-apoptotic protein Bax, inhibiting its mitochondrial localization, and lessening Bax-mediated apoptosis activation [3], acting either directly on Bax or through the FPRL-1 receptor [17]. As for its neuroprotective action, which is also relevant to the protection of neuroendocrine beta cells, it involves HN binding to a complex involving CNTFR/WSX-1/GP130 [17] and the activation of tyrosine kinases and STAT-3 phosphorylation [41]. Moreover, an important mechanism of cell protection may be interference with Jun N-terminal kinase (JNK) activity [42]. Also important is the interaction between HN and insulin-like growth factor binding protein-3 (IGFBP-3), which prevents the activation of caspases [2]. Furthermore, the substitution of the serine at position 14 with glycine (S14G; [Gly14]-HN, HNG) seems to induce neurosurvival activity, and a substitution of the phenylalanine at the 6th position with alanine (F6A, F6AHN) changes the binding of HN to IGFBP-3 and enhances its main effect on glucose metabolism and insulin sensitivity [5].
ROLE OF HUMANIN IN THE PATHOGENESIS OF TYPE 1 DIABETES
The role of HN in T1DM has been scarcely investigated. T1DM is characterized by the loss of pancreatic beta cells, which results in insulin deficiency. Beta-cell destruction, the dominant event in the pathogenesis of T1DM, occurs as a result of the actions of IL-1, TNF-α, and IFN-γ, which originate from T cells and macrophages. Since HN has been identified as a survival factor [43], it seems to serve as a survival factor for neuroendocrine beta cells as well, by decreasing cytokine-induced apoptosis; subsequently, it improves glucose tolerance and delays the onset of diabetes, as has been demonstrated in NOD mice in vivo [44]. Yet, no studies comparing HN levels in T1DM and T2DM have been published thus far.
ROLE OF HUMANIN IN THE PATHOGENESIS OF TYPE 2 DIABETES
T2DM is one of the most common metabolic diseases. This metabolic disorder and its comorbidities and complications, such as CVD, stroke, chronic kidney disease (CKD), and cancer, are global health problems which noticeably diminish quality of life and life expectancy [45-48].
Mitochondrial dysfunction and oxidative stress are involved in the pathogenesis of diabetes. Mitochondria are principal elements for the maintenance of metabolic health and cellular energy homeostasis. Mitochondrial dysfunction causes glycaemic dysregulation and metabolic derangement [49]. It causes inefficiency in the electron transport chain and beta-oxidation, thus triggering insulin resistance [50]. Furthermore, hyperglycemia provokes the generation of reactive oxygen species (ROS) which, in turn, cause oxidative stress in several tissues, damaging cellular lipids, proteins, and DNA, and subsequently provoke chronic inflammation [51]. The accumulation of oxidative damage leads to a decline in mitochondrial function, which can result in increased ROS production [29]. It has been suggested that mitochondrial dysfunction is implicated in diabetes-related complications impairing the kidneys, nervous system, heart, and retina, and that mitochondrial dysfunction-related oxidative stress contributes to these complications [52]. Subsequently, an increase in ROS concentrations may provoke HN mobilization from various tissues to the impaired areas, where HN acts against oxidative stress, decreases ROS production, and promotes cell survival [51]. Mitochondrial-derived peptides (MDPs), such as HN, have been suggested to play a critical role in reducing oxidative stress [53-55] and improving T2DM [56]. It has also been demonstrated that HN promotes mitochondrial biogenesis in pancreatic β-cells [57].
In vitro and animal studies
Considering that diseases related to ageing, namely T2DM and neurodegeneration, have been suggested to be associated with mitochondrial dysfunction [58,59], it is plausible that the mitochondrial-derived peptide HN regulates them (Table 1). Based upon the molecular interaction between HN and IGFBP-3, which prevents the activation of caspases, and since IGFBP-3, independent of IGF-1, provokes insulin resistance (IR) both in the liver and the periphery [60,61], Muzumdar et al [23] hypothesized that HN, besides its neuroprotective action, may regulate glucose homeostasis. Utilizing state-of-the-art clamp technology, they investigated the role and the mechanism of action of central and peripheral HN in glucose metabolism. They demonstrated that infusion of HN improves both hepatic and peripheral insulin sensitivity and that hypothalamic STAT-3 activation is essential for the insulin-sensitizing action of HN. Moreover, treatment with a highly potent HN analog significantly lowered blood glucose in Zucker diabetic fatty rats. As for the levels of HN in tissues such as the hypothalamus, skeletal muscle, and cortex, they declined with age in rodents, and circulating HN levels also diminished with age in humans and mice.
A year later, a group from California [44] investigated whether HN could improve the survival of beta cells and delay or even treat diabetes in NOD mice. HN prevented apoptosis induced by serum starvation in NIT-1 cells and decreased apoptosis related to cytokine exposure (caused by interleukin [IL]-1β, tumor necrosis factor [TNF]-α, and interferon [IFN]-γ). STAT3 is considered a principal survival signaling protein in beta cells, regulating the pro-survival effects of various growth factors and cytokines. HN activated STAT3 and ERK over a 24-hour time course. Interestingly, HN improved glucose tolerance in NOD mice, and after 6 wk of treatment decreased lymphocyte infiltration was observed in their pancreata. When the treatment was extended up to 20 wk, the investigators noted that HN delayed or prevented the onset of diabetes in NOD mice. A few years later, the group mentioned first [23] hypothesized that HNGF6A, a potent non-IGFBP-3-binding HN analog, may acutely and independently affect insulin secretion, since insulin concentrations were not reduced along with the hypoglycemia caused by HNGF6A in Sprague Dawley rats [22]. Sprague Dawley rats that received HNGF6A presented higher insulin levels during hyperglycemic clamps compared to controls. Similarly, in vitro, HNGF6A enhanced glucose-stimulated insulin secretion in isolated islets and a cultured murine β cell line. This effect was dose dependent, accompanied by ATP production in the β cell, related to the KATP-channel-independent augmentation phase of insulin release [62], and associated with amplified glucose metabolism. These potent effects on insulin secretion, in combination with the effects on insulin action, suggested a role for HN in the treatment of T2DM.
The protective effects of [Gly14]-Humanin (HNG) against high glucose-induced apoptosis were investigated in human umbilical vein endothelial cells (HUVECs). Pretreatment of HUVECs with HNG inhibited cell death, nucleus pyknosis, and deformation [63]. HNG also diminished the expression of cleaved poly ADP-ribose polymerase (PARP), which reflects the level of apoptosis, as well as reactive oxygen species (ROS). The level of Bax, a pro-apoptotic protein, decreased after pretreatment with HNG, while that of Bcl-2, which exerts anti-apoptotic effects, increased.
Another group identified a different sORF within the mitochondrial 12S rRNA encoding a 16-amino-acid peptide named MOTS-c (mitochondrial open reading frame of the 12S rRNA type-c), which also regulates insulin sensitivity and metabolic homeostasis [56]. In particular, MOTS-c treatment in mice protected against age-dependent and high-fat-diet-induced insulin resistance as well as diet-induced obesity. The authors suggested that MDPs like MOTS-c and HN, with such systemic effects, may be useful in ameliorating the abnormal metabolism associated with aging in humans and in regulating biological processes such as weight and metabolic homeostasis.
Kim and his colleagues from California sought to elucidate the signaling pathways underlying HN's cytoprotective roles in vitro and in vivo [14]. Utilizing multiple models, they showed that HN is a major GP130 agonist which acts through the GP130/IL6ST receptor complex and activates AKT, ERK1/2, and STAT3. PI3K, MEK, and JAK were suggested to be involved in the activation of those three signaling pathways, respectively.
Concerning the effects of HN on mitochondrial biogenesis in pancreatic β-cells, HN treatment in MIN6 β-cells increased the expression of peroxisome proliferator-activated receptor (PPAR) γ coactivator-1α (PGC-1α) [57], which promotes mitochondrial biogenesis by activating the expression of nuclear respiratory factor 1 (NRF-1) and mtDNA transcription factor A (TFAM) [64]. HN treatment also promoted mitochondrial biogenesis by increasing mitochondrial mass, elevating the mitochondrial DNA (mtDNA)/nDNA ratio (a reduced mtDNA copy number plays a key role in insulin resistance [65]), and increasing cytochrome B expression. Finally, HN treatment resulted in the phosphorylation of AMPK, which was involved in the induction of PGC-1α, NRF-1, and TFAM, improved mitochondrial respiration, and stimulated ATP generation, leading to a possible functional gain of the mitochondria.
In HUVECs also, HN displayed protective action against high-glucose-induced endothelial dysfunction and macrovascular complications [66]. HN treatment promoted the expression of Krüppel-like factor 2 (KLF2), a principal transcriptional regulator of endothelial function, by activating ERK5. In addition, HN significantly reduced the expression of vascular cell adhesion molecule 1 (VCAM-1) and E-selectin, which regulate the adhesion of circulating leukocytes to the endothelium, a principal step in the initiation of atherosclerosis. Furthermore, HN impeded the secretion of pro-inflammatory cytokines, such as TNF-α and IL-1β.
Human subjects research and clinical trials
The first attempt to measure HN levels in a clinical population with impaired fasting glucose (IFG) was made in participants attending a diabetes complications screening clinic (DiabHealth) [67,68]. Previous clinical studies reported noticeably increased HN levels in patients with mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) and chronic progressive external ophthalmoplegia (CPEO), which are associated with excess oxidative stress [69,70]. However, a significant reduction (P = 0.0001) in HN was reported in the IFG group (n = 23; 124.3 ± 83.91 pg mL−1) compared to control (n = 58; 204.84 ± 92.87 pg mL−1), in accord with an adaptive cellular response by HN to a slight rise in fasting blood glucose level (BGL). As described above, HN protects neuroendocrine β-cells [44] and increases glucose tolerance and insulin sensitivity [20,44]. Moreover, it is considered to interact with hydrogen peroxide and α-actinin-4, which rise during oxidative stress and IFG [71-73], and binds extracellularly to the CNTFR/WSX-1/GP130 receptor [69,74,75]. Interestingly, mild to moderate levels of ROS result in positive adaptive mechanisms of the mitochondria [76]. All these mechanisms, which benefit cell function and survival, lead to a reduction in HN levels, indicating a protective role of HN. However, with disease progression to T2DM and further oxidative stress, mitochondria may upregulate HN levels, as observed in studies of Alzheimer's disease and in those of MELAS and CPEO.
These conditions are related to extensive oxidative stress, which is also a key feature of DM. In particular, hyperglycemia causes extended free radical activity and mitochondrial dysfunction, which induce oxidative stress and release more ROS [76]. The advanced diseases MELAS and CPEO are associated with increased plasma HN levels; HN has a protective role and is upregulated with disease progression. On the contrary, minor elevations of blood glucose levels are combined with a decrease in HN concentrations, which supports the protective role of HN when its levels are expected to decrease as a result of stimulation of oxidative stress-associated agents that are inhibited by HN. A few years earlier, another group from Toronto reported that plasma HN levels were significantly higher in T1D men compared with healthy control men (P < 0.0001) [77].
At the end of 2018, Ma et al [78] evaluated HN concentrations in pregnant women with and without gestational diabetes mellitus (GDM), aiming to define the role of HN in the development of GDM. A total of 157 women were enrolled in the study. Serum HN levels were significantly lower in women with GDM compared to controls. Like Lee et al [21], who found that HN was regulated by IGF-1 in mice and humans, they suggested that the IGF axis influenced HN levels and affected its normal function in GDM. By performing logistic regression analysis, they also showed that low HN levels were an independent risk factor for GDM and, therefore, might be a predictor for the GDM diagnosis. Additionally, HN levels were significantly negatively correlated with body weight, body mass index (BMI), and the homeostatic model assessment for insulin resistance (HOMA-IR).
The most recent study attempting to evaluate MDP levels in normal, prediabetes, and diabetes subjects enrolled 225 participants [49]. The investigators found that serum HN concentrations are lower in T2DM (P < 0.0001) and correlate with HbA1c. Interestingly, HN levels decreased by 62% in the prediabetes group, 66% in diabetes subjects with good control, and 77% in uncontrolled diabetes patients compared to participants without diabetes. This study also confirmed that there are no significant differences in HN levels between healthy men and women and that HN levels were not affected by the different anti-diabetic treatments (insulin, metformin, other hypoglycemic regimens) or the duration of therapy. Furthermore, adiponectin levels, which decrease in prediabetes and T2DM [79], were positively correlated with HN. It has also been demonstrated that adiponectin knockout mice have reduced mitochondrial content combined with insulin resistance [80], and low adiponectin may impair mitochondrial biogenesis [81]. Therefore, impaired mitochondrial function, which contributes to glycemic dysregulation and metabolic derangement in T2DM, may arise from low adiponectin levels.
As for changes in HN levels with ageing, Voigt et al [67] showed that HN decreased with age among individuals attending a diabetes complications screening clinic, an observation consistent with a previous study in humans and mice [23] and suggestive of a protective function of HN. On the contrary, circulating levels of HN increase in age-associated diseases such as T2DM; with disease progression and additional oxidative stress, mitochondria may increase HN levels.
Besides the initial and principal lifestyle interventions for glycemic control in DM, various oral and injectable pharmacologic agents are currently at our disposal, including metformin, thiazolidinediones, sulfonylureas, glucagon-like peptide 1 (GLP-1) receptor agonists, dipeptidyl-peptidase 4 (DPP-4) inhibitors, sodium-glucose co-transporter 2 (SGLT-2) inhibitors, and insulin [82]. These medicines can be administered in various dosages and in many combinations in each patient diagnosed with DM. However, there is still room for additional new agents that could efficiently contribute to the management of the disease. Given HN's protective properties, it may represent a novel treatment option to decrease the cellular damage caused by diabetes, and altered HN levels in diabetes could serve as a potential biomarker. Nevertheless, no clinical trials investigating the effects of administering HN or its analogues (e.g., HNGF6A) have thus far been published, although such trials would be an innovative and promising step in diabetes prevention and treatment.
CONCLUSION
In summary, HN shows cytoprotective effects in many biological processes, including oxidative stress and apoptosis. Altered HN levels could serve as a potential biomarker in prediabetes and T2DM, since they seem to be an effect of, or a response to, the increased ROS production, oxidative stress, and reduced mtDNA copy number that all contribute to IR [83]. However, further study is needed to define the role of age and other modifiable confounding factors, such as fitness level, adiposity, and other metabolic comorbidities (e.g., CVD, stroke, inflammation). Undoubtedly, the major question is whether HN could be used as a potential therapeutic option for diabetes that could even replace current diabetes mellitus treatment strategies in the future. Towards this direction, further studies are needed to identify the contribution of HN to the metabolic dysregulation of T2DM.
Involvement of PKMζ in Stress Response and Depression
The stress system in the brain plays a pivotal role in protecting humans and animals from harmful stimuli. However, excessive stress causes maladaptive changes to the stress system and can lead to depression. Despite the high prevalence of depression, treatment options remain limited. PKMζ, an atypical PKC isoform, has been demonstrated to play a crucial role in maintaining long-term potentiation and memory. Recent evidence shows that PKMζ is also involved in the stress response and depressive-like behavior. In particular, it was demonstrated that stress resulting in depressive-like behavior decreased the expression of PKMζ in the prefrontal cortex, an effect that could be reversed by antidepressants. Importantly, modulation of PKMζ expression could regulate depressive-like behaviors and the actions of antidepressants. These data suggest that PKMζ could be a molecular target for developing novel antidepressants. Here, I review advances in understanding the role of PKMζ in mediating the stress response and its involvement in the development of depression.
INTRODUCTION
Stress is a common life experience that we may come across almost daily. Humans and animals rely on the stress system in the brain to react and adapt to various stressful events. Appropriate responses to stress are essential for survival when facing life-threatening conditions (Godoy et al., 2018). However, if a particular stress imposes a burden beyond what a subject can bear, it results in maladaptive changes to the stress system in the brain, which then leads to or triggers the occurrence of many psychiatric disorders, such as depression, also known as major depressive disorder (de Kloet et al., 2005; Wohleb et al., 2016; Godoy et al., 2018). According to the 2020 National Survey on Drug Use and Health (NSDUH), about 6.7% of adults in the United States aged 18 and older suffer from depression. Despite this high prevalence, the treatment for depression remains limited.
The treatment strategy for depression includes pharmacological intervention, psychotherapy, and a combination of the two. Pharmacological intervention, such as selective serotonin reuptake inhibitors (SSRIs), is usually required for patients with moderate and severe symptoms (Davidson, 2010). Generally, pharmacological intervention is more acceptable and widely used, especially in countries and regions where psychotherapy is unavailable. However, current antidepressants have many limitations. Most currently available antidepressants require weeks of treatment before providing clinical benefits (Insel and Wang, 2009). Furthermore, depressive symptoms usually last long-term, even lifelong, for many patients, for whom daily treatment is generally required. In the past decades, emerging studies have investigated new systems and molecular targets that do not belong to the traditionally targeted monoamine systems, such as the serotoninergic and norepinephrinergic systems, in the hope of developing novel antidepressants (Lener et al., 2017; Shinohara et al., 2021). Recent evidence shows that PKMζ, an atypical PKC isoform that plays a pivotal role in the maintenance of long-term potentiation (LTP), may participate in the development of depression and might be one of the critical targets mediating the actions of antidepressants, suggesting that PKMζ might be a potential target for the treatment of depression. Here, I review advances on the role of PKMζ in regulating brain function and its involvement in the pathology of depression.
PKMζ MAINTAINS LONG-TERM POTENTIATION AND STRESS-RELATED MEMORY
PKMζ is an isoform of protein kinase C (PKC), an enzyme family that phosphorylates serine/threonine residues (Osten et al., 1996). There are various isoforms of PKC, including conventional isoforms (α, βI, βII, and γ), novel isoforms (δ, ε, η, and θ), and atypical isoforms (ζ and ι). PKMζ is the constitutively active form of PKCζ, possessing only the ζ catalytic domain but not the regulatory domain (Hernandez et al., 2003). PKMζ is widely expressed in many brain regions, including the hippocampus, prefrontal cortex (PFC), thalamus, striatum, and other regions (Naik et al., 2000). A pioneering study by Sacktor et al. (1993) found that PKMζ was increased during the maintenance of LTP, which first linked PKMζ to LTP. It was further shown that the protein synthesis inhibitors anisomycin and cycloheximide reversed the maintenance of hippocampal LTP and prevented the increase in PKMζ (Osten et al., 1996), suggesting that PKMζ is newly synthesized during LTP (Figure 1). A following study indicated that the de novo synthesis of PKMζ during LTP requires many protein kinases, including phosphoinositide 3-kinase (PI3K), Ca2+/calmodulin-dependent protein kinase II (CaMKII), mitogen-activated protein kinase (MAPK), protein kinase A (PKA), mammalian target of rapamycin (mTOR), and preexisting PKMζ (Kelly et al., 2007). To determine the causal role of PKMζ in the maintenance of LTP, the Sacktor group synthesized the selective ζ-pseudosubstrate inhibitory peptide (ZIP) and showed that ZIP selectively prevented the maintenance of LTP without affecting baseline EPSPs in vitro (Ling et al., 2002; Serrano et al., 2005). Thus, these data strongly suggest that PKMζ is essential for the maintenance of LTP.
Since the publication of the landmark report on hippocampal LTP by Bliss and Lomo in 1973, extensive studies have investigated this particular form of synaptic plasticity (Bliss and Lomo, 1973; Nicoll, 2017). Although many molecules had been reported to participate in LTP induction, such as CaMKII and PKA, as mentioned above, little was known about the mechanism underlying the maintenance of LTP (Lisman et al., 2012; Herring and Nicoll, 2016). Thus, when the selective involvement of PKMζ in LTP maintenance was revealed, it soon attracted much attention from many researchers, especially those studying the mechanisms underlying learning and memory, since LTP has been widely accepted as one of the primary cellular mechanisms underlying learning and memory (Lynch, 2004). In the last two decades, a large number of studies have reported a critical role of PKMζ in the storage of memory. For example, it was shown that the PKMζ inhibitor ZIP disrupted the maintenance of hippocampal LTP in vivo as well as abolished long-term memory in an active place avoidance task in rats (Pastalkova et al., 2006). Subsequent studies indicated that the disruptive effects of ZIP on memory are consistent across many memory tasks, including spatial memory, recognition memory, and aversive and appetitive memories, which suggested that PKMζ could be a common mechanism underlying the storage of long-term memories (Pastalkova et al., 2006; Serrano et al., 2008; Migues et al., 2010).
FIGURE 1 | PKMζ maintains long-term potentiation. Long-term potentiation (LTP), once triggered, could last for hours or even weeks. Many protein kinases such as CaMKII, mTOR, PKA, MAPK, and preexisting PKMζ are required for the early LTP. Reactivation of these protein kinases then induces translation of PKMζ from its mRNA. The newly synthesized PKMζ then maintains the membrane expression of GluR2, which is essential for the maintenance of LTP. CaMKII, calcium/calmodulin-dependent protein kinase II; GluR2, ionotropic glutamate receptor AMPA type subunit 2; MAPK, mitogen-activated protein kinase; mTOR, mammalian target of rapamycin; PKA, protein kinase A; PKMζ, protein kinase Mζ; ZIP, ζ-inhibitory peptide.
Notably, PKMζ was demonstrated to maintain stress-related memory. This is primarily supported by numerous studies showing that PKMζ is essential for the maintenance of fear memory induced by footshock stress (Serrano et al., 2008; Kwapis et al., 2009, 2012; Migues et al., 2010; Parsons and Davis, 2011; Xue et al., 2015; Oliver et al., 2016; Schuette et al., 2016; Marcondes et al., 2021). It was shown that microinjection of the PKMζ inhibitory peptide ZIP into the basolateral amygdala (BLA) reduced the retention of cued fear memory (Serrano et al., 2008; Zhang et al., 2019), indicating that PKMζ in the BLA is a key molecule for maintaining fear memory. Consistent with this, intra-BLA injection of ZIP also disrupted footshock-derived inhibitory avoidance memory (Serrano et al., 2008). Another study showed that virus-mediated expression of PKMζ in the prelimbic cortex of the PFC enhanced fear memory, suggesting that PKMζ in the PFC is also involved in fear memory (Xue et al., 2015). Furthermore, PKMζ has been demonstrated to regulate not only newly formed but also remote fear memory (Sacco and Sacchetti, 2010). Since fear memory has been widely taken as an animal model of post-traumatic stress disorder (PTSD) (Bienvenu et al., 2021), approaches that affect the expression of PKMζ might be promising strategies to treat traumatic stress-related diseases. Nevertheless, the role of PKMζ in maintaining memory has been critically questioned in the last decade. In 2013, two independent groups reported no memory loss or LTP disruption in PKMζ knockout animals, which provided direct evidence that PKMζ might not be essential for memory or LTP (Lee et al., 2013; Volk et al., 2013). In addition, some studies showed that ZIP, which was widely used as a selective PKMζ inhibitory peptide, was not specific at all (see discussion below). Up to this point, the role of PKMζ in stress-related memory remains in debate and needs further investigation to be fully understood.
PKMζ PARTICIPATES IN STRESS RESPONSE, ANXIETY, AND DEPRESSION
Extensive studies have demonstrated that stress, a crucial factor affecting synaptic plasticity, has dramatic influences on LTP (Peters et al., 2018). Stress could cause impairment or enhancement of LTP, depending on various parameters of the experienced stress, including controllability, severity, and duration (Kim et al., 2006; Abush and Akirav, 2013; Peters et al., 2018). Generally, long-lasting and uncontrollable stress is thought to impair LTP (Kim et al., 2006). Given the importance of PKMζ in LTP maintenance, it could be inferred that PKMζ might be involved in the stress response. Consistently, several studies have shown that stress could affect PKMζ expression in the hippocampus and medial prefrontal cortex (mPFC), two critical brain areas that mediate the stress response and depression, and PKMζ in these brain regions might mediate stress-related behaviors under some conditions. However, the related reports are controversial, and the particular role of PKMζ in mediating the stress response and related disorders remains in debate.
Effects of Stress on Hippocampal PKMζ Expression
The findings on the effects of stress on PKMζ expression in the hippocampus are mixed throughout the literature. One study found that in isolated rat embryonic hippocampal neural stem cells, dexamethasone, a pharmacological treatment mimicking stress-induced glucocorticoid secretion, decreased the expression of PKMζ mRNA and protein. This regulation was specific, since dexamethasone did not affect the expression of PKCι, the other atypical PKC isoform expressed in isolated hippocampal neural stem cells (Wang et al., 2014). A recent study showed that non-human primates who experienced stress in early life showed a lifelong reduction of PKMζ in the ventral hippocampus (Fulton et al., 2021). Consistent with these findings, our recent study showed that chronic unpredictable stress (CUS) caused a reduction of PKMζ in the hippocampus (Yan et al., 2018). However, the inhibitory effect of stress on PKMζ expression is inconsistent across the literature. For example, it was shown that acute stress increased the synaptic but not cytosolic expression of PKMζ in the hippocampus, whereas chronic stress enhanced the cytosolic but not synaptic expression (Zanca et al., 2015). Single-prolonged stress (SPS), a behavioral paradigm mimicking the development of PTSD, also increased PKMζ expression in the hippocampus of rats 7 and 14 days after the stress treatment (Ji et al., 2014). Ji et al. (2014) further showed that intra-hippocampal microinjection of ZIP reduced SPS-induced depressive-like behavior in the forced swimming task and anxiety-like behavior in the open field test and elevated plus-maze. In contrast, another study showed that the synaptic PKMζ level in the hippocampus was not altered in rats after social defeat stress, a behavioral model of depression based on social motivation (Iniguez et al., 2016). Since different types of stress were used in these studies, the discrepancy among them might suggest that stress type is an essential factor determining the effects of stress on hippocampal PKMζ expression. Furthermore, it should be noted that some studies examined the cytosolic expression of PKMζ whereas others examined synaptic PKMζ, which might be another factor underlying the discrepancy (Iniguez et al., 2016). Therefore, these studies suggest that stress could affect hippocampal PKMζ expression; however, the effects may be influenced by many factors, such as the stress paradigm and the subcellular compartment examined (cytosolic vs. synaptic).
PKMζ in the Medial Prefrontal Cortex Is Negatively Associated With Depressive-Like Behaviors
So far, only one study has investigated the role of PKMζ in the medial prefrontal cortex (mPFC) in mediating depressive-like behaviors (Yan et al., 2018). The study showed that PKMζ in the mPFC was decreased in two behavioral models of depression, i.e., CUS and learned helplessness (Figure 2; Yan et al., 2018). CUS did not change PKMζ expression in the orbitofrontal cortex, a brain region adjacent to the PFC, indicating that the PFC was the particular brain site where CUS affected PKMζ expression. Notably, CUS did not alter the expression of other PKC isoforms, including PKCα, β, θ, or λ, in the PFC, suggesting that PKMζ is the unique PKC isoform influenced by CUS (Yan et al., 2018).
The causal role of PKMζ in the mPFC in depression has been implicated by studies using the selective PKMζ inhibitory peptide ZIP and viruses that overexpress PKMζ or express a dominant-negative mutant PKMζ (Yan et al., 2018). Intra-mPFC microinjection of ZIP enhanced stress-induced depressive-like behavior in both the chronic stress and learned helplessness models (Yan et al., 2018). Because of the non-specific inhibition of ZIP on PKMζ (see details discussed below), it is hard to conclude whether PKMζ in the mPFC regulated depressive-like behaviors based only on the effects of ZIP. To confirm the role of PKMζ, Yan et al. (2018) further showed that virus-mediated expression of PKMζ in the mPFC reversed CUS- and learned helplessness-induced depressive-like behaviors as well as the CUS-induced reduction in spine density and mEPSC frequency. In contrast, virus-mediated dominant-negative mutant PKMζ, which could competitively inhibit the function of endogenous PKMζ, facilitated subthreshold CUS- and learned helplessness-induced depressive-like behaviors (Figure 2; Yan et al., 2018). Unlike ZIP, virus-mediated expression of PKMζ or the dominant-negative mutant PKMζ can specifically regulate PKMζ expression or activity; thus, this study provided solid evidence that PKMζ in the mPFC mediated the development of depression.
Antidepressants Increase PKMζ in Both the Hippocampus and Medial Prefrontal Cortex
Some evidence has shown the involvement of PKMζ in the actions of antidepressants. The selective 5-HT reuptake inhibitor fluoxetine could increase PKMζ expression and prevent dexamethasone-induced downregulation of PKMζ in isolated hippocampal neural stem cells (Wang et al., 2014). Importantly, PKMζ mediated fluoxetine-induced neurogenesis and signaling activation (Wang et al., 2014). These in vitro findings are consistent with our recent in vivo study, in which we showed that both fluoxetine and desipramine, a tricyclic antidepressant, reversed the CUS-induced reduction in PKMζ expression in the mPFC. As mentioned before, antidepressants, including fluoxetine and desipramine, require several weeks of treatment to exert their antidepressant actions. In contrast, the NMDA receptor antagonist ketamine has been shown to exert fast-acting and long-lasting antidepressant action. It has been demonstrated that ketamine could rescue chronic stress-induced molecular changes, morphological alterations of neurons, and microcircuit dysfunction in the prefrontal cortex (Li et al., 2010, 2011; Moda-Sava et al., 2019). Intriguingly, ketamine prevented the CUS-induced downregulation of PKMζ in the PFC, and PKMζ was necessary for the antidepressant action of ketamine in the learned helplessness model. These findings demonstrate that PKMζ is a critical and common target mediating the actions of both slow-acting and fast-acting antidepressants.
PKMζ Positively Mediates Anxiety-Like Behaviors
Generally, anxiety is characterized by a persistent feeling of apprehension or dread and is a specific reaction to stress. A great variety of behavioral models has been developed to mimic anxiety disorders (Kumar et al., 2013). It is not uncommon that patients with depression also suffer from anxiety disorders (Kaiser et al., 2021). Besides fear conditioning, as described above, PKMζ is also involved in other anxiety-like behaviors. Microinjection of ZIP into the hippocampus alleviated anxiety-like behavior in rats after SPS, a paradigm used to trigger PTSD-like symptoms in animals (Ji et al., 2014). In another animal model of PTSD, PKMζ in different brain regions was shown to exert a time-dependent role in storing traumatic memory and mediating anxiety-like behaviors in rats exposed to predator scent stress: injection of ZIP into the dorsal hippocampus 1 h after predator scent stress exposure disrupted anxiety-like behavior and the trauma cue response 8 days later, whereas intra-insular cortex injection of ZIP 10 days after predator scent exposure showed a similar effect (Cohen et al., 2010). This suggested that PKMζ in the dorsal hippocampus and insular cortex might regulate different stages of anxiety disorders. In a valproic acid (VPA) model of autism, mice with VPA injection showed a higher level of PKMζ in the BLA, and injection of ZIP into the BLA decreased anxiety-like behavior in the VPA-injected mice (Gao et al., 2019). In another study, microinjection of ZIP into the anterior cingulate cortex reversed pain-induced anxiety-like behavior (Du et al., 2017). These studies suggested that PKMζ in many brain regions could be a common molecule that maintains different traumatic memories and mediates anxiety-like behaviors. In addition, the anxiolytic effects of ZIP do not appear to depend on the type of stress used to trigger the anxiety-like behavior.
Consistent with the role of PKMζ in mediating stress-induced anxiety-like behaviors, PKMζ has been demonstrated to regulate the basal level of anxiety. Genetically modified mice lacking both PKCζ and PKMζ showed reduced anxiety behavior (Lee et al., 2013). In contrast, virus-mediated overexpression of PKMζ in the BLA of wild-type mice increased anxiety-like behavior (Gao et al., 2019). However, although virus-mediated overexpression of PKMζ in the prelimbic cortex enhanced fear memory, this intervention showed no effect on basal anxiety-like behavior evaluated by the open field test and elevated plus-maze task (Xue et al., 2015). These studies might suggest that the BLA, but not the prelimbic cortex, is crucial for basal anxiety behavior.
POTENTIAL MECHANISMS OF PKMζ IN REGULATING DEPRESSION
Evidence has illustrated the molecular mechanism underlying the role of PKMζ in maintaining LTP (Sacktor, 2011). In hippocampal slices, perfusion of PKMζ resulted in a robust potentiation of AMPAR-mediated excitatory postsynaptic currents (EPSCs), which could be blocked by the non-NMDA glutamate receptor antagonist CNQX (Ling et al., 2002), suggesting that PKMζ is sufficient to potentiate AMPAR- but not NMDAR-mediated currents. Further studies have proposed particular processes of LTP initiation and maintenance (Sacktor, 2011): (1) in the initiation of LTP, NMDA receptors are activated, which then results in the reactivation of multiple protein kinases that are essential for the removal of the translational block of PKMζ synthesis; (2) the de novo synthesized PKMζ is then converted into a conformation with constitutive activity after phosphorylation by phosphoinositide-dependent protein kinase 1 (PDK1); (3) the constitutively activated PKMζ increases N-ethylmaleimide-sensitive factor (NSF)/glutamate receptor 2 (GluR2)-dependent trafficking of AMPARs and maintains AMPAR expression at postsynaptic sites to potentiate synaptic transmission.
PKMζ was shown to be able to phosphorylate and inhibit PIN1 (protein interacting with NIMA1), a prolyl isomerase, which has the capacity for suppressing the translation of PKMζ from mRNA (Westmark et al., 2010). This self-perpetuating mechanism of PKMζ translation in synapses thus explained the maintained high levels of PKMζ and its activity, which is required for maintaining synaptic plasticity. As a critical subunit of AMPARs, GluR2 is crucial for AMPAR assembly and trafficking and determines the property of Ca 2+ permeability and function of AMPAR (Isaac et al., 2007). It is of interest that the Ca 2+ permeable AMPAR, which has been revealed to play an important role in short-term and long-term synaptic plasticity, contains unedited GluR2 or lacks GluR2 (Isaac et al., 2007). As described above, PKMζ regulates synaptic plasticity and LTP via maintaining the membrane GluR2 expression, presumably increasing the membrane expression of GluR2-containing AMPAR (Sacktor, 2011). A hypothesis could be that the GluR2 subunit composition of AMPAR switches between the initiation and maintenance of the LTP, and PKMζ might be essential for this switching (Liu and Cull-Candy, 2000); however, this requires further investigation to be determined.
Regulation of GluR2 trafficking through the interaction between PKMζ and NSF/GluR2 might also be the mechanism underlying the role of PKMζ in the PFC in depression. Yan et al. (2018) showed that CUS and learned helplessness stress reduced the PKMζ level and the synaptic expression of GluR2 in the mPFC, which could be reversed by virus-mediated expression of PKMζ. Consistently, virus-mediated expression of the dominant-negative PKMζ facilitated a subthreshold chronic stress-induced decrease in GluR2 in the mPFC (Yan et al., 2018). These studies may suggest that, even though different stress affects PKMζ expression differently in distinct brain regions, PKMζ and GluR2 levels change in parallel after a particular stress within a given brain region. However, the causal role of GluR2 in mediating the function of PKMζ under stress conditions remains unclear. Elucidation of this issue may uncover the mechanism of PKMζ in response to stress and stress-related disorders.
Selectivity of Approaches That Modulate PKMζ Activity
As described above, most work that supports the fundamental role of PKMζ in maintaining LTP and related behaviors used ZIP as a selective inhibitor of PKMζ. However, other studies have suggested that ZIP might not be an appropriate inhibitor of PKMζ.
Some evidence shows that ZIP may not be able to inhibit PKMζ. In cultured 293T cells expressing PKMζ, ZIP could not reverse the PKMζ overexpression-induced increase in the phosphorylation of multiple PKC substrates. In COS-7 cells co-transfected with CKAR and PKM-RFP, ZIP did not affect the baseline normalized FRET ratio. Furthermore, ZIP did not affect MAPK2 activity in brain slices transfected with PKMζ. The authors concluded that ZIP could not inhibit PKMζ (Wu-Zhang et al., 2012). However, the protocol used in that study could lead to a 30-fold increase in PKMζ expression, which was beyond the inhibitory capacity of ZIP. Yao et al. (2013) demonstrated that ZIP is a competitive inhibitor of PKMζ and could be ineffective in inhibiting an excessively high level of PKMζ. In addition, ZIP inhibited the PKMζ-induced enhancement of AMPAR potentiation but not baseline AMPAR-mediated EPSCs mediated by other cellular molecules, suggesting that ZIP could selectively suppress the function of PKMζ (Yao et al., 2013).
Other studies suggested that ZIP is not selective for PKMζ. ZIP at a concentration of 10 µM could inhibit the activity of both PKCα and PKMζ (Bogard and Tavalin, 2015). ZIP also disrupted the ability of PKC to bind to AKAP79. Since AKAP79 interacts with PKCα via a pseudosubstrate-like mechanism, ZIP might exert its effects through the displacement of PKCα from targeted sites (Bogard and Tavalin, 2015). In a recent study, both ZIP and its control peptide scr-ZIP caused GluR1 redistribution in HEK293 cells expressing GluR1 (Bingor et al., 2020). Of note, HEK293 cells do not express PKMζ. The same study further showed that the effects of ZIP on AMPAR function were mediated by NOS signaling, which suggested that NOS signaling rather than PKMζ is the key target of ZIP (Bingor et al., 2020). In a physiological situation, both ZIP and scr-ZIP could decrease AMPAR EPSCs in nucleus accumbens (NAc) brain slices. Consistent with this, both ZIP and scr-ZIP disrupted cocaine-induced conditioned place preference (CPP), a reward memory task that requires the function of the NAc (Bingor et al., 2020).
Evidence also suggested that the effects of ZIP could be attributed to ZIP-induced cellular toxicity. One study showed that both ZIP and scr-ZIP dose-dependently caused rapid death of cultured hippocampal neurons (Sadeh et al., 2015). These effects might be due to the spontaneous activity and sustained increase in Ca2+ activity induced after application of ZIP and scr-ZIP in cultured hippocampal cells (Sadeh et al., 2015). The fact that ZIP could lead to detrimental hyperactivity in cultured hippocampal neurons suggested that ZIP is excitotoxic to neurons (Sadeh et al., 2015). In contrast, one study indicated that ZIP could lead to neural silence in the hippocampus in vivo (LeBlancq et al., 2016). LeBlancq et al. (2016) recorded the local field potential (LFP) from the CA1 subarea of the hippocampus with infusion of ZIP directly into the recording area. Strikingly, they found that ZIP caused a profound inhibition of the LFP comparable in magnitude to that induced by lidocaine, a sodium channel blocker; the duration of LFP inhibition by ZIP was even longer than that of lidocaine. Although these two studies reported contradictory findings that ZIP excited or inhibited neural activity, it is possible that ZIP-induced inhibition of the LFP might be a consequence of ZIP-induced excitotoxicity (Patel and Zamani, 2021).
Given these critical concerns about ZIP, i.e., its ineffectiveness in some conditions, non-specificity, and neurotoxicity, caution is warranted when interpreting results obtained with ZIP. In particular, ZIP should not be taken as a specific inhibitor of PKMζ. Approaches other than ZIP should be employed to determine the causal role of PKMζ in regulating brain functions and related behaviors. These approaches may include virus-mediated downregulation or expression of PKMζ in particular brain regions or subtypes of cells. For example, previous studies have shown that virus-mediated overexpression of PKMζ, or of the dominant-negative mutant PKMζ that competitively inhibits PKMζ activity, in particular brain regions could modulate animal behaviors (Shema et al., 2011; Xue et al., 2015; Yan et al., 2018).
PKMζ Might Be a Maintenance Mechanism for Depression
Depression is a brain disorder characterized by persistently depressed mood and loss of interest. Patients usually need to take antidepressants continually to maintain the treatment benefit. Furthermore, some patients may stop benefiting from particular antidepressants after long-term treatment (Anderson, 2013). At least to some extent, these phenomena suggest that currently available antidepressants only transiently suppress the symptoms of depression but do not directly affect the maintenance of depression. Theoretically, a medicine that directly influences the maintenance of depression may permanently reverse the related maladaptive changes and cure depression. Since PKMζ has been implicated in the maintenance of synaptic plasticity and memory (Sacktor, 2011), it is presumed that the stress-induced reduction in PKMζ in the PFC may result in a persistent dysfunction of this brain region, which could be a mechanism underlying the persistence of depressive-like symptoms. However, several questions should be addressed before concluding a role for PKMζ in the persistence of depressive-like behaviors. For example, it remains unknown how long the stress-induced reduction in PKMζ lasts. Another interesting question is whether PKMζ is a molecule that mediates the sustained effects of antidepressants.
Since PKMζ has long been taken as a memory molecule, it is of great interest to examine whether PKMζ could be a link between depression and memory problems. Patients with depression usually suffer short-term memory loss and are at risk of long-term memory loss (Dillon and Pizzagalli, 2018). As PKMζ in the mPFC is crucial for both depressive-like behavior and memory maintenance, it could be presumed that the depressive-like behavior-associated reduction in PKMζ expression in the PFC might underlie the memory problems found in patients with depression. Addressing this issue may shed light on the relationship between stress-induced cognitive dysfunction and the development of depression.
Furthermore, depression is a complicated disease that involves many brain regions (Nestler, 2015; Hare and Duman, 2020). As mentioned above, stress can increase PKMζ expression in the hippocampus but decreases it in the mPFC, indicating that PKMζ in the hippocampus and mPFC may be involved in the stress response and depression in distinct ways. Future studies are needed to address the role of PKMζ in different brain regions in regulating depression and the actions of antidepressants.
Is PKMζ Involved in Stress Resilience?
Since PKMζ in the PFC is negatively associated with depression symptoms, it could be predicted that pharmacological or behavioral approaches that elevate PKMζ expression in the mPFC might lead to stress resilience and protect subjects from the detrimental consequences of stress, thus preventing the development of stress-induced depression. Yan et al. (2018) showed that even though antidepressants prevented CUS- or learned helplessness-induced reduction in PKMζ expression in the PFC, they did not influence PKMζ expression in non-stressed animals. Virus-mediated expression of the dominant-negative mutant PKMζ in the PFC did not by itself induce depressive-like behaviors. These results may suggest that basal PKMζ activity in the PFC is not critical for depressive-like behaviors. However, it would be interesting to examine whether virus-mediated expression of PKMζ or its dominant-negative mutant would influence resilience or susceptibility in responding to stress.
CONCLUSION
Recent evidence shows that PKMζ is involved in stress response and depressive-like behaviors. PKMζ in the PFC could be a common molecule that mediates the actions of slow-acting and fast-acting antidepressants. However, the role of PKMζ in mediating stress response and depression remains largely unknown, which needs further investigation. Addressing this issue will determine whether PKMζ could be a therapeutic target for developing novel antidepressants.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
FUNDING
This work was supported by a grant from the National Natural Science Foundation of China (grant no. 81601163).
Effect of fouling, thermal barrier coating degradation and film cooling holes blockage on gas turbine engine creep life
Gas turbines are sometimes operated in very hostile conditions due to service exigencies. These environments are characterized by degradation modes such as fouling, thermal barrier coating degradation and blockage of cooling holes, which affect the creep life of engines. Therefore, this paper presents a performance-based creep life estimation model capable of predicting the impact of different forms of degradation on the creep life of gas turbines. The model comprises performance, thermal, stress and life estimation sub-models.
Introduction
Gas turbines are sometimes operated in chemically aggressive and harsh operating conditions such as desert, marine, coastal and offshore environments due to service exigencies, where the working fluid (atmospheric air) is often contaminated. Under these harsh conditions, gas turbine components such as compressors, combustors and turbines, which are usually designed for total life, experience different forms of degradation due to the ingestion of contaminants, often leading to premature failure. Some of the degradation mechanisms that gas turbine components suffer in such operating environments include fouling, thermal barrier coating (TBC) degradation and plugging of cooling holes. In a bid to minimize engine component degradation due to the ingestion of these harmful contaminants, air filtration systems are used to control the quality of air entering the gas turbine intake [1]. However, complete filtration of the intake air is not feasible because of the tiny size of some of these contaminants and the associated pressure drop, cleaning requirements, filter replacement and overall increase in engine weight (aero gas turbines) [2].
However, most creep life prediction models for life-critical components are built based only on operating temperatures, stresses and Engine Operating Time (EOT), without considering the effect of some of these degradation modes. Sometimes, substantial safety factors are included to ensure failure-free operation because the predictions are not environment specific but based on empirical experience. According to Naeem et al [3], such approaches may not be accurate as they do not represent the actual operating environment of the components within gas turbines. Moreover, the methodologies and criteria used to arrive at such life predictions and recommendations are obscure. The implication is that the creep life of the component will be either overestimated or underestimated. Therefore, to enhance the prediction of the creep life of gas turbine engine components in such hostile and chemically aggressive environments, degradation modes such as fouling and TBC degradation, which contribute substantially to the failure mechanisms of engine components, should be taken into account. This paper therefore introduces a performance-based integrated creep life estimation method capable of predicting the impact of different degradation modes on the creep life of gas turbine engines. Such a model provides end users of gas turbines with a flexible creep life estimation method capable of supporting feasibility studies on the effects of environmental factors on the creep life of gas turbines. The work has been focused on the impact of hot corrosion, oxidation, fouling, TBC degradation and plugging of cooling holes on the creep life of high pressure turbine (HPT) blades. However, the effect of oxidation has been published elsewhere [4]; hence this paper focuses only on the impact of plugging of cooling holes, TBC degradation and compressor fouling on HPT blade creep life.
Methodology
An HPT blade of a two-shaft aero-derivative model gas turbine engine was selected as the creep life limiting component of the gas turbine, since it experiences the highest mechanical and thermal loads. The idea of this research is to analyze the effect of compressor fouling, TBC degradation and plugging of film cooling holes on the creep behavior of the HPT blade using a Creep Factor approach [5]. An engine performance model was created using TURBOMATCH [6], a gas turbine engine performance simulation code developed at Cranfield University. A creep life model was set up, and the data from the performance model together with information available in the open literature were used for the prediction of engine creep life. Degradation indexes ranging from 1% to 25% were used to quantify the impact of different degradation modes on the performance and creep life of the model engine. The integrated model was subsequently used to assess the impact of fouling, TBC degradation and plugging of film cooling holes on the creep life of the model gas turbine engine.
Environmental factors
The ingestion of air contaminants such as CMAS-type materials, sand/dust and salt into gas turbines can cause fouling and erosion problems in the compressor section, and TBC degradation as well as plugging of cooling holes in the turbine section. According to Santini et al [7], the ingestion of air contaminants can generate plugging of cooling holes and fusion of particles in the hot sections of gas turbines. Similarly, particles like sea salt or volcanic ash that survive passage through the compressor are carried into the airstream of the burner, where they melt due to the high temperature and deposit on the turbine blades in a molten state, thereby plugging the cooling holes and attacking the TBC. In this section, a brief description of fouling, blockage of cooling holes and TBC degradation is presented.
Fouling
Fouling is caused by the adherence of particles smaller than 10 μm (such as salt, dust, ash, smoke, carbon and mist) to airfoils and annulus surfaces [8]. Fouling often leads to changes in airfoil shape, surface roughness or airfoil inlet angles. It has been widely reported that gas turbines operating in industrial, marine, rural and agricultural environments will experience compressor fouling [9; 10]. Commercial filters can remove these particles to a reasonable extent; however, most submicron particles are difficult to remove due to their tiny size. Compressor fouling reduces the compressor mass flow rate, pressure ratio and cycle efficiency, which subsequently leads to a reduction in power and an increase in heat rate [11; 12]. In a bid to recover the lost power, firing temperatures are raised, which has a detrimental effect on blade creep life.
Blockage of cooling holes
The complex design of HPT blade cooling geometry presents a serious issue, especially when the engine is operated in salty, dusty or sandy environments, because the cooling holes easily become blocked by tiny particles. Blockage of these holes means the turbine blade cooling system will not function at its optimum efficiency. The blockage of film holes reduces the film cooling efficiency, and this causes an increase in blade metal temperature which reduces the blade creep life.
Thermal barrier coating degradation
Air contaminants such as sand/dust, volcanic ash and salt are sometimes efficiently milled into fine particles during compression and subsequently carried beyond the compression section into the burner section. When the air is heated up during combustion, this debris melts and is propelled into the nozzle guide vanes (NGVs) and turbines, where it is deposited on the blade surface. The resultant effect is that the molten deposit chemically attacks the coatings on the blade surface, resulting in a gradual loss of TBC thickness. When this happens, the overall cooling effectiveness reduces, which leads to a rise in blade metal temperature. The rise in blade metal temperature causes a subsequent reduction in blade creep life.
Blade creep life assessment
To predict and analyze the impact of degradation modes such as compressor fouling, TBC degradation and plugging of cooling holes on the creep life of gas turbine engines, it was imperative to develop a creep life assessment model. Therefore, a physics-based creep life assessment model was developed based on earlier work done at Cranfield University [5; 13] and applied to the HPT first stage rotor blade. The creep life assessment model is based on an analytical approach consisting of three main sub-models: stress, thermal and life estimation models. Fig 1 shows the methodology used for the research.
Stress model
The stress model calculates the stress distribution along the blade and was developed based on the algorithm provided in [4]. The two main sources of stress considered are the stress due to the centrifugal load caused by engine rotation and that due to the gas bending moment. The variations of the blade stresses are predicted at locations along the blade span and chord. Span-wise, the blade was divided into sections at intervals of 25% of the blade span, whereas chord-wise the blade was split into three areas, namely the leading edge (LE), trailing edge (TE) and the back of the blade.
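The full stress algorithm is given in [4]; purely as a minimal runnable sketch of the span-wise centrifugal contribution (assuming a constant blade cross-section, omitting the gas bending moment, and using illustrative values rather than the paper's data), the stress at each 25%-span station could be computed as:

```python
import numpy as np

# Illustrative values only -- not taken from the paper.
RHO = 8190.0                 # blade alloy density, kg/m^3 (typical Ni superalloy)
N = 10000.0                  # shaft speed, rev/min
R_ROOT, R_TIP = 0.35, 0.42   # hub and tip radii, m

omega = 2.0 * np.pi * N / 60.0   # angular speed, rad/s

def centrifugal_stress(r):
    """Centrifugal stress (Pa) at radius r for a constant-section blade.

    sigma(r) = rho * omega^2 * (r_tip^2 - r^2) / 2, i.e. the centrifugal
    load of all blade material outboard of r divided by the (constant)
    cross-sectional area.
    """
    return 0.5 * RHO * omega**2 * (R_TIP**2 - r**2)

# Evaluate at 0%, 25%, 50%, 75% and 100% of blade span, as in the model.
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    r = R_ROOT + frac * (R_TIP - R_ROOT)
    print(f"span {frac:4.0%}: sigma_c = {centrifugal_stress(r)/1e6:6.1f} MPa")
```

As expected, the stress is highest at the root and falls to zero at the tip, which is why root sections usually limit creep life.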
Thermal model
The thermal model estimates the blade metal temperature distribution. The blade is regarded as a heat exchanger subjected to a mainstream of hot gas flow from the burner. The main considerations of the model are the cooling methods, the blade geometry, the TBC thickness, the heat transfer coefficients, the gas properties, the radial temperature distribution factor (RTDF) and the blade material. More details of the thermal model used in this paper can be found in [4].
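The thermal model equations themselves are detailed in [4]; for orientation, the standard definition of the overall cooling effectiveness ε (used in the results below) relates the blade metal temperature T_M to the gas temperature T_g and coolant temperature T_c:

```latex
\varepsilon = \frac{T_g - T_M}{T_g - T_c}
\qquad\Longrightarrow\qquad
T_M = T_g - \varepsilon\,(T_g - T_c)
```

This makes explicit why any degradation that lowers ε, such as TBC loss or blocked film cooling holes, raises the blade metal temperature.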
Life estimation model
The creep life estimation model evaluates the creep life of blades based on the estimated stress and metal temperature and displays the results as a Creep Factor. The Larson-Miller Parameter (LMP) [14] is used to evaluate the creep life of the HPT rotor blades and is expressed in Equation (1):

$P = T_M \, (C + \log_{10} t_f)$    (1)

where $t_f$ is the time-to-fracture, $T_M$ is the material temperature, $P$ is the LMP and $C$ is the parameter constant.
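As an illustrative numerical sketch (the constants below are typical textbook values, not the paper's material data), Equation (1) can be inverted to estimate time-to-fracture and thereby show how a small rise in metal temperature erodes creep life:

```python
# Illustrative values only: C ~ 20 is a commonly quoted Larson-Miller
# constant, and P is assumed fixed by the applied stress via the
# material's LMP curve (temperature in kelvin, time in hours).
C = 20.0
P = 28400.0

def time_to_fracture(T_metal_K):
    """Invert Eq. (1): t_f = 10**(P / T_M - C), in hours."""
    return 10.0 ** (P / T_metal_K - C)

t_ref = time_to_fracture(1150.0)   # assumed reference metal temperature, K
t_hot = time_to_fracture(1165.0)   # after a 15 K rise from degradation

print(f"t_f at 1150 K: {t_ref:,.0f} h")
print(f"t_f at 1165 K: {t_hot:,.0f} h  ({t_hot / t_ref:.2f} of reference)")
```

With these assumed constants, a 15 K rise roughly halves the time-to-fracture, which illustrates why the modest temperature increments reported later translate into large creep life penalties.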
Creep factor
Creep Factor (CF), a concept developed by Abdul Ghafir et al. [9], is adopted in this study to estimate the impact of actual operating conditions on the creep life consumption of gas turbine engines. It is defined as the ratio between the actual creep life and the creep life at a reference condition. The CF approach appraises the rate of creep life consumption relative to a specific operating condition chosen by the operator. The concept of CF is further explained in [5].
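Expressed symbolically (notation ours, consistent with Equation (1)):

```latex
\mathrm{CF} = \frac{t_{f,\,\mathrm{actual}}}{t_{f,\,\mathrm{ref}}}
```

so CF = 1 at the reference condition and CF < 1 indicates accelerated creep life consumption.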
Engine performance simulation and blade geometry
In order to study the impact of degradation modes on gas turbine engine creep life, a model aero-derivative gas turbine engine similar to the GE LM2500+ was created using the engine performance specification provided by [15]. TURBOMATCH [6] was used to create the engine performance model shown in Fig 2, based on the engine configuration provided in [15]. The model engine has an axial compressor driven by a compressor turbine, a combustor and a power turbine providing the power output. Using the results from the engine performance simulations and available information from the open literature, the blades of the first stage of the HPT of the model engine were sized using the constant mean diameter method [16]. The model HPT blade used for this study is described elsewhere [4].
Effect of compressor fouling on blade creep life
The integrated creep life model was used to assess the impact of compressor fouling on HPT blade creep life. Different Fouling Indexes (FI) ranging from 1% to 4% were used to examine the impact of compressor fouling on the performance and creep life of the engine. The definition of FI is shown in Table 1, and the values were chosen based on typical values used to represent compressor fouling [16, 17]. Based on the defined FI, TURBOMATCH was used to simulate the impact of compressor fouling on the model engine performance; the results are illustrated in Figs 3 and 4. The results show that compressor fouling increases the Turbine Entry Temperature (TET) and fuel flow (Fig 3), while it reduces the mass flow and efficiency (Fig 4). The TET and fuel flow increase because compressor fouling reduces blade efficiency through tip clearances, tip chord variation and increased surface roughness. These effects mean that less air passes through the compressor, reducing the pressure ratio at a given non-dimensional speed line. Because of the reduced flow capacity and efficiency, to produce a given power output at constant PCN the engine runs at a higher TET, giving rise to higher fuel flows (Fig 3) at reduced efficiencies (Fig 4). The rise in TET in turn increases the blade metal temperature and consequently reduces the creep life of the turbine blades. In comparison with the baseline condition (DP), compressor fouling increased the blade metal temperature; the results were consistent for all the points on the blade span, and the severity increased as the FI was increased. From Fig 6, it can be seen that the CF reduced as the FI was increased. For instance, at an FI of 1% the Creep Factor reduced from 1.0 to 0.61, while at an FI of 3% it further reduced to 0.21. This means that in comparison with the reference condition, denoted by 0% in Fig 6, the blades' creep life at FI of 1% and 3% has reduced by 39% and 79% respectively. As compressor fouling increases, the creep consumption therefore also increases.
Effect of film cooling holes blockage on blade creep life
The blockage of film cooling holes reduces the film cooling efficiency, causing an increase in blade metal temperature which in turn reduces the blade creep life. The influence of film cooling efficiency loss on the blade metal temperature is shown in Fig 7. The results represent a situation where the HPT blades are suffering only from turbine glazing, CMAS attack or film cooling hole blockage; the TBC and the convective efficiency are not affected and are hence not reflected in this analysis. The results show that for every percentage drop in film cooling effectiveness there is a corresponding rise in blade metal temperature. Even though each increment is marginal, the overall impact on the creep life is large. The impact of film cooling efficiency loss on the rotor blade overall cooling effectiveness, ε, and creep life was subsequently investigated using the integrated life estimation model. The results illustrated in Fig 8 show that as the film cooling efficiency reduces, the overall cooling effectiveness drops. The reduction in ε increases the blade metal temperature (Fig 7), which in turn reduces the blade creep life. For instance, at a 15% loss of film cooling efficiency from its original value, the overall cooling effectiveness ε dropped by 2% while the Creep Factor reduced from 1.0 to 0.69, representing approximately a 31% reduction in blade creep life. Blockage of film cooling holes can therefore have a significant negative effect on the blade creep life.
Effect of thermal barrier coating degradation on blade creep life
Similar to the previous cases, TBC degradation affects the HPT blade metal temperature, overall cooling effectiveness ε and creep life. The variation in blade metal temperature due to TBC degradation is illustrated in Fig 9. The result shows that for a given set of gas conditions with predetermined levels of film cooling and convective efficiencies, TBC degradation causes a rise in blade metal temperature; this trend is evident in Fig 9. The result in Fig 10 shows that a reduction in TBC thickness, through either chemical attack or erosion, reduces the blade cooling effectiveness, which in turn increases the blade metal temperature and hence reduces the blade creep life. For instance, as the TBC thickness reduced by 25% from its reference value, the overall cooling effectiveness dropped by approximately 4% while the Creep Factor reduced from 1.0 to 0.482; that is, a 25% loss of TBC thickness could halve the blade creep life. It is important to note that the internal convection cooling efficiency and the film cooling effectiveness were kept constant. The results generally show the importance of the TBC in determining the creep life of gas turbine HPT blades.
Conclusion
This paper presents a novel creep life analysis model which has been used to assess the impact of fouling, TBC degradation and plugging of film cooling holes on the creep life of HPT blades. The results show that compressor fouling, TBC degradation and plugging of cooling holes have significant detrimental effects on the creep life of the turbine blades. For instance, at FI of 1% and 3%, the HPT blade creep life reduced by 39% and 79% respectively from its reference creep life. Similarly, a 25% loss of TBC thickness from its reference value reduced the overall cooling effectiveness by approximately 4%, which subsequently reduced the Creep Factor from unity (1.0) to 0.482. This means that a 25% loss of TBC could halve the blade creep life.
Acknowledgement
This work was partly funded by the Niger Delta Development Commission of Nigeria, the support of which is gratefully acknowledged.
The Turkish Online Journal of Design, Art and Communication - TOJDAC
PRINCIPLES OF THE RULE "SIMPLE REALITY IS THE WHOLE THINGS" IN THOUGHTS OF LEADER...
The Divine, as the sole originating cause, has a simple essence, which is the very essence of His activity, and at the same time has numerous essential aspects of excellence through which He can be the cause of many effects despite His simple activity. This, while conforming to the principle of congruity, can also be explained and proved by the principles of the rule "Al-Wahid" (the rule according to which one thing can be the cause of only one thing). By adding this point to the nature of the relationship between the effects, the principles of the rule "simple reality is the whole things" naturally become necessary in the universe; that is, to believe that the Divine has a simple essence which, while remaining simple, corresponds to the essences of all of His effects, and not only one of them, owing to the degree of His existence.
1. THE CAUSE'S STATUS OF ACTIVITY
The truth is that when a thing is simple and is the cause of another thing, its essence is the very entity of being the cause, so that reason cannot analyze it into an essence and a cause; otherwise it could not be essentially simple. Therefore, the simple source which is a cause will, due to its simple essence, be the origin of something other than itself; so, inevitably, the simple cause has characters due to which it can originate a certain effect and nothing else. It is that certain character which is the real source of originating the effect, and the existence of the cause necessitates the existence of the effect. 1 Considering this point, and what was mentioned before about the necessity of the companionship of cause and effect, it can be concluded that if the cause is existentially prior to its effect, such priority is not related to its activity as the cause; it is rather related to its essence before it has actually caused an effect. 2 Here it becomes clear that whatever gives existence to other things is a true cause, and all other things are not true causes; they are preparing causes which are called "causes" only for ease of reference. For further clarification, the Leader of Theosophists has allocated a chapter to this issue in his "Four Intellectual Journeys", where he explains that the cause can take either of two positions in relation to the effect: it either influences the effect directly or it does not. If the cause does not influence the effect directly, then there has to be a medium, such as a condition, an attribute, a will, a tool, an expedience, or anything else. In this case, whatever has been considered the cause apart from the medium cannot be the true cause; the cause here would be a combination of the essence and the medium or mediums which influence the effect. Therefore, the activity of a true cause depends solely on its essence, nature, and reality, and not on anything external influencing it. According to this explanation, when it is proven that a true cause is active essentially and naturally, and on this ground we attribute decisiveness and influence to it, it is then proven that the essence of the cause requires that the effect, which is abstracted from the cause and attributed to it, has to exist. 3 Having proven that the cause is essentially active only when it is singular, and that a cause essentially has to have an effect attributed to it, it can be concluded that the effect, occurring as the essential requirement of a real simple cause, comes to existence through a simple generation. According to the Leader of Theosophists, the philosophers consider a cause to be a true cause 4 only when it gives existence to an effect; that is to say, the effect's sole attribution to the cause is existence, and it cannot be anything else, because if the cause generates anything other than existence, the concepts of reflective attribution and illuminative attribution are no longer realized, and therefore giving existence does not take place and the causality has not been real. On this ground, natural agents cannot be true causes, as they do not give existence and their sole action is to incite. 5 Therefore, the true cause gives existence to the effect only through a simple generation. A preparing cause, on the other hand, relates an existing being to a state or an act such as movement; for example, it moves a still object.
It is therefore proven that an attribute cannot be generated: firstly, because it is not simple, and a compound cannot be primarily and essentially generated; and secondly, because it would require compound generation, while a true cause gives existence only through simple generation: "generation and origination are indeed related solely to the existence of an attribute and not to the essence of an attribute." 6
2. SIMPLICITY OF THE EFFECT
Another point revealed by reflecting on the aspect and direction of the cause's influence, and on its attribution to the effect, is that the aspect and direction of influence of the cause depends on the entity of its simple existence (insofar as it is the cause and active) 7 , and this simplicity becomes evident when we separate the cause from all the things interfering with its causality and effectiveness, so that what remains is solely the existence of the causal and effective aspect of the cause. Likewise, when we separate the effect from all the things not influencing the continuity of its attribution, it becomes clear that a cause is a cause due to the entity of its essence and reality, and an effect is an effect due to its essence and reality. Therefore the effect has essentially no reality other than being attributed and secondary, and has no meaning other than being caused and dependent, without having an essence to which these meanings are attributed. 8 Simplicity of generation requires simplicity of the generated being; 9 as indicated in the discussion of the "primary being", amongst all the possible assumptions and aspects raised by a rational analysis of the existence of the effect (essence, existence, appearance, transformation of essence into existence, or any other assumption), it is only existence that can be generated by the cause, and this is due to the originality of existence and the unity of existences of the same type. On the same ground, the simplicity of the primary being is proven.
3. PRINCIPLE OF HOMOGENEITY AND THE RULE "AL-WAHID"
Another way of proving the simplicity of the directions of activity, generation, and attribution in the discussion of causality is to prove the rule "Al-Wahid". The principles of this rule suggest that a real singular cause cannot have more than one singular effect of the same direction. It thus becomes clear that a singular being is a simple being that has no plural composition in its essence; so the singular cause is a simple cause which has become a cause due to the entity of its essence, and a singular effect is a simple effect which has become an effect due to the entity of its essence. A further explanation of the rule "Al-Wahid" is that the source of the existence of the effect is nothing other than the existence of the cause, which is the entity of its essence, as discussed in detail in the discussion of the direction of activity.
On the other hand, there must be an essential similarity between cause and effect due to which a certain effect is attributed to a certain cause; otherwise, everything would be the cause of everything, and everything would be the effect of everything. Therefore, if a singular cause whose essence consists of no more than one direction generates diverse and contradictory effects that cannot be attributed to that one direction, then the essence of the cause must consist of diverse and contradictory directions, while from the beginning the cause was assumed singular and one-directional, which is against the assumption. It is therefore proven that a real singular cause will not generate more than one single effect. 10 From the above reasoning it becomes clear that, due to the necessity of essential homogeneity between cause and effect, this rule is not limited to the Divine; it includes any being which is simple in its external existence. The principle of homogeneity is thus more general, because it covers both the singular being and the singular type. Accordingly, the requirements of the rule "Al-Wahid" do not deny the attribution of diverse effects to the Divine, because such attribution does not contradict the principle of homogeneity and is rather in accordance with it. It is clear that the rule "Al-Wahid", which is based on the principle of the homogeneity of cause and effect and is included in the generality of this principle, requires comprehensive homogeneity of cause and effect in all aspects, so that the nature of the generation of a certain effect is limited to it and cannot be found in any other effect; otherwise the cause would have diverse directions of activity, which is against the assumption. 11 Therefore, if a really singular cause generates two effects, the two effects are attributed either to one same direction or to two different directions. The first hypothesis cannot be accepted, because agreeableness is mutual similarity, and mutual similarity is a type of similarity of attributes: it is a unity either between two attributes, which is called similarity, or between two attributed beings, which is called uniformity. In both cases similarity refers to real unity and homogeneity, so the two directions reduce to one direction. The second hypothesis is not acceptable either, because if the existence of the cause consists of two directions, the cause can no longer be a real cause. 12 However, an important question raised by the correctness of the principles of the rule "Al-Wahid" is how to explain the diversity of beings in the world, given that, due to the necessity of the singularity and homogeneity of cause and effect, a singular cause cannot generate more than one singular effect. A cause can generate only one effect, and therefore the existence of similar beings which are not in a causal relation is not possible for a singular type. The Leader of Theosophists' answer is: "the best answer is to say that what is generated is a singular being; but it is accompanied by other things which exist as requirements and are not generated by any generator. According to the questioner, what is generated by the Divine is solely the existence; nature, potential, and things of that kind do not need to be generated.
Therefore such qualities are primary effects and not secondary effects; so the primary effect can cause diverse effects due to its inherent diversity, and is generated by the real singular cause due to its singular existence, which has been generated primarily." 13
Analyzing the Reality of Causality
-Connector Existence
One of the most distinctive philosophical subtleties of the Leader of Theosophists is his image of the effect's mode of existence, which shows its entire state of dependence on, and attribution to, another being. In this image, he considers the existence of the effect to be the very entity of its relation to and dependence on an independent existence called the cause, and not an essence which is merely related to its cause. In other words, in the relationship between cause and effect, what rational analysis yields consists of two things and no more: one is the cause, and the other is the relation generated and determined by the cause. This relation appears, from an independent perspective, as something distinct from the existence of the cause; yet it has no independent identity of its own. The Leader of Theosophists therefore names it "connector existence", meaning an existence whose identity is the very entity of relation to its cause, and not an existence which is merely related to its cause. He explains that "existence of the effect essentially depends on existence of its cause, in a way that it cannot be imagined without existence of its cause." 14 The most accurate understanding of the nature of connector existence is provided by Scholar Tabatabaie in his account of connector existence in the discussion of propositions: in all predicative, bounding, and compound propositions there is something between the subject and the predicate which is considered a relation, and which is found neither in the subject alone nor in the predicate alone, nor independently between subject and predicate, nor between subject and non-predicate, nor between predicate and non-subject. Therefore, in such propositions there is a third being, other than subject and predicate, which is not essentially independent of them: a being which depends on subject and predicate and is not separated from them, yet is neither the entity nor a part of either of them. The existence of a distinct kind of existence named "connector existence" is thus proven. 15 Considering the above, connector existence in propositions requires an identical and essential unity between the two sides of the proposition, because its existence depends on the realization of the existence of the two sides. Connector existence, due to its dependent nature, cannot have an identity, because identity is an answer to "what is it?" and has an independent concept, while connector existences cannot have independent concepts. 16 That is to say, an identity with a substantially independent existence makes a concept in accordance with its existence, which is also indicative of its independent identity; the connector existence depends on the existence of subject and predicate, so it cannot have such a property and cannot yield an independent concept based on its existence.
-Ambiguous Essence of Connector Existence
An interesting point about connector existence is that, according to the Leader of Theosophists' clear explanation of the existence of the effect, from an independent perspective one may see only one secondary existence and no more; yet, having thought deeply about the reality of the essence of existence, we confirm Mohaghegh Sabzevari's argument that the reality of existence is conceptually known to all of us, and this much is clear, while the reality of its essence and origin is beyond human perception: "its reality is the most known of facts, while its essence is ultimately concealed." 17 With this introduction it can be concluded that it is not possible to understand the reality, state, and essence of the "appointed link": "we have evidently explained that the effect as a secondary being is related to the cause in a certain way whose essence is not known." 18
-Illuminative Attribution
Perhaps the Leader of Theosophists' best interpretation of connector existence in the position of the effect, and of its relation to the cause, is the term "illuminative attribution" 19 , which implies attribution and relation to the very entity of the cause's existence, as the rays of light are related to the sun.
In his explanation of the types of attribution, he introduces the effect's attribution to the cause as a specific type in which, if the cause appears in the world of existence, it has a kind of companion whose existence depends on it even while being accompanied by it. The existence of this companion is the very entity of the existence of the effect, and the companionship is specific to this type of attribution, like paternity: a father is inherently a father and does not become a father through others' paternity. 20 Here, what is real and original is the existence of the cause, and whatever is attributed or related to it is considered an aspect of the cause: "…whatever is called the cause is the origin, and the effect is only an aspect of it; and causality and effectiveness refer to the evolution of the cause in itself, the realization of its potentials, and not the disjunction of something independent from the cause." 21 The Leader of Theosophists, based on this definition of the existence of cause and effect, believes that the Divine is the sole real instance of primary and true existence, and that therefore nothing other than the Divine could be a primary being. All beings come to existence through their attribution to the Divine, and are thus secondary beings. The reality of effecting and generating therefore consists of the cause's granting existence to the effect while relating and attributing it to the cause (the Divine), in such a way that the effect, due to its attribution to the Divine, is capable of abstracting the concept of existence and becomes an instance for predicating "being" of it. 22 In the Four Journeys he says: "the conclusion is that philosophers and followers of the Divine philosophy believe that all the beings, including reason, carnal soul, and corporate forms are all arrays of the true light and manifestations of the eternal existence of the Divine." 23 Therefore, according to the Leader of Theosophists, attribution to the cause and being an aspect of it constitutes the entire identity of the effect; as pointed out before 24 , the effect has no essential reality other than this attribution, and its sole significance is to be the effect and subordinate of the cause, without an essence as the subject of such meanings. On the other hand, the definition of the active cause is to be the origin, source, reference, and influence, and these are all entities of His existence. Beside this, considering that the chain of causes ends in a simple real being whose light of existence is free from defects, diversities, and uncertainties, it can be concluded that this simple and real source of light, which is the Divine, is inherently active, and His existence is the source of the world of creation and order. The same fact reveals that all beings are of one origin and nature, which is the sole reality, and anything else is an aspect of it. He is the true existence, and all other things are attributed to His directions and aspects. It follows that whatever is called an existence in any way is nothing but an aspect among many aspects, and an attribution among many attributions, of the Divine. Therefore, based on what was initially said about the preliminary division of existence into cause and effect, it is eventually concluded that real existence belongs to the cause, and the effect is only an aspect of the cause. Thus causality and influence are the evolution of the cause in one of its aspects, and not the disjunction of something contrasted with it.
25 The Leader of Theosophists stresses the inherent singularity of existence and accordingly limits causality to the manifestation of aspects of the cause; meanwhile, believing in the unity of existence, he prefers attribution to causality, considers the simple existence to be the first thing attributed to the Divine, and introduces it as the first thing sourced by the Necessary Being. He does not consider the Divine's donation of existence to the first being a causal relation, because he believes that causality requires a contrast between cause and effect, which occurs only when their exclusive existences, attributed to their unchangeable entities, are taken into consideration, whereas from a unity-oriented approach to existence, absolute existence is singular and beyond all types of homogeneity, whether numerical, typical, or corporeal. 26 Given these differences between attribution and causality, it is revealed that the two concepts are similar in terms of being attributed to a source, but the source may be a cause (in the approach that does not hold existential unity) or not (in the approach that holds existential unity); attribution is therefore more general than causality. That is to say, a cause is certainly a source, but a source is not necessarily a cause. 27
-Singularity of Existence
Although the individual singularity of existence is a hallmark of theosophical teachings, it has been criticized extensively. The Leader of Theosophists has nevertheless accepted it, and tends to prove it as a requirement of the attribution-oriented approach to causality.
There are three possible readings of the singularity of existence: (1) everything is the Divine and nothing else exists; (2) anything other than the Divine is unreal, void, and illusory; (3) all things other than the Divine are manifestations and aspects of the Divine. Here we describe the reading adopted and proven by the Leader of Theosophists, and explain the difference between the bounding aspect and the causal aspect. In the causal aspect there are two realized existences: one is original and exists independently, and the other exists due to the existence of the first. The first existence is named the cause, and the second is named the effect generated by the existence of the cause; the effect's aspect of existence is defined as the causal existential aspect. In this aspect the effect has its own existence, but its existence depends on something else: it is an existence besides the existence of the cause. In the bounding aspect, by contrast, the second being has no existence other than the existence of its source; its existence is like the secondary philosophical concepts, which exist and are not merely abstract, yet whose existence is realized through the existence of their source of abstraction and not separated from it. This bounding existence is the entirety and entity of the existence of its source of abstraction, like the attributes of the Divine, which are rooted in His existence. Sometimes this bounding existence is considered only an aspect of its source of abstraction, like the powers of the soul for the soul, or beauty for the human. According to Mulla Sadra, all things other than the Divine have a bounding aspect and not a causal aspect. That is to say, unlike peripatetic philosophers such as Avicenna, he believes that no existence can stand next to the existence of the Divine, and all things exist due to His existence. Of course they are not rooted in His existence like His attributes; they are rather manifestations of His aspects. This is how the Leader of Theosophists defines the individual singularity of existence. Certainly, such a definition of the existence of all things other than the Divine is derived from, and even similar to, his definition of connector existence explained above. Generally speaking, the Leader of Theosophists argues for the individual singularity of existence in two ways. The first is to suggest that the existence of the effect is a connector to the cause, and that the entire universe is of this character; so the entire universe with all its components is the very entity of attribution to the Divine. Even the intermediate causes are of the same character, and therefore from this perspective the entire universe is a manifestation of the Divine aspects. In the second argument, the reality of existence is simple and singular, and the simple reality includes the entire universe without exception; therefore there is only one existence, and it is individual and singular. Apparently, the main issue is the graded unity of existence, in which the existence of the Divine and its aspects are seen as effects and degrees of existence, and through this graded unity the personal unity raised by the attribution-oriented approach is realized as a secondary existence.
These two are not two things or two existences; rather, one and the same reality is perceived in two different ways. Otherwise, perceiving the personal unity of existence as a specific singular existence with personal unity (although its unity is real), alongside perceiving diversities as aspects of this personal unit which are of real ranks, is a difficult thing to do in the realm of philosophy, and discussing its possibility or impossibility in detail is beyond our current discussion. Others have, in any case, researched this question. Some philosophers believe that amongst all the reasonable construals of the unity of existence, only "unity of existence and diversity of beings" can be proven, which is the same as graded unity, or unity despite diversity. Naturally the Sufis' approach to this issue, which is the unity of existence and of beings, can be established only through contemplation and not in the realm of philosophy.
4. CONCLUSION OF THE RULE "SIMPLE REALITY IS THE WHOLE THINGS"
After proving the simplicity of causality (in the three directions of cause, generation, and effect) and providing an accurate image of the existence of the effect as attributed to the cause through illuminative attribution, especially from the perspective of the personal unity of existence, in which there is no contrast between cause and effect and the entire existence consists of a single reality and its aspects, which are the sources of all other beings, the reality of existence can be seen as a singular and simple being which includes the entire universe, yet cannot be identified with any of its instances separately. Here the rule "simple reality is the whole things and nothing is independent of it" is well apprehended. That is to say, the Divine is the reality of all things, and not all things. All the things which are apparently separate from the Divine are in fact abstracted from a reality which is an aspect of the Divine existence. Therefore, as the Leader of Theosophists suggests, a simple being, despite being simple, can carry diverse meanings other than its own concept without negating its unity of essence, or the unity of the direction of its essence. 28 Mulla Sadra believes that understanding the meaning of "simple reality is all things" in such a way as to consider it a simple rational essence which includes all things is true, pleasant, and obscure; for this reason none of the preceding philosophers, not even Avicenna, has been able to apprehend it as it is. 29 In order to prove the simple reality rule, the Leader of Theosophists offers an argument by contradiction: if something is excluded from the real existence of the Divine, His existence would have to be attributed to the deprivation of that thing; otherwise the negation of a negation would occur in it, which is indeed the affirmation of the thing, because combining two contradictories is not possible. If the negation of a being occurs in the Divine existence, the Divine existence would have to consist of the reality of something and the negation of something else, and this requires composition. Even if this composition is the outcome of a rational analysis, it would be against the assumed simplicity of the Divine existence. This argument proves the inclusion of all existences in the simple reality. The Leader of Theosophists says: "When something is simple in all aspects, that thing with its entire characteristics includes all the things. Otherwise, its essence would have to consist of two qualities: one being the essence of something, and the other the negation of something else; therefore, it would be of a compound essence, though limited to rational
Adaptive Edge-Oriented Shot Boundary Detection
We study the problem of video shot boundary detection using an adaptive edge-oriented framework. Our approach is distinct in its use of multiple multilevel features in the required processing. Adaptation is provided by a careful analysis of these multilevel features, based on shot variability. We consider three levels of adaptation: at the feature extraction stage using locally-adaptive edge maps, at the video sequence level, and at the individual shot level. We show how to provide adaptive parameters for the multilevel edge-based approach, and how to determine adaptive thresholds for the shot boundaries based on the characteristics of the particular shot being indexed. The result is a fast adaptive scheme that provides a slightly better performance in terms of robustness, and a fivefold efficiency improvement in shot characterization and classification. The reported work has applications beyond direct video indexing, and could be used in real-time applications, such as in dynamic monitoring and modeling of video data traffic in multimedia communications, and in real-time video surveillance. Experimental results are included.
Introduction
Video shot boundary detection (also called video partitioning or video segmentation) is a fundamental step in video indexing and retrieval, and in general video data management. The general objectives are to segment a given video sequence into its constituent shots, and to identify and classify the different shot transitions in the sequence. Different algorithms have been proposed, for instance based on simple color histograms [1,2], pixel color differences [3], color ratio histograms [4], edges [5], and motion [6][7][8]. In this work, we study the problem of video partitioning using an edge-based approach. Unlike ordinary colors, edges are largely invariant under local illumination changes and are much less affected by possible motion in the video. To ensure robustness, we use both edge-based and color-based features under a multilevel decomposition framework. With the multiple decompositions, we can avoid the time-consuming problem of motion estimation by a careful choice of the decomposition level to operate at. Improvements in video partitioning have been recorded by performing a dynamic classification of the shots as the video is being analyzed, and then adaptively choosing the shot partitioning parameters based on the predicted class of the shot [9]. Automatic shot classification can also serve as an important step in approaching the elusive problem of capturing semantics or meaning in the video sequence (see, e.g., [14]).
We note that the problem of video shot partitioning (or segmentation) is not only relevant to video indexing and video data management (see [11][12][13][14] for discussion of video query, browsing, and video object management). It is also an important issue in other areas of video communication, such as video compression and video traffic modeling [15]. In particular, for problems such as video traffic characterization and modeling, shot-level adaptation becomes mandatory if the network is to dynamically allocate limited network resources in response to changing video data traffic.
In this work, we introduce adaptation at different stages in the video analysis process, both at the feature extraction stage and at the later stage of frame difference comparison. We propose a new method for fast shot characterization and classification required for such adaptation, using a new set of edge-based features, and we introduce a method for automated threshold selection for adaptive scene partitioning schemes. In the next section, we describe recent reported work that is closely related to our approach. Section 3 presents the multilevel edge-response vectors, the basic features we propose for video partitioning. Shot characterization and adaptation in video partitioning in the context of the edge-based features is described in Section 4. Section 5 presents results on real video sequences. We conclude the paper in Section 6.
Related Work
The first step in content-based video data management is shot boundary detection. Simply put, it is the process of partitioning a given video sequence into its constituent shots. The purpose is to determine the beginning (and/or end) of the different types of transitions that may occur in the sequence. The problem of video partitioning is compounded by the various changes that might occur in the video (say, due to illumination, motion and/or occlusion), and by the different types of shot transitions (such as fades and dissolves). The inherent variability in video shot characteristics, even for shots from the same sequence, introduces further complication. The partitioning algorithm depends on the specific features used and the similarity evaluation functions adopted. Earlier methods for video shot partitioning are described in [2][3][4][16][17][18]. See [19][20][21] for a survey.
Most approaches to video partitioning make use of the color (or gray level) information in the video. The limitations of color in video partitioning are the problems of illumination variation and motion-induced false alarms. Edge-based methods have thus been proposed to reduce the sensitivity to illumination and motion. Zabih et al. [5] made explicit use of edges in video indexing, and showed how the exiting and entering edges can be used to classify different types of shot breaks. Related methods that exploit edge information for shot detection directly in the compressed domain were proposed in [4,18,22,23]. In [4], color ratio features were proposed as an alternative to color histograms, and were used to identify different types of shot changes without decompressing the video; the motivation was that color ratios capture the color boundaries or color edges in the frames. In [18,23], methods were proposed to extract edges directly from the DCT coefficients, which can then be used for video partitioning. In [22], Abdel-Mottaleb and Krishnamachari described the edge-based information used as part of the descriptions in MPEG-7, where edge descriptors were given as 4-bin histograms, each bin corresponding to one of four directions: vertical, horizontal, left-diagonal, and right-diagonal. Other related compressed-domain methods are reported in [9,13,16].
More recent approaches to the video partitioning problem have been proposed in [9, 24-27]. Li and Lai [28] described methods for video partitioning using motion estimation, where the motion vectors are extracted using optical flow computations. To account for potential changes in the lighting conditions, the optical flow computations included a parameter to model the local illumination changes during motion estimation. Cooper et al. [25,29] proposed partitioning techniques that exploit possible self-similarity in the video, by classifying temporal patterns in the video sequence using kernel-based correlation. Li and Lee [26] studied video partitioning, with special emphasis on gradual transitions. Yoo et al. [27] studied both gradual and abrupt shot transitions, and proposed methods based on localized edge blocks. For abrupt shot boundaries they proposed a correlation-based method, based on which localized edge gradients are then used for detecting gradual shot transitions.
The need for adaptation in the video indexing process was first identified in [30] (see also [9]), where it was shown that video shots vary considerably from one shot to the other, even for shots that come from the same video sequence. It was thus suggested that the results of an indexing scheme could be improved by treating different shots differently, for instance by use of a different set of analysis parameters. Since then, there has been increasing attention to the problem. In [31], detailed experiments were carried out using television news video. It was concluded that the selection of similarity thresholds was a major problem, and hence that there is a need for adaptive thresholds to capture the different characteristics of broadcast news video. Vasconcelos and Lippman [10,32] considered the duration of video shots, and showed that the shot duration can be used to predict the position of a new shot partition, and that the shot duration depends critically on the video content. They used a statistical model of the shot duration to propose shot break thresholds. By classifying video shots in terms of shot complexity and shot duration, and then performing indexing adaptively based on the video shot classes, it was shown in [9,30] that adaptation could indeed be used to improve both precision and recall simultaneously, without introducing an intolerable amount of extra computation. Dawood and Ghanbari [15] used a similar classification to model MPEG video traffic. The problem of video indexing and retrieval is very closely related to that of image indexing. Surveys on video (and/or image) indexing and retrieval can be found in [19,21,33,34]. Video partitioning or segmentation has been reviewed in [20].
In this paper, we study the use of both color and edges in adaptive video partitioning. Our approach is distinct in its use of multilevel edge-based features in video partitioning, and in the provision of adaptation by a careful analysis of these multilevel features, based on the notion of shot variability. Adaptation is provided at three levels: at the feature extraction stage for the locally-adaptive edge maps, at the video sequence level, and at the individual shot level.
Multilevel Edge-Response Vectors
In our approach, we place emphasis on the structural information in the video, as this is generally invariant under various changes in the video, such as illumination changes, translation, and partial occlusion. Thus, in addition to the intensity values, we also make use of the edges in computing the features to be used. In particular, we use multiscale edges, since these can more easily capture localized structures in the video frames.
Multilevel Image Decomposition.
Let I(x, y) be an M_1 × M_2 image, with x = 1, 2, ..., M_1 and y = 1, 2, ..., M_2. Given I(x, y), we decompose it into different blocks. For each block, we consider its content at different scales, and compute edge-based features at each of these scales. We then use the features to compare two adjacent frames in the video sequence. For simplicity in the discussion, we assume images are square, that is, M_1 = M_2 = M, and that M = 2^p for some integer p. The ideas can easily be extended to the general rectangular image.
Let b be the number of blocks at a given decomposition level. We choose k, the level of decomposition, such that b = 1, 4, 16, ..., 2^(2k), with k = 0, 1, 2, ..., log2(M). Let s be the scale, s = 0, 1, 2, ..., S. Then, given the original image I(x, y), we can select relevant areas of the image at different scales s. Let I_s(i, j) be the sub-image selected at scale s: a block of size m_s × m_s, where m_s = (M/2)(1 + 1/2^s), taken from the starting positions (x_0^s, y_0^s), which are defined with respect to the starting positions (x_0, y_0) in I(x, y) (typically x_0 = y_0 = 0). At the lowest scale (s = 0) we have m_0 = M, so the entire image is selected. For a given decomposition level, we consider each of the m_s × m_s-sized blocks and compute the required image features. If we fix the number of scales to 1 (i.e., s = 0) at each level k, that is, at each level we select all the image positions within the block to compute the feature, then the multiscale scheme defaults to a simple multilevel representation of the image. Thus, using s = 0 with L as the maximum number of levels (i.e., k = 0, 1, 2, ..., L − 1), we will have an N-dimensional feature vector, where N = sum over k of 2^(2k) = (4^L − 1)/3. Clearly, the number of features grows quickly with increasing L (e.g., at L = 4, N = 85; at L = 5, N = 341). For the multiscale representation, we have more than one scale per level; with S scales per level (i.e., s = 0, 1, 2, ..., S − 1), we will have S · N feature values for each particular feature. In the following, we will assume a single scale (i.e., S = 1 and s = 0). Figure 1 shows a schematic diagram of an image at different levels of decomposition, and a tree representation of the individual blocks from each level.
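A minimal sketch of the single-scale (s = 0) block decomposition described above; the function name and return structure are our own choices for illustration:

```python
import numpy as np

def multilevel_blocks(image, max_level):
    """Split a square M x M image into 4**k equal blocks for each
    decomposition level k = 0 .. max_level-1 (single scale, s = 0).

    Returns {k: list of (r, block) pairs}, r being the block index.
    """
    m = image.shape[0]
    levels = {}
    for k in range(max_level):
        n = 2 ** k                    # n x n grid -> 4**k blocks
        size = m // n
        blocks, r = [], 0
        for i in range(n):
            for j in range(n):
                blocks.append((r, image[i*size:(i+1)*size,
                                        j*size:(j+1)*size]))
                r += 1
        levels[k] = blocks
    return levels

# With L = 4 levels there are 1 + 4 + 16 + 64 = 85 blocks in total,
# matching the N = 85 feature count quoted in the text.
frame = np.zeros((256, 256))
levels = multilevel_blocks(frame, max_level=4)
print(sum(len(b) for b in levels.values()))  # -> 85
```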
Edge-Oriented Features.
To reduce the possible effect of noise in the video, we first apply Gaussian smoothing to the input image before computing the edge-based features.
Given the image I(x, y), the edge gradients are defined as G_x(x, y) = ∂I/∂x and G_y(x, y) = ∂I/∂y. The gradients are obtained using appropriate edge kernels, G_x = H_x * I and G_y = H_y * I, where H_x and H_y are the horizontal and vertical gradient masks, respectively, and * denotes convolution. The gradient amplitude is given by G_A(x, y) = sqrt(G_x^2 + G_y^2), which can be approximated using the simple absolute sum G_A(x, y) ≈ |G_x| + |G_y|. The phase angle is given by G_φ(x, y) = arctan(G_y / G_x). These are calculated once for each frame, but are used at the different levels of decomposition.
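A short sketch of the per-frame gradient computation, using Sobel kernels as one possible choice for the masks H_x and H_y (the paper does not specify the kernels) and SciPy for the smoothing and filtering:

```python
import numpy as np
from scipy import ndimage

def edge_responses(frame, sigma=1.0):
    """Gradient amplitude and phase for one frame.

    Applies Gaussian smoothing first, then Sobel masks; the amplitude
    uses the absolute-sum approximation |G_x| + |G_y| from the text.
    """
    smoothed = ndimage.gaussian_filter(frame.astype(float), sigma=sigma)
    gx = ndimage.sobel(smoothed, axis=1)   # horizontal gradient
    gy = ndimage.sobel(smoothed, axis=0)   # vertical gradient
    amplitude = np.abs(gx) + np.abs(gy)
    phase = np.arctan2(gy, gx)
    return amplitude, phase
```

Both maps are computed once per frame and then sliced into blocks at each decomposition level.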
Locally Adaptive-Edge Map.
The major motivation for a multilevel approach is that certain variations in an image, such as those due to edges, are local in nature, and hence will be better captured by use of local (rather than global) information. For video in particular, this becomes very important. Although some variations (such as panning, tilting, and illumination) in the video could be global with respect to a particular frame, object motion and some other camera operations (such as zooming) are more easily modeled as local phenomena. (Note that although zooming could also be global over the video frame, the direction of the motion vectors will vary from one area of the image to the other.) We capture global information by using information from the lower levels of decomposition (smaller values of k).
With higher levels, we can obtain information about more localized structures in the frame. Such localized structures could be treated differently for improved performance.
We use locally adaptive thresholds to define the edge map at the different decomposition levels. Each block is considered using its own local threshold. For a given block r at the kth level (r = 1, 2, 3, ..., 2^(2k)), we define the edge map as E_r^k(x, y) = 1 if G_{A,r}^k(x, y) > τ_r^k, and 0 otherwise, where G_{A,r}^k is the corresponding gradient response in the rth block at level k, and τ_r^k is a local threshold. We can choose the threshold simply as the scaled mean gradient response of the block, τ_r^k = (α/(m_r^k)^2) Σ_{x,y} G_{A,r}^k(x, y), where (m_r^k)^2 is the size of the rth block at level k and α is a constant. While this approach to local thresholds is simple and conceptual, it considers each block independently of the other blocks in the frame. It might be advantageous to consider the local threshold with respect to the global image variations [35]. At a given k, we can write m_r^k = m^k for all r, since the block size is the same for every block r.
Define the overall global image threshold τ_g in terms of the global edge-response mean and standard deviation, scaled by a constant α (which can be determined empirically). The local threshold for a given block r at each level k is then obtained by adjusting τ_g with the block statistics μ_{e,r}^k and σ_{e,r}^k, which are, respectively, the edge-response mean and standard deviation for block r at level k.
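A hedged sketch of the simpler per-block thresholding variant; the exact threshold form (α times the block's mean gradient response) is our reading of the text, and α is an empirical constant:

```python
import numpy as np

def local_edge_map(amp_block, alpha=1.5):
    """Binary edge map for one block from its own local threshold:
    tau = alpha * (mean gradient response of the block)."""
    tau = alpha * amp_block.mean()
    return (amp_block > tau).astype(np.uint8)

# Each block from the multilevel decomposition gets its own threshold,
# so a few strong global gradients cannot swamp locally weak but
# genuine edges elsewhere in the frame.
block = np.random.default_rng(0).random((64, 64))
print(local_edge_map(block).mean())  # fraction of pixels marked as edges
```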
Edge-Based Features.
At a given level k, and for each given block r, we compute the following features.
(i) Color, μ_{c,r}^k, σ_{c,r}^k: color mean and standard deviation using I(x, y).
(ii) Edge response, μ_{e,r}^k, σ_{e,r}^k: edge response mean and standard deviation using G_A(x, y).
(iii) Phase angle, μ_{φ,r}^k, σ_{φ,r}^k: mean phase angle and standard deviation using G_φ(x, y).
(iv) Edge length, μ_{λ,r}^k: edge length using the edge map, where μ_{λ,r}^k = Σ_{x,y} E_r^k(x, y).
(v) Edge response at the edge points, μ_{p,r}^k, σ_{p,r}^k: edge response mean and standard deviation computed only at the edge points, as defined by the edge maps.
The edge points are the pixel positions that lie on the edges, as determined by the thresholds above. We call the combined features, including the color features, multilevel edge-response vectors (MERVs).
Similarity Evaluation Using MERVs.
Having extracted the features, the next question is how to find appropriate metrics to compare two video frames using these features. Given two images I_1(x, y) and I_2(x, y), we can compute the distance between them using the general Minkowski distance or some other metric. In the following we use the simple city-block distance.
For the edge length there is no standard deviation, hence the distance is simply d_λ(I_1, I_2) = |μ_{λ,1} − μ_{λ,2}|. For the other features, we need to consider both the mean and the standard deviation; for example, for the edge response feature we have d_e(I_1, I_2) = |μ_{e,1} − μ_{e,2}| + |σ_{e,1} − σ_{e,2}|. Similarly, we obtain the corresponding distances d_c(·), d_φ(·), and d_p(·) for color, phase angle, and edge response at edge points, respectively. The overall distance between the two images is then determined as a simple weighted average of the individual distances from the different features:

D = w_c d_c + w_e d_e + w_φ d_φ + w_λ d_λ + w_p d_p,    (17)

where w_c + w_e + w_p + w_φ + w_λ = 1. The parameters w_c, w_e, w_φ, w_λ, w_p are the respective weights for the features based on color, edge response, phase angle, edge length, and edge response at edge points. By simply varying the weights, we can completely ignore the contribution of any particular feature. For the weights above to be meaningful, however, we need to be sure that the ranges of values of the individual distances are similar. Thus, we either have to normalize all the features to the same range of values, or we can compute the distance such that the overall distance from each feature is normalized. We take the latter approach, and perform normalization at the time of distance computation, based on the model-data feature pairs, combining the mean and standard deviation components as d = w_μ d(μ) + w_σ d(σ), where w_μ and w_σ are weights with w_μ + w_σ = 1. The normalized distances can then be used with the weights in (17) to obtain the overall distance between the frames. Another important issue is the effect of each individual block on the overall difference. Let w_{f,r}^k be the weight of feature f from the rth block at level k, where f ∈ {c, e, φ, λ, p} and c, e, φ, λ, p denote the respective features based on color, edge response, phase angle, edge length, and edge response at edge points. A simple approach is to give the contribution from every block at each level an equal weight: effectively, w_{f,r}^k = 1/(Σ_k N_k), where N_k is simply the number of blocks at the kth level. With equal per-block weights, the many blocks at the deeper, more localized levels (larger k) collectively dominate: as the number of decomposition levels L increases, these localized features will dominate the computation of the overall difference, which then becomes very sensitive to small spatial differences in the frames, and hence more susceptible to noise and minute motion variations in the video. For shot classification, however, this can be beneficial, since the domination of global movement or features in the video can be avoided.
A better approach is to divide the contribution to the overall difference amongst the k levels; the blocks that make up the kth level then share the contribution allocated to that level. A simple way to do this is to distribute the contribution equally among the levels, giving w_{f,r}^k = 1/(L · N_k). In all cases, we must have Σ_k Σ_r w_{f,r}^k = 1. The effect of the weights in the two cases considered above can be appreciated from Table 1.
Considering the weights at each level, the distance between adjacent frames can be computed, for instance for the edge-response feature, as D(f_i, f_{i+1}) = Σ_k Σ_r w_{e,r}^k d_{e,r}^k, (20) or in weighted and normalized form, by combining the normalized per-feature distances with the feature weights of (17).
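To make the distance computation concrete, the sketch below combines per-feature, per-block statistics in the spirit of (17); the exact normalization by the model-data pair values is an assumed form, since the original expression did not survive extraction:

```python
import numpy as np

def feature_distance(mu1, sd1, mu2, sd2, w_mu=0.7, w_sigma=0.3):
    """Normalized city-block distance for one feature over all blocks
    (mu*/sd* are arrays of per-block means / standard deviations).
    The pairwise normalization keeps every feature's distance in a
    comparable range; w_mu + w_sigma = 1."""
    eps = 1e-12
    d_mu = np.abs(mu1 - mu2) / (np.abs(mu1) + np.abs(mu2) + eps)
    d_sd = np.abs(sd1 - sd2) / (np.abs(sd1) + np.abs(sd2) + eps)
    return float(np.mean(w_mu * d_mu + w_sigma * d_sd))

def frame_distance(feats1, feats2, weights):
    """Weighted combination of per-feature distances, as in (17).
    feats* map feature names ('c', 'e', 'phi', 'lambda', 'p') to
    (mu, sd) array pairs; weights sum to one over the same names."""
    return sum(w * feature_distance(*feats1[f], *feats2[f])
               for f, w in weights.items())
```

Setting a feature's weight to zero removes its contribution entirely, which is how the weight combinations of Table 2 are realized.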
Adaptive Video Partitioning
When the distance D(·) is computed for a series of adjacent video frames, the result is a sequence of frame differences, the FD-sequence for short. The actual video partitioning is performed by a further analysis of the FD-sequence. Let D_i = D(f_i, f_{i+1}) be the difference between two adjacent frames f_i and f_{i+1}. The FD-sequence is defined as FD = {D_1, D_2, ..., D_{n−1}}, where n is the number of frames in the video. The FD-sequence is usually characterized by significant peaks at frame positions where a shot change has occurred. With the FD-sequence, the video partitioning problem then becomes that of determining appropriate thresholds to isolate these "significant peaks" from other peaks that might occur in the sequence. The shot threshold is defined as τ_S = τ · max_i {D(f_i, f_{i+1})}. We declare a shot partition at frame t whenever the distance exceeds the threshold, that is, whenever D_t = FD(t) > τ_S.
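The thresholding rule itself is simple enough to state directly in code; a minimal sketch over a toy FD-sequence (the value of τ here is illustrative):

```python
def detect_shot_boundaries(fd_sequence, tau=0.4):
    """Declare a shot boundary at frame t whenever
    FD(t) > tau * max(FD), per the global-threshold rule above."""
    tau_s = tau * max(fd_sequence)
    return [t for t, d in enumerate(fd_sequence) if d > tau_s]

# Toy FD-sequence with two clear peaks at frames 3 and 8:
print(detect_shot_boundaries([0.05, 0.04, 0.06, 0.90, 0.05,
                              0.07, 0.05, 0.06, 0.80, 0.04]))
# -> [3, 8]
```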
Adaptation at the Video Sequence Level.
The description above assumes that video sequences are homogeneous, and hence can all be considered using the same set of parameters. However, video sequences vary considerably from one sequence to the other. First we consider adapting the video analysis algorithm based on the entire video sequence. That is, for each video sequence, we determine the set of analysis parameters that will produce the best results; this set of parameters is then used to analyze all the frames or shots in the video sequence. Given the weights on the multilevel features (see (17)), we can parameterize the analysis algorithm in terms of these weights, w = (w_c, w_e, w_p, w_φ, w_λ), and the threshold τ. For adaptation at the sequence level, rather than considering all the features for the distance calculation, we consider only the features that are relevant to the video being analyzed. Thus, based on the particular video, we can determine the best (w, τ) pair for segmenting the video.
To check the effect of the weights and the thresholds on different video sequences, we used combinations of the weights at different thresholds. Based on empirical analysis, we chose 32 combinations of the weights (Table 2) and 9 thresholds (Table 3).
We observed that different videos may require different contributions from each feature (i.e., different weights w) for best results. Also, at a given w, different thresholds can produce different results (see Table 6, Section 5). Similarly, for a given video sequence, various sets of weights can produce the same (best) results, but at different thresholds. Conceptually, adaptation at the sequence level should be simple, but there are several problems. First, at the sequence level, the video is still being considered at a very coarse granularity; video shots are known to vary greatly, even for shots in the same video, and hence different shots in the same video sequence can be very different in content. More importantly, automated mapping of the (w, τ) pair for each given video is a major problem, requiring a two-pass approach. This makes sequence-level adaptation unsuitable for real-time applications, or for network applications where dynamic modeling of video data traffic is required.
Shot-Level Adaptation.
The above problems can be addressed by considering the individual shots that make up the video. In [9], shots were characterized based on the activity and motion in the shots, and the respective shot duration. Using this characterization, video shots were grouped into nine classes, based on which video partitioning was performed by adaptively choosing different thresholds for each shot class. In the current work, we take a different approach to the problems of video characterization and classification.
Estimating Video Shot Complexity.
To make the thresholds sensitive to the different shot classes, we need some method to make them locally adaptive. The overall video shot complexity depends on the activity and the motion, while the shot class depends on both the complexity and the duration of the shot. The shot duration has a strong correlation with the amount of motion in the video: the length of the shot is typically inversely proportional to the amount of motion [9]. We can determine the temporal duration as we analyze the shot. We could also determine the motion complexity by computing the motion vectors using motion estimation techniques [36]. However, motion estimation is very computationally intensive. Since we do not need accurate motion estimation to classify the shots or for adaptive indexing, an estimate of the amount of motion in the shot is enough. Thus, we can approximate the amount of motion using the differences between adjacent frames (e.g., by analyzing the FD-sequence), rather than by direct computation of the motion vectors. A similar observation was made by Tao and Orchard [37], who noticed that the residual signal generated after motion-compensated prediction is highly correlated with the gradient magnitude: the motion-compensated error is, on average, larger for pixels with larger gradient magnitude. They thus suggested that the gradient (from one frame to the other) could be estimated from the reconstructed image using the motion estimates. In this work, we are interested in the reverse procedure; given the gradient information (as captured by the edge response vectors), we wish to estimate the amount of motion in the shot, without explicit motion estimation.
We can estimate both the image activity and the motion by using the already available multilevel edge response vectors, with appropriate weights. For example, if we use w_i = 1/N for all i (e.g., w_i = 1/85 for 4 decomposition levels), or if we ignore the global averages altogether (i.e., the contributions from level k = 0), then the lower-level features (which are increasingly localized) can be used to predict the amount of motion. We could also ignore further higher-level features, for instance, levels at k ≤ L/2. We can estimate the activity by using the MERVs from just one frame in a given shot.
The motion and activity will generally result in an overall variability of the shot. The shot complexity depends directly on this shot variability. To estimate the shot variability, we use the mean and standard deviation of the frame-difference sequence (the FD-sequence) within the shot. We compute these for each of the MERV features, and use a weighted average to determine the shot variability. Given two time instants, t_1 and t_2 (t_2 > t_1), we compute the shot variability as follows. Let T = t_2 − t_1 be the duration, and let FD_λ be the frame-difference sequence using a particular multilevel feature, say λ. Its mean and standard deviation over the shot are

μ_λ(t_1, t_2) = (1/T) Σ_{t=t_1}^{t_2−1} FD_λ(t),   σ_λ(t_1, t_2) = [ (1/T) Σ_{t=t_1}^{t_2−1} (FD_λ(t) − μ_λ(t_1, t_2))² ]^{1/2}.

Similarly, we compute these for color, edge response, edge response at edge points, and the phase angle. Then, as with the between-frame distances, we obtain the shot variability using a weighted combination from all the features:

μ(t_1, t_2) = Σ_f w_f μ_f(t_1, t_2),   σ(t_1, t_2) = Σ_f w_f σ_f(t_1, t_2).

The weights here may not necessarily be the same as those used for the distances. In [4, 9], different methods were proposed for computing the motion and image complexities, for instance, using the spectral entropy and other metrics. With the above approach, one problem is computing the standard deviation at each frame as the shot is progressing. This problem can be solved by doing the computations only at defined periodic intervals (the periods could also be chosen adaptively). However, one advantage of using the shot variability defined above is that the required parameters can be computed incrementally, using the preceding values. We can do this from the general definition of mean and standard deviation.
Given a data ensemble, X = x_1, x_2, x_3, ..., x_{k−1}, x_k, x_{k+1}, ..., and the mean x̄_k of the first k items, we can estimate the mean when the (k+1)-th item is added:

x̄_{k+1} = x̄_k + (x_{k+1} − x̄_k)/(k + 1).

Similarly, for the variance (or standard deviation), we have a relation (25) linking σ²_{k+1} to σ²_k. Solving (25), we obtain the incremental formula

σ²_{k+1} = σ²_k + [ (x_{k+1} − x̄_k)(x_{k+1} − x̄_{k+1}) − σ²_k ] / (k + 1).

We can use these to incrementally estimate the shot variability using the available FD-sequence. Based on the shot variability, we classify the shots into nine classes, as follows.
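A direct transcription of these incremental updates, maintaining the running mean and standard deviation of the FD-sequence as a shot progresses, might look as follows (variable names are illustrative):

class RunningStats:
    """Incrementally track the mean and (population) variance of a stream
    of frame differences, using the update formulas above."""

    def __init__(self):
        self.k = 0
        self.mean = 0.0
        self.var = 0.0

    def update(self, x):
        self.k += 1
        old_mean = self.mean
        self.mean += (x - old_mean) / self.k                         # incremental mean
        self.var += ((x - old_mean) * (x - self.mean) - self.var) / self.k

rs = RunningStats()
for d in [0.10, 0.12, 0.09, 0.11]:   # frame differences arriving one at a time
    rs.update(d)
print(rs.mean, rs.var ** 0.5)        # running mu and sigma of the shot

Each update costs O(1) and never re-scans earlier frames, which is what makes the per-frame computation feasible.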
Given μ(t_1, t_2) and σ(t_1, t_2) for a given shot, we classify each into three classes, namely, low (I), medium (II), and high (III), based on an equiprobable classification. Let μ_c(t_1, t_2) ∈ {I, II, III} be the classification due to μ(t_1, t_2). Similarly, let σ_c(t_1, t_2) ∈ {I, II, III} be the classification due to σ(t_1, t_2). Using the classifications from the two dimensions of shot variability, we define a simple mapping function f(·) that assigns each of the nine ordered pairs (μ_c, σ_c) a distinct overall shot class in {I, II, ..., IX}, taking the pairs in lexicographic order: (I, I) → I, (I, II) → II, ..., (III, III) → IX. Table 4 shows the classification results for the test video sequence, based on the above scheme.
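One plausible realization of this classification scheme is sketched below: μ and σ are each binned into three equiprobable classes via the empirical tertiles of a training set, and the pair is mapped to one of the nine classes in lexicographic order. The tertile-based binning and the specific class ordering are assumptions, since the text does not spell them out.

import numpy as np

def equiprob_class(value, training_values):
    """Bin a value into 1 (low), 2 (medium), or 3 (high) using the
    empirical tertiles of the training data (equiprobable split)."""
    t1, t2 = np.quantile(training_values, [1 / 3, 2 / 3])
    return 1 if value <= t1 else (2 if value <= t2 else 3)

def shot_class(mu, sigma, train_mu, train_sigma):
    """Map (mu_c, sigma_c) in {1,2,3} x {1,2,3} to a class in 1..9."""
    return 3 * (equiprob_class(mu, train_mu) - 1) + equiprob_class(sigma, train_sigma)

# Hypothetical training statistics and one new shot.
train_mu = [0.1, 0.2, 0.3, 0.5, 0.6, 0.8]
train_sigma = [0.01, 0.02, 0.04, 0.05, 0.07, 0.09]
print(shot_class(0.55, 0.03, train_mu, train_sigma))  # -> 7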
Adaptive Shot Thresholds.
Having characterized and classified the shots based on the shot variability, the next question is how to determine the parameters for video shot partitioning for a given shot. Ideally, given the FD-sequence (and assuming that it was obtained from a distance, not a similarity, measure), we expect that the threshold for shot changes should decrease with increasing shot length, but increase with increasing shot complexity (or variability). Formally, given a video shot s_j, we classify it into a certain shot class, c_i ∈ {I, II, III, ..., IX}. The problem of shot-level adaptation is then to determine the parameter set (i.e., the (w, τ) pair) that will produce the best results for all shots s_j ∈ c_i, for all i, j. Here, best results are defined in terms of the information retrieval measures of precision and recall.
We take a pragmatic approach to the problem of determining the parameters. Using a training set of video shots, we use a simple clustering technique to determine the (w, τ) pairs that produce the best results for each shot class in the training set. We then use these pairs for the analysis of the test video sequences.
Let P = (w, τ) be the weight-threshold pair that defines the parameter set for video segmentation. Let V be the number of video sequences used for the training set. We use the edge-response vectors to analyze the video shots in the training set, using all the available weights and thresholds (i.e., 32 weights and 9 thresholds in all; see Tables 2 and 3). Let P_c^j denote the set of (w, τ) pairs that produced correct partitioning results for the class-c shots in video sequence j. To select the best (w, τ) pair for a given shot class c, all we need is the intersection of the P_c^j over all the V sequences:

P_c = ∩_{j=1}^{V} P_c^j.

When |P_c| > 1, any member of P_c can be used as the best parameter set. The major problem is when P_c = ∅, that is, the intersection is empty, implying that no single parameter set always produced correct results for all the class-c shots in the training sequences. Two approaches can be used to address this problem.
For each shot in a given video sequence, we define an array a_{i,j}, i = 1, 2, ..., w_max, j = 1, 2, ..., τ_max, such that a_{i,j} = 1 if the shot is correctly partitioned with the parameter pair (i, j), and a_{i,j} = 0 otherwise. We use w_max = 32 and τ_max = 9 in our implementation. Let a_{i,j}^c(q) denote the cumulative value in the a_{i,j} arrays for all the class-c shots in video sequence q. Then, the best parameter set for the class-c shots is determined as

P_c = argmax_{i,j} { a_{i,j}^c },   where a_{i,j}^c = Σ_{q=1}^{V} a_{i,j}^c(q).

The above selects the parameter set that produced the best overall result over all the shots of a given class in the training set. This could be dominated by one video sequence that has many shots of the given type. A variation is to use the parameter set that produced the best result over the shots of a given class in most of the sequences, although it may not necessarily produce the best results over all shots. Thus,

P_c = the most frequent element of { P_c(q) : q = 1, ..., V },   where P_c(q) = argmax_{i,j} { a_{i,j}^c(q) }.
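Both selection rules reduce to simple bookkeeping over the cumulative arrays. The sketch below implements the overall argmax rule and the per-sequence majority-vote variant, with hypothetical counts and the w_max = 32, τ_max = 9 shapes used in the text.

import numpy as np
from collections import Counter

def best_params_overall(a):
    """a[q, i, j] = number of class-c shots in sequence q correctly
    partitioned with parameter pair (i, j); shape (V, 32, 9).
    Return the pair maximizing the total count over all sequences."""
    totals = a.sum(axis=0)
    return np.unravel_index(np.argmax(totals), totals.shape)

def best_params_majority(a):
    """Variant: return the pair that wins in the most sequences."""
    winners = [np.unravel_index(np.argmax(a[q]), a[q].shape) for q in range(len(a))]
    return Counter(winners).most_common(1)[0][0]

a = np.random.default_rng(0).integers(0, 5, size=(3, 32, 9))  # hypothetical counts, V = 3
print(best_params_overall(a), best_params_majority(a))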
Results
To test the performance of the proposed edge-based adaptive method, we ran experiments using two sets of video sequences. The first set had 6 sequences taken from standard MPEG-7 sequences and from available online video sources [31]. For each video sequence, the frame size was fixed at 352 × 288. The second set had 5 sequences taken from the US National Institute of Standards and Technology (NIST) benchmark TRECVID 2001 test sequences. The frame size for sequences in this set was 320 × 240. The experiments were carried out in a MATLAB Version 7.3.0.267 (R2006b) environment using a personal computer with an Intel(R) CPU T2400, running at 1.83 GHz with 1.99 GB RAM. We measure performance in terms of the information retrieval measures of precision and recall. We use the following notation: D = set of all positions of true scene cuts in a test video sequence, B = set of all positions of scene cuts returned by the system, C = subset of B that are true scene cuts (i.e., correct detections, or C = B ∩ D). Then, precision Pr = |C|/|B|, and recall Rc = |C|/|D|.
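With these definitions, precision and recall follow directly from set operations on the detected and true cut positions, as in the short sketch below (the positions shown are hypothetical):

def precision_recall(detected, true_cuts):
    """Precision = |C|/|B| and recall = |C|/|D|, with C = B intersect D."""
    B, D = set(detected), set(true_cuts)
    C = B & D
    return len(C) / len(B), len(C) / len(D)

pr, rc = precision_recall(detected=[120, 305, 470], true_cuts=[120, 305, 512])
print(f"precision={pr:.2f}, recall={rc:.2f}")  # precision=0.67, recall=0.67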
Effectiveness of MERVs on Non-Adaptive Partitioning.
First, we tested the effectiveness of the proposed edge-response vectors in video partitioning, without consideration of adaptation. This is important, since the results of the adaptive schemes will also be influenced by the inherent robustness of the edge-based features. The results are shown in Table 5. As can be seen, the edge-oriented approach produced about 90% in terms of precision and recall.
Adaptive Partitioning.
Table 6 shows the results for adaptation at the video sequence level. The last two columns show the weight-threshold parameter pairs that were used to produce the indicated results. Where there are more than two entries, the indicated entries all produced the same result. The table shows a significant improvement over the non-adaptive approach. The sequence-level adaptation is a two-pass method: it needs a first pass on the data to determine the analysis parameters, and a second pass to perform the analysis. For some applications, such as real-time video streaming, the two-pass approach may not be applicable. Shot-level adaptation avoids the two-pass problem. Table 7 shows the results for shot-level adaptation, based on shot characterization and classification using the proposed shot variability measure.
The results are a little worse than those of the two-pass method using sequence-level adaptation, but generally better than those of the static approach.
Comparative Results.
We performed a comparative experiment using other popular techniques. Table 8 shows the results. For color histograms, we used region-based histograms with 16 blocks ((M_1/4) × (M_2/4) regions) per frame, where M_1 and M_2 are the frame dimensions. Analysis using the motion-vector-based method [28] is based on 20 × 20 subblocks. The specific kernel size used for Cooper et al.'s DCT-based method [25] is also indicated in the table, as this varied significantly from sequence to sequence. In all cases, we have reported results using the parameters that gave the best overall result for a given video sequence, or for the test video set used. Apart from the results in [9], none of the other methods used adaptive partitioning. Thus, we can compare the performance of the static (non-adaptive) method using the proposed MERVs as features with the results from the other schemes. The table shows that the MERV features are very competitive, with a performance comparable to that of the correlation-based method [27], the best-performing technique of the other schemes tested. While the simple color histogram did well on some video sequences, it produced poor performance on the CROPS and CANYON video sets. This is mainly because these two sequences have both indoor and outdoor scenes involving significant variation in illumination. Obviously, color features are easily affected by this variation, and hence the precision of the color-histogram-based method was quite low for these sequences. The same explains the poor performance of the motion-vector-based method: illumination variation between frames often leads to poor motion detection and thus a significant error in the motion vectors (even with the special parameter for illumination handling used in [28]). Overall, the results from the adaptive schemes are generally better than those from the non-adaptive schemes. This can be explained by the fact that the adaptive schemes spend time analyzing each shot first, before deciding on the analysis parameters. Thus, they are able to adapt better to the changing nature of shot characteristics as we move along the video sequence.
We then tested the methods on another set of video sequences, this time using five sequences from the NIST benchmark TRECVID 2001 video sequences. The sequences and annotations by NIST, such as the positions of true scene cuts, are available via the NIST TRECVID website (http://www-nlpir.nist.gov/projects/trecvid/revised.html). (We could not get access to more recent sequences used in the TRECVID series; the most recent versions are available only to competitors in the TRECVID challenge. All the same, we believe that the 2001 data still provides another independent data set suitable for testing the algorithms.) The results on the TRECVID sequences are shown in Table 9. The overall result is not too different from that of Table 8. Both the proposed method and the correlation method produced better results than the others. Both had about the same average precision, with the proposed method performing slightly better in recall (0.922 versus 0.906).
We also compared the proposed adaptive scheme with the scene-adaptive method proposed in [9]. The major difference is in terms of scene characterization. After characterization, we used the same MERV features to perform video partitioning. Thus, this essentially compares the performance of the proposed shot variability measure for video shot characterization and classification against that of characterization using explicit motion and activity. Using shot variability for shot characterization and classification is slightly superior to using motion and activity complexity measures [9], with (precision, recall) values of (0.96, 0.93) versus (0.94, 0.91). A more striking difference, however, can be observed by considering the computational requirements of the two approaches. Using shot variability as a shot complexity measure is about 5 times faster than using motion and activity. The shot variability measure does not involve explicit motion estimation and activity characterization, but rather uses the same features (i.e., the FD-sequence) that were used in the analysis. Thus, it is generally more efficient than using motion and activity. Table 10 shows the overall time taken by the different methods in video partitioning. The reported time represents the average feature extraction time per frame required in analyzing a given video sequence.
Experimental results show that the proposed multilevel edge-based features provide a performance of about 90% in terms of average precision and recall. In comparison with traditional approaches, the adaptive schemes provide better performance than non-adaptive approaches using the same multilevel edge-based features, with video-sequence-level adaptation producing about 99% performance. Further, the use of shot variability as a measure of shot complexity resulted in a slightly superior performance (about 2% improvement in precision) over a previously proposed method of explicit motion estimation and shot activity analysis, while also leading to a fivefold improvement in efficiency. The reported work has applications beyond video indexing and retrieval. In particular, given the significant reduction in computations, the approach becomes attractive for real-time applications, such as dynamic monitoring, characterization, and modeling of video data traffic, and real-time video surveillance.
Figure 1: Multilevel decomposition. (a) An image at three levels of decomposition; (b) tree representation of the decomposition.
Table 1: Weights for two choices of the contribution from each decomposition level (L = 4).
Table 4: Shot classification results based on shot variability.
Table 5: Effectiveness of MERVs for non-adaptive video partitioning.
Table 6: Results for the proposed sequence-level adaptive partitioning.
Table 7: Results for adaptive partitioning using the proposed shot variability measure.
A Debt Management Problem with Currency Devaluation
We consider a model of debt management, where a sovereign state trades bonds with a pool of risk-neutral competitive foreign investors in order to service its debt. At each time, the government decides which fraction of the gross domestic product (GDP) must be used to repay the debt, and how much to devaluate its currency. Both of these operations have the effect of reducing the actual size of the debt, but they carry a social cost in terms of welfare sustainability. Moreover, at any time the sovereign state can declare bankruptcy by paying a corresponding bankruptcy cost. We show that this optimization problem admits an equilibrium solution, leading either to bankruptcy or to a stationary state, depending on the initial conditions.
Introduction
According to the US Senate Levin-Coburn Report [10], the financial crisis of 2007-2008, which led to the worldwide Great Recession of 2008-2012 and to the European sovereign debt crisis of 2010-2012, and whose effects are still present in many countries, "was not a natural disaster, but the result of high risk, complex financial products, undisclosed conflicts of interest; and the failure of regulators, the credit rating agencies, and the market itself to rein in the excesses of Wall Street." The first part of the report analyzes some representative cases of (1) High Risk Lending; (2) Regulatory Failure; (3) Inflated Credit Ratings; (4) Investment Bank Abuses. In the final recommendations of the report, a whole section is devoted to the management of high-risk lending, in order to prevent abuses.
In the Eurozone, the crisis - whose consequences lasted until 2016 - took the form of a speculative attack on the sovereign debt of some EU countries (Portugal, Ireland, Greece, Spain), but it also strongly affected two major economic powers, Italy and France. The actions undertaken by the EU governments to face the crisis had very high social costs, leading also to a heavy political impact. These considerations lead to the following natural problems:
• to identify suitable tools to estimate the risk of a lender's bankruptcy (as in the subprime mortgage crisis, where the crisis originated);
• to have quantitative tools, relying on reliable predictions of realistic models, which would allow the regulatory authority to prevent abuses;
• to provide optimal strategies for the management of sovereign debts.
In [12], the authors introduced a variational model where a government issues nominal defaultable debt and chooses fiscal and monetary policy under discretion. In particular, to reduce the actual size of the debt, the government can choose to devaluate its currency, producing inflation and thus increasing the welfare cost and negatively affecting the trust of the investors, or to rely only on fiscal policy to service the debt. The government can also declare default, which implies paying a bankruptcy cost due to the temporary exclusion from capital markets, and a drop in the output endowment. The aim is to find a strategy minimizing a cost functional dealing with the trade-off between inflation, social costs, and debt sustainability, possibly declaring default if this option is preferable to continuing to service the debt.
The analysis of the model in [12] was performed by numerical methods, and as a final conclusion of their analysis, the authors claim that the tool of currency devaluation, though useful in a short-term perspective, is not recommended unless the government is able to make credible commitments about its future inflation policy. In this sense, it is worth noting that many countries with limited inflation credibility decide either to issue bonds directly in a foreign stable currency (e.g., US dollars), or to delegate monetary policy to an independent authority with a strong anti-inflation commitment (e.g., the Eurozone Central Bank).
An analytical study of a variant of the model in [12] was performed in [5], in the case where no currency devaluation is available to the government; it provided a semi-explicit formula for the optimal strategy in the deterministic case (i.e., when the GDP evolves deterministically).
This paper aims to develop the analytical study of the model in [5], also allowing the possibility of currency devaluation as in [12].
The paper is structured as follows: in Section 2 we introduce the stochastic model together with the main assumptions; in Section 3 we prove the existence of an equilibrium solution for the stochastic model as the steady state of an auxiliary parabolic system, and study its asymptotic behaviour as the maximum debt-to-income threshold is pushed to +∞. In Section 4 we study the deterministic model obtained by setting the volatility σ = 0. In this case we provide a semi-explicit construction of an equilibrium solution, together with a study of its asymptotic behaviour as the maximum debt-to-income threshold is pushed to +∞.
A model with stochastic growth
In this section, we develop the model in [5], allowing the possibility of currency devaluation as in [12]. Here the borrower is a sovereign state that can decide to devaluate its currency (for example, by printing more paper money). The total income Y, i.e., the gross national product (GDP) measured in terms of the floating currency unit, can quickly increase if the currency is devaluated, producing inflation. It is governed by the stochastic process

dY(t) = (μ + v(t)) Y(t) dt + σ Y(t) dW(t),

where W is a Brownian motion on a filtered probability space and
• μ = average growth rate of the economy;
• σ = the volatility;
• v(t) ≥ 0 = the devaluation rate at time t, regarded as an additional control.
We refer to [12] for a more detailed derivation of v in the above system from economic primitives.
Let X(t) be the outstanding stock of nominal government bonds, expressed in the local currency unit. In particular, X(t) also represents the total nominal value of the outstanding debt. To service the debt, the government trades a nominal non-contingent bond with risk-neutral competitive foreign investors. In case of bankruptcy, the lenders recover only a fraction θ ∈ [0, 1] of their outstanding capital, depending on the total amount of the debt at the time of bankruptcy. To offset this possible loss, an investor buys a bond with unit nominal value at a discounted price p ∈ [0, 1]. We denote by U(t) the rate of payments that the borrower chooses to make to the lenders at time t. If this amount is not enough to cover the running interest and pay back part of the principal, new bonds are issued at the discounted price p(t). As in [5], the nominal value of the outstanding debt thus evolves according to

Ẋ(t) = −λ X(t) + ((λ + r) X(t) − U(t)) / p(t).

Here the constants are:
• λ = rate at which the borrower pays back the principal;
• r = discount rate.
The debt-to-GDP ratio (DTI) is defined as x = X/Y. By Itô's formula [13, 14], the evolution of x(·) is governed by

dx(t) = [ ((λ + r) x(t) − u(t)) / p(t) − (λ + μ + v(t) − σ²) x(t) ] dt − σ x(t) dW(t),   (2.1)

where u(t) = U(t)/Y(t) is the fraction of the total income allocated to reduce the debt. Throughout the following we will assume that r > μ.
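For intuition, a sample path of the DTI under these dynamics can be simulated with an Euler-Maruyama scheme. The sketch below uses the equation as reconstructed above (its exact form should be checked against (2.1) in the published paper) and, purely for illustration, holds the controls u, v and the bond price p constant, whereas in the model p is endogenous.

import numpy as np

def simulate_dti(x0, u, v, p, mu, sigma, lam, r, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama discretization of
    dx = [((lam + r) x - u) / p - (lam + mu + v - sigma**2) x] dt - sigma x dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = ((lam + r) * x[k] - u) / p - (lam + mu + v - sigma**2) * x[k]
        x[k + 1] = max(x[k] + drift * dt - sigma * x[k] * rng.normal(0.0, np.sqrt(dt)), 0.0)
    return x

path = simulate_dti(x0=0.8, u=0.12, v=0.0, p=0.9, mu=0.02, sigma=0.05, lam=0.1, r=0.04)
print(path[-1])  # terminal debt-to-income ratio of one sample path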
In this model, the borrower has three controls: at each time t he can decide the portion u(t) of the total income allocated to repaying the debt, he can decide the devaluation rate v(t), and he can also decide the time T_b at which he is going to declare bankruptcy, paying a bankruptcy cost. More precisely, we assume that
• there exists a threshold x* > 0 such that if x(t) reaches x*, then the borrower is forced to declare bankruptcy;
• the borrower decides to declare bankruptcy as soon as x(t) reaches x′, where x′ ∈ [0, x*] is an additional control parameter, chosen by the borrower in order to minimize his expected cost;
• the optimal control has feedback form, so that u(t) = u*(x(t)) and v(t) = v*(x(t)).
In this case the random bankruptcy time is

T_b = inf { t ≥ 0 : x(t) = x′ },   (2.2)

and the total expected cost to the borrower, exponentially discounted in time, is

J = E [ ∫_0^{T_b} e^{−rt} ( L(u(t)) + c(v(t)) ) dt + e^{−r T_b} B ],   (2.3)

where
• B is the bankruptcy cost, which summarizes the penalties of temporary exclusion from the capital markets, the bad reputation among the investors, and the social costs of the default;
• c(v) is the social cost resulting from devaluation, i.e., the increased cost of welfare and of imported goods;
• L(u) is the cost for the borrower to implement the control u, i.e., aversion toward austerity policies and welfare budget cuts.
By a dynamic programming argument, it is never convenient for the borrower to declare bankruptcy unless he is forced to do so, i.e., unless the threshold x* is reached. This argument is a slight variant of [5].
Lemma 2.1. For any admissible control strategy (u ′ (·), v ′ (·)) declaring bankruptcy at x = x ′ < x * , there exists a control strategy with smaller cost declaring bankruptcy at x = x * .
Proof. By contradiction, assume that the borrower implements a strategy and decides to declare bankruptcy when reaching x = x′ < x*. Denote by T the time when he declares bankruptcy and let J be the cost of this strategy up to time T. The total cost is then J + e^{−rT} B. We can construct a better strategy simply by avoiding to declare bankruptcy at x′, and switching off the controls (u*, v*) after having reached x′ until the threshold x* is reached. In this case, the cost before T is the same as before, and the total cost is J + e^{−r T*} B < J + e^{−rT} B, where T* > T is the time at which x* is reached. Thus, from now on we assume that the borrower will declare bankruptcy exactly at time t = T*_b. Moreover, the goal of the borrower is to minimize the expected cost (2.3). To complete the model, we need an equation determining the discounted bond price p appearing in the evolution equation (2.1). For every x > 0, let θ(x) be the salvage rate, i.e., the fraction of the outstanding capital that can be recovered by the lenders if bankruptcy occurs when the debt has size x*. As in [5], assuming that the investors are risk-neutral, the discounted bond price coincides with the expected payoff to a lender purchasing a coupon with unit nominal value:

p(x_0) = E [ ∫_0^{T*_b} (r + λ) e^{−(r+λ)t} dt + e^{−(r+λ) T*_b} θ(x(T*_b)) ].   (2.4)

Having described the model, we can introduce the definition of an optimal solution, in feedback form.
Definition 2.2 (Stochastic optimal feedback solution). In connection with the above model, we say that a triple of functions u = u*(x), v = v*(x), p = p(x) provides an optimal solution to the problem of optimal debt management (2.1)-(2.4) if
(i) given the function p(·), for every initial value x_0 ∈ [0, x*] the feedback control (u*(·), v*(·)) with stopping time T*_b as in (2.2) provides an optimal solution to the stochastic control problem (2.3), with dynamics (2.1);
(ii) given the feedback control u*(·) and the set S, for every initial value x_0 the discounted price p(x_0) satisfies (2.4), where T*_b is the stopping time (2.2) determined by the dynamics (2.1).
The value function of the control system (2.1) with the cost J in (2.3) is defined by

V(x_0) = inf_{u(·), v(·)} J,

where the infimum is taken over all admissible controls starting from x(0) = x_0. In the following we shall assume that:
(A1) the implementing cost function L is twice continuously differentiable for u ∈ [0, 1[ and satisfies L(0) = 0, L′(u) > 0, and L″(u) ≥ δ_0;
(A2) the social cost c(·) determined by currency devaluation is twice continuously differentiable and satisfies c(0) = 0, c′(v) > 0, and c″(v) ≥ δ_0,
for some constant δ_0 > 0.
Calling B a fixed (large) cost associated with bankruptcy, under the above assumptions we proceed as follows. Denote by L* and c* the Legendre transforms of L and c. By the strict convexity of L and c, for every ρ there exist unique u_ρ ∈ [0, 1] and v_ρ ≥ 0 such that L*(ρ) + L(u_ρ) = ρ u_ρ and c*(ρ) + c(v_ρ) = ρ v_ρ; they can be characterized by the first-order relations L′(u_ρ) = ρ and c′(v_ρ) = ρ, whenever these admit solutions in the respective domains. The Hamiltonian associated to the dynamics (2.1) and the cost functions L, c in (2.3) is denoted by H(x, ξ, p). The necessary conditions for optimality imply that the value function V should solve a second order implicit ODE of Hamilton-Jacobi-Bellman type. On the other hand, if the feedback controls u = u*(x) and v = v*(x) are known, then by using the Feynman-Kac formula we obtain a second order nonlinear ODE for the discounted bond price p in (2.4), with boundary values p(0) = 1, p(x*) = θ(x*). Recalling (2.8) and (2.10), a direct computation yields (2.14). Combining (2.9)-(2.13), we are thus led to a system of second order implicit ODEs (2.15) with the boundary conditions (2.16). In the next section, an optimal feedback solution to the problem of optimal debt management (2.1)-(2.4) will be obtained by solving the above system of ODEs for the value function V(·) and for the discounted bond price p(·).
We close this section by collecting some useful properties of the Hamiltonian function. As in [5], the following holds: (2) for every x, p > 0, the map ξ → H(x, ξ, p) is concave down; thus, recalling that by assumption L″(u) ≥ δ_0 and c″(v) ≥ δ_0 for 0 < u < 1 and v ≥ 0, we obtain a uniform concavity bound. We recall the following general fact: assume that f : I → R is a C² convex, strictly increasing function defined on a real interval I and satisfying f″ ≥ δ > 0. Then, denoting by g its inverse function, g : f(I) → I, we have that g is 1/2-Hölder continuous. Indeed, let x_1, x_2 ∈ f(I) with x_1 ≤ x_2, and set y_1 = g(x_1) and y_2 = g(x_2).
Since f is strictly increasing, f″(s) ≥ δ, and y_1 ≤ y_2, a Taylor expansion gives x_2 − x_1 = f(y_2) − f(y_1) ≥ (δ/2)(y_2 − y_1)². Thus, if x_2 ≥ x_1, we have y_2 − y_1 ≤ (2(x_2 − x_1)/δ)^{1/2}; by switching the roles of x_2 and x_1, the same holds true if x_1 ≥ x_2.
In our case, we set f(·) = −(1/r) H(x, ·, p); to conclude the proof it is enough to choose δ accordingly.

3. Stochastic optimal feedback solutions

3.1. Existence of optimal feedback solutions. In this subsection, we prove the existence of an optimal feedback solution to the problem of optimal debt management (2.1)-(2.4) for a given bankruptcy threshold x*. It is well known (see Theorem 4.1, p. 149, in [9] or Theorem 11.2.2, p. 141, in [13]) that if (V, p) is a solution to the boundary value problem (2.15)-(2.16), then a standard result in the theory of stochastic optimization implies that the feedback control (u*(·), v*(·)) in (2.10) is optimal for the problem (2.3) with dynamics (2.1).
Let x* > 0 be given. We shall construct a solution of (2.15)-(2.16) by considering the auxiliary parabolic system (3.1) with the boundary conditions (3.2). Following [2], the main idea is to construct a compact, convex set of functions (V, p) with values in [0, B] × [θ_min, 1], for some positive constant θ_min, which is positively invariant for the parabolic evolution problem. A topological technique will then yield the existence of a steady state, i.e., a solution to (3.1)-(3.2).
Then there exists a positive constant θ_min such that the system of second order ODEs (2.15) with boundary conditions (2.16) admits a solution. Moreover, the function V is monotone increasing. As a consequence, we obtain the following result. Proof. The proof follows the same lines as the proof of Theorem 3.1 in [5]. It is divided into several steps. 1. For any 0 < ε < (r − μ)/(λ + μ), consider the parabolic system (3.7), together with the boundary conditions (3.2), obtained from (3.1) by adding the terms εV_xx, εp_xx on the right-hand sides. For any ε > 0 this makes the system uniformly parabolic also in a neighborhood of x = 0.
2.
Recalling Theorem 1 in [2], the system (3.7), coupled with the initial conditions (3.8), admits a unique solution. Let t → (V(t, ·), p(t, ·)) = S_t(V_0, p_0) be the solution of the system (3.7) with initial data (V_0, p_0). Consider a closed, convex set of functions D, defined by suitable pointwise bounds. We claim that this domain is positively invariant under the semigroup S_t, namely S_t(D) ⊆ D for all t ≥ 0. Let us now consider suitable constant functions V^+, V^−, p^+, p^−. As in [5], one can easily check that V^+ is a supersolution and V^− is a subsolution of the first scalar parabolic equation in (3.7), while p^+ is a supersolution and p^− is a subsolution of the second scalar parabolic equation in (3.7). Therefore, recalling Lemma 2.3 (2), for any initial data in D, the solution of the system (3.7) satisfies (V(t, ·), p(t, ·)) ∈ D for every t ≥ 0.
3. Thanks to the bounds of Lemma 2.3 (1) and (3.6), we can now apply Theorem 3 in [2] and obtain the existence of a steady state (V_ε, p_ε) ∈ D for the system (3.7); see (3.10). Assume by contradiction that V_ε is not monotone increasing. Then there exists an interior point at which V_ε attains a local extremum; this implies that the derivative of V_ε vanishes there, and (2.8) then yields a contradiction.
4.
It now remains to derive a priori estimates on this stationary solution, which will allow us to take the limit as ε → 0. Let us first provide upper bounds. Observing that a suitable differential inequality holds for any p ∈ [0, 1], and in particular for any p ∈ [0, θ_min], by Gronwall's inequality, from this differential inequality and the estimate (3.13), one obtains a uniform bound M* which does not depend on ε. As a consequence, (3.11) implies a corresponding bound; the first equation in (3.10) then yields an estimate on V_ε, and the second equation in (3.10) yields one on p_ε. From (3.18), Gronwall's lemma gives a further estimate. On the other hand, recalling (2.8), (2.10), (2.13) and (3.14), we obtain that H and H_ξ are uniformly Lipschitz on [δ, x*], so the relevant quantities are also uniformly bounded and uniformly Lipschitz on [δ, x*].
6. By choosing a suitable subsequence, we achieve the uniform convergence of (V_ε, p_ε) as ε → 0. In order to show that lim_{x→0+} p(x) = 1, one needs to provide a lower bound on p_ε in a neighborhood of x = 0, independent of ε. Let us introduce a suitable constant and define p^− as in (3.16). We prove that p^− is a lower solution of the second equation of (3.10) on the interval [0, x̄_0]. Indeed, by (3.10), (3.16) and (2.13), one obtains the required estimate, and it yields lim_{x→0+} p(x) = 1.
3.2.
Dependence on the bankruptcy threshold x*. In this subsection, we study the behavior of the expected total cost of servicing the debt when the maximum size x* of the debt, at which bankruptcy is declared, becomes very large. More precisely, for a given x*, let p(·, x*) be a solution to the system of second order ODEs (2.15) with boundary conditions (3.2). We investigate whether, as x* → ∞, the value function V(·, x*) remains positive or approaches zero uniformly on bounded sets.
With the same argument as in steps 2-5 of the proof of Theorem 3.1, we obtain corresponding bounds. We will construct upper and lower bounds for V(·, x*) and p(·, x*), in a form where:
• for any V(·, ·) with V_x ≥ 0, the functions p_1(·) and p_2(·) are a subsolution and a supersolution of the second equation in (2.15), respectively;
• for any p(·, ·) with p ∈ [0, 1] and p_x ≤ 0, the functions V_1(·) and V_2(·) are a supersolution and a subsolution of the first equation in (2.15), respectively.
Toward this goal, we introduce suitable constants. Fix x* > 0 and let us construct a suitable pair of functions V_1, p_1. Two cases are considered:
• If θ(x*) ≥ γ, let V_1 be the solution to a backward Cauchy problem. Solving the corresponding ODE, one obtains that V_1 is a supersolution of the first equation in (2.15). A standard comparison argument yields V(x, x*) ≤ V_1(x).
• If θ(x*) < γ, let (p_1, V_1) be the solution to a backward Cauchy problem. This solution is such that p_1 is strictly decreasing. Using (3.26) and (3.27), and since p̃_1 is strictly decreasing, we obtain that p̃_1″(x) > 0 for all x ∈ (0, x*), and thus, from Lemma 2.3 (1), the required inequality holds. On the other hand, recalling x̄_0 = c′(0)/M* in (3.22), let p̃_1 be the solution of the backward Cauchy problem; as in the previous step, one obtains that p̃_1 is decreasing, with lim_{x→0+} p̃_1(x) = 1, and p_1(0) = p(0, x*) = 1. Recalling that v(x, x*) = 0 for all x ∈ [0, x_0], (3.28) and (3.29) imply the required inequality for all x ∈ (0, x_0) ∪ (x_0, x*). Thus, p_1 is a subsolution of the second equation in (2.15), and a standard comparison argument yields p(x, x*) ≥ p_1(x).
Next, differentiating both sides of the first ODE in (3.26), we obtain a relation for V_1′. From (2.14), the map p → H(x, ξ, p) is monotone decreasing when ξ ≥ 0. For any x ∈ (0, x̄_1), and for any x ∈ (x̄_1, x*), from (3.32) and (3.31) one gets the required inequalities. Hence, V_1(x) is a supersolution of the first equation in (2.15), and a standard comparison argument yields V(x, x*) ≤ V_1(x). From (3.25) and (3.33), we finally obtain the claimed bound, which yields (3.20).
On the other hand, recalling (2.13), one gets the required inequality for all x ∈ (x_2, x*). Therefore, p_2(·) is a supersolution of the second equation in (2.15), and p(x, x*) ≤ p_2(x). Next, for every x ∈ (0, x̄_3) we set V_2(x) = 0, and for every x ∈ (x_2, x*), together with (3.36), the function V_2 is a subsolution of the first equation in (2.15), so that V(x, x*) ≥ V_2(x). Letting x* tend to +∞, we obtain (3.21).
The deterministic case σ = 0
In the case σ = 0, the stochastic control system (2.1) reduces to the deterministic one

ẋ(t) = ((λ + r) x(t) − u(t)) / p(t) − (λ + μ + v(t)) x(t).   (4.1)

Here the control u(t) is assumed to take values in [0, 1] for all t ≥ 0.
Throughout the paper, we always assume r > µ. The deterministic Debt Management Problem can be formulated as follows.
(DMP) Given an initial value x(0) = x_0 ∈ [0, x*] of the DTI, minimize the cost (4.2) subject to the dynamics (4.1), where the bankruptcy time T_b is defined as in (2.2), while the discounted bond price is given by (4.3). Since in this case the optimal feedback controls u*, v* and the corresponding functions V*, p* may not be smooth, a concept of equilibrium solution should be defined more carefully.
In the deterministic case, (2.15) becomes an implicit system of first order ODEs (4.8) with the boundary conditions (4.9). For further use, we compute the gradient of the Hamiltonian function H(·), where for x > 0 and p ∈ ]0, 1] the relevant expressions hold. The following lemma collects some relevant properties of H(·) needed to study the system (4.8).
Definition 4.3 (Normal form of the system). Given x > 0, 0 < p ≤ 1, and 0 < rη ≤ H_max(x, p), we define the maps F^−(x, η, p) ≤ F^+(x, η, p) as the two solutions of H(x, F^±(x, η, p), p) = rη. Notice that if rV(x) > H_max(x, p), then the first equation of (4.8) has no solution.
Remark 4.4. Recalling (4.1) and (4.13), we observe that F^−(x, η, p) corresponds to the choice of an optimal control such that ẋ(t) < 0: the total debt-to-income ratio is decreasing. F^+(x, η, p) corresponds to the choice of an optimal control such that ẋ(t) > 0: the total debt-to-income ratio is increasing.
corresponds to the unique control strategy such thatẋ(t) = 0.
4.1.
Construction of a solution. We will begin our analysis from the control strategies keeping the DTI constant in time, i.e., such that the corresponding solution x(·) of (4.1) is constant. In this case, there is no bankruptcy risk, i.e., T b = +∞.
Definition 4.6 (Constant strategies). Let x̄ > 0 be given. We say that a couple (ū, v̄) is a constant strategy at x̄ if the corresponding solution of (4.1) with x(0) = x̄ is constant, where the second relation comes from taking T_b = +∞ in (4.3).
From these equations, if a couple (ū, v̄) ∈ [0, 1[ × [0, +∞[ is a constant strategy, then it holds that (r + λ)(r − μ) x̄ = (r + λ + v̄) ū. In this case, the borrower will never go bankrupt, and thus the cost of this strategy in (4.2) can be computed explicitly. We notice that if x̄(r − μ) > 1, we must have v̄ > 1 and p̄ < 1; in particular, if the DTI is sufficiently large, every constant strategy needs to implement currency devaluation, with a consequent drop of p. A more precise estimate will be provided in Proposition 4.9.
We are now interested in the minimum cost of a strategy keeping the debt constant. To this aim, we first characterize the cost of a constant strategy in terms of the variables x, p.
Moreover, since sup_{ξ∈R} H(x, ξ, p) is attained only at ξ = ξ^♯(x, p), owing to the strict concavity of ξ → H(x, ξ, p), the pair (û, v̂) realizes the minimum in the right-hand side of (4.17) if and only if the corresponding first-order conditions hold; the second of these relations can be rewritten in an equivalent form. Formula (4.17) allows us to give a simpler characterization of the minimum cost of a strategy keeping the debt-to-income ratio constant in time. Indeed, given x ∈ [0, x*], we select (u(x), v(x)) keeping the debt-to-income ratio constant in time. This defines uniquely a value p = p(x) by Definition 4.6 and imposes a relation between u(x) and v(x). Then we take the minimum over all the costs of such strategies, i.e., the right-hand side of formula (4.17). This naturally leads to the following definition.
For every x ∈ [0, x * ], W (x) denotes the minimum cost of a strategy keeping the DTI ratio constant in time.
The next result proves that if the debt-to-income ratio is sufficiently small, the optimal strategy keeping it constant does not use currency devaluation. Proposition 4.9 (Non-devaluating regime for optimal constant strategies). Let x_c ≥ 0 be the unique solution of the corresponding equation in x, where v_c(x) > 0 solves the corresponding equation in v, and suppose that for every x ∈ ]0, x*[ the stated characterization holds. Proof. Given x ∈ ]0, x*[, we define a convex function F and compute its derivative F′, which is monotone increasing. Two cases may occur:
• If x ≤ min{x_c, x*}, then the minimum is attained at v = 0, and it implies W(x) = (1/r) · L((r − μ) x) and p_c(x) = 1.
• If min{x_c, x*} < x ≤ x*, then there exists a unique point v_c(x) > 0 such that F′(v_c(x)) = 0, and this point is characterized by the corresponding stationarity condition. The remaining statements follow by noticing that for min{x_c, x*} < x ≤ x* the analogous expression holds, and deriving the explicit expression of W(x) on [0, min{x_c, x*}] yields the same formula. Notice that, by (4.14), a corresponding bound holds, where we used the fact that L′ is strictly increasing and, since the argument of L′ must be nonnegative, a sign restriction applies. We now turn to the properties of the backward solution. We will construct an equilibrium solution of (4.8) by a suitable concatenation of backward solutions.
The following lemma states some basic properties of the backward solution. In particular, the backward solution Z(·, x*), starting from B at x* with W(x*) < B, survives backward at least until the first intersection with the graph of W(·). Moreover, on this interval it is monotone increasing and positive. In the same way, q(·, x*) always remains in ]0, 1].
Denote by I_{x*} ⊆ [0, x*] the maximal domain of the backward equation (4.20), and define y(x) to be the maximal solution with y(x*) = 0.
1. We first claim that q(·, x*) is non-increasing on J_{x*} ∩ ]x*_W, x*[, and thus (4.22) holds. By contradiction, assume that there exists x_1 ∈ J_{x*} ∩ ]x*_W, x*[ such that (4.23) holds. Two cases are considered. In the first case, we have q″(x_1, x*) = 0, which yields a contradiction.
From the first equation of (4.8) and (4.11), observing that Z′(x_1, x*) > 0 and H_ξ(x_1, Z′(x_1, x*), q(x_1, x*)) > 0, one obtains a strict inequality. Taking the derivative with respect to x on both sides of the second equation of (4.8) and recalling (4.23), we obtain a contradiction. Now assume that there exists x_2 with Z(x_2, x*) = (1/r) · H_max(x_2, q(x_2, x*)).
Moreover,
On the other hand, since q(·, x*) is non-increasing, we can estimate the relevant quantity, and this again yields a contradiction.
2. By construction, y(·) is strictly monotone and invertible on ]x*_W, x*]; let x = x(y) be its inverse. From the inverse function theorem, and since the map ξ → H(x, ξ, q) is concave, it holds that Z(x, x*) ≥ B e^{r y(x)} > 0 for all x ∈ ]x*_W, x*]. With a similar argument for q(·, x*), we obtain that q(x, x*) ∈ ]0, 1] for all x ∈ ]x*_W, x*].
Once the graph of Z(·, x*) intersects the graph of W(·), Z(·, x*) is no longer optimal. We now investigate the local behavior of Z(·, x*) and W(·) near an intersection of their graphs. Proof. Let {x_j}_{j∈N} ⊆ I be a sequence converging to x̄, and let q_{x̄} ∈ [0, 1] be such that q_{x̄} = lim sup_j q(x_j, x*). By assumption, and recalling Lemma 4.2 (4), we have p_c(x̄) ≥ q_{x̄}. By Proposition 4.9, we have W′(x̄) < ξ^♯(x̄, p_c(x̄)); thus, by applying the strictly increasing map F^−(x̄, ·, p_c(x̄)) to both sides, we obtain the desired comparison. On the other hand, the functions F^−(x, Z, q) and G^−(x, Z, q) are smooth where H_ξ(x, Z, q) ≠ 0, but only Hölder continuous with respect to Z near the surface where H_ξ vanishes. Thus, for any x_0 ∈ [0, x*), the definition of the solution of the Cauchy problem (4.25) requires some care.
For any ε > 0, we denote by Z_ε(·, x_0), q_ε(·, x_0) the backward solution to (4.25) with perturbed terminal data. By the same argument as in the proof of Proposition 4.11, this solution is uniquely defined on a maximal interval [a_ε(x_0), x_0]. Let x^♭ be the unique solution to the corresponding equation; it is clear that 0 < x^♭ < x_c, where x_c is defined in Proposition 4.9 as the unique solution to the equation (r + λ) c′(0) = (r − μ) x L′((r − μ) x). Two cases are considered:
• CASE 1: For any x_0 ∈ (0, x^♭], we claim that q ≡ 1 and that Z_ε(·, x_0) solves backward the ODE (4.27), for ε > 0 sufficiently small. Indeed, let Z_1 be the unique backward solution of (4.27). From (4.14), the required inequality holds for all x ∈ (0, x^♭]. As in [5], a contradiction argument then shows that Z_1 is well defined on [0, x_0] with Z_1(0) = 0. On the other hand, (Z_1(x), 1) solves (4.25), and uniqueness yields the claim. Thanks to the monotone increasing property of the map ξ → F^−(x, ξ, 1), the pair (Z(·, x_0), q(·, x_0)) so obtained is the unique solution of (4.25). If the initial size of the debt is x̄ ∈ [0, x_0], we think of Z(x̄, x_0) as the expected cost in (4.4)-(4.5) with p(·, x_0) ≡ 1 and x(0) = x̄, achieved by the feedback strategies for all x ∈ [0, x_0]. With this strategy, the debt has the asymptotic behavior x(t) → x_0 as t → ∞.
• CASE 2: For x_0 ∈ (x^♭, x*_W], the system of ODEs (4.25) does not in general admit a unique solution, since it is not monotone. The following lemma provides an existence result for (4.25) for all x_0 ∈ (x^♭, x*_W]. Lemma 4.13. There exists a constant δ^♭ > 0, depending only on x^♭, such that for any x_0 ∈ (x^♭, x*_W] the backward solution is well defined on an interval of length at least δ^♭, for some ε_0 > 0 sufficiently small. Proof. From (4.19) and (4.14), one obtains a first estimate. By continuity of the map η → F^−(x_0, η, p_c(x_0)) on [0, W(x_0)], one can find a constant ε_1 > 0 sufficiently small such that the estimate persists; the continuity of W′ yields a corresponding bound. For a fixed ε ∈ (0, ε_1), recalling that the function (x, η, p) → F^−(x, η, p) is defined by H(x, F^−(x, η, p), p) = rη, the implicit function theorem applied with ξ = F^−(x, η, p) gives the derivative formulas. Since q_ε(·, x_0) is decreasing, (4.29)-(4.30) yield one inequality; on the other hand, from (4.11) one shows that the map x → F^−(x, η, p) is monotone decreasing, and thus (4.32) holds. Observe that the map η → F^−(x, η, p) is Hölder continuous, by Lemma 2.4: more precisely, there exists a constant C_{x^♭} > 0 such that the corresponding Hölder estimate holds. Recalling (4.31), this yields the conclusion. Remark 4.14. In general, the backward Cauchy problem (4.25) may admit more than one solution.
We are now ready to construct an equilibrium solution in feedback form by a suitable concatenation of backward solutions. By induction, we define a family of backward solutions as follows: (Z(x, x*), q(x, x*)) for all x ∈ [x_1, x*], and, setting x_{n+1} = a(x_n), (Z(x, x_n), q(x, x_n)) for all x ∈ [x_{n+1}, x_n].
From Case 1 and Lemma 4.13, there exists a natural number N_0 < 1 + (x* − x^♭)/δ_{x^♭} such that the construction stops after N_0 steps, i.e., x_{N_0} > 0 and a(x_{N_0}) = 0.
Figure 2. Construction of a solution: starting from (x*, B), we solve the system backward until the first touch with the graph of W at (x_1, W(x_1)). Then we restart by solving the system backward with the new terminal conditions (W(x_1), p_c(x_1)), until the next touch with the graph of W at (x_2, W(x_2)), and so on. In a finite number of steps we reach the origin. If a touch occurs at x_{n_0} < x^♭, then the backward solution from x_{n_0} reaches the origin with q ≡ 1. Given an initial value x̄ of the DTI, if 0 ≤ x_{n+1} < x̄ < x_n < x_1, then the optimal strategy lets the DTI increase asymptotically to x_n (no bankruptcy), while if x_1 < x̄ < x*, then the optimal strategy lets the DTI increase to x*, thus leading to bankruptcy in finite time.
Pain-Related Abnormal Neuronal Synchronization of the Nucleus Accumbens in Parkinson’s Disease
Patients with Parkinson's disease (PD) often experience pain, which fluctuates between "on" and "off" states, but the underlying mechanism is unclear. The nucleus accumbens (NAc) is a central component of the mesolimbic dopaminergic pathway involved in pain processing. We conducted resting-state functional magnetic resonance imaging (rsfMRI) analysis to explore the relationship between the neuronal synchronization of the NAc with pain-related brain regions and pain intensity in the "on" and "off" states. We assessed 23 patients with sporadic PD based on rsfMRI and pain intensity using the revised Short-Form McGill Pain Questionnaire. Patients with PD displayed higher pain intensity scores in the "off" state than in the "on" state. Pain intensity in the "off" state was substantially correlated with the functional connectivity (FC) between the NAc and the primary motor/sensory cortices and the contralateral NAc. Changes in pain intensity from the "on" to the "off" state were correlated with changes in FC between the right NAc (rNAc) and left NAc (lNAc) and between the rNAc and the right precentral gyrus (rPreCG)/right insular cortex (rIC) from the "off" to the "on" state. Aberrant bilateral NAc and rNAc-rPreCG/rIC FC in the "off" state was closely related to the pain symptoms that developed from the "on" to the "off" state. These results suggest that the NAc in the mesolimbic pathway is related to pain in PD and may help in understanding the mechanism of pain development in patients with PD.
Introduction
In recent years, researchers have gained interest in the non-motor symptoms of Parkinson's disease (PD). Several non-motor symptoms, including pain, affect patients with PD [1]. Pain is present in approximately two-thirds of patients with PD and exerts a major impact on their quality of life. Most patients with PD perceive pain as the most challenging symptom at all stages of the disease [2]. Pain symptoms tend to worsen in the "off" state [3]. Oral antiparkinsonian drugs particularly affect the fluctuation of pain, suggesting that dopaminergic fluctuation is associated with pain in PD [4]. This pain is often underrecognized and undertreated, which can be attributed to an inadequate understanding of the associated mechanism.
Several pathways involved in sensory processing modulate pain [5]. These pathways are altered across the "on" and "off" states in PD, and this alteration is closely related to pain processing in the brain [6-8]. The mesolimbic pathway is associated with reward and motivation and plays an important role in the perception and modulation of pain [5]. The nucleus accumbens (NAc) is a central component of the mesolimbic pathway and is connected to pain-related brain regions [5]. In addition, connectivity between the NAc and other brain regions contributes substantially to the sensory and emotional aspects of pain sensation and its modulation in neuropsychiatric disorders, including PD [5,9].
Abnormal neuronal synchronization in brain networks is related to the development of pain in PD [8,10]. We hypothesized that changes in functional connectivity (FC) of NAc in patients with PD are related to the fluctuation of pain from the "on" to "off" states. To explore the mechanism of pain related to the "on" and "off" state fluctuations, we aimed to investigate the alterations of FC in brain networks related to NAc using resting-state functional MRI (rsfMRI) analysis. We assessed the severity and characteristics of pain experienced by PD patients using the Japanese version of the revised short-form McGill Pain Questionnaire 2 (SF-MPQ-2) [11]. We intended to explore the relationship between the severity and characteristics of pain and FC from the "on" to "off" states in patients with PD.
Patients
We enrolled consecutive patients with sporadic PD who visited Nara Medical University Hospital (Nara, Japan) between June 2020 and May 2021. A total of 23 patients fulfilled the inclusion and exclusion criteria. The major inclusion criteria were: (1) presenting confirmed PD and meeting the UK PD Society Brain Bank criteria [12]; (2) being 20-90 years old; (3) presenting the wearing-off phenomenon; and (4) absence of significant lesions on MRI. The exclusion criteria were: (1) having other known causes of pain (e.g., spinal cord disease, orthopedic disease, and peripheral neuropathy); (2) having undergone operations such as deep brain stimulation; (3) presence of severe cognitive dysfunction (Mini-Mental State Examination [MMSE] < 10); (4) presence of severe mental disorders; and (5) not being considered appropriate for participation in the trial. We excluded patients with claustrophobia and those with involuntary movements who could not remain still during MRI acquisition. Ethical approval for this study was obtained from the Nara Medical University Clinical Research Ethics Board. Written informed consent was obtained from all patients. All study procedures were performed in accordance with the ethical standards of the aforementioned institutional research committee and adhered to the principles of the Declaration of Helsinki and the Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan.
Clinical Assessments and Study Design
Patients with PD were observed for one week prior to MRI examinations and were assessed in the "on" and "off" states, which correspond to states when dopamine replacement therapy is effective and symptoms are controlled and when the treatment effect wears off and symptoms re-emerge. The MRI examinations were performed on each patient during both the "on" and "off" states over two testing days. Patients were assessed using the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) part III, Numerical Rating Scale (NRS; 0 to 10 points), and SF-MPQ-2 at the time of the MRI examination [11,13,14]. SF-MPQ-2 total scores (22 items, 0 to 220 points) were divided into four subscales: continuous pain (six items, 0 to 60 points), intermittent pain (six items, 0 to 60 points), neuropathic pain (six items, 0 to 60 points), and affective descriptors (four items, 0 to 40 points) [11]. The differences in the SF-MPQ-2 total and subscale scores of each patient between the "on" and "off" states were calculated. The cognitive and neurological assessment of patients was performed using the MMSE, Frontal Assessment Battery, and the MDS-UPDRS (part I, II, IV) on separate days [15]. The levodopa-equivalent daily dose was calculated [16]. The clinical features of the patients are summarized in Table 1.
MRI Acquisition
Patients underwent MRI examination on the two testing days: one testing day was an "on" day and the other one was an "off" day. The MRI examinations were preceded by a clinical assessment based on the MDS-UPDRS Part III, NRS, and SF-MPQ-2. Functional and structural MRI data were acquired using a 3-Tesla MRI scanner (MAGNETOM Skyra, Siemens Healthcare) with a 20-or 32-channel head coil at Nara Medical University Hospital. A total of four patients received MRI scans with a 20-channel head coil because of postural instability. The functional images were acquired using a gradient-echo echo-planar pulse sequence sensitive to blood oxygen level dependent (BOLD) contrast with the following parameters: repetition time (TR) = 1190 ms, echo time (TE) = 31 ms, matrix size = 64 × 64 mm 2 , flip angle (FA) = 90 • , field of view (FOV) = 212 × 212 mm 2 , slice thickness = 3.2 mm, and voxel size = 3.3 × 3.3 × 3.2 mm 3 , for a total of 40 slices. Two runs, each with 245 volumes, were performed for each patient. The T1-weighted structural images were acquired with the following parameters: TR = 1900 ms, TE = 2.75 ms, inversion time = 900 ms, matrix size = 256 × 256 mm 2 , FA = 9 • , FOV = 256 × 256 mm 2 , slice thickness = 1 mm, and voxel size = 1.0 × 1.0 × 1.0 mm 3 , for a total of 192 slices. The patients were instructed to keep their eyes centered on the fixation cross and to not think about anything during the acquisition phase [17].
Functional MRI Data Preprocessing
The MRI data were preprocessed using the Statistical Parametric Mapping software, version 12 (SPM12), and the CONN toolbox, version 17, in MATLAB (version R2020a, MathWorks, Natick, MA, USA) [18]. The preprocessing of functional and structural data followed the default preprocessing pipeline in the CONN toolbox. The initial ten scans of the functional images were removed to eliminate equilibration effects. The functional images were realigned and unwarped; slice-timing corrected (in ascending order); segmented into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF); spatially normalized to the Montreal Neurological Institute (MNI) space (Montreal, Canada); screened for outliers (artifact detection tools [ART]-based scrubbing); and smoothed with an 8 mm full width at half maximum Gaussian kernel. To eliminate the influence of head motion and artifacts during the experiment, we identified outlier time points in the motion parameters and global signal intensity using ART, which is included in the CONN toolbox. The structural images were also segmented into GM, WM, and CSF and normalized to the MNI space. Subsequently, denoising of the functional images was performed with the default settings, including band-pass filtering (0.008 to 0.09 Hz), linear detrending, and nuisance regression of motion-related components via the component-based noise correction method.
Functional Connectivity Analysis
The resting-state FC analysis was performed using a seed-based approach in the CONN toolbox. Regions of interest (ROIs) were selected from the FMRIB Software Library Harvard-Oxford cortical and subcortical structural atlases [18]. Based on previous literature, we chose 22 ROIs known to be associated with pain: bilateral NAc, bilateral globus pallidus, bilateral thalamus, bilateral insular cortex, bilateral amygdala, bilateral posterior parahippocampal gyrus, bilateral hippocampus, bilateral anterior parahippocampal gyrus, bilateral postcentral gyrus, bilateral precentral gyrus, anterior cingulate gyrus, and brainstem [8,19–22]. The bilateral NAc ROIs were used as seed regions, and the other ROIs were used as target regions for further analysis (Table S1). In the first-level analysis, the mean BOLD timeseries of each ROI was estimated by averaging the timeseries of all voxels in that ROI. Bivariate correlation coefficients were calculated between each pair of BOLD timeseries from the selected ROIs, and Fisher's transformation was applied to the resulting coefficients. Each Fisher-transformed coefficient represented the FC value between two brain regions. The difference in ROI-to-ROI FC values between the "on" and "off" states was calculated by subtracting the FC values in the "off" state from those in the "on" state. Correlations between the SF-MPQ-2 scores and FC values were assessed using Pearson's correlation coefficient.
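A minimal sketch of this ROI-to-ROI computation on toy data is shown below, assuming the ROI-mean timeseries have already been extracted (one column per ROI). It reproduces the three steps named in the text: bivariate correlation, Fisher's transformation, and the "on" minus "off" subtraction.

```matlab
% ROI-to-ROI FC on toy data: correlate ROI-mean timeseries, Fisher-transform,
% and take the "on" minus "off" difference, as described in the text.
nT = 235; nROI = 22;
ts_on  = randn(nT, nROI);            % toy ROI-mean timeseries, "on" state
ts_off = randn(nT, nROI);            % toy ROI-mean timeseries, "off" state

r_on  = corrcoef(ts_on);             % bivariate Pearson correlation matrix
r_off = corrcoef(ts_off);
r_on(1:nROI+1:end)  = 0;             % zero the diagonal: atanh(1) = Inf
r_off(1:nROI+1:end) = 0;

z_on  = atanh(r_on);                 % Fisher's transformation
z_off = atanh(r_off);
dFC   = z_on - z_off;                % "on" minus "off" FC difference
```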
Statistical Analysis
All statistical analyses were performed using MATLAB version R2020a. The means of the obtained values were compared between the "on" and "off" states using the paired two-sample t-test. Correlations between scores on the clinical assessments and FC values were assessed using Pearson's correlation coefficient. The Shapiro-Wilk test was used to assess the normality of the data distribution. p-values of <0.05 were considered statistically significant, and a non-significant trend was defined as a p-value between 0.05 and 0.15. A correlation was considered strong if the coefficient (r) was >0.40.
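Read literally, these criteria can be coded as below on toy data. The `ttest` call assumes the Statistics and Machine Learning Toolbox, and the Shapiro-Wilk check is omitted because it is not part of base MATLAB.

```matlab
% Group-level criteria from the text, applied to toy data.
n = 23;                                   % number of patients
score_on  = randn(n, 1);                  % toy clinical scores, "on" state
score_off = score_on + 0.5 * randn(n, 1); % toy clinical scores, "off" state
fc        = randn(n, 1);                  % toy FC values for one ROI pair

[~, p] = ttest(score_on, score_off);      % paired two-sample t-test
R = corrcoef(score_off, fc);              % Pearson correlation
r = R(1, 2);

is_significant = p < 0.05;
is_trend       = p >= 0.05 && p <= 0.15;  % non-significant trend
is_strong      = abs(r) > 0.40;           % |r| > 0.40 treated as strong
```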
Demographic and Clinical Characteristics
Twenty-three patients with PD (12 male and 11 female) were enrolled in this study. The demographic characteristics and clinical features of the 23 patients with PD are summarized in Table 1. Patients displayed significantly higher MDS-UPDRS Part III scores in the "off" state than in the "on" state (p < 0.05). There was a non-significant trend for higher NRS scores in the "off" state than in the "on" state (p = 0.068). Moreover, patients displayed higher SF-MPQ-2 total and affective subscale scores in the "off" state than in the "on" state (p < 0.05). There was a non-significant trend for higher continuous, intermittent, and neuropathic subscale scores of SF-MPQ-2 in the "off" state than in the "on" state (p = 0.11, 0.12, and 0.12, respectively). These results indicate that patients with PD experience pain, as well as motor disability, during the "off" state in their daily life.
Relationship between FC of the NAc and Pain Intensity in "on" and "off" States
We explored the relationship between FC in brain networks and pain intensity in patients with PD. The seed-based FC analysis was conducted using the bilateral NAc as seed regions. In the "off" state, the SF-MPQ-2 total scores displayed a positive correlation (r = 0.40) with FC between the right NAc (rNAc) and right precentral gyrus (rPreCG) (Figure 1). When the total SF-MPQ-2 scores were divided into the four subscales, the continuous subscale scores displayed a positive correlation with FCs of the rNAc with the rPreCG (r = 0.44), right postcentral gyrus (rPostCG; r = 0.43), and left NAc (lNAc; r = 0.41), whereas the neuropathic subscale scores displayed a positive correlation with FCs of the bilateral NAc with the bilateral PreCG and PostCG (Figure 1). In contrast, in the "on" state, there were significant correlations between the SF-MPQ-2 scores and FCs that were not found in the "off" state. For example, the SF-MPQ-2 total, continuous, and neuropathic subscale scores displayed a negative correlation with the FCs of the bilateral NAc with the right amygdala (Figure 1). Thus, the correlation between FC and pain intensity differed between the "on" and "off" states in patients with PD.
Relationship between FC of the NAc from "on" to "off" State and Pain Intensity
We explored whether pain intensity in the "off" state is related to the change in FC from the "on" to the "off" state. Indeed, the SF-MPQ-2 total, continuous, and neuropathic subscale scores in the "off" state displayed a correlation with the change in FC. Specifically, the SF-MPQ-2 total scores displayed a negative correlation with the change in FC between the rNAc and rPreCG/rPostCG (r = −0.54 and −0.43, respectively) and between the rNAc and lNAc (r = −0.40). Regarding the subscales, the continuous subscale scores displayed a correlation with the change in FC of the rNAc with the bilateral PreCG/PostCG and lNAc, whereas the neuropathic subscale scores displayed a correlation with the change in FC between the bilateral NAc and bilateral PreCG/PostCG (Figure 2). Therefore, pain intensity in the "off" state was correlated with the change in FC between the NAc and primary motor/sensory cortices from the "on" to the "off" state.
Relationship between FC of the NAc from "on" to "off" State and Pain Developing from "on" to "off" State
We conducted further analysis to explore whether the change in FC between the NAc and primary motor/sensory cortices in the ipsilateral hemisphere was related to pain intensity from "on" to "off" state. The changes in FC between the rNAc and rPreCG from "on" to "off" state displayed a negative correlation with the corresponding changes in the SF-MPQ-2 total (r = −0.41), continuous (r = −0.45), and neuropathic (r = −0.44) subscales (Figures 3 and S1). The changes in FC between the rNAc and right insular cortex (rIC) from the "on" to "off" state showed a negative correlation with the changes in the scores of the SF-MPQ-2 total (r = −0.48) and neuropathic (r = −0.48) subscales (Figures 3 and S1). The changes in FC of the right NAc and ipsilateral cerebral cortices, such as the PreCG and IC, were related to pain symptoms. Furthermore, the changes in FC from "on" to "off" state between the bilateral NAc displayed a negative correlation with the changes in the SF-MPQ-2 neuropathic subscale scores (r = −0.42; Figure 3). Taken together, aberrant changes in FC of the rNAc with the rPreCG/rIC and lNAc, from "on" to "off" state, were closely related to pain symptoms developing from "on" to "off" state.
Discussion
The results of the current study revealed that aberrant FC associated with the NAc was related to pain symptoms, as both FC and pain symptoms were shown to change between the "on" and "off" states in PD.
As the disease progresses, patients with PD undergo treatment adjustments and develop several motor complications, such as levodopa-induced dyskinesia, the wearing-off phenomenon, and on-off fluctuations [23]. In PD, the on-off fluctuations influence not only the motor symptoms but also the frequency and severity of non-motor symptoms, including pain [4,24,25]. The on-off fluctuations in non-motor symptoms have been suggested to be related to dopaminergic stimulation and the modulation of other neurotransmitter systems: glutamatergic, serotoninergic, and adrenergic [24]. The FC in brain networks is also altered between the "on" and "off" states, and this alteration in pain-related brain regions causes the fluctuation of pain observed in PD [7,8].
In previous studies, patients with PD showed more frequent and severe pain symptoms during the "off" state than during the "on" state [4,24]. The loss of dopamine neurons in the substantia nigra pars compacta and the ventral tegmental area (VTA) leads to a reduced ability to regulate the release of dopamine, particularly during the "off" state [6,9]. In the mesolimbic dopaminergic pathway, the NAc receives dopamine from the VTA. The NAc, centered in the limbic circuit of the basal ganglia, plays an important role in pain processing, as well as in emotional learning, motivated and addictive behavior, and reward [5]. After receiving painful stimuli, the NAc releases opioids, whose release is regulated by dopamine from the VTA. Opioids decrease the perception of painful stimuli by blocking pain signals from pain-related brain regions [6,9]. In the "off" state, PD-affected brains under central dopamine depletion show reduced endogenous opioid release from the NAc, which results in patients reporting higher pain intensity than during the "on" state [4,26]. Indeed, our results were consistent with these findings, indicating that patients with PD experience more severe pain during the "off" state.
The mesolimbic pathway, which comprises dopaminergic neurons, plays an important role in pain processing. Dopaminergic neurotransmission in the mesolimbic pathway is impaired in patients with chronic pain [5,27], as well as in depression and addiction, owing to alterations in NAc plasticity [6]. In this study, the correlations between pain intensity and FC of the NAc with pain-related brain regions differed between the "on" and "off" states. The changes in FC of the NAc with the primary motor/sensory cortices from the "on" to the "off" state were correlated with pain intensity in the "off" state and with pain developing from the "on" to the "off" state. Previous studies have indicated that resting-state FC may be involved in PD-related pain [8,10]. Dopamine depletion is thought to be linked to pathological and compensatory changes in brain connectivity. During the dopaminergic "off" state, cortico-striatal hyperconnectivity is observed, and the increased neuronal synchronization in the "off" state may reflect the compensatory response of non-dopaminergic systems and network reorganization across the cortex and subcortex [7]. In an animal model of neuropathic pain, the FC of the NAc with the striatum and cortex was altered, and disrupting NAc neuronal activity reduced the expression of neuropathic pain-related behavior [28]. Our results support these previous findings, showing that aberrant FC of the NAc with the cerebral cortex is correlated with the development of pain and its intensity from the "on" to the "off" state.
The current study provides novel insights into a potential mechanism underlying fluctuating pain in PD; however, several limitations should be considered. First, this was a pilot study with a limited sample size. In addition, since healthy subjects show little fluctuation in pain, we could not implement a proper control group of healthy subjects [29]. Therefore, a large clinical trial with an appropriate methodology and research design is needed to validate our findings. Second, we conducted a seed-based rsfMRI analysis strictly using the bilateral NAc as seed regions, because the present study focused on the NAc as a central component of pain processing. More detailed ROIs and whole-brain functional connectivity analyses are needed to elucidate the mechanisms of the abnormal FC underpinning pain in PD. Additionally, we did not consider other clinical factors that might influence pain, such as mental orientation, depression, and sleep disorders. Indeed, symptoms of depression have been suggested to be associated with aberrant connectivity in the default mode network in patients with PD [30].
Conclusions
We conducted an rsfMRI analysis in patients with PD to explore the relationship between neuronal synchronization of the NAc and pain. The NAc in the mesolimbic pathway is suggested to be involved in the perception and modulation of pain symptoms [5,27]. Aberrant FC of the NAc with the cerebral cortex was related to the generation of pain from "on" to "off" state. These findings may facilitate our understanding of the mechanisms of abnormal connectivity underpinning pain development in patients with PD.